[SOURCE: https://en.wikipedia.org/wiki/Meta_Platforms#cite_note-203]
Meta Platforms Meta Platforms, Inc. (doing business as Meta) is an American multinational technology company headquartered in Menlo Park, California. Meta owns and operates several prominent social media platforms and communication services, including Facebook, Instagram, WhatsApp, Messenger, Threads and Manus. The company also operates an advertising network for its own sites and third parties; as of 2023, advertising accounted for 97.8 percent of its total revenue. Meta has been described as part of Big Tech, a term for the six largest tech companies in the United States: Alphabet (Google), Amazon, Apple, Meta (Facebook), Microsoft, and Nvidia, which are also among the largest companies in the world by market capitalization. The company was originally established in 2004 as TheFacebook, Inc., and was renamed Facebook, Inc. in 2005. In 2021, it rebranded as Meta Platforms, Inc. to reflect a strategic shift toward developing the metaverse—an interconnected digital ecosystem spanning virtual and augmented reality technologies. In 2023, Meta was ranked 31st on the Forbes Global 2000 list of the world's largest public companies. As of 2022, it was the world's third-largest spender on research and development, with R&D expenses totaling US$35.3 billion. History Facebook filed for an initial public offering (IPO) on February 1, 2012. The preliminary prospectus stated that the company sought to raise $5 billion, had 845 million monthly active users, and a website accruing 2.7 billion likes and comments daily. After the IPO, Zuckerberg would retain 22% of the total shares and 57% of the total voting power in Facebook. Underwriters valued the shares at $38 each, valuing the company at $104 billion, the largest valuation to date for a newly public company. On May 16, one day before the IPO, Facebook announced it would sell 25% more shares than originally planned due to high demand. The IPO raised $16 billion, making it the third-largest in US history (slightly ahead of AT&T Wireless and behind only General Motors and Visa). The stock price left the company with a higher market capitalization than all but a few U.S. corporations—surpassing heavyweights such as Amazon, McDonald's, Disney, and Kraft Foods—and made Zuckerberg's stock worth $19 billion. The New York Times stated that the offering overcame questions about Facebook's difficulties in attracting advertisers to transform the company into a "must-own stock". Jimmy Lee of JPMorgan Chase described it as "the next great blue-chip". Writers at TechCrunch, on the other hand, expressed skepticism, stating, "That's a big multiple to live up to, and Facebook will likely need to add bold new revenue streams to justify the mammoth valuation." Trading in the stock, which began on May 18, was delayed that day due to technical problems with the Nasdaq exchange. The stock struggled to stay above the IPO price for most of the day, forcing underwriters to buy back shares to support the price. At the closing bell, shares were valued at $38.23, only $0.23 above the IPO price and down $3.82 from the opening bell value. The opening was widely described by the financial press as a disappointment. The stock nonetheless set a new record for the trading volume of an IPO. On May 25, 2012, the stock ended its first full week of trading at $31.91, a 16.5% decline.
On May 22, 2012, regulators from Wall Street's Financial Industry Regulatory Authority announced that they had begun to investigate whether banks underwriting Facebook had improperly shared information only with select clients rather than the general public. Massachusetts Secretary of State William F. Galvin subpoenaed Morgan Stanley over the same issue. The allegations sparked "fury" among some investors and led to the immediate filing of several lawsuits, one of them a class action suit claiming more than $2.5 billion in losses due to the IPO. Bloomberg estimated that retail investors may have lost approximately $630 million on Facebook stock since its debut. Facebook was added to the S&P 500 index on December 21, 2013. On May 2, 2014, Zuckerberg announced that the company would be changing its internal motto from "Move fast and break things" to "Move fast with stable infrastructure". The earlier motto had been described as Zuckerberg's "prime directive to his developers and team" in a 2009 interview in Business Insider, in which he also said, "Unless you are breaking stuff, you are not moving fast enough." In November 2016, Facebook announced Facebook Gameroom (formerly Facebook Games Arcade), a Microsoft Windows client for its gaming service, at the Unity Technologies developer conference. The client allows Facebook users to play "native" games in addition to its web games. The service was closed in June 2021. Lasso was a short-video sharing app from Facebook, similar to TikTok, that was launched on iOS and Android in 2018 and was aimed at teenagers. On July 2, 2020, Facebook announced that Lasso would shut down on July 10. In 2018, Oculus lead Jason Rubin sent his 50-page vision document titled "The Metaverse" to Facebook's leadership. In the document, Rubin acknowledged that Facebook's virtual reality business had not caught on as expected, despite the hundreds of millions of dollars spent on content for early adopters. He urged the company to execute fast and invest heavily in the vision, to shut out HTC, Apple, Google and other competitors in the VR space. Regarding other players' participation in the metaverse vision, he called for the company to build the "metaverse" to prevent competitors from "being in the VR business in a meaningful way at all". In May 2019, Facebook founded Libra Networks, reportedly to develop its own stablecoin cryptocurrency. It was later reported that Libra was being supported by financial companies such as Visa, Mastercard, PayPal and Uber. The consortium of companies was expected to contribute $10 million each to fund the launch of the cryptocurrency, named Libra. Depending on when it would receive approval from the Swiss Financial Market Supervisory Authority to operate as a payments service, the Libra Association had planned to launch a limited-format cryptocurrency in 2021. Libra was renamed Diem, before being shut down and sold in January 2022 after backlash from government regulators and the public. During the COVID-19 pandemic, the use of online services, including Facebook, grew globally. Zuckerberg predicted this would be a "permanent acceleration" that would continue after the pandemic. Facebook hired aggressively, growing from 48,268 employees in March 2020 to more than 87,000 by September 2022. Following a period of intense scrutiny and damaging whistleblower leaks, news began to emerge on October 21, 2021 about Facebook's plan to rebrand the company and change its name.
In the Q3 2021 earnings call on October 25, Mark Zuckerberg discussed the ongoing criticism of the company's social services and the way it operates, and pointed to the company's pivot toward building the metaverse, without mentioning the rebranding and the name change. The metaverse vision and the name change from Facebook, Inc. to Meta Platforms were introduced at Facebook Connect on October 28, 2021. According to Facebook's PR campaign, the name change reflected the company's shifting long-term focus on building the metaverse, a digital extension of the physical world through social media, virtual reality and augmented reality features. "Meta" had been registered as a trademark in the United States in 2018 (after an initial filing in 2015) for marketing, advertising, and computer services, by a Canadian company that provided big data analysis of scientific literature. This company was acquired in 2017 by the Chan Zuckerberg Initiative (CZI), a foundation established by Zuckerberg and his wife, Priscilla Chan, and became one of their projects. Following the rebranding announcement, CZI announced that it had already decided to deprioritize the earlier Meta project, that it would therefore transfer its rights to the name to Meta Platforms, and that the previous project would end in 2022. Soon after the rebranding, in early February 2022, Meta reported a greater-than-expected decline in profits in the fourth quarter of 2021. It reported no growth in monthly users, and indicated it expected revenue growth to stall. It also expected measures taken by Apple Inc. to protect user privacy to cost it some $10 billion in advertising revenue, an amount equal to roughly 8% of its revenue for 2021. In a meeting with Meta staff the day after earnings were reported, Zuckerberg blamed competition for user attention, particularly from video-based apps such as TikTok. The 26% reduction in the company's share price which occurred in reaction to the news eliminated some $230 billion of value from Meta's market capitalization. Bloomberg described the decline as "an epic rout that, in its sheer scale, is unlike anything Wall Street or Silicon Valley has ever seen". Zuckerberg's net worth fell by as much as $31 billion. Zuckerberg owns 13% of Meta, and the holding makes up the bulk of his wealth. According to reports published by Bloomberg on March 30, 2022, Meta turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, a Meta representative said, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse." In June 2022, Sheryl Sandberg, the chief operating officer of 14 years, announced she would step down that year. Zuckerberg said that Javier Olivan would replace Sandberg, though in a “more traditional” role. In March 2022, Meta (except Meta-owned WhatsApp) and Instagram were banned in Russia and added to the Russian list of terrorist and extremist organizations for alleged Russophobia and hate speech, including alleged calls for genocide, amid the ongoing Russian invasion of Ukraine. Meta appealed against the ban, but it was upheld by a Moscow court in June of the same year. Earlier, in September 2021, Meta and Italian eyewear giant Luxottica had released Ray-Ban Stories, a series of smartglasses which could play music and take pictures.
Meta and Luxottica parent company EssilorLuxottica declined to disclose sales figures for the product line as of September 2022, though Meta expressed satisfaction with its customer feedback. In July 2022, Meta saw its first year-on-year revenue decline when its total revenue slipped by 1% to $28.8bn. Analysts and journalists attributed the loss to its advertising business, which had been limited by Apple's App Tracking Transparency feature and the number of people who had opted not to be tracked by Meta apps. Zuckerberg also attributed the decline to increasing competition from TikTok. On October 27, 2022, Meta's market value dropped to $268 billion, a loss of around $700 billion compared to 2021, and its shares fell by 24%. It lost its spot among the top 20 US companies by market cap, despite having been in the top 5 the previous year. In November 2022, Meta laid off 11,000 employees, 13% of its workforce. Zuckerberg said the decision to aggressively increase Meta's investments had been a mistake, as he had wrongly predicted that the surge in e-commerce would last beyond the COVID-19 pandemic. He also attributed the decline to increased competition, a global economic downturn and "ads signal loss". Plans to lay off a further 10,000 employees began in April 2023. The layoffs were part of a general downturn in the technology industry, alongside layoffs by companies including Google, Amazon, Tesla, Snap, Twitter and Lyft. Starting in 2022, Meta scrambled to catch up to other tech companies in adopting specialized artificial intelligence hardware and software. It had been using less expensive CPUs instead of GPUs for AI work, but that approach turned out to be less efficient. The company gave the Inter-university Consortium for Political and Social Research $1.3 million to finance the Social Media Archive and its aim of making data available for social science research. In 2023, Ireland's Data Protection Commissioner imposed a record EUR 1.2 billion fine on Meta for transferring data from Europe to the United States without adequate protections for EU citizens. In March 2023, Meta announced a new round of layoffs that would cut 10,000 employees and close 5,000 open positions to make the company more efficient. Meta's revenue surpassed analyst expectations for the first quarter of 2023 after it announced that it was increasing its focus on AI. On July 6, Meta launched a new app, Threads, a competitor to Twitter. Meta announced its artificial intelligence model Llama 2 in July 2023, available for commercial use via partnerships with major cloud providers like Microsoft. It was the first project to be unveiled from Meta's generative AI group after the group was set up in February. Meta would not charge for access or usage but would instead operate with an open-source model, allowing it to ascertain what improvements needed to be made. Prior to this announcement, Meta had said it had no plans to release Llama 2 for commercial use; an earlier version of Llama had been released only to academics. In August 2023, Meta announced the permanent removal of news content from Facebook and Instagram in Canada due to the Online News Act, which requires Canadian news outlets to be compensated for content shared on its platforms. The Online News Act was in effect by year-end, but Meta declined to participate in the regulatory process. In October 2023, Zuckerberg said that AI would be Meta's biggest investment area in 2024. Meta finished 2023 as one of the best-performing technology stocks of the year, with its share price nearly tripling.
Its stock reached an all-time high in January 2024, bringing Meta within 2% of a $1 trillion market capitalization. In November 2023, Meta Platforms had launched an ad-free subscription service in Europe, allowing subscribers to opt out of having personal data collected for targeted advertising. A group of 28 European organizations, including Max Schrems' advocacy group NOYB, the Irish Council for Civil Liberties, Wikimedia Europe, and the Electronic Privacy Information Center, signed a 2024 letter to the European Data Protection Board (EDPB) expressing concern that this subscription model would undermine privacy protections, specifically GDPR data protection standards. Meta removed the Facebook and Instagram accounts of Iran's Supreme Leader Ali Khamenei in February 2024, citing repeated violations of its Dangerous Organizations & Individuals policy. As of March 2024, Meta was under investigation by the FDA for the alleged use of its social media platforms to sell illegal drugs. On 16 May 2024, the European Commission began an investigation into Meta over concerns related to child safety. In May 2023, Iraqi social media influencer Esaa Ahmed-Adnan had encountered a troubling issue when Instagram removed his posts, citing copyright violations, despite his content being original and free of copyrighted material. He discovered that extortionists were behind the takedowns, offering to restore his content for $3,000 or to provide ongoing protection for $1,000 per month. This scam, which exploited Meta's rights management tools, became widespread in the Middle East, revealing a gap in Meta's enforcement in developing regions. Aws al-Saadi, founder of the Iraqi nonprofit Tech4Peace, helped Ahmed-Adnan and others, but the restoration process was slow, leading to significant financial losses for many victims, including prominent figures such as Ammar al-Hakim. The situation highlighted Meta's challenges in balancing global growth with effective content moderation and protection. On 16 September 2024, Meta announced it had banned Russian state media outlets from its platforms worldwide due to concerns about "foreign interference activity." The decision followed allegations that RT and its employees had funneled $10 million through shell companies to secretly fund influence campaigns on various social media channels. Meta's actions were part of a broader effort to counter Russian covert influence operations, which had intensified since the invasion of Ukraine. At its 2024 Connect conference, Meta presented Orion, its first pair of augmented reality glasses. Though Orion was originally intended to be sold to consumers, the manufacturing process turned out to be too complex and expensive. Instead, the company pivoted to producing a small number of the glasses for internal use. On 4 October 2024, Meta announced its new AI model, Movie Gen, capable of generating realistic video and audio clips based on user prompts. Meta stated it would not release Movie Gen for open development, preferring to collaborate directly with content creators and integrate it into its products the following year. The model was built using a combination of licensed and publicly available datasets. On October 31, 2024, ProPublica published an investigation into deceptive political advertisement scams that sometimes use hundreds of hijacked profiles and Facebook pages run by organized networks of scammers. The authors cited spotty enforcement by Meta as a major reason for the extent of the issue.
In November 2024, TechCrunch reported that Meta was considering building a $10bn global underwater cable spanning 25,000 miles. In the same month, Meta closed down 2 million accounts on Facebook and Instagram that were linked to scam centers in Myanmar, Laos, Cambodia, the Philippines, and the United Arab Emirates running pig butchering scams. In December 2024, Meta announced that, beginning February 2025, it would require advertisers running ads about financial services in Australia to verify information about the beneficiary and the payer of the ads, in a bid to curb scams. On December 4, 2024, Meta announced it would invest US$10 billion in its largest AI data center, in northeast Louisiana, powered by natural gas facilities. On the 11th of that month, Meta experienced a global outage impacting accounts on all of its social media and messaging applications. Outage reports on DownDetector reached 70,000+ and 100,000+ within minutes for Instagram and Facebook, respectively. In January 2025, Meta announced plans to roll back its diversity, equity, and inclusion (DEI) initiatives, citing shifts in the "legal and policy landscape" in the United States following the 2024 presidential election. The decision followed reports that CEO Mark Zuckerberg sought to align the company more closely with the incoming Trump administration, including changes to content moderation policies and executive leadership. The new content moderation policies continued to bar insults about a person's intellect or mental illness, but made an exception to allow calling LGBTQ people mentally ill because they are gay or transgender. Later that month, Meta agreed to pay $25 million to settle a 2021 lawsuit brought by Donald Trump over the suspension of his social media accounts after the January 6 riots. Changes to Meta's moderation policies were controversial among its oversight board, with a significant divide in opinion between the board's US conservatives and its global members. In June 2025, Meta decided to make a multibillion-dollar investment in the artificial intelligence startup Scale AI. The financing could exceed $10 billion in value, which would make it one of the largest private company funding events of all time. In October 2025, it was announced that Meta would lay off 600 employees in its artificial intelligence unit in an effort to make the unit leaner and more effective. The company described the AI unit as "bloated" and sought to trim down the department. The layoffs were to affect Meta's AI infrastructure teams, its Fundamental Artificial Intelligence Research (FAIR) unit and other product-related positions. Mergers and acquisitions Meta has acquired multiple companies (often identified as talent acquisitions). One of its first major acquisitions came in April 2012, when it acquired Instagram for approximately US$1 billion in cash and stock. In October 2013, Facebook, Inc. acquired Onavo, an Israeli mobile web analytics company. In February 2014, Facebook, Inc. announced it would buy the mobile messaging company WhatsApp for US$19 billion in cash and stock. The acquisition was completed on October 6. Later that year, Facebook bought Oculus VR for $2.3 billion in cash and stock; Oculus released its first consumer virtual reality headset in 2016. In late November 2019, Facebook, Inc. announced the acquisition of the game developer Beat Games, responsible for developing one of that year's most popular VR games, Beat Saber.
In late 2022, after Facebook, Inc. rebranded to Meta Platforms, Inc., Oculus was rebranded Meta Quest. In May 2020, Facebook, Inc. announced it had acquired Giphy for a reported cash price of $400 million, to be integrated with the Instagram team. However, in August 2021, the UK's Competition and Markets Authority (CMA) stated that Facebook, Inc. might have to sell Giphy, after an investigation found that the deal between the two companies would harm competition in the display advertising market. Facebook, Inc. was fined $70 million by the CMA for deliberately failing to report all information regarding the acquisition and the ongoing antitrust investigation. In October 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controlled half of the display advertising in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Giphy was divested to Shutterstock for $53 million. In November 2020, Facebook, Inc. announced that it planned to purchase the customer-service platform and chatbot specialist startup Kustomer to encourage companies to use its platform for business. Kustomer was reportedly valued at slightly over $1 billion in the deal, which closed in February 2022 after regulatory approval. In September 2022, Meta acquired Lofelt, a Berlin-based haptic tech startup. In December 2025, it was announced that Meta had acquired the AI-wearables startup Limitless. In the same month, it also acquired another AI startup, Manus AI, for $2 billion. Manus announced in December that its platform had achieved $100 million in recurring revenue just eight months after its launch, and Meta said it would scale the platform to many other businesses. In January 2026, it was announced that Meta's proposed acquisition of Manus was undergoing preliminary scrutiny by Chinese regulators. The examination concerns the cross-border transfer of artificial intelligence technology developed in China. Lobbying In 2020, Facebook, Inc. spent $19.7 million on lobbying, hiring 79 lobbyists. In 2019, it had spent $16.7 million on lobbying and had a team of 71 lobbyists, up from $12.6 million and 51 lobbyists in 2018. Facebook was the largest spender of lobbying money among the Big Tech companies in 2020. The lobbying team includes top congressional aide John Branscome, who was hired in September 2021 to help the company fend off threats from Democratic lawmakers and the Biden administration. In December 2024, Meta donated $1 million to the inauguration fund of then-President-elect Donald Trump. In 2025, Meta was listed among the donors funding the construction of the White House State Ballroom. Partnerships In February 2026, Meta announced a long-term partnership with Nvidia. Censorship In August 2024, Mark Zuckerberg sent a letter to Jim Jordan indicating that during the COVID-19 pandemic the Biden administration had repeatedly asked Meta to limit certain COVID-19 content, including humor and satire, on Facebook and Instagram. In 2016, Meta hired Jordana Cutler, formerly an employee at the Israeli Embassy to the United States, as its policy chief for Israel and the Jewish Diaspora. In this role, Cutler pushed for the censorship of accounts belonging to Students for Justice in Palestine chapters in the United States. Critics have said that Cutler's position gives the Israeli government an undue influence over Meta policy, and that few countries have such high levels of contact with Meta policymakers.
Following Donald Trump's inauguration in January 2025, various sources noted possible censorship related to the Democratic Party on Instagram and other Meta platforms. In February 2025, Meta reportedly flagged journalist Gil Duran's article and other "critiques of tech industry figures" as spam or sensitive content, limiting their reach. In March 2025, Meta attempted to block former employee Sarah Wynn-Williams from promoting or further distributing her memoir, Careless People, which includes allegations of unaddressed sexual harassment in the workplace by senior executives. The New York Times reported that the arbitration was among Meta's most forceful attempts to suppress a former employee's account of workplace dynamics. The publisher, Macmillan, reacted to the ruling by the Emergency International Arbitral Tribunal by stating that it would ignore its provisions. As of 15 March 2025, hardback and digital versions of Careless People were being offered for sale by major online retailers. Beginning in October 2025, Meta removed and restricted access to accounts and pages related to LGBTQ issues, reproductive health and abortion information on its platforms. Martha Dimitratou, executive director of Repro Uncensored, called Meta's shadow-banning of these issues "One of the biggest waves of censorship we are seeing". Disinformation concerns Since its inception, Meta has been accused of being a host for fake news and misinformation. In the wake of the 2016 United States presidential election, Zuckerberg began to take steps to reduce the prevalence of fake news, as the platform had been criticized for its potential influence on the outcome of the election. The company initially partnered with ABC News, the Associated Press, FactCheck.org, Snopes and PolitiFact for its fact-checking initiative; as of 2018, it had over 40 fact-checking partners across the world, including The Weekly Standard. A May 2017 review by The Guardian found that the platform's fact-checking initiatives of partnering with third-party fact-checkers and publicly flagging fake news were regularly ineffective, and appeared to be having minimal impact in some cases. In 2018, journalists working as fact-checkers for the company criticized the partnership, stating that it had produced minimal results and that the company had ignored their concerns. In 2024, Meta's decision to continue disseminating a falsified video of US president Joe Biden, even after it had been proven to be fake, attracted criticism and concern. In January 2025, Meta ended its use of third-party fact-checkers in favor of a user-run community notes system similar to the one used on X. While Zuckerberg supported these changes, saying that the amount of censorship on the platform had been excessive, the decision drew criticism from fact-checking organizations, which stated that the changes would make it more difficult for users to identify misinformation. Meta also faced criticism for weakening its policies on hate speech that were designed to protect minorities and LGBTQ+ individuals from bullying and discrimination. While moving its content review teams from California to Texas, Meta changed its hateful conduct policy to eliminate restrictions on anti-LGBT and anti-immigrant hate speech, and to explicitly allow users to accuse LGBT people of being mentally ill or abnormal based on their sexual orientation or gender identity.
In January 2025, Meta faced significant criticism for removing LGBTQ+ content from its platforms, amid its broader efforts to address anti-LGBTQ+ hate speech. The removal of LGBTQ+ themes was noted as part of a wider crackdown on content deemed to violate its community guidelines. Meta's content moderation policies, which were designed to combat harmful speech and protect users from discrimination, inadvertently led to the removal or restriction of LGBTQ+ content, particularly posts highlighting LGBTQ+ identities, support, or political issues. According to reports, LGBTQ+ posts, including those that simply celebrated pride or advocated for LGBTQ+ rights, were flagged and removed for reasons that some critics argued were vague or inconsistently applied. Many LGBTQ+ activists and users on Meta's platforms expressed concern that such actions stifled visibility and expression, potentially isolating LGBTQ+ individuals and communities, especially in spaces that had historically been important for outreach and support. Lawsuits Numerous lawsuits have been filed against the company, both when it was known as Facebook, Inc., and as Meta Platforms. In March 2020, the Office of the Australian Information Commissioner (OAIC) sued Facebook for serious and repeated breaches of privacy in connection with the Cambridge Analytica scandal. Each violation of the Privacy Act carries a theoretical maximum penalty of $1.7 million. The OAIC estimated that a total of 311,127 Australians had been exposed. On December 8, 2020, the U.S. Federal Trade Commission, 46 states (excluding Alabama, Georgia, South Carolina, and South Dakota), the District of Columbia and the territory of Guam launched Federal Trade Commission v. Facebook, an antitrust lawsuit against Facebook. The lawsuit concerned Facebook's acquisition of two competitors—Instagram and WhatsApp—and the ensuing monopolistic situation. The FTC alleged that Facebook held monopoly power in the U.S. social networking market and sought to force the company to divest Instagram and WhatsApp to break up the conglomerate. William Kovacic, a former chairman of the Federal Trade Commission, argued the case would be difficult to win, as it would require the government to construct a counterfactual argument of an internet where the Facebook-WhatsApp-Instagram entity did not exist, and to prove that this had harmed competition or consumers. In November 2025, a federal court ruled that Meta had not violated antitrust laws and held no monopoly in the market. On December 24, 2021, a court in Russia fined Meta $27 million after the company declined to remove unspecified banned content. The fine was reportedly tied to the company's annual revenue in the country. In May 2022, a lawsuit was filed in Kenya against Meta and its local outsourcing company Sama, alleging poor working conditions in Kenya for outsourced workers moderating Facebook posts. According to the lawsuit, 260 screeners were declared redundant with confusing reasoning. The lawsuit seeks financial compensation and an order that outsourced moderators be given the same health benefits and pay scale as Meta employees. In June 2022, eight lawsuits were filed across the U.S. alleging that excessive exposure to platforms including Facebook and Instagram had led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. The litigation followed a former Facebook employee's testimony in Congress that the company had refused to take responsibility.
The company noted that it had developed tools for parents to track their children's activity on Instagram and set time limits, in addition to Meta's "Take a break" reminders. The company also said it was providing resources specific to eating disorders and developing AI to prevent children under the age of 13 from signing up for Facebook or Instagram. In June 2022, Meta settled a lawsuit with the US Department of Justice. The lawsuit, which had been filed in 2019, alleged that the company enabled housing discrimination through targeted advertising, as it allowed homeowners and landlords to run housing ads excluding people based on sex, race, religion, and other characteristics. The U.S. Department of Justice stated that this was in violation of the Fair Housing Act. Meta was handed a penalty of $115,054 and given until December 31, 2022, to stop using its discriminatory ad-targeting tool. In January 2023, Meta was fined €390 million for violations of the European Union General Data Protection Regulation. In May 2023, the European Data Protection Board fined Meta a record €1.2 billion for breaching European Union data privacy laws by transferring the personal data of Facebook users to servers in the U.S. In July 2024, Meta agreed to pay the state of Texas US$1.4 billion to settle a lawsuit brought by Texas Attorney General Ken Paxton accusing the company of collecting users' biometric data without consent, setting a record for the largest privacy-related settlement ever obtained by a state attorney general. In October 2024, Meta Platforms faced lawsuits in Japan from 30 plaintiffs who claimed they were defrauded by fake investment ads on Facebook and Instagram featuring false celebrity endorsements. The plaintiffs sought approximately $2.8 million in damages. In April 2025, the Kenyan High Court ruled that a US$2.4 billion lawsuit in which three plaintiffs claim that Facebook inflamed civil violence in Ethiopia in 2021 could proceed. Also in April 2025, Meta was fined €200 million ($230 million) for breaking the Digital Markets Act by imposing a “consent or pay” system that forces users to either allow their personal data to be used to target advertisements, or pay a subscription fee for advertising-free versions of Facebook and Instagram. In late April 2025, a case was filed against Meta in Ghana over the alleged psychological distress experienced by content moderators employed to take down disturbing social media content, including depictions of murders, extreme violence and child sexual abuse. Meta had moved the moderation service to the Ghanaian capital of Accra after legal issues at its previous location in Kenya. The new moderation company is Teleperformance, a multinational corporation with a history of workers' rights violations. Reports suggest that conditions there are worse than at the previous Kenyan location, with many workers afraid to speak out for fear of being returned to conflict zones. Workers reported developing mental illnesses, attempted suicides, and low pay. On 26 January 2026, a case was filed in a New Mexico state court alleging that Mark Zuckerberg approved allowing minors to access artificial intelligence chatbot companions that safety staffers had warned were capable of sexual interactions. In 2020, the company UReputation, which had been involved in several cases concerning the management of digital armies, filed a lawsuit against Facebook, accusing it of unlawfully transmitting personal data to third parties.
Legal actions were initiated in Tunisia, France, and the United States. In 2025, the United States District Court for the Northern District of Georgia approved a discovery procedure, allowing UReputation to access documents and evidence held by Meta. Structure As of October 2022, Meta had 83,553 employees worldwide. Meta Platforms is mainly owned by institutional investors, who hold around 80% of all shares, while insiders control the majority of voting shares. The three largest individual investors in 2024 were Mark Zuckerberg, Sheryl Sandberg and Christopher K. Cox. Roger McNamee, an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company". Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes said he was concerned that Zuckerberg had surrounded himself with a team that did not challenge him, and that it is the U.S. government's job to hold him accountable and curb his "unchecked power". He also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. European Union Commissioner for Competition Margrethe Vestager stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems. Revenue Facebook ranked No. 34 on the 2021 Fortune 500 list of the largest United States corporations by revenue, with almost $86 billion in revenue, most of it coming from advertising. One analysis of 2017 data determined that the company earned US$20.21 per user from advertising. According to New York magazine, since its rebranding Meta has reportedly lost $500 billion as a result of new privacy measures put in place by companies such as Apple and Google, which prevent Meta from gathering users' data. In February 2015, Facebook announced it had reached two million active advertisers, with most of the gain coming from small businesses. An active advertiser was defined as an entity that had advertised on the Facebook platform in the previous 28 days. In March 2016, Facebook announced it had reached three million active advertisers, with more than 70% from outside the United States. Prices for advertising follow a variable pricing model based on the auctioning of ad placements and the potential engagement levels of the advertisement itself. Similar to other online advertising platforms like Google and Twitter, targeting of advertisements is one of the chief merits of digital advertising compared to traditional media. Marketing on Meta is employed through two methods based on the viewing habits, likes and shares, and purchasing data of the audience: targeted audiences and "look alike" audiences. The U.S. IRS challenged the valuation Facebook used when it transferred IP from the U.S. to Facebook Ireland (now Meta Platforms Ireland) in 2010 (which Facebook Ireland then revalued higher before charging out), as it was building its double Irish tax structure. The case is ongoing, and Meta faces a potential fine of $3–5bn. The U.S. Tax Cuts and Jobs Act of 2017 changed Facebook's global tax calculations.
Meta Platforms Ireland is subject to the U.S. GILTI tax of 10.5% on global intangible profits (i.e. Irish profits). On the basis that Meta Platforms Ireland Limited is paying some tax, the effective minimum US tax for Facebook Ireland would be circa 11%. In contrast, Meta Platforms, Inc. would incur a special IP tax rate of 13.125% (the FDII rate) if its Irish business relocated to the U.S. Tax relief in the U.S. (21% vs. the Irish GILTI rate) and accelerated capital expensing would make this effective U.S. rate around 12%. The insignificance of the U.S./Irish tax difference was demonstrated when Facebook moved 1.5bn non-EU accounts to the U.S. to limit exposure to GDPR. Facilities Users outside of the U.S. and Canada contract with Meta's Irish subsidiary, Meta Platforms Ireland Limited (formerly Facebook Ireland Limited), allowing Meta to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Meta has made use of the Double Irish arrangement, which allowed it to pay 2–3% corporation tax on all international revenue. In 2010, Facebook opened its fourth office, in Hyderabad, India, which houses online advertising and developer support teams and provides support to users and advertisers. In India, Meta is registered as Facebook India Online Services Pvt Ltd. It also has offices or planned sites in Chittagong, Bangladesh; Dublin, Ireland; and Austin, Texas, among other cities. Facebook opened its London headquarters in 2017 in Fitzrovia in central London. Facebook opened an office in Cambridge, Massachusetts, in 2018. The offices were initially home to the "Connectivity Lab", a group focused on bringing Internet access to those who do not have access to the Internet. In April 2019, Facebook opened its Taiwan headquarters in Taipei. In March 2022, Meta opened new regional headquarters in Dubai. In September 2023, it was reported that Meta had paid £149m to British Land to break the lease on its Triton Square office in London, with another 18 years reportedly left on the lease. As of 2023, Facebook operated 21 data centers. It has committed to purchasing 100% renewable energy and to reducing its greenhouse gas emissions 75% by 2020. Its data center technologies include Fabric Aggregator, a distributed network system that accommodates larger regions and varied traffic patterns. Reception US Representative Alexandria Ocasio-Cortez responded in a tweet to Zuckerberg's announcement about Meta, saying: "Meta as in 'we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society ... for profit!'" Ex-Facebook employee Frances Haugen, the whistleblower behind the Facebook Papers, responded to the rebranding efforts by expressing doubts about the company's ability to improve while led by Mark Zuckerberg, and urged the chief executive officer to resign. In November 2021, a video published by Inspired by Iceland went viral, in which a Zuckerberg look-alike promoted the Icelandverse, a place of "enhanced actual reality without silly looking headsets". In a December 2021 interview, SpaceX and Tesla chief executive officer Elon Musk said he could not see a compelling use-case for the VR-driven metaverse, adding: "I don't see someone strapping a frigging screen to their face all day." In January 2022, Louise Eccles of The Sunday Times logged into the metaverse with the intention of making a video guide. She wrote: Initially, my experience with the Oculus went well.
I attended work meetings as an avatar and tried an exercise class set in the streets of Paris. The headset enabled me to feel the thrill of carving down mountains on a snowboard and the adrenaline rush of climbing a mountain without ropes. Yet switching to the social apps, where you mingle with strangers also using VR headsets, it was at times predatory and vile. Eccles described being sexually harassed by another user, as well as "accents from all over the world, American, Indian, English, Australian, using racist, sexist, homophobic and transphobic language". She also encountered users as young as 7 years old on the platform, despite Oculus headsets being intended for users over 13.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Heat_transfer]
Heat transfer Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system. Heat conduction, also called diffusion, is the direct microscopic exchange of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to a region of lower temperature, as described by the second law of thermodynamics. Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection"; in this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means. Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws. Overview Heat transfer is the energy exchanged between materials (solid/liquid/gas) as a result of a temperature difference. The thermodynamic free energy is the amount of work that a thermodynamic system can perform. Enthalpy is a thermodynamic potential, designated by the letter "H", that is the sum of the internal energy of the system (U) plus the product of pressure (P) and volume (V). The joule is a unit that quantifies energy, work, or the amount of heat. Heat transfer is a process function (or path function), as opposed to a function of state; therefore, the amount of heat transferred in a thermodynamic process that changes the state of a system depends on how that process occurs, not only on the net difference between the initial and final states of the process. Thermodynamic and mechanical heat transfer is calculated with the heat transfer coefficient, the proportionality between the heat flux and the thermodynamic driving force for the flow of heat (a short numeric sketch of this relation follows below). Heat flux is a quantitative, vectorial representation of heat flow through a surface. In engineering contexts, the term heat is taken as synonymous with thermal energy. This usage has its origin in the historical interpretation of heat as a fluid (caloric) that can be transferred by various causes, and it is still common in the language of laymen and everyday life.
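To make the heat-transfer-coefficient relation concrete, here is a minimal sketch in Python; the coefficient value and geometry are illustrative assumptions, not values from this article. It evaluates the flux q = h·ΔT and the total heat flow through an assumed area.

```python
# Minimal sketch of the heat-transfer-coefficient relation q = h * dT.
# The coefficient value and geometry below are illustrative assumptions.

def heat_flux(h_coeff: float, t_hot: float, t_cold: float) -> float:
    """Heat flux in W/m^2 for a heat transfer coefficient h in W/(m^2*K)
    and a driving temperature difference in kelvin."""
    return h_coeff * (t_hot - t_cold)

# Example: forced air convection over a warm plate (h ~ 50 W/(m^2*K) assumed).
h = 50.0        # W/(m^2*K), assumed typical forced-convection value
area = 0.25     # m^2, plate area (assumption)
q = heat_flux(h, t_hot=350.0, t_cold=300.0)   # W/m^2
print(f"flux = {q:.0f} W/m^2, total = {q * area:.0f} W")
```

Typical magnitudes of h span orders of magnitude, from a few W/(m²·K) for natural convection in air to thousands for boiling water, which is why the coefficient, rather than the formula, carries most of the engineering content.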
The transport equations for thermal energy (Fourier's law), mechanical momentum (Newton's law for fluids), and mass transfer (Fick's laws of diffusion) are similar, and analogies among these three transport processes have been developed to facilitate prediction of conversion from any one to the others. Thermal engineering concerns the generation, use, conversion, storage, and exchange of heat. As such, heat transfer is involved in almost every sector of the economy. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Mechanisms The fundamental modes of heat transfer are advection, conduction (diffusion), convection, and radiation. By transferring matter, energy—including thermal energy—is moved by the physical transfer of a hot or cold object from one place to another. This can be as simple as placing hot water in a bottle to heat a bed, or the movement of an iceberg in changing ocean currents. A practical example is thermal hydraulics. This can be described by the formula $\phi_q = v \rho c_p \Delta T$, where $v$ is the fluid velocity, $\rho$ the fluid density, $c_p$ the specific heat capacity, and $\Delta T$ the temperature difference carried by the flow. On a microscopic scale, heat conduction occurs as hot, rapidly moving or vibrating atoms and molecules interact with neighboring atoms and molecules, transferring some of their energy (heat) to these neighboring particles. In other words, heat is transferred by conduction when adjacent atoms vibrate against one another, or as electrons move from one atom to another. Conduction is the most significant means of heat transfer within a solid or between solid objects in thermal contact. Fluids—especially gases—are less conductive. Thermal contact conductance is the study of heat conduction between solid bodies in contact. The process of heat transfer from one place to another without the movement of particles is called conduction, such as when placing a hand on a cold glass of water: heat is conducted from the warm skin to the cold glass, but if the hand is held a few inches from the glass, little conduction would occur, since air is a poor conductor of heat. Steady-state conduction is an idealized model of conduction that applies when the temperature difference driving the conduction is constant, so that after a time the spatial distribution of temperatures in the conducting object does not change any further (see Fourier's law). In steady-state conduction, the amount of heat entering a section is equal to the amount of heat coming out, since the temperature change (a measure of heat energy) is zero. An example of steady-state conduction is the heat flow through the walls of a warm house on a cold day: inside, the house is maintained at a high temperature while the outside temperature stays low, so the transfer of heat per unit time stays near a constant rate determined by the insulation in the wall, and the spatial distribution of temperature in the walls is approximately constant over time (a numeric sketch of this case follows below). Transient conduction (see Heat equation) occurs when the temperature within an object changes as a function of time. Analysis of transient systems is more complex, and analytic solutions of the heat equation are only valid for idealized model systems. Practical applications are generally investigated using numerical methods, approximation techniques, or empirical study.
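As a numeric sketch of the steady-state wall example above, one-dimensional Fourier conduction through a flat wall gives a constant heat flow; the conductivity, dimensions, and temperatures below are assumptions chosen for illustration, not data from this article.

```python
# One-dimensional steady-state conduction through a flat wall (Fourier's law):
# q = k * A * (T_in - T_out) / L. All parameter values are assumptions.

def wall_heat_flow(k: float, area: float, thickness: float,
                   t_inside: float, t_outside: float) -> float:
    """Steady heat flow in watts through a uniform wall.
    k: thermal conductivity, W/(m*K); area: m^2; thickness: m; temperatures: K."""
    return k * area * (t_inside - t_outside) / thickness

# Warm house on a cold day: insulated wall, assumed k = 0.04 W/(m*K).
q = wall_heat_flow(k=0.04, area=12.0, thickness=0.1,
                   t_inside=293.0, t_outside=263.0)
print(f"heat loss through the wall: {q:.0f} W")   # ~144 W
```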
Convective heat transfer, or simply convection, is the transfer of heat from one place to another by the movement of fluids, a process that is essentially heat transfer via mass transfer. The bulk motion of fluid enhances heat transfer in many physical situations, such as between a solid surface and the fluid. Convection is usually the dominant form of heat transfer in liquids and gases. Although sometimes discussed as a third method of heat transfer, convection is usually used to describe the combined effects of heat conduction within the fluid (diffusion) and heat transference by bulk fluid flow streaming. The process of transport by fluid streaming is known as advection, but pure advection is a term that is generally associated only with mass transport in fluids, such as advection of pebbles in a river. In the case of heat transfer in fluids, where transport by advection in a fluid is always also accompanied by transport via heat diffusion (also known as heat conduction), the process of heat convection is understood to refer to the sum of heat transport by advection and diffusion/conduction. Free, or natural, convection occurs when bulk fluid motions (streams and currents) are caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. Forced convection is a term used when the streams and currents in the fluid are induced by external means—such as fans, stirrers, and pumps—creating an artificially induced convection current. Convective cooling is sometimes described by Newton's law of cooling: the rate of heat loss of a body is proportional to the temperature difference between the body and its surroundings. However, by definition, the validity of Newton's law of cooling requires that the rate of heat loss from convection be a linear function of ("proportional to") the temperature difference that drives heat transfer, and in convective cooling this is sometimes not the case. In general, convection is not linearly dependent on temperature gradients, and in some cases is strongly nonlinear. In these cases, Newton's law does not apply. In a body of fluid that is heated from underneath its container, conduction and convection can be considered to compete for dominance. If heat conduction is too great, fluid moving down by convection is heated by conduction so fast that its downward movement will be stopped due to its buoyancy, while fluid moving up by convection is cooled by conduction so fast that its driving buoyancy will diminish. On the other hand, if heat conduction is very low, a large temperature gradient may form and convection might be very strong. The Rayleigh number ($\mathrm{Ra}$) is the product of the Grashof ($\mathrm{Gr}$) and Prandtl ($\mathrm{Pr}$) numbers. It is a measure that determines the relative strength of conduction and convection.
$$\mathrm{Ra} = \mathrm{Gr} \cdot \mathrm{Pr} = \frac{g \Delta\rho L^{3}}{\mu \alpha} = \frac{g \beta \Delta T L^{3}}{\nu \alpha}$$ where $g$ is gravitational acceleration, $\Delta\rho$ the density difference across the fluid layer, $L$ the characteristic length, $\mu$ the dynamic viscosity, $\alpha$ the thermal diffusivity, $\beta$ the thermal expansion coefficient, $\Delta T$ the driving temperature difference, and $\nu$ the kinematic viscosity. The Rayleigh number can be understood as the ratio between the rate of heat transfer by convection and the rate of heat transfer by conduction; or, equivalently, the ratio between the corresponding timescales (i.e. conduction timescale divided by convection timescale), up to a numerical factor. This can be seen as follows, where all calculations are up to numerical factors depending on the geometry of the system. The buoyancy force driving the convection is roughly $g \Delta\rho L^{3}$, so the corresponding pressure is roughly $g \Delta\rho L$. In steady state, this is canceled by the shear stress due to viscosity, and therefore roughly equals $\mu V / L = \mu / T_{\text{conv}}$, where $V$ is the typical fluid velocity due to convection and $T_{\text{conv}}$ the order of its timescale. The conduction timescale, on the other hand, is of the order of $T_{\text{cond}} = L^{2}/\alpha$. Convection occurs when the Rayleigh number is above 1,000–2,000. Radiative heat transfer is the transfer of energy via thermal radiation, i.e., electromagnetic waves. It occurs across vacuum or any transparent medium (solid or fluid or gas). Thermal radiation is emitted by all objects at temperatures above absolute zero, due to random movements of atoms and molecules in matter. Since these atoms and molecules are composed of charged particles (protons and electrons), their movement results in the emission of electromagnetic radiation, which carries away energy. Radiation is typically only important in engineering applications for very hot objects, or for objects with a large temperature difference. When the objects and the distances separating them are large in size compared to the wavelength of thermal radiation, the rate of transfer of radiant energy is best described by the Stefan-Boltzmann equation. For an object in vacuum, the equation is $\phi_q = \epsilon \sigma T^{4}$, where $\epsilon$ is the emissivity and $\sigma$ the Stefan-Boltzmann constant. For radiative transfer between two objects, the equation is $\phi_q = \epsilon \sigma F (T_a^{4} - T_b^{4})$, where $F$ is the view factor between the two surfaces and $T_a$ and $T_b$ are the absolute temperatures of the two bodies. The blackbody limit established by the Stefan-Boltzmann equation can be exceeded when the objects exchanging thermal radiation or the distances separating them are comparable in scale to or smaller than the dominant thermal wavelength. The study of these cases is called near-field radiative heat transfer. Radiation from the sun, or solar radiation, can be harvested for heat and power. Unlike conductive and convective forms of heat transfer, thermal radiation arriving within a narrow angle, i.e. coming from a source much smaller than its distance, can be concentrated in a small spot by using reflecting mirrors, which is exploited in concentrating solar power generation or by a burning glass. For example, the sunlight reflected from mirrors heats the PS10 solar power tower, and during the day it can heat water to 285 °C (545 °F). The reachable temperature at the target is limited by the temperature of the hot source of radiation (by the $T^4$ law, the reverse flow of radiation back to the source rises as the target heats up).
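As a rough numeric check of the two-body radiative-exchange formula above, the following sketch evaluates $\phi_q = \epsilon \sigma F (T_a^{4} - T_b^{4})$; the emissivity, view factor, and temperatures are illustrative assumptions, not values from this article.

```python
# Radiative exchange between two surfaces per the formula above:
# phi_q = eps * sigma * F * (T_a**4 - T_b**4). Parameter values are assumptions.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

def radiative_flux(eps: float, view_factor: float, t_a: float, t_b: float) -> float:
    """Net radiative heat flux (W/m^2) from surface a to surface b."""
    return eps * SIGMA * view_factor * (t_a**4 - t_b**4)

# Example: a 600 K oven wall radiating to a 300 K surface, eps = 0.8, F = 1.
q = radiative_flux(eps=0.8, view_factor=1.0, t_a=600.0, t_b=300.0)
print(f"net radiative flux: {q:.0f} W/m^2")   # ~5511 W/m^2
```

Because of the fourth-power dependence, doubling the hot-side absolute temperature increases the exchanged flux by roughly a factor of sixteen, which is why radiation dominates at high temperatures.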
The roughly 4000 K surface temperature of the sun allows temperatures of about 3000 °C (roughly 3273 K) to be reached at a small probe in the focal spot of the large concave concentrating mirror of the Mont-Louis Solar Furnace in France. Phase transition Phase transition, or phase change, takes place in a thermodynamic system from one phase or state of matter to another one by heat transfer. Phase change examples are the melting of ice or the boiling of water. The Mason equation explains the growth of a water droplet based on the effects of heat transport on evaporation and condensation. Phase transitions involve the four fundamental states of matter: solid, liquid, gas, and plasma. The boiling point of a substance is the temperature at which the vapor pressure of the liquid equals the pressure surrounding the liquid and the liquid evaporates, resulting in an abrupt change in vapor volume. In a closed system, saturation temperature and boiling point mean the same thing. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy; any addition of thermal energy results in a phase transition. At standard atmospheric pressure and low temperatures, no boiling occurs and the heat transfer rate is controlled by the usual single-phase mechanisms. As the surface temperature is increased, local boiling occurs and vapor bubbles nucleate, grow into the surrounding cooler fluid, and collapse. This is sub-cooled nucleate boiling, and is a very efficient heat transfer mechanism. At high bubble generation rates, the bubbles begin to interfere and the heat flux no longer increases rapidly with surface temperature (this is the departure from nucleate boiling, or DNB). At similar standard atmospheric pressure and high temperatures, the hydrodynamically quieter regime of film boiling is reached. Heat fluxes across the stable vapor layers are low, but rise slowly with temperature. Any contact between the fluid and the surface that may be seen probably leads to the extremely rapid nucleation of a fresh vapor layer ("spontaneous nucleation"). At higher temperatures still, a maximum in the heat flux is reached (the critical heat flux, or CHF). The Leidenfrost effect demonstrates how nucleate boiling slows heat transfer due to gas bubbles on the heater's surface. As mentioned, gas-phase thermal conductivity is much lower than liquid-phase thermal conductivity, so the outcome is a kind of "gas thermal barrier". Condensation occurs when a vapor is cooled and changes its phase to a liquid. During condensation, the latent heat of vaporization must be released; the amount of heat is the same as that absorbed during vaporization at the same fluid pressure. There are several types of condensation. Melting is a thermal process that results in the phase transition of a substance from a solid to a liquid. The internal energy of a substance is increased, typically through heat or pressure, resulting in a rise of its temperature to the melting point, at which the ordering of ionic or molecular entities in the solid breaks down to a less ordered state and the solid liquefies. Molten substances generally have reduced viscosity at elevated temperature; an exception to this maxim is the element sulfur, whose viscosity increases to a point due to polymerization and then decreases with higher temperatures in its molten state. Modeling approaches Heat transfer can be modeled in various ways.
The heat equation is an important partial differential equation that describes the distribution of heat (or temperature variation) in a given region over time. In some cases, exact solutions of the equation are available; in other cases the equation must be solved numerically using computational methods, such as DEM-based models for thermal/reacting particulate systems (as critically reviewed by Peng et al.). Lumped system analysis often reduces the complexity of the equations to one first-order linear differential equation, in which case heating and cooling are described by a simple exponential solution, often referred to as Newton's law of cooling. System analysis by the lumped capacitance model is a common approximation in transient conduction that may be used whenever heat conduction within an object is much faster than heat conduction across the boundary of the object. This is a method of approximation that reduces one aspect of the transient conduction system (that within the object) to an equivalent steady-state system: the method assumes that the temperature within the object is completely uniform, although its value may change over time. In this method, the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary, known as the Biot number, is calculated. For small Biot numbers, the approximation of spatially uniform temperature within the object can be used: heat transferred into the object has time to distribute itself uniformly, because the resistance to doing so is lower than the resistance to heat entering the object. Climate models study radiant heat transfer by using quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice. Engineering Heat transfer has broad application to the functioning of numerous devices and systems. Heat-transfer principles may be used to preserve, increase, or decrease temperature in a wide variety of circumstances.[citation needed] Heat transfer methods are used in numerous disciplines, such as automotive engineering, thermal management of electronic devices and systems, climate control, insulation, materials processing, chemical engineering, and power station engineering. Thermal insulators are materials specifically designed to reduce the flow of heat by limiting conduction, convection, or both. Thermal resistance is a heat property: the measure of how strongly an object or material resists heat flow (heat per unit time) for a given temperature difference. Radiance, or spectral radiance, is a measure of the quantity of radiation that passes through or is emitted from a surface. Radiant barriers are materials that reflect radiation, and therefore reduce the flow of heat from radiation sources. Good insulators are not necessarily good radiant barriers, and vice versa. Metal, for instance, is an excellent reflector but a poor insulator. The effectiveness of a radiant barrier is indicated by its reflectivity, which is the fraction of radiation reflected. A material with a high reflectivity (at a given wavelength) has a low emissivity (at that same wavelength), and vice versa; at any specific wavelength, reflectivity = 1 − emissivity. An ideal radiant barrier would have a reflectivity of 1 and would therefore reflect 100 percent of incoming radiation. Vacuum flasks, or Dewars, are silvered to approach this ideal.
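The effect of a low-emissivity radiant barrier can be estimated from the standard gray-body result for two large parallel surfaces, q = \sigma (T_1^4 - T_2^4) / (1/\epsilon_1 + 1/\epsilon_2 - 1); when N shields of the same emissivity are inserted between the plates, the flux drops by a further factor of N + 1. A minimal sketch, with all temperatures and emissivities as illustrative assumptions:

sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def shielded_flux(T1, T2, eps, N=0):
    # Net radiative flux between two large parallel gray plates when N
    # radiation shields are inserted, all surfaces having emissivity eps
    # (standard series-resistance result for identical emissivities).
    return sigma * (T1**4 - T2**4) / ((N + 1) * (2.0 / eps - 1.0))

T_hot, T_cold = 300.0, 80.0   # illustrative plate temperatures, K
print(f"bare plates, eps=0.8 : {shielded_flux(T_hot, T_cold, 0.8):8.2f} W/m^2")
for N in (1, 5, 20):
    print(f"{N:2d} shields,   eps=0.05: "
          f"{shielded_flux(T_hot, T_cold, 0.05, N):8.4f} W/m^2")

This factor-of-(N+1) stacking of low-emissivity layers is exactly the principle behind the multi-layer insulation described next.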
In the vacuum of space, satellites use multi-layer insulation, which consists of many layers of aluminized (shiny) Mylar, to greatly reduce radiation heat transfer and control satellite temperature.[citation needed] A heat engine is a system that converts a flow of thermal energy (heat) into mechanical energy to perform mechanical work. A thermocouple is a temperature-measuring device and a widely used type of temperature sensor for measurement and control; it can also be used to convert heat into electric power. A thermoelectric cooler is a solid-state electronic device that pumps (transfers) heat from one side of the device to the other when an electric current is passed through it; it is based on the Peltier effect. A thermal diode or thermal rectifier is a device that causes heat to flow preferentially in one direction. A heat exchanger is used for more efficient heat transfer or to dissipate heat. Heat exchangers are widely used in refrigeration, air conditioning, space heating, power generation, and chemical processing. One common example of a heat exchanger is a car's radiator, in which the hot coolant fluid is cooled by the flow of air over the radiator's surface. Common types of heat exchanger flows include parallel flow, counter flow, and cross flow. In parallel flow, both fluids move in the same direction while transferring heat; in counter flow, the fluids move in opposite directions; and in cross flow, the fluids move at right angles to each other. Common types of heat exchangers include shell and tube, double pipe, extruded finned pipe, spiral fin pipe, u-tube, and stacked plate. Each type has certain advantages and disadvantages over other types.[further explanation needed] A heat sink is a component that transfers heat generated within a solid material to a fluid medium, such as air or a liquid. Examples of heat sinks are the heat exchangers used in refrigeration and air conditioning systems and the radiator in a car. A heat pipe is another heat-transfer device that combines thermal conductivity and phase transition to efficiently transfer heat between two solid interfaces. Applications Efficient energy use is the goal of reducing the amount of energy required for heating or cooling. In architecture, condensation and air currents can cause cosmetic or structural damage. An energy audit can help to assess the implementation of recommended corrective procedures, such as insulation improvements, air sealing of structural leaks, or the addition of energy-efficient windows and doors. Climate engineering consists of carbon dioxide removal and solar radiation management. Since the amount of carbon dioxide determines the radiative balance of Earth's atmosphere, carbon dioxide removal techniques can be applied to reduce the radiative forcing. Solar radiation management is the attempt to absorb less solar radiation to offset the effects of greenhouse gases. An alternative method is passive daytime radiative cooling, which enhances terrestrial heat flow to outer space through the infrared window (8–13 μm). Rather than merely blocking solar radiation, this method increases outgoing longwave infrared (LWIR) thermal radiation heat transfer with the extremely cold temperature of outer space (~2.7 K), lowering ambient temperatures while requiring zero energy input.
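Returning to the heat-exchanger flow arrangements described above: a common sizing relation is Q = U·A·ΔT_lm, where ΔT_lm is the log-mean temperature difference formed from the temperature differences at the two ends of the exchanger. The sketch below compares parallel and counter flow for the same terminal temperatures; the duty Q, coefficient U, and all temperatures are illustrative assumptions.

import math

def lmtd(dT1, dT2):
    # Log-mean temperature difference from the two terminal differences.
    if abs(dT1 - dT2) < 1e-12:
        return dT1
    return (dT1 - dT2) / math.log(dT1 / dT2)

# Illustrative terminal temperatures: hot stream 150 -> 90 C, cold stream 20 -> 60 C.
Th_in, Th_out = 150.0, 90.0
Tc_in, Tc_out = 20.0, 60.0
Q = 50e3     # required heat duty, W (assumed)
U = 500.0    # overall heat transfer coefficient, W/(m^2 K) (assumed)

dT_parallel = lmtd(Th_in - Tc_in,  Th_out - Tc_out)  # streams enter the same end
dT_counter  = lmtd(Th_in - Tc_out, Th_out - Tc_in)   # streams enter opposite ends

for name, dT in (("parallel flow", dT_parallel), ("counter flow ", dT_counter)):
    print(f"{name}: LMTD = {dT:5.1f} K, area needed = {Q/(U*dT):5.2f} m^2")

Counter flow yields the larger mean temperature difference and therefore needs less area for the same duty, which is why it is generally the preferred arrangement.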
The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases and clouds, and is re-radiated in all directions, resulting in a reduction in the amount of thermal radiation reaching space relative to what would reach space in the absence of absorbing materials. This reduction in outgoing radiation leads to a rise in the temperature of the surface and troposphere until the rate of outgoing radiation again equals the rate at which heat arrives from the Sun. The principles of heat transfer in engineering systems can be applied to the human body to determine how the body transfers heat. Heat is produced in the body by the continuous metabolism of nutrients, which provides energy for the systems of the body. The human body must maintain a consistent internal temperature to maintain healthy bodily functions; therefore, excess heat must be dissipated from the body to keep it from overheating. When a person engages in elevated levels of physical activity, the body requires additional fuel, which increases the metabolic rate and the rate of heat production. The body must then use additional methods to remove the extra heat produced in order to keep the internal temperature at a healthy level. Heat transfer by convection is driven by the movement of fluids over the surface of the body. This convective fluid can be either a liquid or a gas. For heat transfer from the outer surface of the body, the convection mechanism is dependent on the surface area of the body, the velocity of the air, and the temperature gradient between the surface of the skin and the ambient air. The normal temperature of the body is approximately 37 °C. Heat transfer occurs more readily when the temperature of the surroundings is significantly less than the normal body temperature. This concept explains why a person feels cold when not enough covering is worn when exposed to a cold environment. Clothing can be considered an insulator which provides thermal resistance to heat flow over the covered portion of the body. This thermal resistance causes the temperature on the surface of the clothing to be less than the temperature on the surface of the skin. This smaller temperature gradient between the clothing surface and the ambient air results in a lower rate of heat transfer than if the skin were not covered. To ensure that one portion of the body is not significantly hotter than another portion, heat must be distributed evenly through the bodily tissues. Blood flowing through blood vessels acts as a convective fluid and helps to prevent any buildup of excess heat inside the tissues of the body. This flow of blood through the vessels can be modeled as pipe flow in an engineering system. The heat carried by the blood is determined by the temperature of the surrounding tissue, the diameter of the blood vessel, the viscosity ("thickness") of the fluid, the velocity of the flow, and the heat transfer coefficient of the blood. The velocity, blood vessel diameter, and fluid viscosity can all be related through the Reynolds number, a dimensionless number used in fluid mechanics to characterize the flow of fluids. Latent heat loss, also known as evaporative heat loss, accounts for a large fraction of heat loss from the body. When the core temperature of the body increases, the body triggers sweat glands in the skin to bring additional moisture to the surface of the skin. The liquid is then transformed into vapor, which removes heat from the surface of the body.
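Before turning to the evaporative component, the convective and clothing effects just described can be put into numbers by treating the air film and the clothing layer as thermal resistances in series, as in an engineering thermal circuit. A rough sketch; the surface area, film coefficient, and temperatures are illustrative assumptions, and 1 clo = 0.155 m²·K/W is a standard unit of clothing insulation.

# Convective heat loss from the body, bare versus clothed, with the clothing
# layer and the air film modeled as thermal resistances in series.
# All parameter values are rough illustrative assumptions.
A      = 1.8      # body surface area, m^2
h      = 10.0     # convective coefficient for light air movement, W/(m^2 K)
T_skin = 33.0     # skin surface temperature, C
T_air  = 10.0     # ambient air temperature, C

R_conv = 1.0 / (h * A)   # air-film (convective) resistance, K/W
R_clo  = 0.155 / A       # clothing resistance for 1 clo of insulation, K/W

q_bare    = (T_skin - T_air) / R_conv
q_clothed = (T_skin - T_air) / (R_conv + R_clo)
print(f"bare skin: {q_bare:6.1f} W")
print(f"clothed  : {q_clothed:6.1f} W "
      f"({100*(1 - q_clothed/q_bare):.0f}% reduction)")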
The rate of evaporative heat loss is directly related to the vapor pressure at the skin surface and the amount of moisture present on the skin; therefore, the maximum rate of heat transfer occurs when the skin is completely wet. The body continuously loses water by evaporation, but the most significant amount of heat loss occurs during periods of increased physical activity. Evaporative cooling happens when water vapor is added to the surrounding air. The energy needed to evaporate the water is taken from the air in the form of sensible heat and converted into latent heat, while the air remains at a constant enthalpy. Latent heat describes the amount of heat that is needed to evaporate the liquid; this heat comes from the liquid itself and the surrounding gas and surfaces. The greater the difference between the two temperatures (the dry-bulb and wet-bulb temperatures), the greater the evaporative cooling effect. When the temperatures are the same, no net evaporation of water into the air occurs; thus, there is no cooling effect. In quantum physics, laser cooling is used to achieve temperatures of near absolute zero (−273.15 °C, −459.67 °F) in atomic and molecular samples, to observe unique quantum effects that can occur only at such low temperatures. Magnetic evaporative cooling is a process for lowering the temperature of a group of atoms after they have been pre-cooled by methods such as laser cooling. Magnetic refrigeration cools below 0.3 K by making use of the magnetocaloric effect. Radiative cooling is the process by which a body loses heat by radiation. Outgoing energy is an important effect in the Earth's energy budget. In the case of the Earth-atmosphere system, it refers to the process by which long-wave (infrared) radiation is emitted to balance the absorption of short-wave (visible) energy from the Sun. The thermosphere (top of atmosphere) cools to space primarily by infrared energy radiated by carbon dioxide (CO2) at 15 μm and by nitric oxide (NO) at 5.3 μm. Convective transport of heat and evaporative transport of latent heat both remove heat from the surface and redistribute it in the atmosphere. Thermal energy storage includes technologies for collecting and storing energy for later use. It may be employed to balance energy demand between daytime and nighttime. The thermal reservoir may be maintained at a temperature above or below that of the ambient environment. Applications include space heating, domestic or process hot water systems, and generating electricity.[citation needed] History In 1701, Isaac Newton anonymously published an article in Philosophical Transactions noting (in modern terms) that the rate of temperature change of a body is proportional to the difference in temperatures (graduum caloris, "degrees of heat") between the body and its surroundings. The phrase "temperature change" was later replaced with "heat loss", and the relationship was named Newton's law of cooling. In general, the law is valid only if the temperature difference is small and the heat transfer mechanism remains the same. In heat conduction, the law is valid only if the thermal conductivity of the warmer body is independent of temperature. The thermal conductivity of most materials is only weakly dependent on temperature, so in general the law holds true. In convective heat transfer, the law is valid for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with temperature difference.
In the case of heat transfer by thermal radiation, Newton's law of cooling holds only for very small temperature differences. In a 1780 letter to Benjamin Franklin, Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities: You remembre you gave me a wire of five metals all drawn thro the same hole Viz. one, of gould, one of silver, copper steel and iron. I supplyed here the two others Viz. the one of tin the other of lead. I fixed these seven wires into a wooden frame at an equal distance of one an other ... I dipt the seven wires into this melted wax as deep as the wooden frame ... By taking them out they were covred with a coat of wax ... When I found that this crust was there about of an equal thikness upon all the wires, I placed them all in a glased earthen vessel full of olive oil heated to some degrees under boiling, taking care that each wire was dipt just as far in the oil as the other ... Now, as they had been all dipt alike at the same time in the same oil, it must follow, that the wire, upon which the wax had been melted the highest, had been the best conductor of heat. ... Silver conducted heat far the best of all other metals, next to this was copper, then gold, tin, iron, steel, Lead. During the years 1784–1798, the British physicist Benjamin Thompson (Count Rumford) lived in Bavaria, reorganizing the Bavarian army for the Prince-elector Charles Theodore, among other official and charitable duties. The Elector gave Thompson access to the facilities of the Electoral Academy of Sciences in Mannheim. During his years in Mannheim and later in Munich, Thompson made a large number of discoveries and inventions related to heat. In 1785 Thompson performed a series of thermal conductivity experiments, which he describes in great detail in the Philosophical Transactions article "New Experiments upon Heat" from 1786. The fact that good electrical conductors are often also good heat conductors, and vice versa, must have been well known at the time, for Thompson mentions it in passing. He intended to measure the relative conductivities of mercury, water, moist air, "common air" (dry air at normal atmospheric pressure), dry air of various rarefactions, and a "Torricellian vacuum". From the striking analogy between the electric fluid and heat respecting their conductors and non-conductors (having found that bodies, in general, which are conductors of the electric fluid, are likewise good conductors of heat, and, on the contrary, that electric bodies, or such as are bad conductors of the electric fluid, are likewise bad conductors of heat), I was led to imagine that the Torricellian vacuum, which is known to afford so ready a passage to the electric fluid, would also have afforded a ready passage to heat. For these experiments, Thompson employed a thermometer inside a large, closed glass tube. Under the circumstances described, heat may—unbeknownst to Thompson—have been transferred more by radiation than by conduction. After the experiments, Thompson was surprised to observe that a vacuum was a significantly poorer heat conductor than air, "which of itself is reckoned among the worst", but found only a very small difference between common air and rarefied air. He also noted the great difference between dry air and moist air, and the great benefit this affords.
I cannot help observing, with what infinite wisdom and goodness Divine Providence appears to have guarded us against the evil effects of excessive heat and cold in the atmosphere; for if it were possible for the air to be equally damp during the severe cold of the winter ... as it sometimes is in summer, its conducing power, and consequently its apparent coldness ... would become quite intolerable; but, happily for us, its power to hold water in solution is diminished, and with it its power to rob us of our animal heat. Every body knows how very disagreeable a very moderate degree of cold is when the air is very damp; and from hence it appears, why the thermometer is not always a just measure of the apparent or sensible heat of the atmosphere. If colds ... are occasioned by our bodies being robbed of our animal heat, the reason is plain why those disorders prevail most during the cold autumnal rains, and upon the breaking up of the frost in the spring. It is likewise plain [why] ... inhabiting damp houses, is so very dangerous; and why the evening air is so pernicious in summer ... and why it is not so during the hard frosts of winter. Thompson concluded with some comments on the important difference between temperature and sensible heat. The ... sensation of hot or cold depends not intirely upon the temperature of the body exciting in us those sensations ... but upon the quantity of heat it is capable of communicating to us, or receiving from us ... and this depends in a great measure upon the conducing powers of the bodies in question. The sensation of hot is the entrance of heat into our bodies; that of cold is its exit ... This is another proof that the thermometer cannot be a just measure of sensible heat ... or rather, that the touch does not afford us a just indication of ... real temperatures. In the 1830s, in The Bridgewater Treatises, the term convection is attested in a scientific sense. In treatise VIII by William Prout, in the book on chemistry, it says: This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation. If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction. Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection, [in footnote: [Latin] Convectio, a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms. Later, in the same treatise VIII, in the book on meteorology, the concept of convection is also applied to "the process by which heat is communicated through water".
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ras_Abu_%27Ammar] | [TOKENS: 898] |
Contents Ras Abu 'Ammar Ras Abu 'Ammar (Arabic: رأس أبو عمار) was a small Palestinian Arab village in the Jerusalem Subdistrict. It was depopulated during the 1948 Arab-Israeli War on October 21, 1948, by the Har'el Brigade of Operation ha-Har. It was located 14 km west of Jerusalem, surrounded on three sides by the Wadi al-Sarar. History Ras Abu 'Ammar is thought to have been established in the 19th century. The nearby Kh. Kafr Sum has remains from the Crusader era, including a courtyard building and rock-cut cisterns. A tower to the south-east was later turned into Maqam ash-sheikh Musafar. Victor Guérin noted: "There are a lot of rickety houses, which are built of small, almost unhewn stones, near one waly, which stands in the shade of a mulberry tree of several hundreds years old. Not far from it there is a semicircle swimming pool, built in a crude way". And further: "A large structure, partly built of ancient stones with typical projection, served as a mosque, as we can tell from the presence of the mihrab in it. It is very likely that the structure had stood before the Muslims settled here, and they just adopted it for their cult". The SWP described it as "a small stone village on a hill; to the east in a small valley is a good spring, with a rock-cut tomb beside it". In 1838, both et-Ras and Kefr Sur were noted as villages in the el-Arkub district, southwest of Jerusalem. In 1863, the small village of Ras Abu 'Ammar, whose high position had given it its name, was pointed out to Victor Guérin on a mountain. An Ottoman village list from around 1870 showed that Ras Abu Ammar had 6 (?) houses and a population of 92, though the population count included only men. In 1883, the PEF's Survey of Western Palestine (SWP) described Ras (Abu 'Ammar) as "a large stone village on a spur, with a fine spring in the valley to the north-west. The hill has only a little scrub on it, but the valley, which is open and rather flat, has olives in it." In 1896 the population of Ras Abu 'Ammar was estimated to be about 279 persons. In the 1922 census of Palestine conducted by the British Mandate authorities, Ras Abu Ammar had a population of 339, all Muslims, increasing in the 1931 census, when it was counted with Aqqur and Ein Hubin, to 488, in 106 houses. In the 1945 statistics, the village, with a population of 620 Muslims, had 8,342 dunams of land according to an official land and population survey. Of the land, 925 dunams were used for plantations and irrigable land, 2,791 for cereals, and 40 dunams were built-up (urban) land. On 4 August 1948, two weeks into the Second truce of the 1948 Arab–Israeli War, the Grand Mufti of Jerusalem and Palestinian nationalist Amin al-Husseini noted that 'for two weeks now . . . the Jews have continued with their attacks on the Arab villages and outposts in all areas. Stormy battles are continuing in the villages of Sataf, Deiraban, Beit Jimal, Ras Abu 'Amr, 'Aqqur, and 'Artuf . . .' The village was depopulated on October 21, 1948. The area was later incorporated into the State of Israel, and the village of Tzur Hadassah was established on Ras Abu 'Ammar land in 1960. In 1992 the village site was described: "The stone rubble of the village houses is strewn across the site. Wild vegetation grows among the debris, in addition to almond, olive, and carob trees. Cactuses grow on the southeastern and southwestern sides of the site; a two-room stone building that used to be the schoolhouse still stands to the southeast."
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Flux] | [TOKENS: 4198] |
Contents Flux Flux describes any effect that appears to pass or travel (whether it actually moves or not) through a surface or substance. Flux is a concept in applied mathematics and vector calculus which has many applications in physics. For transport phenomena, flux is a vector quantity, describing the magnitude and direction of the flow of a substance or property. In vector calculus, flux is a scalar quantity, defined as the surface integral of the perpendicular component of a vector field over a surface. Terminology The word flux comes from Latin: fluxus means "flow", and fluere is "to flow". As fluxion, this term was introduced into differential calculus by Isaac Newton. The concept of heat flux was a key contribution of Joseph Fourier in the analysis of heat transfer phenomena. His seminal treatise Théorie analytique de la chaleur (The Analytical Theory of Heat) defines fluxion as a central quantity and proceeds to derive the now well-known expressions of flux in terms of temperature differences across a slab, and then more generally in terms of temperature gradients or differentials of temperature, across other geometries. One could argue, based on the work of James Clerk Maxwell, that the transport definition precedes the definition of flux used in electromagnetism. The specific quote from Maxwell is: In the case of fluxes, we have to take the integral, over a surface, of the flux through every element of the surface. The result of this operation is called the surface integral of the flux. It represents the quantity which passes through the surface. — James Clerk Maxwell According to the transport definition, flux may be a single vector, or it may be a vector field / function of position. In the latter case flux can readily be integrated over a surface. By contrast, according to the electromagnetism definition, flux is the integral over a surface; it makes no sense to integrate a second-definition flux, for one would be integrating over a surface twice. Thus, Maxwell's quote only makes sense if "flux" is being used according to the transport definition (and furthermore is a vector field rather than a single vector). This is ironic because Maxwell was one of the major developers of what we now call "electric flux" and "magnetic flux" according to the electromagnetism definition. Their names in accordance with the quote (and transport definition) would be "surface integral of electric flux" and "surface integral of magnetic flux", in which case "electric flux" would instead be defined as "electric field" and "magnetic flux" defined as "magnetic field". This implies that Maxwell conceived of these fields as flows/fluxes of some sort. Given a flux according to the electromagnetism definition, the corresponding flux density, if that term is used, refers to its derivative along the surface that was integrated. By the fundamental theorem of calculus, the corresponding flux density is a flux according to the transport definition. Given a current such as electric current—charge per time, current density would also be a flux according to the transport definition—charge per time per area. Due to the conflicting definitions of flux, and the interchangeability of flux, flow, and current in nontechnical English, all of the terms used in this paragraph are sometimes used interchangeably and ambiguously. Concrete fluxes in the rest of this article will be used in accordance with their broad acceptance in the literature, regardless of which definition of flux the term corresponds to.
Flux as flow rate per unit area In transport phenomena (heat transfer, mass transfer and fluid dynamics), flux is defined as the rate of flow of a property per unit area, which has the dimensions [quantity]·[time]−1·[area]−1. The area is of the surface the property is flowing "through" or "across". For example, the amount of water that flows through a cross section of a river each second divided by the area of that cross section, or the amount of sunlight energy that lands on a patch of ground each second divided by the area of the patch, are kinds of flux. Here are three definitions in increasing order of complexity. Each is a special case of the following. In all cases the frequent symbol j (or J) is used for flux, q for the physical quantity that flows, t for time, and A for area. These identifiers will be written in bold when and only when they are vectors. First, flux as a (single) scalar:

j = \frac{I}{A}, \qquad I = \lim_{\Delta t \to 0} \frac{\Delta q}{\Delta t} = \frac{\mathrm{d}q}{\mathrm{d}t}.

In this case the surface in which flux is being measured is fixed and has area A. The surface is assumed to be flat, and the flow is assumed to be everywhere constant with respect to position and perpendicular to the surface. Second, flux as a scalar field defined along a surface, i.e. a function of points on the surface:

j(\mathbf{p}) = \frac{\partial I}{\partial A}(\mathbf{p}), \qquad I(A, \mathbf{p}) = \frac{\mathrm{d}q}{\mathrm{d}t}(A, \mathbf{p}).

As before, the surface is assumed to be flat, and the flow is assumed to be everywhere perpendicular to it. However the flow need not be constant. q is now a function of p, a point on the surface, and A, an area. Rather than measure the total flow through the surface, q measures the flow through the disk with area A centered at p along the surface. Finally, flux as a vector field:

\mathbf{j}(\mathbf{p}) = \frac{\partial \mathbf{I}}{\partial A}(\mathbf{p}), \qquad \mathbf{I}(A, \mathbf{p}) = \underset{\hat{\mathbf{n}}}{\operatorname{arg\,max}}\; \hat{\mathbf{n}}_{\mathbf{p}}\, \frac{\mathrm{d}q}{\mathrm{d}t}(A, \mathbf{p}, \hat{\mathbf{n}}).

In this case, there is no fixed surface we are measuring over. q is a function of a point, an area, and a direction (given by a unit vector \hat{\mathbf{n}}), and measures the flow through the disk of area A perpendicular to that unit vector. I is defined by picking the unit vector that maximizes the flow around the point, because the true flow is maximized across the disk that is perpendicular to it. The unit vector thus uniquely maximizes the function when it points in the "true direction" of the flow. (Strictly speaking, this is an abuse of notation because the "arg max" cannot directly compare vectors; we take the vector with the biggest norm instead.) These direct definitions, especially the last, are rather unwieldy.[citation needed] For example, the arg max construction is artificial from the perspective of empirical measurements, when with a weathervane or similar one can easily deduce the direction of flux at a point. Rather than defining the vector flux directly, it is often more intuitive to state some properties about it.
Furthermore, from these properties the flux can uniquely be determined anyway. If the flux j passes through the area at an angle θ to the area normal \hat{\mathbf{n}}, then the dot product is

\mathbf{j} \cdot \hat{\mathbf{n}} = j \cos\theta.

That is, the component of flux passing through the surface (i.e. normal to it) is j cos θ, while the component of flux passing tangential to the area is j sin θ; but there is no flux actually passing through the area in the tangential direction. The only component of flux passing normal to the area is the cosine component. For vector flux, the surface integral of j over a surface S gives the proper flow per unit time through the surface:

\frac{\mathrm{d}q}{\mathrm{d}t} = \iint_{S} \mathbf{j} \cdot \hat{\mathbf{n}} \, \mathrm{d}A = \iint_{S} \mathbf{j} \cdot \mathrm{d}\mathbf{A},

where \mathbf{A} (and its infinitesimal \mathrm{d}\mathbf{A}) is the vector area, the combination \mathbf{A} = A \hat{\mathbf{n}} of the magnitude of the area A through which the property passes and a unit vector \hat{\mathbf{n}} normal to the area. Unlike in the second set of equations, the surface here need not be flat. Finally, we can integrate again over the time duration t1 to t2, getting the total amount of the property flowing through the surface in that time (t2 − t1):

q = \int_{t_1}^{t_2} \iint_{S} \mathbf{j} \cdot \mathrm{d}\mathbf{A} \, \mathrm{d}t.

The most common forms of flux in the transport phenomena literature include momentum flux, heat flux, diffusion flux, volumetric flux, and mass flux.[citation needed] These fluxes are vectors at each point in space, and have a definite magnitude and direction. Also, one can take the divergence of any of these fluxes to determine the accumulation rate of the quantity in a control volume around a given point in space. For incompressible flow, the divergence of the volume flux is zero. As mentioned above, the chemical molar flux of a component A in an isothermal, isobaric system is defined in Fick's law of diffusion as

\mathbf{J}_{A} = -D_{AB} \nabla c_{A},

where the nabla symbol ∇ denotes the gradient operator, D_{AB} is the diffusion coefficient (m2·s−1) of component A diffusing through component B, and c_{A} is the concentration (mol/m3) of component A. This flux has units of mol·m−2·s−1, and fits Maxwell's original definition of flux. For dilute gases, kinetic molecular theory relates the diffusion coefficient D to the particle density n = N/V, the molecular mass m, the collision cross section σ, and the absolute temperature T by

D = \frac{2}{3 n \sigma} \sqrt{\frac{kT}{\pi m}},

where the second factor is the mean free path and the square root (with the Boltzmann constant k) is the mean velocity of the particles. In turbulent flows, the transport by eddy motion can be expressed as a grossly increased diffusion coefficient. In quantum mechanics, particles of mass m in the quantum state ψ(r, t) have a probability density defined as

\rho = \psi^{*} \psi = |\psi|^{2}.

So the probability of finding a particle in a differential volume element d3r is

\mathrm{d}P = |\psi|^{2} \, \mathrm{d}^{3}\mathbf{r}.

Then the number of particles passing perpendicularly through unit area of a cross-section per unit time is the probability flux,

\mathbf{J} = \frac{i\hbar}{2m} \left( \psi \nabla \psi^{*} - \psi^{*} \nabla \psi \right).
This is sometimes referred to as the probability current, current density, or probability flux density. Flux as a surface integral As a mathematical concept, flux is represented by the surface integral of a vector field,

\Phi_{F} = \iint_{A} \mathbf{F} \cdot \mathrm{d}\mathbf{A} = \iint_{A} \mathbf{F} \cdot \mathbf{n} \, \mathrm{d}A,

where F is a vector field and dA is the vector area of the surface A, directed as the surface normal. In the second form, n is the outward-pointing unit normal vector to the surface. The surface has to be orientable, i.e. two sides can be distinguished: the surface does not fold back onto itself. Also, the surface has to be actually oriented, i.e. we use a convention as to which direction of flow is counted positive; flowing backward is then counted negative. The surface normal is usually directed by the right-hand rule. Conversely, one can consider the flux the more fundamental quantity and call the vector field the flux density. Often a vector field is drawn by curves (field lines) following the "flow"; the magnitude of the vector field is then the line density, and the flux through a surface is the number of lines. Lines originate from areas of positive divergence (sources) and end at areas of negative divergence (sinks). In the usual field-line diagram, the number of arrows passing through a unit area is the flux density, the curve encircling the arrows denotes the boundary of the surface, and the orientation of the arrows with respect to the surface denotes the sign of the inner product of the vector field with the surface normals. If the surface encloses a 3D region, usually the surface is oriented such that the influx is counted positive; the opposite is the outflux. The divergence theorem states that the net outflux through a closed surface, in other words the net outflux from a 3D region, is found by adding the local net outflow from each point in the region (which is expressed by the divergence). If the surface is not closed, it has an oriented curve as boundary. Stokes' theorem states that the flux of the curl of a vector field is the line integral of the vector field over this boundary. This path integral is also called circulation, especially in fluid dynamics. Thus the curl is the circulation density. We can apply the flux and these theorems to many disciplines in which we see currents, forces, etc., applied through areas. An electric "charge", such as a single proton in space, has a magnitude defined in coulombs. Such a charge has an electric field surrounding it. In pictorial form, the electric field from a positive point charge can be visualized as a dot radiating electric field lines (sometimes also called "lines of force"). Conceptually, electric flux can be thought of as "the number of field lines" passing through a given area. Mathematically, electric flux is the integral of the normal component of the electric field over a given area. Hence, units of electric flux are, in the MKS system, newtons per coulomb times meters squared, or N m2/C. (Electric flux density is the electric flux per unit area, and is a measure of the strength of the normal component of the electric field averaged over the area of integration. Its units are N/C, the same as the electric field in MKS units.)
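This "number of field lines through a surface" picture can be checked numerically: for a point charge, integrating E·n over any enclosing sphere returns q/ε0 regardless of the radius, consistent with Gauss's law and the tube argument that follows. A sketch using a plain midpoint rule in spherical coordinates; numpy is assumed available, and the charge value and radii are arbitrary illustrative choices.

import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
q    = 1.0e-9             # point charge at the origin, C (assumed)

def flux_through_sphere(R, n=200):
    # Midpoint grid in spherical coordinates (theta polar, phi azimuthal).
    th = (np.arange(n) + 0.5) * (np.pi / n)
    ph = (np.arange(2 * n) + 0.5) * (np.pi / n)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    # Outward unit normal of the sphere (the radial direction).
    nx = np.sin(TH) * np.cos(PH)
    ny = np.sin(TH) * np.sin(PH)
    nz = np.cos(TH)
    # Point-charge field on the surface: E = q r_hat / (4 pi eps0 R^2).
    coef = q / (4 * np.pi * eps0 * R**2)
    E_dot_n = coef * (nx * nx + ny * ny + nz * nz)
    dA = R**2 * np.sin(TH) * (np.pi / n) ** 2   # scalar area element
    return float(np.sum(E_dot_n * dA))

for R in (0.1, 1.0, 10.0):
    print(f"R = {R:5.1f} m: flux = {flux_through_sphere(R):.6e}"
          f"   q/eps0 = {q / eps0:.6e}")

The computed flux is independent of the radius, which is the numerical face of the inverse-square cancellation discussed next.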
Two forms of electric flux are used, one for the E-field,

\Phi_{E} = \iint_{A} \mathbf{E} \cdot \mathrm{d}\mathbf{A},

and one for the D-field (called the electric displacement),

\Phi_{D} = \iint_{A} \mathbf{D} \cdot \mathrm{d}\mathbf{A}.

This quantity arises in Gauss's law, which states that the flux of the electric field E out of a closed surface is proportional to the electric charge QA enclosed in the surface (independent of how that charge is distributed). The integral form is

\oiint_{\partial V} \mathbf{E} \cdot \mathrm{d}\mathbf{A} = \frac{Q_{A}}{\varepsilon_{0}},

where ε0 is the permittivity of free space. If one considers the flux of the electric field vector, E, for a tube near a point charge in the field of the charge but not containing it, with sides formed by lines tangent to the field, the flux for the sides is zero and there is an equal and opposite flux at both ends of the tube. This is a consequence of Gauss's law applied to an inverse square field. The flux for any cross-sectional surface of the tube will be the same. The total flux for any surface surrounding a charge q is q/ε0. In free space the electric displacement is given by the constitutive relation D = ε0 E, so for any bounding surface the D-field flux equals the charge QA within it. Here the expression "flux of" indicates a mathematical operation and, as can be seen, the result is not necessarily a "flow", since nothing actually flows along electric field lines. The magnetic flux density (magnetic field), having the unit Wb/m2 (tesla), is denoted by B, and magnetic flux is defined analogously:

\Phi_{B} = \iint_{A} \mathbf{B} \cdot \mathrm{d}\mathbf{A},

with the same notation as above. The quantity arises in Faraday's law of induction, where the magnetic flux is time-dependent either because the boundary is time-dependent or the magnetic field is time-dependent. In integral form,

-\frac{\mathrm{d}\Phi_{B}}{\mathrm{d}t} = \oint_{\partial A} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell},

where dℓ is an infinitesimal vector line element of the closed curve ∂A, with magnitude equal to the length of the infinitesimal line element and direction given by the tangent to the curve ∂A, with the sign determined by the integration direction. The time-rate of change of the magnetic flux through a loop of wire is minus the electromotive force created in that wire. The direction is such that if current is allowed to pass through the wire, the electromotive force will cause a current which "opposes" the change in magnetic field by itself producing a magnetic field opposite to the change. This is the basis for inductors and many electric generators. Using this definition, the flux of the Poynting vector S over a specified surface is the rate at which electromagnetic energy flows through that surface, defined like before:

\Phi_{S} = \iint_{A} \mathbf{S} \cdot \mathrm{d}\mathbf{A}.

The flux of the Poynting vector through a surface is the electromagnetic power, or energy per unit time, passing through that surface. This is commonly used in analysis of electromagnetic radiation, but has application to other electromagnetic systems as well. Confusingly, the Poynting vector is sometimes called the power flux, which is an example of the first usage of flux, above. It has units of watts per square metre (W/m2).
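The Faraday's-law relation above can be made concrete with a loop in a spatially uniform, sinusoidally varying field, comparing a numerical −dΦB/dt against the analytic derivative. The loop size, field amplitude, and frequency below are assumed illustrative values.

import math

# Circular loop of radius r in a uniform field B(t) = B0 sin(2 pi f t) normal
# to the loop, so the flux is Phi(t) = B(t) * pi r^2 and emf = -dPhi/dt.
r, B0, f = 0.05, 0.1, 50.0      # m, T, Hz (illustrative assumptions)
area = math.pi * r**2

def phi(t):
    # Magnetic flux through the loop at time t.
    return B0 * math.sin(2 * math.pi * f * t) * area

def emf(t, dt=1e-7):
    # Numerical -dPhi/dt via a central difference.
    return -(phi(t + dt) - phi(t - dt)) / (2 * dt)

t = 0.0
analytic = -B0 * 2 * math.pi * f * math.cos(2 * math.pi * f * t) * area
print(f"numerical emf at t=0: {emf(t):+.6f} V")
print(f"analytic  emf at t=0: {analytic:+.6f} V")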
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Elon_Musk#cite_note-419] | [TOKENS: 10515] |
Contents Elon Musk Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and entrepreneur known for his leadership of Tesla, SpaceX, Twitter, and xAI. Musk has been the wealthiest person in the world since 2025; as of February 2026,[update] Forbes estimates his net worth to be around US$852 billion. Born into a wealthy family in Pretoria, South Africa, Musk emigrated to Canada in 1989; he held Canadian citizenship from birth, as his mother was born there. He received bachelor's degrees in 1997 from the University of Pennsylvania before moving to California to pursue business ventures. In 1995, Musk co-founded the software company Zip2. Following its sale in 1999, he co-founded X.com, an online payment company that later merged to form PayPal, which was acquired by eBay in 2002. Musk also became an American citizen in 2002. That same year, Musk founded the space technology company SpaceX, becoming its CEO and chief engineer; the company has since led innovations in reusable rockets and commercial spaceflight. Musk joined the automaker Tesla as an early investor in 2004 and became its CEO and product architect in 2008; it has since become a leader in electric vehicles. In 2015, he co-founded OpenAI to advance artificial intelligence (AI) research, but later left; growing discontent with the organization's direction and its leadership in the AI boom of the 2020s led him to establish xAI, which became a subsidiary of SpaceX in 2026. In 2022, he acquired the social network Twitter, implementing significant changes and rebranding it as X in 2023. His other businesses include the neurotechnology company Neuralink, which he co-founded in 2016, and the tunneling company the Boring Company, which he founded in 2017. In November 2025, a Tesla pay package for Musk worth $1 trillion was approved; he is to receive it over 10 years if he meets specific goals. Musk was the largest donor in the 2024 U.S. presidential election, where he supported Donald Trump. After Trump was inaugurated as president in early 2025, Musk served as Senior Advisor to the President and as the de facto head of the Department of Government Efficiency (DOGE). After a public feud with Trump, Musk left the Trump administration and returned to managing his companies. Musk is a supporter of global far-right figures, causes, and political parties. His political activities, views, and statements have made him a polarizing figure. Musk has been criticized for COVID-19 misinformation, promoting conspiracy theories, and affirming antisemitic, racist, and transphobic comments. His acquisition of Twitter was controversial due to a subsequent increase in hate speech and the spread of misinformation on the service, following his pledge to decrease censorship. His role in the second Trump administration attracted public backlash, particularly in response to DOGE. The emails he sent to Jeffrey Epstein are included in the Epstein files, which were published between 2025 and 2026 and became a topic of worldwide debate. Early life Elon Reeve Musk was born on June 28, 1971, in Pretoria, South Africa's administrative capital. He is of British and Pennsylvania Dutch ancestry. His mother, Maye (née Haldeman), is a model and dietitian born in Saskatchewan, Canada, and raised in South Africa. Musk therefore holds both South African and Canadian citizenship from birth.
His father, Errol Musk, is a South African electromechanical engineer, pilot, sailor, consultant, emerald dealer, and property developer, who partly owned a rental lodge at Timbavati Private Nature Reserve. His maternal grandfather, Joshua N. Haldeman, who died in a plane crash when Elon was a toddler, was an American-born Canadian chiropractor, aviator and political activist in the technocracy movement who moved to South Africa in 1950. Elon has a younger brother, Kimbal, a younger sister, Tosca, and four paternal half-siblings. Musk was baptized as a child in the Anglican Church of Southern Africa. Despite both Elon and Errol previously stating that Errol was a part owner of a Zambian emerald mine, in 2023, Errol recounted that the deal he made was to receive "a portion of the emeralds produced at three small mines". Errol was elected to the Pretoria City Council as a representative of the anti-apartheid Progressive Party and has said that his children shared their father's dislike of apartheid. After his parents divorced in 1979, Elon, aged around 9, chose to live with his father because Errol Musk had an Encyclopædia Britannica and a computer. Elon later regretted his decision and became estranged from his father. Elon has recounted trips to a wilderness school that he described as a "paramilitary Lord of the Flies" where "bullying was a virtue" and children were encouraged to fight over rations. In one incident, after an altercation with a fellow pupil, Elon was thrown down concrete steps and beaten severely, leading to him being hospitalized for his injuries. Elon described his father berating him after he was discharged from the hospital. Errol denied berating Elon and claimed, "The [other] boy had just lost his father to suicide, and Elon had called him stupid. Elon had a tendency to call people stupid. How could I possibly blame that child?" Elon was an enthusiastic reader of books, and had attributed his success in part to having read The Lord of the Rings, the Foundation series, and The Hitchhiker's Guide to the Galaxy. At age ten, he developed an interest in computing and video games, teaching himself how to program from the VIC-20 user manual. At age twelve, Elon sold his BASIC-based game Blastar to PC and Office Technology magazine for approximately $500 (equivalent to $1,600 in 2025). Musk attended Waterkloof House Preparatory School, Bryanston High School, and then Pretoria Boys High School, where he graduated. Musk was a decent but unexceptional student, earning a 61/100 in Afrikaans and a B on his senior math certification. Musk applied for a Canadian passport through his Canadian-born mother to avoid South Africa's mandatory military service, which would have forced him to participate in the apartheid regime, as well as to ease his path to immigration to the United States. While waiting for his application to be processed, he attended the University of Pretoria for five months. Musk arrived in Canada in June 1989, connected with a second cousin in Saskatchewan, and worked odd jobs, including at a farm and a lumber mill. In 1990, he entered Queen's University in Kingston, Ontario. Two years later, he transferred to the University of Pennsylvania, where he studied until 1995. Although Musk has said that he earned his degrees in 1995, the University of Pennsylvania did not award them until 1997 – a Bachelor of Arts in physics and a Bachelor of Science in economics from the university's Wharton School. 
He reportedly hosted large, ticketed house parties to help pay for tuition, and wrote a business plan for an electronic book-scanning service similar to Google Books. In 1994, Musk held two internships in Silicon Valley: one at energy storage startup Pinnacle Research Institute, which investigated electrolytic supercapacitors for energy storage, and another at Palo Alto–based startup Rocket Science Games. In 1995, he was accepted to a graduate program in materials science at Stanford University, but did not enroll. Musk decided to join the Internet boom of the 1990s, applying for a job at Netscape, to which he reportedly never received a response. The Washington Post reported that Musk lacked legal authorization to remain and work in the United States after failing to enroll at Stanford. In response, Musk said he was allowed to work at that time and that his student visa transitioned to an H1-B. According to numerous former business associates and shareholders, Musk said he was on a student visa at the time. Business career In 1995, Musk, his brother Kimbal, and Greg Kouri founded the web software company Zip2 with funding from a group of angel investors. They housed the venture at a small rented office in Palo Alto. Replying to Rolling Stone, Musk denounced the notion that they started their company with funds borrowed from Errol Musk, but in a tweet, he recognized that his father contributed 10% of a later funding round. The company developed and marketed an Internet city guide for the newspaper publishing industry, with maps, directions, and yellow pages. According to Musk, "The website was up during the day and I was coding it at night, seven days a week, all the time." To impress investors, Musk built a large plastic structure around a standard computer to create the impression that Zip2 was powered by a small supercomputer. The Musk brothers obtained contracts with The New York Times and the Chicago Tribune, and persuaded the board of directors to abandon plans for a merger with CitySearch. Musk's attempts to become CEO were thwarted by the board. Compaq acquired Zip2 for $307 million in cash in February 1999 (equivalent to $590,000,000 in 2025), and Musk received $22 million (equivalent to $43,000,000 in 2025) for his 7-percent share. In 1999, Musk co-founded X.com, an online financial services and e-mail payment company. The startup was one of the first federally insured online banks, and, in its initial months of operation, over 200,000 customers joined the service. The company's investors regarded Musk as inexperienced and replaced him with Intuit CEO Bill Harris by the end of the year. The following year, X.com merged with online bank Confinity to avoid competition. Founded by Max Levchin and Peter Thiel, Confinity had its own money-transfer service, PayPal, which was more popular than X.com's service. Within the merged company, Musk returned as CEO. Musk's preference for Microsoft software over Unix created a rift in the company and caused Thiel to resign. Due to resulting technological issues and lack of a cohesive business model, the board ousted Musk and replaced him with Thiel in 2000.[b] Under Thiel, the company focused on the PayPal service and was renamed PayPal in 2001. In 2002, PayPal was acquired by eBay for $1.5 billion (equivalent to $2,700,000,000 in 2025) in stock, of which Musk—the largest shareholder with 11.72% of shares—received $175.8 million (equivalent to $320,000,000 in 2025). 
In 2017, Musk purchased the domain X.com from PayPal for an undisclosed amount, stating that it had sentimental value. In 2001, Musk became involved with the nonprofit Mars Society and discussed funding plans to place a growth chamber for plants on Mars. Seeking a way to launch the greenhouse payloads into space, Musk made two unsuccessful trips to Moscow to purchase intercontinental ballistic missiles (ICBMs) from the Russian companies NPO Lavochkin and Kosmotras. Musk instead decided to start a company to build affordable rockets. With $100 million of his early fortune (equivalent to $180,000,000 in 2025), Musk founded SpaceX in May 2002 and became the company's CEO and chief engineer. SpaceX attempted its first launch of the Falcon 1 rocket in 2006. Although the rocket failed to reach Earth orbit, the company was awarded a Commercial Orbital Transportation Services program contract from NASA, then led by Mike Griffin. After two more failed attempts that nearly caused Musk to go bankrupt, SpaceX succeeded in launching the Falcon 1 into orbit in 2008. Later that year, SpaceX received a $1.6 billion NASA contract (equivalent to $2,400,000,000 in 2025) for Falcon 9-launched Dragon spacecraft flights to the International Space Station (ISS), replacing the Space Shuttle after its 2011 retirement. In 2012, the Dragon vehicle docked with the ISS, a first for a commercial spacecraft. Working towards its goal of reusable rockets, in 2015 SpaceX successfully landed the first stage of a Falcon 9 on a land platform. Later landings were achieved on autonomous spaceport drone ships, an ocean-based recovery platform. In 2018, SpaceX launched the Falcon Heavy; the inaugural mission carried Musk's personal Tesla Roadster as a dummy payload. Since 2019, SpaceX has been developing Starship, a reusable, super heavy-lift launch vehicle intended to replace the Falcon 9 and Falcon Heavy. In 2020, SpaceX launched its first crewed flight, Demo-2, becoming the first private company to place astronauts into orbit and dock a crewed spacecraft with the ISS. In 2024, NASA awarded SpaceX an $843 million (equivalent to $865,000,000 in 2025) contract to build a spacecraft that NASA will use to deorbit the ISS at the end of its lifespan. In 2015, SpaceX began development of the Starlink constellation of low Earth orbit satellites to provide satellite Internet access. After the launch of prototype satellites in 2018, the first large constellation was deployed in May 2019. As of May 2025[update], over 7,600 Starlink satellites are operational, comprising 65% of all operational Earth satellites. The total cost of the decade-long project to design, build, and deploy the constellation was estimated by SpaceX in 2020 to be $10 billion (equivalent to $12,000,000,000 in 2025).[c] During the Russian invasion of Ukraine, Musk provided free Starlink service to Ukraine, permitting Internet access and communication at a yearly cost to SpaceX of $400 million (equivalent to $440,000,000 in 2025). However, Musk refused to block Russian state media on Starlink. In 2023, Musk denied Ukraine's request to activate Starlink over Crimea to aid an attack against the Russian navy, citing fears of a nuclear response. Tesla, Inc., originally Tesla Motors, was incorporated in July 2003 by Martin Eberhard and Marc Tarpenning. Both men played active roles in the company's early development prior to Musk's involvement.
Musk led the Series A round of investment in February 2004; he invested $6.35 million (equivalent to $11,000,000 in 2025), became the majority shareholder, and joined Tesla's board of directors as chairman. Musk took an active role within the company and oversaw Roadster product design, but was not deeply involved in day-to-day business operations. Following a series of escalating conflicts in 2007 and the 2008 financial crisis, Eberhard was ousted from the firm.[page needed] Musk assumed leadership of the company as CEO and product architect in 2008. A 2009 lawsuit settlement with Eberhard designated Musk as a Tesla co-founder, along with Tarpenning and two others. Tesla began delivery of the Roadster, an electric sports car, in 2008. With sales of about 2,500 vehicles, it was the first mass-production all-electric car to use lithium-ion battery cells. Under Musk, Tesla has since launched several strong-selling electric vehicles, including the four-door sedan Model S (2012), the crossover Model X (2015), the mass-market sedan Model 3 (2017), the crossover Model Y (2020), and the pickup truck Cybertruck (2023). In November 2018, Musk resigned as chairman of the board as part of the settlement of a lawsuit from the SEC over him tweeting that funding had been "secured" for potentially taking Tesla private. The company has also constructed multiple lithium-ion battery and electric vehicle factories, called Gigafactories. Since its initial public offering in 2010, Tesla stock has risen significantly; it became the most valuable carmaker in summer 2020, and it entered the S&P 500 later that year. In October 2021, it reached a market capitalization of $1 trillion (equivalent to $1,200,000,000,000 in 2025), the sixth company in U.S. history to do so. Musk provided the initial concept and financial capital for SolarCity, which his cousins Lyndon and Peter Rive founded in 2006. By 2013, SolarCity was the second-largest provider of solar power systems in the United States. In 2014, Musk promoted the idea of SolarCity building an advanced production facility in Buffalo, New York, triple the size of the largest solar plant in the United States. Construction of the factory started in 2014 and was completed in 2017. It operated as a joint venture with Panasonic until early 2020. Tesla acquired SolarCity for $2 billion in 2016 (equivalent to $2,700,000,000 in 2025) and merged it with its battery unit to create Tesla Energy. The deal's announcement resulted in a more than 10% drop in Tesla's stock price; at the time, SolarCity was facing liquidity issues. Multiple shareholder groups filed a lawsuit against Musk and Tesla's directors, stating that the purchase of SolarCity was done solely to benefit Musk and came at the expense of Tesla and its shareholders. Tesla directors settled the lawsuit in January 2020, leaving Musk the sole remaining defendant. Two years later, the court ruled in Musk's favor. In 2016, Musk co-founded Neuralink, a neurotechnology startup, with an investment of $100 million. Neuralink aims to integrate the human brain with artificial intelligence (AI) by creating devices that are embedded in the brain. Such technology could enhance memory or allow the devices to communicate with software. The company also hopes to develop devices to treat neurological conditions like spinal cord injuries. In 2022, Neuralink announced that clinical trials would begin by the end of the year. In September 2023, the Food and Drug Administration approved Neuralink to initiate six-year human trials.
Neuralink has conducted animal testing on macaques at the University of California, Davis. In 2021, the company released a video in which a macaque played the video game Pong via a Neuralink implant. The company's animal trials—which have caused the deaths of some monkeys—have led to claims of animal cruelty. The Physicians Committee for Responsible Medicine has alleged that Neuralink violated the Animal Welfare Act. Employees have complained that pressure from Musk to accelerate development has led to botched experiments and unnecessary animal deaths. In 2022, a federal probe was launched into possible animal welfare violations by Neuralink. In 2017, Musk founded the Boring Company to construct tunnels; he also revealed plans for specialized, underground, high-occupancy vehicles that could travel up to 150 miles per hour (240 km/h) and thus circumvent above-ground traffic in major cities. Early in 2017, the company began discussions with regulatory bodies and initiated construction of a 30-foot (9.1 m) wide, 50-foot (15 m) long, and 15-foot (4.6 m) deep "test trench" on the premises of SpaceX's offices, as that required no permits. The Los Angeles tunnel, less than two miles (3.2 km) in length, debuted to journalists in 2018. It used Tesla Model Xs and was reported to be a rough ride while traveling at suboptimal speeds. Two tunnel projects announced in 2018, in Chicago and West Los Angeles, have been canceled. A tunnel beneath the Las Vegas Convention Center was completed in early 2021, and local officials have approved further expansions of the tunnel system. In early 2017, Musk expressed interest in buying Twitter and questioned the platform's commitment to freedom of speech. By 2022, Musk had acquired a 9.2% stake in the company, making him the largest shareholder.[d] Musk initially agreed to a deal that would appoint him to Twitter's board of directors and prohibit him from acquiring more than 14.9% of the company. Days later, Musk instead made a $43 billion offer to buy Twitter outright, and by the end of April he had concluded a deal for approximately $44 billion, including approximately $12.5 billion in loans and $21 billion in equity financing. After attempting to back out of the deal, Musk completed the purchase on October 27, 2022. Immediately after the acquisition, Musk fired several top Twitter executives, including CEO Parag Agrawal, and took over as CEO himself. Under Musk, Twitter instituted monthly subscriptions for a "blue check" and laid off a significant portion of the company's staff. Musk reduced content moderation, and hate speech increased on the platform after his takeover. In late 2022, Musk released internal documents relating to Twitter's moderation of the Hunter Biden laptop controversy in the lead-up to the 2020 presidential election. Musk promised to step down as CEO after losing a Twitter poll on the question, and five months later he did so, transitioning to the roles of executive chairman and chief technology officer (CTO). Despite Musk stepping down as CEO, X has continued to struggle with challenges such as viral misinformation, hate speech, and antisemitism controversies. Musk has been accused of trying to silence critics such as Twitch streamer Asmongold, who criticized him during one of his streams, by removing their accounts' blue checkmarks (which hinders visibility and has been described as a form of shadow banning) or by suspending their accounts without justification.
Other activities In August 2013, Musk announced plans for a version of a vactrain and assigned engineers from SpaceX and Tesla to design a transport system between Greater Los Angeles and the San Francisco Bay Area, at an estimated cost of $6 billion. Later that year, Musk unveiled the concept, dubbed the Hyperloop, intended to make travel cheaper than any other mode of transport for such long distances. In December 2015, Musk co-founded OpenAI, a not-for-profit artificial intelligence (AI) research company aiming to develop artificial general intelligence intended to be safe and beneficial to humanity. Musk pledged $1 billion of funding to the company, and initially gave $50 million. In 2018, Musk left the OpenAI board. Since 2018, OpenAI has made significant advances in machine learning. In July 2023, Musk launched the artificial intelligence company xAI, which aims to develop a generative AI program that competes with existing offerings like OpenAI's ChatGPT. Musk obtained funding from investors in SpaceX and Tesla, and xAI hired engineers from Google and OpenAI. Musk uses a private jet owned by Falcon Landing LLC, a SpaceX-linked company, and acquired a second jet in August 2020. His heavy use of the jets, and the consequent fossil fuel usage, has received criticism. Musk's flight usage is tracked on social media through ElonJet. In December 2022, Musk banned the ElonJet account on Twitter and temporarily banned the accounts of journalists who posted stories about the incident, including Donie O'Sullivan, Keith Olbermann, and journalists from The New York Times, The Washington Post, CNN, and The Intercept. In October 2025, Musk's company xAI launched Grokipedia, an AI-generated online encyclopedia that he promoted as an alternative to Wikipedia. Articles on Grokipedia are generated and reviewed by xAI's Grok chatbot. Media coverage and academic analysis described Grokipedia as frequently reusing Wikipedia content while framing contested political and social topics in line with Musk's own views and right-wing narratives. A study by Cornell University researchers and NBC News stated that Grokipedia cites sources that are blacklisted or considered "generally unreliable" on Wikipedia, such as the conspiracy site Infowars and the neo-Nazi forum Stormfront. Wired, The Guardian and Time criticized Grokipedia for factual errors and for presenting Musk himself in unusually positive terms while downplaying controversies. Politics Musk is an outlier among business leaders, who typically avoid partisan political advocacy. Musk was a registered independent voter when he lived in California. Historically, he has donated to both Democrats and Republicans, many of whom serve in states in which he has a vested interest. Since 2022, his political contributions have mostly supported Republicans, with his first vote for a Republican going to Mayra Flores in the 2022 Texas's 34th congressional district special election. In 2024, he started supporting international far-right political parties, activists, and causes, and has shared misinformation and numerous conspiracy theories. Since 2024, his views have been generally described as right-wing. Musk supported Barack Obama in 2008 and 2012, Hillary Clinton in 2016, Joe Biden in 2020, and Donald Trump in 2024. In the 2020 Democratic Party presidential primaries, Musk endorsed candidate Andrew Yang and expressed support for Yang's proposed universal basic income; he also endorsed Kanye West's 2020 presidential campaign.
In 2021, Musk publicly expressed opposition to the Build Back Better Act, a $3.5 trillion legislative package endorsed by Joe Biden that ultimately failed to pass due to unanimous opposition from congressional Republicans and several Democrats. In 2022, he gave over $50 million to Citizens for Sanity, a conservative political action committee. In 2023, he supported Republican Ron DeSantis for the 2024 U.S. presidential election, giving $10 million to his campaign and hosting DeSantis's campaign announcement on Twitter Spaces. From June 2023 to January 2024, Musk hosted a bipartisan set of X Spaces with Republican and Democratic candidates, including Robert F. Kennedy Jr., Vivek Ramaswamy, and Dean Phillips. In October 2025, former vice president Kamala Harris commented that it was a mistake on the Democratic side not to invite Musk to a White House electric vehicle event organized in August 2021 and featuring executives from General Motors, Ford and Stellantis, despite Tesla being "the major American manufacturer of extraordinary innovation in this space." Fortune remarked that the non-invitation was a nod to United Auto Workers and organized labor. Harris said presidents should put aside political loyalties when it came to recognizing innovation, and suggested that the snub affected Musk's perspective. Fortune noted that, at the time, Musk said, "Yeah, seems odd that Tesla wasn't invited." A month later, he criticized the Biden government as "not the friendliest administration." Jacob Silverman, author of the book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley, said that the tech industry represented by Musk, Thiel, Andreessen and other capitalists actually flourished under Biden, but that these tech leaders chose Trump for their common ground on cultural issues. By early 2024, Musk had become a vocal and financial supporter of Donald Trump. In July 2024, minutes after the attempted assassination of Donald Trump, Musk endorsed him for president, saying, "I fully endorse President Trump and hope for his rapid recovery." During the presidential campaign, Musk joined Trump on stage at a campaign rally, and promoted conspiracy theories and falsehoods about Democrats, election fraud and immigration in support of Trump. Musk was the largest individual donor of the 2024 election. In 2025, Musk contributed $19 million to the Wisconsin Supreme Court race, hoping to influence the state's future redistricting efforts and its regulations governing car manufacturers and dealers. In 2023, Musk said he shunned the World Economic Forum because it was boring; the organization commented that it had not invited him since 2015. He has, however, participated in Dialog, an event dubbed "Tech Bilderberg" and organized by Peter Thiel and Auren Hoffman. Musk's international political actions and comments have come under increasing scrutiny and criticism, especially from the governments and leaders of France, Germany, Norway, Spain and the United Kingdom, particularly due to his position in the U.S. government as well as his ownership of X. An NBC News analysis found he had boosted far-right political movements seeking to cut immigration and curtail regulation of business in at least 18 countries on six continents since 2023.
During his speech after the second inauguration of Donald Trump, Musk twice made a gesture interpreted by many as a Nazi or fascist Roman salute.[e] He thumped his right hand over his heart, fingers spread wide, and then extended his right arm out, emphatically, at an upward angle, palm down and fingers together. He then repeated the gesture to the crowd behind him. As he finished the gestures, he said to the crowd, "My heart goes out to you. It is thanks to you that the future of civilization is assured." It was widely condemned as an intentional Nazi salute in Germany, where making such gestures is illegal. The Anti-Defamation League said it was not a Nazi salute, but other Jewish organizations disagreed and condemned the salute. American public opinion was divided on partisan lines as to whether it was a fascist salute. Musk dismissed the accusations of Nazi sympathies, deriding them as "dirty tricks" and a "tired" attack. Neo-Nazi and white supremacist groups celebrated it as a Nazi salute. Multiple European political parties demanded that Musk be banned from entering their countries. The concept of DOGE emerged in a discussion between Musk and Donald Trump, and in August 2024, Trump committed to giving Musk an advisory role, with Musk accepting the offer. In November and December 2024, Musk suggested that the organization could help to cut the U.S. federal budget, consolidate the number of federal agencies, and eliminate the Consumer Financial Protection Bureau, and that its final stage would be "deleting itself". In January 2025, the organization was created by executive order, and Musk was designated a "special government employee". Musk led the organization and was a senior advisor to the president, although his official role was unclear. In a sworn statement during a lawsuit, the director of the White House Office of Administration stated that Musk "is not an employee of the U.S. DOGE Service or U.S. DOGE Service Temporary Organization", "is not the U.S. DOGE Service administrator", and has "no actual or formal authority to make government decisions himself". Trump said two days later that he had put Musk in charge of DOGE, and a federal judge later ruled that Musk acted as the de facto leader of DOGE. Musk's role in the second Trump administration, particularly his work with DOGE, attracted public backlash. He was criticized for his treatment of federal government employees, including his influence over the mass layoffs of the federal workforce. He prioritized secrecy within the organization and accused others of violating privacy laws. A Senate report alleged that Musk could avoid up to $2 billion in legal liability as a result of DOGE's actions. In May 2025, Bill Gates accused Musk of "killing the world's poorest children" through his cuts to USAID, which modeling by Boston University estimated had resulted in 300,000 deaths by this time, most of them children. By November 2025, the estimated death toll had increased to 400,000 children and 200,000 adults. Musk announced on May 28, 2025, that he would depart from the Trump administration as planned when the 130-day limit for special government employees expired, with a White House official confirming that Musk's offboarding from the administration was already underway. His departure was officially confirmed during a joint Oval Office press conference with Trump on May 30, 2025.
After leaving office, Musk criticized the Trump administration's Big Beautiful Bill, calling it a "disgusting abomination" due to its provisions increasing the deficit. A feud began between Musk and Trump, its most notable event being Musk's June 5, 2025 post on X (formerly Twitter) alleging that Trump had ties to sex offender Jeffrey Epstein: "@realDonaldTrump is in the Epstein files. That is the real reason they have not been made public." Trump responded on Truth Social, stating that Musk went "CRAZY" after the "EV Mandate" was purportedly taken away, and threatened to cut Musk's government contracts. Musk then called for a third Trump impeachment. The next day, Trump stated that he did not wish to reconcile with Musk, and added that Musk would face "very serious consequences" if he funded Democratic candidates. On June 11, Musk publicly apologized for the tweets against Trump, saying they "went too far". Views Rejecting the conservative label, Musk has described himself as a political moderate, even as his views have become more right-wing over time. His views have been characterized as libertarian and far-right, and after his involvement in European politics, they have received criticism from world leaders such as Emmanuel Macron and Olaf Scholz. Within the context of American politics, Musk supported Democratic candidates up until 2022, at which point he voted for a Republican for the first time. He has stated support for universal basic income, gun rights, freedom of speech, a tax on carbon emissions, and H-1B visas. Musk has expressed concern about issues such as artificial intelligence (AI) and climate change, and has been a critic of wealth taxes, short-selling, and government subsidies. An immigrant himself, Musk has been accused of being anti-immigration, and regularly blames immigration policies for illegal immigration. He is also a pronatalist who believes population decline is the biggest threat to civilization, and he identifies as a cultural Christian. Musk has long been an advocate for space colonization, especially the colonization of Mars; he has repeatedly pushed for humanity to colonize Mars in order to become an interplanetary species and lower the risk of human extinction. Musk has promoted conspiracy theories and made controversial statements that have led to accusations of racism, sexism, antisemitism, transphobia, disseminating disinformation, and support of white pride. While describing himself as a "pro-Semite", his comments regarding George Soros and Jewish communities have been condemned by the Anti-Defamation League and the Biden White House. Musk was criticized during the COVID-19 pandemic for making unfounded epidemiological claims, defying COVID-19 lockdown restrictions, and supporting the Canada convoy protest against vaccine mandates. He has amplified false claims of white genocide in South Africa. Musk has been critical of Israel's actions in the Gaza Strip during the Gaza war, praised China's economic and climate goals, suggested that Taiwan and China should resolve cross-strait relations, and has been described as having a close relationship with the Chinese government. In Europe, Musk expressed support for Ukraine in 2022 during the Russian invasion, recommended referendums and peace deals on the annexed Russia-occupied territories, and supported the far-right Alternative for Germany political party in 2024.
Regarding British politics, Musk blamed the 2024 UK riots on mass migration and open borders, criticized Prime Minister Keir Starmer for what he described as a "two-tier" policing system, and was in turn accused of spreading misinformation and amplifying the far-right. He has also voiced his support for far-right activist Tommy Robinson and pledged electoral support for Reform UK. In February 2026, Musk described Spanish Prime Minister Pedro Sánchez as a "tyrant" following Sánchez's proposal to prohibit minors under the age of 16 from accessing social media platforms. Legal affairs In 2018, Musk was sued by the U.S. Securities and Exchange Commission (SEC) for a tweet stating that funding had been secured for potentially taking Tesla private.[f] The securities fraud lawsuit characterized the tweet as false, misleading, and damaging to investors, and sought to bar Musk from serving as CEO of publicly traded companies. Two days later, Musk settled with the SEC, without admitting or denying the SEC's allegations. As a result, Musk and Tesla were each fined $20 million, and Musk was forced to step down as Tesla chairman for three years but was able to remain CEO. Shareholders filed a lawsuit over the tweet, and in February 2023, a jury found Musk and Tesla not liable. Musk has stated in interviews that he does not regret posting the tweet that triggered the SEC investigation. In 2019, Musk stated in a tweet that Tesla would build half a million cars that year. The SEC reacted by asking a court to hold him in contempt for violating the terms of the 2018 settlement agreement. A joint agreement between Musk and the SEC eventually clarified the previous agreement's details, including a list of topics about which Musk needed preclearance. In 2020, a judge blocked a lawsuit that claimed a tweet by Musk regarding Tesla's stock price ("too high imo") violated the agreement. Records released under the Freedom of Information Act (FOIA) showed that the SEC concluded Musk had subsequently violated the agreement twice by tweeting about "Tesla's solar roof production volumes and its stock price". In October 2023, the SEC sued Musk over his refusal to testify a third time in an investigation into whether he violated federal law by purchasing Twitter stock in 2022. In February 2024, Judge Laurel Beeler ruled that Musk must testify again. In January 2025, the SEC filed a lawsuit against Musk for securities violations related to his purchase of Twitter. In January 2024, Delaware judge Kathaleen McCormick ruled in a 2018 lawsuit that Musk's $55 billion pay package from Tesla be rescinded, calling the compensation granted by the company's board "an unfathomable sum" that was unfair to shareholders. The Delaware Supreme Court overturned McCormick's decision in December 2025, restoring Musk's compensation package and awarding $1 in nominal damages. Personal life Musk became a U.S. citizen in 2002. From the early 2000s until late 2020, Musk resided in California, where both Tesla and SpaceX were founded. He then relocated to Cameron County, Texas, saying that California had become "complacent" about its economic success. While hosting Saturday Night Live in 2021, Musk stated that he has Asperger syndrome (an outdated term for autism spectrum disorder). When asked about his experience growing up with Asperger's syndrome at a TED2022 conference in Vancouver, Musk stated that "the social cues were not intuitive ... I would just tend to take things very literally ...
but then that turned out to be wrong — [people were not] simply saying exactly what they mean, there's all sorts of other things that are meant, and [it] took me a while to figure that out." Musk suffers from back pain and has undergone several spine-related surgeries, including a disc replacement. In 2000, he contracted a severe case of malaria while on vacation in South Africa. Musk has stated that he uses doctor-prescribed ketamine for occasional depression and that he takes "a small amount once every other week or something like that"; since January 2024, some media outlets have reported that he takes ketamine, marijuana, LSD, ecstasy, mushrooms, cocaine and other drugs. Musk at first refused to comment on his alleged drug use, before responding that he had not tested positive for drugs and that if drugs somehow improved his productivity, "I would definitely take them!" Investigations by The New York Times reported Musk's overuse of ketamine and numerous other drugs, as well as strained family relationships and concerns from close associates troubled by his public behavior as he became more involved in political activities and government work. According to The Washington Post, President Trump described Musk as "a big-time drug addict". Through his own label, Emo G Records, Musk released a rap track, "RIP Harambe", on SoundCloud in March 2019. The following year, he released an EDM track, "Don't Doubt Ur Vibe", featuring his own lyrics and vocals. Musk plays video games, which he has stated have a "restoring effect" that helps his "mental calibration". Some games he plays include Quake, Diablo IV, Elden Ring, and Polytopia. Musk once claimed to be one of the world's top video game players but has since admitted to "account boosting", or cheating by hiring outside services to achieve top player rankings; he has justified the boosting by claiming that all top accounts do it, so he must as well to remain competitive. In 2024 and 2025, Musk criticized the video game Assassin's Creed Shadows and its creator Ubisoft for "woke" content. Musk posted to X that "DEI kills art" and singled out the inclusion of the historical figure Yasuke in the game as offensive; he also called the game "terrible". Ubisoft responded by saying that Musk's comments were "just feeding hatred" and that it was focused on producing a game, not pushing politics. Musk has fathered at least 14 children, one of whom died as an infant. The Wall Street Journal reported in 2025 that sources close to Musk suggest the "true number of Musk's children is much higher than publicly known". He had six children with his first wife, Canadian author Justine Wilson, whom he met while attending Queen's University in Ontario, Canada; they married in 2000. In 2002, their first child, Nevada Musk, died of sudden infant death syndrome at the age of 10 weeks. After his death, the couple used in vitro fertilization (IVF) to continue their family; they had twins in 2004, followed by triplets in 2006. The couple divorced in 2008 and have shared custody of their children. The elder twin came out as a trans woman and, in 2022, officially changed her name to Vivian Jenna Wilson, adopting her mother's surname because she no longer wished to be associated with Musk. Musk began dating English actress Talulah Riley in 2008. They married two years later at Dornoch Cathedral in Scotland. In 2012, the couple divorced, then remarried the following year.
After briefly filing for divorce in 2014, Musk finalized a second divorce from Riley in 2016. Musk then dated the American actress Amber Heard for several months in 2017; he had reportedly been "pursuing" her since 2012. In 2018, Musk and Canadian musician Grimes confirmed they were dating. Grimes and Musk have three children, born in 2020, 2021, and 2022.[g] Musk and Grimes originally gave their eldest child the name "X Æ A-12", which would have violated California regulations because it contained characters not in the modern English alphabet; the names registered on the birth certificate are "X" as a first name, "Æ A-Xii" as a middle name, and "Musk" as a last name. They received criticism for choosing a name perceived to be impractical and difficult to pronounce; Musk has said the intended pronunciation is "X Ash A Twelve". Their second child was born via surrogacy. Despite the ongoing surrogate pregnancy, Musk confirmed reports that the couple were "semi-separated" in September 2021; in an interview with Time in December 2021, he said he was single. In October 2023, Grimes sued Musk over parental rights and custody of X Æ A-Xii. Musk has taken X Æ A-Xii to multiple official events in Washington, D.C. during Trump's second term in office. In July 2022, The Wall Street Journal reported that Musk allegedly had an affair in 2021 with Nicole Shanahan, the wife of Google co-founder Sergey Brin, leading to their divorce the following year; Musk denied the report. Musk also had a relationship with Australian actress Natasha Bassett, who has been described as "an occasional girlfriend". In October 2024, The New York Times reported that Musk bought a Texas compound for his children and their mothers, though Musk denied having done so. Musk also has four children with Shivon Zilis, director of operations and special projects at Neuralink: twins born via IVF in 2021, a child born in 2024 via surrogacy, and a child born in 2025.[h] On February 14, 2025, Ashley St. Clair, an influencer and author, posted on X claiming to have given birth to Musk's son Romulus five months earlier, whom media outlets reported as Musk's supposed thirteenth child.[i] On February 22, 2025, it was reported that St. Clair had filed for sole custody of her five-month-old son and for Musk to be recognized as the child's father. On March 31, 2025, Musk wrote that, while he was unsure whether he was the father of St. Clair's child, he had paid St. Clair $2.5 million and would continue paying her $500,000 per year.[j] Later reporting from The Wall Street Journal indicated that $1 million of these payments was structured as a loan. In 2014, Musk and Ghislaine Maxwell appeared together in a photograph taken at an Academy Awards after-party, which Musk later described as a "photobomb". The January 2026 Epstein files contain emails between Musk and Epstein from 2012 to 2013, after Epstein's first conviction. Emails released on January 30, 2026, indicated that Epstein invited Musk to visit his private island on multiple occasions. The correspondence showed that while Epstein repeatedly encouraged Musk to attend, Musk did not visit the island. In one instance, Musk discussed the possibility of attending a party with his then-wife Talulah Riley and asked which day would be the "wildest party"; according to the emails, the visit did not take place after Epstein later cancelled the plans.[k] On Christmas Day 2012, Musk emailed Epstein asking, "Do you have any parties planned?
I’ve been working to the edge of sanity this year and so, once my kids head home after Christmas, I really want to hit the party scene in St Barts or elsewhere and let loose. The invitation is much appreciated, but a peaceful island experience is the opposite of what I’m looking for". Epstein replied that the "ratio on my island" might make Musk's wife uncomfortable, to which Musk responded, "Ratio is not a problem for Talulah". On September 11, 2013, Epstein sent an email asking Musk if he had any plans to come to New York for the opening of the United Nations General Assembly, where many "interesting people" would be coming to his house, to which Musk responded that "Flying to NY to see UN diplomats do nothing would be an unwise use of time". Epstein responded, "Do you think i am retarded. Just kidding, there is no one over 25 and all very cute." Musk has denied any close relationship with Epstein and described him as a "creep" who attempted to ingratiate himself with influential people. When Musk was asked in 2019 whether he had introduced Epstein to Mark Zuckerberg, Musk responded: "I don’t recall introducing Epstein to anyone, as I don’t know the guy well enough to do so." The released emails nonetheless showed cordial exchanges on a range of topics, including Musk's inquiries about parties on the island. The correspondence also indicated that Musk suggested hosting Epstein at SpaceX, while Epstein separately discussed plans to tour SpaceX and bring "the girls", though there is no evidence that such a visit occurred. Musk has described the release of the files as a "distraction", later accusing the second Trump administration of suppressing them to protect powerful individuals, including Trump himself.[l] Wealth Elon Musk is the wealthiest person in the world, with an estimated net worth of US$690 billion as of January 2026, according to the Bloomberg Billionaires Index, and $852 billion according to Forbes, primarily from his ownership stakes in SpaceX and Tesla. Musk was first listed on the Forbes Billionaires List in 2012; in November 2020, around 75% of his wealth derived from Tesla stock, although he has described himself as "cash poor". According to Forbes, he became the first person in the world to reach a net worth of $300 billion in 2021, followed by $400 billion in December 2024, $500 billion in October 2025, $600 billion in mid-December 2025, $700 billion later that month, and $800 billion in February 2026. In November 2025, Tesla shareholders approved a pay package for Musk worth potentially $1 trillion, which he is to receive over 10 years if he meets specific goals. Public image Although his ventures have been highly influential within their separate industries since the 2000s, Musk only became a public figure in the early 2010s. He has been described as an eccentric who makes spontaneous and impactful decisions, while also often making controversial statements, in contrast to other billionaires who prefer reclusiveness in order to protect their businesses. Musk's actions and his expressed views have made him a polarizing figure. Biographer Ashlee Vance described people's opinions of Musk as polarized due to his "part philosopher, part troll" persona on Twitter. He has drawn condemnation for using his platform to mock the self-selection of personal pronouns, while also receiving praise for bringing international attention to matters like the British survivors of grooming gangs.
Musk has been described as an American oligarch due to his extensive influence over public discourse, social media, industry, politics, and government policy. After Trump's re-election, Musk's influence and actions during the transition period and the second presidency of Donald Trump led some to call him "President Musk", the "actual president-elect", "shadow president" or "co-president". Awards for his contributions to the development of the Falcon rockets include the American Institute of Aeronautics and Astronautics George Low Transportation Award in 2008, the Fédération Aéronautique Internationale Gold Space Medal in 2010, and the Royal Aeronautical Society Gold Medal in 2012. In 2015, he received an honorary doctorate in engineering and technology from Yale University and an Institute of Electrical and Electronics Engineers Honorary Membership. Musk was elected a Fellow of the Royal Society (FRS) in 2018.[m] In 2022, Musk was elected to the National Academy of Engineering. Time has listed Musk as one of the most influential people in the world in 2010, 2013, 2018, and 2021, and selected him as its "Person of the Year" for 2021. Then Time editor-in-chief Edward Felsenthal wrote that "Person of the Year is a marker of influence, and few individuals have had more influence than Musk on life on Earth, and potentially life off Earth too." |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Solar_wind#Atmospheres] | [TOKENS: 6034] |
Contents Solar wind The solar wind is a stream of charged particles released from the Sun's outermost atmospheric layer, the corona. This plasma mostly consists of electrons, protons and alpha particles with kinetic energy between 0.5 and 10 keV. The composition of the solar wind plasma also includes a mixture of particle species found in the solar plasma: trace amounts of heavy ions and atomic nuclei of elements such as carbon, nitrogen, oxygen, neon, magnesium, silicon, sulfur, and iron. There are also rarer traces of some other nuclei and isotopes such as phosphorus, titanium, chromium, and the nickel isotopes 58Ni, 60Ni, and 62Ni. Superimposed on the solar-wind plasma is the interplanetary magnetic field. The solar wind varies in density, temperature and speed over time and over solar latitude and longitude. Its particles can escape the Sun's gravity because of their high energy, which results from the high temperature of the corona, which in turn is a result of the coronal magnetic field. The boundary separating the corona from the solar wind is called the Alfvén surface. At a distance of more than a few solar radii from the Sun, the solar wind reaches speeds of 250–750 km/s and is supersonic, meaning it moves faster than the speed of fast magnetosonic waves. The flow of the solar wind ceases to be supersonic at the termination shock. Other related phenomena include the aurora (northern and southern lights), comet tails that always point away from the Sun, and geomagnetic storms that can change the direction of magnetic field lines. History The existence of particles flowing outward from the Sun to the Earth was first suggested by British astronomer Richard C. Carrington. In 1859, Carrington and Richard Hodgson independently made the first observations of what would later be called a solar flare. This is a sudden, localised increase in brightness on the solar disc, now known to often occur in conjunction with an episodic ejection of material and magnetic flux from the Sun's atmosphere, known as a coronal mass ejection. The following day, a powerful geomagnetic storm was observed, and Carrington suspected that there might be a connection; the geomagnetic storm is now attributed to the arrival of the coronal mass ejection in near-Earth space and its subsequent interaction with the Earth's magnetosphere. Irish academic George FitzGerald later suggested that matter was being regularly accelerated away from the Sun, reaching the Earth after several days. In 1910, British astrophysicist Arthur Eddington essentially suggested the existence of the solar wind, without naming it, in a footnote to an article on Comet Morehouse. Eddington's proposition was never fully embraced, even though he had also made a similar suggestion at a Royal Institution address the previous year, in which he had postulated that the ejected material consisted of electrons, whereas in his study of Comet Morehouse he had supposed them to be ions. The idea that the ejected material consisted of both ions and electrons was first suggested by Norwegian scientist Kristian Birkeland. His geomagnetic surveys showed that auroral activity was almost uninterrupted. As these displays and other geomagnetic activity were being produced by particles from the Sun, he concluded that the Earth was being continually bombarded by "rays of electric corpuscles emitted by the Sun".
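The quoted kinetic energies map directly onto the quoted bulk speeds via the non-relativistic relation v = sqrt(2E/m). A minimal Python sketch (the proton-only assumption and the sample energies are ours, not from the article):

import math

EV = 1.602e-19      # joules per electronvolt
M_P = 1.673e-27     # proton mass, kg

for e_kev in (0.5, 1.0, 10.0):
    v = math.sqrt(2 * e_kev * 1e3 * EV / M_P)   # v = sqrt(2E/m)
    print(f"{e_kev:>4} keV proton -> {v / 1e3:5.0f} km/s")

# 0.5 keV (~310 km/s) and 1 keV (~440 km/s) bracket much of the quoted
# 250-750 km/s range; an alpha particle at the same energy moves ~2x slower.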
He proposed in 1916 that, "From a physical point of view it is most probable that solar rays are neither exclusively negative nor positive rays, but of both kinds"; in other words, the solar wind consists of both negative electrons and positive ions. Three years later, in 1919, British physicist Frederick Lindemann also suggested that the Sun ejects particles of both polarities: protons as well as electrons. By the 1930s, scientists had concluded that the temperature of the solar corona must be a million degrees Celsius because of the way it extended into space (as seen during a total solar eclipse). Later spectroscopic work confirmed this extraordinary temperature. In the mid-1950s, British mathematician Sydney Chapman calculated the properties of a gas at such a temperature and determined that, because the corona is such a superb conductor of heat, it must extend far out into space, beyond the orbit of Earth. Also in the 1950s, German astronomer Ludwig Biermann became interested in the fact that the tail of a comet always points away from the Sun, regardless of the direction in which the comet is travelling. Biermann postulated that this happens because the Sun emits a steady stream of particles that pushes the comet's tail away. German astronomer Paul Ahnert is credited (by Wilfried Schröder) as being the first to relate the solar wind to the direction of a comet's tail, based on observations of the comet Whipple–Fedke (1942g). In 1956, Biermann came to the University of Chicago, where he discussed his results with the astrophysicist Eugene Parker. Parker also discussed the solar corona with mathematician Sydney Chapman, who mentioned that "the corona is so hot that it should extend clear to the orbit of the Earth". Parker then conjectured that "the corona and solar corpuscular radiation must be the same thing":

I called it the solar wind because I felt that solar corpuscular radiation gives the wrong idea. With that term, one thinks of individual particles being shot out, which was the original picture we had. But it really is an ordinary flow of gas.

The math needed to discover the solar wind was, per Parker, just "four lines of algebra". Parker proposed that although the Sun's corona is strongly attracted by solar gravity, it is such a good conductor of heat that it is still very hot at large distances from the Sun. As solar gravity weakens with increasing distance from the Sun, the hydrodynamic effect is identical to a de Laval nozzle, inciting a transition from subsonic to supersonic flow. When Parker wrote the hydrodynamic equations for an isothermal, extended coronal atmosphere, the plasma flow velocity integrated to a closed form:

$$\left[\frac{v^{2}}{v_{m}^{2}}-\ln\left(\frac{v^{2}}{v_{m}^{2}}\right)\right]=4\ln\left(\frac{r}{a}\right)+\left(\frac{v_{\text{esc}}^{2}}{v_{m}^{2}}\right)\left(\frac{a}{r}\right)-4\ln\left(\frac{v_{\text{esc}}^{2}}{v_{m}^{2}}\right)-3+\ln 256$$

One solution to this equation was immediately recognizable as a solar wind. Parker's theory of supersonic solar wind also predicted the shape of the solar magnetic field in the outer Solar System.
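Parker's relation is transcendental in v, so in practice one solves it numerically at each radius. Note that defining a critical radius r_c = (v_esc²/4v_m²)·a collapses the right-hand side to 4 ln(r/r_c) + 4 r_c/r − 3 (since ln 256 = ln 4⁴). The sketch below uses that form; the function name, the 1 MK isothermal corona, and the choice of v_m as the thermal speed scale are all illustrative assumptions, not part of the article:

import numpy as np
from scipy.optimize import brentq

G, M_SUN = 6.674e-11, 1.989e30      # gravitational constant, solar mass (SI)
K_B, M_P = 1.381e-23, 1.673e-27     # Boltzmann constant, proton mass

T = 1.0e6                           # assumed isothermal coronal temperature, K
vm = np.sqrt(2 * K_B * T / M_P)     # thermal speed scale v_m (assumption)
rc = G * M_SUN / (2 * vm**2)        # critical (sonic) radius r_c

def parker_v(r):
    """Wind speed (m/s) at heliocentric distance r (m), transonic branch."""
    rhs = 4 * np.log(r / rc) + 4 * rc / r - 3
    f = lambda v: (v / vm)**2 - np.log((v / vm)**2) - rhs
    if r < rc:                      # subsonic root below the critical point
        return brentq(f, 1e-2 * vm, vm * (1 - 1e-9))
    return brentq(f, vm * (1 + 1e-9), 100 * vm)  # supersonic root beyond it

for r_au in (0.1, 0.5, 1.0):
    v = parker_v(r_au * 1.496e11)
    print(f"r = {r_au:4.1f} AU -> v ~ {v / 1e3:4.0f} km/s")
# yields roughly 280, 430, and 480 km/s: an accelerating, supersonic wind.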
Parker argued that a million-degree corona cannot remain static: pressure forces must drive a radially expanding flow that accelerates from subsonic near the Sun to supersonic beyond a critical point. He further noted that solar rotation winds outward-advected magnetic field lines into a spiral pattern in the ecliptic, now called the Parker spiral. His theoretical modeling was not immediately accepted by the astronomical community: when Parker submitted the results to The Astrophysical Journal in 1958, two reviewers recommended its rejection. One reviewer commented on the paper: "Well I would suggest that Parker go to the library and read up on the subject before he tries to write a paper about it, because this is utter nonsense." The editor of the journal and Parker's colleague at the University of Chicago, future Nobel prize-winner Subrahmanyan Chandrasekhar, finding no obvious errors in the paper, overruled the reviewers and published it, even though he disagreed with Parker's theory. Another University of Chicago colleague, Joseph W. Chamberlain, published a paper in 1960 showing that the plasma flow velocity equation also admitted a solution with an exponential decay in flow velocity away from the Sun. Chamberlain's subsonic solution was called the "solar breeze", and Italian plasma physicist Marco Velli later showed that "the breeze solution is unstable" to low-frequency perturbations. Parker's theoretical predictions were confirmed by satellite observations; this has been called "a unique example in astrophysics, due to its immediate and brief confirmation by observations". In January 1959, the Soviet spacecraft Luna 1 first directly observed the solar wind and measured its strength, using hemispherical ion traps. The discovery, made by Konstantin Gringauz, was verified by Luna 2, Luna 3, and the more distant measurements of Venera 1. Three years later, a similar measurement was performed by American geophysicist Marcia Neugebauer and collaborators using the Mariner 2 spacecraft. Mariner 2 data revealed two types of solar wind: a low-speed and a high-speed component. The first numerical simulation of the solar wind in the solar corona, including closed and open field lines, was performed by Pneuman and Kopp in 1971. The magnetohydrodynamic equations in steady state were solved iteratively, starting from an initial dipolar configuration. In 1990, the Ulysses probe was launched to study the solar wind from high solar latitudes. All prior observations had been made at or near the Solar System's ecliptic plane. In the late 1990s, the Ultraviolet Coronal Spectrometer (UVCS) instrument on board the SOHO spacecraft observed the acceleration region of the fast solar wind emanating from the poles of the Sun and found that the wind accelerates much faster than can be accounted for by thermodynamic expansion alone. Parker's model predicted that the wind should make the transition to supersonic flow at an altitude of about four solar radii (approx. 3,000,000 km) from the photosphere (surface); but the transition (or "sonic point") now appears to be much lower, perhaps only one solar radius (approx. 700,000 km) above the photosphere, suggesting that some additional mechanism accelerates the solar wind away from the Sun. The acceleration of the fast wind is still not understood and cannot be fully explained by Parker's theory.
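The pitch of the Parker spiral follows from simple geometry: a field line anchored to the rotating Sun and carried outward radially at wind speed v makes an angle ψ with the radial direction given by tan ψ = Ωr/v, where Ω is the Sun's sidereal rotation rate. A quick sketch under those assumptions (the function name is ours):

import math

OMEGA = 2.87e-6     # solar sidereal rotation rate, rad/s (~25.4-day period)
AU = 1.496e11       # astronomical unit, m

def spiral_angle_deg(r_m, v_kms):
    """Angle between the spiral field line and the radial direction."""
    return math.degrees(math.atan(OMEGA * r_m / (v_kms * 1e3)))

for v in (400, 750):
    print(f"v = {v} km/s -> psi ~ {spiral_angle_deg(AU, v):.0f} deg at 1 AU")
# ~47 deg for 400 km/s and ~30 deg for 750 km/s: faster wind is wound less,
# consistent with the classic ~45 deg spiral angle observed near Earth.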
However, the gravitational and electromagnetic explanation for this acceleration is detailed in an earlier paper by Hannes Alfvén, the 1970 Nobel laureate in Physics. From May 10 to May 12, 1999, NASA's Advanced Composition Explorer (ACE) and WIND spacecraft observed a 98% decrease in solar wind density. This allowed energetic electrons from the Sun to flow to Earth in narrow beams known as "strahl", which caused a highly unusual "polar rain" event, in which a visible aurora appeared over the North Pole. In addition, Earth's magnetosphere increased to between 5 and 6 times its normal size. The STEREO mission was launched in 2006 to study coronal mass ejections and the solar corona, using stereoscopy from two widely separated imaging systems. Each STEREO spacecraft carried two heliospheric imagers: highly sensitive wide-field cameras capable of imaging the solar wind itself, via Thomson scattering of sunlight off of free electrons. Movies from STEREO revealed the solar wind near the ecliptic as a large-scale turbulent flow. On December 13, 2010, Voyager 1 determined that the velocity of the solar wind, at its location 10.8 billion miles (17.4 billion kilometres) from Earth, had slowed to zero. "We have gotten to the point where the wind from the Sun, which until now has always had an outward motion, is no longer moving outward; it is only moving sideways so that it can end up going down the tail of the heliosphere, which is a comet-shaped-like object", said Voyager project scientist Edward Stone. In 2018, NASA launched the Parker Solar Probe, named in honor of American astrophysicist Eugene Parker, on a mission to study the structure and dynamics of the solar corona, in an attempt to understand the mechanisms that cause particles to be heated and accelerated as solar wind. During its seven-year mission, the probe will make twenty-four orbits of the Sun, passing further into the corona with each orbit's perihelion, ultimately passing within 0.04 astronomical units of the Sun's surface. It is the first NASA spacecraft named for a living person, and Parker, at age 91, was on hand to observe the launch. Acceleration mechanism While early models of the solar wind relied primarily on thermal energy to accelerate the material, by the 1960s it was clear that thermal acceleration alone cannot account for the high speed of the solar wind. An additional, unknown acceleration mechanism is required, and it likely relates to magnetic fields in the solar atmosphere. The Sun's corona, or extended outer layer, is a region of plasma that is heated to over a megakelvin. As a result of thermal collisions, the particles within the inner corona have a range and distribution of speeds described by a Maxwellian distribution. The mean velocity of these particles is about 145 km/s, which is well below the solar escape velocity of 618 km/s. However, a few of the particles achieve energies sufficient to reach the terminal velocity of 400 km/s, which allows them to feed the solar wind. At the same temperature, electrons, due to their much smaller mass, reach escape velocity and build up an electric field that further accelerates ions away from the Sun. The total number of particles carried away from the Sun by the solar wind is about 1.3×10³⁶ per second. Thus, the total mass loss each year is about (2–3)×10⁻¹⁴ solar masses, or about 1.3–1.9 million tonnes per second. This is equivalent to losing a mass equal to the Earth every 150 million years.
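Both quoted figures, the ~145 km/s mean thermal speed and the annual mass loss, can be checked in a few lines: the mean speed of a Maxwellian is sqrt(8kT/πm), and the mass flux follows from the particle flux under the rough simplification of a proton-dominated wind (alpha particles would raise the mean mass somewhat):

import math

K_B, M_P = 1.381e-23, 1.673e-27   # Boltzmann constant, proton mass (SI)
M_SUN, YEAR = 1.989e30, 3.156e7   # solar mass (kg), seconds per year

T = 1.0e6                                       # ~1 MK inner corona
v_mean = math.sqrt(8 * K_B * T / (math.pi * M_P))
print(f"Maxwellian mean proton speed: {v_mean / 1e3:.0f} km/s")   # ~145

flux = 1.3e36                                   # particles/s (quoted figure)
mdot = flux * M_P                               # kg/s, proton-only assumption
print(f"mass loss: {mdot / 1e9:.1f} million tonnes/s, "
      f"{mdot * YEAR / M_SUN:.1e} M_sun/yr")    # ~2.2 and ~3e-14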
However, since the Sun's formation, only about 0.01% of its initial mass has been lost through the solar wind. Other stars have much stronger stellar winds that result in significantly higher mass-loss rates. In March 2023, solar extreme-ultraviolet observations showed that small-scale magnetic reconnection could be a driver of the solar wind, with a swarm of nanoflares in the form of omnipresent jetting activity ("jetlets") producing short-lived streams of hot plasma and Alfvén waves at the base of the solar corona. This activity could also be connected to the magnetic switchback phenomenon of the solar wind. Properties and structure The solar wind is observed to exist in two fundamental states, termed the slow solar wind and the fast solar wind, though their differences extend well beyond their speeds. In near-Earth space, the slow solar wind is observed to have a velocity of 300–500 km/s, a temperature of ~100 kilokelvin and a composition that is a close match to the corona. By contrast, the fast solar wind has a typical velocity of 750 km/s, a temperature of 800 kilokelvin, and a composition that nearly matches that of the Sun's photosphere. The slow solar wind is twice as dense as, and more variable in nature than, the fast solar wind. The slow solar wind appears to originate from a region around the Sun's equatorial belt known as the "streamer belt", where coronal streamers are produced by magnetic flux open to the heliosphere draping over closed magnetic loops. The exact coronal structures involved in slow solar wind formation and the method by which the material is released are still under debate. Observations of the Sun between 1996 and 2001 showed that emission of the slow solar wind occurred at latitudes up to 30–35° during the solar minimum (the period of lowest solar activity), then expanded toward the poles as the solar cycle approached maximum. At solar maximum, the poles were also emitting a slow solar wind. The fast solar wind originates from coronal holes, which are funnel-like regions of open field lines in the Sun's magnetic field. Such open lines are particularly prevalent around the Sun's magnetic poles. The plasma source is small magnetic fields created by convection cells in the solar atmosphere. These fields confine the plasma and transport it into the narrow necks of the coronal funnels, which are located only 20,000 km above the photosphere. The plasma is released into the funnel when these magnetic field lines reconnect. Near the Earth's orbit at 1 astronomical unit (AU), the plasma flows at speeds ranging from 250 to 750 km/s, with a density ranging between 3 and 10 particles per cubic centimeter and a temperature ranging from 10⁴ to 10⁶ kelvin. On average, the plasma density decreases with the square of the distance from the Sun, while the velocity decreases and flattens out at 1 AU. Voyager 1 and Voyager 2 reported plasma density n between 0.001 and 0.005 particles/cm³ at distances of 80 to 120 AU, increasing rapidly beyond 120 AU at the heliopause to between 0.05 and 0.2 particles/cm³. At 1 AU, the wind exerts a ram pressure typically in the range of 1–6 nPa ((1–6)×10⁻⁹ N/m²), although it can readily vary outside that range. The ram pressure is a function of wind speed and density: P = m_p · n · V² ≈ 1.6726×10⁻⁶ · n · V², where m_p is the proton mass, pressure P is in nPa (nanopascals), n is the density in particles/cm³ and V is the speed in km/s of the solar wind.
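In those units the formula reduces to a one-liner; the sample densities and speeds below are illustrative values from the ranges quoted above:

def ram_pressure_npa(n_cm3, v_kms):
    """Solar wind ram pressure in nPa: P = 1.6726e-6 * n * V^2."""
    return 1.6726e-6 * n_cm3 * v_kms**2

print(ram_pressure_npa(5, 400))   # typical slow wind -> ~1.3 nPa
print(ram_pressure_npa(3, 750))   # typical fast wind -> ~2.8 nPa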
Both the fast and slow solar wind can be interrupted by large, fast-moving bursts of plasma called coronal mass ejections, or CMEs. CMEs are caused by a release of magnetic energy at the Sun. CMEs are often called "solar storms" or "space storms" in the popular media. They are sometimes, but not always, associated with solar flares, which are another manifestation of magnetic energy release at the Sun. CMEs cause shock waves in the thin plasma of the heliosphere, launching electromagnetic waves and accelerating particles (mostly protons and electrons) to form showers of ionizing radiation that precede the CME. When a CME impacts the Earth's magnetosphere, it temporarily deforms the Earth's magnetic field, changing the direction of compass needles and inducing large electrical ground currents in Earth itself; this is called a geomagnetic storm and it is a global phenomenon. CME impacts can induce magnetic reconnection in Earth's magnetotail (the midnight side of the magnetosphere); this launches protons and electrons downward toward Earth's atmosphere, where they form the aurora. CMEs are not the only cause of space weather. Different patches on the Sun are known to give rise to slightly different speeds and densities of wind depending on local conditions. In isolation, each of these different wind streams would form a spiral with a slightly different angle, with fast-moving streams moving out more directly and slow-moving streams wrapping more around the Sun. Fast-moving streams tend to overtake slower streams that originate westward of them on the Sun, forming turbulent co-rotating interaction regions that give rise to wave motions and accelerated particles, and that affect Earth's magnetosphere in the same way as, but more gently than, CMEs. CMEs have a complex internal structure, with a highly turbulent region of hot and compressed plasma (known as the sheath) preceding the arrival of a relatively cold and strongly magnetized plasma region (known as the magnetic cloud or ejecta). The sheath and the ejecta have very different impacts on the Earth's magnetosphere and on various space weather phenomena, such as the behavior of the Van Allen radiation belts. Magnetic switchbacks are sudden reversals in the magnetic field of the solar wind. They can also be described as traveling disturbances in the solar wind that cause the magnetic field to bend back on itself. They were first observed by the NASA–ESA mission Ulysses, the first spacecraft to fly over the Sun's poles. The Parker Solar Probe first observed switchbacks in 2018. Solar System effects Over the Sun's lifetime, the interaction of its surface layers with the escaping solar wind has significantly decreased its surface rotation rate. The wind is considered responsible for comets' tails, along with the Sun's radiation. The solar wind contributes to fluctuations in celestial radio waves observed on the Earth, through an effect called interplanetary scintillation. Where the solar wind intersects with a planet that has a well-developed magnetic field (such as Earth, Jupiter or Saturn), the particles are deflected by the Lorentz force. This region, known as the magnetosphere, causes the particles to travel around the planet rather than bombarding the atmosphere or surface. The magnetosphere is roughly shaped like a hemisphere on the side facing the Sun, then is drawn out in a long wake on the opposite side.
The boundary of this region is called the magnetopause, and some of the particles are able to penetrate the magnetosphere through this region by partial reconnection of the magnetic field lines. The solar wind is responsible for the overall shape of Earth's magnetosphere. Fluctuations in its speed, density, direction, and entrained magnetic field strongly affect Earth's local space environment. For example, the levels of ionizing radiation and radio interference can vary by factors of hundreds to thousands; and the shape and location of the magnetopause and the bow shock wave upstream of it can change by several Earth radii, exposing geosynchronous satellites to the direct solar wind. These phenomena are collectively called space weather. A study based on the European Space Agency's Cluster mission proposed that it is easier for the solar wind to infiltrate the magnetosphere than previously believed. The scientists directly observed certain waves in the solar wind that were not expected, and the study showed that these waves enable incoming charged particles of the solar wind to breach the magnetopause, suggesting that the magnetic bubble acts more as a filter than a continuous barrier. The discovery relied on the distinctive arrangement of the four identical Cluster spacecraft, which fly in a controlled configuration through near-Earth space. As they sweep from the magnetosphere into interplanetary space and back again, the fleet provides exceptional three-dimensional insight into the phenomena that connect the Sun to Earth. The research characterised variations in the formation of Kelvin–Helmholtz waves (which occur at the interface of two fluids) at the magnetopause, largely influenced by the orientation of the interplanetary magnetic field (IMF) as well as by differences in the thickness and numerous other characteristics of the boundary layer. It was reportedly the first occasion on which Kelvin–Helmholtz waves had been observed at the magnetopause at high latitude under a downward orientation of the IMF. These waves are being seen in unforeseen places under solar wind conditions that were formerly believed to be unfavourable for their generation. These discoveries show how Earth's magnetosphere can be penetrated by solar particles under specific IMF circumstances. The findings are also relevant to studies of magnetospheric processes around other planetary bodies. The study suggests that Kelvin–Helmholtz waves can be a somewhat common, and possibly constant, mechanism for the entry of the solar wind into terrestrial magnetospheres under various IMF orientations. The solar wind also affects incoming cosmic rays interacting with planetary atmospheres. Moreover, planets with a weak or non-existent magnetosphere are subject to atmospheric stripping by the solar wind. Venus, the nearest and most similar planet to Earth, has an atmosphere 100 times denser than Earth's, with little or no geomagnetic field. Space probes discovered a comet-like tail that extends to Earth's orbit. Earth itself is largely protected from the solar wind by its magnetic field, which deflects most of the charged particles; however, some of the charged particles are trapped in the Van Allen radiation belt. A smaller number of particles from the solar wind manage to travel, as though on an electromagnetic energy transmission line, to the Earth's upper atmosphere and ionosphere in the auroral zones.
The only time the solar wind is observable from the Earth is when it is strong enough to produce phenomena such as the aurora and geomagnetic storms. Bright auroras strongly heat the ionosphere, causing its plasma to expand into the magnetosphere, increasing the size of the plasma geosphere and injecting atmospheric matter into the solar wind. Geomagnetic storms result when the pressure of plasmas contained inside the magnetosphere is sufficiently large to inflate and thereby distort the geomagnetic field. Although Mars is larger than Mercury and four times farther from the Sun, it is thought that the solar wind has stripped away up to a third of its original atmosphere, leaving a layer 1/100 as dense as the Earth's. It is believed the mechanism for this atmospheric stripping is gas caught in bubbles of magnetic field, which are ripped off by the solar wind. In 2015, the NASA Mars Atmosphere and Volatile Evolution (MAVEN) mission measured the rate of atmospheric stripping caused by the magnetic field carried by the solar wind as it flows past Mars, which generates an electric field, much as a turbine on Earth can be used to generate electricity. This electric field accelerates electrically charged gas atoms, called ions, in Mars's upper atmosphere and shoots them into space. The MAVEN mission measured the rate of atmospheric stripping at about 100 grams (≈1/4 lb) per second. Mercury, the nearest planet to the Sun, bears the full brunt of the solar wind, and since its atmosphere is vestigial and transient, its surface is bathed in radiation. Mercury has an intrinsic magnetic field, so under normal solar wind conditions the solar wind cannot penetrate its magnetosphere, and particles only reach the surface in the cusp regions. During coronal mass ejections, however, the magnetopause may get pressed into the surface of the planet, and under these conditions the solar wind may interact freely with the planetary surface. The Earth's Moon has no atmosphere or intrinsic magnetic field, and consequently its surface is bombarded with the full solar wind. The Project Apollo missions deployed passive aluminum collectors in an attempt to sample the solar wind, and lunar soil returned for study confirmed that the lunar regolith is enriched in atomic nuclei deposited from the solar wind. These elements may prove useful resources for future lunar expeditions. Limits The Alfvén surface is the boundary separating the corona from the solar wind, defined as the surface where the coronal plasma's Alfvén speed and the large-scale solar wind speed are equal. Researchers were long unsure exactly where the Alfvén critical surface of the Sun lay. Based on remote images of the corona, estimates had put it somewhere between 10 and 20 solar radii from the surface of the Sun. On April 28, 2021, during its eighth flyby of the Sun, NASA's Parker Solar Probe encountered the specific magnetic and particle conditions at 18.8 solar radii that indicated that it had penetrated the Alfvén surface. The solar wind "blows a bubble" in the interstellar medium (the rarefied hydrogen and helium gas that permeates the galaxy). The point where the solar wind's strength is no longer great enough to push back the interstellar medium is known as the heliopause and is often considered to be the outer border of the Solar System. The distance to the heliopause is not precisely known and probably depends on the current velocity of the solar wind and the local density of the interstellar medium, but it is far outside Pluto's orbit.
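The defining condition can be made concrete with the ideal-MHD Alfvén speed v_A = B / sqrt(μ₀ρ). A sketch with typical 1 AU values (B ≈ 5 nT and n ≈ 5 protons/cm³ are order-of-magnitude assumptions, not figures from the article) shows why the Alfvén surface must lie far inside Earth's orbit:

import math

MU0 = 4 * math.pi * 1e-7    # vacuum permeability, H/m
M_P = 1.673e-27             # proton mass, kg

def alfven_speed_kms(b_tesla, n_cm3):
    """Alfven speed v_A = B / sqrt(mu0 * rho), returned in km/s."""
    rho = n_cm3 * 1e6 * M_P                       # mass density, kg/m^3
    return b_tesla / math.sqrt(MU0 * rho) / 1e3

print(alfven_speed_kms(5e-9, 5.0))
# ~50 km/s, far below the 250-750 km/s wind speed: the flow at 1 AU is
# already strongly super-Alfvenic, so the v_A = v crossing sits much closer
# to the Sun, consistent with Parker Solar Probe's 18.8 solar radii.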
Scientists hope to gain perspective on the heliopause from data acquired through the Interstellar Boundary Explorer (IBEX) mission, launched in October 2008. The heliopause is noted as one of the ways of defining the extent of the Solar System, along with the Kuiper Belt and the radius at which the Sun's gravitational influence is matched by other stars. The maximum extent of that influence has been estimated at between 50,000 astronomical units (7,500 billion kilometres; 0.79 light-years) and 2 light-years (130,000 au), compared to the heliopause (the outer boundary of the heliosphere), which has been detected at about 120 au (18,000 million km) by the Voyager 1 spacecraft. The Voyager 2 spacecraft crossed the termination shock more than five times between August 30 and December 10, 2007. Voyager 2 crossed the shock about a billion kilometers closer to the Sun than the 13.5 billion km (90 au) distance where Voyager 1 came upon the termination shock. The spacecraft moved outward through the termination shock into the heliosheath and onward toward the interstellar medium.
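The distance figures above mix astronomical units, kilometres, and light-years. A short sketch using standard conversion constants confirms they are mutually consistent:

```python
AU_KM = 149_597_870.7        # kilometres per astronomical unit (IAU value)
LY_KM = 9.4607e12            # kilometres per light-year
AU_PER_LY = LY_KM / AU_KM    # ~63,241 au per light-year

print(f"50,000 au = {50_000 * AU_KM / 1e9:,.0f} billion km "
      f"= {50_000 / AU_PER_LY:.2f} ly")        # ~7,480 billion km, 0.79 ly
print(f"2 ly = {2 * AU_PER_LY:,.0f} au")       # ~126,500 au
print(f"120 au = {120 * AU_KM / 1e6:,.0f} million km")  # ~17,952 million km
```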
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:PlayStation_4_games] | [TOKENS: 109] |
Category:PlayStation 4 games This category includes articles on Sony PlayStation 4 games. Subcategories This category has the following 13 subcategories, out of 13 total. Pages in category "PlayStation 4 games" The following 200 pages are in this category, out of approximately 3,419 total.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Executive_Office_of_the_President_of_the_United_States] | [TOKENS: 1371] |
Executive Office of the President of the United States The Executive Office of the President of the United States (EOP) comprises the offices and agencies that support the work of the president at the center of the executive branch of the United States federal government. The office consists of several offices and agencies, such as the White House Office (the staff working closest with the president, including West Wing staff), the National Security Council, Homeland Security Council, Office of Management and Budget, Council of Economic Advisers, and others. The Eisenhower Executive Office Building houses most staff. The office is also referred to as a "permanent government", since many policy programs, and the people who are charged with implementing them, continue between presidential administrations. The civil servants who work in the Executive Office of the President are regarded as nonpartisan and politically neutral, so they are capable of providing objective and impartial advice. With increasing technological and global complexity, the White House staff has grown to include an array of policy experts responsible for managing various federal governmental functions and policy areas. As of 2015, it included approximately 1,800 positions, most of which did not require confirmation from the U.S. Senate. The office is overseen by the White House chief of staff. Since January 20, 2025, that position has been held by Susie Wiles, who was appointed by President Donald Trump. She is the first woman to hold the title. History In 1937, the Brownlow Committee, which was a presidentially commissioned panel of political science and public administration experts, recommended sweeping changes to the executive branch of the U.S. federal government, including the creation of the Executive Office of the President. Based on these recommendations, President Franklin D. Roosevelt in 1939 lobbied Congress to approve the Reorganization Act of 1939. The Act led to Reorganization Plan No. 1, which created the office; the new office reported directly to the president. The office encompassed two subunits at its outset, the White House Office (WHO) and the Bureau of the Budget, the predecessor to today's Office of Management and Budget, which was created in 1921 and originally located in the Treasury Department. It absorbed most of the functions of the National Emergency Council. Initially, the new staff system appeared more ambitious on paper than in practice; the increase in the size of the staff was quite modest at the start. However, it laid the groundwork for the large and organizationally complex White House staff that emerged during the presidencies of Roosevelt's successors. Roosevelt's efforts are also notable in contrast to those of his predecessors in office. During the 19th century, presidents had few staff resources. Thomas Jefferson had one messenger and one secretary at his disposal, both of whose salaries were paid by the president personally. It was not until 1857 that Congress appropriated money ($2,500) for the hiring of one clerk. By Ulysses S. Grant's presidency (1869–1877), the staff had grown to three. By 1900, the White House staff included one "secretary to the president" (then the title of the president's chief aide), two assistant secretaries, two executive clerks, a stenographer, and seven other office personnel. Under Warren G. Harding, there were thirty-one staff, although most were in clerical positions.
During Herbert Hoover's presidency, two additional secretaries to the president were added by Congress, one of whom Hoover designated as his press secretary. From 1933 to 1939, as he greatly expanded the scope of the federal government's policies and powers in response to the Great Depression, Roosevelt relied on his "brain trust" of top advisers, who were often appointed to vacant positions in agencies and departments, from which they drew their salaries since the White House lacked statutory or budgetary authority to create new staff positions. After World War II, in particular during the Eisenhower presidency, the staff was expanded and reorganized. Eisenhower, a former U.S. Army general, had been Supreme Allied Commander during the war and reorganized the Executive Office to suit his leadership style. By 2009, the staff had grown much larger. Estimates indicate some 3,000 to 4,000 persons serve in office staff positions with policy-making responsibilities, with a budget of $300 to $400 million (George W. Bush's budget request for Fiscal Year 2005 was for $341 million in support of 1,850 personnel). Some observers have noted a problem of control for the president due to the increase in staff and departments, making coordination and cooperation between the various departments of the Executive Office more difficult. Organization The president had the power to reorganize the Executive Office under the 1949 Reorganization Act, which gave the president considerable discretion, until 1983, when the authority was not renewed after President Reagan's administration allegedly encountered "disloyalty and obstruction". The chief of staff is the head of the Executive Office and can therefore ultimately decide what the president needs to deal with personally and what can be dealt with by other staff. Senior staff within the Executive Office of the President have the title Assistant to the President, second-level staff have the title Deputy Assistant to the President, and third-level staff have the title Special Assistant to the President. The core White House staff appointments, and most Executive Office officials generally, are not required to be confirmed by the U.S. Senate, although there are a handful of exceptions (e.g., the director of the Office of Management and Budget, the chair of the Council of Economic Advisers, and the United States Trade Representative). The White House Office (including its various offices) is a sub-unit of the Executive Office of the President. Congress Congress as well as the president has some control over the Executive Office of the President. Some of this authority stems from its appropriation powers given by the Constitution, such as the "power of the purse", which affects the Office of Management and Budget and the funding of the rest of the federal departments and agencies. Congress also has the right to investigate the operation of the Executive Office, normally holding hearings bringing forward individual personnel to testify before a congressional committee. Because congressional legislation sometimes starts in broad terms, the Executive Office often helps with legislation by filling in specific points understood and written by experts.
Budget history [Table: budget of the Executive Office for the years 2008–2017, and actual outlays for the years 1993–2007.]
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_ref-:0_80-0] | [TOKENS: 5247] |
Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine dynamics of networks. For instance, social network analysis has been used in studying the spread of misinformation on social media platforms or analyzing the influence of key figures in social networks. The study of social networks is an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and beliefs (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").
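The vocabulary above (actors joined by dyadic ties, local and global patterns, influential entities) maps directly onto graph software. A minimal sketch using the Python networkx library, with invented actors and ties; degree centrality stands in for the many centrality measures used in practice:

```python
import networkx as nx

# Invented actors and dyadic ties.
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Cara"), ("Ana", "Dev"),
    ("Ben", "Cara"), ("Dev", "Eli"), ("Eli", "Fay"),
])

# A local pattern: how interconnected each actor's contacts are.
print(nx.clustering(G))  # Ana: 1/3, one tie among her three contacts

# A global pattern: locate the most influential entity by degree centrality.
centrality = nx.degree_centrality(G)
hub = max(centrality, key=centrality.get)
print(f"Most central actor: {hub} ({centrality[hub]:.2f})")  # Ana (0.60)
```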
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field came in the 1930s from several groups in psychology, anthropology, and mathematics working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis was advanced by the work of sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, who developed and applied new models and methods to emerging data available about online social networks, as well as "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics and participant recruitment and payment also limit the scope of a social network analysis.
The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context. Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality. Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society has been modeled by balancing triads. The study is carried forward with the theory of signed graphs. Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis or other genealogical studies of relationships between individuals. Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior. In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks. Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups. Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s.
This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior. Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic in a scale-free network is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási model of network evolution is an example of a scale-free network. Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population. Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level". It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping). Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical system and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP, see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are Graph theory, Balance theory, Social comparison theory, and more recently, the Social identity approach.
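Of the frameworks just listed, balance theory is the easiest to make concrete: a triad is balanced exactly when the product of its three tie signs is positive. A toy Python sketch with invented signed ties, echoing the love-triangle example from the levels-of-analysis discussion above:

```python
# Signed ties among three hypothetical actors: +1 friendship, -1 rivalry.
# The "rivalrous love triangle": two rivals (A, B) who both love C.
ties = {("A", "B"): -1, ("A", "C"): +1, ("B", "C"): +1}

def is_balanced(signs: dict) -> bool:
    """Heider: a triad is balanced iff the product of its tie signs is positive."""
    product = 1
    for sign in signs.values():
        product *= sign
    return product > 0

print(is_balanced(ties))  # False: an unbalanced triad, under pressure to change
# Flipping A-B to +1 (the rivals reconcile) makes the product positive: balanced.
```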
Few complete theories have been produced from social network analysis. Two that have are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, as cliques have a tendency to have more homogeneous opinions as well as share many common traits. This homophilic tendency was the reason for the members of the cliques to be attracted together in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique will have to look beyond the clique to its other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are in some degree additive, rather than overlapping. An ideal network structure has a vine and cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because he is more likely to hear of job openings and opportunities if his network spans a wide range of contacts in different industries/sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction. Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for individual accomplishments of the artist. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods.
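Burt's structural-hole measures described above are implemented in networkx as constraint and effective size. A sketch on an invented network in which a single broker spans the hole between two dense cliques:

```python
import networkx as nx

# Two dense cliques joined only through the broker "B" (all ties invented).
G = nx.Graph()
G.add_edges_from([("p1", "p2"), ("p2", "p3"), ("p1", "p3"),   # cluster 1
                  ("q1", "q2"), ("q2", "q3"), ("q1", "q3"),   # cluster 2
                  ("B", "p1"), ("B", "q1")])                  # bridging ties

# Burt's constraint: lower values = more structural holes spanned.
constraint = nx.constraint(G)
print(f"Broker constraint:        {constraint['B']:.2f}")   # ~0.50
print(f"Clique-member constraint: {constraint['p2']:.2f}")  # ~0.89, more redundant

# Effective size: the number of non-redundant contacts per actor.
print(nx.effective_size(G))
```

The broker's lower constraint score is the quantitative counterpart of the brokerage advantage the passage describes.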
Community development studies, today, also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduras villages, Indian slums, or in the lab. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.
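Respondent-driven sampling, mentioned above, recruits through respondents' own ties. The toy snowball-sampling sketch below (the synthetic contact network and all parameters are invented) shows the mechanics of wave-by-wave recruitment; real respondent-driven sampling additionally reweights estimates to correct for the fact that recruitment followed network ties:

```python
import random
import networkx as nx

def snowball_sample(G, seeds, coupons=3, waves=2, rng=None):
    """Each respondent recruits up to `coupons` not-yet-sampled contacts per wave."""
    rng = rng or random.Random(0)
    sampled, frontier = set(seeds), list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            contacts = [n for n in G.neighbors(person) if n not in sampled]
            for recruit in rng.sample(contacts, min(coupons, len(contacts))):
                sampled.add(recruit)
                next_frontier.append(recruit)
        frontier = next_frontier
    return sampled

G = nx.watts_strogatz_graph(200, k=6, p=0.1, seed=1)  # stand-in population
print(len(snowball_sample(G, seeds=[0, 50], rng=random.Random(1))))
```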
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker, to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory and the relationships between different actors in the literary network, e.g. writers, critics, publishers, literary histories, etc., can be mapped using visualization from SNA. Organizational research studies formal and informal organization relationships, organizational communication, economics, economic sociology, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behaviour. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence to achieve positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity. Another research cluster focuses on brand-image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand-image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behaviour and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits of being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged.
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer mediated communication. In addition, the sheer size and the volatile nature of social media has given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Based on the pattern of homophily, ties between people are most likely to form between nodes that are most similar to each other; likewise, under neighbourhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used to simulate the process of homophily, but they can also serve as a measure of the level of exposure of different groups to each other within a current social network of individuals in a certain area.
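Measuring homophily or segregation of the kind just described often reduces to computing attribute assortativity over group labels (+1 means ties occur only within groups; 0 means mixing as if at random). A sketch with an invented two-group network, using networkx:

```python
import networkx as nx

# Invented two-group network: mostly within-group ties, one cross-group tie.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (1, 3),      # "north" cluster
                  (4, 5), (5, 6), (4, 6),      # "south" cluster
                  (3, 4)])                     # single cross-group tie
for n in (1, 2, 3):
    G.nodes[n]["group"] = "north"
for n in (4, 5, 6):
    G.nodes[n]["group"] = "south"

# +1 = ties only within groups (full segregation); 0 = random mixing.
r = nx.attribute_assortativity_coefficient(G, "group")
print(f"Attribute assortativity: {r:.2f}")  # ~0.71: strong homophily
```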
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/U.S._Congress] | [TOKENS: 12298] |
United States Congress The United States Congress is the legislative branch of the federal government of the United States. It is a bicameral legislature, including a lower body, the U.S. House of Representatives, and an upper body, the U.S. Senate. They both meet in the United States Capitol in Washington, D.C. Members of Congress are chosen through direct election,[b] though vacancies in the Senate may be filled by a governor's appointment. Congress has a total of 535 voting members, a figure which includes 100 senators and 435 representatives; the House of Representatives has 6 additional non-voting members. The vice president of the United States, as president of the Senate, has a vote in the Senate only when there is a tie. Congress[c] convenes for a two-year term (a Congress), commencing every other January. Each Congress is usually split into two sessions, one for each year. Elections are held every even-numbered year on Election Day. The members of the House of Representatives are elected for the two-year term of a Congress. The Reapportionment Act of 1929 established that there be 435 representatives, and the Uniform Congressional District Act requires that they be elected from single-member constituencies or districts. It is also required that the congressional districts be apportioned among states by population every ten years using the U.S. census results, provided that each state has at least one congressional representative. Each senator is elected at-large in their state for a six-year term, with terms staggered, so every two years approximately one-third of the Senate is up for election. Each state, regardless of population or size, has two senators, so currently, there are 100 senators for the 50 states. Article One of the Constitution requires that members of Congress be at least 25 years old for the House and at least 30 years old for the Senate, be a U.S. citizen for seven years for the House and nine years for the Senate, and be an inhabitant of the state which they represent. Members in both chambers may stand for re-election an unlimited number of times. Congress was created by the Constitution's First Article and first met in 1789, replacing the Congress of the Confederation in its legislative function. Although not legally mandated, in practice members of Congress since the late 19th century are typically affiliated with one of the two major parties, the Democratic Party or the Republican Party, and only rarely with a third party or independents affiliated with no party. Members can also switch parties at any time, though this is uncommon. Overview Article One of the United States Constitution states, "All legislative Powers herein granted shall be vested in a Congress of the United States, which shall consist of a Senate and House of Representatives." The House and Senate are equal partners in the legislative process – legislation cannot be enacted without the consent of both chambers. The Constitution grants each chamber some unique powers. The Senate ratifies treaties and approves presidential appointments while the House initiates revenue-raising bills.[citation needed] The House initiates and decides impeachment while the Senate votes on conviction and removal of office for impeachment cases. A two-thirds vote of the Senate is required before an impeached person can be removed from office.
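The apportionment rule summarized above (435 seats, at least one per state, recalculated from each census) is carried out in current law by the method of equal proportions (Huntington–Hill), a detail the passage does not spell out. A minimal Python sketch with invented state populations:

```python
import heapq
from math import sqrt

def apportion(populations: dict, seats: int = 435) -> dict:
    """Method of equal proportions (Huntington-Hill). Every state starts with
    one seat; each remaining seat goes to the state with the highest priority
    value pop / sqrt(n * (n + 1)), where n is its current seat count."""
    allocation = {state: 1 for state in populations}
    heap = [(-pop / sqrt(2), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):
        _, state = heapq.heappop(heap)
        allocation[state] += 1
        n = allocation[state]
        heapq.heappush(heap, (-populations[state] / sqrt(n * (n + 1)), state))
    return allocation

# Invented populations, 10 seats among 3 states:
print(apportion({"A": 6_000_000, "B": 3_000_000, "C": 1_000_000}, seats=10))
# -> {'A': 6, 'B': 3, 'C': 1}
```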
The term Congress can also refer to a particular meeting of the legislature. A Congress covers two years; the current one, the 119th Congress, began on January 3, 2025, and will end on January 3, 2027. Since the adoption of the Twentieth Amendment to the United States Constitution, the Congress has started and ended at noon on the third day of January of every odd-numbered year. Members of the Senate are referred to as senators, while members of the House of Representatives are commonly referred to as representatives, congressmen, or congresswomen.[citation needed] Scholar and representative Lee H. Hamilton asserted that the "historic mission of Congress has been to maintain freedom" and insisted it was a "driving force in American government" and a "remarkably resilient institution". Congress is the "heart and soul of our democracy", according to this view, even though legislators rarely achieve the prestige or name recognition of presidents or Supreme Court justices; one wrote that "legislators remain ghosts in America's historical imagination." One analyst argues that it is not a solely reactive institution but has played an active role in shaping government policy and is extraordinarily sensitive to public pressure. Several academics described Congress: Congress reflects us in all our strengths and all our weaknesses. It reflects our regional idiosyncrasies, our ethnic, religious, and racial diversity, our multitude of professions, and our shadings of opinion on everything from the value of war to the war over values. Congress is the government's most representative body ... Congress is essentially charged with reconciling our many points of view on the great public policy issues of the day. Congress is constantly changing and is constantly in flux. In recent times, the American South and West have gained House seats according to demographic changes recorded by the census, and Congress has come to include more women and minorities. While power balances among the different parts of government continue to change, the internal structure of Congress is important to understand along with its interactions with so-called intermediary institutions such as political parties, civic associations, interest groups, and the mass media. The Congress of the United States serves two distinct purposes that overlap: local representation to the federal government of a congressional district by representatives and a state's at-large representation to the federal government by senators.[citation needed] Most incumbents seek re-election, and their historical likelihood of winning subsequent elections exceeds 90 percent. The historical records of the House of Representatives and the Senate are maintained by the Center for Legislative Archives, which is a part of the National Archives and Records Administration. Congress is directly responsible for the governing of the District of Columbia, the current seat of the federal government.[citation needed] History The First Continental Congress was a gathering of representatives from twelve of the Thirteen Colonies. On July 4, 1776, the Second Continental Congress adopted the Declaration of Independence, referring to the new nation as the "United States of America". The Articles of Confederation in 1781 created the Congress of the Confederation, a unicameral body with equal representation among the states in which each state had a veto over most decisions.
Under the Articles, Congress had executive but not legislative authority; the federal judiciary was confined to admiralty; and the national government lacked authority to collect taxes, regulate commerce, or enforce laws. Government powerlessness led to the Convention of 1787, which proposed a revised constitution with a two-chamber or bicameral Congress. Smaller states argued for equal representation for each state. The two-chamber structure had functioned well in state governments. A compromise plan, the Connecticut Compromise, was adopted with representatives chosen by population (benefiting larger states) and exactly two senators chosen by state governments (benefiting smaller states). The ratified constitution created a federal structure with two overlapping power centers so that each citizen as an individual is subject to the powers of state government and national government. To protect against abuse of power, each branch of government – executive, legislative, and judicial – had a separate sphere of authority and could check other branches according to the principle of the separation of powers. Furthermore, there were checks and balances within the legislature since there were two separate chambers. The new government became active in 1789. Political scientist Julian E. Zelizer suggested there were four main congressional eras, with considerable overlap, and included the formative era (1780s–1820s), the partisan era (1830s–1900s), the committee era (1910s–1960s), and the contemporary era (1970–present). Federalists and anti-federalists jostled for power in the early years as political parties became pronounced. With the passage of the Constitution and the Bill of Rights, the anti-federalist movement was exhausted. Some activists joined the Anti-Administration Party that James Madison and Thomas Jefferson were forming about 1790–1791 to oppose policies of Treasury Secretary Alexander Hamilton; it soon became the Democratic-Republican Party or the Jeffersonian Republican Party and thus began the era of the First Party System.[citation needed] In 1800, Thomas Jefferson's election to the presidency marked a peaceful transition of power between the parties. John Marshall, 4th chief justice of the Supreme Court, empowered the courts by establishing the principle of judicial review in the landmark case Marbury v. Madison in 1803, effectively giving the Supreme Court a power to nullify congressional legislation. The Civil War, which lasted from 1861 to 1865, resolved the slavery issue and unified the nation under federal authority but weakened the power of states' rights. The Gilded Age (1877–1901) was marked by Republican dominance of Congress. During this time, lobbying activity became more intense, particularly during the administration of President Ulysses S. Grant, in which influential lobbies advocated for railroad subsidies and tariffs on wool. Immigration and high birth rates swelled the ranks of citizens and the nation grew at a rapid pace. The Progressive Era was characterized by strong party leadership in both houses of Congress and calls for reform; sometimes reformers said lobbyists corrupted politics. The position of Speaker of the House became extremely powerful under leaders such as Thomas Reed in 1890 and Joseph Gurney Cannon.[citation needed] By the beginning of the 20th century, party structures and leadership emerged as key organizers of Senate proceedings. A system of seniority, in which long-time members of Congress gained more and more power, encouraged politicians of both parties to seek long terms.
Committee chairmen remained influential in both houses until the reforms of the 1970s. Important structural changes included the direct popular election of senators according to the Seventeenth Amendment, ratified on April 8, 1913. Supreme Court decisions based on the Constitution's commerce clause expanded congressional power to regulate the economy. One effect of popular election of senators was to reduce the difference between the House and Senate in terms of their link to the electorate. Lame duck reforms according to the Twentieth Amendment reduced the power of defeated and retiring members of Congress to wield influence despite their lack of accountability. The Great Depression ushered in President Franklin Roosevelt and strong control by Democrats and historic New Deal policies. Roosevelt's election in 1932 marked a shift in government power towards the executive branch. Numerous New Deal initiatives came from the White House rather than being initiated by Congress. President Roosevelt pushed his agenda in Congress by detailing Executive Branch staff to friendly Senate committees, a practice that ended with the Legislative Reorganization Act of 1946. The Democratic Party controlled both houses of Congress for many years. During this time, Republicans and conservative southern Democrats formed the Conservative Coalition. Democrats maintained control of Congress during World War II. Congress grappled with efficiency in the postwar era, in part by reducing the number of standing congressional committees. Southern Democrats became a powerful force in many influential committees, although political power alternated between Republicans and Democrats during these years. More complex issues required greater specialization and expertise, such as space flight and atomic energy policy. Senator Joseph McCarthy exploited the fear of communism during the Second Red Scare and conducted televised hearings. In 1960, Democratic candidate John F. Kennedy narrowly won the presidency and power shifted again to the Democrats, who dominated both chambers of Congress from 1961 to 1980, and retained a consistent majority in the House from 1955 to 1994. Congress enacted Johnson's Great Society program to fight poverty and hunger. The Watergate Scandal had the powerful effect of waking up a somewhat dormant Congress, which investigated presidential wrongdoing and coverups; the scandal "substantially reshaped" relations between the branches of government, suggested political scientist Bruce J. Schulman. Partisanship returned, particularly after 1994; one analyst attributes partisan infighting to slim congressional majorities which discouraged friendly social gatherings in meeting rooms such as the Board of Education. Congress began reasserting its authority. Lobbying became a big factor despite the 1971 Federal Election Campaign Act. Political action committees or PACs could make substantive donations to congressional candidates via such means as soft money contributions. While soft money funds were not given to specific campaigns for candidates, the money often benefited candidates substantially in an indirect way and helped reelect candidates. Reforms such as the 2002 Bipartisan Campaign Reform Act limited campaign donations but did not limit soft money contributions. One source suggests post-Watergate laws amended in 1974 meant to reduce the "influence of wealthy contributors and end payoffs" instead "legitimized PACs" since they "enabled individuals to band together in support of candidates".
From 1974 to 1984, PACs grew from 608 to 3,803, and donations leaped from $12.5 million to $120 million, along with concern over PAC influence in Congress. In 2009, there were 4,600 business, labor and special-interest PACs, including ones for lawyers, electricians, and real estate brokers. From 2007 to 2008, 175 members of Congress received "half or more of their campaign cash" from PACs. From 1970 to 2009, the House expanded delegates, along with their powers and privileges representing U.S. citizens in non-state areas, beginning with representation on committees for Puerto Rico's resident commissioner in 1970. In 1971, a delegate for the District of Columbia was authorized, and in 1972 new delegate positions were established for the U.S. Virgin Islands and Guam. In 1978, an additional delegate for American Samoa was added, and in 2009 Congress authorized another delegate for the Northern Mariana Islands.[citation needed] These six members of Congress enjoy floor privileges to introduce bills and resolutions, and in recent Congresses they vote in permanent and select committees, in party caucuses and in joint conferences with the Senate. They have Capitol Hill offices, staff and two annual appointments to each of the four military academies. While their votes are constitutional when Congress authorizes their House Committee of the Whole votes, recent Congresses have not allowed for that, and they cannot vote when the House is meeting as the House of Representatives. In the late 20th century, the media became more important in Congress's work. Analyst Michael Schudson suggested that greater publicity undermined the power of political parties and caused "more roads to open up in Congress for individual representatives to influence decisions". Norman Ornstein suggested that media prominence led to a greater emphasis on the negative and sensational side of Congress, and referred to this as the tabloidization of media coverage. Others saw pressure to squeeze a political position into a thirty-second soundbite. A report characterized Congress in 2013 as unproductive, gridlocked, and "setting records for futility". In October 2013, with Congress unable to compromise, the government was shut down for several weeks and risked a serious default on debt payments, causing 60% of the public to say they would "fire every member of Congress", including their own representative. One report suggested Congress posed the "biggest risk to the U.S. economy" because of its brinksmanship, "down-to-the-wire budget and debt crises" and "indiscriminate spending cuts", resulting in slowed economic activity and keeping up to two million people unemployed. There has been increasing public dissatisfaction with Congress, with extremely low approval ratings which dropped to 5% in October 2013. On January 6, 2021, Congress gathered to confirm the election of Joe Biden, when supporters of the outgoing president Donald Trump attacked the building. The session of Congress ended prematurely, and Congress representatives evacuated. Trump supporters occupied Congress until D.C. police evacuated the area. The event was the first time since the Burning of Washington by the British during the War of 1812 that the United States Congress was forcefully occupied. Despite the importance of Congress outlined in Article One, Congress has[when?] lost power to the executive and judiciary both intentionally and unintentionally. Women in Congress Various social and structural barriers have prevented women from gaining seats in Congress.
In the early 20th century, women's domestic roles and the inability to vote forestalled opportunities to run for and hold public office. The two-party system and the lack of term limits favored incumbent white men, making the widow's succession – in which a woman temporarily took over a seat vacated by the death of her husband – the most common path to Congress for white women. Women candidates began making substantial inroads in the later 20th century, due in part to new political support mechanisms and public awareness of their underrepresentation in Congress. Recruitment and financial support for women candidates were rare until the second-wave feminism movement, when activists moved into electoral politics. Beginning in the 1970s, donors and political action committees like EMILY's List began recruiting, training and funding women candidates. Watershed political moments like the confirmation of Clarence Thomas and the 2016 presidential election created momentum for women candidates, resulting in the Year of the Woman and the election of members of The Squad, respectively. Women of color faced additional challenges that made their ascension to Congress even more difficult. Jim Crow laws, voter suppression and other forms of structural racism made it virtually impossible for women of color to reach Congress prior to 1965. The passage of the Voting Rights Act that year and the elimination of race-based immigration laws in the 1960s opened the possibility for Black, Asian American, Latina and other non-white women candidates to run for Congress. Racially polarized voting, racial stereotypes and lack of institutional support still prevent women of color from reaching Congress as easily as white people. Senate elections, which require victories in statewide electorates, have been particularly difficult for women of color. Carol Moseley Braun became the first woman of color to reach the Senate in 1993. The second, Mazie Hirono, won in 2013. In 2021, Kamala Harris became the first female President of the Senate, an office that came with her role as the first female Vice President of the United States. Role Article One of the Constitution creates and sets forth the structure and most of the powers of Congress. Sections One through Six describe how Congress is elected and give each House the power to create its own structure. Section Seven lays out the process for creating laws, and Section Eight enumerates numerous powers. Section Nine is a list of powers Congress does not have, and Section Ten enumerates powers denied to the states, some of which may be exercised only with the consent of Congress. Constitutional amendments have granted Congress additional powers. Congress also has implied powers derived from the Constitution's Necessary and Proper Clause.[citation needed] Congress has authority over financial and budgetary policy through the enumerated power to "lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States". There is vast authority over budgets, although analyst Eric Patashnik suggested that much of Congress's power to manage the budget has been lost when the welfare state expanded, since "entitlements were institutionally detached from Congress's ordinary legislative routine and rhythm." Another factor leading to less control over the budget was a Keynesian belief that balanced budgets were unnecessary.
The Sixteenth Amendment in 1913 extended congressional power of taxation to include income taxes without apportionment among the several States, and without regard to any census or enumeration. The Constitution also grants Congress the exclusive power to appropriate funds, and this power of the purse is one of Congress's primary checks on the executive branch. Congress can borrow money on the credit of the United States, regulate commerce with foreign nations and among the states, and coin money. Generally, the Senate and the House of Representatives have equal legislative authority, although only the House may originate revenue and appropriation bills. Congress has an important role in national defense, including the exclusive power to declare war, to raise and maintain the armed forces, and to make rules for the military. Some critics charge that the executive branch has usurped Congress's constitutionally defined task of declaring war. While historically presidents initiated the process for going to war, they asked for and received formal war declarations from Congress for the War of 1812, the Mexican–American War, the Spanish–American War, World War I, and World War II, although President Theodore Roosevelt's military move into Panama in 1903 did not get congressional approval. In the early days after the North Korean invasion of 1950, President Truman described the American response as a "police action". According to Time magazine in 1970, "U.S. presidents [had] ordered troops into position or action without a formal congressional declaration a total of 149 times." In 1993, Michael Kinsley wrote that "Congress's war power has become the most flagrantly disregarded provision in the Constitution," and that the "real erosion [of Congress's war power] began after World War II." Disagreement about the extent of congressional versus presidential power regarding war has been present periodically throughout the nation's history. Congress can establish post offices and post roads, issue patents and copyrights, fix standards of weights and measures, establish Courts inferior to the Supreme Court, and "make all Laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the Government of the United States, or in any Department or Officer thereof". Article Four gives Congress the power to admit new states into the Union. One of Congress's foremost non-legislative functions is the power to investigate and oversee the executive branch. Congressional oversight is usually delegated to committees and is facilitated by Congress's subpoena power. Some critics have charged that Congress has in some instances failed to do an adequate job of overseeing the other branches of government. In the Plame affair, for example, critics including Representative Henry A. Waxman charged that Congress was not doing an adequate job of oversight. There have been concerns about congressional oversight of executive actions such as warrantless wiretapping, although others respond that Congress did investigate the legality of presidential decisions. Political scientists Ornstein and Mann suggested that oversight functions do not help members of Congress win reelection. Congress also has the exclusive power of removal, allowing impeachment and removal of the president, federal judges and other federal officers. 
There have been charges that presidents acting under the doctrine of the unitary executive have assumed important legislative and budgetary powers that should belong to Congress. So-called signing statements are one way in which a president can "tip the balance of power between Congress and the White House a little more in favor of the executive branch", according to one account. Past presidents, including Ronald Reagan, George H. W. Bush, Bill Clinton, and George W. Bush, have made public statements when signing congressional legislation about how they understand a bill or plan to execute it, and commentators, including the American Bar Association, have described this practice as against the spirit of the Constitution. There have been concerns that presidential authority to cope with financial crises is eclipsing the power of Congress. In 2008, George F. Will called the Capitol building a "tomb for the antiquated idea that the legislative branch matters". The Constitution enumerates the powers of Congress in detail. In addition, other congressional powers have been granted, or confirmed, by constitutional amendments. The Thirteenth (1865), Fourteenth (1868), and Fifteenth Amendments (1870) gave Congress authority to enact legislation to enforce rights of African Americans, including voting rights, due process, and equal protection under the law. Generally, militia forces are controlled by state governments, not Congress. Congress also has implied powers deriving from the Constitution's Necessary and Proper Clause, which permits Congress to "make all laws which shall be necessary and proper for carrying into Execution the foregoing Powers, and all other Powers vested by this Constitution in the Government of the United States, or in any Department or Officer thereof". Broad interpretations of this clause and of the Commerce Clause, the enumerated power to regulate commerce, in rulings such as McCulloch v. Maryland, have effectively widened the scope of Congress's legislative authority far beyond that prescribed in Section Eight. Constitutional responsibility for the oversight of Washington, D.C., the federal district and national capital, and the U.S. territories of Guam, American Samoa, Puerto Rico, the U.S. Virgin Islands, and the Northern Mariana Islands rests with Congress. The republican form of government in the territories is devolved by congressional statute to the respective territories, including the direct election of governors, the D.C. mayor and locally elected territorial legislatures. Each territory and Washington, D.C., elects a non-voting delegate to the U.S. House of Representatives, as they have throughout congressional history. They "possess the same powers as other members of the House, except that they may not vote when the House is meeting as the House of Representatives". They are assigned offices and allowances for staff, participate in debate, and appoint constituents to four military service academies, for the Army, Navy, Air Force and Merchant Marine. Citizens of Washington, D.C., alone among these jurisdictions, have the right to vote directly for the President of the United States, although the Democratic and Republican political parties nominate their presidential candidates at national conventions which include delegates from the five major territories. Representative Lee H. Hamilton explained how Congress functions within the federal government: To me the key to understanding it is balance. 
The founders went to great lengths to balance institutions against each other – balancing powers among the three branches: Congress, the president, and the Supreme Court; between the House of Representatives and the Senate; between the federal government and the states; among states of different sizes and regions with different interests; between the powers of government and the rights of citizens, as spelled out in the Bill of Rights ... No one part of government dominates the other. The Constitution provides checks and balances among the three branches of the federal government. Its authors expected the greater power to lie with Congress as described in Article One. The influence of Congress on the presidency has varied from period to period depending on factors such as congressional leadership, presidential political influence, historical circumstances such as war, and individual initiative by members of Congress. The impeachment of Andrew Johnson made the presidency less powerful than Congress for a considerable period afterwards. The 20th and 21st centuries have seen the rise of presidential power under politicians such as Theodore Roosevelt, Woodrow Wilson, Franklin D. Roosevelt, Richard Nixon, Ronald Reagan, and George W. Bush. Congress restricted presidential power with laws such as the Congressional Budget and Impoundment Control Act of 1974 and the War Powers Resolution. The presidency remains considerably more powerful today than during the 19th century. Executive branch officials are often loath to reveal sensitive information to members of Congress because of concern that information could not be kept secret; in return, knowing they may be in the dark about executive branch activity, congressional officials are more likely to distrust their counterparts in executive agencies. Many government actions require fast coordinated effort by many agencies, and this is a task that Congress is ill-suited for. Congress is slow, open, divided, and not well matched to handle more rapid executive action or do a good job of overseeing such activity, according to one analysis. The Constitution concentrates removal powers in the Congress by empowering and obligating the House of Representatives to impeach executive or judicial officials for "Treason, Bribery, or other high Crimes and Misdemeanors". Impeachment is a formal accusation of unlawful activity by a civil officer or government official. The Senate is constitutionally empowered and obligated to try all impeachments. A simple majority in the House is required to impeach an official; a two-thirds majority in the Senate is required for conviction. A convicted official is automatically removed from office; in addition, the Senate may stipulate that the defendant be banned from holding office in the future. Impeachment proceedings may not inflict more than this, but a convicted party may face criminal penalties in a normal court of law. In the history of the United States, the House of Representatives has impeached sixteen officials, of whom seven were convicted. Another resigned before the Senate could complete the trial. Only three presidents have ever been impeached: Andrew Johnson in 1868, Bill Clinton in 1998, and Donald Trump in 2019 and 2021. The trials of Johnson, Clinton, and Trump all ended in acquittal; in Johnson's case, the Senate fell one vote short of the two-thirds majority required for conviction. 
In 1974, Richard Nixon resigned from office after the House Judiciary Committee approved articles of impeachment, indicating that impeachment by the full House and removal from office were likely. The Senate has an important check on the executive power by confirming Cabinet officials, judges, and other high officers "by and with the Advice and Consent of the Senate". It confirms most presidential nominees, but rejections are not uncommon. Furthermore, treaties negotiated by the President must be ratified by a two-thirds majority vote in the Senate to take effect. As a result, presidential arm-twisting of senators can happen before a key vote; for example, President Obama's secretary of state, Hillary Clinton, urged her former Senate colleagues to approve a nuclear arms treaty with Russia in 2010. The House of Representatives has no formal role in either the ratification of treaties or the appointment of federal officials, other than in filling a vacancy in the office of the vice president; in such a case, a majority vote in each House is required to confirm a president's nomination of a vice president. In 1803, the Supreme Court established judicial review of federal legislation in Marbury v. Madison, holding that Congress could not grant unconstitutional power to the Court itself. The Constitution did not explicitly state that the courts may exercise judicial review; however, the notion that courts could declare laws unconstitutional was envisioned by the founding fathers. Alexander Hamilton, for example, mentioned and expounded upon the doctrine in Federalist No. 78. Originalists on the Supreme Court have argued that if the Constitution does not say something explicitly, it is unconstitutional to infer what it should, might, or could have said. Judicial review means that the Supreme Court can nullify a congressional law; it is a major check by the courts on legislative authority and limits congressional power substantially. In 1857, for example, the Supreme Court struck down provisions of a congressional act of 1820 in its Dred Scott decision. At the same time, the Supreme Court can extend congressional power through its constitutional interpretations. The congressional inquiry into St. Clair's Defeat of 1791 was the first congressional investigation of the executive branch. Investigations are conducted to gather information on the need for future legislation, to test the effectiveness of laws already passed, and to inquire into the qualifications and performance of members and officials of the other branches. Committees may hold hearings and, if necessary, subpoena people to testify when investigating issues over which Congress has the power to legislate. Witnesses who refuse to testify may be cited for contempt of Congress, and those who testify falsely may be charged with perjury. Most committee hearings are open to the public (the House and Senate intelligence committees are the exception); important hearings are widely reported in the mass media, and transcripts are published a few months afterwards. Congress, in the course of studying possible laws and investigating matters, generates a vast amount of information in various forms and can be described as a publisher. Indeed, it publishes House and Senate reports and maintains databases which are updated irregularly with publications in a variety of electronic formats. Congress also plays a role in presidential elections. 
Both Houses meet in joint session on the sixth day of January following a presidential election to count the electoral votes, and there are procedures to follow if no candidate wins a majority. The main result of congressional activity is the creation of laws, most of which are contained in the United States Code, arranged by subject matter alphabetically under fifty title headings to present the laws "in a concise and usable form". Structure Congress is split into two chambers – House and Senate – and manages the task of writing national legislation by dividing work into separate committees which specialize in different areas. Some members of Congress are elected by their peers to be officers of these committees. Congress has ancillary organizations such as the Government Accountability Office and the Library of Congress to help provide it with information, and members of Congress have staff and offices to assist them as well. In addition, a vast industry of lobbyists helps members write legislation on behalf of diverse corporate and labor interests. The committee structure permits members of Congress to study a particular subject intensely. It is neither expected nor possible that a member be an expert on all subject areas before Congress. As time goes by, members develop expertise in particular subjects and their legal aspects. Committees investigate specialized subjects and advise the entire Congress about choices and trade-offs. The choice of specialty may be influenced by the member's constituency, important regional issues, prior background and experience. Senators often choose a different specialty from that of the other senator from their state to prevent overlap. Some committees specialize in running the business of other committees and exert a powerful influence over all legislation; for example, the House Ways and Means Committee has considerable influence over House affairs. Committees write legislation. While procedures, such as the House discharge petition process, can introduce bills to the House floor and effectively bypass committee input, they are exceedingly difficult to implement without committee action. Committees have power and have been called independent fiefdoms. Legislative, oversight, and internal administrative tasks are divided among about two hundred committees and subcommittees which gather information, evaluate alternatives, and identify problems. They propose solutions for consideration by the full chamber. In addition, they perform the function of oversight by monitoring the executive branch and investigating wrongdoing. At the start of each two-year Congress, the House elects a speaker who does not normally preside over debates but serves as the majority party's leader. In the Senate, the vice president is the ex officio president of the Senate. In addition, the Senate elects an officer called the president pro tempore; pro tempore means "for the time being". This office is usually held by the most senior member of the Senate's majority party and is customarily kept until there is a change in party control; accordingly, the Senate does not necessarily elect a new president pro tempore at the beginning of a new Congress. In the House and Senate, the actual presiding officer is generally a junior member of the majority party who is appointed so that new members become acquainted with the rules of the chamber. The Library of Congress (LOC) was established by an act of Congress in 1800. 
It is primarily housed in three buildings on Capitol Hill, but also includes several other sites: the National Library Service for the Blind and Physically Handicapped in Washington, D.C.; the National Audio-Visual Conservation Center in Culpeper, Virginia; a large book storage facility located in Fort Meade, Maryland; and multiple overseas offices. The Library had mostly law books when it was burnt by British forces in 1814 during the War of 1812. The library's collections were restored and expanded when Congress authorized the purchase of Thomas Jefferson's private library. One of the library's missions is to serve Congress and its staff as well as the American public. It is the largest library in the world, with nearly 150 million items including books, films, maps, photographs, music, manuscripts, graphics, and materials in 470 languages. The Congressional Research Service (CRS), part of the Library of Congress, provides detailed, up-to-date and non-partisan research for senators, representatives, and their staff to help them carry out their official duties. It provides ideas for legislation, helps members analyze a bill, facilitates public hearings, makes reports, consults on matters such as parliamentary procedure, and helps the two chambers resolve disagreements. It has been called "Congress's think tank" and has a staff of about 900 employees. The Congressional Budget Office (CBO) is a federal agency which provides economic data to Congress. It was created as an independent non-partisan agency by the Congressional Budget and Impoundment Control Act of 1974. It helps Congress estimate revenue inflows from taxes and helps the budgeting process. It makes projections about such matters as the national debt as well as likely costs of legislation. It prepares an annual Economic and Budget Outlook with a mid-year update and writes An Analysis of the President's Budgetary Proposals for the Senate's Appropriations Committee. The speaker of the House and the Senate's president pro tempore jointly appoint the CBO director for a four-year term. The Government Accountability Office (GAO) is a federal agency within the legislative branch that provides auditing, evaluative, and investigative services for the United States Congress in an independent and nonpartisan capacity. The GAO is the supreme audit institution of the federal government of the United States. It identifies its core "mission values" as accountability, integrity, and reliability, and is also known as the "congressional watchdog". The Architect of the Capitol (AOC) is a federal agency within the legislative branch that is responsible for the maintenance, operation, development, construction, building preservation, and property management of the United States Capitol Complex and is accountable directly to the United States Congress and the Supreme Court of the United States. Lobbyists, advocates, principals, and constituents, as well as others that operate in the field of government relations, represent diverse interests and often seek to influence congressional decisions to reflect their clients' or their own needs. Lobbying firms, government relations firms, advocacy groups, businesses, nonprofits, other organizations and their members that conduct lobbying or policy advocacy activities sometimes write legislation and whip bills. 
In 2007, there were approximately 17,000 federal lobbyists in Washington, D.C., though not all members of the government relations industry are lobbyists who must appear on an official lobbying register. They explain to legislators the goals of their organizations. Some lobbyists represent non-profit organizations and some work pro bono for issues in which they are personally interested. Congress has alternated between periods of constructive cooperation and compromise between parties, known as bipartisanship, and periods of deep political polarization and fierce infighting, known as partisanship. The period after the Civil War was marked by partisanship, as is the case today. It is generally easier for committees to reach accord on issues when compromise is possible. Some political scientists speculate that a prolonged period marked by narrow majorities in both chambers of Congress has intensified partisanship in the last few decades, but that an alternation of control of Congress between Democrats and Republicans may lead to greater flexibility in policies, as well as pragmatism and civility within the institution. Procedures A term of Congress is divided into two "sessions", one for each year; Congress has occasionally been called into an extra or special session. A new session commences on January 3 each year unless Congress decides differently. The Constitution requires Congress to meet at least once each year and forbids either house from meeting outside the Capitol without the consent of the other house. Joint sessions of the United States Congress occur on special occasions that require a concurrent resolution from House and Senate. These sessions include counting electoral votes after a presidential election and the president's State of the Union address. The constitutionally mandated report, normally given as an annual speech, is modeled on Britain's Speech from the Throne; it was delivered in writing by most presidents after Jefferson, but has been personally delivered as a spoken oration beginning with Wilson in 1913. Joint sessions and joint meetings are traditionally presided over by the speaker of the House, except when counting presidential electoral votes, when the vice president (acting as the president of the Senate) presides. Ideas for legislation can come from members, lobbyists, state legislatures, constituents, legislative counsel, or executive agencies. Anyone can write a bill, but only members of Congress may introduce bills. Most bills are not written by Congress members, but originate from the executive branch; interest groups often draft bills as well. The usual next step is for the proposal to be passed to a committee for review. A proposal usually takes one of four forms: a bill, a joint resolution, a concurrent resolution, or a simple resolution. Representatives introduce a bill while the House is in session by placing it in the hopper on the Clerk's desk. It is assigned a number and referred to a committee, which studies each bill intensely at this stage. Drafting statutes requires "great skill, knowledge, and experience" and sometimes takes a year or more. Sometimes lobbyists write legislation and submit it to a member for introduction. Joint resolutions are the normal way to propose a constitutional amendment or declare war. On the other hand, concurrent resolutions (passed by both houses) and simple resolutions (passed by only one house) do not have the force of law but express the opinion of Congress or regulate procedure. Bills may be introduced by any member of either house. 
The Constitution states: "All Bills for raising Revenue shall originate in the House of Representatives." While the Senate cannot originate revenue and appropriation bills, it has the power to amend or reject them. Congress has sought ways to establish appropriate spending levels. Each chamber determines its own internal rules of operation unless specified in the Constitution or prescribed by law. In the House, a Rules Committee guides legislation; in the Senate, a Standing Rules committee is in charge. Each chamber has its own traditions; for example, the Senate relies heavily on the practice of getting "unanimous consent" for noncontroversial matters. House and Senate rules can be complex, sometimes requiring a hundred specific steps before a bill can become a law. Members sometimes turn to outside experts to learn about proper congressional procedures. Each bill goes through several stages in each house, including consideration by a committee and advice from the Government Accountability Office. Most legislation is considered by standing committees, which have jurisdiction over a particular subject such as Agriculture or Appropriations. The House has twenty standing committees; the Senate has sixteen. Standing committees meet at least once each month. Almost all standing committee meetings for transacting business must be open to the public unless the committee votes, publicly, to close the meeting. A committee might call for public hearings on important bills. Each committee is led by a chair, who belongs to the majority party, and a ranking member of the minority party. Witnesses and experts can present their case for or against a bill. Then, a bill may go to what is called a mark-up session, where committee members debate the bill's merits and may offer amendments or revisions. Committees may also amend the bill, but the full house holds the power to accept or reject committee amendments. After debate, the committee votes whether it wishes to report the measure to the full house. If a bill is tabled, it is effectively rejected. If amendments are extensive, sometimes a new bill with amendments built in will be submitted as a so-called clean bill with a new number. Both houses have procedures under which committees can be bypassed or overruled, but they are rarely used. Generally, members who have been in Congress longer have greater seniority and therefore greater power. A bill which reaches the floor of the full house can be simple or complex and begins with an enacting formula such as "Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled ..." Consideration of a bill requires, itself, a rule, which is a simple resolution specifying the particulars of debate – time limits, the possibility of further amendments, and such. Each side has equal time and members can yield to other members who wish to speak. Sometimes opponents seek to recommit a bill, that is, to send it back to committee, often with instructions to change part of it. Discussion generally requires a quorum, usually half of the total number of representatives, before it can begin, although there are exceptions. The house may debate and amend the bill; the precise procedures used by the House and Senate differ. A final vote on the bill follows. Once a bill is approved by one house, it is sent to the other, which may pass, reject, or amend it. For the bill to become law, both houses must agree to identical versions of the bill. 
If the second house amends the bill, then the differences between the two versions must be reconciled in a conference committee, an ad hoc committee that includes both senators and representatives; for budget bills, a special reconciliation process is sometimes used instead to limit debate. Both houses use a budget enforcement mechanism informally known as pay-as-you-go or paygo, which discourages members from considering acts that increase budget deficits. If both houses agree to the version reported by the conference committee, the bill passes; otherwise it fails. The Constitution specifies that a majority of members (a quorum) be present before doing business in each house. The rules of each house assume that a quorum is present unless a quorum call demonstrates the contrary, and debate often continues despite the lack of a majority. Voting within Congress can take many forms, including systems using lights and bells and electronic voting. Both houses use voice voting to decide most matters, in which members shout "aye" or "no" and the presiding officer announces the result. The Constitution requires a recorded vote if demanded by one-fifth of the members present or when voting to override a presidential veto. If the voice vote is unclear or if the matter is controversial, a recorded vote usually happens. The Senate uses roll-call voting, in which a clerk calls out the names of all the senators, each senator stating "aye" or "no" when their name is announced. In the Senate, the Vice President may cast the tie-breaking vote if present when the senators are equally divided. The House reserves roll-call votes for the most formal matters, as a roll call of all 435 representatives takes quite some time; normally, members vote by using an electronic device. In the case of a tie, the motion in question fails. Most votes in the House are done electronically, allowing members to vote yea, nay, or present. Members insert a voting ID card and can change their votes during the last five minutes if they choose; in addition, paper ballots are used occasionally (yea indicated by green and nay by red). One member cannot cast a proxy vote for another. Congressional votes are recorded on an online database. After passage by both houses, a bill is enrolled and sent to the president for approval. The president may sign it, making it law, or veto it, perhaps returning it to Congress with the president's objections. A vetoed bill can still become law if each house of Congress votes to override the veto with a two-thirds majority. Finally, the president may do nothing, neither signing nor vetoing the bill, and then the bill becomes law automatically after ten days (not counting Sundays) according to the Constitution. But if Congress is adjourned during this period, presidents may veto legislation passed at the end of a congressional session simply by ignoring it; the maneuver is known as a pocket veto, and cannot be overridden by the adjourned Congress. Public interaction Senators face reelection every six years, and representatives every two. Reelections encourage candidates to focus their publicity efforts on their home states or districts. Running for reelection can be a grueling process of distant travel and fund-raising which distracts senators and representatives from paying attention to governing, according to some critics. 
Others respond that the process is necessary to keep members of Congress in touch with voters. Incumbent members of Congress running for reelection have strong advantages over challengers. They raise more money because donors fund incumbents over challengers, perceiving the former as more likely to win, and donations are vital for winning elections. One critic compared election to Congress to receiving life tenure at a university. Another advantage for representatives is the practice of gerrymandering. After each ten-year census, states are allocated representatives based on population, and officials in power can choose how to draw the congressional district boundaries to support candidates from their party. As a result, reelection rates of members of Congress hover around 90 percent, causing some critics to call them a privileged class. Academics such as Princeton's Stephen Macedo have proposed solutions to fix gerrymandering in the United States. Senators and representatives enjoy free mailing privileges, called franking privileges; while these are not intended for electioneering, this rule is often skirted by borderline election-related mailings during campaigns. In 1971, the cost of running for Congress in Utah was $70,000, but costs have climbed. The biggest expense is television advertisements. Today's races cost more than a million dollars for a House seat, and six million or more for a Senate seat. Since fundraising is vital, "members of Congress are forced to spend ever-increasing hours raising money for their re-election", according to the Fair Elections Now coalition. The Supreme Court has treated campaign contributions as a free speech issue. Some see money as a good influence in politics since it "enables candidates to communicate with voters". Few members retire from Congress without complaining about how much it costs to campaign for reelection. Critics contend that members of Congress are more likely to attend to the needs of heavy campaign contributors than to ordinary citizens. Elections are influenced by many variables. Some political scientists speculate there is a coattail effect (when a popular president or party position has the effect of reelecting incumbents who win by "riding on the president's coattails"), although there is some evidence that the coattail effect is irregular and possibly declining since the 1950s. Some districts are so heavily Democratic or Republican that they are called a safe seat; any candidate winning the primary will almost always be elected, and these candidates do not need to spend money on advertising. But some races can be competitive when there is no incumbent. If a seat becomes vacant in an open district, then both parties may spend heavily on advertising in these races; in California in 1992, only four of twenty races for House seats were considered highly competitive. Since members of Congress must advertise heavily on television, this usually involves negative advertising, which smears an opponent's character without focusing on the issues. Negative advertising is seen as effective because "the messages tend to stick." These advertisements sour the public on the political process in general, as most members of Congress seek to avoid blame. One wrong decision or one damaging television image can mean defeat at the next election, which leads to a culture of risk avoidance, a need to make policy decisions behind closed doors, and concentrating publicity efforts in the members' home districts. 
Prominent Founding Fathers, writing in The Federalist Papers, felt that elections were essential to liberty, that a bond between the people and the representatives was particularly essential, and that "frequent elections are unquestionably the only policy by which this dependence and sympathy can be effectually secured." In 2009, few Americans were familiar with leaders of Congress. The percentage of Americans eligible to vote who did, in fact, vote was 63% in 1960, but has been falling since, although there was a slight upward trend in the 2008 election. Public opinion polls asking people if they approve of the job Congress is doing have, in the last few decades, hovered around 25% with some variation. Scholar Julian Zelizer suggested that the "size, messiness, virtues, and vices that make Congress so interesting also create enormous barriers to our understanding the institution ... Unlike the presidency, Congress is difficult to conceptualize." Other scholars suggest that despite the criticism, "Congress is a remarkably resilient institution ... its place in the political process is not threatened ... it is rich in resources" and that most members behave ethically. They contend that "Congress is easy to dislike and often difficult to defend," and this perception is exacerbated because many challengers running for Congress run against Congress, an "old form of American politics" that further undermines Congress's reputation with the public: The rough-and-tumble world of legislating is not orderly and civil, human frailties too often taint its membership, and legislative outcomes are often frustrating and ineffective ... Still, we are not exaggerating when we say that Congress is essential to American democracy. We would not have survived as a nation without a Congress that represented the diverse interests of our society, conducted a public debate on the major issues, found compromises to resolve conflicts peacefully, and limited the power of our executive, military, and judicial institutions ... The popularity of Congress ebbs and flows with the public's confidence in government generally ... the legislative process is easy to dislike – it often generates political posturing and grandstanding, it necessarily involves compromise, and it often leaves broken promises in its trail. Also, members of Congress often appear self-serving as they pursue their political careers and represent interests and reflect values that are controversial. Scandals, even when they involve a single member, add to the public's frustration with Congress and have contributed to the institution's low ratings in opinion polls. — Smith, Roberts & Vander Wielen An additional factor that confounds public perceptions of Congress is that congressional issues are becoming more technical and complex and require expertise in subjects such as science, engineering and economics. As a result, Congress often cedes authority to experts at the executive branch. Since 2006, Congress has dropped ten points in the Gallup confidence poll, with only nine percent having "a great deal" or "quite a lot" of confidence in their legislators. Since 2011, the Gallup poll has reported Congress's approval rating among Americans at 10% or below three times. Public opinion of Congress plummeted further to 5% in October 2013 after parts of the U.S. government deemed nonessential were shut down. When the Constitution was written in 1787, the ratio of the populations of large states to small states was roughly twelve to one. 
The Connecticut Compromise gave every state, large and small, an equal vote in the Senate. Since each state has two senators, residents of smaller states have more clout in the Senate than residents of larger states. But since 1787, the population disparity between large and small states has grown; in 2006, for example, California had seventy times the population of Wyoming. Critics, such as constitutional scholar Sanford Levinson, have suggested that the population disparity works against residents of large states and causes a steady redistribution of resources from "large states to small states". Others argue that the Connecticut Compromise was deliberately intended by the Founding Fathers to construct the Senate so that each state had equal footing not based on population, and contend that the result works well on balance. A major role for members of Congress is providing services to constituents. Constituents request assistance with problems. Providing services helps members of Congress win votes and elections and can make a difference in close races. Congressional staff can help citizens navigate government bureaucracies. One academic described the complex intertwined relation between lawmakers and constituents as home style. One way to categorize lawmakers, according to former University of Rochester political science professor Richard Fenno, is by their general motivation. Privileges Representative Jim Cooper of Tennessee told Harvard professor Lawrence Lessig that a chief problem with Congress was that members focused on their future careers as lobbyists after serving – that Congress was a "Farm League for K Street". Family members of active legislators have also been hired by lobbying firms; while such firms are not allowed to lobby the related legislator, the practice has drawn criticism as a conflict of interest. Members of Congress have been accused of insider trading, such as in the 2020 congressional insider trading scandal, in which members of Congress or their family members were alleged to have traded stocks based on information related to their committee work. One 2011 study concluded that portfolios of members of Congress outperformed both the market and hedge funds, which the authors suggested as evidence of insider trading. Proposed solutions include putting stocks in blind trusts to prevent future insider trading. Some members of Congress have gone on lavish trips paid for by outside groups, sometimes bringing family members; such trips are often legal even if in an ethical gray area. Some critics complain congressional pay is high compared with the median American income. Others have countered that congressional pay is consistent with other branches of government. Another criticism is that members of Congress are insulated from the health care market due to their coverage. Others have criticized the wealth of members of Congress. In January 2014, it was reported that, for the first time, over half of the members of Congress were millionaires. Congress has been criticized for trying to conceal pay raises by slipping them into a large bill at the last minute. Members elected since 1984 are covered by the Federal Employees Retirement System (FERS). Like other federal employees, congressional retirement is funded through taxes and participants' contributions. Members of Congress under FERS contribute 1.3% of their salary into the FERS retirement plan and pay 6.2% of their salary in Social Security taxes. 
Members also contribute one-third of the cost of health insurance, with the government covering the other two-thirds. The size of a congressional pension depends on the years of service and the average of the highest three years of salary. By law, the starting amount of a member's retirement annuity may not exceed 80% of their final salary. In 2018, the average annual pension for retired senators and representatives under the Civil Service Retirement System (CSRS) was $75,528, while the average for those who retired under FERS, or in combination with CSRS, was $41,208. Members of Congress make fact-finding missions to learn about other countries and stay informed, but these outings can cause controversy if the trip is deemed excessive or unconnected with the task of governing. For example, The Wall Street Journal reported in 2009 that lawmaker trips abroad at taxpayer expense had included spas, $300-per-night extra unused rooms, and shopping excursions. Some lawmakers responded that "traveling with spouses compensates for being away from them a lot in Washington" and justified the trips as a way to meet officials in other nations. By the Twenty-seventh Amendment, changes to congressional pay may not take effect before the next election to the House of Representatives. In Boehner v. Anderson, the United States Court of Appeals for the District of Columbia Circuit ruled that the amendment does not affect cost-of-living adjustments. The franking privilege allows members of Congress to send official mail to constituents at government expense. Though they are not permitted to send election materials, borderline material is often sent, especially in the run-up to an election by those in close races. Some academics consider free mailings as giving incumbents a big advantage over challengers. Members of Congress enjoy parliamentary privilege, including freedom from arrest in all cases except for treason, felony, and breach of the peace, and freedom of speech in debate. This constitutionally derived immunity applies to members during sessions and when traveling to and from sessions. The term "arrest" has been interpreted broadly, and includes any detention or delay in the course of law enforcement, including court summons and subpoenas. The rules of the House strictly guard this privilege; a member may not waive the privilege on their own but must seek the permission of the whole house to do so. Senate rules are less strict and permit individual senators to waive the privilege as they choose. The Constitution guarantees absolute freedom of debate in both houses, providing in the Speech or Debate Clause of the Constitution that "for any Speech or Debate in either House, they shall not be questioned in any other Place." Accordingly, a member of Congress may not be sued in court for slander because of remarks made in either house, although each house has its own rules restricting offensive speeches, and may punish members who transgress. Obstructing the work of Congress is a crime under federal law and is known as contempt of Congress. Each house has the power to cite people for contempt, but can only issue a contempt citation – the judicial system pursues the matter like a normal criminal case. If convicted in court of contempt of Congress, a person may be imprisoned for up to one year. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Nicolaus_Copernicus] | [TOKENS: 16865] |
Nicolaus Copernicus Nicolaus Copernicus (19 February 1473 – 24 May 1543) was a Renaissance polymath who formulated a model of the universe that placed the Sun rather than Earth at its center. The publication of Copernicus's model in his book De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), just before his death in 1543, was a major event in the history of science, triggering the Copernican Revolution and making a pioneering contribution to the Scientific Revolution. Though a similar heliocentric model had been developed eighteen centuries earlier by Aristarchus of Samos, an ancient Greek astronomer, Copernicus likely arrived at his model independently. Copernicus was born and died in Royal Prussia, a semiautonomous and multilingual region created within the Crown of the Kingdom of Poland from lands regained from the Teutonic Order after the Thirteen Years' War. A polyglot and polymath, he obtained a doctorate in canon law and was a mathematician, astronomer, physician, classics scholar, translator, governor, diplomat, and economist. From 1497 he was a Warmian Cathedral chapter canon. In 1517 he derived a quantity theory of money—a key concept in economics—and in 1519 he formulated an economic principle that later came to be called Gresham's law. Life Nicolaus Copernicus was born on 19 February 1473 in the city of Toruń (Thorn), in the province of Royal Prussia, in the Crown of the Kingdom of Poland, to German-speaking parents. His father was a merchant from Kraków and his mother was the daughter of a wealthy Toruń merchant. Nicolaus was the youngest of four children. His brother Andreas (Andrew) became an Augustinian canon at Frombork (Frauenburg). His sister Barbara, named after her mother, became a Benedictine nun and, in her final years, prioress of a convent in Chełmno (Kulm); she died after 1517. His sister Katharina married the businessman and Toruń city councilor Barthel Gertner and left five children, whom Copernicus looked after to the end of his life. Copernicus never married and is not known to have had children, but from at least 1531 until 1539 his relations with Anna Schilling, a live-in housekeeper, were seen as scandalous by two bishops of Warmia who urged him over the years to break off relations with his "mistress". Copernicus's father's family originally migrated to Silesia in the thirteenth century. The family can be traced to a village between Nysa (Neiße) and Prudnik (Neustadt). The village's name has been variously spelled Kopernik, Copernik, Copernic, Kopernic, Coprirnik, and modern Koperniki. In the 14th century, members of the family began moving to various other Silesian cities, to the Polish capital, Kraków (1367), and to Toruń (1400). In 1396, Niklas Koppernigk, the astronomer's great-great-grandfather, became a burgher of Kraków. The father, likewise named Niklas Koppernigk, likely the son of Jan (or Johann), was first recorded in Kraków in 1448. Nicolaus was named after his father, who appears in records for the first time as a well-to-do merchant who dealt in copper, selling it mostly in Danzig (Gdańsk). He moved from Kraków to Toruń around 1458. Toruń, situated on the Vistula River, was at that time embroiled in the Thirteen Years' War, in which the Kingdom of Poland and the Prussian Confederation, an alliance of Prussian cities, gentry and clergy, fought the Teutonic Order over control of the region. 
In this war, Hanseatic cities like Danzig and Toruń, Nicolaus Copernicus's hometown, chose to support the Polish King, Casimir IV Jagiellon, who promised to respect the cities' traditional vast independence, which the Teutonic Order had challenged. Nicolaus's father was actively engaged in the politics of the day and supported Poland and the cities against the Teutonic Order. In 1454 he mediated negotiations between Poland's Cardinal Zbigniew Oleśnicki and the Prussian cities for repayment of war loans. In the Second Peace of Thorn (1466), the Teutonic Order formally renounced all claims to the conquered lands, which returned to Poland as Royal Prussia and remained part of it until the First (1772) and Second (1793) Partitions of Poland. Copernicus's father married Barbara Watzenrode, the astronomer's mother, between 1461 and 1464. He died about 1483. Nicolaus's mother, Barbara Watzenrode, was the daughter of a wealthy Toruń patrician and city councillor, Lucas Watzenrode the Elder (deceased 1462), and Katarzyna (widow of Jan Peckau), mentioned in other sources as Katarzyna Rüdiger gente Modlibóg (deceased 1476). The Modlibógs were a prominent Polish family who had been well known in Poland's history since 1271. The Watzenrode family, like the Kopernik family, had come from Silesia from near Schweidnitz (Świdnica), and after 1360 had settled in Toruń. They soon became one of the wealthiest and most influential patrician families. Through the Watzenrodes' extensive family relationships by marriage, Copernicus was related to wealthy families of Toruń (Thorn), Danzig (Gdańsk) and Elbing (Elbląg), and to prominent Polish noble families of Prussia: the Czapskis, Działyńskis, Konopackis and Kościeleckis. Lucas and Katherine had three children: Lucas Watzenrode the Younger (1447–1512), who would become Bishop of Warmia and Copernicus's patron; Barbara, the astronomer's mother (deceased after 1495); and Christina (deceased before 1502), who in 1459 married the Toruń merchant and mayor, Tiedeman von Allen. Lucas Watzenrode the Elder, a wealthy merchant and in 1439–62 president of the judicial bench, was a decided opponent of the Teutonic Knights. In 1453 he was the delegate from Toruń at the Grudziądz (Graudenz) conference that planned the uprising against them. During the ensuing Thirteen Years' War, he actively supported the Prussian cities' war effort with substantial monetary subsidies (only part of which he later re-claimed), with political activity in Toruń and Danzig, and by personally fighting in battles at Łasin (Lessen) and Malbork (Marienburg). He died in 1462. Lucas Watzenrode the Younger, the astronomer's maternal uncle and patron, was educated at the University of Kraków and at the universities of Cologne and Bologna. He was a bitter opponent of the Teutonic Order, and its Grand Master once referred to him as "the devil incarnate". In 1489 Watzenrode was elected Bishop of Warmia (Ermeland, Ermland) against the preference of King Casimir IV, who had hoped to install his own son in that seat. As a result, Watzenrode quarreled with the king until Casimir IV's death three years later. Watzenrode was then able to form close relations with three successive Polish monarchs: John I Albert, Alexander Jagiellon, and Sigismund I the Old. He was a friend and key advisor to each ruler, and his influence greatly strengthened the ties between Warmia and Poland proper. 
Watzenrode came to be considered the most powerful man in Warmia, and his wealth, connections and influence allowed him to secure Copernicus's education and career as a canon at Frombork Cathedral. Copernicus's father died around 1483, when the boy was 10. His maternal uncle, Lucas Watzenrode the Younger (1447–1512), took Copernicus under his wing and saw to his education and career. Six years later, Watzenrode was elected Bishop of Warmia. Watzenrode maintained contacts with leading intellectual figures in Poland and was a friend of the influential Italian-born humanist and Kraków courtier Filippo Buonaccorsi. There are no surviving primary documents on the early years of Copernicus's childhood and education. Copernicus biographers assume that Watzenrode first sent young Copernicus to St. John's School, at Toruń, where he himself had been a master. Later, according to Armitage, the boy attended the Cathedral School at Włocławek, up the Vistula River from Toruń, which prepared pupils for entrance to the University of Kraków. In the winter semester of 1491–92 Copernicus, as "Nicolaus Nicolai de Thuronia", matriculated together with his brother Andrew at the University of Kraków. Copernicus began his studies in the Department of Arts (from the fall of 1491, presumably until the summer or fall of 1495) in the heyday of the Kraków astronomical-mathematical school, acquiring the foundations for his subsequent mathematical achievements. According to a later but credible tradition (Jan Brożek), Copernicus was a pupil of Albert Brudzewski, who by then (from 1491) was a professor of Aristotelian philosophy but taught astronomy privately outside the university; Copernicus became familiar with Brudzewski's widely read commentary to Georg von Peuerbach's Theoricæ novæ planetarum and almost certainly attended the lectures of Bernard of Biskupie and Wojciech Krypa of Szamotuły, and probably other astronomical lectures by Jan of Głogów, Michał of Wrocław (Breslau), Wojciech of Pniewy, and Marcin Bylica of Olkusz. Copernicus's Kraków studies gave him a thorough grounding in the mathematical astronomy taught at the university (arithmetic, geometry, geometric optics, cosmography, theoretical and computational astronomy) and a good knowledge of the philosophical and natural-science writings of Aristotle (De coelo, Metaphysics) and Averroes, stimulating his interest in learning and making him conversant with humanistic culture. Copernicus broadened the knowledge that he took from the university lecture halls with independent reading of books that he acquired during his Kraków years (Euclid, Haly Abenragel, the Alfonsine Tables, Johannes Regiomontanus' Tabulae directionum); to this period, probably, also date his earliest scientific notes, preserved partly at Uppsala University. At Kraków Copernicus began collecting a large library on astronomy; it would later be carried off as war booty by the Swedes during the Deluge in the 1650s and has been preserved at the Uppsala University Library. Copernicus's four years at Kraków played an important role in the development of his critical faculties and initiated his analysis of logical contradictions in the two "official" systems of astronomy—Aristotle's theory of homocentric spheres, and Ptolemy's mechanism of eccentrics and epicycles—the surmounting and discarding of which would be the first step toward the creation of Copernicus's own doctrine of the structure of the universe. 
Without taking a degree, probably in the fall of 1495, Copernicus left Kraków for the court of his uncle Watzenrode, who in 1489 had been elevated to Prince-Bishop of Warmia and soon (before November 1495) sought to place his nephew in the Warmia canonry vacated by the 26 August 1495 death of its previous tenant, Jan Czanow. For unclear reasons—probably due to opposition from part of the chapter, who appealed to Rome—Copernicus's installation was delayed, inclining Watzenrode to send both his nephews to study canon law in Italy, seemingly with a view to furthering their ecclesiastic careers and thereby also strengthening his own influence in the Warmia chapter. On 20 October 1497, Copernicus, by proxy, formally succeeded to the Warmia canonry which had been granted to him two years earlier. To this, by a document dated 10 January 1503 at Padua, he would add a sinecure at the Collegiate Church of the Holy Cross and St. Bartholomew in Wrocław (at the time in the Crown of Bohemia). Despite having been granted a papal indult on 29 November 1508 to receive further benefices, throughout his ecclesiastic career Copernicus not only did not acquire further prebends and higher stations (prelacies) at the chapter, but in 1538 he relinquished the Wrocław sinecure. It is unclear whether he was ever ordained a priest. Edward Rosen asserts that he was not. Copernicus did take minor orders, which sufficed for assuming a chapter canonry. The Catholic Encyclopedia proposes that his ordination was probable, as in 1537 he was one of four candidates for the episcopal seat of Warmia, a position that required ordination. Meanwhile, leaving Warmia in mid-1496—possibly with the retinue of the chapter's chancellor, Jerzy Pranghe, who was going to Italy—in the fall, possibly in October, Copernicus arrived in Bologna and a few months later (after 6 January 1497) signed himself into the register of the Bologna University of Jurists' "German nation", which included young Poles from Silesia, Prussia and Pomerania as well as students of other nationalities. During his three-year stay at Bologna, which occurred between fall 1496 and spring 1501, Copernicus seems to have devoted himself less keenly to studying canon law (he received his doctorate in canon law only after seven years, following a second return to Italy in 1503) than to studying the humanities—probably attending lectures by Filippo Beroaldo, Antonio Urceo, called Codro, Giovanni Garzoni, and Alessandro Achillini—and to studying astronomy. He met the famous astronomer Domenico Maria Novara da Ferrara and became his disciple and assistant. Copernicus was developing new ideas inspired by reading the "Epitome of the Almagest" (Epitome in Almagestum Ptolemei) by Georg von Peuerbach and Johannes Regiomontanus (Venice, 1496). He verified its observations about certain peculiarities in Ptolemy's theory of the Moon's motion by conducting, on 9 March 1497 at Bologna, a memorable observation of the occultation of Aldebaran, the brightest star in the Taurus constellation, by the Moon. Copernicus the humanist sought confirmation for his growing doubts through close reading of Greek and Latin authors (Pythagoras, Aristarchos of Samos, Cleomedes, Cicero, Pliny the Elder, Plutarch, Philolaus, Heraclides, Ecphantos, Plato), gathering, especially while at Padua, fragmentary historic information about ancient astronomical, cosmological and calendar systems. 
Copernicus spent the jubilee year 1500 in Rome, where he arrived with his brother Andrew that spring, doubtless to perform an apprenticeship at the Papal Curia. Here, too, he continued the astronomical work he had begun at Bologna, observing, for example, a lunar eclipse on the night of 5–6 November 1500. According to a later account by Rheticus, Copernicus also—probably privately, rather than at the Roman Sapienza—as a "Professor Mathematum" (professor of astronomy) delivered, "to numerous ... students and ... leading masters of the science", public lectures devoted probably to a critique of the mathematical solutions of contemporary astronomy.

On his return journey, doubtless stopping briefly at Bologna, Copernicus arrived back in Warmia in mid-1501. After receiving a two-year extension of leave from the chapter on 28 July to study medicine (since "he may in future be a useful medical advisor to our Reverend Superior [Bishop Lucas Watzenrode] and the gentlemen of the chapter"), he returned to Italy in late summer or autumn, probably accompanied by his brother Andrew[m] and by Canon Bernhard Sculteti. This time he studied at the University of Padua, famous as a seat of medical learning, and—except for a brief visit to Ferrara in May–June 1503 to pass examinations for, and receive, his doctorate in canon law—he remained at Padua from fall 1501 to summer 1503.

Copernicus studied medicine probably under the direction of leading Padua professors—Bartolomeo da Montagnana, Girolamo Fracastoro, Gabriele Zerbi, Alessandro Benedetti—and read medical treatises that he acquired at this time, by Valescus de Taranta, Jan Mesue, Hugo Senensis, Jan Ketham, Arnold de Villa Nova, and Michele Savonarola, which would form the embryo of his later medical library. One of the subjects that Copernicus must have studied was astrology, since it was considered an important part of a medical education. However, unlike most other prominent Renaissance astronomers, he appears never to have practiced or expressed any interest in astrology.

As at Bologna, Copernicus did not limit himself to his official studies. It was probably the Padua years that saw the beginning of his Hellenistic interests. He familiarized himself with Greek language and culture with the aid of Theodorus Gaza's grammar (1495) and Johannes Baptista Chrestonius's dictionary (1499), expanding his studies of antiquity, begun at Bologna, to the writings of Bessarion, Lorenzo Valla, and others. There also seems to be evidence that it was during his Padua stay that the idea of basing a new system of the world on the movement of the Earth finally crystallized.

As the time approached for Copernicus to return home, in spring 1503 he journeyed to Ferrara where, on 31 May 1503, having passed the obligatory examinations, he was granted the degree of Doctor of Canon Law (Nicolaus Copernich de Prusia, Jure Canonico ... et doctoratus). No doubt it was soon after (at latest, in fall 1503) that he left Italy for good to return to Warmia.

Copernicus made three observations of Mercury, with errors of −3, −15 and −1 minutes of arc; one of Venus, with an error of −24 minutes; four of Mars, with errors of 2, 20, 77, and 137 minutes; four of Jupiter, with errors of 32, 51, −11 and 25 minutes; and four of Saturn, with errors of 31, 20, 23 and −4 minutes. He also observed a conjunction of Saturn and the Moon on 4 March 1500.
Having completed all his studies in Italy, 30-year-old Copernicus returned to Warmia, where he would live out the remaining 40 years of his life, apart from brief journeys to Kraków and to nearby Prussian cities: Toruń (Thorn), Gdańsk (Danzig), Elbląg (Elbing), Grudziądz (Graudenz), Malbork (Marienburg), Königsberg (Królewiec). The Prince-Bishopric of Warmia enjoyed substantial autonomy, with its own diet (parliament), treasury and monetary unit (the same as in the other parts of Royal Prussia).

Copernicus was his uncle's secretary and physician from 1503 to 1510 (or perhaps until his uncle's death on 29 March 1512) and resided in the Bishop's castle at Lidzbark (Heilsberg), where he began work on his heliocentric theory. In his official capacity, he took part in nearly all his uncle's political, ecclesiastic and administrative-economic duties. From the beginning of 1504, Copernicus accompanied Watzenrode to sessions of the Royal Prussian diet held at Malbork and Elbląg and, write Dobrzycki and Hajdukiewicz, "participated ... in all the more important events in the complex diplomatic game that ambitious politician and statesman played in defense of the particular interests of Prussia and Warmia, between hostility to the [Teutonic] Order and loyalty to the Polish Crown."

In 1504–1512 Copernicus made numerous journeys as part of his uncle's retinue—in 1504, to Toruń and Gdańsk, to a session of the Royal Prussian Council in the presence of Poland's King Alexander Jagiellon; to sessions of the Prussian diet at Malbork (1506), Elbląg (1507) and Sztum (Stuhm) (1512); and he may have attended a Poznań (Posen) session (1510) and the coronation of Poland's King Sigismund I the Old in Kraków (1507). Watzenrode's itinerary suggests that in spring 1509 Copernicus may have attended the Kraków sejm.

It was probably on the latter occasion, in Kraków, that Copernicus submitted for printing at Johann Haller's press his translation, from Greek to Latin, of a collection, by the 7th-century Byzantine historian Theophylact Simocatta, of 85 brief poems called Epistles, or letters, supposed to have passed between various characters in a Greek story. They are of three kinds—"moral", offering advice on how people should live; "pastoral", giving little pictures of shepherd life; and "amorous", comprising love poems. They are arranged to follow one another in a regular rotation of subjects. Copernicus had translated the Greek verses into Latin prose, and he published his version as Theophilacti scolastici Simocati epistolae morales, rurales et amatoriae interpretatione latina, which he dedicated to his uncle in gratitude for all the benefits he had received from him. With this translation, Copernicus declared himself on the side of the humanists in the struggle over the question of whether Greek literature should be revived. Copernicus's first poetic work was a Greek epigram, composed probably during a visit to Kraków, for Johannes Dantiscus's epithalamium for Barbara Zapolya's 1512 wedding to King Sigismund I the Old.

Some time before 1514, Copernicus wrote an initial outline of his heliocentric theory, known only from later transcripts, under the title (perhaps given to it by a copyist) Nicolai Copernici de hypothesibus motuum coelestium a se constitutis commentariolus—commonly referred to as the Commentariolus.
It was a succinct theoretical description of the world's heliocentric mechanism, without mathematical apparatus, and differed in some important details of geometric construction from De revolutionibus; but it was already based on the same assumptions regarding Earth's triple motions. The Commentariolus, which Copernicus consciously saw as merely a first sketch for his planned book, was not intended for printed distribution. He made only a few manuscript copies available to his closest acquaintances, including, it seems, several Kraków astronomers with whom he collaborated in 1515–1530 in observing eclipses. Tycho Brahe would include a fragment from the Commentariolus in his own treatise, Astronomiae instauratae progymnasmata, published in Prague in 1602, based on a manuscript that he had received from the Bohemian physician and astronomer Tadeáš Hájek, a friend of Rheticus. The Commentariolus would appear complete in print for the first time only in 1878.

In 1510 or 1512 Copernicus moved to Frombork, a town to the northwest at the Vistula Lagoon on the Baltic Sea coast. There, in April 1512, he participated in the election of Fabian of Lossainen as Prince-Bishop of Warmia. It was only in early June 1512 that the chapter gave Copernicus an "external curia"—a house outside the defensive walls of the cathedral mount. In 1514 he purchased the northwestern tower within the walls of the Frombork stronghold. He would maintain both these residences to the end of his life, despite the devastation of the chapter's buildings by a raid against Frombork (Frauenburg) carried out by the Teutonic Order in January 1520, during which Copernicus's astronomical instruments were probably destroyed. Copernicus conducted astronomical observations in 1513–1516, presumably from his external curia, and in 1522–1543 from an unidentified "small tower" (turricula), using primitive instruments modeled on ancient ones—the quadrant, triquetrum and armillary sphere. At Frombork, Copernicus conducted over half of his more than 60 registered astronomical observations.

Having settled permanently at Frombork, where he would reside to the end of his life, with interruptions in 1516–1519 and 1520–1521, Copernicus found himself at the Warmia chapter's economic and administrative center, which was also one of Warmia's two chief centers of political life. In the difficult, politically complex situation of Warmia, threatened externally by the Teutonic Order's aggressions (attacks by Teutonic bands; the Polish–Teutonic War of 1519–1521; Albert's plans to annex Warmia) and internally subject to strong separatist pressures (the selection of the prince-bishops of Warmia; currency reform), he, together with part of the chapter, advocated a program of strict cooperation with the Polish Crown and demonstrated in all his public activities (the defense of his country against the Order's plans of conquest; proposals to unify its monetary system with the Polish Crown's; support for Poland's interests in the Warmia dominion's ecclesiastic administration) that he was consciously a citizen of the Polish–Lithuanian Republic. Soon after the death of his uncle, Bishop Watzenrode, Copernicus participated in the signing of the Second Treaty of Piotrków Trybunalski (7 December 1512), governing the appointment of the Bishop of Warmia, declaring himself, despite opposition from part of the chapter, for loyal cooperation with the Polish Crown.
That same year (before 8 November 1512) Copernicus assumed responsibility, as magister pistoriae, for administering the chapter's economic enterprises (he would hold this office again in 1530), having already since 1511 fulfilled the duties of chancellor and visitor of the chapter's estates. His administrative and economic duties did not distract Copernicus, in 1512–1515, from intensive observational activity. The results of his observations of Mars and Saturn in this period, and especially a series of four observations of the Sun made in 1515, led to the discovery of the variability of Earth's eccentricity and of the movement of the solar apogee in relation to the fixed stars, which in 1515–1519 prompted his first revisions of certain assumptions of his system. Some of the observations that he made in this period may have had a connection with a proposed reform of the Julian calendar made in the first half of 1513 at the request of the Bishop of Fossombrone, Paul of Middelburg. Their contacts in this matter in the period of the Fifth Lateran Council were later memorialized in a complimentary mention in Copernicus's dedicatory epistle in Dē revolutionibus orbium coelestium and in a treatise by Paul of Middelburg, Secundum compendium correctionis Calendarii (1516), which mentions Copernicus among the learned men who had sent the Council proposals for the calendar's emendation.

During 1516–1521, Copernicus resided at Olsztyn (Allenstein) Castle as economic administrator of Warmia, including Olsztyn and Pieniężno (Mehlsack). While there, he wrote a manuscript, Locationes mansorum desertorum (Locations of Deserted Fiefs), with a view to populating those fiefs with industrious farmers and so bolstering the economy of Warmia. When Olsztyn was besieged by the Teutonic Knights during the Polish–Teutonic War, Copernicus directed the defense of Olsztyn and Warmia by Royal Polish forces. He also represented the Polish side in the ensuing peace negotiations.

Copernicus for years advised the Royal Prussian sejmik on monetary reform, particularly in the 1520s, when that was a major question in regional Prussian politics. In 1526 he wrote a study on the value of money, Monetae cudendae ratio. In it he formulated an early iteration of the principle now called Gresham's law, that "bad" (debased) coinage drives "good" (undebased) coinage out of circulation—several decades before Thomas Gresham. He also, in 1517, set down a quantity theory of money, a principal concept in modern economics. Copernicus's recommendations on monetary reform were widely read by leaders of both Prussia and Poland in their attempts to stabilize currency.

In 1533, Johann Albrecht Widmannstetter, secretary to Pope Clement VII, explained Copernicus's heliocentric system to the Pope and two cardinals. The Pope was so pleased that he gave Widmannstetter a valuable gift. In 1535 Bernard Wapowski wrote a letter to a gentleman in Vienna, urging him to publish an enclosed almanac, which he claimed had been written by Copernicus. This is the only mention of a Copernicus almanac in the historical records; the "almanac" was likely Copernicus's tables of planetary positions. Wapowski's letter mentions Copernicus's theory about the motions of the Earth. Nothing came of Wapowski's request, because he died a couple of weeks later.

Following the death of Prince-Bishop of Warmia Mauritius Ferber (1 July 1537), Copernicus participated in the election of his successor, Johannes Dantiscus (20 September 1537).
Copernicus was one of four candidates for the post, written in at the initiative of Tiedemann Giese; but his candidacy was actually pro forma, since Dantiscus had earlier been named coadjutor bishop to Ferber and had the backing of Poland's King Sigismund I. At first Copernicus maintained friendly relations with the new Prince-Bishop, assisting him medically in spring 1538 and accompanying him that summer on an inspection tour of chapter holdings. But that autumn their friendship was strained by suspicions over Copernicus's housekeeper, Anna Schilling, whom Dantiscus banished from Frombork in spring 1539.

In his younger days, Copernicus the physician had treated his uncle, brother and other chapter members. In later years he was called upon to attend the elderly bishops who in turn occupied the see of Warmia—Mauritius Ferber and Johannes Dantiscus—and, in 1539, his old friend Tiedemann Giese, Bishop of Chełmno (Kulm). In treating such important patients, he sometimes sought consultations from other physicians, including the physician to Duke Albert and, by letter, the Polish Royal Physician.

In the spring of 1541, Duke Albert—former Grand Master of the Teutonic Order, who had converted the Monastic State of the Teutonic Knights into a Lutheran and hereditary realm, the Duchy of Prussia, upon doing homage to his uncle, the King of Poland, Sigismund I—summoned Copernicus to Königsberg to attend the Duke's counselor, George von Kunheim, who had fallen seriously ill and for whom the Prussian doctors seemed unable to do anything. Copernicus went willingly; he had met von Kunheim during negotiations over reform of the coinage. And Copernicus had come to feel that Albert himself was not such a bad person; the two had many intellectual interests in common. The chapter readily gave Copernicus permission to go, as it wished to remain on good terms with the Duke, despite his Lutheran faith. In about a month the patient recovered, and Copernicus returned to Frombork. For a time, he continued to receive reports on von Kunheim's condition, and to send him medical advice by letter.

Some of Copernicus's close friends turned Protestant, but Copernicus never showed a tendency in that direction. The first attacks on him came from Protestants. Wilhelm Gnapheus, a Dutch refugee settled in Elbląg, wrote a comedy in Latin, Morosophus (The Foolish Sage), and staged it at the Latin school that he had established there. In the play, Copernicus was caricatured as the eponymous Morosophus, a haughty, cold, aloof man who dabbled in astrology, considered himself inspired by God, and was rumored to have written a large work that was moldering in a chest.

Elsewhere, too, Protestants were the first to react to news of Copernicus's theory. Melanchthon wrote: "Some people believe that it is excellent and correct to work out a thing as absurd as did that Sarmatian [i.e., Polish] astronomer who moves the earth and stops the sun. Indeed, wise rulers should have curbed such light-mindedness." Nevertheless, in 1551, eight years after Copernicus's death, the astronomer Erasmus Reinhold published, under the sponsorship of Copernicus's former military adversary, the Protestant Duke Albert, the Prussian Tables, a set of astronomical tables based on Copernicus's work. Astronomers and astrologers quickly adopted it in place of its predecessors.
Some time before 1514 Copernicus made available to friends his "Commentariolus" ("Little Commentary"), a manuscript describing his ideas about the heliocentric hypothesis.[o] It contained seven basic assumptions. Thereafter he continued gathering data for a more detailed work. By about 1532, Copernicus had basically completed his work on the manuscript of Dē revolutionibus orbium coelestium; but despite urging by his closest friends, he resisted openly publishing his views, not wishing—as he confessed—to risk the scorn "to which he would expose himself on account of the novelty and incomprehensibility of his theses."

In 1533, Johann Albrecht Widmannstetter delivered a series of lectures in Rome outlining Copernicus's theory. Pope Clement VII and several Catholic cardinals heard the lectures and were interested in the theory. On 1 November 1536, Cardinal Nikolaus von Schönberg, Archbishop of Capua, wrote to Copernicus from Rome:

"Some years ago word reached me concerning your proficiency, of which everybody constantly spoke. At that time I began to have a very high regard for you ... For I had learned that you had not merely mastered the discoveries of the ancient astronomers uncommonly well but had also formulated a new cosmology. In it you maintain that the earth moves; that the sun occupies the lowest, and thus the central, place in the universe ... Therefore with the utmost earnestness I entreat you, most learned sir, unless I inconvenience you, to communicate this discovery of yours to scholars, and at the earliest possible moment to send me your writings on the sphere of the universe together with the tables and whatever else you have that is relevant to this subject ..."

By then Copernicus's work was nearing its definitive form, and rumors about his theory had reached educated people all over Europe. Despite urgings from many quarters, Copernicus delayed publication of his book, perhaps from fear of criticism—a fear delicately expressed in the subsequent dedication of his masterpiece to Pope Paul III. Scholars disagree on whether Copernicus's concern was limited to possible astronomical and philosophical objections, or whether he was also concerned about religious objections.[p]

Copernicus was still working on De revolutionibus orbium coelestium (even if not certain that he wanted to publish it) when in 1539 Georg Joachim Rheticus, a Wittenberg mathematician, arrived in Frombork. Philipp Melanchthon, a close theological ally of Martin Luther, had arranged for Rheticus to visit several astronomers and study with them. Rheticus became Copernicus's pupil, staying with him for two years and writing a book, Narratio prima (First Account), outlining the essence of Copernicus's theory. In 1542 Rheticus published a treatise on trigonometry by Copernicus (later included as chapters 13 and 14 of Book I of De revolutionibus). Under strong pressure from Rheticus, and having seen the favorable first general reception of his work, Copernicus finally agreed to give De revolutionibus to his close friend Tiedemann Giese, Bishop of Chełmno (Kulm), to be delivered to Rheticus for printing by the German printer Johannes Petreius at Nuremberg (Nürnberg), Germany. While Rheticus initially supervised the printing, he had to leave Nuremberg before it was completed, and he handed over the task of supervising the rest of the printing to a Lutheran theologian, Andreas Osiander.
Osiander added an unauthorized and unsigned preface, defending Copernicus's work against those who might be offended by its novel hypotheses. He argued that "different hypotheses are sometimes offered for one and the same motion [and therefore] the astronomer will take as his first choice that hypothesis which is the easiest to grasp." According to Osiander, "these hypotheses need not be true nor even probable. [I]f they provide a calculus consistent with the observations, that alone is enough."

Toward the close of 1542, Copernicus was seized with apoplexy and paralysis, and he died at age 70 on 24 May 1543. Legend has it that he was presented with the final printed pages of his Dē revolutionibus orbium coelestium on the very day that he died, allowing him to take farewell of his life's work.[q] He is reputed to have awoken from a stroke-induced coma, looked at his book, and then died peacefully.[r]

Copernicus was reportedly buried in Frombork Cathedral, where a 1580 epitaph stood until being defaced; it was replaced in 1735. For over two centuries archaeologists searched the cathedral in vain for Copernicus's remains; efforts to locate them in 1802, 1909 and 1939 had come to nought. In 2004 a team led by Jerzy Gąssowski, head of an archaeology and anthropology institute in Pułtusk, began a new search, guided by the research of historian Jerzy Sikorski. In August 2005, after scanning beneath the cathedral floor, they discovered what they believed to be Copernicus's remains. The discovery was announced only after further research, on 3 November 2008. Gąssowski said he was "almost 100 percent sure it is Copernicus". Forensic expert Capt. Dariusz Zajdel of the Polish Police Central Forensic Laboratory used the skull to reconstruct a face that closely resembled the features—including a broken nose and a scar above the left eye—on a Copernicus self-portrait. The expert also determined that the skull belonged to a man who had died around age 70—Copernicus's age at the time of his death. The grave was in poor condition, and not all the remains of the skeleton were found; missing, among other things, was the lower jaw. DNA from the bones found in the grave matched hair samples taken from a book owned by Copernicus that is kept at the Uppsala University Library in Sweden.

On 22 May 2010, Copernicus was given a second funeral in a Mass led by Józef Kowalczyk, the former papal nuncio to Poland and newly named Primate of Poland. Copernicus's remains were reburied in the same spot in Frombork Cathedral where part of his skull and other bones had been found. A black granite tombstone identifies him as the founder of the heliocentric theory and also a church canon. The tombstone bears a representation of Copernicus's model of the Solar System—a golden Sun encircled by six of the planets.

Copernican system

Philolaus (c. 470 – c. 385 BCE) described an astronomical system in which a Central Fire (different from the Sun) occupied the center of the universe, and a counter-Earth, the Earth, Moon, the Sun itself, planets, and stars all revolved around it, in that order outward from the center. Heraclides Ponticus (387–312 BCE) proposed that the Earth rotates on its axis. Aristarchus of Samos (c. 310 BCE – c. 230 BCE) was the first to advance a theory that the Earth orbited the Sun. Further mathematical details of Aristarchus's heliocentric system were worked out around 150 BCE by the Hellenistic astronomer Seleucus of Seleucia.
Though Aristarchus's original text has been lost, a reference in Archimedes's book The Sand Reckoner (Archimedis Syracusani Arenarius & Dimensio Circuli) describes a work by Aristarchus in which he advanced the heliocentric model. Thomas Heath gives the following English translation of Archimedes's text:

"You are now aware ['you' being King Gelon] that the "universe" is the name given by most astronomers to the sphere the centre of which is the centre of the earth, while its radius is equal to the straight line between the centre of the sun and the centre of the earth. This is the common account (τά γραφόμενα) as you have heard from astronomers. But Aristarchus has brought out a book consisting of certain hypotheses, wherein it appears, as a consequence of the assumptions made, that the universe is many times greater than the "universe" just mentioned. His hypotheses are that the fixed stars and the sun remain unmoved, that the earth revolves about the sun on the circumference of a circle, the sun lying in the middle of the orbit, and that the sphere of the fixed stars, situated about the same centre as the sun, is so great that the circle in which he supposes the earth to revolve bears such a proportion to the distance of the fixed stars as the centre of the sphere bears to its surface." — The Sand Reckoner

In an early unpublished manuscript of De revolutionibus (which still survives), Copernicus mentioned the (non-heliocentric) "moving Earth" theory of Philolaus and the possibility that Aristarchus also had a "moving Earth" theory (though it is unlikely that he was aware that it was a heliocentric theory). He removed both references from his final published manuscript.[c][e] Copernicus was probably aware that the Pythagorean system involved a moving Earth; the Pythagorean system was mentioned by Aristotle. Copernicus also owned a copy of Giorgio Valla's De expetendis et fugiendis rebus, which included a translation of Plutarch's reference to Aristarchus's heliostaticism. In his dedication of On the Revolutions to Pope Paul III—which Copernicus hoped would dampen criticism of his heliocentric theory by "babblers ... completely ignorant of [astronomy]"—Copernicus wrote that, in rereading all of philosophy, he had found in the pages of Cicero and Plutarch references to those few thinkers who dared to move the Earth "against the traditional opinion of astronomers and almost against common sense."

The prevailing theory during Copernicus's lifetime was the one that Ptolemy had published in his Almagest c. 150 CE: the Earth was the stationary center of the universe. Stars were embedded in a large outer sphere that rotated rapidly, approximately daily, while each of the planets, the Sun, and the Moon were embedded in their own, smaller spheres. Ptolemy's system employed devices, including epicycles, deferents and equants, to account for observations that the paths of these bodies differed from simple, circular orbits centered on the Earth.

Beginning in the 10th century, a tradition criticizing Ptolemy developed within Islamic astronomy, culminating in Ibn al-Haytham of Basra's Al-Shukūk 'alā Baṭalamiyūs ("Doubts Concerning Ptolemy"). Several Islamic astronomers questioned the Earth's apparent immobility and centrality within the universe. Some accepted that the earth rotates around its axis, such as Abu Sa'id al-Sijzi (d. c. 1020).
According to al-Biruni, al-Sijzi invented an astrolabe based on a belief held by some of his contemporaries "that the motion we see is due to the Earth's movement and not to that of the sky." That others besides al-Sijzi held this view is further confirmed by an Arabic work of the 13th century which states: "According to the geometers [or engineers] (muhandisīn), the earth is in constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the earth and not the stars."

In the 12th century, Nur ad-Din al-Bitruji proposed a complete alternative to the Ptolemaic system (although not a heliocentric one). He declared the Ptolemaic system an imaginary model that was successful at predicting planetary positions but not real or physical. Al-Bitruji's alternative system spread through most of Europe during the 13th century, with debates over and refutations of his ideas continuing up to the 16th century.

Mathematical techniques developed in the 13th to 14th centuries by Mo'ayyeduddin al-Urdi, Nasir al-Din al-Tusi, and Ibn al-Shatir for geocentric models of planetary motions closely resemble some of those used later by Copernicus in his heliocentric models. Copernicus used what is now known as the Urdi lemma and the Tusi couple in the same planetary models as found in Arabic sources. Furthermore, the exact replacement of the equant by two epicycles used by Copernicus in the Commentariolus was found in an earlier work by Ibn al-Shatir (d. c. 1375) of Damascus. Ibn al-Shatir's lunar and Mercury models are also identical to those of Copernicus. This has led some scholars to argue that Copernicus must have had access to some yet-to-be-identified work on the ideas of those earlier astronomers. However, no likely candidate for this conjectured work has come to light, and other scholars have argued that Copernicus could well have developed these ideas independently of the late Islamic tradition. Nevertheless, Copernicus cited some of the Islamic astronomers whose theories and observations he used in De revolutionibus, namely al-Battani, Thabit ibn Qurra, al-Zarqali, Averroes, and al-Bitruji. It has been suggested that the idea of the Tusi couple may have arrived in Europe leaving few manuscript traces, since it could have occurred without the translation of any Arabic text into Latin. One possible route of transmission may have been through Byzantine science: Gregory Chioniades translated some of al-Tusi's works from Arabic into Byzantine Greek, and several Byzantine Greek manuscripts containing the Tusi couple are still extant in Italy.

Copernicus described his astronomical model in Dē revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), published in the year of his death, 1543. He had formulated his theory by 1510: "He wrote out a short overview of his new heavenly arrangement [known as the Commentariolus, or Brief Sketch], also probably in 1510 [but no later than May 1514], and sent it off to at least one correspondent beyond Varmia [the Latin for "Warmia"]. That person in turn copied the document for further circulation, and presumably the new recipients did, too ...". Copernicus's Commentariolus summarized his heliocentric theory and listed the "assumptions" upon which the theory was based. De revolutionibus itself was divided into six sections or parts, called "books".

Georg Joachim Rheticus could have been Copernicus's successor, but did not rise to the occasion.
Erasmus Reinhold could have been his successor, but died prematurely. The first of the great successors was Tycho Brahe (though he did not think the Earth orbited the Sun), followed by Johannes Kepler, who had collaborated with Tycho in Prague and benefited from Tycho's decades' worth of detailed observational data.

Despite the later near-universal acceptance of the heliocentric idea (though not the epicycles or the circular orbits), Copernicus's theory was originally slow to catch on. Scholars hold that sixty years after the publication of The Revolutions there were only around 15 astronomers espousing Copernicanism in all of Europe: "Thomas Digges and Thomas Harriot in England; Giordano Bruno and Galileo Galilei in Italy; Diego Zuniga in Spain; Simon Stevin in the Low Countries; and in Germany, the largest group—Georg Joachim Rheticus, Michael Maestlin, Christoph Rothmann (who may have later recanted), and Johannes Kepler." Additional possibilities are the Englishman William Gilbert, along with Achilles Gasser, Georg Vogelin, Valentin Otto, and Tiedemann Giese. The Barnabite priest Redento Baranzano supported Copernicus's view in his Uranoscopia (1617) but was forced to retract it.

Arthur Koestler, in his popular book The Sleepwalkers, asserted that Copernicus's book had not been widely read on its first publication. This claim was trenchantly criticized by Edward Rosen,[s] and has been decisively disproved by Owen Gingerich, who examined nearly every surviving copy of the first two editions and found copious marginal notes by their owners throughout many of them. Gingerich published his conclusions in 2004 in The Book Nobody Read.

The intellectual climate of the time "remained dominated by Aristotelian philosophy and the corresponding Ptolemaic astronomy. At that time there was no reason to accept the Copernican theory, except for its mathematical simplicity [by avoiding using the equant in determining planetary positions]." Tycho Brahe's system ("that the earth is stationary, the sun revolves about the earth, and the other planets revolve about the sun") also directly competed with Copernicus's. It was only a half-century later, with the work of Kepler and Galileo, that any substantial evidence defending Copernicanism appeared, starting "from the time when Galileo formulated the principle of inertia ... [which] helped to explain why everything would not fall off the earth if it were in motion." "[Not until] after Isaac Newton formulated the universal law of gravitation and the laws of mechanics [in his 1687 Principia], which unified terrestrial and celestial mechanics, was the heliocentric view generally accepted."

Controversy

The immediate result of the 1543 publication of Copernicus's book was only mild controversy. At the Council of Trent (1545–1563) neither Copernicus's theory nor calendar reform (which would later use tables deduced from Copernicus's calculations) was discussed. It has been much debated why it was not until six decades after the publication of De revolutionibus that the Catholic Church took any official action against it; even the efforts of Tolosani went unheeded. Opposition on the Catholic side commenced only seventy-three years later, occasioned by Galileo. The first notable figure to move against Copernicanism was the Magister of the Holy Palace (i.e., the Catholic Church's chief censor), the Dominican Bartolomeo Spina, who "expressed a desire to stamp out the Copernican doctrine".
But with Spina's death in 1546, his cause fell to his friend, the well-known theologian-astronomer, the Dominican Giovanni Maria Tolosani of the Convent of St. Mark in Florence. Tolosani had written a treatise on reforming the calendar (in which astronomy would play a large role) and had attended the Fifth Lateran Council (1512–1517) to discuss the matter. He had obtained a copy of De revolutionibus in 1544. His denunciation of Copernicanism was written a year later, in 1545, in an appendix to his unpublished work, On the Truth of Sacred Scripture.

Emulating the rationalistic style of Thomas Aquinas, Tolosani sought to refute Copernicanism by philosophical argument. Copernicanism was absurd, according to Tolosani, because it was scientifically unproven and unfounded. First, Copernicus had assumed the motion of the Earth but offered no physical theory from which one would deduce this motion. (No one realized that the investigation into Copernicanism would result in a rethinking of the entire field of physics.) Second, Tolosani charged that Copernicus's thought process was backwards. He held that Copernicus had come up with his idea and then sought phenomena that would support it, rather than observing phenomena and deducing from them the idea of what caused them. In this, Tolosani was linking Copernicus's mathematical equations with the practices of the Pythagoreans, against whom Aristotle had made arguments that were later picked up by Thomas Aquinas. It was argued that mathematical numbers were a mere product of the intellect, without any physical reality, and as such could not provide physical causes in the investigation of nature.

Some astronomical hypotheses at the time (such as epicycles and eccentrics) were seen as mere mathematical devices to adjust calculations of where the heavenly bodies would appear, rather than as explanations of the causes of those motions. (As Copernicus still maintained the idea of perfectly spherical orbits, he relied on epicycles.) This "saving the phenomena" was seen as proof that astronomy and mathematics could not be taken as serious means to determine physical causes. Tolosani invoked this view in his final critique of Copernicus, saying that his biggest error was that he had started with "inferior" fields of science to make pronouncements about "superior" fields. Copernicus had used mathematics and astronomy to postulate about physics and cosmology, rather than beginning with the accepted principles of physics and cosmology to determine things about astronomy and mathematics. Thus Copernicus seemed to be undermining the whole system of the philosophy of science of the time. Tolosani held that Copernicus had fallen into philosophical error because he had not been versed in physics and logic; anyone without such knowledge would make a poor astronomer and be unable to distinguish truth from falsehood. Because Copernicanism had not met the criteria for scientific truth set out by Thomas Aquinas, Tolosani held that it could only be viewed as a wild, unproven theory.

Tolosani recognized that the Ad Lectorem preface to Copernicus's book was not actually by him. Its thesis, that astronomy as a whole would never be able to make truth claims, was rejected by Tolosani (though he still held that Copernicus's attempt to describe physical reality had been faulty); he found it ridiculous that Ad Lectorem had been included in the book (unaware that Copernicus had not authorized its inclusion).
Tolosani wrote: "By means of these words [of the Ad Lectorem], the foolishness of this book's author is rebuked. For by a foolish effort he [Copernicus] tried to revive the weak Pythagorean opinion [that the element of fire was at the center of the Universe], long ago deservedly destroyed, since it is expressly contrary to human reason and also opposes holy writ. From this situation, there could easily arise disagreements between Catholic expositors of holy scripture and those who might wish to adhere obstinately to this false opinion." Tolosani declared: "Nicolaus Copernicus neither read nor understood the arguments of Aristotle the philosopher and Ptolemy the astronomer." He wrote that Copernicus "is expert indeed in the sciences of mathematics and astronomy, but he is very deficient in the sciences of physics and logic. Moreover, it appears that he is unskilled with regard to [the interpretation of] holy scripture, since he contradicts several of its principles, not without danger of infidelity to himself and the readers of his book. ... his arguments have no force and can very easily be taken apart. For it is stupid to contradict an opinion accepted by everyone over a very long time for the strongest reasons, unless the impugner uses more powerful and insoluble demonstrations and completely dissolves the opposed reasons. But he does not do this in the least." Tolosani declared that he had written against Copernicus "for the purpose of preserving the truth to the common advantage of the Holy Church."

Despite this, Tolosani's work remained unpublished, and there is no evidence that it received serious consideration. Robert Westman describes it as becoming a "dormant" viewpoint with "no audience in the Catholic world" of the late sixteenth century, but also notes that there is some evidence that it became known to Tommaso Caccini, who would criticize Galileo in a sermon in December 1613.

Tolosani may have criticized the Copernican theory as scientifically unproven and unfounded, but the theory also conflicted with the theology of the time, as can be seen in a sample of the works of John Calvin. In his Commentary on Genesis, Calvin said that "We indeed are not ignorant that the circuit of the heavens is finite, and that the earth, like a little globe, is placed in the centre." In his commentary on Psalm 93:1 he states that "The heavens revolve daily, and, immense as is their fabric and inconceivable the rapidity of their revolutions, we experience no concussion ... How could the earth hang suspended in the air were it not upheld by God's hand? By what means could it maintain itself unmoved, while the heavens above are in constant rapid motion, did not its Divine Maker fix and establish it."

One sharp point of conflict between Copernicus's theory and the Bible concerned the story of the Battle of Gibeon in the Book of Joshua, in which the Hebrew forces were winning but their opponents were likely to escape once night fell; the escape was averted by Joshua's prayers causing the Sun and the Moon to stand still.

Martin Luther once made a remark about Copernicus, although without mentioning his name. According to Anthony Lauterbach, the topic of Copernicus arose during dinner with Martin Luther on 4 June 1539 (the same year in which Professor Georg Joachim Rheticus of the local university had been granted leave to visit him). Luther is said to have remarked: "So it goes now. Whoever wants to be clever must agree with nothing others esteem. He must do something of his own.
This is what that fellow does who wishes to turn the whole of astronomy upside down. Even in these things that are thrown into disorder I believe the Holy Scriptures, for Joshua commanded the sun to stand still and not the earth." These remarks were made four years before the publication of On the Revolutions of the Heavenly Spheres and a year before Rheticus's Narratio Prima. In John Aurifaber's account of the conversation, Luther calls Copernicus "that fool" rather than "that fellow"; this version is viewed by historians as less reliably sourced.

Luther's collaborator Philipp Melanchthon also took issue with Copernicanism. After receiving the first pages of Narratio Prima from Rheticus himself, Melanchthon wrote to Mithobius (the physician and mathematician Burkard Mithob of Feldkirch) on 16 October 1541, condemning the theory and calling for it to be repressed by governmental force: "certain people believe it is a marvelous achievement to extol so crazy a thing, like that Polish astronomer who makes the earth move and the sun stand still. Really, wise governments ought to repress impudence of mind."

It had appeared to Rheticus that Melanchthon would understand the theory and would be open to it, because Melanchthon had taught Ptolemaic astronomy and had even recommended his friend Rheticus for an appointment to the Deanship of the Faculty of Arts & Sciences at the University of Wittenberg after he had returned from studying with Copernicus. Rheticus's hopes were dashed when, six years after the publication of De revolutionibus, Melanchthon published his Initia Doctrinae Physicae, presenting three grounds for rejecting Copernicanism: "the evidence of the senses, the thousand-year consensus of men of science, and the authority of the Bible". Blasting the new theory, Melanchthon wrote: "Out of love for novelty or in order to make a show of their cleverness, some people have argued that the earth moves. They maintain that neither the eighth sphere nor the sun moves, whereas they attribute motion to the other celestial spheres, and also place the earth among the heavenly bodies. Nor were these jokes invented recently. There is still extant Archimedes's book on The Sand Reckoner; in which he reports that Aristarchus of Samos propounded the paradox that the sun stands still and the earth revolves around the sun. Even though subtle experts institute many investigations for the sake of exercising their ingenuity, nevertheless public proclamation of absurd opinions is indecent and sets a harmful example." Melanchthon went on to cite Bible passages and then declare: "Encouraged by this divine evidence, let us cherish the truth and let us not permit ourselves to be alienated from it by the tricks of those who deem it an intellectual honor to introduce confusion into the arts." In the first edition of Initia Doctrinae Physicae, Melanchthon even questioned Copernicus's character, claiming his motivation was "either from love of novelty or from desire to appear clever"; these more personal attacks were largely removed by the second edition in 1550.

Another Protestant theologian who disparaged heliocentrism on scriptural grounds was John Owen. In a passing remark in an essay on the origin of the sabbath, he characterized "the late hypothesis, fixing the sun as in the centre of the world" as being "built on fallible phenomena, and advanced by many arbitrary presumptions against evident testimonies of Scripture."
In Roman Catholic circles, Copernicus's book was incorporated into scholarly curricula throughout the 16th century. For example, at the University of Salamanca in 1561 it became one of four textbooks that students of astronomy could choose from, and in 1594 it was made mandatory. The German Jesuit Nicolaus Serarius was one of the first Catholics to write against Copernicus's theory as heretical, citing the Joshua passage, in a work published in 1609–1610, and again in a book in 1612.

In his 12 April 1615 letter to a Catholic defender of Copernicus, Paolo Antonio Foscarini, Cardinal Robert Bellarmine condemned Copernican theory, writing: "not only the Holy Fathers, but also the modern commentaries on Genesis, the Psalms, Ecclesiastes, and Joshua, you will find all agreeing in the literal interpretation that the sun is in heaven and turns around the earth with great speed, and that the earth is very far from heaven and sits motionless at the center of the world ... Nor can one answer that this is not a matter of faith, since if it is not a matter of faith 'as regards the topic,' it is a matter of faith 'as regards the speaker': and so it would be heretical to say that Abraham did not have two children and Jacob twelve, as well as to say that Christ was not born of a virgin, because both are said by the Holy Spirit through the mouth of prophets and apostles." One year later, the Roman Inquisition prohibited Copernicus's work. Nevertheless, the Spanish Inquisition never banned De revolutionibus, which continued to be taught at Salamanca.

Perhaps the most influential opponent of the Copernican theory was Francesco Ingoli, a Catholic priest. Ingoli wrote a January 1616 essay to Galileo presenting more than twenty arguments against the Copernican theory. Though "it is not certain, it is probable that he [Ingoli] was commissioned by the Inquisition to write an expert opinion on the controversy" (after the Congregation of the Index's decree against Copernicanism on 5 March 1616, Ingoli was officially appointed its consultant). Galileo himself was of the opinion that the essay played an important role in the rejection of the theory by church authorities, writing in a later letter to Ingoli that he was concerned that people thought the theory was rejected because Ingoli was right.

Ingoli presented five physical arguments against the theory, thirteen mathematical arguments (plus a separate discussion of the sizes of stars), and four theological arguments. The physical and mathematical arguments were of uneven quality, but many of them came directly from the writings of Tycho Brahe, the leading astronomer of the era, whom Ingoli repeatedly cited. These included arguments about the effect of a moving Earth on the trajectory of projectiles, about parallax, and Brahe's argument that the Copernican theory required that stars be absurdly large.

Two of Ingoli's theological issues with the Copernican theory were "common Catholic beliefs not directly traceable to Scripture: the doctrine that hell is located at the center of Earth and is most distant from heaven; and the explicit assertion that Earth is motionless in a hymn sung on Tuesdays as part of the Liturgy of the Hours of the Divine Office prayers regularly recited by priests." Ingoli cited Robert Bellarmine in regard to both of these arguments, and may have been trying to convey to Galileo a sense of Bellarmine's opinion.
Ingoli also cited Genesis 1:14, where God places "lights in the firmament of the heavens to divide the day from the night." Ingoli did not think the central location of the Sun in the Copernican theory was compatible with its being described as one of the lights placed in the firmament. Like previous commentators, Ingoli also pointed to the passages about the Battle of Gibeon. He dismissed arguments that they should be taken metaphorically, saying: "Replies which assert that Scripture speaks according to our mode of understanding are not satisfactory: both because in explaining the Sacred Writings the rule is always to preserve the literal sense, when it is possible, as it is in this case; and also because all the [Church] Fathers unanimously take this passage to mean that the Sun which was truly moving stopped at Joshua's request. An interpretation that is contrary to the unanimous consent of the Fathers is condemned by the Council of Trent, Session IV, in the decree on the edition and use of the Sacred Books. Furthermore, although the Council speaks about matters of faith and morals, nevertheless it cannot be denied that the Holy Fathers would be displeased with an interpretation of Sacred Scriptures which is contrary to their common agreement."

However, Ingoli closed the essay by suggesting that Galileo respond primarily to the better of his physical and mathematical arguments rather than to his theological arguments, writing: "Let it be your choice to respond to this either entirely or in part—clearly at least to the mathematical and physical arguments, and not to all even of these, but to the more weighty ones." When Galileo wrote a letter in reply to Ingoli years later, he in fact addressed only the mathematical and physical arguments.

In March 1616, in connection with the Galileo affair, the Roman Catholic Church's Congregation of the Index issued a decree suspending De revolutionibus until it could be "corrected", in order to ensure that Copernicanism, which it described as a "false Pythagorean doctrine, altogether contrary to the Holy Scripture", would not "creep any further to the prejudice of Catholic truth." The corrections consisted largely of removing or altering wording that spoke of heliocentrism as a fact rather than a hypothesis, and were based largely on work by Ingoli. On the orders of Pope Paul V, Cardinal Robert Bellarmine gave Galileo prior notice that the decree was about to be issued, and warned him that he could not "hold or defend" the Copernican doctrine.[u] The corrections to De revolutionibus, which omitted or altered nine sentences, were issued four years later, in 1620.

In 1633, Galileo Galilei was convicted of grave suspicion of heresy for "following the position of Copernicus, which is contrary to the true sense and authority of Holy Scripture", and was placed under house arrest for the rest of his life. At the instance of Roger Boscovich, the Catholic Church's 1758 Index of Prohibited Books omitted the general prohibition of works defending heliocentrism, but retained the specific prohibitions of the original uncensored versions of De revolutionibus and Galileo's Dialogue Concerning the Two Chief World Systems. Those prohibitions were finally dropped from the 1835 Index.
Languages, name, and nationality

Copernicus is postulated to have spoken Latin, German, and Polish with equal fluency; he also spoke Greek and Italian.[v][w][x][y] The vast majority of Copernicus's extant writings are in Latin, the language of European academia in his lifetime. Arguments for German being Copernicus's native tongue are that he was born into a predominantly German-speaking urban patrician class that used German, alongside Latin, as its language of trade and commerce in written documents, and that, while studying canon law at the University of Bologna in 1496, he signed into the German natio (Natio Germanorum)—a student organization which, according to its 1497 by-laws, was open to students of all kingdoms and states whose mother tongue was German. However, according to the French philosopher Alexandre Koyré, Copernicus's registration with the Natio Germanorum does not in itself imply that Copernicus considered himself German: students from Prussia and Silesia were routinely so categorized, and membership carried certain privileges that made it a natural choice for German-speaking students, regardless of their ethnicity or self-identification.[z][aa]

The surname Kopernik, Copernik, Koppernigk, in various spellings, is recorded in Kraków from c. 1350, apparently given to people from the village of Koperniki (prior to 1845 rendered Kopernik, Copernik, Copirnik, and Koppirnik) in the Duchy of Nysa, 10 km south of Nysa, and now 10 km north of the Polish–Czech border. Nicolaus Copernicus's great-grandfather is recorded as having received citizenship in Kraków in 1386. The toponym Kopernik (modern Koperniki) has been variously tied to the Polish word for "dill" (koper) and the German word for "copper" (Kupfer).[ab] The suffix -nik (or plural -niki) denotes a Slavic and Polish agent noun.

As was common in the period, the spellings of both the toponym and the surname vary greatly. Copernicus "was rather indifferent about orthography". During his childhood, about 1480, the name of his father (and thus of the future astronomer) was recorded in Thorn as Niclas Koppernigk. At Kraków he signed himself, in Latin, Nicolaus Nicolai de Torunia (Nicolaus, son of Nicolaus, of Toruń).[ac] At Bologna, in 1496, he registered in the Matricula Nobilissimi Germanorum Collegii, resp. Annales Clarissimae Nacionis Germanorum, of the Natio Germanica Bononiae, as Dominus Nicolaus Kopperlingk de Thorn – IX grosseti. At Padua he signed himself "Nicolaus Copernik", later "Coppernicus". He thus Latinized his name to Coppernicus, generally with two "p"s (in 23 of 31 documents studied), but later in life he used a single "p". On the title page of De revolutionibus, Rheticus published the name (in the genitive, or possessive, case) as "Nicolai Copernici".

There has been discussion of Copernicus's nationality and of whether it is meaningful to ascribe to him a nationality in the modern sense. Nicolaus Copernicus was born and raised in Royal Prussia, a semiautonomous and multilingual region of the Kingdom of Poland. He was the child of German-speaking parents and grew up with German as his mother tongue. His first alma mater was the University of Kraków in Poland. When he later studied in Italy, at the University of Bologna, he joined the German Nation, a student organization for German-speakers of all allegiances (Germany would not become a nation-state until 1871). His family stood against the Teutonic Order and actively supported the city of Toruń during the Thirteen Years' War.
Copernicus's father lent money to Poland's King Casimir IV Jagiellon to finance the war against the Teutonic Knights, but the inhabitants of Royal Prussia also resisted the Polish crown's efforts for greater control over the region.

Encyclopedia Americana, The Concise Columbia Encyclopedia, The Oxford World Encyclopedia, and World Book Encyclopedia refer to Copernicus as a "Polish astronomer". Sheila Rabin, writing in the Stanford Encyclopedia of Philosophy, describes Copernicus as a "child of a German family [who] was a subject of the Polish crown", while Manfred Weissenbacher writes that Copernicus's father was a Germanized Pole. Andrzej Wojtkowski noted that most of the 19th- and 20th-century encyclopedias, particularly the English-language sources, described Copernicus as a "German scientist". Kasparek and Kasparek stated that it is incorrect to ascribe him German or Polish nationality, as "a 16th century figure cannot be described with the use of 19th and 20th century concepts". No Polish texts by Copernicus survive, owing to the rarity of Polish as a literary language before the writings of the Polish Renaissance poets Mikołaj Rej and Jan Kochanowski (educated Poles had generally written in Latin); but it is known that Copernicus knew Polish on a par with German and Latin.

Historian Michael Burleigh describes the nationality debate as a "totally insignificant battle" between German and Polish scholars during the interwar period. Polish astronomer Konrad Rudnicki calls the discussion a "fierce scholarly quarrel in ... times of nationalism" and describes Copernicus as an inhabitant of a German-speaking territory that belonged to Poland, himself being of mixed Polish-German extraction. Czesław Miłosz describes the debate as an "absurd" projection of a modern understanding of nationality onto Renaissance people, who identified with their home territories rather than with a nation. Similarly, historian Norman Davies writes that Copernicus, as was common in his era, was "largely indifferent" to nationality, being a local patriot who considered himself "Prussian". Miłosz and Davies both write that Copernicus had a German-language cultural background, while his working language was Latin, in accord with the usage of the time. Additionally, according to Davies, "there is ample evidence that he knew the Polish language". Davies concludes: "Taking everything into consideration, there is good reason to regard him both as a German and as a Pole: and yet, in the sense that modern nationalists understand it, he was neither."

Commemoration

The third in NASA's Orbiting Astronomical Observatory series of missions was named Copernicus following its successful launch on 21 August 1972. The satellite carried an X-ray detector and an ultraviolet telescope, and operated until February 1981. Copernicia, a genus of palm trees native to South America and the Greater Antilles, was named after Copernicus in 1837; in some of the species, the leaves are coated with a thin layer of wax, known as carnauba wax.

On 14 July 2009, the discoverers of chemical element 112 (temporarily named ununbium), from the Gesellschaft für Schwerionenforschung in Darmstadt, Germany, proposed to the International Union of Pure and Applied Chemistry (IUPAC) that its permanent name be "copernicium" (symbol Cn). "After we had named elements after our city and our state, we wanted to make a statement with a name that was known to everyone," said Sigurd Hofmann. "We didn't want to select someone who was a German.
We were looking world-wide." The name became official on 19 February 2010, the 537th anniversary of Copernicus's birth. In July 2014 the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced that the winning name for 55 Cancri A was Copernicus. The Copernicus Gesellschaft (Copernicus Society), a German non-profit society founded in February 1988 at the Max Planck Institute for Aeronomy, promotes international collaboration in the geo- and space sciences. The society supports open-access scientific publishing, organizes scientific conferences (including those of the European Geosciences Union and the European Meteorological Society), and presents the Copernicus Medal for "ingenious, innovative work in the geosciences and planetary and space sciences, and in their exceptional promotion and international cooperation". Copernicus is commemorated by the Nicolaus Copernicus Monument in Warsaw, designed by Bertel Thorvaldsen (1822) and completed in 1830, and by Jan Matejko's 1873 painting, Astronomer Copernicus, or Conversations with God. Named for Copernicus are Nicolaus Copernicus University in Toruń; Warsaw's Copernicus Science Centre; the Centrum Astronomiczne im. Mikołaja Kopernika, a principal Polish research institution in astrophysics; Copernicus Hospital in Łódź, Poland's fourth-largest city; and Wrocław's airport, Port lotniczy Wrocław im. Mikołaja Kopernika (in English, Nicolaus Copernicus Wrocław Airport). Numerous contemporary literary and artistic works have also been inspired by Copernicus.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-ignhistory_32-5] | [TOKENS: 10728] |
PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. 3D polygon graphics were placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signalled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed one of the company's hardware engineering divisions and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. He convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. Kutaragi's willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home entertainment systems. Although Kutaragi was nearly fired for working with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and kept him on as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges, in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony concerned a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible, Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving it a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would be the sole beneficiary of licensing related to the music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising that it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced its partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's reversal, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" against native companies turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them with the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research, but ultimately decided to turn what it had developed with Nintendo and Sega into a console of its own, based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony remained opposed to involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that it had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 with Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, exceeding the capabilities of Sony's semiconductor division at the time. Although the proposal gained Ohga's enthusiasm, a majority of those present at the meeting remained opposed, among them older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded Ohga of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and with Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama added that Sony further wanted to emphasise the new console's ability to use Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's North American launch referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others'. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco was particularly interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Signing these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games of the time, and despite Namco being a longstanding Nintendo developer, it had already been confirmed behind closed doors by December 1993 that Ridge Racer would be the PlayStation's first game. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, and Namco subsequently based its System 11 arcade board on PlayStation hardware, developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken following in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own while the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed on receiving early builds of the PlayStation, recalling that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the studio played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked the Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, the owners of SN Systems, had previously supplied development hardware for other platforms such as the Mega Drive, the Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of a condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony abandoned its plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method of designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems went on to produce development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most of the time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers when needed. Sony did not favour its own products over non-Sony ones, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, whereas inexpensive compact disc manufacturing was available at dozens of locations around the world. The PlayStation's architecture and its interoperability with PCs benefited many software developers. The use of the C programming language also proved useful, as it safeguarded software compatibility should further hardware revisions be made. Despite this inherent flexibility, some developers found themselves restricted by the console's limited RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM to be a challenging aspect of development given the 3.5 megabyte restriction.
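That 3.5-megabyte figure is the sum of the machine's separate memory pools. A minimal sketch of the arithmetic follows; the 2 MB of main RAM and 1 MB of video RAM are given in the Hardware section below, while the 512 KB of dedicated sound memory is a commonly cited figure that is assumed here rather than taken from this article.

```c
/* Back-of-envelope sketch of the PlayStation's "3.5 megabyte restriction".
   Main RAM and video RAM sizes are given in the Hardware section of this
   article; the 512 KB of sound (SPU) RAM is an assumed, commonly cited
   figure. */
#include <stdio.h>

int main(void) {
    const struct { const char *pool; int kib; } pools[] = {
        { "main RAM (CPU work memory)",          2048 },
        { "video RAM (framebuffers, textures)",  1024 },
        { "sound RAM (SPU samples; assumed)",     512 },
    };
    int total_kib = 0;
    for (int i = 0; i < 3; i++) {
        printf("%-38s %4d KiB\n", pools[i].pool, pools[i].kib);
        total_kib += pools[i].kib;
    }
    /* 3584 KiB = 3.5 MB: a game's code, geometry, textures and audio all
       had to be juggled across these fixed pools at any one time. */
    printf("%-38s %4d KiB (= 3.5 MB)\n", "total", total_kib);
    return 0;
}
```

Developers could stream data from the CD into these pools during play, but the working set at any instant could never exceed them, which is why RAM allocation loomed so large in contemporary accounts.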
Kutaragi said that while it would have been easy to double the PlayStation's RAM, the development team refrained from doing so to keep the retail cost down. He saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt that he and his team succeeded in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and final design were confirmed at a press conference on 10 May 1994, although the price and release dates were not yet disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues forming at shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on its first day and two million within six months, although the Saturn outsold it in the first few weeks owing to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers paying up to £700 for such consoles. One American games retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the North American release, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At Sega's keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately, to select retailers, at a price of $399. Then came Sony's turn: Olaf Olafsson, president of Sony Interactive Entertainment, summoned SCEA head Steve Race to the conference stage, who said simply "$299" and left to a round of applause. Attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer, and Tekken (1994). Sony also announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, with some critics considering it superior to Sega's counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). Over 100,000 pre-orders had been placed, and 17 games were available on the market at the time of the PlayStation's American launch, compared to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season against Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive of the fourth generation still outsold it. Sony reported an attach rate of four games sold per console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console countrywide, in its PS One model, on 24 January 2002, at a price of Rs 7,990 and with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company's registration of the trademark meant the console could not be launched officially, and the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console had been the Sega Saturn, but after it left the market the PlayStation grew to a base of some 300,000 users by January 2000, even though Sony China had no plans to release it there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who had just grown into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's four button symbols stood in for letters: "Live in Your World. Play in Ours." and "U R Not E" (with a red E, read as "you are not ready"). The four geometric shapes were derived from the symbols on the controller's four face buttons. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say: "Bullshit.
Let me show you how ready I am." As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults moving on from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early-1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tried. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, 52 nightclubs in the United Kingdom had dedicated PlayStation rooms. Glendenning recalled that he had discreetly spent at least £100,000 a year in slush-fund money on such impromptu marketing. In 1996, Sony expanded its CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation outsold the Saturn at a similar ratio in Europe during 1996. Sales of PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64's launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them remained "very close", with neither console leading in sales for any meaningful length of time. By 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance; Sony still held 60% of the overall North American video game market at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales came in lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the millennium: in July 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-math coprocessor on the same die to provide the speed needed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, sampling rates of up to 44.1 kHz, and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM, and has a maximum colour depth of 16.7 million true colours with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video, or RGB video signals through its AV Multi connector (older models also have RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or to link multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video decompression unit, the MDEC, which is integrated into the CPU and allows the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can draw up to 360,000 flat-shaded polygons per second, or 180,000 texture-mapped and light-sourced polygons per second, in addition to some 4,000 sprites. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of connectors on the rear of the unit. This began with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only a single serial port. Sony also marketed a development kit for amateur developers known as the Net Yaroze (roughly "let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications in C.
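The hardware figures above also show why video memory was a scarce resource. A minimal sketch, assuming the commonly documented organisation of the 1 MB of VRAM as a 1024×512 grid of 16-bit pixels (a detail not stated in this article), checks which of the listed display modes leave room for a double-buffered display:

```c
/* Sketch: VRAM budget for double buffering at the display resolutions
   listed above. Assumes the commonly documented 1024x512 layout of
   16-bit pixels for the PlayStation's 1 MB of VRAM (an assumption,
   not a detail given in this article). */
#include <stdio.h>

#define VRAM_PIXELS (1024L * 512L)   /* 524,288 16-bit pixels = 1 MB */

int main(void) {
    const long modes[][2] = { {256, 224}, {320, 240}, {640, 480} };
    for (int i = 0; i < 3; i++) {
        long w = modes[i][0], h = modes[i][1];
        long two_buffers = 2 * w * h;          /* front + back buffer */
        if (two_buffers <= VRAM_PIXELS)
            printf("%3ldx%3ld: buffers take %6ld px, %6ld px left for textures\n",
                   w, h, two_buffers, VRAM_PIXELS - two_buffers);
        else
            printf("%3ldx%3ld: no room to double-buffer (%ld px needed)\n",
                   w, h, two_buffers);
    }
    return 0;
}
```

Under these assumptions the 640×480 mode cannot be double-buffered at all, which would explain why the highest resolutions were generally reserved for largely static screens.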
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square. Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a visual trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no" respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus. The European and North American models of the original controller are roughly 10% larger than their Japanese counterpart, to account for the larger average hand size in those regions. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also carries a thumb-operated digital hat switch, corresponding to the traditional D-pad and used in instances when simple digital movements are necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, to give users finer control over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking the sticks in), the Dual Analog features an "Analog" button and LED beneath the Start and Select buttons, which toggles analogue functionality on or off. The controller also featured rumble support, though Sony decided to remove this haptic feedback from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that Nintendo took any legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, its name deriving from its use of two ("dual") vibration motors ("shock"). Unlike its predecessor, the DualShock has textured rubber grips on its analogue sticks, longer handles, and slightly different shoulder buttons, and includes rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation, including memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory-card peripheral that acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. It proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America. In addition to playing games, most PlayStation models can play audio CDs; the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without a game disc inserted or with the CD tray open, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and the PS One depending on the firmware version: the original PlayStation GUI has a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI has a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and it was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on the original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, owing to the growing popularity of CD-Rs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed which, in conjunction with an augmented optical drive assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so a PlayStation disc's actual content could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency, and therefore produced duplicates that omitted it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000-series models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents that lead to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and creating knock-on effects for the laser assembly. The solution is to sit the console on a surface that dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens-sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates the wear because of the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser tilts and no longer points directly at the CD, after which games no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of television, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.
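The boot-time check implied by the copy-protection scheme described earlier in this section can be sketched in a few lines of C. This is an illustration only: the four-character region codes (SCEI, SCEA, SCEE) come from third-party hardware write-ups rather than from this article, and decode_wobble_code() is a hypothetical stand-in for the drive's pick-up logic.

```c
/* Illustrative sketch of the boot check described above: a pressed disc
   carries a wobble-encoded code in its pregap that doubles as a regional
   lockout; a burned copy carries none, because an ordinary drive cancels
   the wobble out while reading and omits it when writing. The
   SCEI/SCEA/SCEE codes are from third-party documentation, and
   decode_wobble_code() is a hypothetical stand-in for the pick-up. */
#include <stdio.h>
#include <string.h>

static const char *decode_wobble_code(void) {
    /* Stand-in for the optical pick-up; a real console recovers this
       string from the pregap wobble. NULL would mean no modulation was
       detected (e.g. a burned copy). Here we pretend the drive holds a
       pressed Japanese disc. */
    return "SCEI";
}

static int disc_may_boot(const char *console_region) {
    const char *disc_code = decode_wobble_code();
    if (disc_code == NULL)
        return 0;                       /* no wobble: refuse to boot  */
    return strcmp(disc_code, console_region) == 0;  /* region lockout */
}

int main(void) {
    printf("Japanese console: %s\n", disc_may_boot("SCEI") ? "boots" : "refused");
    printf("American console: %s\n", disc_may_boot("SCEA") ? "boots" : "refused");
    return 0;
}
```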
Game library The PlayStation featured a diverse game library that grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. By the time of the PlayStation's discontinuation in 2006, cumulative software shipments stood at 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk, and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum ranges. By the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; the studio's breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives of this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix, and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, PlayStation games in the United States were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel-case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released) and focus testing showed that most consumers preferred it. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim of Entertainment Weekly praised the PlayStation as a technological marvel rivalling the hardware of Sega and Nintendo.
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price of its games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market of the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not Nintendo's primary focus. By the late 1990s, Sony had become a highly regarded console brand thanks to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025[update], with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with profits from the video game division coming to contribute roughly 23% of the company's overall operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led Sega to abandon the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter, and in June Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for the proprietary cartridge format's ability to help enforce copy protection, given its substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published smaller runs of a wider variety of games for the PlayStation as a risk-limiting step, a model that Sony Music had used for audio CDs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games and get them onto the market quickly, something that could not be done with cartridges owing to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games that had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation." The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many developed either by Nintendo itself or by second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system-on-a-chip with four Cortex-A35 processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit; it includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Geographical_Review] | [TOKENS: 353] |
Geographical Review The Geographical Review is a quarterly peer-reviewed academic journal published by Routledge on behalf of the American Geographical Society. It covers all aspects of geography. The editor-in-chief is David H. Kaplan (Kent State University). History In 1852, the American Geographical Society began publishing its first academic journal, the Bulletin [and Journal] of the American Geographical Society. This publication continued through 1915, when it was succeeded by the Geographical Review under the direction of the Society's director, Isaiah Bowman. Influential editors include Gladys M. Wrigley, who served as editor from 1920 to 1949, and Wilma B. Fairchild, who edited the journal from 1949 to 1972. Douglas McManis edited the journal from 1978 until 1995 and was credited with maintaining the legacy of high scholarly standards set by his predecessors. Wrigley-Fairchild Prize The Wrigley-Fairchild Prize was established by the American Geographical Society in 1994 to promote scholarly writing among new scholars published in the Geographical Review. The prize was given every three years to the author of the best article by an early-career scholar published in the most recent three volumes of the Geographical Review; beginning in 2020, it is awarded annually. The prize is named for previous editors Gladys M. Wrigley and Wilma B. Fairchild, who edited the journal for a combined 52 years. Abstracting and indexing The journal is abstracted and indexed in multiple bibliographic databases; according to the Journal Citation Reports, it has a 2018 impact factor of 1.636.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Meta_Platforms#cite_note-YahooFinance-204] | [TOKENS: 8626] |
In the Q3 2021 earnings call on October 25, Mark Zuckerberg discussed the ongoing criticism of the company's social services and the way it operates, and pointed to the company's pivot toward building the metaverse, without mentioning the rebranding or the name change. The metaverse vision and the name change from Facebook, Inc. to Meta Platforms was introduced at Facebook Connect on October 28, 2021. According to Facebook's PR campaign, the name change reflects the company's shift in long-term focus toward building the metaverse, a digital extension of the physical world spanning social media, virtual reality and augmented reality. "Meta" had been registered as a trademark in the United States in 2018 (after an initial filing in 2015) for marketing, advertising, and computer services, by a Canadian company that provided big-data analysis of scientific literature. This company was acquired in 2017 by the Chan Zuckerberg Initiative (CZI), a foundation established by Zuckerberg and his wife, Priscilla Chan, and became one of their projects. Following the rebranding announcement, CZI announced that it had already decided to deprioritize the earlier Meta project; it would therefore transfer its rights to the name to Meta Platforms, and the previous project would end in 2022. Soon after the rebranding, in early February 2022, Meta reported a greater-than-expected decline in profits in the fourth quarter of 2021. It reported no growth in monthly users and indicated it expected revenue growth to stall. It also expected measures taken by Apple Inc. to protect user privacy to cost it some $10 billion in advertising revenue, an amount equal to roughly 8% of its revenue for 2021. In a meeting with Meta staff the day after earnings were reported, Zuckerberg blamed competition for user attention, particularly from video-based apps such as TikTok. The 27% drop in the company's share price in reaction to the news eliminated some $230 billion of value from Meta's market capitalization. Bloomberg described the decline as "an epic rout that, in its sheer scale, is unlike anything Wall Street or Silicon Valley has ever seen". Zuckerberg's net worth fell by as much as $31 billion. Zuckerberg owns 13% of Meta, and the holding makes up the bulk of his wealth. According to reports published by Bloomberg on March 30, 2022, Meta turned over data such as phone numbers, physical addresses, and IP addresses to hackers posing as law enforcement officials using forged documents. The law enforcement requests sometimes included forged signatures of real or fictional officials. When asked about the allegations, a Meta representative said, "We review every data request for legal sufficiency and use advanced systems and processes to validate law enforcement requests and detect abuse." In June 2022, Sheryl Sandberg, the chief operating officer of 14 years, announced she would step down that year. Zuckerberg said that Javier Olivan would replace Sandberg, though in a "more traditional" role. In March 2022, Facebook and Instagram (though not Meta-owned WhatsApp) were banned in Russia and added to the Russian list of terrorist and extremist organizations for alleged Russophobia and hate speech, including alleged calls for genocide, amid the ongoing Russian invasion of Ukraine. Meta appealed against the ban, but it was upheld by a Moscow court in June of the same year. Also in March 2022, Meta and Italian eyewear giant Luxottica released Ray-Ban Stories, a series of smart glasses that could play music and take pictures. 
Meta and Luxottica parent company EssilorLuxottica declined to disclose sales figures for the product line as of September 2022, though Meta has expressed satisfaction with its customer feedback. In July 2022, Meta saw its first year-on-year revenue decline, with total revenue slipping by 1% to $28.8 billion. Analysts and journalists attributed the decline to its advertising business, which has been limited by Apple's App Tracking Transparency feature and the number of people who have opted out of being tracked by Meta apps. Zuckerberg also attributed the decline to increasing competition from TikTok. On October 27, 2022, Meta's market value dropped to $268 billion, a loss of around $700 billion compared to 2021, and its shares fell by 24%. It lost its spot among the top 20 US companies by market cap, despite having reached the top 5 in the previous year. In November 2022, Meta laid off 11,000 employees, 13% of its workforce. Zuckerberg said the decision to aggressively increase Meta's investments had been a mistake, as he had wrongly predicted that the surge in e-commerce would last beyond the COVID-19 pandemic. He also attributed the decline to increased competition, a global economic downturn and "ads signal loss". Plans to lay off a further 10,000 employees began in April 2023. The layoffs were part of a general downturn in the technology industry, alongside layoffs by companies including Google, Amazon, Tesla, Snap, Twitter and Lyft. Starting in 2022, Meta scrambled to catch up to other tech companies in adopting specialized artificial intelligence hardware and software. It had been using less expensive CPUs instead of GPUs for AI work, but that approach turned out to be less efficient. The company gave the Inter-university Consortium for Political and Social Research $1.3 million to finance the Social Media Archive, which aims to make its data available for social science research. In 2023, Ireland's Data Protection Commissioner imposed a record EUR 1.2 billion fine on Meta for transferring data from Europe to the United States without adequate protections for EU citizens. In March 2023, Meta announced a new round of layoffs that would cut 10,000 employees and close 5,000 open positions to make the company more efficient. Meta's revenue surpassed analyst expectations for the first quarter of 2023 after it announced an increased focus on AI. On July 6, Meta launched a new app, Threads, a competitor to Twitter. Meta announced its artificial intelligence model Llama 2 in July 2023, made available for commercial use via partnerships with major cloud providers such as Microsoft. It was the first project to be unveiled by Meta's generative AI group after it was set up in February. Meta would not charge for access or usage, instead operating on an open model that allows it to learn what improvements need to be made. Prior to this announcement, Meta had said it had no plans to release Llama 2 for commercial use; an earlier version of Llama had been released only to academics. In August 2023, Meta announced the permanent removal of news content from Facebook and Instagram in Canada due to the Online News Act, which requires that Canadian news outlets be compensated for content shared on its platforms. The Online News Act was in effect by year-end, but Meta declined to participate in the regulatory process. In October 2023, Zuckerberg said that AI would be Meta's biggest investment area in 2024. Meta finished 2023 as one of the best-performing technology stocks of the year, with its share price up 150 percent. 
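Because Llama 2 was released as an open-weight model rather than a paid API, as noted above, running it looks like loading any other checkpoint. Below is a minimal sketch using the Hugging Face transformers library; the library choice and checkpoint name are this example's assumptions (access to the meta-llama weights requires accepting Meta's license on Hugging Face first), not details given in the text.

```python
# Minimal text-generation sketch; assumes `pip install transformers accelerate`
# and that your Hugging Face account has been granted access to the weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one sentence, what is an open-weight language model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```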
Its stock reached an all-time high in January 2024, bringing Meta within 2% of a $1 trillion market capitalization. In November 2023, Meta Platforms launched an ad-free subscription service in Europe, allowing subscribers to opt out of having their personal data collected for targeted advertising. A group of 28 European organizations, including Max Schrems' advocacy group NOYB, the Irish Council for Civil Liberties, Wikimedia Europe, and the Electronic Privacy Information Center, signed a 2024 letter to the European Data Protection Board (EDPB) expressing concern that this subscription model would undermine privacy protections, specifically GDPR data protection standards. Meta removed the Facebook and Instagram accounts of Iran's Supreme Leader Ali Khamenei in February 2024, citing repeated violations of its Dangerous Organizations & Individuals policy. As of March 2024, Meta was under investigation by the FDA for alleged use of its social media platforms to sell illegal drugs. On 16 May 2024, the European Commission began an investigation into Meta over concerns related to child safety. In May 2023, Iraqi social media influencer Esaa Ahmed-Adnan encountered a troubling issue when Instagram removed his posts, citing copyright violations, even though his content was original and free of copyrighted material. He discovered that extortionists were behind the takedowns, offering to restore his content for $3,000 or to provide ongoing protection for $1,000 per month. This scam, which exploited Meta's rights management tools, became widespread in the Middle East, revealing a gap in Meta's enforcement in developing regions. Aws al-Saadi, founder of the Iraqi nonprofit Tech4Peace, helped Ahmed-Adnan and others, but the restoration process was slow, leading to significant financial losses for many victims, including prominent figures such as Ammar al-Hakim. The situation highlighted Meta's challenges in balancing global growth with effective content moderation and protection. On 16 September 2024, Meta announced it had banned Russian state media outlets from its platforms worldwide due to concerns about "foreign interference activity." This decision followed allegations that RT and its employees funneled $10 million through shell companies to secretly fund influence campaigns on various social media channels. Meta's actions were part of a broader effort to counter Russian covert influence operations, which had intensified since the invasion of Ukraine. At its 2024 Connect conference, Meta presented Orion, its first pair of augmented reality glasses. Though Orion was originally intended to be sold to consumers, the manufacturing process turned out to be too complex and expensive, and the company instead pivoted to producing a small number of the glasses for internal use. On 4 October 2024, Meta announced its new AI model, Movie Gen, capable of generating realistic video and audio clips from user prompts. Meta stated it would not release Movie Gen for open development, preferring to collaborate directly with content creators and integrate it into its products the following year. The model was built using a combination of licensed and publicly available datasets. On October 31, 2024, ProPublica published an investigation into deceptive political advertisement scams that sometimes use hundreds of hijacked profiles and Facebook pages run by organized networks of scammers. The authors cited spotty enforcement by Meta as a major reason for the extent of the issue. 
In November 2024, TechCrunch reported that Meta was considering building a $10 billion global underwater cable spanning 25,000 miles. In the same month, Meta closed down two million accounts on Facebook and Instagram that were linked to scam centers in Myanmar, Laos, Cambodia, the Philippines, and the United Arab Emirates running pig-butchering scams. In December 2024, Meta announced that, beginning February 2025, advertisers running ads about financial services in Australia would be required to verify information about the beneficiary and the payer, in a bid to curb scams. On December 4, 2024, Meta announced it would invest US$10 billion in its largest AI data center, in northeast Louisiana, powered by natural gas facilities. On the 11th of that month, Meta experienced a global outage affecting accounts on all of its social media and messaging applications; outage reports on DownDetector reached 70,000+ and 100,000+ within minutes for Instagram and Facebook, respectively. In January 2025, Meta announced plans to roll back its diversity, equity, and inclusion (DEI) initiatives, citing shifts in the "legal and policy landscape" in the United States following the 2024 presidential election. The decision followed reports that CEO Mark Zuckerberg sought to align the company more closely with the incoming Trump administration, including through changes to content moderation policies and executive leadership. The new content moderation policies continued to bar insults about a person's intellect or mental illness, but made an exception to allow calling LGBTQ people mentally ill because they are gay or transgender. Later that month, Meta agreed to pay $25 million to settle a 2021 lawsuit brought by Donald Trump over the suspension of his social media accounts after the January 6 riots. Changes to Meta's moderation policies were controversial among its oversight board, with a significant divide in opinion between the board's US conservatives and its global members. In June 2025, Meta decided to make a multibillion-dollar investment in the artificial intelligence startup Scale AI; the financing could exceed $10 billion in value, which would make it one of the largest private-company funding events of all time. In October 2025, it was announced that Meta would lay off 600 employees in its artificial intelligence unit, which leadership described as "bloated", in an effort to make the unit leaner and simpler. The layoffs affected Meta's AI infrastructure teams, its Fundamental Artificial Intelligence Research unit (FAIR), and other product-related positions. Mergers and acquisitions Meta has acquired multiple companies (often identified as talent acquisitions). One of its first major acquisitions was in April 2012, when it acquired Instagram for approximately US$1 billion in cash and stock. In October 2013, Facebook, Inc. acquired Onavo, an Israeli mobile web analytics company. In February 2014, Facebook, Inc. announced it would buy mobile messaging company WhatsApp for US$19 billion in cash and stock. The acquisition was completed on October 6. Later that year, Facebook bought Oculus VR, which released its first consumer virtual reality headset in 2016, for $2.3 billion in cash and stock. In late November 2019, Facebook, Inc. announced the acquisition of the game developer Beat Games, responsible for developing one of that year's most popular VR games, Beat Saber. 
In late 2022, after Facebook, Inc. rebranded to Meta Platforms, Inc., Oculus was rebranded as Meta Quest. In May 2020, Facebook, Inc. announced it had acquired Giphy for a reported cash price of $400 million, to be integrated with the Instagram team. However, in August 2021, the UK's Competition and Markets Authority (CMA) stated that Facebook, Inc. might have to sell Giphy, after an investigation found that the deal between the two companies would harm competition in the display advertising market. Facebook, Inc. was fined $70 million by the CMA for deliberately failing to report all information regarding the acquisition and the ongoing antitrust investigation. In October 2022, the CMA ruled for a second time that Meta be required to divest Giphy, stating that Meta already controlled half of the advertising in the UK. Meta agreed to the sale, though it stated that it disagreed with the decision itself. In May 2023, Giphy was divested to Shutterstock for $53 million. In November 2020, Facebook, Inc. announced that it planned to purchase Kustomer, a customer-service platform and chatbot startup, to encourage companies to use its platform for business. Kustomer was reportedly valued at slightly over $1 billion. The deal closed in February 2022 after regulatory approval. In September 2022, Meta acquired Lofelt, a Berlin-based haptic tech startup. In December 2025, it was announced that Meta had acquired the AI-wearables startup Limitless. In the same month, it also acquired another AI startup, Manus AI, for $2 billion. Manus announced in December that its platform had reached $100 million in recurring revenue just eight months after its launch, and Meta said it would scale the platform to many other businesses. In January 2026, it was announced that Meta's proposed acquisition of Manus was undergoing preliminary scrutiny by Chinese regulators; the examination concerns the cross-border transfer of artificial intelligence technology developed in China. Lobbying In 2020, Facebook, Inc. spent $19.7 million on lobbying, hiring 79 lobbyists. In 2019, it had spent $16.7 million on lobbying and had a team of 71 lobbyists, up from $12.6 million and 51 lobbyists in 2018. Facebook was the largest spender of lobbying money among the Big Tech companies in 2020. The lobbying team includes top congressional aide John Branscome, hired in September 2021 to help the company fend off threats from Democratic lawmakers and the Biden administration. In December 2024, Meta donated $1 million to the inauguration fund for then-President-elect Donald Trump. In 2025, Meta was listed among the donors funding the construction of the White House State Ballroom. Partnerships In February 2026, Meta announced a long-term partnership with Nvidia. Censorship In August 2024, Mark Zuckerberg sent a letter to Jim Jordan indicating that during the COVID-19 pandemic the Biden administration had repeatedly asked Meta to limit certain COVID-19 content, including humor and satire, on Facebook and Instagram. In 2016, Meta hired Jordana Cutler, formerly an employee at the Israeli Embassy to the United States, as its policy chief for Israel and the Jewish Diaspora. In this role, Cutler pushed for the censorship of accounts belonging to Students for Justice in Palestine chapters in the United States. Critics have said that Cutler's position gives the Israeli government undue influence over Meta policy, and that few countries have such high levels of contact with Meta policymakers. 
Following Donald Trump's return to the presidency in 2025, various sources noted possible censorship of content related to the Democratic Party on Instagram and other Meta platforms. In February 2025, a Meta representative flagged journalist Gil Duran's article and other "critiques of tech industry figures" as spam or sensitive content, limiting their reach. In March 2025, Meta attempted to block former employee Sarah Wynn-Williams from promoting or further distributing her memoir, Careless People, which includes allegations of unaddressed sexual harassment in the workplace by senior executives. The New York Times reported that the arbitration was among Meta's most forceful attempts to suppress a former employee's account of workplace dynamics. Publisher Macmillan reacted to the ruling by the Emergency International Arbitral Tribunal by stating that it would ignore its provisions. As of 15 March 2025[update], hardback and digital versions of Careless People were being offered for sale by major online retailers. From October 2025, Meta began removing and restricting access to accounts and pages related to LGBTQ issues, reproductive health and abortion information on its platforms. Martha Dimitratou, executive director of Repro Uncensored, called Meta's shadow-banning of these issues "one of the biggest waves of censorship we are seeing". Disinformation concerns Since its inception, Meta has been accused of hosting fake news and misinformation. In the wake of the 2016 United States presidential election, Zuckerberg began to take steps to reduce the prevalence of fake news, as the platform had been criticized for its potential influence on the outcome of the election. The company initially partnered with ABC News, the Associated Press, FactCheck.org, Snopes and PolitiFact for its fact-checking initiative; as of 2018, it had over 40 fact-checking partners across the world, including The Weekly Standard. A May 2017 review by The Guardian found that the platform's fact-checking initiatives of partnering with third-party fact-checkers and publicly flagging fake news were regularly ineffective, and appeared to be having minimal impact in some cases. In 2018, journalists working as fact-checkers for the company criticized the partnership, stating that it had produced minimal results and that the company had ignored their concerns. In 2024, Meta's decision to continue to disseminate a falsified video of US president Joe Biden, even after it had been proven to be fake, attracted criticism and concern. In January 2025, Meta ended its use of third-party fact-checkers in favor of a user-run community notes system similar to the one used on X. While Zuckerberg supported these changes, saying that the amount of censorship on the platform had been excessive, the decision drew criticism from fact-checking organizations, which said the changes would make it more difficult for users to identify misinformation. Meta also faced criticism for weakening its policies on hate speech that were designed to protect minorities and LGBTQ+ individuals from bullying and discrimination. While moving its content review teams from California to Texas, Meta changed its hateful conduct policy to eliminate restrictions on anti-LGBT and anti-immigrant hate speech, as well as explicitly allowing users to accuse LGBT people of being mentally ill or abnormal based on their sexual orientation or gender identity. 
In January 2025, Meta faced significant criticism for its role in removing LGBTQ+ content from its platforms, amid its broader efforts to address anti-LGBTQ+ hate speech. The removal of LGBTQ+ themes was noted as part of a wider crackdown on content deemed to violate its community guidelines. Meta's content moderation policies, which were designed to combat harmful speech and protect users from discrimination, inadvertently led to the removal or restriction of LGBTQ+ content, particularly posts highlighting LGBTQ+ identities, support, or political issues. According to reports, LGBTQ+ posts, including those that simply celebrated pride or advocated for LGBTQ+ rights, were flagged and removed for reasons that critics argued were vague or inconsistently applied. Many LGBTQ+ activists and users on Meta's platforms expressed concern that such actions stifled visibility and expression, potentially isolating LGBTQ+ individuals and communities, especially in spaces that had historically been important for outreach and support. Lawsuits Numerous lawsuits have been filed against the company, both when it was known as Facebook, Inc. and as Meta Platforms. In March 2020, the Office of the Australian Information Commissioner (OAIC) sued Facebook for serious and repeated breaches of privacy law in connection with the Cambridge Analytica scandal. Each violation of the Privacy Act carries a theoretical penalty of up to $1.7 million. The OAIC estimated that a total of 311,127 Australians had been exposed. On December 8, 2020, the U.S. Federal Trade Commission, 46 states (excluding Alabama, Georgia, South Carolina, and South Dakota), the District of Columbia and the territory of Guam launched Federal Trade Commission v. Facebook, an antitrust lawsuit against Facebook. The lawsuit concerned Facebook's acquisition of two competitors, Instagram and WhatsApp, and the ensuing monopolistic situation. The FTC alleged that Facebook held monopolistic power in the U.S. social networking market and sought to force the company to divest Instagram and WhatsApp to break up the conglomerate. William Kovacic, a former chairman of the Federal Trade Commission, argued the case would be difficult to win, as it would require the government to construct a counterfactual internet in which the Facebook-WhatsApp-Instagram entity did not exist, and prove that the combination harmed competition or consumers. In November 2025, it was ruled that Meta did not violate antitrust laws and held no monopoly in the market. On December 24, 2021, a court in Russia fined Meta $27 million after the company declined to remove unspecified banned content. The fine was reportedly tied to the company's annual revenue in the country. In May 2022, a lawsuit was filed in Kenya against Meta and its local outsourcing company Sama, alleging poor working conditions in Kenya for workers moderating Facebook posts. According to the lawsuit, 260 screeners were declared redundant with unclear reasoning. The lawsuit seeks financial compensation and an order that outsourced moderators be given the same health benefits and pay scale as Meta employees. In June 2022, eight lawsuits were filed across the U.S. alleging that excessive exposure to platforms including Facebook and Instagram has led to attempted or actual suicides, eating disorders and sleeplessness, among other issues. The litigation followed a former Facebook employee's testimony in Congress that the company had refused to take responsibility. 
The company noted that tools had been developed for parents to keep track of their children's activity on Instagram and set time limits, in addition to Meta's "Take a break" reminders. The company also said it was providing resources specific to eating disorders and developing AI to prevent children under the age of 13 from signing up for Facebook or Instagram. In June 2022, Meta settled a lawsuit with the US Department of Justice. The lawsuit, which was filed in 2019, alleged that the company enabled housing discrimination through targeted advertising, as it allowed homeowners and landlords to run housing ads excluding people based on sex, race, religion, and other characteristics. The U.S. Department of Justice stated that this was in violation of the Fair Housing Act. Meta was handed a penalty of $115,054 and given until December 31, 2022, to shutter the algorithmic tool. In January 2023, Meta was fined €390 million for violations of the European Union General Data Protection Regulation. In May 2023, the European Data Protection Board fined Meta a record €1.2 billion for breaching European Union data privacy laws by transferring personal data of Facebook users to servers in the U.S. In July 2024, Meta agreed to pay the state of Texas US$1.4 billion to settle a lawsuit brought by Texas Attorney General Ken Paxton accusing the company of collecting users' biometric data without consent, setting a record for the largest privacy-related settlement ever obtained by a state attorney general. In October 2024, Meta Platforms faced lawsuits in Japan from 30 plaintiffs who claimed they were defrauded by fake investment ads on Facebook and Instagram featuring false celebrity endorsements; the plaintiffs sought approximately $2.8 million in damages. In April 2025, the Kenyan High Court ruled that a US$2.4 billion lawsuit, in which three plaintiffs claim that Facebook inflamed civil violence in Ethiopia in 2021, could proceed. Also in April 2025, Meta was fined €200 million ($230 million) for breaking the Digital Markets Act by imposing a "consent or pay" system that forces users either to allow their personal data to be used to target advertisements or to pay a subscription fee for advertising-free versions of Facebook and Instagram. In late April 2025, a case was filed against Meta in Ghana over the alleged psychological distress experienced by content moderators employed to take down disturbing social media content, including depictions of murders, extreme violence and child sexual abuse. Meta had moved the moderation service to the Ghanaian capital of Accra after legal issues at its previous location in Kenya. The new moderation contractor is Teleperformance, a multinational corporation with a history of workers' rights violations. Reports suggest conditions there are worse than at the previous Kenyan location, with many workers afraid to speak out for fear of having to return to conflict zones. Workers reported mental illness, suicide attempts, and low pay. On 26 January 2026, a case was filed in a New Mexico state court alleging that Mark Zuckerberg approved allowing minors to access artificial intelligence chatbot companions that safety staffers had warned were capable of sexual interactions. In 2020, the company UReputation, which had been involved in several cases concerning the management of digital armies[clarification needed], filed a lawsuit against Facebook, accusing it of unlawfully transmitting personal data to third parties. 
Legal actions were initiated in Tunisia, France, and the United States. In 2025, the United States District Court for the Northern District of Georgia approved a discovery procedure, allowing UReputation to access documents and evidence held by Meta. Structure Meta's key management consists of: As of October 2022[update], Meta had 83,553 employees worldwide. As of June 2024[update], Meta's board consisted of the following directors: Meta Platforms is mainly owned by institutional investors, who hold around 80% of all shares, while insiders control the majority of voting shares. The three largest individual investors in 2024 were Mark Zuckerberg, Sheryl Sandberg and Christopher K. Cox. The largest shareholders in late 2024/early 2025 were: Roger McNamee, an early Facebook investor and Zuckerberg's former mentor, said Facebook had "the most centralized decision-making structure I have ever encountered in a large company". Facebook co-founder Chris Hughes has stated that chief executive officer Mark Zuckerberg has too much power, that the company is now a monopoly, and that, as a result, it should be split into multiple smaller companies. In an op-ed in The New York Times, Hughes said he was concerned that Zuckerberg had surrounded himself with a team that did not challenge him, and that it is the U.S. government's job to hold him accountable and curb his "unchecked power". He also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. European Union Commissioner for Competition Margrethe Vestager stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems. Revenue Facebook ranked No. 34 in the 2020 Fortune 500 list of the largest United States corporations by revenue, with almost $86 billion in revenue, most of it coming from advertising. One analysis of 2017 data determined that the company earned US$20.21 per user from advertising. According to New York magazine, since its rebranding Meta has reportedly lost $500 billion as a result of new privacy measures put in place by companies such as Apple and Google that prevent Meta from gathering users' data. In February 2015, Facebook announced it had reached two million active advertisers, with most of the gain coming from small businesses. An active advertiser was defined as an entity that had advertised on the Facebook platform in the last 28 days. In March 2016, Facebook announced it had reached three million active advertisers, with more than 70% from outside the United States. Prices for advertising follow a variable pricing model based on auctioned ad placements and the potential engagement level of the advertisement itself. As with other online advertising platforms such as Google and Twitter, ad targeting is one of the chief merits of digital advertising compared to traditional media. Marketing on Meta is employed through two methods based on the viewing habits, likes and shares, and purchasing data of the audience: targeted audiences and "lookalike" audiences. The U.S. IRS challenged the valuation Facebook used when it transferred IP from the U.S. to Facebook Ireland (now Meta Platforms Ireland) in 2010 (which Facebook Ireland then revalued higher before charging out) as it was building its double Irish tax structure. The case is ongoing, and Meta faces a potential fine of $3–5 billion. The U.S. Tax Cuts and Jobs Act of 2017 changed Facebook's global tax calculations. 
Meta Platforms Ireland is subject to the U.S. GILTI tax of 10.5% on global intangible profits (i.e., Irish profits). On the basis that Meta Platforms Ireland Limited is paying some Irish tax, the effective minimum US tax for Facebook Ireland would be circa 11%. In contrast, Meta Platforms Inc. would incur a special IP tax rate of 13.125% (the FDII rate) if its Irish business relocated to the U.S. Tax relief in the U.S. (21% vs. the Irish GILTI rate) and accelerated capital expensing would make this effective U.S. rate around 12%. The insignificance of the U.S./Irish tax difference was demonstrated when Facebook moved 1.5 billion non-EU accounts to the U.S. to limit exposure to GDPR. Facilities Users outside of the U.S. and Canada contract with Meta's Irish subsidiary, Meta Platforms Ireland Limited (formerly Facebook Ireland Limited), allowing Meta to avoid US taxes for all users in Europe, Asia, Australia, Africa and South America. Meta makes use of the double Irish arrangement, which allows it to pay 2–3% corporation tax on all international revenue. In 2010, Facebook opened its fourth office, in Hyderabad, India, which houses online advertising and developer support teams and provides support to users and advertisers. In India, Meta is registered as Facebook India Online Services Pvt Ltd. It also has offices or planned sites in Chittagong, Bangladesh; Dublin, Ireland; and Austin, Texas, among other cities. Facebook opened its London headquarters in 2017 in Fitzrovia in central London. Facebook opened an office in Cambridge, Massachusetts, in 2018. The offices were initially home to the "Connectivity Lab", a group focused on bringing Internet access to those who do not have it. In April 2019, Facebook opened its Taiwan headquarters in Taipei. In March 2022, Meta opened new regional headquarters in Dubai. In September 2023, it was reported that Meta had paid £149 million to British Land to break the lease on its Triton Square office in London, with 18 years reportedly left on the lease. As of 2023, Facebook operated 21 data centers. It committed to purchasing 100% renewable energy and to reducing its greenhouse gas emissions by 75% by 2020. Its data center technologies include Fabric Aggregator, a distributed network system that accommodates larger regions and varied traffic patterns. Reception US Representative Alexandria Ocasio-Cortez responded in a tweet to Zuckerberg's announcement about Meta, saying: "Meta as in 'we are a cancer to democracy metastasizing into a global surveillance and propaganda machine for boosting authoritarian regimes and destroying civil society ... for profit!'" Frances Haugen, the ex-Facebook employee and whistleblower behind the Facebook Papers, responded to the rebranding efforts by expressing doubts about the company's ability to improve while led by Mark Zuckerberg, and urged the chief executive officer to resign. In November 2021, a video published by Inspired by Iceland went viral, in which a Zuckerberg look-alike promoted the Icelandverse, a place of "enhanced actual reality without silly looking headsets". In a December 2021 interview, SpaceX and Tesla chief executive officer Elon Musk said he could not see a compelling use-case for the VR-driven metaverse, adding: "I don't see someone strapping a frigging screen to their face all day." In January 2022, Louise Eccles of The Sunday Times logged into the metaverse with the intention of making a video guide. She wrote: Initially, my experience with the Oculus went well. 
I attended work meetings as an avatar and tried an exercise class set in the streets of Paris. The headset enabled me to feel the thrill of carving down mountains on a snowboard and the adrenaline rush of climbing a mountain without ropes. Yet switching to the social apps, where you mingle with strangers also using VR headsets, it was at times predatory and vile. Eccles described being sexually harassed by another user, as well as "accents from all over the world, American, Indian, English, Australian, using racist, sexist, homophobic and transphobic language". She also encountered users as young as 7 years old on the platform, despite Oculus headsets being intended for users over 13. See also References External links 37°29′06″N 122°08′54″W / 37.48500°N 122.14833°W / 37.48500; -122.14833 |
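The GILTI/FDII comparison in the Revenue discussion above reduces to simple arithmetic. The sketch below reproduces it using only the rates the text gives; the two adjustment figures are illustrative stand-ins for "some Irish tax paid" and "relief plus accelerated expensing", chosen solely to land on the circa-11% and circa-12% effective rates stated in the text, and are not a real tax computation.

```python
GILTI_RATE = 0.105    # U.S. tax on global intangible (Irish) profits
FDII_RATE = 0.13125   # special U.S. IP rate if the Irish business relocated

irish_top_up = 0.005  # assumed Irish tax actually paid -> "circa 11%"
us_relief = 0.011     # assumed relief + accelerated expensing -> "around 12%"

effective_irish_route = GILTI_RATE + irish_top_up
effective_us_route = FDII_RATE - us_relief

print(f"Stay in Ireland: ~{effective_irish_route:.1%}")  # ~11.0%
print(f"Relocate to US:  ~{effective_us_route:.1%}")     # ~12.0%
# The gap is about one percentage point, which is why the text calls
# the U.S./Irish tax difference insignificant.
```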
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Qastina] | [TOKENS: 1278] |
Contents Qastina Qastina (Arabic: قسطينة) was a Palestinian village, located 38 kilometers northeast of Gaza City. It was depopulated during the 1948 Arab-Israeli war. Location Qastina was situated on an elevated spot in a generally flat area on the coastal plain, on the highway between al-Majdal and the Jerusalem-Jaffa highway. A British military camp, Beer Tuvia, was 3 km southwest of the village. History Qastina was incorporated into the Ottoman Empire in 1517 with the rest of Palestine, and by the 1596 tax records, it was a village in the nahiya (subdistrict) of Gaza under the liwa' (district) of Gaza, with a population of 55 households and 15 bachelors, an estimated 385 persons. All the villagers were Muslim. They paid a fixed tax rate of 33.3% on a number of crops, including wheat, barley and sesame, and fruits, as well as goats, beehives and vineyards; a total of 13,100 akçe. 5/6 of the revenue went to a Muslim charitable endowment. The Syrian Sufi teacher and traveller Mustafa al-Bakri al-Siddiqi (1688-1748/9) reported travelling through the village in the first half of the eighteenth century,[dubious – discuss] on his way to al-Masmiyya al-Kabira. In 1838, Edward Robinson saw el-Kustineh located northwest of Tell es-Safi, where he was staying, and noted it as a Muslim village in the Gaza district. In 1863, the French explorer Victor Guérin visited the village, called Kasthineh, and found it had four hundred inhabitants. Near the mouth of a well were the remains of an antique gray-white marble column, while two palm trees and three acacia mimosas shaded the cemetery. An Ottoman village list of about 1870 showed that Qastina had 152 houses and a population of 469, though the population count included men only. In 1882, the PEF's Survey of Western Palestine described Qastina as a village laid out in a northwest–southeast direction on flat ground. It had adobe brick structures, a well, and gardens. In the 1922 census of Palestine conducted by the British Mandate authorities, the village had a population of 406 inhabitants, all Muslims, increasing in the 1931 census to an all-Muslim population of 593 in 147 houses. The villagers had a mosque, and in 1936 an elementary school was started, which was shared with the neighbouring village of Tall al-Turmus. By the mid-1940s the school had 161 students. In 1939, Kfar Warburg was established on what was traditionally village land, 3 km southwest of the village site. During the Second World War, the village played host to many elements of the Allied forces, including the HQ of the Australian 6th Division. By the 1945 statistics, the population was 890, all Muslims, with a total of 12,019 dunams of land. The villagers lived mostly off agriculture; in addition, they raised animals and poultry, and worked in the nearby British military camp (Beer Tuvia). In 1944–45, a total of 235 dunams was used for citrus and bananas and 7,317 dunams for cereals; 770 dunams were irrigated or used for orchards, while 37 dunams were built-up (urban) land. Qastina was in the territory allotted to the Arab state under the 1947 UN Partition Plan. Upon Israel's declaration of independence on 15 May 1948, the armies of neighbouring Arab states invaded, prompting fresh evacuations of civilians fearful of being caught up in the fighting. 
The women and children of Qastina were sent away to the village of Tell es-Safi by the menfolk at this time, but they returned after discovering there was insufficient water in the host village to meet the newcomers' needs. A preparatory order for the conquest of Qastina and other neighbouring villages (Masmiya al Kabira, Masmiya al Saghira, al Tina and Tall al Turmus) was drafted by the Giv'ati Brigade's 51st Battalion and produced on 29 June 1948. According to Benny Morris, the document recommended "the 'liquidation' (hisul) of the two Masmiya villages and 'burning' (bi'ur) the rest." On 9 July 1948, the village and its over 147 houses were completely destroyed by Israeli forces after its inhabitants fled an assault by the Givati Brigade in Operation An-Far. Qastina was used as a rallying point by the IDF's 7th Battalion of the 8th Armored Brigade after the failed attack on Iraq al-Manshiyya, part of the Israeli drive to open a route to the Negev during Operation Yoav. In early 1949, Quaker relief workers reported that many of those living in tents in what became Maghazi refugee camp had come from Qastina. Following the war, the area was incorporated into the State of Israel, and four villages were later established on the lands of Qastina: Arugot and Kfar Ahim were founded in 1949, after the village had been destroyed, followed by Avigdor in 1950 and Kiryat Malakhi in 1951. Be'er Tuvia, which was also known by the name Qastina after its establishment in 1887, lies adjacent. In 1992, Walid Khalidi noted of Qastina that: "All that remains is the debris of houses strewn across the site. The research team investigating the current status of the depopulated villages visited the site and found it overgrown with bushes and tall grasses that were about 2m high." Nowadays, Qastina is the popular name for Malakhi Junction. See also References Bibliography External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:PlayStation_4_Pro_enhanced_games] | [TOKENS: 94] |
Category:PlayStation 4 Pro enhanced games Video games in this category are those that have been released or will be released on the PlayStation 4 and have enhancements for the PlayStation 4 Pro, an updated version of the PlayStation 4 with support for 4K rendering. Contents Pages in category "PlayStation 4 Pro enhanced games" The following 200 pages are in this category, out of approximately 326 total. This list may not reflect recent changes. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/File:Panorama_of_United_States_Supreme_Court_Building_at_Dusk.jpg] | [TOKENS: 802] |
File:Panorama of United States Supreme Court Building at Dusk.jpg Summary Licensing Quality image File history Click on a date/time to view the file as it appeared at that time. File usage The following 26 pages use this file: Global file usage The following other wikis use this file: Metadata This file contains additional information, probably added from the digital camera or scanner used to create or digitize it. If the file has been modified from its original state, some details may not fully reflect the modified file. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Social_network#cite_ref-:0_80-1] | [TOKENS: 5247] |
Contents Social network A social network is a social structure consisting of a set of social actors (such as individuals or organizations), networks of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities along with a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine network dynamics. For instance, social network analysis has been used to study the spread of misinformation on social media platforms and to analyze the influence of key figures in social networks. Social networks and their analysis form an inherently interdisciplinary academic field which emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations". Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s. Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science. Overview The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units; see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, rather than through the properties of those units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored, although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics. History In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society"). 
Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors. Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction, and examined the likelihood of interaction in loosely knit networks rather than groups. Major developments in the field occurred in the 1930s, when several groups in psychology, anthropology, and mathematics were working independently. In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski, Alfred Radcliffe-Brown, and Claude Lévi-Strauss. A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes, J. Clyde Mitchell and Elizabeth Bott Spillius, is often credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom. Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis. In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure. Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provided a strong impetus for analyzing the relational ties of social units with his work on social exchange theory. By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis. Mark Granovetter and Barry Wellman are among the former students of White who elaborated and championed the analysis of social networks. Beginning in the late 1990s, social network analysis saw new work by sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, James H. Fowler, and others, who developed and applied new models and methods to emerging data about online social networks, as well as "digital traces" regarding face-to-face networks. Levels of analysis In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system. These patterns become more apparent as network size increases. However, a global network analysis of, for example, all interpersonal relationships in the world is not feasible and would likely contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis. 
The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level. At the micro-level, social network research typically begins with an individual, snowballing as social relationships are traced, or may begin with a small group of individuals in a particular social context.
Dyadic level: A dyad is a social relationship between two individuals. Network research on dyads may concentrate on the structure of the relationship (e.g. multiplexity, strength), social equality, and tendencies toward reciprocity/mutuality.
Triadic level: Add one individual to a dyad, and you have a triad. Research at this level may concentrate on factors such as balance and transitivity, as well as social equality and tendencies toward reciprocity/mutuality. In the balance theory of Fritz Heider the triad is the key to social dynamics. The discord in a rivalrous love triangle is an example of an unbalanced triad, likely to change to a balanced triad by a change in one of the relations. The dynamics of social friendships in society has been modeled by balancing triads. The study is carried forward with the theory of signed graphs; a short computational sketch of balance checking appears below.
Actor level: The smallest unit of analysis in a social network is an individual in their social setting, i.e., an "actor" or "ego". Ego-network analysis focuses on network characteristics such as size, relationship strength, density, centrality, prestige, and roles such as isolates, liaisons, and bridges. Such analyses are most commonly used in the fields of psychology or social psychology, ethnographic kinship analysis, or other genealogical studies of relationships between individuals.
Subset level: Subset levels of network research problems begin at the micro-level, but may cross over into the meso-level of analysis. Subset level research may focus on distance and reachability, cliques, cohesive subgroups, or other group actions or behavior.
In general, meso-level theories begin with a population size that falls between the micro- and macro-levels. However, meso-level may also refer to analyses that are specifically designed to reveal connections between micro- and macro-levels. Meso-level networks are low density and may exhibit causal processes distinct from interpersonal micro-level networks.
Organizations: Formal organizations are social groups that distribute tasks for a collective goal. Network research on organizations may focus on either intra-organizational or inter-organizational ties in terms of formal or informal relationships. Intra-organizational networks themselves often contain multiple levels of analysis, especially in larger organizations with multiple branches, franchises or semi-autonomous departments. In these cases, research is often conducted at a work group level and organization level, focusing on the interplay between the two structures. Experiments with networked groups online have documented ways to optimize group-level coordination through diverse interventions, including the addition of autonomous agents to the groups.
Randomly distributed networks: Exponential random graph models of social networks became state-of-the-art methods of social network analysis in the 1980s.
This framework has the capacity to represent social-structural effects commonly observed in many human social networks, including general degree-based structural effects as well as reciprocity and transitivity, and, at the node level, homophily and attribute-based activity and popularity effects, as derived from explicit hypotheses about dependencies among network ties. Parameters are given in terms of the prevalence of small subgraph configurations in the network and can be interpreted as describing the combinations of local social processes from which a given network emerges. These probability models for networks on a given set of actors allow generalization beyond the restrictive dyadic independence assumption of micro-networks, allowing models to be built from theoretical structural foundations of social behavior.
Scale-free networks: A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. In network theory a scale-free ideal network is a random network with a degree distribution that unravels the size distribution of social groups. Specific characteristics of scale-free networks vary with the theories and analytical tools used to create them; in general, however, scale-free networks have some common characteristics. One notable characteristic is the relative commonness of vertices with a degree that greatly exceeds the average. The highest-degree nodes are often called "hubs", and may serve specific purposes in their networks, although this depends greatly on the social context. Another general characteristic of scale-free networks is the clustering coefficient distribution, which decreases as the node degree increases. This distribution also follows a power law. The Barabási–Albert model of network evolution is an example of a process that generates a scale-free network; a short generative sketch appears below.
Rather than tracing interpersonal interactions, macro-level analyses generally trace the outcomes of interactions, such as economic or other resource transfer interactions over a large population.
Large-scale networks: Large-scale network is a term somewhat synonymous with "macro-level". It is primarily used in social and behavioral sciences, and in economics. Originally, the term was used extensively in the computer sciences (see large-scale network mapping).
Complex networks: Most larger social networks display features of social complexity, which involves substantial non-trivial features of network topology, with patterns of complex connections between elements that are neither purely regular nor purely random (see complexity science, dynamical systems, and chaos theory), as do biological and technological networks. Such complex network features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure (see stochastic block model), and hierarchical structure. In the case of agency-directed networks these features also include reciprocity, triad significance profile (TSP; see network motif), and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features.
Theoretical links Various theoretical frameworks have been imported for the use of social network analysis. The most prominent of these are graph theory, balance theory, social comparison theory, and, more recently, the social identity approach.
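Balance theory, mentioned above, has a compact computational statement: a complete triad is balanced exactly when the product of its three edge signs is positive. The following sketch is a minimal illustration in plain Python, not a method drawn from this entry; the actors and signed ties are invented for the example.

```python
# Minimal sketch of Heider's balance rule on a small signed network.
from itertools import combinations

# Hypothetical signed ties: +1 = friendship, -1 = rivalry.
signs = {
    ("A", "B"): +1, ("B", "C"): +1, ("A", "C"): -1,  # a rivalrous love triangle
    ("B", "D"): -1, ("C", "D"): -1,                  # B and C share a rival
}

def sign(u, v):
    """Return the tie sign regardless of the order the pair was stored in."""
    return signs.get((u, v)) or signs.get((v, u))

actors = {a for pair in signs for a in pair}
for triad in combinations(sorted(actors), 3):
    edge_signs = [sign(u, v) for u, v in combinations(triad, 2)]
    if None in edge_signs:
        continue  # ignore triads that are missing a tie
    product = edge_signs[0] * edge_signs[1] * edge_signs[2]
    print(triad, "balanced" if product > 0 else "unbalanced")
```

Run on this toy data, the rivalrous love triangle (A, B, C) comes out unbalanced, while (B, C, D), two friends sharing a rival, comes out balanced, matching the verbal account above.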
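The scale-free pattern described above can likewise be illustrated generatively. This is a minimal sketch assuming the third-party networkx library is available; the network size, attachment parameter, and random seed are arbitrary illustrative choices, not values from the literature surveyed here.

```python
# Minimal sketch: preferential attachment yields hubs and a heavy-tailed
# degree distribution, the signature of a scale-free network.
import networkx as nx

G = nx.barabasi_albert_graph(n=2000, m=2, seed=42)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print("mean degree:", sum(degrees) / len(degrees))  # ~2*m for large n
print("largest degrees (hubs):", degrees[:5])       # far above the mean
print("average clustering:", nx.average_clustering(G))
```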
Few complete theories have been produced from social network analysis; two exceptions are structural role theory and heterophily theory. The basis of heterophily theory was the finding in one study that more numerous weak ties can be important in seeking information and innovation, because cliques have a tendency to have more homogeneous opinions as well as to share many common traits. This homophilic tendency was the reason the members of the clique were attracted to one another in the first place. However, being similar, each member of the clique would also know more or less what the other members knew. To find new information or insights, members of the clique have to look beyond the clique to their other friends and acquaintances. This is what Granovetter called "the strength of weak ties". Structural holes In the context of networks, social capital exists where people have an advantage because of their location in a network. Contacts in a network provide information, opportunities and perspectives that can be beneficial to the central player in the network. Most social structures tend to be characterized by dense clusters of strong connections. Information within these clusters tends to be rather homogeneous and redundant. Non-redundant information is most often obtained through contacts in different clusters. When two separate clusters possess non-redundant information, there is said to be a structural hole between them. Thus, a network that bridges structural holes will provide network benefits that are to some degree additive, rather than overlapping. An ideal network has a vine-and-cluster structure, providing access to many different clusters and structural holes. Networks rich in structural holes are a form of social capital in that they offer information benefits. The main player in a network that bridges structural holes is able to access information from diverse sources and clusters. For example, in business networks, this is beneficial to an individual's career because they are more likely to hear of job openings and opportunities if their network spans a wide range of contacts in different industries or sectors. This concept is similar to Mark Granovetter's theory of weak ties, which rests on the basis that having a broad range of contacts is most effective for job attainment. Structural holes have been widely applied in social network analysis, resulting in applications in a wide range of practical scenarios as well as machine learning-based social prediction (a computational sketch of Burt's constraint measure appears at the end of this entry). Research clusters Research has used network analysis to examine networks created when artists are exhibited together in museum exhibitions. Such networks have been shown to affect an artist's recognition in history and historical narratives, even when controlling for the artist's individual accomplishments. Other work examines how network grouping of artists can affect an individual artist's auction performance. An artist's status has been shown to increase when associated with higher-status networks, though this association has diminishing returns over an artist's career. In J.A. Barnes' day, a "community" referred to a specific geographic location, and studies of community ties had to do with who talked, associated, traded, and attended church with whom. Today, however, there are extended "online" communities developed through telecommunications devices and social network services. Such devices and services require extensive and ongoing maintenance and analysis, often using network science methods.
Community development studies, today, also make extensive use of such methods. Complex networks require methods specific to modelling and interpreting social complexity and complex adaptive systems, including techniques of dynamic network analysis. Mechanisms such as dual-phase evolution explain how temporal changes in connectivity contribute to the formation of structure in social networks. The study of social networks is being used to examine the nature of interdependencies between actors and the ways in which these are related to outcomes of conflict and cooperation. Areas of study include cooperative behavior among participants in collective actions such as protests; promotion of peaceful behavior, social norms, and public goods within communities through networks of informal governance; the role of social networks in both intrastate conflict and interstate conflict; and social networking among politicians, constituents, and bureaucrats. In criminology and urban sociology, much attention has been paid to the social networks among criminal actors. For example, murders can be seen as a series of exchanges between gangs. Murders can be seen to diffuse outwards from a single source, because weaker gangs cannot afford to kill members of stronger gangs in retaliation, but must commit other violent acts to maintain their reputation for strength. Diffusion of ideas and innovations studies focus on the spread and use of ideas from one actor to another or from one culture to another. This line of research seeks to explain why some become "early adopters" of ideas and innovations, and links social network structure with facilitating or impeding the spread of an innovation. A case in point is the social diffusion of linguistic innovation such as neologisms. Experiments and large-scale field trials (e.g., by Nicholas Christakis and collaborators) have shown that cascades of desirable behaviors can be induced in social groups, in settings as diverse as Honduran villages, Indian slums, and the laboratory. Still other experiments have documented the experimental induction of social contagion of voting behavior, emotions, risk perception, and commercial products. In demography, the study of social networks has led to new sampling methods for estimating and reaching populations that are hard to enumerate (for example, homeless people or intravenous drug users). For example, respondent-driven sampling is a network-based sampling technique that relies on respondents to a survey recommending further respondents. The field of sociology focuses almost entirely on networks of outcomes of social interactions. More narrowly, economic sociology considers behavioral interactions of individuals and groups through social capital and social "markets". Sociologists, such as Mark Granovetter, have developed core principles about the interactions of social structure, information, ability to punish or reward, and trust that frequently recur in their analyses of political, economic and other institutions. Granovetter examines how social structures and social networks can affect economic outcomes like hiring, price, productivity and innovation, and describes sociologists' contributions to analyzing the impact of social structure and networks on the economy.
Analysis of social networks is increasingly incorporated into health care analytics, not only in epidemiological studies but also in models of patient communication and education, disease prevention, mental health diagnosis and treatment, and in the study of health care organizations and systems. Human ecology is an interdisciplinary and transdisciplinary study of the relationship between humans and their natural, social, and built environments. The scientific philosophy of human ecology has a diffuse history with connections to geography, sociology, psychology, anthropology, zoology, and natural ecology. In the study of literary systems, network analysis has been applied by Anheier, Gerhards and Romo, De Nooy, Senekal, and Lotker to study various aspects of how literature functions. The basic premise is that polysystem theory, which has been around since the writings of Even-Zohar, can be integrated with network theory, and the relationships between different actors in the literary network (e.g., writers, critics, publishers, and literary histories) can be mapped using visualization from SNA. Organizational network research studies formal and informal organizational relationships, organizational communication, economic exchange, and other resource transfers. Social networks have also been used to examine how organizations interact with each other, characterizing the many informal connections that link executives together, as well as associations and connections between individual employees at different organizations. Many organizational social network studies focus on teams. Within team network studies, research assesses, for example, the predictors and outcomes of centrality and power, density and centralization of team instrumental and expressive ties, and the role of between-team networks. Intra-organizational networks have been found to affect organizational commitment, organizational identification, and interpersonal citizenship behavior. Social capital is a form of economic and cultural capital in which social networks are central, transactions are marked by reciprocity, trust, and cooperation, and market agents produce goods and services not mainly for themselves, but for a common good. Social capital is split into three dimensions: the structural, the relational and the cognitive dimension. The structural dimension describes how partners interact with each other and which specific partners meet in a social network. Also, the structural dimension of social capital indicates the level of ties among organizations. This dimension is highly connected to the relational dimension, which refers to trustworthiness, norms, expectations and identifications of the bonds between partners. The relational dimension explains the nature of these ties, which is mainly illustrated by the level of trust accorded to the network of organizations. The cognitive dimension analyses the extent to which organizations share common goals and objectives as a result of their ties and interactions. Social capital is a sociological concept about the value of social relations and the role of cooperation and confidence in achieving positive outcomes. The term refers to the value one can get from their social ties. For example, newly arrived immigrants can make use of their social ties to established migrants to acquire jobs they may otherwise have trouble getting (e.g., because of unfamiliarity with the local language). A positive relationship exists between social capital and the intensity of social network use.
In a dynamic framework, higher activity in a network feeds into higher social capital, which itself encourages more activity. Another research cluster focuses on brand image and promotional strategy effectiveness, taking into account the impact of customer participation on sales and brand image. This is gauged through techniques such as sentiment analysis, which rely on mathematical areas of study such as data mining and analytics. This area of research produces vast numbers of commercial applications, as the main goal of any study is to understand consumer behavior and drive sales. In many organizations, members tend to focus their activities inside their own groups, which stifles creativity and restricts opportunities. A player whose network bridges structural holes has an advantage in detecting and developing rewarding opportunities. Such a player can mobilize social capital by acting as a "broker" of information between two clusters that otherwise would not have been in contact, thus providing access to new ideas, opinions and opportunities. British philosopher and political economist John Stuart Mill writes, "it is hardly possible to overrate the value of placing human beings in contact with persons dissimilar to themselves.... Such communication [is] one of the primary sources of progress." Thus, a player with a network rich in structural holes can add value to an organization through new ideas and opportunities. This, in turn, helps an individual's career development and advancement. A social capital broker also reaps control benefits from being the facilitator of information flow between contacts. Full communication with exploratory mindsets and information exchange generated by dynamically alternating positions in a social network promotes creative and deep thinking. In the case of consulting firm Eden McCallum, the founders were able to advance their careers by bridging their connections with former big-three consulting firm consultants and mid-size industry firms. By bridging structural holes and mobilizing social capital, players can advance their careers by executing new opportunities between contacts. There has been research that both substantiates and refutes the benefits of information brokerage. A study of high-tech Chinese firms by Zhixing Xiao found that the control benefits of structural holes are "dissonant to the dominant firm-wide spirit of cooperation and the information benefits cannot materialize due to the communal sharing values" of such organizations. However, this study only analyzed Chinese firms, which tend to have strong communal sharing values. Information and control benefits of structural holes are still valuable in firms that are not quite as inclusive and cooperative on the firm-wide level. In 2004, Ronald Burt studied 673 managers who ran the supply chain for one of America's largest electronics companies. He found that managers who often discussed issues with other groups were better paid, received more positive job evaluations and were more likely to be promoted. Thus, bridging structural holes can be beneficial to an organization, and in turn, to an individual's career. Computer networks combined with social networking software produce a new medium for social interaction. A relationship over a computerized social networking service can be characterized by context, direction, and strength. The content of a relation refers to the resource that is exchanged.
In a computer-mediated communication context, social pairs exchange different kinds of information, including sending a data file or a computer program as well as providing emotional support or arranging a meeting. With the rise of electronic commerce, information exchanged may also correspond to exchanges of money, goods or services in the "real" world. Social network analysis methods have become essential to examining these types of computer-mediated communication. In addition, the sheer size and the volatile nature of social media have given rise to new network metrics. A key concern with networks extracted from social media is the lack of robustness of network metrics given missing data. Under the pattern of homophily, ties between people are most likely to occur between nodes that are most similar to each other; likewise, under neighborhood segregation, individuals are most likely to inhabit the same regional areas as other individuals who are like them. Therefore, social networks can be used as a tool to measure the degree of segregation or homophily within a social network. Social networks can be used both to simulate the process of homophily and to measure the level of exposure of different groups to each other within a current social network of individuals in a certain area.
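A standard way to quantify homophily of this kind is the attribute assortativity coefficient, which is +1 when ties occur only within groups, near 0 for random mixing, and negative for disassortative mixing. The sketch below assumes the networkx library is available; the toy friendship network and its "group" attribute are invented for illustration.

```python
# Minimal sketch: measuring homophily as assortative mixing by attribute.
import networkx as nx

G = nx.Graph()
G.add_nodes_from([1, 2, 3, 4], group="x")
G.add_nodes_from([5, 6, 7, 8], group="y")
G.add_edges_from([(1, 2), (2, 3), (3, 4), (1, 4),   # within-group ties
                  (5, 6), (6, 7), (7, 8), (5, 8),   # within-group ties
                  (4, 5)])                          # a single cross-group tie

r = nx.attribute_assortativity_coefficient(G, "group")
print(f"assortativity by group: {r:.2f}")  # close to +1: strong homophily
```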
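Relatedly, the structural-hole ideas discussed earlier in this entry can be made operational with Burt's constraint and effective size measures, both of which networkx implements. In this invented two-cluster example, the broker spanning the clusters shows the lowest constraint (it bridges a structural hole) and no redundant contacts, while members buried inside a cluster score much higher constraint.

```python
# Minimal sketch: locating a structural-hole broker with Burt's measures.
import networkx as nx

# Hypothetical network: two dense clusters joined only through a broker.
G = nx.Graph()
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # cluster A
                  ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),   # cluster B
                  ("broker", "a1"), ("broker", "b1")])        # bridging ties

constraint = nx.constraint(G)       # low constraint = spans structural holes
effective = nx.effective_size(G)    # non-redundant contacts per node

for node in sorted(G, key=constraint.get):
    print(f"{node:>7}: constraint={constraint[node]:.3f}, "
          f"effective size={effective[node]:.2f}")
```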
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Category:PlayStation_5_games] | [TOKENS: 109] |
Category:PlayStation 5 games This category includes articles on Sony PlayStation 5 games. Subcategories This category has the following 10 subcategories, out of 10 total. Pages in category "PlayStation 5 games" The following 200 pages are in this category, out of approximately 1,507 total.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Software_performance_analysis] | [TOKENS: 1437] |
Contents Profiling (computer programming) In software engineering, profiling (program profiling, software profiling) is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). Profilers may use a number of different techniques, such as event-based, statistical, instrumented, and simulation methods. Gathering program events Profilers use a wide variety of techniques to collect data, including hardware interrupts, code instrumentation, instruction set simulation, operating system hooks, and performance counters. Use of profilers Program analysis tools are extremely important for understanding program behavior. Computer architects need such tools to evaluate how well programs will perform on new architectures. Software writers need tools to analyze their programs and identify critical sections of code. Compiler writers often use such tools to find out how well their instruction scheduling or branch prediction algorithm is performing... — ATOM, PLDI The output of a profiler may be a statistical summary of the events observed (a "profile") or a stream of recorded events (a "trace"). A profiler can be applied to an individual method or at the scale of a module or program, to identify performance bottlenecks by making long-running code obvious. A profiler can be used to understand code from a timing point of view, with the objective of optimizing it to handle various runtime conditions or various loads. Profiling results can be ingested by a compiler that provides profile-guided optimization. Profiling results can be used to guide the design and optimization of an individual algorithm; the Krauss matching wildcards algorithm is an example. Profilers are built into some application performance management systems that aggregate profiling data to provide insight into transaction workloads in distributed applications. History Performance-analysis tools existed on IBM/360 and IBM/370 platforms from the early 1970s, usually based on timer interrupts which recorded the program status word (PSW) at set timer intervals to detect "hot spots" in executing code. This was an early example of sampling (see below). In early 1974 instruction-set simulators permitted full trace and other performance-monitoring features. Profiler-driven program analysis on Unix dates back to 1973, when Unix systems included a basic tool, prof, which listed each function and how much of program execution time it used. In 1982 gprof extended the concept to a complete call graph analysis. In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM (Analysis Tools with OM). The ATOM platform converts a program into its own profiler: at compile time, it inserts code into the program to be analyzed. That inserted code outputs analysis data. This technique, modifying a program to analyze itself, is known as "instrumentation". In 2004 both the gprof and ATOM papers appeared on the list of the 50 most influential PLDI papers for the 20-year period ending in 1999. Profiler types based on output Flat profilers compute the average call times from the calls, and do not break down call times by callee or context.
Call graph profilers show the call times and frequencies of the functions, and also the call chains involved, based on the callee. In some tools full context is not preserved. Input-sensitive profilers add a further dimension to flat or call-graph profilers by relating performance measures to features of the input workloads, such as input size or input values. They generate charts that characterize how an application's performance scales as a function of its input. Data granularity in profiler types Profilers, which are also programs themselves, analyze target programs by collecting information on the target program's execution. Based on their data granularity, which depends upon how profilers collect information, they are classified as event-based or statistical profilers. Profilers interrupt program execution to collect information. Those interrupts can limit time measurement resolution, which implies that timing results should be taken with a grain of salt. Basic block profilers report a number of machine clock cycles devoted to executing each line of code, or timing based on adding those together; the timings reported per basic block may not reflect a difference between cache hits and misses. Event-based profilers are available for many programming languages, including Java, the .NET languages, Python, and Ruby. Statistical profilers, by contrast, operate by sampling (a minimal sketch appears at the end of this entry). A sampling profiler probes the target program's call stack at regular intervals using operating system interrupts. Sampling profiles are typically less numerically accurate and specific, providing only a statistical approximation, but allow the target program to run at near full speed. "The actual amount of error is usually more than one sampling period. In fact, if a value is n times the sampling period, the expected error in it is the square-root of n sampling periods." In practice, sampling profilers can often provide a more accurate picture of the target program's execution than other approaches, as they are not as intrusive to the target program and thus don't have as many side effects (such as on memory caches or instruction decoding pipelines). Also, since they don't affect the execution speed as much, they can detect issues that would otherwise be hidden. They are also relatively immune to over-evaluating the cost of small, frequently called routines or "tight" loops. They can show the relative amount of time spent in user mode versus interruptible kernel mode, such as system call processing. Unfortunately, running kernel code to handle the interrupts incurs a minor loss of CPU cycles from the target program, diverts cache usage, and cannot distinguish the various tasks occurring in uninterruptible kernel code (microsecond-range activity) from user code. Dedicated hardware can do better: ARM Cortex-M3 and some recent MIPS processors' JTAG interfaces have a PCSAMPLE register, which samples the program counter in a truly undetectable manner, allowing non-intrusive collection of a flat profile. Some commonly used statistical profilers for Java/managed code are SmartBear Software's AQtime and Microsoft's CLR Profiler. Those profilers also support native code profiling, along with Apple Inc.'s Shark (OS X), OProfile (Linux), Intel VTune and Parallel Amplifier (part of Intel Parallel Studio), and Oracle Performance Analyzer, among others. Instrumentation effectively adds instructions to the target program to collect the required information (a decorator-based sketch also appears at the end of this entry). Note that instrumenting a program can cause performance changes, and may in some cases lead to inaccurate results and/or heisenbugs.
The effect will depend on what information is being collected, on the level of timing details reported, and on whether basic block profiling is used in conjunction with instrumentation. For example, adding code to count every procedure/routine call will probably have less effect than counting how many times each statement is obeyed. A few computers have special hardware to collect information; in this case the impact on the program is minimal. Instrumentation is key to determining the level of control and amount of time resolution available to the profilers.
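The sampling technique described above can be sketched in a few lines. The following toy statistical profiler is an illustration only, not one of the tools named in this entry, and it assumes a Unix-like system where signal.setitimer and SIGPROF are available (it will not run on Windows).

```python
# Toy statistical profiler: sample the executing function on each SIGPROF tick.
import collections
import signal
import time

samples = collections.Counter()

def _sample(signum, frame):
    samples[frame.f_code.co_name] += 1  # function on top of the stack

def busy_work():
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

signal.signal(signal.SIGPROF, _sample)
signal.setitimer(signal.ITIMER_PROF, 0.01, 0.01)  # tick every 10 ms of CPU time
start = time.process_time()
busy_work()
cpu = time.process_time() - start
signal.setitimer(signal.ITIMER_PROF, 0, 0)        # stop sampling

total = sum(samples.values())
for name, n in samples.most_common():
    # Attribute CPU time in proportion to sample counts: a flat profile.
    print(f"{name}: {n} samples (~{cpu * n / total:.3f} s)")
```

Because only the location of execution is recorded at each tick, the target runs at nearly full speed; the price is the statistical error quoted above.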
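Instrumentation can be sketched at the source level as well. The decorator below is an invented stand-in for what a tool like ATOM does automatically at the binary level: it inserts counting and timing code around every call to a function.

```python
# Toy source-level instrumentation: count and time every call to a function.
import functools
import time

call_counts = {}
cumulative = {}

def instrument(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            # As in a profiler's "cumulative" column, time spent in nested
            # (here, recursive) calls is included in each frame's elapsed time.
            elapsed = time.perf_counter() - t0
            call_counts[func.__name__] = call_counts.get(func.__name__, 0) + 1
            cumulative[func.__name__] = cumulative.get(func.__name__, 0.0) + elapsed
    return wrapper

@instrument
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(20)
print(call_counts["fib"], "calls,", f"{cumulative['fib']:.4f} s cumulative")
```

Counting every call perturbs timings far more than sampling does, which is exactly the trade-off described above; Python's built-in cProfile gathers the same per-call counts by hooking the interpreter rather than editing the source.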
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dov_Lando] | [TOKENS: 255] |
Contents Dov Lando Dov Lando (Hebrew: דב לנדו; born 5 April 1930) is the rosh yeshiva of the Slabodka yeshiva of Bnei Brak along with Rabbi Moshe Hillel Hirsch, a rabbi of Chug Chazon Ish, and a member of the directorate of the Board of Yeshivas. In his youth, he studied under Avrohom Yeshaya Karelitz as well as at the yeshivot of Ponevezh and Hebron. With the death of Rav Gershon Edelstein, he became the chairman of the Moetzes Gedolei Hatorah in Israel along with his colleague Rabbi Moshe Hillel Hirsch. Views An anti-Zionist, Lando has characterized Zionism as "a movement whose purpose is to establish the Jewish people on an explicitly secular foundation, rooted in heresy and rebellion against divine sovereignty." He has written that involvement in Zionist institutions such as the World Zionist Organization leads to the "desecration of God's name."
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Idnibba] | [TOKENS: 760] |
Contents Idnibba Idnibba (Arabic: إدنبّة) was a Palestinian village, located at 31.7426937°N, 34.8561001°E in the southern part of the Ramle Subdistrict. It was depopulated in 1948, at which time its population was 568, and its lands are now used by Kfar Menahem. History Idnibba may have been built on the site of the Roman settlement of Danuba. The Crusaders also called it Danuba. In 1517, the village was incorporated into the Ottoman Empire with the rest of Palestine, and in 1596 it appeared under the name of Dinba in the tax registers, as being in the nahiya (subdistrict) of Gaza under the liwa' (district) of Gaza. It had 36 households, with an estimated population of 198, all Muslim. They paid taxes on a number of crops, including wheat, barley and sesame seeds, as well as goats and beehives; a total of 10,800 akçe. In 1838, Edward Robinson noted Idhnibbeh as a Muslim village located in the Gaza district. In 1863 Victor Guérin found the village to be situated on a low hill, and with a population of 600. He also noted a well which was built with ancient blocks, and olive groves surrounding the village. An Ottoman village list from about 1870 found that the village (calling it ed-denube) had a population of 265, in a total of 74 houses, though the population count included only men. In 1882 the PEF's Survey of Western Palestine (SWP) described Idnibba as a village built of stone and adobe and situated on high ground. It was surrounded by cactus hedges and had a fig tree orchard to the south. In the 1922 census of Palestine, conducted by the British Mandate authorities, Idnebbeh had a population of 275 Muslims, increasing in the 1931 census to 345, still all Muslims, in a total of 87 houses. Most villagers worked in agriculture and animal husbandry. In the 1945 statistics the population was 490, all Muslims, while the total land area was 8,103 dunams, according to an official land and population survey. Of this, a total of 5,277 dunams of village land was used for cereals; 85 dunams were irrigated or used for orchards, of which 64 dunams were for olives; and 25 dunams were classified as built-up public areas. On 16 July 1948, during Operation An-Far, Givati HQ informed General Staff\Operations that "our forces have entered the villages of Qazaza, Kheima, Jilya, Idnibba, Mughallis, expelled the inhabitants, [and] blown up and torched a number of houses. The area is at the moment clear of Arabs." There are no Israeli settlements on village lands. The settlement of Kefar Menachem, built in 1937, is about 2 km southwest of the village site. Palestinian historian Walid Khalidi described the remains of Idnibba in 1992: "The site and the surrounding lands have been converted into pastures and woods. A large area has been leveled by bulldozers. Demolished walls and the remnants of stone houses lie at various points on the site. There are natural caves with artificial, arched entrances on the upper, western edge of the site."
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ketamine] | [TOKENS: 6543] |
Contents Ketamine Ketamine is a cyclohexanone-derived general anesthetic and NMDA receptor antagonist with analgesic and hallucinogenic properties, used medically for anesthesia, depression, and pain management. Ketamine exists as its two enantiomers, S- (esketamine) and R- (arketamine), and has antidepressant action likely involving other mechanisms in addition to NMDA antagonism. At anesthetic doses, ketamine induces a state of dissociative anesthesia, a trance-like state providing pain relief, sedation, and amnesia. Its distinguishing features as an anesthetic are preserved breathing and airway reflexes, stimulated heart function with increased blood pressure, and moderate bronchodilation. As an anesthetic, it is used especially in trauma, emergency, and pediatric cases. At lower, sub-anesthetic doses, it is used as a treatment for pain and treatment-resistant depression. Ketamine is legally used in medicine but is also tightly controlled, as it is used as a recreational drug for its hallucinogenic and dissociative effects. When used recreationally, it is found both in crystalline powder and liquid form, and is often referred to by users as "Ket", "Special K" or simply "K". The long-term effects of repeated use are largely unknown and are an area of active investigation. Liver and urinary toxicity have been reported among regular users of high doses of ketamine for recreational purposes. Ketamine can cause dissociation, nausea, and other adverse effects, and is contraindicated in severe heart or liver disease and uncontrolled psychosis. Ketamine's clinical and antidepressant effects can be influenced by co-administration of other drugs, though these interactions are variable and not yet fully understood. Ketamine was first synthesized in 1962; it was derived from phencyclidine in pursuit of a safer anesthetic with fewer hallucinogenic effects. It was approved for use in the United States in 1970. It has been regularly used in veterinary medicine and was extensively used for surgical anesthesia in the Vietnam War. It later gained prominence for its rapid antidepressant effects, discovered in 2000, marking a major breakthrough in depression treatment. Racemic ketamine, especially at higher doses, may be more effective and longer-lasting than esketamine in reducing depression severity. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. Medical uses The use of ketamine in anesthesia reflects its characteristics. It is a drug of choice for short-term procedures when muscle relaxation is not required. The effect of ketamine on the respiratory and circulatory systems is different from that of other anesthetics. It suppresses breathing much less than most other available anesthetics. When used at anesthetic doses, ketamine usually stimulates rather than depresses the circulatory system. Protective airway reflexes are preserved, and it is sometimes possible to administer ketamine anesthesia without protective measures to the airways. Psychotomimetic effects limit the acceptance of ketamine; however, lamotrigine and nimodipine decrease these effects, which can also be counteracted by benzodiazepine or propofol administration. Ketofol is a combination of ketamine and propofol. Ketamine is frequently used in severely injured people and appears to be safe in this group. It has been widely used for emergency surgery in field conditions in war zones, for example, during the Vietnam War.
A 2011 clinical practice guideline supports the use of ketamine as a sedative in emergency medicine, including during physically painful procedures. It is the drug of choice for people in traumatic shock who are at risk of hypotension. Ketamine often raises blood pressure upon administration and is unlikely to lower blood pressure in most patients, making it useful in treating severe head injuries for which low blood pressure can be dangerous. Ketamine is an option in children as the sole anesthetic for minor procedures or as an induction agent followed by a neuromuscular blocker and tracheal intubation. In particular, children with cyanotic heart disease and neuromuscular disorders are good candidates for ketamine anesthesia. Due to the bronchodilating properties of ketamine, it can be used for anesthesia in people with asthma, chronic obstructive airway disease, and severe reactive airway disease, including active bronchospasm. Ketamine infusions are used for acute pain treatment in emergency departments and in the perioperative period for individuals with refractory or intractable pain. The doses are lower than those used for anesthesia, usually referred to as sub-anesthetic doses. Adjunctive to morphine or on its own, ketamine reduces morphine use, pain level, nausea, and vomiting after surgery. Ketamine is likely to be most beneficial for surgical patients when severe post-operative pain is expected, and for opioid-tolerant patients. Ketamine is especially useful in the pre-hospital setting due to its effectiveness and low risk of respiratory depression. Ketamine has similar efficacy to opioids in a hospital emergency department setting for the management of acute pain and the control of procedural pain. It may also prevent opioid-induced hyperalgesia and postanesthetic shivering. For chronic pain, ketamine is used as an intravenous analgesic, mainly if the pain is neuropathic. It has the added benefit of counteracting spinal sensitization or wind-up phenomena experienced with chronic pain. In multiple clinical trials, ketamine infusions delivered short-term pain relief in neuropathic pain diagnoses, pain after a traumatic spine injury, fibromyalgia, and complex regional pain syndrome (CRPS). However, the 2018 consensus guidelines on chronic pain concluded that, overall, there is only weak evidence in favor of ketamine use in spinal injury pain, moderate evidence in favor of ketamine for CRPS, and weak or no evidence for ketamine in mixed neuropathic pain, fibromyalgia, and cancer pain. In particular, only for CRPS is there evidence of medium- to longer-term pain relief. Ketamine is a rapid-acting antidepressant, but its effect is transient. Intravenous ketamine infusion in treatment-resistant depression may result in improved mood within 4 hours, reaching a peak at 24 hours. A single dose of intravenous ketamine has been shown to result in a response rate greater than 60% as early as 4.5 hours after the dose (with a sustained effect after 24 hours) and greater than 40% after 7 days. Although only a few pilot studies have sought to determine the optimal dose, increasing evidence suggests that a 0.5 mg/kg dose injected over 40 minutes gives an optimal outcome. The antidepressant effect of ketamine is diminished at 7 days, and most people relapse within 10 days. However, for a significant minority, the improvement may last 30 days or more.
One of the main challenges with ketamine treatment can be the length of time that the antidepressant effects last after finishing a course of treatment. A possible option may be maintenance therapy with ketamine, which usually runs twice a week to once every two weeks. Ketamine may decrease suicidal thoughts for up to three days after the injection. An enantiomer of ketamine – esketamine – was approved as an antidepressant by the European Medicines Agency in 2019. Esketamine was approved as a nasal spray for treatment-resistant depression in the United States and elsewhere in 2019. The Canadian Network for Mood and Anxiety Treatments (CANMAT) recommends esketamine as a third-line treatment for depression. A Cochrane review of randomized controlled trials in adults with major depressive disorder found that when compared with placebo, people treated with either ketamine or esketamine experienced reduction or remission of symptoms lasting 1 to 7 days. There were 18.7% (4.1 to 40.4%) more people reporting some benefit and 9.6% (0.2 to 39.4%) more who achieved remission within 24 hours of ketamine treatment. Among people receiving esketamine, 12.1% (2.5 to 24.4%) encountered some relief at 24 hours, and 10.3% (4.5 to 18.2%) had few or no symptoms. These effects did not persist beyond one week, although a higher dropout rate in some studies means that the benefit duration remains unclear. Ketamine may partially improve depressive symptoms among people with bipolar depression at 24 hours after treatment, but not three or more days. Potentially, ten more people with bipolar depression per 1000 may experience brief improvement, but not the cessation of symptoms, one day following treatment. These estimates are based on limited available research. In February 2022, the US Food and Drug Administration (FDA) issued an alert to healthcare professionals concerning compounded nasal spray products containing ketamine intended to treat depression. Comparative efficacy of ketamine and esketamine Several recent reviews and meta-analyses suggest that racemic ketamine and intranasal esketamine may differ in clinical effectiveness for treatment-resistant depression. A 2025 narrative review comparing ketamine and esketamine reported that racemic ketamine may produce broader and more sustained antidepressant effects, potentially due to the combined action of both enantiomers, R-ketamine and S-ketamine, and its higher and more consistent bioavailability when administered intravenously or intramuscularly. Intravenous racemic ketamine achieves approximately 100% bioavailability, whereas intranasal esketamine demonstrates substantially lower and more variable bioavailability of approximately 45–50%, which may contribute to interindividual variability in treatment response. Preclinical and translational research indicates that the R-enantiomer of ketamine, which is absent from esketamine formulations, may contribute significantly to antidepressant efficacy while producing fewer dissociative and psychotomimetic effects than S-ketamine. A systematic review and meta-analysis comparing intravenous racemic ketamine with intranasal esketamine found that intravenous ketamine was associated with greater reductions in depressive symptom severity, more rapid onset of action, and longer-lasting antidepressant effects. These findings have contributed to ongoing debate regarding whether racemic ketamine may offer superior clinical benefit compared with intranasal esketamine, although long-term comparative trials remain limited. 
Ketamine is used to treat status epilepticus that has not responded to standard treatments, but only case studies and no randomized controlled trials support its use. Ketamine has been suggested as a possible therapy for children with severe acute asthma who do not respond to standard treatment. This is due to its bronchodilator effects. A 2012 Cochrane review found there were minimal adverse effects reported, but the limited studies showed no significant benefit. Contraindications Major contraindications for ketamine include severe heart disease, severe liver disease, and uncontrolled psychosis, as noted above. Adverse effects At anesthetic doses, 10–20% of adults and 1–2% of children experience adverse psychiatric reactions that occur during emergence from anesthesia, ranging from dreams and dysphoria to hallucinations and emergence delirium. Psychotomimetic effects decrease when adding lamotrigine and nimodipine and can be counteracted by pretreatment with a benzodiazepine or propofol. Ketamine anesthesia commonly causes tonic-clonic movements (greater than 10% of people) and rarely hypertonia. Vomiting can be expected in 5–15% of the patients; pretreatment with propofol mitigates it as well. Laryngospasm occurs only rarely with ketamine. Ketamine generally stimulates breathing; however, in the first 2–3 minutes of a high-dose rapid intravenous injection, it may cause a transient respiratory depression. At lower sub-anesthetic doses, psychiatric side effects are prominent. The most common psychiatric side effects are dissociation, visual distortions, and numbness. Also common (20–50%) are difficulty speaking, confusion, euphoria, drowsiness, and difficulty concentrating. Hallucinations are described by 6–10% of people. Dizziness, blurred vision, dry mouth, hypertension, nausea, increased or decreased body temperature, and flushing are the common (>10%) non-psychiatric side effects. All these adverse effects are most pronounced by the end of the injection, are dramatically reduced 40 minutes afterward, and completely disappear within 4 hours after the injection. Urologic diseases occur primarily in people who use large amounts of ketamine routinely, with 20–30% of frequent users having bladder complaints. They include a range of disorders from cystitis to hydronephrosis to kidney failure. The typical symptoms of ketamine-induced cystitis are frequent urination, dysuria, and urinary urgency, sometimes accompanied by pain during urination and blood in urine. The damage to the bladder wall has similarities to both interstitial and eosinophilic cystitis. The wall is thickened and the functional bladder capacity is as low as 10–150 mL. Studies indicate that ketamine-induced cystitis is caused by ketamine and its metabolites directly interacting with the urothelium, resulting in damage to the epithelial cells of the bladder lining and increased permeability of the urothelial barrier, which results in clinical symptoms. Management of ketamine-induced cystitis involves ketamine cessation as the first step. This is followed by NSAIDs and anticholinergics and, if the response is insufficient, by tramadol. The second-line treatments are epithelium-protective agents such as oral pentosan polysulfate or intravesical instillation of hyaluronic acid. Intravesical botulinum toxin is also useful. Some research also indicates that epigallocatechin-3-gallate (EGCG) may mitigate bladder dysfunction in ketamine-induced cystitis by normalizing the collagen-to-muscle ratio and restoring storage capacity. Hepatotoxicity (toxicity to the liver) of ketamine involves higher doses and repeated administration.
In a group of chronic high-dose ketamine users, the frequency of liver injury was reported to be about 10%. There are case reports of increased liver enzymes involving ketamine treatment of chronic pain. Chronic ketamine abuse has also been associated with biliary colic, cachexia, gastrointestinal diseases, hepatobiliary disorder, and acute kidney injury. Most people who were able to remember their dreams during ketamine anesthesia report near-death experiences (NDEs) when the broadest possible definition of an NDE is used. Ketamine can reproduce features that commonly have been associated with NDEs. A 2019 large-scale study found that written reports of ketamine experiences had a high degree of similarity to written reports of NDEs in comparison to other written reports of drug experiences. Although the incidence of ketamine dependence is unknown, some people who regularly use ketamine develop ketamine dependence. Animal experiments also confirm the risk of misuse. Additionally, the rapid onset of effects following insufflation may increase potential use as a recreational drug. The short duration of effects promotes bingeing. Ketamine tolerance rapidly develops, even with repeated medical use, prompting the use of higher doses. Some daily users reported withdrawal symptoms, primarily anxiety, tremor, sweating, and palpitations, following attempts to stop. Whatever palliative benefits a planned course of therapy may confer on patients facing serious medical conditions, long-term ketamine abuse is known to cause brain damage, including reduction in both white and grey matter seen on MRI imaging and atrophy seen on CT scans. Cognitive deficits as well as increased dissociation and delusions were observed in frequent recreational users of ketamine. Interactions Ketamine potentiates the sedative effects of propofol and midazolam. Naltrexone potentiates psychotomimetic effects of a low dose of ketamine, while lamotrigine and nimodipine decrease them. Clonidine reduces the increase of salivation, heart rate, and blood pressure during ketamine anesthesia and decreases the incidence of nightmares. Clinical observations suggest that benzodiazepines may diminish the antidepressant effects of ketamine. It appears most conventional antidepressants can be safely combined with ketamine. Pharmacology Ketamine is a mixture of equal amounts of two enantiomers: esketamine and arketamine. Esketamine is a far more potent NMDA receptor pore blocker than arketamine. Pore blocking of the NMDA receptor is responsible for the anesthetic, analgesic, and psychotomimetic effects of ketamine. Blocking of the NMDA receptor results in analgesia by preventing central sensitization in dorsal horn neurons; in other words, ketamine's actions interfere with pain transmission in the spinal cord. The mechanism of action of ketamine in alleviating depression is not well understood, but it is an area of active investigation. Due to the hypothesis that NMDA receptor antagonism underlies the antidepressant effects of ketamine, esketamine was developed as an antidepressant. However, multiple other NMDA receptor antagonists, including memantine, lanicemine, rislenemdaz, rapastinel, and 4-chlorokynurenine, have thus far failed to demonstrate significant effectiveness for depression.
Furthermore, animal research indicates that arketamine, the enantiomer with weaker NMDA receptor antagonism, as well as (2R,6R)-hydroxynorketamine, the metabolite with negligible affinity for the NMDA receptor but potent alpha-7 nicotinic receptor antagonist activity, may have antidepressant action. This furthers the argument that NMDA receptor antagonism may not be primarily responsible for the antidepressant effects of ketamine. Acute inhibition of the lateral habenula, a part of the brain responsible for inhibiting the mesolimbic reward pathway and referred to as the "anti-reward center", is another possible mechanism for ketamine's antidepressant effects. Possible biochemical mechanisms of ketamine's antidepressant action include direct action on the NMDA receptor and downstream effects on regulators such as BDNF and mTOR. It is not clear whether ketamine alone is sufficient for antidepressant action or whether its metabolites are also important; the active metabolite of ketamine, hydroxynorketamine, which does not significantly interact with the NMDA receptor but nonetheless indirectly activates AMPA receptors, may also or alternatively be involved in the rapid-onset antidepressant effects of ketamine. In NMDA receptor antagonism, acute blockade of NMDA receptors in the brain results in an increase in the release of glutamate, which leads to an activation of AMPA receptors, which in turn modulate a variety of downstream signaling pathways to influence neurotransmission in the limbic system and mediate antidepressant effects. Such downstream actions of the activation of AMPA receptors include upregulation of brain-derived neurotrophic factor (BDNF) and activation of its signaling receptor tropomyosin receptor kinase B (TrkB), activation of the mammalian target of rapamycin (mTOR) pathway, deactivation of glycogen synthase kinase 3 (GSK-3), and inhibition of the phosphorylation of the eukaryotic elongation factor 2 (eEF2) kinase. Ketamine principally acts as a pore blocker of the NMDA receptor, an ionotropic glutamate receptor. The S-(+) and R-(–) stereoisomers of ketamine bind to the dizocilpine site of the NMDA receptor with different affinities, the former showing approximately 3- to 4-fold greater affinity for the receptor than the latter. As a result, the S isomer is a more potent anesthetic and analgesic than its R counterpart. Ketamine may also interact with and inhibit the NMDA receptor via another allosteric site on the receptor. With a couple of exceptions, ketamine's actions at other receptors are far weaker than its antagonism of the NMDA receptor. Although ketamine is a very weak ligand of the monoamine transporters (Ki > 60 μM), it has been suggested that it may interact with allosteric sites on the monoamine transporters to produce monoamine reuptake inhibition. However, no functional inhibition (IC50) of the human monoamine transporters has been observed with ketamine or its metabolites at concentrations of up to 10,000 nM. Moreover, animal studies and at least three human case reports have found no interaction between ketamine and the monoamine oxidase inhibitor (MAOI) tranylcypromine, which is of importance as the combination of a monoamine reuptake inhibitor with an MAOI can produce severe toxicity such as serotonin syndrome or hypertensive crisis. Collectively, these findings shed doubt on the involvement of monoamine reuptake inhibition in the effects of ketamine in humans.
Ketamine has been found to increase dopaminergic neurotransmission in the brain, but instead of being due to dopamine reuptake inhibition, this may be via indirect/downstream mechanisms, namely through antagonism of the NMDA receptor. Whether ketamine is an agonist of D2 receptors is controversial. Early research by the Philip Seeman group found ketamine to be a D2 partial agonist with a potency similar to that of its NMDA receptor antagonism. However, later studies by different researchers found ketamine's affinity for the regular human and rat D2 receptors to be >10 μM. Moreover, whereas D2 receptor agonists such as bromocriptine can rapidly and powerfully suppress prolactin secretion, subanesthetic doses of ketamine have not been found to do this in humans and, in fact, have been found to dose-dependently increase prolactin levels. Imaging studies have shown mixed results on inhibition of striatal [11C]raclopride binding by ketamine in humans, with some studies finding a significant decrease and others finding no such effect. However, changes in [11C]raclopride binding may be due to changes in dopamine concentrations induced by ketamine rather than binding of ketamine to the D2 receptor. Dissociation and psychotomimetic effects are reported in people treated with ketamine at plasma concentrations of approximately 100 to 250 ng/mL (0.42–1.1 μM). The typical intravenous antidepressant dosage of ketamine used to treat depression is low and results in maximal plasma concentrations of 70 to 200 ng/mL (0.29–0.84 μM). At similar plasma concentrations (70 to 160 ng/mL; 0.29–0.67 μM) it also shows analgesic effects. Within 1–5 minutes after inducing anesthesia by rapid intravenous injection of ketamine, its plasma concentration reaches as high as 60–110 μM. When the anesthesia was maintained using nitrous oxide together with continuous injection of ketamine, the ketamine concentration stabilized at approximately 9.3 μM. In an experiment with purely ketamine anesthesia, people began to awaken once the plasma level of ketamine decreased to about 2,600 ng/mL (11 μM) and became oriented in place and time when the level was down to 1,000 ng/mL (4 μM). In a single-case study, the concentration of ketamine in cerebrospinal fluid, a proxy for the brain concentration, during anesthesia varied between 2.8 and 6.5 μM and was approximately 40% lower than in plasma. Ketamine can be absorbed by many different routes due to its water and lipid solubility. Intravenous ketamine bioavailability is 100% by definition, intramuscular injection bioavailability is slightly lower at 93%, and epidural bioavailability is 77%. Subcutaneous bioavailability has never been measured but is presumed to be high. Among the less invasive routes, the intranasal route has the highest bioavailability (45–50%) and the oral route the lowest (16–20%). Sublingual and rectal bioavailabilities are intermediate at approximately 25–50%. After absorption, ketamine is rapidly distributed into the brain and other tissues. The plasma protein binding of ketamine is variable at 23–47%. In the body, ketamine undergoes extensive metabolism. It is biotransformed by the CYP3A4 and CYP2B6 isoenzymes into norketamine, which, in turn, is converted by CYP2A6 and CYP2B6 into hydroxynorketamine and dehydronorketamine. The low oral bioavailability of ketamine is due to the first-pass effect and, possibly, ketamine intestinal metabolism by CYP3A4.
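The unit equivalences quoted in parentheses above follow directly from ketamine's molar mass. As a quick check (assuming the standard reference value of roughly 237.7 g/mol for ketamine, which is not stated in this entry), note that ng/mL and μg/L are the same unit, so dividing by the molar mass in g/mol gives μmol/L (μM) directly:

\[
c_{\text{molar}} = \frac{c_{\text{mass}}}{M}, \qquad
\frac{100\ \text{ng/mL}}{237.7\ \text{g/mol}} \approx 0.42\ \mu\text{M}, \qquad
\frac{250\ \text{ng/mL}}{237.7\ \text{g/mol}} \approx 1.05\ \mu\text{M},
\]

which reproduces the 0.42–1.1 μM range given for the dissociative plasma concentrations.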
As a result, norketamine plasma levels are several-fold higher than those of ketamine following oral administration, and norketamine may play a role in the anesthetic and analgesic action of oral ketamine. This also explains why oral ketamine levels are independent of CYP2B6 activity, unlike subcutaneous ketamine levels. After an intravenous injection of tritium-labelled ketamine, 91% of the radioactivity is recovered from urine and 3% from feces. The medication is excreted mostly in the form of metabolites, with only 2% remaining unchanged. Conjugated hydroxylated derivatives of ketamine (80%), followed by dehydronorketamine (16%), are the most prevalent metabolites detected in urine. Chemistry In chemical structure, ketamine is an arylcyclohexylamine derivative. Ketamine is a chiral compound. The more active enantiomer, esketamine (S-ketamine), is also available for medical use under the brand name Ketanest S, while the less active enantiomer, arketamine (R-ketamine), has never been marketed as an enantiopure drug for clinical use. Although S-ketamine is more effective as an analgesic and anesthetic through NMDA receptor antagonism, R-ketamine produces longer-lasting effects as an antidepressant. The optical rotation of a given enantiomer of ketamine can vary between its salts and free base form. The free base form of (S)‑ketamine exhibits dextrorotation and is therefore labelled (S)‑(+)‑ketamine. However, its hydrochloride salt shows levorotation and is thus labelled (S)‑(−)‑ketamine hydrochloride. Ketamine may be quantified in blood or plasma to confirm a diagnosis of poisoning in hospitalized people, provide evidence in an impaired driving arrest, or assist in a medicolegal death investigation. Blood or plasma ketamine concentrations are usually in a range of 0.5–5.0 mg/L in persons receiving the drug therapeutically (during general anesthesia), 1–2 mg/L in those arrested for impaired driving, and 3–20 mg/L in victims of acute fatal overdosage. Urine is often the preferred specimen for routine drug-use monitoring. The presence of norketamine, a pharmacologically active metabolite, is useful for confirming ketamine ingestion. History Ketamine was first synthesized in 1962 by Calvin L. Stevens, a professor of chemistry at Wayne State University and a Parke-Davis consultant. It was known by the developmental code name CI-581. After promising preclinical research in animals, ketamine was tested in human prisoners in 1964. These investigations demonstrated that ketamine's short duration of action and reduced behavioral toxicity made it a favorable choice over phencyclidine (PCP) as an anesthetic. The researchers wanted to call the state of ketamine anesthesia "dreaming", but Parke-Davis did not approve of the name. Hearing about this problem and the "disconnected" appearance of treated people, Mrs. Edward F. Domino, the wife of one of the pharmacologists working on ketamine, suggested "dissociative anesthesia". Following FDA approval in 1970, ketamine anesthesia was first given to American soldiers during the Vietnam War. The discovery of the antidepressant action of ketamine in 2000 has been described as the single most important advance in the treatment of depression in more than 50 years. It has sparked interest in NMDA receptor antagonists for depression and has shifted the direction of antidepressant research and development. Society and culture While ketamine is marketed legally in many countries worldwide, it is also a controlled substance in many of them.
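Because the forensic ranges quoted above overlap, a measured level can be consistent with more than one context, so a lookup that returns every matching range is more faithful than a single classifier. A minimal sketch under that reading (the range labels are from the text; the function itself is a hypothetical illustration, not a clinical or forensic tool):

```python
# Minimal sketch: matching a blood/plasma ketamine level (mg/L) against
# the ranges cited above. The ranges overlap, so several labels can apply.
FORENSIC_RANGES = {
    "therapeutic (general anesthesia)": (0.5, 5.0),
    "impaired-driving arrests": (1.0, 2.0),
    "acute fatal overdosage": (3.0, 20.0),
}

def consistent_contexts(level_mg_l: float) -> list[str]:
    """Return every cited context whose range contains the level."""
    return [label for label, (lo, hi) in FORENSIC_RANGES.items()
            if lo <= level_mg_l <= hi]

print(consistent_contexts(1.5))  # therapeutic AND impaired-driving ranges
print(consistent_contexts(4.0))  # therapeutic AND fatal ranges overlap here
```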
At sub-anesthetic doses, ketamine produces a dissociative state, characterised by a sense of detachment from one's physical body and the external world that is known as depersonalization and derealization. At sufficiently high doses, users may experience what is called the "K-hole", a state of dissociation with visual and auditory hallucinations. John C. Lilly, Marcia Moore, and D. M. Turner (among others) have written extensively about their own entheogenic and psychonautic experiences with ketamine. Turner died prematurely due to drowning during presumed unsupervised ketamine use. Recreational ketamine use has been implicated in deaths globally, with more than 90 deaths in England and Wales in 2005–2013. These include accidental poisonings, drownings, traffic accidents, and suicides; the majority of the deaths were among young people. Actor Matthew Perry was found dead in his hot tub in October 2023 in an apparent drowning; several months later, his death was revealed to have been caused by a ketamine overdose, and, while other factors were present, the acute effects of ketamine were ruled to be the primary cause of death. Due to its ability to cause confusion and amnesia, ketamine has been used for date rape. Research Ketamine, in the form of esketamine, is approved in the United States for treating treatment-resistant depression. In vivo, ketamine rapidly activates the mTOR pathway, promoting synaptogenesis and reversing stress-related synaptic deficits in the prefrontal cortex, which might underlie its fast-acting antidepressant effects in treatment-resistant depression. A 2023 meta-analysis found that racemic ketamine, particularly at higher doses, is more effective than esketamine in reducing depression severity, with more sustained benefits over time. Ketamine has shown potential for rapid and tolerable symptom relief in obsessive-compulsive disorder, but the evidence is limited and inconsistent. The British critical psychiatrist Joanna Moncrieff has critiqued the use and study of ketamine and related drugs like psychedelics for the treatment of psychiatric disorders, highlighting concerns including excessive hype around these drugs, questionable biologically based theories of benefit, blurred lines between medical and recreational use, flawed clinical trial findings, financial conflicts of interest, strong expectancy effects and large placebo responses, small and short-term benefits over placebo, and their potential for difficult experiences and adverse effects, among others. Veterinary uses In veterinary anesthesia, ketamine is often used for its anesthetic and analgesic effects on cats, dogs, rabbits, rats, and other small animals. It is frequently used for induction and anesthetic maintenance in horses. It is an important part of the "rodent cocktail", a mixture of drugs used for anesthetising rodents. Veterinarians often use ketamine with sedative drugs to produce balanced anesthesia and analgesia, and as a constant-rate infusion to help prevent pain wind-up. Ketamine is also used to manage pain in large animals. It is the primary intravenous anesthetic agent used in equine surgery, often in conjunction with detomidine and thiopental, or sometimes guaifenesin. Ketamine appears not to produce sedation or anesthesia in snails; instead, it appears to have an excitatory effect.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Sextant] | [TOKENS: 2831] |
Contents Sextant A sextant is a doubly reflecting navigation instrument that measures the angular distance between two visible objects. The primary use of a sextant is to measure the angle between an astronomical object and the horizon for the purposes of celestial navigation. The estimation of this angle, the altitude, is known as sighting or shooting the object, or taking a sight. The angle, and the time when it was measured, can be used to calculate a position line on a nautical or aeronautical chart—for example, sighting the Sun at noon or Polaris at night (in the Northern Hemisphere) to estimate latitude (with sight reduction). Sighting the height of a landmark can give a measure of distance off and, held horizontally, a sextant can measure angles between objects for a position on a chart. A sextant can also be used to measure the lunar distance between the moon and another celestial object (such as a star or planet) in order to determine Greenwich Mean Time and hence longitude. The principle of the instrument was first implemented around 1731 by John Hadley (1682–1744) and Thomas Godfrey (1704–1749), but it was also found later in the unpublished writings of Isaac Newton (1643–1727). In 1922, it was modified for aeronautical navigation by Portuguese navigator and naval officer Gago Coutinho. Navigational sextants Like the Davis quadrant, the sextant allows celestial objects to be measured relative to the horizon, rather than relative to the instrument. This allows excellent precision. Also, unlike the backstaff, the sextant allows direct observations of stars. This permits the use of the sextant at night when a backstaff is difficult to use. For solar observations, filters allow direct observation of the Sun. Since the measurement is relative to the horizon, the measuring pointer is a beam of light that reaches to the horizon. The measurement is thus limited by the angular accuracy of the instrument and not the sine error of the length of an alidade, as it is in a mariner's astrolabe or similar older instrument. A sextant does not require a completely steady aim, because it measures a relative angle. For example, when a sextant is used on a moving ship, the image of both horizon and celestial object will move around in the field of view. However, the relative position of the two images will remain steady, and as long as the user can determine when the celestial object touches the horizon, the accuracy of the measurement will remain high compared to the magnitude of the movement. The sextant is not dependent upon electricity (unlike many forms of modern navigation) or any human-controlled signals (such as GPS). For these reasons it is considered to be an eminently practical back-up navigation tool for ships. Design The frame of a sextant is in the shape of a sector which is approximately 1⁄6 of a circle (60°), hence its name (sextāns, sextantis is the Latin word for "one sixth"). Both smaller and larger instruments are (or were) in use: the octant, quintant (or pentant) and the (doubly reflecting) quadrant span sectors of approximately 1⁄8 of a circle (45°), 1⁄5 of a circle (72°) and 1⁄4 of a circle (90°), respectively. All of these instruments may be termed "sextants". Attached to the frame are the "horizon mirror", an index arm which moves the index mirror, a sighting telescope, Sun shades, a graduated scale and a micrometer drum gauge for accurate measurements. 
The scale must be graduated so that the marked degree divisions register twice the angle through which the index arm turns. The scales of the octant, sextant, quintant and quadrant are graduated from below zero to 90°, 120°, 140° and 180° respectively. For example, a sextant whose scale is graduated from −10° to 142° is basically a quintant: its frame is a sector of a circle subtending an angle of 76° at the pivot of the index arm. The necessity for the doubled scale reading follows from consideration of the relations of the fixed ray (between the mirrors), the object ray (from the sighted object) and the direction of the normal perpendicular to the index mirror. When the index arm moves by an angle, say 20°, the angle between the fixed ray and the normal also increases by 20°. But the angle of incidence equals the angle of reflection, so the angle between the object ray and the normal must also increase by 20°. The angle between the fixed ray and the object ray must therefore increase by 40°. There are two types of horizon mirrors on the market today, and both give good results. Traditional sextants have a half-horizon mirror, which divides the field of view in two. On one side, there is a view of the horizon; on the other side, a view of the celestial object. The advantage of this type is that both the horizon and celestial object are as bright and clear as possible. This is superior at night and in haze, when the horizon and/or a star being sighted can be difficult to see. However, one has to sweep the celestial object to ensure that its lowest limb touches the horizon. Whole-horizon sextants use a half-silvered horizon mirror to provide a full view of the horizon. This makes it easy to see when the bottom limb of a celestial object touches the horizon. Since most sights are of the Sun or Moon, and haze is rare without overcast, the low-light advantages of the half-horizon mirror are rarely important in practice. In both types, larger mirrors give a larger field of view, and thus make it easier to find a celestial object. Modern sextants often have 5 cm or larger mirrors, while 19th-century sextants rarely had a mirror larger than 2.5 cm (one inch). In large part, this is because precision flat mirrors have grown less expensive to manufacture and to silver. An artificial horizon is useful when the horizon is invisible, as occurs in fog, on moonless nights, in a calm, when sighting through a window, or on land surrounded by trees or buildings. There are two common designs of artificial horizon. One consists simply of a pool of water shielded from the wind, allowing the user to measure the distance between the body and its reflection and divide by two. The other is a fluid-filled tube with a bubble that mounts directly to the sextant. Most sextants also have filters for use when viewing the Sun and reducing the effects of haze. The filters usually consist of a series of progressively darker glasses that can be used singly or in combination to reduce haze and the Sun's brightness. However, sextants with adjustable polarizing filters have also been manufactured, where the degree of darkness is adjusted by twisting the frame of the filter. Most sextants mount a 1- or 3-power monocular for viewing. Many users prefer a simple sighting tube, which has a wider, brighter field of view and is easier to use at night.
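Returning to the doubled-scale geometry described at the start of this section: the measured angle between the object ray and the fixed ray is always twice the index arm's rotation, which is why a 76° frame sector can carry a scale spanning 152° (−10° to 142°). A minimal numeric check in Python (the variable names are ours):

```python
# Minimal sketch of the sextant's doubled-angle principle: because the
# angle of incidence equals the angle of reflection at the index mirror,
# rotating the index arm by A increases the measured angle by 2*A.

def measured_angle(index_arm_rotation_deg: float) -> float:
    """Angle between fixed ray and object ray for a given arm rotation."""
    return 2.0 * index_arm_rotation_deg

# The worked example from the text: a 20 deg arm movement reads as 40 deg.
assert measured_angle(20.0) == 40.0

# The quintant-like instrument described above: a 76 deg frame sector
# carries a scale spanning 142 - (-10) = 152 deg, i.e. exactly 2 * 76.
frame_sector_deg = 76.0
scale_span_deg = 142.0 - (-10.0)
assert scale_span_deg == measured_angle(frame_sector_deg)
print("doubled-scale check passed")
```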
Some navigators mount a light-amplifying monocular to help see the horizon on moonless nights. Others prefer to use a lit artificial horizon. Professional sextants use a click-stop degree measure and a worm adjustment that reads to a minute, 1/60 of a degree. Most sextants also include a vernier on the worm dial that reads to 0.1 minute. Since 1 minute of error is about a nautical mile, the best possible accuracy of celestial navigation is about 0.1 nautical miles (190 m). At sea, results within several nautical miles, well within visual range, are acceptable. A highly skilled and experienced navigator can determine position to an accuracy of about 0.25 nautical miles (460 m). A change in temperature can warp the arc, creating inaccuracies. Many navigators purchase weatherproof cases so that their sextant can be placed outside the cabin to come to equilibrium with outside temperatures. The standard frame designs are intended to equalise differential angular error from temperature changes. The handle is separated from the arc and frame so that body heat does not warp the frame. Sextants for tropical use are often painted white to reflect sunlight and remain relatively cool. High-precision sextants have a frame and arc of invar, a low-expansion iron-nickel alloy. Some scientific sextants have been constructed of quartz or ceramics with even lower expansion. Many commercial sextants use low-expansion brass or aluminium. Brass is lower-expansion than aluminium, but aluminium sextants are lighter and less tiring to use; some say they are more accurate because one's hand trembles less. Solid brass frame sextants are less susceptible to wobbling in high winds or when the vessel is working in heavy seas, but as noted are substantially heavier. Sextants with aluminium frames and brass arcs have also been manufactured. Essentially, a sextant is intensely personal to each navigator, who will choose whichever model has the features that suit them best. Aircraft sextants are now out of production, but had special features. Most had artificial horizons to permit taking a sight through a flush overhead window. Some also had mechanical averagers that took hundreds of measurements per sight to compensate for random accelerations in the artificial horizon's fluid. Older aircraft sextants had two visual paths, one standard and the other designed for use in open-cockpit aircraft, letting one sight from directly over the sextant in one's lap. More modern aircraft sextants were periscopic, with only a small projection above the fuselage. With these, the navigator pre-computed their sight and then noted the difference in observed versus predicted height of the body to determine their position. Taking a sight A sight (or measure) of the angle between the Sun, a star, or a planet, and the horizon is done with the 'star telescope' fitted to the sextant, using a visible horizon. On a vessel at sea, even on misty days, a sight may be taken from a low height above the water to give a more definite, better horizon. Navigators hold the sextant by its handle in the right hand, avoiding touching the arc with the fingers. For a Sun sight, a filter is used to overcome the glare, such as "shades" covering both the index mirror and the horizon mirror, designed to prevent eye damage.
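The accuracy figures quoted earlier in this section follow from the nautical mile being approximately one minute of arc of latitude (1 nmi = 1,852 m); the text's "about 190 m" is 0.1 arcminute rounded up. A minimal sketch making that arithmetic explicit:

```python
# Minimal sketch: converting a sextant reading error (arcminutes) into
# an approximate position error, using 1 arcminute ~ 1 nautical mile
# = 1852 m (the text rounds 0.1 nmi = 185.2 m up to "about 190 m").
METERS_PER_NAUTICAL_MILE = 1852.0

def position_error_m(error_arcminutes: float) -> float:
    """Approximate position error for a given angular reading error."""
    return error_arcminutes * METERS_PER_NAUTICAL_MILE

print(position_error_m(0.1))   # 185.2 m, the vernier's resolution limit
print(position_error_m(0.25))  # 463 m, the skilled-navigator figure (~460 m)
```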
Initially, with the index bar set to zero and the shades covering both mirrors, the sextant is aimed at the Sun until it can be viewed on both mirrors through the telescope, then lowered vertically until the portion of the horizon directly below it is viewed on both mirrors. The horizon mirror's shade must be flipped back to see the horizon on it more clearly. After releasing the index bar (either by releasing a clamping screw or, on modern instruments, using the quick-release button) and moving it towards higher values of the scale, the image of the Sun eventually reappears on the index mirror and can be aligned to about the level of the horizon on the horizon mirror. Then the fine adjustment screw on the end of the index bar is turned until the bottom curve (the lower limb) of the Sun just touches the horizon. "Swinging" the sextant about the axis of the telescope ensures that the reading is being taken with the instrument held vertically. The angle of the sight is then read from the scale on the arc, making use of the micrometer or vernier scale provided. The exact time of the sight must be noted simultaneously, and the height of the eye above sea level recorded. An alternative method is to estimate the current altitude (angle) of the Sun from navigation tables, set the index bar to that angle on the arc, apply suitable shades only to the index mirror, and point the instrument directly at the horizon, sweeping it from side to side until a flash of the Sun's rays is seen in the telescope. Fine adjustments are then made as above. This method is less likely to be successful for sighting stars and planets. Star and planet sights are normally taken during nautical twilight at dawn or dusk, while both the heavenly bodies and the sea horizon are visible. There is no need to use shades or to distinguish the lower limb, as the body appears as a mere point in the telescope. The Moon can be sighted, but it appears to move very fast, appears to have different sizes at different times, and sometimes only its lower or upper limb can be distinguished due to its phase. After a sight is taken, it is reduced to a position by one of several mathematical procedures. The simplest sight reduction is to draw the equal-altitude circle of the sighted celestial object on a globe. The intersection of that circle with a dead-reckoning track, or with another sighting, gives a more precise location. Sextants can also be used very accurately to measure other visible angles, for example between one heavenly body and another and between landmarks ashore. Used horizontally, a sextant can measure the apparent angle between two landmarks such as a lighthouse and a church spire, which can then be used to find the distance off or out to sea (provided the distance between the two landmarks is known). Used vertically, a measurement of the angle between the lantern of a lighthouse of known height and the sea level at its base can also be used for distance off. Adjustment Due to the sensitivity of the instrument, it is easy to knock the mirrors out of adjustment. For this reason, a sextant should be checked frequently for errors and adjusted accordingly. There are four errors that can be adjusted by the navigator, and they should be removed in a set order.
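For the vertical-angle distance-off method described above, plane trigonometry suffices at short range: with the lantern's height above the sea known, the distance off is that height divided by the tangent of the measured vertical angle. A minimal sketch using the flat-triangle approximation, ignoring dip, refraction, and Earth curvature (which real distance-off tables account for); the example numbers are hypothetical:

```python
# Minimal sketch: distance off a lighthouse from the vertical sextant
# angle between its lantern and the sea level at its base, using the
# flat-triangle approximation distance = height / tan(angle). Valid only
# at short range, with no dip or refraction corrections applied.
import math

def distance_off_m(lantern_height_m: float, vertical_angle_deg: float) -> float:
    """Distance to the lighthouse, in metres."""
    return lantern_height_m / math.tan(math.radians(vertical_angle_deg))

# Example (hypothetical numbers): a 30 m lantern subtending 0.5 degrees
# puts the ship about 3.4 km off.
print(f"{distance_off_m(30.0, 0.5):.0f} m")
```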
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Dayr_Aban] | [TOKENS: 1521] |
Contents Dayr Aban Dayr Aban (also spelled Deir Aban; Arabic: دير آبان) was a Palestinian Arab village in the Jerusalem Subdistrict, located on the lower slope of a high ridge that formed the western slope of a mountain, to the east of Beit Shemesh. It was formerly bordered by olive trees to the north, east, and west. The valley of Wadi en-Najil ran north and south on the west side of the village. The village is associated with the biblical site of Eben-Ezer. The prefix "Dayr" hints at a historical monastery. Early Ottoman records document a mixed Christian and Muslim population; by the 17th century, however, historical records document a communal conversion to Islam. Nonetheless, traditions linked to the village's Christian past persisted in later periods. Dayr Aban was depopulated during the 1948 Arab-Israeli War on October 19, 1948, during Operation Ha-Har. It was located 21 km west of Jerusalem. Today there are over 5,000 people originally from Deir Aban living in Jordan. History In pre-Roman and Roman times the settlement was referred to as "Abenezer", and may have been the location of the biblical site Eben-Ezer (1 Samuel 4:1–11). The name Dayr indicates that this was the site of a Christian monastery. In 1596, Dayr Aban appeared in Ottoman tax registers as being in the Nahiya of Quds of the Liwa of Quds. It had a population of 23 Muslim households and 23 Christian households; that is, an estimated 127 persons. They paid a fixed tax rate of 33.3% on agricultural products, such as wheat, barley, olives, and goats or beehives; a total of 9,700 Akçe. In the 17th century, the inhabitants of Dayr Aban collectively converted to Islam, an unusual event in the Middle East during the Ottoman period. Jerusalem court records document four related conversion certificates. The earliest, dated 1635, records the conversion of a person named Gimʿa bin Dāfir. Subsequently, in 1649–1650, three additional certificates were issued. Two, from September 5, 1649, concern individuals named Rabīʿa and Nāṣir bin Manṣūr. Later, on March 7, 1650, a communal conversion of all Dayr Abān's residents was documented. The document lists both the original and new names of the converts, along with a note indicating the entire village's conversion. In 1838, Deir Aban was noted as a Muslim village, located in the el-Arkub District, southwest of Jerusalem. Victor Guérin described it in 1863 as a large village, with its adjacent valley "strewn with sesame." An Ottoman village list from about 1870 found that the village had a population of 443, in a total of 135 houses, though the population count included only men. In 1883, the PEF's Survey of Western Palestine described Dayr Aban as "a large village on the lower slope of a high ridge, with a well to the north, and olives on the east, west, and north. This place no doubt represents the fourth century site of Ebenezer (I Sam. IV. I) which is mentioned in the Onomasticon (s.v. Ebenezer) as near Beth Shemesh. The village is 2 miles east of 'Ain Shems." Baldensperger, writing in 1893, stated that the village's residents had been Greek Orthodox until they converted to Islam at a "very recent date [...] perhaps it was about the beginning of this century". He noted that the Christians of Beit Jala and the citizens of the village continued to share the same names, and added that the village's original Greek New Testament was still kept in the church in Beit Jala. In another article, he mentioned that women in Dayr Aban had small crosses tattooed on their foreheads.
Yitzhak Ben-Zvi mentioned a local tradition according to which elderly Muslim women at Dayr Aban preserved old miniature crosses. H. Stephan wrote that persecutions drove Christians from Dayr Aban to seek refuge at Beit Jala and Ramallah, where they stayed in touch with family members who continued to live in the village as Muslims. In 1896, the population of Der Aban was estimated to be about 921 persons. In the 1922 census of Palestine, conducted by the British Mandate authorities, Dayr Aban had a population of 1,214 inhabitants, all Muslims, increasing in the 1931 census to 1,534 inhabitants, in 321 houses. In the 1945 statistics, the village had a total population of 2,100 Arabs (10 Christians and 2,090 Muslims), with a total of 22,734 dunams of land. Of this, Arabs used 1,580 dunams for irrigable land or plantations and 14,925 for cereals, while 54 dunams were built-up (urban) Arab land. Dayr Aban had a mosque and a pipeline transporting water from 'Ayn Marjalayn, 5 km to the east. The village contained three khirbats: Khirbat Jinna'ir, Khirbat Haraza, and Khirbat al-Suyyag. On 4 August 1948, two weeks into the Second truce of the 1948 Arab–Israeli War, Grand Mufti of Jerusalem and Palestinian nationalist Amin al-Husseini noted that ‘for two weeks now . . . the Jews have continued with their attacks on the Arab villages and outposts in all areas. Stormy battles are continuing in the villages of Sataf, Deiraban, Beit Jimal, Ras Abu ‘Amr, ‘Aqqur, and ‘Artuf . . .’ The village was depopulated on 19–20 October 1948, after a military assault during Operation Ha-Har. Through the second half of 1948, the IDF, under Ben-Gurion's tutelage, continued to destroy Arab villages, demolishing Dayr Aban on 22 October 1948. After the war, the ruins of Dayr Aban remained under Israeli control under the terms of the 1949 Armistice Agreement between Israel and Jordan, until the agreement was dissolved in 1967. The moshav of Mahseya was later established near the site of the old village, on the land of Dayr Aban, as were Tzora, Beit Shemesh, and Yish'i. Etymology The prefix "Dayr", which appears in many village names, is of Aramaic and Syriac-Aramaic origin, and has the connotation of "habitation" or "dwelling place", usually given to places where there was once a Christian population or a settlement of monks. In most cases, a monastery was formerly built there and, over time, the settlement expanded around it. Dayr Aban would therefore literally mean "the Monastery of Aban."
======================================== |