[SOURCE: https://en.wikipedia.org/wiki/Keren_Hayesod] | [TOKENS: 1348]
Keren Hayesod

Keren Hayesod – United Israel Appeal (Hebrew: קרן היסוד, literally "The Foundation Fund") is an official fundraising organization for Israel with branches in 45 countries. Its work is carried out in accordance with the Keren Hayesod Law, 5716, passed by the Knesset in January 1956, granting the organization a unique fundraising status. It is a registered corporation of the State of Israel. One of Israel's three "National Institutions," Keren Hayesod works in coordination with the Government of Israel and the Jewish Agency for Israel to further the national priorities of the State of Israel.

History

Keren Hayesod was established at the World Zionist Conference in London on July 7–24, 1920, to provide the Zionist movement with the resources needed to establish a Jewish homeland in Palestine. It came in response to the Balfour Declaration of 1917, which stated that "His Majesty's government views with favour the establishment in Palestine of a national home for the Jewish people", turning the ages-old dream of the return to the Land of Israel into a politically feasible goal. Keren Hayesod established fundraising organizations around the world. Early leaders included Chaim Weizmann, Albert Einstein and Ze'ev Jabotinsky.

During the 1920s, Keren Hayesod began to lay the groundwork for a Jewish national home and helped raise funds to establish the Hebrew University of Jerusalem, Bank Hapoalim and various physical projects. In 1926, Keren Hayesod relocated its headquarters from London to Jerusalem. With the establishment of the Jewish Agency in 1929, Keren Hayesod became its fundraising arm while continuing its own wide-ranging activities. The effects of the worldwide economic depression of 1929 hit Keren Hayesod hard, but after Hitler's rise to power in 1933, Keren Hayesod helped to develop the Haifa Bay suburbs to provide housing for German Jews fleeing the Nazis. Towards this end, the Rassco construction company was established in 1934. In 1936, Keren Hayesod supported the establishment of what would become the Israel Philharmonic Orchestra to provide employment for refugee musicians. With the help of donations from all over the Jewish world, Keren Hayesod established over 900 urban and rural settlements in Israel, providing housing and jobs for new immigrants.

During and after World War II, it launched emergency campaigns, sometimes in partnership with other organizations. Funds were used to help the Allied war effort and, when the concentration camps were liberated, to smuggle survivors into Palestine in defiance of British immigration restrictions. Many Keren Hayesod leaders were murdered in the Holocaust. In March 1948, a car bomb was detonated in the courtyard of the Jewish Agency building in Jerusalem, killing twelve people, including the director of Keren Hayesod, Leib Yaffe.

The first full decade that followed the birth of the State of Israel was marked by huge waves of immigration, primarily from North Africa, Yemen, Kurdistan and Iraq. Within a few years, Israel's population tripled, resulting in great distress and a heightened demand for social, educational and cultural services. Keren Hayesod helped establish new towns such as Sderot (1951) and Eilat (1956), as well as kibbutzim and moshavim. Keren Hayesod provided major funding for these communities, establishing new fundraising campaigns around the world and renewing its presence in Germany (1955). The economic crisis that hit Israel in 1983 and 1984 created major hardships, and programs to alleviate social distress became Keren Hayesod's major priority.
The Keren Hayesod-supported Operation Moses brought 5,000 new immigrants from Ethiopia to Israel in a dramatic airlift (1984), and the organization immediately mobilized to raise funds to address the new immigrants' special needs. Israel was still in the throes of the First Intifada (1987–1993) when the Soviet Union imploded. The end of the Communist regime in the USSR (1991) opened the gates to over a million Jews who had been fighting for years for the right to immigrate to their ancestral homeland. In addition, over 14,000 Ethiopian Jews were airlifted to Israel in Operation Solomon (1991). The massive numbers of new immigrants created a huge demand for immigrant services, housing, and jobs; Keren Hayesod launched a special Exodus Campaign to fund this effort.

The renewed violence of the Second Intifada (2000–2004) had a devastating impact on the Israeli economy, resulting in major social distress. The situation was exacerbated by the crisis in the tourism industry and the bursting of the hi-tech bubble. In response, Keren Hayesod expanded beyond its traditional areas of activity, immigrant absorption and Jewish-Zionist education in the Diaspora. Thus, for example, Keren Hayesod, in partnership with the Jewish Agency, Cisco Systems Inc. and the Appleseeds Academy, initiated the Net@ project, which provides high-tech training to youth in the suburbs. Keren Hayesod was also a lead partner in the Jewish Agency Fund for Victims of Terror. Tens of thousands of disadvantaged children and adults are served by social and cultural programs established by Keren Hayesod. The organization also provides financial support to educational youth villages, after-school programming, and youth mentoring projects. Sheltered housing has been constructed to enable Holocaust survivors and the needy elderly to live out their lives in dignity and comfort. Keren Hayesod marked its 90th anniversary in 2010. Major priorities now include financial aid to peripheral cities in Israel, programs to bridge the social gap, social aid programs, Zionist education in the Diaspora, and emergency campaigns.

Emergency campaigns

Keren Hayesod provided assistance to residents of southern Israel under daily rocket attack in the summer of 2014. Following the kidnapping and murder of three teenage boys by Hamas in early June, rocket fire from Gaza intensified into round-the-clock attacks with only 15 seconds to run for safety. Keren Hayesod financed the construction of portable bomb shelters in residential areas. These shelters were purchased with funds collected in Keren Hayesod's Emergency Solidarity Campaign launched after Operation "Protective Edge." Funds were also used to provide fun days for children, giving them a break from the rocket fire, and for professional counseling for traumatized residents. Medical equipment was acquired for hospitals in the south where injured soldiers and civilians were brought for treatment. Other support included financial assistance to families of fallen soldiers and civilians killed in terror attacks, as well as injured soldiers and families whose homes were destroyed by rockets.
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_The_New_York_Times_(1851%E2%80%931896)] | [TOKENS: 4174]
History of The New York Times (1851–1896)

The New-York Daily Times was established in 1851 by New-York Tribune journalists Henry Jarvis Raymond and George Jones. The Times gained significant circulation, particularly among conservatives; New-York Tribune publisher Horace Greeley praised the New-York Daily Times. During the American Civil War, Times correspondents gathered information directly from Confederate states. In 1869, Jones inherited the paper from Raymond, who had changed its name to The New-York Times. Under Jones, the Times began to publish a series of articles criticizing Tammany Hall political boss William M. Tweed, despite vehement opposition from other New York newspapers. In 1871, The New-York Times published Tammany Hall's accounting books; Tweed was tried in 1873 and sentenced to twelve years in prison. The Times earned national recognition for its coverage of Tweed. In 1891, Jones died, creating a management imbroglio: his children had insufficient business acumen to inherit the company, and his will prevented an acquisition of the Times. Editor-in-chief Charles Ransom Miller, editorial editor Edward Cary, and correspondent George F. Spinney established a company to manage The New-York Times, but faced financial difficulties during the Panic of 1893.

1851–1861: Origins and initial success

Seven earlier newspapers titled The New York Times had existed in New York in the early 1800s. Journalists Henry Jarvis Raymond and George Jones, then working for Horace Greeley at the New-York Tribune, formed Raymond, Jones & Company on August 5, 1851. The first issue of the New-York Daily Times was published on September 18, 1851, in the basement of 113 Nassau Street. At the time, the Times frequently culled from European newspapers and from within the United States, particularly California. The New-York Daily Times was well received by conservatives. By its ninth issue, the Times boasted a circulation of ten thousand copies. On its first anniversary, the New-York Daily Times announced it had printed 7,550,000 copies and circulated 24,000 copies a day, although these figures were contested by rival publisher James Gordon Bennett. The following day, the price of the Times increased to two cents (equivalent to $0.77 in 2025). Early investors in the company included Edwin B. Morgan and Christopher Morgan. The New-York Daily Times experimented with multiple formats: the Weekly Family Times circulated until the 1870s, and the Semi-weekly Times lasted several years longer. The growing prevalence of rail transportation also ended the Campaign Times, published for presidential election years. The Times for California was started in 1852 and circulated when mail boats could be sent to California from New York. By 1854, the New-York Daily Times had moved to Nassau and Beekman Streets. The company purchased the Brick Presbyterian Church in 1857, following the congregation's move to Murray Hill. Architect Thomas R. Jackson designed a five-story Romanesque Revival building at the 41 Park Row site. When the New-York Daily Times moved into the building in 1858, the paper became the first housed in a building specifically constructed for a newspaper. On September 14, 1857, Raymond shortened the paper's name to The New-York Times.

1861–1869: Civil War, expansion, and Raymond's death

In the 1860 presidential election, The New-York Times was a leading Republican newspaper.
During the Civil War, the Times experienced a transformation necessitated by the public's demand for recent updates on the war. To gather updates, The New-York Times relied on correspondents in Confederate states rather than telegraphs from the Associated Press. The New-York Times correspondents competed against other newspapers to gather as much information as possible. Benjamin C. Truman, a distinguished war correspondent, reported on the Confederacy's repulse in the Battle of Franklin four days before the Department of War heard from John Schofield. Due to mounting opposition to the Civil War in New York, a series of violent disturbances broke out on July 13, 1863. Thousands of Irish American rioters set flame to the draft registration office and attacked the New-York Tribune office. Warned by the attack on the Tribune, the staff of the Times armed themselves with Gatling guns. Raymond sent sixteen men armed with Minié rifles to the Tribune's office to stave off the mob while two hundred policemen marched onto Printing House Square. The New-York Times remained proud of its coverage of the event. The Civil War drove The New-York Times to purchase more presses and to adopt stereotyping, an approach the New-York Tribune had tested without success. On April 20, 1861, eight days after the attack on Fort Sumter, the Times began issuing a Sunday edition of the paper. Within three years, both The Sunday Times and The New-York Times went up in price to four cents (equivalent to $0.82 in 2025), where the price would remain until 1883. By May 1861, circulation had gone up by 40,000 copies. In December, the paper extended its columns from six to seven, in line with the English newspaper The Times.

The New-York Times suffered a reputational loss in August 1866. Raymond attended the National Union Convention in Philadelphia and composed the Philadelphia Address to endorse Andrew Johnson. The address cost Raymond his position as chairman of the Republican National Committee, and the Times's rivals seized on the opportunity to gain an advantage. According to Raymond, the incident cost the paper US$100,000 (equivalent to US$2.2 million in 2025). In 1868, The New-York Times supported Ulysses S. Grant. Raymond also established principles for the Times to follow, including objecting to "easy but unsound money", among them Greenbacks and later free silver. The paper supported neither Samuel J. Tilden in the 1876 presidential election nor William Jennings Bryan in the 1896 presidential election, falling in line with the National Democratic Party. The New-York Times also supported reforming the tariff and introducing a merit system into the civil service. The Times also became involved in local issues; in 1868, the paper opposed the Erie Railroad.

On June 18, 1869, Raymond died. George Jones inherited the company and took over its editorial and financial end at an annual salary of US$9,000 (equivalent to $218,000 in 2025). The New-York Times's directors, composed of Jones, Leonard Jerome, and James B. Taylor, elected John Bigelow editor. The Black Friday of 1869 occurred that year when investors Jay Gould and James Fisk cornered the gold market. The Times had published an article by Abel Corbin promoting gold, but its prose was rendered innocuous after financial editor Caleb C. Norvell suggested that Corbin had an ulterior motive to "bull gold".
Shortly after Black Friday, Bigelow left The New-York Times, replaced by George Shepard.

1869–1876: Jones era, the Tweed Ring, and national recognition

"No money that could be offered me," one of the paper's stockholders wrote, "should induce me to dispose of a single share of my property to the Tammany faction, or to any man associated with it, or indeed to any person or party whatever until this struggle is fought out."

Under Jones, The New-York Times actively sought to challenge William M. Tweed and the Tweed Ring. The death of Taylor, who was a business partner of Tweed's through the New-York Printing Company, in September 1870 allowed the Times to attack the Tweed Ring. Apart from Harper's Weekly through Thomas Nast, The New-York Times was the only publication in New York that actively went against Tweed; municipal advertising created a virtual hush fund among the other newspapers. Editor Louis J. Jennings publicly questioned Tweed's wealth, which had grown from bankruptcy in 1865 to a mansion on Madison Avenue and 59th Street, in an editorial on September 20. Jennings feuded with the New York World in the following days over his editorial. The Sun jovially suggested that a monument to Tweed, a "benefactor of the people", should be erected, although a great many readers took the suggestion seriously. The Sun later attacked Jennings, writing that his career was "doomed". The New-York Times's and Harper's Weekly's reporting did not elicit a strong response from readers themselves. In October, the Astor Committee, of which John Jacob Astor III was a member, found no wrongdoing, and the Tammany faction was reelected that year.

In January 1871, county auditor James Watson was killed in a sleighing accident. The Times's reporting of the accident a week prior had mentioned Watson's US$10,000 (equivalent to $269,000 in 2025) mare, though readers remained unfazed. To replace Watson, Tweed hired Matthew J. O'Rourke, who secretly worked for James O'Brien, a former sheriff and Tammany insurgent. Through William Copeland, a tax accountant and O'Brien adherent, O'Rourke was able to obtain incriminating entries in the Tweed Ring's books. O'Rourke attempted to offer the books to The Sun, which rejected his offer. In March, Tweed proposed purchasing The New-York Times for US$5 million (equivalent to US$134.38 million in 2025), much to Jones's chagrin. Tweed's offer was publicly rejected in the Times on March 29. On July 8, 1871, The New-York Times published the first excerpts from these books. The Times published the second set on July 19, after the Orange Riots subsided. The release of the Tweed Ring's books severely damaged Tweed; he offered Jones US$5 million to suppress the stories. In early 1871, Raymond's widow had considered selling her stock to Tweed; Jones wired multimillionaire Edwin B. Morgan, who came out of rural retirement to block the move. The New-York Times continued its coverage from July 22 to 29. Tweed was tried in 1873 and sentenced to twelve years in prison, although he only served a year. For their coverage of the Tweed Ring, the Times received praise from newspapers nationally. Despite the recognition and a steadfast stock price, the paper's circulation numbers remained low, and it regularly paid high dividends, helped by low salaries and living costs.
While the Times was preoccupied with its continuous coverage of Tweed, the New-York Tribune was able to cover the Great Chicago Fire and the Great Boston Fire of 1872 in greater detail, although the Times covered the Franco-Prussian War through cable transmissions. In the years following the Tammany campaign, the editors of the Times reconciled their beliefs with the overall Republican Party. In May 1872, the Liberal Republicans gathered to oppose Ulysses S. Grant's reelection bid and the Radical Republicans. At the convention, the Liberal Republicans nominated Horace Greeley. The New-York Times chose to attack Greeley for his beliefs, though it did not resurface his admiration for Fourierism. The appointment of John C. Reid as managing editor allowed the paper to cover the trial of Henry Ward Beecher in full, a feat unheard of in journalism, though not without criticism from readers who felt that the continuous coverage was vulgar.

1876–1896: Democratic support, Jones's death, and financial hardship

Ahead of the 1876 presidential election, the Times's editors rejected a third term for Grant and did not believe James G. Blaine would be a proper candidate. Jennings's radical Republicanism clashed with Jones's moderate beliefs, and Jennings plotted to solidify control of The New-York Times through the estate of James B. Taylor, to further his agenda and forge the paper into an organ of the party. Jennings's efforts were stopped when Jones purchased Taylor's stock for US$150,000 (equivalent to US$4.54 million in 2025) on February 4, 1876, a figure widely reported in financial circles; rival papers refused to believe that the stock was worth that much, accusing the Times of inflating the price by bidding against Jennings and claiming that part of the price represented "back dividends". Jennings resigned several months later and became a Member of Parliament. John Foord, who had transcribed the Connolly books, succeeded him, serving until 1883.

Entering the election, The New-York Times was a Republican paper with a streak of independence. Emboldened by the political controversy surrounding the Mulligan letters, which prevented Blaine from receiving the nomination, the Times supported Rutherford B. Hayes and vehemently attacked Samuel J. Tilden.

"[The Times] will not support Mr. Blaine for the presidency. It will advise no man to vote for him, and its reasons for this are perfectly well understood by everybody that has ever read it."

The New-York Times supported James A. Garfield, victor of the 1880 presidential election, through the Roscoe Conkling patronage controversy. Frank D. Root of the Times exposed the Star Route scandal in 1881, the same year that the paper exposed New York Supreme Court justice Theodoric R. Westbrook's support for Jay Gould in controlling the Manhattan Railway Company and a US$250,000 (equivalent to US$8.34 million in 2025) fund for Grant, the latter earning the Times more recognition than shock. These exposés sustained The New-York Times in the 1880s. In April 1883, Charles Ransom Miller succeeded Foord as editor-in-chief. Amid rifts in the Republican Party in 1884, the Times supported neither Blaine nor Chester A. Arthur in an editorial on May 23. Although much of the editorial staff believed the paper should support the Republican ticket, the editorials reflected popular sentiment. On June 7, following Blaine's nomination, in an editorial titled "Facing the Fire of Defeat", The New-York Times officially dissociated itself from the Republican Party.
Citing Cleveland's gubernatorial experience, The New-York Times supported him in the 1884 presidential election. The paper took a financial hit, from a net profit of US$188,000 (equivalent to US$6.5 million in 2025) in 1883 to US$56,000 (equivalent to US$2.01 million in 2025), although much of the loss was incurred by the Times cutting its price from four cents (equivalent to $1.38 in 2025) to two cents (equivalent to $0.72 in 2025). The New-York Times continued to support Cleveland for upholding many of the ideals laid out by Henry Jarvis Raymond.

In the late 1880s and throughout the 1890s, the Times faced a changing media landscape, both from within New York and internationally, as the Second Industrial Revolution began. The New-York Times published the Spanish Treaty of 1884 on December 8, received through cable; at a purported cost of US$8,000 (equivalent to $286,667 in 2025), it was the most expensive cable message the paper had received. Through Harold Frederic's cable letter, readers in New York were able to understand global affairs, including the Proclamation of the Republic in Brazil, which overthrew Pedro II. As the Dickensian New York dissipated, the Times covered Charles F. Brush's arc lamps replacing gaslight on Broadway, the elevated railroads on Third Avenue, and Thomas Edison's Kinetoscope. A mellower editorial page slowly came under the influence of Edward Cary, a Quaker. The technological advancements in New York made up for a slower news cycle. The New-York Times was the first publication to cover the sinking of the SS Oregon on March 14, 1886. Despite supporting Cleveland in the 1888 presidential election, the Times did not accept Democrat David B. Hill.

In part prompted by the construction of the New York Tribune Building, construction on a second building at 41 Park Row began in 1888 using designs from Beaux-Arts architect George B. Post. Reconstructing the building posed a logistical challenge, as employees of the Times needed to work while the new building was erected. The new building gradually took form over the next year, and by April 1889, construction was completed. Jones spoke fondly of the new building, although annual profits dropped from US$100,000 (equivalent to US$3.58 million in 2025) in the mid-1880s to US$15,000 (equivalent to US$537,500 in 2025) in 1890.

In the final year of Jones's life, The New-York Times undertook an active effort, through W. C. Van Antwerp, to expose the financial wrongdoings of the New York Life Insurance Company. The New York Life Insurance Company sued Jones and Miller personally, but later asked how it could fix its wrongdoings and appointed John A. McCall as its president. On the morning of August 12, 1891, Jones died at his home in Poland, Maine. The borders of the next day's paper were blackened and an editorial was written detailing his significance to the paper; it stated that "no writer of the Times was ever required or asked to urge upon the public views which he did not accept himself". Although his heirs owned a great majority of stock in the Times, they were not journalistically minded. Jones's son, Gilbert, was trained in The New-York Times's office, but neither he nor Jones's son-in-law, Henry L. Dyer, could manage the business properly.
Jones's children took the profits he left them without regard for where they came from, and the rest of the family placed little value on the paper itself. In late 1892, the staff of The New-York Times learned that the company would likely be sold to a man antithetical to Raymond and Jones's values, although the will stipulated the paper should never be sold. On April 13, 1893, the Times was sold for US$1 million (equivalent to US$35.83 million in 2025) to the New-York Times Publishing Company, a company managed by Cary and George F. Spinney and chaired by Miller.

The company that Miller, Spinney, and Cary received was financially unsustainable. Fundamentally, The New-York Times's business model depended on leaner newspaper production, and the Times did not implement cost accounting. The presses were dilapidated, and the Linotype machines were leased. With Jones went his expertise on how to manage the rusted printing machines. The men soon discovered that what the company rented at 41 Park Row for US$40,000 (equivalent to US$1.43 million in 2025) was the ground, not the structure. The rivalry between William Randolph Hearst and Joseph Pulitzer encouraged those two men to engage in increasingly sensationalist journalism. The free silver agitation of 1893, which ultimately led to an economic depression, gave the paper a death blow. The men could find neither money nor advertising to carry on the paper, although they were able to sell US$250,000 (equivalent to US$8.96 million in 2025) in debentures. In December 1891, the Times had increased its price to three cents (equivalent to $1.07 in 2025), a move that furthered the paper's decline; to advertise the new price, the paper's borders were printed in color.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Institute_for_Ethics_and_Emerging_Technologies] | [TOKENS: 2396]
Institute for Ethics and Emerging Technologies

The Institute for Ethics and Emerging Technologies (IEET) is a technoprogressive think tank that seeks to "promote ideas about how technological progress can increase freedom, happiness, and human flourishing in democratic societies." It was incorporated in the United States in 2004, as a non-profit 501(c)(3) organization, by philosopher Nick Bostrom and bioethicist James Hughes. The think tank aims to influence the development of public policies that distribute the benefits and reduce the risks of technological change. It has been described as "[a]mong the more important groups" in the transhumanist movement, and as being among the transhumanist groups that "play a strong role in the academic arena". The IEET works with Humanity Plus (also founded and chaired by Bostrom and Hughes, and previously known as the World Transhumanist Association), an international non-governmental organization with a similar mission but with an activist rather than academic approach. A number of technoprogressive thinkers are offered positions as IEET Fellows. Individuals who have accepted such appointments with the IEET support the institute's mission, but they have expressed a wide range of views about emerging technologies, and not all identify themselves as transhumanists. In early October 2012, Kris Notaro became the managing director of the IEET after the previous managing director, Hank Pellissier, stepped down. In April 2016, Steven Umbrello became the managing director of the IEET. Marcelo Rinesi is the IEET's chief technology officer.

Activities

The Institute publishes the Journal of Ethics and Emerging Technologies (JEET), formerly the Journal of Evolution and Technology (JET), a peer-reviewed academic journal. JET was established in 1998 as the Journal of Transhumanism and obtained its current title in 2004. The editor-in-chief is Mark Walker. It covers futurological research into long-term developments in science, technology, and philosophy that "many mainstream journals shun as too speculative, radical, or interdisciplinary." The institute also maintains a technology and ethics blog that is supported by various writers. In 2006, the IEET launched a number of program areas, but it has since shifted its research away from these programs and towards research on the policy implications of human enhancement and other emerging technologies. It has since partnered with the Applied Ethics Center at the University of Massachusetts Boston to focus on two specific programs. In late May 2006, the IEET held the Human Enhancement Technologies and Human Rights conference at the Stanford University Law School in Stanford, California. The IEET, along with other progressive organizations, hosted a conference in December 2013 at Yale University on giving various species "personhood" rights. Fellows of the Institute represent the Institute at various conferences and events, including those of the NASA Institute for Advanced Concepts and the American Association for the Advancement of Science. In 2014, the IEET led or co-sponsored five conferences, including the Eros Evolving: The Future of Love, Sex, Marriage and Beauty conference in April in Piedmont, California, and the Global Existential Risks and Radical Futures conference in June in Piedmont, California.

Reception
Wesley J. Smith, an American conservative lawyer and advocate of intelligent design, wrote that the institute has one of the most active transhumanist websites, and that its writers write on the "nonsense of uploading minds into computers and fashioning a post humanity." Smith also criticized the results of the institute's online poll indicating that the majority of the institute's readers are atheist or agnostic. According to Smith, this was evidence that transhumanism is a religion and a desperate attempt to find purpose in a nihilistic and materialistic world. The institute's advocacy project to raise the status of animals to the legal status of personhood also drew criticism from Smith, because he claimed humans are exceptional and raising the status of animals may lower the status of humans. Katarina Felsted and Scott D. Wright wrote that although the IEET considers itself technoprogressive, some of its views can be described as strong transhumanism or a "radical version of post ageing", and that one particular criticism of both moderate and strong transhumanism is that moral arbitrariness undermines both forms.

Recent developments (2016–present)

In 2016, the IEET started to work with other universities on research projects to expand its research programs and reach wider audiences. The IEET officially partnered with the Applied Ethics Center at UMass Boston to study two topics that were becoming important in policy discussions. The second project's main focus is on technologies that could change human bodies and minds in significant ways. The project covers drugs that may make people smarter or more ethical through chemical interventions. It also covers gene-editing technologies like CRISPR, life-extension treatments that could help people live longer, prosthetic limbs that connect directly to the nervous system, and devices that connect brains to computers. Researchers try to answer questions such as how the FDA should test anti-aging drugs, since they don't fit into traditional drug categories. They also examine whether enhancement technologies would affect people's sense of identity and who they think they are as individuals. Another big question is how to regulate brain technologies without being overly controlling while still protecting people from their potential dangers. One controversial question is whether technologies that change moral decisions should be used in prisons or courtrooms to rehabilitate criminals. The IEET looks not only at the scientific possibilities but also at the ethical problems that come with the use of these technologies. Researchers also study how different countries approach the regulation of human enhancement and whether there should be international regulations for these types of technologies. To support research programs like this, UMass Boston and the IEET created three postdoctoral fellowship positions focused specifically on these topics. The fellowships fund early-career researchers to spend time developing their expertise in technology ethics while producing academic work. Researchers have published academic papers covering subjects such as how AI relates to practical wisdom and the ability to make good decisions in complex situations. There are also policy reports about AI therapy chatbots and whether they should be regulated like medical devices.
Researchers write not only for conferences but also for general audiences, so that non-specialists can understand the ethical questions raised by new technologies. Their articles ask whether schools can prepare students for automation and what kind of education will be needed in the future. They also discuss how AI can keep people from making genuine choices by predicting what they want and limiting their options. They present their research at academic conferences and policy workshops to share their findings with both scholars and policymakers who are working on technology regulation, combining solid research with hands-on policy analysis to address real problems that governments and companies face when dealing with emerging technologies.

The IEET expanded internationally in 2024 in order to connect more researchers and policymakers in different regions. As part of this expansion, the IEET established the Ethics and Emerging Technologies center at the University of Turin in Italy, giving it a presence in Europe. Its researchers study ethical problems that come from emerging technologies and how different European countries approach technology regulation. The main focus is on keeping technology human-centered and on understanding technologies from multiple perspectives, including philosophy, law, and social science. The IEET tackles technology ethics as a global problem and works on making AI systems trustworthy and aligned with people's values. People are trained through fellowship programs and workshops, producing research that is useful for policymakers globally who are creating new technology regulations. Steven Umbrello is the managing director of the Ethics and Emerging Technologies center at the University of Turin and oversees its research programs and fellowship activities. There are many collaborations with universities, governments, and companies from around the world to understand how technology ethics plays out in different regulatory contexts. The Turin center allows the IEET to compare how European countries regulate technology differently from the USA and other regions, so that better international rules for emerging technologies can be created.

James Hughes, the organization's co-founder, is the executive director and provides the overall direction for the IEET's research programs and partnerships. Steven Umbrello, as managing director, handles day-to-day operations and oversees the fellowship programs and research initiatives. Marcelo Rinesi is the chief technology officer, making sure the organization understands new technologies correctly and can provide accurate analysis of technical developments.

In 2025, UNESCO added the IEET to its database of civil society organizations working on AI ethics and policy. This addition confirms that the IEET is recognized internationally as a voice in technology ethics debates and policy discussions, giving the organization more visibility in international discussion. The IEET works on making sure that new technologies help people and society rather than cause harm, researching both the benefits and the risks. The team maintains partnerships with academic institutions, government agencies, and international organizations, which helps create better international rules for emerging technologies. Between 2020 and 2025, the IEET has supported researchers through fellowship programs that focus on different aspects of technology ethics.
Alec Stubbs completed a fellowship in which he studied how automation and AI might change employment and what policies could help workers adapt to these changes. A fellowship in human enhancement biology was awarded to Cristiano Cali, whose research examined the ethical and biological issues surrounding technologies that have the potential to alter human capacities. Cody Turner held a fellowship studying human thought and artificial intelligence, focusing on the societal implications of how artificial intelligence systems digest information differently than humans do.

The IEET's blog has been moved to Substack, which changed how the organization shares content with readers. This makes it much easier for people to read its articles and research and to subscribe to updates when new content is posted. The platform allows writers to publish longer essays rather than standard blog posts, giving them more space to explain ethical questions about technology in depth. Writers can also interact with readers through comments and discussions, creating conversations between researchers and the public. The move to Substack happened because the organization wanted to reach more people and make content easier to access on different devices, including phones and tablets.

To increase the number of people who could access its work without paying subscription fees, the IEET published several research papers for free to the public online in 2024. The topics the researchers covered included human enhancement, reproductive ethics, and quantum technology. Some of the online papers discussed questions such as whether parents should be allowed to use gene editing on their children, or how quantum computers might affect privacy and security. The IEET's goal is to make technology ethics available not just to universities or organizations but to everyone. Policymakers, journalists, and students can download the free papers to read and learn about different viewpoints on technological ethics.

Throughout the year, new issues of the Journal of Ethics and Emerging Technologies are released on a regular basis. Unlike many academic journals that charge for publication, the journal is peer reviewed, completely free to read, and does not charge authors to publish their work. It also distributes all of its content under a Creative Commons license, allowing users to share and incorporate articles into their own work without fear of legal repercussions or copyright breaches. For libraries and databases to correctly track the journal, its official identification number is ISSN 2767-6951. To reach a global readership, the journal publishes articles in English and welcomes contributions from academics worldwide. To maintain quality standards, all submissions go through a peer review procedure in which additional professionals examine and assess the work before it is officially published. Students, instructors, and researchers who would not otherwise have access to pricey academic publications at their schools are among the broader groups that the IEET is able to reach thanks to its open-access strategy.
========================================
[SOURCE: https://www.theverge.com/column/811549/trump-tariff-shakedown] | [TOKENS: 4050]
The great tariff shakedown

We’re all paying for Donald Trump’s erratic policies.

by Mia Sato, Features Writer, The Verge
Nov 2, 2025, 1:00 PM UTC

If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement. Image: Cath Virginia / The Verge, Getty Images

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the real-world effects of Trump’s tariffs, follow Mia Sato. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

How it started

The economists tried to warn us. From the beginning, trade policy experts cautioned that tariffs would raise consumer prices and wouldn’t create the American manufacturing renaissance that the Trump administration promised. One thing that I wrote about tariffs back in February keeps popping in my head: When I asked economists and policy folks what high tariffs would actually look like, they were hesitant to be prescriptive. That’s because each industry operates in different ways, and every commodity we have access to arrives to us having touched so many hands. It’s not that one day everything will jump in price, or in equal measure — instead, the consumer pain could be drawn out, with price increases or supply disruptions popping up without much warning.

Perhaps the most immediately disruptive tariff policy came in September, when the de minimis rule was suspended for all countries shipping small packages to the US. Overnight, the change affected all parcels valued under $800 that previously entered the US tariff-free. Thanks to the de minimis rule, I was a regular shopper on Japanese eBay for years; on Etsy, I bought handmade jewelry from India and antique treasures from the UK. I haven’t purchased anything that ships from abroad since this summer, wary of surprise tariff bills.
If you have any kind of hobby that even partially depends on physical goods coming from abroad — photography, crafting, K-beauty, you name it — you’ve probably already gotten into a new habit before you buy anything online: Check to see if something is even available to be shipped to you in the first place. Then, try to make sense of what, if anything, you’ll owe in additional fees.

How it’s going

It will probably come as no surprise: Consumers have paid up to 55 percent of the tariffs imposed by Trump, according to a Goldman Sachs report released in mid-October. And that number could go higher still: The New York Times reported that even some companies that initially absorbed costs instead of passing them on to consumers are now looking for ways to boost profits that tariffs ate into.

Another thing that economists told me months ago that’s come to bear: It’s not just imports that will get more expensive. By now many people understand that even products made in the US often use smaller components made somewhere else, therefore making US businesses’ costs jump. Data from Harvard’s Pricing Lab shows that it’s not just imported goods that have risen in price — domestic items have gotten more expensive, too. Some of that could be because of imported materials or components. But in industries where consumers have more US-made alternatives to imports, domestic manufacturers can also hike their own prices in response to imports that are now more expensive. They raise prices because they can.

Other companies have said outright that Trump’s tariff policies are triggering changes to their business. The sporting and outdoor goods company Orvis is planning to close half its stores by 2026 and slim down its product offerings due to the “unprecedented tariff landscape.” Kids clothing store Carter’s similarly said that tariffs were eating into its profit margins and that it would close 150 stores and cut 300 jobs.

The new tariffs — especially those resulting from the end of the de minimis rule — create new headaches for consumers. I’ve gotten in the habit of triple-checking where products are shipping from, and if I’m not certain, I message the seller and confirm. Some shoppers’ packages are stuck in customs and in transit, and UPS told NBC News that it was “disposing of” some shipments amid the backlog.

Producers have also come up with… creative ways to blunt the impact of the tariffs. For consumers, Halloween candy this year might have been smaller thanks to shrinkflation, but also less chocolate-y as manufacturers deal with rising prices of cocoa. Tariffs are far from the only reason that treats are more expensive, but CNN notes that candy makers are coming up with some truly cursed new specialty flavors that aren’t as chocolate-forward, like “cinnamon-toast-flavored Kit Kats.” No, thank you.

What happens next

The holidays will be another test of the resiliency of our supply chains and Trump’s commitment to his wildly unpopular trade policies. Around 90 percent of fake Christmas trees are made in China, for example, and some importers have warned that there could be shortages of decorations this year (the shortages are also already being used to encourage shoppers to “stock up” and “get ahead of the curve,” so take that as you will).

Trump’s tariffs are also being challenged in court, and the Supreme Court is scheduled to hear arguments in the first week of November.
Instead of going through Congress, Trump imposed tariffs using the International Emergency Economic Powers Act.

It’s hard not to feel like Trump’s tariffs are a continuous own-goal, over and over again — especially because after coming down hard on certain countries and commodities, the Trump administration regularly cuts side deals to decrease the taxes. You know how retailers will sometimes raise prices before a sale so the markdowns make shoppers think they’re getting a deal? That’s what this feels like. Most recently, Trump and Chinese president Xi Jinping reached a “deal” where Trump agreed to “trim” tariffs on China by “10 percent” (the tax is still 47 percent). In exchange, China said it would work to stop fentanyl from coming into the US from China, pause restrictions on the export of rare-earth minerals essential to auto and tech industries, and buy many million metric tons of US soybeans over the next several years (recall that Trump triggered a soybean crisis during his last presidency, too).

I say “deal,” and “trim,” and “10 percent” because these terms are far from settled. If the last several months are any indication, the framework of the agreement could be one clever commercial away from being blown up entirely. The public flattery of Trump — Apple’s gold plaque, the Korean crown, Japan serving US rice to Trump at lunch, YouTube throwing millions at Trump’s ballroom and Nvidia’s Jensen Huang bragging about his support — is shocking enough. The promises made behind closed doors in negotiating these deals are apparently fickle enough to change on a dime; the only certainty is the rest of us will feel it regardless.

By the way

The $800 per person per day de minimis exemption was indeed on the high end compared to other countries: EU nations have a 150 euro threshold, for example. The US limit was $200 until 2016, when it was quadrupled.

The best way to get tariffs reduced or removed continues to be gold stuff, apparently. During a visit to South Korea, local officials gifted Trump a gold crown; shortly after, the two parties announced they had reached a trade deal with lower reciprocal tariffs. They also served Trump mini beef patties with ketchup.

K-pop (actually, just pop in general) star RM, a member of BTS, spoke at a trade forum in South Korea that hosted Trump before his trade talks with South Korean leaders. “K-pop’s shining success is proof that cultural diversity and creativity are the greatest human potential — a force with no borders and no limits for growth,” he said at the event. It might seem random to have a pop star talk about global trade, but K-pop is one of the most successful (and lucrative) exports ever. It’s kind of perfect.

Is anyone else’s local Christmas celebration being canceled because of tariffs?

Read this

This Times piece on Malaysia’s “Furniture City” and the artisans and manufacturers whose livelihoods hang in the balance.

This nice New York Times explainer on one of the tests SCOTUS might use to evaluate the constitutionality of Trump’s tariffs. Note that the same test was used to reject several policies by the Biden administration, like student loan forgiveness and the pandemic-era eviction moratorium.
This Business of Fashion story on the effects of US tariffs on India’s garment workers.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Byzantine_Palestine] | [TOKENS: 2314]
Byzantine Palestine

Byzantine Palestine or Palaestina refers to the geographic, political, and cultural landscape of Palestine during the period of Byzantine rule (early 4th to mid-7th centuries CE), beginning with the consolidation of Constantine's power in the early 4th century CE and lasting until the Arab-Muslim conquest in the 7th century CE. The term generally designates the territories reorganized into the provinces of Palaestina Prima, Secunda, and Tertia (or Salutaris) between the late 4th and 5th centuries (covering most of modern-day Israel and Palestine and parts of Jordan and Syria). The label "Byzantine" is a modern, artificial term that has been called "imaginary"; the division is not unique to Palestine and reflects the conventional historiographical boundary between ancient history and the Middle Ages. The Byzantine period in Palestine was politically a direct continuation of Roman rule, which began with Pompey's conquest in 63 BCE and, from 395 CE, persisted in the form of the Eastern Roman Empire. Culturally, it followed a historical continuum that began in 332 BCE with the conquest of Alexander the Great and the incorporation of the Levant into the Hellenistic world, later evolving into a Hellenistic–Roman–Byzantine sphere. The Byzantine period is most distinguished from earlier times by major religious and demographic changes. Christianity became the state religion and Palestine assumed a central place in the Christian world, while the Jewish, Samaritan and polytheistic populations, facing increasing restrictions, became religious minorities. The Jewish community declined in influence relative to diaspora communities, with the Babylonian Jewish community emerging as the leading center of Judaism.

Christianity

The veneration of Palestine as the Holy Land intensified after Constantine established his rule over the eastern half of the Roman Empire in 324 CE. In 325 CE, he convened the First Council of Nicaea, where bishops from across the empire gathered to reach consensus on theological matters. Among the participants were Macarius of Jerusalem and Eusebius of Caesarea, and the see of Jerusalem was accorded a special honor. In 326 CE, Constantine's mother Helena made a pilgrimage to the Holy Land and supported Macarius's construction projects in Jerusalem. The most significant was the Church of the Anastasis (later known as the Church of the Holy Sepulchre), built at the site identified by tradition as the location of Jesus's burial, where the Temple of Aphrodite had previously stood. This church became a focal point for Christian Jerusalem. Other churches from this period include the Church of Eleona on the Mount of Olives, the Church of the Nativity in Bethlehem, a 5th-century church in Jabalia, a pair of churches at Khirbet et-Tireh, and the Church of Mamre in Hebron. The latter was reportedly constructed at Constantine's suggestion to suppress a syncretic cult practiced at the site by Christians, Jews, and polytheists. In the Galilee, Joseph of Tiberias, a Christian convert from Judaism known from the writings of Epiphanius of Salamis, established churches in towns with significant Jewish populations, including Tiberias, Sepphoris, Nazareth, and Capernaum.

Jewish community

During the first generation of Christian rule in Palestine, Jewish communities were pressured by what they saw as the appropriation of the Land of Israel into a Christian Holy Land.
Together with increasing anti-Jewish sentiment from Christian communities and leadership, this culminated in a Jewish revolt in the Galilee, launched in Sepphoris in 350 CE, with Patricius proclaimed as its leader. At that time, the Roman empire had fallen into civil war between emperor Constantius II and Magnentius. Constantius Gallus, the ruler of the East, sent the Roman general Ursicinus, who crushed the rebellion in 351 CE. In 361 CE, Julian became the last non-Christian emperor of the Roman Empire. He rejected the Christian faith and sought to restore the traditional Roman religion. As part of this endeavor, he promised the Jews to build the Third Temple in Jerusalem. In 363 CE, a pair of severe earthquakes shook Palestine and Syria and led to a halt in the construction efforts. Later that year Julian was killed in his campaign against the Sasanian Empire, and the project was completely abandoned, as all subsequent emperors were Christians. The Jewish community had maintained its autonomy under the Sanhedrin throughout most of the 4th century CE. However, during the reign of Arcadius, an imperial law issued in 398 limited the Sanhedrin's jurisdiction over civil matters. Emperor Theodosius II further imposed a ban on the construction of new synagogues and deprived Gamaliel VI, the Nasi of the Sanhedrin, of his honorary titles. Upon Gamaliel's death in 429 CE, Theodosius II abolished the institution of the Sanhedrin, diverting its taxes to imperial officers.

Settlements and population

Between the 4th and mid-6th centuries CE, Byzantine Palestine experienced considerable demographic growth. Present conventional estimates for the population of Palaestina in the mid-6th century CE, based on qualitative analysis of archaeological data and agricultural potential, stand at around 1 million inhabitants. In early stages of research it was believed that this growth was linked to the Christianization of Palestine in the 4th century, a view which was accepted by many scholars. Recent developments in archaeological research and the expanding dataset have led scholars to conclude that population growth began already in the 2nd and 3rd centuries, similar to other regions throughout the empire. This process came to a halt and went into decline from the mid-6th century, following the Samaritan revolts, plagues and political crises. Religiously, the population of Palestine comprised four main groups: Jews, Samaritans, polytheists, and Christians. In the 4th century and most of the 5th century, polytheists formed the majority, particularly in the southern regions. By the late 5th century, Christians had become the largest religious group. In the 5th and 6th centuries, polytheism declined sharply and virtually disappeared, largely as a result of imperial and ecclesiastical policies.

Administration

During Diocletian's reign (r. 284–305 CE) the Empire's political divisions were reorganized. He introduced the Tetrarchy system, in which the empire was split between east and west, each half ruled by a co-emperor titled Augustus. Later, each Augustus appointed an heir, who would rule half of his domain and bear the title of Caesar, splitting the empire into four constituent parts. Each of these parts was governed by a Praetorian prefect. The empire was further divided into smaller administrative units called dioceses. While the system of four co-emperors collapsed almost immediately, the provincial division remained intact throughout the Byzantine period.
The province of Palaestina was included within the Diocese of the East, part of the Praetorian prefecture of the East. Since the First Jewish–Roman War (70 CE), the province of Syria Palaestina had been administered by a governor of Legatus Augusti pro praetore rank, responsible for military and judicial matters, and a Procurator responsible for financial matters. The capital was Caesarea Maritima, and the province included most of modern-day Israel and Palestine, as well as a strip of land called Perea (in modern-day Jordan); it excluded the Galilee and Golan Heights regions, which were part of Phoenice, and the Negev, which was part of Arabia. The reforms of Diocletian transferred military responsibility from the governor to a new office, the Dux Palaestinae. The governor's role was administering justice, collecting taxes, maintaining the treasury, and preserving public order with a reduced military force. The governor's title changed to iudex ("judge"). His rank was that of praeses, later elevated to consularis, and in the 380s CE to proconsul, reflecting an expansion of his office and the allocation of greater funds.

The province of Palaestina underwent numerous transformations during the 4th century CE, the exact chronology of which remains ambiguous. While Diocletian's reforms generally aimed to reduce the size of provinces, Palaestina was enlarged in the south at the expense of Arabia, with the annexation of the Negev, Sinai, and parts of Transjordan south of Wadi al-Hasa around 295 CE. Dora was also annexed from Phoenice. The territories in the south appear to have been transferred between Palaestina and Arabia on several occasions during the 4th century, possibly as many as six times. In some sources, a province called "New Arabia" is mentioned between 315 and the late 4th century, with its capital at Eleutheropolis, though both its boundaries and even its existence are disputed by modern scholars. In 357/8, the southern parts of Palaestina, including the town of Elusa, seem to have been incorporated into a smaller province, which was also called Palaestina (and was likely an early predecessor of Palaestina Salutaris), and later reabsorbed into Palaestina. It is possible that the reunification of Palaestina was linked to the implementation of Theodosius I's religious reforms, and it is around the last years of his reign that Palaestina was divided into three provinces.

The ultimate partition of Palaestina into three provinces took place at the turn of the 5th century, sometime between 390 and 409 CE. Most of Palaestina was included in Palaestina Prima, whose capital remained Caesarea Maritima, though the governor's rank was reduced back to Consularis. Palaestina Secunda in the north included the northern Samarian Highlands, the western Galilee and northern Perea (modern-day southern Syria); it was governed by a Praeses from Scythopolis. The province of Palaestina Salutaris (or "Palaestina Tertia") in the south included the Negev, Sinai and southern Transjordan; its capital was Petra, and it too was governed by a Praeses. This administrative division remained mostly unchanged until the reign of Justinian I, with one exception: sometime between the middle of the 5th century and 518 CE, the border of Palaestina Salutaris was extended north to Wadi Mujib, at the expense of Arabia. A major transformation occurred in 529 when, following the Samaritan revolts, the rank of the governor of Palaestina Secunda was elevated to that of a Consularis.
In 536, Justinian I elevated the rank of the governor of Palaestina Prima to Proconsul and gave him permission to intervene in Palaestina Secunda if the local governor could not quell unrest, as well as to draw troops from the Dux Palaestinae.
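Because the partition just described spreads parallel facts (capital, governor's rank, territory) across several sentences, here is the same information restated as a small Python mapping, purely as a reading aid. Every entry is taken from the paragraphs above; the ranks shown are those at the time of the partition, before the elevations of 529 and 536.

```python
# The c. 390-409 CE partition of Palaestina, restated from the prose above.
# Ranks are those at the time of partition; Secunda's governor was raised
# to Consularis in 529, and Prima's to Proconsul in 536.
PALAESTINA_PROVINCES = {
    "Palaestina Prima": {
        "capital": "Caesarea Maritima",
        "governor_rank": "Consularis",
        "territory": "most of former Palaestina",
    },
    "Palaestina Secunda": {
        "capital": "Scythopolis",
        "governor_rank": "Praeses",
        "territory": "northern Samarian Highlands, western Galilee, northern Perea",
    },
    "Palaestina Salutaris (Tertia)": {
        "capital": "Petra",
        "governor_rank": "Praeses",
        "territory": "Negev, Sinai, southern Transjordan",
    },
}

if __name__ == "__main__":
    for province, info in PALAESTINA_PROVINCES.items():
        print(f"{province}: capital {info['capital']}, "
              f"governed by a {info['governor_rank']}")
```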
========================================
[SOURCE: https://en.wikipedia.org/wiki/Elizabeth_II#Succession] | [TOKENS: 11449]
Elizabeth II

Elizabeth II (Elizabeth Alexandra Mary; 21 April 1926 – 8 September 2022) was Queen of the United Kingdom and other Commonwealth realms from 6 February 1952 until her death in 2022. She was queen regnant of 32 sovereign states during her lifetime and was the monarch of 15 realms at her death. Her reign of 70 years and 214 days is the longest of any British monarch, the second-longest of any monarch of a sovereign state, and the longest of any queen regnant in history. Elizabeth was born in Mayfair, London, during the reign of her paternal grandfather, King George V. She was the first child of the Duke and Duchess of York (later King George VI and Queen Elizabeth the Queen Mother). Her father acceded to the throne in 1936 upon the abdication of his brother Edward VIII, making the ten-year-old Princess Elizabeth the heir presumptive. She was educated privately at home and began to undertake public duties during the Second World War, serving in the Auxiliary Territorial Service. In November 1947, she married Philip Mountbatten, a former prince of Greece and Denmark. Their marriage lasted 73 years until his death in 2021. They had four children: Charles, Anne, Andrew, and Edward. When her father died in February 1952, Elizabeth, then 25 years old, became queen of seven independent Commonwealth countries: the United Kingdom, Canada, Australia, New Zealand, South Africa, Pakistan, and Ceylon, as well as head of the Commonwealth. Elizabeth reigned as a constitutional monarch through significant political changes such as the Troubles in Northern Ireland, devolution in the United Kingdom, the decolonisation of Africa, and the United Kingdom's accession to the European Communities as well as its subsequent withdrawal. The number of her realms varied over time as territories gained independence and some realms became republics. As queen, Elizabeth was served by more than 170 prime ministers across her realms. Her many historic visits and meetings included state visits to China in 1986, to Russia in 1994, and to the Republic of Ireland in 2011, and meetings with five popes and fourteen US presidents. Significant events included Elizabeth's coronation in 1953 and the celebrations of her Silver, Golden, Diamond, and Platinum jubilees. Although there was occasional republican sentiment and media criticism of her family—particularly after the breakdowns of her children's marriages, her annus horribilis in 1992, and the death of her former daughter-in-law Diana in 1997—support for the monarchy and her popularity in the United Kingdom remained consistently high. Elizabeth died aged 96 at Balmoral Castle, and was succeeded by her eldest son, Charles III.

Early life

Elizabeth was born at 2:40 am on 21 April 1926 by Caesarean section at her maternal grandfather's London home, 17 Bruton Street in Mayfair, the first child of Prince Albert, Duke of York (later King George VI), and his wife, Elizabeth, Duchess of York (later Queen Elizabeth the Queen Mother). Her father was the second son of King George V and Queen Mary, and her mother was the youngest daughter of Scottish aristocrat Claude Bowes-Lyon, 14th Earl of Strathmore and Kinghorne and his wife Cecilia (née Cavendish-Bentinck). She was baptised by the Archbishop of York, Cosmo Gordon Lang, in the private chapel at Buckingham Palace on 29 May,[b] and she was named Elizabeth after her mother; Alexandra after her paternal great-grandmother, who had died five months earlier; and Mary after her paternal grandmother.
She was affectionately called "Lilibet" by her close family, based on what she called herself at first. She was cherished by her grandfather George V, whom she affectionately called "Grandpa England", and her regular visits during his serious illness in 1929 were credited in the popular press and by later biographers with raising his spirits and aiding his recovery. Elizabeth's sole sibling, Princess Margaret, was born in 1930. The two princesses were cared for by their nanny, Clara Knight, and educated at home under the supervision of their mother and their governess, Marion Crawford. Lessons concentrated on history, language, literature, and music. Crawford published a biography of Elizabeth and Margaret's childhood years titled The Little Princesses in 1950, much to the dismay of the royal family. The book describes Elizabeth's love of horses and dogs, her orderliness, and her attitude of responsibility. Others echoed such observations: Winston Churchill described Elizabeth when she was two as "a character. She has an air of authority and reflectiveness astonishing in an infant." Her cousin Margaret Rhodes described her as "a jolly little girl, but fundamentally sensible and well-behaved". Elizabeth's early life was spent primarily at the Yorks' residences at 145 Piccadilly (their town house in London) and Royal Lodge in Windsor. Heir presumptive During her grandfather's reign, Elizabeth was third in the line of succession to the British throne, behind her uncle Edward, Prince of Wales, and her father. Although her birth generated public interest, she was not expected to become queen, as Edward was still young and likely to marry and have children of his own, who would precede Elizabeth in the line of succession. When her grandfather died in 1936 and her uncle succeeded as Edward VIII, she became second in line to the throne, after her father. Later that year, Edward abdicated, after his proposed marriage to divorced American socialite Wallis Simpson provoked a constitutional crisis. Consequently, Elizabeth's father became king, taking the regnal name George VI. Since Elizabeth had no brothers, she became heir presumptive. If her parents had subsequently had a son, he would have been heir apparent and before her in the line of succession, which was determined by the male-preference primogeniture in effect at the time. Elizabeth received private tuition in constitutional history from Henry Marten, Vice-Provost of Eton College, and learned French from a succession of native-speaking governesses. A Girl Guides company, the 1st Buckingham Palace Company, was formed specifically so she could socialise with girls her age. Later, she was enrolled as a Sea Ranger. In 1939, Elizabeth's parents toured Canada and the United States. As in 1927, when they had toured Australia and New Zealand, Elizabeth remained in Britain since her father thought she was too young to undertake public tours. She "looked tearful" as her parents departed. They corresponded regularly, and she and her parents made the first royal transatlantic telephone call on 18 May. In September 1939, Britain entered the Second World War. Lord Hailsham suggested that Princesses Elizabeth and Margaret should be evacuated to Canada to avoid the frequent aerial bombings of London by the Luftwaffe. This was rejected by their mother, who declared, "The children won't go without me. I won't leave without the King. And the King will never leave." 
The princesses stayed at Balmoral Castle, Scotland, until Christmas 1939, when they moved to Sandringham House, Norfolk. From February to May 1940, they lived at Royal Lodge, Windsor, until moving to Windsor Castle, where they lived for most of the next five years. At Windsor, the princesses staged pantomimes at Christmas in aid of the Queen's Wool Fund, which bought yarn to knit into military garments. In 1940, the 14-year-old Elizabeth made her first radio broadcast during the BBC's Children's Hour, addressing other children who had been evacuated from the cities. She stated: "We are trying to do all we can to help our gallant sailors, soldiers, and airmen, and we are trying, too, to bear our own share of the danger and sadness of war. We know, every one of us, that in the end all will be well." In 1943, Elizabeth undertook her first solo public appearance on a visit to the Grenadier Guards, of which she had been appointed colonel the previous year. As she approached her 18th birthday, Parliament changed the law so that she could act as one of five counsellors of state in the event of her father's incapacity or absence abroad, such as his visit to Italy in July 1944. In February 1945, she was appointed an honorary second subaltern in the Auxiliary Territorial Service with the service number 230873. She trained as a driver and mechanic and was given the rank of honorary junior commander (female equivalent of captain at the time) five months later. At the end of the war in Europe, on Victory in Europe Day, Elizabeth and Margaret mingled incognito with the celebrating crowds in the streets of London. In 1985, Elizabeth recalled in a rare interview, "... we asked my parents if we could go out and see for ourselves. I remember we were terrified of being recognised ... I remember lines of unknown people linking arms and walking down Whitehall, all of us just swept along on a tide of happiness and relief." During the war, plans were drawn to quell Welsh nationalism by affiliating Elizabeth more closely with Wales. Proposals, such as appointing her Constable of Caernarfon Castle or a patron of Urdd Gobaith Cymru (the Welsh League of Youth), were abandoned for several reasons, including fear of associating Elizabeth with conscientious objectors in the Urdd at a time when Britain was at war. Welsh politicians suggested she be made Princess of Wales on her 18th birthday. Home Secretary Herbert Morrison supported the idea, but the King rejected it because he felt such a title belonged solely to the wife of a prince of Wales and the prince of Wales had always been the heir apparent. In 1946, she was inducted into the Gorsedd of Bards at the National Eisteddfod of Wales. Elizabeth went on her first overseas tour in 1947, accompanying her parents through southern Africa. During the tour, in a broadcast to the British Commonwealth on her 21st birthday, she made the following pledge:[c] I declare before you all that my whole life, whether it be long or short, shall be devoted to your service and the service of our great imperial family to which we all belong. But I shall not have strength to carry out this resolution alone unless you join in it with me, as I now invite you to do: I know that your support will be unfailingly given. God help me to make good my vow, and God bless all of you who are willing to share in it. Elizabeth met her future husband, Prince Philip of Greece and Denmark, in 1934 and again in 1937. 
They were second cousins once removed through King Christian IX of Denmark and third cousins through Queen Victoria. After meeting for the third time at the Royal Naval College in Dartmouth in July 1939, Elizabeth—though only 13 years old—said she fell in love with Philip, who was 18, and they began to exchange letters. She was 21 when their engagement was officially announced on 9 July 1947. The engagement attracted some controversy. Philip had no financial standing, was foreign-born (though a British subject who had served in the Royal Navy throughout the Second World War), and his sisters had married German noblemen with Nazi links. Marion Crawford wrote, "Some of the King's advisors did not think him good enough for her. He was a prince without a home or kingdom. Some of the papers played long and loud tunes on the string of Philip's foreign origin." Later biographies reported that Elizabeth's mother had reservations about the union initially and teased Philip as "the Hun". In later life, however, she told the biographer Tim Heald that Philip was "an English gentleman". Before the marriage, Philip renounced his Greek and Danish titles, officially converted from Greek Orthodoxy to Anglicanism, and adopted the style Lieutenant Philip Mountbatten, taking the surname of his mother's British family. Shortly before the wedding, he was created Duke of Edinburgh and granted the style His Royal Highness. Elizabeth and Philip were married on 20 November 1947 at Westminster Abbey. They received 2,500 wedding gifts from around the world. Elizabeth required ration coupons to buy the material for her gown (which was designed by Norman Hartnell) because Britain had not yet completely recovered from the devastation of the war. In post-war Britain, it was not acceptable for Philip's German relations, including his three surviving sisters, to be invited to the wedding. Neither was an invitation extended to the Duke of Windsor, formerly King Edward VIII. Elizabeth gave birth to her first child, Prince Charles, in November 1948. One month earlier, the King had issued letters patent allowing her children to use the style and title of a royal prince or princess, to which they otherwise would not have been entitled, as their father was no longer a royal prince. A second child, Princess Anne, was born in August 1950. Following their wedding, the couple leased Windlesham Moor, near Windsor Castle, until July 1949, when they took up residence at Clarence House in London. At various times between 1949 and 1951, Philip was stationed in the British Crown Colony of Malta as a serving Royal Navy officer. He and Elizabeth lived intermittently in Malta for several months at a time in the hamlet of Gwardamanġa, at Villa Guardamangia, the rented home of Philip's uncle Lord Mountbatten. Their two children remained in Britain. Reign As George VI's health declined during 1951, Elizabeth frequently stood in for him at public events. When she visited Canada and Harry S. Truman in Washington, DC, in October 1951, her private secretary Martin Charteris carried a draft accession declaration in case the King died while she was on tour. In early 1952, Elizabeth and Philip set out for a tour of Australia and New Zealand by way of the British colony of Kenya. On 6 February, they had just returned to their Kenyan home, Sagana Lodge, after a night spent at Treetops Hotel, when word arrived of the death of Elizabeth's father. Philip broke the news to the new queen. 
She chose to retain Elizabeth as her regnal name, and was therefore called Elizabeth II. The numeral offended some Scots, as she was the first Elizabeth to rule in Scotland. She was proclaimed queen throughout her realms, and the royal party hastily returned to the United Kingdom. Elizabeth and Philip moved into Buckingham Palace. With Elizabeth's accession, it seemed possible that the royal house would take her husband's name, in line with the custom for married women of the time. Lord Mountbatten advocated for House of Mountbatten, and Philip suggested House of Edinburgh, after his ducal title. The British prime minister, Winston Churchill, and Elizabeth's grandmother Queen Mary favoured the retention of the House of Windsor. Elizabeth issued a declaration on 9 April 1952 that the royal house would continue to be Windsor. Philip complained, "I am the only man in the country not allowed to give his name to his own children." In 1960, the surname Mountbatten-Windsor was adopted for Philip and Elizabeth's male-line descendants who do not carry royal titles. Amid preparations for the coronation, Princess Margaret told her sister she wished to marry Peter Townsend, a divorcé 16 years Margaret's senior with two sons from his previous marriage. Elizabeth asked them to wait for a year; in the words of her private secretary, "the Queen was naturally sympathetic towards the Princess, but I think she thought—she hoped—given time, the affair would peter out." Senior politicians were against the match and the Church of England did not permit remarriage after divorce. If Margaret had contracted a civil marriage, she would have been expected to renounce her right of succession. Margaret decided to abandon her plans with Townsend. In 1960, she married Antony Armstrong-Jones, who was created Earl of Snowdon the following year. They divorced in 1978; Margaret did not remarry. Despite Queen Mary's death on 24 March 1953, the coronation went ahead as planned on 2 June, as Mary had requested. The coronation ceremony in Westminster Abbey was televised for the first time, with the exception of the anointing and communion.[d] On Elizabeth's instruction, her coronation gown was embroidered with the floral emblems of Commonwealth countries. From Elizabeth's birth onwards, the British Empire continued its transformation into the Commonwealth of Nations. By the time of her accession in 1952, her role as head of multiple independent states was already established. In 1953, Elizabeth and Philip embarked on a seven-month round-the-world tour, visiting 13 countries and covering more than 40,000 miles (64,000 km) by land, sea and air. She became the first reigning monarch of Australia and New Zealand to visit those nations. During the tour, crowds were immense; three-quarters of the population of Australia were estimated to have seen her. Throughout her reign, she made hundreds of state visits to other countries and tours of the Commonwealth; she was the most widely travelled head of state. In 1956, the British and French prime ministers, Sir Anthony Eden and Guy Mollet, discussed the possibility of France joining the Commonwealth. The proposal was never accepted, and the following year, France signed the Treaty of Rome, which established the European Economic Community, the precursor to the European Union. In November 1956, Britain and France invaded Egypt in an ultimately unsuccessful attempt to capture the Suez Canal. Lord Mountbatten said that Elizabeth was opposed to the invasion, though Eden denied it. 
Eden resigned two months later. The governing Conservative Party had no formal mechanism for choosing a leader, meaning that it fell to Elizabeth to decide whom to commission to form a government following Eden's resignation. Eden recommended she consult Lord Salisbury, the lord president of the council. Lord Salisbury and Lord Kilmuir, the lord chancellor, consulted the British Cabinet, Churchill, and the chairman of the backbench 1922 Committee, resulting in Elizabeth appointing their recommended candidate: Harold Macmillan. The Suez crisis and the choice of Eden's successor led, in 1957, to the first major personal criticism of Elizabeth. In a magazine, which he owned and edited, Lord Altrincham accused her of being "out of touch". Altrincham was denounced by public figures and slapped by a member of the public appalled by his comments. Six years later, in 1963, Macmillan resigned and advised Elizabeth to appoint Alec Douglas-Home as the prime minister, advice she followed. Elizabeth again came under criticism for appointing the prime minister on the advice of a small number of ministers or a single minister. In 1965, the Conservatives adopted a formal mechanism for electing a leader, thus relieving the Queen of her involvement. In 1957, Elizabeth made a state visit to the United States, where she addressed the United Nations General Assembly on behalf of the Commonwealth. On the same tour, she opened the 23rd Canadian Parliament, becoming the first monarch of Canada to open a parliamentary session. Two years later, solely in her capacity as Queen of Canada, she revisited the United States and toured Canada. In 1961, she toured Cyprus, India, Pakistan, Nepal, and Iran. On a visit to Ghana the same year, she dismissed fears for her safety, even though her host, President Kwame Nkrumah, who had replaced her as head of state, was a target for assassins. Harold Macmillan wrote, "The Queen has been absolutely determined all through ... She is impatient of the attitude towards her to treat her as ... a film star ... She has indeed 'the heart and stomach of a man' ... She loves her duty and means to be a Queen." Before her tour through parts of Quebec in 1964, the press reported that extremists within the Quebec separatist movement were plotting Elizabeth's assassination. No assassination attempt was made, but a riot did break out while she was in Montreal; her "calmness and courage in the face of the violence" was noted. Elizabeth gave birth to her third child, Prince Andrew, in February 1960; this was the first birth to a reigning British monarch since 1857. Her fourth child, Prince Edward, was born in March 1964. The 1960s and 1970s saw an acceleration in the decolonisation of Africa and the Caribbean. More than 20 countries gained independence from Britain as part of a planned transition to self-government. In 1965, however, the Rhodesian prime minister, Ian Smith, in opposition to moves towards majority rule, unilaterally declared independence with Elizabeth as "Queen of Rhodesia". Although Elizabeth formally dismissed him, and the international community applied sanctions against Rhodesia, his regime survived for over a decade. As Britain's ties to its former empire weakened, the British government sought entry to the European Community, a goal it achieved in 1973. In 1966, the Queen was criticised for waiting eight days before visiting the village of Aberfan, where a mining disaster killed 116 children and 28 adults. 
Martin Charteris said that the delay, made on his advice, was a mistake that she later regretted. Elizabeth toured Yugoslavia in October 1972, becoming the first British monarch to visit a communist country. She was received at the airport by President Josip Broz Tito, and a crowd of thousands greeted her in Belgrade. In February 1974, British prime minister Edward Heath advised Elizabeth to call a general election in the middle of her tour of the Austronesian Pacific Rim, requiring her to fly back to Britain. The election resulted in a hung parliament; Heath's Conservatives were not the largest party but could stay in office if they formed a coalition with the Liberals. When discussions on forming a coalition foundered, Heath resigned, and Elizabeth asked the Leader of the Opposition, Labour's Harold Wilson, to form a government. A year later, at the height of the 1975 Australian constitutional crisis, the Australian prime minister, Gough Whitlam, was dismissed from his post by Governor-General Sir John Kerr, after the Opposition-controlled Senate rejected Whitlam's budget proposals. As Whitlam had a majority in the House of Representatives, Speaker Gordon Scholes appealed to Elizabeth to reverse Kerr's decision. She declined, saying she would not interfere in decisions reserved by the Constitution of Australia for the governor-general. The crisis fuelled Australian republicanism. In 1977, Elizabeth marked the Silver Jubilee of her accession. Parties and events took place throughout the Commonwealth, many coinciding with her associated national and Commonwealth tours. The celebrations re-affirmed Elizabeth's popularity, despite virtually coincident negative press coverage of Princess Margaret's separation from her husband, Lord Snowdon. In 1978, Elizabeth endured a state visit to the United Kingdom by Romania's communist leader, Nicolae Ceaușescu, and his wife, Elena, though privately she thought they had "blood on their hands". The following year brought two blows: the unmasking of Anthony Blunt, former Surveyor of the Queen's Pictures, as a communist spy and the assassination of Lord Mountbatten by the Provisional Irish Republican Army. According to Paul Martin Sr., by the end of the 1970s, Elizabeth was worried the Crown "had little meaning for" Pierre Trudeau, the Canadian prime minister. Tony Benn said Elizabeth found Trudeau "rather disappointing". Trudeau's supposed republicanism seemed to be confirmed by his antics, such as sliding down banisters at Buckingham Palace and pirouetting behind Elizabeth's back in 1977, and the removal of various Canadian royal symbols during his term of office. In 1980, Canadian politicians sent to London to discuss the patriation of the Canadian constitution found Elizabeth "better informed ... than any of the British politicians or bureaucrats". She was particularly interested after the failure of Bill C-60, which would have affected her role as head of state. During the 1981 Trooping the Colour ceremony, six weeks before the wedding of Prince Charles and Lady Diana Spencer, six shots were fired at Elizabeth from close range as she rode down The Mall, London, on her horse, Burmese. Police later discovered the shots were blanks. The 17-year-old assailant, Marcus Sarjeant, was sentenced to five years in prison and released after three. Elizabeth's composure and skill in controlling her mount were widely praised. That October, Elizabeth was the subject of another attack while on a visit to Dunedin, New Zealand. 
Christopher John Lewis, who was 17 years old, fired a shot with a .22 rifle from the fifth floor of a building overlooking the parade but missed. Lewis was arrested, but instead of being charged with attempted murder or treason was sentenced to three years in jail for unlawful possession and discharge of a firearm. Two years into his sentence, he attempted to escape a psychiatric hospital with the intention of assassinating Charles, who was visiting the country with Diana and their son Prince William. From April to September 1982, Elizabeth's son Andrew served with British forces in the Falklands War, for which she reportedly felt anxiety and pride. On 9 July, she awoke in her bedroom at Buckingham Palace to find an intruder, Michael Fagan, in the room with her. In a serious lapse of security, assistance only arrived after two calls to the Palace police switchboard. After hosting US president Ronald Reagan at Windsor Castle in 1982 and visiting his California ranch in 1983, Elizabeth was angered when his administration ordered the invasion of Grenada, one of her Caribbean realms, without informing her. Intense media interest in the opinions and private lives of the royal family during the 1980s led to a series of sensational stories in the press, pioneered by The Sun tabloid. As Kelvin MacKenzie, editor of The Sun, told his staff: "Give me a Sunday for Monday splash on the Royals. Don't worry if it's not true—so long as there's not too much of a fuss about it afterwards." Newspaper editor Donald Trelford wrote in The Observer of 21 September 1986: "The royal soap opera has now reached such a pitch of public interest that the boundary between fact and fiction has been lost sight of ... it is not just that some papers don't check their facts or accept denials: they don't care if the stories are true or not." It was reported, most notably in The Sunday Times of 20 July 1986, that Elizabeth was worried that Margaret Thatcher's economic policies fostered social divisions and was alarmed by high unemployment, a series of riots, the violence of a miners' strike, and Thatcher's refusal to apply sanctions against the apartheid regime in South Africa. The sources of the rumours included royal aide Michael Shea and Commonwealth secretary-general Shridath Ramphal, but Shea claimed his remarks were taken out of context and embellished by speculation. Thatcher reputedly said Elizabeth would vote for the Social Democratic Party—Thatcher's political opponents. Thatcher's biographer John Campbell claimed "the report was a piece of journalistic mischief-making". Reports of acrimony between them were exaggerated, and Elizabeth gave two honours in her personal gift—membership in the Order of Merit and the Order of the Garter—to Thatcher after her replacement as prime minister by John Major. Brian Mulroney, Canadian prime minister between 1984 and 1993, said Elizabeth was a "behind the scenes force" in ending apartheid. In 1986, Elizabeth paid a six-day state visit to the People's Republic of China, becoming the first British monarch to visit the country. The tour included the Forbidden City, the Great Wall of China, and the Terracotta Warriors. At a state banquet, Elizabeth joked about the first British emissary to China being lost at sea with Queen Elizabeth I's letter to the Wanli Emperor, and remarked, "fortunately postal services have improved since 1602". 
Elizabeth's visit also signified the acceptance of both countries that sovereignty over Hong Kong would be transferred from the United Kingdom to China in 1997. By the end of the 1980s, Elizabeth had become the target of satire. The involvement of younger members of the royal family in the charity game show It's a Royal Knockout in 1987 was ridiculed. In Canada, Elizabeth publicly supported politically divisive constitutional amendments, prompting criticism from opponents of the proposed changes, including Pierre Trudeau. The same year, the elected Fijian government was deposed in a military coup. As monarch of Fiji, Elizabeth supported the attempts of Governor-General Ratu Sir Penaia Ganilau to assert executive power and negotiate a settlement. Coup leader Sitiveni Rabuka deposed Ganilau and declared Fiji a republic. In the wake of coalition victory in the Gulf War, Elizabeth became the first British monarch to address a joint meeting of the United States Congress in May 1991. In November 1992, in a speech to mark the Ruby Jubilee of her accession, Elizabeth called 1992 her annus horribilis (a Latin phrase, meaning 'horrible year'). Republican feeling in Britain had risen because of press estimates of Elizabeth's private wealth—contradicted by the Palace[e]—and reports of affairs and strained marriages among her extended family. In March, her second son, Prince Andrew, separated from his wife, Sarah; her daughter, Princess Anne, divorced Captain Mark Phillips in April; angry demonstrators in Dresden threw eggs at Elizabeth during a state visit to Germany in October; and a large fire broke out at Windsor Castle, one of her official residences, in November. The monarchy came under increased criticism and public scrutiny. In an unusually personal speech, Elizabeth said that any institution must expect criticism, but suggested it might be done with "a touch of humour, gentleness and understanding". Two days later, John Major announced plans to reform the royal finances, drawn up the previous year, including Elizabeth paying income tax from 1993 onwards, and a reduction in the civil list. In December, Prince Charles and his wife, Diana, formally separated. At the end of the year, Elizabeth sued The Sun newspaper for breach of copyright when it published the text of her annual Christmas message two days before it was broadcast. The newspaper was forced to pay her legal fees and donated £200,000 to charity. Elizabeth's solicitors had taken successful action against The Sun five years earlier for breach of copyright after it published a photograph of her daughter-in-law the Duchess of York and her granddaughter Princess Beatrice. In January 1994, Elizabeth broke her left wrist when a horse she was riding at Sandringham tripped and fell. In October 1994, she became the first reigning British monarch to set foot on Russian soil.[f] In October 1995, she was tricked into a hoax call by Montreal radio host Pierre Brassard impersonating Canadian prime minister Jean Chrétien. Elizabeth, who believed that she was speaking to Chrétien, said she supported Canadian unity and would try to influence Quebec's referendum on proposals to break away from Canada. In the year that followed, public revelations on the state of Charles and Diana's marriage continued. 
In consultation with her husband and John Major, as well as the Archbishop of Canterbury (George Carey) and her private secretary (Robert Fellowes), Elizabeth wrote to Charles and Diana at the end of December 1995, suggesting that a divorce would be advisable. In August 1997, a year after the divorce, Diana was killed in a car crash in Paris. Elizabeth was on holiday with her extended family at Balmoral. Diana's two sons, Princes William and Harry, wanted to attend church, so Elizabeth and Philip took them that morning. Afterwards, for five days, the royal couple shielded their grandsons from the intense press interest by keeping them at Balmoral where they could grieve in private, but the royal family's silence and seclusion, and the failure to fly a flag at half-mast over Buckingham Palace, caused public dismay. Pressured by the hostile reaction, Elizabeth agreed to return to London and address the nation in a live television broadcast on 5 September, the day before Diana's funeral. In the broadcast, she expressed admiration for Diana and her feelings "as a grandmother" for the two princes. As a result, much of the public hostility evaporated. In October 1997, Elizabeth and Philip made a state visit to India, which included a controversial visit to the site of the Jallianwala Bagh massacre to pay her respects. Protesters chanted "Killer Queen, go back", and there were demands for her to apologise for the action of British troops 78 years earlier. At the memorial in the park, she and Philip laid a wreath and stood for a 30‑second moment of silence. As a result, much of the fury among the public softened, and the protests were called off. That November, the royal couple held a reception at Banqueting House to mark their golden wedding anniversary. Elizabeth made a speech and praised Philip for his role as consort, referring to him as "my strength and stay". In 1999, as part of the process of devolution in the United Kingdom, Elizabeth formally opened newly established legislatures for Wales and Scotland: the National Assembly for Wales at Cardiff in May, and the Scottish Parliament at Edinburgh in July. On the eve of the new millennium, Elizabeth and Philip boarded a vessel from Southwark, bound for the Millennium Dome. Before passing under Tower Bridge, she lit the National Millennium Beacon in the Pool of London using a laser torch. Shortly before midnight, she officially opened the Dome. During the singing of Auld Lang Syne, Elizabeth held hands with Philip and British prime minister Tony Blair. Following the 9/11 attacks in the United States, Elizabeth, breaking with tradition, ordered the American national anthem to be played during the changing of the guard at Buckingham Palace to express her solidarity with the country. In 2002, Elizabeth marked her Golden Jubilee, the 50th anniversary of her accession. Her sister died in February and her mother in March, and the media speculated on whether the Jubilee would be a success or a failure. Princess Margaret's death shook Elizabeth; her funeral was one of the rare occasions where Elizabeth openly cried. Elizabeth again undertook an extensive tour of her realms, beginning in Jamaica in February, where she called the farewell banquet "memorable" after a power cut plunged King's House, the official residence of the governor-general, into darkness. As in 1977, there were street parties and commemorative events, and monuments were named to honour the occasion. 
One million people attended each day of the three-day main Jubilee celebration in London, and the enthusiasm shown for Elizabeth by the public was greater than many journalists had anticipated. In 2003, Elizabeth sued the Daily Mirror for breach of confidence and obtained an injunction which prevented the outlet from publishing information gathered by a reporter who posed as a footman at Buckingham Palace. The newspaper also paid £25,000 towards her legal costs. Though generally healthy throughout her life, in 2003 she had keyhole surgery on both knees. In October 2006, she missed the opening of the new Emirates Stadium because of a strained back muscle that had been troubling her since the summer. In May 2007, citing unnamed sources, The Daily Telegraph reported that Elizabeth was "exasperated and frustrated" by the policies of Tony Blair, that she was concerned the British Armed Forces were overstretched in Iraq and Afghanistan, and that she had raised concerns over rural and countryside issues with Blair. She was, however, said to admire Blair's efforts to achieve peace in Northern Ireland. She became the first British monarch to celebrate a diamond wedding anniversary in November 2007. On 20 March 2008, at the Church of Ireland St Patrick's Cathedral, Armagh, Elizabeth attended the first Maundy service held outside England and Wales. Elizabeth addressed the UN General Assembly for a second time in 2010, again in her capacity as Queen of all Commonwealth realms and Head of the Commonwealth. The UN secretary-general, Ban Ki-moon, introduced her as "an anchor for our age". During her visit to New York, which followed a tour of Canada, she officially opened a memorial garden for British victims of the 9/11 attacks. Elizabeth's 11-day visit to Australia in October 2011 was her 16th visit to the country since 1954. By invitation of the Irish president, Mary McAleese, she made the first state visit to the Republic of Ireland by a British monarch in May 2011. The 2012 Diamond Jubilee marked 60 years since Elizabeth's accession, and celebrations were held throughout her realms, the wider Commonwealth, and beyond. She and Philip undertook an extensive tour of the United Kingdom, while their children and grandchildren embarked on royal tours of other Commonwealth states on her behalf. On 4 June, Jubilee beacons were lit around the world. On 18 December, the Queen became the first British sovereign to attend a peacetime Cabinet meeting since George III in 1781. Elizabeth, who opened the Montreal Summer Olympics in 1976, also opened the 2012 Summer Olympics and Paralympics in London, making her the first head of state to open two Olympic Games in two countries. For the London Olympics, she portrayed herself in a short film as part of the opening ceremony, alongside Daniel Craig as James Bond. On 4 April 2013, she received an honorary BAFTA award for her patronage of the film industry and was called "the most memorable Bond girl yet" at a special presentation at Windsor Castle. In March 2013, the Queen stayed overnight at King Edward VII's Hospital as a precaution after developing symptoms of gastroenteritis. A week later, she signed the new Charter of the Commonwealth. That year, because of her age and the need for her to limit travelling, she chose not to attend the biennial Commonwealth Heads of Government Meeting for the first time in 40 years. She was represented at the summit in Sri Lanka by Prince Charles. 
On 20 April 2018, the Commonwealth heads of government announced that Charles would succeed her as Head of the Commonwealth, which the Queen stated as her "sincere wish". She underwent cataract surgery in May 2018. In March 2019, she gave up driving on public roads, largely as a consequence of a car accident involving her husband two months earlier. On 21 December 2007, Elizabeth surpassed her great-great-grandmother, Queen Victoria, to become the longest-lived British monarch, and she became the longest-reigning British monarch and longest-reigning queen regnant and female head of state in the world on 9 September 2015. She became the oldest living monarch after the death of King Abdullah of Saudi Arabia on 23 January 2015. She later became the longest-reigning current monarch and the longest-serving current head of state following the death of King Bhumibol Adulyadej of Thailand on 13 October 2016, and the oldest current head of state on the resignation of Robert Mugabe of Zimbabwe on 21 November 2017. On 6 February 2017, she became the first British monarch to commemorate a sapphire jubilee, and on 20 November that year, she was the first British monarch to celebrate a platinum wedding anniversary. Philip had retired from his official duties as the Queen's consort in August 2017. On 19 March 2020, as the COVID-19 pandemic hit the United Kingdom, Elizabeth moved to Windsor Castle and sequestered there as a precaution. Public engagements were cancelled and Windsor Castle followed a strict sanitary protocol nicknamed "HMS Bubble". On 5 April, in a televised broadcast watched by an estimated 24 million viewers in the United Kingdom, Elizabeth asked people to "take comfort that while we may have more still to endure, better days will return: we will be with our friends again; we will be with our families again; we will meet again." On 8 May, the 75th anniversary of VE Day, in a television broadcast at 9 pm—the exact time at which her father had broadcast to the nation on the same day in 1945—she asked people to "never give up, never despair". In 2021, she received her first and second COVID-19 vaccinations in January and April respectively. Prince Philip died on 9 April 2021, after 73 years of marriage, making Elizabeth the first British monarch to reign as a widow or widower since Queen Victoria. She was reportedly at her husband's bedside when he died, and remarked in private that his death had "left a huge void". Due to the COVID-19 restrictions in place in England at the time, Elizabeth sat alone at Philip's funeral service, which evoked sympathy from people around the world. It was later reported in the press that Elizabeth had rejected a government offer to relax the rules. In her Christmas broadcast that year, which was ultimately her last, she paid a personal tribute to her "beloved Philip", saying, "That mischievous, inquiring twinkle was as bright at the end as when I first set eyes on him." Despite the pandemic, Elizabeth attended the 2021 State Opening of Parliament in May, the 47th G7 summit in June, and hosted US president Joe Biden at Windsor Castle. Biden was the 14th US president that the Queen had met. In October 2021, Elizabeth cancelled a planned trip to Northern Ireland and stayed overnight at King Edward VII's Hospital for "preliminary investigations". 
On Christmas Day 2021, while she was staying at Windsor Castle, 19-year-old Jaswant Singh Chail broke into the gardens using a rope ladder and carrying a crossbow with the aim of assassinating Elizabeth in revenge for the Amritsar massacre. Before he could enter any buildings, he was arrested and detained under the Mental Health Act. In February 2023, Chail pleaded guilty to attempting to injure or alarm the sovereign, and was sentenced in October to a nine-year custodial sentence plus an additional five years on extended licence. The sentencing judge also placed Chail under a hybrid order under section 45A of the Mental Health Act 1983, ordering that he remain at Broadmoor Hospital to be transferred into custody only after receiving psychiatric treatment. Elizabeth's Platinum Jubilee celebrations began on 6 February 2022, marking 70 years since her accession. In her accession day message, she renewed her commitment to a lifetime of public service, which she had originally made in 1947. Later that month, Elizabeth fell ill with COVID-19 along with several family members, but she only exhibited "mild cold-like symptoms" and recovered by the end of the month. She was present at the service of thanksgiving for her husband at Westminster Abbey on 29 March, but was unable to attend both the annual Commonwealth Day service that month and the Royal Maundy service in April, because of "episodic mobility problems". In May, she missed the State Opening of Parliament for the first time in 59 years. (She did not attend the state openings in 1959 and 1963 as she was pregnant with Prince Andrew and Prince Edward, respectively.) Later that month she made a surprise visit to Paddington Station and officially opened the Elizabeth line, named in her honour. The Queen was largely confined to balcony appearances during the public jubilee celebrations, and she missed the National Service of Thanksgiving on 3 June. On 13 June, she became the second-longest reigning monarch in history (among those whose exact dates of reign are known), with 70 years and 127 days on the throne—surpassing King Bhumibol Adulyadej of Thailand. On 6 September, she appointed her 15th British prime minister, Liz Truss, at Balmoral Castle in Scotland. This was the only occasion on which Elizabeth received a new prime minister at a location other than Buckingham Palace. No other British monarch appointed as many prime ministers. The Queen's last public message was issued on 7 September, in which she expressed her sympathy for those affected by the Saskatchewan stabbings. Elizabeth did not plan to abdicate, though she took on fewer public engagements in her later years and Prince Charles performed more of her duties. She told Canadian governor-general Adrienne Clarkson in a meeting in 2002 that she would never abdicate, saying, "It is not our tradition. Although, I suppose if I became completely gaga, one would have to do something." In June 2022, Elizabeth met the Archbishop of Canterbury, Justin Welby, who "came away thinking there is someone who has no fear of death, has hope in the future, knows the rock on which she stands and that gives her strength." Death On 8 September 2022, Buckingham Palace stated, "Following further evaluation this morning, the Queen's doctors are concerned for Her Majesty's health and have recommended she remain under medical supervision. The Queen remains comfortable and at Balmoral." Her immediate family rushed to Balmoral. She died peacefully at 3:10 pm, aged 96. 
Her death was announced to the public at 6:30 pm, setting in motion Operation London Bridge and, because she died in Scotland, Operation Unicorn. Elizabeth was the first monarch to die in Scotland since James V in 1542. Her death certificate recorded her cause of death as "old age". According to former prime minister Boris Johnson and the biographer Gyles Brandreth, she was suffering from a form of bone marrow cancer, which Brandreth wrote was multiple myeloma. On 12 September, Elizabeth's coffin was carried up the Royal Mile in a procession to St Giles' Cathedral, where the Crown of Scotland was placed on it. Her coffin lay at rest at the cathedral for 24 hours, guarded by the Royal Company of Archers, during which around 33,000 people filed past it. On 13 September, the coffin was flown to RAF Northolt in west London, before continuing its journey by road to Buckingham Palace. On 14 September, her coffin was taken in a military procession to Westminster Hall, where Elizabeth's body lay in state for four days. The coffin was guarded by members of both the Sovereign's Bodyguard and the Household Division. An estimated 250,000 members of the public filed past the coffin, as did politicians and other public figures. On 16 September, Elizabeth's children held a vigil around her coffin, and the next day her eight grandchildren did the same. Elizabeth's state funeral was held at Westminster Abbey on 19 September, marking the first time a monarch's funeral service had been held there since George II in 1760. More than a million people lined the streets of central London, and the day was declared a holiday in several Commonwealth countries. In Windsor, a final procession involving 1,000 military personnel took place and was witnessed by 97,000 people. Elizabeth's fell pony and two royal corgis stood at the side of the procession. After a committal service at St George's Chapel, Windsor Castle, Elizabeth's body was interred with her husband Philip's in the King George VI Memorial Chapel later the same day, in a private ceremony attended by her closest family members. Public perception and character Elizabeth rarely gave interviews, and little was known of her political opinions, which she did not express explicitly in public. It is against convention to ask or reveal the monarch's views. When Times journalist Paul Routledge asked her about the miners' strike of 1984–85 during a royal tour of the newspaper's offices, she replied that it was "all about one man" (a reference to Arthur Scargill), with which Routledge disagreed. Routledge was widely criticised in the media for asking the question and claimed that he was unaware of the protocols. After the 2014 Scottish independence referendum, Prime Minister David Cameron was overheard saying that Elizabeth was pleased with the outcome. She had arguably issued a public coded statement about the referendum by telling one woman outside Balmoral Kirk that she hoped people would think "very carefully" about the outcome. It emerged later that Cameron had specifically requested that she register her concern. Elizabeth had a deep sense of religious and civic duty, and took her Coronation Oath seriously. Aside from her official religious role as supreme governor of the established Church of England, she worshipped with that church and with the national Church of Scotland. She demonstrated support for inter-faith relations and met with leaders of other churches and religions, including five popes: Pius XII, John XXIII, John Paul II, Benedict XVI and Francis. 
A personal note about her faith often featured in her annual Christmas Message broadcast to the Commonwealth. In 2000, she said: "To many of us, our beliefs are of fundamental importance. For me the teachings of Christ and my own personal accountability before God provide a framework in which I try to lead my life. I, like so many of you, have drawn great comfort in difficult times from Christ's words and example."

Elizabeth was patron of more than 600 organisations and charities. The Charities Aid Foundation estimated that Elizabeth helped raise over £1.4 billion for her patronages during her reign. Her main leisure interests included equestrianism and dogs, especially her Pembroke Welsh Corgis. Her lifelong love of corgis began in 1933 with Dookie, the first of many royal corgis. Scenes of a relaxed, informal home life were occasionally witnessed; she and her family, from time to time, prepared a meal together and washed the dishes afterwards.

In the 1950s, as a young woman at the start of her reign, Elizabeth was depicted as a glamorous "fairytale Queen". After the trauma of the Second World War, it was a time of hope, a period of progress and achievement heralding a "new Elizabethan age". Lord Altrincham's accusation in 1957 that her speeches sounded like those of a "priggish schoolgirl" was an extremely rare criticism. In the late 1960s, attempts to portray a more modern image of the monarchy were made in the television documentary Royal Family and by televising Prince Charles's investiture as Prince of Wales. Elizabeth also instituted other new practices; her first royal walkabout, meeting ordinary members of the public, took place during a tour of Australia and New Zealand in 1970. Her wardrobe developed a recognisable, signature style driven more by function than fashion. In public, she took to wearing mostly solid-colour overcoats and decorative hats, allowing her to be seen easily in a crowd. By the end of her reign, nearly one third of Britons had seen or met Elizabeth in person.

At Elizabeth's Silver Jubilee in 1977, the crowds and celebrations were genuinely enthusiastic, but in the 1980s public criticism of the royal family increased, as the personal and working lives of Elizabeth's children came under media scrutiny. Her popularity sank to a low point in the 1990s. Under pressure from public opinion, she began to pay income tax for the first time, and Buckingham Palace was opened to the public. Although support for republicanism in Britain seemed higher than at any time in living memory, republican ideology was still a minority viewpoint, and Elizabeth herself had high approval ratings. Criticism was focused on the institution of the monarchy itself, and on the conduct of Elizabeth's wider family, rather than her own behaviour and actions. Discontent with the monarchy reached its peak on the death of Diana, Princess of Wales, although Elizabeth's personal popularity—as well as general support for the monarchy—rebounded after her live television broadcast to the world five days after Diana's death.

In November 1999, a referendum in Australia on the future of the Australian monarchy favoured its retention in preference to an indirectly elected head of state. Many republicans credited Elizabeth's personal popularity with the survival of the monarchy in Australia. In 2010, Prime Minister Julia Gillard noted that there was a "deep affection" for Elizabeth in Australia and that another referendum on the monarchy should wait until after her reign.
Gillard's successor, Malcolm Turnbull, who led the republican campaign in 1999, similarly believed that Australians would not vote to become a republic in her lifetime. "She's been an extraordinary head of state", Turnbull said in 2021, "and I think frankly, in Australia, there are more Elizabethans than there are monarchists." Similarly, referendums in Tuvalu in 2008 and Saint Vincent and the Grenadines in 2009 saw voters reject proposals to become republics.

Polls in Britain in 2006 and 2007 revealed strong support for the monarchy, and in 2012, Elizabeth's Diamond Jubilee year, her approval ratings hit 90 per cent. Her family came under scrutiny again in the last few years of her life due to her son Andrew's association with convicted sex offenders Jeffrey Epstein and Ghislaine Maxwell, his lawsuit with Virginia Giuffre amid accusations of sexual impropriety, and her grandson Harry and his wife Meghan's exit from the working royal family and subsequent move to the United States. Polling in Great Britain during the Platinum Jubilee, however, showed support for maintaining the monarchy, and Elizabeth's personal popularity remained strong. As of 2021 she remained the third most admired woman in the world according to the annual Gallup poll; with 52 appearances on the list, she had been in the top ten more often than any other woman in the poll's history.

Elizabeth was portrayed in a variety of media by many notable artists, including painters Pietro Annigoni, Peter Blake, Chinwe Chukwuogo-Roy, Terence Cuneo, Lucian Freud, Rolf Harris, Damien Hirst, Juliet Pannett and Tai-Shan Schierenberg. Notable photographers of Elizabeth included Cecil Beaton, Yousuf Karsh, Anwar Hussein, Annie Leibovitz, Lord Lichfield, Terry O'Neill, John Swannell and Dorothy Wilding. The first official portrait photograph of Elizabeth was taken by Marcus Adams in 1926.

Titles, styles, honours, and arms
Elizabeth held many titles and honorary military positions throughout the Commonwealth, was sovereign of many orders in her own countries, and received honours and awards from around the world. In each of her realms, she had a distinct title that followed a similar formula: Queen of Saint Lucia and of Her other Realms and Territories in Saint Lucia, Queen of Australia and Her other Realms and Territories in Australia, and so forth. She was also styled Defender of the Faith.

From 21 April 1944 until her accession, Elizabeth's arms consisted of a lozenge bearing the royal coat of arms of the United Kingdom differenced with a label of three points argent, the centre point bearing a Tudor rose and the first and third a cross of Saint George. Upon her accession, she inherited the various arms her father had held as sovereign, with a subsequently modified representation of the crown. Elizabeth also possessed royal standards and personal flags for use in the United Kingdom, Canada, Australia, New Zealand, Jamaica, and elsewhere. She approved her modified British arms on 26 May 1954.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Special:BookSources/978-1-4214-0568-1] | [TOKENS: 380]
Book sources
This page allows users to search multiple sources for a book given a 10- or 13-digit International Standard Book Number. Spaces and dashes in the ISBN do not matter. This page links to catalogs of libraries, booksellers, and other book sources where you will be able to search for the book by its International Standard Book Number (ISBN).

Online text: Google Books and other retail sources may be helpful if you want to verify citations in Wikipedia articles, because they often let you search an online version of the book for specific words or phrases, or you can browse through the book (although for copyright reasons the entire book is usually not available). At the Open Library (part of the Internet Archive) you can borrow and read entire books online.

Non-English book sources: If the book you are looking for is in a language other than English, you might find it helpful to look at the equivalent pages on other Wikipedias; they are more likely to have sources appropriate for that language.

Find other editions: The WorldCat xISBN tool for finding other editions is no longer available. However, there is often a "view all editions" link on the results page from an ISBN search. Google Books often lists other editions of a book and related books under the "about this book" link. You can convert between 10- and 13-digit ISBNs with standard tools.
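The conversion the page points to is simple check-digit arithmetic, sketched below in Python; the function names are illustrative, and the code assumes well-formed, 978-prefixed inputs.

```python
# Sketch of standard ISBN check-digit arithmetic: ISBN-13 uses alternating
# 1/3 weights mod 10; ISBN-10 uses weights 10..2 mod 11 ("X" means 10).
def isbn10_to_isbn13(isbn10: str) -> str:
    core = "978" + isbn10.replace("-", "").replace(" ", "")[:9]
    total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(core))
    return core + str((10 - total % 10) % 10)

def isbn13_to_isbn10(isbn13: str) -> str:
    core = isbn13.replace("-", "").replace(" ", "")[3:12]  # drop the 978 prefix
    total = sum((10 - i) * int(d) for i, d in enumerate(core))
    check = (11 - total % 11) % 11
    return core + ("X" if check == 10 else str(check))

# Using this page's own ISBN as the worked example:
print(isbn13_to_isbn10("978-1-4214-0568-1"))  # -> 1421405687
print(isbn10_to_isbn13("1-4214-0568-7"))      # -> 9781421405681
```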
========================================
[SOURCE: https://www.theverge.com/gadgets/881946/samsung-galaxy-z-trifold-restock-buy] | [TOKENS: 1785]
Samsung's Galaxy Z TriFold restock sold out in minutes
Despite being $2,899.99 for the 512GB model, it sold out quickly — again.
by Cameron Faulkner, Editor, Commerce. Updated Feb 20, 2026, 3:24 PM UTC
If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement.
Image: Allison Johnson / The Verge

Samsung unleashed a new batch of Galaxy Z TriFold units after selling out initial stock of the behemoth foldable that turns into a 10-inch tablet. It took less than ten minutes to sell through today's supply, which either speaks to the phone's surprising popularity or, rather, just how few of them the company is making. The device launched on January 30th, and unlike most other Samsung phones, availability of this one was limited to the company's own site.

Related: I just want to keep unfolding the Samsung Z TriFold

The TriFold costs $2,899.99 and comes with 512GB of storage (you'd think paying this much would get you at least 1TB, but alas). It's essentially three phones in one: one when it's folded, another when it opens to double the screen real estate, and ultimately, in its final form, a large tablet that makes multitasking easier than with vastly inferior (joking) single-hinge foldables.

While my colleague Allison Johnson waited for our review unit to show up, she had a surprisingly good time using the Z Fold 7 as a fold-out portable computer when paired with a wireless keyboard. It seems safe to say that using the TriFold, with its extra screen real estate, would make this kind of setup even better. Whether it's nearly $1,000 better, though, is a question that only you can answer with your wallet.

Update, February 20th: Samsung sold out of TriFolds within minutes, so we updated this piece to reflect that the waiting game continues for those who couldn't proceed to checkout.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Bebo] | [TOKENS: 2002]
Contents Bebo Bebo (/ˈbiːboʊ/ BEE-boh) was an American social networking website that originally operated from 2005 until its bankruptcy in 2013. The site relaunched several times after its bankruptcy with a number of short-lived offerings, including instant messaging and video streaming, until its acquisition by Amazon in July 2019 when it was shut down. It was announced in January 2021 that it would be returning as a new social-media site the month after. By May 2022, it had once again been shut down, without having left beta-testing. The site was founded by Michael Birch and Xochi Birch. History Bebo was founded by husband-and-wife team Michael and Xochi Birch in January 2005 at their home in San Francisco. The website name was bought by the founders, and the backronym "Blog Early; Blog Often" was invented to answer the question of what the name meant. The website, at the height of its popularity, overtook Myspace to become the most widely used social-networking website in the United Kingdom, eventually registering at least 10.7 million unique users. Bebo's popularity saw it sold to AOL in March 2008 for $850 million, with the Birches' combined 70% stake yielding a profit of $595 million from the deal. The BBC later described the AOL purchase of Bebo as "one of the worst deals ever made in the dotcom era", and it cost the then-CEO of AOL, Randy Falco, his job. In 2010, on April 7, AOL announced that it would either sell the website or shut it down; this was mainly due to the falling numbers of unique users moving to rival site Facebook. AOL said that Bebo could not compete with other social-networking sites in its current state and that the company could not commit to taking on the massive task to keep Bebo in the social-network race. It was reported that AOL's finances were struggling. The National Space Agency of Ukraine's RT-70 radio telescope sent 501 messages chosen by Bebo users, called A Message From Earth, toward the planet Gliese 581c. Sent on 9 October 2008, it will arrive in the spring of 2028. On June 16, 2010, AOL sold Bebo to hedge-fund operators Criterion Capital Partners. On February 17, 2011, Bebo launched a brand-new design. This consisted of a new, more-modern header and homepage, as well as a new profile-layout option. Users could also see who had visited their profiles (a feature which could be changed in settings). In April 2011, Bebo added a new notification system, similar to Facebook's – a feature which had been much-requested in feedback.[citation needed] It notified users of new inbox-messages, lifestream activity, and more.[vague] On January 30, 2012, access to Bebo became unavailable for 36 hours, resuming normal service during the early hours of February 1, 2012. A Bebo spokesperson told TechCrunch that the site was down due to "a technical clusterfuck". Adam Levin, CEO of Bebo and Criterion Capital Partners, stated that they were trying to release some new features which caused the site to crash. No data was lost as a result of the outage. The crash triggered a belief that Bebo was gone for good, so that the hashtag #bebomemories trended worldwide on Twitter. In May 2013, the company voluntarily filed for Chapter 11 bankruptcy protection; however, the receiver Burke Capital Corporation has clarified that Bebo remains "healthy" and "operating" and that the company was using its Chapter 11 filing on May 9 in Los Angeles to "restructure some operational inefficiencies and other arrangements that are burdensome." 
Many analysts have questioned the value-proposition that Bebo could offer users and do not fault CCP. On July 1, 2013, Michael and Xochi Birch, the original founders, purchased the social network back from Criterion Capital Partners (CCP) for $1 million. On August 6, 2013, messages were posted on Bebo.com informing users that the site would be down for maintenance from August 7, 2013. On August 7, 2013, a video featuring Michael announcing his plan for the new Bebo was placed on the front page of the site. The video informed users that the site would be taken down while the Bebo team developed the new product. Many believed that this would be normal maintenance; however, it was revealed that the site would be closed for a few months. The announcement also stated that all user-content had been deleted, but users' blog posts and images would be retrievable in downloadable format should members opt-in to receive this. However, members who submitted emails still[when?] have not retrieved profile data (pictures, blogs, etc.). In April 2014, Bebo founder Michael Birch took to Bebo in a tongue-in-cheek video to promote the re-launch of Bebo with the slogan, "Probably Not for Boring People". The relaunch video emphasized Bebo's history in which it included its then-most popular feature: the whiteboard. Bebo relaunched on January 7, 2015; announced with the news that Bebo was now a messenger app called Bebo Blab which was available on Google Play and Apple App Store. The app amassed 3.9 million users in just one year. Bebo Blab shut down two years after its relaunch, as users weren't returning to the platform to watch archived streams on replay. Birch wrote: Blab was great in many ways, but it wasn't going to be an everyday thing for millions. So we're kicking down the sandcastle, and rebuilding it as an 'always on' place to hang with friends. — Michael Birch, CEO of Bebo As of April 19, 2018, the site offered multi-feature Twitch streaming software (similar to Open Broadcaster Software or XSplit). This closed down in October 2018 to focus on tournament software. In July 2019, Amazon, through their subsidiary Twitch Interactive, acquired Bebo for US$25 million after outbidding Discord. In early 2021, the Bebo.com webpage began to display a series of messages suggesting a new relaunch of Bebo was imminent. In an interview on February 3, founder Michael Birch described his plan to relaunch the social networking website with a focus on individual profiles, rather than the news feed that had become ubiquitous throughout the rest of social media. As of May 2022, Bebo has been shut down again, never having left its private beta-testing phase, with the website now displaying a quote by founder Michael Birch, about the attempts of resurrecting Bebo. Birch states "Who knows", in response to if Bebo would ever be relaunched again. Original website features Users received a personal profile page where they would post blogs, photographs, music, videos, and questionnaires, which other users may answer. Additionally, users could add others as friends and send them messages, and update their personal profiles to notify friends about themselves. Each Bebo user received a profile, which included two specific modules: a comment section where other users could leave a message and a list of the user's friends. Users could select from many more modules to add. By default, when an account was created, the profile was private, which limited access to friends. 
The user could select the "Public Profile" option so the profile would be visible to any other members. Profiles could be personalized by a design template that became the background of the user's profile, known as a skin. Profiles also included multiple-choice quizzes; polls for their friends to vote in and comment on; photo albums allowing users 96 images per album; blogs with a comments section; a list of bands of which the user was a fan; and a list of groups that the user was a member of. A "Video Box" could also be added, either hot-linked from YouTube or copied from a Bebo media content provider's page. Other features included: Bebo runs on servers running the Resin Server and uses the Oracle Database system. It is estimated that Bebo had somewhere between 5000 and 8000 Phantom4 servers provided by Rackable Systems and has over 100 TB of disk space across all of their servers. Announced on the November 13, 2007, Bebo's Open Media Platform is a platform for companies to distribute content to the Bebo community. Content providers can bring their media player to Bebo, and monetize the advertising within it. Each content provider has a specialised page designed for video which showcases any Adobe Flash video content at the top of the profile. Many networks are signed up for the service, including CBS, Sky, Ustream.tv, BBC and Last.fm. Bebo joined OpenSocial, a set of common APIs for building social applications across the web. It announced plans for a developers platform and said it will make a further platform announcement. Bebo's Open Application Platform was launched in early December 2007 with just over fifty applications and is now host to hundreds. On May 21, 2008, some users in New Zealand were temporarily given full access to other users' accounts. Bebo network engineers traced the error to a misconfigured proxy server in an Internet service provider (ISP) in New Zealand, which was later corrected. The ISP seemed to be interfering with its cache, thereby causing some of its customers to receive cached cookies and details from other users, likely because the ISP used dynamic IP addresses. References External links
========================================
[SOURCE: https://www.wired.com/video/watch/the-el-paso-no-fly-debacle-is-just-the-beginning-of-a-drone-defense-mess] | [TOKENS: 501]
The El Paso No-Fly Debacle Is Just the Beginning of a Drone Defense Mess
Released on 02/18/2026

[Reporter] Last week, the US unexpectedly shut down airspace over El Paso, Texas and parts of New Mexico. The closure was supposed to last 10 days, but ended up lasting just eight hours. Now the episode is raising serious questions about America's anti-drone defenses. As low-cost drones become more common worldwide, security experts have repeatedly warned that destructive drone attacks are inevitable. But stopping them isn't simple. Jamming signals or shooting drones down can be dangerous, especially in populated areas where civilian aircraft are flying overhead. In the case of the El Paso incident, the Trump administration initially said the airspace closure was due to possible Mexican cartel drone activity. However, reporting from The New York Times and others revealed that Customs and Border Protection had deployed a Pentagon-provided anti-drone laser weapon in the area despite the Federal Aviation Administration's concerns about potential dangers to civilian aircraft. A cybersecurity expert told WIRED that the FAA's move to shut down airspace was likely precautionary, and that the original 10-day window suggests the FAA may not have been fully informed about how long the laser would be in use. According to Reuters, the system used was called LOCUST, a 20-kilowatt directed-energy weapon designed to take down small drones, which reportedly shot down what turned out to be, well, a party balloon. A White House official told The Hill that the FAA made the closure decision without notifying the White House or Pentagon, and insisted civilian aircraft were never in danger. But members of Congress are now demanding a detailed classified briefing, asking what went wrong and where communication broke down. Pilots, meanwhile, say the episode was deeply unsettling. As one told WIRED, "I do not want to be stuck anywhere for 10 days or get hit by a laser. There is currently no procedure for that."
========================================
[SOURCE: https://www.mako.co.il/tvbee-tv-news/Article-a6c96ddf8fa7c91027.htm] | [TOKENS: 5710]
Eric Dane's final days: "He lost the ability to speak"
Dane, who passed away after a battle with ALS, was bedridden in the final week of his life and had difficulty swallowing, said Patrick Dempsey, who acted alongside him in "Grey's Anatomy". "I woke up to sad news," he said in a morning-show interview. "We spoke last week. His quality of life deteriorated so quickly."
Michal Davi, mako. Published: 20.02.26, 14:15 | Updated: 20.02.26, 19:10

Hollywood is mourning the death of actor Eric Dane, who passed away yesterday (Thursday) at the age of 53 after a battle of about a year with ALS. Actors and colleagues who worked with him over the years eulogized him, among them several stars of "Grey's Anatomy", the series in which Dane played Dr. Mark Sloan (McSteamy) for six seasons.

Patrick Dempsey, who played Dr. Shepherd, spoke about the final days of Dane's life and revealed that he had last talked with him only a week earlier. "I woke up this morning to very sad news," he said in a morning-show interview. "It's hard to put into words. I'm really so sad for his kids. We were in touch, we wrote to each other, I spoke to him a week ago, and some friends of ours went to visit him. Eric was starting to lose his ability to speak. He was bedridden and had great difficulty swallowing; his quality of life deteriorated so quickly."

Dempsey added that Dane "was the funniest man. It was a pleasure to work with him, and that's how I want to remember him, because every time he was on set he brought so much fun with him. He had a great sense of humor. He was easy to work with; we got along immediately. The first scene was his, in all his glory: he comes out of the bathroom with a towel, looking amazing, making you feel completely out of shape. We got along because there was never really any competition, just wonderful mutual respect. He was insanely intelligent, and I will always remember those moments we had together and celebrate the joy he brought to people's lives.

"The real loss is ours, that we won't have him anymore. He did amazing work raising awareness of this terrible disease, and days like these remind us that we all need to celebrate every day as if it were our last. That is something we must remember, certainly in a world with so many crises and so many tragedies; we really should be grateful for every moment we are given, to spend time with our families, to do things that are good and that help others, to be kind, to be loving."

Kevin McKidd, who played Dr. Owen Hunt, also said his goodbye: "Rest in peace, friend," he wrote in a story posted to his Instagram account, as did James Pickens Jr., who played Dr. Richard Webber: "Rest in peace."

Dane had also starred in HBO's "Euphoria" since 2019, playing Cal Jacobs. Sam Levinson, the show's creator, eulogized him as well: "I am heartbroken over the loss of our dear friend Eric. Working with him was an honor. Being his friend was a gift. Eric's family is in our prayers." The official "Euphoria" Instagram account also paid tribute: "We are deeply saddened to hear of Eric Dane's passing. He was extraordinarily talented, and HBO was lucky to work with him across three seasons of 'Euphoria'. Our thoughts are with his loved ones during this difficult time."

Dane, as noted, passed away yesterday after a battle of about a year with ALS, and is survived by his wife and their two daughters, Billie and Georgia. "Throughout his fight with the disease, Eric became an advocate for awareness and research, out of a desire to make a difference for others in the same fight," his family said in a statement to the media. "He will be greatly missed and will always be remembered with love. He loved his fans and was grateful for the wave of support and love he received, and the family asks for privacy as it copes with this impossible time."
========================================
[SOURCE: https://www.ynet.co.il/economy/article/s1mf3gidwg] | [TOKENS: 408]
A blow to Trump: the US Supreme Court strikes down the president's tariff plan
The court ruled that Trump exceeded his authority when he relied on a special emergency law, in a unilateral decision without congressional approval, to carry out his sweeping tariff plan on goods imported into the US from countries around the world. The president responded sharply and announced that he would sign an order later today imposing an across-the-board global tariff of 10%. That tariff will join the tariffs that remain in force after the ruling, and it is limited to 150 days without congressional approval.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Euler_Mathematical_Toolbox] | [TOKENS: 683]
Euler Mathematical Toolbox
Euler Mathematical Toolbox (or EuMathT; formerly Euler) is a free and open-source numerical software package. It contains a matrix language, a graphical notebook-style interface, and a plot window. Euler is designed for higher-level math such as calculus, optimization, and statistics. The software can handle real, complex and interval numbers, vectors and matrices, can produce 2D/3D plots, and uses Maxima for symbolic operations. The software is compiled for Windows; the Unix and Linux versions do not contain a computer algebra subsystem.

History
Euler Math Toolbox originated in 1988 as a program for the Atari ST. At that time, the title of the program was simply Euler, but that name turned out to be too unspecific for the Internet. The main aim of the program was to create a tool for testing numerical algorithms, to visualize results, and to demonstrate mathematical content in the classroom. Euler Math Toolbox uses a matrix language similar to MATLAB, a system that had been under development since the 1970s. Then and now, the main developer of Euler is René Grothmann, a mathematician at the Catholic University of Eichstätt-Ingolstadt, Germany. In 2007, Euler was coupled with the Maxima computer algebra system: symbolic expressions and other functions were added to communicate with Maxima and to reach a good degree of integration into the numerical Euler core.

Overview
The Euler core is a numerical system written in C/C++. It handles real, complex, and interval values, as well as matrices of these types. Other available data types are sparse, compressed matrices, a long accumulator for an exact scalar product, and strings. Strings are used for expressions, file names, etc. Based on this core, additional functions are implemented in the Euler matrix language, which is an interpreted programming language in the style of an advanced BASIC dialect. Euler contains libraries for statistics, exact numerical computations with interval inclusions, differential equations and stiff equations, astronomical functions, geometry, and more.

The clean interface consists of a text window and a graphics window. The text window contains fully editable notebooks, while the graphics window displays the graphics output. Graphics can be added to the notebook window or exported in various formats (PNG, SVG, WMF, clipboard). Graphic types include line, bar or point plots in 2D and 3D, including anaglyph plots of 3D surfaces and other 3D plots. Euler provides an API for the open-source raytracer POV-Ray.

Euler handles symbolic computations via Maxima, which is loaded as a separate process communicating with Euler through pipes. The two programs can exchange variables and values. Indeed, Maxima is used in various Euler functions (e.g. Newton's method) to assist in the computation of derivatives, Taylor expansions and integrals. Moreover, Maxima can be called at definition time of an Euler function. LaTeX can be used from within Euler to display formulas. For export of formulas to HTML, either the generated LaTeX images or MathJax can be used. A special export option exports all graphics to SVG. Euler also includes the Tiny C Compiler, which allows subroutines in C to be compiled and included via a Windows DLL. Euler has a lot of similarity to MATLAB and its free clones (such as GNU Octave), but it is not compatible with them.
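As an illustration of that pipe-based division of labour, here is a minimal sketch, in Python rather than Euler's own matrix language, of a numerical front end delegating a symbolic task to a separate Maxima process and reading the result back. It assumes a `maxima` binary on the PATH; the helper name and the timeout are illustrative choices, not part of Euler's API.

```python
import subprocess

def maxima_eval(expression: str) -> str:
    """Send one expression to a fresh Maxima process and return its textual result."""
    # display2d:false makes Maxima print results on a single line, which is
    # easier for a host program to parse; --very-quiet suppresses the banner.
    result = subprocess.run(
        ["maxima", "--very-quiet"],
        input=f"display2d:false$ {expression};",
        capture_output=True,
        text=True,
        timeout=30,
    )
    return result.stdout.strip()

# A host system would hand off derivatives like this (as Euler does for
# Newton's method) and parse the string back into its own data types.
print(maxima_eval("diff(sin(x)*x^2, x)"))
```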
========================================
[SOURCE: https://www.ynet.co.il/economy/article/sjgxvwuo11l] | [TOKENS: 302]
He won't give up so quickly: how Trump can keep running his tariff plan
Although the US Supreme Court struck down the use of the 1977 emergency law on which the president relied for his sweeping tariff plan, Trump has at least five constitutional alternatives that would let him continue imposing tariffs on countries around the world, even if in a more cumbersome and slower way than before.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_ref-57] | [TOKENS: 11349]
Extraterrestrial life
Extraterrestrial life, or alien life (colloquially, aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology.

Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility that Jesus could have visited extraterrestrial worlds to redeem their inhabitants. In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but their existence was a matter of speculation.

In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere.

Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar system studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from analysis of telescope and specimen data to radios used to detect and transmit interstellar communication. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given human history of exploiting other societies.

Context
Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion kelvin at the one-second mark. Roughly 15 million years later, it cooled to temperate levels, though the elements of organic life were yet nonexistent. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth.
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread between habitable planets, by meteoroids for example, in a process called panspermia.

During most of their stellar evolution, stars combine hydrogen nuclei into helium nuclei by stellar fusion, and the comparatively lighter weight of helium allows the star to release the extra energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During its last stages, a star starts combining helium nuclei to form carbon nuclei. Larger stars can further combine carbon nuclei to create oxygen and silicon, oxygen into neon and sulfur, and so on until iron. Ultimately, the star blows much of its content back into the stellar medium, where it joins the clouds that eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, those materials are ubiquitous in the cosmos and not a rarity of the Solar System.

Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause time delays: the New Horizons probe took nine years after launch to reach Pluto. No probe has ever reached an extrasolar planetary system. Voyager 2 left the Solar System at a speed of 50,000 kilometers per hour; if it headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light years, it would reach it in 100,000 years. Under current technology, such systems can only be studied by telescopes, which have limitations. It is estimated that dark matter has a larger amount of combined matter than stars and gas clouds, but as it plays no role in the stellar evolution of stars and planets, it is usually not taken into account by astrobiology.

There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, nor even to actually have such liquid water. Venus is located in the solar system's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures.
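A rough back-of-the-envelope sketch of this zone, in Python: habitable-zone boundaries are often approximated by scaling the Sun's values by the square root of a star's luminosity in solar units. The boundary constants used here (roughly 0.95 and 1.37 AU, in the spirit of commonly cited conservative estimates) and the function name are illustrative assumptions, not figures from this article.

```python
import math

# Assumed inner/outer bounds of the Sun's habitable zone, in AU.
SUN_INNER_AU = 0.95
SUN_OUTER_AU = 1.37

def habitable_zone_au(luminosity_solar: float) -> tuple[float, float]:
    """Scale the solar habitable-zone bounds by sqrt(L/L_sun)."""
    scale = math.sqrt(luminosity_solar)
    return SUN_INNER_AU * scale, SUN_OUTER_AU * scale

# Dimmer stars pull the zone inward; brighter stars push it outward.
for name, lum in [("Sun-like star", 1.0), ("red dwarf", 0.04), ("Sirius A", 25.0)]:
    inner, outer = habitable_zone_au(lum)
    print(f"{name}: {inner:.2f} to {outer:.2f} AU")
```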
The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits will change along with the star's stellar evolution.

The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. When considered from a cosmic perspective, the brief existence of Earth's species suggests that extraterrestrial life may be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe".

Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it were habitable.

Likelihood of existence
No life in the cosmos beyond Earth has ever been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first one, the size of the universe, allows for plenty of planets to have a similar habitability to Earth, and the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same ones as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere else other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth.

Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life and that, at this point, it is just a desired result and not a reasonable scientific explanation for any gathered data.

In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy:

$N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L$

where $N$ is the number of such civilizations, $R_{*}$ the average rate of star formation in the galaxy, $f_{p}$ the fraction of stars with planets, $n_{e}$ the average number of potentially life-supporting planets per star that has planets, $f_{l}$ the fraction of such planets that actually develop life, $f_{i}$ the fraction of those that develop intelligent life, $f_{c}$ the fraction of civilizations that release detectable signs of their existence, and $L$ the length of time over which such civilizations release detectable signals. Drake's proposed estimates are as follows, but the numbers on the right side of the equation are agreed to be speculative and open to substitution:

$10{,}000 = 5 \cdot 0.5 \cdot 2 \cdot 1 \cdot 0.2 \cdot 1 \cdot 10{,}000$
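As a quick arithmetic check of the estimate above, here is a minimal sketch in Python plugging Drake's illustrative values into the equation; the variable names simply spell out the equation's symbols.

```python
# Drake's illustrative values, as quoted above; every factor is
# speculative and open to substitution.
R_star = 5       # average rate of star formation (stars per year)
f_p    = 0.5     # fraction of stars with planets
n_e    = 2       # potentially life-supporting planets per star with planets
f_l    = 1       # fraction of those planets that develop life
f_i    = 0.2     # fraction that develop intelligent life
f_c    = 1       # fraction releasing detectable signals
L      = 10_000  # years such civilizations remain detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # 10000.0 communicative civilizations, under these assumptions
```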
The Drake equation has proved controversial since, although it is written as a math equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to draw noteworthy conclusions from the equation.

Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets. In other words, there are 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis that explains the formation of the Solar System and other planetary systems suggests that planetary systems can have several configurations, and not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation for the Fermi paradox.

Biochemical basis
If extraterrestrial life exists, it could range from simple microorganisms and multicellular organisms similar to animals or plants to complex alien intelligences akin to humans. When scientists talk about extraterrestrial life, they consider all those types. Although it is possible that extraterrestrial life may have other configurations, scientists use the hierarchy of lifeforms from Earth for simplicity, as it is the only one known to exist.

The first basic requirement for life is an environment with non-equilibrium thermodynamics, which means that the thermodynamic equilibrium must be broken by a source of energy. The traditional sources of energy in the cosmos are the stars, as for life on Earth, which depends on the energy of the Sun. However, there are other alternative energy sources, such as volcanoes, plate tectonics, and hydrothermal vents. There are ecosystems on Earth in deep areas of the ocean that do not receive sunlight and take energy from black smokers instead. Magnetic fields and radioactivity have also been proposed as sources of energy, although they would be less efficient ones.
Life on Earth requires water in a liquid state as a solvent in which biochemical reactions take place. It is highly unlikely that an abiogenesis process can start within a gaseous or solid medium: the atom speeds, either too fast or too slow, make it difficult for specific ones to meet and start chemical reactions. A liquid medium also allows the transport of nutrients and substances required for metabolism. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane.

Another unknown aspect of potential extraterrestrial life is the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic and antimony (three bonds), and carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more than the others. In Earth's crust the most abundant of those elements is silicon, in the hydrosphere it is carbon, and in the atmosphere it is carbon and nitrogen.

Silicon, however, has disadvantages compared with carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kickstarting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life.

Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins.
Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, in which some of the RNA tasks were transferred to DNA and proteins. Extraterrestrial life may still be stuck using RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition from those of Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular to multicellular organisms through evolution; so far no alternative process for achieving such a result has been conceived, even hypothetically. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the most basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it.

The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place billions of years after the origin of life, and its causes are not yet fully known. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems, and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than one sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research into assessing the capacity of life to develop intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches.

Conditions on the other planets of the Solar System, and presumably on many worlds beyond it, are very harsh and seem too extreme to harbor any life. These environments can combine intense UV radiation with extreme temperatures, lack of water, and other factors that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that would seem unlikely to harbor life. Fossil evidence, along with theories backed by years of research and study, has marked environments such as hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth.
These environments are extreme compared with the typical ecosystems that most life on Earth now inhabits: hydrothermal vents are scorching hot because magma escaping from the Earth's mantle meets much colder oceanic water. Even today, a diverse population of bacteria inhabits the area surrounding these vents, suggesting that some form of life can be supported even in the harshest of environments, such as those on other planets of the Solar System. What makes these harsh environments plausible cradles for life on Earth, and for its possible emergence on other planets, is that chemical reactions form spontaneously in them. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy from reduced chemical compounds that fix carbon. In turn, these reactions allow organisms to live in poorly oxygenated environments while still obtaining enough energy to support themselves. The early Earth environment was reducing, and such carbon-fixing compounds were therefore necessary for the survival, and possibly the origin, of life on Earth. From the limited information scientists have gathered about the atmospheres of other planets in the Milky Way and beyond, those atmospheres are most likely reducing, or very low in oxygen, especially compared with Earth's. If the necessary elements and ions were present on such planets, the same carbon-fixing chemistry that occurs around hydrothermal vents could also occur on their surfaces and possibly give rise to extraterrestrial life.

Planetary habitability in the Solar System

The Solar System has a wide variety of planets, dwarf planets, and moons, and each is studied for its potential to host life. Each has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No intelligent species other than humans is known to exist or to have existed within the Solar System. Astrobiologist Mary Voytek points out that large ecosystems would be unlikely to remain undiscovered, as they would have been detected by now.

The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists: it is a terrestrial planet that was likely similar to Earth in its early stages but developed differently. A runaway greenhouse effect has made its surface the hottest in the Solar System; its clouds are of sulfuric acid, all surface liquid water has been lost, and its thick carbon dioxide atmosphere exerts enormous pressure. Comparing Venus with Earth helps in understanding the precise differences that produce conditions beneficial or harmful to life. And despite the conditions working against life on Venus, there are suspicions that microbial lifeforms may survive in its high-altitude clouds.

Mars is a cold and almost airless desert, inhospitable to life. However, recent studies have revealed that water on Mars was once quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, the solar wind stripped away the atmosphere and the planet became vulnerable to solar radiation. Ancient lifeforms may still have left fossilised remains, and microbes may still survive deep underground.
The gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and beyond, are locked in permanent deep-freeze, but cannot be ruled out completely. Although the giant planets themselves are highly unlikely to host life, there is much hope of finding it on the moons orbiting them. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because their water is sandwiched between layers of solid ice. On Europa the ocean is in contact with the rocky seafloor, which favours chemical reactions. It may be difficult, though, to dig deep enough to study these oceans. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be drilled at all, as it vents water into space in eruption columns. The space probe Cassini flew through one of these plumes, but could not make a full study because NASA had not anticipated the phenomenon and had not equipped the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons such as methane and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with a different biochemistry, though the cold temperatures would make such chemistry proceed very slowly. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons; however, it lies at such depth that it would be very difficult to access for study.

Scientific search

The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. By studying Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and what it requires for its continued existence. This helps determine what to look for when searching for life on other celestial bodies. It is a complex area of study that combines the perspectives of several scientific disciplines, including astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences.

The scientific search for extraterrestrial life is carried out both directly and indirectly. As of September 2017, 3,667 exoplanets in 2,747 systems had been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported.

Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria had been discovered in ALH84001, a meteorite formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over the discovery laid the groundwork for the development of astrobiology.
An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. The lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is the more likely hypothesis. In February 2005, NASA scientists reported that they might have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced the agency from the scientists' claims, and Stoker herself backed away from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory, which landed the Curiosity rover on Mars. It is designed to assess past and present habitability on Mars using a variety of scientific instruments. The rover landed in Gale Crater in August 2012.

A group of scientists at Cornell University started a catalogue of microorganisms, recording how each one reacts to sunlight. The goal is to help the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, a result of the abundance of photosynthesising plants.

In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out contamination of the meteorites on Earth, as those components would not be freely available in the form in which they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets.

In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so: "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life."

In August 2012, in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, located 400 light-years from Earth. Glycolaldehyde is needed to form ribonucleic acid (RNA), which is similar in function to DNA. The finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation.

In December 2023, astronomers reported the first detection, in the plumes of Enceladus, a moon of Saturn, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be fully identified and understood.
According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life."

Although most searches focus on the biology of extraterrestrial life, an extraterrestrial intelligence capable of developing a civilisation may be detectable by other means as well. Technology may generate technosignatures: effects on the native planet that would not arise from natural causes. Three main types of technosignature are considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres.

Organisations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals too, such as gamma-ray bursts and supernovae, and the difference between a natural signal and an artificial one would lie in its specific patterns. Astronomers intend to use artificial intelligence for this task, as it can manage large amounts of data and is free of human biases and preconceptions. Besides, even if an advanced extraterrestrial civilisation exists, there is no guarantee that it is transmitting radio communications in the direction of Earth, and the time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message.

Earth's atmosphere contains nitrogen dioxide as a result of air pollution, which could be detectable from afar. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component in the development of a potential extraterrestrial technological civilisation, as it is on Earth, and fossil fuels might likewise be generated and used on such worlds. An abundance of chlorofluorocarbons in an atmosphere would also be a clear technosignature, considering their industrial origin and their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development; however, modern telescopes are not powerful enough to study exoplanets at the level of detail required to perceive it.

The Kardashev scale proposes that a civilisation may eventually start consuming energy directly from its local star. This would require giant structures built next to the star, called Dyson spheres. Such speculative structures would cause an excess of infrared radiation that telescopes might notice. Excess infrared radiation is typical of young stars, which are surrounded by dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to emit it. The presence of heavy elements in a star's light spectrum is another potential technosignature: such elements would, in theory, be found if the star were being used as an incinerator or repository for nuclear waste products.

Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, thousands of exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets discovered so far range in size from terrestrial planets similar to Earth to gas giants larger than Jupiter, and the number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions.
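Those Milky Way figures fit together once one more quantity, not stated in the text, is assumed: the fraction of the galaxy's stars that are Sun-like. A quick sketch, treating that fraction as the unknown and backing it out from the quoted numbers:

```python
# Reproducing the habitable-planet estimate quoted above. The fraction
# of Milky Way stars that are Sun-like is not given in the text; it is
# derived here from the other quoted figures, purely for illustration.
milky_way_stars = 200e9            # assumed star count, per the text
earthlike_in_hz_rate = 1 / 5       # ~1 in 5 Sun-like stars (Kepler data)
habitable_sunlike = 11e9           # quoted result for Sun-like stars

sunlike_fraction = habitable_sunlike / (milky_way_stars * earthlike_in_hz_rate)
print(f"{sunlike_fraction:.0%}")   # ~28% of stars implied to be Sun-like

habitable_with_red_dwarfs = 40e9   # quoted total once red dwarfs are included
```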
The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known was PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed in the NASA Exoplanet Archive is DENIS-P J082303.1−491201 b, about 29 times the mass of Jupiter; by most definitions of a planet it is too massive to be one and may instead be a brown dwarf. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining a planet's suitability for hosting life.

One sign that a planet probably already contains life is an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. On Earth, this replenishment occurs through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectrography when it transits its star, though this might only be feasible with dim stars like white dwarfs.

History and cultural impact

The modern concept of extraterrestrial life rests on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars of ancient Greece were the first to consider that the universe is inherently understandable, rejecting explanations based on incomprehensible supernatural forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they developed precursors to it, such as the principle that explanations must be discarded if they contradict observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as Earth being round and not flat. The cosmos was first structured in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth; those bodies, however, were not considered worlds. In Greek understanding, the world was composed of both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance that created the world and to which the world would eventually return.
Eventually two groups emerged: the atomists, who thought that matter both on Earth and in the cosmos was equally made of small atoms of the classical elements (earth, water, fire, and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world, its animals and plants should have created other worlds elsewhere, along with their own animals and plants. Aristotle thought instead that all the earth element naturally fell towards the centre of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the centre; it was also the only planet in the universe.

Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbour extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in the ancient scriptures of Jainism. Multiple "worlds" that support human life are mentioned in Jain scriptures, including, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds differed from current knowledge about the structure of the universe and did not postulate the existence of planetary systems other than the Solar System: when those authors spoke of other worlds, they meant places located at the centre of their own systems, each with its own stellar vault and cosmos surrounding it.

The Greek ideas and the disputes between atomists and Aristotelians outlived classical Greek civilisation. The Great Library of Alexandria compiled this knowledge, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese, and its own scholars, and this learning spread through the Byzantine Empire, from which it eventually returned to Europe by the time of the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute intertwined with religious ones. Still, the Church did not react to these topics in a homogeneous way, and there were stricter and more permissive views within the Church itself. The first known mention of the term "panspermia" was in the writings of the 5th-century BC Greek philosopher Anaxagoras, who proposed the idea that life exists everywhere.

By the late Middle Ages there were many known inaccuracies in the geocentric model, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal found little acceptance at first because, as he kept the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories, which worked with highly complex sextants and quadrants.
Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles but ellipses. This insight benefited the Copernican model, which now worked almost perfectly. The invention of the telescope shortly afterwards, refined by Galileo Galilei, dispelled the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is just one planet orbiting a star, there may be planets similar to Earth elsewhere. The astronomical study of distant bodies also showed that physical laws are the same elsewhere in the universe as on Earth, with nothing making our planet truly special. The new ideas met resistance from the Catholic Church: Galileo was tried for defending the heliocentric model, which was considered heretical, and was forced to recant it.

The best-known early modern proponent of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him.

The heliocentric model was further strengthened by Isaac Newton's theory of gravity, which provided the mathematics explaining the motions of all things in the universe, including planetary orbits. By this point, the geocentric model was definitively discarded. By this time, too, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just how nature works but why it works that way. There had been very little actual discussion of extraterrestrial life before this point, as Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, it meant not only that Earth was not the centre of the universe, but also that the lights seen in the sky were not just lights but physical objects. The notion that life might exist on them soon became an ongoing topic of discussion, although one with no practical means of investigation.

The possibility of extraterrestrials remained a subject of widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals, which, however, soon turned out to be optical illusions. Despite this, in 1895 the American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S.
astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909, better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis.

As a consequence of the belief in spontaneous generation, little thought was given to the conditions of each celestial body: it was simply assumed that life would thrive anywhere. Spontaneous generation was disproved by Louis Pasteur in the 19th century. Popular belief in thriving alien civilisations elsewhere in the Solar System still remained strong until Mariner 4 and Mariner 9 returned close-up images of Mars, which debunked forever the idea of the existence of Martians and lowered the general expectation of finding alien life. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere; among them were Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903).

The science fiction genre, although not yet so named, developed during the late 19th century, and the expansion of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace; some discoveries fueled expectations, while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were immediately attributed to Selenites or Martians, while later, more powerful telescopes revealed all such "discoveries" to be natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter: the low-resolution photos showed a rock formation resembling a human face, but later spacecraft took photos in higher detail that showed there was nothing special about the site.

The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, the discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but from a cosmic perspective. For example, abiogenesis is of interest to astrobiology not because of the origin of life on Earth as such, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analysed as either likely to be shared by all forms of life across the cosmos or native only to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial lifeforms to study: all life on Earth descends from the same ancestor, and it is hard to infer general characteristics from a group with a single example to analyse.

The 20th century brought great technological advances, speculation about future hypothetical technologies, and increased basic scientific knowledge among the general population thanks to science popularisation through the mass media. Public interest in extraterrestrial life, combined with the lack of discoveries by mainstream science, led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the existence of aliens.
Ufology claims that many unidentified flying objects (UFOs) are spaceships of alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that people of those eras failed to understand it. Most UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects, weather phenomena, or hoaxes.

Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin, and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas largely derived from firmly held religious, philosophical, and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves unsuccessful, the endeavour itself could have beneficial consequences by assisting humanity in its attempt to actualise superior ways of living here on Earth.

By the 21st century it was accepted that, within the Solar System, multicellular life can exist only on Earth, but interest in extraterrestrial life increased regardless. This is a result of advances in several sciences. Knowledge of planetary habitability makes it possible to weigh, in scientific terms, the likelihood of finding life on each specific celestial body, as it is known which features benefit or harm life. Astronomy and telescopes have also improved to the point where exoplanets can be confirmed and even studied, multiplying the places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found and life may yet prove to be a rarity unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does.

Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds and to confirm that other planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. Other scientists, by contrast, are pessimistic: Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he emerged by chance".
In 2000, geologist and palaeontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe, presenting the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe while microbial life is common. Ward and Brownlee are nonetheless open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon.

As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien lifeforms, as aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort called the Breakthrough Initiatives to expand the search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake, and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent".

Government responses

The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life, and COSPAR also provides guidelines for planetary protection. A committee of the United Nations Office for Outer Space Affairs spent a year, beginning in 1977, discussing strategies for interacting with extraterrestrial life or intelligence; the discussion ended without any conclusions. As of 2010, the UN lacked response mechanisms for the case of an extraterrestrial contact.

One of NASA's divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. Part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life."

In 2016, the Chinese government released a white paper detailing its space program; according to the document, one of the program's research objectives is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, then head of the Russian space agency, said that the search for extraterrestrial life is one of the main goals of deep space research, and acknowledged the possible existence of primitive life on other planets of the Solar System. The French space agency maintains an office for the study of "unidentified aerospace phenomena" and a publicly accessible database of such phenomena with over 1,600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation, but for about 25% of them an extraterrestrial origin can be neither confirmed nor denied.
In 2020, the chairman of the Israel Space Agency, Isaac Ben-Israel, stated that the probability of detecting life in outer space is "quite large". He disagrees, however, with his former colleague Haim Eshed, who stated that there are contacts between an advanced alien civilisation and some of Earth's governments.

In fiction

Although the idea of extraterrestrial peoples became feasible once astronomy had developed enough to understand the nature of planets, they were not at first thought of as being any different from humans. With no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect aliens to be any other way. This changed with the 1859 book On the Origin of Species by Charles Darwin, which proposed the theory of evolution. With the notion that evolution on other planets might take other directions, science fiction authors created bizarre aliens clearly distinct from humans, commonly by adding body features from other animals, such as insects or octopuses. The feasibility of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and further still as CGI became more effective and less expensive. Real-life events sometimes captivate people's imagination, and this influences works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that eventually became the grey alien archetype used in works of fiction.
========================================
[SOURCE: https://www.theverge.com/tech/881943/tamron-link-usb-dongle-bluetooth-adapter-camera-lens-remote] | [TOKENS: 1606]
Tamron's new dongle lets you wirelessly control your lens from your phone

The Tamron-Link dongle adds Bluetooth connectivity for the company's mobile and desktop remote apps.

by Andrew Liszewski, Senior Reporter, News | Feb 20, 2026, 3:19 PM UTC

If you buy something from a Verge link, Vox Media may earn a commission. See our ethics statement.

[Image: Tamron. The dongle replaces a USB-C cable, but its wireless range is limited to just over 16 feet.]

Tamron has announced a new USB-C dongle that wirelessly connects compatible lenses to mobile devices and laptops so they can be customized and controlled remotely. It works with 16 of the company's lenses to start. The $50 Tamron-Link dongle is available now, and while its Bluetooth range is limited to just over 16 feet, it's one less cable you'll need to wrangle during a shoot.

The dongle works with a variety of Tamron Sony E, Nikon Z, and Canon RF APS-C lenses, according to PetaPixel, with support for more lenses coming through future firmware updates. It wirelessly connects the lenses to the Tamron Lens Utility app, introduced last June, which is available for iOS and iPadOS, Android mobile devices, and PCs and Macs.

[Image: Tamron. The dongle is only compatible with lenses featuring a USB-C port.]

The app isn't needed to use Tamron's lenses, but it introduces some advanced functionality that can be useful when shooting video without a full crew of camera operators, or still photography when precise settings adjustments are needed. You can set markers for different focus positions and specify how long it takes the lens to adjust between them, creating an automated and repeatable transition where the focus in a video dramatically shifts from one person to another. You can also electronically limit the focus range on a lens, ensuring that you're not accidentally focusing on something in the distant background while capturing a moving subject.

The basic functionality of lenses can also be customized through the app. You can change the direction of the focus and aperture rings, toggle between linear and non-linear adjustments for the focus ring, and customize the functionality of buttons on the lens. Even firmware updates will be slightly more convenient, although they may take just a bit longer over Bluetooth.
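Tamron has not published a scripting interface for these features, but the focus-transition markers are easy to picture. The sketch below is purely hypothetical Python, not Tamron's API: the function name, marker positions, and easing curve are all invented for the example, to illustrate what an automated, repeatable focus pull between two stored positions might compute.

```python
# Hypothetical illustration of an automated focus transition between two
# stored marker positions, as described in the article. This is NOT
# Tamron's API; all names and values here are invented for the example.

def focus_pull(start_m, end_m, duration_s, fps=30, linear=True):
    """Yield a focus distance (in metres) for each frame of a transition
    from start_m to end_m lasting duration_s seconds."""
    frames = max(1, round(duration_s * fps))
    for i in range(frames + 1):
        t = i / frames
        if not linear:
            t = t * t * (3 - 2 * t)  # smoothstep easing: gentle start/stop
        yield start_m + (end_m - start_m) * t

# Example: rack focus from a subject 1.2 m away to one 3.5 m away over
# two seconds; a real controller would send each value to the lens.
for focus_m in focus_pull(1.2, 3.5, duration_s=2.0, linear=False):
    pass
```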
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-FOOTNOTEMao199624_177-0] | [TOKENS: 10728]
PlayStation (console)

The PlayStation (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, in North America on 9 September 1995, and in Europe on 29 September 1995, with other regions following thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn.

Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing, which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after its release and the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units.

The PlayStation signalled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one.

History

The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. He convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him on as a protégé.

The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges, in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony concerned a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible, Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving it a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of the licensing of related music and film software, which it had been aggressively pursuing as a secondary application.

The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising that it essentially handed Sony control over all games written in the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines.

Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced its partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's reversal, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, which held it an "unwritten law" that native companies should not turn against each other in favour of foreign ones.

Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony briefly halted its research, but decided to turn what it had developed with Nintendo and Sega into a console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed under which the Play Station would keep a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony should not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that it had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992.

To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 with Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics, and expressed confidence that his LSI chip could accommodate one million logic gates, exceeding the capabilities of Sony's semiconductor division at the time. Although the proposal won Ohga's enthusiasm, a majority of those present remained opposed, as did older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded Ohga of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters.

Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to keep the project alive and maintain relationships with Philips for the MMCD development project. SMEJ's involvement proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and with Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation.

According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to use Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback about "PlayStation" in focus group studies. Early advertising prior to the console's North American launch referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name: according to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy".

Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic/Sony visited more than a hundred companies throughout Japan in May 1993 in the hope of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Attracting these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995); Ridge Racer was one of the most popular arcade games at the time, and despite Namco being a longstanding Nintendo developer, it had already been confirmed behind closed doors by December 1993 that the game would be the PlayStation's first. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, and Namco subsequently based its Namco System 11 arcade board on PlayStation hardware, developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken following in September 1994.

Despite securing the support of various Japanese studios, Sony had no development studios of its own while the PlayStation was being developed. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as the studio played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, the owners of SN Systems, had previously supplied development hardware for machines such as the Mega Drive, Atari ST and SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented a prototype of their condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems', securing a cheaper and more efficient method of designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, a linker and a debugger. SN Systems went on to produce development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005.

Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance removed many of the most time-consuming aspects of development. As well as providing programming libraries, SCE offices in London, California and Tokyo housed technical support teams that could work closely with third-party developers when needed. Sony did not favour its own products over those of third parties, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges: Nintendo cartridges were expensive to manufacture, and the company controlled all production and prioritised its own games, whereas inexpensive compact disc manufacturing was available at dozens of locations around the world.

The PlayStation's architecture and its close ties to PC development were beneficial to many software developers. The use of the programming language C proved useful, as code written in C would remain compatible should Sony make further hardware revisions. Despite this flexibility, some developers found themselves restricted by the console's limited RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect of development given the 3.5-megabyte restriction.
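That restriction shaped how studios structured their C code. The following is a minimal, hypothetical sketch (not from any Sony library or development kit; all names here are invented) of the fixed-pool, or "arena", allocation pattern commonly used on memory-constrained consoles of the era, in which a game claims one block up front and hands out slices of it rather than calling a general-purpose allocator:

    #include <stddef.h>
    #include <stdint.h>

    /* One fixed pool, sized comfortably under the console's 2 MB of
       main RAM to leave room for code, stack and system reserves
       (an illustrative figure, not an official budget). */
    #define POOL_SIZE (1536u * 1024u)

    static uint8_t pool[POOL_SIZE];
    static size_t  pool_used = 0;

    /* Bump allocator: hands out 4-byte-aligned slices of the pool.
       There is no free(); a whole level is torn down at once by
       resetting pool_used to a previously saved mark. */
    static void *arena_alloc(size_t bytes)
    {
        size_t aligned = (bytes + 3) & ~(size_t)3;
        if (pool_used + aligned > POOL_SIZE)
            return NULL;                  /* memory budget exceeded */
        {
            void *p = &pool[pool_used];
            pool_used += aligned;
            return p;
        }
    }

    static size_t arena_mark(void)           { return pool_used; }
    static void   arena_release(size_t mark) { pool_used = mark; }

Budgeting every byte up front in this way makes an overrun fail visibly at load time instead of fragmenting memory mid-game, which is one way teams could live within the limit the Ocean engineer described.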
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost and ease of programming, and felt that he and his team had succeeded in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and final design were confirmed at a press conference on 10 May 1994, although the price and release dates had not yet been disclosed.

Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on its first day and two million within six months, although the Saturn outsold it in the first few weeks on the strength of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan against 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers paying up to £700 for such consoles.

Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At its keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, head of Sony Electronic Publishing, summoned SCEA president Steve Race to the conference stage, who simply said "$299" and left to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and by the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). Sony also announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who had not been informed in time, harming sales; some retailers, such as KB Toys, responded by dropping the Saturn entirely.

The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. One retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, compared with the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season against Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent high-street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of games to consoles sold was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared with around 200 for the Saturn and 60 for the Nintendo 64.

In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console countrywide, in its PS One form, on 24 January 2002, priced at Rs 7,990 with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company's registration of the trademark meant the console could not be released officially, and the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, however, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was likewise the Sega Saturn, but after it left the market the PlayStation grew to a base of some 300,000 users by January 2000, even though Sony China had no plans to release it.

The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; the advertising executive Lee Clow surmised that people who were starting to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which geometric symbols, derived from the four buttons on the controller, stood in for certain letters, rendered in plain text as "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red "E"). Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say: "Bullshit.
Let me show you how ready I am." As the console's appeal broadened, Sony's marketing expanded from its earlier focus on mature players to target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults moving on from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early-1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where selected games could be demonstrated and tried out. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush-fund money for such impromptu marketing.

In 1996, Sony expanded its CD production facilities in the United States owing to the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation outsold the Saturn at a similar ratio in Europe during 1996. Sales of PlayStation hardware and software only increased following the launch of the Nintendo 64; Tokunaka speculated that the Nintendo 64's launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", with neither console leading in sales for any meaningful length of time.

In 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to overcome Sony's dominance: Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in the new console was undermined when Japanese sales came in lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the millennium: in June 2000, Sony released the PSOne, a smaller, redesigned variant that went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving the milestone faster than its predecessor. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3.

Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-maths coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers sampling rates of up to 44.1 kHz, and provides music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million (24-bit) true colours, with 32 levels of transparency and unlimited colour look-up tables. It can output composite, S-Video or RGB video signals through its AV Multi connector (older models also have RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed owing to lack of use. The PlayStation uses a proprietary video decompression unit, the MDEC, integrated into the CPU, which allows the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can draw 4,000 sprites and 180,000 texture-mapped polygons per second, in addition to 360,000 flat-shaded polygons per second.

The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw the parallel port removed, with the final version retaining only one serial port.

Sony also marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in Japan in June 1996 and, following public interest, released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications in C.
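Those figures also show why most games rendered below the maximum resolution. As a back-of-the-envelope illustration in C, the language PlayStation and Net Yaroze software was written in (the 16-bit-per-pixel framebuffer and the use of double buffering are common practice of the era assumed here for the arithmetic, not specifications quoted above):

    #include <stdio.h>

    /* Compare double-buffered 16-bit framebuffer sizes at the
       console's supported resolutions against its 1 MB of VRAM. */
    int main(void)
    {
        const long vram = 1024L * 1024L;          /* 1 MB of VRAM      */
        const int  bpp  = 2;                      /* 16 bits per pixel */
        const int  res[3][2] = { {256, 224}, {320, 240}, {640, 480} };
        int i;

        for (i = 0; i < 3; i++) {
            long buffer = (long)res[i][0] * res[i][1] * bpp;
            long both   = buffer * 2;             /* front + back      */
            printf("%3dx%3d: %4ld KB per buffer, %4ld KB double-buffered, "
                   "%5ld KB of VRAM left for textures\n",
                   res[i][0], res[i][1], buffer / 1024, both / 1024,
                   (vram - both) / 1024);
        }
        return 0;
    }

At 320×240 the two buffers consume about 300 KB, leaving roughly 700 KB of VRAM for textures, whereas a double-buffered 640×480 display would need about 1.2 MB, more than the console has; the highest resolutions were therefore practical mainly for static or sparsely textured scenes.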
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD Combo pack ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006.

Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), two shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, a red circle, a blue cross and a pink square. Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a visual trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no" respectively (though this mapping is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, used to access menus. The European and North American models of the original controller are roughly 10% larger than the Japanese variant, to account for the fact that, on average, people in those regions have larger hands.

Sony's first analogue controller, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously seen on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also carries a thumb-operated digital hat switch, corresponding to the traditional D-pad and used where simple digital movement was needed. The Analog Joystick sold poorly in Japan owing to its high cost and cumbersome size.

The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, to give users more freedom of movement in virtual 3D environments. The first official analogue gamepad, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the Start and Select buttons, toggling analogue functionality on or off. The controller also features rumble support, though Sony decided to remove the haptic feedback from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan because of similarities with the Rumble Pak for the Nintendo 64 controller. A Nintendo spokesman, however, denied that Nintendo had taken legal action, and Next Generation's Chris Charla theorised that Sony had dropped vibration feedback simply to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two ("dual") vibration motors ("shock"). Unlike its predecessor, its analogue sticks feature textured rubber grips, and it has longer handles, slightly different shoulder buttons, and rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller.

Sony released a series of peripherals that add extra layers of functionality to the PlayStation. These include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun) and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral that acts as a miniature personal digital assistant, featuring a monochrome liquid crystal display (LCD), infrared communication, a real-time clock, built-in flash memory and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. It proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America.

In addition to playing games, most PlayStation models can play CD audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or without closing the CD tray, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and PS One depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle.

PlayStation emulation is versatile, and the console's games can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and it was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions and filtered textures that were not possible on the original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem!
was subsequently forced to shut down in November 2001.

Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, owing to the growing popularity of CD-R discs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in the Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector; the same system was also used to encode discs' regional lockouts. (A conceptual sketch of this boot-time check appears at the end of this section.) The signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; the drive could not detect the wobble frequency, however, and duplicated discs therefore omitted it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process.

Early PlayStations, particularly early SCPH-1000 models, can experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents that lead to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects in the laser assembly. The solution is to sit the console on a surface that dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations used a KSM-440AAM laser unit whose case and movable parts were all built out of plastic. Over time, the plastic lens-sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates the wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser tilts and no longer points directly at the CD, after which games will no longer load due to data read errors. Sony fixed the problem on later PlayStation models by making the sled out of die-cast metal and placing the laser unit further from the power supply. Due to an engineering oversight, the PlayStation also does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.
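As a conceptual sketch only (every name below is invented, and Sony's actual boot firmware is not public), the disc authentication described above amounts to the following check in C: the drive attempts to decode the wobble-modulated string from the pregap, and the console refuses to boot when the string is missing, as on a burned copy, or belongs to another region:

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical region code for this console; pressed discs carry
       a matching wobble-encoded string that CD burners cannot
       reproduce. */
    #define CONSOLE_REGION "REGION-E"

    /* Stand-in for the drive-side decoder: a real console recovers
       the string via the pick-up's wobble demodulation. Returns
       false when no wobble data is present (i.e. a CD-R copy). */
    static bool read_pregap_signature(const char *disc_signature, char out[16])
    {
        if (disc_signature == NULL)
            return false;                        /* burned copy          */
        strncpy(out, disc_signature, 15);
        out[15] = '\0';
        return true;
    }

    static bool disc_may_boot(const char *disc_signature)
    {
        char sig[16];
        if (!read_pregap_signature(disc_signature, sig))
            return false;                        /* no signature: refuse */
        return strcmp(sig, CONSOLE_REGION) == 0; /* regional lockout     */
    }

    int main(void)
    {
        printf("pressed, matching region: %d\n", disc_may_boot("REGION-E"));
        printf("pressed, other region:    %d\n", disc_may_boot("REGION-A"));
        printf("burned copy:              %d\n", disc_may_boot(NULL));
        return 0;
    }

On real hardware the decoded strings are widely documented as "SCEI", "SCEA" and "SCEE" for the Japanese, North American and European regions, which is how the one mechanism served as both copy protection and the regional lockout mentioned above.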
Game library The PlayStation featured a diverse game library that grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998) and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. By the time of the PlayStation's discontinuation in 2006, cumulative software shipments stood at 962 million units.

Following the 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden) and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum ranges. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of the launch catalogue; its breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony.

Initially, PlayStation games in the United States were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel-case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium owing to the large number of PlayStation games being released) and focus testing showed that most consumers preferred it.

Reception The PlayStation was mostly well received upon release, and critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim of Entertainment Weekly praised the PlayStation as a technological marvel rivalling the hardware of Sega and Nintendo.
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0 and 9.5 – for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price of its games compared with the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.

Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not Nintendo's primary focus. By the late 1990s, Sony had become a highly regarded console brand thanks to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year lifespan, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, with profits from the video game division coming to contribute 23% of the company's profits.

Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led Sega to abandon the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs, and hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4 and PlayStation 5.

The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound that it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard; Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter, and in June Brodesco manufactured the first working motherboard, promising a fully routed multilayer version, along with documentation and design files, in the near future.

The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern that the proprietary cartridge format helped enforce copy protection, given the company's substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared with two to three months. Furthermore, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same net revenue. In Japan, Sony published fewer copies of each of a wide variety of PlayStation games as a risk-limiting step, a model that Sony Music had used for CD audio discs. The production flexibility of CD-ROMs also meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges because of their manufacturing lead time. The lower production costs of CD-ROMs gave publishers an additional source of profit as well: budget-priced reissues of games that had already recouped their development costs. Tokunaka remarked in 1996: "Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything – the whole PlayStation format – is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation."

The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers; part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64: Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many developed either by Nintendo itself or by second parties such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the original console's release. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system-on-chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit, and it includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. It received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions of certain games, use of the original controller and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/BBC_News#cite_note-16] | [TOKENS: 8810]
BBC News is an operational business division of the British Broadcasting Corporation (BBC) responsible for the gathering and broadcasting of news and current affairs in the UK and around the world. The department is the world's largest broadcast news organisation and generates about 120 hours of radio and television output each day, as well as online news coverage. The service has over 5,500 journalists working across its output, including in 50 foreign news bureaus where more than 250 foreign correspondents are stationed. Deborah Turness has been the CEO of news and current affairs since September 2022. A 2019 Ofcom report found that the BBC spent £136m on news between April 2018 and March 2019.

BBC News' domestic, global and online news divisions are housed within the largest live newsroom in Europe, in Broadcasting House in central London. Parliamentary coverage is produced and broadcast from studios in London. Through BBC English Regions, the BBC also has regional centres across England and national news centres in Northern Ireland, Scotland and Wales. All nations and English regions produce their own local news programmes and other current affairs and sport programmes. The BBC is a quasi-autonomous corporation authorised by royal charter, making it operationally independent of the government. As of 2024, the BBC reaches an average of 450 million people per week, with the BBC World Service accounting for 320 million people.

History This is London calling – 2LO calling. Here is the first general news bulletin, copyright by Reuters, Press Association, Exchange Telegraph and Central News. — BBC news programme opening during the 1920s

The British Broadcasting Company broadcast its first radio bulletin from radio station 2LO on 14 November 1922. Wishing to avoid competition, newspaper publishers persuaded the government to ban the BBC from broadcasting news before 7 pm and to force it to use wire service copy instead of reporting on its own. The BBC gradually gained the right to edit the copy and, in 1934, created its own news operation; however, it could not broadcast news before 6 pm until World War II. In addition to news, Gaumont British and Movietone cinema newsreels had been broadcast on the TV service since 1936, with the BBC producing its own equivalent, the Television Newsreel programme, from January 1948. A weekly Children's Newsreel was inaugurated on 23 April 1950, broadcast to around 350,000 receivers. The network had begun simulcasting its radio news on television in 1946, over a still picture of Big Ben.

Televised bulletins began on 5 July 1954, broadcast from leased studios within Alexandra Palace in London. The public's interest in television and live events had been stimulated by Elizabeth II's coronation in 1953: it is estimated that up to 27 million people viewed the programme in the UK, overtaking radio's audience of 12 million for the first time. Those live pictures were fed from 21 cameras in central London to Alexandra Palace for transmission, and then on to the other UK transmitters opened in time for the event. That year, around two million TV licences were held in the UK, rising to over three million the following year and four and a half million by 1955. Television news, although physically separate from its radio counterpart, remained firmly under radio news's control in the 1950s.
Correspondents provided reports for both outlets, and the first televised bulletin, shown on 5 July 1954 on the then BBC television service and presented by Richard Baker, had him providing narration off-screen while stills were shown. This was followed by the customary Television Newsreel, with a recorded commentary by John Snagge (and on other occasions by Andrew Timothy). On-screen newsreaders were introduced a year later, in 1955 – Kenneth Kendall (the first to appear in vision), Robert Dougall and Richard Baker – three weeks before ITN's launch on 21 September 1955.

Mainstream television production had started to move out of Alexandra Palace in 1950, to larger premises – mainly Lime Grove Studios in Shepherd's Bush, west London – taking Current Affairs (then known as the Talks Department) with it. It was from here that the first edition of Panorama, a new documentary programme, was transmitted on 11 November 1953, with Richard Dimbleby becoming its anchor in 1955. In 1958, Hugh Carleton Greene became head of News and Current Affairs; on 1 January 1960 he became Director-General. Greene made changes aimed at bringing BBC reporting closer to that of its competitor ITN, which had been highly rated by study groups held by Greene. A newsroom was created at Alexandra Palace, and television reporters were recruited and given the opportunity to write and voice their own scripts without having to cover stories for radio too. On 20 June 1960, Nan Winton, the first female BBC network newsreader, appeared in vision, and 19 September 1960 saw the start of the radio news and current affairs programme The Ten O'Clock News.

BBC2 began transmission on 20 April 1964 with a new news programme, Newsroom. The World at One, a lunchtime news programme, began on 4 October 1965 on the then Home Service; News Review had started on television the year before. News Review was a summary of the week's news, first broadcast on Sunday 26 April 1964 on BBC 2, harking back to the weekly Newsreel Review of the Week, produced from 1951 to open programming on Sunday evenings – the difference being that this incarnation had subtitles for deaf and hard-of-hearing viewers. As this was the decade before electronic caption generation, each superimposition ("super") had to be produced on paper or card, synchronised manually to studio and news footage, committed to tape during the afternoon and broadcast in the early evening. Sundays were thus no longer a quiet day for news at Alexandra Palace. The programme ran until the 1980s – by then using electronic captions, known as Anchor – when it was superseded by Ceefax subtitling (a similar teletext format) and the signing of programmes such as See Hear (from 1981). On Sunday 17 September 1967, The World This Weekend, a weekly news and current affairs programme, launched on what was then the Home Service but was soon to become Radio 4.

Preparations for colour began in the autumn of 1967, and on Thursday 7 March 1968 Newsroom on BBC2 moved to an early-evening slot, becoming the first UK news programme to be transmitted in colour, from Studio A at Alexandra Palace. News Review and Westminster (the latter a weekly review of parliamentary happenings) were "colourised" shortly afterwards. However, much of the insert material was still in black and white, as initially only part of the film coverage shot in and around London was on colour reversal film stock, and all regional and many international contributions were still in black and white.
Colour facilities at Alexandra Palace were technically very limited for the next eighteen months, as the site had only one RCA colour Quadruplex videotape machine and, eventually, two Pye Plumbicon colour telecines – although the colour news service started with just one. Black-and-white national bulletins on BBC 1 continued to originate from Studio B on weekdays, along with Town and Around, the London regional "opt-out" programme broadcast throughout the 1960s (and the BBC's first regional news programme for the South East), until it began to be replaced by Nationwide, shown Tuesday to Thursday from Lime Grove Studios from early September 1969. Town and Around never made the move to Television Centre; instead it became London This Week, which aired on Mondays and Fridays only from the new TVC studios. The BBC moved news production out of Alexandra Palace in 1969, and BBC Television News resumed operations the next day with a lunchtime bulletin on BBC1 – in black and white – from Television Centre, where it remained until March 2013. The move to a smaller studio with better technical facilities allowed Newsroom and News Review to replace back projection with colour-separation overlay. During the 1960s satellite communication had become possible, although it was some years before digital line-store conversion could undertake the process seamlessly.

On 14 September 1970, the first Nine O'Clock News was broadcast on television. Robert Dougall presented the first week from studio N1 – described by The Guardian as "a sort of polystyrene padded cell" – the bulletin having been moved from its earlier time of 20:50 in response to the ratings achieved by ITN's News at Ten, introduced three years earlier on the rival ITV. Richard Baker and Kenneth Kendall presented subsequent weeks, echoing those first television bulletins of the mid-1950s. Angela Rippon became the first female presenter of the Nine O'Clock News in 1975. Her work outside the news was controversial at the time: she appeared on The Morecambe and Wise Christmas Show in 1976, singing and dancing. The first edition of John Craven's Newsround, initially intended only as a short series and later renamed simply Newsround, came from studio N3 on 4 April 1972.

Afternoon television news bulletins during the mid-to-late 1970s were broadcast from the BBC newsroom itself rather than from one of the three news studios. The newsreader would present to camera while sitting on the edge of a desk; behind him, staff would be seen working busily at their desks. This period coincided with the Nine O'Clock News's next makeover, which used a CSO background of the newsroom from that same camera each weekday evening. Also in the mid-1970s, the late-night news on BBC2 was briefly renamed Newsnight – neither a lasting change nor the same programme as today's, which launched in 1980 – and it soon reverted to a simple news summary, with the early-evening BBC2 news expanded to become Newsday.

News on radio changed in the 1970s, on Radio 4 in particular, with the arrival of new editor Peter Woon from television news and the implementation of the Broadcasting in the Seventies report. The changes included the introduction of correspondents into news bulletins, where previously only a newsreader would present, as well as the inclusion of content gathered during the preparation process.
New programmes, PM and The World Tonight, were also added to the daily schedule as part of the plan for the station to become a "wholly speech network", and Newsbeat launched as the news service on Radio 1 on 10 September 1973. On 23 September 1974, a teletext system was launched to bring text-only news content to television screens. Engineers had originally begun developing such a system to bring news to deaf viewers, but it was expanded. The resulting Ceefax service became much more diverse before it ceased on 23 October 2012: it not only provided subtitling for all channels but also carried information such as weather, flight times and film reviews. By the end of the decade, the practice of shooting inserts for news broadcasts on film was declining with the introduction of ENG technology into the UK. The equipment gradually became less cumbersome; the BBC's first attempts, in the latter half of the decade, had used a Philips colour camera with a backpack base station and a separate portable Sony U-matic recorder.

In 1980, the Iranian Embassy siege was shot electronically by the BBC Television News outside broadcast team, and the work of reporter Kate Adie, broadcasting live from Prince's Gate, was nominated for the BAFTA for actuality coverage, though the BBC was this time beaten to the 1980 award by ITN. Newsnight, the news and current affairs programme, was due to go on air on 23 January 1980, although trade union disagreements postponed its launch from Lime Grove by a week. On 27 August 1981, Moira Stuart became the first African-Caribbean female newsreader to appear on British television. By 1982, ENG technology had become sufficiently reliable for Bernard Hesketh to use an Ikegami camera to cover the Falklands War, coverage for which he won the Royal Television Society's Cameraman of the Year award and a BAFTA nomination – the first time that BBC News had relied upon an electronic camera, rather than film, in a conflict zone. BBC News won the BAFTA for its actuality coverage; however, the event is remembered in television terms for Brian Hanrahan's reporting, in which he coined the phrase "I'm not allowed to say how many planes joined the raid, but I counted them all out and I counted them all back" to circumvent reporting restrictions – since cited as an example of good reporting under pressure.

The first BBC breakfast television programme, Breakfast Time, also launched during the 1980s – on 17 January 1983, from Lime Grove Studio E, two weeks before its ITV rival TV-am. Frank Bough, Selina Scott and Nick Ross helped to wake viewers with a relaxed style of presenting. The Six O'Clock News first aired on 3 September 1984, eventually becoming the most watched news programme in the UK (though since 2006 it has been overtaken by the BBC News at Ten). In October 1984, images of millions of people starving to death in the Ethiopian famine were shown in Michael Buerk's Six O'Clock News reports. The BBC News crew were the first to document the famine, with Buerk's report of 23 October describing it as "a biblical famine in the 20th century" and "the closest thing to hell on Earth". The reports shocked Britain, motivating its citizens to inundate relief agencies such as Save the Children with donations, and brought global attention to the crisis in Ethiopia. The report was also watched by Bob Geldof, who went on to organise the charity single "Do They Know It's Christmas?" to raise money for famine relief, followed by the Live Aid concert in July 1985.
Starting in 1981, the BBC gave a common theme to its main news bulletins with new electronic titles: a set of computer-animated "stripes" forming a circle on a red background, with a "BBC News" typescript appearing below the circle graphics, and a theme tune consisting of brass and keyboards. The Nine O'Clock News used a similar striped number 9. The red background was replaced by a blue one from 1985 until 1987. By 1987, the BBC had decided to re-brand its bulletins and established individual styles again for each one, with differing titles and music; the weekend and holiday bulletins were branded in a similar style to the Nine, although the "stripes" introduction continued to be used until 1989 on occasions where a news bulletin was screened out of the running order of the schedule. In 1987, John Birt resurrected the practice of correspondents working for both TV and radio with the introduction of bi-media journalism. During the 1990s, a wider range of services began to be offered by BBC News, with BBC World Service Television splitting into BBC World (news and current affairs) and BBC Prime (light entertainment). Content was now required for a 24-hour news channel, and the domestic equivalent, BBC News 24, followed in 1997. Rather than set bulletins, ongoing reports and coverage were needed to keep both channels functioning, which meant greater emphasis on budgeting for both. In 1998, after 66 years at Broadcasting House, the BBC Radio News operation moved to BBC Television Centre. New technology, provided by Silicon Graphics, had come into use in 1993 for a re-launch of the main BBC 1 bulletins, creating a virtual set which appeared to be much larger than it was physically. The relaunch also brought all bulletins into the same style of set, with only small changes in colouring, titles, and music to differentiate each. A computer-generated cut-glass sculpture of the BBC coat of arms was the centrepiece of the programme titles until the large-scale corporate rebranding of news services in 1999. In November 1997, BBC News Online was launched, following individual webpages for major news events such as the 1996 Olympic Games, the 1997 general election, and the death of Princess Diana. In 1999, the biggest relaunch occurred, with BBC One bulletins, BBC World, BBC News 24, and BBC News Online all adopting a common style. One of the most significant changes was the gradual adoption of the corporate image by the BBC regional news programmes, giving a common style across local, national and international BBC television news. This also included Newyddion, the main news programme of the Welsh-language channel S4C, produced by BBC News Wales. Following the relaunch of BBC News in 1999, regional headlines were included at the start of the BBC One news bulletins in 2000. The English regions did, however, lose five minutes at the end of their bulletins, owing to a new headline round-up at 18:55. 2000 also saw the Nine O'Clock News moved to the later time of 22:00, in response to ITN, which had just moved its popular News at Ten programme to 23:00. ITN briefly returned News at Ten, but following poor ratings when head-to-head against the BBC's Ten O'Clock News, the ITN bulletin was moved to 22:30, where it remained until 14 January 2008. The retirement of Peter Sissons and the departure of Michael Buerk from the Ten O'Clock News led to changes in the BBC One bulletin presenting team on 20 January 2003.
The Six O'Clock News became double-headed, with George Alagiah and Sophie Raworth, after Huw Edwards and Fiona Bruce moved to present the Ten. A new set design featuring a projected fictional newsroom backdrop was introduced, followed on 16 February 2004 by new programme titles to match those of BBC News 24. BBC News 24 and BBC World introduced a new style of presentation in December 2003, which was slightly altered on 5 July 2004 to mark 50 years of BBC Television News. On 7 March 2005, director-general Mark Thompson launched the "Creative Futures" project to restructure the organisation. The individual positions of editor of the One and Six O'Clock News were replaced by a new daytime position in November 2005. Kevin Bakhurst became the first Controller of BBC News 24, replacing the position of editor. Amanda Farnsworth became daytime editor, while Craig Oliver was later named editor of the Ten O'Clock News. Bulletins received new titles and a new set design in May 2006, to allow Breakfast to move into the main studio for the first time since 1997. The new set featured Barco videowall screens, with a background of the London skyline used for main bulletins and, originally, an image of cirrus clouds against a blue sky for Breakfast; the cloud backdrop was later replaced following viewer criticism. The studio bore similarities to the set of the ITN-produced ITV News from 2004, though ITN uses a CSO (colour separation overlay) virtual studio rather than the physical screens used by BBC News. BBC News became part of a new BBC Journalism group in November 2006 as part of a restructuring of the BBC. The then-Director of BBC News, Helen Boaden, reported to the then-Deputy Director-General and head of the journalism group, Mark Byford, until he was made redundant in 2010. On 18 October 2007, Director-General Mark Thompson announced a six-year plan, "Delivering Creative Futures" (based on his project begun in March 2005), merging the television current affairs department into a new "News Programmes" division. Thompson's announcement, in response to a £2 billion shortfall in funding, would, he said, deliver "a smaller but fitter BBC" in the digital age, by cutting its payroll and, in 2013, selling Television Centre. The various separate newsrooms for television, radio and online operations were merged into a single multimedia newsroom, and programme making within the newsrooms was brought together to form a multimedia programme-making department. BBC World Service director Peter Horrocks said that the changes would achieve efficiency at a time of cost-cutting at the BBC. In his blog, he wrote that using the same resources across the various broadcast media meant that either fewer stories could be covered, or, if more stories were followed, there would be fewer ways to broadcast them. A new graphics and video playout system was introduced for the production of television bulletins in January 2007. This coincided with a new structure for BBC World News bulletins, with editors favouring a section devoted to analysing the news stories reported on. The first new BBC News bulletin since the Six O'Clock News was announced in July 2007, following a successful trial in the Midlands. The summary, lasting 90 seconds, has been broadcast at 20:00 on weekdays since December 2007 and bears similarities to 60 Seconds on BBC Three, but also includes headlines from the various BBC regions and a weather summary.
As part of a long-term cost-cutting programme, bulletins were renamed BBC News at One, Six and Ten respectively in April 2008, while BBC News 24 was renamed BBC News and moved into the same studio as the bulletins at BBC Television Centre. BBC World was renamed BBC World News, and regional news programmes were also updated with the new presentation style, designed by Lambie-Nairn. 2008 also saw tri-media introduced across TV, radio, and online. The studio moves also meant that Studio N9, previously used for BBC World, was closed, and operations moved to the previous studio of BBC News 24. Studio N9 was later refitted to match the new branding, and was used for the BBC's UK local elections and European elections coverage in early June 2009. A strategy review of the BBC in March 2010 confirmed that having "the best journalism in the world" would form one of five key editorial policies, as part of changes subject to public consultation and BBC Trust approval. After a period of suspension in late 2012, Helen Boaden ceased to be the Director of BBC News. On 16 April 2013, incoming BBC Director-General Tony Hall named James Harding, a former editor of The Times of London, as Director of News and Current Affairs. From August 2012 to March 2013, all news operations moved from Television Centre to new facilities in the refurbished and extended Broadcasting House, in Portland Place. The move also included the BBC World Service, which moved from Bush House following the expiry of the BBC's lease. This new extension to the north and east, referred to as "New Broadcasting House", includes several new state-of-the-art radio and television studios centred around an 11-storey atrium. The move began with the domestic programme The Andrew Marr Show on 2 September 2012, and concluded with the move of the BBC News channel and domestic news bulletins on 18 March 2013. The newsroom houses all domestic bulletins and programmes on both television and radio, as well as the BBC World Service international radio networks and the BBC World News international television channel. BBC News and CBS News established an editorial and newsgathering partnership in 2017, replacing an earlier long-standing partnership between BBC News and ABC News. In an October 2018 Simmons Research survey of 38 news organisations, BBC News was ranked the fourth most trusted news organisation by Americans, behind CBS News, ABC News and The Wall Street Journal. In January 2020, the BBC announced a BBC News savings target of £80 million per year by 2022, involving about 450 staff reductions from the then-current 6,000. BBC director of news and current affairs Fran Unsworth said there would be further moves toward digital broadcasting, in part to attract back a youth audience, and more pooling of reporters to stop separate teams covering the same news. A further 70 staff reductions were announced in July 2020. BBC Three began airing the news programme The Catch Up in February 2022. It is presented by Levi Jouavel, Kirsty Grant, and Callum Tulley and aims to help the channel's target audience (16- to 34-year-olds) make sense of the world around them while also highlighting optimistic stories. Compared to its predecessor 60 Seconds, The Catch Up is three times longer, running for about three minutes, and does not air at weekends. According to its annual report, as of December 2021 India has the largest number of people using BBC services in the world.
In May 2025, following the earthquake that hit Myanmar and Thailand, a television news bulletin (BBC News Myanmar) from the Burmese service began broadcasting, using a vacated Voice of America satellite frequency. Programming and reporting In November 2023, BBC News joined with the International Consortium of Investigative Journalists, Paper Trail Media and 69 media partners, including Distributed Denial of Secrets and the Organised Crime and Corruption Reporting Project (OCCRP), and more than 270 journalists in 55 countries and territories to produce the "Cyprus Confidential" report on the financial network that supports the regime of Vladimir Putin, mostly with connections to Cyprus; the report showed Cyprus to have strong links with high-up figures in the Kremlin, some of whom have been sanctioned. Government officials, including Cyprus president Nikos Christodoulides, and European lawmakers began responding to the investigation's findings within 24 hours, calling for reforms and launching probes. BBC News is responsible for the news programmes and documentary content on the BBC's general television channels, as well as the news coverage on the BBC News Channel in the UK, and 22 hours of programming for the corporation's international BBC World News channel. Coverage for BBC Parliament is carried out on behalf of the BBC at Millbank Studios, though BBC News provides editorial and journalistic content. BBC News content is also output onto the BBC's digital interactive television services under the BBC Red Button brand and, until 2012, appeared on the Ceefax teletext system. The music on all BBC television news programmes was composed by David Lowe and introduced as part of the rebranding which commenced in 1999; it features the "BBC pips". The general theme was used on bulletins on BBC One, News 24, BBC World and local news programmes in the BBC's Nations and Regions. Lowe was also responsible for the music on Radio 1's Newsbeat. The theme has had several changes since 1999, the latest in March 2013. The BBC Arabic Television news channel launched on 11 March 2008, and a Persian-language channel followed on 14 January 2009, broadcasting from the Peel wing of Broadcasting House; both include news, analysis, interviews, sport and cultural programmes, and are run by the BBC World Service and funded from a grant-in-aid from the British Foreign Office (and not the television licence). The BBC Verify service was launched in 2023 to fact-check news stories, followed by BBC Verify Live in 2025. BBC Radio News produces bulletins for the BBC's national radio stations and provides content for local BBC radio stations via the General News Service (GNS), a BBC-internal news distribution service. BBC News does not produce the BBC's regional news bulletins, which are produced individually by the BBC nations and regions themselves. The BBC World Service broadcasts to some 150 million people in English as well as 27 languages across the globe. BBC Radio News is a patron of the Radio Academy. BBC News Online is the BBC's news website. Launched in November 1997, it is one of the most popular news websites, with 1.2 billion website visits in April 2021, and is used by 60% of the UK's internet users for news. The website contains international news coverage as well as entertainment, sport, science, and political news. Mobile apps for Android, iOS and Windows Phone systems have been provided since 2010.
Many television and radio programmes are also available to view on the BBC iPlayer and BBC Sounds services. The BBC News channel is also available to view 24 hours a day, while video and radio clips are available within online news articles. In October 2019, BBC News Online launched a mirror on the dark web anonymity network Tor in an effort to circumvent censorship. Criticism The BBC is required by its charter to be free from both political and commercial influence and answers only to its viewers and listeners. This political objectivity is sometimes questioned. For instance, The Daily Telegraph (3 August 2005) carried a letter from the KGB defector Oleg Gordievsky referring to the BBC as "The Red Service". Books have been written on the subject, including anti-BBC works such as Truth Betrayed by W J West and The Truth Twisters by Richard Deacon. The BBC has been accused of bias by Conservative MPs. The BBC's Editorial Guidelines on Politics and Public Policy state that while "the voices and opinions of opposition parties must be routinely aired and challenged", "the government of the day will often be the primary source of news". The BBC is regularly accused by the government of the day of bias in favour of the opposition and, by the opposition, of bias in favour of the government. Similarly, during times of war, the BBC is often accused by the UK government, or by strong supporters of British military campaigns, of being overly sympathetic to the view of the enemy. An edition of Newsnight at the start of the Falklands War in 1982 was described as "almost treasonable" by the MP John Page, who objected to Peter Snow saying "if we believe the British". During the first Gulf War, critics of the BBC took to using the satirical name "Baghdad Broadcasting Corporation". During the Kosovo War, the BBC was labelled the "Belgrade Broadcasting Corporation" (suggesting favouritism towards the FR Yugoslavia government over the ethnic Albanian rebels) by British ministers, although Slobodan Milošević (then FRY president) claimed that the BBC's coverage had been biased against his nation. Conversely, some of those who style themselves anti-establishment in the United Kingdom, or who oppose foreign wars, have accused the BBC of pro-establishment bias or of refusing to give an outlet to "anti-war" voices. Following the 2003 invasion of Iraq, a study by the Cardiff University School of Journalism of the reporting of the war found that nine out of 10 references to weapons of mass destruction during the war assumed that Iraq possessed them, and only one in 10 questioned this assumption. It also found that, out of the main British broadcasters covering the war, the BBC was the most likely to use the British government and military as its source. It was also the least likely to use independent sources, such as the Red Cross, which were more critical of the war. When it came to reporting Iraqi casualties, the study found fewer reports on the BBC than on the other three main channels. The report's author, Justin Lewis, wrote: "Far from revealing an anti-war BBC, our findings tend to give credence to those who criticised the BBC for being too sympathetic to the government in its war coverage. Either way, it is clear that the accusation of BBC anti-war bias fails to stand up to any serious or sustained analysis." Prominent BBC appointments are constantly assessed by the British media and political establishment for signs of political bias.
The appointment of Greg Dyke as Director-General was highlighted by press sources because Dyke was a Labour Party member and former activist, as well as a friend of Tony Blair. The BBC's former political editor Nick Robinson had, some years earlier, been a chairman of the Young Conservatives and as a result attracted informal criticism from the former Labour government, but his predecessor Andrew Marr faced similar claims from the right because he had been editor of The Independent, a liberal-leaning newspaper, before his appointment in 2000. Mark Thompson, former Director-General of the BBC, admitted the organisation had been biased "towards the left" in the past. He said, "In the BBC I joined 30 years ago, there was, in much of current affairs, in terms of people's personal politics, which were quite vocal, a massive bias to the left". He then added, "The organization did struggle then with impartiality. Now it is a completely different generation. There is much less overt tribalism among the young journalists who work for the BBC." Following the EU referendum in 2016, some critics suggested that the BBC was biased in favour of leaving the EU. For instance, in 2018, the BBC received complaints from people who took issue with the BBC not sufficiently covering anti-Brexit marches while giving smaller-scale events hosted by former UKIP leader Nigel Farage more airtime. On the other hand, a poll released by YouGov showed that 45% of people who voted to leave the EU thought that the BBC was "actively anti-Brexit", compared to 13% of the same kinds of voters who thought the BBC was pro-Brexit. In 2008, BBC Hindi was criticised by some Indian outlets for referring to the terrorists who carried out the 2008 Mumbai attacks as "gunmen". The response to this added to prior criticism from some Indian commentators suggesting that the BBC may have an Indophobic bias. In March 2015, the BBC was criticised for a BBC Storyville documentary, India's Daughter, which interviewed one of the men convicted of the 2012 Delhi gang rape. In spite of a ban ordered by the Indian High Court, the BBC still aired the documentary outside India. BBC News was at the centre of a political controversy following the 2003 invasion of Iraq. Three BBC News reports (Andrew Gilligan's on Today, Gavin Hewitt's on The Ten O'Clock News and another on Newsnight) quoted an anonymous source who stated that the British government (particularly the Prime Minister's office) had embellished the September Dossier with misleading exaggerations of Iraq's weapons of mass destruction capabilities. The government denounced the reports and accused the corporation of poor journalism. In subsequent weeks the corporation stood by the reports, saying that it had a reliable source. Following intense media speculation, David Kelly was named in the press as the source for Gilligan's story on 9 July 2003. Kelly was found dead, by suicide, in a field close to his home early on 18 July. An inquiry led by Lord Hutton was announced by the British government the following day to investigate the circumstances leading to Kelly's death; it concluded that "Dr. Kelly took his own life." In his report, published on 28 January 2004, Lord Hutton also concluded that Gilligan's original accusation was "unfounded" and that the BBC's editorial and management processes were "defective". In particular, the report criticised the chain of management that caused the BBC to defend its story.
The report said that the BBC Director of News, Richard Sambrook, had accepted Gilligan's word that his story was accurate in spite of his notes being incomplete. Gavyn Davies had then told the BBC Board of Governors that he was happy with the story, and told the Prime Minister that a satisfactory internal inquiry had taken place. The Board of Governors, under Davies's guidance as chairman, accepted that further investigation of the Government's complaints was unnecessary. Because of the criticism in the Hutton report, Davies resigned on the day of publication. BBC News faced an important test with the publication of the report, reporting on itself, but by common consent (of the Board of Governors) managed this "independently, impartially and honestly". Davies's resignation was followed by that of the Director-General, Greg Dyke, the following day, and by Gilligan's resignation on 30 January. While undoubtedly a traumatic experience for the corporation, an ICM poll in April 2003 indicated that it had sustained its position as the best and most trusted provider of news. The BBC has faced accusations of holding both anti-Israel and anti-Palestine bias. Douglas Davis, the London correspondent of The Jerusalem Post, has described the BBC's coverage of the Arab–Israeli conflict as "a relentless, one-dimensional portrayal of Israel as a demonic, criminal state and Israelis as brutal oppressors [which] bears all the hallmarks of a concerted campaign of vilification that, wittingly or not, has the effect of delegitimising the Jewish state and pumping oxygen into a dark old European hatred that dared not speak its name for the past half-century." However, two large independent studies, one conducted by Loughborough University and the other by the Glasgow University Media Group, concluded that Israeli perspectives are given greater coverage. Critics of the BBC argue that the Balen Report proves systematic bias against Israel in headline news programming. The Daily Mail and The Daily Telegraph criticised the BBC for spending hundreds of thousands of British taxpayers' pounds on preventing the report from being released to the public. Jeremy Bowen, the Middle East editor for BBC News, was singled out specifically for bias by the BBC Trust, which concluded that he had violated "BBC guidelines on accuracy and impartiality." An independent panel appointed by the BBC Trust was set up in 2006 to review the impartiality of the BBC's coverage of the Israeli–Palestinian conflict. The panel's assessment was that "apart from individual lapses, there was little to suggest deliberate or systematic bias." While noting a "commitment to be fair, accurate and impartial" and praising much of the BBC's coverage, the independent panel concluded "that BBC output does not consistently give a full and fair account of the conflict. In some ways the picture is incomplete and, in that sense, misleading." It noted that "the failure to convey adequately the disparity in the Israeli and Palestinian experience, [reflects] the fact that one side is in control and the other lives under occupation". Writing in the Financial Times, Philip Stephens, one of the panellists, later accused the BBC's director-general, Mark Thompson, of misrepresenting the panel's conclusions. He further opined: "My sense is that BBC news reporting has also lost a once iron-clad commitment to objectivity and a necessary respect for the democratic process. If I am right, the BBC, too, is lost". Mark Thompson published a rebuttal in the FT the next day.
The description by one BBC correspondent, reporting on the funeral of Yasser Arafat, of having been left with tears in her eyes led to further questions of impartiality, particularly from Martin Walker in a guest opinion piece in The Times, who picked out the apparent case of Fayad Abu Shamala, the BBC Arabic Service correspondent, who told a Hamas rally on 6 May 2001 that journalists in Gaza were "waging the campaign shoulder to shoulder together with the Palestinian people". Walker argues that the independent inquiry was flawed for two reasons. Firstly, the time period over which it was conducted (August 2005 to January 2006) surrounded the Israeli withdrawal from Gaza and Ariel Sharon's stroke, which produced more positive coverage than usual. Secondly, he wrote, the inquiry only looked at the BBC's domestic coverage, and excluded output on the BBC World Service and BBC World. Tom Gross accused the BBC of glorifying Hamas suicide bombers, and condemned its policy of inviting guests such as Jenny Tonge and Tom Paulin, who have compared Israeli soldiers to Nazis. Writing for the BBC, Paulin said Israeli soldiers should be "shot dead" like Hitler's SS, and said he could "understand how suicide bombers feel". The BBC also faced criticism for not airing a Disasters Emergency Committee aid appeal for Palestinians who suffered in the 22-day war in Gaza between late 2008 and early 2009. Most other major UK broadcasters did air the appeal, but rival Sky News did not. British journalist Julie Burchill has accused the BBC of creating a "climate of fear" for British Jews through its "excessive coverage" of Israel compared to other nations. In light of the Gaza war, the BBC suspended seven Arab journalists over allegations that they had expressed support for Hamas on social media. BBC and ABC shared video segments and reporters as needed in producing their newscasts, with the BBC showing ABC World News Tonight with David Muir in the UK. In July 2017, however, the BBC announced a new partnership with CBS News that allows both organisations to share video, editorial content, and additional newsgathering resources in New York, London, Washington and around the world. BBC News subscribes to wire services from leading international agencies including PA Media (formerly the Press Association), Reuters, and Agence France-Presse. In April 2017, the BBC dropped the Associated Press in favour of an enhanced service from AFP. BBC News reporters and broadcasts are now, and have in the past been, banned in several countries, primarily for reporting which has been unfavourable to the ruling government. For example, correspondents were banned by the former apartheid regime of South Africa. The BBC was banned in Zimbabwe under Mugabe for eight years as a "terrorist organisation" until being allowed to operate again over a year after the 2008 elections. The BBC was banned in Burma (officially Myanmar) after its coverage and commentary on the anti-government protests there in September 2007; the ban was lifted four years later, in September 2011. Other cases have included Uzbekistan, China, and Pakistan. BBC Persian, the BBC's Persian-language news site, was blocked from the Iranian internet in 2006. The BBC News website was made available in China again in March 2008 but, as of October 2014, was blocked again.
In June 2015, the Rwandan government placed an indefinite ban on BBC broadcasts following the airing of Rwanda's Untold Story, a controversial documentary regarding the 1994 Rwandan genocide broadcast on BBC2 on 1 October 2014. The UK's Foreign Office recognised "the hurt caused in Rwanda by some parts of the documentary". In February 2017, reporters from the BBC (as well as the Daily Mail, The New York Times, Politico, CNN, and others) were denied access to a United States White House briefing. In 2017, BBC India was banned for a period of five years from covering all national parks and sanctuaries in India. Following the withdrawal of CGTN's UK broadcasting licence by Ofcom on 4 February 2021, China banned BBC News from airing in the country.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_ref-128] | [TOKENS: 6011]
Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated that there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly proliferative clade whose members have a bilaterally symmetric and significantly cephalised body plan. The vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived about 650 Mya, during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey, while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports.
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot produce their own food, a feature they share with fungi; instead, animals ingest organic material and digest it internally. Animals also have structural characteristics that set them apart from all other living things: typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible and allowing cells to differentiate. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites; a toy model of how such trophic levels stack is sketched below.
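Trophic structure can be made concrete by treating a food web as a directed graph from eaten to eater, with a consumer sitting one level above the highest-level food it eats. The following minimal Python sketch uses invented species and feeding links, not data from this article:

# Toy food web: each key lists the species that eat it.
# All species and links here are invented for illustration.
eaten_by = {
    "grass": ["rabbit", "grasshopper"],
    "grasshopper": ["shrew"],
    "rabbit": ["fox"],
    "shrew": ["fox"],
}

PRODUCERS = {"grass"}  # photosynthesisers sit at trophic level 1

def trophic_level(species):
    # A consumer is one level above its highest-level food.
    if species in PRODUCERS:
        return 1
    foods = [prey for prey, eaters in eaten_by.items() if species in eaters]
    return 1 + max(trophic_level(f) for f in foods)

for s in ("grass", "grasshopper", "rabbit", "shrew", "fox"):
    print(s, trophic_level(s))

Running it prints level 1 for the producer, 2 for the herbivores, and 3–4 for the carnivores, mirroring the producer, primary-consumer and higher-trophic-level chain described in the following passage.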
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. The selective pressures they impose on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic or competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles, which mainly eat sponges. Most animals rely on the biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via the oxidising of inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are the Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus, which may have reached 39 metres.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The major animal phyla differ widely in their estimated numbers of described extant species, in their principal habitats (terrestrial, fresh water, and marine), and in free-living or parasitic ways of life. Such species estimates are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species – including those not yet described – was calculated to be about 7.77 million in 2011. Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups: analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia established their animal nature. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may, however, be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion), from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear, for example, in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors; in their external phylogeny, the Holomycota (including the fungi), Ichthyosporea, Pluriformea and Filasterea are successively closer relatives of the Choanozoa, with some of these relationships still uncertain. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. Like the sponges, the Placozoa have no symmetry, and they were often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both; in the sponge-sister tree that they supported, Porifera branches first, followed successively by Ctenophora and Placozoa, with Cnidaria and Bilateria as closest relatives (their ctenophore-sister tree simply interchanges the places of ctenophores and sponges). Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to support the ctenophore-sister phylogeny, in which Ctenophora branches first, followed by Porifera and Placozoa, again with Cnidaria and Bilateria as closest relatives; both topologies are written out in Newick notation below. Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and is under active research.
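The two branching orders just described can be written compactly in Newick notation, the standard plain-text format for phylogenetic trees. The strings below assume the ladder-like branching given above and carry no branch lengths; the Biopython rendering is optional and shown only in comments:

# Sponge-sister topology (Feuda et al. 2017), ladder-like branching as described:
sponge_sister = "(Porifera,(Ctenophora,(Placozoa,(Cnidaria,Bilateria))));"

# Ctenophore-sister topology (Schultz et al. 2023): the two outermost taxa swap places.
ctenophore_sister = "(Ctenophora,(Porifera,(Placozoa,(Cnidaria,Bilateria))));"

# With Biopython installed, either string can be drawn as an ASCII tree:
# from io import StringIO
# from Bio import Phylo
# Phylo.draw_ascii(Phylo.read(StringIO(sponge_sister), "newick"))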
The remaining animals, the great majority – comprising some 29 phyla and over a million species – form the Bilateria clade, which have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. In the modern consensus phylogeny of the Bilateria, the basal branch is the Xenacoelomorpha, with the remaining phyla falling into the deuterostome clades Ambulacraria and Chordata and the protostome clades Ecdysozoa and Spiralia. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles, that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification In the classical era, Aristotle divided animals, based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess') and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches', with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata: echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both of domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates, including cephalopods, crustaceans, insects – principally bees and silkworms – and bivalve or gastropod molluscs, are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals, including cattle and horses, have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccination was discovered in the 18th century. Some medicines, such as the cancer drug trabectedin, are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, from invertebrates such as tarantulas, octopuses, and praying mantises, through reptiles such as snakes and chameleons, to birds including canaries, parakeets, and parrots. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Andromeda_Galaxy] | [TOKENS: 7754]
Contents Andromeda Galaxy The Andromeda Galaxy is a barred spiral galaxy[c] and is the nearest major galaxy to the Milky Way. It was originally named the Andromeda Nebula and is cataloged as Messier 31, M31, and NGC 224. Andromeda has a D25 isophotal diameter of about 46.56 kiloparsecs (152,000 light-years) and is approximately 765 kpc (2.5 million light-years) from Earth. The galaxy's name stems from the area of Earth's sky in which it appears, the constellation of Andromeda, which itself is named after the princess who was the wife of Perseus in Greek mythology. The virial mass of the Andromeda Galaxy is of the same order of magnitude as that of the Milky Way, at 1 trillion solar masses (2.0×10⁴² kilograms). The mass of either galaxy is difficult to estimate with any accuracy, but it was long thought that the Andromeda Galaxy was more massive than the Milky Way by a margin of some 25% to 50%. However, this has been called into question by early-21st-century studies indicating a possibly lower mass for the Andromeda Galaxy and a higher mass for the Milky Way. The Andromeda Galaxy has a diameter of about 46.56 kpc (152,000 ly), making it the largest member of the Local Group of galaxies in terms of extension. The Milky Way and Andromeda galaxies have about a 50% chance of colliding with each other in the next 10 billion years, merging to potentially form a giant elliptical galaxy or a large lenticular galaxy. With an apparent magnitude of 3.4, the Andromeda Galaxy is among the brightest of the Messier objects, and is visible to the naked eye from Earth on moonless nights, even when viewed from areas with moderate light pollution. Observation history The Andromeda Galaxy is visible to the naked eye in dark skies. It has been speculated that the Babylonian constellation of the Rainbow, Mul Dingir Tir-an-na, may have referred to M31. Around the year 964 CE, the Persian astronomer Abd al-Rahman al-Sufi described the Andromeda Galaxy in his Book of Fixed Stars as a "nebulous smear" or "small cloud". This was the first historical reference to the Andromeda Galaxy and the earliest known reference to a galaxy other than the Milky Way. Star charts of that period labeled it as the Little Cloud. In 1612, the German astronomer Simon Marius gave an early description of the Andromeda Galaxy based on telescopic observations. John Flamsteed cataloged it as 33 Andromedae. Pierre Louis Maupertuis conjectured in 1745 that the blurry spot was an island universe. Charles Messier cataloged Andromeda as object M31 in 1764 and incorrectly credited Marius as the discoverer despite it being visible to the naked eye. In 1785, the astronomer William Herschel noted a faint reddish hue in the core region of Andromeda. He believed Andromeda to be the nearest of all the "great nebulae," and based on the color and magnitude of the nebula, he incorrectly guessed that it was no more than 2,000 times the distance of Sirius, or roughly 18,000 ly (5.5 kpc). In 1850, William Parsons, 3rd Earl of Rosse, made a drawing of Andromeda's spiral structure. In 1864, William Huggins noted that the spectrum of Andromeda differed from that of a gaseous nebula. The spectrum of Andromeda displays a continuum of frequencies, superimposed with dark absorption lines that help identify the chemical composition of an object. Andromeda's spectrum is very similar to the spectra of individual stars, and from this, it was deduced that Andromeda has a stellar nature.
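A quick consistency check of Herschel's figure quoted above, using the modern distance to Sirius of about 8.6 light-years (an illustrative calculation, not from the source):

2000 \times d_{\mathrm{Sirius}} \approx 2000 \times 8.6\ \mathrm{ly} \approx 17{,}000\ \mathrm{ly} \approx 5.3\ \mathrm{kpc}

which matches the quoted "roughly 18,000 ly (5.5 kpc)" and falls short of the modern distance of about 2.5 million light-years by a factor of roughly 140.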
In 1885, a supernova (known as S Andromedae) was seen in Andromeda, the only one ever observed in that galaxy. At the time, it was called "Nova 1885"—the difference between "novae" in the modern sense and supernovae was not yet known. Andromeda was considered to be a nearby object, and it was not realized that the "nova" was much brighter than ordinary novae. In 1888, Isaac Roberts took one of the first photographs of Andromeda, which was still commonly thought to be a nebula within the Milky Way galaxy. Roberts mistook Andromeda and similar "spiral nebulae" as star systems being formed. In 1912, Vesto Slipher used spectroscopy to measure the radial velocity of Andromeda with respect to the Solar System—the largest velocity yet measured, at 300 km/s (190 mi/s). As early as 1755, the German philosopher Immanuel Kant proposed the hypothesis that the Milky Way is only one of many galaxies in his book Universal Natural History and Theory of the Heavens. Arguing that a structure like the Milky Way would look like a circular nebula viewed from above and like an ellipsoid if viewed from an angle, he concluded that the observed elliptical nebulae like Andromeda, which could not be explained otherwise at the time, were indeed galaxies similar to the Milky Way, not nebulae, as Andromeda was commonly believed to be. In 1917, Heber Curtis observed a nova within Andromeda. After searching the photographic record, 11 more novae were discovered. Curtis noticed that these novae were, on average, 10 magnitudes fainter than those that occurred elsewhere in the sky. As a result, he was able to come up with a distance estimate of 500,000 ly (32 billion AU). Although this estimate is about fivefold lower than the best estimates now available, it was the first known estimate of the distance to Andromeda that was correct to within an order of magnitude (i.e., to within a factor of ten of the current estimates, which place the distance around 2.5 million light-years). Curtis became a proponent of the so-called "island universes" hypothesis: that spiral nebulae were actually independent galaxies. In 1920, the Great Debate between Harlow Shapley and Curtis took place concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the universe. To support his claim that the Great Andromeda Nebula is, in fact, an external galaxy, Curtis also noted the appearance of dark lanes within Andromeda that resembled the dust clouds in the Milky Way galaxy, as well as historical observations of the Andromeda Galaxy's significant Doppler shift. In 1922, Ernst Öpik presented a method to estimate the distance of Andromeda using the measured velocities of its stars. His result placed the Andromeda Nebula far outside the Milky Way at a distance of about 450 kpc (1,500 kly). Edwin Hubble settled the debate in 1925 when he identified extragalactic Cepheid variable stars for the first time on astronomical photos of Andromeda. These were made using the 100-inch (2.5 m) Hooker telescope, and they enabled the distance of the Great Andromeda Nebula to be determined. His measurement demonstrated conclusively that this feature was not a cluster of stars and gas within the Milky Way galaxy, but an entirely separate galaxy located a significant distance from the Milky Way. In 1943, Walter Baade was the first person to resolve stars in the central region of the Andromeda Galaxy. 
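Curtis's reasoning above can be made explicit with the magnitude scale and the inverse-square law; a minimal sketch (standard relations; the distance assumed for typical galactic novae is illustrative, not from the source):

\frac{F_{\mathrm{gal}}}{F_{\mathrm{M31}}} = 10^{\Delta m/2.5} = 10^{10/2.5} = 10^{4}, \qquad \frac{d_{\mathrm{M31}}}{d_{\mathrm{gal}}} = \sqrt{10^{4}} = 100

Novae appearing 10 magnitudes fainter are thus about 100 times more distant; if the comparison novae lay at roughly 5,000 ly, Andromeda's novae would lie near 500,000 ly, the order of Curtis's estimate.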
Baade identified two distinct populations of stars based on their metallicity, naming the young, high-velocity stars in the disk Type I and the older, red stars in the bulge Type II. This nomenclature was subsequently adopted for stars within the Milky Way and elsewhere. (The existence of two distinct populations had been noted earlier by Jan Oort.) Baade also discovered that there were two types of Cepheid variable stars, which resulted in doubling the distance estimate to Andromeda, as well as the remainder of the universe. In 1950, radio emissions from the Andromeda Galaxy were detected by Robert Hanbury Brown and Cyril Hazard at the Jodrell Bank Observatory. The first radio maps of the galaxy were made in the 1950s by John Baldwin and collaborators at the Cambridge Radio Astronomy Group. The core of the Andromeda Galaxy is called 2C 56 in the 2C radio astronomy catalog. In 1959, rapid rotation of the semi-stellar nucleus of M31 was discovered by André Lallemand, M. Duchesne, and Merle Walker at the Lick Observatory, using the 120-inch telescope, coudé spectrograph, and Lallemand electronographic camera. They estimated the mass of the nucleus to be about 1.3 × 10⁷ solar masses. The second example of this phenomenon was found in 1961 in the nucleus of M32 by M. F. Walker at the Lick Observatory, using the same equipment as used for the discovery of the nucleus of M31. He estimated the nuclear mass to be between 0.8 and 1 × 10⁷ solar masses. Such rotation is now considered to be evidence of the existence of supermassive black holes in the nuclei of these galaxies. In 2009, an occurrence of microlensing—a phenomenon caused by the deflection of light by a massive object—may have led to the first discovery of a planet in the Andromeda Galaxy. In 2020, observations of linearly polarized radio emission with the Westerbork Synthesis Radio Telescope, the Effelsberg 100-m Radio Telescope, and the Very Large Array revealed ordered magnetic fields aligned along the "10-kpc ring" of gas and star formation. In 2023, amateur astronomers Marcel Drechsler, Xavier Strottner and Yann Sainty announced the discovery of a huge, oxygen-rich emission nebula just south of M31, near the bright star 35 And. This nebula, now classified as SDSO-1, is exceedingly faint, requiring a minimum of dozens of hours of exposure time to detect, and appears to emit only in O III. Deep studies of the surrounding regions showed no signs of similarly bright oxygen nebulae near M31, nor any sign of connecting hydrogen filaments to SDSO-1, suggesting a high oxygen-hydrogen ratio. Current research suggests SDSO-1 is extragalactic in nature, specifically caused by interaction between the Milky Way's and M31's circumgalactic halos, although more research is needed to fully understand this object. A later study using spectroscopy found the nebula to be in the Milky Way. One study found the nebula to be a bow shock of a ghost planetary nebula around the binary EG Andromedae. In 2025, NASA published a huge mosaic made by the Hubble Space Telescope, assembled from approximately 600 separate overlapping fields of view taken over 10 years of Hubble observation. Hubble resolves an estimated 200 million stars that are hotter than the Sun, though these represent only a fraction of the galaxy's total estimated stellar population. General The estimated distance of the Andromeda Galaxy from our own was doubled in 1953 when it was discovered that there is a second, dimmer type of Cepheid variable star.
In the 1990s, measurements of both standard red giants and red clump stars from the Hipparcos satellite were used to calibrate the Cepheid distances. A major merger occurred in Andromeda 2 to 3 billion years ago, involving two galaxies with a mass ratio of approximately 4:1. The discovery of a recent merger in the Andromeda galaxy was first based on interpreting its anomalous age-velocity dispersion relation, as well as the fact that 2 billion years ago, star formation throughout Andromeda's disk was much more active than today. Modeling of this violent collision shows that it has formed most of the galaxy's (metal-rich) galactic halo, including the Giant Stream, and also the extended thick disk, the young thin disk, and the static 10 kpc ring. During this epoch, its rate of star formation would have been very high, to the point of becoming a luminous infrared galaxy for roughly 100 million years. Modeling also recovers the bulge profile, the large bar, and the overall halo density profile. Andromeda and the Triangulum Galaxy (M33) might have had a very close passage 2–4 billion years ago, but recent measurements from the Hubble Space Telescope suggest this is unlikely. At least four distinct techniques have been used to estimate distances from Earth to the Andromeda Galaxy. In 2003, using the infrared surface brightness fluctuations (I-SBF) and adjusting for the new period-luminosity value and a metallicity correction of −0.2 mag dex⁻¹ in (O/H), an estimate of 2.57 ± 0.06 million ly (162.5 ± 3.8 billion AU) was derived. A 2004 Cepheid variable method estimated the distance to be 2.51 ± 0.13 million light-years (770 ± 40 kpc). In 2005, an eclipsing binary star was discovered in the Andromeda Galaxy. The binary[d] is made up of two hot blue stars of types O and B. By studying the eclipses of the stars, astronomers were able to measure their sizes. Knowing the sizes and temperatures of the stars, they were able to measure their absolute magnitude. When the visual and absolute magnitudes are known, the distance to the star can be calculated. The stars lie at a distance of 2.52 ± 0.14 million ly (159.4 ± 8.9 billion AU) and the whole Andromeda Galaxy at about 2.5 million ly (160 billion AU). This new value is in excellent agreement with the previous, independent Cepheid-based distance value. The TRGB method was also used in 2005 giving a distance of 2.56 ± 0.08 million ly (161.9 ± 5.1 billion AU). Averaged together, these distance estimates give a value of 2.54 ± 0.11 million ly (160.6 ± 7.0 billion AU).[e] Until 2018, mass estimates for the Andromeda Galaxy's halo (including dark matter) gave a value of approximately 1.5×10¹² M☉, compared to 8×10¹¹ M☉ for the Milky Way. This contradicted even earlier measurements that seemed to indicate that the Andromeda Galaxy and Milky Way are almost equal in mass. In 2018, the earlier measurements for equality of mass were re-established by radio results as approximately 8×10¹¹ M☉. In 2006, the Andromeda Galaxy's spheroid was determined to have a higher stellar density than that of the Milky Way, and its galactic stellar disk was estimated at twice the diameter of that of the Milky Way. The total mass of the Andromeda Galaxy is estimated to be between 8×10¹¹ M☉ and 1.1×10¹² M☉. The stellar mass of M31 is (10–15)×10¹⁰ M☉, with 30% of that mass in the central bulge, 56% in the disk, and the remaining 14% in the stellar halo.
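The eclipsing-binary determination described above rests on two standard relations: the Stefan–Boltzmann law, which yields a star's luminosity (and hence absolute magnitude M) from its measured radius and temperature, and the distance modulus, which links apparent magnitude m and M to distance. A minimal sketch (standard formulas, stated here for clarity; the worked number uses the quoted 2004 Cepheid distance):

L = 4\pi R^{2}\sigma T_{\mathrm{eff}}^{4}, \qquad m - M = 5\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right), \qquad d = 10^{(m-M+5)/5}\ \mathrm{pc}

For example, a distance of 770 kpc corresponds to a distance modulus of m − M = 5 log₁₀(77,000) ≈ 24.4 magnitudes.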
As of 2018, the radio results (indicating a mass similar to that of the Milky Way) are considered the most likely, although the matter is clearly still under active investigation by several research groups worldwide. As of 2019, calculations based on escape velocity and dynamical mass measurements put the Andromeda Galaxy at 0.8×10¹² M☉, which is only half of the Milky Way's newer mass, calculated in 2019 at 1.5×10¹² M☉. In addition to stars, the Andromeda Galaxy's interstellar medium contains at least 7.2×10⁹ M☉ in the form of neutral hydrogen, at least 3.4×10⁸ M☉ as molecular hydrogen (within its innermost 10 kiloparsecs), and 5.4×10⁷ M☉ of dust. The Andromeda Galaxy is surrounded by a massive halo of hot gas that is estimated to contain half the mass of the stars in the galaxy. The nearly invisible halo stretches about a million light-years from its host galaxy, halfway to our Milky Way Galaxy. Simulations of galaxies indicate the halo formed at the same time as the Andromeda Galaxy. The halo is enriched in elements heavier than hydrogen and helium, formed from supernovae, and its properties are those expected for a galaxy that lies in the "green valley" of the galaxy color–magnitude diagram (see below). Supernovae erupt in the Andromeda Galaxy's star-filled disk and eject these heavier elements into space. Over the Andromeda Galaxy's lifetime, nearly half of the heavy elements made by its stars have been ejected far beyond the galaxy's 200,000-light-year-diameter stellar disk. The estimated luminosity of the Andromeda Galaxy, ~2.6×10¹⁰ L☉, is about 25% higher than that of our own galaxy. However, the galaxy has a high inclination as seen from Earth, and its interstellar dust absorbs an unknown amount of light, so it is difficult to estimate its actual brightness, and other authors have given other values for the luminosity of the Andromeda Galaxy (some authors even propose it is the second-brightest galaxy within a radius of 10 megaparsecs of the Milky Way, after the Sombrero Galaxy, with an absolute magnitude of around −22.21[f] or close). An estimate made with the help of the Spitzer Space Telescope, published in 2010, suggests an absolute magnitude (in the blue) of −20.89 (which, with a color index of +0.63, translates to an absolute visual magnitude of −21.52,[a] compared to −20.9 for the Milky Way), and a total luminosity in that wavelength of 3.64×10¹⁰ L☉. The rate of star formation in the Milky Way is much higher, with the Andromeda Galaxy producing only about one solar mass per year compared to 3–5 solar masses for the Milky Way. The rate of novae in the Milky Way is also double that of the Andromeda Galaxy. This suggests that the latter once experienced a great star formation phase, but is now in a relative state of quiescence, whereas the Milky Way is experiencing more active star formation. Should this continue, the luminosity of the Milky Way may eventually overtake that of the Andromeda Galaxy. According to recent studies, the Andromeda Galaxy lies in what is known in the galaxy color–magnitude diagram as the "green valley", a region populated by galaxies like the Milky Way in transition from the "blue cloud" (galaxies actively forming new stars) to the "red sequence" (galaxies that lack star formation). Star formation activity in green valley galaxies is slowing as they run out of star-forming gas in the interstellar medium.
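The magnitude conversion quoted above is direct arithmetic on the definition of the color index, B − V = M_B − M_V (a standard photometric relation, stated here for clarity):

M_V = M_B - (B - V) = -20.89 - 0.63 = -21.52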
In simulated galaxies with similar properties to the Andromeda Galaxy, star formation is expected to extinguish within about five billion years, even accounting for the expected, short-term increase in the rate of star formation due to the collision between the Andromeda Galaxy and the Milky Way. Structure Based on its appearance in visible light, the Andromeda Galaxy is classified as an SA(s)b galaxy in the de Vaucouleurs–Sandage extended classification system of spiral galaxies. However, infrared data from the 2MASS survey and the Spitzer Space Telescope showed that Andromeda is actually a barred spiral galaxy, like the Milky Way, with Andromeda's bar major axis oriented 55 degrees anti-clockwise from the disc major axis. There are various methods used in astronomy for defining the size of a galaxy, and different methods can yield different results for the same galaxy. The most commonly employed is the D25 standard, the isophote where the photometric brightness of a galaxy in the B-band (445 nm wavelength of light, in the blue part of the visible spectrum) reaches 25 mag/arcsec². The Third Reference Catalogue of Bright Galaxies (RC3) used this standard for Andromeda in 1991, yielding an isophotal diameter of 46.56 kiloparsecs (152,000 light-years) at a distance of 2.5 million light-years. An earlier estimate from 1981 gave a diameter for Andromeda at 54 kiloparsecs (176,000 light-years). A 2005 study using the Keck telescopes showed the existence of a tenuous sprinkling of stars, or galactic halo, extending outward from the galaxy. The stars in this halo behave differently from those in Andromeda's main galactic disc: halo stars show rather disorganized orbital motions, whereas stars in the main disc have more orderly orbits and uniform velocities of about 200 km/s. This diffuse halo extends outwards away from Andromeda's main disc to a diameter of 67.45 kiloparsecs (220,000 light-years). The galaxy is inclined an estimated 77° relative to Earth (where an angle of 90° would be edge-on). Analysis of the cross-sectional shape of the galaxy appears to demonstrate a pronounced, S-shaped warp, rather than just a flat disk. A possible cause of such a warp could be gravitational interaction with the satellite galaxies near the Andromeda Galaxy. The galaxy M33 could be responsible for some warp in Andromeda's arms, though more precise distances and radial velocities are required. Spectroscopic studies have provided detailed measurements of the rotational velocity of the Andromeda Galaxy as a function of radial distance from the core. The rotational velocity has a maximum value of 225 km/s (140 mi/s) at 1,300 ly (82 million AU) from the core, and its minimum, possibly as low as 50 km/s (31 mi/s), occurs at 7,000 ly (440 million AU) from the core. Further out, rotational velocity rises out to a radius of 33,000 ly (2.1 billion AU), where it reaches a peak of 250 km/s (160 mi/s). The velocities slowly decline beyond that distance, dropping to around 200 km/s (120 mi/s) at 80,000 ly (5.1 billion AU). These velocity measurements imply a concentrated mass of about 6×10⁹ M☉ in the nucleus. The total mass of the galaxy increases linearly out to 45,000 ly (2.8 billion AU), then more slowly beyond that radius. The spiral arms of the Andromeda Galaxy are outlined by a series of HII regions, first studied in great detail by Walter Baade and described by him as resembling "beads on a string".
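As a rough consistency check of the rotation-curve figures quoted above: a circular orbit of speed v at radius r implies an enclosed mass of order v²r/G. A minimal sketch in Python (illustrative only; it recovers the order of magnitude of the quoted nuclear mass, not the published modeling):

# Rough enclosed-mass estimate from a circular orbit: M(<r) ~ v^2 * r / G.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
LY = 9.4607e15     # light-year, m

v = 225e3          # rotational velocity at 1,300 ly from the core, m/s (from the text)
r = 1300 * LY      # radius, m

mass_kg = v**2 * r / G
print(f"enclosed mass ~ {mass_kg / M_SUN:.1e} solar masses")  # ~4.7e9, near the quoted 6e9

The same one-line estimate applied at the 33,000 ly peak of 250 km/s gives an enclosed mass of order 10¹¹ solar masses, consistent with the statement that the total mass keeps rising with radius.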
Baade's studies show two spiral arms that appear to be tightly wound, although they are more widely spaced than in our galaxy. He also described the spiral structure in detail as each arm crosses the major axis of the Andromeda Galaxy. Since the Andromeda Galaxy is seen close to edge-on, it is difficult to study its spiral structure. Rectified images of the galaxy seem to show a fairly normal spiral galaxy, exhibiting two continuous trailing arms that are separated from each other by a minimum of about 13,000 ly (820 million AU) and that can be followed outward from a distance of roughly 1,600 ly (100 million AU) from the core. Alternative spiral structures have been proposed such as a single spiral arm or a flocculent pattern of long, filamentary, and thick spiral arms. The most likely cause of the distortions of the spiral pattern is thought to be interaction with galaxy satellites M32 and M110. This can be seen in the displacement of the neutral hydrogen clouds relative to the stars. In 1998, images from the European Space Agency's Infrared Space Observatory demonstrated that the overall form of the Andromeda Galaxy may be transitioning into a ring galaxy. The gas and dust within the galaxy are generally formed into several overlapping rings, with a particularly prominent ring formed at a radius of 32,000 ly (9.8 kpc) from the core, nicknamed by some astronomers the ring of fire. This ring is hidden from visible light images of the galaxy because it is composed primarily of cold dust, and most of the star formation that is taking place in the Andromeda Galaxy is concentrated there. Later studies with the help of the Spitzer Space Telescope showed how the Andromeda Galaxy's spiral structure in the infrared appears to be composed of two spiral arms that emerge from a central bar and continue beyond the large ring mentioned above. Those arms, however, are not continuous and have a segmented structure. Close examination of the inner region of the Andromeda Galaxy with the same telescope also showed a smaller dust ring that is believed to have been caused by the interaction with M32 more than 200 million years ago. Simulations show that the smaller galaxy passed through the disk of the Andromeda Galaxy along the latter's polar axis. This collision stripped more than half the mass from the smaller M32 and created the ring structures in Andromeda. The co-existence of the long-known large ring-like feature in the gas of Messier 31 with this newly discovered inner ring-like structure, offset from the barycenter, suggested a nearly head-on collision with the satellite M32, a milder version of the Cartwheel encounter. Studies of the extended halo of the Andromeda Galaxy show that it is roughly comparable to that of the Milky Way, with stars in the halo being generally "metal-poor", and increasingly so with greater distance. This evidence indicates that the two galaxies have followed similar evolutionary paths. They are likely to have accreted and assimilated about 100–200 low-mass galaxies during the past 12 billion years. The stars in the extended halos of the Andromeda Galaxy and the Milky Way may extend nearly one-third the distance separating the two galaxies. The Andromeda Galaxy is known to harbor a dense and compact star cluster at its very center, similar to the Milky Way galaxy. A large telescope creates a visual impression of a star embedded in the more diffuse surrounding bulge.
In 1991, the Hubble Space Telescope was used to image the Andromeda Galaxy's inner nucleus. The nucleus consists of two concentrations separated by 1.5 pc (4.9 ly). The brighter concentration, designated as P1, is offset from the center of the galaxy. The dimmer concentration, P2, falls at the true center of the galaxy and contains an embedded star cluster, called P3, containing many UV-bright A-stars and the supermassive black hole, called M31*. The black hole is classified as a low-luminosity AGN (LLAGN) and has been detected only in radio wavelengths and in X-rays. It was quiescent in 2004–2005, but it was highly variable in 2006–2007. An additional X-ray flare occurred in 2013. The mass of M31* was measured at 3–5 × 10⁷ M☉ in 1993, and at 1.1–2.3 × 10⁸ M☉ in 2005. The velocity dispersion of material around it is measured to be ≈ 160 km/s (100 mi/s). It has been proposed that the observed double nucleus could be explained if P1 is the projection of a disk of stars in an eccentric orbit around the central black hole. The eccentricity is such that stars linger at the orbital apocenter, creating a concentration of stars. It has been postulated that such an eccentric disk could have formed as the result of a previous black hole merger, where the release of gravitational waves could have "kicked" the stars into their current eccentric distribution. P2 also contains a compact disk of hot, spectral-class A stars. The A stars are not evident in redder filters, but in blue and ultraviolet light they dominate the nucleus, causing P2 to appear more prominent than P1. When the double nucleus was first discovered, it was hypothesized that the brighter portion was the remnant of a small galaxy "cannibalized" by the Andromeda Galaxy, but this is no longer considered a viable explanation, largely because such a nucleus would have an exceedingly short lifetime due to tidal disruption by the central black hole. While this could be partially resolved if P1 had its own black hole to stabilize it, the distribution of stars in P1 does not suggest that there is a black hole at its center. Discrete sources Apparently, by late 1968, no X-rays had been detected from the Andromeda Galaxy. A balloon flight on 20 October 1970 set an upper limit for detectable hard X-rays from the Andromeda Galaxy. The Swift BAT all-sky survey successfully detected hard X-rays coming from a region centered 6 arcseconds away from the galaxy center. The emission above 25 keV was later found to originate from a single source named 3XMM J004232.1+411314, and identified as a binary system where a compact object (a neutron star or a black hole) accretes matter from a star. Multiple X-ray sources have since been detected in the Andromeda Galaxy, using observations from the European Space Agency's (ESA) XMM-Newton orbiting observatory. Robin Barnard et al. hypothesized that these are candidate black holes or neutron stars, which are heating the incoming gas to millions of kelvins and emitting X-rays. Neutron stars and black holes can be distinguished mainly by measuring their masses. An observation campaign of the NuSTAR space mission identified 40 objects of this kind in the galaxy. In 2012, a microquasar, a radio burst emanating from a smaller black hole, was detected in the Andromeda Galaxy. The progenitor black hole is located near the galactic center and has about 10 M☉.
It was discovered through data collected by the European Space Agency's XMM-Newton probe and was subsequently observed by NASA's Swift Gamma-Ray Burst Mission and Chandra X-Ray Observatory, the Very Large Array, and the Very Long Baseline Array. The microquasar was the first observed within the Andromeda Galaxy and the first outside of the Milky Way Galaxy. Globular clusters There are approximately 460 globular clusters associated with the Andromeda Galaxy. The most massive of these clusters, identified as Mayall II, nicknamed Globular One, has a greater luminosity than any other known globular cluster in the Local Group of galaxies. It contains several million stars and is about twice as luminous as Omega Centauri, the brightest known globular cluster in the Milky Way. Mayall II (also known as Globular One or G1) has several stellar populations and a structure too massive for an ordinary globular. As a result, some consider Mayall II to be the remnant core of a dwarf galaxy that was consumed by Andromeda in the distant past. The cluster with the greatest apparent brightness is G76, which is located in the southwest arm's eastern half. Another massive globular cluster, named 037-B327 (also known as Bol 37) and found in 2006 to be heavily reddened by the Andromeda Galaxy's interstellar dust, was thought to be more massive than Mayall II and the largest cluster of the Local Group; however, other studies have shown it is actually similar in properties to Mayall II. Unlike the globular clusters of the Milky Way, which show a relatively low age dispersion, the Andromeda Galaxy's globular clusters have a much larger range of ages: from systems as old as the galaxy itself to much younger systems, with ages ranging from a few hundred million to five billion years. In 2005, astronomers discovered a completely new type of star cluster in the Andromeda Galaxy. The new-found clusters contain hundreds of thousands of stars, a number similar to that found in globular clusters. What distinguishes them from the globular clusters is that they are much larger—several hundred light-years across—and hundreds of times less dense. The distances between the stars are, therefore, much greater within the newly discovered extended clusters. The most massive globular cluster in the Andromeda Galaxy, B023-G078, likely has a central intermediate-mass black hole of almost 100,000 solar masses. PA-99-N2 was a microlensing event detected in the Andromeda Galaxy in 1999. One of the explanations for this is the gravitational lensing of a red giant by a star with a mass between 0.02 and 3.6 times that of the Sun, which suggested that the star is likely orbited by a planet. This possible exoplanet would have a mass 6.34 times that of Jupiter. If confirmed, it would be the first extragalactic planet ever found. However, anomalies in the event were later found. Nearby and satellite galaxies Like the Milky Way, the Andromeda Galaxy has smaller satellite galaxies, comprising over 20 known dwarf galaxies. The Andromeda Galaxy's dwarf galaxy population is very similar to the Milky Way's, but the galaxies are much more numerous. The best-known and most readily observed satellite galaxies are M32 and M110. Based on current evidence, it appears that M32 underwent a close encounter with the Andromeda Galaxy in the past. M32 may once have been a larger galaxy that had its stellar disk removed by M31 and underwent a sharp increase of star formation in the core region, which lasted until the relatively recent past.
M110 also appears to be interacting with the Andromeda Galaxy, and astronomers have found in the halo of the latter a stream of metal-rich stars that appear to have been stripped from these satellite galaxies. M110 contains a dusty lane, which may indicate recent or ongoing star formation. M32 has a young stellar population as well. The Triangulum Galaxy is a non-dwarf galaxy that lies 750,000 light-years from Andromeda. It is currently unknown whether it is a satellite of Andromeda. In 2006, it was discovered that nine of the satellite galaxies lie in a plane that intersects the core of the Andromeda Galaxy; they are not randomly arranged as would be expected from independent interactions. This may indicate a common tidal origin for the satellites. Collision with the Milky Way The Andromeda Galaxy is approaching the Milky Way at about 110 kilometres (68 miles) per second. It has been measured approaching relative to the Sun at around 300 km/s (190 mi/s) as the Sun orbits around the center of the galaxy at a speed of approximately 225 km/s (140 mi/s). This makes the Andromeda Galaxy one of about 100 observable blueshifted galaxies. The Andromeda Galaxy's tangential or sideways velocity with respect to the Milky Way is uncertain, but it is estimated to be smaller than the approach velocity. After the sideways velocity was first measured, Andromeda was predicted to collide directly with the Milky Way in about 4 billion years. However, later calculations, including a higher sideways velocity measurement from the Gaia spacecraft and the effect of other Local Group galaxies, found a much lower probability of a merger. A likely outcome of the collision would be that the galaxies merge to form a giant elliptical galaxy or possibly a large disc galaxy. Such events are frequent among the galaxies in galaxy groups. The fate of Earth and the Solar System in the event of a collision is currently unknown. Before the galaxies merge, there is a small chance that the Solar System could be ejected from the Milky Way or join the Andromeda Galaxy. Amateur observation Under most viewing conditions, the Andromeda Galaxy is one of the most distant objects that can be seen with the naked eye, due to its sheer size. (M33 and, for observers with exceptionally good vision, M81 can be seen under very dark skies.) The constellation of Andromeda, in which the galaxy is located, is usually found with the aid of the constellations Cassiopeia or Pegasus, which are usually easier to recognize at first glance. Andromeda is best seen during autumn nights in the Northern Hemisphere when it passes high overhead, reaching its highest point around midnight in October, and two hours earlier each successive month. In the early evening, it rises in the east in September and sets in the west in February. From the Southern Hemisphere the Andromeda Galaxy is visible between October and December, best viewed from as far north as possible. Binoculars can reveal some larger structures of the galaxy and its two brightest satellite galaxies, M32 and M110. An amateur telescope can reveal Andromeda's disk, some of its brightest globular clusters, dark dust lanes, and the large star cloud NGC 206.
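The two approach velocities quoted in the collision passage above (about 110 km/s relative to the Milky Way versus about 300 km/s relative to the Sun) differ because part of the Sun's orbital motion around the Galactic Center happens to point toward Andromeda. A minimal sketch of the one-dimensional projection in Python (illustrative only; the angle is inferred from the quoted numbers, not taken from the source):

import math

v_helio = 300.0  # approach speed relative to the Sun, km/s (from the text)
v_gal = 110.0    # approach speed relative to the Milky Way, km/s (from the text)
v_sun = 225.0    # Sun's orbital speed around the Galactic Center, km/s (from the text)

# Line-of-sight component of the Sun's motion implied by the quoted figures:
v_los = v_helio - v_gal  # ~190 km/s of the measured approach is the Sun's own motion
angle = math.degrees(math.acos(v_los / v_sun))
print(f"implied angle between the Sun's motion and the sightline: ~{angle:.0f} degrees")

This is only a consistency check; the actual correction uses the full three-dimensional solar motion.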
========================================
[SOURCE: https://en.wikipedia.org/wiki/History_of_The_New_York_Times_(1945%E2%80%931998)] | [TOKENS: 12664]
Contents History of The New York Times (1945–1998) Following World War II, The New York Times continued to expand. The Times was subject to investigations by the Senate Internal Security Subcommittee, a McCarthyist subcommittee that investigated purported communism within press institutions. Arthur Hays Sulzberger's decision to dismiss a copyreader who pleaded the Fifth Amendment drew anger from within the Times and from external organizations. In April 1961, Sulzberger resigned, appointing his son-in-law, The New York Times Company president Orvil Dryfoos, as his successor. Under Dryfoos, The New York Times established a newspaper based in Los Angeles. In 1962, the implementation of automated printing presses in response to increasing costs stoked fears of technological unemployment. The New York Typographical Union staged a strike in December, altering the media consumption of New Yorkers. The strike left New York with three remaining newspapers—the Times, the Daily News, and the New York Post—by its conclusion in March 1963. In May, Dryfoos died of a heart ailment. Following weeks of ambiguity, Arthur Ochs Sulzberger became The New York Times's publisher. Technological advancements leveraged by newspapers such as the Los Angeles Times and improvements in coverage from The Washington Post and The Wall Street Journal necessitated adaptation to nascent computing. The New York Times published "Heed Their Rising Voices" in 1960, a full-page advertisement purchased by supporters of Martin Luther King Jr. criticizing law enforcement in Montgomery, Alabama, for its response to the civil rights movement. Montgomery Public Safety commissioner L. B. Sullivan sued the Times for defamation. In New York Times Co. v. Sullivan (1964), the U.S. Supreme Court ruled that the verdicts in the Alabama county court and the Supreme Court of Alabama violated the First Amendment. The decision is considered a landmark ruling. After financial losses, The New York Times ended its international edition, acquiring a stake in the Paris Herald Tribune and forming the International Herald Tribune. The Times was the first to publish the Pentagon Papers, facing opposition from then-president Richard Nixon. The Supreme Court ruled in The New York Times's favor in New York Times Co. v. United States (1971), allowing the Times and The Washington Post to publish the papers. The New York Times remained cautious in its initial coverage of the Watergate scandal. As Congress began investigating the scandal, the Times furthered its coverage, publishing details on the Huston Plan, alleged wiretapping of reporters and officials, and testimony from James W. McCord Jr. that the Committee for the Re-Election of the President had paid the conspirators off. The exodus of readers to suburban New York newspapers, such as Newsday and Gannett papers, adversely affected The New York Times's circulation. Contemporary newspapers balked at the Times's additional sections; Time devoted a cover to its criticism, and New York wrote that the Times was engaging in "middle-class self-absorption". The New York Times, the Daily News, and the New York Post were the subject of a strike in 1978, allowing emerging newspapers to capitalize on the halted coverage. The Times deliberately avoided coverage of the AIDS epidemic, running its first front-page article on the epidemic in May 1983. Max Frankel's editorial coverage of the epidemic, with mentions of anal intercourse, contrasted with then-executive editor A. M. Rosenthal's puritanical approach, which intentionally avoided describing the luridness of gay venues.
Following years of waning interest in The New York Times, Sulzberger resigned in January 1992, appointing his son, Arthur Ochs Sulzberger Jr., as publisher. The Internet represented a generational shift within the Times; Sulzberger, who negotiated The New York Times Company's acquisition of The Boston Globe in 1993, derided the Internet, while his son expressed antithetical views. @times appeared on the America Online service in May 1994 as an extension of The New York Times, featuring news articles, film reviews, sports news, and business articles. Despite opposition, several employees of the Times had begun to access the Internet. The online success of publications that traditionally co-existed with the Times—such as America Online, Yahoo, and CNN—and the expansion of websites such as Monster.com and Craigslist that threatened The New York Times's classified advertisement model increased efforts to develop a website. nytimes.com debuted on January 19, 1996, and was formally announced three days later. The Times published domestic terrorist Ted Kaczynski's essay Industrial Society and Its Future in 1995, contributing to his arrest after his brother David recognized the essay's writing style. 1945–1955: Continued period and staff changes In November 1945, the 44th Street Theatre was demolished. In its place, 229 West 43rd Street was expanded, leaving the building adjacent to Sardi's and the Paramount Theatre. By February 1948, the annex was combined with the old building, improving production capacity by more than half. The expansion gave the composing room a total of 40,000 ft² (3,700 m²) and more than one hundred linecasting type machines. In April 1950, additional floor space was provisioned to WQXR and WQXR-FM. By 1951, the Times had an editorial staff of 1,350; despite its size, the paper was an agile news machine. On April 11, 1951, at 1 a.m., General Douglas MacArthur was relieved of his duties by Harry S. Truman. Within the hour, White House correspondent William H. Lawrence had dictated the story and sent it to the presses. At the Keith-Albee Building, the Times's Washington, D.C. bureau watched MacArthur address Congress the following week. Among the staff present was Anthony Leviero, the White House correspondent before Lawrence, who had traveled to Wake Island with MacArthur and Truman for a conference. Leviero hastily penned a story detailing the conference, including MacArthur's assertion that China would not intervene in the Korean War—an assessment that proved wrong when Chinese forces intervened, producing a series of defeats that ultimately led to MacArthur's relief. In December 1951, managing editor Edwin L. James died. He was succeeded by Turner Catledge. Under Catledge, The New York Times established daily news conferences in his office, eliminating the role of bullpen editors—such as Neil MacNeil—who determined the placement of stories and their size relative to the paper. Catledge filled several positions, appointing Robert Garst and Theodore Menline Bernstein as associate editors. According to Gay Talese, Catledge favored Bernstein; Garst was relegated to housekeeping roles and service as acting managing editor. In 1953, Times photoengravers went on strike for two weeks. During the strike, The New York Times did not publish for the first time in its history. The strike was supported by most Times employees, and staff who crossed the picket line were ostracized. John Randolph was removed as picture editor in January 1954 after placing a photograph of newlyweds Marilyn Monroe and Joe DiMaggio kissing on the front page.
Clifton Daniel became The New York Times's Moscow correspondent—the only permanent Russian correspondent for a Western newspaper—in 1954; Catledge ordered Daniel back to New York on Arthur Hays Sulzberger's orders in November 1955 after Daniel developed an ulcer. 1955–1961: McCarthyism and Sulzberger's resignation The New York Times was subject to intense investigations by the Senate Internal Security Subcommittee, a Senate subcommittee that advanced McCarthyism and investigated purported communism within press institutions. From December 1955 to January 1956, forty-four subpoenas were issued against current or former employees of the Times. The investigations divided the staff of The New York Times, which comprised current Communist Party USA members (including a copyeditor caught editing a Times dispatch from Moscow), former members turned conservatives, and opponents of McCarthyism. The New York Times's management had to reconcile retaining Ochs's values with denouncing the investigation; Times management believed that the paper was being specifically singled out because its condemnation of segregation in Southern schools, of the methods used by other congressional committees, and of McCarthyism opposed the values of subcommittee chairman James Eastland, his colleague William E. Jenner, and subcommittee counsel J. G. Sourwine. Sulzberger believed that The New York Times was not a sacrosanct institution above a congressional investigation and stated his opposition to communism, urging employees not to plead the Fifth Amendment. A Times copyreader who did not reveal his political leanings appeared before the Senate Internal Security Subcommittee and pleaded the Fifth Amendment; Sulzberger dismissed the copyreader. The American Civil Liberties Union issued a letter of protest as a result of the dismissal. Sulzberger published the letter and his response in The New York Times. The letter polarized readers and was poorly received in some quarters, including by Ochs's nephew John Bertram Oakes. Sulzberger and his son-in-law, The New York Times Company president Orvil Dryfoos, drafted a statement in November 1955 to justify dismissing further employees. The rising cost of newspaper production and the recession of 1958 cut into The New York Times's profits in the years following the investigations. By 1959, Sunday edition numbers necessitated a west side expansion of 229 West 43rd Street. The annex was used primarily for publishing the Sunday issue, which had a circulation of 1,600,000 by 1967 and varied in weight between four and seven pounds. Dryfoos's role in The New York Times increased after 1958, when Sulzberger suffered a stroke. In January 1961, following an account in The Nation, a Times correspondent in Guatemala reported on preparations for an offensive against Cuba. Correspondent Tad Szulc, in Miami while being transferred from Rio de Janeiro to Washington, D.C., discovered plans for an invasion at the Bay of Pigs. Szulc went to Dryfoos and Catledge to inform them of the invasion; both men were hesitant to publish the story, with Dryfoos believing that the Times could be blamed for bloodshed if the invasion failed. The men called James Reston, Sulzberger's assistant, who advised them not to publish "any dispatch that would pinpoint the timing of the landing". The decision was criticized by Bernstein and news editor Lewis Jordan. The Bay of Pigs invasion in April 1961 was a failure for the United States; then-president John F.
Kennedy summoned Catledge to chide him for not publishing further information. On April 25, 1961, amid poor health, Sulzberger resigned and appointed Dryfoos as his successor. As publisher, Dryfoos sought to expand The New York Times to the Pacific Coast. The endeavor was a logistical challenge for the Times, which insisted on using Linotype machines. The New York Times diverted the Western edition's copy to Teletypesetter machines that could transmit keystrokes to Los Angeles. Led by Andrew Fisher, the Western edition was identical to the New York paper. 1961–1964: Newspaper strike and Dryfoos's death By 1962, increasing newspaper production costs, higher wage demands, and the emergence of television advertising presented existential threats to the newspaper industry. In response, publishers implemented automated printing presses. Typographers viewed the automated machines as an attempt to replace them. The New York chapter of the International Typographical Union was led by Bert Powers, who regularly clashed with publishers; Powers advocated for higher wages, bolstered pension and welfare funds, and additional sick days. Powers particularly feared automatic typesetting machines and believed that printers should develop their own identity. On December 8, 1962, the New York Typographical Union declared a strike against The New York Times, the Daily News, the New York Journal-American, and the New York World-Telegram & Sun. Printers picketed outside the offices of their publishers, inadvertently affecting the New York Daily Mirror, the New York Herald Tribune, the New York Post, the Long Island Star Journal, and the Long Island Daily Press, which were forced to stop their presses and lock their doors. The strike immediately affected the routine media consumption habits of New Yorkers; some readers abandoned newspapers altogether, turning to television, news magazines, or books. Other readers who continued to read newspapers read The New York Times through the paper's Western edition mailed from California or turned to other newspapers such as The Wall Street Journal and Women's Wear Daily, including out-of-state newspapers such as The Philadelphia Inquirer and The Christian Science Monitor. Financially, printers were supported by union funds and state insurance; newspaper and business owners were most affected. New York mayor Robert F. Wagner Jr. and labor negotiator Theodore W. Kheel were able to forge an agreement on March 31, 1963. The agreement guaranteed a thirty-five-hour workweek, achieved a common contract expiration date, limited the use of automated equipment, and increased salaries. The strike left New York with three remaining papers—The New York Times, the Daily News, and the New York Post—from a dozen in 1930. Following the strike, Dryfoos visited Puerto Rico. While in Puerto Rico, he was admitted to a hospital in San Juan for an illness. Dryfoos was then flown to Columbia-Presbyterian Medical Center in New York, where he was pronounced dead on May 25 of a heart ailment, possibly brought on by the stress of the strike. Dryfoos was mourned by Kennedy, secretary of state Dean Rusk, United Nations secretary-general U Thant, politician Adlai Stevenson II, French statesman Jean Monnet, then-president of Mexico Adolfo López Mateos, and Nigerian politician Jaja Wachuku, and his funeral at Temple Emanu-El attracted two thousand mourners.
After Dryfoos was buried, weeks of ambiguity followed as The New York Times did not have a publisher to replace him; the Sulzberger family had believed that he would live through the 1970s. Arthur Hays Sulzberger was restricted to a wheelchair, while his son, Arthur Ochs Sulzberger, did not have enough experience to run the paper. Arthur Ochs's mother, Iphigene Ochs Sulzberger, favored Reston. On June 20, Arthur Hays announced that Arthur Ochs would become The New York Times's next publisher, the youngest person to serve in the role. 1964–1966: Second Sulzberger era and New York Times Co. v. Sullivan Dryfoos's death brought significant alterations to The New York Times. Following Sulzberger's accession, general manager and vice president Amory Bradford resigned; Bradford's reputation had been tarnished by an article by A. H. Raskin following the strike, which portrayed him as pugnacious. Bradford was succeeded by Harding F. Bancroft, a descendant of churchman Richard Bancroft. The Times retained many of its executives and printed their names above the editorial page. In January 1964, Sulzberger ceased publication of the Western edition that had routinely been published since October 1962. Though Iphigene Ochs Sulzberger was one of the wealthiest women in the United States at the time—her net worth was estimated by Fortune to be between US$150 million (equivalent to US$1.39 billion in 2025) and US$200 million (equivalent to US$1.85 billion in 2025) by 1968—the strike had cut deep into the Times's reserves, and circulation numbers for the Western edition decreased despite demand for the Times on the Pacific Coast. Sulzberger believed that The New York Times could not follow in his father's or grandfather's footsteps by holding tradition inviolable; it had to adjust to nascent technologies and adapt to a precarious newspaper industry. The Washington Post and The Wall Street Journal began to improve their coverage, occasionally providing political and economic coverage superior to the Times's, and the Los Angeles Times led the United States in advertising linage, bolstered by the diversified Times Mirror Company. The Los Angeles Times began to modernize its advertising sector with computing, analyzing circulation trends; The New York Times began modernizing in 1964 with the purchase of a Honeywell 200 that would perform the accounting work of twenty-five employees. The Honeywell 200 was placed in a windowless room on the seventh floor of 229 West 43rd Street. Despite his fiscally driven changes, Sulzberger did not cede ground on The New York Times's coverage. The Times continued to publish full texts of speeches and documents such as the Warren Commission report on the assassination of John F. Kennedy. In an attempt to centralize executive authority and dismiss elderly employees, Sulzberger appointed Turner Catledge executive editor on September 1, 1964, a newly created post that gave Catledge more control over The New York Times's content. Catledge's position allowed him to serve as a regent for the journalistically inexperienced Sulzberger. Catledge's promotion drew the ire of Lester Markel, the displaced head of the Sunday Times, who supported neither Catledge's centralizing ambitions nor Bernstein's additions that drew from Markel's former prerogatives; most of all, Markel believed that The New York Times was no longer above other papers and no longer held itself in an esteemed position.
Dryfoos's death shifted editorial weight from Washington to New York, particularly after the resignation of Reston's associate Wallace Carroll. Sulzberger did not want to lose Reston, the Washington bureau chief, and made him an associate editor; Sulzberger appointed Tom Wicker as Reston's successor at Reston's behest, much to Moscow correspondent Max Frankel's scorn. The New York Times erroneously claimed that thirty-eight witnesses saw or heard the murder of Kitty Genovese in March 1964 but did not act upon the attack. Times reporter Martin Gansberg's figure gained weight with Loudon Wainwright Jr.'s reporting in Life and editor A. M. Rosenthal's book Thirty-Eight Witnesses (1964). Rosenthal stated that he heard the number thirty-eight from then-police commissioner Michael J. Murphy at Emil's Restaurant and Bar. Former assistant district attorney Charles Skoller told Jim Rasenberger in 2004 that there were "half a dozen that saw what was going on"; Skoller's interview was republished in the Times. The New York Times acknowledged its error in Robert D. McFadden's obituary of perpetrator Winston Moseley in 2016. The murder of Kitty Genovese became an early example of the bystander effect on the basis of the Times's reporting, and the case has been credited with prompting the creation of 9-1-1 in the United States. One witness claimed that his father called the police, reporting that a woman was "beat up" and "staggering around". On March 29, 1960, "Heed Their Rising Voices", an advertisement placed by the Committee to Defend Martin Luther King and the Struggle for Freedom in the South, appeared on page twenty-five of The New York Times. The advertisement described the civil rights movement among black students, including the "unprecedented wave of terror" with which police forces had met protesters. The advertisement spoke out against the actions taken by the Montgomery Police Department in Montgomery, Alabama; a number of the advertisement's assertions were proven false. Montgomery Public Safety commissioner L. B. Sullivan, despite not being named in the advertisement, sued the Times for defamation, seeking US$500,000 (equivalent to US$5.44 million in 2025) in damages. The Alabama trial court and the Supreme Court of Alabama sided with Sullivan before the case was taken to the U.S. Supreme Court. In New York Times Co. v. Sullivan, the Court unanimously ruled in a landmark decision that newspapers cannot be held liable for defamatory statements about public officials unless the statements are made with actual malice. 1966–1971: Changing landscape and additional papers A shift in the New York newspaper landscape in 1966 significantly benefited The New York Times. In April 1966, three failing publications—the New York Herald Tribune, the New York Journal-American, and the New York World-Telegram—agreed to merge to form the New York World Journal Tribune. Union workers went on strike against the New York World Journal Tribune from April to September 1966, delaying the paper's debut until the end of the strike; the World Journal Tribune would shut down in May 1967. As The New York Times's circulation increased to 875,000 in 1966—an increase of 100,000 from the previous year—and to 900,000 following the New York World Journal Tribune's closure, Sulzberger increased the paper's advertising rates. The increased rates drew criticism from advertising director Monroe Green; Green would retire at the end of 1967, allowing Sulzberger to consolidate the advertising, production, and circulation departments under Andrew Fisher.
In 1967, the international edition was discontinued, faced with an annual loss of US$1.5 million (equivalent to US$14.48 million in 2025) and decreasing circulation against the Paris Herald Tribune, which had recently entered a partnership with The Washington Post. Sulzberger purchased a stake in the Paris Herald Tribune, forming the International Herald Tribune. The World Journal Tribune's collapse left New York with one remaining afternoon paper, the New York Post. Sulzberger considered a second afternoon paper that would break from the Times's traditional prose, closer in form to the New York Herald Tribune. Several names were considered, including The Evening Times and The Metropolitan, before New York Today was chosen, later the New York Forum. Rosenthal was named the editor of the Forum. The pages were set in type in August 1967 and locked. Three employees—Rosenthal, James L. Greenfield, and Stephen A. O. Golden—were authorized to be there that morning. A stringer, Jim Connolly, repeatedly grilled the men on what the paper would look like before being asked to leave by a security guard. Two hundred copies were printed in total; forty-five copies were sent to news executives before being recalled, while the remaining copies were locked in a safe in the corporate treasurer's office. After several weeks, Sulzberger ultimately decided against printing further issues of the New York Forum. Wicker's tenure as the Washington bureau chief was met by animosity from Catledge and Daniel. Greenfield, Rosenthal's protégé, embodied their efforts to replace the aloof and distant Wicker. Catledge, Daniel, Rosenthal, and Greenfield attempted to persuade Sulzberger into appointing Greenfield in February 1968; the men nearly succeeded, but Reston vehemently opposed the plan and stated that the staff of the Washington bureau would resign en masse. A visibly stressed Sulzberger informed Catledge that he would not go through with the plan and appointed Frankel instead. Upon learning of Sulzberger's intentions, Greenfield told Rosenthal, "Abe, don't ever ask me to come into this place again." Greenfield resigned on the spot and reportedly told Arthur Gelb that he "couldn't face cleaning out his desk", asking if Gelb would send him his favorite sweater and other items from his drawer. Greenfield returned to The New York Times in September 1969 as the paper's foreign editor under Rosenthal, who became managing editor. 1971–1972: The Pentagon Papers and New York Times Co. v. United States Driven by a speech by Randy Kehler opposing the Vietnam War, RAND Corporation employee Daniel Ellsberg began photocopying pages of a Department of Defense report detailing the United States's involvement in the war, later known as the Pentagon Papers. Throughout 1970 and 1971, Ellsberg attempted to approach prominent politicians who could disseminate the Pentagon Papers; in November 1970 he wrote a letter to The New York Times describing the war as "immoral, illegal, and unconstitutional", and in January 1971 he approached the foremost congressional opponent of the Vietnam War, George McGovern. McGovern told Ellsberg that he should go to the Times; reluctantly, he called reporter Neil Sheehan in February. In March 1971, Sheehan met with Ellsberg and agreed to publicize the papers if The New York Times agreed to protect Ellsberg's identity. Several weeks later, Sheehan and his wife Susan, a writer for The New Yorker, checked into a hotel in Cambridge, Massachusetts under a fictitious name to copy the papers.
When the Sheehans arrived in Cambridge, Ellsberg informed Sheehan that he could only read—not copy—the Pentagon Papers, because copies would then be the property of The New York Times. In Secrets: A Memoir of Vietnam and the Pentagon Papers (2002), Ellsberg stated that he was concerned that the Times would not publish the documents in full and that the Federal Bureau of Investigation could become aware of the papers. To Sheehan, Ellsberg's concerns were "about going to jail"; Sheehan was also troubled by Ellsberg's cavalier attitude toward exposing the documents to members of Congress. After confiding in his wife, who told him to "Xerox it", Sheehan concluded that relying on Ellsberg was too dangerous and began photocopying the documents at multiple copy shops in Boston after Ellsberg had left on vacation.

The New York Times faced a race to publish the documents once they were photocopied. Greenfield stored the documents in his Manhattan apartment before they were moved to a suite at the New York Hilton Midtown. Sheehan and Allan M. Siegal primarily worked on sifting through the documents, meticulously citing each statement; other reporters joined in, including Hedrick Smith, E. W. Kenworthy, and Fox Butterfield. The Times's legal counsel, Lord Day & Lord, advised against publishing the papers and came close to informing the Department of Justice; nevertheless, the Pentagon Papers appeared on the front page of The New York Times on June 13, 1971, placed beside articles on the wedding of then-president Richard Nixon's daughter Tricia Nixon Cox, the New York City budget, and India–Pakistan relations.

The Times must respectfully decline the request of the attorney general, believing that it is in the interest of the people of this country to be informed of the material contained in this series of articles.

The following day, The New York Times received a telex from then-attorney general John N. Mitchell telling the publication to halt its publication of the Pentagon Papers and to return the documents to the Department of Defense. After the Times stated its intention to continue publishing the papers, the Department of Justice sought a restraining order against the seven reporters and editors involved and the fifteen executives listed on the masthead. New York Times Co. v. United States moved quickly to the Supreme Court; oral arguments by The New York Times's legal defense, led by Alexander Bickel, were heard on June 26. In a landmark 6–3 decision, the Supreme Court ruled that the Times and The Washington Post, which had begun publishing the Pentagon Papers on June 18 after Ben Bagdikian persuaded the paper, could continue publication. Notably, the front page of the following day's New York Times carried no images.

In May 1972, the National Committee for Impeachment paid The New York Times US$17,850 (equivalent to $137,000 in 2025) for a two-page advertisement urging the House of Representatives to impeach Nixon over the war. Times pressmen derided the advertisement; New York Printing Pressmen's Union chairman Richard Siemers called it "traitorous" and "detrimental to the boys in Vietnam and prisoners of war". The pressmen demanded that the Times remove the advertisement and later asked, to no avail, for space in the paper to express their opinion. Nixon was pleased with the pressmen and sent an emissary to convey his thanks; the Department of Justice charged the committee with violating the Federal Election Campaign Act. Nixon-appointed judge James L. Oakes sided with the committee in October.
1972–1977: Watergate scandal and Central Intelligence Agency investigations

On June 17, 1972, the Democratic National Committee's headquarters at the Watergate Office Building was broken into. Unbeknownst to the general public, the intrusion was carried out by five individuals—Virgilio Gonzalez, Bernard Barker, James McCord, Eugenio Martínez, and Frank Sturgis—who were paid by the Committee for the Re-Election of the President, Nixon's fundraising organization. The Washington Post—a political paper—placed its article on the event on the front page, unlike The New York Times, which sought to be cautious. Tad Szulc, who was familiar with some of the individuals from their involvement in the Bay of Pigs invasion, was eager to cover the story but could not connect the Cubans to the Central Intelligence Agency, and his source was concerned that the Nixon administration was monitoring journalists' phone calls, particularly after the publication of the Pentagon Papers.

The Washington Post covered the Watergate incident extensively, primarily through the work of Bob Woodward and Carl Bernstein. Woodward was provided with information by Federal Bureau of Investigation associate director Mark Felt, under the pseudonym "Deep Throat". The Washington Post's first major breakthrough occurred on August 1, when Woodward and Bernstein reported that a US$25,000 (equivalent to $192,000 in 2025) cashier's check to Nixon's re-election campaign had been deposited in a bank account operated by Barker. The Post missed the first edition but reported the story in the second, averting the potential for The New York Times to report it first. According to former reporter Robert M. Smith, acting Federal Bureau of Investigation director L. Patrick Gray discussed details of the intrusion, including Mitchell's involvement, with Smith at a Washington, D.C. restaurant a month later. Smith informed an editor at the Times's Washington bureau, Robert H. Phelps, who took notes on the conversation; Smith left Washington the following day to attend Yale Law School. The bureau focused on the Republican National Convention in the days after the lunch, and Phelps left on a monthslong trip to Alaska. Phelps later stated that he had "no idea" where the notes went.

The New York Times continued to trail The Washington Post's reporting, including on an October 10 Post article stating that the Federal Bureau of Investigation had established that the Watergate burglary was an act of political sabotage committed by the Nixon re-election campaign. The Times's own article did not cover the broad conclusions but rather the accusations against Donald Segretti, a political operative who was the only individual named in the Post's reporting. By 1973, The Washington Post had cemented its lead in reporting the Watergate scandal through its trifecta of stories on the cashier's check, Mitchell's control of a secret fund to spy on Democrats, and the Federal Bureau of Investigation inquiry. As Congress gathered information, the Post eased its coverage, giving The New York Times an opportunity to enhance its own. The Times's efforts were spearheaded by Seymour Hersh, who exclusively reported on Dwight Chapin's departure and the first link between the White House and the operation. Woodward and Bernstein turned to The New York Times in April 1973, inviting Hersh to dinner on April 8.[a] Bernstein asked Hersh, in jest, what the Times would print the following morning.
The following day's issue of The New York Times contained James W. McCord Jr.'s testimony that the Committee for the Re-Election of the President had paid off the conspirators. In May, reporter John M. Crewdson discovered that the Federal Bureau of Investigation had wiretapped the phones of The New York Times, The Washington Post, The Sunday Times, six members of the National Security Council, and three high-ranking Foreign Service officials. With Christopher Lydon, Crewdson obtained the Huston Plan and published details on it. During the Watergate scandal, the Times lost multiple editors who were displeased with the Post's exclusives, including Gene Roberts. The scandal resulted in an impeachment inquiry against Nixon and House Committee on the Judiciary hearings that culminated in his resignation on August 9, 1974, with Gerald Ford assuming the presidency.

The New York Times faced a push for inclusivity driven by second-wave feminism. In February 1972, the Women's Caucus of the Times was formed. The group sent Sulzberger a five-page letter in May detailing the paper's shortcomings in recruiting female employees. In 1974, Betsy Wade—a member of the caucus—sued The New York Times under the Civil Rights Act of 1964. The suit, Elizabeth Boylan et al. v. New York Times Co.,[b] would represent hundreds of women, from reporters to clerks. The lawsuit was settled in October 1978; A. M. Rosenthal later asserted that he would have had to testify against his employees. The Times was forced to pay US$350,000 (equivalent to about $1.73 million in 2025) and establish an affirmative action program. Concurrently, a movement developed to incorporate the alternative honorific Ms. for women. Protesters gathered outside 229 West 43rd Street to advocate for Ms. to be included in The New York Times Manual of Style and Usage. Though Sunday editor Max Frankel supported the idea, Sulzberger and Rosenthal did not.

Hersh remained skeptical of the Central Intelligence Agency following the Watergate scandal, and he published several exposés on the agency. In October 1974, Hersh published an article on the Central Intelligence Agency's role in the 1973 Chilean coup d'état that deposed Salvador Allende. In December, he published an article revealing the existence of Operation CHAOS, a domestic espionage program that illegally surveilled over ten thousand citizens, aided by the National Security Agency. The Hersh charges were given legitimacy by James Jesus Angleton's dismissal, leading to the President's Commission on CIA Activities within the United States. Hersh intended to publish an article on Project Azorian, a Central Intelligence Agency project to recover the Soviet submarine K-129 using the Glomar Explorer, but neither Jim Phelan nor Wallace Turner could verify the story; The New York Times published its account only after the Los Angeles Times had published its own. By 1976, Rosenthal was convinced that the Central Intelligence Agency was still involved in the Times's operations and urged the paper to sue under the Freedom of Information Act.

1977–1980: Financial difficulties and newspaper strike

Visions of vegetables dance in his sleepless head, along with recipes for pork chops liégeoise, treatises on termite detection, shopping guides to $44 canvas bags and $1,850 'Love' pendants from Tiffany.

The exodus of readers to newspapers in New York City's suburbs—such as Newsday on Long Island and Gannett newspapers in Westchester County—contributed to The New York Times's decline during the 1970s.
Circulation decreased from 940,000 in 1969 to 796,000 in 1976, according to figures from the Audit Bureau of Circulations, and advertising lines decreased by eight million from 1970 to 1975. Rosenthal identified the relative success of New York as a publication that specialized in service journalism. Rosenthal, who vehemently opposed perceived attempts to compromise the Times's news operations, balked at executives' attempts to add a food coverage section to The New York Times in 1974; his opposition subsided when Sulzberger began ordering cuts to newsroom spending. In June 1976, Rosenthal wrote a proposal to introduce additional sections to the Times in an attempt to attract new audiences. A weekend section debuted in April 1976, followed by a home section and a sports section, and culminating in a science section in November 1978. The additional sections were poorly received by critics; Time devoted a cover story to critiquing them, and New York wrote that the Times was soiling its reputation in an image of "middle-class self-absorption" amid "New York's crumbling cityscape". Despite the negative reception, the sections reversed The New York Times's declining circulation. In May 1977, the Times sold more advertising lines than it had at any point in the paper's history. The home section, which began in March 1977, was led by architecture critic Ada Louise Huxtable for several issues before Paul Goldberger took the reins. The sections marked a lighter tone for The New York Times and featured articles from writers Lois Gould and William Zinsser, the latter of whom wrote a jovial article on the New Haven jogging phenomenon.

In response to work rules initiated by The New York Times, the New York Post, and the Daily News that drastically reduced manning requirements, pressmen began a strike against all three papers on August 10, 1978, later joined by other unions. The strike saw the emergence of newspapers established to capitalize on the shutdown, including The City News, The New York Daily Press, The New York Daily Metro, and The Graphic. Not The New York Times was published in September by a group of Times editors, including Christopher Cerf and George Plimpton. During the strike, The New York Times missed the short-lived papacy of Pope John Paul I. Not The New York Times chronicled the nineteen-minute papacy of the fictional Pope John Paul John Paul I, whose name was an amalgamation of John Paul I, John Lennon, and Paul McCartney. Not The New York Times also included the prescient detail that the successor would not be Italian; Pope John Paul II, who succeeded John Paul I in reality, was Polish. The strike ended on November 5, though the New York Post had resumed publication a month earlier after owner Rupert Murdoch signed a contract with the pressmen.

1980–1986: Coverage of the AIDS epidemic and increasing circulation

Under Rosenthal, The New York Times's coverage of the beginning of the AIDS epidemic was muted. In November 1980, a gunman armed with an Uzi submachine gun fired into the Ramrod, a leather bar in the gay liberation epicenter of Greenwich Village, killing two people and injuring six. The Times confined its coverage to the metropolitan section and did not run a front-page story on AIDS until May 1983, when assistant secretary for health Edward Brandt Jr.
described the epidemic as a priority for the Public Health Service. San Francisco Chronicle reporter Randy Shilts later told Fresh Air's Terry Gross that a synagogue bombing in Paris, which had occurred one month before the Ramrod attack, had been featured prominently on the front page. The National Gay Task Force wrote to Sulzberger to urge The New York Times to increase its coverage of the AIDS epidemic, and the Gay Men's Health Crisis noted that the Times did not run a story on a gathering it had hosted at Madison Square Garden that attracted tens of thousands of people. The AIDS epidemic presented a challenge to the otherwise puritan Times, which, unlike the New York Post and the Daily News, abstained from lurid descriptions of the subterranean gay venues that attracted attention from inspectors. By contrast, Frankel deliberately highlighted grotesque activities—such as anal intercourse—in his editorials. In 1982, circulation was estimated at 929,000. In October 1985, The New York Times reached a daily circulation of one million copies, a record it would hold until September 1986.

Concurrently, Sulzberger began considering a Times without Rosenthal. In March 1983, he told Sydney Gruson that there would be a new publisher and executive editor. Rosenthal promoted several editors—Craig R. Whitney, Warren Hoge, and John Vinocur—in an effort to shape the cohort of editors who would succeed him.

An epidemic would strike The New York Times itself when twenty-nine employees working at 229 West 43rd Street came down with a pneumonia-like illness in June 1985. New York City Department of Health epidemiologists surveyed the building, and commissioner David Sencer determined in July that the employees were infected with Legionnaires' disease. Medical director Howard R. Brown informed the Times that Legionella pneumophila could have made its way through the ventilation system; The New York Times then changed all of its fan-room filters.

Through opinionated phrases and unattributed characterizations, the article established a tone that cast its subject in an unfavorable light.

The New York Times published a profile of U.S. News & World Report publisher and real estate developer Mortimer Zuckerman in August 1985, as Zuckerman and Rosenthal moved in the same social circles. The article claimed that Zuckerman had "conquered New York's real-estate world", particularly following his successful bid to develop the New York Coliseum property on Columbus Circle. On the morning of the story's publication, Zuckerman called Rosenthal to enumerate its errors. The Times published an editors' note two days later. The note surprised several editors in the newsroom, including the profile's author, Jane Perlez. Former The Atlantic Monthly editor Robert Manning asked whether Zuckerman had "cast a spell" on Rosenthal, and journalist Murray Kempton called the note a "genuine rudeness to Perlez" in his Newsday column. Rosenthal disregarded the criticism and denied that he had been persuaded to write the note. A month later, The Village Voice ran a cover story with an illustration by Edward Sorel depicting Rosenthal's head as a tank turret decapitating Sydney Schanberg, who had been removed from the opinion pages by Sulzberger at Rosenthal's request.

Rosenthal didn't have a nervous breakdown, but he was close to it.

Sulzberger expedited Rosenthal's retirement to prevent his son, Arthur Ochs Sulzberger Jr., from having to remove Rosenthal himself.
Rosenthal felt that the younger Sulzberger had contempt for the institution, scolding him after he appeared in Rosenthal's office in socks. Sulzberger assumed that Rosenthal's publicized personal life—chronicling his relationships with actress Katharine Balfour, one of his secretaries, and newspaper editor Shirley Lord—was contributing to his erratic management. Rosenthal's behavior in the office concerned other employees; Harrison Salisbury compared Rosenthal to Oedipus, who is said to have gouged out his own eyes in Oedipus Rex after realizing he had committed patricide and incest. Sulzberger later told Alex S. Jones and Susan Tifft, for The Trust (1999), that Rosenthal was close to a nervous breakdown. Despite the concerns, Rosenthal served out his editorship, redesigning the Metropolitan Report and dispatching Maureen Dowd to Washington.

The alternative honorific Ms. had become a pressing issue by April 1986. Assistant managing editor Craig Whitney had informed Sulzberger in September 1985 that reporters and editors had vehemently pressed him about the honorific at a meeting. Feminist journalist Paula Kassell purchased ten shares of The New York Times Company to gain access to a shareholders meeting. In April 1986, she challenged Sulzberger to convene a panel of language experts to come to a decision. Kassell was informed that the debate would not be necessary because The New York Times had begun to adopt the new style. Editors of Ms. walked into the Times's offices to deliver a basket of flowers to Rosenthal. The policy was officially changed in June.

Simultaneously, Sulzberger attempted to persuade Rosenthal to retire, inviting him to an Italian restaurant that month and offering him an opinion column. In September, Rosenthal informed his son Andrew, an Associated Press reporter, that Max Frankel would succeed him and that Arthur Gelb would become managing editor. Rosenthal officially resigned on October 11, 1986.

1986–1992: Newsroom changes and Sulzberger's resignation

Frankel's tenure as executive editor was marked by a change in character and ideology from his predecessor's. Frankel complimented editors who he felt had written great articles and bantered with employees. He focused on covering the AIDS epidemic with greater fervor, assigning several employees to the task, but remained wary; the prohibition on using the word "gay" was not lifted until July 1987. Frankel viewed The New York Times's voluminous prose unfavorably compared to newspapers such as USA Today, whose articles were significantly shorter. An amateur painter, he focused on the design of the Times and believed that stories should be readable in full on the front page, much to the displeasure of Sulzberger's wife Carol. Despite defining himself antithetically to Rosenthal, Frankel took an aggressive approach to the front page, later describing his position as "authoritarian and dictatorial".

Rosenthal requested that Frankel appoint John Vinocur as managing editor and hire Andrew. Frankel declined to promote Vinocur, with whom he was not familiar—Vinocur would go on to run the International Herald Tribune—but, having worked with Andrew before at the Associated Press, he hired him. Several editors positioned themselves to replace outgoing Washington bureau chief Bill Kovach, who had been appointed in 1979 in an effort to decrease the bureau's autonomy. Frankel's accession deepened the disdain Kovach had for him; Frankel did not place Kovach's name on the masthead.
Kovach resigned in 1986 to work for The Atlanta Journal-Constitution. The need for a bureau chief grew amid the Iran–Contra affair, a scandal that was the largest political story since the attempted assassination of Ronald Reagan. Deputy Washington editor Howell Raines was rejected for his weak grasp of foreign policy and his "tendency to not think conceptually". Frankel rejected former London bureau chief R. W. Apple Jr. after harshly reviewing his London chiefship, and national editor Dave Jones out of fear that he would "coddle and shelter" the bureau's staff rather than challenge them. Whitney was ultimately selected despite lacking experience in Washington; to compensate, he selected Apple and Paris correspondent Judith Miller as deputy editors. The idea of hiring Miller came from the younger Sulzberger.

Frankel sought to advance The New York Times's Washington coverage against The Washington Post. To that end, he delegated to the Washington bureau, rather than the night editors in New York, the decision of which stories the late-night staff should match; the Washington bureau received a copy of The Washington Post at 11 p.m. The Times achieved initial success under Whitney, whose coverage of the Iran–Contra affair and of George H. W. Bush and Bob Dole's jostling for the Republican presidential nomination earned praise. The paper's successes would diminish after then-senator Gary Hart dropped out of the Democratic presidential primaries amid a report from the Miami Herald alleging that he had engaged in an extramarital affair with Donna Rice Hughes. Within the week, Whitney sent thirteen letters to presidential candidates demanding their biographical, sexual, professional, and personal information. The perceived invasion of privacy was denounced by columnists Anthony Lewis and Rosenthal. Chicago Tribune columnist Mike Royko telephoned the Times's public relations office to ask for the marital histories of Sulzberger and the editors. Frankel was displeased with Miller's performance, describing her as "dismissive, mistrustful, and disrespectful" in a letter to Whitney.

My view of this bureau before I got here was that it was fat and lazy—a few terrific seasoned reporters, a few terrific but unseasoned Washington reporters, and a whole room full of just average ones.

In July 1987, The New York Times issued a correction for an account of testimony it had published several days earlier. The erroneous article, written by Fox Butterfield, reported that National Security Council lieutenant colonel Oliver North had testified to the congressional committees investigating the Iran–Contra affair that Central Intelligence Agency director William J. Casey intended to create a fund to facilitate the sale of arms to Iran. Before its publication, Butterfield's article was read by Joseph Lelyveld, who raised suspicions over the lack of a direct quote from North; Washington bureau reporters could not produce a quote after the story was published. Though he faced no resistance from other editors, Frankel concluded after speaking with Lelyveld that the story was incorrect and issued a prominent and unprecedented correction on the front page. The Washington bureau faced further troubles when Whitney, displeased with the bureau, drew up a list of correspondents he felt lacked journalistic flair or rarely broke stories, and reassigned five of them to New York. The reassignments caused an uproar in the bureau.
Congressional correspondent Martin Tolchin likened the reassignments to the Saturday Night Massacre, and forty-one employees signed a letter of protest. The Washington Post learned of the discontent, much to Frankel's chagrin. Whitney later described the incident as the "biggest mistake" he had ever made. In November 1988, displeased with Whitney's performance, Frankel appointed London bureau chief Raines as Washington bureau chief and Whitney as London bureau chief. Unsentimental and aggressive, Raines sought to resuscitate a bureau that had foundered under Whitney. Several days after becoming bureau chief, Raines removed a speakerphone that Miller had used to call into news meetings without attending them in person, and he eventually moved her to the New York media desk. Raines formed a list of reporters who would receive better stories, exasperating journalists who were not on it. Raines's style attracted attention from publications such as Spy, which particularly noted his eccentricities, such as installing a hotline in the clerks' desk specifically for his use.

In July 1989, Lelyveld was made deputy managing editor. Bernard Gwertzman—whom Lelyveld had wanted as his deputy—was appointed foreign editor. Gwertzman would run the foreign desk through the Revolutions of 1989 and the dissolution of the Soviet Union, the end of the Cold War, the Gulf War, the negotiations to end apartheid in South Africa, the Oslo Accords, and the Yugoslav Wars, in what Lelyveld described as the "greatest run of foreign news since World War II".

By 1987, Sulzberger had demonstrated a waning interest in The New York Times, becoming chairman of the Metropolitan Museum of Art that year. Frankel spoke to Sulzberger Jr. rather than his father when discussing budgetary cuts following Black Monday. In April 1988, Sulzberger promoted his son from assistant publisher to deputy publisher. Sulzberger Jr.'s social and cultural beliefs stood in contrast to those held by his father; though he had bantered with employees and invited them to his Central Park West apartment upon arriving in New York in 1986, Sulzberger Jr. did not express the same outwardness upon being made assistant publisher, believing that a publisher should not befriend his employees. Likewise, he did not involve himself in the civic fabric of New York. Sulzberger's involvement with wealthy New Yorkers became an issue in 1991, when Walter Annenberg, deliberating whether to donate his US$1 billion (equivalent to US$2.36 billion in 2025) collection of Impressionist artworks to the Metropolitan Museum of Art, objected to the Times's habit of mentioning his father Moses's tax evasion charges whenever his name appeared. After Sulzberger told Lelyveld that the mentions of Annenberg's father were gratuitous, Annenberg asked that Michael Kimmelman's review be "devoid of zingers". On January 16, 1992, Sulzberger resigned.

1992–1994: Third Sulzberger era and the Internet

In September 1992, Sulzberger Jr. announced that he would shift the posts of three editors: Jack Rosenthal, Hoge, and Raines. Rosenthal replaced Hoge as editor of The New York Times Magazine while Raines became editorial page editor; Rosenthal would later be made assistant managing editor as part of the arrangement. Raines would continue directing coverage of the 1992 presidential election until November, taking control of the editorial page in January 1993.
Raines identified with Harry S. Truman's reported preference for one-handed economists, advisers who would not hedge with "on the other hand", and felt that the editorial board should likewise take decisive positions, ending Rosenthal's prohibition on the words "must" or "should". Sulzberger Jr. and Raines believed in environmental causes and saw the board as a vehicle for those beliefs; Robert B. Semple Jr. was empowered to write an opinion piece against the opening of a gold mine near Yellowstone National Park. Raines attracted criticism for his oft-acidic opinion pieces, in which he branded Senate Republican leader Dole a "churlish partisan", resulting in his denunciation on the Senate floor. The New Yorker notably questioned Raines's negative perception of then-president Bill Clinton, a Democrat, and The New York Observer chastised him in an article. Despite support from Sulzberger Jr., the editorial page drew criticism from Frankel—who said it was "too often shrill"—and Lelyveld—who found its language and tone excessive.

The Internet represented a generational shift within the self-assured New York Times. Among the most prominent Internet skeptics within the Times was Sulzberger, who negotiated The New York Times Company's US$1.1 billion (equivalent to US$2.45 billion in 2025) acquisition of The Boston Globe in 1993. Sulzberger reaffirmed his support for print media in a speech at the Midwest Research Institute in May 1994, comparing the Internet to the unkempt highways of India. Sulzberger Jr. unequivocally disagreed with his father, speaking to employees of The New York Times in February of that year to defiantly state that the paper must pursue digital endeavors. In June 1994, @times appeared on America Online as an extension of the Times, featuring news articles, film reviews, sports news, and business articles. Articles were retained for only twenty-four hours as a result of a deal The New York Times Company had signed in 1983 giving Mead Data Central, the parent company of LexisNexis, electronic rights to The New York Times's content. In December 1994, the Mead Corporation sold Mead Data Central to Reed Elsevier, returning the digital rights to the Times's content to the paper. In its first week, @times's message board drew over two thousand postings, but criticism over the service's lack of conviviality grew, particularly in comparison to Time's online offerings.

Frankel intended to retire in 1994, a decision hastened by the customary retirement age he was approaching and by his wife Joyce Purnick's breast cancer diagnosis. In June 1993, Frankel told The New Yorker that he was overworked and overburdened. During his tenure, The New York Times had been criticized for naming the woman who accused William Kennedy Smith of rape in 1991—an incident that drew righteous indignation from tabloids, prompted dissenting opinions from within the Washington bureau, and resulted in a front-page correction. On April 7, 1994, Frankel resigned, and Sulzberger Jr. named Lelyveld as his replacement. In one of his final decisions, Frankel had promoted metropolitan editor Gerald M. Boyd to assistant managing editor in September 1993 and placed Boyd's name first on the masthead, putting Boyd in contention to replace him. The appointment created a rift between Lelyveld and Boyd; Lelyveld felt that Boyd was not qualified enough. Lelyveld had instructed Boyd on how the lede story for the 1993 World Trade Center bombing should be written; Boyd dismissed him, earning Lelyveld's admiration.
Lelyveld had no particular favorite to serve as his managing editor, least of all Boyd, and selected Gene Roberts, the aging executive editor of The Philadelphia Inquirer who had let Lelyveld report on the Chappaquiddick incident in 1969.

1994–1998: The New York Times Electronic Media Company and changing landscape

By 1994, several employees of The New York Times had begun to access the Internet through Internet service providers such as Panix and the Pipeline, the latter of which was created by The New York Times Magazine alumnus James Gleick. Technology reporter John Markoff, who notably covered the pursuit of computer hacker Kevin Mitnick, had established an email address under the domain name nyt.com in 1990. Markoff moved the address to Internex, an Internet service provider in Menlo Park, California, in 1994. The account was compromised by Mitnick, who erroneously believed that Markoff was attempting to track him down; in actuality, physicist Tsutomu Shimomura was assisting the Federal Bureau of Investigation (FBI) in locating Mitnick at the time, and Mitnick was arrested weeks later. In July 1994, internet services manager Gordon Thompson sent the first email communiqué to the nytimes.com address from his Panix account. In November, senior information and technology editor Richard J. Meislin created Navigator, a web page on an internal server listing resources for Times editors; it was later made public. It remained regularly updated until February 2007 and sporadically updated until 2014.

Convinced of the capabilities of the Internet by a dinner he had with Meislin and Thompson,[c] Lelyveld assembled four employees—news desk editor Kevin McKenna, special projects executive editor William Stockton, advertising executive Daniel Donaghy, and information systems employee Steve Luciani—to develop a website for The New York Times at Sulzberger Jr.'s request. Changing media dynamics introduced a sense of urgency to the team; organizations that had traditionally co-existed with the Times—such as America Online, Yahoo, and CNN—were succeeding digitally. The expansion of websites such as Monster.com and Craigslist threatened The New York Times's classified advertisement sales, which accounted for US$300 million (equivalent to US$615.85 million in 2025) in revenue in 1996. In June 1995, The New York Times Company appointed businessman Martin Nisenholtz president of its digital media subsidiary. Nisenholtz reported directly to Lelyveld and general manager Russ Lewis, an unusual arrangement for a Times executive. Gwertzman was assigned to direct the editorial operations of the website. The team chose the domain name nytimes.com, believing that Markoff's nyt.com would be confused with New York Telephone.

In June 1995, two packages arrived at the mailrooms of The New York Times and The Washington Post, addressed respectively to then-deputy managing editors Warren Hoge and Michael Getler. The packages contained a copy of Industrial Society and Its Future (1995), a Luddite essay. The manifesto was written by Ted Kaczynski, a domestic terrorist known as the "Unabomber", who had mailed or planted sixteen bombs between 1978 and 1995, killing three people and injuring twenty-three others. The packages contained a note stating that the author—signing as "FC", for "Freedom Club"—would "desist from terrorism" if the publications published Industrial Society and Its Future. The Washington Post publisher Donald E. Graham and executive editor Leonard Downie Jr. met with Sulzberger Jr.
and Lelyveld to coordinate their response. Joined by Post president Boisfeuillet Jones Jr., the men met with FBI director Louis Freeh and attorney general Janet Reno. Freeh and Reno suggested that the publications publish the manifesto as a pamphlet or book, an idea the men rejected for its difficulty. Kaczynski's essay appeared on September 19.[d] Critics, such as the American Journalism Review, objected to giving in to such demands for fear of creating a copycat effect, though The Washington Post reported that most readers from outside the Washington, D.C. area requested reprints and souvenir copies. Sulzberger Jr. defended the publication of Kaczynski's essay, citing the credibility of the threat given Kaczynski's record. Kaczynski's brother David recognized the writing style of the essay in the Times and reported his suspicions to the FBI; Kaczynski was arrested in April 1996.

On January 19, 1996, at exactly 11:59 p.m., nytimes.com was launched at the Hippodrome Building; it was formally announced on January 22 in order to give engineers the weekend to resolve any issues. Sulzberger Jr., Lelyveld, and Lewis sent a case of French champagne to the building. The website required users to register an account; according to Nisenholtz, this was done for company-wide and advertiser analytics purposes. Jim Romenesko, then a writer for the St. Paul Pioneer Press, was the first person to register an account on the site, after attempting to access it for a month. In the initial hours following the website's official launch, readers registered at a rate of one per second. By March 1997, one million people had registered accounts, compared with the 1.1 million weekday print subscribers and the 1.6 million Sunday print subscribers. The website was rudimentary, consisting of four stories and minimal photographs and design, though it contained an interactive crossword puzzle and a calculator for determining the income tax one would pay under tax reforms promised by Bob Dole, the Republican nominee in the 1996 presidential election. nytimes.com was free to access and did not implement a paywall for readers in the United States, though an international paywall of US$35 (equivalent to $72 in 2025) a month was in effect until July 1997.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_ref-70] | [TOKENS: 10628]
Computer

A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users.

Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries.

Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.

Etymology

It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century.
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions.

History

Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.

The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century.

Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia, in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division.
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft.

In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials; his designs were published in 1901 by the Paris Academy of Sciences.

Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, which he also designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833, he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906.

In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design for a machine capable of calculating formulas like $a^x(y-z)^2$ for a sequence of sets of values; a brief modern sketch of such a computation appears at the end of this passage. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use in some specialized applications such as education (slide rule) and aircraft (control systems).

Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
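The following is a minimal sketch in Python of the kind of computation Torres Quevedo's 1914 design was meant to carry out: evaluating the formula a^x(y-z)^2 for a sequence of sets of values. It is an illustration, not his mechanism; the function name and sample values are invented for the example, and ordinary floating-point arithmetic stands in for the arrangement his paper proposed.

```python
# A modern illustration, not Torres Quevedo's machine: evaluate
# a^x * (y - z)^2 for a sequence of sets of values, using the
# floating-point arithmetic whose idea his paper introduced.
# The sample values below are made up for demonstration.

def torres_formula(a: float, x: float, y: float, z: float) -> float:
    """Evaluate a^x * (y - z)^2 for one set of values."""
    return (a ** x) * (y - z) ** 2

# The machine was to be driven by a read-only program; here a fixed
# list of value sets plays that role.
value_sets = [
    (2.0, 3.0, 7.0, 4.0),
    (1.5, 2.0, 5.0, 1.0),
    (10.0, 0.5, 9.0, 3.0),
]

for a, x, y, z in value_sets:
    print(f"a={a}, x={x}, y={y}, z={z} -> {torres_formula(a, x, y, z)}")
```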
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. The use of a binary system, rather than the harder-to-implement decimal system used in Charles Babbage's earlier design, meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after an initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in Berlin in 1941 as the first company whose sole purpose was the development of computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe.

Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory.

During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
Colossus Mark I contained 1,500 thermionic valves (tubes), but the Mark II, with 2,400 valves, was both five times faster and simpler to operate than the Mark I, greatly speeding the decoding process.

The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster and more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored-program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". The machine combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.

The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.

Early computing machines had fixed programs: changing a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation, as the sketch below illustrates. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
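As a minimal illustration of the stored-program idea described above, the following Python sketch simulates a tiny hypothetical accumulator machine in which instructions and data share one memory, so that loading a new program means writing different values into memory rather than re-wiring anything. The instruction set and memory layout are invented for the example and do not correspond to the Baby, the EDVAC, or any other historical design.

```python
# A toy stored-program machine (hypothetical, for illustration only).
# Instructions and data live in the same memory: each instruction is a
# pair of cells, an opcode followed by an operand address.
# Opcodes: 0=HALT, 1=LOAD, 2=ADD, 3=STORE, 4=JUMP-IF-ZERO.

def run(memory: list[int]) -> list[int]:
    """Execute instructions from memory until HALT; return final memory."""
    pc, acc = 0, 0                   # program counter and accumulator
    while True:
        op, arg = memory[pc], memory[pc + 1]
        pc += 2
        if op == 0:                  # HALT
            return memory
        elif op == 1:                # LOAD: acc <- memory[arg]
            acc = memory[arg]
        elif op == 2:                # ADD: acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == 3:                # STORE: memory[arg] <- acc
            memory[arg] = acc
        elif op == 4 and acc == 0:   # JUMP-IF-ZERO: branch when acc is 0
            pc = arg

# Program: add the values in cells 12 and 13, store the sum in cell 14.
program = [1, 12,        # LOAD  cell 12
           2, 13,        # ADD   cell 13
           3, 14,        # STORE cell 14
           0, 0,         # HALT
           0, 0, 0, 0,   # unused padding
           5, 7, 0]      # data cells 12, 13, 14
print(run(program)[14])  # -> 12
```

Changing the contents of the first eight cells changes what the machine does, which is precisely the property that distinguished stored-program machines from fixed-program devices such as the ENIAC in its original plugboard configuration.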
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947; it was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–semiconductor field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers.
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year after Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a single microchip the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above (known as package on package, or PoP) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems, are powered by SoCs, and recently became the dominant computing devices on the market. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits, as the short sketch at the end of this passage illustrates. Input devices are the means by which the operations of a computer are controlled and it is provided with data. Examples include keyboards, mice, scanners and cameras. Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. Examples include displays, printers and speakers.
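To make the gate-level picture concrete, the following sketch models gates as ordinary functions on bits and composes them into a one-bit adder. It is a generic illustration in Python, not a description of any particular machine; the function names and the adder construction are standard textbook material, introduced here only for illustration.

    # Logic gates modelled as functions on bits (0 or 1).
    def AND(a, b): return a & b
    def OR(a, b):  return a | b
    def NOT(a):    return 1 - a
    def XOR(a, b): return a ^ b

    def half_adder(a, b):
        """Add two bits; return (sum, carry)."""
        return XOR(a, b), AND(a, b)

    def full_adder(a, b, carry_in):
        """Gates controlling gates: add two bits plus an incoming carry."""
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, OR(c1, c2)

    # 1 + 1 with no incoming carry is binary 10: sum bit 0, carry bit 1.
    print(full_adder(1, 1, 0))  # (0, 1)

Chaining eight such full adders, carry to carry, yields the kind of eight-bit addition on bytes that the memory discussion below describes.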
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows (this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU): read the code for the next instruction from the cell indicated by the program counter; decode the numerical code for the instruction into a set of commands or signals for each of the other systems; increment the program counter so it points to the next instruction; read whatever data the instruction requires from cells in memory; provide the necessary inputs to the ALU or registers; if the instruction requires an ALU or specialized hardware to complete, instruct the hardware to perform the requested operation; write the result from the ALU back to a memory location, to a register or perhaps to an output device; and jump back to the first step. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is yet another smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. A toy simulation of this fetch–decode–execute cycle follows this passage. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometric functions such as sine and cosine, and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices.
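The cycle just described can be sketched as a toy stored-program machine in Python. Everything here is hypothetical and for illustration only: the three-field instruction format and the opcode names LOADI, ADD, JUMP and HALT are invented, not taken from any real instruction set.

    # A toy machine: one list serves as memory for both instructions and data.
    def run(memory):
        pc = 0  # program counter: index of the next instruction
        while True:
            op, a, b = memory[pc]   # fetch the instruction at the counter
            pc += 1                 # step the counter to the next cell
            if op == "LOADI":       # decode and execute
                memory[a] = b       # store the constant b in cell a
            elif op == "ADD":
                memory[a] = memory[a] + memory[b]
            elif op == "JUMP":
                pc = a              # a jump simply rewrites the counter
            elif op == "HALT":
                return memory

    # A program: put 2 and 3 into cells 10 and 11, then add them into cell 10.
    program = [
        ("LOADI", 10, 2),
        ("LOADI", 11, 3),
        ("ADD", 10, 11),
        ("HALT", 0, 0),
    ]
    memory = program + [("HALT", 0, 0)] * 6 + [0, 0]  # pad so cells 10-11 exist
    print(run(memory)[10])  # prints 5

Because instructions and data share one memory, such a program could in principle rewrite its own instructions, which is exactly the property of the stored-program design discussed earlier.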
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256), either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes may be used (typically two, four or eight). When negative numbers are required, they are usually stored in two's complement notation, illustrated in the sketch at the end of this passage. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties, RAM and ROM. RAM can be read and written to whenever the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. However, it is typically much slower than conventional ROM and RAM, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
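The byte arithmetic above can be checked directly. The following generic Python sketch (not tied to any particular machine) reads the same eight-bit pattern both ways and shows the invert-and-add-one rule for negation; the helper names are invented for illustration.

    # One 8-bit pattern, two readings.
    def as_unsigned(bits):
        """Interpret an 8-bit string as a value from 0 to 255."""
        return int(bits, 2)

    def as_signed(bits):
        """Interpret the same bits in two's complement: -128 to +127."""
        value = int(bits, 2)
        return value - 256 if value >= 128 else value

    pattern = "11111011"
    print(as_unsigned(pattern))   # 251
    print(as_signed(pattern))     # -5  (251 - 256)

    # Negation rule: invert every bit, then add one, staying within 8 bits.
    n = 5
    negated = (~n + 1) & 0xFF
    print(format(negated, "08b"))  # 11111011, the pattern above

The appeal of this encoding is that the same adder circuit handles positive and negative numbers alike, with no separate subtraction hardware needed.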
I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing" since each program is allocated a "slice" of time in turn; a minimal sketch of the idea appears at the end of this section. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
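Round-robin time-sharing of the kind described above can be mimicked in a few lines. In the following Python sketch, generators stand in for programs and a yield stands in for the point at which a program gives up its time slice (as it would on an interrupt or while waiting for I/O); this illustrates the scheduling idea only and is not how a real operating system is implemented.

    from collections import deque

    def worker(name, steps):
        """A 'program' that does a little work in each time slice."""
        for i in range(1, steps + 1):
            print(f"{name}: step {i}")
            yield  # give up the processor at the end of the slice

    ready = deque([worker("A", 3), worker("B", 2)])  # the run queue
    while ready:
        program = ready.popleft()
        try:
            next(program)          # run the program for one time slice
            ready.append(program)  # still runnable: back of the queue
        except StopIteration:
            pass                   # the program finished; drop it

The printed steps of A and B interleave, even though only one of them is ever executing at any given instant.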
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine–based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.
The following example is written in the MIPS assembly language:

            addi $8, $0, 0          # initialize the running sum to 0
            addi $9, $0, 1          # set the first number to add to 1
    loop:   slti $10, $9, 1001      # $10 is 1 while the counter is 1000 or less
            beq  $10, $0, finish    # leave the loop once the counter passes 1,000
            add  $8, $8, $9         # add the counter to the running sum
            addi $9, $9, 1          # advance the counter to the next number
            j    loop               # repeat the summing process
    finish: add  $2, $8, $0         # copy the sum into the output register $2

Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language) and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages: some intended for general-purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame console) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. The design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, use of the programming constructs within languages, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Rear Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s.
The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#CITEREFMao1996] | [TOKENS: 10728]
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006—over eleven years after it had been released, and in the same year the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole benefactor of licensing related to music and film software that it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted its research but decided to turn what it had developed with Nintendo and Sega into a console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, the project was opposed by a majority of those present at the meeting, as well as by older Sony executives, who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following the negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. The console was not marketed with Sony's name, in contrast to Nintendo's consoles. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since Namco rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995); Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of its own while the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs was beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should Sony decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted due to the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and being easy to program for, and felt he and his team were successful in this regard. Its technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not been disclosed yet. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units were sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that their Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race simply said "$299" and left to a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer contributed to the PlayStation's early success, with some critics considering it superior to Sega's arcade counterpart Daytona USA (1994), as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that the attach rate of sold games and consoles was four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (the PS one model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, a third company had registered the PlayStation trademark, so the console could not be officially released; the officially distributed Sega Saturn initially took over the market, but as the Saturn withdrew, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was initially the Sega Saturn, but after it left the market the PlayStation's user base grew to some 300,000 by January 2000, even though Sony China had no plans to release the console there. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the geometric symbols from the controller's buttons stood in for letters: "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (U R Not Ready, with a red "E"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am." As the console's appeal widened, Sony's marketing broadened from its earlier focus on mature players to target younger children as well.

Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo's and Sega's reliance on television campaigns, Glendenning theorised that young adults moving on from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence that early-1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, he felt the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where selected games could be demonstrated. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues; by 1997, 52 nightclubs in the United Kingdom had dedicated PlayStation rooms. Glendenning recalled that he had discreetly spent at least £100,000 a year from a slush fund on impromptu marketing.

In 1996, Sony expanded its CD production facilities in the United States to meet the high demand for PlayStation games, increasing monthly output from 4 million to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead increased dramatically when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales of PlayStation hardware and software only increased following the launch of the Nintendo 64; Tokunaka speculated that the Nintendo 64's launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", with neither console leading in sales for any meaningful length of time.

In 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast in a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to break Sony's dominance; Sony still held 60% of the overall North American video game market at the end of 1999. Sega's initial confidence in the new console was undermined when Japanese sales proved lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the millennium: in mid-2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone faster than its predecessor. The combined success of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3.

Hardware

The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering roughly 30 MIPS. This 32-bit CPU relies heavily on the "cop2" coprocessor on the same die, the Geometry Transfer Engine (GTE), for the 3D and matrix mathematics needed to render complex 3D graphics at speed. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, a sampling rate of up to 44.1 kHz, and music sequencing. The console features 2 MB of main RAM and an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables.

The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (older models also have RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to lack of use. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the GTE so that they can be processed and displayed on screen by the GPU. The GPU can generate a total of 4,000 sprites and render 180,000 texture-mapped polygons per second, in addition to 360,000 flat-shaded polygons per second.

The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of connectors on the rear of the unit. This began with the original Japanese launch model: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models saw the removal of the parallel port, with the final revisions retaining only the serial port.

Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service, and came with the documentation and software needed to program PlayStation games and applications in C.
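Because the R3000 has no floating-point unit, PlayStation-era 3D code, including hobbyist Net Yaroze programs, performs its mathematics in fixed-point arithmetic; the GTE works in a 4.12-style format in which 4096 represents 1.0. The following C sketch illustrates only the general technique; it is not code from Sony's libraries, and the helper names are invented for the example.

#include <stdio.h>
#include <stdint.h>

#define ONE 4096                      /* 1.0 in 4.12 fixed point */

typedef int32_t fixed;

static fixed fx_from_float(float f) { return (fixed)(f * ONE); }
static float fx_to_float(fixed x)   { return (float)x / ONE; }

/* Multiply two 4.12 values; the 64-bit intermediate avoids overflow. */
static fixed fx_mul(fixed a, fixed b)
{
    return (fixed)(((int64_t)a * b) >> 12);
}

/* Rotate a 2D point by a precomputed sine/cosine pair -- the kind of
   vertex arithmetic the GTE performs in hardware for whole batches. */
static void rotate2d(fixed *x, fixed *y, fixed s, fixed c)
{
    fixed nx = fx_mul(*x, c) - fx_mul(*y, s);
    fixed ny = fx_mul(*x, s) + fx_mul(*y, c);
    *x = nx;
    *y = ny;
}

int main(void)
{
    fixed x = ONE, y = 0;                      /* the point (1, 0) */
    fixed s = fx_from_float(0.7071f);          /* sin 45 degrees */
    fixed c = fx_from_float(0.7071f);          /* cos 45 degrees */
    rotate2d(&x, &y, s, c);
    printf("rotated: (%.4f, %.4f)\n", fx_to_float(x), fx_to_float(y));
    return 0;
}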
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor for an extra layer of portability. Production of the LCD Combo pack ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006.

Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the console in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, red circle, blue cross and pink square (△, ○, ✕, □). Rather than depicting the traditionally used letters or numbers on its buttons, the PlayStation controller established a visual trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no" respectively (though this layout is reversed in Western versions), that the triangle symbolises a point of view, and that the square is equated to a sheet of paper, to be used to access menus. The European and North American models of the original controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person.

Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously seen on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion, as the sketch below illustrates. The right joystick also carries a thumb-operated digital hat switch, corresponding to the traditional D-pad, for instances when simple digital movements are sufficient. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size.

The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, giving players finer control of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which introduced two new buttons, mapped to clicking the sticks in), the Dual Analog features an "Analog" button and LED beneath the Start and Select buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that the haptic feedback would be removed from all overseas iterations before the United States release. A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the controller's release outside Japan over similarities with the Nintendo 64 controller's Rumble Pak; a Nintendo spokesman denied that the company took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down.
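By way of illustration, an analogue axis reaches game code as a raw reading across the stick's whole travel (on Sony's analogue pads, an 8-bit value per axis, centred near 128), which the game then rescales and filters; a digital pad, by contrast, only ever reports one of eight directions. The following C sketch is illustrative only, and its dead-zone threshold is an assumed value rather than anything specified by the hardware.

#include <stdio.h>
#include <stdint.h>

/* Convert a raw 8-bit potentiometer reading (0-255, roughly 128 at
   centre) into a signed -1.0..1.0 range, ignoring centre jitter. */
static float stick_axis(uint8_t raw)
{
    float v = ((float)raw - 128.0f) / 128.0f;
    if (v > -0.1f && v < 0.1f)        /* illustrative dead zone */
        return 0.0f;
    return v;
}

int main(void)
{
    printf("centre: %.2f  left: %.2f  right: %.2f\n",
           stick_axis(128), stick_axis(0), stick_axis(255));
    return 0;
}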
In November 1997, Sony introduced the DualShock controller, its name derived from its use of two ("dual") vibration ("shock") motors. Unlike its predecessor, its analogue sticks feature textured rubber grips, longer handles and slightly different shoulder buttons, and rumble feedback is included as standard on all versions. The DualShock later replaced its predecessors as the default controller.

Sony released a series of peripherals to add extra layers of functionality to the PlayStation. These include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun) and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory-card peripheral which acts as a miniature personal digital assistant, featuring a monochrome liquid-crystal display (LCD), infrared communication, a real-time clock, built-in flash memory and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. It proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite promotion in Europe and North America.

In addition to playing games, most PlayStation models can play CD Audio, and the Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models include a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, which brings up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs between the PlayStation and PS One depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle.

PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was at the centre of several contentious lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions and filtered textures that were not possible on the original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem! was subsequently forced to shut down in November 2001.
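At their core, emulators such as Bleem! and the open-source PCSX (which the later PlayStation Classic used) are built around a software re-implementation of the console's MIPS R3000 CPU, alongside the GTE, GPU, SPU and timing hardware. The heavily simplified C sketch below shows the shape of an interpreter-style fetch-decode-execute loop handling just three instructions; it is illustrative only and does not reflect any particular emulator's implementation (Bleem!, as noted, was hand-written in assembly).

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define RAM_SIZE (2u * 1024 * 1024)   /* the PlayStation's 2 MB of main RAM */

typedef struct {
    uint32_t regs[32];                /* general-purpose registers */
    uint32_t pc;                      /* program counter */
    uint8_t  ram[RAM_SIZE];
} R3000;

static uint32_t fetch(R3000 *cpu)
{
    uint32_t word;
    memcpy(&word, &cpu->ram[cpu->pc & (RAM_SIZE - 1)], 4);
    cpu->pc += 4;
    return word;
}

/* Decode and execute one instruction; a real emulator covers the full
   instruction set, plus exceptions, coprocessors and cycle timing. */
static void step(R3000 *cpu)
{
    uint32_t instr = fetch(cpu);
    uint32_t op  = instr >> 26;
    uint32_t rs  = (instr >> 21) & 31;
    uint32_t rt  = (instr >> 16) & 31;
    uint16_t imm = (uint16_t)(instr & 0xFFFF);

    switch (op) {
    case 0x09:  /* ADDIU rt, rs, imm (imm sign-extended) */
        cpu->regs[rt] = cpu->regs[rs] + (uint32_t)(int16_t)imm;
        break;
    case 0x0D:  /* ORI rt, rs, imm (imm zero-extended) */
        cpu->regs[rt] = cpu->regs[rs] | imm;
        break;
    case 0x0F:  /* LUI rt, imm */
        cpu->regs[rt] = (uint32_t)imm << 16;
        break;
    default:    /* unhandled in this sketch */
        break;
    }
    cpu->regs[0] = 0;                 /* $zero is hard-wired to zero */
}

int main(void)
{
    static R3000 cpu;                 /* zero-initialised; pc = 0 */
    /* lui $1, 0x8001 ; ori $1, $1, 0x2345  =>  $1 = 0x80012345 */
    uint32_t prog[2] = { 0x3C018001u, 0x34212345u };
    memcpy(cpu.ram, prog, sizeof prog);
    step(&cpu);
    step(&cpu);
    printf("$1 = 0x%08X\n", cpu.regs[1]);
    return 0;
}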
Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, given the growing availability of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary disc-manufacturing process was developed that, in conjunction with an augmented optical drive assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. The console would not boot a game disc unless a specific "wobble" frequency was present in the data of the disc's pregap sector; the same system was also used to encode the disc's regional lockout. The signal lay within Red Book CD tolerances, so the actual content of a PlayStation disc could still be read by a conventional disc drive. The drive could not detect the wobble frequency, however, and therefore produced duplicates that omitted it: the laser pick-up system of an ordinary optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it during reading. (A conceptual sketch of the resulting boot-time check appears below.)

Early PlayStations, particularly early SCPH-1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents that lead to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects in the laser assembly. The solution is to place the console on a surface that dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power, and therefore generates heat, even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit whose case and movable parts are all built out of plastic. Over time, the plastic lens-sled rail wears out, usually unevenly, through friction. The placement of the laser unit close to the power supply accelerates the wear, as the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser tilts and no longer points directly at the CD; after this, games fail to load due to data read errors. Sony fixed the problem on later models by making the sled out of die-cast metal and placing the laser unit further from the power supply.

Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of television, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.
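Conceptually, the boot-time check described above amounts to comparing a licence string recovered from the wobble against the console's own region; modchip documentation commonly reports the strings as "SCEI", "SCEA" and "SCEE" for Japan, North America and Europe respectively. The C sketch below is purely illustrative: on real hardware the check is performed inside the console's CD controller, and every function here is a hypothetical stand-in.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for querying the drive. On real hardware the
   licence string is recovered from the wobble in the disc's pregap;
   a burned copy yields no valid string at all. */
static bool read_wobble_string(char out[5])
{
    strcpy(out, "SCEE");              /* pretend a PAL pressed disc */
    return true;
}

static bool disc_may_boot(const char *console_region)
{
    char licence[5];
    if (!read_wobble_string(licence))
        return false;                 /* burned copies lack the wobble */
    return strcmp(licence, console_region) == 0;  /* regional lockout */
}

int main(void)
{
    printf("PAL console:    %s\n", disc_may_boot("SCEE") ? "boots" : "refuses");
    printf("NTSC-J console: %s\n", disc_may_boot("SCEI") ? "boots" : "refuses");
    return 0;
}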
Game library

The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998) and Metal Gear Solid (1998), all of which became established franchises. Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's best-selling game is Gran Turismo (1997), which sold 10.85 million units. By the PlayStation's discontinuation in 2006, cumulative software shipments stood at 962 million units.

Following the 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (released in the West as Battle Arena Toshinden) and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; the studio's breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone sold over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging catalogue even after the launch of the PlayStation 2; notable exclusives of this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony.

Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel-case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released) and focus testing showed that most consumers preferred it.

Reception

The PlayStation was mostly well received upon release, and critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
In May 1995, Famicom Tsūshin scored the console 19 out of 40, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0 and 9.5; for every editor, this was the highest score given to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years as developers mastered the system's capabilities and Sony revised its stance on 2D and role-playing games. They also complimented the low price of games compared with the Nintendo 64's, and noted that the PlayStation was the only console on the market that could be relied upon to deliver a solid stream of games in the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.

Legacy

SCE was an upstart in the video game industry in late 1994, as the market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand thanks to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. As of 2025, it remains the sixth best-selling console of all time, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its eleven-year lifespan, the second-highest number ever produced for a console. Its success was a significant financial boon for Sony, with the video game division coming to contribute roughly 23% of the company's operating profits.

Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released under the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs, and hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4 and PlayStation 5.

The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh-best console on its list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry, and citing its role in moving the industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound that it "ruled the 1990s".

In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard; Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter, and in June Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, along with documentation and design files, in the near future.

The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it went head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern that the proprietary cartridge format better enforced copy protection, given the company's substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week compared with two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the consumer than ROM cartridges while still earning the same net revenue per unit; an illustrative calculation follows below. In Japan, Sony published smaller runs of a wider variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges because of their manufacturing lead time. The lower production costs of CD-ROMs also gave publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996:

Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation.

The increasing complexity of game development also pushed cartridges to their storage limits, gradually discouraging some third-party developers; part of the CD format's appeal to publishers was that discs could be produced at significantly lower cost and offered more production flexibility to meet demand.
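As a rough illustration of that pricing claim, using per-unit figures that are assumptions chosen for the arithmetic rather than sourced costs: suppose a cartridge game retailed at $70 and cost about $30 to manufacture, while a CD-ROM title retailed 40% lower and cost about $2 to press. The publisher's gross per unit would then be unchanged:

\[
\underbrace{\$70 - \$30}_{\text{cartridge}} = \$40,
\qquad
\underbrace{\$70 \times (1 - 0.4) - \$2}_{\text{CD-ROM}} = \$40 .
\]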
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64: Konami, for example, released only thirteen Nintendo 64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many developed either by Nintendo itself or by second-party studios such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the original console's release. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable and a USB Type-A cable. Internally, the console uses a MediaTek MT8167A system on a chip with four Cortex-A35 processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit, along with 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console.

The PlayStation Classic received negative reviews from critics and was compared unfavourably with Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions of certain games, use of the original controller and high retail price, though the console's design received praise. The console sold poorly.