[SOURCE: https://en.wikipedia.org/wiki/Maor_Farid#cite_ref-11] | [TOKENS: 1458] |
Maor Farid Dr. Maor Farid (Hebrew: מאור פריד; born April 20, 1992) is an Israeli scientist, engineer and artificial intelligence researcher at the Massachusetts Institute of Technology, social activist, and author. He is the founder and CEO of Learn to Succeed (Hebrew: ללמוד להצליח), an organization for empowering youths from the Israeli socio-economic periphery and youths at risk, a regional manager of the Israeli center of ScienceAbroad at MIT, and an activist in the American Technion Society. He is an alumnus of Unit 8200 and a fellow of the Fulbright Program and the Israel Scholarship Educational Foundation. Farid was named to the Forbes 30 Under 30 list of 2019 and won the Moskowitz Prize for Zionism. Early life Maor was born in Ness Ziona, a city in central Israel, the eldest son of parents from Mizrahi Jewish immigrant families from Iraq and Libya. Maor suffered from attention deficit hyperactivity disorder (ADHD) from a young age and was classified as a problematic and violent student; his ADHD was diagnosed only after he began his university studies. However, inspired by his parents' background, he aspired to excel at school in order to secure a better future for his family. During elementary school, Maor took part in local quizzes about Jewish history and Zionism, which significantly shaped his identity and national perspective. Farid graduated from high school with the highest GPA in his school. He was later drafted into the Israel Defense Forces and admitted to the Brakim Program, an excellence program of the Israeli Intelligence Corps that trains leading R&D officers for the Israeli military and defense industry. Maor graduated from the program with honors and was selected by the Israeli Prime Minister's Office and Unit 8200, where he served as an artificial intelligence researcher, officer, and commander. During his military service, he received various honors and awards, such as the Excellent Scientist Award, given to the top three academics serving in the Israel Defense Forces. In 2019, Farid completed his military service at the rank of captain. Education and academic career As part of the four-year Brakim Program, Maor completed his Bachelor's and Master's degrees in Mechanical Engineering at the Technion with honors. He then initiated his Ph.D. research in collaboration with the Israel Atomic Energy Commission (IAEC), in parallel with his military service. The main goals of his Ph.D. research were to predict irreversible effects of major earthquakes on Israel's nuclear facilities and to improve their seismic resistance using energy-absorption technologies. The mathematical models developed by Farid were able to forecast earthquake effects on facilities with major hazard potential, and predicted the failure of liquid storage tanks in earthquakes that took place in Italy (2012) and Mexico (2017). The energy-absorption technologies used increased the seismic resistance of those sensitive facilities by up to 90%. The research results were published in multiple papers in peer-reviewed academic journals and presented at international academic conferences. Later, this research expanded into an official collaboration between the Technion and the Shimon Peres Negev Nuclear Research Center, which aims to apply the findings to existing sensitive systems and won funding of 1.5 million NIS from the Pazy Foundation of the Israel Atomic Energy Commission and the Council for Higher Education. In 2017, Farid completed his Ph.D.
at the age of 24, becoming the youngest graduate at the Technion that year. At the graduation ceremonies, he honored his parents by having them receive the diplomas on his behalf. That same year, he served as a lecturer at Ben-Gurion University, teaching an original course he developed to address knowledge gaps he had identified in the Israeli defense industry. In 2018, Dr. Farid served as an artificial intelligence researcher on a data science team of Unit 8200, where he developed machine learning-based solutions for military and operational needs. In 2019, Farid won the Fulbright and Israel Scholarship Educational Foundation scholarships and was accepted to a postdoctoral position at the Massachusetts Institute of Technology, where he develops real-time methods for predicting earthquake effects using machine learning techniques. In 2020, Farid was accepted to the Emerging Leaders Program at Harvard Kennedy School in Cambridge, Massachusetts. That same year, he received a research excellence grant from the Israel Academy of Sciences and Humanities for leading a research collaboration between MIT and the Technion. Social activism Farid's social activism focuses on empowering youths from disadvantaged backgrounds from an early age. From 2010 to 2015, he mentored a robotics team from Dimona in the FIRST Robotics Competition, tutored mathematics in the "Aharai!" program for at-risk high-school students in Dimona and Be'er Sheva, and served as a mentor and private tutor for adolescents and reserve-duty soldiers from disadvantaged backgrounds. In 2010, he initiated the "Learn to Succeed" (Hebrew: ללמוד להצליח) project, aimed at narrowing social gaps in Israeli society by empowering youths from the social, economic, and geographic periphery toward excellence, self-fulfillment, and formal education. In 2018, Learn to Succeed became an official non-profit organization. That same year, Farid led a 150,000 NIS crowdfunding campaign to expand the organization to a national scale. In 2019, he published the book "Learn to Succeed", in which he describes his struggle with ADHD, the violent environment in which he grew up, and the transformation he underwent from being a violent teenager to becoming the youngest Ph.D. graduate at the Technion. The book was given to more than two thousand youths at risk and became a top seller in Israel shortly after its publication. Maor dedicated the book to his parents and to the memory of his friend Captain Tal Nachman, who was killed during operational activity in his military service in 2014. The organization comprises hundreds of volunteers; it gives full scholarships to STEM students from the periphery who mentor Jewish and Arab youths from disadvantaged backgrounds, runs a hotline offering practical and emotional support online to hundreds of youths, parents, and educators, organizes inspirational activities with a military orientation to motivate its teenage members toward meaningful military service, and gives inspirational lectures to more than 5,000 youths each year. In 2019, Maor initiated a collaboration with Unit 8200 under which dozens of the program's members are interviewed for the unit, an opportunity usually reserved for the students with the highest matriculation exam grades in each class. In 2020, Dr. Farid established the ScienceAbroad center at MIT, which aims to strengthen the connections between Israeli researchers at the institute and the State of Israel.
He also serves as a volunteer in the American Technion Society. Personal life Farid is married to Michal. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/New_York_Post] | [TOKENS: 7259] |
New York Post The New York Post (NY Post), founded as the New York Evening Post (originally New-York Evening Post), is an American conservative daily tabloid newspaper published in New York City. The Post also operates three online sites: NYPost.com; Page Six, a gossip site; and Decider, an entertainment site. The newspaper was founded in 1801 by Alexander Hamilton, a Federalist and Founding Father who was appointed the nation's first secretary of the treasury by George Washington. Its most notable 19th-century editor was William Cullen Bryant. The newspaper became a respected broadsheet in the 19th century. In the mid-20th century, the newspaper was owned by Dorothy Schiff, who developed the tabloid format that has been used since by the newspaper. In 1976, Rupert Murdoch's News Corp bought the Post for US$30.5 million (equivalent to $173 million in 2025). Since its acquisition by News Corp, the Post has been frequently criticized over the years for its controversial headlines and editorial choices, along with accusations of bias in its political coverage. As of 2023, the New York Post is the third-largest newspaper by print circulation among all U.S. newspapers. History The Post was founded by Alexander Hamilton with about $10,000, equivalent to $193,500 in 2025, from a group of investors in the autumn of 1801 as the New-York Evening Post, a broadsheet. Hamilton's co-investors included other New York members of the Federalist Party, including Robert Troup and Oliver Wolcott, who were dismayed by the election of Thomas Jefferson as U.S. president and the rise in popularity of the Democratic-Republican Party. At a meeting held at Archibald Gracie's weekend villa, which is now Gracie Mansion, Hamilton recruited the first investors for the new paper. Hamilton chose William Coleman as his first editor. The most notable 19th-century Evening Post editor was the abolitionist and poet William Cullen Bryant. So well respected was the Evening Post under Bryant's editorship that it received praise from the English philosopher John Stuart Mill in 1864. In addition to literary and drama reviews, William Leggett began to write political editorials for the Post. Leggett espoused fierce opposition to central banking and support for the organization of labor unions. He was a member of the Equal Rights Party. In 1831, he became a co-owner and editor of the Post, eventually working as sole editor of the newspaper while Bryant traveled in Europe in 1834 and 1835. One of the co-owners of the paper during this period was John Bigelow. Born in Malden-on-Hudson, New York, Bigelow graduated in 1835 from Union College, where he was a member of the Sigma Phi Society and the Philomathean Society, and was admitted to the bar in 1838. From 1849 to 1861, he was one of the editors and co-owners of the Evening Post. Another owner, with Bryant and Bigelow, was Isaac Henderson. His ownership led, in 1877, to the involvement of his son Isaac Henderson Jr., who became the paper's publisher, a stockholder, and a member of its board just five years after graduating from college. Henderson Sr.'s 33-year tenure with the Evening Post ended in 1879, when it was learned that he had defrauded Bryant the entire time. Henderson Jr. sold his interest in the newspaper in 1881. In 1881, Henry Villard took control of the Evening Post and The Nation, which became the Post's weekly edition. With this acquisition, the paper was managed by the triumvirate of Carl Schurz, Horace White, and Edwin L. Godkin.
When Schurz left the paper in 1883, Godkin became editor-in-chief. White became editor-in-chief in 1899, and remained in that role until his retirement in 1903. In 1897, both publications passed to the management of Villard's son, Oswald Garrison Villard, a founding member of the NAACP; he was also a founder of the American Anti-Imperialist League. Villard sold the newspaper in 1918 after widespread allegations of pro-German sympathies during World War I hurt its circulation. The new owner was Thomas Lamont, a senior partner in the Wall Street firm of J.P. Morgan & Co. Unable to stem the paper's financial losses, he sold it to a consortium of 34 financial and reform political leaders, headed by Edwin Francis Gay, dean of the Harvard Business School, whose members included Franklin D. Roosevelt. In 1924, the conservative Cyrus H. K. Curtis, publisher of the Ladies Home Journal, purchased the Evening Post; nine years later, in 1933, he briefly turned it into a non-sensational tabloid. In 1928, Wilella Waldorf became drama editor at the Evening Post. She was one of the first women to hold an editorial role at the newspaper, and during her time at the Evening Post she was the only female first-string critic on a New York newspaper. She was preceded by Clara Savage Littledale, the first woman reporter ever hired by the Post and the editor of the woman's page in 1914. In 1934, J. David Stern purchased the paper, changed its name to the New York Post, and restored its broadsheet size and liberal perspective. For four months of that same year, future U.S. Senator from Alaska Ernest Gruening was an editor of the paper. In 1939, Dorothy Schiff purchased the paper. Her husband George Backer was named editor and publisher. Her second editor and third husband, Ted Thackrey, became co-publisher and co-editor with Schiff in 1942. Together, they recast the newspaper into its modern-day tabloid format. In 1945, The Bronx Home News merged with it. In 1949, James Wechsler became editor of the paper, running both the news and the editorial pages. In 1961, he turned over the news section to Paul Sann and stayed on as editorial page editor until 1980. Under Schiff's tenure the Post was seen to have a liberal tilt, supporting trade unions and social welfare, and featured some of the most popular columnists of the time, such as Joseph Cookman, Drew Pearson, Eleanor Roosevelt, Max Lerner, Murray Kempton, Pete Hamill, and Eric Sevareid, theater critic Richard Watts Jr., and gossip columnist Earl Wilson. In November 1976, it was announced that Australian Rupert Murdoch had bought the Post from Schiff, with the intention that Schiff would be retained as a consultant for five years. In 2005, it was reported that Murdoch had bought the newspaper for US$30.5 million. The Post at this point was the only surviving afternoon daily in New York City, and its circulation under Schiff had grown by two-thirds, particularly after the failure of the competing World Journal Tribune; however, the rising cost of operating an afternoon daily in a city with worsening daytime traffic congestion, combined with mounting competition from expanded local radio and TV news, cut into the Post's profitability. Although it made money from 1949 until Schiff's final year of ownership, when it lost $500,000, the paper has lost money ever since. In late October 1995, the Post announced plans to change its Monday through Saturday publication schedule and begin issuing a Sunday edition, which it had last published briefly in 1989.
On April 14, 1996, the Post delivered its new Sunday edition at a cost of 50 cents per copy, achieved by keeping its size to 120 pages. The price, significantly less than that of Sunday editions from the New York Daily News and The New York Times, was part of the Post's efforts "to find a niche in the nation's most competitive newspaper market". Because of federal regulations limiting media cross-ownership, instituted after Murdoch purchased WNEW-TV (now WNYW) and four other stations from Metromedia to launch the Fox Broadcasting Company, Murdoch was forced to sell the paper for $37.6 million in 1988, equivalent to $102 million in 2025, to Peter S. Kalikow, a real-estate magnate with no experience in the media industry. In 1988, the Post hired Jane Amsterdam, founding editor of Manhattan, inc., as its first female editor, and within six months the paper had toned down the sensationalist headlines. Within a year, Amsterdam was forced out by Kalikow, who reportedly told her "credible doesn't sell...Your big scoops are great, but they don't sell more papers." In 1993, after Kalikow declared bankruptcy, the paper was temporarily managed by Steven Hoffenberg, a financier who later pleaded guilty to securities fraud, and for two weeks by Abe Hirschfeld, who made his fortune building parking garages. Following a staff revolt against the Hoffenberg-Hirschfeld partnership, which included publication of an issue whose front page featured the iconic masthead picture of founder Alexander Hamilton with a single teardrop running down his cheek, the Post was again purchased in 1993 by Murdoch's News Corporation. This came about after numerous political officials, including Democratic governor of New York Mario Cuomo, persuaded the Federal Communications Commission to grant Murdoch a permanent waiver from the cross-ownership rules that had forced him to sell the paper five years earlier. Without this FCC ruling, the paper would have shut down. In December 2012, Murdoch announced that Jesse Angelo had been appointed publisher. Branches of Murdoch's media holdings, 21st Century Fox's Endemol Shine North America and News Corp's New York Post, created Page Six TV, a nightly gossip show based on and named after the Post's gossip section. A test run aired on Fox Television Stations in July. The show garnered the highest ratings of any nationally syndicated entertainment newsmagazine in a decade when it debuted in 2017. With Page Six TV's success, the New York Post formed New York Post Entertainment, a scripted and unscripted television entertainment division, in July 2018 with Troy Searer as president. In 2017, the New York Post was reported to be the preferred newspaper of President Donald Trump, who maintains frequent contact with its owner Murdoch. The Post had promoted Trump's celebrity since at least the 1980s. In October 2020, the Post endorsed Trump for re-election, citing his "promises made, promises kept" policy. Weeks after Trump was defeated and sought to overturn the election results, the Post published a front-page editorial asking Trump to "stop the insanity", stating that he was "cheering for an undemocratic coup" and writing, "If you insist on spending your final days in office threatening to burn it all down, that will be how you are remembered. Not as a revolutionary, but as the anarchist holding the match."
The Post characterized Trump attorney Sidney Powell as a "crazy person", and described former national security advisor Michael Flynn's suggestion to declare martial law as "tantamount to treason." In January 2021, Keith Poole, a top editor at The Sun, another Murdoch-owned tabloid, was appointed as the editor in chief of the New York Post Group. Around the same time, at least eight journalists had left the paper. In January 2025, Tubi released a New York Post documentary about Luigi Mangione titled New York Post Presents: Luigi Mangione Monster or Martyr? Content, coverage, and criticism The Post has been criticized since the beginning of Murdoch's ownership for sensationalism, blatant advocacy, and conservative bias. In 1980, the Columbia Journalism Review stated that the "New York Post is no longer merely a journalistic problem. It is a social problem—a force for evil." The Post has been accused of contorting its news coverage to suit Murdoch's business needs, in particular avoiding subjects which could be unflattering to the government of the People's Republic of China, where Murdoch has invested heavily in satellite television. In a 2019 article in The New Yorker, Ken Auletta wrote that Murdoch "doesn't hesitate to use the Post to belittle his business opponents", and went on to say that Murdoch's support for Edward I. Koch while he was running for mayor of New York "spilled over onto the news pages of the Post, with the paper regularly publishing glowing stories about Koch and sometimes savage accounts of his four primary opponents." According to The New York Times, Ronald Reagan's campaign team credited Murdoch and the Post for his victory in New York in the 1980 U.S. presidential election. Reagan later "waived a prohibition against owning a television station and a newspaper in the same market", allowing Murdoch to continue to control the New York Post and The Boston Herald while expanding into television. In 1997, Post executive editor Steven D. Cuozzo responded to criticism by saying that the Post "broke the elitist media stranglehold on the national agenda." In a 2004 survey conducted by Pace University, the Post was rated the least-credible major news outlet in New York, and the only news outlet to receive more responses calling it "not credible" than credible (44% not credible to 39% credible). The Post commonly publishes news reports based entirely on reporting from other sources without independent corroboration. In January 2021, the paper forbade the use of CNN, MSNBC, The Washington Post, and The New York Times as sole sources for such stories. Susan Mulcahy and Frank DiGiacomo's 2024 history was titled "Paper of Wreckage" after staffers' nickname for the Post, a pun on the term "paper of record". Murdoch imported the style of many of his Australian and British newspapers, such as The Sun, which remains one of the highest-selling daily newspapers in the United Kingdom. This style, known as tabloid journalism, is exemplified by the Post's famous headlines, such as "Headless body in topless bar" (written by Vincent Musetto). In its 35th-anniversary edition, New York magazine listed this as one of the greatest headlines, and included five other Post headlines in its "Greatest Tabloid Headlines" list. The Post has also been criticized for incendiary front-page headlines, such as one referring to the co-chairmen of the Iraq Study Group, James Baker and Lee Hamilton, as "surrender monkeys", and another on the murder of landlord Menachem Stark reading "Slumlord found burned in dumpster.
Who didn't want him dead?" The Post's influential gossip section Page Six began in 1977. Created by James Brady, it was famous for its blind items. Beginning in 1985, columnist Richard Johnson edited Page Six for 25 years before British journalist Emily Smith replaced him in 2009. In June 2022, Smith was replaced by her deputy, Ian Mohr. February 2006 saw the debut of Page Six Magazine, distributed free inside the paper. In September 2007, it started to be distributed weekly in the Sunday edition of the paper. In January 2009, publication of Page Six Magazine was cut to four times a year. Beginning with the 2017–18 television season, a daily syndicated series known as Page Six TV came to air, produced by 20th Television, which was part of the 21st Century Fox side of Rupert Murdoch's holdings, and Endemol Shine North America. The show was originally hosted by comedian John Fugelsang, with contributions from Page Six and Post writers (including Carlos Greer), along with regular panelists Elizabeth Wagmeister from Variety and Bevy Smith. In March 2018, Fugelsang left the show, with the expectation that a new host would be named, although by the end of the season it was announced that Wagmeister, Greer and Smith would be retained as equal co-hosts. In April 2019, it was confirmed that the series would end after May 2019; by then, it was last in average viewership out of all U.S. syndicated newsmagazine programs, behind the similar tabloid-inspired program Daily Mail TV. Richard Jewell, a security guard wrongly suspected of being the Centennial Olympic Park bomber, sued the Post in 1998, alleging that the newspaper had libeled him in several articles, headlines, photographs, and editorial cartoons. U.S. District Judge Loretta Preska largely denied the Post's motion to dismiss, allowing the suit to proceed. The Post subsequently settled the case for an undisclosed sum. In several stories on the day of the 2013 Boston Marathon bombing, the Post inaccurately reported that twelve people had died, and that a Saudi national had been taken into custody as a suspect, which was denied by the Boston Police Department. Three days later, on April 18, the Post featured a full-page cover photo of two young men at the Boston marathon with the headline "Bag Men" (a term that implies criminality) and erroneously claimed they were being sought by police. The men, Salaheddin Barhoum and Yassine Zaimi, were not considered suspects, and the Post was heavily criticized for the apparent accusation. Then-editor Col Allan defended the story, saying they had not referred to the men as "suspects". The two men later sued the Post for libel, and the suit was settled in 2014 on undisclosed terms. In 1989, the Post described the five black and Latino teenagers arrested following the rape and assault of a white woman in Central Park as coming "from a world of crack, welfare, guns, knives, indifference, and ignorance ... a land of no fathers", and having set out "to smash, hurt, rob, stomp, rape" people who were "rich" and "white". The teenagers' convictions were later overturned after the confession of a serial rapist, which was confirmed with DNA evidence. In 2009, the Post ran a cartoon by Sean Delonas of a white police officer saying to another white police officer who has just shot a chimpanzee on the street, "They'll have to find someone else to write the next stimulus bill." 
Comparing Obama, a Black president, to a chimpanzee was criticized as racist, with civil rights activist Al Sharpton calling the cartoon "troubling at best given the historic racist attacks of African-Americans as being synonymous with monkeys". New York Post chairman Rupert Murdoch apologized but said the cartoon was only meant to mock a piece of legislation. The Public Enemy song "A Letter to the New York Post", from their album Apocalypse '91...The Enemy Strikes Black, is a complaint about what they believed to be negative and inaccurate coverage Black people received from the paper. In 2019, the Post ran a front-page image of the World Trade Center in flames to attack Rep. Ilhan Omar, one of the first two Muslim women to serve in Congress. The image referred to Omar's widely criticized quote "Some people did something", which many viewed as insensitive and as minimizing the 9/11 World Trade Center attacks. The Yemeni American Merchant Association announced a formal boycott of the paper, and ten of the most prominent Yemeni bodega owners in New York agreed to stop selling the paper. As of June 2019, the boycott had extended to over 900 individual stores. Yemeni-Americans own about half of the 10,000 bodegas in New York City. On October 14, 2020, three weeks before the 2020 United States presidential election, the Post published a front-page story purporting to reveal "smoking gun" emails recovered from a laptop abandoned by Hunter Biden at a computer repair store in Wilmington, Delaware. The only sources named in the story were Trump personal attorney Rudy Giuliani and strategy advisor Steve Bannon. The story came under heavy criticism from other news sources and anonymous reporters at the Post itself for "flimsy" reporting, including questions about the reliability of its sourcing and the lack of outreach to either Hunter Biden or the Joe Biden campaign for pre-publication comment. In October 2020, over fifty former U.S. intelligence officials signed an open letter stating that they were "deeply suspicious that the Russian government played a significant role" in the story, but emphasized that "we do not know if the emails ... are genuine or not" and that they did not have evidence of Russian involvement. John Ratcliffe, the Director of National Intelligence, said during a Fox News interview that "the intelligence community doesn't believe that [the emails originated from Russian disinformation] because there is no intelligence that supports that." Ratcliffe, a Trump loyalist, had previously made public assertions that contradicted professional intelligence assessments. The FBI took possession of the laptop in late 2019 and reported that they had "nothing to add" to Ratcliffe's remarks concerning Russian disinformation. The New York Times reported days after the Post story that "no concrete evidence has emerged that the laptop contains Russian disinformation". Amid mounting pressure, the FBI wrote to U.S. Senator Ron Johnson, suggesting it had not found any Russian disinformation on the laptop. It was unclear what Justice Department officials knew about the FBI investigation at the time. Fox News reported that the laptop was seized as part of an investigation into money laundering, but did not make clear if the investigation involved Hunter Biden. On December 9, 2020, The New York Times reported that investigators had initially examined possible money laundering by Hunter Biden but did not find evidence to justify further investigation. Following the 2016 U.S.
presidential election, social media companies were criticized for allowing false political information to proliferate on their platforms, including material spread by Russian intelligence, which some suggested may have assisted Trump's election. Twitter and Facebook initially limited the spread of the 2020 Post story on their platforms, citing policies restricting the sharing of hacked material and personal information; Twitter also temporarily suspended the Post's account. This decision proved controversial, with many critics, including Republican senator Ted Cruz, deriding it as censorship. NPR reported that Twitter initially declined to comment on how it reached this decision or what evidence it had supporting it. The New York Times initially reported that the story had been pitched to other outlets, including Fox News, which declined to publish it due to concerns over its reliability. The Times also reported that two writers at the Post declined to have their names attached to the story, and ultimately the story listed only two bylines: Gabrielle Fonrouge, who "had little to do with the reporting or writing of the article" and was unaware of her byline prior to the story's publication, and Emma-Jo Morris, a former producer for Fox News's Hannity who had no prior bylines with the Post. In response to the concerns about the veracity of the article, retired Post editor-in-chief and current advisor Col Allan responded in an email to the New York Times that "the senior editors at The Post made the decision to publish the Biden files after several days' hard work established its merit." Giuliani said he gave the story to the Post because "either nobody else would take it, or if they took it, they would spend all the time they could to try to contradict it before they put it out." Later reporting on the accuracy of the Hunter Biden laptop story brought increased scrutiny from conservatives of Twitter's and Facebook's limiting of its spread; they argued that the platforms' actions proved "Big Tech's bias". On October 30, 2020, NBC News reported, "no evidence has emerged that the documents are the product of Russian disinformation, as some experts initially suggested, but many questions remain about how the materials got into the hands of Trump's lawyer Rudy Giuliani, who had met with Russian agents in his effort to dig up dirt on Biden." On March 15, 2021, CNN reported that Giuliani and other Trump allies met with Ukrainian lawmaker Andrii Derkach, who the U.S. government later assessed was a longtime Russian intelligence agent, sanctioning him for distributing disinformation about President Biden. On March 27, 2022, Vox reported that no evidence had emerged that "the laptop's leak was a Russian plot". In March 2022, The New York Times and The Washington Post confirmed that some of the emails were authentic. In April 2022, the editorial board of The Washington Post wrote that the Biden laptop story provided "an opportunity for a reckoning" by American media to ensure "accurate and relevant" stories are covered. They noted: The investigation adds new details and confirms old ones about the ways in which Joe Biden's family has profited from trading overseas on his name—something for which the president deserves criticism for tacitly condoning. What it does not do, despite some conservatives' insistence otherwise, is prove that President Biden acted corruptly.
On April 28, 2022, Joan Donovan, the research director of the Shorenstein Center on Media, Politics and Public Policy at Harvard University, said that "This is arguably the most well-known story the New York Post has ever published and it endures as a story because it was initially suppressed by social media companies and jeered by politicians and pundits alike". In 1997, a national news story concerning Rebecca Sealfon's victory in the Scripps National Spelling Bee circulated. Sealfon was sponsored by the Daily News, a direct in-market competitor. The Post published a picture of her but altered the photograph to remove the name of the Daily News as printed on a placard she was wearing. In 2004, the Post ran a full-page cover photo of 19-year-old New York University student Diana Chien jumping to her death from the twenty-fourth story of a building. University spokesman John Beckman commented "it seems to show an appalling lack of judgment and insensitivity to the young woman's family and a disregard for the feelings of students at NYU." In 2012, the Post was criticized for running a photograph of a man struggling to climb back up onto a subway platform as a train approached, along with the headline "DOOMED". Facing questions over why he did not help the man, the photographer claimed he was not strong enough and had been attempting to use the flash on his camera to alert the driver of the oncoming train. In December 2020, the Post published a story outing an emergency medical technician who made additional income from posting explicit photographs of herself to the subscription website OnlyFans. The publication was widely criticized on social media as "doxxing someone simply for trying to earn a living." In April 2021, Facebook blocked users from sharing a Post story about home real estate purchases by Black Lives Matter co-founder Patrisse Cullors, saying that it violated its privacy and personal information policy. In response, the Post argued that it was an arbitrary decision since other newspapers, magazines and websites highlight the real estate purchases of high status individuals. News Media Alliance CEO David Chavern also voiced criticism of the decision, saying in a prepared statement: "There is no balance of power between 'media' and 'Big Tech.'" In April 2021, the Post published a false front-page story asserting that copies of a book by Vice President Kamala Harris were being distributed to migrant children at an intake facility in Long Beach, California. Fox News then published a story about the matter, followed by numerous Republican politicians and pundits commenting on it, in some cases speculating that taxpayers were funding the supposed book handouts for Harris's personal profit. Responding to questions from Fox News correspondent Peter Doocy, White House press secretary Jen Psaki expressed no knowledge of the matter; the Post then published a new story headlined "Psaki has no answers when asked about Harris' book being given to child migrants." Four days after the original publication, the Post replaced the story with a new version clarifying that just one Harris book had been donated by a community member but maintained that it was an "open-arms gesture by the Biden administration", although there was no evidence of the administration's involvement. Laura Italiano, the author of the story, resigned that day, asserting she had been "ordered" to write it. 
In October 2022, a rogue employee of the Post published a series of racist, violent and sexually explicit headlines on its Twitter account. Shortly after these headlines appeared, a spokesperson for the Post stated that the "vile and reprehensible" headlines were the result of a hack and were immediately removed, and that the incident was under investigation. The spokesperson later stated that "the unauthorized conduct was committed by an employee, and the employee has been terminated." In May 2023, amid reports that a wave of migrants might soon cross the American southern border, the Post ran a front-page story stating that 20 homeless veterans had been ordered to vacate upstate New York hotels to make room for arriving migrants. Fox News and other conservative outlets sent the story viral, with numerous conservatives expressing outrage at President Joe Biden and other Democrats. The story was soon found to have been fabricated by a local veterans advocate. While attending a June 2024 G7 Summit in Italy, G7 leaders watched an exhibition of military parachutists jump from aircraft and land nearby. After the exhibition, President Joe Biden stepped away from the group to approach some parachutists to speak and give them a thumbs-up. The Post tweeted a cropped version of a video that did not show the parachutists, creating a false impression that Biden had wandered off in confusion. The paper ran a full front-page story the next day, asserting "Biden embarrasses US with confused wanderings at world conference". Fox News ran a segment on the Post story, displaying the front page on air. The Washington Post fact-checker assigned the story Four Pinocchios, designating it as an outright lie. In September 2024, the Milwaukee Journal Sentinel found that several of the New York Post's stories about Wisconsin politics had been authored by an individual with no clear previous journalism experience and extensive ties to the state's Republican Party. These ties included two recent 2024 stints of consulting work for that party, 2023 consulting work for Dan Kelly, a conservative state Supreme Court candidate, and service as campaign manager for a state Assembly campaign in 2024. Oldest claim The New York Post was established in 1801 by Alexander Hamilton, making it the oldest still-published daily newspaper in the US. However, it is not the oldest continuously published paper, as the New York Post halted publication during strikes in 1958 and in 1978. If this is considered, The Providence Journal is the oldest continuously published daily newspaper in the US. The Hartford Courant is generally understood to be the oldest newspaper in America, as it was founded in 1764; however, it was founded as a semi-weekly paper and did not begin publishing daily until 1836, 35 years after the New York Post began doing so. Operations The 1906 Old New York Evening Post Building is a designated landmark; it was added to the National Register of Historic Places in 1977. The Post occupied the building until 1926, when a new main office was established at 75 West Street in the New York Evening Post Building. That building remained in use by the Post until 1970, and it was added to the National Register of Historic Places in 2000. In 1967, Schiff bought 210 South Street, the former headquarters of the New York Journal American, which had closed a year earlier. The building became an instantly recognizable symbol for the Post.
In 1995, owner Rupert Murdoch relocated the Post's news and business offices to the News Corporation headquarters tower at 1211 Avenue of the Americas (Sixth Avenue) in midtown Manhattan. The Post shares this building with Fox News Channel and The Wall Street Journal, which are also owned by Murdoch. Both the Post and the New York City edition of the Journal are printed at a state-of-the-art printing plant in The Bronx. The Newspaper and Mail Deliverers Union has delivered the newspaper "since the early 1900s". In 1996, the New York Post launched an Internet version of the paper. The New York Post launched the website Decider in 2014 to provide recommendations for streaming services. The website's first and only editor-in-chief is Mark Graham. Graham said that this service would "strike a nice balance between visual imagery and the written word, and come from a place of pop culture omniscience." In 2019, Decider signed a deal with app provider Reelgood to provide Reelgood widget links at the bottom of each review, an arrangement that would channel some advertising revenue to both companies. The value of the deal was not disclosed. The daily circulation of the Post decreased in the final years of the Schiff era from 700,000 around 1967–68 to approximately 517,000 by the time she sold the paper to Murdoch in 1976. Under Murdoch, the Post launched a morning edition to compete directly with the rival tabloid Daily News in 1978, prompting the Daily News to retaliate with a PM edition called Daily News Tonight. But the PM edition suffered the same problems with worsening daytime traffic that the afternoon Post experienced, and the Daily News ultimately folded Tonight in 1981. By that time, circulation of the all-day Post had soared to a peak of 962,000, the bulk of the increase attributed to its morning edition (it set a single-day record of 1.1 million on August 11, 1977, with the news of the arrest the night before of David Berkowitz, the infamous "Son of Sam" serial killer who terrorized New York for much of that summer). The Post and the Daily News have been locked in a bitter circulation war ever since. A resurgence during the first decade of the 21st century saw Post circulation rise to 724,748 by April 2007, achieved partly by lowering the price from 50 cents to 25 cents. In October 2006, the Post surpassed the Daily News in circulation for the first time, only to see the Daily News overtake its rival a few months later. In 2010, the Post's daily circulation was 525,004, just 10,000 behind the Daily News. As of 2017, the Post was the fourth-largest newspaper in the United States by circulation, while the Daily News was ranked eighth. The Post has remained unprofitable since Murdoch first purchased it from Dorothy Schiff in 1976, and was on the brink of folding when Murdoch bought it back in 1993, with at least one media report in 2012 indicating that the Post loses up to $70 million a year. One commentator suggested that the Post cannot become profitable as long as the competing Daily News survives, and that Murdoch may be trying to force the Daily News to fold or sell out, leaving the two papers in an intractable war of attrition. In September 2022, the New York Post became profitable, posting a profit for the quarter and year to date. The Post's digital network reached approximately 198 million unique users in June 2022, compared to 123 million in the prior year. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/UV_radiation] | [TOKENS: 11070] |
Ultraviolet Ultraviolet radiation or UV is electromagnetic radiation of wavelengths of 100–400 nanometers, shorter than that of visible light, but longer than X-rays. Wavelengths between 10 and 100 nanometers are called extreme ultraviolet and share some properties with soft X-rays. UV radiation is present in sunlight and constitutes about 10% of the total electromagnetic radiation output from the Sun. It is also produced by electric arcs, Cherenkov radiation, and specialized lights, such as mercury-vapor lamps, tanning lamps, and black lights. The photons of ultraviolet have greater energy than those of visible light, from about 3.1 to 12 electron volts, around the minimum energy required to ionize atoms. Although long-wavelength ultraviolet is not considered an ionizing radiation because its photons lack sufficient energy, it can induce chemical reactions and cause many substances to glow or fluoresce. Many practical applications, including chemical and biological effects, are derived from the way that UV radiation can interact with organic molecules. These interactions can involve exciting orbital electrons to higher energy states in molecules, potentially breaking chemical bonds. In contrast, the main effect of longer-wavelength radiation is to excite vibrational or rotational states of these molecules, increasing their temperature. Short-wave ultraviolet light is ionizing radiation. Consequently, short-wave UV damages DNA and sterilizes surfaces with which it comes into contact. For humans, suntan and sunburn are familiar effects of exposure of the skin to UV, along with an increased risk of skin cancer. The amount of UV radiation produced by the Sun means that the Earth would not be able to sustain life on dry land if most of that light were not filtered out by the atmosphere. More energetic, shorter-wavelength "extreme" UV below 121 nm ionizes air so strongly that it is absorbed before it reaches the ground. However, UV (specifically, UVB) is also responsible for the formation of vitamin D in most land vertebrates, including humans. The UV spectrum thus has effects both beneficial and detrimental to life. The lower wavelength limit of the visible spectrum is conventionally taken as 400 nm. Although ultraviolet rays are not generally visible to humans, 400 nm is not a sharp cutoff, with shorter and shorter wavelengths becoming less and less visible in this range. Insects, birds, and some mammals can see near-UV (NUV), i.e., somewhat shorter wavelengths than what humans can see. Visibility Humans generally cannot use ultraviolet rays for vision. The lens of the human eye and surgically implanted lenses produced since 1986 block most radiation in the near-UV wavelength range of 300–400 nm; shorter wavelengths are blocked by the cornea. Humans also lack color receptor adaptations for ultraviolet rays. The photoreceptors of the retina are sensitive to near-UV, but the lens does not focus this light properly, causing UV light bulbs to look fuzzy. People lacking a lens (a condition known as aphakia) perceive near-UV as whitish-blue or whitish-violet. Near-UV radiation is visible to insects, some mammals, and some birds. Birds have a fourth color receptor for ultraviolet rays; this, coupled with eye structures that transmit more UV, gives smaller birds "true" UV vision. History and discovery "Ultraviolet" means "beyond violet" (from Latin ultra, "beyond"), violet being the color of the highest frequencies of visible light.
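The photon energies quoted above, about 3.1 to 12 electron volts across the 100–400 nm ultraviolet range, follow directly from the relation E = hc/λ. The sketch below (Python) is a minimal illustration of that conversion; the band-edge wavelengths printed are conventional values assumed for illustration, not figures taken from this article.

# Minimal sketch of the photon energy-wavelength relation E = h*c / lambda.
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electron volts for a wavelength given in nanometres."""
    return HC_EV_NM / wavelength_nm

for wl in (400, 315, 280, 100):  # visible edge and conventional UV band edges
    print(f"{wl} nm -> {photon_energy_ev(wl):.2f} eV")
# 400 nm -> 3.10 eV and 100 nm -> 12.40 eV, matching the ~3.1-12 eV range quoted above.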
Ultraviolet has a higher frequency (thus a shorter wavelength) than violet light. UV radiation was discovered in February 1801 when the German physicist Johann Wilhelm Ritter observed that invisible rays just beyond the violet end of the visible spectrum darkened silver chloride-soaked paper more quickly than violet light itself. He announced the discovery in a very brief letter to the Annalen der Physik and later called them "(de-)oxidizing rays" (German: de-oxidierende Strahlen) to emphasize chemical reactivity and to distinguish them from "heat rays", discovered the previous year at the other end of the visible spectrum. The simpler term "chemical rays" was adopted soon afterwards and remained popular throughout the 19th century, although some said that this radiation was entirely different from light (notably John William Draper, who named them "tithonic rays"). The terms "chemical rays" and "heat rays" were eventually dropped in favor of ultraviolet and infrared radiation, respectively. In 1878, it was discovered that short-wavelength light sterilizes by killing bacteria. By 1903, the most effective wavelengths were known to be around 250 nm. In 1960, the effect of ultraviolet radiation on DNA was established. The discovery of ultraviolet radiation with wavelengths below 200 nm, named "vacuum ultraviolet" because it is strongly absorbed by the oxygen in air, was made in 1893 by German physicist Victor Schumann. The division of UV into UVA, UVB, and UVC was decided "unanimously" by a committee of the Second International Congress on Light on August 17, 1932, at the Castle of Christiansborg in Copenhagen. Subtypes The electromagnetic spectrum of ultraviolet radiation (UVR), defined most broadly as 10–400 nanometers, can be subdivided into a number of ranges recommended by the ISO standard ISO 21348. Several solid-state and vacuum devices have been explored for use in different parts of the UV spectrum. Many approaches seek to adapt visible-light-sensing devices, but these can suffer from unwanted response to visible light and various instabilities. Ultraviolet can be detected by suitable photodiodes and photocathodes, which can be tailored to be sensitive to different parts of the UV spectrum. Sensitive UV photomultipliers are available. Spectrometers and radiometers are made for measurement of UV radiation. Silicon detectors are used across the spectrum. Vacuum UV, or VUV, wavelengths (shorter than 200 nm) are strongly absorbed by molecular oxygen in the air, though the longer wavelengths around 150–200 nm can propagate through nitrogen. Scientific instruments can, therefore, use this spectral range by operating in an oxygen-free atmosphere (pure nitrogen, or argon for shorter wavelengths), without the need for costly vacuum chambers. Significant examples include 193-nm photolithography equipment (for semiconductor manufacturing) and circular dichroism spectrometers. Technology for VUV instrumentation was largely driven by solar astronomy for many decades. While optics can be used to remove unwanted visible light that contaminates the VUV, in general, detectors can be limited by their response to non-VUV radiation, and the development of solar-blind devices has been an important area of research. Wide-gap solid-state devices or vacuum devices with high-cutoff photocathodes can be attractive compared to silicon diodes.
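As a rough companion to the ISO 21348 subdivision mentioned above, the following sketch classifies a wavelength into the three most commonly cited sub-bands (UVA 315–400 nm, UVB 280–315 nm, UVC 100–280 nm). The exact boundary handling is a convention assumed here for illustration.

def uv_band(wavelength_nm: float) -> str:
    """Classify a wavelength into the commonly cited UVA/UVB/UVC sub-bands."""
    if 315 <= wavelength_nm <= 400:
        return "UVA"
    if 280 <= wavelength_nm < 315:
        return "UVB"
    if 100 <= wavelength_nm < 280:
        return "UVC"
    raise ValueError("outside the 100-400 nm ultraviolet range")

print(uv_band(365.0))    # UVA (a common black-light / UV-LED wavelength)
print(uv_band(253.7))    # UVC (the mercury germicidal line discussed later)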
Extreme UV (EUV or sometimes XUV) is characterized by a transition in the physics of interaction with matter. Wavelengths longer than about 30 nm interact mainly with the outer valence electrons of atoms, while wavelengths shorter than that interact mainly with inner-shell electrons and nuclei. The long end of the EUV spectrum is set by a prominent He+ spectral line at 30.4 nm. EUV is strongly absorbed by most known materials, but synthesizing multilayer optics that reflect up to about 50% of EUV radiation at normal incidence is possible. This technology was pioneered by the NIXT and MSSTA sounding rockets in the 1990s, and it has been used to make telescopes for solar imaging. See also the Extreme Ultraviolet Explorer satellite. Some sources use the distinction of "hard UV" and "soft UV". For instance, in the case of astrophysics, the boundary may be at the Lyman limit (wavelength 91.2 nm, the energy needed to ionise a hydrogen atom from its ground state), with "hard UV" being more energetic; the same terms may also be used in other fields, such as cosmetology and optoelectronics. The numerical values of the boundary between hard and soft UV, even within similar scientific fields, do not necessarily coincide; for example, one applied-physics publication used a boundary of 190 nm between hard and soft UV regions. Solar ultraviolet Very hot objects emit UV radiation (see black-body radiation). The Sun emits ultraviolet radiation at all wavelengths, including the extreme ultraviolet where it crosses into X-rays at 10 nm. Extremely hot stars (such as O- and B-type stars) emit proportionally more UV radiation than the Sun. Sunlight in space at the top of Earth's atmosphere (see solar constant) is composed of about 50% infrared light, 40% visible light, and 10% ultraviolet light, for a total intensity of about 1400 W/m2 in vacuum. The atmosphere blocks about 77% of the Sun's UV when the Sun is highest in the sky (at zenith), with absorption increasing at shorter UV wavelengths. At ground level with the sun at zenith, sunlight is 44% visible light, 3% ultraviolet, and the remainder infrared. Of the ultraviolet radiation that reaches the Earth's surface, more than 95% is the longer wavelengths of UVA, with the small remainder UVB. Almost no UVC reaches the Earth's surface. The fraction of UVA and UVB remaining in sunlight after it passes through the atmosphere is heavily dependent on cloud cover and atmospheric conditions. On "partly cloudy" days, patches of blue sky showing between clouds are also sources of (scattered) UVA and UVB, which are produced by Rayleigh scattering in the same way as the visible blue light from those parts of the sky. UVB also plays a major role in plant development, as it affects most of the plant hormones. During total overcast, the amount of absorption due to clouds is heavily dependent on the thickness of the clouds and latitude, with no clear measurements correlating specific thickness and absorption of UVA and UVB. The shorter bands of UVC, as well as even more energetic UV radiation produced by the Sun, are absorbed by oxygen and generate the ozone in the ozone layer when single oxygen atoms produced by UV photolysis of dioxygen react with more dioxygen. The ozone layer is especially important in blocking most UVB and the remaining part of UVC not already blocked by ordinary oxygen in air. Blockers, absorbers, and windows Ultraviolet absorbers are molecules used in organic materials (polymers, paints, etc.)
to absorb UV radiation to reduce the UV degradation (photo-oxidation) of a material. The absorbers can themselves degrade over time, so monitoring of absorber levels in weathered materials is necessary. In sunscreen, ingredients that absorb UVA/UVB rays, such as avobenzone, oxybenzone and octyl methoxycinnamate, are organic chemical absorbers or "blockers". They are contrasted with inorganic absorbers/"blockers" of UV radiation such as titanium dioxide and zinc oxide. For clothing, the ultraviolet protection factor (UPF) represents the ratio of sunburn-causing UV without and with the protection of the fabric, similar to sun protection factor (SPF) ratings for sunscreen. Standard summer fabrics have UPFs around 6, which means that about 20% of UV will pass through. Suspended nanoparticles in stained glass prevent UV rays from causing chemical reactions that change image colors. A set of stained-glass color-reference chips is planned to be used to calibrate the color cameras for the 2019 ESA Mars rover mission, since they will remain unfaded by the high level of UV present at the surface of Mars. Common soda–lime glass, such as window glass, is partially transparent to UVA but is opaque to shorter wavelengths, passing about 90% of the light above 350 nm but blocking over 90% of the light below 300 nm. A study found that car windows allow 3–4% of ambient UV to pass through, especially for UV wavelengths greater than 380 nm. Other types of car windows can reduce transmission of UV that is greater than 335 nm. Fused quartz, depending on quality, can be transparent even to vacuum UV wavelengths. Crystalline quartz and some crystals such as CaF2 and MgF2 transmit well down to 150 nm or 160 nm wavelengths. Wood's glass is a deep violet-blue barium-sodium silicate glass with about 9% nickel(II) oxide, developed during World War I to block visible light for covert communications. It allows both infrared daylight and ultraviolet night-time communications by being transparent between 320 nm and 400 nm as well as to the longer infrared and just-barely-visible red wavelengths. Its maximum UV transmission is at 365 nm, one of the wavelengths of mercury lamps. Artificial sources A black light lamp emits long-wave UVA radiation and little visible light. Fluorescent black light lamps work similarly to other fluorescent lamps, but use a phosphor on the inner tube surface which emits UVA radiation instead of visible light. Some lamps use a deep-bluish-purple Wood's glass optical filter that blocks almost all visible light with wavelengths longer than 400 nanometers. The purple glow given off by these tubes is not the ultraviolet itself, but visible purple light from mercury's 404 nm spectral line which escapes being filtered out by the coating. Other black lights use plain glass instead of the more expensive Wood's glass, so they appear light-blue to the eye when operating. Incandescent black lights are also produced, using a filter coating on the envelope of an incandescent bulb that absorbs visible light (see section below). These are cheaper but very inefficient, emitting only a small fraction of a percent of their power as UV.
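The statement that an incandescent filament emits only a small fraction of a percent of its power as UV can be checked numerically from the Planck spectrum. The sketch below assumes a filament temperature of about 3000 K (a typical value, not stated in this article) and crudely integrates the black-body spectrum over the UV band and over the whole emission.

import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)

def planck(wavelength_m: float, temp_k: float) -> float:
    """Black-body spectral radiance, omitting constant factors that cancel in a ratio."""
    x = H * C / (wavelength_m * KB * temp_k)
    return 1.0 / (wavelength_m ** 5 * (math.exp(x) - 1.0))

def band_power(lo_nm: float, hi_nm: float, temp_k: float, steps: int = 20000) -> float:
    """Crude trapezoidal integral of the Planck spectrum over a wavelength band."""
    dl = (hi_nm - lo_nm) / steps
    total = 0.0
    for i in range(steps + 1):
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * planck((lo_nm + i * dl) * 1e-9, temp_k)
    return total * dl

T = 3000.0  # assumed filament temperature in kelvin
uv = band_power(100, 400, T)
total = band_power(100, 20000, T)  # a 20 um upper limit captures nearly all of the emission
print(f"UV share of a {T:.0f} K filament's emission: {uv / total:.2%}")  # roughly 0.2%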
Mercury-vapor black lights in ratings up to 1 kW, with UV-emitting phosphor and an envelope of Wood's glass, are used for theatrical and concert displays. Black lights are used in applications in which extraneous visible light must be minimized, mainly to observe fluorescence, the colored glow that many substances give off when exposed to UV light. UVA/UVB-emitting bulbs are also sold for other special purposes, such as tanning lamps and reptile husbandry. Mercury-vapor lamps, consisting of fused quartz tubes filled with mercury and argon, emit ultraviolet light with two peaks in the UVC band at 253.7 nm and 185 nm, as well as some visible light. From 85% to 90% of the UV produced by these lamps is at 253.7 nm, which is very effective as a germicide. The lamps also produce UV at 185 nm, which is effective in producing ozone, with additional germicidal effects. Such tubes have two or three times the UVC power of a regular fluorescent lamp tube. These low-pressure lamps have a typical efficiency of approximately 30–40%, meaning that for every 100 watts of electricity consumed by the lamp, they will produce approximately 30–40 watts of total UV output. They also emit bluish-white visible light, due to mercury's other spectral lines. These "germicidal" lamps are used extensively for disinfection of surfaces in laboratories and food-processing industries. 'Black light' incandescent lamps are also made from an incandescent light bulb with a filter coating which absorbs most visible light. Halogen lamps with fused quartz envelopes are used as inexpensive UV light sources in the near-UV range, from 400 to 300 nm, in some scientific instruments. Due to its black-body spectrum, a filament light bulb is a very inefficient ultraviolet source, emitting only a fraction of a percent of its energy as UV. Specialized UV gas-discharge lamps containing different gases produce UV radiation at particular spectral lines for scientific purposes. Argon and deuterium arc lamps are often used as stable sources, either windowless or with various windows such as magnesium fluoride. These are often the emitting sources in UV spectroscopy equipment for chemical analysis. Other UV sources with more continuous emission spectra include xenon arc lamps (commonly used as sunlight simulators), deuterium arc lamps, mercury-xenon arc lamps, and metal-halide arc lamps. The excimer lamp, a UV source developed in the early 2000s, is seeing increasing use in scientific fields. It has the advantages of high intensity, high efficiency, and operation at a variety of wavelength bands into the vacuum ultraviolet. Light-emitting diodes (LEDs) can be manufactured to emit radiation in the ultraviolet range. In 2019, following significant advances over the preceding five years, UVA LEDs of 365 nm and longer wavelength were available, with efficiencies of 50% at 1.0 W output. Currently, the most common types of UV LEDs are in 395 nm and 365 nm wavelengths, both of which are in the UVA spectrum. The rated wavelength is the peak wavelength that the LEDs put out, but light at both higher and lower wavelengths is present. The cheaper and more common 395 nm UV LEDs are much closer to the visible spectrum, and give off a purple color. Other UV LEDs deeper into the spectrum do not emit as much visible light.
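The efficiency figures quoted above for low-pressure mercury germicidal lamps (roughly 30–40% of electrical input converted to UV, of which 85–90% is at 253.7 nm) imply the following back-of-the-envelope output; the 100 W input and the mid-range conversion values used here are illustrative assumptions.

def germicidal_output_w(electrical_w: float,
                        uv_efficiency: float = 0.35,    # ~30-40% per the text, mid-range assumed
                        frac_at_254_nm: float = 0.875): # ~85-90% of the UV at 253.7 nm
    """Return (total UV watts, watts at the 253.7 nm germicidal line)."""
    total_uv = electrical_w * uv_efficiency
    return total_uv, total_uv * frac_at_254_nm

total_uv, at_254 = germicidal_output_w(100.0)  # illustrative 100 W lamp
print(f"~{total_uv:.0f} W of UV in total, of which ~{at_254:.0f} W at 253.7 nm")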
UV LEDs are used for applications such as UV curing, charging glow-in-the-dark objects such as paintings or toys, and lights for detecting counterfeit money and bodily fluids. UV LEDs are also used in digital print applications and inert UV curing environments. As technological advances beginning in the early 2000s have improved their output and efficiency, they have become increasingly viable alternatives to more traditional UV lamps in UV curing applications, and the development of new UV LED curing systems for higher-intensity applications is a major subject of research in the field of UV curing technology. UVC LEDs are developing rapidly, but may require testing to verify effective disinfection; citations for large-area disinfection refer to non-LED UV sources known as germicidal lamps. UV LEDs are also used as line sources to replace deuterium lamps in liquid chromatography instruments. Gas lasers, laser diodes, and solid-state lasers can be manufactured to emit ultraviolet rays, and lasers are available that cover the entire UV range. The nitrogen gas laser uses electronic excitation of nitrogen molecules to emit a beam that is mostly UV. The strongest ultraviolet lines are at 337.1 nm and 357.6 nm in wavelength. Another type of high-power gas laser is the excimer laser. Excimer lasers are widely used lasers emitting in the ultraviolet and vacuum ultraviolet wavelength ranges. Presently, UV argon-fluoride excimer lasers operating at 193 nm are routinely used in integrated circuit production by photolithography. The current[timeframe?] wavelength limit of production of coherent UV is about 126 nm, characteristic of the Ar2* excimer laser.[citation needed] Direct UV-emitting laser diodes are available at 375 nm. UV diode-pumped solid-state lasers have been demonstrated using cerium-doped lithium strontium aluminum fluoride crystals (Ce:LiSAF), a process developed in the 1990s at Lawrence Livermore National Laboratory. Wavelengths shorter than 325 nm are commercially generated in diode-pumped solid-state lasers. Ultraviolet lasers can also be made by applying frequency conversion to lower-frequency lasers. Ultraviolet lasers have applications in industry (laser engraving), medicine (dermatology and keratectomy), chemistry (MALDI), free-space optical communication, computing (optical storage), and manufacture of integrated circuits. The vacuum ultraviolet (V-UV) band (100–200 nm) can be generated by non-linear four-wave mixing in gases, by sum- or difference-frequency mixing of two or more longer-wavelength lasers. The generation is generally done in gases (e.g. krypton or hydrogen, which are two-photon resonant near 193 nm) or metal vapors (e.g. magnesium). By making one of the lasers tunable, the V-UV can be tuned. If one of the lasers is resonant with a transition in the gas or vapor, then the V-UV production is intensified. However, resonances also generate wavelength dispersion, and thus the phase matching can limit the tunable range of the four-wave mixing. Difference-frequency mixing (i.e., f1 + f2 − f3) has an advantage over sum-frequency mixing because the phase matching can provide greater tuning. In particular, difference-frequency mixing two photons of an ArF (193 nm) excimer laser with a tunable visible or near-IR laser in hydrogen or krypton provides resonantly enhanced tunable V-UV covering 100 nm to 200 nm. In practice, the lack of suitable gas/vapor cell window materials above the lithium fluoride cut-off wavelength limits the tuning range to wavelengths longer than about 110 nm.
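The difference-frequency scheme above (f1 + f2 − f3) can be turned into a quick wavelength calculation, since optical frequency is inversely proportional to wavelength and the reciprocals simply add. The numbers below (two 193 nm ArF photons mixed with a 600 nm tunable beam) are illustrative assumptions, not values taken from the text.

```python
def four_wave_mix_nm(lam1_nm: float, lam2_nm: float, lam3_nm: float) -> float:
    """Output wavelength for difference-frequency four-wave mixing, f_out = f1 + f2 - f3.
    Frequencies scale as 1/wavelength, so reciprocal wavelengths add and subtract."""
    inv = 1.0 / lam1_nm + 1.0 / lam2_nm - 1.0 / lam3_nm
    return 1.0 / inv

# Two photons of an ArF excimer laser (193 nm) mixed with a tunable 600 nm beam:
print(f"{four_wave_mix_nm(193, 193, 600):.1f} nm")   # ~115 nm, inside the V-UV band
```

Tuning the third wavelength shifts the output across the V-UV range, which is the mechanism the paragraph describes.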
Tunable V-UV wavelengths down to 75 nm have been achieved using window-free configurations. Lasers have been used to indirectly generate non-coherent extreme UV (E-UV) radiation at 13.5 nm for extreme ultraviolet lithography. The E-UV is not emitted by the laser, but rather by electron transitions in an extremely hot tin or xenon plasma, which is excited by an excimer laser. This technique does not require a synchrotron, yet can produce UV at the edge of the X-ray spectrum. Synchrotron light sources can also produce all wavelengths of UV, including those at the boundary of the UV and X-ray spectra at 10 nm.[citation needed] Human health-related effects The impact of ultraviolet radiation on human health has implications for the risks and benefits of sun exposure and is also implicated in issues such as fluorescent lamps and health. Getting too much sun exposure can be harmful, but in moderation, sun exposure is beneficial. UV (specifically, UVB) causes the body to produce vitamin D, which is essential for life. Humans need some UV radiation to maintain adequate vitamin D levels. According to the World Health Organization: There is no doubt that a little sunlight is good for you! But 5–15 minutes of casual sun exposure of hands, face and arms two to three times a week during the summer months is sufficient to keep your vitamin D levels high. Vitamin D can also be obtained from food and supplementation. Excess sun exposure produces harmful effects, however. UV rays also treat certain skin conditions. Modern phototherapy has been used to successfully treat psoriasis, eczema, jaundice, vitiligo, atopic dermatitis, and localized scleroderma. In addition, UV radiation, in particular UVB radiation, has been shown to induce cell cycle arrest in keratinocytes, the most common type of skin cell. As such, sunlight therapy can be a candidate for treatment of conditions such as psoriasis and exfoliative cheilitis, conditions in which skin cells divide more rapidly than usual or necessary. In humans, excessive exposure to UV radiation can result in acute and chronic harmful effects on the eye's dioptric system and retina. The risk is elevated at high altitudes, and people living in high-latitude areas, where snow covers the ground into early summer and the sun remains low in the sky even at its zenith, are particularly at risk. Skin, the circadian system, and the immune system can also be affected. The differential effects of various wavelengths of light on the human cornea and skin are sometimes called the "erythemal action spectrum". The action spectrum shows that UVA does not cause an immediate reaction; rather, UV begins to cause photokeratitis and skin redness (with lighter-skinned individuals being more sensitive) at wavelengths near the start of the UVB band at 315 nm, with the effect increasing rapidly toward 300 nm. The skin and eyes are most sensitive to damage by UV at 265–275 nm, which is in the lower UVC band. At still shorter wavelengths of UV, damage continues to happen, but the overt effects are not as great with so little penetrating the atmosphere. The WHO-standard ultraviolet index is a widely publicized measurement of the total strength of UV wavelengths that cause sunburn on human skin, by weighting UV exposure for action spectrum effects at a given time and location. This standard shows that most sunburn happens due to UV at wavelengths near the boundary of the UVA and UVB bands.[citation needed] Overexposure to UVB radiation not only can cause sunburn but also some forms of skin cancer.
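The UV index mentioned above is computed by weighting the measured spectral irradiance with the erythemal action spectrum and scaling the result. The sketch below uses the commonly cited CIE (McKinlay–Diffey) parameterization of that action spectrum and the standard factor of 40 m²/W; the flat test spectrum at the end is a made-up input, not real sunlight data.

```python
import numpy as np

def erythemal_weight(lam_nm):
    """CIE erythemal action spectrum, as commonly parameterized (assumption, not from the text)."""
    lam = np.asarray(lam_nm, dtype=float)
    w = np.where(lam <= 298, 1.0,
        np.where(lam <= 328, 10 ** (0.094 * (298 - lam)),
                             10 ** (0.015 * (139 - lam))))
    return np.where((lam >= 250) & (lam <= 400), w, 0.0)

def uv_index(lam_nm, spectral_irradiance_w_m2_nm):
    """UV index = 40 m^2/W times the erythemally weighted irradiance integrated over wavelength."""
    weighted = spectral_irradiance_w_m2_nm * erythemal_weight(lam_nm)
    return 40.0 * np.trapz(weighted, lam_nm)

# Hypothetical flat spectrum of 1 mW/m^2/nm from 290-400 nm, purely for illustration:
lam = np.linspace(290.0, 400.0, 1101)
print(round(uv_index(lam, np.full_like(lam, 1e-3)), 2))   # roughly 0.5 for this toy input
```

Because the weighting falls off steeply above ~330 nm, most of the index comes from the UVB/UVA boundary, which is the point the paragraph makes.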
However, the degree of redness and eye irritation (which are largely not caused by UVA) do not predict the long-term effects of UV, although they do mirror the direct damage of DNA by ultraviolet. All bands of UV radiation damage collagen fibers and accelerate aging of the skin. Both UVA and UVB destroy vitamin A in skin, which may cause further damage. UVB radiation can cause direct DNA damage. This cancer connection is one reason for concern about ozone depletion and the ozone hole. The most deadly form of skin cancer, melanoma, is mostly caused by DNA damage independent from UVA radiation. This can be seen from the absence of a direct UV signature mutation in 92% of all melanomas. Occasional overexposure and sunburn are probably greater risk factors for melanoma than long-term moderate exposure. UVC is the highest-energy, most-dangerous type of ultraviolet radiation, and causes adverse effects that can variously be mutagenic or carcinogenic. In the past, UVA was considered not harmful or less harmful than UVB, but today it is known to contribute to skin cancer via indirect DNA damage (free radicals such as reactive oxygen species). UVA can generate highly reactive chemical intermediates, such as hydroxyl and oxygen radicals, which in turn can damage DNA. The DNA damage caused indirectly to skin by UVA consists mostly of single-strand breaks in DNA, while the damage caused by UVB includes direct formation of thymine dimers or cytosine dimers and double-strand DNA breakage. UVA is immunosuppressive for the entire body (accounting for a large part of the immunosuppressive effects of sunlight exposure), and is mutagenic for basal cell keratinocytes in skin. UVB photons can cause direct DNA damage. UVB radiation excites DNA molecules in skin cells, causing aberrant covalent bonds to form between adjacent pyrimidine bases, producing a dimer. Most UV-induced pyrimidine dimers in DNA are removed by the process known as nucleotide excision repair, which employs about 30 different proteins. Those pyrimidine dimers that escape this repair process can induce a form of programmed cell death (apoptosis) or can cause DNA replication errors leading to mutation.[citation needed] UVB also damages mRNA, triggering a fast pathway that leads to inflammation of the skin and sunburn. mRNA damage initially triggers a response in ribosomes through a protein known as ZAK-alpha in a ribotoxic stress response. This response acts as a cell surveillance system. This detection of RNA damage leads to inflammatory signaling and the recruitment of immune cells. This, not DNA damage (which is slower to detect), is what results in UVB skin inflammation and acute sunburn. As a defense against UV radiation, the amount of the brown pigment melanin in the skin increases when exposed to moderate (depending on skin type) levels of radiation; this is commonly known as a sun tan. The purpose of melanin is to absorb UV radiation and dissipate the energy as harmless heat, protecting the skin against both direct and indirect DNA damage from the UV. UVA gives a quick tan that lasts for days by oxidizing melanin that was already present and by triggering the release of melanin from melanocytes. UVB yields a tan that takes roughly 2 days to develop because it stimulates the body to produce more melanin.[citation needed] Medical organizations recommend that patients protect themselves from UV radiation by using sunscreen. Five sunscreen ingredients have been shown to protect mice against skin tumors.
However, some sunscreen chemicals produce potentially harmful substances if they are illuminated while in contact with living cells. The amount of sunscreen that penetrates into the lower layers of the skin may be large enough to cause damage. Sunscreen reduces the direct DNA damage that causes sunburn, by blocking UVB, and the usual SPF rating indicates how effectively this radiation is blocked. SPF is, therefore, also called UVB-PF, for "UVB protection factor". This rating, however, offers no data about important protection against UVA, which does not primarily cause sunburn but is still harmful, since it causes indirect DNA damage and is also considered carcinogenic. Several studies suggest that the absence of UVA filters may be the cause of the higher incidence of melanoma found in sunscreen users compared to non-users. Some sunscreen lotions contain titanium dioxide, zinc oxide, and avobenzone, which help protect against UVA rays. The photochemical properties of melanin make it an excellent photoprotectant. However, sunscreen chemicals cannot dissipate the energy of the excited state as efficiently as melanin and therefore, if sunscreen ingredients penetrate into the lower layers of the skin, the amount of reactive oxygen species may be increased. The amount of sunscreen that penetrates through the stratum corneum may or may not be large enough to cause damage. In an experiment by Hanson et al. that was published in 2006, the amount of harmful reactive oxygen species (ROS) was measured in untreated and in sunscreen-treated skin. In the first 20 minutes, the film of sunscreen had a protective effect and the amount of ROS was smaller. After 60 minutes, however, the amount of absorbed sunscreen was so high that the amount of ROS was higher in the sunscreen-treated skin than in the untreated skin. The study indicates that sunscreen must be reapplied within 2 hours in order to prevent UV light from penetrating to sunscreen-infused live skin cells. Ultraviolet radiation can aggravate several skin conditions and diseases, including systemic lupus erythematosus, Sjögren's syndrome, Senear–Usher syndrome, rosacea, dermatomyositis, Darier's disease, Kindler–Weary syndrome, and porokeratosis. The eye is most sensitive to damage by UV in the lower UVC band at 265–275 nm. Radiation of this wavelength is almost absent from sunlight at the surface of the Earth but is emitted by artificial sources such as the electrical arcs employed in arc welding. Unprotected exposure to these sources can cause "welder's flash" or "arc eye" (photokeratitis) and can lead to cataracts, pterygium and pinguecula formation. To a lesser extent, UVB in sunlight from 310 to 280 nm also causes photokeratitis ("snow blindness"), and the cornea, the lens, and the retina can be damaged. Protective eyewear is beneficial to those exposed to ultraviolet radiation. Since light can reach the eyes from the sides, full-coverage eye protection is usually warranted if there is an increased risk of exposure, as in high-altitude mountaineering. Mountaineers are exposed to higher-than-ordinary levels of UV radiation, both because there is less atmospheric filtering and because of reflection from snow and ice. Ordinary, untreated eyeglasses give some protection. Most plastic lenses give more protection than glass lenses, because, as noted above, glass is transparent to UVA and the common acrylic plastic used for lenses is less so. Some plastic lens materials, such as polycarbonate, inherently block most UV.
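Since SPF (UVB-PF) and the fabric UPF rating discussed earlier are both defined as the ratio of sunburn-causing radiation received without protection to that received with it, the fraction that gets through is simply the reciprocal of the rating. A minimal sketch of that arithmetic; the specific ratings shown are just examples, not values from the text.

```python
def transmitted_fraction(protection_factor: float) -> float:
    """Fraction of erythemally weighted (sunburn-causing) UV that passes through a
    protective layer, given a rating defined as dose_without / dose_with protection."""
    return 1.0 / protection_factor

for rating in (6, 15, 30, 50):   # e.g. a UPF-6 summer fabric up to an SPF-50 sunscreen
    print(f"rating {rating:>2}: about {transmitted_fraction(rating):.0%} transmitted")
```

A UPF of 6 therefore passes roughly one sixth of the sunburn-causing UV, in line with the "about 20%" figure quoted for standard summer fabrics.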
Degradation of polymers, pigments and dyes UV degradation is one form of polymer degradation that affects plastics exposed to sunlight. The problem appears as discoloration or fading, cracking, loss of strength or disintegration. The effects of attack increase with exposure time and sunlight intensity. The addition of UV absorbers inhibits the effect. Sensitive polymers include thermoplastics and speciality fibers like aramids. UV absorption leads to chain degradation and loss of strength at sensitive points in the chain structure. Aramid rope must be shielded with a sheath of thermoplastic if it is to retain its strength.[citation needed] Many pigments and dyes absorb UV and change colour, so paintings and textiles may need extra protection both from sunlight and fluorescent lamps, two common sources of UV radiation. Window glass absorbs some harmful UV, but valuable artifacts need extra shielding. Many museums place black curtains over watercolour paintings and ancient textiles, for example. Since watercolours can have very low pigment levels, they need extra protection from UV. Various forms of picture framing glass, including acrylics (plexiglass), laminates, and coatings, offer different degrees of UV (and visible light) protection. Applications Because of its ability to cause chemical reactions and excite fluorescence in materials, ultraviolet radiation has a number of applications. The following table gives some uses of specific wavelength bands in the UV spectrum. Photographic film responds to ultraviolet radiation but the glass lenses of cameras usually block radiation shorter than 350 nm. Slightly yellow UV-blocking filters are often used for outdoor photography to prevent unwanted bluing and overexposure by UV rays. For photography in the near UV, special filters may be used. Photography with wavelengths shorter than 350 nm requires special quartz lenses which do not absorb the radiation. Digital cameras sensors may have internal filters that block UV to improve color rendition accuracy. Sometimes these internal filters can be removed, or they may be absent, and an external visible-light filter prepares the camera for near-UV photography. A few cameras are designed for use in the UV. Photography by reflected ultraviolet radiation is useful for medical, scientific, and forensic investigations, in applications as widespread as detecting bruising of skin, alterations of documents, or restoration work on paintings. Photography of the fluorescence produced by ultraviolet illumination uses visible wavelengths of light.[citation needed] In ultraviolet astronomy, measurements are used to discern the chemical composition of the interstellar medium, and the temperature and composition of stars. Because the ozone layer blocks many UV frequencies from reaching telescopes on the surface of the Earth, most UV observations are made from space. Corona discharge on electrical apparatus can be detected by its ultraviolet emissions. Corona causes degradation of electrical insulation and emission of ozone and nitrogen oxide. EPROMs (Erasable Programmable Read-Only Memory) are erased by exposure to UV radiation. These modules have a transparent (quartz) window on the top of the chip that allows the UV radiation in. Colorless fluorescent dyes that emit blue light under UV are added as optical brighteners to paper and fabrics. The blue light emitted by these agents counteracts yellow tints that may be present and causes the colors and whites to appear whiter or more brightly colored. 
UV fluorescent dyes that glow in the primary colors are used in paints, papers, and textiles either to enhance color under daylight illumination or to provide special effects when lit with UV lamps. Blacklight paints that contain dyes that glow under UV are used in a number of art and aesthetic applications.[citation needed] To help prevent counterfeiting of currency, or forgery of important documents such as driver's licenses and passports, the paper may include a UV watermark or fluorescent multicolor fibers that are visible under ultraviolet light. Postage stamps are tagged with a phosphor that glows under UV rays to permit automatic detection of the stamp and facing of the letter. UV fluorescent dyes are used in many applications (for example, biochemistry and forensics). Some brands of pepper spray leave an invisible chemical (a UV dye) on a pepper-sprayed attacker that is not easily washed off, which can help police identify the attacker later. In some types of nondestructive testing, UV stimulates fluorescent dyes to highlight defects in a broad range of materials. These dyes may be carried into surface-breaking defects by capillary action (liquid penetrant inspection) or they may be bound to ferrite particles caught in magnetic leakage fields in ferrous materials (magnetic particle inspection). UV is an investigative tool at crime scenes, helpful in locating and identifying bodily fluids such as semen, blood, and saliva. For example, ejaculated fluids or saliva can be detected by high-power UV sources, irrespective of the structure or colour of the surface the fluid is deposited upon. UV–vis microspectroscopy is also used to analyze trace evidence, such as textile fibers and paint chips, as well as questioned documents. Other applications include the authentication of various collectibles and art, and detecting counterfeit currency. Even materials not specially marked with UV-sensitive dyes may have distinctive fluorescence under UV exposure or may fluoresce differently under short-wave versus long-wave ultraviolet. Using multi-spectral imaging it is possible to read illegible papyrus, such as the burned papyri of the Villa of the Papyri or of Oxyrhynchus, or the Archimedes palimpsest. The technique involves taking pictures of the illegible document using different filters in the infrared or ultraviolet range, finely tuned to capture certain wavelengths of light. Thus, the optimum spectral portion can be found for distinguishing ink from paper on the papyrus surface. Simple NUV sources can be used to highlight faded iron-based ink on vellum. Ultraviolet helps detect organic material deposits that remain on surfaces where periodic cleaning and sanitizing may have failed. It is used in the hotel industry, manufacturing, and other industries where levels of cleanliness or contamination are inspected. Perennial news features for many television news organizations involve an investigative reporter using a similar device to reveal unsanitary conditions in hotels, public toilets, hand rails, and such. UV/Vis spectroscopy is widely used as a technique in chemistry to analyze chemical structure, most notably of conjugated systems. UV radiation is often used to excite a given sample, and the fluorescent emission is measured with a spectrofluorometer. In biological research, UV radiation is used for quantification of nucleic acids or proteins. In environmental chemistry, UV radiation can also be used to detect contaminants of emerging concern in water samples.
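As a concrete example of UV-based quantification of nucleic acids, absorbance at 260 nm is usually converted to concentration via the Beer–Lambert law with rule-of-thumb conversion factors. The sketch below uses the standard textbook factors (about 50 µg/mL per absorbance unit for double-stranded DNA over a 1 cm path); the function name, the factor table, and the example reading are illustrative assumptions, not material from the text.

```python
def nucleic_acid_conc_ug_per_ml(a260: float, sample_type: str = "dsDNA",
                                dilution_factor: float = 1.0,
                                path_length_cm: float = 1.0) -> float:
    """Estimate nucleic acid concentration from absorbance at 260 nm (Beer-Lambert law),
    using the common rule-of-thumb factors for a 1 cm path length."""
    factors_ug_per_ml = {"dsDNA": 50.0, "ssDNA": 33.0, "RNA": 40.0}  # per A260 unit
    return a260 * factors_ug_per_ml[sample_type] * dilution_factor / path_length_cm

# A 10x-diluted sample reading A260 = 0.25 would correspond to roughly 125 ug/mL dsDNA:
print(nucleic_acid_conc_ug_per_ml(0.25, "dsDNA", dilution_factor=10))
```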
In pollution control applications, ultraviolet analyzers are used to detect emissions of nitrogen oxides, sulfur compounds, mercury, and ammonia, for example in the flue gas of fossil-fired power plants. Ultraviolet radiation can detect thin sheens of spilled oil on water, either by the high reflectivity of oil films at UV wavelengths, by the fluorescence of compounds in the oil, or by the absorption of UV light created by Raman scattering in water. UV absorbance can also be used to quantify contaminants in wastewater. Absorbance at 254 nm, the most commonly used measurement, generally serves as a surrogate parameter to quantify natural organic matter (NOM). Another form of light-based detection uses an excitation-emission matrix (EEM) to detect and identify contaminants based on their fluorescence properties. EEMs can be used to discriminate between different groups of NOM based on differences in the emission and excitation of their fluorophores. NOM with certain molecular structures is reported to have fluorescent properties over a wide range of excitation/emission wavelengths. Ultraviolet lamps are also used as part of the analysis of some minerals and gems. In general, ultraviolet detectors use either a solid-state device, such as one based on silicon carbide or aluminium nitride, or a gas-filled tube as the sensing element. UV detectors that are sensitive to UV in any part of the spectrum respond to irradiation by sunlight and artificial light. A burning hydrogen flame, for instance, radiates strongly in the 185- to 260-nanometer range and only very weakly in the IR region, whereas a coal fire emits very weakly in the UV band yet very strongly at IR wavelengths; thus, a fire detector that operates using both UV and IR detectors is more reliable than one with a UV detector alone. Virtually all fires emit some radiation in the UVC band, whereas the Sun's radiation at this band is absorbed by the Earth's atmosphere. The result is that the UV detector is "solar blind", meaning it will not cause an alarm in response to radiation from the Sun, so it can easily be used both indoors and outdoors. UV detectors are sensitive to most fires, including hydrocarbons, metals, sulfur, hydrogen, hydrazine, and ammonia. Arc welding, electrical arcs, lightning, X-rays used in nondestructive metal testing equipment (though this is highly unlikely), and radioactive materials can produce levels that will activate a UV detection system. The presence of UV-absorbing gases and vapors will attenuate the UV radiation from a fire, adversely affecting the ability of the detector to detect flames. Likewise, the presence of an oil mist in the air or an oil film on the detector window will have the same effect. Ultraviolet radiation is used for very fine-resolution photolithography, a procedure wherein a chemical called a photoresist is exposed to UV radiation that has passed through a mask. The exposure causes chemical reactions to occur in the photoresist. After removal of unwanted photoresist, a pattern determined by the mask remains on the sample. Steps may then be taken to "etch" away, deposit on or otherwise modify areas of the sample where no photoresist remains. Photolithography is used in the manufacture of semiconductors, integrated circuit components, and printed circuit boards. Photolithography processes used to fabricate electronic integrated circuits presently use 193 nm UV and are experimentally using 13.5 nm UV for extreme ultraviolet lithography.
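One widely used metric derived from the 254 nm surrogate measurement, though not named in the text, is the specific UV absorbance (SUVA254): the UV254 absorbance normalized by the dissolved organic carbon concentration, which is commonly read as an indicator of NOM aromaticity. The sketch below is a minimal illustration under that assumption; the sample values are invented.

```python
def suva254(a254_per_cm: float, doc_mg_per_l: float) -> float:
    """Specific UV absorbance in L/(mg*m): UV254 absorbance (1/cm) converted to 1/m,
    divided by dissolved organic carbon (mg/L)."""
    return (a254_per_cm * 100.0) / doc_mg_per_l

# A water sample with A254 = 0.12 per cm and 4.0 mg/L DOC gives SUVA254 = 3.0 L/(mg*m):
print(suva254(0.12, 4.0))
```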
Electronic components that require clear transparency for light to exit or enter (photovoltaic panels and sensors) can be potted using acrylic resins that are cured using UV energy. The advantages are low VOC emissions and rapid curing. Certain inks, coatings, and adhesives are formulated with photoinitiators and resins. When exposed to UV light, polymerization occurs, and so the adhesives harden or cure, usually within a few seconds. Applications include glass and plastic bonding, optical fiber coatings, the coating of flooring, UV coating and paper finishes in offset printing, dental fillings, hydrophobic light-activated adhesive, and decorative fingernail "gels". UV sources for UV curing applications include UV lamps, UV LEDs, and excimer flash lamps. Fast processes such as flexo or offset printing require high-intensity light focused via reflectors onto a moving substrate and medium, so high-pressure Hg (mercury) or Fe (iron-doped) bulbs are used, energized with electric arcs or microwaves. Lower-power fluorescent lamps and LEDs can be used for static applications. Small high-pressure lamps can have light focused and transmitted to the work area via liquid-filled or fiber-optic light guides. The impact of UV on polymers is used to modify the surface properties (roughness and hydrophobicity) of polymers. For example, a poly(methyl methacrylate) surface can be smoothed by vacuum ultraviolet. UV radiation is useful in preparing low-surface-energy polymers for adhesives. Polymers exposed to UV will oxidize, thus raising the surface energy of the polymer. Once the surface energy of the polymer has been raised, the bond between the adhesive and the polymer is stronger. UV-C light is used in air conditioning systems as a method of improving indoor air quality by disinfecting the air and preventing microbial growth. UV-C light is effective at killing or inactivating harmful microorganisms, such as bacteria, viruses, mold, and mildew. When integrated into an air conditioning system, the ultraviolet light is typically placed in areas like the air handler or near the evaporator coil. In air conditioning systems, UV-C light works by irradiating the airflow within the system, killing or neutralizing harmful microorganisms before they are recirculated into the indoor environment. Its effectiveness in air conditioning systems depends on factors such as the intensity of the light, the duration of exposure, airflow speed, and the cleanliness of system components. Using a catalytic chemical reaction from titanium dioxide and UVC exposure, oxidation of organic matter converts pathogens, pollens, and mold spores into harmless inert byproducts. However, the reaction of titanium dioxide and UVC does not follow a simple path. Several hundred intermediate reactions occur before the inert-byproduct stage, and these can hinder the overall process, creating formaldehyde, other aldehydes, and other VOCs en route to the final stage. Thus, the use of titanium dioxide and UVC requires very specific parameters for a successful outcome. The cleansing mechanism of UV is a photochemical process. Contaminants in the indoor environment are almost entirely organic carbon-based compounds, which break down when exposed to high-intensity UV at 240 to 280 nm. Short-wave ultraviolet radiation can destroy DNA in living microorganisms. UVC's effectiveness is directly related to intensity and exposure time. UV has also been shown to reduce gaseous contaminants such as carbon monoxide and VOCs.
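The statement that UVC effectiveness depends on intensity and exposure time is usually expressed as a dose (fluence): irradiance multiplied by time, with microbial survival falling roughly log-linearly with dose. The sketch below assumes a simple first-order inactivation model and an illustrative D90 value (the dose giving one tenfold reduction); neither the numbers nor the function names come from the source.

```python
def uv_dose_mj_per_cm2(irradiance_mw_per_cm2: float, exposure_s: float) -> float:
    """UV dose (fluence) = irradiance x exposure time; mW/cm^2 * s gives mJ/cm^2."""
    return irradiance_mw_per_cm2 * exposure_s

def surviving_fraction(dose_mj_per_cm2: float, d90_mj_per_cm2: float) -> float:
    """Assumed log-linear inactivation: each D90 of dose gives one further 10-fold reduction."""
    return 10 ** (-dose_mj_per_cm2 / d90_mj_per_cm2)

dose = uv_dose_mj_per_cm2(irradiance_mw_per_cm2=0.5, exposure_s=20)   # 10 mJ/cm^2
print(dose, surviving_fraction(dose, d90_mj_per_cm2=5.0))             # ~1% survival
```

Real systems deviate from this idealized model (shadowing, tailing, organism-specific sensitivity), which is why, as noted below, germicidal lamps are used as a supplement to other sterilization techniques.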
UV lamps radiating at 184 and 254 nm can remove low concentrations of hydrocarbons and carbon monoxide if the air is recycled between the room and the lamp chamber. This arrangement prevents the introduction of ozone into the treated air. Likewise, air may be treated by passing it by a single UV source operating at 184 nm and then over iron pentaoxide to remove the ozone produced by the UV lamp. Ultraviolet lamps are used to sterilize workspaces and tools used in biology laboratories and medical facilities. Commercially available low-pressure mercury-vapor lamps emit about 86% of their radiation at 254 nanometers (nm), with 265 nm being the peak of the germicidal effectiveness curve. UV at these germicidal wavelengths damages a microorganism's DNA/RNA so that it cannot reproduce, making it harmless (even though the organism may not be killed). Since microorganisms can be shielded from ultraviolet rays in small cracks and other shaded areas, these lamps are used only as a supplement to other sterilization techniques. UVC LEDs are relatively new to the commercial market and are gaining in popularity.[failed verification] Due to their monochromatic nature (±5 nm),[failed verification] these LEDs can target a specific wavelength needed for disinfection. This is especially important because pathogens vary in their sensitivity to specific UV wavelengths. LEDs are mercury-free, turn on and off instantly, and can be cycled an unlimited number of times throughout the day. Disinfection using UV radiation is commonly used in wastewater treatment applications and is finding increasing use in municipal drinking water treatment. Many bottlers of spring water use UV disinfection equipment to sterilize their water. Solar water disinfection has been researched for cheaply treating contaminated water using natural sunlight. The UVA irradiation and increased water temperature kill organisms in the water. Ultraviolet radiation is used in several food processes to kill unwanted microorganisms. UV can be used to pasteurize fruit juices by flowing the juice over a high-intensity ultraviolet source. The effectiveness of such a process depends on the UV absorbance of the juice. Pulsed light (PL) is a technique of killing microorganisms on surfaces using pulses of an intense broad spectrum, rich in UVC between 200 and 280 nm. Pulsed light works with xenon flash lamps that can produce flashes several times per second. Disinfection robots use pulsed UV. Filtered far-UVC (222 nm) light has been shown to inhibit the growth of a range of pathogens, including bacteria and fungi, and since it has fewer harmful effects on human tissue, it offers promise for reliable disinfection in healthcare settings such as hospitals and long-term care homes. UVC has also been shown to be effective at degrading the SARS-CoV-2 virus. Birds, reptiles, insects such as bees, and mammals such as mice, reindeer, dogs, and cats can see near-ultraviolet wavelengths. Many fruits, flowers, and seeds stand out more strongly from the background in ultraviolet wavelengths as compared to human color vision. Scorpions glow or take on a yellow to green color under UV illumination, thus assisting in the control of these arachnids. Many birds have patterns in their plumage that are invisible at usual wavelengths but observable in ultraviolet, and the urine and other secretions of some animals, including dogs, cats, and human beings, are much easier to spot with ultraviolet.
Urine trails of rodents can be detected by pest control technicians for proper treatment of infested dwellings. Butterflies use ultraviolet as a communication system for sex recognition and mating behavior. For example, in the Colias eurytheme butterfly, males rely on visual cues to locate and identify females. Instead of using chemical stimuli to find mates, males are attracted to the ultraviolet-reflecting color of female hind wings. In Pieris napi butterflies, it was shown that females in northern Finland, where less UV radiation is present in the environment, possessed stronger UV signals to attract males than females occurring further south. This suggested that it was evolutionarily more difficult to increase the UV sensitivity of the eyes of the males than to increase the UV signals emitted by the females. Many insects use the ultraviolet wavelength emissions from celestial objects as references for flight navigation. A local ultraviolet emitter will normally disrupt the navigation process and will eventually attract the flying insect. The green fluorescent protein (GFP) is often used in genetics as a marker. Many substances, such as proteins, have significant light absorption bands in the ultraviolet that are of interest in biochemistry and related fields. UV-capable spectrophotometers are common in such laboratories. Ultraviolet traps called bug zappers are used to eliminate various small flying insects. They are attracted to the UV and are killed using an electric shock, or trapped once they come into contact with the device. Different designs of ultraviolet radiation traps are also used by entomologists for collecting nocturnal insects during faunistic survey studies. Ultraviolet radiation is helpful in the treatment of skin conditions such as psoriasis and vitiligo. Exposure to UVA while the skin is made hyper-photosensitive by taking psoralens is an effective treatment for psoriasis. Due to the potential of psoralens to cause damage to the liver, PUVA therapy may be used only a limited number of times over a patient's lifetime. UVB phototherapy does not require additional medications or topical preparations for the therapeutic benefit; only the exposure is needed. However, phototherapy can be effective when used in conjunction with certain topical treatments such as anthralin, coal tar, and vitamin A and D derivatives, or systemic treatments such as methotrexate and Soriatane. Reptiles need UVB for the biosynthesis of vitamin D and other metabolic processes, specifically cholecalciferol (vitamin D3), which is needed for basic cellular and neural functioning as well as the utilization of calcium for bone and egg production.[citation needed] The UVA wavelength is also visible to many reptiles and might play a significant role in their ability to survive in the wild as well as in visual communication between individuals.[citation needed] Therefore, in a typical reptile enclosure, a fluorescent UVA/UVB source (of the proper strength and spectrum for the species) must be available for many[which?] captive species to survive. Simple supplementation with cholecalciferol (vitamin D3) is not enough, because the complete biosynthetic pathway[which?] is "leapfrogged" (with risks of possible overdoses); the intermediate molecules and metabolites[which?]
also play important roles in the animal's health.[citation needed] Natural sunlight at the right levels is always superior to artificial sources, but providing it might not be possible for keepers in different parts of the world.[citation needed] It is a known problem that high output levels in the UVA part of the spectrum can cause cellular and DNA damage to sensitive parts of the animals' bodies, especially the eyes, where photokeratitis and blindness can result from improper use and placement of a UVA/UVB source.[citation needed] For many keepers there must also be provision for an adequate heat source, which has resulted in the marketing of heat and light "combination" products.[citation needed] Keepers should be careful with these "combination" light/heat and UVA/UVB generators: they typically emit high levels of UVA with lower levels of UVB, in fixed proportions that are difficult to control to meet the animals' needs.[citation needed] A better strategy is to use individual sources of these elements, so that they can be placed and controlled by the keeper for the maximum benefit of the animals. Evolutionary significance The evolution of early reproductive proteins and enzymes is attributed in modern models of evolutionary theory to ultraviolet radiation. UVB causes thymine bases adjacent to each other in genetic sequences to bond together into thymine dimers, a disruption in the strand that reproductive enzymes cannot copy. This leads to frameshifting during genetic replication and protein synthesis, usually killing the cell. Before formation of the UV-blocking ozone layer, when early prokaryotes approached the surface of the ocean, they almost invariably died out. The few that survived had developed enzymes that monitored the genetic material and removed thymine dimers by nucleotide excision repair. Many enzymes and proteins involved in modern mitosis and meiosis are similar to these repair enzymes, and are believed to be evolved modifications of the enzymes originally used to overcome DNA damage caused by UV. Elevated levels of ultraviolet radiation, in particular UV-B, have also been suggested as a cause of mass extinctions in the fossil record. Photobiology Photobiology is the scientific study of the beneficial and harmful interactions of non-ionizing radiation in living organisms, conventionally demarcated around 10 eV, the first ionization energy of oxygen. UV ranges roughly from 3 to 30 eV in energy. Hence photobiology covers some, but not all, of the UV spectrum. See also References Further reading External links |
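As a numerical footnote to the photobiology paragraph above, photon wavelength and energy are related by E = hc/λ, which works out to roughly 1239.84 eV·nm divided by the wavelength in nanometres. The short check below is illustrative only; the sample wavelengths are chosen by the editor, not taken from the text.

```python
def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electronvolts from wavelength in nanometres (E = hc / lambda)."""
    return 1239.84193 / wavelength_nm   # hc in eV*nm

for nm in (400, 100, 41):   # visible/UV boundary, deep UV, and extreme UV
    print(f"{nm} nm -> {photon_energy_ev(nm):.1f} eV")
# 400 nm -> 3.1 eV, 100 nm -> 12.4 eV, 41 nm -> 30.2 eV, matching the 3-30 eV range quoted.
```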
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Electrically_charged] | [TOKENS: 4592] |
Contents Electric charge Electric charge (symbol q, sometimes Q) is a physical property of matter that causes it to experience a force when placed in an electromagnetic field. Electric charge can be positive or negative. Like charges repel each other and unlike charges attract each other. An object with no net charge is referred to as electrically neutral. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that do not require consideration of quantum effects. In an isolated system, the total charge stays the same: the amount of positive charge minus the amount of negative charge does not change over time. Electric charge carriers include subatomic particles. In ordinary matter, negative charge is carried by electrons, and positive charge is carried by the protons in the nuclei of atoms. If there are more electrons than protons in a piece of matter, it will have a negative charge; if there are fewer, it will have a positive charge; and if there are equal numbers, it will be neutral. Charge is quantized: it comes in integer multiples of individual small units called the elementary charge, e, about 1.602×10⁻¹⁹ C, which is the smallest charge that can exist freely. Particles called quarks have smaller charges, multiples of 1/3 e, but they are found only combined in particles that have a charge that is an integer multiple of e. In the Standard Model, charge is an absolutely conserved quantum number. The proton has a charge of +e, and the electron has a charge of −e. Today, a negative charge is defined as the charge carried by an electron and a positive charge is that carried by a proton. Before these particles were discovered, a positive charge was defined by Benjamin Franklin as the charge acquired by a glass rod when it is rubbed with a silk cloth. Electric charges produce electric fields. A moving charge also produces a magnetic field. The interaction of electric charges with an electromagnetic field (a combination of an electric and a magnetic field) is the source of the electromagnetic (or Lorentz) force, which is one of the four fundamental interactions in physics. The study of photon-mediated interactions among charged particles is called quantum electrodynamics. The SI derived unit of electric charge is the coulomb (C), named after French physicist Charles-Augustin de Coulomb. In electrical engineering it is also common to use the ampere-hour (A⋅h). In physics and chemistry it is common to use the elementary charge (e) as a unit. Chemistry also uses the Faraday constant, which is the charge of one mole of elementary charges. Overview Charge is the fundamental property of matter that exhibits electrostatic attraction or repulsion in the presence of other matter with charge. Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e; we say that electric charge is quantized. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge. Robert Millikan's oil drop experiment demonstrated this fact directly, and measured the elementary charge. It has been discovered that one type of particle, the quark, has fractional charges of either −1/3 e or +2/3 e, but it is believed that quarks always combine into particles whose total charge is an integer multiple of e; free-standing quarks have never been observed. By convention, the charge of an electron is negative, −e, while that of a proton is positive, +e.
Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. Coulomb's law quantifies the electrostatic force between two particles by asserting that the force is proportional to the product of their charges, and inversely proportional to the square of the distance between them. The charge of an antiparticle equals that of the corresponding particle, but with opposite sign. The electric charge of a macroscopic object is the sum of the electric charges of the particles that it is made up of. This charge is often small, because matter is made of atoms, and atoms typically have equal numbers of protons and electrons, in which case their charges cancel out, yielding a net charge of zero, thus making the atom neutral. An ion is an atom (or group of atoms) that has lost one or more electrons, giving it a net positive charge (cation), or that has gained one or more electrons, giving it a net negative charge (anion). Monatomic ions are formed from single atoms, while polyatomic ions are formed from two or more atoms that have been bonded together, in each case yielding an ion with a positive or negative net charge. During the formation of macroscopic objects, constituent atoms and ions usually combine to form structures composed of neutral ionic compounds electrically bound to neutral atoms. Thus macroscopic objects tend toward being neutral overall, but macroscopic objects are rarely perfectly net neutral. Sometimes macroscopic objects contain ions distributed throughout the material, rigidly bound in place, giving an overall net positive or negative charge to the object. Also, macroscopic objects made of conductive elements can more or less easily (depending on the element) take on or give off electrons, and then maintain a net negative or positive charge indefinitely. When the net electric charge of an object is non-zero and motionless, the phenomenon is known as static electricity. This can easily be produced by rubbing two dissimilar materials together, such as rubbing amber with fur or glass with silk. In this way, non-conductive materials can be charged to a significant degree, either positively or negatively. Charge taken from one material is moved to the other material, leaving an opposite charge of the same magnitude behind. The law of conservation of charge always applies, giving the object from which a negative charge is taken a positive charge of the same magnitude, and vice versa. Even when an object's net charge is zero, the charge can be distributed non-uniformly in the object (e.g., due to an external electromagnetic field, or bound polar molecules). In such cases, the object is said to be polarized. The charge due to polarization is known as bound charge, while the charge on an object produced by electrons gained or lost from outside the object is called free charge. The motion of electrons in conductive metals in a specific direction is known as electric current. Unit The SI unit of quantity of electric charge is the coulomb (symbol: C). The coulomb is defined as the quantity of charge that passes through the cross section of an electrical conductor carrying one ampere for one second. This unit was proposed in 1946 and ratified in 1948. The lowercase symbol q is often used to denote a quantity of electric charge. The quantity of electric charge can be directly measured with an electrometer, or indirectly measured with a ballistic galvanometer. The elementary charge is defined as a fundamental constant in the SI. 
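Two quantitative statements above, the definition of the coulomb as one ampere flowing for one second and Coulomb's inverse-square force law, are easy to illustrate numerically. The constants below are standard SI values; the particular charges and separation in the example are made-up inputs, not taken from the text.

```python
E_CHARGE = 1.602176634e-19   # C, the elementary charge (exact in the SI)
K_COULOMB = 8.9875517873e9   # N*m^2/C^2, Coulomb constant (1 / (4*pi*eps0))

# One coulomb is the charge carried by a one-ampere current in one second,
# so it corresponds to about 6.24e18 elementary charges:
print(f"{1.0 / E_CHARGE:.3e} elementary charges per coulomb")

# Coulomb's law: force proportional to the product of the charges and inversely
# proportional to the square of the distance between them.
q1 = q2 = 1e-6   # two 1 microcoulomb charges (illustrative)
r = 0.1          # 10 cm apart
force = K_COULOMB * q1 * q2 / r**2
print(f"Force between them: {force:.3f} N (repulsive, since the signs are the same)")
```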
The value for the elementary charge, when expressed in SI units, is exactly 1.602176634×10⁻¹⁹ C. After the quantized character of charge had been noted, George Stoney proposed the unit 'electron' in 1891 for this fundamental unit of electrical charge. J. J. Thomson subsequently discovered the particle that we now call the electron in 1897. The unit is today referred to as the elementary charge or the fundamental unit of charge, or is simply denoted e, with the charge of an electron being −e. The charge of an isolated system should be a multiple of the elementary charge e, even if at large scales charge seems to behave as a continuous quantity. In some contexts it is meaningful to speak of fractions of an elementary charge; for example, in the fractional quantum Hall effect. The unit faraday is sometimes used in electrochemistry. One faraday is the magnitude of the charge of one mole of elementary charges, i.e. 9.648533212...×10⁴ C. History From ancient times, people were familiar with four types of phenomena that today would all be explained using the concept of electric charge: (a) lightning, (b) the torpedo fish (or electric ray), (c) St Elmo's Fire, and (d) that amber rubbed with fur would attract small, light objects. The first account of the amber effect is often attributed to the ancient Greek mathematician Thales of Miletus, who lived from c. 624 to c. 546 BC, but there are doubts about whether Thales left any writings; his account of amber is known only from a report written in the early 200s AD. This report can be taken as evidence that the phenomenon was known from at least c. 600 BC, but Thales explained this phenomenon as evidence for inanimate objects having a soul. In other words, there was no indication of any conception of electric charge. More generally, the ancient Greeks did not understand the connections among these four kinds of phenomena. The Greeks observed that the charged amber buttons could attract light objects such as hair. They also found that if they rubbed the amber for long enough, they could even get an electric spark to jump,[citation needed] but there is also a claim that no mention of electric sparks appeared until the late 17th century. This property derives from the triboelectric effect. In the late 1100s, the substance jet, a compacted form of coal, was noted to have an amber effect, and in the middle of the 1500s Girolamo Fracastoro discovered that diamond also showed this effect. Some efforts were made by Fracastoro and others, especially Gerolamo Cardano, to develop explanations for this phenomenon. In contrast to astronomy, mechanics, and optics, which had been studied quantitatively since antiquity, the start of ongoing qualitative and quantitative research into electrical phenomena can be marked with the publication of De Magnete by the English scientist William Gilbert in 1600. In this book, there was a small section where Gilbert returned to the amber effect (as he called it) in addressing many of the earlier theories, and coined the Neo-Latin word electrica (from ἤλεκτρον (ēlektron), the Greek word for amber). The Latin word was translated into English as electrics. Gilbert is also credited with the term electrical, while the term electricity came later, first attributed to Sir Thomas Browne in his Pseudodoxia Epidemica from 1646. (For more linguistic details see Etymology of electricity.)
Gilbert hypothesized that this amber effect could be explained by an effluvium (a small stream of particles that flows from the electric object, without diminishing its bulk or weight) that acts on other objects. This idea of a material electrical effluvium was influential in the 17th and 18th centuries. It was a precursor to ideas developed in the 18th century about "electric fluid" (Dufay, Nollet, Franklin) and "electric charge". Around 1663 Otto von Guericke invented what was probably the first electrostatic generator, but he did not recognize it primarily as an electrical device and only conducted minimal electrical experiments with it. Other European pioneers were Robert Boyle, who in 1675 published the first book in English that was devoted solely to electrical phenomena. His work was largely a repetition of Gilbert's studies, but he also identified several more "electrics", and noted mutual attraction between two bodies. In 1729 Stephen Gray was experimenting with static electricity, which he generated using a glass tube. He noticed that a cork, used to protect the tube from dust and moisture, also became electrified (charged). Further experiments (e.g., extending the cork by putting thin sticks into it) showed—for the first time—that electrical effluvia (as Gray called it) could be transmitted (conducted) over a distance. Gray managed to transmit charge with twine (765 feet) and wire (865 feet). Through these experiments, Gray discovered the importance of different materials, which facilitated or hindered the conduction of electrical effluvia. John Theophilus Desaguliers, who repeated many of Gray's experiments, is credited with coining the terms conductors and insulators to refer to the effects of different materials in these experiments. Gray also discovered electrical induction (i.e., where charge could be transmitted from one object to another without any direct physical contact). For example, he showed that by bringing a charged glass tube close to, but not touching, a lump of lead that was sustained by a thread, it was possible to make the lead become electrified (e.g., to attract and repel brass filings). He attempted to explain this phenomenon with the idea of electrical effluvia. Gray's discoveries introduced an important shift in the historical development of knowledge about electric charge. The fact that electrical effluvia could be transferred from one object to another, opened the theoretical possibility that this property was not inseparably connected to the bodies that were electrified by rubbing. In 1733 Charles François de Cisternay du Fay, inspired by Gray's work, made a series of experiments (reported in Mémoires de l'Académie Royale des Sciences), showing that more or less all substances could be 'electrified' by rubbing, except for metals and fluids and proposed that electricity comes in two varieties that cancel each other, which he expressed in terms of a two-fluid theory. When glass was rubbed with silk, du Fay said that the glass was charged with vitreous electricity, and, when amber was rubbed with fur, the amber was charged with resinous electricity. In contemporary understanding, positive charge is now defined as the charge of a glass rod after being rubbed with a silk cloth, but it is arbitrary which type of charge is called positive and which is called negative. Another important two-fluid theory from this time was proposed by Jean-Antoine Nollet (1745). 
Up until about 1745, the main explanation for electrical attraction and repulsion was the idea that electrified bodies gave off an effluvium. Benjamin Franklin started electrical experiments in late 1746, and by 1750 had developed a one-fluid theory of electricity, based on an experiment that showed that a rubbed glass received the same, but opposite, charge strength as the cloth used to rub the glass. Franklin imagined electricity as being a type of invisible fluid present in all matter and coined the term charge itself (as well as battery and some others); for example, he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained an excess of the fluid it was positively charged and when it had a deficit it was negatively charged. He identified the term positive with vitreous electricity and negative with resinous electricity after performing an experiment with a glass tube he had received from his overseas colleague Peter Collinson. The experiment had participant A charge the glass tube and participant B receive a shock to the knuckle from the charged tube. Franklin identified participant B to be positively charged after having been shocked by the tube. There is some ambiguity about whether William Watson independently arrived at the same one-fluid explanation around the same time (1747). Watson, after seeing Franklin's letter to Collinson, claims that he had presented the same explanation as Franklin in spring 1747. Franklin had studied some of Watson's works prior to making his own experiments and analysis, which was probably significant for Franklin's own theorizing. One physicist suggests that Watson first proposed a one-fluid theory, which Franklin then elaborated further and more influentially. A historian of science argues that Watson missed a subtle difference between his ideas and Franklin's, so that Watson misinterpreted his ideas as being similar to Franklin's. In any case, there was no animosity between Watson and Franklin, and the Franklin model of electrical action, formulated in early 1747, eventually became widely accepted at that time. After Franklin's work, effluvia-based explanations were rarely put forward. It is now known that the Franklin model was fundamentally correct. There is only one kind of electrical charge, and only one variable is required to keep track of the amount of charge. Until 1800 it was only possible to study conduction of electric charge by using an electrostatic discharge. In 1800 Alessandro Volta was the first to show that charge could be maintained in continuous motion through a closed path. In 1833, Michael Faraday sought to remove any doubt that electricity is identical, regardless of the source by which it is produced. He discussed a variety of known forms, which he characterized as common electricity (e.g., static electricity, piezoelectricity, magnetic induction), voltaic electricity (e.g., electric current from a voltaic pile), and animal electricity (e.g., bioelectricity). In 1838, Faraday raised a question about whether electricity was a fluid or fluids or a property of matter, like gravity. He investigated whether matter could be charged with one kind of charge independently of the other. 
He came to the conclusion that electric charge was a relation between two or more bodies, because he could not charge one body without having an opposite charge in another body. In 1838, Faraday also put forth a theoretical explanation of electric force, while expressing neutrality about whether it originates from one, two, or no fluids. He focused on the idea that the normal state of particles is to be nonpolarized, and that when polarized, they seek to return to their natural, nonpolarized state. In developing a field theory approach to electrodynamics (starting in the mid-1850s), James Clerk Maxwell stopped considering electric charge as a special substance that accumulates in objects, and started to understand electric charge as a consequence of the transformation of energy in the field. This pre-quantum understanding considered the magnitude of electric charge to be a continuous quantity, even at the microscopic level. Role of charge in static electricity Static electricity refers to the electric charge of an object and the related electrostatic discharge when two objects are brought together that are not at equilibrium. An electrostatic discharge creates a change in the charge of each of the two objects. When a piece of glass and a piece of resin, neither of which exhibits any electrical properties, are rubbed together and left with the rubbed surfaces in contact, they still exhibit no electrical properties. When separated, they attract each other. A second piece of glass rubbed with a second piece of resin, then separated and suspended near the former pieces of glass and resin, causes these phenomena: the two pieces of glass repel each other; each piece of glass attracts each piece of resin; and the two pieces of resin repel each other. This attraction and repulsion is an electrical phenomenon, and the bodies that exhibit them are said to be electrified, or electrically charged. Bodies may be electrified in many other ways, as well as by sliding. The electrical properties of the two pieces of glass are similar to each other but opposite to those of the two pieces of resin: The glass attracts what the resin repels and repels what the resin attracts. If a body electrified in any manner whatsoever behaves as the glass does, that is, if it repels the glass and attracts the resin, the body is said to be vitreously electrified, and if it attracts the glass and repels the resin it is said to be resinously electrified. All electrified bodies are either vitreously or resinously electrified. An established convention in the scientific community defines vitreous electrification as positive, and resinous electrification as negative. The exactly opposite properties of the two kinds of electrification justify our indicating them by opposite signs, but the application of the positive sign to one rather than to the other kind must be considered as a matter of arbitrary convention, just as it is a matter of convention in a mathematical diagram to reckon positive distances towards the right hand. Role of charge in electric current Electric current is the flow of electric charge through an object. The most common charge carriers are the positively charged proton and the negatively charged electron. The movement of any of these charged particles constitutes an electric current. In many situations, it suffices to speak of the conventional current without regard to whether it is carried by positive charges moving in the direction of the conventional current or by negative charges moving in the opposite direction. This macroscopic viewpoint is an approximation that simplifies electromagnetic concepts and calculations.
At the opposite extreme, if one looks at the microscopic situation, one sees that there are many ways of carrying an electric current, including: a flow of electrons; a flow of electron holes that act like positive particles; and both negative and positive particles (ions or other charged particles) flowing in opposite directions in an electrolytic solution or a plasma. The direction of the conventional current in most metallic wires is opposite to the drift velocity of the actual charge carriers; i.e., the electrons. Conservation of electric charge The total electric charge of an isolated system remains constant regardless of changes within the system itself. This law is inherent to all processes known to physics and can be derived in a local form from gauge invariance of the wave function. The conservation of charge results in the charge-current continuity equation. More generally, the rate of change in charge density ρ within a volume of integration V is equal to the area integral over the current density J through the closed surface S = ∂V, which is in turn equal to the net current I: $-\frac{d}{dt}\int_{V}\rho\,dV = \oint_{S=\partial V}\mathbf{J}\cdot d\mathbf{S} = I$. Thus, the conservation of electric charge, as expressed by the continuity equation, gives the result $\frac{dq}{dt} = -I$. The charge transferred between times $t_{\mathrm{i}}$ and $t_{\mathrm{f}}$ is obtained by integrating both sides: $q(t_{\mathrm{f}}) - q(t_{\mathrm{i}}) = -\int_{t_{\mathrm{i}}}^{t_{\mathrm{f}}} I\,dt$, where I is the net outward current through a closed surface and q is the electric charge contained within the volume defined by the surface. Relativistic invariance Aside from the properties described in articles about electromagnetism, electric charge is a relativistic invariant. This means that any particle that has electric charge q has the same electric charge regardless of how fast it is travelling. This property has been experimentally verified by showing that the electric charge of one helium nucleus (two protons and two neutrons bound together in a nucleus and moving around at high speeds) is the same as that of two deuterium nuclei (one proton and one neutron bound together, but moving much more slowly than they would if they were in a helium nucleus). See also References External links |
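As a worked illustration of the charge–current relation above (the charge transferred equals the time integral of the current), the following short Python sketch numerically integrates an assumed current waveform and compares the result with the analytic value. The exponential waveform, its parameters, and the trapezoidal rule are choices made for this example, not anything specified in the article.

```python
# Illustrative sketch (not from the article): numerically integrating a
# current waveform to obtain the transferred charge, q = integral of I dt.
import math

I0 = 2.0e-3      # peak current, amperes (assumed)
tau = 0.5        # decay time constant, seconds (assumed)
T = 5.0          # integration window, seconds (assumed)
N = 10_000       # number of integration steps

def current(t: float) -> float:
    """Assumed current waveform: an exponentially decaying pulse."""
    return I0 * math.exp(-t / tau)

# Trapezoidal rule: transferred charge over [0, T]
dt = T / N
q = sum(0.5 * (current(i * dt) + current((i + 1) * dt)) * dt for i in range(N))

# Analytic result for comparison: I0 * tau * (1 - exp(-T/tau))
q_exact = I0 * tau * (1.0 - math.exp(-T / tau))
print(f"numerical charge: {q:.6e} C")
print(f"analytic charge:  {q_exact:.6e} C")
```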
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_note-139] | [TOKENS: 9291] |
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were capable with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started experiencing similar characteristics as that of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech, Internaut refers to operators or technically highly capable users of the Internet, digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information "for the majority of the global North population".: 111 Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. 
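As a small, self-contained illustration of the mojibake problem mentioned above (garbled display when text is decoded with the wrong character encoding), the sketch below encodes a short non-ASCII string as UTF-8 and then mistakenly decodes the bytes as Latin-1. The sample string is an arbitrary choice for the example.

```python
# Illustrative sketch of mojibake: text encoded as UTF-8 but decoded with the
# wrong character encoding (Latin-1) turns into garbled characters.
text = "café früh señor"                  # sample non-ASCII string (arbitrary choice)

utf8_bytes = text.encode("utf-8")         # encode correctly to bytes

correct = utf8_bytes.decode("utf-8")      # round-trips cleanly: 'café früh señor'
garbled = utf8_bytes.decode("latin-1")    # wrong decoder: 'cafÃ© frÃ¼h seÃ±or'

print(correct)
print(garbled)
```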
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de-facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allow groups to easily form, cheaply communicate, and share ideas. 
An example of collaborative software is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, including insults and hate speech, to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationship. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equate to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. 
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television.: 19 Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donation via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, having given rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, online chat rooms, and web-based message boards. 
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems for information transfer, sharing and exchanging business data and logistics and is one of many languages or protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enable users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. 
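To make concrete the role of HTTP as the Web's main access protocol described earlier in this section, here is a minimal sketch of an HTTP request–response exchange using Python's standard library. The host example.com, the request headers, and the use of the http.client module are choices made for this illustration only.

```python
# Minimal sketch of an HTTP request-response exchange, the access pattern
# underlying the World Wide Web. Host and path are assumptions for the demo.
import http.client

conn = http.client.HTTPSConnection("example.com", timeout=10)
conn.request("GET", "/", headers={"User-Agent": "demo-client/0.1"})

response = conn.getresponse()
print(response.status, response.reason)       # e.g. "200 OK"
print(response.getheader("Content-Type"))     # media type of the returned resource
body = response.read()                        # the document itself
print(len(body), "bytes received")

conn.close()
```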
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region:[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables and governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information, represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on the first two components.) This is a suite of protocols that are ordered into a set of four conceptional layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123:[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. 
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP) or configured manually.[citation needed] The Domain Name System converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently growing around the world, as Internet address registries have urged all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
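The CIDR prefixes, netmask, and address counts discussed above can be checked directly with Python's standard ipaddress module; the sketch below uses the same example networks as the text (198.51.100.0/24 and 2001:db8::/32), with the host address 198.51.100.37 picked arbitrarily for the membership test.

```python
# Sketch: exploring the example prefixes from the text with the standard
# library's ipaddress module (CIDR prefix, netmask, address count, membership).
import ipaddress

v4 = ipaddress.ip_network("198.51.100.0/24")
print(v4.netmask)         # 255.255.255.0, the subnet mask for a /24
print(v4.prefixlen)       # 24 bits of routing prefix, leaving 8 bits for hosts
print(v4.num_addresses)   # 256 addresses: 198.51.100.0 .. 198.51.100.255
print(ipaddress.ip_address("198.51.100.37") in v4)   # True: host lies in this subnet

v6 = ipaddress.ip_network("2001:db8::/32")
print(v6.num_addresses == 2 ** 96)   # True: an IPv6 /32 block spans 2^96 addresses
```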
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of hackers using cyber warfare using similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), malware variants number has increased to 669,947,865 in 2017, which is twice as many malware variants as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. 
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. See also Notes References Sources Further reading External links |
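As a quick arithmetic illustration of how wide the electricity-intensity estimates quoted above are, the sketch below applies the lowest and highest published figures cited in the text (0.0064 and 136 kWh per gigabyte transferred) to a one-terabyte transfer; the transfer size is an arbitrary choice for the example.

```python
# Arithmetic sketch: the spread in published estimates of Internet energy
# intensity (kWh per GB transferred), applied to a 1 TB transfer.
low_kwh_per_gb = 0.0064    # lowest estimate cited in the text
high_kwh_per_gb = 136.0    # highest estimate cited in the text
transfer_gb = 1_000        # 1 TB, an arbitrary example size

low_energy = low_kwh_per_gb * transfer_gb     # 6.4 kWh
high_energy = high_kwh_per_gb * transfer_gb   # 136,000 kWh
print(f"low estimate:  {low_energy:,.1f} kWh")
print(f"high estimate: {high_energy:,.1f} kWh")
print(f"ratio: {high_energy / low_energy:,.0f}x")   # roughly the 'factor of 20,000'
```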
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Fable_(video_game_series)] | [TOKENS: 2462] |
Contents Fable (video game series) Fable is a fantasy action role-playing game series originally created by Lionhead Studios and later developed by Playground Games. The franchise is owned and published by Xbox Game Studios. Set in the fictional land of Albion, the series is known for its emphasis on player choice, morality systems, and a satirical, fairy-tale inspired take on British folklore. Since its debut in 2004, Fable has become one of Microsoft Gaming's most recognizable role-playing franchises. The main series follows different heroes across several centuries of Albion’s history, with each installment depicting a more technologically advanced era, ranging from a medieval-inspired society to an industrialized nation. Gameplay centers on shaping a hero through moral decisions that affect the character’s appearance, the story’s outcome, and how non-player characters react. In addition to combat and quests, the games allow players to engage in activities such as property ownership, romance, family life, trading, and social interaction, blending traditional role-playing mechanics with life-simulation elements. The franchise began with Fable (2004) for the original Xbox, followed by Fable II (2008) and Fable III (2010). While praised for its originality and charm, the series was also associated with controversy due to unfulfilled design promises made during development. After a period of decline and the closure of Lionhead Studios in 2016, the franchise was revived by Playground Games, which announced a reboot in 2020. A new Fable title is scheduled for release in 2026. Beyond the main entries, the series has expanded into spin-offs, mobile and arcade titles, and a collectible card game. The franchise has also inspired a novel, Fable: The Balverine Order, as well as various promotional and crossover projects. Over the years, Fable has received generally positive critical reception and remains influential for its approach to player agency, moral choice, and world design within role-playing games. Setting The Fable series takes place in the fictional nation of Albion, a state that, at the time of the first game, is composed of numerous autonomous city-states with vast areas of countryside or wilderness in between. The setting originally resembles Medieval Britain,[citation needed] with some European elements. The name Albion itself is an ancient albeit still used name for Great Britain.[citation needed] The period of time progresses with each game; in Fable II, Albion has advanced to an era similar to that of the Age of Enlightenment, and by Fable III the nation has been unified under a monarchy and is undergoing an "Age of Industry" similar to the real-world 18th-19th-century Industrial Revolution.[citation needed] In the first Fable, players assume the role of a boy who is forced into a life of heroism when bandits attack his village, kill his father and kidnap his sister. The choices players make in the game affect the perception and reaction to their Hero by the characters of Albion and change the Hero's appearance to mirror what good or evil deeds he has performed. In addition to the main quest to learn what happened to the Hero's family, players can engage in optional quests and pursuits such as trading, romance and married life, pub gaming, boxing, exploring, and theft. Fable II takes place 500 years after the events of the first game. The world resembles Europe between the late 1600s and early 1700s, the time of highwaymen and the Enlightenment. 
Science and more modern ideas have suppressed the religion and magic of old Albion. Its towns have developed into cities, weaponry is slowly taking advantage of gunpowder, and social, family and economic life present more possibilities - as well as challenges. The sequel basically expands most or all parts of the gaming experience from the previous game, without changing the elementary modes of playing. The continent of Albion is larger as a game world, but contains fewer locations, and the locations that remain are more developed and detailed. In contrast to Fable, the solving of set quests is not the basis of the story; rather, the story develops from the player's situation in time and place. This gives the game a sense of more interactivity than the first title in the series. In Fable III the setting is 50 years after that of Fable II. The historical development is further advanced since the last version: Albion is experiencing an Industrial Revolution and society resembles that of the early 1800s. In all of the versions, the moral development (in a negative or a positive way) is at the core of the gameplay. This moral development is expanded to include the personal or psychological and has a more political aspect, as the goal of the game is to overthrow the oppressive king of Albion, as well as defend the continent from attacks from abroad. Gameplay As role-playing video games, the Fable series constructs the development of a protagonist controlled by the player, and the development is related to the same character's interaction with the game world. A major part of this interaction is for the Fable series related to interaction with people, be it conversation, storytelling, education, trading, gaming, courting and relationships, or fighting. The player is able to develop the protagonist following several parameters, such as magic, strength and social skills. The player may also direct the moral quality of the protagonist, so that skills may be developed in equal terms and conditions both in the negative and positive field. In addition to this basis of the gameplay, some of the versions focus on set quests that together give the protagonist the opportunity to develop, as well as unveiling strands of the story of the game. Fable II and Fable III include cooperative gameplay, where two players with their own character can join forces in their different tasks. History The first game, Fable, was teased in 2001 by developer Lionhead Studios. Lead designer and Lionhead co-founder Peter Molyneux "promised an experience like no other" and that the game would "revolutionize the RPG". Fable was released for Xbox on 14 September 2004. It was originally seen very poorly as it was mostly reported that the game had no content due to the substantial amount of unfulfilled "promises" by Molyneux, which he soon apologized for, garnering even more press coverage. Despite offers from such large companies, such as Electronic Arts, the over-ambition experienced during Fable's development and overestimated sales of the original game had left Lionhead Studios with low stocks and in debt. To gain access to a bigger budget Lionhead signed with Microsoft Game Studios. An extended version, Fable: The Lost Chapters, was released for Windows and Xbox in September 2005; Feral Interactive ported the game to the Mac platform on 31 March 2008. It featured new content in many forms and, with the support of Microsoft, was a critical and commercial success. Fable II was released for Xbox 360 on 24 October 2008. 
It was also a critical and commercial success. It featured a tie-in game called Fable II Pub Games, released on Xbox Live Arcade, and an interactive online Flash game called Fable: A Hero's Tale that allowed players to open a secret chest in the main game. A third game, Fable III, was released for Xbox 360 on 29 October 2010, followed by a Microsoft Windows release on 17 March 2011. This game also featured a tie-in phone game called Fable Coin Golf. On 2 May 2012, Fable Heroes was released for Xbox Live Arcade. Although the game, a multiplayer-based family-friendly beat-em-up, differs substantially from others in the series and received mixed reviews, it is popular among fans because it still embodies some of the series' favorite iconic elements.[citation needed] Fable: The Journey, a spin-off within the series, was released in October 2012 in North America and Europe. The game utilized the Kinect attachment for the Xbox 360. Lead designer Peter Molyneux departed Lionhead Studios in 2012. Lionhead Studios released an Xbox 360 remake of the original game, including The Lost Chapters, called Fable Anniversary, to mixed reviews in February 2014. Fable Trilogy, a compilation for Xbox 360 that includes Fable Anniversary, Fable II and Fable III, was released in February 2014. Fable-themed card games were released as part of the Microsoft Solitaire Collection for the PC on March 4, 2014, and a Fable Anniversary theme was released for the Microsoft Jigsaw collection.[citation needed] In August 2013, Lionhead Studios released a teaser trailer for Fable Legends, an Xbox One title set during the "Age of Heroes" long before the events of the first game. The trailer emphasized that the player would play alongside four other players and could choose to be the Hero of the story or the Villain. Microsoft canceled the project in March 2016, and Lionhead Studios was closed soon afterwards. In May 2016, former Lionhead developers launched a Kickstarter campaign to crowdfund Fable Fortune, a free-to-play collectible card game that had previously been in development at Lionhead prior to the studio's closure. The game was released for the Xbox One in February 2018. In January 2018, rumors surfaced that a new Fable game was being developed by Playground Games and that the studio was hiring for 177 positions for an open-world role-playing game. During the Xbox Games Showcase in July 2020, a new Fable was announced as being in development for the Xbox Series X and Series S and Microsoft Windows, with no release date given. It will run on the Forza series' in-house game engine, ForzaTech. In November 2021, Eidos-Montréal joined the project as a co-developer. By March 2023, the game was reported to be in the early stages of full production. On June 11, 2023, Playground Games unveiled the first in-game trailer of Fable at the Xbox Games Showcase, featuring actor Richard Ayoade; it was followed by another trailer in July 2024 featuring actor Matt King. The game was originally planned for release in 2025. However, on February 25, 2025, Craig Duncan, head of Xbox Game Studios, announced on the Xbox Podcast that the launch had been moved to 2026 to improve overall quality and address technical issues. Shortly after the announcement, rumors surfaced suggesting that the delay was intended to allow the game to launch simultaneously on PlayStation 5; however, these claims were denied by Microsoft insiders.
On January 8, 2026, Xbox announced that the game would receive a deep-dive presentation at Xbox Developer Direct on January 23, 2026. During the event, Playground Games confirmed that the game will be released simultaneously on PlayStation 5, Xbox Series X/S, and Microsoft Windows in autumn 2026, making it the first Fable title to release on a non-Xbox home console, and specifically on a PlayStation console. Novel Fable: The Balverine Order is a fantasy novel by Peter David based on the series. The novel was released in North America and Europe in October 2010, and was packaged with an exclusive code to unlock a unique weapon in Fable III. The story, set between Fable II and Fable III, is told from the point of view of a king of an unknown country who listens to an unnamed storyteller in the Fable universe. The central story involves Thomas Kirkman, the wealthy son of a textile merchant whose mother's death sets him on a quest to find a balverine, and his manservant, James Skelton, a child from a large, poor family. The two friends brave the wilds in search of the balverine that killed Thomas' brother, Stephen. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/El%27ad] | [TOKENS: 809] |
Contents El'ad El'ad (Hebrew: אלעד) is a city in the Central District of Israel. It was built in the 1990s, primarily for a Haredi population and, to a lesser extent, for a Religious Zionist Jewish population. Located about 25 kilometres (16 mi) east of Tel Aviv on Route 444 between Rosh HaAyin and Shoham, it had a population of 49,766 in 2023. El'ad is the only locality in Israel officially designated a religious municipality. The name El'ad means "Forever God", but it is also named after a member of the tribe of Ephraim, who lived in this area (1 Chronicles 7:21). History During the 18th and 19th centuries, El'ad was the site of the Arab village of Al-Muzayri'a. It belonged to the Nahiyeh (sub-district) of Lod, which extended from the area of the present-day city of Modi'in-Maccabim-Re'ut in the south to the present-day city of El'ad in the north, and from the foothills in the east, through the Lod Valley, to the outskirts of Jaffa in the west. This area was home to thousands of inhabitants in about 20 villages, who had at their disposal tens of thousands of hectares of prime agricultural land. The building of El'ad started in the late 1990s, following a 1990 government decision, under then-housing minister Ariel Sharon, to build a series of settlements along the seam line with the West Bank and to provide immediate housing for 50,000 residents. The town was built from scratch as a planned community according to urban planning paradigms not unlike Modi'in and nearby Shoham. While those towns were designed to suit a mixed population of secular and religious Jews, El'ad was originally planned to suit a mixed population of Modern Orthodox/Religious Zionist Jews and ultra-Orthodox Haredi Jews, offering a solution to the acute shortage of affordable housing for Haredi families. The majority of the population are Haredi Jews. Accordingly, El'ad is built in a way that suits their religious lifestyle, with a larger selection of housing options offering larger than average apartments to accommodate religious families, who tend to have more children than the national average. Another characteristic is easy access and short walking distances to local education institutions, avoiding the need for school transportation costs. The city was built partially over the ruins of the Palestinian Arab village of Al-Muzayri'a, whose population fled in 1948. By 1998, El'ad had already achieved local council status; in February 2008, El'ad's official status was changed to a city. The city's current mayor is Yehuda Botbol, a member of the Shas party. On 5 May 2022, Israel's Independence Day, four people were killed and four wounded in a park in El'ad in a stabbing attack carried out by two Palestinians. Demographics El'ad is one of the fastest-growing towns in Israel, with an annual population growth of 0.8 percent. The population density per square kilometer is 13.1, and the median age is 11. The percentage of 12th-grade students eligible for a matriculation certificate in the 2019-2020 school year was 28.3%. The average monthly salary of an employee in 2019 was 6,219 NIS (national average: 9,745 NIS). Economy The support center of Ramat Gan-based Israeli company Daronet is located in El'ad. Its workers are ultra-Orthodox women. In 2012, Daronet signed a sales agreement worth NIS700,000 (US$180,000) with Saudi energy giant Yanar. Notable people References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/G_99-47] | [TOKENS: 289] |
Contents G 99-47 G 99-47 (V1201 Orionis) is a nearby degenerate star (white dwarf) of spectral class DAP8 (DAP8.9, or DAP8.7), the single known component of the system, located in the constellation Orion. G 99-47 is the 10th-closest known white dwarf, the next closest after LP 658-2 and GJ 3991 B. The mass of G 99-47 is 0.71±0.03 solar masses; its surface gravity is 10^(8.20 ± 0.05) (1.58 · 10^8) cm·s⁻², or approximately 162,000 times Earth's, corresponding to a radius of 7711 km, or 121% of Earth's. Its temperature is 5790 ± 110 K, almost like the Sun's; its cooling age, i.e. its age as a degenerate star (not including its lifetime as a main-sequence star and as a giant star), is 3.97 Gyr. Because its temperature is almost equal to the Sun's, the star should appear almost the same white color as the Sun. The white dwarf has a strong magnetic field, with a measured vertical component near the surface equal to 560 T. See also Notes References |
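The quoted mass, radius, and surface gravity above are mutually consistent; a quick Newtonian check with g = GM/R² reproduces them. A minimal sketch in Python (constants rounded, purely a back-of-the-envelope verification):

```python
import math

# Consistency check of G 99-47's quoted parameters using g = G*M / R^2
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
M = 0.71 * M_sun     # quoted mass of G 99-47
R = 7711e3           # quoted radius, m

g = G * M / R**2                                      # surface gravity, m/s^2
print(f"g = {g:.2e} m/s^2")                           # ~1.59e6 m/s^2 = 1.59e8 cm/s^2
print(f"log g (cgs) = {math.log10(g * 100):.2f}")     # ~8.20, matching the article
print(f"g / g_Earth = {g / 9.81:,.0f}")               # ~162,000 times Earth's gravity
```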
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Tachanun] | [TOKENS: 2972] |
Contents Tachanun Tachanun (Hebrew: תַחֲנוּן, romanized: Taḥănûn, lit. 'supplication'), also referred to as nefilat apayim (נְפִילַת אַפַּיִם, 'falling [on the] faces'), is a supplicatory and confessional component of Shacharit (שַחֲרִית, 'dawn') and Mincha (מִנְחָה, 'offering'), the morning and afternoon prayer services of Judaism, respectively. The recitation of Tachanun follows the Amidah, the central part of the daily Jewish prayer services. It is also recited at the end of the Selichot service. It is omitted on Shabbat, Jewish holidays, and many other celebratory occasions (e.g., in the presence of a groom in the week following his marriage). Most traditions[which?] recite a longer prayer on Mondays and Thursdays. Format There are two formats of Tachanun: a short and a long one. The long format is reserved for Monday and Thursday mornings, during which the Torah is chanted in the synagogue. The short format, recited on other weekday mornings and afternoons, consists of three (in some communities, two) short paragraphs. In Nusach Sefard—followed by most Hasidic Jews, who may or may not be Sephardic Jews—and most Sephardic rites (which differ from Sefardic rites despite the similar name), Tachanun begins with vidui (confessional prayer) and recitation of the Thirteen Attributes of Mercy. Among communities of Sephardic Jews and some Moroccan Jews, these are recited only in the long Tachanun. In vidui, several specific sins are mentioned, and the heart is symbolically struck with the right fist during the mention of each. Vidui is followed by the mention of God's Thirteen Attributes of Mercy. By and large, the Hasidic Jews who follow Nusach Sefard do not rest their heads on their hands for Kabbalistic reasons; Sephardic and some Moroccan Jews, who do not follow Sefardic customs, do. In most communities using Nusach Ashkenaz, Tachanun begins with introductory verses from 2 Samuel 24:14, which is followed by a short confession—that Israel has sinned and God should answer the Jewish people's prayers—and Psalm 6:2–11, which King David is traditionally believed to have composed while sick and in pain. Most Sephardic communities also recite these verses, although only after reciting vidui and the Thirteen Attributes. In the Sephardic, Italian, and Romaniote rites—also adopted in some Hasidic communities, including Chabad—Psalm 25 is recited as Tachanun. In Baladi-rite prayer, a prayer from a non-scriptural source[specify] is recited. In the presence of a sefer Torah, this paragraph is recited with the head leaning on the back of the left hand or sleeve (in most Ashkenazic communities, one leans on the right hand when wearing tefillin on the left). The following paragraph, "שומר ישראל" ("Guardian of Israel"), is recited seated but erect (some communities recite it only on fast days). After this point, and following the words "va'anachnu lo neida", it is customary in many communities to rise, and the remainder of the final paragraph is recited while standing. Other rites' adherents, especially those who don't recite "Guardian of Israel" daily, remain seated but erect for this passage. Tachanun is invariably followed by "half kaddish" at Shacharit and by "full kaddish" at Mincha and in Selichot. The Talmud (Bava Kamma) marks Monday and Thursday as "eth ratzon", a time of divine goodwill during which a supplication is more likely to be received by God. On Monday and Thursday mornings, therefore, a longer prayer is recited. The order differs by custom. 
In Nusach Ashkenaz, a long prayer beginning with "ve-hu rachum" is recited before nefilat apayim. After Psalm 6, a few stanzas with a refrain "Hashem elokey Yisra'el" is added. The service continues with Shomer Yisrael (in some communities, this is recited only on fast days), and Tachanun is concluded as usual. Other Nusach Ashkenaz communities, especially in Israel, have adopted the Sephardic custom of reciting Vidui and the Thirteen Attributes at the beginning of long Tachanun. In some of these places, this is omitted during the Selichot season during which vidui and the Thirteen Attributes were recited right before the service; they revert to the older custom of not reciting it. In Nusach Sefard, the order is vidui, Thirteen Attributes, nefilat apayim, "ve-hu rachum", "Hashem elokey Yisra'el", Shomer Yisra'el, and then Tachanun is concluded as normal. In the Sephardic rite, there are two variations: The older custom (maintained by Spanish and Portuguese and some Moroccan Jews) is to recite the Thirteen Attributes, "Anshei Amanah Avadu" (on Monday) or "Tamanu me-ra'ot" (on Thursday), another Thirteen Attributes, "al ta'as imanu kalah", Vidui, "ma nomar", another Thirteen Attributes, "ve-hu rachum", nefilat apayim, "Hashem ayeh chasadech ha-rishonim" (on Monday) or "Hashem she'arit peletat Ariel" (on Thursday), and Tachnun is concluded as on other days. Most Sephardic communities today have adopted a different order based on the Kabbalah of the Ari. This order includes vidui, "ma nomar", Thirteen Attributes, and nefilat apayim, which is concluded as every day. After this, another Thirteen Attributes, "Anshei Amanah Avadu", another Thirteen Attributes, "Tamanu me-ra'ot", another Thirteen Attributes, "al ta'as imanu kalah", and Tachnun concludes with "ve-hu rachum". In the Italian rite, several verses from Daniel are recited - these verses are included in "ve-hu rachum" recited in other rites, but the prayer in the Italian rite is much shorter. This is followed by Thirteen Attributes, Vidui, "ma nomar", nefilat apayim, Psalm 130, a collection of verses from Jeremiah and Micah, a piyyut beginning "Zechor berit Avraham" (this is different from the famous selicha of Zechor Berit known in other rites), Psalm 20, and Tachanun is concluded as on other days. The Yemenite rite did not initially include any additions for Monday and Thursday. However, due to the influence of other communities, they have adopted the following order: nefilat apayim, Thirteen Attributes, "al ta'as imanu kalah", Vidui, "ma nomar", another Thirteen Attributes, "ve-hu rachum", "Hashem ayeh chasadech ha-rishonim" (on Monday) or "Hashem she'arit peletat Ariel" (on Thursday), and Tachnun is concluded as on other days. History The source of the supplicatory prayer (Taḥanun) is in Daniel 9:3 and 1 Kings 8:54, in which the text indicates that one's prayer should always be followed by supplication. Based on this, the Sages developed the habit of adding a personal appeal to God following the set prayers (some examples are listed in Berakhot 16b). In the fourteenth century, these spontaneous supplications were standardized and formalized as Tachanun.[full citation needed] The custom of bending over and resting one's head on the left hand is suggested by the name Tachanun took in the halakhic literature: nefilat apayim (lit. 'falling on [the] face'). It is also reminiscent of the Korban sacrifice brought in the Temple in Jerusalem, which was laid on its left side to be slaughtered. 
A person's arm should be covered with a sleeve, tallit, or other covering. This posture, developed in the post-Talmudic period, symbolizes the original practice of prostrating with their faces touching the ground to show humility and submission to God. The pose was also used by Moses and Joshua, who fell on their faces before God after the sin of the Golden calf. Because Joshua fell on his face before the Ark of the Covenant, Ashkenazi custom is that one puts one's head down only when praying in front of a Torah ark containing a Torah scroll. Otherwise, it is proper to sit with the head up. One source says that if the synagogue's Torah ark can be seen from one's seat and has a valid Torah scroll within it, one puts one's head down during Tachanun. The same source reports a custom of in-the-next-room, and notes that it is not universally accepted.[further explanation needed] The source also states that Tachanun is said with one's head down by some in Jerusalem; in the presence of a Torah scroll outside an ark; and at home if one "knows at exactly what time the congregation recites Tachanun in the synagogue". In a different article, Rabbi Moshe Feinstein is cited as ruling that "because Jerusalem is such a holy city", it is as if one is always in the presence of a Torah scroll. He also makes a case for "in the same room"[further explanation needed] and advises, "If not, then you say it sitting without putting your head down." The longer version of Tachanun recited on Mondays and Thursdays is sourced by classical sources (e.g., S. Baer's Siddur Avodath Yisrael[citation needed]) to three sages who had escaped the destruction of the Second Temple. While on a ship on the way to Europe, they were caught in a storm, and all three recited a personal prayer, after which the storm subsided. These sages went on to establish communities in Europe. David Abudirham writes that the words "rachum ve-chanun" ("merciful and gracious") mark the beginning of the next segment.[citation needed] Days on which Tachanun is omitted Tachanun is omitted from the prayers on Shabbat (beginning from Friday afternoon), all the major holidays and festivals (including Chol HaMoed, the intermediate days of Pesach and Sukkot), Rosh Chodesh (new moon) starting on the afternoon of the day before, Hanukkah and Purim, as these days are of a festive nature and reciting Tachanun, which is mildly mournful, would not be appropriate. The following is a list of all the other days, "minor holidays", when tachanun is excluded from the prayers, and Psalm 126 is recited during Birkat HaMazon. It is typically also omitted from the Mincha prayers the preceding afternoon, unless otherwise noted: It is also not recited in the house of a mourner (reasons vary: either so as not to add to the mourner's grief by highlighting God's judgment, or because a mourner's house is a house of judgment, and a house of judgment is not a suitable place for requesting mercy; see bereavement in Judaism), nor is it said in the presence of a groom in the sheva yemei hamishte (the seven celebratory days subsequent to his marriage; see marriage in Judaism). Additionally, Tachanun is omitted in a synagogue when a circumcision is taking place in the synagogue at that time, and when either the father of the baby, the sandek (the one who holds the baby during the circumcision), or the mohel (the one who performs the circumcision) is present. 
Some Nusach Sefard communities omit Tachanun during Mincha, primarily because it was common for Hasidic congregations to pray Mincha after sunset, in which case some hold that Tachanun needs to be omitted. Additionally, many Hasidic communities omit Tachanun on the anniversaries of the deaths of various Rebbes (though Lubavitch makes a point of saying it), since such a day is considered one of religious renewal and celebration. There is a Hasidic custom of omitting Tachanun for the entire week of Purim (11-17 Adar) and the entire week of Lag BaOmer (14-20 Iyar). Some Hasidic communities omit Tachanun on 7 Adar because it is the anniversary of the death of Moses. Additionally, some Hasidic congregations omit Tachanun on Friday mornings (in preparation for Shabbat), and some even on Sunday mornings (revival from Shabbat). In many congregations, it is customary to omit Tachanun on holidays established by the State of Israel: Yom Ha'atzmaut (Independence Day), 5 Iyar (in most years; the observed date changes depending on the day of the week), and Yom Yerushalayim (the anniversary of the reunification of Jerusalem in 1967), 28 Iyar. Some communities in the Diaspora also omit Tachanun on civil holidays in their own country (such as Thanksgiving in the United States). References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Maor_Farid#cite_ref-12] | [TOKENS: 1458] |
Contents Maor Farid Dr. Maor Farid (Hebrew: מאור פריד; born April 20, 1992) is an Israeli scientist, engineer and artificial intelligence researcher at Massachusetts Institute of Technology, social activist, and author. He is the founder and CEO of Learn to Succeed (Hebrew: ללמוד להצליח) for empowering of youths from the Israeli socio-economic periphery and youths at risk, a regional manager of the Israeli center of ScienceAbroad at MIT, and an activist in the American Technion Society. He is an alumnus of Unit 8200, and a fellow of Fulbright Program and the Israel Scholarship Educational Foundation [he]. Dr. Farid was elected to the Forbes 30 Under 30 list of 2019, and won the Moskowitz Prize for Zionism. Early life Maor was born in Ness Ziona, a city in central Israel, as the eldest son for parents from immigrating families of Mizrahi Jews from Iraq and Libya. Maor suffered from Attention deficit hyperactivity disorder (ADHD) from a young age, and was classified as a problematic and violent student. His ADHD issues were diagnosed only after he began his university studies. However, inspired by his parents' background, he aspired to excel at school for a better future for his family. During elementary school, Maor attended local quizzes about Jewish history and Zionism, which significantly shaped his identity and national perspective. Farid graduated high school with the highest GPA in school. Later he was recruited to the Israel Defense Forces and drafted to the Brakim Program [he] – an excellence program of the Israeli Intelligence Corps for training leading R&D officers for the Israeli military and defense industry. Maor graduated the program with honors and was elected by the Israeli Prime Minister's Office and Unit 8200, where he served as an artificial intelligence researcher, officer, and commander. During his Military service, he received various honors and awards, such as the Excellent Scientist Award, given to the top three academics serving in the Israel Defense Forces. In 2019, Farid completed his military service in the rank of a Captain. Education and academic career As part of the (4 years) Brakim Program, Maor completed his Bachelor's and Master's degrees at the Technion in Mechanical Engineering with honors. Then, he initiated his Ph.D. research as a collaboration with the Israel Atomic Energy Commission (IAEC) in parallel to his duty military service. The main goals of his Ph.D. research were predicting irreversible effects of major earthquakes on Israel's nuclear facilities, and improving their seismic resistance using energy absorption technologies. The mathematical models developed by Farid were able to forecast earthquake effects on facilities with major hazard potential, and predicted the failure of liquid storage tanks due to earthquakes took place in Italy (2012) and Mexico (2017). The energy absorption technologies used, increased in up to 90% the seismic resistance abilities of those sensitive facilities. The research results were published in multiple papers in peer-reviewed academic journals and presented in international academic conferences. Later, this research expanded to an official collaboration between the Technion and the Shimon Peres Negev Nuclear Research Center, which aims to implement the findings obtained on existing sensitive systems, and won funding of 1.5 million NIS from the Pazy foundation of the Israel Atomic Energy Commission and the Council for Higher Education. In 2017, Farid completed his Ph.D. 
and was the youngest graduate at the Technion that year, at the age of 24. At the graduation ceremonies, he honored his parents by having them receive the diplomas on his behalf. In the same year, he served as a lecturer at Ben-Gurion University, teaching an original course he developed to address knowledge gaps he had identified in the Israeli defense industry. In 2018, Dr. Farid served as an artificial intelligence researcher on a data science team of Unit 8200, where he developed machine learning-based solutions for military and operational needs. In 2019, Farid won the Fulbright and Israel Scholarship Educational Foundation scholarships and was accepted to a post-doctoral position at the Massachusetts Institute of Technology, where he develops real-time methods for predicting earthquake effects using machine learning techniques. In 2020, Farid was accepted to the Emerging Leaders Program at Harvard Kennedy School in Cambridge, Massachusetts. In the same year, he received the excellence research grant of the Israel Academy of Sciences and Humanities for leading his research collaboration between MIT and the Technion. Social activism Farid's social activism focuses on empowering youths from disadvantaged backgrounds from an early age. From 2010 to 2015, he served as a mentor of a robotics team from Dimona in the FIRST Robotics Competition, a mathematics tutor in the "Aharai!" [he] program for high-school students at risk in Dimona and Be'er Sheva, and a mentor and private tutor of adolescents and reserve-duty soldiers from disadvantaged backgrounds. In 2010, he initiated the "Learn to Succeed" (Hebrew: ללמוד להצליח) project to mitigate social gaps in Israeli society by empowering youths from the social, economic, and geographical periphery toward excellence, self-fulfillment and formal education. In 2018, Learn to Succeed became an official non-profit organization. In the same year, Farid led a 150,000 NIS crowdfunding campaign in order to expand the organization to a national scale. In 2019, he published the book "Learn to Succeed", in which he describes his struggle with ADHD, the violent environment in which he grew up, and the transformation he went through from being a violent teenager to becoming the youngest Ph.D. graduate at the Technion. The book was given to more than two thousand youths at risk and became a top seller in Israel shortly after its publication. Maor dedicated the book to his parents and to the memory of his friend Captain Tal Nachman, who was killed in operational activity during his military service in 2014. The organization consists of hundreds of volunteers; gives full scholarships to STEM students from the periphery who serve as mentors of youths, both Jews and Arabs, from disadvantaged backgrounds; runs a hotline that gives online practical and emotional support to hundreds of youths, parents and educators; organizes inspirational activities with a military orientation to increase the motivation of its teenage members for meaningful military service; and gives inspirational lectures to more than 5,000 youths each year. In 2019, Maor initiated a collaboration with Unit 8200 in which dozens of the program's members are interviewed by the unit, an opportunity usually given to the students with the highest grades in the matriculation exams in each class. In 2020, Dr. Farid established the ScienceAbroad center at MIT, aiming to strengthen the connections between Israeli researchers at the institute and the State of Israel.
Moreover, he serves as a volunteer in the American Technion Society. Honors and awards Personal life Farid is married to Michal. Interviews and articles References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Forza] | [TOKENS: 5195] |
Contents Forza Forza (/ˈfɔːrtsə/ FORT-sə, Italian: [ˈfɔrtsa]; Italian for "force" and "strength") is a racing video game series for Xbox consoles and Microsoft Windows published by Xbox Game Studios. The franchise is primarily divided into two ongoing titles. The original Forza Motorsport series developed by American developer Turn 10 Studios focuses on primarily simulation racing around a variety of both real and fictional tracks, and seeks to emulate the performance and handling characteristics of many real-life production, modified, and racing cars. The Forza Horizon series developed by British developer Playground Games features more arcade-style racing while maintaining a toned down version of Motorsport's simulation physics. Horizon revolves around a music festival called the "Horizon Festival" and features open world environments set in fictional representations of real-world areas in which players may freely roam and participate in racing events. Apart from Motorsport and Horizon, Forza has also seen two since-discontinued mobile and computer free-to-play spin-offs; Forza Street (2019–2022), a drag racing-style game set in Miami, and Forza Customs (2023–2025), a tile-matching video game based on car customization. Both spin-offs were initially released as independent games before being rebranded as Forza titles. The franchise has sold 16 million copies as of December 2016 and has garnered critical acclaim. History Turn 10 Studios was established in 2001 by Microsoft, under its Microsoft Game Studios division, to develop a series of racing games, which later became known as Forza, as the Xbox rival to Gran Turismo for PlayStation. At the time of the studio's establishment, most staff had experience in publishing games, such as Project Gotham Racing and Golf 4.0, but had not been involved in game development. The first Forza Motorsport was designed to showcase the technological capabilities of Microsoft's first console, the original Xbox, including its Xbox Live online multiplayer network. From the start, Turn 10's approach to the series has been to broaden its appeal to the general audience and not limit it to racing enthusiasts, passionately highlighting car culture along the way. Also integral to the series' development is the considerable amount of research put into the race cars' handling, sometimes involving professional race teams as of Formula One and NASCAR. Every Forza title includes an artificial neural network used by its AI racers, called Drivatars, a portmanteau of driver and avatar. Drivatars were developed and designed by Microsoft Research Cambridge to learn and adapt to the player's driving. In early Forza titles, the Drivatars ran on a Bayesian neural network, which calculates possible solutions to a problem and their probabilities based on player data collected from previous races before selecting the one with the highest confidence value. Such a problem may be reaching a certain turn, at which point the appropriate angle at which to turn and the amount of pressure on the accelerator must be determined. Initially, the only way to share the learned Drivatars was to copy them onto Xbox Memory Units for use by other Xbox consoles. Since Forza Motorsport 5, the Drivatars have used a reinforcement learning paradigm, and have recorded racing data of all players connected to the cloud as part of the Xbox Network. 
In this paradigm, the Drivatars track the player's car position and speed and the consistency of the behavior and guess their turn angle and speed for a given segment, enabling the Drivatars to infer solutions for courses the player had not yet raced on and input for cars the player had not yet raced with. The data is then uploaded to the cloud to update the Drivatar behavior, and the new Drivatars are then downloaded to other Xbox consoles. Each upload is timestamped, and older uploads are treated with less confidence. There have been concerns that players could abuse the system to rubber-band the Drivatars' AI back to them, but Turn 10 has stated that the only thing that is rubber-banded are their cars, whose performance is slightly modified based on how far they are ahead of or behind the player. Turn 10 inserted a layer of control between player and AI inputs that allows for the developers to modify Drivatar behavior so as to prevent unexpected results. Initially, Turn 10 designed models for cars and tracks using commercial off-the-shelf software such as 3D Studio Max. Today, it develops and uses a proprietary 3D modelling software called Fuel that allows multiple artists to work on the same model simultaneously, primarily those for cars and racing tracks. Due to increasing complexity of video games, it took six months for four people to design a single car model for Xbox One versions of Forza, so Turn 10 came to rely on car manufacturers to share CAD files, scan real-life cars, or send out a photographer to take hundreds or thousands of photographs of newly launched cars. Playground Games was co-founded in 2009 in Leamington Spa, England, by Gavin Raeburn, Trevor Williams, and Ralph Fulton, all former employees of Codemasters. Raeburn was known for his role in developing critical hits such as the Dirt and TOCA series, but he became inspired by the open world environment of Test Drive Unlimited, and so left Codemasters feeling that it lacked the resources to fulfill his ambitions. Other members of Playground have included former employees of Bizarre Creations, Black Rock Studio, and Sony Liverpool—all racing game companies of the United Kingdom. At the same time, Turn 10 Studios began seeking out businesses in hopes of finding one willing to expand and branch out its franchise. At E3 2010, he and Williams offered Turn 10 a concept of an open-world Forza Motorsport in exchange for resources, to which Turn 10 agreed. Playground then developed what was to become Forza Horizon, collaborating with and closely backed by Turn 10. Playground Games had had a strong relationship as a third-party developer with Microsoft Studios, but it was only in a June 10 conference at E3 2018 that the company announced its acquisition by Microsoft. Until 2019, each installment of the franchise series had alternated on a biennial basis; the Motorsport entries were released in odd-numbered years, while the Horizon entries were released in even-numbered years. This pattern was altered due to the absence of a new Motorsport game in 2019. In 2025, Forza Horizon 5 was announced and slated for release on Sony's PlayStation 5 console in Q1/Q2 2025 as part of Microsoft Gaming's ongoing plans to distribute their first-party library on multiple platforms, marking the first time the franchise has shipped on a non-Xbox console. In July 2025, as part of Microsoft Gaming's restructuring of operations, Turn 10 Studios suffered a series of layoffs that resulted in a loss of nearly half their existing workforce. 
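Returning to the Drivatar system described earlier (per-segment predictions of turn angle and speed, with newer, timestamped uploads trusted more), the idea can be pictured with a toy sketch. This is not Turn 10's implementation; the function, data layout, and exponential recency weighting are invented purely for illustration:

```python
import math
import time

def drivatar_estimate(observations, half_life_days=30.0, now=None):
    """Toy recency-weighted estimate of a driver's turn angle and speed for one
    track segment. `observations` is a list of dicts with keys 'timestamp'
    (unix seconds), 'turn_angle' (degrees) and 'speed' (km/h). Older uploads get
    exponentially less weight, mirroring the idea that newer data is trusted more."""
    now = time.time() if now is None else now
    decay = math.log(2) / (half_life_days * 86400)   # per-second decay rate
    w_sum = angle_sum = speed_sum = 0.0
    for obs in observations:
        w = math.exp(-decay * (now - obs["timestamp"]))
        w_sum += w
        angle_sum += w * obs["turn_angle"]
        speed_sum += w * obs["speed"]
    if w_sum == 0:
        return None   # no data yet: fall back to a default AI behavior
    return {"turn_angle": angle_sum / w_sum, "speed": speed_sum / w_sum}

# Hypothetical usage: two laps recorded a week apart
laps = [
    {"timestamp": time.time() - 7 * 86400, "turn_angle": 14.0, "speed": 92.0},
    {"timestamp": time.time(), "turn_angle": 11.0, "speed": 101.0},
]
print(drivatar_estimate(laps))   # estimate leans toward the more recent lap
```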
Subsequent reports on the layoffs suggested that the developer would be reorganized into a support studio for Forza Horizon entries and development of the ForzaTech engine, effectively discontinuing the Forza Motorsport series according to former content coordinator Fred Russell. ForzaTech is the proprietary video game engine created by Turn 10 Studios. It is the main engine used for the Forza series. The game engine was trademarked by Microsoft in 2015 and has since been used to develop current Forza games, as well as the Fable reboot. Titles Forza Motorsport was released on May 3, 2005, and is the first installment in the Forza Motorsport series. It is the only title in the series to be released on the original Xbox console. It features 231 cars and racetracks from 15 real-world and fictional locations. Common elements established by this game for future Forza titles include effects of damage on car performance, a paint job and decal editor, the ability to tune one's car and purchase upgrades using in-game credits won at previous races, and assist functions that make driving easier but at the cost of bonus end-of-race credits. It also supported online multiplayer via Xbox Live, as have all of its successors. The Honda NSX and a tuned Nissan 350Z are the cover vehicles. It is playable on the Xbox 360 via backwards compatibility. The sequel to Forza Motorsport and the first Xbox 360 title in the series, Forza Motorsport 2, was released on May 29, 2007. It features a total of 349 cars and 23 different circuits from twelve locations. The series' support for force feedback first appears in this installment (Microsoft's Xbox 360 Wireless Racing Wheel was designed to work with the game and support that feature). Car customization is also expanded; there are about 50 percent more parts and upgrades than in the previous installment, up to 4,100 vinyls can be applied to any car, and cars can be sold online with custom skins for in-game credits in the game's new auction house. The system that groups cars into letter-based classes based on their performance, expressed as a numerical performance index, now takes into account changes to one's car that make it more or less powerful, raising or lowering the index and possibly reclassifying the car. Prior to the game's release, Microsoft launched Forza Motorsport Showdown, a four-part TV miniseries on Speed. The show was produced by Bud Brutsman and hosted by Lee Reherman. A tuned Nissan 350Z is the cover vehicle. Forza Motorsport 3, released on October 27, 2009, includes more than 400 cars from 50 manufacturers and about 100 race track variations. Sport utility vehicles and stock cars make their debut in this game's car roster. The game introduces more optional assists aimed at making driving less challenging for less-experienced players. One of them is the rewind feature, which allows the player to turn back time to fix any previous mistake made on the track. Auto-braking brakes the player's car to prevent it from skidding off the track at turns, and the auto-tuner automatically adjusts the car's setup. The career mode has been revised for this edition of Forza Motorsport to contain 250 events, some of which involve two new modes: drag racing and drifting. It is also the first game in the franchise to feature a cockpit camera, as well as the ability to capture, edit, and share clips of gameplay. The Audi R8 is the cover vehicle.
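The class-and-performance-index system described above for Forza Motorsport 2 can be pictured as a mapping from a numeric index to a letter class, recomputed whenever upgrades change the index. The sketch below is purely illustrative; the class letters, boundaries, and function names are invented and are not Turn 10's actual values:

```python
# Hypothetical class boundaries: (minimum performance index, class letter)
CLASS_BOUNDARIES = [(900, "R"), (800, "S"), (700, "A"), (600, "B"), (500, "C"), (0, "D")]

def performance_class(index: int) -> str:
    """Map a numeric performance index to a letter class (illustrative only)."""
    for minimum, letter in CLASS_BOUNDARIES:
        if index >= minimum:
            return letter
    return "D"

# Upgrading a car raises its index and may reclassify it.
stock_index = 580
upgraded_index = stock_index + 130   # e.g., hypothetical engine and tire upgrades
print(performance_class(stock_index), "->", performance_class(upgraded_index))  # C -> A
```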
For Forza Motorsport 4, which was released on October 11, 2011, Turn 10 Studios partnered with BBC's Top Gear to get Jeremy Clarkson to provide commentary for the new Autovista mode, which allows players to explore a certain selection of cars in great detail. The game is also the first in the franchise to utilize the Kinect sensor. Players can utilize the sensor to turn their head to either side, and the game dynamically follows in a similar motion, turning the game camera to the side. It is the final Forza Motorsport released for Xbox 360. The 2009 Ferrari 458 is the cover vehicle. Forza Motorsport 5, the fifth installment in the Motorsport series and the sixth in the Forza series, was released as an Xbox One launch title on November 22, 2013. The game expanded on the Top Gear partnership by having Richard Hammond and James May provide commentary alongside Clarkson. The Autovista mode was renamed Forzavista, and new to the series are open-wheel cars and integrated cloud computing, which collects and uses driving data from connected players to shape Drivatar behavior and through which user-generated paint jobs can be downloaded. The 2013 McLaren P1 was the cover vehicle. Forza Motorsport 6, released for Xbox One on September 15, 2015, introduces new gameplay elements such as racing in the rain or at night, an online ranking system called Leagues that matches players based on their skill level, and game-modifying cards. The game increases the number of racers in any race to 24 and has a much richer selection of cars and locations than its predecessor—460 and 26, respectively. Players can also now choose whether to toggle the Drivatars' aggression. A cut-down, free-to-play Windows 10 version of the game, known as Forza Motorsport 6: Apex, was released on September 6, 2016, as "a focused and curated single-player tour of Forza Motorsport's best content". The 2017 Ford GT super-car is the cover vehicle. On September 15, 2019, it was made unavailable for purchase due to the expiration of various car and track licenses. Forza Motorsport 7 was developed for Windows 10 and Xbox One. The game was released on October 3, 2017. This game includes many tracks, including the return of Maple Valley Raceway, the fictional track last included in Forza Motorsport 4. Forza Motorsport 7 has the largest set of playable vehicles of any Forza game to date, at 830 cars. 700 cars are included in the base game, while 130 were later added as downloadable content. The 2018 Porsche 911 GT2 RS is the cover vehicle. The eighth Forza Motorsport game serves as a reboot of the Motorsport sub-series. It was first announced during Microsoft's Xbox Games Showcase on July 23, 2020, and was eventually released on October 10, 2023. The 2023 Cadillac V-Series.R and the 2024 Chevrolet Corvette E-Ray are the cover vehicles. Forza Horizon was developed for the Xbox 360 and is the first open-world game in the series. It is based around a fictitious festival called the Horizon Festival, set in the U.S. state of Colorado. The game incorporates many different gameplay aspects from previous Forza Motorsport titles, like the large variety of cars, realistic physics, and high-definition graphics. The aim is to progress through the game by means of obtaining wristbands by driving fast, destroying property, winning races, and other driving antics. Horizon features the physics of Forza Motorsport 4, which have been optimized to work on the more than 65 variants of terrain said to be present in the game. 
Players can drive off-road in select areas, while others are limited by guardrails or other means. Horizon allows the player to modify the car that is selected from the garage by changing numerous features, both internally and externally on a car. One can also obtain cars by winning races with random drivers on the street, by winning larger competitive races, and by finding barns housing hidden treasure cars that cannot otherwise be bought through the game's "Auto-show" or through racing. The 2013 Dodge SRT Viper GTS is the cover car. The game is backward-compatible with the Xbox One and the Xbox Series X/S. Forza Horizon 2 was developed for the Xbox 360 and Xbox One. The game is set in Southern France and Northern Italy, and the playing field is three times the map of its predecessor. The Xbox 360 version was developed by Sumo Digital, and is the final Forza game for Xbox 360. The Xbox One version introduced dynamic weather and lighting systems to the series. Tuning also made a return in the Xbox One version of the game, after being absent from the previous Horizon title. Both versions feature day-and-night cycles and cross-country races of up to 12 players and two "Bucket Lists", one for France and the other for Italy. Bucket Lists are lists of location-specific challenges involving certain vehicles for the player to complete, such as driving a Ford Raptor through a forest with only headlights to light the player's way. Additionally, its single-player and multiplayer modes have merged to allow for seamless connectivity, where other players can join in or drop out of the host's session without interrupting the latter's progress. In this edition of Forza Horizon, "Car Meets" serves as an online hub for players to compare their cars and share their own designs or tunes for others to use, as well as socialize and challenge each other in showdown races. The 2014 Lamborghini Huracán LP 610-4 was the cover car. Forza Horizon 3 was released for Xbox One and Windows 10 on September 27, 2016. Its support for Xbox Play Anywhere makes it the first Forza title to allow cross-play on the two Microsoft platforms. The game is set in Australia, and has the player represented in the game as the host of the Horizon Festival itself. Its topography, car roster, and cast of player avatars have all diversified. For the first time, the terrain includes sand and deep bodies of water that can be driven on or into, and the car roster, which is expanded to 350 cars, encompasses off-road racing buggies and trophy trucks. The single-player campaign mode adds co-op, in which up to three players join the host to complete the latter's objectives. Progression is kept regardless of which mode campaign is played in. A new mode called Horizon Blueprint allows players to edit events by changing their routes, number thereof, and time of day and determining which cars are eligible for the events. An expansion titled Blizzard Mountain was released on December 13, 2016, featuring a snow area along with the name giving blizzard storms, eight new cars, and the 2016 Ford Focus RSRX as its cover vehicle. A second expansion themed around Hot Wheels was released on May 9, 2017. This expansion features a new area called "Thrilltopia" and adds orange and blue Hot Wheels track with loops, jumps, corkscrews, boost pads, half-pipes and more. The expansion also includes ten new cars. The Hot Wheels Twin Mill is the cover vehicle. The 2016 Lamborghini Centenario LP 770-4 and the 2017 Ford F-150 Raptor Race Truck were the cover cars. 
Forza Horizon 4 was developed for the Xbox One and Windows 10 and released on October 2, 2018. The game is set in Great Britain, and features over 450 cars from more than one hundred manufacturers. It introduces a dynamic four-season scheme that rotates on a weekly basis and changes aspects of the environment, such as rivers drying in the summer. Places such as the Edinburgh Castle can now be purchased as property, unlocking benefits. For the first time, players are given the option to traverse the same world as others in single-player or on a 72-player server. The 2018 McLaren Senna and 1997 Land Rover Defender 90 are the cover cars. Shortly after launch, a patch was released adding a Route Creator, where players draw custom point-to-point and circuit racing routes and place their checkpoints on the map. Forza Horizon 5 was developed and released for the Xbox One, Xbox Series X/S, and Windows 10 on November 9, 2021, and is set in Mexico. The seasons return, but to account for Mexico's diverse landscape, different parts of the map have their own weather that rotates seasonally. A new mode called EventLab allows players to create races with their own rules and objectives. Also new is Forza Link, an AI assistant that tracks one's progress and preferred means of playing the game and the players they meet online. It then uses that information to match players with statistically similar interests. The Mercedes-AMG One and the 2021 Ford Bronco Badlands are the cover cars. In 2025, Forza Horizon 5 was announced and slated for release on Sony's PlayStation 5 console on April 29, 2025 as part of Microsoft Gaming's ongoing plans to distribute their first-party library on multiple platforms, marking the first time the franchise has shipped on a non-Xbox console. Forza Horizon 6 is an upcoming video game revealed at the 2025 Tokyo Game Show. The game is planned for release on Xbox Series X/S and PC at launch, with PlayStation 5 support arriving later. The game is set in Japan, with seasons returning. Locations including Tokyo and Mount Fuji are confirmed. The Toyota GR GT and the Toyota Land Cruiser 250 are the cover vehicles. Forza Street was a free-to-play racing game developed by Electric Square that was initially released for Windows 10 as Miami Street on May 8, 2018. The game was re-branded as a Forza title on April 15, 2019, and was also released for iOS and Android on May 5, 2020. Forza Street used Unreal Engine 4 instead of ForzaTech. Unlike the main Motorsport and Horizon titles, Street featured short, quick street races, and was meant to be played on low-end devices. Gameplay involved players controlling only the acceleration and braking by pressing and releasing a button or a touch screen; steering was handled automatically. Players could also use nitrous to give their cars a speed boost. There was no definite cover car, as the app icon changed the cars out based on what special event was going on. Forza Street's reviews were mixed. Although its visuals were praised, it was criticized for its overly simplistic controls, its implementation of the freemium model, and tedious gameplay. Critics rank it as the worst title on their lists of Forza games that include the spin-off. On January 10, 2022, Andy Beaudoin, a principal design director at Turn 10 Studios, announced the closure of Forza Street in spring 2022, due to the "shift its focus to new and exciting Forza experiences." 
The game received its final update on the same date, reducing the energy recharge time, increasing energy storage, reducing wait times for car shows, reducing prices on most items purchasable using in-game currency, and disabling the purchasing of microtransactions, refunding customers who purchased any microtransactions in the last 30 days prior to the in-app store's closure. The game was shut down on April 11, 2022. Forza Customs is a free-to-play tile-matching video game developed by British developer Hutch Games, the second mobile spin-off in the Forza franchise, and the franchise's first non-racing game title, using the Unity engine instead of ForzaTech. Similar to Forza Street, it is a rebrand of a previously released game titled Custom Car Works, and is themed after car modification and customization. The game was closed and removed from app stores on March 10, 2025. Reception The Forza series is viewed as one of the most recognized brands of the racing genre. The games sold over 10 million copies by August 2014 and 16 million copies as of April 2021, becoming the sixth best-selling racing franchise. It is also one of the highest-grossing video game franchises, grossing over US$1 billion at retail by December 2016. Individually, Forza Motorsport 3 sold 3 million units by February 2010. Forza Motorsport 5 was bought by over one third of all Xbox One owners in February 2014, which Eurogamer estimates amounts to 1.3 million copies. By December 2016, around 2.5 million Forza Horizon 3 units were sold. Over 14 million unique players were registered in the Forza community on Xbox One and Windows 10 by December 2016. The first Forza Motorsport received critical acclaim for its realistic handling mechanics, paint job editor—both features that have reappeared in every subsequent version of Forza—and Xbox Live integration. Additional assists in Forza Motorsport 3's such as the rewind ability were praised. Reviewers also lauded Forza Motorsport 4's Autovista mode. The franchise as a whole has received generally favorable reviews. Every main title in the Forza Motorsport and Forza Horizon series has received an aggregate review score of at least 80 out of 100 on Metacritic; the only exception is Forza Motorsport 5, which at launch received criticism for featuring fewer cars and tracks compared to its predecessors (though some of the content withdrawn from the final release reappeared in the form of DLCs), as well as microtransactions in which players could purchase in-game tokens to progress faster, to the latter of which Turn 10 Studios responded by increasing rewards won at the end of a race and decreasing car prices. The concept of Forza Horizon received positive remarks for demonstrating the potential that the new series had, although the game's Drivatars' AI and sparse multiplayer were criticized. As of August 2019, Forza Horizon 4 surpassed 12 million players. The Horizon series has since outperformed the Motorsport games, which despite their technical leaps have struggled to replicate the former's success for not being substantially different and as trends show that players prefer Horizon's gameplay. The series is notable for employing one of the longest-running applications of machine learning in gaming for its AI. Since Horizon 3, the Forza Horizon series have been a perennial winner of Best Sports/Racing Game at The Game Awards. References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Python_(programming_language)#cite_note-188] | [TOKENS: 4314] |
Contents Python (programming language) Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. Python is dynamically type-checked and garbage-collected. It supports multiple programming paradigms, including structured (particularly procedural), object-oriented and functional programming. Guido van Rossum began working on Python in the late 1980s as a successor to the ABC programming language. Python 3.0, released in 2008, was a major revision and not completely backward-compatible with earlier versions. Beginning with Python 3.5, capabilities and keywords for typing were added to the language, allowing optional static typing. As of 2026[update], the Python Software Foundation supports Python 3.10, 3.11, 3.12, 3.13, and 3.14, following the project's annual release cycle and five-year support policy. Python 3.15 is currently in the alpha development phase, and the stable release is expected to come out in October 2026. Earlier versions in the 3.x series have reached end-of-life and no longer receive security updates. Python has gained widespread use in the machine learning community. It is widely taught as an introductory programming language. Since 2003, Python has consistently ranked in the top ten of the most popular programming languages in the TIOBE Programming Community Index, which ranks based on searches in 24 platforms. History Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde & Informatica (CWI) in the Netherlands. It was designed as a successor to the ABC programming language, which was inspired by SETL, capable of exception handling and interfacing with the Amoeba operating system. Python implementation began in December 1989. Van Rossum first released it in 1991 as Python 0.9.0. Van Rossum assumed sole responsibility for the project, as the lead developer, until 12 July 2018, when he announced his "permanent vacation" from responsibilities as Python's "benevolent dictator for life" (BDFL); this title was bestowed on him by the Python community to reflect his long-term commitment as the project's chief decision-maker. (He has since come out of retirement and is self-titled "BDFL-emeritus".) In January 2019, active Python core developers elected a five-member Steering Council to lead the project. The name Python derives from the British comedy series Monty Python's Flying Circus. (See § Naming.) Python 2.0 was released on 16 October 2000, featuring many new features such as list comprehensions, cycle-detecting garbage collection, reference counting, and Unicode support. Python 2.7's end-of-life was initially set for 2015, and then postponed to 2020 out of concern that a large body of existing code could not easily be forward-ported to Python 3. It no longer receives security patches or updates. While Python 2.7 and older versions are officially unsupported, a different unofficial Python implementation, PyPy, continues to support Python 2, i.e., "2.7.18+" (plus 3.11), with the plus signifying (at least some) "backported security updates". Python 3.0 was released on 3 December 2008, and was a major revision and not completely backward-compatible with earlier versions, with some new semantics and changed syntax. Python 2.7.18, released in 2020, was the last release of Python 2. Several releases in the Python 3.x series have added new syntax to the language, and made a few (considered very minor) backward-incompatible changes. 
As of January 2026[update], Python 3.14.3 is the latest stable release. All older 3.x versions had a security update down to Python 3.9.24 then again with 3.9.25, the final version in 3.9 series. Python 3.10 is, since November 2025, the oldest supported branch. Python 3.15 has an alpha released, and Android has an official downloadable executable available for Python 3.14. Releases receive two years of full support followed by three years of security support. Design philosophy and features Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of their features support functional programming and aspect-oriented programming – including metaprogramming and metaobjects. Many other paradigms are supported via extensions, including design by contract and logic programming. Python is often referred to as a 'glue language' because it is purposely designed to be able to integrate components written in other languages. Python uses dynamic typing and a combination of reference counting and a cycle-detecting garbage collector for memory management. It uses dynamic name resolution (late binding), which binds method and variable names during program execution. Python's design offers some support for functional programming in the "Lisp tradition". It has filter, map, and reduce functions; list comprehensions, dictionaries, sets, and generator expressions. The standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python's core philosophy is summarized in the Zen of Python (PEP 20) written by Tim Peters, which includes aphorisms such as these: However, Python has received criticism for violating these principles and adding unnecessary language bloat. Responses to these criticisms note that the Zen of Python is a guideline rather than a rule. The addition of some new features had been controversial: Guido van Rossum resigned as Benevolent Dictator for Life after conflict about adding the assignment expression operator in Python 3.8. Nevertheless, rather than building all functionality into its core, Python was designed to be highly extensible via modules. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, which represented the opposite approach. Python claims to strive for a simpler, less-cluttered syntax and grammar, while giving developers a choice in their coding methodology. Python lacks do .. while loops, which Rossum considered harmful. In contrast to Perl's motto "there is more than one way to do it", Python advocates an approach where "there should be one – and preferably only one – obvious way to do it". In practice, however, Python provides many ways to achieve a given goal. There are at least three ways to format a string literal, with no certainty as to which one a programmer should use. Alex Martelli is a Fellow at the Python Software Foundation and Python book author; he wrote that "To describe something as 'clever' is not considered a compliment in the Python culture." Python's developers typically prioritize readability over performance. 
For example, Python's developers reject patches to non-critical parts of the CPython reference implementation that would offer increases in speed that do not justify the cost of clarity and readability. Execution speed can be improved by moving speed-critical functions to extension modules written in languages such as C, or by using a just-in-time compiler like PyPy. Also, it is possible to transpile to other languages. However, this approach either fails to achieve the expected speed-up, since Python is a very dynamic language, or only a restricted subset of Python is compiled (with potential minor semantic changes). Python is meant to be a fun language to use. This goal is reflected in the name – a tribute to the British comedy group Monty Python – and in playful approaches to some tutorials and reference materials. For instance, some code examples use the terms "spam" and "eggs" (in reference to a Monty Python sketch), rather than the typical terms "foo" and "bar". A common neologism in the Python community is pythonic, which has a broad range of meanings related to program style: Pythonic code may use Python idioms well; be natural or show fluency in the language; or conform with Python's minimalist philosophy and emphasis on readability. Syntax and semantics Python is meant to be an easily readable language. Its formatting is visually uncluttered and often uses English keywords where other languages use punctuation. Unlike many other languages, it does not use curly brackets to delimit blocks, and semicolons after statements are allowed but rarely used. It has fewer syntactic exceptions and special cases than C or Pascal. Python uses whitespace indentation, rather than curly brackets or keywords, to delimit blocks. An increase in indentation comes after certain statements; a decrease in indentation signifies the end of the current block. Thus, the program's visual structure accurately represents its semantic structure. This feature is sometimes termed the off-side rule. Some other languages use indentation this way; but in most, indentation has no semantic meaning. The recommended indent size is four spaces. Python's statements include the assignment statement (=), which binds a name as a reference to a separate, dynamically allocated object. Variables may subsequently be rebound at any time to any object. In Python, a variable name is a generic reference holder without a fixed data type; however, it always refers to some object with a type. This is called dynamic typing, in contrast to statically-typed languages, where each variable may contain only a value of a certain type. Python does not support tail call optimization or first-class continuations; according to Van Rossum, the language never will. However, better support for coroutine-like functionality is provided by extending Python's generators. Before 2.5, generators were lazy iterators; data was passed unidirectionally out of the generator. From Python 2.5 on, it is possible to pass data back into a generator function, and from version 3.3, data can be passed through multiple stack levels, as illustrated in the sketch following this passage. Python also has a rich set of expressions. In Python, a distinction between expressions and statements is rigidly enforced, in contrast to languages such as Common Lisp, Scheme, or Ruby. 
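A minimal sketch (written for this document; the function names are invented) of the generator behaviour described above: send() passes data back into a generator (Python 2.5+), and yield from passes it through multiple stack levels (3.3+).

    def accumulator():
        total = 0
        while True:
            value = yield total       # yield the running total, then wait for send()
            total += value

    def delegator():
        yield from accumulator()      # forwards sent values through this extra stack level

    gen = delegator()
    next(gen)                         # prime the generator; the first yield produces 0
    print(gen.send(10))               # 10
    print(gen.send(5))                # 15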
This distinction between expressions and statements leads to some duplicated functionality. For example, a statement cannot be part of an expression; because of this restriction, expressions such as list and dict comprehensions (and lambda expressions) cannot contain statements. As a particular case, an assignment statement such as a = 1 cannot be part of the conditional expression of a conditional statement. Python uses duck typing, and it has typed objects but untyped variable names. Type constraints are not checked at definition time; rather, operations on an object may fail at usage time, indicating that the object is not of an appropriate type. Despite being dynamically typed, Python is strongly typed, forbidding operations that are poorly defined (e.g., adding a number and a string) rather than quietly attempting to interpret them. Python allows programmers to define their own types using classes, most often for object-oriented programming. New instances of classes are constructed by calling the class, for example, SpamClass() or EggsClass(); the classes are instances of the metaclass type (which is an instance of itself), thereby allowing metaprogramming and reflection. Before version 3.0, Python had two kinds of classes, both using the same syntax: old-style and new-style. Current Python versions support the semantics of only the new style. Python supports optional type annotations. These annotations are not enforced by the language, but may be used by external tools such as mypy to catch errors. Python includes a module, typing, that provides several type names for use in annotations. The mypy project also provides a compiler called mypyc, which leverages type annotations for optimization. Python includes conventional symbols for arithmetic operators (+, -, *, /), the floor-division operator //, and the modulo operator %. (With the modulo operator, a remainder can be negative, e.g., 4 % -3 == -2.) Also, Python offers the ** symbol for exponentiation, e.g. 5**3 == 125 and 9**0.5 == 3.0, and the matrix-multiplication operator @. These operators work as in traditional mathematics, with the same precedence rules; the operators + and - can also be unary, representing positive and negative numbers respectively. Division between integers produces floating-point results. The behavior of division has changed significantly over time: In Python terms, the / operator represents true division (or simply division), while the // operator represents floor division. Before version 3.0, the / operator represented classic division. Rounding towards negative infinity, though a different method than in most languages, adds consistency to Python. For instance, this rounding implies that the equation (a + b)//b == a//b + 1 is always true. Also, the rounding implies that the equation b*(a//b) + a%b == a is valid for both positive and negative values of a. As expected, the result of a%b lies in the half-open interval [0, b), where b is a positive integer; however, maintaining the validity of the equation requires that the result lie in the interval (b, 0] when b is negative. Python provides a round function for rounding a float to the nearest integer. For tie-breaking, Python 3 uses the round-to-even method: round(1.5) and round(2.5) both produce 2. Python versions before 3 used the round-away-from-zero method: round(0.5) is 1.0, and round(-0.5) is −1.0. Python allows Boolean expressions that contain multiple equality relations to be consistent with general usage in mathematics. 
For example, the expression a < b < c tests whether a is less than b and b is less than c. C-derived languages interpret this expression differently: in C, the expression would first evaluate a < b, resulting in 0 or 1, and that result would then be compared with c. Python uses arbitrary-precision arithmetic for all integer operations. The Decimal type/class in the decimal module provides decimal floating-point numbers to a pre-defined arbitrary precision with several rounding modes. The Fraction class in the fractions module provides arbitrary precision for rational numbers. Due to Python's extensive mathematics library and the third-party library NumPy, the language is frequently used for scientific scripting in tasks such as numerical data processing and manipulation. Functions are created in Python by using the def keyword. A function is defined similarly to how it is called, by first providing the function name and then the required parameters. An example of a function that prints its inputs appears in the sketch following this passage. To assign a default value to a function parameter in case no actual value is provided at run time, variable-definition syntax can be used inside the function header. Code examples A "Hello, World!" program and a program to calculate the factorial of a non-negative integer are also shown in the sketch following this passage. Libraries Python's large standard library is commonly cited as one of its greatest strengths. For Internet-facing applications, many standard formats and protocols such as MIME and HTTP are supported. The language includes modules for creating graphical user interfaces, connecting to relational databases, generating pseudorandom numbers, arithmetic with arbitrary-precision decimals, manipulating regular expressions, and unit testing. Some parts of the standard library are covered by specifications; for example, the Web Server Gateway Interface (WSGI) implementation wsgiref follows PEP 333, but most parts are specified by their code, internal documentation, and test suites. However, because most of the standard library is cross-platform Python code, only a few modules must be altered or rewritten for variant implementations. As of 13 March 2025, the Python Package Index (PyPI), the official repository for third-party Python software, contained more than 614,000 packages. Development environments Most Python implementations (including CPython) include a read–eval–print loop (REPL); this permits the environment to function as a command line interpreter, with which users enter statements sequentially and receive results immediately. Also, CPython is bundled with an integrated development environment (IDE) called IDLE, which is oriented toward beginners. Other shells, including IDLE and IPython, add additional capabilities such as improved auto-completion, session-state retention, and syntax highlighting. Standard desktop IDEs include PyCharm, Spyder, and Visual Studio Code, and there are also web browser-based IDEs. Implementations CPython is the reference implementation of Python. This implementation is written in C, meeting the C11 standard since version 3.11. Older versions use the C89 standard with several select C99 features, but third-party extensions are not limited to older C versions; e.g., they can be implemented using C11 or C++. CPython compiles Python programs into an intermediate bytecode, which is then executed by a virtual machine. CPython is distributed with a large standard library written in a combination of C and native Python. 
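The code examples referred to above (a function that prints its inputs, the "Hello, World!" program, and the factorial program) did not survive extraction from the source page. The listings below are stand-ins written for this document, not copies of the original examples; the names show and factorial are illustrative. A few assertions exercising chained comparisons, floor division, and round-half-to-even, as described earlier, are included as well.

    # "Hello, World!" program.
    print("Hello, World!")

    # A function that prints its inputs, with a default value for the second parameter.
    def show(label, value="(no value)"):
        print(label, value)

    show("answer", 42)
    show("answer")                    # falls back to the default value

    # Factorial of a non-negative integer.
    def factorial(n):
        if n < 0:
            raise ValueError("n must be non-negative")
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    print(factorial(5))               # 120

    # Behaviour discussed earlier in this section.
    a, b, c = 1, 2, 3
    assert a < b < c                  # chained comparison: a < b and b < c
    assert -7 // 2 == -4              # floor division rounds towards negative infinity
    assert round(2.5) == 2            # Python 3 rounds ties to the nearest even integer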
CPython is available for many platforms, including Windows and most modern Unix-like systems, including macOS (and Apple M1 Macs, since Python 3.9.1, using an experimental installer). Starting with Python 3.9, the Python installer intentionally fails to install on Windows 7 and 8; Windows XP was supported until Python 3.5, and there has been unofficial support for VMS. Platform portability was one of Python's earliest priorities. During development of Python 1 and 2, even OS/2 and Solaris were supported; since that time, support has been dropped for many platforms. All current Python versions (since 3.7) support only operating systems that feature multithreading, and Python now supports far fewer operating systems than in the past, as many outdated platforms have been dropped. All alternative implementations have at least slightly different semantics. For example, an alternative implementation may have unordered dictionaries, in contrast to current versions of CPython. As another example in the larger Python ecosystem, PyPy does not support the full CPython C API. Creating a standalone executable from Python code is often done by bundling an entire Python interpreter into the executable, which makes binaries large even for small programs; however, some implementations are capable of truly compiling Python. Alternative implementations include Stackless Python, a significant fork of CPython that implements microthreads. This implementation uses the call stack differently, thus allowing massively concurrent programs. PyPy also offers a stackless version. Just-in-time Python compilers have been developed, but are now unsupported. There are several compilers/transpilers to high-level object languages, whose source language is unrestricted Python, a subset of Python, or a language similar to Python; there are also specialized compilers, as well as older projects and compilers not designed for use with Python 3.x syntax. A performance comparison among various Python implementations, using a non-numerical (combinatorial) workload, was presented at EuroSciPy '13. In addition, Python's performance relative to other programming languages is benchmarked by The Computer Language Benchmarks Game. There are several approaches to optimizing Python performance, despite the inherent slowness of an interpreted language. Language Development Python's development is conducted mostly through the Python Enhancement Proposal (PEP) process; this process is the primary mechanism for proposing major new features, collecting community input on issues, and documenting Python design decisions. Python coding style is covered in PEP 8. Outstanding PEPs are reviewed and commented on by the Python community and the steering council. Enhancement of the language corresponds with development of the CPython reference implementation. The mailing list python-dev is the primary forum for the language's development. Specific issues were originally discussed in the Roundup bug tracker hosted by the foundation. In 2022, all issues and discussions were migrated to GitHub. Development originally took place on a self-hosted source-code repository running Mercurial, until Python moved to GitHub in January 2017. CPython's public releases come in three types, distinguished by which part of the version number is incremented. Many alpha, beta, and release candidates are also released as previews and for testing before final releases; a short version- and bytecode-introspection sketch follows this passage. 
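The implementation and release material above can be explored from a running interpreter. Here is a minimal sketch (standard library only; nothing in it is taken from the article) that reports which implementation and which major/minor/micro version is running, and disassembles a small function into the bytecode that the CPython virtual machine executes:

    import dis
    import platform
    import sys

    print(platform.python_implementation())   # e.g. 'CPython' or 'PyPy'
    print(sys.version_info)                    # major, minor, and micro release numbers

    def add(a, b):
        return a + b

    # Shows the compiled bytecode, e.g. LOAD_FAST and BINARY_OP
    # (BINARY_ADD on CPython versions before 3.11).
    dis.dis(add)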
Although there is a rough schedule for releases, they are often delayed if the code is not ready yet. Python's development team monitors the state of the code by running a large unit test suite during development. The major academic conference on Python is PyCon. Also, there are special Python mentoring programs, such as PyLadies. Naming Python's name is inspired by the British comedy group Monty Python, whom Python creator Guido van Rossum enjoyed while developing the language. Monty Python references appear frequently in Python code and culture; for example, the metasyntactic variables often used in Python literature are spam and eggs, rather than the traditional foo and bar. Also, the official Python documentation contains various references to Monty Python routines. Python users are sometimes referred to as "Pythonistas". |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Ross_47] | [TOKENS: 152] |
Contents Ross 47 Ross 47 is a variable star of spectral type M4 located in the constellation Orion. Based on parallax measurements, it lies 18.888 light-years from Earth. Ross 47 is a small red dwarf with 23.8% of the Sun's size and 6% of its luminosity. Its effective temperature is much cooler than the Sun's, about 3,330 K. It is a BY Draconis variable, a star whose brightness varies as it rotates, with multiple starspots contributing to the brightness variation. Its apparent magnitude varies from +11.48 to +11.55. Ross 47 was given the variable star designation V1352 Orionis in 1997. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Giv%27at_Shmuel] | [TOKENS: 608] |
Contents Giv'at Shmuel Giv'at Shmuel (Hebrew: גבעת שמואל, lit. 'Samuel's Hill') is a city in the Center District of Israel. It is located in the eastern part of the Gush Dan metropolitan area and is bordered by Ramat Gan and Bnei Brak to the west, Kiryat Ono to the south, and Petah Tikva to the east and north. In 2023 it had a population of 28,628. History Giv'at Shmuel was founded in 1944. It was named for the Romanian Zionist leader Samuel Pineles, founder and president of the Zionist Congress in Focșani and Vice-President of the First Zionist Congress in Basel. On November 5, 2007, the Israeli Minister of Interior accepted a committee recommendation to change the municipal status of Giv'at Shmuel to 'city'. Demographics At the end of 2019, the population of Givat Shmuel numbered 26,578 with a growth rate of 2.1%, and with the building of new neighborhoods it is planned to grow to 40,000. The population is a mix of religious and secular residents, with a socioeconomic ranking of 8 out of 10. Amongst immigrants from English-speaking countries, Giv'at Shmuel is home to Israel's largest community of lone immigrants, at approximately 950 students, young professionals, newly married couples and young families. It also has the country's highest rate of "successful aliyah", the share of immigrants who remain in Israel after five years. Since 2013, Nefesh B'Nefesh has organized various events and activities, and works with the local authorities to expand programming. The GSC (Givat Shmuel Community, R.A.) was formed, creating an infrastructural backbone for English-speaking activities in the area. Sports Maccabi Habik'a, formerly Elitzur Givat Shmuel, is a basketball team that played in Ligat HaAl, the top division of Israeli basketball, until relegation in 2007. The team reached the State Cup final in 2003, but lost to Maccabi Tel Aviv. In 2010, an annual 10 km (6 mi) race was inaugurated with 350 runners from all over the country. Landmarks A leisure and sports center was established on an area of about 32 dunams (8 acres) in northeastern Giv'at Shmuel; it incorporates tennis courts, fitness rooms, swimming pools, a roller-skating rink, a cafeteria and other services, along with a water park covering an area of about 5 dunams (1.25 acres). In the center of town there is a park named after the Israeli astronaut Ilan Ramon, who died in the Space Shuttle Columbia disaster. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Second_inauguration_of_Donald_Trump] | [TOKENS: 4112] |
Contents Second inauguration of Donald Trump The inauguration of Donald Trump as the 47th president of the United States took place on Monday, January 20, 2025. Due to freezing temperatures and high winds, it was held inside the U.S. Capitol rotunda in Washington, D.C. It was the 60th U.S. presidential inauguration and the second inauguration of Trump as U.S. president, marking the commencement of his second and final presidential term, and the first term of JD Vance as vice president. It was the second time a former U.S. president was inaugurated again, after the second inauguration of Grover Cleveland in 1893. Trump's first inauguration was eight years earlier, on January 20, 2017. The event included a swearing-in ceremony, a signing ceremony, an inaugural luncheon, a first honors ceremony, and then a procession and parade at Capital One Arena. Inaugural balls were held at various venues before and after the inaugural ceremonies. The Capitol rotunda can seat approximately 600 people; the number of attendees has not been disclosed. Context The inauguration marked the formal culmination of Donald Trump's presidential transition that began with his election on November 6, 2024, which made him the president-elect. Trump and his running mate JD Vance were formally elected by the Electoral College on December 17, 2024. The victory was certified through an electoral vote count by a joint session of Congress on January 6, 2025. In accordance with Article I, Section 6 of the United States Constitution, Vance resigned his seat in the U.S. Senate effective midnight on January 10, 2025. Planning Held on the third Monday of January, the inauguration was on the same day as Martin Luther King Jr. Day, which marked the third time an inauguration occurred on the same date as the holiday, following the second inaugurations of Bill Clinton in 1997 and Barack Obama in 2013 (January 21). On January 17, Trump announced the inauguration ceremony would be moved inside to the Capitol rotunda due to expected cold weather, like the public second inauguration of Ronald Reagan on January 21, 1985. In May 2024, both houses of Congress appointed a Joint Committee on Inaugural Ceremonies to oversee the construction of the platform and other temporary structures that were expected to be used for the later-canceled outdoor ceremonies and celebrations. Construction of the inaugural platform ceremonially began on September 18, 2024, with the driving of the first nail by United States senator Amy Klobuchar using a nail made from iron ore mined and processed from the Iron Range in Minnesota. In October 2024, the United States Capitol Police conducted an intelligence assessment that concluded an activist group "with a history of large-scale demonstrations involving illegal activity plans to protest the Inauguration regardless of the outcome" and that other groups protesting the Israel–Hamas war were "nearly certain to target the Inauguration" regardless of who would be elected president. According to the New York Times, organizers of the 2017 Women's March were committed to recreating it under the refreshed branding "People's March". On January 18, thousands participated in the march, but turnout fell short of the expected 50,000 attendees. Agencies expected to be involved with planning of the ceremony included the U.S. Capitol Police, the Washington, D.C., Metropolitan Police, and the U.S. Park Police. Twenty-four states offered National Guard support for the electoral vote certification and inaugural ceremonies. 
On January 17, approximately 8,000 National Guard soldiers were deputized as special deputy United States marshals, granting them police authority within Washington, D.C. On November 9, 2024, Trump announced the formation of the Trump Vance Inaugural Committee, Inc., a 501(c)(4) organization dedicated to planning inaugural events. The committee co-chairs were real-estate developer Steve Witkoff and former U.S. senator Kelly Loeffler, longtime friends and supporters of Trump. Various technology companies and their leaders pledged donations and services for the inauguration. OpenAI CEO Sam Altman said through a spokesperson that he would make a $1 million personal donation. Mark Zuckerberg, the head of Meta, the parent company of Facebook and Instagram, sent $1 million. It was also reported in The Wall Street Journal that Amazon's founder and executive chairman, Jeff Bezos, offered to stream the ceremony on Amazon Prime Video, and this amounted to a $1 million in-kind donation on top of a $1 million cash donation. Apple CEO Tim Cook personally donated $1 million. Uber and its CEO Dara Khosrowshahi each agreed to donate $1 million to the inauguration. Alphabet donated $1 million and supported an inauguration livestream with a direct link on the homepage of YouTube. Microsoft, Adobe, and Perplexity also donated $1 million. NPR quoted Margaret O'Mara, a historian of Silicon Valley at the University of Washington, as saying that these donations reflected the wish of tech leaders who had previously been in conflict with Trump to reduce regulatory pressure on their companies under the incoming administration. Ford Motor Company and General Motors announced that they would donate $1 million each and provide a fleet of vehicles for the inauguration. Toyota, Chevron, Hyundai, and Stellantis also donated $1 million. Various financial services businesses and their leaders donated at least $1 million, including Goldman Sachs, Bank of America, JPMorgan, Kraken, Coinbase, Intuit, Robinhood, Ken Griffin, Ripple, and Ondo Finance. Major donors from the telecommunications industry included AT&T, Comcast, and Charter Communications. Major donors from the healthcare and pharmaceutical industry included PhRMA, Pfizer, and Hims & Hers. Major donors from the manufacturing and industrial sector included Stanley Black & Decker, Pratt Industries, Boeing, and Lockheed Martin. Other major donors included Delta Air Lines and McDonald's. In April 2025, it was reported that $239 million was donated to Trump's inaugural committee, more than doubling the previous record of $107 million raised for Trump's 2017 inauguration, including 29 gifts totaling $13 million from subsidiaries of companies based outside the United States. Pre-inaugural events On the morning of January 19, Trump and Vice President-elect Vance visited the Arlington National Cemetery, where they placed a wreath at the Tomb of the Unknown Soldier. They were joined by family members of some of the victims of the 2021 Kabul airport attack. On the evening of January 19, the Trump campaign organized the "Make America Great Again Victory Rally", a rally for supporters at Capital One Arena in Washington, D.C. The event featured performances by Kid Rock and Lee Greenwood, as well as speeches by Trump and Megyn Kelly. Trump also did his signature dance to a rendition of "Y.M.C.A." performed by Village People, who joined him on stage. 
On the morning of the inauguration, on January 20, after staying the night at the Blair House, the traditional house used by the incoming president-elect due to its proximity to the White House,[citation needed] Trump and his wife, Melania, and JD Vance and his wife, Usha, attended a church service at St. John's Episcopal Church. Every president since James Madison has attended the church at least once, while every president since Franklin D. Roosevelt has attended it on the day of their inauguration. The service was led by Robert Jeffress, a Southern Baptist minister who campaigned for Trump during the election. After the church service, Trump and his wife went to the White House to meet with President Joe Biden and First Lady Jill Biden. The Bidens greeted the Trumps, and they then posed for photos in front of the White House press corps. Afterward, they held a tea reception inside the White House, along with Vice President Kamala Harris and her husband, Doug Emhoff, and JD Vance and his wife, Usha Vance. As per tradition, following the meeting between the president and the president-elect, they shared the presidential motorcade limousine, and made their way to the Capitol for the inaugural ceremony. Inaugural events The transfer of power included the transition of official administration X (formerly named Twitter) accounts @POTUS and @VP. Members of the Trump administration also assumed ownership of a number of institutional accounts, including @WhiteHouse, @FLOTUS for First Lady Melania Trump, @SLOTUS for Second Lady Usha Vance, @WHCOS for White House chief of staff Susie Wiles, and @PressSec for White House press secretary Karoline Leavitt. New executive branch websites were initialized; previous administrations' websites reside in the National Archives. Trump's inauguration marked the first time that a U.S. president-elect formally welcomed foreign leaders to the ceremony. Outgoing U.S. president Joe Biden (who defeated Trump in 2020 and was inaugurated as the 46th president in 2021), outgoing U.S. vice president Kamala Harris (who had been Trump's main opponent in 2024), former U.S. presidents Bill Clinton, George W. Bush, and Barack Obama (whom Trump first succeeded in 2017) attended the inauguration. Former first ladies Hillary Clinton (Trump's former opponent in 2016) and Laura Bush also attended the inauguration, but former first lady Michelle Obama was absent. Former U.S. vice presidents Dan Quayle and Mike Pence (who served under Trump during his first term) and former second lady Marilyn Quayle were also in attendance, while former vice presidents Al Gore and Dick Cheney and former second lady Karen Pence were absent. New York mayor Eric Adams and media proprietor Rupert Murdoch also attended the inauguration. Chinese president Xi Jinping was invited to the ceremony, but sent vice president Han Zheng as his special representative instead. This marked the first time a senior official of China's government was sent to a US presidential inauguration. El Salvador's president Nayib Bukele and Italian prime minister Giorgia Meloni were also reportedly invited. Israeli prime minister Benjamin Netanyahu initially planned to attend, but ultimately did not after not receiving a formal invitation. Argentine president Javier Milei and the last democratically elected Georgian president Salome Zourabichvili had been reportedly planning to attend. 
Former Brazilian president Jair Bolsonaro indicated that he was invited, but he would have needed the government to return his confiscated passport in order to travel. Russia confirmed that President Vladimir Putin did not receive an invitation. Trump stated that he had not invited President Volodymyr Zelenskyy to his inauguration but expressed willingness to welcome him if he decided to attend. British prime minister Keir Starmer did not attend the inauguration, while former British prime ministers Boris Johnson and Liz Truss did. Ecuadorian president Daniel Noboa, first lady Lavinia Valbonesi, and Paraguayan president Santiago Peña were also planning to attend. Edmundo González, whom the U.S. government recognizes as the winner of the 2024 Venezuelan presidential election, also reportedly attended. The foreign ministers of the Quad nations – S. Jaishankar from India, Penny Wong from Australia, and Takeshi Iwaya from Japan – also attended the inauguration. They were expected to meet with Trump the day after the ceremony for discussions. A number of right-wing populist politicians attended the inauguration. French Reconquête politicians Éric Zemmour and Sarah Knafo, National Rally politicians Louis Aliot, Julien Sanchez, and Alexandre Sabatou, and Identity–Liberties leader Marion Maréchal attended the ceremony. Spanish Vox leader Santiago Abascal, Belgian Vlaams Belang leader Tom Van Grieken, Reform UK leader Nigel Farage, Alternative for Germany (AfD) co-leader Tino Chrupalla, Estonian Conservative People's Party leader Martin Helme, Alliance for the Union of Romanians leader George Simion, Danish People's Party leader Morten Messerschmidt, Portuguese Chega leader Andre Ventura, Hungarian Fidesz vice-president Kinga Gál, and former Polish Prime Minister Mateusz Morawiecki were also in attendance. Czech invitees associated with Patriots.eu included ANO MEP Ondřej Knotek, Patriots for Europe vice-president and MEP Klára Dostálová, Přísaha senator Robert Šlachta, MEP Filip Turek of the Motorists for Themselves, and the party's founder and leader Petr Macinka; Macinka, as manager of the Václav Klaus Institute, was also invited to the Foster's Outriders Foundation inauguration ball at the Museum of the Bible to meet Tucker Carlson and Rick Santorum. AfD members of the Bundestag Jan Wenzel Schmidt and Beatrix von Storch, alongside her husband Sven von Storch, confirmed their attendance. AfD co-leader Alice Weidel, Freedom Party of Austria leader Herbert Kickl, and Bulgarian Revival leader Kostadin Kostadinov were invited, but did not attend the ceremony. Hristijan Mickoski, Prime Minister of North Macedonia, was invited to the inauguration. Numerous businesspeople, including Bernard Arnault, Delphine Arnault, Sergey Brin, Elon Musk, Jeff Bezos, and Mark Zuckerberg, who are among the world's richest people, attended the inauguration. They had a prominent role at the event, seated together on the platform alongside other distinguished guests, including Cabinet nominees and elected officials. TikTok CEO Shou Zi Chew attended the inauguration. Alphabet's Sundar Pichai, Apple's Tim Cook, OpenAI's Sam Altman, Reliance's Mukesh Ambani, and Uber's Dara Khosrowshahi also attended the event. Las Vegas Sands owner Miriam Adelson also attended the ceremony. 
Several celebrities and sports figures – including Victor Willis, Carrie Underwood (who sang "America the Beautiful"), Christopher Macchio (who sang the national anthem), Antonio Brown, Mike Tyson, Jorge Masvidal, Evander Kane, Gianni Infantino, Anuel AA, Justin Quiles, Rod Wave, Kodak Black, Lee Greenwood, Fivio Foreign, Jake and Logan Paul, Theo Von, Conor McGregor, Danica Patrick, Dana White, Joe Rogan, and Wayne Gretzky – attended the ceremony. Media personalities Charlie Kirk, Laura Ingraham, and Tucker Carlson also attended the event. An order of events for the January 20, 2025, inauguration was published by the Joint Congressional Committee on Inaugural Ceremonies and the National Park Service. Signing Ceremony Supreme Court Associate Justice Brett Kavanaugh administered the vice presidential oath of office to JD Vance. It was the first time since 1969 that "Hail, Columbia" was not played for the new vice president immediately upon taking the oath. Chief Justice John Roberts hastily administered the presidential oath of office to Donald Trump. Trump's wife, Melania, held two Bibles for Trump to place his left hand on while reciting the oath, in accordance with custom, but he did not do so. This is merely a tradition; incoming presidents are not required to place their hand on the Bibles. President Trump's inaugural address started with references to the indictments against him, which he described as unfounded and politically motivated, followed by the announcement of the new administration's policy priorities, including immigration restrictions, the easing of environmental regulations, anti-DEI and anti-gender ideology policies, the establishment of a Department of Government Efficiency, and a negotiated settlement to the Russo-Ukrainian war. Trump defined his presidency as the beginning of a golden era for America. The Washington Post described Trump's inaugural address as attempting to emphasize unity but, as his speeches usually do, veering off course and coming across as "dark". He described himself as chosen by God and said that he was "tested and challenged more than any president in our 250-year history," which The Post noted would place him above every other president in U.S. history, including George Washington and Abraham Lincoln. Trump invoked the phrase "Manifest Destiny" as he described an expansionist agenda, and criticized Democrats and other leaders. NPR said the speech gave the American public a better idea of what Trump's policies and directives would be, noted that he said nothing about the January 6 U.S. Capitol attack or his prior promises of political retribution, and pointed out his derision of the outgoing administration in front of Biden and Harris, the departing president and vice president. Invocations preceding the inaugural address were offered by Cardinal Timothy M. Dolan of the Roman Catholic Archdiocese of New York and Rev. Franklin Graham of the Billy Graham Evangelistic Association. Benedictions were offered by Pastor Lorenzo Sewell, Rabbi Ari Berman, and Father Frank Mann of the Roman Catholic Diocese of Brooklyn. Imam Husham Al-Husainy, a Muslim cleric from Dearborn, Michigan, was initially on the program but did not speak or appear at the event. After the inaugural ceremony, President Donald Trump, First Lady Melania Trump, Vice President JD Vance and Second Lady Usha Vance escorted former president Joe Biden and former first lady Jill Biden to a departure ceremony on the east side of the U.S. Capitol. 
The Trumps exchanged remarks and bid farewell to the Bidens at the base of the helicopter that would transport them to Joint Base Andrews, and then returned to the steps of the Capitol building where they waved as the Bidens' helicopter took off. Meanwhile, Harris and Emhoff took a limousine and then boarded a plane for Los Angeles. Following the Bidens' departure, President Trump gave remarks in front of supporters at Emancipation Hall. Customarily, inaugural balls are held at various venues before and after the inaugural ceremonies. Official balls, at which the president and first lady appear, are organized by the inaugural committee, while unofficial balls are not. Three official inaugural balls occurred, at which performers including Nelly, Rascal Flatts, and Jason Aldean appeared. A larger number of unofficial balls were organized. Viewership An average of approximately 24.6 million viewers watched the inauguration across 15 networks. Viewership peaked at 34.4 million when Trump took the oath of office at 12:15 PM ET. Viewership was lower than that of Biden's 2021 inauguration as well as Trump's first inauguration in 2017. The Nielsen figures, sourced from Adweek, do not include streaming viewership. Protests Several members of the Democratic Party in the 119th Congress decided to boycott the inauguration. This boycott was perceived as an initial show of opposition to the incoming administration. Multiple reasons were given for the decision to boycott, including the event coinciding with Martin Luther King Jr. Day events and memories from the January 6 United States Capitol attack. As of December 14, a number of House Democrats had publicly stated that they would not be attending the inauguration. Protest rallies and marches occurred in cities and towns all over the United States the weekend before and on the day of the inauguration. Organizers of the Women's March (which first took place the day after Trump's first inauguration and every year thereafter) rebranded their event the People's March and had events in at least 70 locations. The People's March was co-organized with Abortion Rights Now, Sierra Club, Planned Parenthood, ACLU and National Women's Law Center. According to the Associated Press, attendance at one of the Washington marches was "far fewer than the expected 50,000 participants, already just one-tenth the size of the first march". "We Fight Back" rallies, organized by the People's Forum, Party for Socialism and Liberation, the ANSWER Coalition, Democratic Socialists of America, Dream Defenders, CODEPINK, labor unions, tenant unions and other groups were held in 90 locations. Around the world, anti-Trump protests occurred at consulates and elsewhere in Mexico City, London, Paris, Brussels, Amsterdam, Berlin, Edinburgh, Lisbon, Prague, Warsaw, Panama City, and Manila. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/United_States_non-interventionism_before_entering_World_War_II] | [TOKENS: 5580] |
Contents United States non-interventionism United States non-interventionism primarily refers to the foreign policy that was eventually applied by the United States between the late 18th century and the first half of the 20th century whereby it sought to avoid alliances with other nations in order to prevent itself from being drawn into wars that were not related to the direct territorial self-defense of the United States. Neutrality and non-interventionism found support among elite and popular opinion in the United States, which varied depending on the international context and the country's interests. At times, the degree and nature of this policy was better known as isolationism, such as during the interwar period, while some consider the term isolationism to be a pejorative used to discredit non-interventionist policy. It is important to distinguish between the terms isolationism and non-interventionism, as they represent two distinct types of foreign policy. Isolationism is the act of completely disengaging from global affairs, including military alliances, international organisations and economic treaties. Non-interventionism, although also opposed to military engagement, still leaves room for diplomatic and economic relations with the rest of the world. This can be seen in the build-up to World War II, when non-interventionists opposed direct military involvement in Europe while supporting the Allies with economic aid such as the Lend-Lease Act. Due to the start of the Cold War in the aftermath of World War II and the rise of the United States as a global superpower, its traditional foreign policy turned towards American imperialism with diplomatic and military interventionism, engaging in or otherwise intervening in virtually every overseas armed conflict since then, and concluding multiple bilateral and regional military alliances, chiefly the North Atlantic Treaty Organization. Non-interventionist policies have had continued support from some Americans since World War II, mostly regarding specific armed conflicts in Korea, Vietnam, Syria, and Ukraine. Background Robert Walpole, Britain's first Whig prime minister, proclaimed in 1723: "My politics are to keep free from all engagements as long as we possibly can." He emphasized economic advantage and rejected the idea of intervening in European affairs to maintain a balance of power. Walpole's position was known to Americans. However, during the American Revolution, the Second Continental Congress debated forming an alliance with France. It rejected non-interventionism when it was apparent that the American Revolutionary War could be won in no other manner than a military alliance with France, which Benjamin Franklin successfully negotiated in 1778. After Britain and France went to war in 1792, George Washington declared neutrality, with unanimous support of his cabinet, after deciding that the treaty with France of 1778 did not apply. Secretary of the Treasury Alexander Hamilton concurred, arguing in Pacificus No. 2 that the treaty stipulated a defensive alliance, and not an offensive one. Washington's Farewell Address of 1796 explicitly announced the policy of American non-interventionism. No entangling alliances (19th century) President Thomas Jefferson extended Washington's ideas about foreign policy in his March 4, 1801 inaugural address. Jefferson said that one of the "essential principles of our government" is that of "peace, commerce, and honest friendship with all nations, entangling alliances with none." 
He also stated that "Commerce with all nations, alliance with none", should be the motto of the United States. Extending at times into isolationism, both Jefferson and Madison also practiced the boycotting of belligerent nations with the Embargo Act of 1807. In 1823, President James Monroe articulated what would come to be known as the Monroe Doctrine, which some have interpreted as non-interventionist in intent: "In the wars of the European powers, in matters relating to themselves, we have never taken part, nor does it comport with our policy, so to do. It is only when our rights are invaded, or seriously menaced that we resent injuries, or make preparations for our defense." It was applied to Hawaii in 1842 in support of eventual annexation there, and to support U.S. expansion on the North American continent. During the Hungarian Revolution of 1848–1849, the United States adhered to its formal policy of non-intervention while offering diplomatic support to the Hungarian cause. American public opinion overwhelmingly favored the revolutionaries. President Zachary Taylor expressed sympathy for the "Magyar patriots," and the U.S. Congress debated resolutions in their favor. The Austrian Empire's suppression of the revolution—bolstered by Russian military intervention—sparked outrage in the United States and led to a heated diplomatic exchange between Austrian Ambassador Johann Von Hülsemann and Secretary of State Daniel Webster, who defended America’s right to comment on foreign affairs. Though the U.S. declined to recognize Hungarian independence or offer military aid, it secured the release of Hungarian leader Lajos Kossuth from Ottoman custody and welcomed him on a celebrated tour of the United States. After Tsar Alexander II put down the 1863 January Uprising in Poland, French Emperor Napoleon III asked the United States to "join in a protest to the Tsar." Secretary of State William H. Seward declined, "defending 'our policy of non-intervention—straight, absolute, and peculiar as it may seem to other nations,'" and insisted that "[t]he American people must be content to recommend the cause of human progress by the wisdom with which they should exercise the powers of self-government, forbearing at all times, and in every way, from foreign alliances, intervention, and interference." President Ulysses S. Grant attempted to annex the Dominican Republic in 1870, but failed to get the support of the Radical Republicans in the Senate. The United States' policy of non-intervention was wholly abandoned with the Spanish–American War, followed by the Philippine–American War from 1899 to 1902. 20th century non-interventionism President Theodore Roosevelt's administration is credited with inciting the Panamanian Revolt against Colombia, completed November 1903, in order to secure construction rights for the Panama Canal (begun in 1904).[citation needed] President Woodrow Wilson was able to navigate neutrality in World War I for about three years, and to win 1916 reelection with the slogan "He kept us out of war." The neutrality policy was supported by the tradition of shunning foreign entanglements, and by the large population of immigrants from Europe with divided loyalties in the conflict. America did enter the war in April 1917, however. Congress voted to declare war on Germany, 373 to 50 in the House of Representatives and 82 to 6 in the Senate. 
Technically the US joined the side of the Triple Entente only as an "associated power" fighting the same enemy, not as officially allied with the Entente. A few months after the declaration of war, Wilson gave a speech to Congress outlining his aims for conclusion of the conflict, labeled the Fourteen Points. That American proclamation was less triumphalist than the stated aims of some other belligerents, and its final point proposed that a "general association of nations must be formed under specific covenants for the purpose of affording mutual guarantees of political independence and territorial integrity to great and small states alike." After the war, Wilson traveled to Europe and remained there for months to labor on the post-war treaty, longer than any previous presidential sojourn outside the country. In that Treaty of Versailles, Wilson's "general association of nations" was formulated as the League of Nations. In the wake of the First World War, non-interventionist tendencies gained ascendancy. The Treaty of Versailles, and thus United States participation in the League of Nations, even with reservations, was rejected by the Senate in the final months of Wilson's presidency. Republican Senate leader Henry Cabot Lodge supported the Treaty with reservations to be sure Congress had final authority on sending the U.S. into war. Wilson and his Democratic supporters rejected the Lodge Reservations. The strongest opposition to American entry into the League of Nations came from the Senate, where a tight-knit faction known as the Irreconcilables, led by William Borah and George Norris, had great objections regarding the clauses of the treaty which compelled America to come to the defense of other nations. Senator William Borah, of Idaho, declared that it would "purchase peace at the cost of any part of our [American] independence." Senator Hiram Johnson, of California, denounced the League of Nations as a "gigantic war trust." While some of the sentiment was grounded in adherence to Constitutional principles, most of the sentiment reflected a reassertion of nativist and inward-looking policy. American society in the interwar period was characterized by a division in values between urban and rural areas, as Americans in urban areas tended to be liberal while those in rural areas tended to be conservative. Adding to the division was that Americans in rural areas tended to be Protestant of British and/or German descent, while those in urban areas were often Catholic or Jewish and came from eastern or southern Europe. The rural-urban divide was seen most dramatically in the intense debate about Prohibition, as urban Americans tended to be "wets" while rural Americans tended to be "drys". The way that American society was fractured along an urban-rural divide served to distract public attention from foreign affairs. In the 1920s, the State Department had about 600 employees in total with an annual budget of $2 million, which reflected a lack of interest on the part of Congress in foreign affairs. The State Department was very much an elitist body that recruited mostly from graduates of the select "Ivy League" universities, which reflected the idea that foreign policy was the concern of elites. Likewise, the feeling that the United States was taking in far too many immigrants from eastern and southern Europe, who were widely depicted in the American media as criminals and revolutionaries, led to laws restricting immigration from Europe. 
In turn, the anti-immigrant mood increased isolationism, as the picture of Europe as a place overflowing with dangerous criminals and equally dangerous Communist revolutionaries led to the corresponding conclusion that the United States should have as little as possible to do with nations whose peoples were depicted as disagreeable and unpleasant. In the same way, the fact that Congress had virtually banned all non-white immigration to the United States fostered indifference about the fate of non-white nations such as China and Ethiopia. The debate about Prohibition in the 1920s also encouraged nativist and isolationist feelings, as "drys" often engaged in American exceptionalism by arguing that the United States was a uniquely morally pure nation that had banned alcohol, unlike the rest of the world which remained "wet" and was depicted as mired in corruption and decadence. The United States acted independently to become a major player in the 1920s in international negotiations and treaties. The Harding Administration achieved naval disarmament among the major powers through the Washington Naval Conference in 1921–22. The Dawes Plan refinanced war debts and helped restore prosperity to Germany. In August 1928, fifteen nations signed the Kellogg–Briand Pact, brainchild of American secretary of state Frank Kellogg and French Foreign Minister Aristide Briand. The pact, which was said to have outlawed war and to show the United States' commitment to international peace, had its flaws. For example, it did not hold the United States to the conditions of any existing treaties, it still allowed European nations the right to self-defense, and it stated that if one nation broke the Pact, it would be up to the other signatories to enforce it. Briand had sent a message on 6 April 1927 to mark the 10th anniversary of the American declaration of war on Germany in 1917 proposing that France and the United States sign a non-aggression pact. Briand was attempting to create a Franco-American alliance to counter Germany, as he envisioned turning the negotiations for the non-aggression pact into some sort of alliance. Kellogg had no interest in an alliance with France, and countered with a vague offer for a treaty to ban all war. The Kellogg–Briand Pact was more a sign of good intentions on the part of the US than a legitimate step towards the maintenance of world peace. Another reason for isolationism was the belief that the Treaty of Versailles had been too harsh towards Germany, together with the question of war debts owed to the United States. American public opinion was especially hostile towards France, depicted in the words of the Republican senator Reed Smoot, who in August 1930 called France a greedy "Shylock" intent upon taking the last "pound of flesh" from Germany via reparations while refusing to pay its war debts to the United States. In the early 1930s, French diplomats at the embassy in Washington stated that the image of France was at an all-time low in the United States, with American public opinion being especially incensed by France's decision to default on its war debts on 15 December 1932. French diplomats throughout the interwar period complained that the German embassy and consulates in the United States waged a slick, well-funded propaganda campaign designed to persuade the Americans that the Treaty of Versailles was a monstrous, unjust peace treaty, while the French embassy and consulates did nothing equivalent to make the case for France. 
German propaganda tended to persuade many Americans that it had been a huge mistake to declare war on Germany in 1917 and that it would be wrong for the United States to go to war to maintain the international order created by the Treaty of Versailles. The economic depression that ensued after the Crash of 1929 also continued to abet non-intervention. The attention of the country focused mostly on addressing the problems of the national economy. Isolationism fit the national mood of the 1930s: the economic crisis made Americans unwilling to extend resources to others, and this, combined with the ongoing failure of the Allies to pay back their war debts, created disillusionment about whether foreign interventions accomplished anything other than profiting imperialists. The rise of aggressive imperialist policies by Fascist Italy and the Empire of Japan led to conflicts such as the Italian conquest of Ethiopia and the Japanese invasion of Manchuria. These events led to ineffectual condemnations by the League of Nations. The official American response was muted. America also did not take sides in the brutal Spanish Civil War and withdrew its troops from Haiti with the inauguration of the Good Neighbor Policy in 1934. In an attempt to influence American public opinion into taking a more favorable view of France, the Quai d'Orsay founded in 1935 the Association pour la Constitution aux Etats-Unis d'un Office Français de Renseignements, based in New York, a cultural propaganda body designed to give Americans a more favorable image of France. Better known as the French Information Center, the group created a French Cinema Center to distribute French films in the United States, and by 1939 it had handed out for free about 5,000 copies of French films to American universities and high schools. The French Information Center provided briefings to American journalists and columnists about the French point of view, with the emphasis on France as a democracy facing potentially powerful enemies in the form of totalitarian dictatorships such as Germany and Italy. Such propaganda did not seek to challenge American isolationism directly, but the prevailing theme was that France and the United States as democracies had more in common than what divided them. By 1939, René Doynel de Saint-Quentin, the French ambassador in Washington, reported that the image of France was much better than it had been in 1932. During this period, a significant proportion of non-interventionist sentiment in the United States was shaped by women's peace organisations. A key organisation was the Women's International League for Peace and Freedom (WILPF), a group of female pacifists who had opposed U.S. intervention in World War I. Key African American women activists included Addie Hunton, Mary Church Terrell and Maude White Katz. They were significant because the organisation's membership included women from diverse backgrounds, including African American women, and they altered the trajectory of the organisation's goals, arguing that one could challenge U.S. imperialism abroad without first solving the domestic issue of racial equality at home. As Europe moved closer to war in the late 1930s, the United States Congress continued to demand American neutrality. Between 1936 and 1937, much to the dismay of President Franklin D. Roosevelt, Congress passed the Neutrality Acts. 
For example, in the final Neutrality Act, Americans could not sail on ships flying the flag of a belligerent nation or trade arms with warring nations. Such activities had played a role in American entrance into World War I. On 1 September 1939, Germany invaded Poland, marking the start of World War II, and the United Kingdom and France subsequently declared war on Germany. In an address to the American people two days later, Roosevelt assured the nation that he would do all he could to keep them out of war. "When peace has been broken anywhere, the peace of all countries everywhere is in danger," Roosevelt said. Even though he was intent on neutrality as the official policy of the United States, he still acknowledged the dangers of staying out of the war. He also cautioned the American people not to let their wish to avoid war at all costs supersede the security of the nation. The war in Europe split the American people into two camps: non-interventionists and interventionists. The two sides argued over America's involvement in World War II. The basic principle of the interventionist argument was fear of German invasion. One of the rhetorical criticisms of interventionism was that it was driven by the so-called merchants of death, businesses that had profited from World War I and were lobbying for involvement in order to profit from another large war. By the summer of 1940, France had suffered a stunning defeat by Germany, and Britain was Germany's only remaining democratic adversary. In a 1940 speech, Roosevelt argued, "Some, indeed, still hold to the now somewhat obvious delusion that we … can safely permit the United States to become a lone island … in a world dominated by the philosophy of force." A Life survey published in July found that in the summer of 1940, 67% of Americans believed that a German-Italian victory would endanger the United States, that if such an event occurred 88% supported "arm[ing] to the teeth at any expense to be prepared for any trouble", and that 71% favored "the immediate adoption of compulsory military training for all young men". The magazine wrote that the survey showed "the emergence of a majority attitude very different from that of six or even three months ago". Ultimately, the ideological rift between the ideals of the United States and the goals of the fascist powers empowered the interventionist argument. Writer Archibald MacLeish asked, "How could we sit back as spectators of a war against ourselves?" In an address to the American people on December 29, 1940, Roosevelt said, "the Axis not merely admits but proclaims that there can be no ultimate peace between their philosophy of government and our philosophy of government." There were still many who held on to non-interventionism. Although a minority, they were well organized, and had a powerful presence in Congress. Pro-German or anti-British opinion contributed to non-interventionism. Roosevelt's national share of the 1940 presidential vote declined by seven percentage points from 1936. Of the 20 counties in which his share declined by 35 points or more, 19 were largely German-speaking. Of the 35 counties in which his share declined by 25 to 34 points, German was the largest or second-largest original nationality in 31. Non-interventionists rooted a significant portion of their arguments in historical precedent, citing events such as Washington's Farewell Address and the failure of World War I. 
"If we have strong defenses and understand and believe in what we are defending, we need fear nobody in this world," Robert Maynard Hutchins, President of the University of Chicago, wrote in a 1940 essay. Isolationists believed that the safety of the nation was more important than any foreign war. As 1940 became 1941, the actions of the Roosevelt administration made it more and more clear that the United States was on a course to war. This policy shift, driven by the President, came in two phases. The first came in 1939 with the passage of the Fourth Neutrality Act, which permitted the United States to trade arms with belligerent nations, as long as these nations came to America to retrieve the arms, and pay for them in cash. This policy was quickly dubbed, 'Cash and Carry.' The second phase was the Lend-Lease Act of early 1941. This act allowed the President "to lend, lease, sell, or barter arms, ammunition, food, or any 'defense article' or any 'defense information' to 'the government of any country whose defense the President deems vital to the defense of the United States.'" American public opinion supported Roosevelt's actions. As United States involvement in the Battle of the Atlantic grew with incidents such as the sinking of the USS Reuben James (DD-245), by late 1941 72% of Americans agreed that "the biggest job facing this country today is to help defeat the Nazi Government", and 70% thought that defeating Germany was more important than staying out of the war. After the attack on Pearl Harbor caused America to enter the war in December 1941, isolationists such as Charles Lindbergh's America First Committee and Herbert Hoover announced their support of the war effort. Isolationist families' sons fought in the war as much as others. Propaganda activities conducted by German embassy staff such as George Sylvester Viereck, assisted by isolationist politicians such as Hamilton Fish III, were investigated and dampened by federal prosecutors before and after U.S. joined WWII. In 1941, Fish was implicated in the America First Committee franking controversy, whereby isolationist politicians were found to be using their free mailing privileges to aid the German propaganda campaign. William Power Maloney's grand jury investigated Nazi penetration in the United States and secured convictions of Viereck and George Hill, Fish's chief of staff. Ohio Senator Robert A. Taft was a leading opponent of interventionism after 1945, although it always played a secondary role to his deep interest in domestic affairs. Historian George Fujii, citing the Taft papers, argues: In 1951, in the midst of bitter partisan debate over the Korean War, Taft increasingly spoke out on foreign policy issues. According to his biographer James T. Patterson: Norman A. Graebner argues: Eisenhower won the nomination and secured Taft's support by promising Taft a dominant voice in domestic policies, while Eisenhower's internationalism would set the foreign-policy agenda. Graebner argues that Eisenhower succeeded in moving the conservative Republicans away from their traditional attacks on foreign aid and reciprocal trade policies, and collective security arrangements, to support for those policies. By 1964 the Republican conservatives rallied behind Barry Goldwater who was an aggressive advocate of an anti-communist internationalist foreign policy. Goldwater wanted to roll back Communism and win the Cold War, asking "Why Not Victory?" 
Non-interventionism in the 21st century During the presidency of Barack Obama, some members of the United States federal government, including President Obama and Secretary of State John Kerry, considered intervening militarily in the Syrian Civil War. A poll from late April 2013 found that 62% of Americans thought that the "United States has no responsibility to do something about the fighting in Syria between government forces and antigovernment groups," with only twenty-five percent disagreeing with that statement. A writer for The New York Times referred to this as "an isolationist streak," a characterization international relations scholar Stephen Walt strongly objected to, calling the description "sloppy journalism." According to Walt, "the overwhelming majority of people who have doubts about the wisdom of deeper involvement in Syria—including yours truly—are not 'isolationist.' They are merely sensible people who recognize that we may not have vital interests there, that deeper involvement may not lead to a better outcome and could make things worse, and who believe that the last thing the United States needs to do is to get dragged into yet another nasty sectarian fight in the Arab/Islamic world." In December 2013, the Pew Research Center reported that their newest poll, "American's Place in the World 2013," had revealed that 52 percent of respondents in the national poll said that the United States "should mind its own business internationally and let other countries get along the best they can on their own." This was the most people to answer that question this way in the history of the question, one which pollsters began asking in 1964. Only about a third of respondents felt this way a decade ago. A July 2014 poll of "battleground voters" across the United States found "77 percent in favor of full withdrawal from Afghanistan by the end of 2016; only 15 percent and 17 percent interested in more involvement in Syria and Ukraine, respectively; and 67 percent agreeing with the statement that, 'U.S. military actions should be limited to direct threats to our national security.'" Polls indicate growing impatience among Americans with the war in Ukraine, with 2023 polls showing just 17% of Americans think their country is "not doing enough" to support Ukraine. This percentage is the lowest since the war began. Conservative and libertarian policies Rathbun (2008) compares three separate themes in conservative policies since the 1980s: conservatism, neoconservatism, and isolationism. These approaches are similar in that they all invoked the mantle of "realism" and pursued foreign policy goals designed to promote national interests. Conservatives were the only group that was "realist" in the academic sense in that they defined the national interest narrowly, strove for balances of power internationally, viewed international relations as amoral, and especially valued sovereignty. By contrast, neoconservatives based their foreign policy on nationalism, and isolationists sought to minimize any involvement in foreign affairs and raise new barriers to immigration. Former Republican Congressman Ron Paul favored a return to the non-interventionist policies of Thomas Jefferson and frequently opposed military intervention in countries like Iran and Iraq. After Russia's full-scale invasion of Ukraine, the Republican Party has been divided on Ukraine's aid, believing that it is not in the interests of the United States to get involved in a "proxy war" against Russia. 
President Donald Trump has called on the United States to push for peace talks rather than continue to support Ukraine. Supporters of non-interventionism Criticism In his World Policy Journal review of Bill Kauffman's 1995 book America First! Its History, Culture, and Politics, Benjamin Schwartz described America's history of isolationism as a tragedy rooted in Puritan thinking.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Black_hole#cite_note-34] | [TOKENS: 13839] |
Contents Black hole A black hole is an astronomical body so compact that its gravity prevents anything, including light, from escaping. Albert Einstein's theory of general relativity predicts that a sufficiently compact mass will form a black hole. The boundary of no escape is called the event horizon. In general relativity, a black hole's event horizon seals an object's fate but produces no locally detectable change when crossed. General relativity also predicts that every black hole should have a central singularity, where the curvature of spacetime is infinite. In many ways, a black hole acts like an ideal black body, as it reflects no light. Quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterise a black hole. Due to his influential research, the Schwarzschild metric is named after him. David Finkelstein, in 1958, first interpreted Schwarzschild's model as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes typically form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses may form by absorbing other stars and merging with other black holes, or via direct collapse of gas clouds. There is consensus that supermassive black holes exist in the centres of most galaxies. The presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as visible light. Matter falling toward a black hole can form an accretion disk of infalling plasma, heated by friction and emitting light. In extreme cases, this creates a quasar, some of the brightest objects in the universe. Merging black holes can also be detected by observation of the gravitational waves they emit. If other stars are orbiting a black hole, their orbits can be used to determine the black hole's mass and location. Such observations can be used to exclude possible alternatives such as neutron stars. In this way, astronomers have identified numerous stellar black hole candidates in binary systems and established that the radio source known as Sagittarius A*, at the core of the Milky Way galaxy, contains a supermassive black hole of about 4.3 million solar masses. History The idea of a body so massive that even light could not escape was first proposed in the late 18th century by English astronomer and clergyman John Michell and independently by French scientist Pierre-Simon Laplace. Both scholars proposed very large stars in contrast to the modern concept of an extremely dense object. 
Michell's idea, in a short part of a letter published in 1784, calculated that a star with the same density but 500 times the radius of the sun would not let any emitted light escape; the surface escape velocity would exceed the speed of light.: 122 Michell correctly hypothesized that such supermassive but non-radiating bodies might be detectable through their gravitational effects on nearby visible bodies. In 1796, Laplace mentioned that a star could be invisible if it were sufficiently large while speculating on the origin of the Solar System in his book Exposition du Système du Monde. Franz Xaver von Zach asked Laplace for a mathematical analysis, which Laplace provided and published in a journal edited by von Zach. In 1905, Albert Einstein showed that the laws of electromagnetism would be invariant under a Lorentz transformation: they would be identical for observers travelling at different velocities relative to each other. This discovery became known as the principle of special relativity. Although the laws of mechanics had already been shown to be invariant, gravity remained yet to be included.: 19 In 1907, Einstein published a paper proposing his equivalence principle, the hypothesis that inertial mass and gravitational mass have a common cause. Using the principle, Einstein predicted the redshift and half of the lensing effect of gravity on light; the full prediction of gravitational lensing required development of general relativity.: 19 By 1915, Einstein refined these ideas into his general theory of relativity, which explained how matter affects spacetime, which in turn affects the motion of other matter. This formed the basis for black hole physics. Only a few months after Einstein published the field equations describing general relativity, astrophysicist Karl Schwarzschild set out to apply the idea to stars. He assumed spherical symmetry with no spin and found a solution to Einstein's equations.: 124 A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution. At a certain radius from the center of the mass, the Schwarzschild solution became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this radius, which later became known as the Schwarzschild radius, was not understood at the time. Many physicists of the early 20th century were skeptical of the existence of black holes. In a 1926 popular science book, Arthur Eddington critiqued the idea of a star with mass compressed to its Schwarzschild radius as a flaw in the then-poorly-understood theory of general relativity.: 134 In 1939, Einstein himself used his theory of general relativity in an attempt to prove that black holes were impossible. His work relied on increasing pressure or increasing centrifugal force balancing the force of gravity so that the object would not collapse beyond its Schwarzschild radius. He missed the possibility that implosion would drive the system below this critical value.: 135 By the 1920s, astronomers had classified a number of white dwarf stars as too cool and dense to be explained by the gradual cooling of ordinary stars. 
In 1926, Ralph Fowler showed that quantum-mechanical degeneracy pressure was larger than thermal pressure at these densities.: 145 In 1931, Subrahmanyan Chandrasekhar calculated that a non-rotating body of electron-degenerate matter below a certain limiting mass is stable, and by 1934 he showed that this explained the catalog of white dwarf stars.: 151 When Chandrasekhar announced his results, Eddington pointed out that stars above this limit would radiate until they were sufficiently dense to prevent light from exiting, a conclusion he considered absurd. Eddington and, later, Lev Landau argued that some yet unknown mechanism would stop the collapse. In the 1930s, Fritz Zwicky and Walter Baade studied stellar novae, focusing on exceptionally bright ones they called supernovae. Zwicky promoted the idea that supernovae produced stars with the density of atomic nuclei—neutron stars—but this idea was largely ignored.: 171 In 1939, based on Chandrasekhar's reasoning, J. Robert Oppenheimer and George Volkoff predicted that neutron stars below a certain mass limit, later called the Tolman–Oppenheimer–Volkoff limit, would be stable due to neutron degeneracy pressure. Above that limit, they reasoned that either their model would not apply or that gravitational contraction would not stop.: 380 John Archibald Wheeler and two of his students resolved questions about the model behind the Tolman–Oppenheimer–Volkoff (TOV) limit. Harrison and Wheeler developed the equations of state relating density to pressure for cold matter all the way through electron degeneracy and neutron degeneracy. Masami Wakano and Wheeler then used the equations to compute the equilibrium curve for stars, relating mass to circumference. They found no additional features that would invalidate the TOV limit. This meant that the only thing that could prevent black holes from forming was a dynamic process ejecting sufficient mass from a star as it cooled.: 205 The modern concept of black holes was formulated by Robert Oppenheimer and his student Hartland Snyder in 1939.: 80 In the paper, Oppenheimer and Snyder solved Einstein's equations of general relativity for an idealized imploding star, in a model later called the Oppenheimer–Snyder model, then described the results from far outside the star. The implosion starts as one might expect: the star material rapidly collapses inward. However, as the density of the star increases, gravitational time dilation increases and the collapse, viewed from afar, seems to slow down further and further until the star reaches its Schwarzschild radius, where it appears frozen in time.: 217 In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, calling it "a perfect unidirectional membrane: causal influences can cross it in only one direction". In this sense, events that occur inside of the black hole cannot affect events that occur outside of the black hole. Finkelstein created a new reference frame to include the point of view of infalling observers.: 103 Finkelstein's new frame of reference allowed events at the surface of an imploding star to be related to events far away. By 1962 the two points of view were reconciled, convincing many skeptics that implosion into a black hole made physical sense.: 226 The era from the mid-1960s to the mid-1970s was the "golden age of black hole research", when general relativity and black holes became mainstream subjects of research.: 258 In this period, more general black hole solutions were found. 
In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the cylindrically symmetric solution for a black hole that is both rotating and electrically charged. In 1967, Werner Israel found that the Schwarzschild solution was the only possible solution for a nonspinning, uncharged black hole, meaning that a Schwarzschild black hole would be defined by its mass alone. Similar identities were later found for Reissner-Nordstrom and Kerr black holes, defined only by their mass and their charge or spin respectively. Together, these findings became known as the no-hair theorem, which states that a stationary black hole is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge. At first, it was suspected that the strange mathematical singularities found in each of the black hole solutions only appeared due to the assumption that a black hole would be perfectly spherically symmetric, and therefore the singularities would not appear in generic situations where black holes would not necessarily be symmetric. This view was held in particular by Vladimir Belinski, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions, although they would later reverse their positions. However, in 1965, Roger Penrose proved that general relativity without quantum mechanics requires that singularities appear in all black holes. Astronomical observations also made great strides during this era. In 1967, Antony Hewish and Jocelyn Bell Burnell discovered pulsars and by 1969, these were shown to be rapidly rotating neutron stars. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities, but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse. Based on observations in Greenwich and Toronto in the early 1970s, Cygnus X-1, a galactic X-ray source discovered in 1964, became the first astronomical object commonly accepted to be a black hole. Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed: 442 when Hawking, in 1974, showed that quantum field theory implies that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole, predicting the effect now known as Hawking radiation. While Cygnus X-1, a stellar-mass black hole, was generally accepted by the scientific community as a black hole by the end of 1973, it would be decades before a supermassive black hole would gain the same broad recognition. Although, as early as the 1960s, physicists such as Donald Lynden-Bell and Martin Rees had suggested that powerful quasars in the center of galaxies were powered by accreting supermassive black holes, little observational proof existed at the time. However, the Hubble Space Telescope, launched decades later, found that supermassive black holes were not only present in these active galactic nuclei, but that supermassive black holes in the center of galaxies were ubiquitous: Almost every galaxy had a supermassive black hole at its center, many of which were quiescent. 
In 1999, David Merritt proposed the M–sigma relation, which related the dispersion of the velocity of matter in the center bulge of a galaxy to the mass of the supermassive black hole at its core. Subsequent studies confirmed this correlation. Around the same time, based on telescope observations of the velocities of stars at the center of the Milky Way galaxy, independent work groups led by Andrea Ghez and Reinhard Genzel concluded that the compact radio source in the center of the galaxy, Sagittarius A*, was likely a supermassive black hole. On 11 February 2016, the LIGO Scientific Collaboration and Virgo Collaboration announced the first direct detection of gravitational waves, named GW150914, representing the first observation of a black hole merger. At the time of the merger, the black holes were approximately 1.4 billion light-years away from Earth and had masses of 30 and 35 solar masses.: 6 In 2017, Rainer Weiss, Kip Thorne, and Barry Barish, who had spearheaded the project, were awarded the Nobel Prize in Physics for their work. Since the initial discovery in 2015, hundreds more gravitational waves have been observed by LIGO and another interferometer, Virgo. On 10 April 2019, the first direct image of a black hole and its vicinity was published, following observations made by the Event Horizon Telescope (EHT) in 2017 of the supermassive black hole in Messier 87's galactic centre. In 2022, the Event Horizon Telescope collaboration released an image of the black hole in the center of the Milky Way galaxy, Sagittarius A*; The data had been collected in 2017. In 2020, the Nobel Prize in Physics was awarded for work on black holes. Andrea Ghez and Reinhard Genzel shared one-half for their discovery that Sagittarius A* is a supermassive black hole. Penrose received the other half for his work showing that the mathematics of general relativity requires the formation of black holes. Cosmologists lamented that Hawking's extensive theoretical work on black holes would not be honored since he died in 2018. In December 1967, a student reportedly suggested the phrase black hole at a lecture by John Wheeler; Wheeler adopted the term for its brevity and "advertising value", and Wheeler's stature in the field ensured it quickly caught on, leading some to credit Wheeler with coining the phrase. However, the term was used by others around that time. Science writer Marcia Bartusiak traces the term black hole to physicist Robert H. Dicke, who in the early 1960s reportedly compared the phenomenon to the Black Hole of Calcutta, notorious as a prison where people entered but never left alive. The term was used in print by Life and Science News magazines in 1963, and by science journalist Ann Ewing in her article "'Black Holes' in Space", dated 18 January 1964, which was a report on a meeting of the American Association for the Advancement of Science held in Cleveland, Ohio. Definition A black hole is generally defined as a region of spacetime from which no information-carrying signals or objects can escape. However, verifying an object as a black hole by this definition would require waiting for an infinite time and at an infinite distance from the black hole to verify that indeed, nothing has escaped, and thus cannot be used to identify a physical black hole. Broadly, physicists do not have a precisely-agreed-upon definition of a black hole. Among astrophysicists, a black hole is a compact object with a mass larger than four solar masses. 
A black hole may also be defined as a reservoir of information: 142 or a region where space is falling inwards faster than the speed of light. Properties The no-hair theorem postulates that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, electric charge, and angular momentum; the black hole is otherwise featureless. If the conjecture is true, any two black holes that share the same values for these properties, or parameters, are indistinguishable from one another. The degree to which the conjecture is true for real black holes is currently an unsolved problem. The simplest static black holes have mass but neither electric charge nor angular momentum. According to Birkhoff's theorem, these Schwarzschild black holes correspond to the only vacuum solution that is spherically symmetric. Solutions describing more general black holes also exist. Non-rotating charged black holes are described by the Reissner–Nordström metric, while the Kerr metric describes a non-charged rotating black hole. The most general stationary black hole solution known is the Kerr–Newman metric, which describes a black hole with both charge and angular momentum. Contrary to the popular notion of a black hole "sucking in everything" in its surroundings, from far away the external gravitational field of a black hole is identical to that of any other body of the same mass. While a black hole can theoretically have any positive mass, the charge and angular momentum are constrained by the mass. The total electric charge Q and the total angular momentum J are expected to satisfy the inequality {\displaystyle {\frac {Q^{2}}{4\pi \epsilon _{0}}}+{\frac {c^{2}J^{2}}{GM^{2}}}\leq GM^{2}} for a black hole of mass M. Black holes with the maximum possible charge or spin satisfying this inequality are called extremal black holes. Solutions of Einstein's equations that violate this inequality exist, but they do not possess an event horizon. These are so-called naked singularities that can be observed from the outside. Because these singularities make the universe inherently unpredictable, many physicists believe they could not exist. The weak cosmic censorship hypothesis, proposed by Sir Roger Penrose, rules out the formation of such singularities through the gravitational collapse of realistic matter. However, this hypothesis has not yet been proven, and some physicists believe that naked singularities could exist. It is also unknown whether black holes could even become extremal, forming naked singularities, since natural processes counteract increasing spin and charge when a black hole becomes near-extremal. The total mass of a black hole can be estimated by analyzing the motion of objects near the black hole, such as stars or gas. All black holes spin, often rapidly; one stellar-mass black hole, GRS 1915+105, has been estimated to spin at over 1,000 revolutions per second. The Milky Way's central black hole Sagittarius A* rotates at about 90% of the maximum rate. The spin rate can be inferred from measurements of atomic spectral lines in the X-ray range. As gas near the black hole plunges inward, high-energy X-ray emission from electron-positron pairs illuminates the gas further out, appearing red-shifted due to relativistic effects.
Depending on the spin of the black hole, this plunge happens at different radii from the hole, with different degrees of redshift. Astronomers can use the gap between the X-ray emission of the outer disk and the redshifted emission from plunging material to determine the spin of the black hole. A newer way to estimate spin is based on the temperature of gases accreting onto the black hole. The method requires an independent measurement of the black hole's mass and of the inclination angle of the accretion disk, followed by computer modeling. Gravitational waves from coalescing binary black holes can also provide the spin of both progenitor black holes and the merged hole, but such events are rare. A spinning black hole has angular momentum. The supermassive black hole in the center of the Messier 87 (M87) galaxy appears to have an angular momentum very close to the maximum theoretical value. That uncharged limit is {\displaystyle J\leq {\frac {GM^{2}}{c}},} allowing definition of a dimensionless spin magnitude such that {\displaystyle 0\leq {\frac {cJ}{GM^{2}}}\leq 1.} Most black holes are believed to have an approximately neutral charge. For example, Michal Zajaček, Arman Tursunov, Andreas Eckart, and Silke Britzen found the electric charge of Sagittarius A* to be at least ten orders of magnitude below the theoretical maximum. A charged black hole repels other like charges just like any other charged object. If a black hole were to become charged, particles with an opposite sign of charge would be pulled in by the extra electromagnetic force, while particles with the same sign of charge would be repelled, neutralizing the black hole. This effect may not be as strong if the black hole is also spinning. The presence of charge can reduce the diameter of the black hole by up to 38%. The charge Q for a nonspinning black hole is bounded by {\displaystyle Q\leq {\sqrt {G}}M,} where G is the gravitational constant and M is the black hole's mass. A brief numerical illustration of these spin and charge bounds is sketched below, after this passage. Classification Black holes can have a wide range of masses. The minimum mass of a black hole formed by stellar gravitational collapse is governed by the maximum mass of a neutron star and is believed to be approximately two to four solar masses. However, theoretical primordial black holes, believed to have formed soon after the Big Bang, could be far smaller, with masses as little as 10−5 grams at formation. These very small black holes are sometimes called micro black holes. Black holes formed by stellar collapse are called stellar black holes. Estimates of their maximum mass at formation vary, but generally range from 10 to 100 solar masses, with higher estimates for black holes formed from low-metallicity progenitor stars. The mass of a black hole formed via a supernova has a lower bound: if the progenitor star is too small, the collapse may be stopped by the degeneracy pressure of the star's constituents, which condenses the matter into an exotic denser state. Degeneracy pressure arises from the Pauli exclusion principle: particles of the same type resist being squeezed into the same quantum state. Smaller progenitor stars, with masses less than about 8 M☉, will be held together by the degeneracy pressure of electrons and will become white dwarfs. For more massive progenitor stars, electron degeneracy pressure is no longer strong enough to resist the force of gravity and the star will be held together by neutron degeneracy pressure, which can occur at much higher densities, forming a neutron star.
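The spin and charge bounds quoted above lend themselves to a quick numerical check. The following is a minimal Python sketch, not taken from the article; it works in SI units with standard physical constants, and the 10-solar-mass black hole and the spin values used are purely illustrative assumptions.

```python
import math

# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
EPS0 = 8.854e-12     # vacuum permittivity, F/m
M_SUN = 1.989e30     # solar mass, kg

def dimensionless_spin(mass, ang_momentum):
    """Spin parameter a* = cJ / (G M^2); it lies between 0 and 1 for a black hole."""
    return c * ang_momentum / (G * mass**2)

def satisfies_extremality_bound(mass, ang_momentum, charge=0.0):
    """Check Q^2/(4*pi*eps0) + c^2 J^2/(G M^2) <= G M^2, the inequality quoted above."""
    lhs = charge**2 / (4 * math.pi * EPS0) + (c * ang_momentum)**2 / (G * mass**2)
    return lhs <= G * mass**2

# Illustrative (assumed) example: a 10-solar-mass black hole
M = 10 * M_SUN
J_max = G * M**2 / c                                 # extremal angular momentum, J <= G M^2 / c

print(dimensionless_spin(M, 0.9 * J_max))            # 0.9 -> allowed
print(satisfies_extremality_bound(M, 0.9 * J_max))   # True
print(satisfies_extremality_bound(M, 1.1 * J_max))   # False: no event horizon (a naked singularity)
```

With zero charge the bound reduces to the dimensionless spin lying between 0 and 1, matching the range stated above.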
If the star is still too massive, even neutron degeneracy pressure will not be able to resist the force of gravity and the star will collapse into a black hole.: 5.8 Stellar black holes can also gain mass via accretion of nearby matter, often from a companion object such as a star. Black holes that are larger than stellar black holes but smaller than supermassive black holes are called intermediate-mass black holes, with masses of approximately 102 to 105 solar masses. These black holes seem to be rarer than their stellar and supermassive counterparts, with relatively few candidates having been observed. Physicists have speculated that such black holes may form from collisions in globular and star clusters or at the center of low-mass galaxies. They may also form as the result of mergers of smaller black holes, with several LIGO observations finding merged black holes within the 110-350 solar mass range. The black holes with the largest masses are called supermassive black holes, with masses more than 106 times that of the Sun. These black holes are believed to exist at the centers of almost every large galaxy, including the Milky Way. Some scientists have proposed a subcategory of even larger black holes, called ultramassive black holes, with masses greater than 109-1010 solar masses. Theoretical models predict that the accretion disc that feeds black holes will be unstable once a black hole reaches 50-100 billion times the mass of the Sun, setting a rough upper limit to black hole mass. Structure While black holes are conceptually invisible sinks of all matter and light, in astronomical settings, their enormous gravity alters the motion of surrounding objects and pulls nearby gas inwards at near-light speed, making the area around black holes the brightest objects in the universe. Some black holes have relativistic jets—thin streams of plasma travelling away from the black hole at more than one-tenth of the speed of light. A small faction of the matter falling towards the black hole gets accelerated away along the hole rotation axis. These jets can extend as far as millions of parsecs from the black hole itself. Black holes of any mass can have jets. However, they are typically observed around spinning black holes with strongly-magnetized accretion disks. Relativistic jets were more common in the early universe, when galaxies and their corresponding supermassive black holes were rapidly gaining mass. All black holes with jets also have an accretion disk, but the jets are usually brighter than the disk. Quasars, typically found in other galaxies, are believed to be supermassive black holes with jets; microquasars are believed to be stellar-mass objects with jets, typically observed in the Milky Way. The mechanism of formation of jets is not yet known, but several options have been proposed. One method proposed to fuel these jets is the Blandford-Znajek process, which suggests that the dragging of magnetic field lines by a black hole's rotation could launch jets of matter into space. The Penrose process, which involves extraction of a black hole's rotational energy, has also been proposed as a potential mechanism of jet propulsion. 
Due to conservation of angular momentum, gas falling into the gravitational well created by a massive object will typically form a disk-like structure around the object.: 242 As the disk's angular momentum is transferred outward due to internal processes, its matter falls farther inward, converting its gravitational energy into heat and releasing a large flux of X-rays. The temperature of these disks can range from thousands to millions of kelvins, and temperatures can differ throughout a single accretion disk. Accretion disks can also emit in other parts of the electromagnetic spectrum, depending on the disk's turbulence and magnetization and the black hole's mass and angular momentum. Accretion disks can be defined as geometrically thin or geometrically thick. Geometrically thin disks are mostly confined to the black hole's equatorial plane and have a well-defined edge at the innermost stable circular orbit (ISCO), while geometrically thick disks are supported by internal pressure and temperature and can extend inside the ISCO. Disks with high rates of electron scattering and absorption, appearing bright and opaque, are called optically thick; optically thin disks are more translucent and produce fainter images when viewed from afar. Accretion disks of black holes accreting beyond the Eddington limit are often referred to as Polish doughnuts due to their thick, toroidal shape resembling that of a doughnut. Quasar accretion disks are expected to usually appear blue in color. The disk for a stellar black hole, on the other hand, would likely look orange, yellow, or red, with its inner regions being the brightest. Theoretical research suggests that the hotter a disk is, the bluer it should be, although this is not always supported by observations of real astronomical objects. Accretion disk colors may also be altered by the Doppler effect, with the part of the disk travelling towards an observer appearing bluer and brighter and the part of the disk travelling away from the observer appearing redder and dimmer. In Newtonian gravity, test particles can stably orbit at arbitrary distances from a central object. In general relativity, however, there exists a smallest possible radius for which a massive particle can orbit stably. Any infinitesimal inward perturbation to this orbit will lead to the particle spiraling into the black hole, and any outward perturbation will, depending on the energy, cause the particle to spiral in, move to a stable orbit further from the black hole, or escape to infinity. This orbit is called the innermost stable circular orbit, or ISCO. The location of the ISCO depends on the spin of the black hole and the spin of the particle itself. In the case of a Schwarzschild black hole (spin zero) and a particle without spin, the location of the ISCO is {\displaystyle r_{\rm {ISCO}}=3\,r_{\text{s}}={\frac {6\,GM}{c^{2}}},} where {\displaystyle r_{\rm {ISCO}}} is the radius of the ISCO, {\displaystyle r_{\text{s}}} is the Schwarzschild radius of the black hole, {\displaystyle G} is the gravitational constant, and {\displaystyle c} is the speed of light. The radius of this orbit changes slightly based on particle spin. For charged black holes, the ISCO moves inwards. For spinning black holes, the ISCO is moved inwards for particles orbiting in the same direction that the black hole is spinning (prograde) and outwards for particles orbiting in the opposite direction (retrograde). A brief numerical evaluation of the Schwarzschild ISCO formula is sketched below.
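As a rough numerical companion to the ISCO formula above, the following minimal Python sketch (not part of the article) evaluates the Schwarzschild radius and the non-spinning ISCO for two masses: one solar mass, and the roughly 4.3 million solar masses quoted earlier for Sagittarius A*.

```python
# Minimal sketch: r_s = 2GM/c^2 and r_ISCO = 3 r_s = 6GM/c^2 for a
# non-spinning, uncharged (Schwarzschild) black hole and a spinless test particle.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass):
    return 2 * G * mass / c**2

def isco_radius(mass):
    # Innermost stable circular orbit around a Schwarzschild black hole
    return 3 * schwarzschild_radius(mass)

for label, mass in [("1 M_sun", M_SUN), ("Sgr A* (~4.3e6 M_sun)", 4.3e6 * M_SUN)]:
    print(f"{label}: r_s ~ {schwarzschild_radius(mass)/1e3:.3g} km, "
          f"r_ISCO ~ {isco_radius(mass)/1e3:.3g} km")
```

For one solar mass this gives a Schwarzschild radius of roughly 3 km and an ISCO radius of roughly 9 km, consistent with the 2.95 km per solar mass scaling quoted later in this section.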
For example, the ISCO for a particle orbiting retrograde can be as far out as about 9 r s {\displaystyle 9r_{\text{s}}} , while the ISCO for a particle orbiting prograde can be as close as at the event horizon itself. The photon sphere is a spherical boundary for which photons moving on tangents to that sphere are bent completely around the black hole, possibly orbiting multiple times. Light rays with impact parameters less than the radius of the photon sphere enter the black hole. For Schwarzschild black holes, the photon sphere has a radius 1.5 times the Schwarzschild radius; the radius for non-Schwarzschild black holes is at least 1.5 times the radius of the event horizon. When viewed from a great distance, the photon sphere creates an observable black hole shadow. Since no light emerges from within the black hole, this shadow is the limit for possible observations.: 152 The shadow of colliding black holes should have characteristic warped shapes, allowing scientists to detect black holes that are about to merge. While light can still escape from the photon sphere, any light that crosses the photon sphere on an inbound trajectory will be captured by the black hole. Therefore, any light that reaches an outside observer from the photon sphere must have been emitted by objects between the photon sphere and the event horizon. Light emitted towards the photon sphere may also curve around the black hole and return to the emitter. For a rotating, uncharged black hole, the radius of the photon sphere depends on the spin parameter and whether the photon is orbiting prograde or retrograde. For a photon orbiting prograde, the photon sphere will be 1-3 Schwarzschild radii from the center of the black hole, while for a photon orbiting retrograde, the photon sphere will be between 3-5 Schwarzschild radii from the center of the black hole. The exact location of the photon sphere depends on the magnitude of the black hole's rotation. For a charged, nonrotating black hole, there will only be one photon sphere, and the radius of the photon sphere will decrease for increasing black hole charge. For non-extremal, charged, rotating black holes, there will always be two photon spheres, with the exact radii depending on the parameters of the black hole. Near a rotating black hole, spacetime rotates similar to a vortex. The rotating spacetime will drag any matter and light into rotation around the spinning black hole. This effect of general relativity, called frame dragging, gets stronger closer to the spinning mass. The region of spacetime in which it is impossible to stay still is called the ergosphere. The ergosphere of a black hole is a volume bounded by the black hole's event horizon and the ergosurface, which coincides with the event horizon at the poles but bulges out from it around the equator. Matter and radiation can escape from the ergosphere. Through the Penrose process, objects can emerge from the ergosphere with more energy than they entered with. The extra energy is taken from the rotational energy of the black hole, slowing down the rotation of the black hole.: 268 A variation of the Penrose process in the presence of strong magnetic fields, the Blandford–Znajek process, is considered a likely mechanism for the enormous luminosity and relativistic jets of quasars and other active galactic nuclei. The observable region of spacetime around a black hole closest to its event horizon is called the plunging region. 
In this area it is no longer possible for free-falling matter to follow circular orbits or stop a final descent into the black hole. Instead, it will rapidly plunge toward the black hole at close to the speed of light, growing increasingly hot and producing a characteristic, detectable thermal emission. However, light and radiation emitted from this region can still escape from the black hole's gravitational pull. For a nonspinning, uncharged black hole, the radius of the event horizon, or Schwarzschild radius, is proportional to the mass, M, through {\displaystyle r_{\mathrm {s} }={\frac {2GM}{c^{2}}}\approx 2.95\,{\frac {M}{M_{\odot }}}~\mathrm {km} ,} where rs is the Schwarzschild radius and M☉ is the mass of the Sun.: 124 For a black hole with nonzero spin or electric charge, the radius is smaller,[Note 1] until an extremal black hole could have an event horizon close to {\displaystyle r_{\mathrm {+} }={\frac {GM}{c^{2}}},} half the radius of a nonspinning, uncharged black hole of the same mass. Since the volume within the Schwarzschild radius increases with the cube of the radius, the average density of a black hole inside its Schwarzschild radius is inversely proportional to the square of its mass: supermassive black holes are much less dense than stellar black holes. The average density of a 108 M☉ black hole is comparable to that of water; a short numerical check of these scalings is sketched below, after this passage. The defining feature of a black hole is the existence of an event horizon, a boundary in spacetime through which matter and light can pass only inward towards the center of the black hole. Nothing, not even light, can escape from inside the event horizon. The event horizon is referred to as such because if an event occurs within the boundary, information from that event cannot reach or affect an outside observer, making it impossible to determine whether such an event occurred.: 179 For non-rotating black holes, the geometry of the event horizon is precisely spherical, while for rotating black holes, the event horizon is oblate. To a distant observer, a clock near a black hole would appear to tick more slowly than one further from the black hole.: 217 This effect, known as gravitational time dilation, would also cause an object falling into a black hole to appear to slow as it approached the event horizon, never quite reaching the horizon from the perspective of an outside observer.: 218 All processes on this object would appear to slow down, and any light emitted by the object would appear redder and dimmer, an effect known as gravitational redshift. An object falling from half a Schwarzschild radius above the event horizon would fade away until it could no longer be seen, disappearing from view within one hundredth of a second. It would also appear to flatten onto the black hole, joining all other material that had ever fallen into the hole. On the other hand, an observer falling into a black hole would not notice any of these effects as they cross the event horizon. Their own clocks appear to them to tick normally, and they cross the event horizon after a finite time without noting any singular behaviour. In general relativity, it is impossible to determine the location of the event horizon from local observations, due to Einstein's equivalence principle.: 222 Black holes that are rotating and/or charged have an inner horizon, often called the Cauchy horizon, inside the black hole. The inner horizon is divided into two segments: an ingoing section and an outgoing section.
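The horizon-radius and mean-density scalings described above can be checked with a short calculation. This is a sketch (not from the article): it treats the "average density" as mass divided by the Euclidean volume of a sphere of Schwarzschild radius, the same rough sense used in the text, and the masses chosen are illustrative.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass):
    """r_s = 2GM/c^2, the horizon radius of a nonspinning, uncharged black hole."""
    return 2 * G * mass / c**2

def mean_density(mass):
    """Mass divided by the volume of a sphere of radius r_s; scales as 1/M^2."""
    r = schwarzschild_radius(mass)
    return mass / (4.0 / 3.0 * math.pi * r**3)

for label, mass in [("1 M_sun", M_SUN), ("1e8 M_sun", 1e8 * M_SUN)]:
    print(f"{label}: r_s ~ {schwarzschild_radius(mass):.3g} m, "
          f"mean density ~ {mean_density(mass):.3g} kg/m^3")
```

For 10^8 solar masses this comes out near 10^3 kg/m^3, of the order of the density of water as stated above, while a one-solar-mass black hole is denser by roughly sixteen orders of magnitude.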
At the ingoing section of the Cauchy horizon, radiation and matter that fall into the black hole would build up at the horizon, causing the curvature of spacetime to go to infinity. This would cause an observer falling in to experience tidal forces. This phenomenon is often called mass inflation, since it is associated with a parameter describing the black hole's internal mass growing exponentially, and the buildup of tidal forces is called the mass-inflation singularity or Cauchy horizon singularity. Some physicists have argued that in realistic black holes, accretion and Hawking radiation would stop mass inflation from occurring. At the outgoing section of the inner horizon, infalling radiation would backscatter off the black hole's spacetime curvature and travel outward, building up at the outgoing Cauchy horizon. This would cause an infalling observer to experience a gravitational shock wave and tidal forces as the spacetime curvature at the horizon grew to infinity. This buildup of tidal forces is called the shock singularity. Both of these singularities are weak, meaning that an object crossing them would only be deformed a finite amount by tidal forces, even though the spacetime curvature would still be infinite at the singularity. This is in contrast to a strong singularity, where an object hitting the singularity would be stretched and squeezed by an infinite amount. They are also null singularities, meaning that a photon could travel parallel to them without ever being intercepted. Ignoring quantum effects, every black hole has a singularity inside: points where the curvature of spacetime becomes infinite and geodesics terminate within a finite proper time.: 205 For a non-rotating black hole, this region takes the shape of a single point; for a rotating black hole it is smeared out to form a ring singularity that lies in the plane of rotation.: 264 In both cases, the singular region has zero volume. All of the mass of the black hole ends up in the singularity.: 252 Since the singularity has nonzero mass in an infinitely small space, it can be thought of as having infinite density. Observers falling into a Schwarzschild black hole (i.e., non-rotating and not charged) cannot avoid being carried into the singularity once they cross the event horizon. As they fall further into the black hole, they will be torn apart by the growing tidal forces in a process sometimes referred to as spaghettification or the noodle effect. Eventually, they will reach the singularity and be crushed into an infinitely small point.: 182 However, any perturbations, such as those caused by matter or radiation falling in, would cause space to oscillate chaotically near the singularity. Any matter falling in would experience intense tidal forces rapidly changing in direction, all while being compressed into an increasingly small volume. Alternative forms of general relativity, including those that add some quantum effects, can lead to regular, or nonsingular, black holes without singularities. For example, the fuzzball model, based on string theory, states that black holes are actually made up of quantum microstates and need not have a singularity or an event horizon. The theory of loop quantum gravity proposes that the curvature and density at the center of a black hole are large, but not infinite. Formation Black holes are formed by gravitational collapse of massive stars, either by direct collapse or during a supernova explosion in a process called fallback.
Black holes can result from the merger of two neutron stars or of a neutron star and a black hole. Other, more speculative mechanisms include primordial black holes created from density fluctuations in the early universe, the collapse of dark stars (hypothetical objects powered by the annihilation of dark matter), or collapse involving hypothetical self-interacting dark matter. Gravitational collapse occurs when an object's internal pressure is insufficient to resist the object's own gravity. At the end of a star's life, it will run out of hydrogen to fuse, and will start fusing more and more massive elements until it reaches iron. Since the fusion of elements heavier than iron would require more energy than it would release, nuclear fusion ceases. If the iron core of the star is too massive, the star will no longer be able to support itself and will undergo gravitational collapse. While most of the energy released during gravitational collapse is emitted very quickly, an outside observer does not actually see the end of this process. Even though the collapse takes a finite amount of time from the reference frame of infalling matter, a distant observer would see the infalling material slow and halt just above the event horizon, due to gravitational time dilation. Light from the collapsing material takes longer and longer to reach the observer, with the delay growing to infinity as the emitting material reaches the event horizon. Thus the external observer never sees the formation of the event horizon; instead, the collapsing material seems to become dimmer and increasingly red-shifted, eventually fading away. Observations of quasars at redshift {\displaystyle z\sim 7}, less than a billion years after the Big Bang, have led to investigations of other ways to form black holes. The accretion process that builds supermassive black holes has a limiting rate of mass accumulation, and a billion years is not enough time to reach quasar status. One suggestion is direct collapse of the nearly pure hydrogen gas (low-metallicity) clouds characteristic of the young universe, forming a supermassive star which collapses into a black hole. It has been suggested that seed black holes with typical masses of ~105 M☉ could have formed in this way, and could then grow to ~109 M☉. However, the very large amount of gas required for direct collapse is typically unstable to fragmentation into multiple stars. Thus another approach suggests massive star formation followed by collisions that seed massive black holes, which ultimately merge to create a quasar.: 85 A neutron star in a common envelope with an ordinary star can accrete sufficient material to collapse into a black hole, or two neutron stars can merge. These avenues for the formation of black holes are considered relatively rare. In the current epoch of the universe, the conditions needed to form black holes are rare and are mostly found only in stars. However, in the early universe, conditions may have allowed for black hole formation via other means. Fluctuations of spacetime soon after the Big Bang may have formed areas that were denser than their surroundings. Initially, these regions would not have been compact enough to form a black hole, but eventually the curvature of spacetime in these regions could become large enough to cause them to collapse into black holes. Different models for the early universe vary widely in their predictions of the scale of these fluctuations.
Various models predict the creation of primordial black holes ranging from a Planck mass (~2.2×10−8 kg) to hundreds of thousands of solar masses. Primordial black holes with masses less than 1015 g would have evaporated by now due to Hawking radiation. Despite the early universe being extremely dense, it did not re-collapse into a black hole during the Big Bang, since the universe was expanding rapidly and did not have the gravitational differential necessary for black hole formation. Models for the gravitational collapse of objects of relatively constant size, such as stars, do not necessarily apply in the same way to rapidly expanding space such as the Big Bang. In principle, black holes could be formed in high-energy particle collisions that achieve sufficient density, although no such events have been detected. These hypothetical micro black holes, which could form from collisions of cosmic rays with Earth's atmosphere or in particle accelerators like the Large Hadron Collider, would not be able to aggregate additional mass. Instead, they would evaporate in about 10−25 seconds, posing no threat to the Earth. Evolution Black holes can also merge with other objects such as stars or even other black holes. This is thought to have been important, especially in the early growth of supermassive black holes, which could have formed from the aggregation of many smaller objects. The process has also been proposed as the origin of some intermediate-mass black holes. Mergers of supermassive black holes may take a long time: as the two supermassive black holes in a binary approach each other, most nearby stars are ejected, leaving little matter for the black holes to interact with gravitationally that would allow them to get closer to each other. This phenomenon has been called the final parsec problem, as the distance at which this happens is usually around one parsec. When a black hole accretes matter, the gas in the inner accretion disk orbits at very high speeds because of its proximity to the black hole. The resulting friction heats the inner disk to temperatures at which it emits vast amounts of electromagnetic radiation (mainly X-rays) detectable by telescopes. By the time the matter of the disk reaches the ISCO, between 5.7% and 42% of its mass will have been converted to energy, depending on the black hole's spin. About 90% of this energy is released within about 20 black hole radii. In many cases, accretion disks are accompanied by relativistic jets that are emitted along the black hole's poles, which carry away much of the energy. The mechanism for the creation of these jets is currently not well understood, in part due to insufficient data. Many of the universe's most energetic phenomena have been attributed to the accretion of matter onto black holes. Active galactic nuclei and quasars are believed to be the accretion disks of supermassive black holes. X-ray binaries are generally accepted to be binary systems in which one of the two objects is a compact object accreting matter from its companion. Ultraluminous X-ray sources may be the accretion disks of intermediate-mass black holes. At a certain rate of accretion, the outward radiation pressure will become as strong as the inward gravitational force, and the black hole should be unable to accrete any faster. This limit is called the Eddington limit. However, many black holes accrete beyond this rate due to their non-spherical geometry or instabilities in the accretion disk.
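The Eddington limit just described is not given a formula in the text, but a standard expression exists for the simplest case of spherical accretion of ionized hydrogen with Thomson scattering as the opacity source. The Python sketch below uses that standard expression, together with an assumed radiative efficiency, to estimate both the limiting luminosity and how long Eddington-limited growth would take; all numerical inputs are illustrative assumptions, not values from the article.

```python
import math

# Standard Eddington-limit expressions (assumed here, not quoted from the article):
# L_Edd = 4*pi*G*M*m_p*c / sigma_T, and the corresponding e-folding time for
# black-hole growth at that limit with radiative efficiency eps.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_P = 1.673e-27        # proton mass, kg
SIGMA_T = 6.652e-29    # Thomson scattering cross-section, m^2
M_SUN = 1.989e30       # solar mass, kg
YEAR = 3.156e7         # seconds per year

def eddington_luminosity(mass):
    """Luminosity at which radiation pressure on the infalling gas balances gravity."""
    return 4 * math.pi * G * mass * M_P * c / SIGMA_T

def efolding_time(efficiency=0.1):
    """e-folding time of the black hole mass for Eddington-limited accretion."""
    return (efficiency / (1 - efficiency)) * SIGMA_T * c / (4 * math.pi * G * M_P)

print(f"L_Edd for 1 M_sun: {eddington_luminosity(M_SUN):.2g} W")   # roughly 1e31 W

# Assumed example: time for a 10 M_sun seed to grow to 1e9 M_sun at the Eddington limit
tau = efolding_time(0.1)
t_grow = tau * math.log(1e9 / 10)
print(f"e-folding time ~ {tau / YEAR / 1e6:.0f} Myr; growth time ~ {t_grow / YEAR / 1e9:.1f} Gyr")
```

Under these assumptions the growth time from a stellar-mass seed to a quasar-scale black hole is of order a billion years, which is one way of seeing why the high-redshift quasars mentioned in the Formation section motivate more massive seeds or super-Eddington accretion.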
Accretion beyond the limit is called Super-Eddington accretion and may have been commonplace in the early universe. Stars have been observed to get torn apart by tidal forces in the immediate vicinity of supermassive black holes in galaxy nuclei, in what is known as a tidal disruption event (TDE). Some of the material from the disrupted star forms an accretion disk around the black hole, which emits observable electromagnetic radiation. The correlation between the masses of supermassive black holes in the centres of galaxies with the velocity dispersion and mass of stars in their host bulges suggests that the formation of galaxies and the formation of their central black holes are related. Black hole winds from rapid accretion, particularly when the galaxy itself is still accreting matter, can compress gas nearby, accelerating star formation. However, if the winds become too strong, the black hole may blow nearly all of the gas out of the galaxy, quenching star formation. Black hole jets may also energize nearby cavities of plasma and eject low-entropy gas from out of the galactic core, causing gas in galactic centers to be hotter than expected. If Hawking's theory of black hole radiation is correct, then black holes are expected to shrink and evaporate over time as they lose mass by the emission of photons and other particles. The temperature of this thermal spectrum (Hawking temperature) is proportional to the surface gravity of the black hole, which is inversely proportional to the mass. Hence, large black holes emit less radiation than small black holes.: Ch. 9.6 A stellar black hole of 1 M☉ has a Hawking temperature of 62 nanokelvins. This is far less than the 2.7 K temperature of the cosmic microwave background radiation. Stellar-mass or larger black holes receive more mass from the cosmic microwave background than they emit through Hawking radiation and thus will grow instead of shrinking. To have a Hawking temperature larger than 2.7 K (and be able to evaporate), a black hole would need a mass less than the Moon. Such a black hole would have a diameter of less than a tenth of a millimetre. The Hawking radiation for an astrophysical black hole is predicted to be very weak and would thus be exceedingly difficult to detect from Earth. A possible exception is the burst of gamma rays emitted in the last stage of the evaporation of primordial black holes. Searches for such flashes have proven unsuccessful and provide stringent limits on the possibility of existence of low mass primordial black holes, with modern research predicting that primordial black holes must make up less than a fraction of 10−7 of the universe's total mass. NASA's Fermi Gamma-ray Space Telescope, launched in 2008, has searched for these flashes, but has not yet found any. The properties of a black hole are constrained and interrelated by the theories that predict these properties. When based on general relativity, these relationships are called the laws of black hole mechanics. For a black hole that is not still forming or accreting matter, the zeroth law of black hole mechanics states the black hole's surface gravity is constant across the event horizon. The first law relates changes in the black hole's surface area, angular momentum, and charge to changes in its energy. The second law says the surface area of a black hole never decreases on its own. Finally, the third law says that the surface gravity of a black hole is never zero. These laws are mathematical analogs of the laws of thermodynamics. 
The laws of black hole mechanics and the laws of thermodynamics are not equivalent, however, because, according to general relativity without quantum mechanics, a black hole can never emit radiation, and thus its temperature must always be zero. Quantum mechanics predicts that a black hole will continuously emit thermal Hawking radiation, and therefore must always have a nonzero temperature. It also predicts that all black holes have entropy which scales with their surface area. When quantum mechanics is accounted for, the laws of black hole mechanics become equivalent to the classical laws of thermodynamics. However, these conclusions are derived without a complete theory of quantum gravity, although many potential theories do predict that black holes have entropy and temperature. Thus, the true quantum nature of black hole thermodynamics continues to be debated. Observational evidence Millions of black holes with around 30 solar masses derived from stellar collapse are expected to exist in the Milky Way. Even a dwarf galaxy like Draco should have hundreds. Only a few of these have been detected. By nature, black holes do not themselves emit any electromagnetic radiation other than the hypothetical Hawking radiation, so astrophysicists searching for black holes must generally rely on indirect observations. The defining characteristic of a black hole is its event horizon. The horizon itself cannot be imaged, so all other possible explanations for these indirect observations must be considered and eliminated before concluding that a black hole has been observed. The Event Horizon Telescope (EHT) is a global system of radio telescopes capable of directly observing a black hole's shadow. The angular resolution of a telescope is determined by its aperture and the wavelengths it is observing. Because the angular diameters of Sagittarius A* and Messier 87* in the sky are very small, a single telescope would need to be about the size of the Earth to clearly distinguish their horizons using radio wavelengths. By combining data from several different radio telescopes around the world, the Event Horizon Telescope creates an effective aperture with the diameter of the Earth (see the illustrative estimate below). The EHT team used imaging algorithms to compute the most probable image from the data in its observations of Sagittarius A* and M87*. Gravitational-wave interferometry can be used to detect merging black holes and other compact objects. In this method, a laser beam is split and sent down two long tunnel arms. The laser beams reflect off mirrors in the tunnels and converge at the intersection of the arms, cancelling each other out. However, when a gravitational wave passes, it warps spacetime, changing the lengths of the arms themselves. Since each laser beam is now travelling a slightly different distance, they do not cancel out and instead produce a recognizable signal. Analysis of the signal can give scientists information about what caused the gravitational waves. Since gravitational waves are very weak, gravitational-wave observatories such as LIGO must have arms several kilometers long and carefully control for terrestrial noise in order to detect them. Since the first detection, announced in 2016, multiple gravitational-wave signals from black holes have been detected and analyzed. The proper motions of stars near the centre of the Milky Way provide strong observational evidence that these stars are orbiting a supermassive black hole. Since 1995, astronomers have tracked the motions of 90 stars orbiting an invisible object coincident with the radio source Sagittarius A*.
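To make the aperture argument above concrete, the diffraction limit θ ≈ λ/D can be sketched numerically. The 1.3 mm observing wavelength and the approximate shadow sizes mentioned in the comments are commonly cited values assumed here for illustration rather than figures taken from this text.

```python
# Rough diffraction-limit sketch (illustrative): angular resolution ~ lambda / D,
# comparing a single 100 m dish with an Earth-sized interferometric aperture.
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600    # ~206265 arcseconds per radian
wavelength = 1.3e-3                      # assumed EHT observing band, metres

def resolution_uas(aperture_m):
    """Diffraction-limited angular resolution in microarcseconds."""
    return wavelength / aperture_m * ARCSEC_PER_RAD * 1e6

print(f"100 m single dish : {resolution_uas(100):.0f} microarcseconds")     # far too coarse
print(f"Earth-sized array : {resolution_uas(1.27e7):.0f} microarcseconds")  # ~21
# The shadows of M87* (~40 microarcseconds) and Sagittarius A* (~50) are only a
# few times larger than the Earth-sized figure, which is why a planet-wide
# interferometer is needed to resolve them.
```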
In 1998, by fitting the motions of these stars around Sagittarius A* to Keplerian orbits, the astronomers were able to infer that a 2.6×10⁶ M☉ object must be contained within a radius of 0.02 light-years. Since then, one of the stars, called S2, has completed a full orbit. From the orbital data, astronomers were able to refine the calculations of the mass of Sagittarius A* to 4.3×10⁶ M☉, with a radius of less than 0.002 light-years. This upper limit on the radius is larger than the Schwarzschild radius for the estimated mass, so the combination does not prove Sagittarius A* is a black hole (a comparison of the two scales is sketched below). Nevertheless, these observations strongly suggest that the central object is a supermassive black hole, as there are no other plausible scenarios for confining so much invisible mass into such a small volume. Additionally, there is some observational evidence that this object might possess an event horizon, a feature unique to black holes. The Event Horizon Telescope image of Sagittarius A*, released in 2022, provided further confirmation that it is indeed a black hole. X-ray binaries are binary systems that emit a majority of their radiation in the X-ray part of the electromagnetic spectrum. These X-ray emissions result when a compact object accretes matter from an ordinary star. The presence of an ordinary star in such a system provides an opportunity for studying the central object and determining whether it might be a black hole. By measuring the orbital period of the binary, the distance to the binary from Earth, and the mass of the companion star, scientists can estimate the mass of the compact object. The Tolman-Oppenheimer-Volkoff limit (TOV limit) dictates the maximum mass a nonrotating neutron star can have, and is estimated to be about two solar masses. While a rotating neutron star can be slightly more massive, if the compact object is much more massive than the TOV limit, it cannot be a neutron star and is generally expected to be a black hole. The first strong candidate for a black hole, Cygnus X-1, was discovered in this way by Charles Thomas Bolton, Louise Webster, and Paul Murdin in 1972. Observations of the rotational broadening of the optical star, reported in 1986, led to a compact-object mass estimate of 16 solar masses, with 7 solar masses as the lower bound. In 2011, this estimate was updated to 14.1±1.0 M☉ for the black hole and 19.2±1.9 M☉ for the optical stellar companion. X-ray binaries can be categorized as either low-mass or high-mass; this classification is based on the mass of the companion star, not the compact object itself. In a class of X-ray binaries called soft X-ray transients, the companion star is of relatively low mass, allowing for more accurate estimates of the black hole mass. These systems actively emit X-rays for only several months once every 10–50 years. During the period of low X-ray emission, called quiescence, the accretion disk is extremely faint, allowing detailed observation of the companion star. Numerous black hole candidates have been measured by this method. Black holes are also sometimes found in binaries with other compact objects, such as white dwarfs, neutron stars, and other black holes. The centre of nearly every galaxy contains a supermassive black hole. The close observational correlation between the mass of this hole and the velocity dispersion of the host galaxy's bulge, known as the M–sigma relation, strongly suggests a connection between the formation of the black hole and that of the galaxy itself.
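As a back-of-envelope illustration (not taken from the source) of why the orbital constraint by itself falls short of proof, the Schwarzschild radius for the quoted 4.3×10⁶ M☉ can be compared with the 0.002 light-year upper limit on the size of Sagittarius A*; the constants below are standard values supplied here.

```python
# Compare the observational size limit on Sagittarius A* with the
# Schwarzschild radius r_s = 2*G*M/c^2 for the measured mass (illustrative).
G, c       = 6.674e-11, 2.998e8   # SI units
M_sun      = 1.989e30             # kg
LIGHT_YEAR = 9.461e15             # metres

mass  = 4.3e6 * M_sun
r_s   = 2 * G * mass / c**2       # Schwarzschild radius for that mass
r_obs = 0.002 * LIGHT_YEAR        # upper limit on the size from the stellar orbits

print(f"Schwarzschild radius: {r_s:.2e} m")
print(f"Observed size limit : {r_obs:.2e} m")
print(f"Ratio               : {r_obs / r_s:.0f}x larger")
# The mass must sit inside a region roughly 1,500 times the Schwarzschild radius:
# extraordinarily compact, but not, by itself, demonstrably inside a horizon.
```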
Astronomers use the term active galaxy to describe galaxies with unusual characteristics, such as unusual spectral line emission and very strong radio emission. Theoretical and observational studies have shown that the high levels of activity in the centers of these galaxies, regions called active galactic nuclei (AGN), may be explained by accretion onto supermassive black holes. These AGN consist of a central black hole that may be millions or billions of times more massive than the Sun, a disk of interstellar gas and dust called an accretion disk, and two jets perpendicular to the accretion disk. Although supermassive black holes are expected to be found in most AGN, only some galaxies' nuclei have been studied more carefully in attempts to both identify and measure the actual masses of the central supermassive black hole candidates. Some of the most notable galaxies with supermassive black hole candidates include the Andromeda Galaxy, Messier 32, Messier 87, the Sombrero Galaxy, and the Milky Way itself. Another way black holes can be detected is through observation of effects caused by their strong gravitational field. One such effect is gravitational lensing: the deformation of spacetime around a massive object causes light rays to be deflected, making objects behind it appear distorted. When the lensing object is a black hole, this effect can be strong enough to create multiple images of a star or other luminous source. However, the distance between the lensed images may be too small for contemporary telescopes to resolve; this phenomenon is called microlensing. Instead of seeing two images of a lensed star, astronomers see the star brighten slightly as the black hole moves towards the line of sight between the star and Earth and then return to its normal luminosity as the black hole moves away (the shape of this brightening is sketched below). The turn of the millennium saw the first three candidate detections of black holes in this way, and in January 2022, astronomers reported the first confirmed detection of a microlensing event from an isolated black hole. This was also the first determination of an isolated black hole's mass, 7.1±1.3 M☉. Alternatives While there is a strong case for supermassive black holes, the model for stellar-mass black holes assumes the existence of an upper limit for the mass of a neutron star: objects observed to have more mass are assumed to be black holes. However, the properties of extremely dense matter are poorly understood. New exotic phases of matter could allow other kinds of massive objects. Quark stars would be made up of quark matter and supported by quark degeneracy pressure, a form of degeneracy pressure even stronger than neutron degeneracy pressure. This would halt gravitational collapse at a higher mass than for a neutron star. Even stronger stars called electroweak stars would convert quarks in their cores into leptons, providing additional pressure to stop the star from collapsing. If, as some extensions of the Standard Model posit, quarks and leptons are made up of even smaller fundamental particles called preons, a very compact star could be supported by preon degeneracy pressure.
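The brightening-and-fading behaviour of a microlensed star, described above, is usually modelled with the standard point-lens magnification curve (the Paczyński formula). The sketch below evaluates that curve for a made-up impact parameter and Einstein-radius crossing time; the numbers are purely illustrative, and the formula is the generic point-lens result rather than anything stated in this text.

```python
# Sketch of the standard point-lens (Paczynski) magnification curve,
# illustrating the smooth brightening and fading of a microlensed star.
import math

def magnification(u):
    """Point-lens magnification for source-lens separation u (in Einstein radii)."""
    return (u**2 + 2) / (u * math.sqrt(u**2 + 4))

def light_curve(t_days, u0=0.3, t_E=60.0, t0=0.0):
    """Magnification vs. time for impact parameter u0 and Einstein-crossing time t_E (days)."""
    return [magnification(math.hypot(u0, (t - t0) / t_E)) for t in t_days]

times = range(-120, 121, 20)
for t, a in zip(times, light_curve(times)):
    print(f"t = {t:+4d} d  magnification = {a:5.2f}")
# The source brightens smoothly as the lens approaches the line of sight
# (peaking here at ~3.4x at t = 0) and returns to its normal brightness afterwards.
```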
While none of these hypothetical exotic-star models (quark, electroweak, or preon stars) can explain all of the observations of stellar black hole candidates, a Q star (a hypothetical type of exotic compact star) is the only alternative which could significantly exceed the mass limit for neutron stars and thus provide an alternative for supermassive black holes. A few theoretical objects have been conjectured to match observations of astronomical black hole candidates identically or near-identically, but which function via a different mechanism. A dark energy star would convert infalling matter into vacuum energy; this vacuum energy would be much larger than the vacuum energy of outside space, exerting outward pressure and preventing a singularity from forming. A black star would be gravitationally collapsing slowly enough that quantum effects would keep it just on the cusp of fully collapsing into a black hole. A gravastar would consist of a very thin shell and a dark-energy interior providing outward pressure to stop the collapse into a black hole or the formation of a singularity; it could even have another gravastar inside, called a 'nestar'. Open questions According to the no-hair theorem, a black hole is defined by only three parameters: its mass, charge, and angular momentum. This seems to mean that all other information about the matter that went into forming the black hole is lost, as there is no way to determine anything about the black hole from outside other than those three parameters. When black holes were thought to persist forever, this information loss was not problematic, as the information can be thought of as existing inside the black hole. However, black holes slowly evaporate by emitting Hawking radiation. This radiation does not appear to carry any additional information about the matter that formed the black hole, meaning that this information is seemingly gone forever. This is called the black hole information paradox. Theoretical studies analyzing the paradox have led to both further paradoxes and new ideas about the intersection of quantum mechanics and general relativity. While there is no consensus on the resolution of the paradox, work on the problem is expected to be important for a theory of quantum gravity. Observations of faraway galaxies have found that ultraluminous quasars, powered by supermassive black holes, existed in the early universe as far back as redshift z ≥ 7. These black holes have been assumed to be the products of the gravitational collapse of large Population III stars. However, these stellar remnants were not massive enough to produce the quasars observed at early times without accreting beyond the Eddington limit, the theoretical maximum rate of black hole accretion. Physicists have suggested a variety of different mechanisms by which these supermassive black holes may have formed. It has been proposed that smaller black holes may have also undergone mergers to produce the observed supermassive black holes. It is also possible that they were seeded by direct-collapse black holes, in which a large cloud of hot gas avoids the fragmentation that would lead to multiple stars, due to low angular momentum or heating from a nearby galaxy. Given the right circumstances, a single supermassive star forms and collapses directly into a black hole without undergoing typical stellar evolution. Additionally, these supermassive black holes in the early universe may be high-mass primordial black holes, which could have accreted further matter in the centers of galaxies.
Finally, certain mechanisms allow black holes to grow faster than the theoretical Eddington limit, such as dense gas in the accretion disk limiting the outward radiation pressure that would otherwise prevent the black hole from accreting faster. However, the formation of bipolar jets prevents super-Eddington rates. In fiction Black holes have been portrayed in science fiction in a variety of ways. Even before the advent of the term itself, objects with characteristics of black holes appeared in stories such as the 1928 novel The Skylark of Space with its "black Sun" and the "hole in space" in the 1935 short story Starship Invincible. As black holes grew to public recognition in the 1960s and 1970s, they began to be featured in films as well as novels, such as Disney's The Black Hole. Black holes have also been used in works of the 21st century, such as Christopher Nolan's science fiction epic Interstellar. Authors and screenwriters have exploited the relativistic effects of black holes, particularly gravitational time dilation. For example, Interstellar features a planet orbiting close to a black hole, with a time dilation factor of over 60,000:1, while the 1977 novel Gateway depicts a spaceship approaching but never crossing the event horizon of a black hole from the perspective of an outside observer due to time dilation effects. Black holes have also been appropriated as wormholes or other methods of faster-than-light travel, such as in the 1974 novel The Forever War, where a network of black holes is used for interstellar travel. Additionally, black holes can feature as hazards to spacefarers and planets: a black hole threatens a deep-space outpost in the 1978 short story The Black Hole Passes, and a binary black hole dangerously alters the orbit of a planet in the 2018 Netflix reboot of Lost in Space. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/The_Sunday_Times] | [TOKENS: 4628] |
Contents The Sunday Times The Sunday Times is a British Sunday newspaper whose circulation makes it the largest in Britain's quality press market category. It was founded in 1821 as The New Observer. It is published by Times Newspapers Ltd, a subsidiary of News UK (formerly News International), which is owned by News Corp. Times Newspapers also publishes The Times. The two papers, founded separately and independently, have been under the same ownership since 1966. They were bought by News International in 1981. In March 2020, The Sunday Times had a circulation of 647,622, exceeding that of its main rivals, The Sunday Telegraph and The Observer, combined. While some other national newspapers moved to a tabloid format in the early 2000s, The Sunday Times retained the larger broadsheet format and has said that it intends to continue to do so. As of December 2019, it sold 75% more copies than its sister paper, The Times, which is published from Monday to Saturday. The paper publishes The Sunday Times Rich List and The Sunday Times Fast Track 100. History The paper began publication on 18 February 1821 as The New Observer, but from 21 April its title was changed to the Independent Observer. Its founder, Henry White, chose the name apparently in an attempt to take advantage of the success of The Observer, which had been founded in 1791, although there was no connection between the two papers. On 20 October 1822 it was reborn as The Sunday Times, although it had no relationship with The Times. In January 1823, White sold the paper to Daniel Whittle Harvey, a radical politician.[citation needed] Under its new owner, The Sunday Times notched up several firsts. A wood engraving it published of the coronation of Queen Victoria in 1838 was the largest illustration to have appeared in a British newspaper. In 1841, it became one of the first papers to serialise a novel: William Harrison Ainsworth's Old St Paul's. The paper was bought in 1887 by Alice Anne Cornwell, who had made a fortune in mining in Australia and by floating the Midas Mine Company on the London Stock Exchange. She bought the paper to promote her new company, The British and Australasian Mining Investment Company, and as a gift to her lover Phil Robinson. Robinson was installed as editor and the two were later married in 1894. In 1893 Cornwell sold the paper to Frederick Beer, who already owned The Observer. Beer appointed his wife, Rachel Sassoon Beer, as editor. She was already editor of The Observer – the first woman to run a national newspaper – and continued to edit both titles until 1901. There was a further change of ownership in 1903, and then in 1915 the paper was bought by William Berry and his brother, Gomer Berry, later ennobled as Lord Camrose and Viscount Kemsley respectively. Under their ownership, The Sunday Times continued its reputation for innovation: on 23 November 1930, it became the first Sunday newspaper to publish a 40-page issue and on 21 January 1940, news replaced advertising on the front page. In 1943, the Kemsley Newspapers Group was established, with The Sunday Times becoming its flagship paper. At this time, Kemsley was the largest newspaper group in Britain.[citation needed] On 12 November 1945, Ian Fleming, who later created James Bond, joined the paper as foreign manager (foreign editor) and special writer. The following month, circulation reached 500,000. On 28 September 1958, the paper launched a separate Review section, becoming the first newspaper to publish two sections regularly. 
The Kemsley group was bought in 1959 by Lord Thomson, and in October 1960 circulation reached one million for the first time. In another first, on 4 February 1962 the editor, Denis Hamilton, launched The Sunday Times Magazine. (At the insistence of newsagents, worried at the impact on sales of standalone magazines, it was initially called the "colour section" and did not take the name The Sunday Times Magazine until 9 August 1964.) The cover picture of the first issue was of Jean Shrimpton wearing a Mary Quant outfit and was taken by David Bailey. The magazine got off to a slow start, but the advertising soon began to pick up, and, over time, other newspapers launched magazines of their own.[citation needed] English writer Jilly Cooper got her first break in journalism, writing a column on young married life from 1968, leading to her first book How to Stay Married and a 2020 collection, Between the Covers. In 1963, the Insight investigative team was established under Clive Irving. The "Business" section was launched on 27 September 1964, making The Sunday Times Britain's first regular three-section newspaper. In September 1966, Thomson bought The Times, to form Times Newspapers Ltd (TNL). It was the first time The Sunday Times and The Times had been brought under the same ownership.[citation needed] Harold Evans, editor from 1967 until 1981, established The Sunday Times as a leading campaigning and investigative newspaper. On 19 May 1968, the paper published its first major campaigning report on the drug thalidomide, which had been reported by the Australian doctor William McBride in The Lancet in 1961 as being associated with birth defects, and been quickly withdrawn. The newspaper published a four-page Insight investigation, titled "The Thalidomide File", in the "Weekly Review" section. On 17 November 1972, the Queen's Bench Divisional Court issued an injunction to prevent The Sunday Times from publishing further articles, as it was feared that the paper's campaign might affect ongoing lawsuits over the ensuing scandal. The newspaper appealed to the European Court of Human Rights, which found that the injunction violated the publisher's right to freedom of expression, noting that the articles were moderate and balanced and thus unlikely to disrupt proceedings. A compensation settlement for the UK victims was eventually reached with Distillers Company (now part of Diageo), which had distributed the drug in the UK.[citation needed] TNL was plagued by a series of industrial disputes at its plant at Gray's Inn Road in London, with the print unions resisting attempts to replace the old-fashioned hot-metal and labour-intensive Linotype method with technology that would allow the papers to be composed digitally. Thomson offered to invest millions of pounds to buy out obstructive practices and overmanning, but the unions rejected every proposal. As a result, publication of The Sunday Times and other titles in the group was suspended in November 1978. It did not resume until November 1979.[citation needed] Although journalists at The Times had been on full pay during the suspension, they went on strike demanding more money after production was resumed. Kenneth Thomson, the head of the company, felt betrayed and decided to sell. 
Evans tried to organise a management buyout of The Sunday Times, but Thomson decided instead to sell to Rupert Murdoch, who he thought had a better chance of dealing with the trade unions.[citation needed] Rupert Murdoch's News International acquired the group in February 1981. Murdoch, an Australian who in 1985 became a naturalised American citizen, already owned The Sun and the News of the World, but the Conservative government decided not to refer the deal to the Monopolies and Mergers Commission, citing a clause in the Fair Trading Act that exempted uneconomic businesses from referral. The Thomson Corporation had threatened to close the papers down if they were not taken over by someone else within an allotted time, and it was feared that any legal delay to Murdoch's takeover might lead to the two titles' demise. In return, Murdoch provided legally binding guarantees to preserve the titles' editorial independence.[citation needed] Evans was appointed editor of The Times in February 1981 and was replaced at The Sunday Times by Frank Giles. In 1983, the newspaper bought the serialisation rights to publish the faked Hitler Diaries, thinking them to be genuine after they were authenticated by the newspaper's own independent director, Hugh Trevor-Roper, the historian and author of The Last Days of Hitler. Under Andrew Neil, editor from 1983 until 1994, The Sunday Times took a strongly Thatcherite slant that contrasted with the traditional paternalistic conservatism expounded by Peregrine Worsthorne at the rival Sunday Telegraph. It also built on its reputation for investigations. Its scoops included the revelation in 1986 that Israel had manufactured more than 100 nuclear warheads and the publication in 1992 of extracts from Andrew Morton's book, Diana: Her True Story in Her Own Words. In January 1986, after the announcement of a strike by print workers, production of The Sunday Times, along with other newspapers in the group, was shifted to a new plant in Wapping, and the strikers were dismissed. The plant, which allowed journalists to input copy directly, was activated with the help of the Electrical, Electronic, Telecommunications and Plumbing Union (EETPU). The print unions posted pickets and organised demonstrations outside the new plant to try to dissuade journalists and others from working there, in what became known as the Wapping dispute. The demonstrations sometimes turned violent. The protest ended in failure in February 1987.[citation needed] During Neil's editorship, a number of new sections were added: the annual "The Sunday Times Rich List" and the "Funday Times", in 1989 (the latter stopped appearing in print and was relaunched as a standalone website in March 2006, but was later closed); "Style & Travel", "News Review" and "Arts" in 1990; and "Culture" in 1992. In September 1994, "Style" and "Travel" became two separate sections.[citation needed] During Neil's time as editor, The Sunday Times backed a campaign to prove that HIV was not a cause of AIDS. In 1990, The Sunday Times serialised a book by an American conservative who rejected the scientific consensus on the causes of AIDS and argued that AIDS could not spread to heterosexuals.
Articles and editorials in The Sunday Times cast doubt on the scientific consensus, described HIV as a "politically correct virus" about which there was a "conspiracy of silence", disputed that AIDS was spreading in Africa, claimed that tests for HIV were invalid, described the HIV/AIDS treatment drug AZT as harmful, and characterised the WHO as an "Empire-building AIDS [organisation]". The pseudoscientific coverage of HIV/AIDS in The Sunday Times led the scientific journal Nature to monitor the newspaper's coverage and to publish letters rebutting Sunday Times articles which The Sunday Times refused to publish. In response to this, The Sunday Times published an article headlined "AIDS – why we won't be silenced", which claimed that Nature engaged in censorship and "sinister intent". In his 1996 book, Full Disclosure, Neil wrote that the HIV/AIDS denialism "deserved publication to encourage debate". That same year, he wrote that The Sunday Times had been vindicated in its coverage, "The Sunday Times was one of a handful of newspapers, perhaps the most prominent, which argued that heterosexual Aids was a myth. The figures are now in and this newspaper stands totally vindicated ... The history of Aids is one of the great scandals of our time. I do not blame doctors and the Aids lobby for warning that everybody might be at risk in the early days, when ignorance was rife and reliable evidence scant." He criticised the "AIDS establishment" and said "Aids had become an industry, a job-creation scheme for the caring classes." John Witherow, who became editor at the end of 1994 (after several months as acting editor), continued the newspaper's expansion. A website was launched in 1996 and new print sections added: "Home" in 2001, and "Driving" in 2002, which in 2006 was renamed "InGear". (It reverted to the name "Driving" from 7 October 2012, to coincide with the launch of a new standalone website, Sunday Times Driving.) Technology coverage was expanded in 2000 with the weekly colour magazine "Doors", and in 2003 "The Month", an editorial section presented as an interactive CD-ROM. Magazine partworks were regular additions, among them "1000 Makers of Music", published over six weeks in 1997.[citation needed] John Witherow oversaw a rise in circulation to 1.3 million and reconfirmed The Sunday Times's reputation for publishing hard-hitting news stories – such as the cash for questions scandal in 1994 and the cash for honours scandal in 2006, and revelations of corruption at FIFA in 2010. The newspaper's foreign coverage has been especially strong, and its reporters, Marie Colvin, Jon Swain, Hala Jaber, Mark Franchetti and Christina Lamb have dominated the Foreign Reporter of the Year category at the British Press Awards since 2000.[citation needed] Colvin, who worked for the paper from 1985, was killed in February 2012 by Syrian forces while covering the siege of Homs during that country's civil war. In common with other newspapers, The Sunday Times has been hit by a fall in circulation, which has declined from a peak of 1.3 million to just over 710,000.[when?] It has a number of digital-only subscribers, which numbered 99,017 by January 2019.[needs update] During January 2013, Martin Ivens became 'acting' editor of The Sunday Times in succession to John Witherow, who became the 'acting' editor of The Times at the same time. The independent directors rejected a permanent position for Ivens as editor to avoid any possible merger of The Sunday Times and daily Times titles. 
In 2019, after passing government scrutiny, The Sunday Times and The Times began to "share resources" in what was considered a partial merger, though retaining distinct editors. Election endorsements The paper endorsed the Conservative Party in the 2005, 2010, 2015, 2017 and 2019 UK general elections, before endorsing the Labour Party in the 2024 election. Online presence The online presences of The Sunday Times and The Times have been repeatedly combined and separated over the years. Prior to 2001, distinct websites thetimes.co.uk and thesundaytimes.co.uk existed. In 2001, these were combined into Times Online. In 2010, Times Online was replaced with distinct websites again for The Sunday Times and The Times. In 2016, The Times and The Sunday Times' websites were once again merged into one. In 2024, the domain name was changed from "thetimes.co.uk" to "thetimes.com". An iPad edition was launched in December 2010, and an Android version in August 2011. Since July 2012, the digital version of the paper has been available on Apple's Newsstand platform, allowing automated downloading of the news section. With over 500 MB of content every week, it is the biggest newspaper app in the world.[failed verification] The Sunday Times iPad app was named newspaper app of the year at the 2011 Newspaper Awards and has twice been ranked best newspaper or magazine app in the world by iMonitor. Various subscription packages exist, giving access to both the print and digital versions of the paper. On 2 October 2012, The Sunday Times launched Sunday Times Driving, a separate classified advertising site for premium vehicles that also includes editorial content from the newspaper as well as specially commissioned articles. It can be accessed without cost.[importance?] Related publications The Sunday Times Travel Magazine, a 164-page monthly sold separately from the newspaper, was Britain's best-selling travel magazine. Its first issue appeared in 2003, and it included news, features and insider guides. Notable stories Some of the more notable or controversial stories published in The Sunday Times include: Controversies In July 2011, The Sunday Times was implicated in the wider News International phone hacking scandal, which primarily involved the News of the World, a Murdoch tabloid newspaper published in the UK from 1843 to 2011. Former British prime minister Gordon Brown accused The Sunday Times of employing "known criminals" to impersonate him and obtain his private financial records. Brown's bank reported that an investigator employed by The Sunday Times repeatedly impersonated Brown to gain access to his bank account records. The Sunday Times vigorously denied these accusations and said that the story was in the public interest and that it had followed the Press Complaints Commission code on using subterfuge.[citation needed] Over two years in the early 1990s, The Sunday Times published a series of articles rejecting the role of HIV in causing AIDS, calling the African AIDS epidemic a myth. In response, the scientific journal Nature described the paper's coverage of HIV/AIDS as "seriously mistaken, and probably disastrous". Nature argued that the newspaper had "so consistently misrepresented the role of HIV in the causation of AIDS that Nature plans to monitor its future treatment of the issue." In January 2010, The Sunday Times published an article by Jonathan Leake, alleging that a figure in the IPCC Fourth Assessment Report was based on an "unsubstantiated claim".
The story attracted worldwide attention. However, a scientist quoted in the same article later stated that the newspaper story was wrong and that quotations from him had been used in a misleading way. Following an official complaint to the Press Complaints Commission, The Sunday Times retracted the story and apologised. In September 2012, Jonathan Leake published an article in The Sunday Times under the headline "Only 100 adult cod in North Sea". This figure was later shown by a BBC article to be wildly incorrect. The newspaper published a correction, apologising for an oversimplification in the headline, which had referred to a fall in the number of fully mature cod over the age of 13, thereby implying that this is the age at which cod begin to breed. In fact, as the newspaper subsequently pointed out, cod can start breeding between the ages of four and six, in which case there are many more mature cod in the North Sea. In 1992, the paper agreed to pay David Irving, an author widely criticised for Holocaust denial, the sum of £75,000 to authenticate the Goebbels diaries and edit them for serialisation. The deal was quickly cancelled after drawing strong international criticism.[citation needed] In January 2013, The Sunday Times published a Gerald Scarfe caricature depicting Israel's Prime Minister Benjamin Netanyahu cementing a wall with blood, with Palestinians trapped between the bricks. The cartoon sparked an outcry in Israel, compounded by the fact that its publication coincided with International Holocaust Remembrance Day, and was condemned by the pro-Israeli Anti-Defamation League. After Rupert Murdoch tweeted that he considered it a "grotesque, offensive cartoon" and that Scarfe had "never reflected the opinions of The Sunday Times", the newspaper issued an apology. Journalist Ian Burrell, writing in The Independent, described the apology as an "indication of the power of the Israel lobby in challenging critical media coverage of its politicians" and one that questions Rupert Murdoch's assertion that he does not "interfere in the editorial content of his papers". In July 2017, Kevin Myers wrote a column in The Sunday Times saying "I note that two of the best-paid women presenters in the BBC – Claudia Winkleman and Vanessa Feltz, with whose, no doubt, sterling work I am tragically unacquainted – are Jewish. Good for them". He continued: "Jews are not generally noted for their insistence on selling their talent for the lowest possible price, which is the most useful measure there is of inveterate, lost-with-all-hands stupidity. I wonder, who are their agents? If they're the same ones that negotiated the pay for the women on the lower scales, then maybe the latter have found their true value in the marketplace". After the column appeared, The Sunday Times fired Myers. The Campaign Against Antisemitism criticised The Sunday Times for allowing Myers to write the column despite his past comments about Jews.
The paper is heavily editionalised, with extensive Irish coverage of politics, general news, business, personal finance, sport, culture and lifestyle. The office employs 25 people. The paper also has a number of well-known freelance columnists including Brenda Power, Liam Fay, Matt Cooper, Damien Kiberd, Jill Kerby and Stephen Price. However, it ended its collaboration with Kevin Myers after he had published a controversial column. The Irish edition has had four editors since it was set up: Alan Ruddock from 1993 until 1996, Rory Godson from 1996 until 2000, Fiona McHugh from 2000 to 2005, and Frank Fitzgibbon from 2005 until 2020. John Burns has been acting editor of the Irish edition since 2020.[citation needed] For more than 20 years the paper has published a separate Scottish edition, which has been edited since January 2012 by Jason Allardyce. While most of the articles that run in the English edition appear in the Scottish edition, its staff also produces about a dozen Scottish news stories, including a front-page article, most weeks. The edition also contains a weekly "Scottish Focus" feature and Scottish commentary, and covers Scottish sport in addition to providing Scottish television schedules. The Scottish edition is the biggest-selling 'quality newspaper' in the market, outselling both Scotland on Sunday and the Sunday Herald.[citation needed] |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Azeris] | [TOKENS: 8554] |
Contents Azerbaijanis Azerbaijanis (/ˌæzərbaɪˈdʒæni, -ɑːni/; Azerbaijani: Azərbaycanlılar, آذربایجانلیلار), Azeris or Azerbaijani Turks (Azərbaycan türkləri, آذربایجان تۆرکلری) are a Turkic ethnic group living mainly in the Azerbaijan region of northwestern Iran and the Republic of Azerbaijan. They are the largest ethnic group in the Republic of Azerbaijan and the second-largest ethnic group in neighboring Iran and Georgia. They speak the Azerbaijani language, belonging to the Oghuz branch of the Turkic languages, and predominantly practice Shia Islam. Following the Russo-Persian Wars of 1813 and 1828, the territories of Qajar Iran in the Caucasus were ceded to the Russian Empire, and the treaties of Gulistan in 1813 and Turkmenchay in 1828 finalized the borders between Russia and Iran. After more than 80 years of being under the Russian Empire in the Caucasus, the Azerbaijan Democratic Republic was established in 1918, which defined the territory of the Republic of Azerbaijan. Etymology Azerbaijan is believed to be named after Atropates, a Persian satrap (governor) who ruled in Atropatene (modern Iranian Azerbaijan) circa 321 BC. The name Atropates is the Hellenistic form of Old Persian Aturpat, which means 'guardian of fire', itself a compound of ātūr 'fire' (later āður (آذر) in (early) New Persian, pronounced āzar today) + -pat, a suffix for -guardian, -lord, -master (-pat in early Middle Persian, -bod (بُد) in New Persian). The present-day name Azerbaijan is the Arabicized form of Āzarpāyegān (Persian: آذرپایگان), meaning 'the guardians of fire', which later became Azerbaijan (Persian: آذربایجان) due to the phonemic shifts from /p/ to /b/ and /g/ to /dʒ/, a result of the medieval Arabic influence that followed the Arab invasion of Iran and of the absence of the phonemes /p/ and /g/ in the Arabic language. The word Azarpāyegān itself is ultimately from Old Persian Āturpātakān (Persian: آتورپاتکان) meaning 'the land associated with (satrap) Aturpat' or 'the land of fire guardians' (-an, in its postvocalic form -kān, is a suffix for association or forming adverbs and plurals; e.g.: Gilan 'land associated with Gil people'). The modern ethnonym "Azerbaijani" or "Azeri" refers to the Turkic peoples of Iran's northwestern historic region of Azerbaijan (also known as Iranian Azerbaijan) and the Republic of Azerbaijan. They historically called themselves or were referred to by others as Muslims and/or Turks. They were also referred to as Ajam (meaning 'from Iran'), using the term incorrectly to denote their Shia belief rather than ethnic identity. When the Southern Caucasus became part of the Russian Empire in the nineteenth century, the Russian authorities, who traditionally referred to all Turkic people as Tatars, defined Tatars living in the Transcaucasus region as Caucasian Tatars or more rarely Aderbeijanskie (Адербейджанские) Tatars or even Persian Tatars in order to distinguish them from other Turkic groups and the Persian speakers of Iran. The Russian Brockhaus and Efron Encyclopedic Dictionary, written in the 1890s, also referred to Tatars in Azerbaijan as Aderbeijans (адербейджаны), but noted that the term had not been widely adopted. This ethnonym was also used by Joseph Deniker in 1900. In Azerbaijani-language publications, the expression "Azerbaijani nation", referring to those who were known as Tatars of the Caucasus, first appeared in the newspaper Kashkul in 1880.
During the early Soviet period, the term "Transcaucasian Tatars" was supplanted by "Azerbaijani Turks" and ultimately "Azerbaijanis." For some time afterwards, the term "Azerbaijanis" was then applied to all Turkic-speaking Muslims in Transcaucasia, from the Meskhetian Turks in southwestern Georgia to the Terekemes of southern Dagestan, as well as assimilated Tats and Talysh. The temporary designation of Meskhetian Turks as "Azerbaijanis" was most likely related to the existing administrative framework of the Transcaucasian SFSR, as the Azerbaijan SSR was one of its founding members. After the establishment of the Azerbaijan SSR, on the order of Soviet leader Stalin, the "name of the formal language" of the Azerbaijan SSR was also "changed from Turkic to Azerbaijani". Among post-Soviet Azerbaijanis, the term "Azeri" usually provokes a negative reaction. The Chechen and Ingush names for Azerbaijanis are Ghezloy/Ghoazloy (ГӀезлой/ГӀоазлой) and Ghazaroy/Ghazharey (ГӀажарой/ГӀажарей). The former goes back to the name of the Qizilbash, while the latter goes back to the name of the Qajars, having presumably emerged in the Chechen and Ingush languages during Qajar rule in Iran in the 18th–19th centuries. History Ancient residents of the area, known as Azaris, spoke Old Azeri, from the Iranian branch of the Indo-European languages. In the 11th century AD, with the Seljuq conquests, Oghuz Turkic tribes started moving across the Iranian Plateau into the Caucasus and Anatolia. The influx of the Oghuz and other Turkmen tribes was further accentuated by the Mongol invasion. These Turkmen tribes spread as smaller groups, a number of which settled down in the Caucasus and Iran, resulting in the Turkification of the local population. Over time they converted to Shia Islam and gradually absorbed Azerbaijan and Shirvan. Caucasian-speaking Albanian tribes are believed to be the earliest inhabitants of the region north of the Aras river, where the Republic of Azerbaijan is located. The region also saw Scythian settlement in the ninth century BC, following which the Medes came to dominate the area to the south of the Aras River. Alexander the Great defeated the Achaemenids in 330 BC, but allowed the Median satrap Atropates to remain in power. Following the decline of the Seleucids in Persia in 247 BC, an Armenian Kingdom exercised control over parts of Caucasian Albania. Caucasian Albanians established a kingdom in the first century BC and largely remained independent until the Persian Sassanids made their kingdom a vassal state in 252 AD. Caucasian Albania's ruler, King Urnayr, went to Armenia and then officially adopted Christianity as the state religion in the fourth century AD, and Albania remained a Christian state until the 8th century. Sassanid control ended with their defeat by the Rashidun Caliphate in 642 AD through the Muslim conquest of Persia. The Arabs made Caucasian Albania a vassal state after the Christian resistance, led by Prince Javanshir, surrendered in 667. Between the ninth and tenth centuries, Arab authors began to refer to the region between the Kura and Aras rivers as Arran. During this time, Arabs from Basra and Kufa came to Azerbaijan and seized lands that indigenous peoples had abandoned; the Arabs became a land-owning elite. Conversion to Islam was slow as local resistance persisted for centuries, and resentment grew as small groups of Arabs began migrating to cities such as Tabriz and Maraghah.
This influx sparked a major rebellion in Iranian Azerbaijan from 816 to 837, led by an Iranian Zoroastrian commoner named Babak Khorramdin. However, despite pockets of continued resistance, the majority of the inhabitants of Azerbaijan converted to Islam. Later, in the 10th and 11th centuries, parts of Azerbaijan were ruled by the Kurdish Shaddadid dynasty and the Arab Rawadids. In the middle of the eleventh century, the Seljuq dynasty overthrew Arab rule and established an empire that encompassed most of Southwest Asia. The Seljuk period marked the influx of Oghuz nomads into the region. The emerging dominance of the Turkic language was chronicled in epic poems or dastans, the oldest being the Book of Dede Korkut, which relate allegorical tales about the early Turks in the Caucasus and Asia Minor. Turkic dominion was interrupted by the Mongols in 1227, but it returned with the Timurids and then the Sunni Qara Qoyunlū (Black Sheep Turkmen) and Aq Qoyunlū (White Sheep Turkmen), who dominated Azerbaijan, large parts of Iran, eastern Anatolia, and other minor parts of West Asia, until the Shi'a Safavids took power in 1501. The Safavids, who rose from around Ardabil in Iranian Azerbaijan and lasted until 1722, established the foundations of the modern Iranian state. The Safavids, alongside their Ottoman archrivals, dominated the entire West Asian region and beyond for centuries. At its peak under Shah Abbas the Great, the Safavid state rivaled the Ottoman Empire in military strength. Noted for achievements in state-building, architecture, and the sciences, the Safavid state crumbled due to internal decay (mostly royal intrigues), ethnic minority uprisings and external pressure from the Russians and, eventually, the opportunistic Afghans, who would mark the end of the dynasty. The Safavids encouraged and spread Shi'a Islam, as well as the arts and culture, and Shah Abbas the Great created an intellectual atmosphere that according to some scholars was a new "golden age". He reformed the government and the military and responded to the needs of the common people. After the Safavid state disintegrated, it was followed by the conquests of Nader Shah Afshar, a Shia chieftain from Khorasan who reduced the power of the ghulat Shi'a and empowered a moderate form of Shi'ism; exceptionally noted for his military genius, he made Iran reach its greatest territorial extent since the Sassanid Empire. The brief reign of Karim Khan came next, followed by the Qajars, who ruled what is the present-day Azerbaijan Republic and Iran from 1779. Russia loomed as a threat to Persian and Turkish holdings in the Caucasus in this period. Although there had already been minor military conflicts in the 17th century, the Russo-Persian Wars officially began in the eighteenth century and ended in the early nineteenth century with the Treaty of Gulistan of 1813 and the Treaty of Turkmenchay of 1828, which ceded the Caucasian portion of Qajar Iran to the Russian Empire. While Azerbaijanis in Iran integrated into Iranian society, Azerbaijanis who lived in Aran were incorporated into the Russian Empire. Despite the Russian conquest, throughout the entire 19th century, preoccupation with Iranian culture, literature, and language remained widespread amongst Shia and Sunni intellectuals in the Russian-held cities of Baku, Ganja and Tiflis (Tbilisi, now Georgia).
In the formerly Iranian, now Russian-held eastern Caucasus, an Azerbaijani national identity emerged at the end of the 19th century. In 1891, the idea of recognizing oneself as an "Azerbaijani Turk" was first popularized amongst the Caucasus Tatars in the periodical Kashkül. The articles printed in Kaspiy and Kashkül in 1891 are typically credited as being the earliest expressions of a cultural Azerbaijani identity. Modernisation was slow to develop amongst the Tatars of the Russian Caucasus compared to the neighboring Armenians and Georgians. According to the 1897 Russian Empire census, less than five percent of the Tatars were able to read or write. The intellectual and newspaper editor Ali bey Huseynzade (1864–1940) led a campaign to 'Turkify, Islamise, modernise' the Caucasian Tatars, whereas Mammed Said Ordubadi (1872–1950), another journalist and activist, criticized superstition amongst Muslims. After the collapse of the Russian Empire during World War I, the short-lived Transcaucasian Democratic Federative Republic was declared, constituting what are the present-day republics of Azerbaijan, Georgia, and Armenia. This was followed by the March Days massacres, which took place between 30 March and 2 April 1918 in the city of Baku and adjacent areas of the Baku Governorate of the Russian Empire. When the republic dissolved in May 1918, the leading Musavat party adopted, for political reasons, the name "Azerbaijan" for the newly established Azerbaijan Democratic Republic, proclaimed on 27 May 1918, even though the name "Azerbaijan" had previously been used to refer to the adjacent region of contemporary northwestern Iran. The ADR was the first modern parliamentary republic in the Turkic and Muslim worlds. Among the important accomplishments of the Parliament was the extension of suffrage to women, making Azerbaijan the first Muslim nation to grant women equal political rights with men. Another important accomplishment of the ADR was the establishment of Baku State University, which was the first modern-type university founded in the Muslim East. By March 1920, it was obvious that Soviet Russia would attack Baku, which it badly needed. Vladimir Lenin said that the invasion was justified as Soviet Russia could not survive without Baku's oil. Independent Azerbaijan lasted only 23 months until the Bolshevik 11th Soviet Red Army invaded it, establishing the Azerbaijan SSR on 28 April 1920. Although the bulk of the newly formed Azerbaijani army was engaged in putting down an Armenian revolt that had just broken out in Karabakh, Azeris did not surrender their brief independence of 1918–20 quickly or easily. As many as 20,000 Azerbaijani soldiers died resisting what was effectively a Russian reconquest. The brief independence gained by the short-lived Azerbaijan Democratic Republic in 1918–1920 was followed by over 70 years of Soviet rule. Nevertheless, it was in the early Soviet period that the Azerbaijani national identity was forged. After the restoration of independence in October 1991, the Republic of Azerbaijan became embroiled in a war with neighboring Armenia over the Nagorno-Karabakh region. The First Nagorno-Karabakh War resulted in the displacement of approximately 725,000 Azerbaijanis and 300,000–500,000 Armenians from both Azerbaijan and Armenia. As a result of the 2020 Nagorno-Karabakh war, Azerbaijan took control of 5 cities, 4 towns, and 286 villages in the region.
According to the 2020 Nagorno-Karabakh ceasefire agreement, internally displaced persons and refugees are to return to the territory of Nagorno-Karabakh and adjacent areas under the supervision of the United Nations High Commissioner for Refugees. In Iran, Azerbaijanis such as Sattar Khan sought constitutional reform. The Persian Constitutional Revolution of 1906–11 shook the Qajar dynasty. A parliament (Majlis) was founded through the efforts of the constitutionalists, and pro-democracy newspapers appeared. The last Shah of the Qajar dynasty was soon removed in a military coup led by Reza Khan. In the quest to impose national homogeneity on a country where half of the population were ethnic minorities, Reza Shah banned in quick succession the use of the Azerbaijani language in schools, theatrical performances, religious ceremonies, and books. Upon the dethronement of Reza Shah in September 1941, Soviet forces took control of Iranian Azerbaijan and helped to set up the Azerbaijan People's Government, a client state under the leadership of Sayyid Jafar Pishevari backed by Soviet Azerbaijan. The Soviet military presence in Iranian Azerbaijan was mainly aimed at securing the Allied supply route during World War II. Concerned with the continued Soviet presence after World War II, the United States and Britain pressured the Soviets to withdraw by late 1946. Immediately thereafter, the Iranian government regained control of Iranian Azerbaijan. According to Professor Gary R. Hess, local Azerbaijanis favored Iranian rule, and the Soviets gave up Iranian Azerbaijan because the sentiment for autonomy there had been exaggerated and because oil was their top priority. Origins In many references, Azerbaijanis are designated as a Turkic people, while other sources describe their origin variously as "unclear", mainly Caucasian, mainly Iranian, mixed Caucasian Albanian and Turkish, or a mixture of Caucasian, Iranian, and Turkic elements. Russian historian and orientalist Vladimir Minorsky writes that largely Iranian and Caucasian populations became Turkic-speaking following the Oghuz occupation of the region, though the characteristic features of the local Turkic language, such as Persian intonations and disregard of vocalic harmony, were a remnant of the non-Turkic population. Historical research suggests that the Old Azeri language, belonging to the Northwestern branch of the Iranian languages and believed to have descended from the language of the Medes, gradually gained currency and was widely spoken in the region for many centuries. Some Azerbaijanis of the Republic of Azerbaijan are believed to be descended from the inhabitants of Caucasian Albania, an ancient country located in the eastern Caucasus region, and from various Iranian peoples who settled the region. Proponents of this view claim there is evidence that, due to repeated invasions and migrations, the aboriginal Caucasian population may have gradually been culturally and linguistically assimilated, first by Iranian peoples, such as the Persians, and later by the Oghuz Turks. Considerable information has been learned about the Caucasian Albanians, including their language, history, early conversion to Christianity, and relations with the Armenians and Georgians, under whose strong religious and cultural influence the Caucasian Albanians came in the following centuries. Turkification of the non-Turkic population derives from the Turkic settlements in the area now known as Azerbaijan, which began and accelerated during the Seljuk period.
The migration of Oghuz Turks from present-day Turkmenistan, which is attested by linguistic similarity, remained high through the Mongol period, as many troops under the Ilkhanids were Turkic. By the Safavid period, the Turkic nature of Azerbaijan increased with the influence of the Qizilbash, an association of Turkoman nomadic tribes that formed the backbone of the Safavid Empire. According to Soviet scholars, the Turkicization of Azerbaijan was largely completed during the Ilkhanid period. Faruk Sümer posits three periods in which Turkicization took place: Seljuk, Mongol and post-Mongol (Qara Qoyunlu, Aq Qoyunlu and Safavid). In the first two, Oghuz Turkic tribes advanced or were driven to Anatolia and Arran. In the last period, the Turkic elements in Iran (Oghuz, with lesser admixtures of Uyghur, Qipchaq, Qarluq as well as Turkicized Mongols) were now joined by Anatolian Turks migrating back to Iran. This marked the final stage of Turkicization. The 10th-century Arab historian Al-Masudi attested the Old Azeri language and recorded that the region of Azerbaijan was inhabited by Persians. Archaeological evidence indicates that the Iranian religion of Zoroastrianism was prominent throughout the Caucasus before Christianity and Islam. According to Encyclopaedia Iranica, Azerbaijanis mainly originate from the earlier Iranian speakers, who still exist to this day in smaller numbers, and a massive migration of Oghuz Turks in the 11th and 12th centuries gradually Turkified Azerbaijan as well as Anatolia. According to Encyclopædia Britannica, the Azerbaijanis are of mixed descent, originating in the indigenous population of eastern Transcaucasia and possibly the Medians from northern Iran. There is evidence that, due to repeated invasions and migrations, aboriginal Caucasians may have been culturally assimilated, first by Ancient Iranian peoples and later by the Oghuz. Considerable information has been learned about the Caucasian Albanians, including their language, history, and early conversion to Christianity. The Udi language, still spoken in Azerbaijan, may be a remnant of the Albanians' language. Contemporary genomes from Western Asia, a region that includes Azerbaijan, have been greatly influenced by early agricultural populations in the area; later population movements, such as those of Turkic speakers, also contributed. However, as of 2017, there is no whole-genome sequencing study for Azerbaijan; sampling limitations such as these prevent forming a "finer-scale picture of the genetic history of the region". A 2014 study comparing the genetics of populations from Armenia, Georgia, and Azerbaijan (grouped as the "Western Silk Road") with those from Kazakhstan, Uzbekistan, and Tajikistan (grouped as the "Eastern Silk Road") found that the samples from Azerbaijan were the only group from the Western Silk Road to show significant contribution from the Eastern Silk Road, despite their overall clustering with the other Western Silk Road samples. The eastern input into Azerbaijani genetics was estimated to have occurred roughly 25 generations ago, corresponding to the time of the Mongol expansion. A 2002 study focusing on eleven Y-chromosome markers suggested that Azerbaijanis are genetically more related to their Caucasian geographic neighbors than to their linguistic neighbors. Iranian Azerbaijanis are genetically more similar to northern Azerbaijanis and the neighboring Turkic population than they are to geographically distant Turkmen populations.
Iranian-speaking populations from Azerbaijan (the Talysh and Tats) are genetically closer to Azerbaijanis of the Republic than to other Iranian-speaking populations (Persian people and Kurds from Iran, Ossetians, and Tajiks). Several genetic studies suggested that the Azerbaijanis originate from a native population long resident in the area who adopted a Turkic language through language replacement, including the possibility of an elite dominance scenario. However, the language replacement in Azerbaijan (and in Turkey) might not have been in accordance with the elite dominance model, with the estimated Central Asian contribution to Azerbaijan being 18% for females and 32% for males. A subsequent study also suggested a 33% Central Asian contribution to Azerbaijan. A 2001 study which looked into the first hypervariable segment of the mtDNA suggested that "genetic relationships among Caucasus populations reflect geographical rather than linguistic relationships", with Armenians and Azerbaijanians being "most closely related to their nearest geographical neighbours". Another 2004 study that looked into 910 mtDNAs from 23 populations in the Iranian plateau, the Indus Valley, and Central Asia suggested that populations "west of the Indus basin, including those from Iran, Anatolia [Turkey] and the Caucasus, exhibit a common mtDNA lineage composition, consisting mainly of western Eurasian lineages, with a very limited contribution from South Asia and eastern Eurasia". While genetic analysis of mtDNA indicates that Caucasian populations are genetically closer to Europeans than to Near Easterners, Y-chromosome results indicate closer affinity to Near Eastern groups. The range of haplogroups across the region may reflect historical genetic admixture, perhaps as a result of invasive male migrations. A comparative study (2013) of complete mitochondrial DNA diversity in Iranians indicated that Iranian Azeris are more closely related to the people of Georgia than to other Iranians or to Armenians. However, the same multidimensional scaling plot shows that Azeris from the Caucasus, despite their supposed common origin with Iranian Azeris, "occupy an intermediate position between the Azeris/Georgians and Turks/Iranians grouping". A 2007 study which looked into class II human leukocyte antigens suggested that "no close genetic relationship was observed between Azeris of Iran and the people of Turkey or Central Asians". A 2017 study which looked into HLA alleles placed the samples from Azeris in Northwest Iran "in the Mediterranean cluster close to Kurds, Gorgan, Chuvash (South Russia, towards North Caucasus), Iranians and Caucasus populations (Svan and Georgians)". This Mediterranean stock includes "Turkish and Caucasian populations". Azeri samples were also in a "position between Mediterranean and Central Asian" samples, suggesting that the Turkification "process caused by Oghuz Turkic tribes could also contribute to the genetic background of Azeri people". In a 2019 study examining genome-wide data from selected populations in North Africa and West Eurasia, Azeris were grouped with Balkars, Circassians, Georgians, Lezgins, and Turkish people. Demographics and society The vast majority of Azerbaijanis live in the Republic of Azerbaijan and Iranian Azerbaijan. Between 12 and 23 million Azerbaijanis live in Iran, mainly in the northwestern provinces. Approximately 9.1 million Azerbaijanis are found in the Republic of Azerbaijan. A diaspora of over a million is spread throughout the rest of the world.
According to Ethnologue, there are over 1 million speakers of the northern Azerbaijani dialect in southern Dagestan, Estonia, Georgia, Kazakhstan, Kyrgyzstan, Russian proper, Turkmenistan, and Uzbekistan. No Azerbaijanis were recorded in the 2001 census in Armenia, where the Nagorno-Karabakh conflict resulted in population shifts. Other sources, such as national censuses, confirm the presence of Azerbaijanis throughout the other states of the former Soviet Union. Azerbaijanis are by far the largest ethnic group in The Republic of Azerbaijan (over 90%), holding the second-largest community of ethnic Azerbaijanis after neighboring Iran. The literacy rate is very high, and is estimated at 99.5%. Azerbaijan began the twentieth century with institutions based upon those of Russia and the Soviet Union, with an official policy of atheism and strict state control over most aspects of society. Since independence, there is a secular system. Azerbaijan has benefited from the oil industry, but high levels of corruption have prevented greater prosperity for the population. Despite these problems, there is a financial rebirth in Azerbaijan as positive economic predictions and an active political opposition appear determined to improve the lives of average Azerbaijanis. The exact number of Azerbaijanis in Iran is heavily disputed. Since the early twentieth century, successive Iranian governments have avoided publishing statistics on ethnic groups. Unofficial population estimates of Azerbaijanis in Iran are around the 16% area put forth by the CIA and Library of Congress. An independent poll in 2009 placed the figure at around 20–22%. According to the Iranologist Victoria Arakelova in peer-reviewed journal Iran and the Caucasus, estimating the number of Azeris in Iran has been hampered for years since the dissolution of the Soviet Union, when the "once invented theory of the so called separated nation (i.e. the citizens of the Azerbaijan Republic, the so-called Azerbaijanis, and the Azaris in Iran), was actualised again (see in detail Reza 1993)". Arakelova adds that the number of Azeris in Iran, featuring in the politically biased publications as "Azerbaijani minority of Iran", is considered to be the "highly speculative part of this theory". Even though all Iranian censuses of population distinguish exclusively religious minorities, numerous sources have presented different figures regarding Iran's Turkic-speaking communities, without "any justification or concrete references". In the early 1990s, right after the collapse of the Soviet Union, the most popular figure depicting the number of "Azerbaijanis" in Iran was thirty-three million, at a time when the entire population of Iran was barely sixty million. Therefore, at the time, half of Iran's citizens were considered "Azerbaijanis". Shortly after, this figure was replaced by thirty million, which became "almost a normative account on the demographic situation in Iran, widely circulating not only among academics and political analysts, but also in the official circles of Russia and the West". Then, in the 2000s, the figure decreased to 20 million; this time, at least within the Russian political establishment, the figure became "firmly fixed". This figure, Arakelova adds, has been widely used and kept up to date, only with a few minor adjustments. A cursory look at Iran's demographic situation however, shows that all these figures have been manipulated and were "definitely invented on political purpose". Arakelova estimates the number of Azeris i.e. 
"Azerbaijanis" in Iran based on Iran's population demographics at 6 to 6.5 million. Azerbaijanis in Iran are mainly found in the northwest provinces: West Azerbaijan, East Azerbaijan, Ardabil, Zanjan, parts of Hamadan, Qazvin, and Markazi. Azerbaijani minorities live in the Qorveh and Bijar counties of Kurdistan, in Gilan, as ethnic enclaves in Galugah in Mazandaran, around Lotfabad and Dargaz in Razavi Khorasan, and in the town of Gonbad-e Qabus in Golestan. Large Azerbaijani populations can also be found in central Iran (Tehran and Alborz) due to internal migration. Azerbaijanis make up 25% of Tehran's population and 30.3% – 33% of the population of the Tehran Province, where Azerbaijanis are found in every city. They are the largest ethnic groups after Persians in Tehran and the Tehran Province. Arakelova notes that the widespread "cliché" among residents of Tehran on the number of Azerbaijanis in the city ("half of Tehran consists of Azerbaijanis"), cannot be taken "seriously into consideration". Arakelova adds that the number of Tehran's inhabitants who have migrated from northwestern areas of Iran, who are currently Persian-speakers "for the most part", is not more than "several hundred thousands", with the maximum being one million. Azerbaijanis have also emigrated and resettled in large numbers in Khorasan, especially in Mashhad. Generally, Azerbaijanis in Iran were regarded as "a well integrated linguistic minority" by academics prior to Iran's Islamic Revolution. Despite friction, Azerbaijanis in Iran came to be well represented at all levels of "political, military, and intellectual hierarchies, as well as the religious hierarchy". Resentment came with Pahlavi policies that suppressed the use of the Azerbaijani language in local government, schools, and the press. However, with the advent of the Iranian Revolution in 1979, emphasis shifted away from nationalism as the new government highlighted religion as the main unifying factor. Islamic theocratic institutions dominate nearly all aspects of society. The Azerbaijani language and its literature are banned in Iranian schools. There are signs of civil unrest due to the policies of the Iranian government in Iranian Azerbaijan and increased interaction with fellow Azerbaijanis in Azerbaijan and satellite broadcasts from Turkey and other Turkic countries have revived Azerbaijani nationalism. In May 2006, Iranian Azerbaijan witnessed riots over publication of a cartoon depicting a cockroach speaking Azerbaijani that many Azerbaijanis found offensive. The cartoon was drawn by Mana Neyestani, an Azeri, who was fired along with his editor as a result of the controversy. One of the major incidents that happened recently was Azeris protests in Iran (2015) started in November 2015, after children's television programme Fitileha aired on 6 November on state TV that ridiculed and mocked the accent and language of Azeris and included offensive jokes. As a result, ethnic Azeris protested a program on state TV that contained what they consider an ethnic slur. The head of the country's state broadcaster Islamic Republic of Iran Broadcasting (IRIB) Mohammad Sarafraz has apologized for airing the program, whose broadcast was later discontinued. Azerbaijanis are an intrinsic community of Iran, and their style of living closely resemble those of Persians: The lifestyles of urban Azerbaijanis do not differ from those of Persians, and there is considerable intermarriage among the upper classes in cities of mixed populations. 
Similarly, customs among Azerbaijani villagers do not appear to differ markedly from those of Persian villagers. Azeris are famously active in commerce, and in bazaars all over Iran their voluble voices can be heard. Older Azeri men wear the traditional wool hat, and their music and dances have become part of the mainstream culture. Azeris are well integrated, and many Azeri-Iranians are prominent in Persian literature, politics, and the clerical world. There is significant cross-border trade between Azerbaijan and Iran, and Azerbaijanis from Azerbaijan travel to Iran to buy cheaper goods, but the relationship was tense until recently. However, relations have significantly improved since the Rouhani administration took office. There are at least ten Azerbaijani ethnic groups, each of which has particularities in economy, culture, and everyday life; some of these groups persisted into the last quarter of the 19th century. In Azerbaijan, women were granted the right to vote in 1917. Women have attained Western-style equality in major cities such as Baku, although in rural areas more reactionary views remain. Violence against women, including rape, is rarely reported, especially in rural areas, not unlike other parts of the former Soviet Union. In Azerbaijan, the veil was abandoned during the Soviet period. Women are under-represented in elective office but have attained high positions in parliament. An Azerbaijani woman is the Chief Justice of the Supreme Court in Azerbaijan, and two others are Justices of the Constitutional Court. In the 2010 election, women constituted 16% of all MPs (twenty seats in total) in the National Assembly of Azerbaijan. Abortion is available on demand in the Republic of Azerbaijan. Elmira Süleymanova, a woman, served as human rights ombudsman from 2002 to 2019. In Iran, a groundswell of grassroots movements has sought gender equality since the 1980s. Protests in defiance of government bans are dispersed through violence, as on 12 June 2006, when female demonstrators in Haft Tir Square in Tehran were beaten. Past Iranian leaders, such as the reformist ex-president Mohammad Khatami, promised women greater rights, but the Guardian Council of Iran opposes changes that it interprets as contrary to Islamic doctrine. In the 2004 legislative elections, nine women were elected to parliament (Majlis), eight of whom were conservatives. The social fate of Azerbaijani women largely mirrors that of other women in Iran. Culture The Azerbaijanis speak the Azerbaijani language, a Turkic language of the Oghuz branch that became established in Azerbaijan in the 11th and 12th centuries CE. The Azerbaijani language is closely related to Qashqai, Gagauz, Turkish, Turkmen and Crimean Tatar, sharing varying degrees of mutual intelligibility with each of those languages. Certain lexical and grammatical differences formed within the Azerbaijani language as spoken in the Republic of Azerbaijan and Iran after nearly two centuries of separation between the communities speaking the language; mutual intelligibility, however, has been preserved. Additionally, the Turkish and Azerbaijani languages are mutually intelligible to a high enough degree that their speakers can have simple conversations without prior knowledge of the other. Early literature was mainly based on oral tradition, and the later compiled epics and heroic stories of Dede Korkut probably derive from it.
The first written, classical Azerbaijani literature arose after the Mongol invasion, while the first accepted Oghuz Turkic text goes back to the 15th century. Some of the earliest Azerbaijani writings trace back to the poet Nasimi (died 1417) and then decades later Fuzûlî (1483–1556). Ismail I, Shah of Safavid Iran wrote Azerbaijani poetry under the pen name Khatâ'i. Modern Azerbaijani literature continued with a traditional emphasis upon humanism, as conveyed in the writings of Samad Vurgun, Shahriar, and many others. Azerbaijanis are generally bilingual, often fluent in either Russian (in Azerbaijan) or Persian (in Iran) in addition to their native Azerbaijani. As of 1996, around 38% of Azerbaijan's roughly 8,000,000 population spoke Russian fluently. An independent telephone survey in Iran in 2009 reported that 20% of respondents could understand Azerbaijani, the most spoken minority language in Iran, and all respondents could understand Persian. The majority of Azerbaijanis are Twelver Shi'a Muslims. Religious minorities include Sunni Muslims (mainly Shafi'i just like other Muslims in the surrounding North Caucasus), and Baháʼís. An unknown number of Azerbaijanis in the Republic of Azerbaijan have no religious affiliation. Many describe themselves as Shia Muslims. There is a small number of Naqshbandi Sufis among Muslim Azerbaijanis. Christian Azerbaijanis number around 5,000 people in the Republic of Azerbaijan and consist mostly of recent converts. Some Azerbaijanis from rural regions retain pre-Islamic animist or Zoroastrian-influenced beliefs, such as the sanctity of certain sites and the veneration of fire, certain trees and rocks. In Azerbaijan, traditions from other religions are often celebrated in addition to Islamic holidays, including Nowruz and Christmas. In the group dance the performers come together in a semi-circular or circular formation as, "The leader of these dances often executes special figures as well as signaling and changes in the foot patterns, movements, or direction in which the group is moving, often by gesturing with his or her hand, in which a kerchief is held." Azerbaijani musical tradition can be traced back to singing bards called Ashiqs, a vocation that survives. Modern Ashiqs play the saz (lute) and sing dastans (historical ballads). Other musical instruments include the tar (another type of lute), balaban (a wind instrument), kamancha (fiddle), and the dhol (drums). Azerbaijani classical music, called mugham, is often an emotional singing performance. Composers Uzeyir Hajibeyov, Gara Garayev and Fikret Amirov created a hybrid style that combines Western classical music with mugham. Other Azerbaijanis, notably Vagif and Aziza Mustafa Zadeh, mixed jazz with mugham. Some Azerbaijani musicians have received international acclaim, including Rashid Behbudov (who could sing in over eight languages), Muslim Magomayev (a pop star from the Soviet era), Googoosh, and more recently Sami Yusuf.[citation needed] After the 1979 revolution in Iran due to the clerical opposition to music in general, Azerbaijani music took a different course. According to Iranian singer Hossein Alizadeh, "Historically in Iran, music faced strong opposition from the religious establishment, forcing it to go underground." Some Azerbaijanis have been film-makers, such as Rustam Ibragimbekov, who wrote Burnt by the Sun, winner of the Grand Prize at the Cannes Film Festival and an Academy Award for Best Foreign Language Film in 1994. 
Other ancient sports include wrestling, javelin throwing and fencing. The Soviet legacy has in modern times propelled some Azerbaijanis to become accomplished athletes at the Olympic level. The Azerbaijani government supports the country's athletic legacy and encourages youth participation. Iranian athletes of Azerbaijani origin have particularly excelled in weightlifting, gymnastics, shooting, javelin throwing, karate, boxing, and wrestling. Among them are the weightlifter Hossein Reza Zadeh, world super-heavyweight record holder and two-time Olympic champion in 2000 and 2004, and Hadi Saei, a former Iranian taekwondo athlete and Olympic gold medalist. Ramil Guliyev, an ethnic Azerbaijani who competes for Turkey, became the first world champion in athletics in the history of Turkey. Athletes such as Nizami Pashayev, who won the European heavyweight title in 2006, have excelled at the international level. Chess is another popular pastime in the Republic of Azerbaijan. The strongest players of Azerbaijani origin include Vugar Gashimov, Shahriyar Mammadyarov and Teimour Radjabov, all three of whom have been highly ranked internationally. Karate is also popular; Rafael Aghayev has achieved particular success, becoming a five-time world champion and eleven-time European champion. See also Notes References External links
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Quantum_computer] | [TOKENS: 9024] |
Contents Quantum computing A quantum computer is a (real or theoretical) computer that exploits superposed and entangled states. Quantum computers can be viewed as sampling from quantum systems. These systems evolve in ways that operate on an enormous number of possibilities simultaneously, though they remain subject to strict computational constraints. By contrast, ordinary ("classical") computers operate according to deterministic rules. (A classical computer can, in principle, be replicated by a classical mechanical device, with only a simple multiple of time cost. On the other hand (it is believed), a quantum computer would require exponentially more time and energy to be simulated classically.) It is widely believed that a quantum computer could perform some calculations exponentially faster than any classical computer. For example, a large-scale quantum computer could break some widely used public-key cryptographic schemes and aid physicists in performing physical simulations. However, current hardware implementations of quantum computation are largely experimental and only suitable for specialized tasks. The basic unit of information in quantum computing, the qubit (or "quantum bit"), serves the same function as the bit in ordinary or "classical" computing. However, unlike a classical bit, which can be in one of two states (a binary), a qubit can exist in a linear combination of two states known as a quantum superposition. The result of measuring a qubit is one of the two states given by a probabilistic rule. If a quantum computer manipulates the qubit in a particular way, wave interference effects amplify the probability of the desired measurement result. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform this amplification. Quantum computers are not yet practical for real-world applications. Physically engineering high-quality qubits has proven to be challenging. If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations. National governments have invested heavily in experimental research aimed at developing scalable qubits with longer coherence times and lower error rates. Example implementations include superconductors (which isolate an electrical current by eliminating electrical resistance) and ion traps (which confine a single atomic particle using electromagnetic fields). Researchers have claimed, and are widely believed to be correct, that certain quantum devices can outperform classical computers on narrowly defined tasks, a milestone referred to as quantum advantage or quantum supremacy. These tasks are not necessarily useful for real-world applications. History For many years, the fields of quantum mechanics and computer science formed distinct academic communities. Modern quantum theory was developed in the 1920s to explain perplexing physical phenomena observed at atomic scales, and digital computers emerged in the following decades to replace human computers for tedious calculations. Both disciplines had practical applications during World War II; computers played a major role in wartime cryptography, and quantum physics was essential for nuclear physics used in the Manhattan Project. As physicists applied quantum mechanical models to computational problems and swapped digital bits for qubits, the fields of quantum mechanics and computer science began to converge. 
In 1980, Paul Benioff introduced the quantum Turing machine, which uses quantum theory to describe a simplified computer. When digital computers became faster, physicists faced an exponential increase in overhead when simulating quantum dynamics, prompting Yuri Manin and Richard Feynman to independently suggest that hardware based on quantum phenomena might be more efficient for computer simulation. In a 1984 paper, Charles Bennett and Gilles Brassard applied quantum theory to cryptography protocols and demonstrated that quantum key distribution could enhance information security. Quantum algorithms then emerged for solving oracle problems, such as Deutsch's algorithm in 1985, the Bernstein–Vazirani algorithm in 1993, and Simon's algorithm in 1994. These algorithms did not solve practical problems, but demonstrated mathematically that one could obtain more information by querying a black box with a quantum state in superposition, sometimes referred to as quantum parallelism. Peter Shor built on these results with his 1994 algorithm for breaking the widely used RSA and Diffie–Hellman encryption protocols, which drew significant attention to the field of quantum computing. In 1996, Grover's algorithm established a quantum speedup for the widely applicable unstructured search problem. The same year, Seth Lloyd proved that quantum computers could simulate quantum systems without the exponential overhead present in classical simulations, validating Feynman's 1982 conjecture. Over the years, experimentalists have constructed small-scale quantum computers using trapped ions and superconductors. In 1998, a two-qubit quantum computer demonstrated the feasibility of the technology, and subsequent experiments have increased the number of qubits and reduced error rates. In 2019, Google AI and NASA announced that they had achieved quantum supremacy with a 54-qubit machine, performing a computation that any classical computer would find impossible. This announcement was met with a rebuttal from IBM, which contended that the calculation Google claimed would take 10,000 years could be performed in just 2.5 days on its Summit supercomputer if its architecture were optimized, sparking a debate over the precise threshold for "quantum supremacy". Quantum information processing Computer engineers typically describe a modern computer's operation in terms of classical electrodynamics. In these "classical" computers, some components (such as semiconductors and random number generators) may rely on quantum behavior; however, because they are not isolated from their environment, any quantum information eventually quickly decoheres. While programmers may depend on probability theory when designing a randomized algorithm, quantum-mechanical notions such as superposition and wave interference are largely irrelevant in program analysis. Quantum programs, in contrast, rely on precise control of coherent quantum systems. Physicists describe these systems mathematically using linear algebra. Complex numbers model probability amplitudes, vectors model quantum states, and matrices model the operations that can be performed on these states. Programming a quantum computer is then a matter of composing operations in such a way that the resulting program computes a useful result in theory and is implementable in practice. As physicist Charlie Bennett describes the relationship between quantum and classical computers, A classical computer is a quantum computer ... 
so we shouldn't be asking about "where do quantum speedups come from?" We should say, "Well, all computers are quantum. ... Where do classical slowdowns come from?" Just as the bit is the basic concept of classical information theory, the qubit is the fundamental unit of quantum information. The same term qubit is used to refer to an abstract mathematical model and to any physical system that is represented by that model. A classical bit, by definition, exists in either of two physical states, which can be denoted 0 and 1. A qubit is also described by a state, and two states, often written |0⟩ and |1⟩, serve as the quantum counterparts of the classical states 0 and 1. However, the quantum states |0⟩ and |1⟩ belong to a vector space, meaning that they can be multiplied by constants and added together, and the result is again a valid quantum state. Such a combination is known as a superposition of |0⟩ and |1⟩. A two-dimensional vector mathematically represents a qubit state. Physicists typically use bra–ket notation for quantum mechanical linear algebra, writing |ψ⟩ 'ket psi' for a vector labeled ψ. Because a qubit is a two-state system, any qubit state takes the form α|0⟩ + β|1⟩, where |0⟩ and |1⟩ are the standard basis states, and α and β are the probability amplitudes, which are in general complex numbers. If either α or β is zero, the qubit is effectively a classical bit; when both are nonzero, the qubit is in superposition. Such a quantum state vector behaves similarly to a (classical) probability vector, with one key difference: unlike probabilities, probability amplitudes are not necessarily positive numbers. Negative amplitudes allow for destructive wave interference. When a qubit is measured in the standard basis, the result is a classical bit. The Born rule describes the norm-squared correspondence between amplitudes and probabilities: when measuring a qubit α|0⟩ + β|1⟩, the state collapses to |0⟩ with probability |α|², or to |1⟩ with probability |β|². Any valid qubit state has coefficients α and β such that |α|² + |β|² = 1. As an example, measuring the qubit 1/√2|0⟩ + 1/√2|1⟩ would produce either |0⟩ or |1⟩ with equal probability. Each additional qubit doubles the dimension of the state space. As an example, the vector 1/√2|00⟩ + 1/√2|01⟩ represents a two-qubit state, a tensor product of the qubit |0⟩ with the qubit 1/√2|0⟩ + 1/√2|1⟩. This vector inhabits a four-dimensional vector space spanned by the basis vectors |00⟩, |01⟩, |10⟩, and |11⟩. The Bell state 1/√2|00⟩ + 1/√2|11⟩ is impossible to decompose into the tensor product of two individual qubits; the two qubits are entangled because neither qubit has a state vector of its own.
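To make this linear-algebra picture concrete, the following short NumPy sketch (an illustration added here, not part of the original article) builds single- and two-qubit state vectors, checks normalization, computes Born-rule measurement probabilities, and constructs the product and Bell states just described.

```python
import numpy as np

# Computational basis states |0> and |1> as column vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A general qubit state a|0> + b|1>; amplitudes may be complex and
# must satisfy |a|^2 + |b|^2 = 1.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = a * ket0 + b * ket1
assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)   # normalization check

# Born rule: measuring in the standard basis yields 0 with probability
# |a|^2 and 1 with probability |b|^2.
probs = np.abs(psi) ** 2
print("P(0), P(1) =", probs)            # [0.5, 0.5] for this state

# Two-qubit states live in a 4-dimensional space built from the tensor
# (Kronecker) product of the single-qubit spaces.
product_state = np.kron(ket0, psi)      # |0> tensored with a|0> + b|1>
print("product state:", product_state)

# The Bell state (|00> + |11>)/sqrt(2) cannot be written as a tensor
# product of two single-qubit states: the qubits are entangled.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print("Bell state:", bell)
```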
In general, the vector space for an n-qubit system is 2ⁿ-dimensional, and this makes it challenging for a classical computer to simulate a quantum one: representing a 100-qubit system requires storing 2¹⁰⁰ classical values. The state of a one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by the matrix X := [[0, 1], [1, 0]] (rows listed top to bottom). Mathematically, the application of such a logic gate to a quantum state vector is modeled with matrix multiplication. Thus X|0⟩ = |1⟩ and X|1⟩ = |0⟩. The mathematics of single-qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit while leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated using another example. The possible states of a two-qubit quantum memory are |00⟩ := (1, 0, 0, 0)ᵀ; |01⟩ := (0, 1, 0, 0)ᵀ; |10⟩ := (0, 0, 1, 0)ᵀ; |11⟩ := (0, 0, 0, 1)ᵀ. The controlled NOT (CNOT) gate can then be represented using the following matrix: CNOT := [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]. As a mathematical consequence of this definition, CNOT|00⟩ = |00⟩, CNOT|01⟩ = |01⟩, CNOT|10⟩ = |11⟩, and CNOT|11⟩ = |10⟩. In other words, the CNOT applies a NOT gate (X from before) to the second qubit if and only if the first qubit is in the state |1⟩. If the first qubit is |0⟩, nothing is done to either qubit. In summary, quantum computation can be described as a network of quantum logic gates and measurements. However, any measurement can be deferred to the end of quantum computation, though this deferment may come at a computational cost, so most quantum circuits depict a network consisting only of quantum logic gates and no measurements. Quantum parallelism is the heuristic that quantum computers can be thought of as evaluating a function for multiple input values simultaneously. This can be achieved by preparing a quantum system in a superposition of input states and applying a unitary transformation that encodes the function to be evaluated. The resulting state encodes the function's output values for all input values in the superposition, enabling the simultaneous computation of multiple outputs. This property is key to the speedup of many quantum algorithms. However, "parallelism" in this sense is insufficient to speed up a computation, because the measurement at the end of the computation gives only one value. To be useful, a quantum algorithm must also incorporate some other conceptual ingredient.
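The gate action described above is literally matrix-vector multiplication, which a few lines of NumPy can verify. This is only a sanity-check sketch of the X and CNOT matrices given in the text, not an efficient simulator.

```python
import numpy as np

# Single-qubit NOT (Pauli-X) gate and two-qubit CNOT gate, exactly the
# matrices given in the text (rows listed top to bottom).
X = np.array([[0, 1],
              [1, 0]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Applying a gate is matrix-vector multiplication: X|0> = |1>.
print(X @ ket0)                                   # [0, 1], i.e. |1>

# Two-qubit basis states as tensor (Kronecker) products.
kets = {"00": np.kron(ket0, ket0), "01": np.kron(ket0, ket1),
        "10": np.kron(ket1, ket0), "11": np.kron(ket1, ket1)}

# Acting on only the second qubit: tensor the gate with the identity
# on the untouched (first) qubit.
X_on_second = np.kron(np.eye(2), X)
print(np.allclose(X_on_second @ kets["10"], kets["11"]))   # True

# CNOT flips the second qubit exactly when the first qubit is |1>.
for label, v in kets.items():
    out = CNOT @ v
    flipped = next(l for l, w in kets.items() if np.allclose(w, out))
    print(f"CNOT|{label}> = |{flipped}>")
```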
There are multiple models of computation for quantum computing, distinguished by the basic elements into which the computation is decomposed. A quantum gate array decomposes computation into a sequence of few-qubit quantum gates. Any quantum computation (which is, in the above formalism, any unitary matrix of size 2ⁿ × 2ⁿ over n qubits) can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set, since a computer that can run such circuits is a universal quantum computer. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay–Kitaev theorem. Boolean functions can likewise be implemented using such few-qubit quantum gates. A measurement-based quantum computer decomposes computation into a sequence of Bell state measurements and single-qubit quantum gates applied to a highly entangled initial state (a cluster state), using a technique called quantum gate teleportation. An adiabatic quantum computer, based on quantum annealing, decomposes computation into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution. Neuromorphic quantum computing (abbreviated 'n.quantum computing') is an unconventional process of computing that uses neuromorphic computing to perform quantum operations. It has been suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing. Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional computing approaches and do not follow the von Neumann architecture. Both construct a system (a circuit) that represents the physical problem at hand and then leverage the respective physical properties of the system to seek the "minimum". Neuromorphic quantum computing and quantum computing share similar physical properties during computation. A topological quantum computer decomposes computation into the braiding of anyons in a 2D lattice. A quantum Turing machine is the quantum analog of a Turing machine. All of these models of computation—quantum circuits, one-way quantum computation, adiabatic quantum computation, and topological quantum computation—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical. The threshold theorem shows how increasing the number of qubits can mitigate errors, yet fully fault-tolerant quantum computing remains "a rather distant dream".
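As a small illustration of the gate-array model and of a universal gate set in action, the sketch below composes one single-qubit gate with the CNOT gate to prepare an entangled state. The Hadamard gate H used here is a standard single-qubit gate that is not defined in the text above; it is assumed purely for this example.

```python
import numpy as np

# A tiny gate-model "circuit": single-qubit gates plus CNOT form a
# universal set, so any computation is a product of such matrices.
# The Hadamard gate H (an assumption of this sketch, not defined in the
# surrounding text) maps |0> to (|0> + |1>)/sqrt(2).
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start the two-qubit register in |00>.
state = np.zeros(4, dtype=complex)
state[0] = 1.0

# Circuit: H on the first qubit, then CNOT with the first qubit as control.
state = np.kron(H, I2) @ state
state = CNOT @ state

# The result is the entangled Bell state (|00> + |11>)/sqrt(2):
# amplitude ~0.707 on |00> and |11>, zero elsewhere.
print(np.round(state, 3))
```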
According to some researchers, noisy intermediate-scale quantum (NISQ) machines may have specialized uses in the near future, but noise in quantum gates limits their reliability. Scientists at Harvard University successfully created "quantum circuits" that correct errors more efficiently than alternative methods, which may potentially remove a major obstacle to practical quantum computers. The Harvard research team was supported by MIT, QuEra Computing, Caltech, and Princeton University and funded by DARPA's Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) program. Digital cryptography enables communications to remain private, preventing unauthorized parties from accessing them. Conventional encryption, the obscuring of a message with a key through an algorithm, relies on the algorithm being difficult to reverse. Encryption is also the basis for digital signatures and authentication mechanisms. Quantum computing may be sufficiently more powerful that difficult reversals are feasible, allowing messages relying on conventional encryption to be read. Quantum cryptography replaces conventional algorithms with computations based on quantum computing. In principle, quantum encryption would be impossible to decode even with a quantum computer. This advantage comes at a significant cost in terms of elaborate infrastructure, while effectively preventing legitimate decoding of messages by governmental security officials. Ongoing research in quantum and post-quantum cryptography has led to new algorithms for quantum key distribution, initial work on quantum random number generation and to some early technology demonstrations.: 1012–1036 Communication Quantum cryptography enables new ways to transmit data securely; for example, quantum key distribution uses entangled quantum states to establish secure cryptographic keys.: 1017 When a sender and receiver exchange quantum states, they can guarantee that an adversary does not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change. With appropriate cryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping. Modern fiber-optic cables can transmit quantum information over relatively short distances. Ongoing experimental research aims to develop more reliable hardware (such as quantum repeaters), hoping to scale this technology to long-distance quantum networks with end-to-end entanglement. Theoretically, this could enable novel technological applications, such as distributed quantum computing and enhanced quantum sensing. Algorithms Progress in finding quantum algorithms typically focuses on the quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms. Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and, more generally, solving the hidden subgroup problem for abelian finite groups. These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, but evidence suggests that this is unlikely. 
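Because the algorithms just mentioned all rest on the quantum Fourier transform, a minimal numerical sketch may help. The code below (an illustration added here, not from the source article) builds the QFT as an explicit matrix and applies it to a state with period-4 structure, showing how the output probabilities reveal the period; a real device implements the same transform as a circuit of roughly n² gates rather than as a dense 2ⁿ × 2ⁿ matrix.

```python
import numpy as np

# Discrete quantum Fourier transform on n qubits: an N x N unitary with
# entries omega**(j*k) / sqrt(N), where N = 2**n and
# omega = exp(2*pi*i / N).  This is the primitive behind period finding.
def qft_matrix(n):
    N = 2 ** n
    omega = np.exp(2j * np.pi / N)
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return omega ** (j * k) / np.sqrt(N)

F = qft_matrix(4)                                  # 4 qubits, N = 16
assert np.allclose(F.conj().T @ F, np.eye(16))     # F is unitary

# A state spread uniformly over the periodic set {3, 7, 11, 15}
# (period r = 4); four equal amplitudes of 1/2 keep it normalized.
state = np.zeros(16, dtype=complex)
state[3::4] = 1 / 2

# After the QFT, the measurement probabilities concentrate on multiples
# of N/r = 4, which is how the period is read out.
probs = np.abs(F @ state) ** 2
print(np.round(probs, 3))        # peaks of 0.25 at indices 0, 4, 8, 12
```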
Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, which is a restricted model where lower bounds are much easier to prove and don't necessarily translate to speedups for practical problems. Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations, have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that "no quantum algorithm" provides a super-polynomial speedup, which is believed to be unlikely. In addition to these problems, quantum algorithms are being explored for applications in cryptography, optimization, and machine learning, although most of these remain at the research stage and require significant advances in error correction and hardware scalability before practical implementation. Some quantum algorithms, such as Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms. Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems. These speed-ups are, however, over the theoretical worst-case of classical algorithms, and concrete real-world speed-ups over algorithms used in practice have not been demonstrated. Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, quantum simulation may be an important application of quantum computing. Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider. In June 2023, IBM computer scientists reported that a quantum computer produced better results for a physics problem than a conventional supercomputer. About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertiliser industry (even though naturally occurring organisms also produce ammonia). Quantum simulations might be used to understand this process and increase the energy efficiency of production. It is expected that an early use of quantum computing will be modeling that improves the efficiency of the Haber–Bosch process by the mid-2020s although some have predicted it will take longer. A notable application of quantum computing is in attacking cryptographic systems that are currently in use. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible on a classical computer for large integers if they are the product of a few prime numbers (e.g., the product of two 300-digit primes). By contrast, a quantum computer could solve this problem exponentially faster using Shor's algorithm to factor the integer. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. 
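To show where factoring enters, the sketch below gives the classical half of Shor's algorithm: given the period r of a^x mod N, elementary number theory recovers the factors. Here the period is found by brute force purely for illustration; replacing that step with quantum period finding is exactly what gives Shor's algorithm its advantage. This is an explanatory sketch, not code from any referenced source.

```python
from math import gcd
from random import randrange

# Brute-force period finding: the exponentially expensive step that a
# quantum computer would replace.  Assumes gcd(a, N) == 1.
def find_period_classically(a, N):
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

# Classical post-processing of Shor's algorithm for an odd composite N.
def shor_classical_postprocessing(N):
    while True:
        a = randrange(2, N)
        d = gcd(a, N)
        if d > 1:                       # lucky guess already shares a factor
            return d, N // d
        r = find_period_classically(a, N)
        if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
            p = gcd(pow(a, r // 2, N) - 1, N)
            q = gcd(pow(a, r // 2, N) + 1, N)
            if 1 < p < N:
                return p, N // p
            if 1 < q < N:
                return q, N // q
        # otherwise the chosen base was unlucky; try another random a

print(shor_classical_postprocessing(15))   # e.g. (3, 5)
print(shor_classical_postprocessing(21))   # e.g. (3, 7)
```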
In particular, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security. Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, such as the McEliece cryptosystem, which relies on a hard problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem. It has been shown that applying Grover's algorithm to break a symmetric (secret-key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have comparable security against an attack using Grover's algorithm to that which AES-128 has against classical brute-force search (see Key size). The best-known example of a problem that allows for a polynomial quantum speedup is unstructured search, which involves finding a marked item out of a list of n items in a database. This can be solved by Grover's algorithm using O(√n) queries to the database, quadratically fewer than the Ω(n) queries required for classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Many examples of provable quantum speedups for query problems are based on Grover's algorithm, including Brassard, Høyer, and Tapp's algorithm for finding collisions in two-to-one functions, and Farhi, Goldstone, and Gutmann's algorithm for evaluating NAND trees. Problems that can be efficiently addressed with Grover's algorithm share a few structural properties; for problems with these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied is the Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies. Quantum annealing uses the adiabatic theorem to perform calculations. A system is placed in the ground state for a simple Hamiltonian, which slowly evolves to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough, the system will stay in its ground state at all times throughout the process.
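The quadratic advantage of Grover's algorithm for the unstructured-search problem discussed above can be reproduced with a small state-vector simulation. The example below uses a dense oracle and diffusion operator, which is only feasible for toy sizes; it illustrates the query count, not how a real device would be programmed.

```python
import numpy as np

# State-vector simulation of Grover search over N = 2**n items with one
# marked item.  After about (pi/4) * sqrt(N) oracle + diffusion rounds,
# measurement yields the marked index with probability close to 1.
def grover_search(n_qubits, marked):
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))        # uniform superposition

    oracle = np.eye(N)
    oracle[marked, marked] = -1               # phase-flip the marked item

    # "Inversion about the mean" (diffusion) operator.
    diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)

    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state = diffusion @ (oracle @ state)

    return iterations, np.abs(state[marked]) ** 2

iters, p = grover_search(n_qubits=8, marked=42)   # N = 256 items
print(f"{iters} oracle queries, P(marked) = {p:.3f}")
# About 12 queries instead of ~256 classical lookups, success prob ~1.
```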
Quantum annealing can solve Ising models and the (computationally equivalent) QUBO problem, which in turn can be used to encode a wide range of combinatorial optimization problems. Adiabatic optimization may be helpful for solving computational biology problems. Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks. For example, the HHL Algorithm, named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts. Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks. Deep generative chemistry models have been explored for potential applications in drug discovery. Early experimental work has explored the use of near-term quantum hardware in molecular generative modeling for drug discovery. In 2023, researchers at Gero reported a hybrid quantum–classical generative model based on a restricted Boltzmann machine, implemented on a commercially available quantum annealing device, to generate novel drug-like small molecules with physicochemical properties comparable to known medicinal compounds. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models including quantum GANs may eventually be developed into ultimate generative chemistry algorithms. Engineering As of 2023,[update] classical computers outperform quantum computers for all real-world applications. While current quantum computers may speed up solutions to particular mathematical problems, they give no computational advantage for practical tasks. Scientists and engineers are exploring multiple technologies for quantum computing hardware and hope to develop scalable quantum architectures, but serious obstacles remain. There are a number of technical challenges in building a large-scale quantum computer. Physicist David DiVincenzo has listed these requirements for a practical quantum computer: Sourcing parts for quantum computers is also very difficult. Superconducting quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co. The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers that enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge. One of the greatest challenges involved in constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, the lattice vibrations, and the background thermonuclear spin of the physical system used to implement the qubits. 
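To see what an annealer is asked to do in the QUBO formulation mentioned at the start of this passage, it helps to write down a concrete instance. The toy example below encodes Max-Cut on a four-node cycle as a QUBO matrix and finds the minimum by brute force, which is the check an annealer is meant to replace at sizes where enumerating all 2ⁿ assignments is hopeless. The encoding is a standard textbook construction, shown here only for illustration.

```python
import itertools
import numpy as np

# Quantum annealers minimize QUBO objectives x^T Q x over binary vectors x.
# Toy instance: Max-Cut on a 4-node cycle, encoded so that minimizing the
# QUBO maximizes the number of cut edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
Q = np.zeros((n, n))
for i, j in edges:
    # Each edge contributes -(x_i + x_j - 2*x_i*x_j) to the objective.
    Q[i, i] -= 1
    Q[j, j] -= 1
    Q[i, j] += 2

# Brute-force search over all 2**n binary assignments.
best_value, best_x = min(
    (float(np.array(x) @ Q @ np.array(x)), x)
    for x in itertools.product((0, 1), repeat=n)
)
print(best_x, "cut size =", -best_value)   # e.g. (0, 1, 0, 1), cut size 4
```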
Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T₂ (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperatures. Currently, some quantum computers require their qubits to be cooled to 20 millikelvin (usually using a dilution refrigerator) in order to prevent significant decoherence. A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds. As a result, time-consuming tasks may render some quantum algorithms inoperable, as attempting to maintain the state of qubits for a long enough duration will eventually corrupt the superpositions. These issues are more difficult for optical approaches, as the timescales are orders of magnitude shorter, and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time; hence, any operation must be completed much more quickly than the decoherence time. As described by the threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10⁻³, assuming the noise is depolarizing. Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L², where L is the number of binary digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10⁴ bits without error correction. With error correction, the figure would rise to about 10⁷ bits. Computation time is about L², or about 10⁷ steps; at 1 MHz, this is about 10 seconds. However, the encoding and error-correction overheads increase the size of a real fault-tolerant quantum computer by several orders of magnitude. Careful estimates show that at least 3 million physical qubits would factor a 2,048-bit integer in 5 months on a fully error-corrected trapped-ion quantum computer. In terms of the number of physical qubits, to date, this remains the lowest estimate for a practically useful integer factorization problem of 1,024 bits or larger. One approach to overcoming errors combines a low-density parity-check code with cat qubits that have intrinsic bit-flip error suppression. Implementing 100 logical qubits with 768 cat qubits could reduce the error rate to one part in 10⁸ per cycle per bit. Another approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, relying on braid theory to form stable logic gates. Non-Abelian anyons can, in effect, remember how they have been manipulated, making them potentially useful in quantum computing. As of 2025, Microsoft and other organizations are investing in quasi-particle research.
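The resource figures quoted above are order-of-magnitude estimates, and the short calculation below simply reproduces them; the constant factors are chosen only so that the printed numbers match the rough values in the text and carry no independent authority.

```python
# Back-of-envelope arithmetic behind the resource figures quoted above
# (a rough sketch; the article's numbers are themselves order-of-magnitude
# estimates, and the factor of 10 below is a fudge chosen to match them).
L = 1000                         # binary digits of the number to factor

logical_qubits = 10 * L          # "between L and L^2": ~10^4 qubits quoted
with_error_correction = logical_qubits * L   # extra factor of L -> ~10^7

steps = 10 * L ** 2              # computation time ~L^2 -> ~10^7 steps
gate_rate = 1e6                  # assumed 1 MHz gate rate
seconds = steps / gate_rate      # ~10 seconds

print(f"~{logical_qubits:.0e} qubits without error correction")
print(f"~{with_error_correction:.0e} qubits with error correction")
print(f"~{steps:.0e} steps, ~{seconds:.0f} s at 1 MHz")
```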
Physicist John Preskill coined the term quantum supremacy to describe the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers. The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark. In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer. This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed, and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to the gap between Sycamore and classical supercomputers and even beating it. In December 2020, a group at USTC implemented a type of Boson sampling on 76 photons with a photonic quantum computer, Jiuzhang, to demonstrate quantum supremacy. The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds. Claims of quantum supremacy have generated hype around quantum computing, but they are based on contrived benchmark tasks that do not directly imply useful real-world applications. In January 2024, a study published in Physical Review Letters provided direct verification of quantum supremacy experiments by computing exact amplitudes for experimentally generated bitstrings using a new-generation Sunway supercomputer, demonstrating a significant leap in simulation capability built on a multiple-amplitude tensor network contraction algorithm. Despite high hopes for quantum computing, significant progress in hardware, and optimism about future applications, a 2023 Nature spotlight article summarized current quantum computers as being "For now, [good for] absolutely nothing". The article elaborated that quantum computers are yet to be more useful or efficient than conventional computers in any case, though it also argued that, in the long term, such computers are likely to be useful. A 2023 Communications of the ACM article found that current quantum computing algorithms are "insufficient for practical quantum advantage without significant improvements across the software/hardware stack". It argues that the most promising candidates for achieving speedup with quantum computers are "small-data problems", for example, in chemistry and materials science. However, the article also concludes that a large range of the potential applications it considered, such as machine learning, "will not achieve quantum advantage with current quantum algorithms in the foreseeable future", and it identified I/O constraints that make speedup unlikely for "big data problems, unstructured linear systems, and database search based on Grover's algorithm". This state of affairs can be traced to several current and long-term considerations. In particular, building computers with large numbers of qubits may be futile if those qubits are not connected well enough and cannot maintain a sufficiently high degree of entanglement for a long time. 
When trying to outperform conventional computers, quantum computing researchers often look for new tasks that can be solved on quantum computers, but this leaves the possibility that efficient non-quantum techniques will be developed in response, as has happened with quantum supremacy demonstrations. Therefore, it is desirable to prove lower bounds on the complexity of the best possible non-quantum algorithms (which may be unknown) and show that some quantum algorithms asymptotically improve upon those bounds. Bill Unruh doubted the practicality of quantum computers in a paper published in 1994. Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle. Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved. Physicist Mikhail Dyakonov has likewise expressed skepticism of quantum computing. A practical quantum computer must use a physical system as a programmable quantum register. Researchers are exploring several technologies as candidates for reliable qubit implementations. Superconductors and trapped ions are some of the most developed proposals, but experimentalists are considering other hardware possibilities as well. For example, topological quantum computer approaches are being explored for more fault-tolerant computing systems. The first quantum logic gates were implemented with trapped ions, and prototype general-purpose machines with up to 20 qubits have been realized. However, the technology behind these devices combines complex vacuum equipment, lasers, and microwave and radio frequency equipment, making full-scale processors difficult to integrate with standard computing equipment. Moreover, the trapped ion system itself has engineering challenges to overcome. The largest commercial systems are based on superconducting devices and have scaled to 2,000 qubits. However, the error rates for larger machines have been on the order of 5%. Technologically, these devices are all cryogenic, and scaling to large numbers of qubits requires wafer-scale integration, a serious engineering challenge by itself. In addition to cryogenic platforms, room-temperature approaches to spin–photon interfaces have been experimentally demonstrated. In 2025, researchers at Stanford University realized a nanoscale device in which a thin layer of molybdenum diselenide is integrated on a nanostructured silicon substrate, enabling a spin–photon interface that operates at ambient conditions using structured "twisted" light to couple electronic and photonic degrees of freedom. Such room-temperature, chip-integrated spin–photon interfaces are being investigated as potential building blocks for heterogeneous quantum networks that combine different qubit modalities and reduce reliance on large cryogenic infrastructures. Potential applications From the perspective of business management, the potential applications of quantum computing are commonly classified into four key domains: (1) cybersecurity; (2) data analytics and artificial intelligence; (3) optimization and simulation; and (4) data management and search. Other applications include healthcare (e.g., drug discovery), financial modeling, and natural language processing. Theory Any computational problem solvable by a classical computer is also solvable by a quantum computer.
Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers. Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church–Turing thesis. While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers. The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for "bounded error, quantum, polynomial time". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP ("bounded error, probabilistic, polynomial time"), the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP, but there is no proof that BQP ≠ BPP, which would intuitively mean that quantum computers are more powerful than classical computers in terms of time complexity. The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning that there exist problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP (integer factorization and the discrete logarithm problem are both in NP, for example). It is suspected that NP ⊈ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems (if an NP-complete problem were in BQP, then it would follow from NP-hardness that all problems in NP are in BQP). See also Notes References Sources Further reading External links |
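The statement that any quantum computation can, in principle, be simulated classically (at a cost that grows exponentially with the number of qubits) can be made concrete with a toy state-vector simulator. The sketch below is illustrative and not taken from the article; it uses NumPy to apply a Hadamard and a CNOT gate to two qubits and sample measurement outcomes under the Born rule.

```python
import numpy as np

# Toy state-vector simulation of a 2-qubit circuit (H on the first qubit, then
# CNOT), illustrating that a classical computer can simulate a quantum circuit
# at the cost of tracking 2^n complex amplitudes for n qubits.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # control = first qubit

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = np.kron(H, I2) @ state                 # Hadamard on the first qubit
state = CNOT @ state                           # entangle: (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2                     # Born rule
probs /= probs.sum()                           # guard against float round-off
samples = np.random.choice(4, size=10, p=probs)
print(probs)                                   # approximately [0.5, 0, 0, 0.5]
print([format(s, "02b") for s in samples])     # only '00' and '11' appear
```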
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Pukwudgie] | [TOKENS: 388] |
Contents Pukwudgie A Pukwudgie, also spelled Puk-Wudjie (another spelling, Puck-wudj-ininee, translated by Henry Schoolcraft as "little wild man of the woods that vanishes"), is a human-like creature of Wampanoag folklore, found in Delaware, Prince Edward Island, and parts of Indiana and Massachusetts, sometimes said to be two to three feet (0.61 to 0.91 m) tall. In mythology According to legend, Pukwudgies can appear and disappear at will, shapeshift (of which the most common form is a creature that looks like a porcupine from the back and a half-troll, half-human from the front and walks upright), lure people to their deaths, use magic, launch poison arrows, and create fire. Native Americans believed that Pukwudgies were once friendly to humans, but then turned against them, and are best left alone. According to lore, a person who annoyed a Pukwudgie would be subject to nasty tricks by it, or subject to being followed by the Pukwudgie, who would cause trouble for them. They are known to kidnap people, push them off cliffs, attack their victims with short knives and spears, and to use sand to blind their victims. Pukwudgies are said to be the enemies of culture heroes, the giant Maushop and his wife, Granny Squannit. One story from Wampanoag folklore explains that they began causing mischief and tormenting the natives out of jealousy of the devotion and affection the natives had for Maushop, who eventually exiled them to different parts of North America. The Pukwudgies have since been hostile to humans, and took revenge by killing Maushop's five sons. Some variations even suggest that they killed Maushop himself. References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Filename] | [TOKENS: 3760] |
Contents Filename A filename is used to uniquely identify a computer file in a file system. Different file systems impose different restrictions on filename lengths. A filename may (depending on the file system) include various components. The components required to identify a file by utilities and applications vary across operating systems, as does the syntax and format for a valid filename. The characters allowed in filenames depend on the file system. The letters A–Z and digits 0–9 are allowed by most file systems; many file systems support additional characters, such as the letters a–z, special characters, and other printable characters such as accented letters, symbols in non-Roman alphabets, and symbols in non-alphabetic scripts. Some file systems allow even unprintable characters, including Bell, Null, Return and Linefeed, to be part of a filename, although most utilities do not handle them well. Filenames may include things like a revision or generation number of the file, a numerical sequence number (widely used by digital cameras through the DCF standard), a date and time (widely used by smartphone camera software and for screenshots), or a comment such as the name of a subject or a location or any other text to help identify the file. Some people use the term filename when referring to a complete specification of device, subdirectories, and filename, such as the Windows path C:\Program Files\Microsoft Games\Chess\Chess.exe. The filename in this case is Chess.exe. Some utilities have settings to suppress the extension, as with MS Windows Explorer. History During the 1970s, some mainframes and minicomputers had operating systems where files on the system were identified by a user name or account number. For example, files were identified in this way on the TOPS-10 and RSTS/E operating systems from Digital Equipment Corporation. On the OS/360 and successor operating systems from IBM, a file name can be up to 44 characters, consisting of upper-case letters, digits, and the period; it must start with a letter or number, a period must occur at least once in each 8 characters, two consecutive periods cannot appear in the name, and the name must end with a letter or digit. By convention, when using TSO, the letters and numbers before the first period form the account number of the owner or the project the file belongs to, but there is no requirement to use this convention. The McGill University MUSIC/SP system and the Univac VS/9 operating system had their own file naming formats. In 1985, RFC 959 officially defined a pathname to be the character string that must be entered into a file system by a user in order to identify a file. On early personal computers using the CP/M operating system, filenames were always 11 characters. This was referred to as an 8.3 filename, with a maximum 8-byte name and a maximum 3-byte extension. Utilities and applications allowed users to specify filenames without trailing spaces and to include a dot before the extension; the dot was not actually stored in the directory. Using only 7-bit characters allowed several file attributes to be included in the actual filename by using the high-order bit; these attributes included Readonly, Archive, and System. Eventually this proved too restrictive, and the number of characters allowed increased.
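The 8.3 layout and the high-order-bit attribute trick described above can be illustrated with a short sketch. This is an illustration only: the byte positions chosen for each attribute are assumptions, not a statement of the exact CP/M on-disk format.

```python
# Illustrative sketch (not from the article): an 8.3 name stored as 11 bytes
# (8-byte name + 3-byte extension, space-padded, no dot), with file attributes
# kept in the otherwise unused high-order bit of 7-bit characters. The byte
# positions used for each attribute below are assumptions for illustration.
ATTR_BYTE = {"readonly": 8, "system": 9, "archive": 10}   # assumed positions

def pack_8_3(name: str, ext: str) -> bytearray:
    return bytearray(f"{name.upper():<8}{ext.upper():<3}".encode("ascii"))

def set_attr(entry: bytearray, attr: str) -> None:
    entry[ATTR_BYTE[attr]] |= 0x80             # set the high-order bit

def get_attrs(entry: bytearray) -> list:
    return [a for a, i in ATTR_BYTE.items() if entry[i] & 0x80]

def display_name(entry: bytearray) -> str:
    stripped = bytes(b & 0x7F for b in entry)  # drop the attribute bits
    name, ext = stripped[:8].decode().rstrip(), stripped[8:].decode().rstrip()
    return f"{name}.{ext}" if ext else name

entry = pack_8_3("chess", "exe")
set_attr(entry, "readonly")
print(display_name(entry), get_attrs(entry))   # CHESS.EXE ['readonly']
```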
The attribute bits were moved to a special block of the file including additional information. The original File Allocation Table (FAT) file system, used by Standalone Disk BASIC-80, had a 6.3 file name, with a maximum of 6 bytes in the name and a maximum of 3 bytes in the extension. The FAT12 and FAT16 file systems in IBM PC DOS/MS-DOS and Microsoft Windows prior to Windows 95 used the same 8.3 convention as the CP/M file system. The FAT file systems supported 8-bit characters, allowing them to support non-ASCII characters in file names, and stored the attributes separately from the file name. Around 1995, VFAT, an extension to the MS-DOS FAT filesystem, was introduced in Windows 95 and Windows NT. It allowed mixed-case long filenames (LFNs), using Unicode characters, in addition to classic "8.3" names. File naming schemes Programs and devices may automatically assign names to files, such as a numerical counter (for example IMG_0001.JPG) or a time stamp with the current date and time. The benefit of a time-stamped file name is that it facilitates searching files by date, given that file managers usually feature file searching by name. In addition, files from different devices can be merged in one directory without file naming conflicts. Numbered file names, on the other hand, do not require that the device has a correctly set internal clock. For example, some digital camera users might not bother setting the clock of their camera. Internet-connected devices such as smartphones may synchronize their clock from an NTP server. Perhaps the most common file naming convention is to limit directory names and file names to the 65 characters in the POSIX portable filename character set. One common approach is to store the full "title" of a document inside the file itself as arbitrary UTF-8 characters, and then automatically generate a "slug" from that title to use as the filename (see the slug-generation sketch after this passage). References: absolute vs relative An absolute reference includes all directory levels. In some systems, a filename reference that does not include the complete directory path defaults to the current working directory. This is a relative reference. One advantage of using a relative reference in program configuration files or scripts is that different instances of the script or program can use different files. An absolute or relative path is thus composed of a sequence of filenames. Number of names per file Unix-like file systems allow a file to have more than one name; in traditional Unix-style file systems, the names are hard links to the file's inode or equivalent. Windows supports hard links on NTFS file systems, and provides the command fsutil in Windows XP, and mklink in later versions, for creating them. Hard links are different from Windows shortcuts, classic Mac OS/macOS aliases, or symbolic links. The introduction of LFNs with VFAT allowed filename aliases. For example, longfi~1.??? with a maximum of eight plus three characters was a filename alias of "long file name.???" as a way to conform to 8.3 limitations for older programs. This property was used by the move command algorithm that first creates a second filename and only then removes the first filename. Other filesystems, by design, provide only one filename per file, which guarantees that alteration of one filename's file does not alter the other filename's file. Length restrictions Some filesystems restrict the length of filenames. In some cases, these lengths apply to the entire file name, as in 44 characters in IBM z/OS.
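The slug-generation convention mentioned above can be sketched as follows; the function name slugify and its exact rules are illustrative assumptions, restricted here to the POSIX portable filename character set (A–Z, a–z, 0–9, period, underscore, and hyphen).

```python
import re
import unicodedata

# Illustrative sketch (not from the article): derive a "slug" filename from a
# document title, keeping only characters from the POSIX portable filename
# character set. Decomposing accented letters and dropping the combining marks
# is a common but lossy convention, especially for non-Latin scripts.
def slugify(title: str, extension: str = "txt", max_len: int = 255) -> str:
    ascii_title = (unicodedata.normalize("NFKD", title)
                   .encode("ascii", "ignore").decode("ascii"))
    slug = re.sub(r"[^A-Za-z0-9._-]+", "-", ascii_title).strip("-._")
    slug = slug or "untitled"
    return f"{slug}.{extension}"[:max_len]

print(slugify("Rapport d'activité 2024: résumé & conclusions"))
# -> "Rapport-d-activite-2024-resume-conclusions.txt"
```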
In other cases, the length limits may apply to particular portions of the filename, such as the name of a file in a directory, or a directory name. Examples of such limits include 9 (e.g., 8-bit FAT in Standalone Disk BASIC), 11 (e.g. FAT12, FAT16, FAT32 in DOS), 14 (e.g. early Unix), 21 (Human68K), 31, 30 (e.g. Apple DOS 3.2 and 3.3), 15 (e.g. Apple ProDOS), 44 (e.g. IBM S/370), or 255 (e.g. early Berkeley Unix) characters or bytes. Length limits often result from assigning fixed space in a filesystem to storing components of names, so increasing limits often requires an incompatible change, as well as reserving more space. A particular issue with filesystems that store information in nested directories is that it may be possible to create a file with a complete pathname that exceeds implementation limits, since length checking may apply only to individual parts of the name rather than the entire name. Many Windows applications are limited to a MAX_PATH value of 260, but Windows file names can easily exceed this limit. From Windows 10, version 1607, MAX_PATH limitations have been removed. Filename extensions Filenames in some file systems, such as FAT and the ODS-1 and ODS-2 levels of Files-11, are composed of two parts: a base name or stem and an extension or suffix used by some applications to indicate the file type. Some other file systems, such as Unix file systems, VFAT, and NTFS, treat a filename as a single string; a convention often used on those file systems is to treat the characters following the last period in the filename, in a filename containing periods, as the extension part of the filename. Multiple output files created by an application may use the same basename and various extensions. For example, a Fortran compiler might use the extension FOR for the source input file, OBJ for the object output and LST for the listing. Although there are some common extensions, they are arbitrary and a different application might use REL and RPT. Extensions have been restricted, at least historically on some systems, to a length of 3 characters, but in general can have any length, e.g., html. Encoding interoperability There is no general encoding standard for filenames. File names have to be exchanged between software environments for network file transfer, file system storage, backup and file synchronization software, configuration management, data compression and archiving, etc. It is thus very important not to lose file name information between applications. This led to wide adoption of Unicode as a standard for encoding file names, although legacy software might not be Unicode-aware. Traditionally, a filename could contain any character as long as it was file-system safe. Although this permitted the use of any encoding, and thus allowed the representation of any local text on any local system, it caused many interoperability issues. A filename could be stored using different byte strings in distinct systems within a single country, such as if one used Japanese Shift JIS encoding and another Japanese EUC encoding. Conversion was not possible as most systems did not expose a description of the encoding used for a filename as part of the extended file information. This forced costly filename encoding guessing with each file access. A solution was to adopt Unicode as the encoding for filenames. In the classic Mac OS, however, encoding of the filename was stored with the filename attributes. The Unicode standard solves the encoding determination issue.
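The Shift JIS versus EUC problem described above is easy to demonstrate. The sketch below is illustrative (the example filename is arbitrary) and shows that the same visible name maps to different byte strings under each legacy encoding, and to yet another under UTF-8.

```python
# Illustrative sketch (not from the article): the same Japanese filename text is
# represented by different byte strings under the legacy encodings mentioned
# above (Shift JIS vs. EUC-JP), so two systems could store incompatible byte
# sequences for what users see as the same name; UTF-8 differs from both.
name = "書類.txt"
for codec in ("shift_jis", "euc_jp", "utf-8"):
    print(codec, name.encode(codec))
```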
Nonetheless, some limited interoperability issues remain, such as normalization (equivalence), or the Unicode version in use. For instance, UDF is limited to Unicode 2.0; macOS's HFS+ file system applies NFD Unicode normalization and is optionally case-sensitive (case-insensitive by default). Filename maximum length is not standard and might depend on the code unit size. Although it is a serious issue, in most cases this is a limited one. On Linux, this means the filename is not enough to open a file: additionally, the exact byte representation of the filename on the storage device is needed. This can be solved at the application level, with some tricky normalization calls. The issue of Unicode equivalence is known as "normalized-name collision". A solution is the Non-normalizing Unicode Composition Awareness used in the Subversion and Apache technical communities. This solution does not normalize paths in the repository. Paths are only normalized for the purpose of comparisons. Nonetheless, some communities have patented this strategy, forbidding its use by other communities. To limit interoperability issues, Sun described several ideas for filename handling; those considerations, however, create a limitation that does not allow switching to a future encoding different from UTF-8. One issue was migration to Unicode. For this purpose, several software companies provided software for migrating filenames to the new Unicode encoding. Mac OS X 10.3 marked Apple's adoption of Unicode 3.2 character decomposition, superseding the Unicode 2.1 decomposition used previously. This change caused problems for developers writing software for Mac OS X. Uniqueness Within a single directory, filenames must be unique. Since the filename syntax also applies for directories, it is not possible to create a file and a directory entry with the same name in a single directory. Multiple files in different directories may have the same name. The approach to uniqueness may differ both in case sensitivity and in the Unicode normalization form used, such as NFC or NFD. This means two separate files might be created with the same text filename and a different byte implementation of the filename, such as L"\x00C0.txt" (UTF-16, NFC) (Latin capital A with grave) and L"\x0041\x0300.txt" (UTF-16, NFD) (Latin capital A, grave combining), as illustrated in the sketch after this passage. Letter case preservation Some filesystems, such as FAT prior to the introduction of VFAT, store filenames as upper-case regardless of the letter case used to create them. For example, a file created with the name "MyName.Txt" or "myname.txt" would be stored with the filename "MYNAME.TXT" (VFAT preserves the letter case). Any variation of upper and lower case can be used to refer to the same file. These kinds of file systems are called case-insensitive and are not case-preserving. Some filesystems prohibit the use of lower case letters in filenames altogether. Some file systems store filenames in the form that they were originally created; these are referred to as case-retentive or case-preserving. Such a file system can be case-sensitive or case-insensitive. If case-sensitive, then "MyName.Txt" and "myname.txt" may refer to two different files in the same directory, and each file must be referenced by the exact capitalization by which it is named.
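The normalized-name collision mentioned above can be reproduced directly with Python's unicodedata module; this is an illustrative sketch, not code from any file system.

```python
import unicodedata

# Illustrative sketch (not from the article): the same visible filename
# "À.txt" can be encoded either precomposed (NFC) or decomposed (NFD),
# yielding different code points and byte sequences, which is the source
# of the "normalized-name collision" issue described above.
nfc = "\u00C0.txt"          # LATIN CAPITAL LETTER A WITH GRAVE, precomposed
nfd = "\u0041\u0300.txt"    # LATIN CAPITAL LETTER A + COMBINING GRAVE ACCENT

print(nfc == nfd)                                   # False: different code points
print(nfc.encode("utf-8"), nfd.encode("utf-8"))     # b'\xc3\x80.txt' vs b'A\xcc\x80.txt'
print(unicodedata.normalize("NFC", nfd) == nfc)     # True: equivalent after normalization
```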
On a case-insensitive, case-preserving file system, on the other hand, only one of "MyName.Txt", "myname.txt" and "Myname.TXT" can be the name of a file in a given directory at a given time, and a file with one of these names can be referenced by any capitalization of the name. From their inception, the file systems on Unix and its derivative systems have been case-sensitive and case-preserving. However, not all file systems on those systems are case-sensitive; by default, HFS+ and APFS in macOS are case-insensitive but case-preserving, and SMB servers usually provide case-insensitive behavior (even when the underlying file system is case-sensitive, e.g. Samba on most Unix-like systems), and SMB client file systems provide case-insensitive behavior. File system case sensitivity is a considerable challenge for software such as Samba and Wine, which must interoperate efficiently with both systems that treat uppercase and lowercase files as different and with systems that treat them the same. Reserved characters and words File systems have not always provided the same character set for composing a filename. Before Unicode became a de facto standard, file systems mostly used a locale-dependent character set. By contrast, some new systems permit a filename to be composed of almost any character of the Unicode repertoire, and even some non-Unicode byte sequences. Limitations may be imposed by the file system, operating system, application, or requirements for interoperability with other systems. Many file system utilities prohibit control characters from appearing in filenames. In Unix-like file systems, the null character and the path separator / are prohibited. File system utilities and naming conventions on various systems prohibit particular characters from appearing in filenames or make them problematic; for example, the characters " and < cannot be used in Windows filenames. Note 1: While they are allowed in Unix file and directory names, most Unix shells require specific characters such as spaces, <, >, |, \, and sometimes :, (, ), &, ;, #, as well as wildcards such as ? and *, to be quoted or escaped: five\ and\ six\<seven (an example of escaping); 'five and six<seven' or "five and six<seven" (examples of quoting). The character å (U+00E5) was not allowed as the first letter in a filename under 86-DOS and MS-DOS/PC DOS 1.x-2.x, but can be used in later versions. In Windows utilities, the space and the period are not allowed as the final character of a filename. The period is allowed as the first character, but some Windows applications, such as Windows Explorer, forbid creating or renaming such files (despite this convention being used in Unix-like systems to describe hidden files and directories). Workarounds include appending a dot when renaming the file (which is then automatically removed afterwards), using alternative file managers, creating the file using the command line, or saving a file with the desired filename from within an application. Some file systems on a given operating system (especially file systems originally implemented on other operating systems), and particular applications on that operating system, may apply further restrictions and interpretations. See comparison of file systems for more details on restrictions. In Unix-like systems, DOS, and Windows, the filenames "." and ".." have special meanings (current and parent directory respectively). Windows 95/98/ME also uses names like "...", "...."
and so on to denote grandparent or great-grandparent directories. All Windows versions forbid creation of filenames that consist of only dots, although names consisting of three dots ("...") or more are legal in Unix. In addition, in Windows and DOS utilities, some words are also reserved and cannot be used as filenames; for example, the names of DOS device files are reserved. Systems that have these restrictions cause incompatibilities with some other filesystems. For example, Windows will fail to handle, or raise error reports for, these legal UNIX filenames: aux.c, q"uote"s.txt, or NUL.txt (see the sketch after this passage). NTFS also reserves certain filenames for internal use. Comparison of filename limitations The following table describes common attributes of filenames as implemented on various notable file systems; its notes record that some systems forbid device names including $IDLE$, AUX, COM1...COM4, CON, CONFIG$, CLOCK$, KEYBD$, LPT1...LPT4, LST, NUL, PRN, and SCREEN$, depending on AVAILDEV status either everywhere or only in the virtual \DEV\ directory; that others forbid the MS-DOS device names;[a] and that the Win32 API strips trailing dots, and leading and trailing spaces, except for a UNC path.[b] See also Notes References External links |
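A simple check against the Windows restrictions described above might look like the following sketch; the reserved-name list is partial and the helper name is an illustrative assumption, not an actual Windows API.

```python
import re

# Illustrative sketch (not from the article): flag Windows filenames that use
# reserved DOS device names, end in a space or period, or contain characters
# commonly reserved on Windows. The reserved-name list below is partial.
RESERVED = {"CON", "PRN", "AUX", "NUL",
            *(f"COM{i}" for i in range(1, 5)),
            *(f"LPT{i}" for i in range(1, 5))}

def is_problematic_windows_name(filename: str) -> bool:
    if filename.endswith((" ", ".")):       # trailing space or period not allowed
        return True
    stem = filename.split(".")[0]           # device names are reserved even with an extension
    if stem.upper() in RESERVED:
        return True
    return bool(re.search(r'[<>:"/\\|?*]', filename))  # characters commonly reserved on Windows

for name in ["report.txt", "NUL.txt", "aux.c", "notes.", "a<b.txt"]:
    print(name, "->", is_problematic_windows_name(name))
```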
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Best_current_practice] | [TOKENS: 260] |
Contents Best current practice A best current practice, abbreviated as BCP, is a de facto level of performance in engineering and information technology. It is more flexible than a standard, since techniques and tools are continually evolving. The Internet Engineering Task Force publishes Best Current Practice documents in a numbered document series. Each document in this series is paired with the currently valid Request for Comments (RFC) document. BCP was introduced in RFC 1818. BCPs document guidelines, processes, methods, and other matters not suitable for standardization. The Internet standards process itself is defined in a series of BCPs, as is the formal organizational structure of the IETF, Internet Engineering Steering Group, Internet Architecture Board, and other groups involved in that process. The IETF's separate Standards Track (STD) document series defines the fully standardized network protocols of the Internet, such as the Internet Protocol, the Transmission Control Protocol, and the Domain Name System. Each RFC number refers to a specific version of a document, but the BCP number refers to the most recent revision of the document. Thus, citations often reference both the BCP number and the RFC number. Example citations for BCPs are: BCP 38, RFC 2827. Significant fields of application See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/IBM_Q_System_One] | [TOKENS: 294] |
Contents IBM Q System One IBM Quantum System One is the first circuit-based commercial quantum computer, introduced by IBM in January 2019. This integrated quantum computing system is housed in an airtight borosilicate glass cube that maintains a controlled physical environment. Each face of the cube is 9 feet (2.7 m) wide and tall. A cylindrical protrusion from the center of the ceiling is a dilution refrigerator, containing a 20-qubit transmon quantum processor. It was tested for the first time in the summer of 2018, for two weeks, in Milan, Italy. IBM Quantum System One was developed by IBM Research, with assistance from the Map Project Office and Universal Design Studio. CERN, ExxonMobil, Fermilab, Argonne National Laboratory, and Lawrence Berkeley National Laboratory are among the clients signed up to access the system remotely. From April 6 to May 31, 2019, the Boston Museum of Science hosted an exhibit featuring a replica of the IBM Quantum System One. On June 15, 2021, IBM deployed the first unit of Quantum System One in Germany at its headquarters in Ehningen. On April 5, 2024, IBM unveiled a Quantum System One at the Rensselaer Polytechnic Institute, the first IBM quantum system on a university campus. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/V1647_Orionis] | [TOKENS: 1752] |
Contents V1647 Orionis V1647 Orionis (V1647 Ori) is a young stellar object visible in the constellation Orion, located about 1,470 light-years from the Solar System. It is situated in the reflection nebula M78 and is associated with McNeil's Nebula. The object is known to have experienced intense eruptive phenomena on several occasions (the last of which occurred in 2008), the characteristics of which have led to the object being considered a middle ground between two classes of pre-main-sequence star, FU Orionis (FUor) and EX Lupi (EXor). Characteristics Studies have revealed that V1647 Orionis is a young stellar object, presumably a pre-main-sequence star; the age of the object, based on evolutionary models and data obtained, is between 100,000 and half a million years. Like all forming stars, V1647 Orionis has a disk of gas and silicate dust in orbit around it, which mediates the star's accretion and is surrounded by a gas envelope that replenishes the disk with material. The accretion proceeds at a rate averaging between ~10⁻⁶ and 3×10⁻⁷ solar masses (M☉) per year. It is also a source of infrared radiation, cataloged as IRAS 05436-0007. Observations in 2018 with the ALMA radio interferometer allowed astronomers to estimate the total mass of the circumstellar disk to be about 0.1 M☉, consisting largely of gas and about 1 percent of dust (~430 M⊕), while its distance to the protostar is about 40 AU. Spectroscopic and infrared analyses have made it possible to measure some of the object's physical parameters to a certain approximation. The object appears so far to have accumulated about 0.8±0.2 M☉ of matter, but it possesses a rather large radius, about three times that of the Sun; this results in a density that is still insufficient for the fusion reactions of hydrogen into helium to begin. The large radiating surface area causes the object to have a higher luminosity than the Sun's, averaging about nine times the solar value. The object's spectrum also shows carbon monoxide (CO) absorption lines, typical of young protostars, with evidence of metals such as sodium and calcium. The CO emission probably originates from the gases in the innermost portion of the disk, heated to 2,500 K, and is perceptible due to a dust clearance area, that is, an area where the dust is more rarefied and therefore does not absorb radiation. Eruptive phenomena V1647 Orionis is characterized by great variability, manifested by strong eruptions that greatly increase its brightness. The first recorded eruption of the object occurred in 1966-1967, identified by Gianluca Masi on archival images by Evered Kreimer, and was studied by analysis of photographic plates obtained from the Asiago and Harvard observatories; the precise duration of the event is not known but is estimated at between 5 and 20 months. Towards the end of 2003, the object manifested a sudden increase in its luminosity, a sign that a second, intense eruption had occurred; the event was studied for two years, corresponding to the period in which it maintained an above-normal luminosity; in October 2005 its luminosity began to decrease, until, in February 2006, it returned to its pre-burst levels. During the eruption, the object reached an effective luminosity of 44 L☉. A new burst was recorded in mid-2008, and had very similar characteristics to those of the eruption that began four years earlier. The eruption of V1647 Orionis is most likely associated with a sudden mass discharge toward the photosphere of the young star from the hot circumstellar disk.
The sudden increase in brightness recorded would be due to a significant increase in the accretion rate (with peaks of 5×10⁻⁶ M☉/year), probably caused by an instability event in the disk; this increase results in the emission of an energetic wind that thins the surrounding dust, making visible an object that is normally occulted by the dust that fuels its growth. These eruptions are believed to occur at characteristic intervals, whenever a significant portion of what will be the star's final mass has been accreted. These dynamics are characteristic of both FU Orionis objects and EX Lupi stars; for these reasons, the classification of V1647 Ori into one or the other class is a matter of debate. While FUor outbursts are characterized by drastic increases in luminosity (greater than 5 magnitudes in the visible) and can last for several decades, EXor outbursts appear fainter and last for less time, a few years at most; they also seem to recur over time. The outbursts of V1647 Orionis are as short-lived and recurrent as those of EXors, while the increase in luminosity reaches values comparable to those of FUors, and the spectral energy distribution (SED) of the object traces that of the FUors; its optical absorption spectrum is also distinguishable from those of both FUors and EXors. Also in light of the accretion rate values, which are intermediate between those of these two types of pre-main-sequence stars, it has been suggested that V1647 Ori constitutes a middle ground between the two classes. The SED itself, coupled with the frequency of eruptive phenomena, also shows that V1647 Orionis is a class I object, which is in the transition phase from an opaque to an optically transparent disk. During the eruptive period, NASA's Chandra X-ray Observatory detected intense X-ray emission from the young stellar object, reflecting the degree of reorganization that the magnetic field lines of the object and its disk undergo before and during accretion rate increases. From 2008 to 2018, the brightness of the object gradually decreased as it did between 2006 and 2008, reaching a minimum of magnitude 20 in the R band in early 2018. Associated nebulosity The object is located on the northwestern edge of M78 (also known as NGC 2068), a reflection nebula well known because of its brilliance; it emits a bluish color characteristic of this kind of object, as the light source is a blue-colored star. The eruption of the star that began towards the end of 2003 illuminated some of the gas in the cloud, which was named McNeil's Nebula after its discoverer. The star also appears to be associated with the Herbig-Haro object HH 23, of which it is the probable source. In addition to V1647 Ori, 44 other young stars with strong Hα emission have been identified in the cloud, as well as several protostars and a candidate class 0 protostar cataloged as LBS 17-H. Just southwest of M78, three other interconnected Herbig-Haro objects are observed, cataloged as HH 24, HH 25, and HH 26; this section of the cloud has a complex morphology due to the intense star formation phenomena taking place there. As a consequence, the region is rich in young stellar objects and intense sources of infrared radiation.
Galactic environment V1647 Orionis, along with its associated nebulosities, is located within the Orion B region (LDN 1630); at a distance of about 410 parsecs (1,340 light-years), it also lies physically very close to the Orion A star-forming region, of which the Orion Nebula is a part. Orion B includes the fainter nebulae NGC 2024 (also known as the Flame Nebula), NGC 2023, NGC 2071, and the aforementioned M78. The first two are located in the southwestern sector of the region and show high activity of star formation phenomena. All are located within the Orion Molecular Cloud Complex, a vast complex of giant molecular clouds that lies between 1,500 and 1,600 light-years from Earth and is hundreds of light-years wide. It is also one of the most active star-forming regions that can be observed in the night sky, as well as one of the richest in protoplanetary disks and very young stars. The complex is most revealing in images taken at infrared wavelengths, where the most hidden star formation processes are detected. The complex counts dark nebulae, emission nebulae, and H II regions among its components. See also References |
======================================== |