[SOURCE: https://en.wikipedia.org/wiki/Shane_Legg] | [TOKENS: 652] |
Shane Legg CBE (born 1973 or 1974) is a machine learning researcher and entrepreneur. With Demis Hassabis and Mustafa Suleyman, he cofounded DeepMind Technologies (later bought by Google and now called Google DeepMind), and works there as the chief AGI scientist. He is also known for his academic work on artificial general intelligence, including his thesis supervised by Marcus Hutter. Early life and education Legg attended Rotorua Lakes High School in Rotorua, on New Zealand's North Island. He completed his undergraduate studies at Waikato University in 1996. Also in 1996, he obtained his MSc degree at the University of Auckland with a thesis entitled "Solomonoff Induction", written under Cristian S. Calude. Research interests In the early 2000s, Legg, together with Ben Goertzel, re-introduced and popularized the term "artificial general intelligence" (AGI) to describe an AI that can do practically any cognitive task a human can do. At that time, talking about AGI "would put you on the lunatic fringe". Legg is known for his concern about existential risk from AI, which he highlighted in a 2011 interview on LessWrong; in 2023 he signed the statement on the risk of extinction from AI. Career Before his PhD and before cofounding DeepMind, Legg worked in "a number of software development positions at private companies", including the "big data firm Adaptive Intelligence" and the startup WebMind, founded by Ben Goertzel. Legg later obtained a PhD at the Dalle Molle Institute for Artificial Intelligence Research (IDSIA), a joint research institute of USI Università della Svizzera italiana and SUPSI. He worked on theoretical models of superintelligent machines (AIXI) with Marcus Hutter, and completed his doctoral thesis, entitled "Machine Super Intelligence", in 2008. He then went on to complete a postdoctoral fellowship in finance at USI, and began a further fellowship at University College London's Gatsby Computational Neuroscience Unit. Demis Hassabis and Shane Legg first met in 2009 at University College London, where Legg was a postdoctoral researcher. In 2010, Legg cofounded the start-up DeepMind Technologies along with Demis Hassabis and Mustafa Suleyman. DeepMind Technologies was bought by Google in 2014. After the merger with Google Brain in 2023, the company is now known as Google DeepMind. According to a 2017 article, a significant part of his job as chief scientist was to supervise recruitment, to decide where DeepMind should focus its efforts, and to lead DeepMind's AI safety work. As of July 2023, Legg works at Google DeepMind as the Chief AGI Scientist. Awards and honors In 2008, Legg was awarded the Singularity Institute for Artificial Intelligence's $10,000 prize for his PhD thesis. Legg was appointed Commander of the Order of the British Empire (CBE) in the 2019 Birthday Honours for services to the science and technology sector and to investment. |
======================================== |
[SOURCE: https://www.wired.com/tag/wearables/] | [TOKENS: 400] |
Wearables
Gear: The Best Smart Rings for Tracking Everything
Gear: Which Apple Watch Is Best Right Now?
The Even Realities G2 Are Impressive Smart Glasses, but the Software Needs Polish
Gear: For $4,550, Would You Buy a Single Premium Watch or a Swarm of Affordable Ones?
Gear: The Latest Apple Watch Is $100 Off
Motorola's Moto Watch Is Powered by Polar's Fitness Platform
Gear: A Continuous Glucose Monitor Might Actually Help You Lose Weight
Gear: Building a Watch Collection on a Budget? Here's Where to Start
Gear: Google's Smart Glasses Will Have the Best Software. But They'll Have to Win on Style Too
Gear: We Strapped on Exoskeletons and Raced. There's One Clear Winner
Gear: Save $50 on the OnePlus Watch 3, Which Can Run for 16 Days on a Charge
Gear: Our Favorite Smartwatches Do Much More Than Just Tell Time
Gear: What Is VO2 Max and Why Is It Important for Longevity?
Gear: The Samsung Galaxy Watch Is Discounted on Amazon
Gear: This Is the Blood Glucose Monitor We've Been Waiting For
Gear: What's Going on With Smart Rings?
Gear: The Best Fitness Trackers and Watches for Everyone
Science: Brain Gear Is the Hot New Wearable
Gear: Pebble Is Making a $75 Smart Ring
Gear: As Key Talent Abandons Apple, Meet the New Generation of Leaders Taking On the Old Guard
Gear: The Best Fitness Tracker for Gym Bros Is Holding a Sale
Deals: My Favorite Fitness Tracker for Pets Is on Sale for Black Friday
Gear: The Oura Ring 4 Is $100 Off for Black Friday
Gear: The Best Smart Glasses to Augment Your Reality |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/BBC_News#cite_note-22] | [TOKENS: 8810] |
BBC News is an operational business division of the British Broadcasting Corporation (BBC) responsible for the gathering and broadcasting of news and current affairs in the UK and around the world. The department is the world's largest broadcast news organisation and generates about 120 hours of radio and television output each day, as well as online news coverage. The service has over 5,500 journalists working across its output, including in 50 foreign news bureaus where more than 250 foreign correspondents are stationed. Deborah Turness has been the CEO of news and current affairs since September 2022. In 2019, an Ofcom report stated that the BBC spent £136m on news during the period April 2018 to March 2019. BBC News' domestic, global and online news divisions are housed within the largest live newsroom in Europe, in Broadcasting House in central London. Parliamentary coverage is produced and broadcast from studios in London. Through BBC English Regions, the BBC also has regional centres across England and national news centres in Northern Ireland, Scotland and Wales. All nations and English regions produce their own local news programmes and other current affairs and sport programmes. The BBC is a quasi-autonomous corporation authorised by royal charter, making it operationally independent of the government. As of 2024, the BBC reaches an average of 450 million people per week, with the BBC World Service accounting for 320 million people. History "This is London calling – 2LO calling. Here is the first general news bulletin, copyright by Reuters, Press Association, Exchange Telegraph and Central News." – BBC news programme opening during the 1920s. The British Broadcasting Company broadcast its first radio bulletin from radio station 2LO on 14 November 1922. Wishing to avoid competition, newspaper publishers persuaded the government to ban the BBC from broadcasting news before 7 pm, and to force it to use wire service copy instead of reporting on its own. The BBC gradually gained the right to edit the copy and, in 1934, created its own news operation. However, it could not broadcast news before 6 pm until World War II. In addition to news, Gaumont British and Movietone cinema newsreels had been broadcast on the TV service since 1936, with the BBC producing its own equivalent Television Newsreel programme from January 1948. A weekly Children's Newsreel was inaugurated on 23 April 1950, to around 350,000 receivers. The network began simulcasting its radio news on television in 1946, with a still picture of Big Ben. Televised bulletins began on 5 July 1954, broadcast from leased studios within Alexandra Palace in London. The public's interest in television and live events was stimulated by Elizabeth II's coronation in 1953. It is estimated that up to 27 million people viewed the programme in the UK, overtaking radio's audience of 12 million for the first time. Those live pictures were fed from 21 cameras in central London to Alexandra Palace for transmission, and then on to other UK transmitters opened in time for the event. That year, there were around two million TV licences held in the UK, rising to over three million the following year, and four and a half million by 1955. Television news, although physically separate from its radio counterpart, was still firmly under radio news' control in the 1950s.
Correspondents provided reports for both outlets, and the first televised bulletin, shown on 5 July 1954 on the then BBC television service and presented by Richard Baker, involved his providing narration off-screen while stills were shown. This was then followed by the customary Television Newsreel with a recorded commentary by John Snagge (and on other occasions by Andrew Timothy). On-screen newsreaders were introduced a year later, in 1955 – Kenneth Kendall (the first to appear in vision), Robert Dougall, and Richard Baker – three weeks before ITN's launch on 21 September 1955. Mainstream television production had started to move out of Alexandra Palace in 1950 to larger premises – mainly at Lime Grove Studios in Shepherd's Bush, west London – taking Current Affairs (then known as the Talks Department) with it. It was from here that the first Panorama, a new documentary programme, was transmitted on 11 November 1953, with Richard Dimbleby becoming anchor in 1955. In 1958, Hugh Carleton Greene became head of News and Current Affairs. On 1 January 1960, Greene became Director-General. Greene made changes aimed at making BBC reporting more similar to its competitor ITN, which had been highly rated by study groups held by Greene. A newsroom was created at Alexandra Palace, and television reporters were recruited and given the opportunity to write and voice their own scripts, without having to cover stories for radio too. On 20 June 1960, Nan Winton, the first female BBC network newsreader, appeared in vision. 19 September 1960 saw the start of the radio news and current affairs programme The Ten O'Clock News. BBC2 started transmission on 20 April 1964 and began broadcasting a new show, Newsroom. The World at One, a lunchtime news programme, began on 4 October 1965 on the then Home Service, and News Review had started on television the year before. News Review was a summary of the week's news, first broadcast on Sunday, 26 April 1964 on BBC2 and harking back to the weekly Newsreel Review of the Week, produced from 1951 to open programming on Sunday evenings – the difference being that this incarnation had subtitles for the deaf and hard-of-hearing. As this was the decade before electronic caption generation, each superimposition ("super") had to be produced on paper or card, synchronised manually to studio and news footage, committed to tape during the afternoon, and broadcast early evening. Thus Sundays were no longer a quiet day for news at Alexandra Palace. The programme ran until the 1980s – by then using electronic captions, known as Anchor – to be superseded by Ceefax subtitling (a similar Teletext format) and the signing of such programmes as See Hear (from 1981). On Sunday 17 September 1967, The World This Weekend, a weekly news and current affairs programme, launched on what was then the Home Service, but soon to be Radio 4. Preparations for colour began in the autumn of 1967, and on Thursday 7 March 1968 Newsroom on BBC2 moved to an early evening slot, becoming the first UK news programme to be transmitted in colour – from Studio A at Alexandra Palace. News Review and Westminster (the latter a weekly review of Parliamentary happenings) were "colourised" shortly after. However, much of the insert material was still in black and white, as initially only a part of the film coverage shot in and around London was on colour reversal film stock, and all regional and many international contributions were still in black and white.
Colour facilities at Alexandra Palace were technically very limited for the next eighteen months, as it had only one RCA colour Quadruplex videotape machine and, eventually, two Pye Plumbicon colour telecines – although the news colour service started with just one. Black and white national bulletins on BBC1 continued to originate from Studio B on weekdays, along with Town and Around, the London regional "opt out" programme broadcast throughout the 1960s (and the BBC's first regional news programme for the South East), until it started to be replaced by Nationwide on Tuesday to Thursday from Lime Grove Studios early in September 1969. Town and Around was never to make the move to Television Centre – instead it became London This Week, which aired on Mondays and Fridays only, from the new TVC studios. The BBC moved news production out of Alexandra Palace in 1969, and BBC Television News resumed operations the next day with a lunchtime bulletin on BBC1 – in black and white – from Television Centre, where it remained until March 2013. This move to a smaller studio with better technical facilities allowed Newsroom and News Review to replace back projection with colour-separation overlay. During the 1960s, satellite communication had become possible; however, it was some years before digital line-store conversion was able to undertake the process seamlessly. On 14 September 1970, the first Nine O'Clock News was broadcast on television. Robert Dougall presented the first week from studio N1 – described by The Guardian as "a sort of polystyrene padded cell" – the bulletin having been moved from the earlier time of 20.50 as a response to the ratings achieved by ITN's News at Ten, introduced three years earlier on the rival ITV. Richard Baker and Kenneth Kendall presented subsequent weeks, thus echoing those first television bulletins of the mid-1950s. Angela Rippon became the first female news presenter of the Nine O'Clock News in 1975. Her work outside the news was controversial at the time; she appeared on The Morecambe and Wise Christmas Show in 1976, singing and dancing. The first edition of John Craven's Newsround, initially intended only as a short series and later renamed just Newsround, came from studio N3 on 4 April 1972. Afternoon television news bulletins during the mid to late 1970s were broadcast from the BBC newsroom itself, rather than one of the three news studios. The newsreader would present to camera while sitting on the edge of a desk; behind him, staff would be seen working busily at their desks. This period corresponded with the Nine O'Clock News getting its next makeover, which used a CSO background of the newsroom from that very same camera each weekday evening. Also in the mid-1970s, the late night news on BBC2 was briefly renamed Newsnight, but this was not to last, nor was it the same programme as we know today – that would be launched in 1980 – and it soon reverted to being just a news summary, with the early evening BBC2 news expanded to become Newsday. News on radio was to change in the 1970s, and on Radio 4 in particular, brought about by the arrival of new editor Peter Woon from television news and the implementation of the Broadcasting in the Seventies report. These changes included the introduction of correspondents into news bulletins, where previously only a newsreader would present, as well as the inclusion of content gathered in the preparation process.
New programmes were also added to the daily schedule, PM and The World Tonight, as part of the plan for the station to become a "wholly speech network". Newsbeat launched as the news service on Radio 1 on 10 September 1973. On 23 September 1974, a teletext system was launched to bring news content to television screens using text only. Engineers had originally begun developing such a system to bring news to deaf viewers, but the system was expanded. The Ceefax service became much more diverse before it ceased on 23 October 2012: it not only had subtitling for all channels, it also gave information such as weather, flight times and film reviews. By the end of the decade, the practice of shooting on film for inserts in news broadcasts was declining, with the introduction of ENG technology into the UK. The equipment would gradually become less cumbersome – the BBC's first attempts, in the latter half of the decade, had used a Philips colour camera with a backpack base station and a separate portable Sony U-matic recorder. In 1980, the Iranian Embassy siege was shot electronically by the BBC Television News outside broadcast team, and the work of reporter Kate Adie, broadcasting live from Prince's Gate, was nominated for the BAFTA for actuality coverage, but was beaten by ITN for the 1980 award. Newsnight, the news and current affairs programme, was due to go on air on 23 January 1980, although trade union disagreements meant that its launch from Lime Grove was postponed by a week. On 27 August 1981, Moira Stuart became the first African-Caribbean female newsreader to appear on British television. By 1982, ENG technology had become sufficiently reliable for Bernard Hesketh to use an Ikegami camera to cover the Falklands War, coverage for which he won the Royal Television Society's "Cameraman of the Year" award and a BAFTA nomination – the first time that BBC News had relied upon an electronic camera, rather than film, in a conflict zone. BBC News won the BAFTA for its actuality coverage; however, the event is remembered in television terms for Brian Hanrahan's reporting, in which he coined the phrase "I'm not allowed to say how many planes joined the raid, but I counted them all out and I counted them all back" to circumvent restrictions, and which has been cited as an example of good reporting under pressure. The first BBC breakfast television programme, Breakfast Time, also launched during the 1980s, on 17 January 1983, from Lime Grove Studio E, two weeks before its ITV rival TV-am. Frank Bough, Selina Scott, and Nick Ross helped to wake viewers with a relaxed style of presenting. The Six O'Clock News first aired on 3 September 1984, eventually becoming the most watched news programme in the UK (though since 2006 it has been overtaken by the BBC News at Ten). In October 1984, images of millions of people starving to death in the Ethiopian famine were shown in Michael Buerk's Six O'Clock News reports. The BBC News crew were the first to document the famine, with Buerk's report on 23 October describing it as "a biblical famine in the 20th century" and "the closest thing to hell on Earth". The BBC News report shocked Britain, motivating its citizens to inundate relief agencies, such as Save the Children, with donations, and bringing global attention to the crisis in Ethiopia. The news report was also watched by Bob Geldof, who would organise the charity single "Do They Know It's Christmas?" to raise money for famine relief, followed by the Live Aid concert in July 1985.
Starting in 1981, the BBC gave a common theme to its main news bulletins with new electronic titles – a set of computer-animated "stripes" forming a circle on a red background, with a "BBC News" typescript appearing below the circle graphics, and a theme tune consisting of brass and keyboards. The Nine used a similar (striped) number 9. The red background was replaced by a blue one from 1985 until 1987. By 1987, the BBC had decided to re-brand its bulletins and established individual styles again for each one, with differing titles and music; the weekend and holiday bulletins were branded in a similar style to the Nine, although the "stripes" introduction continued to be used until 1989 on occasions where a news bulletin was screened outside the running order of the schedule. In 1987, John Birt resurrected the practice of correspondents working for both TV and radio with the introduction of bi-media journalism. During the 1990s, a wider range of services began to be offered by BBC News, with the split of BBC World Service Television into BBC World (news and current affairs) and BBC Prime (light entertainment). Content for a 24-hour news channel was thus required, followed in 1997 with the launch of the domestic equivalent, BBC News 24. Rather than set bulletins, ongoing reports and coverage were needed to keep both channels functioning, which meant a greater emphasis in budgeting for both was necessary. In 1998, after 66 years at Broadcasting House, the BBC Radio News operation moved to BBC Television Centre. New technology, provided by Silicon Graphics, came into use in 1993 for a re-launch of the main BBC1 bulletins, creating a virtual set which appeared to be much larger than it was physically. The relaunch also brought all bulletins into the same style of set, with only small changes in colouring, titles, and music to differentiate each. A computer-generated cut-glass sculpture of the BBC coat of arms was the centrepiece of the programme titles until the large-scale corporate rebranding of news services in 1999. In November 1997, BBC News Online was launched, following individual webpages for major news events such as the 1996 Olympic Games, the 1997 general election, and the death of Princess Diana. In 1999, the biggest relaunch occurred, with BBC One bulletins, BBC World, BBC News 24, and BBC News Online all adopting a common style. One of the most significant changes was the gradual adoption of the corporate image by the BBC regional news programmes, giving a common style across local, national and international BBC television news. This also included Newyddion, the main news programme of the Welsh-language channel S4C, produced by BBC News Wales. Following the relaunch of BBC News in 1999, regional headlines were included at the start of the BBC One news bulletins in 2000. The English regions did, however, lose five minutes at the end of their bulletins, due to a new headline round-up at 18:55. 2000 also saw the Nine O'Clock News moved to the later time of 22:00. This was in response to ITN, which had just moved its popular News at Ten programme to 23:00. ITN briefly returned News at Ten, but following poor ratings when head-to-head against the BBC's Ten O'Clock News, the ITN bulletin was moved to 22:30, where it remained until 14 January 2008. The retirement of Peter Sissons and the departure of Michael Buerk from the Ten O'Clock News led to changes in the BBC One bulletin presenting team on 20 January 2003.
The Six O'Clock News became double-headed, with George Alagiah and Sophie Raworth, after Huw Edwards and Fiona Bruce moved to present the Ten. A new set design featuring a projected fictional newsroom backdrop was introduced, followed on 16 February 2004 by new programme titles to match those of BBC News 24. BBC News 24 and BBC World introduced a new style of presentation in December 2003, which was slightly altered on 5 July 2004 to mark 50 years of BBC Television News. On 7 March 2005, Director-General Mark Thompson launched the "Creative Futures" project to restructure the organisation. The individual positions of editor of the One and Six O'Clock News were replaced by a new daytime position in November 2005. Kevin Bakhurst became the first Controller of BBC News 24, replacing the position of editor. Amanda Farnsworth became daytime editor, while Craig Oliver was later named editor of the Ten O'Clock News. Bulletins received new titles and a new set design in May 2006, to allow Breakfast to move into the main studio for the first time since 1997. The new set featured Barco videowall screens with a background of the London skyline used for main bulletins, and originally an image of cirrus clouds against a blue sky for Breakfast; the latter was replaced following viewer criticism. The studio bore similarities to the ITN-produced ITV News set of 2004, though ITN uses a CSO virtual studio rather than the actual screens at BBC News. BBC News became part of a new BBC Journalism group in November 2006 as part of a restructuring of the BBC. The then-Director of BBC News, Helen Boaden, reported to the then-Deputy Director-General and head of the journalism group, Mark Byford, until he was made redundant in 2010. On 18 October 2007, Mark Thompson announced a six-year plan, "Delivering Creative Futures" (based on his project begun in March 2005), merging the television current affairs department into a new "News Programmes" division. Thompson's announcement, in response to a £2 billion shortfall in funding, would, he said, deliver "a smaller but fitter BBC" in the digital age, by cutting its payroll and, in 2013, selling Television Centre. The various separate newsrooms for television, radio and online operations were merged into a single multimedia newsroom. Programme making within the newsrooms was brought together to form a multimedia programme-making department. BBC World Service director Peter Horrocks said that the changes would achieve efficiency at a time of cost-cutting at the BBC. In his blog, he wrote that using the same resources across the various broadcast media meant that either fewer stories could be covered, or, by following more stories, there would be fewer ways to broadcast them. A new graphics and video playout system was introduced for the production of television bulletins in January 2007. This coincided with a new structure for BBC World News bulletins, editors favouring a section devoted to analysing the news stories reported on. The first new BBC News bulletin since the Six O'Clock News was announced in July 2007, following a successful trial in the Midlands. The summary, lasting 90 seconds, has been broadcast at 20:00 on weekdays since December 2007 and bears similarities to 60 Seconds on BBC Three, but also includes headlines from the various BBC regions and a weather summary.
As part of a long-term cost-cutting programme, the bulletins were renamed the BBC News at One, Six and Ten respectively in April 2008, while BBC News 24 was renamed BBC News and moved into the same studio as the bulletins at BBC Television Centre. BBC World was renamed BBC World News, and regional news programmes were also updated with the new presentation style, designed by Lambie-Nairn. 2008 also saw tri-media introduced across TV, radio, and online. The studio moves also meant that Studio N9, previously used for BBC World, was closed, and operations moved to the previous studio of BBC News 24. Studio N9 was later refitted to match the new branding, and was used for the BBC's UK local elections and European elections coverage in early June 2009. A strategy review of the BBC in March 2010 confirmed that having "the best journalism in the world" would form one of five key editorial policies, as part of changes subject to public consultation and BBC Trust approval. After a period of suspension in late 2012, Helen Boaden ceased to be the Director of BBC News. On 16 April 2013, incoming BBC Director-General Tony Hall named James Harding, a former editor of The Times, as Director of News and Current Affairs. From August 2012 to March 2013, all news operations moved from Television Centre to new facilities in the refurbished and extended Broadcasting House, in Portland Place. The move began in October 2012, and also included the BBC World Service, which moved from Bush House following the expiry of the BBC's lease. This new extension to the north and east, referred to as "New Broadcasting House", includes several new state-of-the-art radio and television studios centred around an 11-storey atrium. The move began with the domestic programme The Andrew Marr Show on 2 September 2012, and concluded with the move of the BBC News channel and domestic news bulletins on 18 March 2013. The newsroom houses all domestic bulletins and programmes on both television and radio, as well as the BBC World Service international radio networks and the BBC World News international television channel. BBC News and CBS News established an editorial and newsgathering partnership in 2017, replacing an earlier long-standing partnership between BBC News and ABC News. In an October 2018 Simmons Research survey of 38 news organisations, BBC News was ranked the fourth most trusted news organisation by Americans, behind CBS News, ABC News and The Wall Street Journal. In January 2020, the BBC announced a BBC News savings target of £80 million per year by 2022, involving about 450 staff reductions from its then 6,000 staff. BBC director of news and current affairs Fran Unsworth said there would be further moves toward digital broadcasting, in part to attract back a youth audience, and more pooling of reporters to stop separate teams covering the same news. A further 70 staff reductions were announced in July 2020. BBC Three began airing the news programme The Catch Up in February 2022. It is presented by Levi Jouavel, Kirsty Grant, and Callum Tulley, and aims to help the channel's target audience (16- to 34-year-olds) make sense of the world around them while also highlighting optimistic stories. Compared to its predecessor 60 Seconds, The Catch Up is three times longer, running for about three minutes, and does not air at weekends. According to its annual report, as of December 2021 India has the largest number of people using BBC services in the world.
In May 2025, following the earthquake that struck Myanmar and Thailand, a television news bulletin from the Burmese service (BBC News Myanmar) began broadcasting on a vacated Voice of America satellite frequency. Programming and reporting In November 2023, BBC News joined with the International Consortium of Investigative Journalists, Paper Trail Media and 69 media partners, including Distributed Denial of Secrets and the Organised Crime and Corruption Reporting Project (OCCRP), and more than 270 journalists in 55 countries and territories to produce the 'Cyprus Confidential' report on the financial network which supports the regime of Vladimir Putin, mostly with connections to Cyprus; it showed Cyprus to have strong links with high-ranking figures in the Kremlin, some of whom have been sanctioned. Government officials, including Cyprus president Nikos Christodoulides, and European lawmakers began responding to the investigation's findings in less than 24 hours, calling for reforms and launching probes. BBC News is responsible for the news programmes and documentary content on the BBC's general television channels, as well as the news coverage on the BBC News Channel in the UK, and 22 hours of programming for the corporation's international BBC World News channel. Coverage for BBC Parliament is carried out on behalf of the BBC at Millbank Studios, though BBC News provides editorial and journalistic content. BBC News content is also output onto the BBC's digital interactive television services under the BBC Red Button brand and, until 2012, on the Ceefax teletext system. The music on all BBC television news programmes was introduced in 1999 and composed by David Lowe. It was part of the re-branding which commenced in 1999 and features the 'BBC pips'. The general theme was used on bulletins on BBC One, News 24, BBC World and local news programmes in the BBC's Nations and Regions. Lowe was also responsible for the music on Radio 1's Newsbeat. The theme has had several changes since 1999, the latest in March 2013. The BBC Arabic Television news channel launched on 11 March 2008, and a Persian-language channel followed on 14 January 2009, broadcasting from the Peel wing of Broadcasting House; both include news, analysis, interviews, sport and cultural programmes, and are run by the BBC World Service and funded from a grant-in-aid from the British Foreign Office (and not the television licence). The BBC Verify service was launched in 2023 to fact-check news stories, followed by BBC Verify Live in 2025. BBC Radio News produces bulletins for the BBC's national radio stations and provides content for local BBC radio stations via the General News Service (GNS), a BBC-internal news distribution service. BBC News does not produce the BBC's regional news bulletins, which are produced individually by the BBC nations and regions themselves. The BBC World Service broadcasts to some 150 million people in English and in 27 other languages across the globe. BBC Radio News is a patron of the Radio Academy. BBC News Online is the BBC's news website. Launched in November 1997, it is one of the most popular news websites, with 1.2 billion website visits in April 2021, and is used by 60% of the UK's internet users for news. The website contains international news coverage as well as entertainment, sport, science, and political news. Mobile apps for Android, iOS and Windows Phone systems have been provided since 2010.
Many television and radio programmes are also available to view on the BBC iPlayer and BBC Sounds services. The BBC News channel is also available to view 24 hours a day, while video and radio clips are also available within online news articles. In October 2019, BBC News Online launched a mirror on the dark web anonymity network Tor in an effort to circumvent censorship. Criticism The BBC is required by its charter to be free from both political and commercial influence and answers only to its viewers and listeners. This political objectivity is sometimes questioned. For instance, The Daily Telegraph (3 August 2005) carried a letter from the KGB defector Oleg Gordievsky referring to it as "The Red Service". Books have been written on the subject, including anti-BBC works like Truth Betrayed by W. J. West and The Truth Twisters by Richard Deacon. The BBC has been accused of bias by Conservative MPs. The BBC's Editorial Guidelines on Politics and Public Policy state that while "the voices and opinions of opposition parties must be routinely aired and challenged", "the government of the day will often be the primary source of news". The BBC is regularly accused by the government of the day of bias in favour of the opposition and, by the opposition, of bias in favour of the government. Similarly, during times of war, the BBC is often accused by the UK government, or by strong supporters of British military campaigns, of being overly sympathetic to the view of the enemy. An edition of Newsnight at the start of the Falklands War in 1982 was described as "almost treasonable" by John Page, MP, who objected to Peter Snow saying "if we believe the British". During the first Gulf War, critics of the BBC took to using the satirical name "Baghdad Broadcasting Corporation". During the Kosovo War, the BBC was labelled the "Belgrade Broadcasting Corporation" (suggesting favouritism towards the FR Yugoslavia government over the ethnic Albanian rebels) by British ministers, although Slobodan Milošević (then FRY president) claimed that the BBC's coverage had been biased against his nation. Conversely, some of those who style themselves anti-establishment in the United Kingdom, or who oppose foreign wars, have accused the BBC of pro-establishment bias or of refusing to give an outlet to "anti-war" voices. Following the 2003 invasion of Iraq, a study by the Cardiff University School of Journalism of the reporting of the war found that nine out of 10 references to weapons of mass destruction during the war assumed that Iraq possessed them, and only one in 10 questioned this assumption. It also found that, of the main British broadcasters covering the war, the BBC was the most likely to use the British government and military as its source. It was also the least likely to use independent sources, like the Red Cross, which were more critical of the war. When it came to reporting Iraqi casualties, the study found fewer reports on the BBC than on the other three main channels. The report's author, Justin Lewis, wrote: "Far from revealing an anti-war BBC, our findings tend to give credence to those who criticised the BBC for being too sympathetic to the government in its war coverage. Either way, it is clear that the accusation of BBC anti-war bias fails to stand up to any serious or sustained analysis." Prominent BBC appointments are constantly assessed by the British media and political establishment for signs of political bias.
The appointment of Greg Dyke as Director-General was highlighted by press sources because Dyke was a Labour Party member and former activist, as well as a friend of Tony Blair. The BBC's former political editor Nick Robinson had some years earlier been a chairman of the Young Conservatives and, as a result, attracted informal criticism from the former Labour government; his predecessor Andrew Marr faced similar claims from the right because he had been editor of The Independent, a liberal-leaning newspaper, before his appointment in 2000. Mark Thompson, former Director-General of the BBC, admitted the organisation had been biased "towards the left" in the past. He said, "In the BBC I joined 30 years ago, there was, in much of current affairs, in terms of people's personal politics, which were quite vocal, a massive bias to the left". He then added, "The organization did struggle then with impartiality. Now it is a completely different generation. There is much less overt tribalism among the young journalists who work for the BBC." Following the EU referendum in 2016, some critics suggested that the BBC was biased in favour of leaving the EU. For instance, in 2018 the BBC received complaints from people who took issue with the BBC not sufficiently covering anti-Brexit marches while giving smaller-scale events hosted by former UKIP leader Nigel Farage more airtime. On the other hand, a poll released by YouGov showed that 45% of people who voted to leave the EU thought that the BBC was "actively anti-Brexit", compared to 13% of the same kinds of voters who thought the BBC was pro-Brexit. In 2008, BBC Hindi was criticised by some Indian outlets for referring to the terrorists who carried out the 2008 Mumbai attacks as "gunmen". The response to this added to prior criticism from some Indian commentators suggesting that the BBC may have an Indophobic bias. In March 2015, the BBC was criticised for a BBC Storyville documentary, India's Daughter, which interviewed one of the men convicted of the 2012 Delhi gang rape. In spite of a ban ordered by the Indian High Court, the BBC still aired the documentary outside India. BBC News was at the centre of a political controversy following the 2003 invasion of Iraq. Three BBC News reports (Andrew Gilligan's on Today, Gavin Hewitt's on The Ten O'Clock News, and another on Newsnight) quoted an anonymous source who stated that the British government (particularly the Prime Minister's office) had embellished the September Dossier with misleading exaggerations of Iraq's weapons of mass destruction capabilities. The government denounced the reports and accused the corporation of poor journalism. In subsequent weeks the corporation stood by the report, saying that it had a reliable source. Following intense media speculation, David Kelly was named in the press as the source for Gilligan's story on 9 July 2003. Kelly was found dead, by suicide, in a field close to his home early on 18 July. An inquiry led by Lord Hutton was announced by the British government the following day to investigate the circumstances leading to Kelly's death; it concluded that "Dr. Kelly took his own life." In his report on 28 January 2004, Lord Hutton concluded that Gilligan's original accusation was "unfounded" and that the BBC's editorial and management processes were "defective". In particular, it specifically criticised the chain of management that caused the BBC to defend its story.
The BBC Director of News, Richard Sambrook, the report said, had accepted Gilligan's word that his story was accurate in spite of his notes being incomplete. The chairman of the Board of Governors, Gavyn Davies, had then told the Board that he was happy with the story, and told the Prime Minister that a satisfactory internal inquiry had taken place. The Board of Governors, under Davies's guidance, accepted that further investigation of the Government's complaints was unnecessary. Because of the criticism in the Hutton report, Davies resigned on the day of publication. BBC News faced an important test, reporting on itself with the publication of the report, but by common consent (of the Board of Governors) managed this "independently, impartially and honestly". Davies' resignation was followed by the resignation of the Director-General, Greg Dyke, the following day, and the resignation of Gilligan on 30 January. While undoubtedly a traumatic experience for the corporation, an ICM poll in April 2004 indicated that it had sustained its position as the best and most trusted provider of news. The BBC has faced accusations of holding both anti-Israel and anti-Palestine bias. Douglas Davis, the London correspondent of The Jerusalem Post, has described the BBC's coverage of the Arab–Israeli conflict as "a relentless, one-dimensional portrayal of Israel as a demonic, criminal state and Israelis as brutal oppressors [which] bears all the hallmarks of a concerted campaign of vilification that, wittingly or not, has the effect of delegitimising the Jewish state and pumping oxygen into a dark old European hatred that dared not speak its name for the past half-century". However, two large independent studies, one conducted by Loughborough University and the other by Glasgow University's Media Group, concluded that Israeli perspectives are given greater coverage. Critics of the BBC argue that the Balen Report proves systematic bias against Israel in headline news programming. The Daily Mail and The Daily Telegraph criticised the BBC for spending hundreds of thousands of pounds of British taxpayers' money on preventing the report from being released to the public. Jeremy Bowen, the BBC's Middle East editor, was singled out specifically for bias by the BBC Trust, which concluded that he had violated "BBC guidelines on accuracy and impartiality". An independent panel appointed by the BBC Trust was set up in 2006 to review the impartiality of the BBC's coverage of the Israeli–Palestinian conflict. The panel's assessment was that "apart from individual lapses, there was little to suggest deliberate or systematic bias." While noting a "commitment to be fair, accurate and impartial" and praising much of the BBC's coverage, the independent panel concluded "that BBC output does not consistently give a full and fair account of the conflict. In some ways the picture is incomplete and, in that sense, misleading." It notes that "the failure to convey adequately the disparity in the Israeli and Palestinian experience, [reflects] the fact that one side is in control and the other lives under occupation". Writing in the Financial Times, Philip Stephens, one of the panellists, later accused the BBC's director-general, Mark Thompson, of misrepresenting the panel's conclusions. He further opined: "My sense is that BBC news reporting has also lost a once iron-clad commitment to objectivity and a necessary respect for the democratic process. If I am right, the BBC, too, is lost". Mark Thompson published a rebuttal in the FT the next day.
The description by one BBC correspondent reporting on the funeral of Yasser Arafat that she had been left with tears in her eyes led to other questions of impartiality, particularly from Martin Walker in a guest opinion piece in The Times, who picked out the apparent case of Fayad Abu Shamala, the BBC Arabic Service correspondent, who told a Hamas rally on 6 May 2001 that journalists in Gaza were "waging the campaign shoulder to shoulder together with the Palestinian people". Walker argues that the independent inquiry was flawed for two reasons. Firstly, because the time period over which it was conducted (August 2005 to January 2006) surrounded the Israeli withdrawal from Gaza and Ariel Sharon's stroke, which produced more positive coverage than usual. Furthermore, he wrote, the inquiry only looked at the BBC's domestic coverage, and excluded output on the BBC World Service and BBC World. Tom Gross accused the BBC of glorifying Hamas suicide bombers, and condemned its policy of inviting guests such as Jenny Tonge and Tom Paulin who have compared Israeli soldiers to Nazis. Writing for the BBC, Paulin said Israeli soldiers should be "shot dead" like Hitler's SS, and said he could "understand how suicide bombers feel". The BBC also faced criticism for not airing a Disasters Emergency Committee aid appeal for Palestinians who suffered in Gaza during the 22-day war there between late 2008 and early 2009. Most other major UK broadcasters did air this appeal, though rival Sky News did not. British journalist Julie Burchill has accused the BBC of creating a "climate of fear" for British Jews through its "excessive coverage" of Israel compared to other nations. In light of the Gaza war, the BBC suspended seven Arab journalists over allegations that they had expressed support for Hamas via social media. The BBC and ABC shared video segments and reporters as needed in producing their newscasts, with the BBC showing ABC World News Tonight with David Muir in the UK. However, in July 2017 the BBC announced a new partnership with CBS News that allows both organisations to share video, editorial content, and additional newsgathering resources in New York, London, Washington and around the world. BBC News subscribes to wire services from leading international agencies, including PA Media (formerly the Press Association), Reuters, and Agence France-Presse. In April 2017, the BBC dropped the Associated Press in favour of an enhanced service from AFP. BBC News reporters and broadcasts are now, and have in the past been, banned in several countries, primarily for reporting which has been unfavourable to the ruling government. For example, correspondents were banned by the former apartheid regime of South Africa. The BBC was banned in Zimbabwe under Mugabe for eight years as a terrorist organisation, until being allowed to operate again over a year after the 2008 elections. The BBC was banned in Burma (officially Myanmar) after its coverage and commentary on anti-government protests there in September 2007. The ban was lifted four years later, in September 2011. Other cases have included Uzbekistan, China, and Pakistan. BBC Persian, the BBC's Persian-language news site, was blocked from the Iranian internet in 2006. The BBC News website was made available in China again in March 2008 but, as of October 2014, was blocked again.
In June 2015, the Rwandan government placed an indefinite ban on BBC broadcasts following the airing of a controversial documentary regarding the 1994 Rwandan genocide, Rwanda's Untold Story, broadcast on BBC2 on 1 October 2014. The UK's Foreign Office recognised "the hurt caused in Rwanda by some parts of the documentary". In February 2017, reporters from the BBC (as well as the Daily Mail, The New York Times, Politico, CNN, and others) were denied access to a United States White House briefing. In 2017, BBC India was banned for a period of five years from covering all national parks and sanctuaries in India. Following the withdrawal of CGTN's UK broadcaster licence on 4 February 2021 by Ofcom, China banned BBC News from airing in China. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Balfour_Declaration] | [TOKENS: 16772] |
"His Majesty's Government view with favour the establishment in Palestine of a national home for the Jewish people, and will use their best endeavours to facilitate the achievement of this object, it being clearly understood that nothing shall be done which may prejudice the civil and religious rights of existing non-Jewish communities in Palestine, or the rights and political status enjoyed by Jews in any other country." The Balfour Declaration was a public statement issued by the British Government in 1917, during the First World War, announcing its support for the establishment of a "national home for the Jewish people" in Palestine, then an Ottoman region with a small minority Jewish population. The declaration was contained in a letter dated 2 November 1917 from Arthur Balfour, the British foreign secretary, to Lord Rothschild, a leader of the British Jewish community, for transmission to the Zionist Federation of Great Britain and Ireland. The text of the declaration was published in the press on 9 November 1917. Following Britain's declaration of war on the Ottoman Empire in November 1914, it began to consider the future of Palestine. Within two months a memorandum was circulated to the War Cabinet by a Zionist member, Herbert Samuel, proposing the support of Zionist ambitions in order to enlist the support of Jews in the wider war. A committee was established in April 1915 by British prime minister H. H. Asquith to determine British policy towards the Ottoman Empire, including Palestine. Asquith, who had favoured post-war reform of the Ottoman Empire, resigned in December 1916; his replacement, David Lloyd George, favoured partition of the Empire. The first negotiations between the British and the Zionists took place at a conference on 7 February 1917 that included Sir Mark Sykes and the Zionist leadership. Subsequent discussions led to Balfour's request, on 19 June, that Rothschild and Chaim Weizmann draft a public declaration. Further drafts were discussed by the British Cabinet during September and October, with input from Zionist and anti-Zionist Jews but with no representation from the local population in Palestine. By late 1917, the wider war had reached a stalemate, with two of Britain's allies not fully engaged: the United States had yet to suffer a casualty, and the Russians were in the midst of a revolution. A stalemate in southern Palestine was broken by the Battle of Beersheba on 31 October 1917. The release of the final declaration was authorised on 31 October; the preceding Cabinet discussion had referenced perceived propaganda benefits amongst the worldwide Jewish community for the Allied war effort. The opening words of the declaration represented the first public expression of support for Zionism by a major political power. The term "national home" had no precedent in international law, and was intentionally vague as to whether a Jewish state was contemplated. The intended boundaries of Palestine were not specified, and the British government later confirmed that the words "in Palestine" meant that the Jewish national home was not intended to cover all of Palestine. The second half of the declaration was added to satisfy opponents of the policy, who had claimed that it would otherwise prejudice the position of the local population of Palestine and encourage antisemitism worldwide by "stamping the Jews as strangers in their native lands".
The declaration called for safeguarding the civil and religious rights of the Palestinian Arabs, who composed the vast majority of the local population, and also the rights and political status of the Jewish communities in countries outside of Palestine. The British government acknowledged in 1939 that the local population's wishes and interests should have been taken into account, and recognised in 2017 that the declaration should have called for the protection of the Palestinian Arabs' political rights. The declaration greatly increased popular support for Zionism within Jewish communities worldwide, and became a core component of the British Mandate for Palestine, the founding document of Mandatory Palestine. It indirectly led to the emergence of the State of Israel and is considered a principal cause of the ongoing Israeli–Palestinian conflict, often described as the most intractable in the world. Controversy remains over a number of areas, such as whether the declaration contradicted earlier promises the British made to the Sharif of Mecca in the McMahon–Hussein correspondence. Background Early British political support for an increased Jewish presence in the region of Palestine was based upon geopolitical calculations.[i] This support began in the early 1840s and was led by Lord Palmerston, following the occupation of Syria and Palestine by the separatist Ottoman governor Muhammad Ali of Egypt. French influence had grown in Palestine and the wider Middle East, and France's role as protector of the Catholic communities began to grow, just as Russian influence had grown as protector of the Eastern Orthodox in the same regions. This left Britain without a sphere of influence, and thus a need to find or create its own regional "protégés". These political considerations were supported by a sympathetic evangelical Christian sentiment towards the "restoration of the Jews" to Palestine among elements of the mid-19th-century British political elite, most notably Lord Shaftesbury.[ii] The British Foreign Office actively encouraged Jewish emigration to Palestine, exemplified by Charles Henry Churchill's 1841–1842 exhortations to Moses Montefiore, the leader of the British Jewish community.[a] Such efforts were premature and did not succeed;[iii] only 24,000 Jews were living in Palestine on the eve of the emergence of Zionism within the world's Jewish communities in the last two decades of the 19th century. With the geopolitical shakeup occasioned by the outbreak of the First World War, the earlier calculations, which had lapsed for some time, led to a renewal of strategic assessments and political bargaining over the Middle and Far East. Although other factors played their part, Jonathan Schneer says that stereotypical thinking by British officials about Jews also played a role in the decision to issue the declaration. Robert Cecil, Hugh O'Beirne and Sir Mark Sykes all held an unrealistic view of "world Jewry"; Cecil wrote, "I do not think it is possible to exaggerate the international power of the Jews." Zionist representatives saw advantage in encouraging such views. James Renton concurs, writing that the British foreign policy elite, including Prime Minister David Lloyd George and Foreign Secretary A. J. Balfour, believed that Jews possessed real and significant power that could be of use to them in the war.
Zionism arose in the late 19th century in reaction to anti-Semitic and exclusionary nationalist movements in Europe.[iv][v] Romantic nationalism in Central and Eastern Europe had helped to set off the Haskalah, or "Jewish Enlightenment", creating a split in the Jewish community between those who saw Judaism as their religion and those who saw it as their ethnicity or nation. The 1881–1884 anti-Jewish pogroms in the Russian Empire encouraged the growth of the latter identity, resulting in the formation of the Hovevei Zion pioneer organizations, the publication of Leon Pinsker's Autoemancipation, and the first major wave of Jewish immigration to Palestine, retrospectively named the "First Aliyah". In 1896, Theodor Herzl, a Jewish journalist living in Austria-Hungary, published the foundational text of political Zionism, Der Judenstaat ("The Jews' State" or "The State of the Jews"), in which he asserted that the only solution to the "Jewish Question" in Europe, including growing anti-Semitism, was the establishment of a state for the Jews. A year later, Herzl founded the Zionist Organization, which at its first congress called for the establishment of "a home for the Jewish people in Palestine secured under public law". Proposed measures to attain that goal included the promotion of Jewish settlement there, the organisation of Jews in the diaspora, the strengthening of Jewish feeling and consciousness, and preparatory steps to attain necessary governmental grants. Herzl died in 1904, 44 years before the establishment of the State of Israel, the Jewish state that he proposed, without having gained the political standing required to carry out his agenda. Zionist leader Chaim Weizmann, later President of the World Zionist Organisation and first President of Israel, moved from Switzerland to the UK in 1904 and met Arthur Balfour – who had just launched his 1905–1906 election campaign after resigning as Prime Minister – in a session arranged by Charles Dreyfus, his Jewish constituency representative.[vi] Earlier that year, Balfour had successfully driven the Aliens Act through Parliament with impassioned speeches regarding the need to restrict the wave of immigration into Britain from Jews fleeing the Russian Empire. During this meeting, he asked what Weizmann's objections had been to the 1903 Uganda Scheme, which Herzl had supported, to provide a portion of British East Africa to the Jewish people as a homeland. The scheme, which had been proposed to Herzl by Joseph Chamberlain, Colonial Secretary in Balfour's Cabinet, following his trip to East Africa earlier in the year,[vii] had been voted down by the Seventh Zionist Congress in 1905, following Herzl's death,[viii] after two years of heated debate in the Zionist Organization. Weizmann responded that he believed the English are to London as the Jews are to Jerusalem.[b] In January 1914, Weizmann first met Baron Edmond de Rothschild, a member of the French branch of the Rothschild family and a leading proponent of the Zionist movement, in relation to a project to build a Hebrew university in Jerusalem. The Baron was not part of the World Zionist Organization, but had funded the Jewish agricultural colonies of the First Aliyah and transferred them to the Jewish Colonization Association in 1899.
This connection was to bear fruit later that year when the Baron's son, James de Rothschild, requested a meeting with Weizmann on 25 November 1914, to enlist him in influencing those deemed to be receptive within the British government to the Zionist agenda in Palestine.[c] Through James's wife Dorothy, Weizmann was to meet Rózsika Rothschild, who introduced him to the English branch of the family – in particular her husband Charles and his older brother Walter, a zoologist and former Member of Parliament (MP). Their father, Nathan Rothschild, 1st Baron Rothschild, head of the English branch of the family, had a guarded attitude towards Zionism, but he died in March 1915 and his title was inherited by Walter. Prior to the declaration, about 8,000 of Britain's 300,000 Jews belonged to a Zionist organisation. Globally, as of 1913 – the latest known date prior to the declaration – the equivalent figure was approximately 1%. The year 1916 marked four centuries since Palestine had become part of the Ottoman Empire, also known as the Turkish Empire. For most of this period, the Jewish population represented a small minority, approximately 3% of the total, with Muslims representing the largest segment of the population, and Christians the second.[ix] The Ottoman government in Constantinople began to apply restrictions on Jewish immigration to Palestine in late 1882, in response to the start of the First Aliyah earlier that year. Although this immigration was creating a certain amount of tension with the local population, mainly among the merchant and notable classes, in 1901 the Sublime Porte (the Ottoman central government) gave Jews the same rights as Arabs to buy land in Palestine, and the percentage of Jews in the population rose to 7% by 1914. At the same time, with growing distrust of the Young Turks (Turkish nationalists who had taken control of the Empire in 1908) and the Second Aliyah, Arab nationalism and Palestinian nationalism were on the rise; and in Palestine anti-Zionism was a characteristic that unified these forces. Historians cannot say whether these strengthening forces would still have ultimately resulted in conflict in the absence of the Balfour Declaration.[x] In July 1914, war broke out in Europe between the Triple Entente (Britain, France, and the Russian Empire) and the Central Powers (Germany, Austria-Hungary, and, later that year, the Ottoman Empire). The British Cabinet first discussed Palestine at a meeting on 9 November 1914, four days after Britain's declaration of war on the Ottoman Empire, of which the Mutasarrifate of Jerusalem – often referred to as Palestine – was a component. At the meeting David Lloyd George, then Chancellor of the Exchequer, "referred to the ultimate destiny of Palestine". The Chancellor, whose law firm Lloyd George, Roberts and Co had been engaged a decade before by the Zionist Federation of Great Britain and Ireland to work on the Uganda Scheme, was to become prime minister by the time of the declaration, and was ultimately responsible for it.
Weizmann's political efforts picked up speed,[d] and on 10 December 1914 he met with Herbert Samuel, a British Cabinet member and a secular Jew who had studied Zionism; Samuel believed Weizmann's demands were too modest.[e] Two days later, Weizmann met Balfour again, for the first time since their initial meeting in 1905; Balfour had been out of government ever since his electoral defeat in 1906, but remained a senior member of the Conservative Party in their role as Official Opposition.[f] A month later, Samuel circulated a memorandum entitled The Future of Palestine to his Cabinet colleagues. The memorandum stated: "I am assured that the solution of the problem of Palestine which would be much the most welcome to the leaders and supporters of the Zionist movement throughout the world would be the annexation of the country to the British Empire". Samuel discussed a copy of his memorandum with Nathan Rothschild in February 1915, a month before the latter's death. It was the first time in an official record that enlisting the support of Jews as a war measure had been proposed. Many further discussions followed, including the initial meetings in 1915–16 between Lloyd George, who had been appointed Minister of Munitions in May 1915, and Weizmann, who was appointed as a scientific advisor to the ministry in September 1915. Seventeen years later, in his War Memoirs, Lloyd George described these meetings as being the "fount and origin" of the declaration; historians have rejected this claim.[g] In late 1915 the British High Commissioner to Egypt, Henry McMahon, exchanged ten letters with Hussein bin Ali, Sharif of Mecca, in which he promised Hussein to recognize Arab independence "in the limits and boundaries proposed by the Sherif of Mecca" in return for Hussein launching a revolt against the Ottoman Empire. The pledge excluded "portions of Syria" lying to the west of "the districts of Damascus, Homs, Hama and Aleppo".[h] In the decades after the war, the extent of this coastal exclusion was hotly disputed, since Palestine lay to the southwest of Damascus and was not explicitly mentioned. The Arab Revolt was launched on 5 June 1916, on the basis of the quid pro quo agreement in the correspondence. However, less than three weeks earlier the governments of the United Kingdom, France, and Russia had secretly concluded the Sykes–Picot Agreement, which Balfour later described as a "wholly new method" for dividing the region, after the 1915 agreement "seems to have been forgotten".[j] This Anglo-French treaty was negotiated in late 1915 and early 1916 between Sir Mark Sykes and François Georges-Picot, with the primary arrangements being set out in draft form in a joint memorandum on 5 January 1916. Sykes was a British Conservative MP who had risen to a position of significant influence on Britain's Middle East policy, beginning with his seat on the 1915 De Bunsen Committee and his initiative to create the Arab Bureau. Picot was a French diplomat and former consul-general in Beirut. Their agreement defined the proposed spheres of influence and control in Western Asia should the Triple Entente succeed in defeating the Ottoman Empire during World War I, dividing many Arab territories into British- and French-administered areas.
In Palestine, internationalisation was proposed, with the form of administration to be confirmed after consultation with both Russia and Hussein; the January draft noted Christian and Muslim interests, and that "members of the Jewish community throughout the world have a conscientious and sentimental interest in the future of the country."[k] Prior to this point, no active negotiations with Zionists had taken place, but Sykes had been aware of Zionism, was in contact with Moses Gaster – a former President of the English Zionist Federation – and may have seen Samuel's 1915 memorandum. On 3 March, while Sykes and Picot were still in Petrograd, Lucien Wolf (secretary of the Foreign Conjoint Committee, set up by Jewish organizations to further the interests of foreign Jews) submitted to the Foreign Office the draft of an assurance (formula) that could be issued by the allies in support of Jewish aspirations: In the event of Palestine coming within the spheres of influence of Great Britain or France at the close of the war, the governments of those powers will not fail to take account of the historic interest that country possesses for the Jewish community. The Jewish population will be secured in the enjoyment of civil and religious liberty, equal political rights with the rest of the population, reasonable facilities for immigration and colonisation, and such municipal privileges in the towns and colonies inhabited by them as may be shown to be necessary. On 11 March, telegrams[l] were sent in the name of the Foreign Secretary, Edward Grey, to Britain's Russian and French ambassadors for transmission to Russian and French authorities, including the formula, as well as: The scheme might be made far more attractive to the majority of Jews if it held out to them the prospect that when in course of time the Jewish colonists in Palestine grow strong enough to cope with the Arab population they may be allowed to take the management of the internal affairs of Palestine (with the exception of Jerusalem and the holy places) into their own hands. Sykes, having seen the telegram, had discussions with Picot and proposed (making reference to Samuel's memorandum[m]) the creation of an Arab Sultanate under French and British protection, some means of administering the holy places, along with the establishment of a company to purchase land for Jewish colonists, who would then become citizens with equal rights to Arabs.[n] Shortly after returning from Petrograd, Sykes briefed Samuel, who then briefed a meeting of Gaster, Weizmann and Sokolow. Gaster recorded in his diary on 16 April 1916: "We are offered French-English condominium in Palest[ine]. Arab Prince to conciliate Arab sentiment and as part of the Constitution a Charter to Zionists for which England would stand guarantee and which would stand by us in every case of friction ... It practically comes to a complete realisation of our Zionist programme. However, we insisted on: national character of Charter, freedom of immigration and internal autonomy, and at the same time full rights of citizenship to [illegible] and Jews in Palestine." In Sykes's mind, the agreement which bore his name was outdated even before it was signed – in March 1916, he wrote in a private letter: "to my mind the Zionists are now the key of the situation".[xii] In the event, neither the French nor the Russians were enthusiastic about the proposed formulation, and eventually, on 4 July, Wolf was informed that "the present moment is inopportune for making any announcement."
These wartime initiatives, inclusive of the declaration, are frequently considered together by historians because of the potential, real or imagined, for incompatibility between them, particularly in regard to the disposition of Palestine. In the words of Professor Albert Hourani, founder of the Middle East Centre at St Antony's College, Oxford: "The argument about the interpretation of these agreements is one which is impossible to end, because they were intended to bear more than one interpretation." In terms of British politics, the declaration resulted from the coming into power of Lloyd George and his Cabinet, which had replaced the Cabinet led by H. H. Asquith in December 1916. Whilst both Prime Ministers were Liberals and both governments were wartime coalitions, Lloyd George and Balfour, appointed as his Foreign Secretary, favoured a post-war partition of the Ottoman Empire as a major British war aim, whereas Asquith and his Foreign Secretary, Sir Edward Grey, had favoured its reform. Two days after taking office, Lloyd George told General Robertson, the Chief of the Imperial General Staff, that he wanted a major victory, preferably the capture of Jerusalem, to impress British public opinion, and immediately consulted his War Cabinet about a "further campaign into Palestine when El Arish had been secured." Subsequent pressure from Lloyd George, over the reservations of Robertson, resulted in the recapture of the Sinai for British-controlled Egypt and, with the capture of El Arish in December 1916 and Rafah in January 1917, the arrival of British forces at the southern borders of the Ottoman Empire. Following two unsuccessful attempts to capture Gaza between 26 March and 19 April, a six-month stalemate in Southern Palestine began; the Sinai and Palestine Campaign would not make any progress into Palestine until 31 October 1917. Following the change in government, Sykes was promoted into the War Cabinet Secretariat with responsibility for Middle Eastern affairs. In January 1917, despite having previously built a relationship with Moses Gaster,[xiii] he began looking to meet other Zionist leaders; by the end of the month he had been introduced to Weizmann and his associate Nahum Sokolow, a journalist and executive of the World Zionist Organization who had moved to Britain at the beginning of the war.[xiv] On 7 February 1917, Sykes, claiming to be acting in a private capacity, entered into substantive discussions with the Zionist leadership.[o] The previous British correspondence with "the Arabs" was discussed at the meeting; Sokolow's notes record Sykes's description that "The Arabs professed that language must be the measure [by which control of Palestine should be determined] and [by that measure] could claim all Syria and Palestine. Still the Arabs could be managed, particularly if they received Jewish support in other matters."[p] At this point the Zionists were still unaware of the Sykes–Picot Agreement, although they had their suspicions. One of Sykes's goals was the mobilization of Zionism to the cause of British suzerainty in Palestine, so as to have arguments to put to France in support of that objective. During the period of the British War Cabinet discussions leading up to the declaration, the war had reached a stalemate. On the Western Front the tide would first turn in favour of the Central Powers in spring 1918, before decisively turning in favour of the Allies from July 1918 onwards.
Although the United States declared war on Germany in the spring of 1917, it did not suffer its first casualties until 2 November 1917, at which point President Woodrow Wilson still hoped to avoid dispatching large contingents of troops into the war. The Russian forces were known to be distracted by the ongoing Russian Revolution and the growing support for the Bolshevik faction, but Alexander Kerensky's Provisional Government had remained in the war; Russia only withdrew after the final stage of the revolution on 7 November 1917. Approvals Balfour met Weizmann at the Foreign Office on 22 March 1917; two days later, Weizmann described the meeting as being "the first time I had a real business talk with him". Weizmann explained at the meeting that the Zionists had a preference for a British protectorate over Palestine, as opposed to an American, French or international arrangement; Balfour agreed, but warned that "there may be difficulties with France and Italy". The French position in regard to Palestine and the wider Syria region during the lead up to the Balfour Declaration was largely dictated by the terms of the Sykes–Picot Agreement and was complicated from 23 November 1915 by increasing French awareness of the British discussions with the Sherif of Mecca. Prior to 1917, the British had led the fighting on the southern border of the Ottoman Empire alone, given their neighbouring Egyptian colony and the French preoccupation with the fighting on the Western Front that was taking place on their own soil. Italy's participation in the war, which began following the April 1915 Treaty of London, did not include involvement in the Middle Eastern sphere until the April 1917 Agreement of Saint-Jean-de-Maurienne; at this conference, Lloyd George had raised the question of a British protectorate of Palestine and the idea "had been very coldly received" by the French and the Italians.[q] In May and June 1917, the French and Italians sent detachments to support the British as they built their reinforcements in preparation for a renewed attack on Palestine. In early April, Sykes and Picot were appointed to act as the chief negotiators once more, this time on a month-long mission to the Middle East for further discussions with the Sherif of Mecca and other Arab leaders.[r] On 3 April 1917, Sykes met with Lloyd George, Lord Curzon and Maurice Hankey to receive his instructions in this regard, namely to keep the French onside while "not prejudicing the Zionist movement and the possibility of its development under British auspices, [and not] enter into any political pledges to the Arabs, and particularly none in regard to Palestine". Before travelling to the Middle East, Picot, via Sykes, invited Nahum Sokolow to Paris to educate the French government on Zionism. Sykes, who had prepared the way in correspondence with Picot, arrived a few days after Sokolow; in the meantime, Sokolow had met Picot and other French officials, and convinced the French Foreign Office to accept for study a statement of Zionist aims "in regard to facilities of colonization, communal autonomy, rights of language and establishment of a Jewish chartered company." Sykes went on ahead to Italy and had meetings with the British ambassador and British Vatican representative to prepare the way for Sokolow once again. Sokolow was granted an audience with Pope Benedict XV on 6 May 1917.
Sokolow's notes of the meeting – the only meeting records known to historians – stated that the Pope expressed general sympathy and support for the Zionist project.[xv] On 21 May 1917 Angelo Sereni, president of the Committee of the Jewish Communities,[s] presented Sokolow to Sidney Sonnino, the Italian Minister of Foreign Affairs. He was also received by Paolo Boselli, the Italian prime minister. Sonnino arranged for the secretary general of the ministry to send a letter to the effect that, although he could not express himself on the merits of a program which concerned all the allies, "generally speaking" he was not opposed to the legitimate claims of the Jews. On his return journey, Sokolow met with French leaders again and secured a letter dated 4 June 1917 from Jules Cambon, head of the political section of the French foreign ministry, giving assurances of sympathy towards the Zionist cause. This letter was not published, but was deposited at the British Foreign Office.[xvi] Following the United States' entry into the war on 6 April, the British Foreign Secretary led the Balfour Mission to Washington, D.C., and New York, where he spent a month between mid-April and mid-May. During the trip he spent significant time discussing Zionism with Louis Brandeis, a leading Zionist and a close ally of Wilson who had been appointed as a Supreme Court Justice a year previously.[t] By 13 June 1917, it was acknowledged by Ronald Graham, head of the Foreign Office's Middle Eastern affairs department, that the three most relevant politicians – the Prime Minister, the Foreign Secretary, and the Parliamentary Under-Secretary of State for Foreign Affairs, Lord Robert Cecil – were all in favour of Britain supporting the Zionist movement;[u] on the same day Weizmann had written to Graham to advocate for a public declaration.[v] Six days later, at a meeting on 19 June, Balfour asked Lord Rothschild and Weizmann to submit a formula for a declaration. Over the next few weeks, a 143-word draft was prepared by the Zionist negotiating committee, but it was considered too specific on sensitive areas by Sykes, Graham and Rothschild. Separately, a very different draft had been prepared by the Foreign Office, described in 1961 by Harold Nicolson – who had been involved in preparing the draft – as proposing a "sanctuary for Jewish victims of persecution". The Foreign Office draft was strongly opposed by the Zionists, and was discarded; no copy of the draft has been found in the Foreign Office archives. Following further discussion, a revised – and, at just 46 words in length, much shorter – draft declaration was prepared and sent by Lord Rothschild to Balfour on 18 July. It was received by the Foreign Office, and the matter was brought to the Cabinet for formal consideration. The decision to release the declaration was taken by the British War Cabinet on 31 October 1917. This followed discussion at four War Cabinet meetings (including the 31 October meeting) over the space of the previous two months. In order to aid the discussions, the War Cabinet Secretariat, led by Maurice Hankey, the Cabinet Secretary, and supported by his Assistant Secretaries – primarily Sykes and his fellow Conservative MP and pro-Zionist Leo Amery – solicited outside perspectives to put before the Cabinet. These included the views of government ministers, war allies – notably from President Woodrow Wilson – and, in October, formal submissions from six Zionist leaders and four non-Zionist Jews.
British officials asked President Wilson for his consent on the matter on two occasions – first on 3 September, when he replied the time was not ripe, and later on 6 October, when he agreed with the release of the declaration. Excerpts from the minutes of these four War Cabinet meetings describe the primary factors that the ministers considered. Drafting Declassification of British government archives has allowed scholars to piece together the choreography of the drafting of the declaration; in his widely cited 1961 book, Leonard Stein published four previous drafts of the declaration. The drafting began with Weizmann's guidance to the Zionist drafting team on its objectives in a letter dated 20 June 1917, one day following his meeting with Rothschild and Balfour. He proposed that the declaration from the British government should state: "its conviction, its desire or its intention to support Zionist aims for the creation of a Jewish national home in Palestine; no reference must be made I think to the question of the Suzerain Power because that would land the British into difficulties with the French; it must be a Zionist declaration." A month after the receipt of the much-reduced 12 July draft from Rothschild, Balfour proposed a number of mainly technical amendments. The two subsequent drafts included much more substantial amendments: the first in a late August draft by Lord Milner – one of the original five members of Lloyd George's War Cabinet as a minister without portfolio[xvii] – which reduced the geographic scope from all of Palestine to "in Palestine", and the second from Milner and Amery in early October, which added the two "safeguard clauses". His Majesty's Government regards as essential for the realization of this principle the grant of internal autonomy to the Jewish nationality in Palestine, freedom of immigration for Jews, and the establishment of a Jewish National Colonizing Corporation for the resettlement and economic development of the country. The conditions and forms of the internal autonomy and a Charter for the Jewish National Colonizing Corporation should, in the view of His Majesty's Government, be elaborated in detail and determined with the representatives of the Zionist Organization. Subsequent authors have debated who the "primary author" really was. In his posthumously published 1981 book The Anglo-American Establishment, Georgetown University history professor Carroll Quigley explained his view that Lord Milner was the primary author of the declaration,[xviii] and more recently, William D. Rubinstein, Professor of Modern History at Aberystwyth University, Wales, proposed Amery instead. Huneidi wrote that Ormsby-Gore, in a report he prepared for Shuckburgh, claimed authorship, together with Amery, of the final draft form. Key issues The agreed version of the declaration, a single sentence of just 67 words, was sent on 2 November 1917 in a short letter from Balfour to Walter Rothschild, for transmission to the Zionist Federation of Great Britain and Ireland. The declaration contained four clauses, of which the first two promised to support "the establishment in Palestine of a national home for the Jewish people", followed by two "safeguard clauses" with respect to "the civil and religious rights of existing non-Jewish communities in Palestine", and "the rights and political status enjoyed by Jews in any other country".
"This is a very carefully worded document and but for the somewhat vague phrase 'A National Home for the Jewish People' might be considered sufficiently unalarming ... But the vagueness of the phrase cited has been a cause of trouble from the commencement. Various persons in high positions have used language of the loosest kind calculated to convey a very different impression to the more moderate interpretation which can be put upon the words. President Wilson brushed away all doubts as to what was intended from his point of view when, in March 1919, he said to the Jewish leaders in America, 'I am moreover persuaded that the allied nations, with the fullest concurrence of our own Government and people are agreed that in Palestine shall be laid the foundations of a Jewish Commonwealth.'[w] The late President Roosevelt declared that one of the Allies peace conditions should be that 'Palestine must be made a Jewish State.' Mr. Winston Churchill has spoken of a 'Jewish State' and Mr. Bonar Law has talked in Parliament of 'restoring Palestine to the Jews'."[x] The term "national home" was intentionally ambiguous, having no legal value or precedent in international law, such that its meaning was unclear when compared to other terms such as "state". The term was intentionally used instead of "state" because of opposition to the Zionist program within the British Cabinet. According to historian Norman Rose, the chief architects of the declaration contemplated that a Jewish State would emerge in time while the Palestine Royal Commission concluded that the wording was "the outcome of a compromise between those Ministers who contemplated the ultimate establishment of a Jewish State and those who did not."[xix] Interpretation of the wording has been sought in the correspondence leading to the final version of the declaration. An official report to the War Cabinet sent by Sykes on 22 September said that the Zionists did not want "to set up a Jewish Republic or any other form of state in Palestine or in any part of Palestine" but rather preferred some form of protectorate as provided in the Palestine Mandate.[y] A month later, Curzon produced a memorandum circulated on 26 October 1917 where he addressed two questions, the first concerning the meaning of the phrase "a National Home for the Jewish race in Palestine"; he noted that there were different opinions ranging from a fully fledged state to a merely spiritual centre for the Jews. Sections of the British press assumed that a Jewish state was intended even before the Declaration was finalized.[xx] In the United States the press began using the terms "Jewish National Home", "Jewish State", "Jewish republic" and "Jewish Commonwealth" interchangeably. Treaty expert David Hunter Miller, who was at the conference and subsequently compiled a 22 volume compendium of documents, provides a report of the Intelligence Section of the American Delegation to the Paris Peace Conference of 1919 which recommended that "there be established a separate state in Palestine," and that "it will be the policy of the League of Nations to recognize Palestine as a Jewish state, as soon as it is a Jewish state in fact." The report further advised that an independent Palestinian state under a British League of Nations mandate be created. Jewish settlement would be allowed and encouraged in this state and this state's holy sites would be under the control of the League of Nations. 
Indeed, the Inquiry – the United States' wartime research group preparing for the peace negotiations – spoke positively about the possibility of a Jewish state eventually being created in Palestine if the necessary demographics for this were to exist. Historian Matthew Jacobs later wrote that the US approach was hampered by the "general absence of specialist knowledge about the region" and that "like much of the Inquiry's work on the Middle East, the reports on Palestine were deeply flawed" and "presupposed a particular outcome of the conflict". He quotes Miller, who wrote of one report on the history and impact of Zionism that it was "absolutely inadequate from any standpoint and must be regarded as nothing more than material for a future report". On 2 December 1917, Lord Robert Cecil assured an audience that the government fully intended that "Judea [was] for the Jews." Yair Auron opines that Cecil, then a deputy Foreign Secretary representing the British Government at a celebratory gathering of the English Zionist Federation, "possibly went beyond his official brief" in saying (he cites Stein) "Our wish is that Arabian countries shall be for the Arabs, Armenia for the Armenians and Judaea for the Jews". The following October, Neville Chamberlain, while chairing a Zionist meeting, discussed a "new Jewish State." At the time, Chamberlain was a Member of Parliament for Ladywood, Birmingham; recalling the event in 1939, just after Chamberlain had approved the 1939 White Paper, the Jewish Telegraphic Agency noted that the Prime Minister had "experienced a pronounced change of mind in the 21 years intervening". A year later, on the Declaration's second anniversary, General Jan Smuts said that Britain "would redeem her pledge ... and a great Jewish state would ultimately rise." In a similar vein, Churchill stated a few months later: If, as may well happen, there should be created in our own lifetime by the banks of the Jordan a Jewish State under the protection of the British Crown which might comprise three or four millions of Jews, an event will have occurred in the history of the world which would from every point of view be beneficial. At the 22 June 1921 meeting of the Imperial Cabinet, Churchill was asked by Arthur Meighen, the Canadian Prime Minister, about the meaning of the national home. Churchill said "If in the course of many years they become a majority in the country, they naturally would take it over ... pro rata with the Arab. We made an equal pledge that we would not turn the Arab off his land or invade his political and social rights". Responding to Curzon in January 1919, Balfour wrote "Weizmann has never put forward a claim for the Jewish Government of Palestine. Such a claim in my opinion is clearly inadmissible and personally I do not think we should go further than the original declaration which I made to Lord Rothschild". In February 1919, France issued a statement that it would not oppose putting Palestine under British trusteeship and the formation of a Jewish State. The historian Isaiah Friedman notes that France's attitude went on to change; Yehuda Blum, while discussing France's "unfriendly attitude towards the Jewish national movement", notes the content of a report made by Robert Vansittart (a leading member of the British delegation to the Paris Peace Conference) to Curzon in November 1920 which said: [The French] had agreed to a Jewish National Home, not a Jewish State. They considered we were steering straight upon the latter, and the very last thing they would do was to enlarge that State for they totally disapproved our policy.
Greece's Foreign Minister told the editor of the Salonica Jewish organ Pro-Israel that "the establishment of a Jewish State meets in Greece with full and sincere sympathy ... A Jewish Palestine would become an ally of Greece." In Switzerland, a number of noted historians, including professors Tobler, Forel-Yvorne, and Rogaz, supported the idea of establishing a Jewish state, with one referring to it as "a sacred right of the Jews." In Germany, officials and most of the press took the Declaration to mean a British-sponsored state for the Jews. The British government, including Churchill, made it clear that the Declaration did not intend for the whole of Palestine to be converted into a Jewish National Home, "but that such a Home should be founded in Palestine."[xxii][xxiii] Emir Faisal, King of Syria and Iraq, made a formal written agreement with Zionist leader Chaim Weizmann, which was drafted by T. E. Lawrence, whereby they would try to establish a peaceful relationship between Arabs and Jews in Palestine. The 3 January 1919 Faisal–Weizmann Agreement was a short-lived agreement for Arab–Jewish cooperation on the development of a Jewish homeland in Palestine.[z] Faisal did treat Palestine differently in his presentation to the Peace Conference on 6 February 1919, saying "Palestine, for its universal character, [should be] left on one side for the mutual consideration of all parties concerned". The agreement was never implemented.[aa] In a subsequent letter, written in English by Lawrence for Faisal's signature and addressed to the American Zionist leader Felix Frankfurter, he explained: We feel that the Arabs and Jews are cousins in race, suffering similar oppression at the hands of powers stronger than themselves, and by a happy coincidence have been able to take the first step toward the attainment of their national ideals together. We Arabs, especially the educated among us, look with deepest sympathy on the Zionist movement ... We will do our best, in so far as we are concerned, to help them through; we will wish the Jews a most hearty welcome home. When the letter was tabled at the Shaw Commission in 1929, Rustam Haidar spoke to Faisal in Baghdad and cabled that Faisal had "no recollection that he wrote anything of the sort". In January 1930, Haidar wrote to a newspaper in Baghdad that Faisal "finds it exceedingly strange that such a matter is attributed to him as he at no time would consider allowing any foreign nation to share in an Arab country". Awni Abd al-Hadi, Faisal's secretary, wrote in his memoirs that he was not aware that a meeting between Frankfurter and Faisal took place and that: "I believe that this letter, assuming that it is authentic, was written by Lawrence, and that Lawrence signed it in English on behalf of Faisal. I believe this letter is part of the false claims made by Chaim Weizmann and Lawrence to lead astray public opinion." According to Faisal's biographer Ali Allawi, the most likely explanation for the Frankfurter letter is that a meeting took place and a letter was drafted in English by Lawrence, but that its "contents were not entirely made clear to Faisal. He then may or may not have been induced to sign it", since it ran counter to Faisal's other public and private statements at the time. A 1 March interview published in Le Matin quoted Faisal as saying: This feeling of respect for other religions dictates my opinion about Palestine, our neighbor.
That the unhappy Jews come to reside there and behave as good citizens of this country, our humanity rejoices given that they are placed under a Muslim or Christian government mandated by The League of Nations. If they want to constitute a state and claim sovereign rights in this region, I foresee very serious dangers. It is to be feared that there will be a conflict between them and the other races.[ab] Referring to his 1922 White Paper, Churchill later wrote that "there is nothing in it to prohibit the ultimate establishment of a Jewish State." And in private, many British officials agreed with the Zionists' interpretation that a state would be established when a Jewish majority was achieved. When Chaim Weizmann met with Churchill, Lloyd George and Balfour at Balfour's home in London on 21 July 1921, Lloyd George and Balfour assured Weizmann "that by the Declaration they had always meant an eventual Jewish State," according to Weizmann's minutes of that meeting. Lloyd George stated in 1937 that it was intended that Palestine would become a Jewish Commonwealth if and when Jews "had become a definite majority of the inhabitants",[ac] and Leo Amery echoed the same position in 1946.[ad] The UNSCOP report of 1947 subjected the issue of home versus state to scrutiny, arriving at a similar conclusion to that of Lloyd George.[xxiv] The statement that such a homeland would be found "in Palestine" rather than "of Palestine" was also deliberate.[xxv] The proposed draft of the declaration contained in Rothschild's 12 July letter to Balfour referred to the principle "that Palestine should be reconstituted as the National Home of the Jewish people." In the final text, following Lord Milner's amendment, the word "reconstituted" was removed and the word "that" was replaced with "in". The text thereby avoided committing the entirety of Palestine as the National Home of the Jewish people, resulting in controversy in future years over the intended scope, especially among Revisionist Zionists, who claimed the entirety of Mandatory Palestine and the Emirate of Transjordan as the Jewish homeland. This was clarified by the 1922 Churchill White Paper, which stated that "the terms of the declaration referred to do not contemplate that Palestine as a whole should be converted into a Jewish National Home, but that such a Home should be founded 'in Palestine.'" The declaration did not include any geographical boundaries for Palestine. Following the end of the war, three documents – the declaration, the McMahon–Hussein correspondence and the Sykes–Picot Agreement – became the basis for the negotiations to set the boundaries of Palestine. "If, however, the strict terms of the Balfour Statement are adhered to ... it can hardly be doubted that the extreme Zionist Program must be greatly modified. For "a national home for the Jewish people" is not equivalent to making Palestine into a Jewish State; nor can the erection of such a Jewish State be accomplished without the gravest trespass upon the "civil and religious rights of existing non-Jewish communities in Palestine." The fact came out repeatedly in the Commission's conference with Jewish representatives, that the Zionists looked forward to a practically complete dispossession of the present non-Jewish inhabitants of Palestine, by various forms of purchase." The declaration's first safeguard clause referred to protecting the civil and religious rights of non-Jews in Palestine.
The clause had been drafted together with the second safeguard by Leo Amery in consultation with Lord Milner, with the intention to "go a reasonable distance to meeting the objectors, both Jewish and pro-Arab, without impairing the substance of the proposed declaration".[ae] Arabs constituted around 90% of the population of Palestine, but – as stated by Ronald Storrs, Britain's Military Governor of Jerusalem between 1917 and 1920 – they were "not so much [named but] lumped together under the negative and humiliating definition of 'Non-Jewish Communities'".[af] Additionally, there was no reference to protecting the political rights of this group, as there was regarding Jews in other countries. This lack of interest was frequently contrasted with the commitment to the Jewish community, and various terms were used over subsequent years to treat the two obligations as linked.[ag] A heated question was whether the status of both groups had "equal weight", which the British government and the Permanent Mandates Commission held to be the case in the 1930 Passfield White Paper.[ah] Balfour stated in February 1919 that Palestine was considered an exceptional case in which, referring to the local population, "we deliberately and rightly decline to accept the principle of self-determination",[ai] although he considered that the policy provided self-determination to Jews. Avi Shlaim considers this the declaration's "greatest contradiction". The principle of self-determination had been declared on numerous occasions subsequent to the declaration – in President Wilson's January 1918 Fourteen Points, Sykes's Declaration to the Seven in June 1918, the November 1918 Anglo-French Declaration, and the June 1919 Covenant of the League of Nations that had established the mandate system.[aj] In an August 1919 memo Balfour acknowledged the inconsistency among these statements, and further explained that the British had no intention of consulting the existing population of Palestine.[ak] The results of the ongoing American King–Crane Commission of Enquiry's consultation of the local population – from which the British had withdrawn – were suppressed for three years until the report was leaked in 1922. Subsequent British governments have acknowledged this deficiency, in particular the 1939 committee led by the Lord Chancellor, Frederic Maugham, which concluded that the government had not been "free to dispose of Palestine without regard for the wishes and interests of the inhabitants of Palestine", and the April 2017 statement by British Foreign Office minister of state Baroness Anelay that the government acknowledged that "the Declaration should have called for the protection of political rights of the non-Jewish communities in Palestine, particularly their right to self-determination."[al][am] The second safeguard clause was a commitment that nothing should be done which might prejudice the rights of the Jewish communities in countries outside of Palestine. The original drafts of Rothschild, Balfour, and Milner did not include this safeguard, which was drafted together with the preceding safeguard in early October, in order to reflect opposition from influential members of the Anglo-Jewish community. Lord Rothschild took exception to the proviso on the basis that it presupposed the possibility of a danger to non-Zionists, which he denied.
The Conjoint Foreign Committee of the Board of Deputies of British Jews and the Anglo-Jewish Association had published a letter in The Times on 24 May 1917 entitled Views of Anglo-Jewry, signed by the two organisations' presidents, David Lindo Alexander and Claude Montefiore, stating their view that: "the establishment of a Jewish nationality in Palestine, founded on this theory of homelessness, must have the effect throughout the world of stamping the Jews as strangers in their native lands, and of undermining their hard-won position as citizens and nationals of these lands." This was followed in late August by Edwin Montagu, an influential anti-Zionist Jew and Secretary of State for India, and the only Jewish member of the British Cabinet, who wrote in a Cabinet memorandum that: "The policy of His Majesty's Government is anti-Semitic in result and will prove a rallying ground for anti-Semites in every country of the world." Reaction The text of the declaration was published in the press one week after it was signed, on 9 November 1917. Other related events took place within a short timeframe, the two most relevant being the almost immediate British military capture of Palestine and the leaking of the previously secret Sykes–Picot Agreement. On the military side, both Gaza and Jaffa fell within several days, and Jerusalem was surrendered to the British on 9 December. The publication of the Sykes–Picot Agreement, following the Russian Revolution, in the Bolshevik Izvestia and Pravda on 23 November 1917 and in the British Manchester Guardian on 26 November 1917, represented a dramatic moment for the Allies' Eastern campaign: "the British were embarrassed, the Arabs dismayed and the Turks delighted." The Zionists had been aware of the outlines of the agreement since April, and specifically the part relevant to Palestine, following a meeting between Weizmann and Cecil where Weizmann made very clear his objections to the proposed scheme. The declaration represented the first public support for Zionism by a major political power – its publication galvanized Zionism, which had finally obtained an official charter. In addition to its publication in major newspapers, leaflets were circulated throughout Jewish communities. These leaflets were airdropped over Jewish communities in Germany and Austria, as well as the Pale of Settlement, which had been given to the Central Powers following the Russian withdrawal. Weizmann had argued that the declaration would have three effects: it would swing Russia to maintain pressure on Germany's Eastern Front, since Jews had been prominent in the March Revolution of 1917; it would rally the large Jewish community in the United States to press for greater funding for the American war effort, underway since April of that year; and, lastly, that it would undermine German Jewish support for Kaiser Wilhelm II.
The declaration spurred an unintended and extraordinary increase in the number of adherents of American Zionism; in 1914 the 200 American Zionist societies comprised a total of 7,500 members, which grew to 30,000 members in 600 societies in 1918 and 149,000 members in 1919.[xxvi] Whilst the British had considered that the declaration reflected a previously established dominance of the Zionist position in Jewish thought, it was the declaration itself that was subsequently responsible for Zionism's legitimacy and leadership.[xxvii] Exactly one month after the declaration was issued, a large-scale celebration took place at the Royal Opera House – speeches were given by leading Zionists as well as members of the British administration, including Sykes and Cecil. From 1918 until the Second World War, Jews in Mandatory Palestine celebrated Balfour Day as an annual national holiday on 2 November. The celebrations included ceremonies in schools and other public institutions and festive articles in the Hebrew press. In August 1919 Balfour approved Weizmann's request to name the first post-war settlement in Mandatory Palestine "Balfouria" in his honour. It was intended to be a model settlement for future American Jewish activity in Palestine. Herbert Samuel, the Zionist MP whose 1915 memorandum had framed the start of discussions in the British Cabinet, was asked by Lloyd George on 24 April 1920 to act as the first civil governor of British Palestine, replacing the previous military administration that had ruled the area since the war. This decision reflected a clear pro-Zionist stance by the British government. Shortly after beginning the role in July 1920, he was invited to read the haftarah from Isaiah 40 at the Hurva Synagogue in Jerusalem, which, according to his memoirs, led the congregation of older settlers to feel that the "fulfilment of ancient prophecy might at last be at hand".[an] The local Christian and Muslim communities of Palestine, who constituted almost 90% of the population, strongly opposed the declaration. As described by the Palestinian-American philosopher Edward Said in 1979, it was perceived as being made: "(a) by a European power, (b) about a non-European territory, (c) in a flat disregard of both the presence and the wishes of the native majority resident in that territory, and (d) it took the form of a promise about this same territory to another foreign group."[xxviii] According to the 1919 King–Crane Commission, "No British officer, consulted by the Commissioners, believed that the Zionist programme could be carried out except by force of arms." A delegation of the Muslim-Christian Association, headed by Musa al-Husayni, expressed public disapproval on 3 November 1918, one day after the Zionist Commission parade marking the first anniversary of the Balfour Declaration. They handed a petition signed by more than 100 notables to Ronald Storrs, the British military governor: We have noticed yesterday a large crowd of Jews carrying banners and over-running the streets shouting words which hurt the feeling and wound the soul. They pretend with open voice that Palestine, which is the Holy Land of our fathers and the graveyard of our ancestors, which has been inhabited by the Arabs for long ages, who loved it and died in defending it, is now a national home for them ... We Arabs, Muslim and Christian, have always sympathized profoundly with the persecuted Jews and their misfortunes in other countries ...
but there is wide difference between such sympathy and the acceptance of such a nation ... ruling over us and disposing of our affairs. The group also protested the carrying of new "white and blue banners with two inverted triangles in the middle", drawing the attention of the British authorities to the serious consequences of any political implications in raising the banners. Later that month, on the first anniversary of the occupation of Jaffa by the British, the Muslim-Christian Association sent a lengthy memorandum and petition to the military governor protesting once more any formation of a Jewish state. The majority of Britain's military leaders considered Balfour's declaration either a mistake or one that presented grave risks. In the broader Arab world, the declaration was seen as a betrayal of the British wartime understandings with the Arabs. The Sharif of Mecca and other Arab leaders considered the declaration a violation of a previous commitment made in the McMahon–Hussein correspondence in exchange for launching the Arab Revolt. Following the publication of the declaration in an Egyptian newspaper, Al Muqattam, the British dispatched Commander David George Hogarth to see Hussein in January 1918, bearing the message that the "political and economic freedom" of the Palestinian population was not in question. Hogarth reported that Hussein "would not accept an independent Jewish State in Palestine, nor was I instructed to warn him that such a state was contemplated by Great Britain". Hussein had also learned of the Sykes–Picot Agreement when it was leaked by the new Soviet government in December 1917, but was satisfied by two disingenuous messages from Sir Reginald Wingate, who had replaced McMahon as High Commissioner of Egypt, assuring him that the British commitments to the Arabs were still valid and that the Sykes–Picot Agreement was not a formal treaty. Continuing Arab disquiet over Allied intentions also led during 1918 to the British Declaration to the Seven and the Anglo-French Declaration, the latter promising "the complete and final liberation of the peoples who have for so long been oppressed by the Turks, and the setting up of national governments and administrations deriving their authority from the free exercise of the initiative and choice of the indigenous populations". In 1919, King Hussein refused to ratify the Treaty of Versailles. After February 1920, the British ceased paying his subsidy. In August 1920, five days after the signing of the Treaty of Sèvres, which formally recognized the Kingdom of Hejaz, Curzon asked Cairo to procure Hussein's signature to both treaties and agreed to make a payment of £30,000 conditional on signature. Hussein declined, and in 1921 stated that he could not be expected to "affix his name to a document assigning Palestine to the Zionists and Syria to foreigners." Following the 1921 Cairo Conference, Lawrence was sent to try to obtain the King's signature to a treaty as well as to Versailles and Sèvres, a £60,000 annual subsidy being proposed; this attempt also failed. During 1923, the British made one further attempt to settle outstanding issues with Hussein, and once again the attempt foundered; Hussein continued in his refusal to recognize the Balfour Declaration or any of the Mandates in the territories that he perceived as his domain.
In March 1924, having briefly considered the possibility of removing the offending article from the treaty, the government suspended any further negotiations; within six months they withdrew their support in favour of their central Arabian ally Ibn Saud, who proceeded to conquer Hussein's kingdom. The declaration was first endorsed by a foreign government on 27 December 1917, when Serbian Zionist leader and diplomat David Albala announced the support of Serbia's government-in-exile during a mission to the United States. The French and Italian governments offered their endorsements on 14 February and 9 May 1918, respectively. At a private meeting in London on 1 December 1918, Lloyd George and French Prime Minister Georges Clemenceau agreed to certain modifications to the Sykes–Picot Agreement, including British control of Palestine. On 25 April 1920, the San Remo conference – an outgrowth of the Paris Peace Conference attended by the prime ministers of Britain, France and Italy, the Japanese Ambassador to France, and the United States Ambassador to Italy – established the basic terms for three League of Nations mandates: a French mandate for Syria, and British mandates for Mesopotamia and Palestine. With respect to Palestine, the resolution stated that the British were responsible for putting into effect the terms of the Balfour Declaration. The French and the Italians made clear their dislike of the "Zionist cast of the Palestinian mandate" and objected especially to language that did not safeguard the "political" rights of non-Jews, accepting Curzon's claim that "in the British language all ordinary rights were included in 'civil rights'". At the request of France, it was agreed that an undertaking was to be inserted in the mandate's procès-verbal that this would not involve the surrender of the rights hitherto enjoyed by the non-Jewish communities in Palestine. The Italian endorsement of the Declaration had included the condition "... on the understanding that there is no prejudice against the legal and political status of the already existing religious communities ..." The boundaries of Palestine were left unspecified, to "be determined by the Principal Allied Powers." Three months later, in July 1920, the French defeat of Faisal's Arab Kingdom of Syria precipitated the British need to know "what is the 'Syria' for which the French received a mandate at San Remo?" and "does it include Transjordania?"; Britain subsequently decided to pursue a policy of associating Transjordan with the mandated area of Palestine without adding it to the area of the Jewish National Home. In 1922, Congress officially endorsed America's support for the Balfour Declaration through the passage of the Lodge–Fish Resolution, notwithstanding opposition from the State Department. Professor Lawrence Davidson, of West Chester University, whose research focuses on American relations with the Middle East, argues that President Wilson and Congress ignored democratic values in favour of "biblical romanticism" when they endorsed the declaration. He points to an organized pro-Zionist lobby in the United States, which was active at a time when the country's small Arab American community had little political power.
Despite Balfour's warning to the War Cabinet that Germany was aiming to court Zionist support, German authorities were balancing the interests of their Zionist and non-Zionist Jewish communities (the latter represented by the Hilfsverein der Juden in Deutschland) and refrained from showing favoritism to one side or the other. They successfully urged the Ottomans to show lenience towards Zionists but were not attempting anything like an equivalent to the Balfour Declaration. The publication of the Balfour Declaration was thus met with tactical responses from the Central Powers. The participation of the Ottoman Empire in the alliance meant that Germany was unable to effectively counter the British pronouncement. Some within the German government viewed potential Zionist support for Britain's war effort as a substantial loss for their side.[ao] Two weeks following the declaration, Ottokar Czernin, the Austrian Foreign Minister, gave an interview to Arthur Hantke, President of the Zionist Federation of Germany, promising that his government would influence the Turks once the war was over. On 12 December, the Ottoman Grand Vizier, Talaat Pasha, gave an interview to the German newspaper Vossische Zeitung that was published on 31 December and subsequently released in the German-Jewish periodical Jüdische Rundschau on 4 January 1918, in which he referred to the declaration as "une blague" (a deception) and promised that under Ottoman rule "all justifiable wishes of the Jews in Palestine would be able to find their fulfilment" subject to the absorptive capacity of the country. This Turkish statement was endorsed by the German Foreign Office on 5 January 1918. On 8 January 1918, a German-Jewish society, the Union of German Jewish Organizations for the Protection of the Rights of the Jews of the East,[ap] was formed to advocate for further progress for Jews in Palestine. Following the war, the Treaty of Sèvres was signed by the Ottoman Empire on 10 August 1920. The treaty dissolved the Ottoman Empire, requiring Turkey to renounce sovereignty over much of the Middle East. Article 95 of the treaty incorporated the terms of the Balfour Declaration with respect to "the administration of Palestine, within such boundaries as may be determined by the Principal Allied Powers". Since incorporation of the declaration into the Treaty of Sèvres did not affect the legal status of either the declaration or the Mandate, there was also no effect when Sèvres was superseded by the Treaty of Lausanne, which did not include any reference to the declaration. In 1922, the German anti-Semitic theorist Alfred Rosenberg, in his primary contribution to Nazi theory on Zionism, Der Staatsfeindliche Zionismus ("Zionism, the Enemy of the State"), accused German Zionists of working for a German defeat and of supporting Britain and the implementation of the Balfour Declaration, in a version of the stab-in-the-back myth.[xxix] Adolf Hitler took a similar approach in some of his speeches from 1920 onwards. With the advent of the declaration and the British entry into Jerusalem on 9 December, the Vatican reversed its earlier sympathetic attitude to Zionism and adopted an oppositional stance that was to continue until the early 1990s. "It is said that the effect of the Balfour Declaration was to leave the Moslems and Christians dumbfounded ... It is impossible to minimise the bitterness of the awakening.
They considered that they were to be handed over to an oppression which they hated far more than the Turk's and were aghast at the thought of this domination ... Prominent people openly talk of betrayal and that England has sold the country and received the price ... Towards the Administration [the Zionists] adopted the attitude of "We want the Jewish State and we won't wait", and they did not hesitate to avail themselves of every means open to them in this country and abroad to force the hand of an Administration bound to respect the "Status Quo" and to commit it, and thereby future Administrations, to a policy not contemplated in the Balfour Declaration ... What more natural than that [the Moslems and Christians] should fail to realise the immense difficulties the Administration was and is labouring under and come to the conclusion that the openly published demands of the Jews were to be granted and the guarantees in the Declaration were to become but a dead letter?" The British policy as stated in the declaration was to face numerous challenges to its implementation in the following years. The first of these was the indirect peace negotiations which took place between Britain and the Ottomans in December 1917 and January 1918 during a pause in the hostilities for the rainy season; although these peace talks were unsuccessful, archival records suggest that key members of the War Cabinet may have been willing to leave Palestine under nominal Turkish sovereignty as part of an overall deal. In October 1919, almost a year after the end of the war, Lord Curzon succeeded Balfour as Foreign Secretary. Curzon had been a member of the 1917 Cabinet that had approved the declaration, and according to British historian Sir David Gilmour, Curzon had been "the only senior figure in the British government at the time who foresaw that its policy would lead to decades of Arab–Jewish hostility". He therefore determined to pursue a policy in line with its "narrower and more prudent rather than the wider interpretation". Following Bonar Law's appointment as Prime Minister in late 1922, Curzon wrote to Law that he regarded the declaration as "the worst" of Britain's Middle East commitments and "a striking contradiction of our publicly declared principles". In August 1920 the report of the Palin Commission, the first in a long line of British Commissions of Inquiry on the question of Palestine during the Mandate period, noted that "The Balfour Declaration ... is undoubtedly the starting point of the whole trouble". The conclusion of the report, which was not published, mentioned the Balfour Declaration three times, stating that "the causes of the alienation and exasperation of the feelings of the population of Palestine" included: British public and government opinion became increasingly unfavourable to state support for Zionism; even Sykes had begun to change his views in late 1918.[aq] In February 1922 Churchill telegraphed Samuel, who had begun his role as High Commissioner for Palestine 18 months earlier, asking for cuts in expenditure and noting: In both Houses of Parliament there is growing movement of hostility, against Zionist policy in Palestine, which will be stimulated by recent Northcliffe articles.[ar] I do not attach undue importance to this movement, but it is increasingly difficult to meet the argument that it is unfair to ask the British taxpayer, already overwhelmed with taxation, to bear the cost of imposing on Palestine an unpopular policy. 
Following the issuance of the Churchill White Paper in June 1922, the House of Lords rejected a Palestine Mandate that incorporated the Balfour Declaration by 60 votes to 25, following a motion issued by Lord Islington. The vote proved to be only symbolic as it was subsequently overruled by a vote in the House of Commons following a tactical pivot and a variety of promises made by Churchill.[xxx] In February 1923, following the change in government, Cavendish, in a lengthy memorandum for the Cabinet, laid the foundation for a secret review of Palestine policy: It would be idle to pretend that the Zionist policy is other than an unpopular one. It has been bitterly attacked in Parliament and is still being fiercely assailed in certain sections of the press. The ostensible grounds of attack are threefold: (1) the alleged violation of the McMahon pledges; (2) the injustice of imposing upon a country a policy to which the great majority of its inhabitants are opposed; and (3) the financial burden upon the British taxpayer ... His covering note asked for a statement of policy to be made as soon as possible and that the cabinet ought to focus on three questions: (1) whether or not pledges to the Arabs conflict with the Balfour declaration; (2) if not, whether the new government should continue the policy set down by the old government in the 1922 White Paper; and (3) if not, what alternative policy should be adopted. Stanley Baldwin, replacing Bonar Law as prime minister, in June 1923 set up a cabinet sub-committee whose terms of reference were to examine Palestine policy afresh and to advise the full Cabinet whether Britain should remain in Palestine and whether, if she remained, the pro-Zionist policy should be continued. The Cabinet approved the report of this committee on 31 July 1923. Describing it as "nothing short of remarkable", Quigley noted that the government was admitting to itself that its support for Zionism had been prompted by considerations having nothing to do with the merits of Zionism or its consequences for Palestine. As Huneidi noted, "wise or unwise, it is well nigh impossible for any government to extricate itself without a substantial sacrifice of consistency and self-respect, if not honour." The wording of the declaration was thus incorporated into the British Mandate for Palestine, a legal instrument that created Mandatory Palestine with an explicit purpose of putting the declaration into effect, and it was finally formalized in September 1923. Unlike the declaration itself, the Mandate was legally binding on the British government. In June 1924, Britain made its report to the Permanent Mandates Commission for the period July 1920 to the end of 1923 containing nothing of the candor reflected in the internal documents; the documents relating to the 1923 reappraisal stayed secret until the early 1970s. Historiography and motivations Lloyd George and Balfour remained in government until the collapse of the coalition in October 1922. Under the new Conservative government, attempts were made to identify the background to and motivations for the declaration. A private Cabinet memorandum was produced in January 1923, providing a summary of the then-known Foreign Office and War Cabinet records leading up to the declaration. 
An accompanying Foreign Office note asserted that the primary authors of the declaration were Balfour, Sykes, Weizmann, and Sokolow, with "perhaps Lord Rothschild as a figure in the background", and that "negotiations seem to have been mainly oral and by means of private notes and memoranda of which only the scantiest records seem to be available." Following the 1936 general strike that was to degenerate into the 1936–1939 Arab revolt in Palestine, the most significant outbreak of violence since the Mandate began, a British Royal Commission – a high-profile public inquiry – was appointed to investigate the causes of the unrest. The Palestine Royal Commission, appointed with significantly broader terms of reference than the previous British inquiries into Palestine, completed its 404-page report after six months of work in June 1937, publishing it a month later. The report began by describing the history of the problem, including a detailed summary of the origins of the Balfour Declaration. Much of this summary relied on Lloyd George's personal testimony; Balfour had died in 1930 and Sykes in 1919. He told the commission that the declaration was made "due to propagandist reasons ... In particular Jewish sympathy would confirm the support of American Jewry, and would make it more difficult for Germany to reduce her military commitments and improve her economic position on the eastern front".[as] Two years later, in his Memoirs of the Peace Conference,[at] Lloyd George described a total of nine factors motivating his decision as Prime Minister to release the declaration, including the additional reasons that a Jewish presence in Palestine would strengthen Britain's position on the Suez Canal and reinforce the route to their imperial dominion in India. These geopolitical calculations were debated and discussed in the following years. Historians agree that the British believed that expressing support would appeal to Jews in Germany and the United States, given that two of Woodrow Wilson's closest advisors were known to be avid Zionists;[xxxi][xxxii] they also hoped to encourage support from the large Jewish population in Russia. In addition, the British intended to pre-empt the expected French pressure for an international administration in Palestine.[xxxiii] Some historians argue that the British government's decision reflected what James Gelvin, Professor of Middle Eastern History at UCLA, calls "patrician anti-Semitism" in the overestimation of Jewish power in both the United States and Russia. American Zionism was still in its infancy; in 1914 the Zionist Federation had a small budget of about $5,000 and only 12,000 members, despite an American Jewish population of three million,[xxxiv] but the Zionist organizations had recently succeeded, following a show of force within the American Jewish community, in arranging a Jewish congress to debate the Jewish problem as a whole.[xxxv] This impacted British and French government estimates of the balance of power within the American Jewish public.[xxvi] Avi Shlaim, emeritus Professor of International Relations at the University of Oxford, asserts that two main schools of thought have developed on the question of the primary driving force behind the declaration, one presented in 1961 by Leonard Stein, a lawyer and former political secretary to the World Zionist Organization, and the other in 1970 by Mayir Vereté, then Professor of Israeli History at the Hebrew University of Jerusalem. 
Shlaim states that Stein does not reach any clear cut conclusions, but that implicit in his narrative is that the declaration resulted primarily from the activity and skill of the Zionists, whereas according to Vereté, it was the work of hard-headed pragmatists motivated by British imperial interests in the Middle East. Much of modern scholarship on the decision to issue the declaration focuses on the Zionist movement and rivalries within it, with a key debate being whether the role of Weizmann was decisive or whether the British were likely to have issued a similar declaration in any event. Danny Gutwein, Professor of Jewish History at the University of Haifa, proposes a twist on an old idea, asserting that Sykes's February 1917 approach to the Zionists was the defining moment, and that it was consistent with the pursuit of the government's wider agenda to partition the Ottoman Empire.[xxxvi] Long-term impact The declaration had two indirect consequences: the emergence of Israel and a chronic state of conflict between Arabs and Jews throughout the Middle East. It has been described as the "original sin" with respect both to Britain's failure in Palestine and to wider events in Palestine. The statement also had a significant impact on the traditional anti-Zionism of religious Jews, some of whom saw it as divine providence; this contributed to the growth of religious Zionism amid the larger Zionist movement.[xxxvii] Starting in 1920, intercommunal conflict in Mandatory Palestine broke out, which widened into the regional Arab–Israeli conflict, often referred to as the world's "most intractable conflict". The "dual obligation" to the two communities quickly proved to be untenable; the British subsequently concluded that it was impossible for them to pacify the two communities in Palestine by using different messages for different audiences.[au] The Palestine Royal Commission – in making the first official proposal for partition of the region – referred to the requirements as "contradictory obligations", and stated that the "disease is so deep-rooted that, in our firm conviction, the only hope of a cure lies in a surgical operation". Following the 1936–1939 Arab revolt in Palestine, and as worldwide tensions rose in the buildup to the Second World War, the British Parliament approved the White Paper of 1939 – their last formal statement of governing policy in Mandatory Palestine – declaring that Palestine should not become a Jewish State and placing restrictions on Jewish immigration. Whilst the British considered this consistent with the Balfour Declaration's commitment to protect the rights of non-Jews, many Zionists saw it as a repudiation of the declaration.[av] Although this policy lasted until the British surrendered the Mandate in 1948, it served only to highlight the fundamental difficulty for Britain in carrying out the Mandate obligations. Britain's involvement in this became one of the most controversial parts of its Empire's history and damaged its reputation in the Middle East for generations.[xxxviii] According to historian Elizabeth Monroe, "measured by British interests alone, [the declaration was] one of the greatest mistakes in [its] imperial history", one which greatly damaged Britain. However, others argue that this approach ignores the emergence of nationalism and the dismantling of major empires throughout the world, and that Britain would likely not have been able to maintain its presence in the Middle East in any case. 
The 2010 study by Jonathan Schneer, a specialist in modern British history at Georgia Tech, concluded that because the build-up to the declaration was characterized by "contradictions, deceptions, misinterpretations, and wishful thinking", the declaration sowed dragon's teeth and "produced a murderous harvest, and we go on harvesting even today".[xxxix] The foundational stone for modern Israel had been laid, but the prediction that this would lay the groundwork for harmonious Arab–Jewish cooperation proved to be wishful thinking.[xl] On the bicentenary of its foundation, the British newspaper The Guardian, reflecting on its major errors of judgment, included among them the support that the paper's editor, C. P. Scott, had given to Balfour's declaration. Israel had not become, it said, "the country the Guardian foresaw or would have wanted". The Board of Deputies of British Jews, through its president Marie van der Zyl, denounced the column as "breathtakingly ill-considered", declaring that the Guardian appeared "to do everything it can to undermine the legitimacy of the world's only Jewish state". The document The document was presented to the British Museum in 1924 by Walter Rothschild; today it is held in the British Library, which separated from the British Museum in 1973, as Additional Manuscripts number 41178. From October 1987 to May 1988 it was lent outside the UK for display in Israel's Knesset. See also Notes What exactly was in the minds of those who made the Balfour Declaration is speculative. The fact remains that, in the light of experience acquired as a consequence of serious disturbances in Palestine, the mandatory Power, in a statement on "British Policy in Palestine," issued on 3 June 1922 by the Colonial Office, placed a restrictive construction upon the Balfour Declaration. Nevertheless, neither the Balfour Declaration nor the Mandate precluded the eventual creation of a Jewish State. The Mandate in its Preamble recognized, with regard to the Jewish people, the "grounds for reconstituting their National Home". By providing, as one of the main obligations of the mandatory Power, the facilitation of Jewish immigration, it conferred upon the Jews an opportunity, through large-scale immigration, to create eventually a Jewish State with a Jewish majority. References Bibliography External links |
======================================== |
[SOURCE: https://www.ynet.co.il/dating/sex/article/syckqrip11g] | [TOKENS: 478] |
"ืืืจืชื ืืืขืื - ืืืื, ืื ื ืืืืืช ืืืืืช ืืจืืกืืืื ืืืืจ ืฉื ืืกืงืก" ืืืืื ืจืืื ืื ืืืืืืื ืืืืืืช ื ืขื ืจืืืืื ืืืืชื ืคืชืืื ืืืจื: ืืื ืืืืืช ืืืคืื ืืืฆืจื ืืจืืืืช ืืื ืืช ืืืืฆืจื ืืืคืื ืืืคื ืชืืื, ืืืืืื ืืืคืื ืืืฉืจืื. ืืืื ื ืจืื ืืืืื, ืืื ืืืืขื ืื ืคืืื: ืื ืืชืืื ืืืกืืืืช ืืืื ืกืืืจื ืืืืฉืื ืขื ืืืืจืืช ืฉืืชืืืื ืืจืื. "ืฉืืื ืืช ืืขืื ืื ืื ืื ืืคืจืืข ืื, ืืืื ืืคืืื ืืืืืจืื ืื ืืฉ ืื ืืื ืื ืคืื ืก ืื ืื ืื ื ืืืืืช ืคืืจื ื. ืืืืื, ืื ื ืืืืขืช ืื ืื ื ืืืืืฉืืจ ืืืืื ืฉืื ื ืฆืจืืื ืืืืื ืืืื ืื ืฉื ืืขืื" โโืคืืกื ืืฉืืชืฃ ืขื ืืื โโNoomโโ (@โโheybabeitsnoomโโ)โโ |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/DirecTV_Stream] | [TOKENS: 1390] |
Contents DirecTV Stream DirecTV Stream (formerly DirecTV Now and AT&T TV) was a premium streaming multichannel television service offered in the United States by DirecTV. The brand offered pay television service without a contract, with the service utilizing a customer's existing streaming TV hardware, such as a Roku or Amazon Fire TV device, and was also available on some smart TV systems like Tizen OS by Samsung, WebOS by LG and Vizio SmartCast, as well as on phones and tablets. The service was similar to DirecTV via Internet, a streaming version of DirecTV's flagship satellite service, which required a multi-year contract and included an Android TV box called "Gemini." Unlike DirecTV via Internet, DirecTV Stream did not require a contract, and the Gemini device was optional. Channel packages between DirecTV via Internet and DirecTV Stream were mostly the same, though DirecTV via Internet offered a few broadcast and cable networks that were not available on DirecTV Stream. Additionally, DirecTV Stream's pricing was lower compared to DirecTV via Internet, which charged regional sports and equipment lease fees. DirecTV via Internet customers were able to watch programming from their subscription through the DirecTV app on other platforms, which was also used for DirecTV Stream. History DirecTV Stream launched as DirecTV Now on November 30, 2016. On July 13, 2017, it was reported that AT&T was preparing to introduce a cloud-based DVR streaming service as part of its effort to create a unified platform across the DirecTV satellite television service and DirecTV Now services, with U-verse to be added soon. In March 2019, DirecTV Now instituted a new package structure for new subscribers with fewer channels included (although with HBO now included in the base package), and increased pricing for all subscribers. By the second quarter of 2019, DirecTV Now had lost 168,000 subscribers (decreasing to 1.3 million), with AT&T citing "higher prices and less promotional activity" as the reasons. On July 30, 2019, AT&T announced an upcoming streaming television service known as AT&T TV, which would feature an Android TV-based set-top box with a Google Assistant-based voice remote, use the same apps used by DirecTV Now, and offer cloud DVR with 500 hours of storage. Unlike DirecTV Now, this service was sold on a contract basis (and in bundles with AT&T Internet), and required the rental or purchase of proprietary set-top boxes. The service allowed user self-installation, but activation fees were still charged. AT&T CEO Randall Stephenson referred to AT&T TV as a "workhorse" service succeeding DirecTV and AT&T U-verse in its pay television business. The service was initially launched in selected markets in California, Florida, Kansas, Missouri, and Texas, with additional markets to follow. Concurrently it was announced that DirecTV Now would re-brand as "AT&T TV Now". The similar names between the different services were noted as possibly causing confusion, with media outlets even citing examples occurring within the company itself. In September 2019, a class action lawsuit was filed against AT&T, alleging that it had falsely inflated its reported number of AT&T TV Now subscribers by engaging in "unrelenting pressure and strong-arm tactics" and giving unwanted subscriptions to the service to customers without their consent, as well as making false claims surrounding risks related to the service in its SEC filings related to the purchase of Time Warner. 
On February 25, 2021, AT&T announced that it would spin off DirecTV, U-verse and AT&T TV into a separate entity, selling a 30% stake to TPG Capital while retaining a 70% stake in the new standalone company. The deal was closed on August 2, 2021, at which point the provider adopted its current name. In December 2022, DirecTV Stream announced it would raise prices to offset higher costs associated with distributing broadcast and cable networks to users. The price increases rolled out on January 22, 2023, with most customers paying between $5 and $10 extra for channels they already received. It was the second consecutive year DirecTV Stream raised prices on customers. On April 13, 2025, DirecTV announced the end of DirecTV Stream as a standalone brand and merged its content with its regular DirecTV service. Past services The service's base package, "Entertainment", included channels from co-owned division WarnerMedia as well as from the seven other major television conglomerates: The Walt Disney Company, Fox Corporation, NBCUniversal, Discovery, A&E Networks, AMC Networks, and ViacomCBS. The "Premiere" package added HBO, Cinemax, Showtime, Starz, and StarzEncore, and the various additional sports channels. Previous packages started at $35.00 ("Live a Little", replaced by "Entertainment") and ranged up to $70 ("Gotta Have it", replaced by "Gotta Have It"); these packages were no longer sold, but remained accessible to existing subscribers. The newer packages offered the same channels as the prior packages, just at a higher price. On March 2, 2020, AT&T TV launched nationally. AT&T president John Stankey stated that AT&T TV would be promoted as the company's main pay television service, with DirecTV being downplayed outside of markets with insufficient broadband quality to use AT&T TV. AT&T TV Now continued to struggle through 2019 and into 2020, with a loss of 138,000 subscribers in Q1 2020 according to its quarterly earnings report. The service as a whole was down to 788,000 subscribers, compared to its peak of 1.86 million subscribers, before the large discounts to attract initial subscriber interest were scaled back. On January 12, 2021, AT&T discontinued its Plus and Max plans for new subscribers, shifting them towards new AT&T TV packages (starting at $69.99). The packages were $15 more expensive than the previous base package, and included channels owned by AMC Networks, Discovery Inc. and A+E Networks. On January 13, 2021, AT&T announced it would stop selling AT&T TV Now to new customers, and instead redirect new and existing customers to AT&T TV. Per the AT&T TV Now website, there were no long-term contracts for AT&T TV and compatible consumer devices could be used. The final iteration of the service consisted of four main bundles, including the base "Entertainment" service, "Choice" (which added regional sports networks), "Ultimate", and "Premier". See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#cite_ref-FOOTNOTELavington199834โ35_75-0] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' (of any type) is from 1897." The Online Etymology Dictionary indicates that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, he announced his invention in 1822 in a paper to the Royal Astronomical Society, titled "Note on the application of machinery to the computation of astronomical and mathematical tables". He designed the difference engine to aid in navigational calculations; in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics, published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like a^x (y − z)^2, for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems).[citation needed] Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. 
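Shannon's correspondence between switching circuits and Boolean algebra can be made concrete in a few lines of code. The sketch below is purely illustrative (it models no historical device): switch states are booleans, series wiring behaves as AND, parallel wiring as OR, and combining the two yields a half adder that adds one-bit numbers, i.e. arithmetic built entirely from logic.

    # Switch states modeled as booleans: True = closed, False = open.
    def AND(a, b):   # two switches in series: current flows only if both are closed
        return a and b

    def OR(a, b):    # two switches in parallel: current flows if either is closed
        return a or b

    def XOR(a, b):   # true when exactly one switch is closed
        return a != b

    def half_adder(a, b):
        """Add two one-bit numbers; return (sum_bit, carry_bit)."""
        return XOR(a, b), AND(a, b)

    for a in (False, True):
        for b in (False, True):
            s, c = half_adder(a, b)
            print(int(a), "+", int(b), "->", "carry", int(c), "sum", int(s))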
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed up his earlier machine with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length, and operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system used in Charles Babbage's earlier design, meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs. Changing their function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer, this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
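The universal machine Turing described can be illustrated with a short simulator in which the machine's "program" is simply a transition table, i.e. data held in memory, which is the essence of the stored-program idea discussed above. The sketch below is a toy example under assumed conventions (the state names, the blank symbol "_", and the binary-increment rule table are invented for illustration; this reconstructs no machine mentioned here).

    def run_turing_machine(table, tape, state="start"):
        """table maps (state, symbol) -> (new_state, symbol_to_write, move)."""
        cells = dict(enumerate(tape))          # sparse tape; missing cells are blank
        head = len(tape) - 1                   # start at the rightmost symbol
        while state != "halt":
            symbol = cells.get(head, "_")
            state, write, move = table[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    # A machine that adds 1 to a binary number, working right to left.
    increment = {
        ("start", "1"): ("start", "0", "L"),   # 1 plus carry -> 0, carry continues
        ("start", "0"): ("halt", "1", "L"),    # 0 plus carry -> 1, done
        ("start", "_"): ("halt", "1", "L"),    # ran off the left edge: new digit
    }

    print(run_turing_machine(increment, "1011"))  # prints 1100

Because the rule table is ordinary data, the same simulator runs any machine description handed to it, which is the property Turing's universal machine formalizes.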
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, indefinite service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as Package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing devices on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways, including: A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard, displays, power supplies, cables, keyboards, printers and "mice" input devices are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices are the means by which the operations of a computer are controlled and data is provided to it. Examples include: Output devices are the means by which a computer provides the results of its calculations in a human-accessible form. 
Examples include: The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system's function is as follows – this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU: Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation – although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
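The fetch-decode-execute cycle, the program counter, and "jump" instructions described above can be sketched in a few lines. The toy machine below is hypothetical – its instruction names and memory layout are invented for illustration and taken from no real CPU: memory is a single list of numbered cells holding both instructions and data, the program counter selects the next instruction, and a jump simply overwrites the program counter.

    def run(memory):
        pc = 0                     # program counter: address of the next instruction
        acc = 0                    # a single accumulator register for ALU results
        while True:
            op, arg = memory[pc], memory[pc + 1]   # fetch
            pc += 2                                # advance past this instruction
            if op == "LOAD":                       # decode and execute
                acc = memory[arg]
            elif op == "ADD":
                acc += memory[arg]                 # the ALU's arithmetic step
            elif op == "STORE":
                memory[arg] = acc
            elif op == "JZ":                       # conditional jump: if acc == 0,
                if acc == 0:                       # rewriting pc redirects control
                    pc = arg                       # flow, enabling loops
            elif op == "HALT":
                return memory

    # Program: add the numbers in cells 10 and 11, store the result in cell 12.
    mem = ["LOAD", 10, "ADD", 11, "STORE", 12, "HALT", 0, 0, 0, 2, 3, 0]
    print(run(mem)[12])  # prints 5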
A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2⁸ = 256), either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes may be used (typically two, four or eight). When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part.
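A brief C sketch of the memory model just described, using the cell numbers from the example above (an array standing in for memory is, of course, a simplification):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Memory as a list of numbered cells, each holding one byte. */
        uint8_t memory[4096] = {0};

        memory[1357] = 123;                          /* "put 123 into cell 1357" */
        memory[2468] = 45;
        memory[1595] = memory[1357] + memory[2468];  /* add and store the answer */
        printf("cell 1595 holds %d\n", memory[1595]);

        /* One byte represents 256 values: 0..255 unsigned, or -128..+127
           in two's complement; the same bit pattern means different
           numbers depending on interpretation. */
        uint8_t bits = 0xFF;
        printf("0xFF as unsigned: %u, as two's complement: %d\n",
               (unsigned)bits, (int)(int8_t)bits);

        /* Larger numbers use several consecutive bytes (here, four). */
        uint32_t big = 1000000;
        printf("a four-byte number: %u\n", big);
        return 0;
    }

The second printf makes the two's complement point concrete: the bit pattern 0xFF reads as 255 when unsigned and as −1 when interpreted as a signed byte.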
I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing at any given instant. This method of multitasking is sometimes termed "time-sharing", since each program is allocated a "slice" of time in turn (a sketch of the idea follows at the end of this section). Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
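Here is a toy sketch in C of the time-sharing scheme described in this section. It is cooperative and simulated (the "interrupt" is a counter rather than a hardware timer), so it shows only the scheduling idea, not a real operating system mechanism:

    #include <stdio.h>

    #define PROGRAMS 3
    #define SLICE    2   /* units of work per time slice */

    int main(void)
    {
        int remaining[PROGRAMS] = { 5, 3, 4 };   /* work left per program */
        int done = 0;

        /* The scheduler hands each program a slice of time in turn. */
        while (done < PROGRAMS) {
            for (int p = 0; p < PROGRAMS; p++) {
                if (remaining[p] == 0) continue;     /* skip finished programs */
                /* simulated interrupt: after SLICE units, switch programs */
                for (int t = 0; t < SLICE && remaining[p] > 0; t++)
                    remaining[p]--;                  /* do one unit of work */
                printf("program %d ran; %d units left\n", p, remaining[p]);
                if (remaining[p] == 0) done++;
            }
        }
        return 0;
    }

Each program makes progress in turn, and none runs to completion before the others start, which is exactly the appearance of simultaneity the text describes.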
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine-based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions.
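A representative sketch of such a program in the MIPS assembly language (the register choices, labels, and comments here are illustrative rather than canonical):

    begin:  addi $8, $0, 0        # initialize the running sum to 0
            addi $9, $0, 1        # set the first number to add to 1
    loop:   slti $10, $9, 1001    # is the counter still 1000 or less?
            beq  $10, $0, finish  # if not, the summation is complete
            add  $8, $8, $9       # add the current number to the running sum
            addi $9, $9, 1        # advance to the next number
            j    loop             # jump back and repeat the summing process
    finish: add  $2, $8, $0       # copy the final sum into the output register

Note how the loop is built from exactly the ingredients described earlier: a comparison, a conditional branch, and an unconditional jump back to an earlier address.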
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code, with each instruction being given a unique number (its operation code, or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored-program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture, after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language), and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember, a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid of the two techniques. There are thousands of programming languages, some intended for general purpose programming, others useful for only highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU).
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held video game console) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles (a high-level rendering of the earlier summation example appears at the end of this section). Designing small programs is relatively simple and involves the analysis of the problem, collection of inputs, use of the programming constructs within languages, devising or using established procedures and algorithms, and providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited with having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947.
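Returning to the high-level languages discussed above: as an illustration (C is an arbitrary choice here, not one made by the article), the 1-to-1,000 summation from the earlier assembly sketch reduces to a few lines that a compiler can translate into the machine language of whatever CPU it targets:

    #include <stdio.h>

    /* The 1-to-1,000 summation from the earlier assembly sketch,
       expressed in a high-level language; the compiler chooses the
       registers, jumps, and opcodes for the target architecture. */
    int main(void)
    {
        int sum = 0;
        for (int n = 1; n <= 1000; n++)
            sum += n;
        printf("%d\n", sum);   /* prints 500500 */
        return 0;
    }

The same source file can be compiled unchanged for an ARM, x86, or MIPS machine, which is the portability advantage the paragraph above describes.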
Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most digital or analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature.
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-FOOTNOTECharla1997a26_182-0] | [TOKENS: 10728] |
Contents PlayStation (console) The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn. Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced in the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII. Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative software sales of 962 million units. The PlayStation signaled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one. History The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. Kutaragi convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him as a protégé. The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising that it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo-Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony. Incensed by Nintendo's reversal, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as the company had broken an "unwritten law" of domestic companies not turning against one another in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to build on what they had developed with Nintendo and Sega to create a console of their own based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Despite gaining Ohga's enthusiasm, there remained opposition from a majority present at the meeting. Older Sony executives, who saw Nintendo and Sega as "toy" manufacturers, also opposed the project. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to keep the project alive and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games, alongside high quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995 respectively. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Attracting these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco in particular was interested in developing for the PlayStation since Namco rivalled Sega in the arcade market. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems's, thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Sony did not favour their own over non-Sony products, unlike Nintendo; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should Sony decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" compared to that of a fast PC and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect given the 3.5 megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and its final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close: Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn console would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned Steve Race, the head of development, to the conference stage; Race said "$299" and left the stage to a round of applause. The attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers such as KB Toys responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported an attach rate of four games sold per console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched in a test market during 1999-2000 across Sony showrooms, selling 100 units. Sony finally launched the console (the PS One model) countrywide on 24 January 2002 at a price of Rs 7,990, with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, where a third company's registration of the trademark prevented an official release, the market was initially dominated by the officially distributed Sega Saturn; as the Saturn withdrew, however, PlayStation imports and widespread piracy increased. In China, the most popular 32-bit console was initially the Sega Saturn, but after the Saturn left the market, the PlayStation's installed base grew to around 300,000 users by January 2000, even though Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans in which the controller's four geometric button symbols stood in for letters, as in "LIVE IN YUR WRLD. PLY IN URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red "E", read as "you are not ready"). Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts widened from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where demonstrations of select games could be tested. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. In 1998, Sega, spurred by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS one, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2004, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering about 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels. Different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can also generate up to 4,000 sprites, and can render 180,000 texture-mapped polygons per second, or 360,000 per second flat-shaded. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model onward. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service and came with the documentation and software necessary to program PlayStation games and applications in C.
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo Pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square. Rather than labelling its buttons with traditionally used letters or numbers, the PlayStation controller established a trademark set of symbols which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The right joystick also features a thumb-operated digital hat switch, corresponding to the traditional D-pad, used for instances when simple digital movements are necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking in the sticks), the Dual Analog controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, it features textured rubber grips on the analogue sticks, longer handles, slightly different shoulder buttons, and rumble feedback as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play audio CDs. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without either inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator which was released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSs on a Sega console. Bleem!
was subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R media and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive (the Tiger H/E assembly), prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency (so duplicated discs omitted it), since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000 models, experience skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, because the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions. Game library The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3. Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the hardware of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5; for every one of the five editors, this was the highest score given to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities and to Sony revising its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most games ever produced for a console. Its success was a significant financial boon for Sony, whose video game division came to contribute 23% of the company's operating profits. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, continuing the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as a key factor in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in its list, noting that its appeal to older audiences was a crucial factor in propelling the video game industry, as was its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64, which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely because the proprietary cartridge format helped enforce copy protection, given Nintendo's substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the per-unit cost of production was far lower, allowing Sony to offer games to users at about 40% lower cost than ROM cartridges while still making the same net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64: Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many developed by either Nintendo itself or second parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a quad-core system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly. See also Notes References |
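The wobble-based disc authentication described in the copy-protection passage earlier in this article boils down to a simple boot-time decision, sketched below in Python. This is a conceptual model only, not the console's firmware: the function names are invented for illustration, and on real hardware the check is an analogue process inside the drive controller. The four-character region codes SCEI, SCEA and SCEE are the values commonly documented by the emulation and homebrew communities.

```python
# Conceptual sketch of the PlayStation's boot-time disc check. Pressed
# discs carried a low-frequency "wobble" in the pregap that the drive
# demodulated into a short region code; a CD burner's pickup compensates
# the wobble away, so burned copies lack it. Helper names are invented.

CONSOLE_REGION = {"japan": "SCEI", "america": "SCEA", "europe": "SCEE"}

def decode_wobble(pregap_wobble_code):
    """Return the 4-character code modulated onto the wobble, or None.

    On real hardware this is analogue (the sled servo's error signal is
    band-passed and demodulated); here we only model presence/absence.
    """
    return pregap_wobble_code  # None models a burned copy

def boot_allowed(pregap_wobble_code, console_region="europe"):
    code = decode_wobble(pregap_wobble_code)
    if code is None:
        return False  # burned copy: the wobble signature is lost
    return code == CONSOLE_REGION[console_region]  # regional lockout

print(boot_allowed("SCEE"))  # pressed PAL disc in a PAL console -> True
print(boot_allowed(None))    # CD-R copy -> False
print(boot_allowed("SCEA"))  # NTSC-U disc in a PAL console -> False
```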
======================================== |
[SOURCE: https://www.mako.co.il/news-med12/lung_ambition/Article-2551e9c4a4b9a91027.htm] | [TOKENS: 14019] |
An easy test that could change lives: the new screening program for the early detection of lung cancer has been in force under the health basket since September 2025, and is aimed at current and former smokers aged 65-74. How is the test performed, who is eligible, and why must it not be missed? MED12, in partnership with AstraZeneca | Published: 25.11.25, 12:17 | Illustration photo: Shutterstock. Lung cancer is the second most common cancer in the world, but it is responsible for the largest number of cancer deaths. In 2020, more than 20% of all cancer deaths among men in Israel, and 12% among women, were the result of lung cancer. The reason is that this is a quiet disease: most of its symptoms appear only at advanced stages, sometimes after the tumour has already sent metastases to other organs, and by then the chance of cure declines and the goal of treatment shifts from recovery to preserving quality of life. "When cancer is found at an advanced stage and has spread through the body, it can be treated with certain drugs that only slow the tumour and ease the symptoms somewhat, but it is much harder to reach a cure. By contrast, when the tumour is detected at an early stage, the cure rate is significantly higher," explains Prof. Dorith Shaham, head of the CT unit at the Hadassah university hospital. A new screening test, which entered the health basket this past September, offers many people a chance of early detection, and at times can even save the life of the person who takes it. Smoking is the leading cause of lung cancer; in fact, about 80% of lung cancer cases are linked to smoking, past or present. The screening test is a simple imaging examination intended for current and former smokers, and it makes it possible to identify signs that may indicate lung cancer before any symptoms appear. "This is a CT scan of the chest performed at a low radiation dose, roughly equivalent in radiation to a mammogram or to a round-trip transatlantic flight," explains Prof. Shaham. "What allows us to perform the scan at such a low dose is that we are looking at lungs, which are full of air. The air creates contrast around whatever we are looking for, so a clear picture can be obtained at a fraction of the dose of a regular scan." The test itself requires no fasting, no special preparation, and no injection of contrast material, as other CT examinations do; it is painless and no more complicated than a plain X-ray. The patient lies on the scanner bed and is asked to hold their breath while the scan runs, which takes only a few seconds. The scan produces images of the chest, which are examined for the state of the lungs and for nodules (small foci a few millimetres in diameter that can be an early sign of a cancerous growth). "With imaging we can see changes in the lung tissue long before they cause any symptoms, so the screening test can identify even very small, early, far more curable tumours," says Prof. Shaham. Findings from the screening scan are graded according to the Lung-RADS system, which comprises five grades, from 0 to 4. The grade is determined by the characteristics of the findings, including texture, size and shape. "Grades 1 and 2 mean a routine result, that is, no nodules were found, or nodules were found that are clearly not worrying. Grade 3 or 4 means there is suspicion and closer follow-up is required; at grade 3 the finding will usually turn out to be benign, while at 4 the suspicion is significant," explains Prof. Shaham. In most cases the findings are not cancer, and those examined are asked to repeat the screening test after two years; in cases of grade 3 or 4A, the patient is asked to repeat the same examination (low-dose CT) within three or six months. "As noted, if there is significant suspicion, the case is brought for discussion before a multidisciplinary team that includes physicians from several specialties: oncology, pulmonology, chest surgery and, of course, radiology. The discussion decides the next steps, whether to perform a biopsy or further tests; a treatment plan is then set, which at the early stages will usually include surgery to remove the tumour and, at times, radiotherapy," explains Prof. Shaham. Behind the program: the pilot. The screening program for the early detection of lung cancer grew out of a governmental pilot for the early detection of lung cancer conducted in Israel in recent years. The pilot did not set out to prove that screening can detect tumours at early, life-saving stages, data already established in many studies around the world; rather, it demonstrated that Israel's health system is prepared to implement a screening program, and it succeeded in proving that nationwide implementation is feasible. "The pilot program led the way; thanks to it, approval by the health funds and the inclusion of the test in the health basket became possible. The drawback is that for now the basket approves the test only for ages 65-74, whereas the screening accepted around the world starts at age 50, but ultimately this is a safety net of real significance," says Dr. Shani Shilo, founder and CEO of the Israeli Lung Cancer Association. The screening test included in Israel's health basket is offered to members aged 65 to 74 once every two years: people who smoke now, or smoked in the past (up to 15 years since quitting), with a smoking history of at least 20 "pack-years". A pack-year is the number of years of smoking multiplied by the number of packs per day: smoking one pack a day for 20 years counts as 20 pack-years, and smoking two packs a day for ten years also counts as 20 pack-years. "If suspicious nodules are found in the scan, a work-up is conducted according to the Lung-RADS guidelines," stresses Prof. Shaham. "Only once the possibility of cancer has been ruled out does the patient return to repeating the screening once every two years." "Anyone who meets the criteria and is interested can ask their family doctor for a referral. It is important to know that the recommendation abroad is to perform the screening from as early as age 50, as was done in the pilot, and that it is possible to request the test from age 50 as well," says Dr. Shilo, adding that "where there is no entitlement, the test can also be performed privately, outside the basket". It should be noted that the Israeli Lung Cancer Association has asked for eligibility in the health basket to be extended to age 50 and up. Behind the screening stands complex professional work by radiologists, trained to interpret low-dose CT scans performed without the use of contrast material. "Radiology is something of a 'transparent' profession; the public encounters our work more than it encounters us, but our role is highly significant," says Dr. Shilo. "At times the radiologist is the axis around which the screening test turns. Often it is the radiologist who flags an early sign of cancer or spares the patient unnecessary further tests," explains Prof. Shaham. In the future, Prof. Shaham hopes, radiologists will also be assisted by artificial intelligence. AI can identify nodules, measure them, and compare them with previous scans. In addition, AI systems are better able to pick up other lung problems, such as bronchiectasis (a widening of the airways) or changes associated with COPD (chronic obstructive pulmonary disease). The scan thus becomes not only a tool for the early detection of lung cancer but also a means of identifying other medical needs in the person examined. "We recommend reducing and quitting smoking entirely" | Photo: Dafna A.meron, Shutterstock. Now, after the success of the pilot and the entry of the screening into the health basket, the significant challenge in Dr. Shilo's eyes is raising awareness among those eligible and getting them to actually come in and be tested. "Recent years have brought remarkable technological developments in treatments for lung cancer, and the chance of reaching a cure is much higher when the disease is caught early. In addition, there are now short courses of drug treatment before and after surgery that improve the odds, reduce the extent of the surgical procedures, and lower the risk that the disease will return in the future." The target population of the screening test is, as noted, current smokers and heavy former smokers, who account for most lung cancer cases, yet they are precisely the people who may be deterred from coming in to be tested at all. "There are stigmas that attach to smokers, not least feelings of guilt," says Dr. Shilo. "Of course we recommend cutting down and stopping smoking entirely, but it is important to remember that this is an addiction. Most smokers started smoking at a young age, when the brain is still developing and the addiction takes hold more strongly, so we are not here to point a finger or judge smokers. At the same time, those who do not manage to quit, and those who already have, should take responsibility for their own health and come for early-detection testing." Family members can also play a part in encouraging their loved ones to get checked, as a personal story Dr. Shilo relates makes clear: "A few years ago, in the pilot we ran, among the tests performed three came back positive, and one of them was of a woman whose daughter had registered her for the test. She was diagnosed with lung cancer at an early stage, underwent surgery, and today she is healthy; that registration quite literally saved her. This is why families matter so much: they can push the people dear to them to get tested, even if the conversation itself is neither pleasant nor simple," she says, adding: "There is no doubt that awareness of lung cancer and its symptoms is critical here. A test that can catch the disease early improves the chances of cure dramatically." |
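The eligibility rule in the article (ages 65-74, current smokers or those who quit within the past 15 years, at least 20 pack-years) and its Lung-RADS follow-up ladder translate naturally into a small decision helper. The Python sketch below is illustrative only: the function names are invented, the two-year and three/six-month intervals paraphrase the article rather than an official clinical protocol, and mapping grade 3 to six months and 4A to three months follows common Lung-RADS practice where the article says only "three or six months".

```python
# Sketch of the screening rules described in the article. Hypothetical
# helper, not clinical software.

def pack_years(packs_per_day: float, years_smoked: float) -> float:
    """Pack-years = packs per day x years smoked.
    One pack/day for 20 years = 20; two packs/day for 10 years = 20."""
    return packs_per_day * years_smoked

def eligible(age: int, smokes_now: bool, years_since_quit: float,
             packs_per_day: float, years_smoked: float) -> bool:
    """Health-basket criteria per the article: ages 65-74, current
    smokers or ex-smokers who quit within 15 years, >= 20 pack-years."""
    smoking_ok = smokes_now or years_since_quit <= 15
    return (65 <= age <= 74 and smoking_ok
            and pack_years(packs_per_day, years_smoked) >= 20)

def follow_up(lung_rads: str) -> str:
    """Follow-up action per the article's account of Lung-RADS grades."""
    return {
        "1": "routine result: repeat screening in two years",
        "2": "routine result: repeat screening in two years",
        "3": "repeat low-dose CT in six months",   # usually benign
        "4A": "repeat low-dose CT in three months",
        "4B": "significant suspicion: multidisciplinary work-up",
    }.get(lung_rads, "unknown grade")

# Example: 68-year-old who quit 5 years ago after a pack a day for 30 years.
print(eligible(68, False, 5, 1.0, 30))  # True (30 pack-years)
print(follow_up("4A"))                  # repeat low-dose CT in three months
```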
======================================== |
[SOURCE: https://www.theverge.com/tech/881862/a-watershed-moment-for-tech] | [TOKENS: 631] |
Posted Feb 20, 2026 at 12:00 PM UTC | Dominic Preston, News Editor | A watershed moment for tech. In case you haven't heard, RAMageddon is here. And while the tech industry will survive this apocalypse, it might well be forever changed by it. letechguye: "This feels like a watershed moment for the personal computer and consumer tech in general. No matter what happens with AI, it's clear tech companies no longer value consumers/users at all. Price hikes, subscription models, and 'protect the children' will continue until eventually every device is a thin client that requires a call home. But not before being intercepted by Palantir. It's quite sad." |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Lime_Grove_Studios] | [TOKENS: 1200] |
Contents Lime Grove Studios Lime Grove Studios was a film, and later television, studio complex in Shepherd's Bush, west London, England. The complex was built by the Gaumont Film Company in 1915. It was situated in Lime Grove, a residential street in Shepherd's Bush, and when it first opened was described by Gaumont as "the finest studio in Great Britain and the first building ever put up in this country solely for the production of films". Many Gainsborough Pictures films were made here from the early 1930s. Its sister studio was Islington Studios, also used by Gainsborough; films were often shot partly at Islington and partly at Lime Grove. In 1949, the complex was purchased by the BBC, who used it for television broadcasts until 1991. It was demolished in 1993. Gaumont-British Picture Corporation In 1922, Isidore Ostrer, along with brothers Mark and Maurice, acquired control of Gaumont-British from its French parent. In 1932 a major redevelopment of Lime Grove Studios was completed, creating one of the best equipped sound studio complexes of that era. The first film produced at the remodelled studio was the Walter Forde thriller Rome Express (1932), which became one of the first British sound films to gain critical and financial success in the United States (where it was distributed by Universal Pictures). The studios prospered under Gaumont-British, and in 1941 were bought by the Rank Organisation. By then Rank had a substantial interest in Gainsborough Pictures, and The Wicked Lady (1945), among other Gainsborough melodramas, was shot at Lime Grove. BBC studios In 1949 the BBC bought Lime Grove Studios as a "temporary measure"—because they were to build Television Centre at nearby White City—and began converting them from film to television use. The BBC studios were ceremonially opened on 21 May 1950 by Violet Attlee (wife of the then prime minister Clement Attlee). Lime Grove would be used for many BBC Television programmes over the next forty-two years, including: Quatermass II; Andy Pandy; The Sky at Night; Dixon of Dock Green; Nineteen Eighty-Four; Steptoe and Son; Doctor Who; Nationwide; Panorama; and The Grove Family, whose title family was named after the studios where it was made. A children's magazine-style programme, Studio E, was broadcast live from the studio of the same name from 1955 until 1958; it was hosted by Vera McKechnie. The Queen and Prince Philip visited Lime Grove on 28 October 1953, when they observed production of the variety show For Your Pleasure, the quiz show Animal, Vegetable, Mineral?, and a drama production, The Disagreeable Man. On 20 January 1966, the first edition of Top of the Pops from Lime Grove was broadcast, hosted by David Jacobs. The newly successful show had moved south from its original home at Dickenson Road Studios, a converted church building in Manchester, to the larger studio facilities at Lime Grove, where the production could attract a more trendy "Swinging London" studio audience. Top of the Pops was produced at Lime Grove for three years until the show moved to BBC Television Centre in 1969. Lime Grove hosted a revolution in British TV when Breakfast Time began broadcasting from there on 17 January 1983, marking the start of popular daytime television, hosted by Frank Bough, Selina Scott and Nick Ross. Lime Grove's use for programmes outside current affairs declined over time, and later episodes of continuing series were made at BBC Television Centre and BBC Elstree Centre.
Indeed, in Lime Grove Studios' final years, its official name was Lime Grove Current Affairs Production Centre. For a BBC recording and broadcast, Humble Pie performed Desperation (a Steppenwolf single that appeared on the debut albums of both Steppenwolf and Humble Pie), Natural Born Bugie (their debut single), Heartbeat (a Buddy Holly single), and The Sad Bag of Shaky Jake (their second single). Led Zeppelin performed White Summer and Black Mountain Side there, on The Julie Felix Show, on 23 April 1970. In 1991 the BBC decided to consolidate its London television production at the nearby BBC Television Centre and to close its other studios, including Lime Grove. The last live programme to be broadcast from Lime Grove was The Late Show on 13 June 1991 from Studio D, although the final portion of the programme, with a symbolic "unplugging" of a camera power cord in Studio D by Cliff Michelmore, was pre-recorded. On 26 August 1991, a month after the studios were closed, the BBC transmitted a special day of programming called The Lime Grove Story, featuring examples of the many programmes and films that had been made at Lime Grove in its 76 years as a place of film and television production. The BBC Television Theatre close by, near Shepherd's Bush Green, reverted to being the Shepherd's Bush Empire. By the end, the building was in such a poor state of repair that the remaining BBC staff nicknamed it "Slime Grove". The building was put on the market and eventually bought by a development company, Notting Hill Housing Association, which demolished the studios in 1993 and redeveloped the site into a housing estate. The streets in the estate were named Gaumont Terrace and Gainsborough Court, in memory of the past owners of Lime Grove Studios. In popular culture Lime Grove Studios was the setting for the fictional current affairs programme The Hour in the BBC drama of the same name. The studios are also represented in the 2013 drama An Adventure in Space and Time, which was shot at Wimbledon Studios. See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Shepherd%27s_Bush] | [TOKENS: 3407] |
Contents Shepherd's Bush Shepherd's Bush is a suburb of West London, England, within the London Borough of Hammersmith and Fulham, 4.9 miles (7.9 km) west of Charing Cross, and identified as a major metropolitan centre in the London Plan. Although primarily residential in character, its focus is the shopping area of Shepherd's Bush Green, with the Westfield London shopping centre a short distance to the north. The main thoroughfares are Uxbridge Road, Goldhawk Road and Askew Road, all with small and mostly independent shops, pubs and restaurants. Loftus Road football stadium in Shepherd's Bush is home to Queens Park Rangers. In 2011, the population of the area was 39,724. The district is bounded by Hammersmith to the south, Holland Park and Notting Hill to the east, Harlesden and Kensal Green to the north and by Acton and Chiswick to the west. White City forms the northern part of Shepherd's Bush. Shepherd's Bush comprises the Shepherd's Bush Green, Askew, College Park & Old Oak, and Wormholt and White City wards of the borough. History The name Shepherd's Bush is thought to have originated from the use of the common land here as a resting point for shepherds on their way to Smithfield Market in the City of London. An alternative theory is that it could have been named after someone in the area, because in 1635 the area was recorded as "Sheppard's Bush Green". Evidence of human habitation can be traced back to the Iron Age. Shepherd's Bush enters the written record in the year 704, when it was bought by Waldhere, Bishop of London, as part of the "Fulanham" estate. A map of London dated 1841 shows Shepherd's Bush to be largely undeveloped and chiefly rural in character, with much open farmland, compared with fast-developing Hammersmith. Residential development began in earnest in the late 19th century, as London's population expanded relentlessly. In 1904 the Catholic Church of Holy Ghost and St Stephen, built in the Gothic style with a triple-gabled facade of red brick and Portland stone, was completed and opened to the public. Like other parts of London, Shepherd's Bush suffered from bomb damage during World War II, especially from V-1 flying bomb attacks (known as "doodlebugs" or "buzzbombs"), which struck randomly and with little warning. On 13 April 1963, the Beatles recorded their first-ever BBC Television broadcast at Lime Grove Studios in Shepherd's Bush. The group returned in 1964 for a further recording. Lime Grove Studios was demolished in 1994 to make way for residential accommodation. More recently, the White City bus station is housed in the redeveloped Dimco Buildings (1898), Grade II listed red brick buildings which were originally built in 1898 as a shed for a London Underground power station. The Dimco buildings were used as a filming location for the 'Acme Factory' in the 1988 film Who Framed Roger Rabbit, and later served as the interior of the British Museum in The Mummy Returns. Geography The area's focal point is Shepherd's Bush Green (also known as Shepherds Bush Common), a triangular area of about 8 acres (3 ha) of open grass surrounded by trees and roads with shops, with Westfield shopping centre to its north. The Green is a hub on the local road network, with four main roads radiating from the western side of the green and three roads approaching its eastern apex, meeting at the large Holland Park Roundabout. This position makes it an important node of the bus network, with eighteen bus routes arriving there.
It is also served by five London Underground stations (see Transport below): Shepherd's Bush and White City, both on the Central line, and Shepherd's Bush Market, Goldhawk Road and Wood Lane, all on the Hammersmith & City and Circle lines. To the east, Shepherd's Bush is bounded by the physical barrier of the West London railway line and the grade-separated West Cross Route (part of the aborted 1960s London Motorway Box scheme); the Holland Park Roundabout and the small Addison Bridge to the south are the only ways to cross this barrier from Shepherds Bush. Most of the areas to the east of the barrier differ significantly in character, being associated with the more affluent Holland Park and Notting Hill, although the Edward Woods Estate just to the north-east of the roundabout is part of, and is managed by, the London Borough of Hammersmith and Fulham. To the south, Shepherd's Bush neighbours Brook Green and Hammersmith. Commerce Commercial activity in Shepherd's Bush is now focused on the Westfield shopping centre next to Shepherd's Bush Central line station and on the many small shops which run along the northern side of the Green. Originally built in the 1970s with a rooftop car park and connecting bridge to the station, the older West 12 Shepherds Bush shopping centre was significantly redeveloped in the 1990s. The bridge was removed, and the centre now houses several chain stores, a 12-screen cinema, gym, pub, restaurants, a medical practice and a supermarket. The small shops continue along many of the most popular roads within Shepherd's Bush, such as Uxbridge Road. Many of these establishments cater for the local ethnic minority communities. For example, a relatively large proportion of the local shops on Goldhawk Road (south of the Green) are dedicated to Ethiopian culture, whether through food, clothing or barbershops (see Demographics). Running parallel to, and partly under, an elevated section of the Hammersmith & City line there is a large permanent market, the Shepherd's Bush Market, selling all types of foodstuffs, cooked food, household goods, clothing and bric-à-brac. The Westfield Group (with Hausinvest Europa) opened a shopping centre in October 2008. Office buildings As well as the offices within the Television Centre on Wood Lane, opposite this is Network House, 1 Ariel Way, a 20,000 sq ft (2,000 m2) building that was let by Frost Meadowcroft on behalf of Westfield to Zodiak Entertainment in September 2009, and in Rockley Road is the 160,000 sq ft (15,000 m2) Shepherds Building, where Endemol, another TV company, is based, and to which Jellycat, a soft toy company, relocated its head office in February 2010. Residential The residential areas of Shepherd's Bush are primarily located to the west of the Green, either side of Uxbridge Road and Goldhawk Road to the southwest, and about as far as Askew Road in the west. Much of the housing in this area consists of three- or four-storey terraces dating from the late 19th century, subsequently divided up into small flats. Shepherd's Bush is also home to the White City Estate, a housing estate that was originally constructed in the 1930s and further extended after the war in the early 1950s. It was built on the site of the grounds of the 1908 Franco-British Exhibition and close to the White City Stadium, and has given its name to the northern part of Shepherd's Bush known as White City.
The London Borough of Hammersmith and Fulham has created the Shepherd's Bush Conservation Area in order to promote the protection of local buildings of historic interest, and improve the character of the neighbourhood. Transport Shepherd's Bush is a major transport interchange in west London. Five London Underground stations serve the area (Shepherd's Bush, White City, Shepherd's Bush Market, Goldhawk Road and Wood Lane), all in London fare zone 2. The Central line links the area to Ealing and areas of north-west London, such as Greenford and Ruislip. To the east, the line links Shepherd's Bush to London's West End, the City, and Stratford. The Circle and Hammersmith & City lines share the same route through the area, with direct services southbound to nearby Hammersmith. To the north, the lines curve eastwards towards Latimer Road and Ladbroke Grove. The lines then run directly to key destinations such as Paddington, King's Cross, Moorgate in the City, and the East End. Shepherd's Bush railway station is served by National Rail trains operated by London Overground and Southern. There are direct services from Shepherd's Bush to Kensington, to Clapham Junction and Balham in the south-west of London, and to Croydon in the south-east of London. Northbound Southern services link the area to Wembley, Watford, Hemel Hempstead, and Milton Keynes. London Overground services running northbound travel towards Willesden Junction, where services continue towards West Hampstead, Camden, Hackney, and Stratford in east London. The station is an out-of-station interchange with Shepherd's Bush tube station on the Central line, and is situated on the western side of Holland Park Roundabout. There are two main bus interchanges in Shepherd's Bush. London Buses routes 31, 49, 72, 94, 95, 148, 207, 220, 228, 237, 260, 272, 283, 295, 316, SL8, N72, N207, and C1 serve Shepherd's Bush Green and the southern side of the Westfield shopping centre. Most of these routes also serve White City bus station on the northern side of Westfield. Shepherd's Bush was also the proposed terminus of the West London Tram, an on-street light rail line running to Uxbridge via Acton, Ealing and Southall. This project was cancelled in 2007 in favour of an enhanced bus service and the development of Crossrail. Cycle lanes run around the southern rim of the Holland Park Roundabout on the eastern side of Shepherd's Bush. This provides cyclists with traffic-free access from Holland Park Avenue to Shepherd's Bush Green. Transport for London (TfL) proposes that a cycle spur will link the roundabout to Cycleway 9, which is intended to run along Kensington High Street. The Santander Cycles bicycle-sharing system operates around Shepherd's Bush, with docking stations near Westfield, Wood Lane station, and Shepherd's Bush Road. The A3220/West Cross Route runs along the eastern rim of the district. Until 2000, the route was the M41 motorway, part of the abandoned London Ringways network of orbital roads in London. Although the route no longer holds motorway status, pedal cycles are prohibited from using it northbound. The A3220 links Shepherd's Bush with the A40/Westway to the north. This provides the area with a dual-carriageway link to Paddington and Marylebone to the east, and westbound to Acton and the M40 motorway. Southbound, the A3220 is named Holland Road and links the area to Earl's Court, the A4, and Chelsea.
Several other key routes also pass through Shepherd's Bush. In popular culture The junkyard in the sitcom Steptoe & Son was situated at the fictional Oil Drum Lane, Shepherd's Bush. The area is often referred to in the BBC series Absolutely Fabulous, where the main character, Edina Monsoon, owns her home there but prefers to say she lives in nearby, more upmarket Holland Park. The BBC used to have a number of offices in Shepherd's Bush; however, many have now been closed or moved. They included the Lime Grove Studios, on the site of the earlier Gaumont and Gainsborough Pictures film studios; Sulgrave House; Threshold and Union Houses; and Kensington House, now a hotel. The BBC's presence in the Bush is now concentrated in two huge sites on Wood Lane, Television Centre and the White City building. The Media Village was built next to the White City building in the mid-1980s on the former site of the White City Stadium. It is used by the BBC and other media companies including Red Bee Media (formerly BBC Broadcast, now a private company). Television Centre was the national home of BBC Television, and it is from there that BBC TV and radio news, the BBC website and a host of TV drama and light entertainment were broadcast. The BBC moved all of its news operations from Television Centre to Broadcasting House in central London in 2012. Shepherd's Bush Green The newly regenerated Green was the site in 2012-13 of Goaloids, public sculptures by the fine artist Elliott Brook. The artwork received an Inspire Mark from LOCOG (the London Organising Committee of the Olympic and Paralympic Games), making it part of the Cultural Olympiad, and was installed on Shepherd's Bush Green for the duration of London 2012 and the Paralympic Games. These large rotating football-themed sculptures commemorated the history of Shepherd's Bush and White City, which hosted the 1908 Summer Olympics football. The London Borough of Hammersmith and Fulham is the only borough to have three football teams playing Premier League football. Bush Theatre is a writing theatre, situated on the Green. Shepherd's Bush Empire is a music venue and former television studio, and has played host to a number of acts and TV programmes, including David Bowie, the Rolling Stones, Bob Dylan, The Old Grey Whistle Test, Wogan, That's Life!, Crackerjack, and This Is Your Life. Bush Hall is a venue at 310 Uxbridge Road, built in 1904 as a dance hall; it predominantly showcases smaller acoustic performers. Shepherd's Bush Walkabout was a music and live sports venue located on the western end of the Green, and home to the West London Wildcats and Shepherds Bush Raiders Aussie Rules teams. On Australian and New Zealand national holidays, during big sporting events such as the National Rugby League Grand Final, the Rugby Championship and Bledisloe Cup rugby union test matches, and the Australian Football League grand final, on memorial days such as Waitangi Day, Australia Day, and Anzac Day, and on Sundays after The Church, the Shepherd's Bush Walkabout was the centre of Antipodean life in London. The live music was usually a mixture of up-and-coming local acts and cover bands who played Australian and New Zealand classic songs and contemporary popular music. Shepherd's Bush Walkabout closed in early October 2013 and it was announced the site would be redeveloped into a hotel. A number of influential music groups originate from in and around Shepherd's Bush. The Who infused much of their work with the youth culture of Shepherd's Bush during the 1960s and 1970s.
Steve Jones, guitarist of punk legends the Sex Pistols, was born in Shepherd's Bush, and Pistols drummer Paul Cook grew up in the area. The Clash's early work is infused with the culture of Shepherd's Bush and the Westway. Libertines and Babyshambles frontman Pete Doherty moved to Shepherd's Bush at age 16. Tony Butler, bass player with the 1980s band Big Country and others, was born in Shepherd's Bush. The bands Bush and Symposium hail from Shepherd's Bush, the former taking their name from the area. Classical musicians Evelyn Glennie and Robert Steadman have both lived in Shepherd's Bush. In the Westfield shopping centre area at White City, the Grade II listed Dimco buildings (1898), now redeveloped as a bus station, were used as the location for the 'Acme Factory' in the 1988 film Who Framed Roger Rabbit. Shepherd's Bush is home to Queens Park Rangers football club, who play their home games at Loftus Road. Olympic gold medal winner Linford Christie also grew up in Shepherd's Bush and lived in Loftus Road as a child. A stadium on nearby Wormwood Scrubs is named the Linford Christie Stadium in his honour. Some of the football games in the 1908 Olympics were hosted in Shepherd's Bush. Shepherds Bush F.C. were the local side until 1915. Former England national rugby union team captain Lawrence Dallaglio was born in Shepherd's Bush. Politics At Westminster, Shepherd's Bush is represented by Andy Slaughter, the Labour Party MP for the constituency of Hammersmith and Chiswick, which includes Shepherd's Bush. Gallery See also References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Gaza_Subdistrict,_Mandatory_Palestine] | [TOKENS: 264] |
Contents Gaza Subdistrict, Mandatory Palestine The Gaza Subdistrict (Arabic: قضاء غزة; Hebrew: נפת עזה) was one of the subdistricts of Mandatory Palestine. It was situated on the southern Mediterranean coastline of the British Mandate of Palestine. After the 1948 Arab-Israeli War, the district disintegrated, with Israel controlling the northern and eastern portions while Egypt held control of the southern and central parts, which became the Gaza Strip. The Strip was under Egyptian military rule between 1948 and 1967 and under Israeli military rule between 1967 and 2005, was part of the Palestinian National Authority (with some aspects of Israeli rule retained until the 2005 withdrawal) from the Oslo Accords until 2007, and is currently ruled by Hamas as a de facto separate entity from the Palestinian National Authority. The parts which Israel has held since 1948 were merged into Israeli administrative districts, and their connection with Gaza was severed. Borders Towns and villages All of the localities captured by Israel were depopulated prior to, during or after the 1948 war; al-Majdal was not destroyed. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Computer#CITEREFLavington1998] | [TOKENS: 10628] |
Contents Computer A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations (computation). Modern digital electronic computers can perform generic sets of operations known as programs, which enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation, or to a group of computers that are linked and function together, such as a computer network or computer cluster. A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users. Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power, and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace (Moore's law noted that counts doubled every two years), leading to the Digital Revolution during the late 20th and early 21st centuries. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joysticks, etc.), output devices (monitors, printers, etc.), and input/output devices that perform both functions (e.g. touchscreens). Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved. Etymology It was not until the mid-20th century that the word acquired its modern definition; according to the Oxford English Dictionary, the first known use of the word computer was in a different sense, in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: "I haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic] breathed, and he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued to have the same meaning until the middle of the 20th century. 
During the latter part of this period, women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an "agent noun from compute (v.)". It states that the use of the term to mean "'calculating machine' (of any type) is from 1897", and that the "modern use" of the term, to mean 'programmable digital electronic computer', dates from "1945 under this name; [in a] theoretical [sense] from 1937, as Turing machine". The name has remained, although modern computers are capable of many higher-level functions. History Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers.[a] The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd century BCE and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
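The mechanism behind the slide rule is the basic property of logarithms, which turns multiplication into the addition of scale lengths; this is standard mathematics, stated here for clarity rather than taken from the source text:

    $\log(ab) = \log a + \log b, \qquad \log(a/b) = \log a - \log b$

Sliding one logarithmic scale along another adds the lengths proportional to $\log a$ and $\log b$, so the product $ab$ can be read off directly where the scales align.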
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft. In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton) that could write holding a quill pen. By switching the number and order of its internal wheels, different letters, and hence different messages, could be produced. In effect, it could be mechanically "programmed" to read instructions. Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates. In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which through a system of pulleys and cylinders could predict the perpetual calendar for every year from 0 CE (that is, 1 BCE) to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials; his designs were published in 1901 by the Paris Academy of Sciences. Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his difference engine, which was also designed to aid in navigational calculations, he announced his invention in 1822 in a paper to the Royal Astronomical Society titled "Note on the application of machinery to the computation of astronomical and mathematical tables". In 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The engine would incorporate an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. 
The machine was about a century ahead of its time. All the parts for his machine had to be made by hand; this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties, as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its use in computing tables in 1906. In his work Essays on Automatics published in 1914, Leonardo Torres Quevedo wrote a brief history of Babbage's efforts at constructing a mechanical Difference Engine and Analytical Engine. The paper contains a design of a machine capable of calculating formulas like $a^{x}(y-z)^{2}$ for a sequence of sets of values. The whole machine was to be controlled by a read-only program, which was complete with provisions for conditional branching. He also introduced the idea of floating-point arithmetic. In 1920, to celebrate the 100th anniversary of the invention of the arithmometer, Torres presented in Paris the Electromechanical Arithmometer, which allowed a user to input arithmetic problems through a keyboard, and computed and printed the results, demonstrating the feasibility of an electromechanical analytical engine. During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson (later to become Lord Kelvin) in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson. The art of mechanical analog computing reached its zenith with the differential analyzer, completed in 1931 by Vannevar Bush at MIT. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education (slide rule) and aircraft (control systems). Claude Shannon's 1937 master's thesis laid the foundations of digital computing, with his insight of applying Boolean algebra to the analysis and synthesis of switching circuits being the basic concept which underlies all electronic digital computers. By 1938, the United States Navy had developed the Torpedo Data Computer, an electromechanical analog computer for submarines that used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II, similar devices were developed in other countries. Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. 
The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer. In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system used in Charles Babbage's earlier design, meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete. Zuse's next computer, the Z4, became the world's first commercial computer; after an initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, founded in Berlin in 1941 as the first company with the sole purpose of developing computers. The Z4 served as the inspiration for the construction of the ERMETH, the first Swiss computer and one of the first in Europe. Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer (ABC) in 1942, the first "automatic electronic digital computer". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes, which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February. Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total). 
Colossus Mark I contained 1,500 thermionic valves (tubes), but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process. The ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster and more flexible, and it was Turing-complete. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the "ENIAC girls". It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power, and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called the "Universal Computing machine" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions (a program) stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in the theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine. Early computing machines had fixed programs: changing a machine's function required re-wiring and re-structuring it. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions (a program) that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report "Proposed Electronic Calculator" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945. The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. 
It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as "small and primitive" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947, the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job. The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications. At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960 and was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. With its high scalability, much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. 
The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics. The next great advance in computing power came with the advent of the integrated circuit (IC). The idea of the integrated circuit was first conceived by Geoffrey W. A. Dummer, a radar scientist working for the Royal Radar Establishment of the Ministry of Defence. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952. The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated". However, Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (IC) chip. Kilby's IC had external wire connections, which made it difficult to mass-produce. Noyce came up with his own idea of an integrated circuit half a year after Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Carl Frosch and Lincoln Derick's work on semiconductor surface passivation by silicon dioxide. Modern monolithic ICs are predominantly MOS (metal–oxide–semiconductor) integrated circuits, built from MOSFETs (MOS transistors). The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate (silicon-gate) MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs. The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[b] In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip. Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a coin. They may or may not have integrated RAM and flash memory. 
If not integrated, the RAM is usually placed directly above (known as package on package) or below (on the opposite side of the circuit board) the SoC, and the flash memory is usually placed right next to the SoC. This is done to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power. The first mobile computers were heavy and ran from mains power. The 50 lb (23 kg) IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries, and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s. These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. They are powered by systems on a chip (SoCs), complete computers on a microchip the size of a coin. Types Computers can be classified in a number of different ways. A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word "computer" is synonymous with a personal electronic computer,[c] a typical modern definition of a computer is: "A device that computes, especially a programmable [usually] electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information." According to this definition, any device that processes information qualifies as a computer. Hardware The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboards, displays, power supplies, cables, keyboards, printers and mice are all hardware. A general-purpose computer has four main components: the arithmetic logic unit (ALU), the control unit, the memory, and the input and output devices (collectively termed I/O). These parts are interconnected by buses, often made of groups of wires. Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of information, so that when the circuit is on it represents a "1", and when off it represents a "0" (in positive logic representation). The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits. Input devices, such as keyboards and mice, are the means by which the operations of a computer are controlled and data is provided to it. Output devices, such as monitors and printers, are the means by which a computer provides the results of its calculations in a human-accessible form. 
The control unit (often called a control system or central controller) manages the computer's various components; it reads and interprets (decodes) the program instructions, transforming them into control signals that activate other parts of the computer.[e] Control systems in advanced computers may change the order of execution of some instructions to improve performance. A key component common to all CPUs is the program counter, a special memory cell (a register) that keeps track of which location in memory the next instruction is to be read from.[f] The control system repeatedly fetches an instruction from the memory location indicated by the program counter, decodes it, executes it, and updates the program counter to point to the next instruction; this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU. Since the program counter is (conceptually) just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as "jumps" and allow for loops (instructions that are repeated by the computer) and often conditional instruction execution (both examples of control flow). The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is another yet smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen. The control unit, ALU, and registers are collectively known as a central processing unit (CPU). Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor. The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers (integers) while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation, although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values (true or false) depending on whether one is equal to, greater than or less than the other ("is 64 greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic. Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices. A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered "address" and can store a single number. 
The computer can be instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595." The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers. In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256), either from 0 to 255 or from −128 to +127. To store larger numbers, several consecutive bytes may be used (typically, two, four or eight). When negative numbers are required, they are usually stored in two's complement notation; in eight-bit two's complement, for example, the bit pattern 11111111 represents −1. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory. The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory (which is often slow compared to the ALU and control units) greatly increases the computer's speed. Computer main memory comes in two principal varieties: random-access memory (RAM) and read-only memory (ROM). RAM can be read and written to anytime the CPU commands it, but ROM is preloaded with data and software that never changes, so the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary.[g] In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally, computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part. I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. 
Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O. I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry. While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running "at the same time", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time, even though only one is ever executing in any given instant. This method of multitasking is sometimes termed "time-sharing", since each program is allocated a "slice" of time in turn. Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks. If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a "time slice" until the event it is waiting for has occurred. This frees up time for other programs to execute, so that many programs may be run simultaneously without unacceptable speed loss. Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result. Supercomputers in particular often have highly distinctive architectures that differ significantly from the basic stored-program architecture and from general-purpose computers.[h] They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called "embarrassingly parallel" tasks. 
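To make the control unit's fetch-decode-execute cycle and the role of the program counter described above concrete, here is a minimal sketch of a stored-program machine in Python. It is not taken from the article: the instruction set, register names and memory layout are invented purely for illustration and do not correspond to any real CPU.

    # A toy stored-program machine. Instructions and data live in memory,
    # a program counter (pc) tracks the next instruction, and a conditional
    # jump works by overwriting the pc, exactly as described above.
    MEMORY = [
        ("LOAD",  0),     # acc <- data[0]
        ("ADD",   1),     # acc <- acc + data[1]
        ("JUMPZ", 5),     # if acc == 0, jump to address 5
        ("STORE", 2),     # data[2] <- acc
        ("HALT",  None),
        ("STORE", 3),     # only reached via the jump
        ("HALT",  None),
    ]
    DATA = [40, 2, 0, 0]

    def run(memory, data):
        pc = 0                              # program counter
        acc = 0                             # accumulator register
        while True:
            opcode, operand = memory[pc]    # fetch and decode
            pc += 1                         # advance to the following instruction
            if opcode == "LOAD":
                acc = data[operand]
            elif opcode == "ADD":
                acc += data[operand]
            elif opcode == "STORE":
                data[operand] = acc
            elif opcode == "JUMPZ":         # conditional jump: modify the pc
                if acc == 0:
                    pc = operand
            elif opcode == "HALT":
                return data

    print(run(MEMORY, DATA))                # prints [40, 2, 42, 0]

Running it prints [40, 2, 42, 0]: the machine loads 40, adds 2, skips the conditional jump because the accumulator is non-zero, stores the result 42 in cell 2, and halts.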
Software Software is the part of a computer system that consists of the encoded information that determines the computer's operation, such as data or instructions on how to process the data. In contrast to the physical hardware from which the system is built, software is immaterial. Software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other, and neither is useful on its own. When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called "firmware". The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions (the program) can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers, for example. A typical modern computer can execute billions of instructions per second (gigaflops) and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors. This section applies to most common RAM machine-based computers. In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out (executed) in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen conditionally, so that different sequences of instructions may be used depending on the result of some previous calculation or some external event. Many computers directly support subroutines by providing a type of jump that "remembers" the location it jumped from and another instruction to return to the instruction following that jump instruction. Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program, and it is what allows the computer to perform tasks repeatedly without human intervention. Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. 
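The following example is written in the MIPS assembly language. The original listing did not survive extraction, so this is a reconstruction in the same spirit: a loop that sums the integers from 1 to 1,000 (the register choices and labels are illustrative, not necessarily the article's exact code).

    begin:
            addi $8, $0, 0          # initialize the running sum (register $8) to 0
            addi $9, $0, 1          # set the first number to add (register $9) to 1
    loop:
            slti $10, $9, 1001      # $10 = 1 while the current number is still <= 1000
            beq  $10, $0, finish    # once the number exceeds 1000, leave the loop
            add  $8, $8, $9         # add the current number to the running sum
            addi $9, $9, 1          # advance to the next number
            j    loop               # jump back and repeat
    finish:
            add  $2, $8, $0         # copy the result (500500) into output register $2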
Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake, and a modern PC can complete the task in a fraction of a second. In most computers, individual instructions are stored as machine code with each instruction being given a unique number (its operation code or opcode for short). The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes. This leads to the important fact that entire programs (which are just lists of these instructions) can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches. While it is possible to write computer programs as long lists of numbers (machine language), and while this technique was used with many early computers,[i] it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember, a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand (machine language) is usually done by a computer program called an assembler. A programming language is a notation system for writing the source code from which a computer program is produced. Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques. There are thousands of programming languages; some are intended for general-purpose programming, while others are useful only for highly specialized applications. Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) are generally unique to the particular architecture of a computer's central processing unit (CPU). 
For instance, an ARM architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot understand the machine language of an x86 CPU that might be in a PC.[j] Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80. Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently (and thereby help reduce programmer error). High-level languages are usually "compiled" into machine language (or sometimes into assembly language and then into machine language) using another computer program called a compiler.[k] High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem(s) to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles. Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge. Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge. Errors in computer programs are called "bugs". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to "hang", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design.[l] Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term "bugs" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947. Networking and the Internet Computers have been used to coordinate information between multiple physical locations since the 1950s. The U.S. 
military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA (now DARPA), and the computer network that resulted was called the ARPANET. Logic gates are a common abstraction which can apply to most digital or analog computing paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability (being Turing-complete) is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to perform the same computational tasks, given enough time and storage capacity. In the 20th century, artificial intelligence systems were predominantly symbolic: they executed code that was explicitly programmed by software developers. Machine learning models, however, have a set of parameters that are adjusted throughout training, so that the model learns to accomplish a task based on the provided data. The efficiency of machine learning (and in particular of neural networks) has rapidly improved with progress in hardware for parallel computing, mainly graphics processing units (GPUs). Some large language models are able to control computers or robots. AI progress may lead to the creation of artificial general intelligence (AGI), a type of AI that could accomplish virtually any intellectual task at least as well as humans. Professions and organizations As the use of computers has spread throughout society, there are an increasing number of careers involving computers. The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Steve_Omohundro] | [TOKENS: 849] |
Contents Steve Omohundro Stephen Malvern Omohundro (born 1959) is an American computer scientist whose areas of research include Hamiltonian physics, dynamical systems, programming languages, machine learning, machine vision, and the social implications of artificial intelligence. His current work uses rational economics to develop safe and beneficial intelligent technologies for better collaborative modeling, understanding, innovation, and decision making. Education Omohundro has degrees in physics and mathematics from Stanford University (Phi Beta Kappa) and a Ph.D. in physics from the University of California, Berkeley. Learning algorithms Omohundro started the "Vision and Learning Group" at the University of Illinois, which produced four master's and two Ph.D. theses. His work in learning algorithms included a number of efficient geometric algorithms, the manifold learning task and various algorithms for accomplishing it, other related visual learning and modelling tasks, the best-first model merging approach to machine learning (including the learning of Hidden Markov Models and Stochastic Context-free Grammars), and the Family Discovery Learning Algorithm, which discovers the dimension and structure of a parameterized family of stochastic models. Self-improving artificial intelligence and AI safety Omohundro started Self-Aware Systems in Palo Alto, California to research the technology and social implications of self-improving artificial intelligence. He is an advisor to the Machine Intelligence Research Institute on artificial intelligence. He argues that rational systems exhibit problematic natural "drives" that will need to be countered in order to build intelligent systems safely. His papers, talks, and videos on AI safety have generated extensive interest. He has given many talks on self-improving artificial intelligence, cooperative technology, AI safety, and connections with biological intelligence. Programming languages At Thinking Machines Corporation, Cliff Lasser and Steve Omohundro developed Star Lisp, the first programming language for the Connection Machine. Omohundro joined the International Computer Science Institute (ICSI) in Berkeley, California, where he led the development of the open source programming language Sather. Sather is featured in O'Reilly's History of Programming Languages poster. Physics and dynamical systems theory Omohundro's book Geometric Perturbation Theory in Physics describes natural Hamiltonian symplectic structures for a wide range of physical models that arise from perturbation theory analyses. He showed that there exist smooth partial differential equations which stably perform universal computation by simulating arbitrary cellular automata. The asymptotic behavior of these PDEs is therefore logically undecidable. With John David Crawford he showed that the orbits of three-dimensional period-doubling systems can form an infinite number of topologically distinct torus knots, and described the structure of their stable and unstable manifolds. Mathematica and Apple tablet contest From 1986 to 1988, he was an assistant professor of computer science at the University of Illinois at Urbana-Champaign, where he cofounded the Center for Complex Systems Research with Stephen Wolfram and Norman Packard. While at the University of Illinois, he worked with Stephen Wolfram and five others to create the symbolic mathematics program Mathematica. He and Wolfram led a team of students that won an Apple Computer contest to design "The Computer of the Year 2000." 
Their design entry "Tablet" was a touchscreen tablet with GPS and other features that finally appeared when the Apple iPad was introduced 22 years later. Other contributions Subutai Ahmad and Steve Omohundro developed biologically realistic neural models of selective attention. As a research scientist at the NEC Research Institute, Omohundro worked on machine learning and computer vision, and was a co-inventor of U.S. Patent 5,696,964, "Multimedia Database Retrieval System Which Maintains a Posterior Probability Distribution that Each Item in the Database is a Target of a Search." Omohundro developed an extension to the game theoretic pirate puzzle featured in Scientific American. Outreach Omohundro has sat on the Machine Intelligence Research Institute board of advisors. He has written extensively on artificial intelligence, and has warned that "an autonomous weapons arms race is already taking place" because "military and economic pressures are driving the rapid development of autonomous systems". References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Qilian_Mountains] | [TOKENS: 1039] |
Contents Qilian Mountains 39°12′N 98°32′E / 39.200°N 98.533°E The Qilian Mountains,[a] along with the Altyn-Tagh sometimes known as the Nan Shan[b] as it is to the south of the Hexi Corridor, are northern outliers of the Kunlun Mountains, forming the border between the Qinghai and Gansu provinces of China. Geography The range stretches from the south of Dunhuang some 800 km to the southeast, forming the northeastern escarpment of the Tibetan Plateau and the southwestern border of the Hexi Corridor. The eponymous Qilian Shan peak, situated some 60 km south of Jiuquan, at 39°12′N 98°32′E, rises to 5,547 m. It is the highest peak of the main range, but there are two higher peaks further south, Kangze'gyai at 38°30′N 97°43′E with 5,808 m and Qaidam Shan peak at 38°2′N 95°19′E with 5,759 m. Other major peaks include Gangshiqia Peak in the east. The Nan-Shan range continues to the west as Yema Shan (5,250 m) and Altun Shan (Altyn Tagh) (5,798 m). To the east, it passes north of Qinghai Lake, terminating as Daban Shan and Xinglong Shan near Lanzhou, with Maoma Shan peak (4,070 m) as an eastern outlier. Sections of the Ming dynasty's Great Wall pass along its northern slopes, and south of the northern outlier Longshou Shan (3,616 m). The Qilian mountains are the source of numerous, mostly small, rivers and creeks that flow northeast, enabling irrigated agriculture in the Hexi Corridor (Gansu Corridor) communities before eventually disappearing into the Alashan Desert. The best known of these streams is the Ejin (Heihe) River. The region has many glaciers, the largest of which is the Touming Mengke. The melting of these glaciers has accelerated in recent decades. Lake Hala is a large brackish lake located within the Qilian mountains. The characteristic ecosystem of the Qilian Mountains has been described by the World Wildlife Fund as the Qilian Mountains conifer forests. Biandukou (扁都口), with an altitude of over 3,500 m, is a pass in the Qilian Mountains. It links Minle County of Gansu in the north and Qilian County of Qinghai in the south. History The Shiji mentions the name "Qilian mountains" together with Dunhuang in relation to the homeland of the Yuezhi. These Qilian Mountains, however, have been suggested to be the mountains now known as Tian Shan, 1,500 km to the west. Dunhuang has also been argued to be the Dunhong mountain. Qilian (祁连) is said to be a Xiongnu word meaning "sky" (Chinese: 天) according to Yan Shigu, a Tang dynasty commentator on the Hanshu. Sanping Chen (1998) suggested that 天 tiān, 昊天 hàotiān, 祁連 qílián, and 赫連 Hèlián were all cognates and descended from multisyllabic Proto-Sinitic *gh?klien. Schuessler (2014) objects to Yan Shigu's statement that 祁連 was a Xiongnu word; he reconstructs 祁連's pronunciation in around 121 BCE as *gɨ-lian, apparently the same etymon as 乾 (☰), the trigram for "Heaven", in standard Chinese qián < Middle Chinese QYS *gjän < Eastern Han Chinese gɨan < Old Chinese *gran, which Schuessler etymologizes as from Proto-Sino-Tibetan and related to Proto-Tibeto-Burman *m-ka-n, cognate with Written Tibetan མཁའ (Wylie transliteration: mkha') "heaven". The Tuyuhun were based around the Qilian mountains.
The mountain range was formerly known in European languages as the Richthofen Range, after Ferdinand von Richthofen, the explorer-geologist uncle of Manfred von Richthofen, the "Red Baron". The mountain range gives its name to Qinghai's Qilian County. Notes See also References External links |
======================================== |
[SOURCE: https://www.theverge.com/tech/880812/ramageddon-ram-shortage-memory-crisis-price-2026-phones-laptops#comments] | [TOKENS: 4798] |
The RAM shortage is coming for everything you care about
It's not just desktops, it's phones and laptops and consoles, and it's getting worse.
By Sean Hollister, Senior Editor, Feb 19, 2026, 1:00 PM UTC. [Image: Cath Virginia / The Verge, Getty Images] Sean Hollister is a senior editor and founding member of The Verge who covers gadgets, games, and toys. He spent 15 years editing the likes of CNET, Gizmodo, and Engadget.
Maybe you've heard: Memory is expensive now. The price of RAM has tripled, quadrupled, even sextupled depending on the type of chip, all because AI companies are gobbling it up. But maybe you've thought: I don't buy memory sticks! I don't build my own PCs! It won't affect me, right? I'm here to tell you RAM is coming for your wallet anyhow. Do you have a phone in your pocket you'd like to upgrade in the next few years? Fancy a game console or handheld? A laptop, perhaps? Will you need a new router, whether you're purchasing outright or renting from your ISP? Each of these devices is expected to have shortages, price hikes, or both in 2026. And even if you don't plan to buy, you depend on goods and services from others who'll be paying more to upgrade their devices. "RAMageddon" is only getting worse, and there's no immediate end in sight. Everything that has a computer inside depends on RAM, and almost everything has a computer in it now: farm tractors, hospital equipment, your TV set-top box. RAM is the short-term memory of a device, and AI especially needs lots of it to juggle all the data it's processing. And most of that RAM comes from just three companies that are happily prioritizing the AI gold rush over everything else. We may never know how many products were truly delayed or canceled due to RAM: Nvidia may skip releasing a gaming GPU for the first time in 30 years, Meta may not release a single VR headset this year and plans to charge a premium when they return in 2027, and Sony's next PlayStation may get pushed to 2029 because of RAM. But we do know that RAMageddon is coming for your phone next.
RAM in your phone
Analysts from IDC, Omdia, and Counterpoint agree: 2025 was one of the best years ever for smartphone sales, growing shipments roughly 2 percent to roughly 1.25 billion phones in a single year. Apple reported record iPhone sales in January. They also all agree that the RAM shortage is about to flip that on its head. Prices will go up. Fewer products will be available. Or as Omdia research manager Le Xuan Chiew put it, "vendors will shift toward prioritizing profitability while expanding alternative revenue streams." Flagship smartphone chipmaker Qualcomm is warning that companies will build fewer phones, period, and that remaining phones will be more expensive.
CEO Cristiano Amon says a big dip in its smartphone business will be "100 percent" because of the memory shortage. Here are some choice quotes from Amon on the company's February 4th earnings call:
"Unfortunately, I think that the whole sector is impacted by memory."
"Industry-wide memory shortage and price increases are likely to define the overall scale of the handset industry through the fiscal year."
"OEMs are very likely to prioritize premium and high-tier, how they have done in the past."
"We just wish there was more memory."
CFO Akash Palkhiwala also said: "We've seen several OEMs, especially in China, take actions to reduce their handset build plans and channel inventory." How much more might you pay? Hard to say, but IDC points out that memory represents 15–20 percent of the materials cost of a midrange phone, and about 10–15 percent of a high-end flagship phone. When we first started reporting on RAM ruining everything, IDC thought average phone prices might go up by just $9. Now, it's predicting the average price might increase as much as 8 percent, with "significantly higher" price hikes on cheaper phones where "OEMs will have to pass the cost to end users." That means if you're used to buying $500 phones, they might easily cost $600 or more. Even if you're used to $1,000 phones, you may get less bang for the buck: "new flagship models in 2026 will likely have no RAM upgrades, sticking to 12GB for Pro models rather than increasing to 16GB," IDC writes. We're already seeing similar: Google just announced a Pixel 10A with no new chips and the same mediocre 8GB of RAM inside. Even Apple, which can typically bully suppliers on pricing, is feeling pressure on its supply chain now that AI companies are writing huge checks for memory supplies, reports The Wall Street Journal. That could force it to increase the price of its iPhones to maintain the company's profits. Apple CEO Tim Cook told analysts this quarter that he "will look at a range of options to deal with" the way that the shortage is impacting the company's gross margins. "Industry sources" told ZDNet Korea that Apple may pay 80 percent or even 100 percent more for memory this quarter after renegotiating with Samsung and SK Hynix, and may pay even more in the second half of the year.
[Photo: A Nintendo Switch 2, which might soon cost more due to RAM. Amelia Holowaty Krales / The Verge]
RAM in your game system
The era of "razor and blade" game console subsidies, where companies sell consoles at a loss and make their money back on exclusive software, was over before the RAM crunch even began. Trump's tariffs broke the dam, and now we're half-expecting the next Xbox to be a $1,000 PC rather than a traditional console. Bloomberg reports that RAMageddon is also coming for the Nintendo Switch 2 in the form of a price hike, and Sony's PS6 in the form of a delay "to 2028 or even 2029." Our last, best hope for the subsidy model was Valve, a company that famously rakes in money hand over fist and launched the original Steam Deck at the unbeatable price of $399 through a "painful" amount of subsidy. If Valve did the same for the upcoming Steam Machine, it could have legitimately competed with the PlayStation and Xbox for your living room TV. But Valve has all but dashed those hopes through a series of moves. In late December, it discontinued the $399 Steam Deck, raising the starting price to $549.
In early February, it announced that the Steam Machine had been delayed due to the memory shortage and that the company would have to reset expectations on pricing. And now, even the $549 Steam Deck OLED is out of stock specifically because of the memory crisis. Other handhelds are getting pricier too: although the Lenovo Legion Go 2 will get SteamOS this year, memory shrinkflation means it will cost more or contain less than the Windows version did when it first arrived, with less horsepower, storage, and RAM at the $1,199 mark. The MSI Claw 8 AI Plus, which I thought was pricey at $999, now costs $1,099, $1,149, or even $1,199 depending on where you look.
RAM in your laptop
PCs generally need even more RAM than phones and consoles, and they've been hit quicker because PC makers haven't felt the need to stockpile RAM in advance. They also generally need larger SSDs, whose prices have surged 90 percent in a single quarter. That's why almost every major laptop manufacturer (Lenovo, Dell, HP, Asus, Acer) is reportedly planning price hikes of 10, 20, or even 30 percent, and why Chosun Biz is reporting that Lenovo, HP, Dell, Samsung, and LG are rethinking their PC product roadmaps for 2026. IDC suggests the whole PC market could decline by 4.9 to 8.9 percent in 2026, while TrendForce is forecasting a 2.4 percent decline in laptops where it previously expected growth. Dell reportedly already began hiking prices of its laptops by $55 to $765, depending on which components you choose. And modular laptop company Framework writes that its own cost has risen from roughly $10 per gigabyte to as much as $16 per gigabyte, and so it's selling its new laptops and mainboards for 6 percent to 16 percent more than previously. "We are again only increasing pricing enough to cover the increases in cost from our suppliers," Framework CEO Nirav Patel writes. Even though Lenovo has admitted to hoarding RAM so it won't run out, the world's largest PC manufacturer is still paying more to secure its supply for 2026; CEO Yang Yuanqing told Bloomberg his memory costs increased by 40 to 50 percent last quarter and suggested prices might double soon. While Apple hasn't telegraphed plans to raise MacBook prices due to RAM price hikes, it's quite possible we'll see for ourselves in just two weeks at its March 4th event.
When will it end?
"There's no relief until 2028," said Intel CEO Lip-Bu Tan in early February, after speaking to two of the big three memory companies. One of them, Micron, has publicly said the same, telling Wccftech that its Idaho memory fab won't open until mid-2027, and that "you're not really gonna see real output" until 2028. SK Hynix also previously predicted the shortage would last through late 2027. While Micron, SK Hynix, and Samsung, which control about 95 percent of the global DRAM supply, are making enough money to increase memory production, it will take time to build their promised new fabs.
And they also see it as more profitable and less risky to build out slowly instead of rushing to meet demand. As SemiAnalysis founder Dylan Patel told us in December, it wasn't that long ago that some of these memory companies were losing money due to overproduction: "The scary thing about this industry is if you overbuild the most, you end up going bankrupt." Samsung is expected to increase memory wafer supply by just 5 percent this year. In the meantime, the RAM makers are going to profit as much as they can, with the added costs ultimately being passed on to you. |
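To make the pass-through arithmetic above concrete, here is a minimal Python sketch of how a RAM price multiplier could propagate into a device's sticker price. It assumes, as a simplification, that memory's share of the bill of materials translates proportionally into retail price; the 15–20 percent midrange share is IDC's figure from the article, while the function name and the pass_through parameter are illustrative assumptions, not anything the article specifies.

def price_after_ram_hike(price, memory_share, ram_multiplier, pass_through=1.0):
    """Estimate a device's new price after a RAM cost increase.

    price: current retail price
    memory_share: fraction of the bill of materials that is memory
                  (IDC: roughly 0.15-0.20 for a midrange phone)
    ram_multiplier: new RAM cost / old RAM cost (3.0 if prices tripled)
    pass_through: fraction of the added cost passed to the buyer
                  (hypothetical; OEMs may absorb some of it)
    """
    added_cost_fraction = memory_share * (ram_multiplier - 1)
    return price * (1 + added_cost_fraction * pass_through)

# A $500 midrange phone, memory at 18 percent of materials cost, RAM
# tripling, with roughly half the increase passed on to the buyer:
print(round(price_after_ram_hike(500, 0.18, 3.0, 0.5)))  # ~590

On those assumptions the result lands near the article's "$500 phones might easily cost $600 or more" ballpark; full pass-through on the same inputs would push the figure to about $680.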
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Extraterrestrial_life#cite_ref-67] | [TOKENS: 11349] |
Contents Extraterrestrial life Extraterrestrial life, or alien life (colloquially aliens), is life that originates from another world rather than on Earth. No extraterrestrial life has yet been scientifically or conclusively detected. Such life might range from simple forms such as prokaryotes to intelligent beings, possibly bringing forth civilizations that might be far more, or far less, advanced than humans. The Drake equation speculates about the existence of sapient life elsewhere in the universe. The science of extraterrestrial life is known as astrobiology. Speculation about inhabited worlds beyond Earth dates back to antiquity. Early Christian writers, including Augustine, discussed ideas from thinkers like Democritus and Epicurus about countless worlds in the vast universe. Pre-modern writers typically assumed extraterrestrial "worlds" were inhabited by living beings. William Vorilong, in the 15th century, acknowledged the possibility Jesus could have visited extraterrestrial worlds to redeem their inhabitants. In 1440, Nicholas of Cusa suggested Earth is a "brilliant star"; he theorized that all celestial bodies, even the Sun, could host life. Descartes wrote that there were no means to prove the stars were not inhabited by "intelligent creatures", but their existence was a matter of speculation. In comparison to the life-abundant Earth, the vast majority of intrasolar and extrasolar planets and moons have harsh surface conditions and disparate atmospheric chemistry, or lack an atmosphere. However, there are many extreme and chemically harsh ecosystems on Earth that do support forms of life and are often hypothesized to be the origin of life on Earth. Examples include life surrounding hydrothermal vents, acidic hot springs, and volcanic lakes, as well as halophiles and the deep biosphere. Since the mid-20th century, researchers have searched for extraterrestrial life and intelligence. Solar system studies focus on Venus, Mars, Europa, and Titan, while exoplanet discoveries now total 6,022 confirmed planets in 4,490 systems as of October 2025. Depending on the category of search, methods range from analysis of telescope and specimen data to radios used to detect and transmit interstellar communication. Interstellar travel remains largely hypothetical, with only the Voyager 1 and Voyager 2 probes confirmed to have entered the interstellar medium. The concept of extraterrestrial life, especially intelligent life, has greatly influenced culture and fiction. A key debate centers on contacting extraterrestrial intelligence: some advocate active attempts, while others warn it could be risky, given human history of exploiting other societies. Context Initially, after the Big Bang, the universe was too hot to allow life. It is estimated that the temperature of the universe was around 10 billion Kelvin at the one-second mark. Roughly 15 million years later, it cooled to temperate levels, though the elements of organic life were yet nonexistent. The only freely available elements at that point were hydrogen and helium. Carbon and oxygen (and later, water) would not appear until 50 million years later, created through stellar fusion. At that point, the difficulty for life to appear was not the temperature, but the scarcity of free heavy elements. Planetary systems emerged, and the first organic compounds may have formed in the protoplanetary disk of dust grains that would eventually create rocky planets like Earth.
Although Earth was in a molten state after its birth and may have burned any organics that fell on it, it would have been more receptive once it cooled down. Once the right conditions on Earth were met, life started by a chemical process known as abiogenesis. Alternatively, life may have formed less frequently, then spread between habitable planets, by meteoroids for example, in a process called panspermia. During most of their stellar evolution, stars combine hydrogen nuclei to make helium nuclei by stellar fusion, and the small mass difference between the hydrogen consumed and the helium produced is released as energy. The process continues until the star uses all of its available fuel, with the speed of consumption being related to the size of the star. During their last stages, stars start combining helium nuclei to form carbon nuclei. The larger stars can further fuse carbon into oxygen and silicon, oxygen into neon and sulfur, and so on up to iron. Ultimately, the star blows much of its content back into the interstellar medium, where it joins the clouds that eventually become new generations of stars and planets. Many of those materials are the raw components of life on Earth. As this process takes place throughout the universe, these materials are ubiquitous in the cosmos and not a rarity of the Solar System. Earth is a planet in the Solar System, a planetary system formed by a star at the center, the Sun, and the objects that orbit it: other planets, moons, asteroids, and comets. The Sun is part of the Milky Way, a galaxy. The Milky Way is part of the Local Group, a galaxy group that is in turn part of the Laniakea Supercluster. The universe is composed of all similar structures in existence. The immense distances between celestial objects are a difficulty for studying extraterrestrial life. So far, humans have only set foot on the Moon and sent robotic probes to other planets and moons in the Solar System. Although probes can withstand conditions that may be lethal to humans, the distances cause long travel times: New Horizons took nine years after launch to reach Pluto. No probe has ever reached extrasolar planetary systems. Voyager 2 left the Solar System at a speed of 50,000 kilometers per hour; if it headed towards the Alpha Centauri system, the closest one to Earth at 4.4 light years, it would reach it in 100,000 years. Under current technology, such systems can only be studied by telescopes, which have limitations. It is estimated that dark matter accounts for a larger amount of combined matter than stars and gas clouds, but as it plays no role in the stellar evolution of stars and planets, it is usually not taken into account by astrobiology. There is an area around a star, the circumstellar habitable zone or "Goldilocks zone", wherein water may be at the right temperature to exist in liquid form at a planetary surface. This area is neither too close to the star, where water would become steam, nor too far away, where water would be frozen as ice. However, although useful as an approximation, planetary habitability is complex and defined by several factors. Being in the habitable zone is not enough for a planet to be habitable, or even to actually have such liquid water. Venus is located in the solar system's habitable zone, but does not have liquid water because of the conditions of its atmosphere. Jovian planets or gas giants are not considered habitable even if they orbit close enough to their stars as hot Jupiters, due to crushing atmospheric pressures.
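As a rough illustration of how the zone's location scales with the star, here is a minimal Python sketch. It assumes the common first-order model in which each boundary sits where stellar flux equals a fixed multiple of the flux Earth receives, so distance scales with the square root of luminosity; the boundary constants 1.1 and 0.53 (in units of the solar constant) are illustrative values from one conservative estimate, not figures from this article.

import math

def habitable_zone_au(luminosity_solar):
    """Rough inner/outer habitable-zone radii, in AU, for a star of the
    given luminosity (in solar luminosities)."""
    # Assumed boundary fluxes relative to Earth's: ~1.1 at the inner
    # (runaway greenhouse) edge, ~0.53 at the outer (maximum greenhouse)
    # edge. Distance scales as sqrt(luminosity / boundary flux).
    inner = math.sqrt(luminosity_solar / 1.1)
    outer = math.sqrt(luminosity_solar / 0.53)
    return inner, outer

print(habitable_zone_au(1.0))     # Sun: roughly (0.95, 1.37) AU
print(habitable_zone_au(0.0017))  # a dim red dwarf: the zone hugs the star

Even in this toy model, a dimmer star's zone is both closer in and narrower, and it moves as the star's luminosity evolves.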
The actual distances for the habitable zones vary according to the type of star, and even the solar activity of each specific star influences the local habitability. The type of star also defines the time the habitable zone will exist, as its presence and limits will change along with the star's stellar evolution. The Big Bang occurred 13.8 billion years ago, the Solar System was formed 4.6 billion years ago, and the first hominids appeared 6 million years ago. Life on other planets may have started, evolved, given birth to extraterrestrial intelligences, and perhaps even faced a planetary extinction event millions or billions of years ago. Considered from a cosmic perspective, the brief existence of Earth's species suggests that extraterrestrial life may be equally fleeting on such a scale. During a period of about 7 million years, from about 10 to 17 million years after the Big Bang, the background temperature was between 373 and 273 K (100 and 0 °C; 212 and 32 °F), allowing the possibility of liquid water if any planets existed. Avi Loeb (2014) speculated that primitive life might in principle have appeared during this window, which he called "the Habitable Epoch of the Early Universe". Life on Earth is quite ubiquitous across the planet and has adapted over time to almost all the available environments in it; extremophiles and the deep biosphere thrive in even the most hostile ones. As a result, it is inferred that life on other celestial bodies may be equally adaptive. However, the origin of life is unrelated to its ease of adaptation and may have stricter requirements. A celestial body may not have any life on it, even if it were habitable. Likelihood of existence Life in the cosmos beyond Earth has not been observed. The hypothesis of ubiquitous extraterrestrial life relies on three main ideas. The first is that the size of the universe allows for plenty of planets with a similar habitability to Earth's, and that the age of the universe gives enough time for a long process analogous to the history of Earth to happen there. The second is that the substances that make life, such as carbon and water, are ubiquitous in the universe. The third is that the physical laws are universal, which means that the forces that would facilitate or prevent the existence of life would be the same ones as on Earth. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, it would be improbable for life not to exist somewhere else other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. Other authors consider instead that life in the cosmos, or at least multicellular life, may actually be rare. The Rare Earth hypothesis maintains that life on Earth is possible because of a series of factors that range from the location in the galaxy and the configuration of the Solar System to local characteristics of the planet, and that it is unlikely that another planet simultaneously meets all such requirements. The proponents of this hypothesis consider that very little evidence suggests the existence of extraterrestrial life, and that at this point it is just a desired result and not a reasonable scientific explanation for any gathered data.
In 1961, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The equation is

N = R* · fp · ne · fl · fi · fc · L

where N is the number of civilizations in our galaxy with which communication might be possible, R* is the average rate of star formation in the galaxy, fp is the fraction of those stars that have planets, ne is the average number of planets that can potentially support life per star that has planets, fl is the fraction of those planets that actually develop life, fi is the fraction of life-bearing planets that develop intelligent life, fc is the fraction of civilizations that release detectable signs of their existence into space, and L is the length of time for which such civilizations remain detectable. Drake's proposed estimates are as follows, though the numbers on the right side of the equation are agreed to be speculative and open to substitution:

10,000 = 5 · 0.5 · 2 · 1 · 0.2 · 1 · 10,000

The Drake equation has proved controversial since, although it is written as a math equation, none of its values were known at the time. Although some values may eventually be measured, others are based on social sciences and are not knowable by their very nature. This does not allow one to make noteworthy conclusions from the equation. Based on observations from the Hubble Space Telescope, there are nearly 2 trillion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets. In other words, there are 6.25×10^18 stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the Kepler spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. The nebular hypothesis that explains the formation of the Solar System and other planetary systems suggests that planetary systems can have several configurations, and that not all of them may have rocky planets within the habitable zone. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilisations and the lack of evidence for such civilisations is known as the Fermi paradox. Dennis W. Sciama claimed that life's existence in the universe depends on various fundamental constants. Zhi-Wei Wang and Samuel L. Braunstein suggest that a random universe capable of supporting life is likely to be just barely able to do so, giving a potential explanation to the Fermi paradox.
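Expressed as a short Python sketch, with variable names from the standard formulation and Drake's illustrative 1961 values quoted above (the numbers remain speculative placeholders, not measurements):

def drake_equation(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake_equation(
    R_star=5,   # new stars formed per year in the Milky Way
    f_p=0.5,    # fraction of stars with planetary systems
    n_e=2,      # potentially habitable planets per star with planets
    f_l=1,      # fraction of habitable planets that develop life
    f_i=0.2,    # fraction of those that develop intelligent life
    f_c=1,      # fraction of those that emit detectable signals
    L=10_000,   # years such a civilization remains detectable
)
print(N)  # 10000.0, matching the figure quoted above

Swapping in different guesses for any factor is the whole point of the exercise; the equation multiplies uncertainties rather than resolving them.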
Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia rather than water has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. Another unknown aspect of potential extraterrestrial life would be the chemical elements that would compose it. Life on Earth is largely composed of carbon, but there could be other hypothetical types of biochemistry. A replacement for carbon would need to be able to create complex molecules, store the information required for evolution, and be freely available in the medium. To create DNA, RNA, or a close analog, such an element should be able to bind its atoms with many others, creating complex and stable molecules. It should be able to create at least three covalent bonds: two for making long strings and at least a third to add new links and allow for diverse information. Only nine elements meet this requirement: boron, nitrogen, phosphorus, arsenic, antimony (three bonds), carbon, silicon, germanium and tin (four bonds). As for abundance, carbon, nitrogen, and silicon are the most abundant of these in the universe, far more so than the others. In Earth's crust the most abundant of those elements is silicon; in the hydrosphere it is carbon, and in the atmosphere it is carbon and nitrogen. Silicon, however, has disadvantages compared to carbon. The molecules formed with silicon atoms are less stable and more vulnerable to acids, oxygen, and light. An ecosystem of silicon-based lifeforms would require very low temperatures, high atmospheric pressure, an atmosphere devoid of oxygen, and a solvent other than water. The low temperatures required would add an extra problem: the difficulty of kickstarting a process of abiogenesis to create life in the first place. Norman Horowitz, head of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976, considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Even if extraterrestrial life is based on carbon and uses water as a solvent, like Earth life, it may still have a radically different biochemistry. Life is generally considered to be a product of natural selection. It has been proposed that to undergo natural selection a living entity must have the capacity to replicate itself, the capacity to avoid damage and decay, and the capacity to acquire and process resources in support of the first two capacities. Life on Earth may have started with an RNA world and later evolved to its current form, where some of the RNA tasks were transferred to DNA and proteins. Extraterrestrial life may still be stuck using RNA, or may have evolved into other configurations. It is unclear whether our biochemistry is the most efficient one that could be generated, or which elements would follow a similar pattern. However, it is likely that, even if cells had a different composition to those from Earth, they would still have a cell membrane. Life on Earth jumped from prokaryotes to eukaryotes and from unicellular organisms to multicellular organisms through evolution.
So far, no alternative process to achieve such a result has been conceived, even a hypothetical one. Evolution requires life to be divided into individual organisms, and no alternative organisation has been satisfactorily proposed either. At the basic level, membranes define the limit of a cell, between it and its environment, while remaining partially open to exchange energy and resources with it. The evolution from simple cells to eukaryotes, and from them to multicellular lifeforms, is not guaranteed. The Cambrian explosion took place billions of years after the origin of life, and its causes are not fully known yet. On the other hand, the jump to multicellularity took place several times, which suggests that it could be a case of convergent evolution, and so likely to take place on other planets as well. Palaeontologist Simon Conway Morris considers that convergent evolution would lead to kingdoms similar to our plants and animals, and that many features are likely to develop in alien animals as well, such as bilateral symmetry, limbs, digestive systems and heads with sensory organs. Scientists from the University of Oxford analysed the question from the perspective of evolutionary theory and wrote in a study in the International Journal of Astrobiology that aliens may be similar to humans. The planetary context would also have an influence: a planet with higher gravity would have smaller animals, and other types of stars can lead to non-green photosynthesizers. The amount of energy available would also affect biodiversity, as an ecosystem sustained by black smokers or hydrothermal vents would have less energy available than those sustained by a star's light and heat, and so its lifeforms would not grow beyond a certain complexity. There is also research assessing the capacity of life to develop intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from those niches. The conditions on other planets in the Solar System, and presumably on many worlds beyond it, are very harsh and seem too extreme to harbor any life. The environments on these planets can combine intense UV radiation with extreme temperatures, a lack of water, and other conditions that do not seem to favor the creation or maintenance of extraterrestrial life. However, there is considerable evidence that some of the earliest and most basic forms of life on Earth originated in extreme environments that, at first glance, seem unlikely to have harbored life. Fossil evidence, together with long-standing theories backed by years of research, marks environments like hydrothermal vents and acidic hot springs as some of the first places where life could have originated on Earth. These environments are extreme compared with the typical ecosystems that the majority of life on Earth now inhabits, as hydrothermal vents are scorching hot where magma escaping from the Earth's mantle meets the much colder oceanic water.
Even today, diverse populations of bacteria inhabit the areas surrounding hydrothermal vents, which suggests that some form of life could be supported even in the harshest environments, such as those on other planets in the Solar System. What makes these harsh environments plausible cradles for life, on Earth and potentially on other planets, is that the necessary chemical reactions form spontaneously there. For example, the hydrothermal vents found on the ocean floor are known to support many chemosynthetic processes, which allow organisms to obtain energy from reduced chemical compounds that fix carbon. In turn, these reactions allow organisms to live in relatively low-oxygen environments while maintaining enough energy to support themselves. The early Earth environment was reducing, and therefore these carbon-fixing compounds were necessary for the survival and possible origin of life on Earth. From the little information scientists have about the atmospheres of planets elsewhere in the Milky Way and beyond, those atmospheres are most likely reducing, or at least very low in oxygen, especially compared with Earth's atmosphere. If the necessary elements and ions were present on these planets, the same carbon-fixing, reduced chemical compounds that occur around hydrothermal vents could also occur on their surfaces and possibly give rise to extraterrestrial life. Planetary habitability in the Solar System The Solar System has a wide variety of planets, dwarf planets, and moons, and each one is studied for its potential to host life. Each one has its own specific conditions that may benefit or harm life. So far, the only lifeforms found are those from Earth. No extraterrestrial intelligence other than humans exists or has ever existed within the Solar System. Astrobiologist Mary Voytek points out that it would be unlikely to find large ecosystems, as they would have already been detected by now. The inner Solar System is likely devoid of life. However, Venus is still of interest to astrobiologists, as it is a terrestrial planet that was likely similar to Earth in its early stages and developed in a different way. It has a runaway greenhouse effect, the hottest planetary surface in the Solar System, and sulfuric acid clouds; all surface liquid water has been lost, and it retains a thick carbon-dioxide atmosphere with enormous pressure. Comparing Venus and Earth helps to understand the precise differences that lead to conditions beneficial or harmful to life. And despite the conditions working against life on Venus, there are suspicions that microbial life-forms may still survive in its high-altitude clouds. Mars is a cold and almost airless desert, inhospitable to life. However, recent studies revealed that water on Mars used to be quite abundant, forming rivers, lakes, and perhaps even oceans. Mars may have been habitable back then, and life on Mars may have been possible. But when the planetary core ceased to generate a magnetic field, solar winds removed the atmosphere and the planet became vulnerable to solar radiation. Ancient life-forms may still have left fossilised remains, and microbes may still survive deep underground. As mentioned, the gas giants and ice giants are unlikely to contain life. The most distant Solar System bodies, found in the Kuiper Belt and outwards, are locked in permanent deep-freeze, but cannot be ruled out completely.
Although the giant planets themselves are highly unlikely to have life, there is much hope of finding it on the moons orbiting these planets. Europa, in the Jovian system, has a subsurface ocean below a thick layer of ice. Ganymede and Callisto also have subsurface oceans, but life is less likely in them because their water is sandwiched between layers of solid ice. On Europa, the ocean would be in contact with the rocky surface below it, which helps the chemical reactions. It may be difficult to dig deep enough to study those oceans, though. Enceladus, a tiny moon of Saturn with another subsurface ocean, may not need to be drilled into, as it releases water into space in eruption columns. The space probe Cassini flew inside one of these, but could not make a full study because NASA had not expected this phenomenon and had not equipped the probe to study ocean water. Still, Cassini detected complex organic molecules, salts, evidence of hydrothermal activity, hydrogen, and methane. Titan is the only celestial body in the Solar System besides Earth that has liquid bodies on its surface. It has rivers, lakes, and rain of hydrocarbons, methane, and ethane, and even a cycle similar to Earth's water cycle. This special context encourages speculation about lifeforms with different biochemistry, but the cold temperatures would make such chemistry take place at a very slow pace. Water is rock-solid on the surface, but Titan does have a subsurface water ocean like several other moons; however, it lies at such a great depth that it would be very difficult to access for study. Scientific search The science that searches for and studies life in the universe, both on Earth and elsewhere, is called astrobiology. Through the study of Earth's life, the only known form of life, astrobiology seeks to understand how life starts and evolves and the requirements for its continuous existence. This helps to determine what to look for when searching for life on other celestial bodies. This is a complex area of study, and it uses the combined perspectives of several scientific disciplines, such as astronomy, biology, chemistry, geology, oceanography, and atmospheric sciences. The scientific search for extraterrestrial life is being carried out both directly and indirectly. As of September 2017[update], 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in the Solar System hold the potential for hosting primitive life such as microorganisms. As of 8 February 2021, an updated status of studies considering the possible detection of lifeforms on Venus (via phosphine) and Mars (via methane) was reported. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. Although all the unusual properties of the meteorite were eventually explained as the result of inorganic processes, the controversy over its discovery laid the groundwork for the development of astrobiology. An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis.
In February 2005, NASA scientists reported that they may have found some evidence of extraterrestrial life on Mars. The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. In November 2011, NASA launched the Mars Science Laboratory, which landed the Curiosity rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed at Gale Crater in August 2012. A group of scientists at Cornell University started a catalog of microorganisms, recording the way each one reacts to sunlight. The goal is to help with the search for similar organisms on exoplanets, as the starlight reflected by planets rich in such organisms would have a specific spectrum, unlike that of starlight reflected from lifeless planets. If Earth were studied from afar with this system, it would reveal a shade of green, as a result of the abundance of plants using photosynthesis. In August 2011, NASA studied meteorites found in Antarctica, finding adenine, guanine, hypoxanthine, and xanthine. Adenine and guanine are components of DNA, and the others are used in other biological processes. The studies ruled out contamination of the meteorites on Earth, as those components would not be freely available the way they were found in the samples. This discovery suggests that several organic molecules that serve as building blocks of life may be generated within asteroids and comets. In October 2011, scientists reported that cosmic dust contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. It is still unclear whether those compounds played a role in the creation of life on Earth, but Sun Kwok, of the University of Hong Kong, thinks so. "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In December 2023, astronomers reported the first discovery, in the plumes of Saturn's moon Enceladus, of hydrogen cyanide, a chemical possibly essential for life as we know it, as well as other organic molecules, some of which are yet to be better identified and understood. According to the researchers, "these [newly discovered] compounds could potentially support extant microbial communities or drive complex organic synthesis leading to the origin of life." Although most searches are focused on the biology of extraterrestrial life, an extraterrestrial intelligence capable enough to develop a civilization may be detectable by other means as well.
Technology may generate technosignatures: effects on the home planet that cannot readily be explained by natural causes. Three main types of technosignatures are considered: interstellar communications, effects on the atmosphere, and planetary-sized structures such as Dyson spheres. Organizations such as the SETI Institute search the cosmos for potential forms of communication. They started with radio waves and now search for laser pulses as well. The challenge for this search is that there are natural sources of such signals as well, such as gamma-ray bursts and supernovae; an artificial signal would be distinguishable from a natural one only by its specific patterns. Astronomers intend to use artificial intelligence for this task, as it can manage large amounts of data and is free of the biases and preconceptions of human analysts. Moreover, even if there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth. The length of time required for a signal to travel across space means that a potential answer may arrive decades or centuries after the initial message. The atmosphere of Earth is rich in nitrogen dioxide as a result of air pollution, a gas that could be detectable from afar. The natural abundance of carbon, which is also relatively reactive, makes it likely to be a basic component in the development of a potential extraterrestrial technological civilization, as it is on Earth, so fossil fuels might be generated and used on such worlds as well. The abundance of chlorofluorocarbons in an atmosphere can also be a clear technosignature, considering their role in ozone depletion. Light pollution may be another technosignature, as multiple lights on the night side of a rocky planet can be a sign of advanced technological development; however, modern telescopes are not powerful enough to study exoplanets at the level of detail required to perceive it. The Kardashev scale proposes that a civilization may eventually start consuming energy directly from its local star. This would require giant structures built next to the star, called Dyson spheres. Those speculative structures would produce excess infrared radiation that telescopes could notice. Excess infrared radiation is typical of young stars, which are surrounded by dusty protoplanetary disks that will eventually form planets; an older star such as the Sun would have no natural reason to show it. The presence of heavy elements in a star's light spectrum is another potential technosignature; such elements would (in theory) be found if the star were being used as an incinerator or repository for nuclear waste products. Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zones of their stars. Since 1992, more than six thousand exoplanets have been discovered (6,128 planets in 4,584 planetary systems, including 1,017 multiple planetary systems, as of 30 October 2025). The extrasolar planets so far discovered range in size from terrestrial planets similar in size to Earth to gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. The Kepler space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. On average, there is at least one planet per star. 
About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to lie within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would amount to 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located 4.2 light-years (1.3 pc) from Earth in the southern constellation of Centaurus. As of March 2014, the least massive exoplanet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1-491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyse the atmosphere of an exoplanet is through spectroscopy as the planet transits its star, though this might only be feasible with dim stars like white dwarfs. History and cultural impact The modern concept of extraterrestrial life is based on assumptions that were not commonplace during the early days of astronomy. The first explanations for the celestial objects seen in the night sky were based on mythology. Scholars of Ancient Greece were the first to consider the universe inherently understandable and to reject explanations based on incomprehensible supernatural forces, such as the myth of the Sun being pulled across the sky in the chariot of Apollo. They had not yet developed the scientific method and based their ideas on pure thought and speculation, but they arrived at precursor ideas to it, such as the principle that explanations must be discarded if they contradict observable facts. The discussions of those Greek scholars established many of the pillars that would eventually lead to the idea of extraterrestrial life, such as the understanding that Earth is round rather than flat. The cosmos was first organized in a geocentric model, which held that the Sun and all other celestial bodies revolve around Earth; those bodies, however, were not considered worlds. In the Greek understanding, the world comprised both Earth and the celestial objects with noticeable movements. Anaximander thought that the cosmos was made from apeiron, a substance from which the world was created and to which it would eventually return. Eventually two schools emerged: the atomists, who thought that matter on Earth and in the cosmos alike was made of small atoms of the classical elements (earth, water, fire and air), and the Aristotelians, who thought that those elements were exclusive to Earth and that the cosmos was made of a fifth one, the aether. The atomist Epicurus thought that the processes that created the world and its animals and plants must have created other worlds elsewhere, along with their own animals and plants. 
Aristotle thought instead that all of the element earth naturally fell towards the center of the universe, which would make it impossible for other planets to exist elsewhere. Under that reasoning, Earth was not only at the center; it was also the only planet in the universe. Cosmic pluralism, the plurality of worlds, or simply pluralism describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. The earliest recorded assertion of extraterrestrial human life is found in the ancient scriptures of Jainism. Multiple "worlds" that support human life are mentioned in Jain scriptures, including, among others, Bharat Kshetra, Mahavideh Kshetra, Airavat Kshetra, and Hari Kshetra. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. Chaucer's poem The House of Fame engaged in medieval thought experiments that postulated the plurality of worlds. However, those ideas about other worlds differed from the modern understanding of the structure of the universe and did not postulate the existence of planetary systems other than the Solar System. When those authors spoke of other worlds, they meant places located at the center of systems of their own, each with its own stellar vault and cosmos surrounding it. The Greek ideas and the disputes between atomists and Aristotelians outlived classical Greek civilization. The Great Library of Alexandria compiled information about these debates, part of which was translated by Islamic scholars and thus survived the end of the Library. Baghdad combined the knowledge of the Greeks, the Indians, the Chinese and its own scholars, and this knowledge spread through the Byzantine Empire, from where it eventually returned to Europe during the Middle Ages. However, as the Greek atomist doctrine held that the world was created by random movements of atoms, with no need for a creator deity, it became associated with atheism, and the dispute became intertwined with religious ones. Still, the Church did not react to these topics in a homogeneous way; there were stricter and more permissive views within the church itself. The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras, who proposed the idea that life exists everywhere. By the late Middle Ages the geocentric model was known to contain many inaccuracies, but it was kept in use because naked-eye observations provided limited data. Nicolaus Copernicus started the Copernican Revolution by proposing that the planets revolve around the Sun rather than Earth. His proposal found little acceptance at first because, by keeping the assumption that orbits were perfect circles, his model led to as many inaccuracies as the geocentric one. Tycho Brahe improved the available data with naked-eye observatories that used highly elaborate sextants and quadrants. Tycho could not make sense of his observations, but Johannes Kepler did: orbits were not perfect circles but ellipses. This knowledge benefited the Copernican model, which now worked almost perfectly. The invention of the telescope a short time later, and its refinement by Galileo Galilei, settled the final doubts, and the paradigm shift was complete. Under this new understanding, the notion of extraterrestrial life became feasible: if Earth is but a planet orbiting a star, there may be planets similar to Earth elsewhere. 
The astronomical study of distant bodies also proved that physical laws are the same elsewhere in the universe as on Earth, with nothing making our planet truly special. The new ideas met resistance from the Catholic Church. Galileo was tried for advocating the heliocentric model, which was considered heretical, and was forced to recant it. The best-known early-modern proponent of ideas of extraterrestrial life was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". Bruno's belief in the plurality of worlds was one of the charges leveled against him by the Venetian Holy Inquisition, which tried and executed him. The heliocentric model was further strengthened by Sir Isaac Newton's postulation of the theory of gravity, which provided the mathematics that explains the motions of all things in the universe, including planetary orbits. By this point, the geocentric model had been definitively discarded. By this time, the use of the scientific method had become standard, and new discoveries were expected to provide evidence and rigorous mathematical explanations. Science also took a deeper interest in the mechanics of natural phenomena, trying to explain not just the way nature works but also why it works that way. There was very little actual discussion of extraterrestrial life before this point, as the Aristotelian ideas remained influential while geocentrism was still accepted. When geocentrism was finally proved wrong, this meant not only that Earth was not the center of the universe, but also that the lights seen in the sky were not mere lights but physical objects. The notion that life might exist on them as well soon became an ongoing topic of discussion, although one with no practical means of investigation. The possibility of extraterrestrials remained widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th- and 19th-century astronomers who believed that the Solar System is populated by alien life. Other scholars of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals, which, however, soon turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book Mars, followed by Mars and its Canals in 1906, proposing that the canals were the work of a long-gone civilisation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. As a consequence of the belief in spontaneous generation, there was little thought about the conditions on each celestial body: it was simply assumed that life would thrive anywhere. Spontaneous generation was disproved by Louis Pasteur in the 19th century. 
Popular belief in thriving alien civilisations elsewhere in the Solar System remained strong until Mariner 4 and Mariner 9 provided close-up images of Mars, which put an end to the idea of Martians and lowered the previous expectations of finding alien life in general. The end of the belief in spontaneous generation forced investigation into the origin of life. Although abiogenesis is the more accepted theory, a number of authors reclaimed the term "panspermia" and proposed that life was brought to Earth from elsewhere. Among them were Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, Svante Arrhenius (1903). The science fiction genre, although not yet so named, developed during the late 19th century. The growing presence of extraterrestrials in fiction influenced popular perception of the real-life topic, making people eager to jump to conclusions about the discovery of aliens. Science marched at a slower pace: some discoveries fueled expectations while others dashed excessive hopes. For example, with the advent of telescopes, most structures seen on the Moon or Mars were at first attributed to Selenites or Martians, while later, more powerful instruments revealed all such discoveries to be natural features. A famous case is the Cydonia region of Mars, first imaged by the Viking 1 orbiter. The low-resolution photos showed a rock formation that resembled a human face, but later spacecraft took photos in higher detail that showed there was nothing special about the site. The search for and study of extraterrestrial life became a science of its own, astrobiology. Also known as exobiology, this discipline is pursued by NASA, ESA, INAF, and others. Astrobiology studies life from Earth as well, but with a cosmic perspective: abiogenesis, for example, is of interest to astrobiology not because of the origin of life on Earth, but for the chances of a similar process taking place on other celestial bodies. Many aspects of life, from its definition to its chemistry, are analyzed to determine whether they are likely to be shared by all forms of life across the cosmos or are native only to Earth. Astrobiology, however, remains constrained by the current lack of extraterrestrial life-forms to study: all life on Earth descends from the same ancestor, and it is hard to infer general characteristics from a group with only a single example to analyse. The 20th century brought great technological advances, speculation about future hypothetical technologies, and an increased basic knowledge of science among the general population thanks to science popularization through the mass media. Public interest in extraterrestrial life and the lack of discoveries by mainstream science led to the emergence of pseudosciences that provided affirmative, if questionable, answers to the question of the existence of aliens. Ufology claims that many unidentified flying objects (UFOs) are spaceships of alien species, and the ancient astronauts hypothesis claims that aliens visited Earth in antiquity and prehistoric times but that the people of those eras failed to understand it. Most UFOs or UFO sightings can be readily explained as sightings of Earth-based aircraft (including top-secret aircraft), known astronomical objects, or weather phenomena, or as hoaxes. 
Looking beyond the pseudosciences, Lewis White Beck strove to elevate the level of public discourse on the topic of extraterrestrial life by tracing the evolution of philosophical thought over the centuries from ancient times into the modern era. His review of the contributions made by Lucretius, Plutarch, Aristotle, Copernicus, Immanuel Kant, John Wilkins, Charles Darwin and Karl Marx demonstrated that even in modern times, humanity could be profoundly influenced in its search for extraterrestrial life by subtle and comforting archetypal ideas which are largely derived from firmly held religious, philosophical and existential belief systems. On a positive note, however, Beck further argued that even if the search for extraterrestrial life proves to be unsuccessful, the endeavor itself could have beneficial consequences by assisting humanity in its attempt to actualize superior ways of living here on Earth. By the 21st century, it was accepted that multicellular life in the Solar System can exist only on Earth, but interest in extraterrestrial life increased regardless, a result of advances in several sciences. Knowledge of planetary habitability makes it possible to consider in scientific terms the likelihood of finding life on each specific celestial body, since it is known which features are beneficial and which are harmful to life. Astronomy and telescopes have also improved to the point that exoplanets can be confirmed and even studied, increasing the number of places to search. Life may still exist elsewhere in the Solar System in unicellular form, and advances in spacecraft make it possible to send robots to study samples in situ, with tools of growing complexity and reliability. Although no extraterrestrial life has been found, and life may yet prove to be a rarity unique to Earth, there are scientific reasons to suspect that it can exist elsewhere, and technological advances that may detect it if it does. Many scientists are optimistic about the chances of finding alien life. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that other planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. On the other hand, other scientists are pessimistic. Jacques Monod wrote that "Man knows at last that he is alone in the indifferent immensity of the universe, whence he has emerged by chance". In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled Rare Earth: Why Complex Life is Uncommon in the Universe. In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics such as DNA and carbon. As for the possible risks, theoretical physicist Stephen Hawking warned in 2010 that humans should not try to contact alien life forms, cautioning that aliens might pillage Earth for resources. 
"If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. On 20 July 2015, Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science, discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". Government responses The 1967 Outer Space Treaty and the 1979 Moon Agreement define rules of planetary protection against potentially hazardous extraterrestrial life. COSPAR also provides guidelines for planetary protection. A committee of the United Nations Office for Outer Space Affairs had in 1977 discussed for a year strategies for interacting with extraterrestrial life or intelligence. The discussion ended without any conclusions. As of 2010, the UN lacks response mechanisms for the case of an extraterrestrial contact. One of the NASA divisions is the Office of Safety and Mission Assurance (OSMA), also known as the Planetary Protection Office. A part of its mission is to "rigorously preclude backward contamination of Earth by extraterrestrial life." In 2016, the Chinese Government released a white paper detailing its space program. According to the document, one of the research objectives of the program is the search for extraterrestrial life. It is also one of the objectives of the Chinese Five-hundred-meter Aperture Spherical Telescope (FAST) program. In 2020, Dmitry Rogozin, the head of the Russian space agency, said the search for extraterrestrial life is one of the main goals of deep space research. He also acknowledged the possibility of existence of primitive life on other planets of the Solar System. The French space agency has an office for the study of "non-identified aero spatial phenomena". The agency is maintaining a publicly accessible database of such phenomena, with over 1600 detailed entries. According to the head of the office, the vast majority of entries have a mundane explanation; but for 25% of entries, their extraterrestrial origin can neither be confirmed nor denied. In 2020, chairman of the Israel Space Agency Isaac Ben-Israel stated that the probability of detecting life in outer space is "quite large". But he disagrees with his former colleague Haim Eshed who stated that there are contacts between an advanced alien civilisation and some of Earth's governments. In fiction Although the idea of extraterrestrial peoples became feasible once astronomy developed enough to understand the nature of planets, they were not thought of as being any different from humans. Having no scientific explanation for the origin of mankind and its relation to other species, there was no reason to expect them to be any other way. 
This changed with Charles Darwin's 1859 book On the Origin of Species, which proposed the theory of evolution. With the notion that evolution on other planets might take different directions, science fiction authors created bizarre aliens, clearly distinct from humans. A common way to do this was to add body features from other animals, such as insects or octopuses. The practicalities of costuming and special effects, alongside budget considerations, forced films and TV series to tone down the fantasy, but these limitations have lessened since the 1990s with the advent of computer-generated imagery (CGI), and further still as CGI became more effective and less expensive. Real-life events sometimes captivate the public imagination, and this influences works of fiction. For example, during the Barney and Betty Hill incident, the first recorded claim of an alien abduction, the couple reported that they were abducted and experimented on by aliens with oversized heads, big eyes, pale grey skin, and small noses, a description that, once taken up in works of fiction, eventually became the grey alien archetype. |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_ref-140] | [TOKENS: 9291] |
Contents Internet The Internet (or internet) is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching, one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988-89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; it was one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members, in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to show the same growth characteristics as the scaling of MOS transistors, exemplified by Moore's law: doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser lightwave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
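To make the cited doubling rate concrete, the short sketch below works through the arithmetic in Python. It is purely illustrative: it assumes perfectly steady 18-month doubling, which real traffic only approximates, and the function name growth_multiplier is invented for the example.

    # Illustrative arithmetic only: how traffic compounds when it doubles
    # every 18 months, the Moore's-law-like trend described above.
    def growth_multiplier(months: float, doubling_period: float = 18.0) -> float:
        """Factor by which traffic has grown after `months` of steady doubling."""
        return 2.0 ** (months / doubling_period)

    for years in (1, 5, 10):
        print(f"{years:2d} year(s): x{growth_multiplier(12 * years):,.1f}")
    # Ten years of 18-month doublings is 2**(120/18), roughly a 100-fold
    # increase -- the kind of compounding that took backbone links from
    # Mbit/s to Gbit/s scales.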
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of the world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and Internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers, with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion, or 44 percent of the world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, and the United States third with 275 million users. However, in terms of penetration, in 2022 China had a 70% penetration rate, compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches, such as mojibake (the garbled display of text decoded under the wrong character encoding), still remain; a minimal illustration appears after this paragraph. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person who uses the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The Internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in the reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information for the majority of the global North population". Wikis have also been used in the academic community for the sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to the examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. 
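The mojibake mentioned above is easy to reproduce. The minimal sketch below (in Python, with arbitrary example text, and Windows-1252 deliberately chosen as the "wrong guess") shows how bytes encoded under one character encoding come out garbled when decoded under another:

    # Mojibake in miniature: UTF-8 bytes decoded with the wrong encoding.
    word = "日本語"                    # "Japanese language", written in Japanese
    raw = word.encode("utf-8")        # the bytes actually stored or transmitted
    garbled = raw.decode("cp1252")    # a receiver wrongly assuming Windows-1252
    print(garbled)                    # prints the gibberish "æ—¥æœ¬èªž"
    print(raw.decode("utf-8"))        # decoding with the right table restores it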
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also hold significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions of videos daily and upload hundreds of thousands. Other video sharing websites include Vimeo, Instagram and TikTok. Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP, so that work may be performed from any location, such as the worker's home. The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to form easily, communicate cheaply, and share ideas. 
A prominent example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. The Internet also allows for cloud computing, virtual private networks, remote desktops, and remote work. The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online, such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Out of naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites. Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread. Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, amounted to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. 
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet as a new method of organizing to carry out their missions, giving rise to Internet activism. Social media websites such as Facebook and Twitter helped people organize during the Arab Spring by enabling activists to organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form involving highly dispersed small groups of practitioners who may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards. 
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq. Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web; a minimal sketch of an HTTP exchange appears below. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; HTTP is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP). VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. 
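As a concrete illustration of the HTTP exchange mentioned above, the sketch below performs the most basic possible web request by hand, in Python, using only the standard library: it opens a TCP connection, writes a plain-text HTTP/1.1 request, and reads the response. Here example.com is the IANA-reserved documentation host, and real clients handle many details (TLS, redirects, chunked transfer encoding) that are deliberately omitted, so treat this as an illustration rather than a production client.

    # A minimal HTTP exchange: open a TCP connection, send a plain-text
    # request, and read the plain-text response.
    import socket

    HOST = "example.com"  # IANA-reserved documentation domain

    with socket.create_connection((HOST, 80)) as conn:  # DNS lookup + TCP handshake
        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {HOST}\r\n"      # the Host header is mandatory in HTTP/1.1
            "Connection: close\r\n"  # ask the server to close when finished
            "\r\n"                   # blank line ends the request headers
        )
        conn.sendall(request.encode("ascii"))
        response = b""
        while chunk := conn.recv(4096):  # read until the server closes
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"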
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC for Africa, APNIC for Asia and the Pacific, ARIN for North America, LACNIC for Latin America and the Caribbean, and RIPE NCC for Europe, the Middle East, and Central Asia. The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including the Internet Engineering Task Force (IETF), the Internet Architecture Board (IAB), the Internet Engineering Steering Group (IESG), the Internet Research Task Force (IRTF), and the Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues. Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables that connect the Internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components, TCP and IP). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the link layer, the internet layer, the transport layer, and the application layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
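A brief sketch of what an IP address is numerically, ahead of the fuller discussion that follows. This example is not from the original article; it uses Python's standard ipaddress module, and the specific addresses are assumptions drawn from the documentation-only ranges (RFC 5737 for IPv4, RFC 3849 for IPv6).

    import ipaddress

    # IPv4 addresses are 32-bit numbers, IPv6 addresses 128-bit numbers;
    # both are conventionally written in a human-readable notation.
    v4 = ipaddress.ip_address("198.51.100.7")  # IPv4 documentation range
    v6 = ipaddress.ip_address("2001:db8::1")   # IPv6 documentation range

    print(v4.version, int(v4))  # 4 3325256711
    print(v6.version, int(v6))  # 6 42540766411282592856903984951653826561
    print(int(v4).bit_length() <= 32, int(v6).bit_length() <= 128)  # True True

Routers never see the dotted or colon-separated text form; they operate on these fixed-length numbers directly.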
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol, or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to approximately 4.3 billion (10^9) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is growing around the world, as Internet address registries have urged all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2^96 addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
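The CIDR and netmask arithmetic in the passage above can be checked with a small sketch. This is an added illustration, not part of the original article; it uses Python's standard ipaddress module together with the article's own example prefixes.

    import ipaddress

    # 198.51.100.0/24: 24 bits of routing prefix, 8 bits for host addressing.
    net = ipaddress.ip_network("198.51.100.0/24")
    print(net.netmask)        # 255.255.255.0
    print(net.num_addresses)  # 256, i.e. 198.51.100.0 through 198.51.100.255

    # The netmask extracts the routing prefix with a bitwise AND.
    addr = ipaddress.ip_address("198.51.100.42")
    print(ipaddress.ip_address(int(addr) & int(net.netmask)))  # 198.51.100.0
    print(addr in net)  # True

    # The article's IPv6 example: a /32 leaves 128 - 32 = 96 host bits,
    # i.e. 2**96 addresses in the block.
    print(ipaddress.ip_network("2001:db8::/32").num_addresses == 2**96)  # True

The same AND operation is what a router's forwarding logic performs when matching a destination address against the prefixes in its routing table.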
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants increased to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, access to certain types of websites, or communication via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive websites on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume in petabytes per month, 1990-2015] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files. See also Notes References Sources Further reading External links |
======================================== |
[SOURCE: https://www.wired.com/tag/coupons/] | [TOKENS: 461] |
CouponsCoupons from brands we like, with our recommendations on how to use them.GearH&R Block Coupon: $50 Off In-Store ServicesGearLovehoney Coupon Offers: Toys, Lingerie, and Gift Set DiscountsGearSave at Lowe's With up to 40% Off Appliances and Daily DealsGearTop Sony Coupons: 45% Off Sony Headphones, WF-1000XM6 Earbuds, and Sony CamerasGearSave With Our KitchenAid Promo Codes This MonthGearTop LG Promo Codes and Coupons for February 2026Gear20% Off TurboTax Service Codes for February 2026GearParamount+ Coupon Codes and Deals: Free Trial, Student Deals, and Teachers Discounts This FebruaryGearVisible Promo Codes and Coupons for February 2026GearTop HBO Max Promo Codes This FebruaryGearHome Depot Promo Codes: 50% Off in February 2026GearSave 30% With Our Nike Promo Codes and Discounts for February 2026GearTop Walmart Promo Codes for February 2026GearAltra Running Deals: Up to 50% off, 10% off With Sign-up, and Free Shipping This FebruaryGearExclusive LegalZoom Promo Code for 10% Off Services for FebruaryGearChewy Promo Codes: $20 Off February 2026GearTop Tuft & Needle Promo Codes for February 2026GearTop Home Chef Promo Codes for February 2026GearSave With Vitamix Promo Codes and Deals This FebruaryGearVimeo Promo Codes and Discounts: Up to 40% Off This February 2026GearTop Newegg Promo Codes and Coupons for February 2026GearSephora Promo Codes for FebruaryGearGet 20% Off With a Brooks Promo Code for February 2026GearTop Nomad Goods Promo Codes: Get 25% Off in February 2026 |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Hengduan_Mountains] | [TOKENS: 690] |
Contents Hengduan Mountains The Hengduan Mountains (simplified Chinese: 横断山脉; traditional Chinese: 橫斷山脈; pinyin: Héngduàn Shānmài) are a group of mountain ranges in southwest China that connect the southeast portions of the Tibetan Plateau with the Yunnan-Guizhou Plateau. The Hengduan Mountains are primarily large north-south mountain ranges that effectively separate lowlands in northern Myanmar from the lowlands of the Sichuan Basin. These ranges are characterized by significant vertical relief originating from the Indian subcontinent's collision with the Eurasian Plate, and further carved out by the major rivers draining the eastern Tibetan Plateau. These rivers, the Yangtze, Mekong, and Salween, are recognized today as the Three Parallel Rivers UNESCO World Heritage Site. The Hengduan Mountains cover much of western present-day Sichuan province as well as the northwestern portions of Yunnan and the easternmost section of the Tibet Autonomous Region, touching upon parts of southern Qinghai. Additionally, some parts of eastern Kachin State in neighbouring Myanmar are considered part of the Hengduan group. The Hengduan Mountains are approximately 900 kilometres (560 mi) long, stretching from 33°N to 25°N. Depending on the extent of the definition, the Hengduan Mountains are also approximately 400 kilometres (250 mi) wide under the narrowest definition, ranging from 98°E to 102°E. The area covered by these ranges roughly corresponds with the Tibetan region known as Kham. The Hengduan Mountains subalpine conifer forests is a Palaearctic ecoregion in the temperate coniferous forests biome that covers portions of the mountains. Geography The Hengduan Mountain system consists of many component mountain ranges, most of which run roughly north to south. These mountain ranges, in turn, can be further divided into various subranges. The component ranges of the Hengduan are separated by deep river valleys that channel the waters of many of Southeast Asia's great rivers. The core of the Hengduan Mountains can be divided into four major component ranges. Ecosystems The Hengduan Mountains support a range of habitats, from subtropical to temperate to montane biomes. The mountains are largely covered by subalpine coniferous forests. Elevations range from 1,300 to 6,000 metres (4,300 to 19,700 ft). The dense, pristine forests, the relative isolation, and the fact that most of the area remained free from glaciation during the ice ages provide a very complex habitat with a high degree of biological diversity. Several ecoregions coincide with the Hengduan Mountains. Additionally, the lowest elevation portions of the Jinsha (Yangtze) River and Nu (Salween) River valleys in the southern Hengduan ranges are classified by the Chinese government as a tropical savanna environment. The easternmost ranges of the Hengduan are home to the rare and endangered giant panda. Other species native to the mountains are the Chinese yew (Taxus chinensis) and various other rare plants, deer, and primates. Gallery See also References External links |
======================================== |
[SOURCE: https://www.mako.co.il/travel-news/international/Article-f3f2ba23d007c91026.htm] | [TOKENS: 9107] |
The trend continues: Israelis are marking out their preferred vacation destinations, and here is a look at Eastern Europe. Two months into the new year, it appears that more Israelis are choosing Eastern Europe for their vacations than Western Europe. In-demand Hungary, Poland and Czechia are recording jumps of tens of percent, while London, Spain and Germany are seeing sharp drops. The sense of security, the price gaps and the short flight times create a winning combination that is redrawing the map of Israeli tourism. Asaf Zaken, mako | Published: 19.02.26, 15:26 | Photo: Andrzej Lisowski Travel, Shutterstock. Two months after the start of the new year, it appears that Israel's tourism industry is continuing a trend that already began in 2025: Eastern Europe is establishing itself as the preferred choice of Israeli travelers, gradually displacing the traditional destinations of Western Europe. According to the figures, 2025 saw a rise of about 60% in the number of Israelis arriving in Hungary compared with the year before. In Poland the past year brought a jump of 50%, and Czechia, which has taken a place among the leading destinations for travelers from Israel in recent months, recorded a rise of about 31% over the previous year. By contrast, London, Germany and France registered declines of between 15% and 27% in the demand expected this year. [Photo: Stephane de Sakutin, Getty Images] A major driver is the dramatic gap in prices: the cost of a stay in Eastern Europe is significantly lower, for lodging, food and leisure alike. An average hotel night for a vacation now runs about 165 euros, compared with about 390 euros in Barcelona and about 465 euros in London. Food is far cheaper as well: lunch in Poland or Czechia averages about 12 dollars, roughly half the price in Western Europe. In addition, the short flight times, between two and a half and three and a half hours, make these destinations especially attractive for short getaways, family vacations or city breaks, with little advance planning. "Today's Israeli is looking less for a 'classic' vacation built around a particular city and more for an affordable vacation that feels right, relaxed and suited to the family," says a deputy CEO of marketing and sales at one of Israel's large travel groups. "Eastern Europe knows how to offer hotels, food, attractions and a variety of holiday solutions that let parents and children enjoy themselves without compromises, at attractive prices, with convenient flights and a wide range of leisure options." According to an aviation-market analyst, the airlines, foreign and local alike, have identified the demand and are expanding supply, a winning economic combination that is already evident at Ben Gurion Airport. "We are witnessing a reinforcement of routes to Eastern Europe that only underlines the pace of growth in Israeli travel. We are talking about dozens of destinations across Europe that are becoming attractive thanks to the activity of low-cost carriers, which have announced additional flights to Israel this year alongside other airlines," he says. "The competitive pricing of these carriers is making itself felt." In 2026, more than 60 airlines are operating routes to Israel, and the competition is only expected to intensify. "By the end of the year, barring an extraordinary event, the number may climb to more than 100 airlines flying to Israel, and that will deliver the big news of the year: more tourism and lower prices." [Photo: Reuters] Looking ahead, the analyst points to several destinations that could become the next favorites of Israeli travelers. "When we talk about Eastern Europe, we now see destinations that a decade ago were barely on the tourism map at all. Alongside the Israelis' leading destinations of 2025, Poland is not only Warsaw: cities such as Krakow and Gdansk are expected to draw additional visitors this year. Slovakia's capital, Bratislava, is considered one of the cheapest and most family-friendly options, with spa and culinary getaways. I estimate that Azerbaijan and Uzbekistan, two Muslim countries regarded as welcoming, will also see Israeli demand strengthen this year. The Israeli tourist is looking for authentic places and something new to discover. What stands out is that the combination of high prices and a diminished sense of ease in parts of Western Europe is steering vacationers toward the alternatives in Eastern Europe." The analyst notes that demand still exists for destinations such as London, Paris, Amsterdam and Barcelona, though not in the numbers of the past. "Some of those cities appear in the media surrounded by unrest and demonstrations, and the Israeli public prefers to travel to places where it feels at ease." [Photo: a pro-Palestinian demonstration in Barcelona. Credit: Reuters] "The rise in demand for Eastern Europe rests on a deeper pattern in Israelis' travel behavior, and it reflects three clear preferences," observes Dr. Eran Ketter, head of the tourism management school at the Kinneret Academic College and an international tourism consultant. "First, a preference for destinations that offer high value for money, against the backdrop of the high cost of living in Israel and the expensive tourism product of Western Europe. Second, a growing preference for destinations perceived as friendly to Israelis, where visitors from Israel (families above all) feel they can move around freely, without pro-Palestinian activity. Third, among families there is a marked preference for experiential activities that combine an enjoyable urban vacation with entertainment and a variety of shared activities that build family togetherness." Alongside these, Dr. Ketter adds the growing preference of Israelis for destinations that are not part of Western Europe at all, such as Japan and Thailand. "Travel behavior is shaped by a combination of social, economic, political and other factors. Just as after the coronavirus period and the war, we can assume that here, too, demand will continue to change as circumstances do." |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/YouTube_TV] | [TOKENS: 3027] |
Contents YouTube TV YouTube TV is an American subscription over-the-top streaming television service operated by Google through its subsidiary YouTube. Launched in 2017, the virtual multichannel video programming distributor offers a selection of live linear channel feeds and on-demand content from more than 100 television networks (including affiliates of the Big Three broadcast networks (ABC, NBC and CBS), as well as Fox, The CW and PBS in most markets) and over 30 OTT-originated services, as well as a cloud-based DVR. The service, which is aimed at cord cutters, is available exclusively in the United States, and can be streamed through its dedicated website and mobile app, smart TVs and digital media players. As of November 7, 2025, YouTube TV has over 10 million subscribers. History YouTube TV launched on April 5, 2017, in five major U.S. markets: New York City, Los Angeles, Chicago, Philadelphia and San Francisco. In addition to carrying national broadcast networks, YouTube TV offers cable-originated channels owned by the corporate parents of the four major networks and other media companies. Other channels initially available on the service included CNBC and MSNBC (owned by Comcast through NBCUniversal), BBC World News (owned by the BBC), Smithsonian Channel (owned by Paramount Global), Sundance TV and BBC America (owned by AMC Networks), numerous sports channels, and Disney Channel (owned by The Walt Disney Company). YouTube TV members also received access to YouTube Premium's original movies and series, though an additional subscription to Premium was required for customers to access ad-free content and additional app features; Showtime and Fox Soccer Plus were also purchasable as optional premium add-ons for an extra fee. Also in 2017, YouTube added MLB Network, and entered into regional streaming rights deals with two Major League Soccer clubs, Seattle Sounders FC and Los Angeles FC. In February 2018, YouTube TV began carrying the Time Warner-owned Turner Broadcasting System's cable networks (including, among others, TBS, TNT, CNN and Cartoon Network). In addition, YouTube TV also announced a deal to add NBA TV and MLB Network. With these additional channels, the service increased its monthly price for the first time in March 2018, from $34.99 to $39.99, with no grandfathering or opt-out available. On June 19, 2018, under an agreement with Lionsgate, YouTube TV began offering Starz as a premium add-on, containing linear feeds of the six Starz and eight Starz Encore channels. The service expanded to cover 98% of U.S. households by January 2019. In March 2019, YouTube TV launched in Glendive, Montana, thus making the service available in all 210 American television markets. On April 10, 2019, YouTube TV added nine networks owned by Discovery, Inc. (including Discovery Channel, HGTV, Food Network, TLC, Animal Planet and OWN) that had been absent since the streamer's launch, bringing the service's lineup to 70 channels. The service concurrently announced a second monthly price increase, from $39.99 to $49.99, without grandfathering existing customers or allowing them to opt out. On April 12 of that year, YouTube TV reached an agreement with Metro-Goldwyn-Mayer to offer its Epix (now MGM+) premium service as an add-on.
In July 2019, at the Television Critics Association Summer Press Tour in Pasadena, California, YouTube TV announced it had signed a multi-year deal with PBS to allow carriage of live streams of the public broadcaster's member stations and PBS Kids Channel beginning as early as the fourth quarter of 2019. On December 15, 2019, the first PBS affiliate stations were added to YouTube TV. On February 20, 2020, YouTube TV reached an agreement with WarnerMedia (now Warner Bros. Discovery) to carry HBO and Cinemax as add-ons, with access to the conglomerate's HBO Max streaming service (which launched on May 20 of that year) included with an HBO subscription. (Customers who subscribe to the HBO add-on can access content within the HBO Max app using their YouTube TV/Google account credentials.) The additions of HBO and Cinemax resulted in YouTube TV becoming the first American vMVPD service to offer all five major premium channels as add-ons. In May 2020, YouTube TV reached an expanded, multi-year deal with ViacomCBS (now Paramount Skydance) to add the company's major cable networks (including MTV, Nickelodeon, BET and Comedy Central) that were notably absent since the streamer's launch. The deal also entailed a continued commitment to distribute several other ViacomCBS-owned networks, including CBS, Pop, The CW and Showtime, through YouTube TV, along with an extended partnership to distribute the media company's content on the broader YouTube platforms. Eight of the channels were added on June 30, expanding YouTube TV's lineup to over 85 channels. The additions of the extra channels were accompanied by the service's third monthly price increase, from $49.99 to $64.99, which also had no grandfathering or opt-out provisions. Some of its competitors, such as Hulu + Live TV and FuboTV, have also implemented similar price increases over time. In September 2020, YouTube TV added the NFL Network to its base lineup and announced the launch of a Sports Plus add-on package, which includes premium sports networks such as NFL RedZone, MavTV, GolTV, Fox Soccer Plus, Stadium and TVG for an additional cost. On December 1, 2020, YouTube TV announced an agreement to carry Nexstar Media Group's NewsNation (the former WGN America) beginning in January 2021. On March 16, 2021, YouTube TV announced that seven additional ViacomCBS-owned networks (including MTV2, TeenNick, Nick Jr. Channel, Dabl and BET Her) that were not added as part of the May 7 renewal agreement would be added to the lineup. In February 2021, the service launched its "Entertainment Plus" add-on, an optional discount bundle (available for $29.99 per month) consisting of the HBO Max, Showtime and Starz premium add-ons. On September 2, 2021, YouTube TV announced that BeIN Sports, Outside TV, VSiN and several other niche sports channels would be added to its Sports Plus add-on tier, effective September 8. In May 2022, the service launched a secondary Spanish-language base plan aimed at Hispanic and Latino customers, and a complementary "Spanish Plus" add-on; the "Spanish Plan", available for $34.99 per month, consists of 28 Spanish-language channels (including ESPN Deportes, CNN en Español, Cine Latino, Estrella TV, Nat Geo Mundo and Cine Mexicano), while Spanish Plus, available for $14.99 per month, includes over 25 Spanish-language channels (including several that are offered as part of the main Spanish plan).
The Spanish plan, which, unlike the Spanish Plus add-on, does not require a subscription to the main base plan, launched with a seven-day free trial. In September 2022, YouTube TV began allowing subscribers the option of purchasing its premium add-ons without signing up for the 85-channel base plan (a concept similar to the streaming channel stores operated by Apple, Prime Video and Roku), with around 20 add-ons initially being made available for purchase à la carte, including HBO Max; Cinemax; Showtime; Starz; MGM+; Hallmark Movies Now; CuriosityStream; MLB.tv and NBA League Pass. (YouTube launched a standalone channel store, Primetime Channels, within the platform's Movies & TV hub on November 1 of that year.) In December 2022, YouTube TV was named the exclusive provider of NFL Sunday Ticket beginning with the 2023 NFL season. YouTube TV replaced DirecTV as the package's provider; DirecTV had carried the package since its 1994 inception, a 29-year run. CNBC characterized this as a win for both YouTube TV as well as traditional television networks. YouTube's chief product officer, Neal Mohan, said this was a logical progression given how people consume sports content, and noted that subscriptions were a big part of the service's future. He also noted that "creators [would] have exclusive access to games, everything from the first game all the way through the Super Bowl, so that they can produce content on the NFL channel, but they can also produce their own content for YouTube shorts." At the time of the announcement, this move would not affect the NFL Network and RedZone on YouTube. NFL Sunday Ticket officially launched as a standalone add-on on both YouTube TV and YouTube's Primetime Channels store on August 16, 2023. On January 31, 2023, YouTube TV notified subscribers that it was dropping MLB Network after the company was unable to reach a new agreement with the channel for continued carriage. In a statement, a spokesperson for the channel said it was simply asking YouTube TV for a deal that was comparable to what around 300 other cable, satellite and streaming companies had agreed to in the past. On March 16, 2023, YouTube TV increased the price from $64.99 to $72.99 per month for new members, and on April 18, 2023, for existing members. The price of some add-on packages, like its 4K feature, was reduced to account for the price increase. On May 17, 2023, YouTube TV received backlash after a glitch made many channels unavailable for several hours, including TNT, which was airing an NBA playoff game between the Boston Celtics and the Miami Heat; the glitch left subscribers unable to watch the ending of the game. The service received an unexpected boost from a carriage dispute between Disney and Charter Communications in August, ahead of the college football and NFL seasons, with many users advising impacted Charter customers to consider switching to YouTube TV. On December 12, 2023, the separate HBO and HBO Max linear/VOD add-ons were converted into a singular HBO Max offering that, in addition to featuring HBO's live linear feeds and VOD content, includes in-app library access to Max's ad-free tier, and live feeds of the service's streaming channels, CNN Max (offering a mix of programs simulcast from CNN and CNN International) and Bleacher Report (a four-channel, gametime-only service that airs Max-exclusive and Warner Bros.
Discovery-owned linear network simulcasts of sports events).[note 1] On December 10, 2025, Google announced it would introduce a slate of genre-specific channel packages in early 2026. Google did not announce pricing at the time, but packages are expected to cost less than its current base plan, which runs $82.99 per month. Features YouTube TV offers a cloud-based DVR service with unlimited storage that saves recordings for nine months; access to the DVR required a subscription to the service's base channel plan until September 2022, when YouTube TV expanded the feature to subscribers of its premium add-ons who do not have an accompanying subscription to the base package. Each subscription can be shared among six accounts and allows up to three simultaneous streams. Supported devices YouTube TV can be streamed through its dedicated website and mobile app, as well as on smart TVs and digital media players. Carriage disputes In February 2020, YouTube TV announced that Sinclair Broadcast Group-owned regional sports networks (including Fox Sports Networks and YES Network) would likely be pulled from the service on February 28, 2020, citing high carriage fees. On that day, YouTube TV announced that it had reached an interim agreement to continue offering the channels on the platform while negotiations were under way. On March 5, 2020, YouTube TV and Sinclair reached a new deal to continue carrying all the Fox RSNs except three: the YES Network, Fox Sports Prime Ticket and Fox Sports West. However, on October 1, 2020, the networks (one of which was carrying the Bally Sports moniker) were pulled off the service after the two sides could not come to a renegotiation agreement. The same month, YouTube TV dropped NESN, which carries games for the Boston Red Sox and the Boston Bruins. In September 2021, YouTube TV entered into a dispute with NBCUniversal when negotiating a renewal of their contract, with the latter warning that its channels would be removed from the service if they failed to reach an agreement by the end of the month. NBC had reportedly demanded YouTube TV bundle its Peacock streaming service, while YouTube TV announced that it would decrease its price by $10 if the contract were not renewed. The two companies failed to reach an agreement by October 1, but agreed to a "short extension" to avoid the channels being taken down. A deal was reached a day later. In December 2021, YouTube TV engaged in a dispute with The Walt Disney Company over a renewal of their contract, warning customers about the possible removal of ABC, Disney Channel, ESPN, Freeform, FX, National Geographic, and other Disney-owned networks should the two fail to reach an agreement. Google and Disney were unable to renew their contract by the expiration date, resulting in YouTube TV's first contract-related blackout. This was resolved a day later, with the two companies reaching a new deal. In January 2023, MLB Network was pulled off YouTube TV after they failed to reach a contract renewal agreement. On November 18, 2023, Phoenix, Arizona television station KTVK was pulled off YouTube TV as a local channel in the state of Arizona due to a dispute between YouTube TV and KTVK's parent company, Gray Television. This left viewers on that service unable to watch the majority of locally televised Phoenix Suns and Phoenix Mercury basketball games not televised by TNT, ESPN, or ABC.
On February 12, 2025, YouTube TV engaged in a dispute with Paramount Global over a renewal of their contract, warning customers about the possible removal of CBS, Nickelodeon, MTV, Paramount+ with Showtime, BET, Comedy Central, and other Paramount-owned networks should the two fail to reach an agreement on February 13. A shutdown would have potentially made 2025 NCAA Division I men's basketball tournament games airing on CBS plus coverage of the Masters Tournament unavailable to YouTube TV subscribers. However, the two sides reached an agreement on the 13th. On October 30, 2025, Disney pulled its networks from YouTube TV. The dispute was resolved on November 14, 2025, with the service agreeing to carry ESPN's new DTC service (whose content will also be integrated directly into the YouTube TV platform), as well as Disney+ and Hulu on selected plans. See also Notes References External links |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Meta_Quest_Pro] | [TOKENS: 1620] |
Contents Meta Quest Pro The Meta Quest Pro is a discontinued mixed reality (MR) headset developed by Reality Labs, a division of Meta Platforms. Unveiled on October 11, 2022, it is a high-end headset designed for mixed reality and virtual reality applications, targeting business and enthusiast users. It is differentiated from the Quest 2 by a thinner form factor leveraging pancake lenses, high resolution cameras for MR, integrated face and eye tracking, and updated controllers with on-board motion tracking. The Quest Pro received mixed reviews, with critics praising its display and controllers, but criticizing its mixed reality cameras for having a grainy appearance and limited usefulness in its software at launch, and for its high price. Meta discontinued the Quest Pro in favor of the Meta Quest 3 in September 2024, with sales ending in January 2025. Development Prior to Facebook Connect in October 2021 (during which Facebook, Inc. announced its rebranding as "Meta" to emphasize its development of "metaverse"-related technologies), CEO Mark Zuckerberg and CTO Andrew Bosworth posted photos of themselves testing prototype headsets, which they stated had "Retina resolution" displays (alluding to the Apple Inc. trademark for high-resolution displays), while leaked demo videos and references to an "Oculus Pro" headset were also discovered on the Oculus website and unreleased system software. During the event, Zuckerberg officially announced that the company was developing a headset codenamed "Project Cambria" as part of the Oculus Quest line of products, which would be a high-end product aimed at mixed reality applications, and feature a slimmer design, high-resolution color passthrough cameras, infrared depth sensors, and eye tracking. The product was officially revealed as Meta Quest Pro during Connect in October 2022 for a release on October 25; Zuckerberg told the media that the Quest Pro would target "people who just want the highest-end VR device (enthusiast, prosumer folks) or people who are trying to get work done", and would be sold in parallel with the Quest 2 (which is aimed primarily at the consumer market). Bosworth stated that the Quest Pro would "take existing experiences that people are having today in VR and make them better." The planned depth sensor was dropped from the final hardware due to cost and weight concerns. Specifications The Quest Pro more closely resembles AR headsets such as Microsoft's HoloLens rather than other VR headsets, with a thinner lens enclosure, and a more visor-like form factor that does not obscure the entirety of the user's peripheral vision; "peripheral blinders" are included as an accessory, with a "full light blocker" attachment sold separately. The lenses can be adjusted for interpupillary distance, and moved forwards and backwards. It uses LCD quantum dot displays with a per-eye resolution of 1800×1920, viewed through pancake lenses that allow for its enclosure to be 40% thinner in comparison to the Quest 2. Meta stated that the displays supported a wider color gamut than the Quest 2, and had improved contrast via "local dimming". The Quest Pro's battery is built into the back of its head strap for better weight distribution; Meta rated it as lasting 1 to 2 hours on a single charge. For its mixed reality functions, the Quest Pro uses high-resolution color cameras, as opposed to the lower-resolution, grayscale cameras on the Quest.
The headset also contains internal sensors that are used for eye and face tracking, primarily for use with avatars. The Quest Pro uses a Qualcomm Snapdragon XR2+ system-on-chip with 12 GB of RAM, which Meta stated had "50% more power" than the Quest 2's Snapdragon XR2. The Quest Pro uses Touch Pro controllers, a significant update to the Oculus Touch controllers used by prior Quest and Rift products. They have a more compact design with upgraded haptics, and replace the infrared sensor ring (which was tracked by the headset's cameras) with on-board motion tracking using embedded cameras and Qualcomm Snapdragon 662 processors. The controllers are also rechargeable via the headset's charging dock, have a new pressure sensor for pinch gestures, and have pressure-sensitive stylus tip accessories that can be attached to the bottoms of their handles for drawing and writing. The Quest Pro controllers are also sold separately as an accessory for Quest 2 and newer. The Quest Pro was demonstrated to the press with mixed reality versions of software such as Horizon Workrooms (which allows users to attend meetings, and control their computer remotely in VR with a virtual multi-monitor environment), the DJ software Tribe XR, and Painting VR. Meta announced a partnership with Microsoft to integrate productivity services such as Microsoft 365, Microsoft Teams, and Windows 365 with Meta Quest 2 and Pro, including allowing users to join Teams meetings via Horizon Workrooms and use Microsoft 365 applications, as well as support for management of Quest devices via Intune. In December 2023, Valve Corporation released Steam Link for Meta Quest 2, Quest Pro, and Quest 3, with OSC support for facial and eye tracking on Quest Pro. Reception The Quest Pro received mixed reviews. Ars Technica noted that its design felt less "claustrophobic" and "much more secure and better balanced than previous Quest headsets, especially during extended use", but noting that its narrow field of view was more apparent when using the headset without its light blinder accessories. Its display and lenses were described as being slightly sharper and having more legible text rendering than the Quest 2, making it better-suited for office tasks and using a remote desktop environment in Workrooms. The MR cameras were panned for being grainy and "fuzzy"-looking, while many of the MR features in apps at launch were deemed to be "novelties". It was also criticized for requiring manual room setup, rather than automatically mapping walls. In conclusion, it was felt that "at its current asking price, though, we can only recommend the Quest Pro to mid-level executives who have convinced their superiors to allocate a ridiculous, money-is-no-object budget to ill-defined metaverse projects out of nothing more than a deep sense of FOMO." Adi Robertson of The Verge described the device as "seemingly launched without plan or purpose, highlighting VR's persistent drawbacks without making good use of its strengths, and topped off with some irredeemably bad software". The new controllers were praised for being more compact than the previous Oculus Touch design and offering a charging dock with rechargeable batteries (albeit having less battery life than the previous controllers, which used standard AA batteries).
The Quest Pro was criticized for having a "uniquely tortuous" strap system that felt worse when using the headset for extended sessions (in comparison to the Quest 2 with its Elite Strap accessory), a "grainy"-looking display and "fuzzy" passthrough visuals that "doesn't look remotely like the real world", and "limited and idiosyncratic" face tracking. The Workrooms app was also criticized for being unreliable, especially for software that was promoted as being one of the key selling points of the Quest Pro. Robertson gave the Quest Pro a 2 out of 5, arguing that it "[offers] technically innovative features without doing a good job of showcasing them", and suggesting that mainstream users wait for an eventual Quest 3 that may incorporate hardware improvements from the Pro at a lower price point. In March 2023, Meta lowered the price of Quest Pro to US$1,000, amid reports of "underwhelming" sales. On July 19, 2023, The Information reported that Meta was in the process of discontinuing production of the Quest Pro, and had scrapped plans for a successor model. Meta officially discontinued the Quest Pro in September 2024 alongside the unveiling of the Meta Quest 3S, with sales officially discontinued in January 2025, and Meta steering customers towards the Meta Quest 3. References |
======================================== |
[SOURCE: https://en.wikipedia.org/wiki/Qinling_Mountains] | [TOKENS: 1174] |
Contents Qinling The Qinling (simplified Chinese: 秦岭; traditional Chinese: 秦嶺; pinyin: Qínlǐng) or Qin Mountains, formerly known as the Nanshan ("Southern Mountains"), are a major east-west mountain range in southern Shaanxi Province, China. The mountains mark the divide between the drainage basins of the Yangtze and Yellow River systems, providing a natural boundary between North and South China, and support a huge variety of plants and wildlife, some of which are found nowhere else on Earth. To the north is the densely populated Wei River valley, an ancient center of Chinese civilization. To the south is the Han River valley. To the west is the line of mountains along the northern edge of the Tibetan Plateau. To the east are the lower Funiu and Daba Mountains, which rise out of the coastal plain. The northern side of the range is prone to hot weather; the rain shadow cast by the physical barrier of the mountains leaves the land to the north with a semi-arid climate, consequently somewhat impoverished in fertility and species diversity. Furthermore, the mountains have also acted in the past as a natural defense against nomadic invasions from the north, as only four passes cross the mountains. In the late 1990s a railway tunnel and a spiral were completed, thereby easing travel across the range. The highest mountain in the range is Mount Taibai at 3,767 meters (12,359 ft), which is about 100 kilometers (62 mi) west of the ancient Chinese capital of Xi'an. Three culturally significant peaks in the range are Mount Hua (2,155 meters or 7,070 feet), Mount Li (1,302 meters or 4,272 feet), and Mount Maiji (1,742 meters or 5,715 feet). Environment, flora and fauna The environment of the Qin Mountains is a deciduous forest ecoregion. The Qin Mountains form the watershed of the Yellow River and Yangtze River basins; historically, the former was home to deciduous broadleaf forests, while the latter has milder winters with more rainfall, and was generally covered in warmer, temperate, evergreen broadleaf forests. Thus, the Qin Mountains are commonly used as the demarcation line between northern and southern China. The low-elevation forests of the Qin foothills are dominated by temperate deciduous trees, like oaks (Quercus acutissima, Q. variabilis), elm (Ulmus spp.), common walnut (Juglans regia), maple (Acer spp.), ash (Fraxinus spp.) and Celtis spp. Evergreen species of these low-elevation forests include broadleaf chinquapins (Castanopsis sclerophylla), ring-cupped oaks (Quercus glauca), and conifers, like Pinus massoniana. At the middle elevations, conifers, like Pinus armandii, are mixed with broadleaf birch (Betula spp.), oaks (Quercus spp.), and hornbeams (Carpinus spp.); from about 2,600 to 3,000 meters (8,500 to 9,800 ft), these mid-elevation forests give way to a subalpine forest of firs (Abies fargesii, A. chensiensis), Cunninghamia, and birch (Betula spp.), with rhododendrons (Rhododendron fastigiatum) abundant in the understory. The region is home to a large number of rare plants, of which around 3,000 have been documented. Plant and tree species native to the region include ginkgo (Ginkgo biloba, thought to be one of the oldest species of tree in the world), as well as Huashan or Armand pine (Pinus armandii), Huashan shen (Physochlaina infundibularis), Acer miaotaiense and Chinese fir. Timber harvesting reached a peak in the 18th century in the Qinling Mountains.
The region is home to the endemic Qinling panda (Ailuropoda melanoleuca qinlingensis), a brown-and-white subspecies of the giant panda (A. melanoleuca), which is protected with the help of the Changqing and Foping nature reserves. An estimated 250 to 280 pandas live in the region, which is thought to represent around one-fifth of the entire wild giant panda population. The Qinling Mountains are also home to many other species of wildlife, including numerous birds, like the crested ibis, Temminck's tragopan, golden eagle, blackthroat and golden pheasant, as well as mammals like the Asiatic golden cat, Asiatic black bear, clouded leopard, golden takin, golden snub-nosed monkey, yellow-throated marten, and leopard. The Chinese giant salamander (Andrias davidianus), which grows up to 1.8 meters (5 ft 11 in), is one of the largest amphibians in the world, and is critically endangered; it is locally pursued for food, and for use of its body parts in traditional Chinese medicine. An environmental education program is being undertaken to encourage sustainable management of wild populations in the Qin Mountains, and captive-breeding programs have also been set up. Weapons of mass destruction According to the US-based think tank Nuclear Information Project, China "keeps most of its nuclear warheads at a central storage facility in the Qinling mountain range, though some are kept at smaller regional storage facilities." See also References External links |
======================================== |