[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)]
PlayStation (console)

The PlayStation[a] (codenamed PSX, abbreviated as PS, and retroactively PS1 or PS one) is a home video game console developed and marketed by Sony Computer Entertainment. It was released in Japan on 3 December 1994, followed by North America on 9 September 1995, Europe on 29 September 1995, and other regions thereafter. As a fifth-generation console, the PlayStation primarily competed with the Nintendo 64 and the Sega Saturn.

Sony began developing the PlayStation after a failed venture with Nintendo to create a CD-ROM peripheral for the Super Nintendo Entertainment System in the early 1990s. The console was primarily designed by Ken Kutaragi and Sony Computer Entertainment in Japan, while additional development was outsourced to the United Kingdom. An emphasis on 3D polygon graphics was placed at the forefront of the console's design. PlayStation game production was designed to be streamlined and inclusive, enticing the support of many third-party developers. The console proved popular for its extensive game library, popular franchises, low retail price, and aggressive youth marketing which advertised it as the preferable console for adolescents and adults. Critically acclaimed games that defined the console include Gran Turismo, Crash Bandicoot, Spyro the Dragon, Tomb Raider, Resident Evil, Metal Gear Solid, Tekken 3, and Final Fantasy VII.

Sony ceased production of the PlayStation on 23 March 2006, over eleven years after it had been released and in the same year that the PlayStation 3 debuted. More than 4,000 PlayStation games were released, with cumulative sales of 962 million units. The PlayStation signalled Sony's rise to power in the video game industry. It received acclaim and sold strongly; in less than a decade, it became the first computer entertainment platform to ship over 100 million units. Its use of compact discs heralded the game industry's transition from cartridges. The PlayStation's success led to a line of successors, beginning with the PlayStation 2 in 2000. In the same year, Sony released a smaller and cheaper model, the PS one.

History

The PlayStation was conceived by Ken Kutaragi, a Sony executive who managed a hardware engineering division and was later dubbed "the Father of the PlayStation". Kutaragi's interest in working with video games stemmed from seeing his daughter play games on Nintendo's Famicom. He convinced Nintendo to use his SPC-700 sound processor in the Super Nintendo Entertainment System (SNES) through a demonstration of the processor's capabilities. His willingness to work with Nintendo derived from both his admiration of the Famicom and his conviction that video game consoles would become the main home-use entertainment systems. Although Kutaragi was nearly fired because he had worked with Nintendo without Sony's knowledge, president Norio Ohga recognised the potential in Kutaragi's chip and decided to keep him on as a protégé.

The inception of the PlayStation dates back to a 1988 joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the SNES. Since Sony was already contracted to produce the SPC-700 sound processor for the SNES, Nintendo contracted Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD".
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony wanted to use its experience in consumer electronics to produce its own video game hardware. Although the initial agreement between Nintendo and Sony concerned a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible, Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application.

The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with the Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over its licences on all Philips-produced machines.

Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon its work with Sony.

Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop its own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, which held an "unwritten law" that native companies should not turn against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to develop what it had begun with Nintendo and Sega into a console based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to the company's involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that decisive action had to be taken, Sony severed all ties with Nintendo on 4 May 1992.

To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992 consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the proposal gained Ohga's enthusiasm, the majority of those present at the meeting remained opposed, as did older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opposers felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded Ohga of the humiliation he had suffered from Nintendo, Ohga retained the project and became one of Kutaragi's most staunch supporters.

Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation.

According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama added that Sony also wanted to emphasise the new console's ability to use Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed its European and North American divisions, Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995. The divisions planned to market the new console under the alternative branding "PSX" following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy".

Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble its efforts to gain the endorsement of arcade-savvy developers. A team from Epic Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring its own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market.

Attaining these companies secured influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995). Ridge Racer was one of the most popular arcade games at the time, and despite Namco being a longstanding Nintendo developer, it had already been confirmed behind closed doors by December 1993 that it would be the PlayStation's first game. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken following in September 1994.

Despite securing the support of various Japanese studios, Sony had no developers of its own while the PlayStation was in development. This changed in 1993, when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing its first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other systems such as the Mega Drive, Atari ST, and SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon its plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, linker, and debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought out by Sony in 2005.

Sony strived to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced the most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour its own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded its decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising its own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world.

The PlayStation's architecture and its interconnectivity with PCs were beneficial to many software developers. The use of the C programming language also proved useful, as it safeguarded the future compatibility of software should Sony decide to make further hardware revisions. Despite this inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, thought that allocating RAM was a challenging aspect of development given the 3.5 megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM in the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system as balancing the conflicting goals of high performance, low cost, and ease of programming, and felt that he and his team were successful in this regard. The console's technical specifications were finalised in 1993 and its design during 1994. The PlayStation name and final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed.

Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged him for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold it in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700.

Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, head of Sony Electronic Publishing, summoned SCEA president Steve Race to the conference stage, who simply said "$299" and left to a round of applause. Attention to the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely.

The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. A retailer later recalled: "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget for the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent high street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported an attach rate of four games sold per console. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games for the Saturn and 60 for the Nintendo 64.

In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One model) countrywide on 24 January 2002, at a price of Rs 7,990 and with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, the registration of the trademark by a third company meant the console could not be released there; the market was initially taken over by the officially distributed Sega Saturn, but as the Sega console withdrew, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation's user base grew to 300,000 by January 2000, even though Sony China had no plans to release it.

The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with stylised advertising slogans such as "Live in Your World. Play in Ours." (rendered with the controller's geometric button symbols standing in for letters) and "U R NOT E" (read as "you are not ready", with a red "E" completing "ready"). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say, "Bullshit.
Let me show you how ready I am."

As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate the PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound and with festival promoters to organise dedicated PlayStation areas where select games could be demonstrated. The Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues; by 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing.

In 1996, Sony expanded its CD production facilities in the United States due to the high demand for PlayStation games, increasing monthly output from 4 million discs to 6.5 million. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Even so, the PlayStation took longer to achieve dominance in Japan: Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time.

In 1998, Sega, spurred by its declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance; Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in its new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.

The PlayStation continued to sell strongly at the turn of the millennium: in mid-2000, Sony released the PS one, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this milestone even faster. The combined successes of both PlayStation consoles led Sega to retire the Dreamcast in 2001 and abandon the console business entirely. The PlayStation was eventually discontinued on 23 March 2006, over eleven years after its release and less than a year before the debut of the PlayStation 3.

Hardware

The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix-math coprocessor on the same die to provide the speed necessary to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels, offers sampling rates of up to 44.1 kHz, and supports music sequencing. The console has 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours, with 32 levels of transparency and unlimited colour look-up tables. It can output composite, S-Video or RGB video signals through its AV Multi connector (older models also have RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate a total of 4,000 sprites and 180,000 texture-mapped, light-sourced polygons per second, in addition to 360,000 flat-shaded polygons per second.
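To put these figures in perspective, here is a back-of-the-envelope calculation (an illustrative addition, not from the original article) of how quickly display resolution eats into the 1 MB of video RAM at the console's 16-bit colour depth. Only the two constants follow from the specifications above; real games' VRAM layouts were considerably more involved.

```python
# Illustrative VRAM budget for a double-buffered, 16-bit framebuffer.
# The constants follow from the specs above (1 MB VRAM, 16-bit colour).

VRAM_BYTES = 1024 * 1024   # 1 MB of video RAM
BYTES_PER_PIXEL = 2        # 16-bit colour

for width, height in [(256, 224), (320, 240), (640, 480)]:
    frame = width * height * BYTES_PER_PIXEL
    share = 2 * frame / VRAM_BYTES   # two buffers for flicker-free animation
    print(f"{width}x{height}: {frame // 1024} KiB per buffer, "
          f"double-buffered = {share:.0%} of VRAM")
```

At 256×224 a double-buffered display leaves most of the VRAM free for textures, while at 640×480 two 16-bit buffers alone would exceed the entire 1 MB, which suggests why the highest resolutions were practical mainly for static or sparsely textured screens.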
The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch model: the SCPH-1000, released on 3 December 1994, was the only model with an S-Video port, which was removed from the next model. Subsequent models continued to shed rear connectors, including the parallel port, with the final version retaining only a serial port.

Sony also marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was available only through an ordering service and came with the documentation and software needed to program PlayStation games and applications, including C compilers.
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all others, including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack", which also included a car cigarette-lighter adaptor for an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006.

Three iterations of the PlayStation's controller were released over the console's lifespan. The first, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), two shoulder buttons on each side, Start and Select buttons in the centre, and four face buttons bearing simple geometric shapes: a green triangle, a red circle, a blue cross, and a pink square (△, ○, ✕, □). Rather than depicting traditionally used letters or numbers on its buttons, the PlayStation controller established a visual trademark which would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view, and the square is equated to a sheet of paper, to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person.

Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex: instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. The stick also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used for instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size.

The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design, to give users more freedom of movement in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan and released in April 1997, to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons mapped to clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons, which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided to remove haptic feedback from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Rumble Pak for the Nintendo 64 controller. A Nintendo spokesman denied that Nintendo took any legal action, and Next Generation's Chris Charla theorised that Sony instead dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller, whose name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips; it also has longer handles, slightly different shoulder buttons, and rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller.

Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan, but the release was cancelled despite its having received promotion in Europe and North America.

In addition to playing games, most PlayStation models are equipped to play CD audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc, and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console with no game inserted or with the CD tray open, bringing up a graphical user interface (GUI) for the PlayStation BIOS. The GUI differs depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle.

PlayStation emulation is versatile and can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing use of PlayStation BIOSes on a Sega console. Bleem! was subsequently forced to shut down in November 2001.

Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, given the growing popularity of CD-Rs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were pressed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so a PlayStation disc's actual content could still be read by a conventional disc drive; however, a conventional drive could not detect the wobble frequency (and therefore duplicated discs omitted it), since the laser pick-up system of any optical disc drive interprets such a wobble as an oscillation of the disc surface and compensates for it during reading.
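A conceptual sketch of that boot-time check follows (an illustrative addition, not part of the article: the real logic lives in proprietary drive hardware and firmware, every name below is hypothetical, and the region-licence strings are values commonly reported by third-party analyses rather than official Sony documentation).

```python
# Conceptual illustration only -- the PlayStation's real check is implemented
# in proprietary drive hardware/firmware. read_wobble_string() stands in for
# the optical pick-up decoding the deliberate wobble in the disc's pregap.

CONSOLE_REGION = "SCEE"  # hypothetical PAL console; "SCEI"/"SCEA" elsewhere

def read_wobble_string(disc: dict) -> str | None:
    """Return the licence string recovered from the pregap wobble,
    or None when no wobble exists (e.g. on a burned CD-R copy)."""
    return disc.get("wobble")

def disc_boots(disc: dict) -> bool:
    # A burner reproduces a disc's data but not its wobble: the burner's own
    # pick-up treats the wobble as surface oscillation and compensates it
    # away, so the copy never contains the signal the console looks for.
    return read_wobble_string(disc) == CONSOLE_REGION

pressed_pal_disc = {"wobble": "SCEE", "data": "..."}  # genuine pressed disc
burned_copy = {"data": "..."}  # identical data, wobble lost in duplication

print(disc_boots(pressed_pal_disc))  # True
print(disc_boots(burned_copy))       # False: no wobble, refuses to boot
```

The same comparison against a console's own region string is what enforced the regional lockout mentioned above: a genuine disc from another region carries a wobble, but the wrong one.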
Early PlayStations, particularly early SCPH-1000 models, can exhibit skipping full-motion video or physical "ticking" noises from the unit. The problems stem from poorly placed vents that lead to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and creating knock-on effects for the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off.

The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out, usually unevenly, due to friction. The placement of the laser unit close to the power supply accelerates wear, because the additional heat makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt and no longer point directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply in later PlayStation models.

Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of televisions, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.

Game library

The PlayStation featured a diverse game library which grew to appeal to all types of players. Critically acclaimed PlayStation games included Final Fantasy VII (1997), Crash Bandicoot (1996), Spyro the Dragon (1998), and Metal Gear Solid (1998), all of which became established franchises.
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment stood at 962 million units.

Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at the later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as a forerunner of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum ranges. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers remained largely committed to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; notable exclusives from this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony.

Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred it.

Reception

The PlayStation was mostly well received upon release, and critics in the West generally welcomed the new console. The staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling those of Sega and Nintendo.
Famicom Tsūshin scored the console 19 out of 40 in May 1995, lower than the Saturn's 24 out of 40. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities, in addition to Sony revising its stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily because third-party developers almost unanimously favoured it over its competitors.

Legacy

SCE was an upstart in the video game industry in late 1994, as the market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985, and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the first generation to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. As of 2025, it remains the sixth best-selling console of all time, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-most ever produced for a single console. Its success was a significant financial boon for Sony, whose video game division came to contribute 23% of the company's profits.

Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors that led to Sega abandoning the console market. To date, five PlayStation home consoles have been released under the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs, and hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5.

The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third-best console, crediting its sophisticated 3D capabilities as a key factor in its mass success and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh-best console in its list, noting its appeal to older audiences as a crucial factor in propelling the video game industry, along with its role in transitioning the game industry to the CD-ROM format. Keith Stuart of The Guardian likewise named it the seventh-best console in 2020, declaring that its success was so profound it "ruled the 1990s".

In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse-engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future.

The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and it ended up going head-to-head with the cartridge-based Nintendo 64,[d] which the industry had expected to use CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64, likely out of concern for the proprietary cartridge format's ability to help enforce copy protection, given Nintendo's substantial reliance on licensing and exclusive games for its revenue. Besides their larger capacity, CD-ROMs could be produced in bulk at a much faster rate than ROM cartridges: a week, compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games at about 40% lower cost to the user than ROM cartridges while still making the same amount of net revenue. In Japan, Sony published smaller runs of a wider variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for audio CDs. The production flexibility of CD-ROMs meant that Sony could quickly produce larger volumes of popular games to get them onto the market, something that could not be done with cartridges because of their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996:

Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation.

The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
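As an illustrative aside (an addition, not from the article), the pricing claim above can be written as a simple identity. If a publisher takes the same net revenue $n$ on every unit sold, the retail price $p$ is that net plus the unit manufacturing cost $c$, so cheaper media translates one-for-one into a lower shelf price:

$$p = n + c \quad\Longrightarrow\quad p_{\text{CD}} - p_{\text{cart}} = c_{\text{CD}} - c_{\text{cart}} < 0.$$

On this assumption, a retail price roughly 40% below a cartridge's at equal net revenue corresponds to a per-unit manufacturing saving of about $0.4\,p_{\text{cart}}$; royalties, distribution and returns, which the article does not quantify, are ignored here.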
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (the two companies later merged to form Square Enix). Other developers released fewer games for the Nintendo 64: Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many developed either by Nintendo themselves or by second parties such as Rare.

The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show and released on 3 December 2018, the 24th anniversary of the release of the original console. The PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (the version without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console.

The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original (non-analogue) controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Parallel_(geometry)]
Parallel (geometry)

In geometry, parallel lines are coplanar infinite straight lines that do not intersect at any point. Parallel planes are infinite flat planes in the same three-dimensional space that never meet. In three-dimensional Euclidean space, a line and a plane that do not share a point are also said to be parallel. However, two non-intersecting lines that are not coplanar are called skew lines. Line segments and Euclidean vectors are parallel if they have the same direction or opposite directions (not necessarily the same length). Parallel lines are the subject of Euclid's parallel postulate. Parallelism is primarily a property of affine geometries, and Euclidean geometry is a special instance of this type of geometry. In some other geometries, such as hyperbolic geometry, lines can have analogous properties that are referred to as parallelism. The concept can also be generalized to non-straight parallel curves and non-flat parallel surfaces, which keep a fixed minimum distance and do not touch each other or intersect.

Symbol

The parallel symbol is ∥. For example, AB ∥ CD indicates that line AB is parallel to line CD. In the Unicode character set, the "parallel" and "not parallel" signs have codepoints U+2225 (∥) and U+2226 (∦), respectively. In addition, U+22D5 (⋕) represents the relation "equal and parallel to".

Euclidean parallelism

Given parallel straight lines l and m in Euclidean space, the following properties are equivalent:

1. Every point on line m is located at exactly the same (minimum) distance from line l (equidistant lines).
2. Line m is in the same plane as line l but does not intersect l.
3. When lines m and l are both intersected by a third straight line (a transversal) in the same plane, the corresponding angles of intersection with the transversal are congruent.

Since these are equivalent properties, any one of them could be taken as the definition of parallel lines in Euclidean space, but the first and third properties involve measurement, and so are "more complicated" than the second. Thus, the second property is the one usually chosen as the defining property of parallel lines in Euclidean geometry. The other properties are then consequences of Euclid's Parallel Postulate.

The definition of parallel lines as a pair of straight lines in a plane which do not meet appears as Definition 23 in Book I of Euclid's Elements. Alternative definitions were discussed by other Greeks, often as part of an attempt to prove the parallel postulate. Proclus attributes a definition of parallel lines as equidistant lines to Posidonius and quotes Geminus in a similar vein. Simplicius also mentions Posidonius' definition, as well as its modification by the philosopher Aganis.

At the end of the nineteenth century, in England, Euclid's Elements was still the standard textbook in secondary schools. The traditional treatment of geometry was being pressured to change by the new developments in projective geometry and non-Euclidean geometry, so several new textbooks for the teaching of geometry were written at this time. A major difference between these reform texts, both among themselves and between them and Euclid, is the treatment of parallel lines. These reform texts were not without their critics, and one of them, Charles Dodgson (a.k.a. Lewis Carroll), wrote a play, Euclid and His Modern Rivals, in which these texts are lambasted. One of the early reform textbooks was James Maurice Wilson's Elementary Geometry of 1868. Wilson based his definition of parallel lines on the primitive notion of direction. According to Wilhelm Killing, the idea may be traced back to Leibniz.
Wilson, without defining direction since it is a primitive, uses the term in other definitions such as his sixth definition, "Two straight lines that meet one another have different directions, and the difference of their directions is the angle between them." Wilson (1868, p. 2) In definition 15 he introduces parallel lines in this way: "Straight lines which have the same direction, but are not parts of the same straight line, are called parallel lines." Wilson (1868, p. 12) Augustus De Morgan reviewed this text and declared it a failure, primarily on the basis of this definition and the way Wilson used it to prove things about parallel lines. Dodgson also devotes a large section of his play (Act II, Scene VI § 1) to denouncing Wilson's treatment of parallels. Wilson edited this concept out of the third and higher editions of his text. Other properties, proposed by other reformers, used as replacements for the definition of parallel lines, did not fare much better. The main difficulty, as pointed out by Dodgson, was that to use them in this way required additional axioms to be added to the system. The equidistant line definition of Posidonius, expounded by Francis Cuthbertson in his 1874 text Euclidean Geometry, suffers from the problem that the points that are found at a fixed given distance on one side of a straight line must be shown to form a straight line. This cannot be proved and must be assumed to be true. The property of corresponding angles formed by a transversal, used by W. D. Cooley in his 1860 text, The Elements of Geometry, simplified and explained, requires a proof of the fact that if one transversal meets a pair of lines in congruent corresponding angles then all transversals must do so. Again, a new axiom is needed to justify this statement. The three properties above lead to three different methods of construction of parallel lines. Because parallel lines in a Euclidean plane are equidistant, there is a unique distance between the two parallel lines. Given the equations of two non-vertical, non-horizontal parallel lines, y = mx + b₁ and y = mx + b₂, the distance between the two lines can be found by locating two points (one on each line) that lie on a common perpendicular to the parallel lines and calculating the distance between them. Since the lines have slope m, a common perpendicular would have slope −1/m and we can take the line with equation y = −x/m as a common perpendicular. Solve the linear systems {y = mx + b₁, y = −x/m} and {y = mx + b₂, y = −x/m} to get the coordinates of the points. The solutions to the linear systems are the points (x₁, y₁) = (−b₁m/(m² + 1), b₁/(m² + 1)) and (x₂, y₂) = (−b₂m/(m² + 1), b₂/(m² + 1)). These formulas still give the correct point coordinates even if the parallel lines are horizontal (i.e., m = 0). The distance between the points is d = √((x₂ − x₁)² + (y₂ − y₁)²), which reduces to d = |b₂ − b₁| / √(m² + 1) {\displaystyle d={\frac {|b_{2}-b_{1}|}{\sqrt {m^{2}+1}}}} . When the lines are given by the general form of the equation of a line (horizontal and vertical lines are included), ax + by + c₁ = 0 and ax + by + c₂ = 0, their distance can be expressed as d = |c₂ − c₁| / √(a² + b²) {\displaystyle d={\frac {|c_{2}-c_{1}|}{\sqrt {a^{2}+b^{2}}}}} . A short numerical sketch of these two distance formulas appears at the end of this article. Two lines in the same three-dimensional space that do not intersect need not be parallel. Only if they are in a common plane are they called parallel; otherwise they are called skew lines. Two distinct lines l and m in three-dimensional space are parallel if and only if the distance from a point P on line m to the nearest point on line l is independent of the location of P on line m. This never holds for skew lines. A line m and a plane q in three-dimensional space, the line not lying in that plane, are parallel if and only if they do not intersect.
Equivalently, they are parallel if and only if the distance from a point P on line m to the nearest point in plane q is independent of the location of P on line m. Similar to the fact that parallel lines must be located in the same plane, parallel planes must be situated in the same three-dimensional space and contain no point in common. Two distinct planes q and r are parallel if and only if the distance from a point P in plane q to the nearest point in plane r is independent of the location of P in plane q. This will never hold if the two planes are not in the same three-dimensional space. In non-Euclidean geometry In non-Euclidean geometry, the concept of a straight line is replaced by the more general concept of a geodesic, a curve which is locally straight with respect to the metric (definition of distance) on a Riemannian manifold, a surface (or higher-dimensional space) which may itself be curved. In general relativity, particles not under the influence of external forces follow geodesics in spacetime, a four-dimensional manifold with 3 spatial dimensions and 1 time dimension. In non-Euclidean geometry (elliptic or hyperbolic geometry) the three Euclidean properties mentioned above are not equivalent and only the second one (line m is in the same plane as line l but does not intersect l) is useful in non-Euclidean geometries, since it involves no measurements. In general geometry the three properties above give three different types of curves: equidistant curves, parallel geodesics and geodesics sharing a common perpendicular, respectively. While in Euclidean geometry two geodesics can either intersect or be parallel, in hyperbolic geometry there are three possibilities. Two geodesics belonging to the same plane can either be: (1) intersecting, if they cross in a common point of the plane; (2) parallel, if they do not intersect in the plane but converge to a common ideal point at infinity; or (3) ultra parallel, if they neither intersect nor converge to a common ideal point. In the literature, ultra parallel geodesics are often called non-intersecting. Geodesics intersecting at infinity are called limiting parallels. Through a point a not on line l there are two limiting parallel lines, one for each ideal point (direction) of line l. They separate the lines intersecting line l from those that are ultra parallel to line l. Ultra parallel lines have a single common perpendicular (ultraparallel theorem), and diverge on both sides of this common perpendicular. In spherical geometry, all geodesics are great circles. Great circles divide the sphere into two equal hemispheres and all great circles intersect each other. Thus, there are no parallel geodesics to a given geodesic, as all geodesics intersect. Equidistant curves on the sphere are called parallels of latitude, analogous to the latitude lines on a globe. Parallels of latitude can be generated by the intersection of the sphere with a plane parallel to a plane through the center of the sphere. Reflexive variant If l, m, n are three distinct lines, then l ∥ m ∧ m ∥ n ⟹ l ∥ n . {\displaystyle l\parallel m\ \land \ m\parallel n\ \implies \ l\parallel n.} In this case, parallelism is a transitive relation. However, in case l = n, the superimposed lines are not considered parallel in Euclidean geometry. The binary relation between parallel lines is evidently a symmetric relation. According to Euclid's tenets, parallelism is not a reflexive relation and thus fails to be an equivalence relation. Nevertheless, in affine geometry a pencil of parallel lines is taken as an equivalence class in the set of lines where parallelism is an equivalence relation.
To this end, Emil Artin (1957) adopted a definition of parallelism where two lines are parallel if they have all or none of their points in common. Then a line is parallel to itself so that the reflexive and transitive properties belong to this type of parallelism, creating an equivalence relation on the set of lines. In the study of incidence geometry, this variant of parallelism is used in the affine plane.
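A minimal numerical sketch of the two distance formulas derived above (not from the source article; the function names and the sample lines y = 2x + 1 and y = 2x + 5 are illustrative choices):

    import math

    def distance_slope_intercept(m, b1, b2):
        # d = |b2 - b1| / sqrt(m^2 + 1) for the lines y = m*x + b1 and y = m*x + b2
        return abs(b2 - b1) / math.sqrt(m * m + 1)

    def distance_general_form(a, b, c1, c2):
        # d = |c2 - c1| / sqrt(a^2 + b^2) for a*x + b*y + c1 = 0 and a*x + b*y + c2 = 0
        return abs(c2 - c1) / math.hypot(a, b)

    # The lines y = 2x + 1 and y = 2x + 5, also written 2x - y + 1 = 0 and
    # 2x - y + 5 = 0, are 4/sqrt(5) ≈ 1.7889 apart by either formula.
    print(distance_slope_intercept(2, 1, 5))     # 1.7888...
    print(distance_general_form(2, -1, 1, 5))    # 1.7888...

Both calls give the same result because the two representations describe the same pair of lines.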
========================================
[SOURCE: https://en.wikipedia.org/wiki/Arab_al-Bawati] | [TOKENS: 380]
Contents Arab al-Bawati Arab al-Bawati (Arabic: عرب البواطي/خربة الحكمة) was a Palestinian Arab village in the District of Baysan. It was depopulated during the 1948 Arab-Israeli War. It was located 4 kilometres northeast of Baysan in the Baysan valley. History In 1882, the PEF's Survey of Western Palestine described Kh. el Hakeimiyeh as having "ruined walls and a few modern deserted houses – a small deserted village". In the 1922 census of Palestine, conducted by the Mandatory Palestine authorities, Bawati had a population of 348 Muslims, increasing in the 1931 census to 461 (under the name of 'Arab Hakamiya), still all Muslims, in 86 houses. In the 1945 statistics it had a population of 520 Muslims with a total of 10,641 dunams of land. That year Arabs used 2,225 dunams of village lands for plantations and irrigated land and 3,335 for cereals, while 52 dunams were classed as uncultivable. Many of the villagers left early in the war, apparently after a Haganah attack. The village was destroyed on May 16 or May 20, 1948. Following the war the area was incorporated into the State of Israel and the land was left undeveloped; the nearest village is Hamadia. In 1992, it was described: "All of the village houses have been demolished. The remains of basalt stone walls and the square and circular foundations of buildings can be seen among the weeds." Evidence of historic occupation includes Roman milestones and ruined buildings at the Khirbat al Bawati.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Al-Mazar,_Jenin] | [TOKENS: 1418]
Contents Al-Mazar, Jenin Al-Mazar (Arabic: المزار) was a Palestinian Arab village in the District of Jenin. Situated on Mount Gilboa, its history stretched back to the period of Mamluk rule over Palestine (13th century). An agricultural village, its villagers traced their ancestry to nomads descended from a Sufi mystic from Jaba', Syria. Al-Mazar was depopulated during the 1948 Palestine war, and incorporated into the newly established state of Israel. The Israeli villages of Prazon, Meitav, and Gan Ner were established on al-Mazar's former lands. Location The village was located on the flat, circular peak of the mountain known in biblical scripture as Mount Gilboa, and locally as Mount al-Mazar or Djebel Foukou'ah ("Mount of Mushrooms"), with steep slopes on all sides excepting the southeast. It was joined to the neighbouring village of Nuris by a dirt path. History The village may have been named al-Mazar (Arabic for "shrine", "a place one visits") because it was a burial place of many of those who fell in the Battle of Ain Jalut between the Mamluks and the Mongols in 1260. The villagers traced their origins to the al-Sadiyyun nomads, who in turn were descended from Shaykh Sad al-Din al-Shaybani (died 1224), a prominent Sufi mystic from the Jaba' village on the Golan. Another tradition traces their ancestry to Libya. During the period of Ottoman rule over Palestine, al-Mazar was captured and burned by Napoleon's troops in April 1799 during the Syrian leg of his military campaign in Egypt. Pierre Jacotin named the village Nazer on his map from that campaign. In 1870, V. Guérin visited al-Mazar, describing it as a village with about 500 inhabitants, situated at the peak of Djebel Foukou'ah, and surrounded by a belt of gigantic cactus plants. Numerous wells carved in the rock were said to point to the antiquity of the village. From the village, he could see the whole of Djebel Foukou'ah, which he identifies as the Mount Gilboa of biblical scripture, as well as the Jezreel Valley, the Little Hermon (actually Djebel Dhahy), Mount Tabor, and further north, the snowy peaks of Mount Hermon. Also seen from the village to the west and northwest were the Plain of Esdraelon and the Carmel Mountains; to the south, the mountains around Jenin; and to the east, before the Jordan River, what he calls the ancient country of Galaad. He notes that the name of Mount Gilboa is preserved in the name of the village of Djelboun, also situated on the mountain. Descending the mountain towards the west-southwest, at the base of the village of al-Mazar, he notes the presence of a spring of the same name, Ain el-Mezar, and on the slopes of this side of the mountain, which are less steep, there were olive trees and wheat being cultivated. In 1882, the PEF's Survey of Western Palestine (SWP) described the place as: "a village on the summit of the mountain. It is principally built of stone, and has a well to the south-east. A few olives surround the houses. The site is very rocky. It is inhabited by Derwishes, and is a place of Muslim pilgrimage." In the 1922 census of Palestine, conducted by the British Mandate authorities, al-Mazar had a population of 223, all Muslims, increasing slightly in the 1931 census to 257, still all Muslims, in a total of 62 inhabited houses. The village was home to Sheikh Farhan al-Sa'di, a prominent leader in the 1936 Arab revolt in Palestine. In 1937, at the age of 75, he was executed by the British authorities for his participation in the revolt.
Agriculture was the backbone of the village economy, which was based on grain, fruit, legume, and olive cultivation. In the 1945 statistics the population of al-Mazar was 270 Muslims, with a total of 14,501 dunams of land. Of this, 5,221 dunams were used for cereals and 229 dunams were irrigated or used for orchards, of which 68 dunams were for olives, while 9 dunams were built-up (urban) land. Farhan al-Sa'di (1856–1937) was born in al-Mazar. He is thought to have been the first to use a weapon during the 1936 revolt. On 19 April 1948, Palmah HQ (headquarters) ordered the OC (operational command) of the First Battalion to "destroy enemy bases in Mazar, Nuris and Zir'in [..] Comment: with the capture of Zir'in, most of the village houses must be destroyed while [some] should be left intact for accommodation and defence." According to Benny Morris, the Israeli historian, the policy of destroying the Palestinian villages was characteristic of Haganah attacks in April–May 1948, just before the outbreak of the 1948 Arab–Israeli war. However, the specific orders for al-Mazar were either not acted upon, or did not succeed at once, as the village was not occupied until 30 May 1948. By that time, it had been captured after an attack by Israeli soldiers from the Golani Brigade, along with the village of Nuris, which lay at the foot of the mountain. Following the war, the area was incorporated into the State of Israel and three villages were subsequently established on the land of al-Mazar: Prazon in 1953, Meitav in 1954, and Gan Ner in 1987. The Palestinian historian Walid Khalidi described what remained of al-Mazar in 1992: The site is overgrown with thorns and cactuses and strewn with stone rubble. None of the village houses or landmarks remains. Almond trees and cactuses grow on parts of the village lands. The hilly lands are used as grazing areas, and other parts are covered with forest. Folklore According to local tradition, the ancestral mother of the local al-Sadiyyun clan, Halima al-Sa'adi, was a Bedouin woman who breastfed the Islamic Prophet Muhammad. It is said that the prophet's mother entrusted the infant to a Bedouin woman to breastfeed him. Members of the clan say Halima nursed Muhammad in the house of his uncle following his mother's death when Muhammad was six years old.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Kingman_Reef] | [TOKENS: 2684]
Contents Kingman Reef Kingman Reef (/ˈkɪŋmən/) is a largely submerged, uninhabited, triangle-shaped reef, geologically an atoll, 9.0 nmi (20 km) east-west and 4.5 nmi (8 km) north-south, in the North Pacific Ocean, roughly halfway between the Hawaiian Islands and American Samoa. It has an area of 3 hectares (0.03 km2; 7.4 acres) and is an unincorporated territory of the United States in Oceania. The reef is administered by the United States Fish and Wildlife Service as the Kingman Reef National Wildlife Refuge. It was claimed by the United States in 1859 and briefly served in the 1930s as a stopover on commercial Pacific flying boat routes to New Zealand, before the route was changed to use a different stopover. It was administered by the Navy from 1934 to 2000 and thereafter by the Fish and Wildlife Service. It has since become a marine protected area. In the 19th century, it was noted as a maritime hazard, earning the name Hazard Rocks, and is known to have been hit once in 1876. In the 21st century, it has been noted for its marine biodiversity and remote nature. Hundreds of fish and coral species are on and around the reef. History Kingman Reef was discovered on June 14, 1798, by the American captain Edmund Fanning of the ship Betsey. It was first described by Captain W. E. Kingman (whose name the island bears) of the ship Shooting Star on November 29, 1853. It was claimed in 1859 by the United States Guano Company, under the name "Dangers Rock," along with several other islands. The claim was made under the U.S. Guano Islands Act of 1856, although there is no evidence that guano existed or was ever mined on Kingman Reef. The British steamship Tarta struck the reef in June 1874, and it was later surveyed by HMS Penguin (1876) in 1897, establishing that Kingman Reef was the same hazard previously charted as Caldew Reef and Maria Shoal, among other names. On May 10, 1922, Lorrin A. Thurston became the first person to raise the American flag on the atoll and read an annexation proclamation. The Palmyra Copra Co. intended to use Kingman as a fishing base, as demand for copra had declined after World War I and Palmyra Island lacked a suitable anchorage. Thurston formally claimed Kingman for the United States by reading the following declaration while standing on its shore: Be it known to all people: That on the tenth of May, A.D. 1922, the undersigned agent of the Island of Palmyra Copra Co., Ltd., landed from the motorship Palmyra doth, on this tenth day of May, A.D. 1922, take formal possession of this island, called Kingman Reef, situated in longitude 162 degrees 18' west and 6 degrees 23' north, on behalf of the United States of America and claim the same for said company. A copy of the declaration, along with a U.S. flag and clippings from The Honolulu Advertiser newspaper, were left on Kingman to document the claim. On December 29, 1934, the U.S. Navy assumed jurisdiction over Kingman Reef. In 1935, the reef was visited by William T. Miller, representing the U.S. Bureau of Air Commerce. That year, Pan American Airways wanted to expand its routes to the Pacific and include Australia and New Zealand in its "Clipper" air routes, with a stopover in Pago Pago, American Samoa. However, an additional stopover point was sought. It had been decided that the Kingman Reef lagoon, located 1,600 miles (2,600 km) north of Samoa, would be suitable for overnight stops for planes en route from the U.S. to New Zealand.
A supply ship, the North Wind, was stationed at Kingman Reef to provide fuel, lodging, and meals. On March 23, 1937, the Sikorsky S-42B Pan American Clipper II, named Samoan Clipper and piloted by Captain Ed Musick, en route from Hawaii to American Samoa, became the first flight to land in Kingman Reef's lagoon. During the next several months, Pan Am successfully used the lagoon several times as a halfway station for its flying boats when they traveled between those two points. However, a Clipper flight on January 11, 1938, ended in tragedy. Shortly after the early-morning takeoff from Pago Pago, as it was bound for New Zealand, the plane exploded. The right outboard engine had developed an oil leak, and the aircraft burst into flames while dumping fuel; there were no survivors. As a result of the tragedy, Pan Am ended flights to New Zealand via Kingman Reef and Pago Pago. It established a new route in July 1940 that used Canton Island and New Caledonia as stopovers instead. On February 14, 1941, President Franklin D. Roosevelt issued Executive Order 8682 to create naval defense areas in the central Pacific territories. The proclamation established the "Kingman Reef Naval Defensive Sea Area", encompassing the territorial waters between the extreme high-water marks and the three-mile marine boundaries surrounding the atoll. The "Kingman Naval Airspace Reservation" was also established to restrict access to the airspace over the naval defense sea area. Only U.S. government ships and aircraft were permitted to enter the naval defense areas at Kingman Reef unless authorized by the Secretary of the Navy. In 2012, Kingman Reef Atoll Development LLC, owned by descendants of the owners of the Palmyra Copra Co., Ltd., sued the U.S. government over its designation as a national wildlife refuge. The plaintiff sought $54.5 million in compensation for losing fishing rights, ecotourism, and other economic activity. However, in 2014, the federal court ruled that any such claim had expired by 1950 at the latest. In 2016, the ARRL Awards Committee of the American Radio Relay League removed Kingman Reef from its DXCC list, with the reef now considered part of the Palmyra Island / Jarvis Island DXCC Entity. Geography It is the northernmost of the Northern Line Islands and lies 36 nautical miles (67 km) northwest of the next closest island (Palmyra Atoll), and 930 nautical miles (1,720 km) south of Honolulu. The reef encloses a lagoon up to 53 fathoms (318 ft; 97 m) deep in its eastern part near the northeastern spit of land. The total area within the outer rim of the reef is 20 sq nmi (70 km2). There are two small strips (spits) of dry land composed of coral rubble and giant clamshells on the eastern rim with areas of 2 and 1 acre (0.8 and 0.4 ha) having a coastline of 2 miles (3 km), a short spit on the northeast side of the lagoon and a spit twice as long but thinner on its south side. The highest point on the reef is less than 5 feet (1.5 m) above sea level and is wet or awash most of the time, making Kingman Reef a maritime hazard. It has no natural resources and supports no economic activity. Political status Kingman Reef has the status of an unincorporated territory of the United States, administered from Washington, D.C., by the U.S. Department of the Interior. The atoll is closed to the public. For statistical purposes, Kingman Reef is grouped as part of the United States Minor Outlying Islands. In January 2009, Kingman Reef was designated a marine national monument.
The pre-20th century names Danger Rock, Caldew Reef, Maria Shoal, and Crane Shoal refer to this atoll, which was entirely submerged at high tide. Thomas Hale Streets described its state in the 1870s, when it had: ... hardly, as yet, assumed the distinctive features of an island. It is entirely under water at high tide, and but a few coral heads project here and there above the surface at low water. In the course of time, however, it will undoubtedly be added to the [northern Line Islands]. Kingman Reef is considered to be a county-equivalent by the U.S. Census Bureau. With only 0.01 square miles (0.03 square kilometers) of land, Kingman Reef is the smallest county or county-equivalent by land area in the United States. Ecology Kingman Reef supports a vast variety of marine life. Giant clams are abundant in the shallows, and there are approximately 38 genera and 130 species of stony corals on the reef. This is more than three times the species diversity of corals in the main Hawaiian Islands. The ecosystem of the reef and its food chain is known for being primarily predator-based. Sharks comprised 74% of the top predator biomass (329 g·m⁻²) at Kingman Reef and 57% at Palmyra Atoll (97 g·m⁻²). Low shark numbers have been observed at Tabuaeran and Kiritimati. Apex predators make up 85% of the total fish biomass on the reef, creating a high level of competition for food and nutrients among local organisms – particularly sharks, jacks, and other carnivores. The threatened green sea turtles that frequent nearby Palmyra Atoll travel to Kingman Reef to forage and bask on the coral rubble spits at low tide. However, above sea level, the reef is usually barren of macroorganisms. Mainly constructed of dead and dried coral skeletons, providing only calcite as a source of nutrients, the small and narrow strips of dry land are habitable by only a handful of species for short periods. Most flora that begin to grow above water—primarily coconut palms—die out quickly due to the fierce tides and lack of resources necessary to sustain plant life. National Wildlife Refuge On September 1, 2000, the Navy relinquished its control over Kingman Reef to the U.S. Fish and Wildlife Service. On January 18, 2001, Secretary of the Interior Bruce Babbitt created the Kingman Reef National Wildlife Refuge during his final days in office with Secretary's Order 3223. It is composed of the emergent coral rubble spits and all waters out to 12 nautical miles (22 km). While there are only 3 acres (0.012 km2) of land, 483,754 acres (1,957.68 km2) of water area are included in the Refuge. Along with six other islands, the reef was administered as part of the Pacific Remote Islands National Wildlife Refuge Complex. In January 2009, that entity was upgraded to the Pacific Remote Islands Marine National Monument by President George W. Bush. In 2025, this was renamed the Pacific Islands Heritage Marine National Monument. Amateur radio expeditions Since the early 1940s, Kingman Reef has had minimal human contact. However, amateur radio operators from around the world have occasionally visited the reef to put it "on the air" in what is known as a DX-pedition. In 1974, a group of amateurs using the callsign KP6KR sailed to the reef and set up a temporary radio station and antenna. Other groups visited the island in subsequent years, including 1977, 1980, 1981, 1988, and 1993. More recently, 15 amateur radio operators from the Palmyra DX Group visited the reef in October 2000.
Using the FCC-issued special event callsign K5K, the group made more than 80,000 individual contacts with amateurs worldwide over 10 days. Between November 15, 1945, and March 28, 2016, Kingman Reef counted as a discrete entity toward awards such as the DX Century Club. A video shot by amateur radio operators traveling to the K5P DX-pedition on Palmyra in January 2016 shows Kingman Reef mostly awash, raising questions as to whether a future activation of Kingman Reef would be possible. On March 28, 2016, the ARRL DXCC desk deleted Kingman Reef from the list of collectible entities effective March 29, 2016, deeming Kingman part of the Palmyra Island / Jarvis Island entity due to the islands' proximity and their common administration by the Fish and Wildlife Service.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_ref-IEEE_Transactions_on_Communications_13-1] | [TOKENS: 9291]
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and CompuServe established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than satellites could provide. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started to exhibit growth characteristics similar to the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018, 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connect to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free connectivity to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, with the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%.
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023, Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to easily form, cheaply communicate, and share ideas.
An example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and business-to-consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, having given rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring, by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves: highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web; a short sketch of an HTTP exchange appears below. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; HTTP is one of many languages or protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone, while having substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.
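As an illustration of the HTTP request-response exchange mentioned above, here is a minimal sketch (not from the source article; Python and the host example.com, a domain reserved for documentation, are illustrative choices) that issues a hand-written HTTP/1.1 GET and prints the server's status line:

    import socket

    # Minimal, illustrative HTTP/1.1 exchange: open a TCP connection to a web
    # server, send a hand-written GET request, and read the raw text response.
    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        reply = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk

    # The first line of the reply is the HTTP status line, e.g. "HTTP/1.1 200 OK";
    # headers follow, then a blank line, then the body of the requested resource.
    print(reply.split(b"\r\n")[0].decode())

In practice a browser or HTTP library performs this exchange (typically over TLS on port 443) and then renders or parses the returned document.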
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region: AFRINIC (Africa), APNIC (Asia-Pacific), ARIN (North America), LACNIC (Latin America and the Caribbean), and RIPE NCC (Europe, the Middle East, and parts of Central Asia).[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se.
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, after its first two major protocols). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the link layer, the internet layer, the transport layer, and the application layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP) or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
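The CIDR and netmask arithmetic described above can be checked concretely. The following is a minimal sketch using Python's standard ipaddress module and the example prefixes from the text (198.51.100.0/24 and 2001:db8::/32); the particular host address used is an arbitrary assumption for illustration.

```python
# A minimal sketch of the CIDR and netmask arithmetic described above,
# using Python's standard ipaddress module. The prefixes come from the
# text; the host 198.51.100.42 is an arbitrary illustrative choice.
import ipaddress

net4 = ipaddress.ip_network("198.51.100.0/24")
print(net4.network_address)    # 198.51.100.0   (the routing prefix)
print(net4.netmask)            # 255.255.255.0  (24 one-bits)
print(net4.num_addresses)      # 256, i.e. 2**8 addresses for the 8 host bits
print(net4.broadcast_address)  # 198.51.100.255 (last address in the range)

# Applying the netmask to any address in the network with a bitwise AND
# yields the routing prefix, exactly as described above.
host = ipaddress.ip_address("198.51.100.42")
print(ipaddress.ip_address(int(host) & int(net4.netmask)))  # 198.51.100.0

# The IPv6 block 2001:db8::/32 has 128 - 32 = 96 host bits:
net6 = ipaddress.ip_network("2001:db8::/32")
print(net6.num_addresses == 2**96)  # True
```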
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: Global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy; a 2014 peer-reviewed research paper found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
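To make the scale of the disagreement over per-gigabyte energy figures concrete, the short Python sketch below computes the spread between the two extremes cited above and the energy each would imply for a one-terabyte transfer; it is a back-of-the-envelope illustration, not a figure from any of the cited studies.

```python
# Back-of-the-envelope check of the energy-intensity estimates cited above.
low, high = 0.0064, 136.0  # kWh per gigabyte: the extremes from the 2014 review

print(round(high / low))   # 21250, i.e. the roughly 20,000-fold spread

# Energy implied for transferring 1 TB (taken here as 1,000 GB) at each extreme:
for kwh_per_gb in (low, high):
    print(f"{kwh_per_gb} kWh/GB -> {kwh_per_gb * 1000:,.1f} kWh per TB")
```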
========================================
[SOURCE: https://en.wikipedia.org/wiki/Animal#cite_ref-Balian2008_66-6] | [TOKENS: 6011]
Contents Animal Animals are multicellular, eukaryotic organisms belonging to the biological kingdom Animalia (/ˌænɪˈmeɪliə/). With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. Animals form a clade, meaning that they arose from a single common ancestor. Over 1.5 million living animal species have been described, of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are as many as 7.77 million animal species on Earth. Animal body lengths range from 8.5 μm (0.00033 in) to 33.6 m (110 ft). They have complex ecologies and interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology, and the study of animal behaviour is known as ethology. The animal kingdom is divided into five major clades, namely Porifera, Ctenophora, Placozoa, Cnidaria and Bilateria. Most living animal species belong to the clade Bilateria, a highly diverse clade whose members have a bilaterally symmetric and significantly cephalised body plan, and the vast majority of bilaterians belong to two large clades: the protostomes, which include organisms such as arthropods, molluscs, flatworms, annelids and nematodes; and the deuterostomes, which include echinoderms, hemichordates and chordates, the latter of which contains the vertebrates. The much smaller basal phylum Xenacoelomorpha has an uncertain position within Bilateria. Animals first appeared in the fossil record in the late Cryogenian period and diversified in the subsequent Ediacaran period in what is known as the Avalon explosion. Nearly all modern animal phyla first appeared in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago (Mya), and most classes during the Ordovician radiation 485.4 Mya. Common to all living animals, 6,331 groups of genes have been identified that may have arisen from a single common ancestor that lived about 650 Mya during the Cryogenian period. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on advanced techniques, such as molecular phylogenetics, which are effective at demonstrating the evolutionary relationships between taxa. Humans make use of many other animal species for food (including meat, eggs, and dairy products), for materials (such as leather, fur, and wool), as pets, and as working animals for transportation and services. Dogs, the first domesticated animal, have been used in hunting, in security and in warfare, as have horses, pigeons and birds of prey; while other terrestrial and aquatic animals are hunted for sport, trophies or profit. Non-human animals are also an important cultural element of human evolution, having appeared in cave art and totems since the earliest times, and are frequently featured in mythology, religion, arts, literature, heraldry, politics, and sports.
Etymology The word animal comes from the Latin noun animal of the same meaning, which is itself derived from Latin animalis 'having breath or soul'. The biological definition includes all members of the kingdom Animalia. In colloquial usage, the term animal is often used to refer only to nonhuman animals. The term metazoa is derived from Ancient Greek μετα meta 'after' (in biology, the prefix meta- stands for 'later') and ζῷᾰ zōia 'animals', plural of ζῷον zōion 'animal'. A metazoan is any member of the group Metazoa. Characteristics Animals have several characteristics that they share with other living things. Animals are eukaryotic, multicellular, and aerobic, as are plants and fungi. Unlike plants and algae, which produce their own food, animals cannot, a feature they share with fungi. Animals ingest organic material and digest it internally. Animals have structural characteristics that set them apart from all other living things: Typically, there is an internal digestive chamber with either one opening (in Ctenophora, Cnidaria, and flatworms) or two openings (in most bilaterians). Animal development is controlled by Hox genes, which signal the times and places to develop structures such as body segments and limbs. During development, the animal extracellular matrix forms a relatively flexible framework upon which cells can move about and be reorganised into specialised tissues and organs, making the formation of complex structures possible, and allowing cells to be differentiated. The extracellular matrix may be calcified, forming structures such as shells, bones, and spicules. In contrast, the cells of other multicellular organisms (primarily algae, plants, and fungi) are held in place by cell walls, and so develop by progressive growth. Nearly all animals make use of some form of sexual reproduction. They produce haploid gametes by meiosis; the smaller, motile gametes are spermatozoa and the larger, non-motile gametes are ova. These fuse to form zygotes, which develop via mitosis into a hollow sphere, called a blastula. In sponges, blastula larvae swim to a new location, attach to the seabed, and develop into a new sponge. In most other groups, the blastula undergoes more complicated rearrangement. It first invaginates to form a gastrula with a digestive chamber and two separate germ layers, an external ectoderm and an internal endoderm. In most cases, a third germ layer, the mesoderm, also develops between them. These germ layers then differentiate to form tissues and organs. Repeated instances of mating with a close relative during sexual reproduction generally lead to inbreeding depression within a population due to the increased prevalence of harmful recessive traits. Animals have evolved numerous mechanisms for avoiding close inbreeding. Some animals are capable of asexual reproduction, which often results in a genetic clone of the parent. This may take place through fragmentation; budding, such as in Hydra and other cnidarians; or parthenogenesis, where fertile eggs are produced without mating, such as in aphids. Ecology Animals are categorised into ecological groups depending on their trophic levels and how they consume organic material. Such groupings include carnivores (further divided into subcategories such as piscivores, insectivores, ovivores, etc.), herbivores (subcategorised into folivores, graminivores, frugivores, granivores, nectarivores, algivores, etc.), omnivores, fungivores, scavengers/detritivores, and parasites.
Interactions between animals of each biome form complex food webs within that ecosystem. In carnivorous or omnivorous species, predation is a consumer–resource interaction where the predator feeds on another organism, its prey, which often evolves anti-predator adaptations to avoid being fed upon. Selective pressures imposed on one another lead to an evolutionary arms race between predator and prey, resulting in various antagonistic/competitive coevolutions. Almost all multicellular predators are animals. Some consumers use multiple methods; for example, in parasitoid wasps, the larvae feed on the hosts' living tissues, killing them in the process, but the adults primarily consume nectar from flowers. Other animals may have very specific feeding behaviours, such as hawksbill sea turtles which mainly eat sponges. Most animals rely on biomass and bioenergy produced by plants and phytoplankton (collectively called producers) through photosynthesis. Herbivores, as primary consumers, eat the plant material directly to digest and absorb the nutrients, while carnivores and other animals on higher trophic levels indirectly acquire the nutrients by eating the herbivores or other animals that have eaten the herbivores. Animals oxidise carbohydrates, lipids, proteins and other biomolecules in cellular respiration, which allows the animal to grow and to sustain basal metabolism and fuel other biological processes such as locomotion. Some benthic animals living close to hydrothermal vents and cold seeps on the dark sea floor consume organic matter produced through chemosynthesis (via oxidising inorganic compounds such as hydrogen sulfide) by archaea and bacteria. Animals originated in the ocean; all extant animal phyla, except for Micrognathozoa and Onychophora, feature at least some marine species. However, several lineages of arthropods began to colonise land around the same time as land plants, probably between 510 and 471 million years ago, during the Late Cambrian or Early Ordovician. Vertebrates such as the lobe-finned fish Tiktaalik started to move on to land in the late Devonian, about 375 million years ago. Other notable animal groups that colonised land environments are Mollusca, Platyhelminthes, Annelida, Tardigrada, Onychophora, Rotifera, and Nematoda. Animals occupy virtually all of Earth's habitats and microhabitats, with faunas adapted to salt water, hydrothermal vents, fresh water, hot springs, swamps, forests, pastures, deserts, air, and the interiors of other organisms. Animals are, however, not particularly heat-tolerant; very few of them can survive at constant temperatures above 50 °C (122 °F) or in the most extreme cold deserts of continental Antarctica. The collective global geomorphic influence of animals on the processes shaping the Earth's surface remains largely understudied, with most studies limited to individual species and well-known exemplars. Diversity The blue whale (Balaenoptera musculus) is the largest animal that has ever lived, weighing up to 190 tonnes and measuring up to 33.6 metres (110 ft) long. The largest extant terrestrial animal is the African bush elephant (Loxodonta africana), weighing up to 12.25 tonnes and measuring up to 10.67 metres (35.0 ft) long. The largest terrestrial animals that ever lived were titanosaur sauropod dinosaurs such as Argentinosaurus, which may have weighed as much as 73 tonnes, and Supersaurus which may have reached 39 metres.
Several animals are microscopic; some Myxozoa (obligate parasites within the Cnidaria) never grow larger than 20 μm, and one of the smallest species (Myxobolus shekel) is no more than 8.5 μm when fully grown. The following table lists estimated numbers of described extant species for the major animal phyla, along with their principal habitats (terrestrial, fresh water, and marine), and free-living or parasitic ways of life. Species estimates shown here are based on numbers described scientifically; much larger estimates have been calculated based on various means of prediction, and these can vary wildly. For instance, around 25,000–27,000 species of nematodes have been described, while published estimates of the total number of nematode species include 10,000–20,000; 500,000; 10 million; and 100 million. Using patterns within the taxonomic hierarchy, the total number of animal species—including those not yet described—was calculated to be about 7.77 million in 2011.[a] Evolutionary origin Evidence of animals is found as long ago as the Cryogenian period. 24-Isopropylcholestane (24-ipc) has been found in rocks from roughly 650 million years ago; it is only produced by sponges and pelagophyte algae. Its likely origin is from sponges, based on molecular clock estimates for the origin of 24-ipc production in both groups. Analyses of pelagophyte algae consistently recover a Phanerozoic origin, while analyses of sponges recover a Neoproterozoic origin, consistent with the appearance of 24-ipc in the fossil record. The first body fossils of animals appear in the Ediacaran, represented by forms such as Charnia and Spriggina. It had long been doubted whether these fossils truly represented animals, but the discovery of the animal lipid cholesterol in fossils of Dickinsonia establishes their nature as animals. Animals are thought to have originated under low-oxygen conditions, suggesting that they were capable of living entirely by anaerobic respiration, but as they became specialised for aerobic metabolism they became fully dependent on oxygen in their environments. Many animal phyla first appear in the fossil record during the Cambrian explosion, starting about 539 million years ago, in beds such as the Burgess Shale. Extant phyla in these rocks include molluscs, brachiopods, onychophorans, tardigrades, arthropods, echinoderms and hemichordates, along with numerous now-extinct forms such as the predatory Anomalocaris. The apparent suddenness of the event may however be an artefact of the fossil record, rather than showing that all these animals appeared simultaneously. That view is supported by the discovery of Auroralumina attenboroughii, the earliest known Ediacaran crown-group cnidarian (557–562 mya, some 20 million years before the Cambrian explosion) from Charnwood Forest, England. It is thought to be one of the earliest predators, catching small prey with its nematocysts as modern cnidarians do. Some palaeontologists have suggested that animals appeared much earlier than the Cambrian explosion, possibly as early as 1 billion years ago. Early fossils that might represent animals appear for example in the 665-million-year-old rocks of the Trezona Formation of South Australia. These fossils are interpreted as most probably being early sponges. Trace fossils such as tracks and burrows found in the Tonian period (from 1 gya) may indicate the presence of triploblastic worm-like animals, roughly as large (about 5 mm wide) and complex as earthworms.
However, similar tracks are produced by the giant single-celled protist Gromia sphaerica, so the Tonian trace fossils may not indicate early animal evolution. Around the same time, the layered mats of microorganisms called stromatolites decreased in diversity, perhaps due to grazing by newly evolved animals. Objects such as sediment-filled tubes that resemble trace fossils of the burrows of wormlike animals have been found in 1.2 gya rocks in North America, in 1.5 gya rocks in Australia and North America, and in 1.7 gya rocks in Australia. Their interpretation as having an animal origin is disputed, as they might be water-escape or other structures. Phylogeny Animals are monophyletic, meaning they are derived from a common ancestor. Animals are the sister group to the choanoflagellates, with which they form the Choanozoa. Ros-Rocher and colleagues (2021) trace the origins of animals to unicellular ancestors, providing the external phylogeny shown in the cladogram. Uncertainty of relationships is indicated with dashed lines. The animal clade had certainly originated by 650 mya, and may have come into being as much as 800 mya, based on molecular clock evidence for different phyla. [Cladogram of the animals' unicellular relatives: Holomycota (inc. fungi), Ichthyosporea, Pluriformea, Filasterea.] The relationships at the base of the animal tree have been debated. Other than Ctenophora, the Bilateria and Cnidaria are the only groups with symmetry, and other evidence shows they are closely related. In addition to sponges, Placozoa has no symmetry and was often considered a "missing link" between protists and multicellular animals. The presence of Hox genes in Placozoa shows that they were once more complex. The Porifera (sponges) have long been assumed to be sister to the rest of the animals, but there is evidence that the Ctenophora may be in that position. Molecular phylogenetics has supported both the sponge-sister and ctenophore-sister hypotheses. In 2017, Roberto Feuda and colleagues, using amino acid differences, presented both, with the following cladogram for the sponge-sister view that they supported (their ctenophore-sister tree simply interchanging the places of ctenophores and sponges): [Cladogram, sponge-sister: Porifera, Ctenophora, Placozoa, Cnidaria, Bilateria.] Conversely, a 2023 study by Darrin Schultz and colleagues uses ancient gene linkages to construct the following ctenophore-sister phylogeny: [Cladogram, ctenophore-sister: Ctenophora, Porifera, Placozoa, Cnidaria, Bilateria.] Sponges are physically very distinct from other animals, and were long thought to have diverged first, representing the oldest animal phylum and forming a sister clade to all other animals. Despite their morphological dissimilarity with all other animals, genetic evidence suggests sponges may be more closely related to other animals than the comb jellies are. Sponges lack the complex organisation found in most other animal phyla; their cells are differentiated, but in most cases not organised into distinct tissues, unlike all other animals. They typically feed by drawing in water through pores, filtering out small particles of food. The Ctenophora and Cnidaria are radially symmetric and have digestive chambers with a single opening, which serves as both mouth and anus. Animals in both phyla have distinct tissues, but these are not organised into discrete organs. They are diploblastic, having only two main germ layers, ectoderm and endoderm. The tiny placozoans have no permanent digestive chamber and no symmetry; they superficially resemble amoebae. Their phylogeny is poorly defined, and under active research.
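To make the two competing hypotheses concrete, the sketch below encodes them as nested sister-group tuples in Python. The ladder-like topologies are an illustrative reconstruction based on the order of taxa summarised above, not figures reproduced from the cited papers; as the text notes, the ctenophore-sister tree simply interchanges the places of sponges and ctenophores.

```python
# Illustrative encoding of the two root-of-animals hypotheses as nested
# (sister-group) tuples; the topologies assume a ladder-like reading of
# the trees summarised above, which is an assumption for illustration.
sponge_sister = (
    "Porifera",
    ("Ctenophora", ("Placozoa", ("Cnidaria", "Bilateria"))),
)
ctenophore_sister = (
    "Ctenophora",
    ("Porifera", ("Placozoa", ("Cnidaria", "Bilateria"))),
)

def earliest_branch(tree):
    """Return the lineage that splits off first in a nested-tuple tree."""
    return tree[0]

print(earliest_branch(sponge_sister))      # Porifera
print(earliest_branch(ctenophore_sister))  # Ctenophora
```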
The remaining animals, the great majority—comprising some 29 phyla and over a million species—form the Bilateria clade, whose members have a bilaterally symmetric body plan. The Bilateria are triploblastic, with three well-developed germ layers, and their tissues form distinct organs. The digestive chamber has two openings, a mouth and an anus, and in the Nephrozoa there is an internal body cavity, a coelom or pseudocoelom. These animals have a head end (anterior) and a tail end (posterior), a back (dorsal) surface and a belly (ventral) surface, and a left and a right side. A modern consensus phylogenetic tree for the Bilateria is shown below. [Cladogram of Bilateria: Xenacoelomorpha, with the remaining bilaterians comprising the deuterostomes (Ambulacraria and Chordata) and the protostomes (Ecdysozoa and Spiralia).] Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Many bilaterians have a combination of circular muscles that constrict the body, making it longer, and an opposing set of longitudinal muscles that shorten the body; these enable soft-bodied animals with a hydrostatic skeleton to move by peristalsis. They also have a gut that extends through the basically cylindrical body from mouth to anus. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. However, over evolutionary time, descendant species have evolved which have lost one or more of each of these characteristics. For example, adult echinoderms are radially symmetric (unlike their larvae), while some parasitic worms have extremely simplified body structures. Genetic studies have considerably changed zoologists' understanding of the relationships within the Bilateria. Most appear to belong to two major lineages, the protostomes and the deuterostomes. It is often suggested that the basalmost bilaterians are the Xenacoelomorpha, with all other bilaterians belonging to the subclade Nephrozoa. However, this suggestion has been contested, with other studies finding that xenacoelomorphs are more closely related to Ambulacraria than to other bilaterians. Protostomes and deuterostomes differ in several ways. Early in development, deuterostome embryos undergo radial cleavage during cell division, while many protostomes (the Spiralia) undergo spiral cleavage. Animals from both groups possess a complete digestive tract, but in protostomes the first opening of the embryonic gut develops into the mouth, and the anus forms secondarily. In deuterostomes, the anus forms first while the mouth develops secondarily. Most protostomes have schizocoelous development, where cells simply fill in the interior of the gastrula to form the mesoderm. In deuterostomes, the mesoderm forms by enterocoelic pouching, through invagination of the endoderm. The main deuterostome taxa are the Ambulacraria and the Chordata. Ambulacraria are exclusively marine and include acorn worms, starfish, sea urchins, and sea cucumbers. The chordates are dominated by the vertebrates (animals with backbones), which consist of fishes, amphibians, reptiles, birds, and mammals. The protostomes include the Ecdysozoa, named after their shared trait of ecdysis, growth by moulting. Among the largest ecdysozoan phyla are the arthropods and the nematodes. The rest of the protostomes are in the Spiralia, named for their pattern of developing by spiral cleavage in the early embryo. Major spiralian phyla include the annelids and molluscs.
History of classification In the classical era, Aristotle divided animals,[d] based on his own observations, into those with blood (roughly, the vertebrates) and those without. The animals were then arranged on a scale from man (with blood, two legs, rational soul) down through the live-bearing tetrapods (with blood, four legs, sensitive soul) and other groups such as crustaceans (no blood, many legs, sensitive soul) down to spontaneously generating creatures like sponges (no blood, no legs, vegetable soul). Aristotle was uncertain whether sponges were animals, which in his system ought to have sensation, appetite, and locomotion, or plants, which did not: he knew that sponges could sense touch and would contract if about to be pulled off their rocks, but that they were rooted like plants and never moved about. In 1758, Carl Linnaeus created the first hierarchical classification in his Systema Naturae. In his original scheme, the animals were one of three kingdoms, divided into the classes of Vermes, Insecta, Pisces, Amphibia, Aves, and Mammalia. Since then, the last four have all been subsumed into a single phylum, the Chordata, while his Insecta (which included the crustaceans and arachnids) and Vermes have been renamed or broken up. The process was begun in 1793 by Jean-Baptiste de Lamarck, who called the Vermes une espèce de chaos ('a chaotic mess')[e] and split the group into three new phyla: worms, echinoderms, and polyps (which contained corals and jellyfish). By 1809, in his Philosophie Zoologique, Lamarck had created nine phyla apart from vertebrates (where he still had four phyla: mammals, birds, reptiles, and fish) and molluscs, namely cirripedes, annelids, crustaceans, arachnids, insects, worms, radiates, polyps, and infusorians. In his 1817 Le Règne Animal, Georges Cuvier used comparative anatomy to group the animals into four embranchements ('branches' with different body plans, roughly corresponding to phyla), namely vertebrates, molluscs, articulated animals (arthropods and annelids), and zoophytes (radiata) (echinoderms, cnidaria and other forms). This division into four was followed by the embryologist Karl Ernst von Baer in 1828, the zoologist Louis Agassiz in 1857, and the comparative anatomist Richard Owen in 1860. In 1874, Ernst Haeckel divided the animal kingdom into two subkingdoms: Metazoa (multicellular animals, with five phyla: coelenterates, echinoderms, articulates, molluscs, and vertebrates) and Protozoa (single-celled animals), including a sixth animal phylum, sponges. The protozoa were later moved to the former kingdom Protista, leaving only the Metazoa as a synonym of Animalia. In human culture The human population exploits a large number of other animal species for food, both from domesticated livestock species in animal husbandry and, mainly at sea, by hunting wild species. Marine fish of many species are caught commercially for food. A smaller number of species are farmed commercially. Humans and their livestock make up more than 90% of the biomass of all terrestrial vertebrates, and almost as much as all insects combined. Invertebrates including cephalopods, crustaceans, insects—principally bees and silkworms—and bivalve or gastropod molluscs are hunted or farmed for food and fibres. Chickens, cattle, sheep, pigs, and other animals are raised as livestock for meat across the world.
Animal fibres such as wool and silk are used to make textiles, while animal sinews have been used as lashings and bindings, and leather is widely used to make shoes and other items. Animals have been hunted and farmed for their fur to make items such as coats and hats. Dyestuffs including carmine (cochineal), shellac, and kermes have been made from the bodies of insects. Working animals including cattle and horses have been used for work and transport from the first days of agriculture. Animals such as the fruit fly Drosophila melanogaster serve a major role in science as experimental models. Animals have been used to create vaccines since vaccines were discovered in the 18th century. Some medicines such as the cancer drug trabectedin are based on toxins or other molecules of animal origin. People have used hunting dogs to help chase down and retrieve animals, and birds of prey to catch birds and mammals, while tethered cormorants have been used to catch fish. Poison dart frogs have been used to poison the tips of blowpipe darts. A wide variety of animals are kept as pets, with invertebrates such as tarantulas, octopuses, and praying mantises, reptiles such as snakes and chameleons, and birds including canaries, parakeets, and parrots all finding a place. However, the most commonly kept pet species are mammals, namely dogs, cats, and rabbits. There is a tension between the role of animals as companions to humans, and their existence as individuals with rights of their own. A wide variety of terrestrial and aquatic animals are hunted for sport. The signs of the Western and Chinese zodiacs are based on animals. In China and Japan, the butterfly has been seen as the personification of a person's soul, and in classical representation the butterfly is also the symbol of the soul. Animals have been the subjects of art from the earliest times, both historical, as in ancient Egypt, and prehistoric, as in the cave paintings at Lascaux. Major animal paintings include Albrecht Dürer's 1515 The Rhinoceros, and George Stubbs's c. 1762 horse portrait Whistlejacket. Insects, birds and mammals play roles in literature and film, such as in giant bug movies. Animals including insects and mammals feature in mythology and religion. The scarab beetle was sacred in ancient Egypt, and the cow is sacred in Hinduism. Among other mammals, deer, horses, lions, bats, bears, and wolves are the subjects of myths and worship.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Council_on_Foreign_Relations] | [TOKENS: 2997]
Contents Council on Foreign Relations The Council on Foreign Relations (CFR) is an American think tank focused on U.S. foreign policy and international relations. Founded in 1921, it is an independent and nonpartisan 501(c)(3) nonprofit organization with longstanding ties to political, corporate, and media elites. CFR is based in New York City, with an additional office in Washington, D.C. Its membership has included senior politicians, secretaries of state, CIA directors, bankers, lawyers, professors, corporate directors, CEOs, and prominent media figures. CFR meetings convene government officials, global business leaders, and prominent members of the intelligence and foreign-policy communities to discuss international issues. CFR has published the bimonthly journal Foreign Affairs since 1922. It also runs the David Rockefeller Studies Program, which makes recommendations to presidential administrations and the diplomatic community, testifies before Congress, interacts with the media, and publishes research on foreign policy issues. Michael Froman is the organization's 15th president. History In September 1917, near the end of World War I, President Woodrow Wilson established a working fellowship of about 150 scholars called "The Inquiry", tasked with briefing him about options for the postwar world after Germany was defeated. This academic group, directed by Wilson's closest adviser and long-time friend "Colonel" Edward M. House, and with Walter Lippmann as Head of Research, met to assemble the strategy for the postwar world. The team produced more than 2,000 documents detailing and analyzing the political, economic, and social facts globally that would be helpful for Wilson in the peace talks. Their reports formed the basis for the Fourteen Points, which outlined Wilson's strategy for peace after the war's end. These scholars then traveled to the 1919 Paris Peace Conference and participated in the discussions there. As a result of discussions at the Peace Conference, a small group of British and American diplomats and scholars met on May 30, 1919, at the Hotel Majestic in Paris. They decided to create an Anglo-American organization called "The Institute of International Affairs", which would have offices in London and New York. Ultimately, the British and American delegates formed separate institutes, with the British developing the Royal Institute of International Affairs (known as Chatham House) in London. Due to the isolationist views prevalent in American society at that time, the scholars had difficulty gaining traction with their plan and turned their focus instead to a set of discreet meetings which had been taking place since June 1918 in New York City, under the name "Council on Foreign Relations". The meetings were headed by corporate lawyer Elihu Root, who had served as Secretary of State under President Theodore Roosevelt, and attended by 108 "high-ranking officers of banking, manufacturing, trading and finance companies, together with many lawyers".[citation needed] The members supported Wilson's internationalist vision but were especially concerned about "the effect that the war and the treaty of peace might have on postwar business". Scholars from the Inquiry saw an opportunity to establish an organization that would bring together diplomats, senior government officials, and academics with lawyers, bankers, and industrialists to influence public policy.
On July 29, 1921, they filed a certificate of incorporation, officially forming the Council on Foreign Relations. Founding members included its first honorary president, Elihu Root; first elected president, John W. Davis; vice-president Paul D. Cravath; and secretary–treasurer Edwin F. Gay. In 1922, Gay, a former dean of the Harvard Business School and director of the Shipping Board during the war, headed the Council's efforts to begin publication of a magazine that would be the "authoritative" source on foreign policy. He gathered US$125,000 (equivalent to $2,404,324 in 2025) from the wealthy members of the council, as well as by sending letters soliciting funds to "the thousand richest Americans". Using these funds, the first issue of Foreign Affairs was published in September 1922. Within a few years, it had gained a reputation as the "most authoritative American review dealing with international relations". In the late 1930s, the Ford Foundation and Rockefeller Foundation began financially supporting the Council. In 1938, they created various Committees on Foreign Relations throughout the country, which later came to be governed by the American Committees on Foreign Relations in Washington, D.C., and were funded by a grant from the Carnegie Corporation. Influential men were to be chosen in a number of cities and brought together for discussions in their own communities as well as participating in an annual conference in New York. These local committees served to influence local leaders and shape public opinion to build support for the Council's policies, while also acting as "useful listening posts" through which the Council and U.S. government could "sense the mood of the country". During the Second World War, the Council achieved much greater prominence within the government and the State Department, when it established the strictly confidential War and Peace Studies, funded entirely by the Rockefeller Foundation. The secrecy surrounding this group was such that Council members who were not involved in its deliberations were completely unaware of the study group's existence. It was divided into four functional topic groups: economic and financial; security and armaments; territorial; and political. The security and armaments group was headed by Allen Welsh Dulles, who later became a pivotal figure in the CIA's predecessor, the Office of Strategic Services (OSS). CFR ultimately produced 682 memoranda for the State Department, which were marked classified and circulated among the appropriate government departments. A critical study found that of 502 government officials surveyed from 1945 to 1972, more than half were members of the Council. During the Eisenhower administration, 40% of the top U.S. foreign policy officials were CFR members (Eisenhower himself had been a council member); under Truman, 42% of the top posts were filled by council members. During the Kennedy administration, this number rose to 51%, and it peaked at 57% under the Johnson administration. In 1947, CFR study group member George Kennan anonymously published an article in Foreign Affairs titled "The Sources of Soviet Conduct," in which he introduced the concept of "containment." The essay became highly influential in shaping U.S. foreign policy over the course of the next seven presidential administrations.
Forty years later, Kennan remarked that he had never believed the Soviet Union intended to attack the United States, assuming that point was so self-evident it required no explanation in the original essay. William Bundy credited CFR's study groups with helping to lay the framework of thinking that led to the Marshall Plan and NATO. Due to new interest in the group, membership grew towards 1,000. Dwight D. Eisenhower chaired a CFR study group while he served as President of Columbia University. One member later said, "whatever General Eisenhower knows about economics, he has learned at the study group meetings." The CFR study group devised an expanded study group called "Americans for Eisenhower" to increase his chances for the presidency. Eisenhower would later draw many Cabinet members from CFR ranks and become a CFR member himself. His primary CFR appointment was Secretary of State John Foster Dulles. Dulles gave a public address at the Harold Pratt House in New York City in which he announced a new direction for Eisenhower's foreign policy: "There is no local defense which alone will contain the mighty land power of the communist world. Local defenses must be reinforced by the further deterrent of massive retaliatory power." After this speech, the council convened a session on "Nuclear Weapons and Foreign Policy" and chose Henry Kissinger to head it. Kissinger spent the following academic year working on the project at Council headquarters. The book of the same name that he published from his research in 1957 gave him national recognition, topping the national bestseller lists. CFR played an important role in the creation of the European Coal and Steel Community (ECSC), promoting a blueprint for the community and helping Jean Monnet win support for it. On November 24, 1953, a study group heard a report from political scientist William Henderson regarding the ongoing conflict between France and Vietnamese Communist leader Ho Chi Minh's Viet Minh forces, a struggle that would later become known as the First Indochina War. Henderson argued that Ho's cause was primarily nationalist in nature and that Marxism had "little to do with the current revolution." Further, the report said, the United States could work with Ho to guide his movement away from Communism. State Department officials, however, expressed skepticism about direct American intervention in Vietnam and the idea was tabled. Over the next twenty years, the United States would find itself allied with anti-Communist South Vietnam and against Ho and his supporters in the Vietnam War. The Council served as a "breeding ground" for important American policies such as mutual deterrence, arms control, and nuclear non-proliferation. In 1962, the group began a program of bringing select Air Force officers to the Harold Pratt House to study alongside its scholars. The Army, Navy and Marine Corps requested they start similar programs for their own officers. A four-year-long study of relations between America and China was conducted by the Council between 1964 and 1968. One study published in 1966 concluded that American citizens were more open to talks with China than their elected leaders. Henry Kissinger had continued to publish in Foreign Affairs and was appointed by President Richard Nixon to serve as National Security Adviser in 1969. In 1971, he embarked on a secret trip to Beijing to broach talks with Chinese leaders.
Nixon went to China in 1972, and diplomatic relations were completely normalized by President Carter's Secretary of State, another Council member, Cyrus Vance. The Vietnam War created a rift within the organization. When Hamilton Fish Armstrong announced in 1970 that he would be leaving the helm of Foreign Affairs after 45 years, new chairman David Rockefeller approached a family friend, William Bundy, to take over the position. Anti-war advocates within the Council rose in protest against this appointment, claiming that Bundy's hawkish record in the State and Defense Departments and the CIA precluded him from taking over an independent journal. Some considered Bundy a war criminal for his prior actions. In November 1979, while chairman of CFR, David Rockefeller became embroiled in an international incident when he and Henry Kissinger, along with John J. McCloy and Rockefeller aides, persuaded President Jimmy Carter through the State Department to admit the Shah of Iran, Mohammad Reza Pahlavi, into the US for hospital treatment for lymphoma. This action directly precipitated what is known as the Iran hostage crisis and placed Rockefeller under intense media scrutiny (particularly from The New York Times) for the first time in his public life. In his book White House Diary, Carter wrote of the affair, "April 9 David Rockefeller came in, apparently to induce me to let the shah come to the United States. Rockefeller, Kissinger, and Brzezinski seem to be adopting this as a joint project". Membership The CFR has two types of membership: life membership, and term membership, which lasts for five years and is available only to those between the ages of 30 and 36. Only U.S. citizens (native born or naturalized) and permanent residents who have applied for U.S. citizenship are eligible. A candidate for life membership must be nominated in writing by one Council member and seconded by a minimum of three others. Visiting fellows are prohibited from applying for membership until they have completed their fellowship tenure. Corporate membership is divided into "Associates", "Affiliates", "President's Circle", and "Founders". All corporate executive members have opportunities to hear speakers, including foreign heads of state, chairmen and CEOs of multinational corporations, and U.S. officials and Congressmen. President and premium members are also entitled to attend small, private dinners or receptions with senior American officials and world leaders. The CFR runs a Young Professionals Briefing Series designed for young leaders interested in international relations, who may be eligible for term membership. Women were excluded from membership until the 1960s. Board members As of 2025, CFR's board of directors includes: [list of directors not preserved in this extract] As a charity The Council on Foreign Relations received a three-star rating (out of four stars) from Charity Navigator in fiscal year 2016, as measured by an analysis of the council's financial data and "accountability and transparency". In fiscal year 2023, the council received a four-star rating (98 percent) from Charity Navigator. Reception In an article for The Washington Post, Richard Harwood described the membership of the CFR as "the nearest thing we have to a ruling establishment in the United States". The CFR has been criticized for its perceived elitism and influence over U.S.
foreign policy, with detractors arguing that it serves as a networking hub for government officials, corporate executives, and media figures, reinforcing an establishment consensus that prioritizes globalist policies over national interests. In 2019, CFR was criticized for accepting a donation from Len Blavatnik, a Ukrainian-born billionaire with close links to Vladimir Putin. The council was reported to be under fire from its own members and dozens of international affairs experts over its acceptance of a $12 million gift to fund an internship program. Fifty-five international relations scholars and Russia experts wrote a letter to the organization's board and CFR president Richard N. Haass: "It is our considered view that Blavatnik uses his 'philanthropy'—funds obtained by and with the consent of the Kremlin, at the expense of the state budget and the Russian people—at leading western academic and cultural institutions to advance his access to political circles. We regard this as another step in the longstanding effort of Mr. Blavatnik—who ... has close ties to the Kremlin and its kleptocratic network—to launder his image in the West." Critics have accused the CFR of promoting interventionist foreign policies, stating that its reports and recommendations have often supported U.S. military interventions and regime-change efforts. Some opponents say that its influence contributes to a bipartisan consensus that favors global military engagement, economic neoliberalism, and the interests of multinational corporations.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Special:BookSources/978-1-4614-2302-7] | [TOKENS: 380]
Contents Book sources
This page allows users to search multiple sources for a book given a 10- or 13-digit International Standard Book Number. Spaces and dashes in the ISBN do not matter. This page links to catalogs of libraries, booksellers, and other book sources where you will be able to search for the book by its International Standard Book Number (ISBN).

Online text
Google Books and other retail sources below may be helpful if you want to verify citations in Wikipedia articles, because they often let you search an online version of the book for specific words or phrases, or you can browse through the book (although for copyright reasons the entire book is usually not available). At the Open Library (part of the Internet Archive) you can borrow and read entire books online.

Online databases
Subscription eBook databases

Libraries
Alabama Alaska California Colorado Connecticut Delaware Florida Georgia Illinois Indiana Iowa Kansas Kentucky Massachusetts Michigan Minnesota Missouri Nebraska New Jersey New Mexico New York North Carolina Ohio Oklahoma Oregon Pennsylvania Rhode Island South Carolina South Dakota Tennessee Texas Utah Washington state Wisconsin

Bookselling and swapping
Find your book on a site that compiles results from other online sites: These sites allow you to search the catalogs of many individual booksellers:

Non-English book sources
If the book you are looking for is in a language other than English, you might find it helpful to look at the equivalent pages on other Wikipedias, linked below – they are more likely to have sources appropriate for that language.

Find other editions
The WorldCat xISBN tool for finding other editions is no longer available. However, there is often a "view all editions" link on the results page from an ISBN search. Google Books often lists other editions of a book and related books under the "about this book" link. You can convert between 10- and 13-digit ISBNs with these tools:
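The converter links themselves did not survive in this extract, but the underlying arithmetic is standard and easy to sketch. The following Python snippet (the function names are my own, and it assumes a well-formed, 978-prefixed ISBN rather than performing full validation) shows the two check-digit rules: ISBN-13 weights its first twelve digits alternately 1 and 3 modulo 10, while ISBN-10 weights its first nine digits 10 down to 2 modulo 11, using "X" for a check value of ten.

# Minimal sketch of ISBN-10 <-> ISBN-13 conversion (978 prefix only).
# Assumes well-formed input; a real tool would also validate the check digit.
def isbn10_to_isbn13(isbn10: str) -> str:
    digits = isbn10.replace("-", "").replace(" ", "")   # dashes/spaces ignored
    core = "978" + digits[:9]                           # drop the old check digit
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(core))
    return core + str((10 - total % 10) % 10)           # EAN-13 check digit

def isbn13_to_isbn10(isbn13: str) -> str:
    digits = isbn13.replace("-", "").replace(" ", "")
    core = digits[3:12]                                 # strip 978 prefix and check digit
    total = sum(int(d) * (10 - i) for i, d in enumerate(core))
    check = (11 - total % 11) % 11                      # modulo-11 check digit
    return core + ("X" if check == 10 else str(check))

# The ISBN from this page's own URL round-trips between the two forms:
print(isbn13_to_isbn10("978-1-4614-2302-7"))  # -> 1461423023
print(isbn10_to_isbn13("1-4614-2302-3"))      # -> 9781461423027

Either direction keeps the nine "core" digits and recomputes only the check digit, which is why spaces and dashes in the input genuinely do not matter.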
========================================
[SOURCE: https://en.wikipedia.org/wiki/Net_worth] | [TOKENS: 722]
Contents Net worth
Net worth is the value of all the non-financial and financial assets owned by an individual or institution minus the value of all its outstanding liabilities. Financial assets minus outstanding liabilities equal net financial assets, so net worth can be expressed as the sum of non-financial assets and net financial assets. This concept can apply to companies, individuals, governments, or economic sectors such as the financial corporations sector, or even entire countries.

By entity
Net worth is the excess of assets over liabilities. The assets that contribute to net worth can include homes, vehicles, various types of bank accounts, money market accounts, stocks and bonds. The liabilities are financial obligations such as loans, mortgages, and accounts payable (AP) that deplete resources.

Net worth in business is also referred to as equity. It is generally based on the value of all assets and liabilities at their carrying value, which is the value expressed on the financial statements. To the extent that items on the balance sheet do not express their true (market) value, the net worth will also be inaccurate. On reading the balance sheet, if the accumulated losses exceed shareholders' equity, net worth becomes negative. Net worth in this formulation does not express the market value of a firm; a firm may be worth more (or less) if sold as a going concern, or indeed if the business closes down. The relationship between net worth and debt is a significant consideration in business lending. Business owners may "trade on equity" in order to further increase their net worth.

For individuals, net worth or wealth refers to an individual's net economic position: the value of the individual's assets minus liabilities. Examples of assets that an individual would factor into their net worth are retirement accounts, other investments, home(s), and vehicles. Liabilities include both secured debt (such as a home mortgage) and unsecured debt (such as consumer debt or personal loans). Typically, intangible assets such as educational degrees are not factored into net worth, even though such assets positively contribute to one's overall financial position. For a deceased individual, net worth can be used for the value of their estate when in probate. Individuals with considerable net worth are described in the financial services industry as high-net-worth individuals and ultra-high-net-worth individuals. In personal finance, knowing an individual's net worth can be important for understanding their current financial standing and giving a reference point for measuring future financial progress.

Balance sheets that include all assets and liabilities can also be constructed for governments. Compared with government debt, a government's net worth is an alternative measure of the government's financial strength. Most governments utilize an accrual-based accounting system in order to provide a transparent picture of government operational costs. Other governments may utilize cash accounting in order to better foresee future fiscal events. The accrual-based system is more effective, however, when dealing with the overall transparency of a government's spending. Large governmental organizations rely on consistent and effective accounting in order to identify total net worth. A country's net worth is calculated as the sum of the net worth of all companies and individuals resident in that country, plus the government's net worth.
For the United States, this measure is referred to as the financial position, and totalled $123.8 trillion as of 2014. Net worth is a representation of where one stands financially. It can be used to help create budgets, encourage wiser spending, and motivate one to pay off debt, save, and invest. Net worth is also important to consider when planning for retirement.
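As a purely illustrative sketch of the definition above, the short Python snippet below computes net worth as total assets minus total liabilities; all of the figures are hypothetical.

# Net worth = total assets - total liabilities (hypothetical figures).
assets = {
    "home": 350_000,
    "retirement_accounts": 90_000,
    "vehicles": 18_000,
    "bank_accounts": 12_000,
}
liabilities = {
    "mortgage": 240_000,
    "car_loan": 9_000,
    "consumer_debt": 4_000,
}
net_worth = sum(assets.values()) - sum(liabilities.values())
print(f"Net worth: ${net_worth:,}")  # Net worth: $217,000

Tracked over time, the same calculation provides the reference point for measuring financial progress mentioned above.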
========================================
[SOURCE: https://en.wikipedia.org/wiki/Midway_Atoll] | [TOKENS: 5726]
Contents Midway Atoll
Midway Atoll (colloquial: Midway Islands; Hawaiian: Kuaihelani, lit. 'the backbone of heaven'; Pihemanu, 'the loud din of birds') is a 2.4 sq mi (6.2 km²) atoll in the North Pacific Ocean. Midway Atoll is an insular area of the United States and is an unorganized and unincorporated territory. The largest island is Sand Island, which has housing and an airstrip. Immediately east of Sand Island, across the narrow Brooks Channel, is Eastern Island, which is uninhabited and no longer has any facilities. Forming a rough, incomplete circle around the two main islands and creating Midway Lagoon is Spit Island, a narrow reef. Roughly equidistant between North America and Asia, Midway is the only island in the Hawaiian Archipelago that is not part of the state of Hawaii. Unlike the other Hawaiian islands, Midway observes Samoa Time (UTC−11:00, i.e., eleven hours behind Coordinated Universal Time), which is one hour behind the time in the Hawaii–Aleutian Time Zone used in Hawaii. For statistical purposes, Midway is grouped as one of the United States Minor Outlying Islands. The Midway Atoll National Wildlife Refuge, encompassing 590,991.50 acres (239,165.77 ha) of land and water in the surrounding area, is administered by the United States Fish and Wildlife Service (FWS). The refuge and surrounding area are part of the larger Papahānaumokuākea Marine National Monument.

From 1941 until 1993, the atoll was the home of Naval Air Facility Midway Island, which played a crucial role in the Battle of Midway, June 4–6, 1942. Aircraft based at the then-named Henderson Field on Eastern Island joined with United States Navy ships and planes in an attack on a Japanese battle group that sank four carriers and one heavy cruiser and defended the atoll from invasion. The battle was a critical Allied victory and a significant turning point of the Pacific campaign of World War II.

About 50 people live on Sand Island: U.S. Fish and Wildlife Service staff and contract workers. Visits to the atoll are possible only for business reasons (including permanent and temporary staff, contractors, and volunteers), as the tourism program has been suspended due to budget cutbacks. In 2012, the last year that the visitor program was in operation, 332 people made the trip to Midway. Tours focused on the unique ecology of Midway and its military history. The economy is derived solely from governmental sources. Nearly all supplies must be brought to the island by ship or plane, although a hydroponic greenhouse and garden supply some fresh fruits and vegetables.

Location
As its name suggests, Midway is roughly equidistant between North America and Asia and lies almost halfway around the world longitudinally from Greenwich, England. It is near the northwestern end of the Hawaiian archipelago, 1,310 miles (2,110 km) northwest of Honolulu, Hawaii, and about one-third of the way from Honolulu to Tokyo, Japan. Unlike the rest of the Northwestern Hawaiian Islands, Midway is not part of the State of Hawaii due to the Hawaiian Organic Act of 1900 that formally annexed Hawaii to the United States as a territory, which defined Hawaii as "the islands acquired by the United States of America under an Act of Congress entitled 'Joint resolution to provide for annexing the Hawaiian Islands to the United States,'" referring to the Newlands Resolution of 1898. While it could be argued that Midway became part of Hawaii when Captain N.C.
Brooks of the sealing ship Gambia sighted it in 1859, it was assumed at the time that Midway was independently acquired by the United States when Captain William Reynolds of USS Lackawanna visited in 1867, and thus not part of the Hawaii Territory. In defining which islands the state of Hawaii would inherit from the Territory, the Hawaii Admission Act of 1959 clarified the question, specifically excluding Midway (along with Palmyra Island, Johnston Island, and Kingman Reef) from the jurisdiction of the state. Midway Atoll is approximately 140 nmi (260 km; 160 mi) east of the International Date Line, about 2,800 nmi (5,200 km; 3,200 mi) west of San Francisco, and 2,200 nmi (4,100 km; 2,500 mi) east of Tokyo.

Geography and geology
Midway Atoll is part of a chain of volcanic islands, atolls, and seamounts extending from near the Island of Hawaii up to the area of the Aleutian Islands and known as the Hawaiian–Emperor seamount chain, between Pearl and Hermes Atoll and Kure Atoll in the Northwestern Hawaiian Islands. It consists of a ring-shaped barrier reef nearly 5 mi (8.0 km) in diameter and several sand islets. The two significant pieces of land, Sand Island and Eastern Island, provide a habitat for millions of seabirds.

Midway was formed roughly 28 million years ago when the seabed underneath it was over the same hotspot from which the Island of Hawaii is now being formed. Midway was once a shield volcano, perhaps as large as the island of Lanai. As the volcano grew, lava flows built up the island; their weight depressed the crust, and the island slowly subsided over millions of years, a process known as isostatic adjustment. As the island subsided, a coral reef around the former volcanic island maintained itself near sea level by growing upwards. That reef is now over 516 ft (157 m) thick; in the lagoon it reaches 1,261 ft (384 m), composed mostly of post-Miocene limestones with a layer of upper Miocene (Tertiary g) sediments and lower Miocene (Tertiary e) limestones at the bottom, overlying the basalts. What remains today is a shallow-water atoll about 6 mi (9.7 km) across. After Kure Atoll, Midway is the second-most-northerly atoll in the world.

The atoll has some 20 mi (32 km) of roads, 4.8 mi (7.7 km) of pipelines, one port on Sand Island (World Port Index Nr. 56328, MIDWAY ISLAND), and an airfield. As of 2004, Henderson Field airfield at Midway Atoll, with its one active runway (runway 06/24, around 8,000 ft (2,400 m) long), has been designated as an emergency diversion airport for aircraft flying under ETOPS rules. Although the FWS closed all airport operations on November 22, 2004, public access to the island was restored in March 2008. Eastern Island Airstrip is a disused airfield used by U.S. forces during the Battle of Midway. It is mostly constructed of Marston Mat and was built by the United States Navy Seabees.

Climate
Despite being located at 28°12′ N, which is north of the Tropic of Cancer, Midway Atoll has a tropical savanna climate (Köppen As) bordering a subtropical climate (Cfa) and a tropical rainforest climate (Af), with very pleasant year-round temperatures. Rainfall is fairly evenly distributed throughout the year, with only one month (June) having average precipitation of less than 60 mm (2.4 in).

History
Midway has no indigenous inhabitants and was uninhabited until the 19th century. The atoll was sighted on July 5, 1859, by Captain N.C. Brooks, of the sealing ship Gambia.
The islands were named the "Middlebrook Islands". Brooks claimed Midway for the United States under the Guano Islands Act of 1856, which authorized Americans to occupy uninhabited islands temporarily to obtain guano. There is no record of any attempt to mine guano on the island. On August 28, 1867, Captain William Reynolds of USS Lackawanna formally took possession of the atoll for the United States; the name changed to "Midway" some time after this. The atoll was the first Pacific island annexed by the United States, as the Unincorporated Territory of Midway Island, and was administered by the United States Navy.

The first attempt at settlement was in 1870, when the Pacific Mail Steamship Company started a project of blasting and dredging a ship channel through the reef to the lagoon using money put up by the United States Congress. The purpose was to establish a mid-ocean coaling station to avoid the high taxes imposed at ports controlled by the Kingdom of Hawaiʻi. The project was a failure, and the USS Saginaw evacuated the channel project's workforce in October 1870. The ship ran aground on October 21 at Kure Atoll, stranding 93 men. On November 18, five men set out in a small boat to seek help. On December 19, four of the men perished when the boat was upset in the breakers off Kauai. The survivor reached the U.S. Consulate in Honolulu on Christmas Eve. Relief ships were dispatched and reached Kure Atoll on January 4, 1871. The survivors of the Saginaw wreck reached Honolulu on January 14, 1871.

In 1903, workers for the Commercial Pacific Cable Company took up residence on the island as part of the effort to lay a trans-Pacific telegraph cable. To make the island more verdant, these workers introduced many non-native species, including the canary, cycad, Norfolk Island pine, she-oak (ironwood), coconut, and various deciduous trees, along with some 9,000 short tons (8,200 t) of soil from Oahu and Guam. Ants, cockroaches, termites, centipedes, and countless other organisms were unintentionally introduced to Midway with the soil. On January 20, 1903, the United States Navy opened a radio station in response to complaints from cable company workers about Japanese squatters and poachers. Between 1904 and 1908, President Theodore Roosevelt stationed 21 Marines on the island to end the wanton destruction of bird life, keep Midway safe as a U.S. possession, and protect the cable station.

In 1935, operations began for the Martin M-130 flying boats operated by Pan American Airways. The M-130s island-hopped from San Francisco to the Republic of China, providing the fastest and most luxurious route to the Far East and bringing tourists to Midway until 1941. Only the wealthy could afford the trip, which in the 1930s cost more than three times the annual salary of an average American. With Midway on the route between Honolulu and Wake Island, the flying boats landed in the atoll and pulled up to a float offshore in the lagoon. Tourists transferred to the Pan Am Hotel, or the "Gooneyville Lodge", named after the ubiquitous "Gooney birds" (albatrosses), in this case Laysan and black-footed albatrosses.

The military importance of Midway's location in the Pacific included its use as a convenient refueling stop on transpacific flights and for Navy ships. Beginning in 1940, as tensions with the Japanese rose, Midway was deemed second only to Pearl Harbor in importance to the protection of the U.S. West Coast.
Airstrips, gun emplacements, and a seaplane base quickly materialized on the tiny atoll. The channel was widened, and Naval Air Station Midway was completed. Midway was also an important submarine base. On February 14, 1941, President Franklin D. Roosevelt issued Executive Order 8682 to create naval defense areas in the central Pacific territories. The proclamation established the "Midway Island Naval Defensive Sea Area", which encompassed the territorial waters between the extreme high-water marks and the three-mile (4.8 km) marine boundaries surrounding Midway. The "Midway Island Naval Airspace Reservation" was also established to restrict access to the airspace over the naval defense sea area. Only U.S. government ships and aircraft were permitted to enter the naval defense areas at Midway Atoll unless authorized by the Secretary of the Navy.

Midway's importance to the U.S. was brought into focus on December 7, 1941, when the Japanese attacked Pearl Harbor. Two destroyers bombarded Midway on the same day; this was the first Bombardment of Midway. A Pan Am flying clipper stopped at Midway and evacuated passengers and Pan American employees from Wake Island, which had also been attacked earlier that day. The clipper had been on its usual passenger route to Guam when the attack on Pearl Harbor happened; it then made a return journey from Wake to Midway, Honolulu, and back to the mainland United States. A Japanese submarine bombarded Midway on February 10, 1942. In total, Midway was attacked four times between December 7, 1941, and the Japanese submarine attack of February 10, 1942.

Four months later, on June 4, 1942, a major naval battle near Midway resulted in the U.S. Navy inflicting a devastating defeat on the Imperial Japanese Navy. Four Japanese fleet aircraft carriers, Akagi, Kaga, Hiryū and Sōryū, were sunk, along with hundreds of Japanese aircraft, losses that the Empire of Japan would never be able to replace. The U.S. lost the aircraft carrier Yorktown, along with a number of its carrier- and land-based aircraft that were either shot down by Japanese forces or bombed on the ground at the airfields. The Battle of Midway was, by most accounts, the beginning of the end of the Imperial Japanese Navy's control of the Pacific Ocean. Starting in July 1942, a submarine tender was always stationed at the atoll to support submarines patrolling Japanese waters. In 1944, a floating dry dock joined the tender. After the Battle of Midway, a second airfield was developed on Sand Island. This work necessitated enlarging the island through landfill techniques that, when completed, more than doubled its size. From August 1, 1941, to 1945, U.S. military forces occupied Midway.

In 1950, the Navy decommissioned Naval Air Station Midway, only to recommission it to support the Korean War. Thousands of troops on ships and aircraft stopped at Midway for refueling and emergency repairs. Midway Island was a Naval Air Facility from 1968 to September 10, 1993. With about 3,500 people living on Sand Island, Midway supported U.S. troops during the Vietnam War. In June 1969, President Richard Nixon met South Vietnamese President Nguyen Van Thieu at the Officer-in-Charge house, also known as "Midway House". Because of its particularly remote location and political status as a U.S. Navy base not part of the State of Hawaii, Midway was treated as a separate country for amateur radio purposes. During this era, there were two main amateur radio stations: KM6BI on Sand Island and KM6CE on Eastern Island.
Many other amateurs operated under callsigns from their quarters. They all provided a vital link to home via messages and phone patches. In 2009, the U.S. Fish and Wildlife Service (USFWS) permitted amateur radio operations on Midway Atoll for the first time since 2002. This initiative aimed to encourage visitors to experience Midway's wildlife, history, and culture, with amateur radio being a significant aspect of this experience. The operation, designated as K4M, involved a team of 19 operators who activated the atoll for a 10-day period, operating on multiple frequencies and bands to connect with amateur radio enthusiasts worldwide.

From 1958 through 1960, the United States installed the Missile Impact Location System (MILS) in the Navy-managed Pacific Missile Range, later the Air Force-managed Western Range, to localize the splashdowns of test missile nose cones. MILS was developed and installed by the same entities that had completed the first phase of the Atlantic and U.S. West Coast SOSUS systems. A MILS installation, consisting of both a target array for precision location and a broad ocean area system for good positions outside the target area, was installed at Midway as part of the system supporting intercontinental ballistic missile (ICBM) tests. Other Pacific MILS shore terminals were at Marine Corps Air Station Kaneohe Bay, supporting intermediate-range ballistic missile tests with impact areas northeast of Hawaii, and at Wake Island and Eniwetok, supporting other ICBM test ranges.

Eastern Island, part of Midway Atoll, played a significant role during the Cold War as a site for U.S. naval intelligence operations. From July 1, 1954, to February 1971, it hosted the Naval Security Group Activity (NSGA), Midway Island, which was responsible for operating the AN/GRD-6 High-Frequency Direction Finding (HFDF) system. This system was integral to both the Eastern and Western Pacific HFDF networks, providing critical capabilities in tracking and monitoring high-frequency radio communications. The AN/GRD-6 HFDF system was designed to automatically provide azimuth indications within the frequency range of 2 to 32 MHz. It featured two antenna arrays: a low-frequency array covering 2 to 8 MHz and a high-frequency array covering 8 to 32 MHz. Each array consisted of multiple monopole antennas arranged in a circular pattern, with a sense antenna positioned at the center. Beneath each array, a circular copper-wire mesh ground mat was buried to ensure consistent and reliable direction-finding performance, independent of local ground conductivity. The system included superheterodyne receivers and cathode-ray-tube indicators to display the direction of incoming signals. The strategic location of Eastern Island allowed the NSGA to monitor vast expanses of the Pacific Ocean, contributing to the U.S. Navy's efforts in signals intelligence and maritime surveillance during a period marked by heightened geopolitical tensions. The data collected through the AN/GRD-6 system supported various military operations and enhanced the United States' situational awareness in the region.

During the Cold War, the U.S. also established a Sound Surveillance System (SOSUS) shore terminal, Naval Facility (NAVFAC) Midway Island, where the output of an undersea array was processed and displayed using the Low-Frequency Analyzer and Recorder (LOFAR) to track Soviet submarines. The facility became operational in 1968 and was commissioned on January 13, 1969.
It remained secret until its decommissioning on September 30, 1983, after data from its arrays had been rerouted first to Naval Facility Barbers Point, Hawaii, in 1981 and then directly to the Naval Ocean Processing Facility (NOPF) Ford Island, Hawaii.

In 1978, the Navy downgraded Midway from a Naval Air Station to a Naval Air Facility, and many personnel and dependents began leaving the island. With the war in Vietnam over and with the introduction of reconnaissance satellites and nuclear submarines, Midway's significance to U.S. national security was diminished. The World War II facilities at Sand and Eastern Islands were listed on the National Register of Historic Places on May 28, 1987, and were simultaneously added as a National Historic Landmark. As part of the Base Realignment and Closure process, the Navy facility on Midway has been operationally closed since September 10, 1993. However, the Navy assumed responsibility for cleaning up environmental contamination.

The 2011 Tōhoku earthquake and tsunami on March 11 killed many birds on Midway. It was reported that a 1.5 m (4.9 ft) tall wave completely submerged the atoll's reef inlets and Spit Island, killing more than 110,000 nesting seabirds at the National Wildlife Refuge. Scientists on the island, however, do not think it will have long-term negative impacts on the bird populations. A U.S. Geological Survey study found that Midway Atoll, Laysan, and Pacific islands like them could become inundated and unfit to live on during the 21st century, due to increased storm waves and rising sea levels.

National Wildlife Refuge and National Monument
Midway was designated an overlay National Wildlife Refuge on April 22, 1988, while still under the primary jurisdiction of the Navy. From August 1996, the general public could visit the atoll through study ecotours. This program ended in 2002, but another visitor program was approved and began operating in March 2008. This program operated through 2012, but was suspended in 2013 due to budget cuts. On October 31, 1996, President Bill Clinton signed Executive Order 13022, which transferred the jurisdiction and control of the atoll to the United States Department of the Interior. The FWS assumed management of the Midway Atoll National Wildlife Refuge. The last contingent of Navy personnel left Midway on June 30, 1997, after an ambitious environmental cleanup program was completed. On September 13, 2000, Secretary of the Interior Bruce Babbitt designated the Wildlife Refuge as the Battle of Midway National Memorial. The refuge is now called the "Midway Atoll National Wildlife Refuge and Battle of Midway National Memorial".

On June 15, 2006, President George W. Bush designated the Northwestern Hawaiian Islands as a national monument. The Northwestern Hawaiian Islands Marine National Monument encompasses 105,564 sq nmi (139,798 sq mi; 362,074 km²) and includes 3,910 sq nmi (5,178 sq mi; 13,411 km²) of coral reef habitat. The Monument also includes the Hawaiian Islands National Wildlife Refuge and the Midway Atoll National Wildlife Refuge. In 2007, the Monument's name was changed to Papahānaumokuākea (Hawaiian pronunciation: [ˈpɐpəˈhaːnɔuˈmokuˈaːkeə]) Marine National Monument. The National Monument is managed by the U.S. Fish and Wildlife Service, the National Oceanic and Atmospheric Administration (NOAA), and the State of Hawaii.
In 2016, President Barack Obama expanded the Papahānaumokuākea Marine National Monument and added the Office of Hawaiian Affairs as a fourth co-trustee of the monument.

The so-called Gooney monument was carved from a 30-foot (9.1 m) mahogany log as a personal project by a U.S. Navy dental officer stationed on the island. The project began in 1949. The statue was 11 feet (3.4 m) tall and stood for 40 years before succumbing to termite damage. It was replaced with a mock egg after its removal.

Environment
Midway Atoll forms part of the Northwest Hawaiian Islands Important Bird Area (IBA), designated as such by BirdLife International because of its seabirds and endemic landbirds. The atoll is a critical habitat in the central Pacific Ocean and includes breeding habitat for 17 seabird species. Many native species rely on the island, which is now home to 67–70 percent of the world's Laysan albatross population and 34–39 percent of the global population of black-footed albatross. A minimal number of the very rare short-tailed albatross have also been observed. Fewer than 2,200 individuals of this species are believed to exist, due to excessive feather hunting in the late nineteenth century. In 2007–08, the U.S. Fish and Wildlife Service translocated 42 endangered Laysan ducks to the atoll as part of its efforts to conserve the species. Over 250 different species of marine life are found in the 300,000 acres (120,000 ha) of the lagoon and surrounding waters. Critically endangered Hawaiian monk seals raise their pups on the beaches, relying on the atoll's reef fish, squid, octopus, and crustaceans. Green sea turtles, another threatened species, occasionally nest on the island. The first was found in 2006 on Spit Island and another in 2007 on Sand Island. A resident pod of 300 spinner dolphins lives in the lagoons and nearshore waters.

Human habitation has extensively altered the islands of Midway Atoll. Starting in 1869 with the project to blast the reefs and create a port on Sand Island, the environment of Midway Atoll has experienced profound changes. Several invasive exotic species have been introduced; for example, ironwood trees from Australia were planted to act as windbreaks. Of the 200 species of plants on Midway, 75 percent are non-native. Recent efforts have focused on removing non-native plant species and re-planting native species. Lead paint on the buildings posed an environmental hazard (avian lead poisoning) to the albatross population of the island. In 2018, a project to strip the paint was completed.

Midway Atoll, in common with all the Hawaiian Islands, receives substantial amounts of marine debris from the Great Pacific Garbage Patch. Consisting of 90 percent plastic, approximately 20 tons of this debris accumulates on the beaches of Midway every year. The garbage is hazardous to the island's bird population: approximately 5 tons of that debris is fed to albatross chicks by their parents, who often collect the debris while they are out at sea. The U.S. Fish and Wildlife Service estimates at least 100 lb (45 kg) of plastic washes up every week. Of the 1.5 million Laysan albatrosses that inhabit Midway during the winter breeding season, nearly all are found to have plastic in their digestive system. Approximately one-third of the chicks die. These deaths are attributed to the albatrosses mistaking brightly colored plastic for marine animals (such as squid and fish) when foraging.
Recent results suggest that oceanic plastic acquires a chemical odor signature of the kind seabirds normally use to locate food items. Because albatross chicks do not develop the reflex to regurgitate until they are four months old, they cannot expel the plastic pieces. Albatrosses are not the only species to suffer from the plastic pollution; sea turtles and monk seals also consume the debris. Various plastic items wash up on the shores, from cigarette lighters to toothbrushes and toys. An albatross living on Midway can have up to 50 percent of its intestinal tract filled with plastic.

Transportation
The usual method of reaching Sand Island, Midway Atoll's only populated island, is on chartered aircraft landing at Sand Island's Henderson Field, which also functions as an emergency diversion runway for transpacific flights. An example occurred in 2011, when Delta Air Lines Flight 277, a Boeing 747-400 traveling from Honolulu to Osaka, made an emergency landing at Henderson Field due to a cracked windshield. National Wildlife Refuge employees working on the atoll assisted with the landing and cared for the nearly 380 passengers and crew for eight hours until a backup plane arrived. No injuries were reported.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Chinese_Space_Program] | [TOKENS: 16917]
Contents Chinese space program
The space program of the People's Republic of China comprises the activities in outer space conducted and directed by the government of China. The roots of the Chinese space program trace back to the 1950s, when, with the help of the newly allied Soviet Union, China began development of its first ballistic missile and rocket programs in response to perceived American (and, later, Soviet) threats. Driven by the successes of the Soviet Sputnik 1 and American Explorer 1 satellite launches in 1957 and 1958 respectively, China launched its first satellite, Dong Fang Hong 1, in April 1970 aboard a Long March 1 rocket, making it the fifth nation to place a satellite in orbit.

China has one of the most active space programs in the world. With space launch capability provided by the Long March rocket family and four spaceports (Jiuquan, Taiyuan, Xichang, Wenchang) within its borders, China conducts either the highest or the second-highest number of orbital launches each year. It operates a satellite fleet consisting of a large number of communications, navigation, remote sensing and scientific research satellites. The scope of its activities has expanded from low Earth orbit to the Moon and Mars. China is one of the three countries, alongside the United States and Russia, with independent human spaceflight capability. Currently, most of the space activities carried out by China are managed by the China National Space Administration (CNSA) and the People's Liberation Army Strategic Support Force, which directs the astronaut corps and the Chinese Deep Space Network. Major programs include the China Manned Space Program, the BeiDou Navigation Satellite System, the Chinese Lunar Exploration Program, Gaofen Observation and Planetary Exploration of China. In recent years, China has conducted several missions, including Chang'e-4, Chang'e-5, Chang'e-6, Tianwen-1, Tianwen-2, and the Tiangong space station.

History
The Chinese space program began in the form of missile research in the 1950s. After its founding in 1949, the People's Republic of China pursued missile technology to build up the nation's defenses during the Cold War. In 1955, the renowned rocketry scientist Qian Xuesen (钱学森) returned to China from the United States. In 1956, Qian submitted a proposal for the development of China's missile program, which was approved in just a few months. On October 8, China's first missile research institute, the Fifth Research Academy under the Ministry of National Defense, was established with fewer than 200 staff, most of whom were recruited by Qian. The event was later recognized as the birth of China's space program. To fully utilize all available resources, China kick-started its missile development by manufacturing a licensed copy of the Soviet R-2 missile; two R-2s were secretly shipped to China in December 1957 as part of the cooperative technology-transfer program between the Soviet Union and China. The Chinese version of the missile was given the code name "1059" with the expectation of being launched in 1959. But the target date was soon postponed due to various difficulties arising from the sudden withdrawal of Soviet technical assistance during the Sino-Soviet split.
Meanwhile, China started constructing its first missile test site in the Gobi desert of Inner Mongolia, which later became the famous Jiuquan Satellite Launch Center (酒泉卫星发射中心), China's first spaceport. After the launch of mankind's first artificial satellite, Sputnik 1, by the Soviet Union on October 4, 1957, Mao Zedong decided during the 8th National Congress of the Chinese Communist Party (CCP) on May 17, 1958, to make China an equal of the superpowers (Chinese: "我们也要搞人造卫星"; lit. 'We too need satellites'), by adopting Project 581 with the objective of placing a satellite in orbit by 1959 to celebrate the 10th anniversary of the PRC's founding. This goal soon proved unrealistic, and it was decided to focus on the development of sounding rockets first.

The first achievement of the program was the launch of the T-7M, a sounding rocket that reached a height of 8 km on February 19, 1960. It was the first rocket developed by Chinese engineers, and its success was praised by Mao Zedong as a good beginning for indigenous Chinese rocket development. However, all Soviet technological assistance was abruptly withdrawn after the 1960 Sino-Soviet split, and Chinese scientists continued the program with extremely limited resources and knowledge. It was under these harsh conditions that China successfully launched the first "missile 1059", fueled by alcohol and liquid oxygen, on December 5, 1960, marking a successful reproduction of the Soviet missile. Missile 1059 was later renamed Dongfeng-1 (DF-1, 东风一号).

While work on the Soviet-derived missile was still in progress, the Fifth Academy, led by Qian, had begun the development of Dongfeng-2 (DF-2), the first missile to be designed and built completely by the Chinese. After a failed attempt in March 1962, multiple improvements, and hundreds of engine firing tests, the DF-2 achieved its first successful launch on its second attempt on June 29, 1964, at Jiuquan. It was considered a major milestone in China's indigenous missile development. In the next few years, Dongfeng-2 conducted seven more launches, all of which succeeded. On October 27, 1966, as part of the "Two Bombs, One Satellite" project, Dongfeng-2A, an improved version of the DF-2, successfully launched and detonated a nuclear warhead at its target.

As China's missile industry matured, a new plan for developing carrier rockets and launching satellites was proposed and approved in 1965, with Project 581 renamed Project 651. On January 30, 1970, China successfully tested the newly developed two-stage Dongfeng-4 (DF-4) missile, which demonstrated critical technologies such as rocket staging, in-flight engine ignition, and attitude control.
The DF-4 was used to develop the Long March 1 (LM-1 or CZ-1, 长征一号), with a newly designed spin-up orbital-insertion solid-propellant third stage added to the two existing nitric acid/UDMH liquid-propellant stages. China's space program benefited from the Third Front campaign to develop basic industry and national defense industry in China's rugged interior in preparation for a potential invasion by the Soviet Union or the United States. Almost all of China's new aerospace work units in the late 1960s and early 1970s were established as part of the Third Front, whose projects included the expansion of the Jiuquan Satellite Launch Center and the building of the Xichang and Taiyuan Satellite Launch Centers.

On April 24, 1970, China successfully launched the 173 kg Dong Fang Hong I (东方红一号, meaning The East Is Red I) atop a Long March 1 (CZ-1, 长征一号) rocket from Jiuquan Satellite Launch Center. It was the heaviest first satellite placed into orbit by any nation. The third stage of the Long March 1 was specially equipped with a 40 m² solar reflector (观察球) deployed by the centrifugal force developed by the spin-up orbital-insertion solid-propellant stage. China's second satellite was launched with the last Long March 1 on March 3, 1971. The 221 kg ShiJian-1 (SJ-1, 实践一号) was equipped with a magnetometer and cosmic-ray/X-ray detectors.

In addition to satellite launches, China also made modest progress in human spaceflight. The first successful launch and recovery of a T-7A(S1) sounding rocket carrying a biological experiment (eight white mice) took place on July 19, 1964, from Base 603 (六〇三基地). As the space race between the two superpowers reached its climax with the conquest of the Moon, Mao Zedong and Zhou Enlai decided on July 14, 1967, that China should not be left behind, and started China's own crewed space program. China's first spacecraft designed for human occupancy was named Shuguang-1 (曙光一号) in January 1968. China's Space Medical Institute (航天医学工程研究所) was founded on April 1, 1968, and the Central Military Commission issued the order to start the selection of astronauts. The first crewed space program, known as Project 714, was officially adopted in April 1971 with the goal of sending two astronauts into space by 1973 aboard the Shuguang spacecraft. The first screening process for astronauts ended on March 15, 1971, with 19 astronauts chosen, but the program was canceled later that year due to political turmoil, ending China's first human spaceflight attempt.

While the CZ-1 was being developed, development of China's first long-range intercontinental ballistic missile, the Dongfeng-5 (DF-5), had been under way since 1965. The first test flight of the DF-5 was conducted in 1971. After that, its technology was adopted by two different models of Chinese medium-lift launch vehicles then in development. One of the two was the Feng Bao 1 (FB-1, 风暴一号), developed by Shanghai's 2nd Bureau of Mechanic-Electrical Industry, the predecessor of the Shanghai Academy of Spaceflight Technology (SAST). The other parallel medium-lift launch vehicle program, also based on the same DF-5 ICBM and known as Long March 2 (CZ-2, 长征二号), was started in Beijing by the First Research Academy of the Seventh Ministry of Machine Building, which later became the China Academy of Launch Vehicle Technology (CALT). Both the FB-1 and CZ-2 were fueled by N2O4 and UDMH, the same propellants used by the DF-5.
On July 26, 1975, the FB-1 made its first successful flight, placing the 1,107-kilogram Changkong-1 (长空一号) satellite into orbit. It was the first time that China launched a payload heavier than one metric ton. Four months later, on November 26, the CZ-2 successfully launched the FSW-0 No. 1 (返回式卫星零号) recoverable satellite into orbit. The satellite returned to Earth and was successfully recovered three days later, making China the third country capable of recovering a satellite, after the Soviet Union and the United States. The FB-1 and CZ-2, developed by two different institutes, later evolved into two different branches of the classic Long March rocket family: Long March 4 and Long March 2.

As part of the Third Front effort to relocate critical defense infrastructure to the relatively remote interior (away from the Soviet border), it was decided to construct a new space center in the mountainous region of Xichang in Sichuan province, code-named Base 27. After expansion, the Northern Missile Test Site was upgraded in January 1976 to become the Northern Missile Test Base (华北导弹试验基地), known as Base 25.

After Mao died on September 9, 1976, his rival Deng Xiaoping, who had been denounced during the Cultural Revolution as a reactionary and forced to retire from all his offices, slowly re-emerged as China's new leader in 1978. At first, development slowed. Then several key projects deemed unnecessary were simply cancelled: the Fanji ABM system, the Xianfeng Anti-Missile Super Gun, the ICBM Early Warning Network 7010 Tracking Radar and the land-based high-power anti-missile laser program. Nevertheless, some development did proceed. The first Yuanwang-class space tracking ship was commissioned in 1979. The first full-range test of the DF-5 ICBM was conducted on May 18, 1980. The payload reached its target located 9,300 km away in the South Pacific (7°0′S 117°33′E) and was retrieved five minutes later by helicopter. In 1982, the Long March 2C (CZ-2C, 长征二号丙), an upgraded version of Long March 2 based on the DF-5 with a 2,500 kg low Earth orbit (LEO) payload capacity, completed its maiden flight. The Long March 2C, along with many of its derived models, eventually became the backbone of the Chinese space program in the following decades.

As China shifted its direction from political campaigns to economic development in the late 1970s, demand for communications satellites surged. The Chinese communications satellite program, code-named Project 331, had been started on March 31, 1975. The first generation of China's own communications satellites was named Dong Fang Hong 2 (DFH-2, 东方红二号), and its development was led by the famous satellite expert Sun Jiadong. Since communications satellites operate in geostationary orbit, much higher than what the existing carrier rockets could reach, launching them became the next big challenge for the Chinese space program. The task was assigned to the Long March 3 (CZ-3, 长征三号), the most advanced Chinese launch vehicle of the 1980s. Long March 3 was a derivative of the Long March 2C with an additional third stage, designed to send payloads to geosynchronous transfer orbit (GTO).
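To give a rough sense of why GTO missions reward a low-latitude site such as Xichang (discussed just below), here is a hedged back-of-the-envelope Python sketch, not a description of any actual Long March trajectory. A direct eastward launch cannot reach an orbital inclination below the launch site's latitude, and that inclination must later be removed at GTO apogee, so a higher-latitude site inflates the satellite's apogee burn. The site latitudes used are approximate.

import math

MU = 398_600.4418      # Earth's gravitational parameter, km^3/s^2
R_GEO = 42_164.0       # geostationary orbit radius, km

def apogee_burn_dv(inclination_deg: float, r_perigee: float = 6_578.0) -> float:
    """Delta-v (km/s) of the combined circularization-plus-plane-change burn
    at the apogee of a standard GTO inclined at the launch-site latitude."""
    a = (r_perigee + R_GEO) / 2                     # transfer-orbit semi-major axis
    v_apo = math.sqrt(MU * (2 / R_GEO - 1 / a))     # speed at GTO apogee
    v_geo = math.sqrt(MU / R_GEO)                   # circular GEO speed
    i = math.radians(inclination_deg)
    # Law of cosines on the velocity triangle: change speed and plane at once.
    return math.sqrt(v_apo**2 + v_geo**2 - 2 * v_apo * v_geo * math.cos(i))

print(f"from ~28.2 deg (Xichang): {apogee_burn_dv(28.2):.2f} km/s")  # ~1.83
print(f"from ~41.0 deg (Jiuquan): {apogee_burn_dv(41.0):.2f} km/s")  # ~2.14

The roughly 0.3 km/s difference comes out of the satellite's own propellant budget, which is why a lower-latitude launch site translates into heavier payloads or longer on-station lifetimes.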
When the development of Long March 3 began in the early 1970s, the engineers had to choose between two options for the third-stage engine: either a traditional engine fueled by the same hypergolic propellants used by the first two stages, or an advanced cryogenic engine fueled by liquid hydrogen and liquid oxygen. Although the cryogenic engine plan was much more challenging, it was eventually chosen by Chief Designer Ren Xinmin (任新民), who had foreseen its great potential for the future of the Chinese space program. The development of a cryogenic engine with in-flight re-ignition capability began in 1976 and was not completed until 1983. At the same time, Xichang Satellite Launch Center (西昌卫星发射中心) was chosen as the launch site of the Long March 3 due to its low latitude, which provides better GTO launch capability.

On January 29, 1984, Long March 3 performed its maiden flight from Xichang, carrying the first experimental DFH-2 satellite. Unfortunately, because the cryogenic third-stage engine failed to re-ignite during flight, the satellite was placed into a 400 km LEO instead of its intended GTO. Despite the rocket failure, the engineers managed to send the satellite into an elliptical orbit with an apogee of 6,480 km using the satellite's own propulsion system, and a series of tests was then conducted to verify the satellite's performance. The cause of the cryogenic engine failure was located quickly, and improvements were applied to the second rocket awaiting launch. On April 8, 1984, less than 70 days after the first failure, Long March 3 launched again from Xichang and successfully inserted the second experimental DFH-2 satellite into its target GTO. The satellite reached its final orbital position on April 16 and was handed over to the user on May 14, becoming China's first geostationary communications satellite. The success made China the fifth country in the world with independent geostationary satellite development and launch capability. Less than two years later, on February 1, 1986, the first operational DFH-2 communications satellite was launched into orbit atop a Long March 3 rocket, ending China's reliance on foreign communications satellites.

During the 1980s, human spaceflight worldwide became significantly more active as the American Space Shuttle and Soviet space stations entered service. It was in the same period that the previously canceled Chinese human spaceflight program was quietly revived. In March 1986, Project 863 (863计划) was proposed by four scientists: Wang Daheng, Wang Ganchang, Yang Jiachi, and Chen Fangyun. The goal of the project was to stimulate the development of advanced technologies, including human spaceflight. Following the approval of Project 863, early studies for a Chinese human spaceflight program in the new era began.

After the initial success of Long March 3, further development of the Long March rocket series allowed China to announce a commercial launch program for international customers in 1985, which opened up a decade of commercial launches by Chinese launch vehicles in the 1990s. The launch service was provided by the China Great Wall Industry Corporation (CGWIC) with support from CALT, SAST and the China Satellite Launch and Tracking Control General (CLTC). The first contract was signed with AsiaSat in January 1989 to launch AsiaSat 1, a communications satellite manufactured by Hughes.
AsiaSat 1 had previously been owned by Westar but was placed into a wrong orbit due to a kick-motor malfunction, before being recovered by the STS-51-A mission in 1984. On April 7, 1990, a Long March 3 rocket successfully launched AsiaSat 1 into its target geosynchronous transfer orbit with high precision, fulfilling the contract. With its very first commercial launch a full success, the Chinese commercial launch program was introduced to the world with a good opening.

Although Long March 3 completed its first commercial mission as expected, its 1,500 kg payload capability was not sufficient to place the new generation of communications satellites, which usually weighed over 2,500 kg, into geostationary transfer orbit. To deal with the problem, China introduced the Long March 2E (CZ-2E, 长征二号E), the first Chinese rocket with strap-on boosters, able to place up to 3,000 kg of payload into GTO. The development of Long March 2E began in November 1988, when CGWIC was awarded the contract to launch two Optus satellites built by Hughes, mostly due to its low price. At that time, neither the rocket nor the launch facility was anything more than concepts on paper, yet the engineers of CALT built all the hardware from scratch in a record-breaking period of 18 months, which impressed the American experts. On September 16, 1990, Long March 2E, carrying an Optus mass simulator, conducted its test flight and reached the intended orbit as designed. The success of the test flight was a huge inspiration for all parties involved and brought optimism about the coming launch of the actual Optus satellites.

However, an accident occurred during this highly anticipated launch on March 22, 1992, at Xichang Satellite Launch Center. After initial ignition, all engines shut down unexpectedly. The rocket was unable to lift off, resulting in a launch abort while being broadcast live to the world. The post-launch investigation revealed that small aluminum scraps had caused a short circuit in the control system, triggering an emergency shutdown of all engines. Although the huge vibration brought by the short-lived ignition had rotated the whole rocket 1.5 degrees clockwise and partially displaced the supporting blocks, the rocket, filled with propellant, was still standing on the launch pad when the dust settled. After a rescue operation that lasted 39 hours, the payload, rocket, and launch facilities were all preserved intact, avoiding huge losses. Less than five months later, on August 14, a new Long March 2E rocket successfully lifted off from Xichang, sending the Optus satellite into orbit.

In June 1993, the China Aerospace Corporation was founded in Beijing. It was also granted the title of China National Space Administration (CNSA). An improved version of Long March 3, the Long March 3A (CZ-3A, 长征三号甲) with a 2,600 kg payload capacity to GTO, was put into service in 1994. However, on February 15, 1996, during the first flight of the further improved Long March 3B (CZ-3B, 长征三号乙) rocket carrying Intelsat 708, the rocket veered off course immediately after clearing the launch platform, crashing 22 seconds later. The crash killed 6 people and injured 57, making it the most disastrous event in the history of the Chinese space program. Although a Long March 3 rocket successfully launched the APStar 1A communications satellite on July 3, a third-stage re-ignition malfunction occurred during the launch of ChinaSat 7 on August 18, resulting in another launch failure.
The two launch failures within a few months dealt a severe blow to the reputation of the Long March rockets. As a consequence, the Chinese commercial launch service faced canceled orders, refusals of insurance, and greatly increased insurance premiums. Under such harsh circumstances, the Chinese space industry initiated full-scale quality-improvement activities. A closed-loop quality management system was established to fix quality issues in both the technical and administrative aspects. The strict quality management system remarkably increased the success rate thereafter: within the next 15 years, from October 20, 1996, until August 16, 2011, China achieved 102 consecutive successful space launches. On August 20, 1997, Long March 3B accomplished its first successful flight on its second attempt, placing the 3,770 kg Agila-2 communications satellite into orbit. It offered a GTO payload capacity as high as 5,000 kg, capable of putting the various kinds of heavy satellites available on the international market into orbit. From then on, Long March 3B became the backbone of China's mid-to-high Earth orbit launches and held the title of China's most powerful rocket for nearly 20 years.

In 1998, the administrative branch of the China Aerospace Corporation was split off and merged into the newly founded Commission for Science, Technology and Industry for National Defense while retaining the title of CNSA. The remaining part was split again into the China Aerospace Science and Technology Corporation (CASC) and the China Aerospace Science and Industry Corporation (CASIC) in 1999.

While the Long March rockets were trying to win back the commercial launch market they had lost, political pressure from the United States arrived. In 1998, the United States accused Hughes and Loral of exporting technologies that inadvertently helped China's ballistic missile program while resolving the issues that had caused the Long March launch failures. The accusation ultimately led to the release of the Cox Report, which further accused China of stealing sensitive technologies. In the next year, the U.S. Congress passed an act that put commercial satellites on the list restricted by the International Traffic in Arms Regulations (ITAR) and prohibited launches of satellites containing U.S.-made components on Chinese rockets. The regulation abruptly ended commercial cooperation between China and the United States. The two Iridium satellites launched by a Long March 2C on June 12, 1999, became the last batch of American satellites launched by a Chinese rocket. Furthermore, due to the strict regulation and U.S. dominance in the space industry, the Long March rockets were de facto excluded from the international commercial launch market, causing a stagnation of the Chinese commercial launch program over the next few years.

Despite the turmoil in commercial launches, the Chinese space program still made a huge breakthrough near the end of the decade. At 6:30 (China Standard Time) on November 20, 1999, Shenzhou-1 (神舟一号), the first uncrewed Shenzhou spacecraft (神舟载人飞船) designed for human spaceflight, was successfully launched atop a Long March 2F (CZ-2F, 长征二号F) rocket from Jiuquan Satellite Launch Center. The spacecraft was inserted into low Earth orbit 10 minutes after liftoff. After orbiting the Earth 14 times, the spacecraft initiated the return procedure as planned and landed safely in Inner Mongolia at 03:41 on November 21, marking the full success of China's first Shenzhou test flight.
Following the announcement of the mission's success, the previously secretive Chinese human spaceflight program, the China Manned Space Program (CMS, 中国载人航天工程), was formally made public. CMS, formally approved on September 21, 1992, by the CCP Politburo Standing Committee as Project 921, has been China's most ambitious space program since its birth. Its goals are described as "Three Steps": crewed spacecraft launch and return; a space laboratory for short-term missions; and a long-term modular space station. Owing to its complexity, the program introduced a series of advanced projects, including the Shenzhou spacecraft, the Long March 2F rocket, the human spaceflight launch site in Jiuquan, the Beijing Aerospace Flight Control Center, and the Astronaut Center of China in Beijing. Fourteen candidates were selected to form the People's Liberation Army Astronaut Corps and began spaceflight training.[citation needed]

Since the beginning of the 21st century, China has experienced rapid economic growth, enabling greater investment in space programs and leading to multiple major achievements in the following decades. In November 2000, the Chinese government released its first white paper on space, entitled China's Space Activities, which laid out its goals for the coming decade. Among them was an independent satellite navigation and positioning system: Beidou (北斗卫星导航系统). The development of Beidou dates back to 1983, when Chen Fangyun, an academician of the Chinese Academy of Sciences, designed a primitive satellite navigation system consisting of two satellites in geostationary orbit. Sun Jiadong, a renowned Chinese satellite expert, later proposed a "three-step" strategy for developing China's own satellite navigation system, expanding its service coverage from China to Asia and then to the globe. The two satellites of the "first step", BeiDou-1, were launched in October and December 2000. As an experimental system, Beidou-1 offered basic positioning, navigation, and timing services to limited areas in and around China. After a few years of trials, China began constructing BeiDou-2, a more advanced system serving the Asia-Pacific region, launching its first two satellites in 2007 and 2009 respectively.

Another major goal specified by the white paper was to realize crewed spaceflight. The China Manned Space Program continued its steady progress in the 21st century after its initial success. From January 2001 to January 2003, China conducted three uncrewed Shenzhou test flights, validating all the systems required for human spaceflight. Among these missions, Shenzhou-4, launched on December 30, 2002, was the last uncrewed rehearsal; it flew for 6 days and 18 hours and completed 108 orbits of the Earth before returning on January 5, 2003. On October 15, 2003, the first Chinese astronaut, Yang Liwei (杨利伟), was launched aboard the Shenzhou-5 (神舟五号) spacecraft atop a Long March 2F rocket from Jiuquan Satellite Launch Center. The spacecraft was inserted into orbit ten minutes after launch, making Yang the first Chinese national in space. After a flight of more than 21 hours and 14 orbits of the Earth, the spacecraft returned and landed safely in Inner Mongolia the next morning, and Yang walked out of the return capsule unaided.
The complete success of the Shenzhou 5 mission was widely celebrated in China and drew congratulations from around the world, including from UN Secretary-General Kofi Annan. The mission, officially recognized by China as the second milestone of its space program after the launch of Dongfanghong-1, established China as the third country capable of independent human spaceflight, ending a duopoly of more than 40 years held by the Soviet Union/Russia and the United States. The China Manned Space Program did not rest after its historic first crewed spaceflight. In 2005, two Chinese astronauts, Fei Junlong (费俊龙) and Nie Haisheng (聂海胜), safely completed China's first "multi-person, multi-day" spaceflight mission aboard Shenzhou-6 (神舟六号) between October 12 and 17. On 25 September 2008, Shenzhou-7 (神舟七号) was launched into space with three astronauts, Zhai Zhigang (翟志刚), Liu Boming (刘伯明) and Jing Haipeng (景海鹏). During the flight, Zhai and Liu conducted China's first spacewalk in orbit.

Around the same time, China began preparing for extraterrestrial exploration, starting with the Moon. China's early research into lunar exploration dates back to 1994, when its necessity and feasibility were studied and debated among Chinese scientists. As a result, the 2000 white paper listed the Moon as the primary target of China's deep space exploration for the decade. In January 2004, the year after China's first human spaceflight mission, the Chinese Moon-orbiting program was formally approved; it later evolved into the Chinese Lunar Exploration Program (CLEP, 中国探月工程). Like several other Chinese space programs, CLEP was divided into three phases, summarized as "Orbiting, Landing, Returning" (“绕、落、回”), all to be executed by robotic probes at the time of planning. On October 24, 2007, the first lunar orbiter, Chang'e-1 (嫦娥一号), was successfully launched by a Long March 3A rocket and was inserted into lunar orbit on November 7, becoming China's first artificial satellite of the Moon. It then performed a series of surveys and produced China's first lunar map. On March 1, 2009, Chang'e-1, which had operated beyond its design life, performed a controlled hard landing on the lunar surface, concluding the mission. As China's first deep space exploration mission, Chang'e-1 was recognized by China as the third milestone of its space program and its admission ticket to the club of deep space exploration.

In other areas, despite the harsh sanctions imposed by the United States since 1999, China still made some progress in commercial launches during the first decade of the 21st century. In April 2005, China conducted its first commercial launch since 1999, orbiting the APStar 6 communications satellite, manufactured by the French company Alcatel, atop a Long March 3B rocket. In May 2007, China launched the NigComSat-1 satellite, developed by the China Academy of Space Technology; this was the first time China provided full service, from satellite manufacture to launch, for an international customer. From 2000 to 2010, China quadrupled its GDP and became the second largest economy in the world. With economic activity developing rapidly across the nation, demand for high-resolution Earth observation systems grew remarkably.
To end the reliance on foreign high-resolution remote sensing data, China initiated the China High-resolution Earth Observation System program (高分辨率对地观测系统), commonly known as Gaofen (高分), in May 2010. Its purpose is to establish an all-day, all-weather Earth observation system serving the requirements of social development as part of China's space infrastructure. The first Gaofen satellite, Gaofen 1, was launched into orbit on April 26, 2013, followed by further satellites placed into different orbits over the next few years to cover different spectra. More than 30 Gaofen satellites are now operated by China, and the completion of the space-based segment of Gaofen was announced in late 2022.

The Beidou Navigation Satellite System proceeded at extraordinary speed after the launch of the first Beidou-2 satellite in 2007; as many as five Beidou-2 navigation satellites were launched in 2010 alone. In late 2012, the Beidou-2 system, consisting of 14 satellites, was completed and began providing service to the Asia-Pacific region. Construction of the more advanced Beidou-3 began in November 2017, at an even more astonishing pace: within just three years, China launched 24 satellites into medium Earth orbit, 3 into inclined geosynchronous orbit, and 3 into geostationary orbit. The final Beidou-3 satellite was successfully launched by a Long March 3B rocket on June 23, 2020. On July 31, 2020, at the Beidou-3 completion ceremony, CCP general secretary Xi Jinping declared the Beidou-3 system commissioned for global service. The completed Beidou-3 system integrates navigation and communication functions and offers multiple services, including positioning, navigation and timing, short message communication, international search and rescue, satellite-based augmentation, ground augmentation, and precise point positioning. It is now one of the four core system providers designated by the International Committee on Global Navigation Satellite Systems of the United Nations.

The China Manned Space Program continued to make breakthroughs in human spaceflight technologies in the 2010s. In the early 2000s, the Chinese crewed space program had engaged with Russia in technological exchanges on the development of a docking mechanism for space stations. Deputy chief designer Huang Weifen stated that near the end of 2009, the China Manned Space Agency began training astronauts to dock spacecraft. To practice space rendezvous and docking, China launched an 8,000 kg (18,000 lb) target vehicle, Tiangong-1 (天宫一号), in 2011, followed by the uncrewed Shenzhou 8 (神舟八号). The two spacecraft performed China's first automatic rendezvous and docking on 3 November 2011, verifying the performance of the docking procedures and mechanisms. About nine months later, in June 2012, Tiangong 1 completed the first manual rendezvous and docking with Shenzhou 9 (神舟九号), a crewed spacecraft carrying Jing Haipeng, Liu Wang (刘旺) and China's first female astronaut, Liu Yang (刘洋). The successes of the Shenzhou 8 and 9 missions, especially the automatic and manual docking experiments, marked China's advancement in space rendezvous and docking.
Tiangong 1 was later docked with the crewed spacecraft Shenzhou 10 (神舟十号), carrying astronauts Nie Haisheng, Zhang Xiaoguang (张晓光) and Wang Yaping (王亚平), who conducted multiple scientific experiments, gave lectures to over 60 million students in China, and performed further docking tests before returning to Earth safely after 15 days in space. The completion of the Shenzhou 7 through 10 missions demonstrated China's mastery of all basic human spaceflight technologies, concluding phase 1 of the "Second Step". Although Tiangong 1 was considered a space station prototype, its capabilities still fell well short of a proper space laboratory. Tiangong-2 (天宫二号), China's first true space laboratory, was launched into orbit on September 15, 2016, and was visited by the Shenzhou 11 crew a month later. Two astronauts, Jing Haipeng and Chen Dong (陈冬), entered Tiangong 2 and were stationed there for about 30 days, breaking China's record for the longest human spaceflight mission while carrying out various human-tended experiments. In April 2017, China's first cargo spacecraft, Tianzhou-1 (天舟一号), docked with Tiangong 2 and completed multiple in-orbit propellant refueling tests.

In deep space exploration, after completing the objective of "Orbiting" in 2007, the Chinese Lunar Exploration Program began preparing for the "Landing" phase. China's second lunar probe, Chang'e-2 (嫦娥二号), was launched on October 1, 2010. It was the first Chinese probe to reach the Moon via a direct trans-lunar injection orbit, and it imaged the Sinus Iridum region where future landing missions were expected to occur. On December 2, 2013, a Long March 3B rocket launched Chang'e-3 (嫦娥三号), China's first lunar lander, toward the Moon. On December 14, Chang'e 3 successfully landed in the Sinus Iridum region, making China the third country to soft-land on an extraterrestrial body. A day later, the Yutu rover (玉兔号月球车) was deployed onto the lunar surface and began its survey, achieving the goal of "landing and roving" for the second phase of CLEP.

In addition to lunar exploration, China made its first attempt at interplanetary exploration during the same period. Yinghuo-1 (萤火一号), China's first Mars orbiter, was launched on board the Russian Fobos-Grunt spacecraft as an additional payload in November 2011. Yinghuo-1 was a cooperative mission with the Russian space agency, a relatively small project initiated by the National Space Science Center of the Chinese Academy of Sciences rather than a major space program managed by the state space agency. The roughly 100 kg orbiter, carried by the Fobos-Grunt probe, was expected to detach and be injected into Mars orbit upon arrival. However, due to an error in the onboard computer, Fobos-Grunt failed to start its main engine and was stranded in low Earth orbit after launch. Two months later, Fobos-Grunt, along with the Yinghuo-1 orbiter, re-entered and burned up in the Earth's atmosphere, ending the mission in failure. Although Yinghuo-1 did not achieve its original goal, for reasons outside China's control, it marked the dawn of Chinese interplanetary exploration by assembling, for the first time, a group of specialists dedicated to interplanetary research.
On December 13, 2012, the Chinese lunar probe Chang'e 2, then on an extended mission after concluding its primary tasks in lunar orbit, made a flyby of the asteroid Toutatis at a closest approach of 3.2 kilometers, becoming China's first interplanetary probe. In 2016, the first independent Chinese Mars mission was formally approved and listed as one of the major tasks in the "White Paper on China's Space Activities in 2016". The mission, unprecedented in its ambition, aimed to achieve Mars orbiting, landing, and roving in a single attempt in 2020.

While China was making remarkable progress in all the areas above, the Long March rockets, the foundation of the Chinese space program, were undergoing a crucial transformation of their own. Since the 1970s, the Long March family had used dinitrogen tetroxide and UDMH as propellants for its liquid engines. Although this hypergolic combination is simple, cheap, and reliable, its disadvantages, including toxicity, environmental damage, and low specific impulse, had kept Chinese carrier rockets from being competitive with those of other space powers since the mid-1980s. To remedy the situation, China began studying new propellants with the introduction of Project 863 in 1986. After more than a decade of early research, development of a 120-ton-thrust staged-combustion engine burning LOX and kerosene was formally approved in 2000. Despite setbacks such as engine explosions during initial firing tests, the development team made breakthroughs in key technologies such as superalloy production and engine ignition, and completed the first long-duration firing test in 2006. The engine, named YF-100, was certified in 2012, and the first flight engine was ready in 2014. On September 20, 2015, the Long March 6 (长征六号), a small rocket using one YF-100 engine on its first stage, successfully conducted its maiden flight. On June 25, 2016, the medium-lift Long March 7 (长征七号), equipped with six YF-100 engines, completed its maiden flight in full success, raising the maximum LEO payload capacity of Chinese rockets to 13.5 tons. The successes of Long March 6 and 7 marked the introduction of the "new generation of Long March rockets" powered by cleaner and more efficient engines.

The maiden launch of Long March 7 was also the very first launch from the Wenchang Space Launch Site (文昌航天发射场) in Wenchang, Hainan Province, marking Wenchang's debut on the world stage of space activities. Compared with the older Jiuquan, Taiyuan, and Xichang sites, the Wenchang Space Launch Site, whose construction began in September 2009, is China's newest and most advanced spaceport. Rockets launched from Wenchang can carry ten to fifteen percent more payload mass to orbit thanks to the site's low latitude. Additionally, because of its geographic location, the drop zones for debris produced by launches lie in the ocean, eliminating the threat to people and facilities on the ground. Wenchang's coastal location also allows larger rockets to be delivered to the launch site by sea, which is difficult, if not impossible, for inland sites because of the size limits of the tunnels that rockets must pass through in transit.
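The low-latitude advantage can be illustrated with a rough calculation. Earth's rotation gives an eastward-launched rocket a free velocity of v = ωR·cos(latitude), so a site nearer the equator contributes more of the required orbital speed (for GTO missions, a low latitude also reduces the costly plane change toward an equatorial orbit, which is where most of the payload gain comes from). The sketch below is a back-of-the-envelope illustration only; the site latitudes are approximate public figures, not values taken from this article.

# Back-of-the-envelope estimate of the launch-site latitude advantage.
# Earth's rotation contributes an eastward velocity v = omega * R * cos(latitude),
# a small but free fraction of the ~7,800 m/s needed for low Earth orbit.
# Latitudes below are approximate and used purely for illustration.
import math

EARTH_RADIUS_M = 6_378_000           # equatorial radius, metres
SIDEREAL_DAY_S = 86_164              # one full rotation of the Earth, seconds
OMEGA = 2 * math.pi / SIDEREAL_DAY_S # angular velocity, rad/s

sites = {
    "Wenchang": 19.6,  # degrees north (approximate)
    "Xichang": 28.2,
    "Taiyuan": 38.8,
    "Jiuquan": 41.0,
}

for name, lat_deg in sites.items():
    v = OMEGA * EARTH_RADIUS_M * math.cos(math.radians(lat_deg))
    print(f"{name:>9}: eastward boost ~{v:6.1f} m/s")

Running this gives roughly 438 m/s at Wenchang versus about 351 m/s at Jiuquan, an extra ~87 m/s of free velocity before the engines ignite.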
The biggest breakthrough of the decade, if not of several decades, was brought by the Long March 5 (长征五号), the flagship of the new generation of Long March rockets and China's first heavy-lift launch vehicle. Early studies of the Long March 5 can be traced back to 1986, and the project was formally approved in the mid-2000s. It applied 247 new technologies during its development, and over 90% of its components were newly developed and flown for the first time. Instead of the classic 3.35-meter-diameter core stage with 2.25-meter-diameter side boosters, the 57-meter-tall Long March 5 consists of one 5-meter-diameter core stage burning LH2/LOX and four 3.35-meter-diameter side boosters burning kerosene/LOX. With a launch mass of 869 metric tons and a lift-off thrust of 10,573 kN, the Long March 5, China's most powerful rocket, can lift up to 25 tons of payload to LEO and 14 tons to GTO, more than 2.5 times the capacity of the previous Chinese record holder (the Long March 3B) and nearly equal to the most powerful rocket in the world at the time (the Delta IV Heavy). Given this unprecedented capability, the Long March 5 was expected to be the keystone of the Chinese space program in the early 21st century. However, after a successful maiden flight in late 2016, the second launch of the Long March 5 on July 2, 2017, ended in failure, considered the biggest setback for the Chinese space program in nearly two decades. The Long March 5 was grounded indefinitely until the problem could be located and resolved, and multiple planned major space missions were postponed or put at risk of postponement over the next few years.[citation needed]

Despite the uncertain future of the Long March 5, China managed to make history in space exploration with existing hardware over the next two years. Because of tidal locking, the Moon always presents the same side to the Earth, and humans never saw its far side until the Space Age. Although numerous lunar orbiters since the 1960s had yielded considerable knowledge of the far side's overall condition by the early 21st century, no country had ever explored it at close range, because direct communication with Earth is impossible from the far side. This missing piece was eventually filled by China's Chang'e-4 (嫦娥四号) mission in 2019. To solve the communications problem, China launched Queqiao (鹊桥号), a relay satellite orbiting the Earth–Moon L2 Lagrangian point, in May 2018 to enable communications between the far side of the Moon and the Earth. On December 8, 2018, Chang'e 4, originally built as the backup to Chang'e 3, was launched by a Long March 3B rocket from Xichang and entered lunar orbit on December 12. On January 3, 2019, Chang'e 4 successfully soft-landed in the Von Kármán crater on the far side of the Moon and returned the first close-up image of the far side's surface. A rover named Yutu-2 (玉兔二号) was deployed onto the lunar surface a few hours later, leaving the first tracks on the far side. With this series of accomplishments, Chang'e-4 made China the first country to achieve a soft landing and roving on the far side of the Moon. For its success, the project team received the IAF World Space Award in 2020.

Aside from Chang'e 4, other events of this period are worth noting. In August 2016, China launched the world's first quantum communications satellite, Mozi (墨子号). In June 2017, the first Chinese X-ray astronomy satellite, Huiyan (慧眼), was launched into space.
In August of the same year, the Astronaut Center of China organized a joint training session in which sixteen Chinese and two ESA astronauts participated, the first time foreign astronauts took part in astronaut training organized by China. In 2018, China performed more orbital launches than any other country for the first time in history. On June 5, 2019, China conducted its first sea launch, with a Long March 11 (长征十一号) in the Yellow Sea. On July 25, the Chinese company i-Space became the first Chinese private company to conduct a successful orbital launch, with its small solid-fueled Hyperbola-1 rocket.

As the 2010s came to an end, the Chinese space program closed the decade with an inspiring event. On December 27, 2019, after a grounding and corrective effort lasting 908 days, the Long March 5 conducted a highly anticipated return-to-flight mission from Wenchang. The mission was a complete success, placing Shijian-20, the heaviest satellite China had ever built, into the intended supersynchronous orbit. The flawless return of the Long March 5 dispelled the gloom that had lingered since the 2017 failure. With its great power, the Long March 5 cleared the path to multiple world-class space projects, allowing China to make great strides toward its ambitions in the coming 2020s. As the product of the latest technology and engineering of the Chinese space industry in the early 21st century, the flight-proven Long March 5 unlocked much of the potential of the Chinese space program: various projects previously constrained by payload mass and size limits could now be realized. Since 2020, with the help of the Long March 5, the Chinese space program has made tremendous progress in multiple areas, completing some of the most challenging missions ever conducted in the history of space exploration.[citation needed]

The "Third Step" of the China Manned Space Program kicked off in 2020. The Long March 5B, a variant of the Long March 5, conducted its maiden flight successfully on May 5, 2020. Its high payload capacity and large payload fairing enabled the delivery of Chinese space station modules to low Earth orbit. On April 29, 2021, the Tianhe core module (天和核心舱), the 22-tonne core module of the space station, was successfully launched into low Earth orbit by a Long March 5B rocket, marking the beginning of the construction of the China Space Station, also known as Tiangong (天宫空间站), and ushering in an unprecedented cadence of human spaceflight missions. A month later, China launched Tianzhou-2, the first cargo mission to the space station. On June 17, Shenzhou-12, the first crewed mission to the Chinese Space Station, carrying Nie Haisheng, Liu Boming and Tang Hongbo, was launched from Jiuquan. The crew docked with Tianhe and entered the core module about nine hours after launch, becoming the first residents of the station. They lived and worked on the space station for three months, conducted two spacewalks, and returned to Earth safely on September 17, 2021, breaking the record for the longest Chinese human spaceflight mission (33 days), previously set by Shenzhou-11. Roughly a month later, the Shenzhou-13 crew was launched to the station. Astronauts Zhai Zhigang, Wang Yaping and Ye Guangfu completed China's first long-duration spaceflight mission, lasting over 180 days, before returning to Earth safely on April 16, 2022.
Astronaut Wang Yaping became the first Chinese woman to perform a spacewalk during the mission. In May 2022, the China Manned Space Program entered the space station assembly and construction phase. On June 5, 2022, Shenzhou-14 was launched and docked to the Tianhe core module. The crew, comprising Chen Dong, Liu Yang and Cai Xuzhe, were to welcome the arrival of two space station modules during their six-month mission. On July 24, the third Long March 5B rocket lifted off from Wenchang, carrying into orbit the 23.2 t Wentian laboratory module (问天实验舱), the largest and heaviest spacecraft ever built and launched by China. The module docked with the space station less than 20 hours later, adding its second module and first laboratory module. On September 30, the new Wentian module was rotated from the forward docking port to the starboard parking port. On October 31, the Mengtian laboratory module (梦天实验舱), the third and final module of the China Space Station, was launched by another Long March 5B rocket and docked with the station less than 13 hours later. On November 3, the T-shaped China Space Station was completed with the successful transposition of the Mengtian module. On November 29, Shenzhou-15 was launched and later docked with the station. Astronauts Fei Junlong, Deng Qingming, and Zhang Lu were welcomed aboard by the Shenzhou-14 crew, completing the first crew gathering and handover in space by Chinese astronauts and starting an era of continuous Chinese astronaut presence in space.

The third phase of the Chinese Lunar Exploration Program also moved forward in 2020. In preparation, China had conducted the Chang'e 5-T1 mission in 2014; by completing its main task on November 1, 2014, China demonstrated the capability of returning a spacecraft safely to Earth from lunar orbit, paving the way for a lunar sample-return mission planned for 2017. However, the failure of the second Long March 5 flight disrupted the original plan: despite the readiness of the spacecraft, the mission had to be postponed for lack of its launch vehicle until the successful return to flight of the Long March 5 in late 2019. On November 24, 2020, the sample-return mission, Chang'e-5 (嫦娥五号), began as a Long March 5 rocket launched the 8.2 t spacecraft stack into space. The spacecraft entered lunar orbit on November 28, after which the stack separated into two parts. The lander touched down near Mons Rümker in Oceanus Procellarum on December 1 and began collecting samples the next day. Two days after the landing, on December 3, the ascent vehicle took off from the lunar surface and entered lunar orbit carrying the container of collected samples; this was the first time China had launched a spacecraft from an extraterrestrial body. On December 6, the ascent vehicle docked with the orbiter in lunar orbit and transferred the sample container to the return capsule, accomplishing the first robotic rendezvous and docking in lunar orbit in history. On December 13, after main engine burns, the orbiter and return module entered their trajectory back to Earth. The return capsule landed intact in Inner Mongolia on December 17, bringing the mission to a flawless conclusion. On December 19, 2020, CNSA hosted the Chang'e-5 lunar sample handover ceremony in Beijing.
By weighing the sample container taken from the return capsule, CNSA announced that Chang'e-5 had retrieved 1,731 grams of samples from the Moon. The most complex mission China had completed to that point, Chang'e-5 achieved multiple remarkable milestones, including China's first lunar sampling, its first liftoff from an extraterrestrial body, the first automated rendezvous and docking in lunar orbit (by any nation), and the first Chinese spacecraft to re-enter Earth's atmosphere at high speed carrying samples. Its success also completed the goal of "Orbiting, Landing, Returning" that CLEP had pursued since 2004.

Before the launch of Chang'e-5, whose target lay 380,000 km from the Earth, China's first Mars probe had already departed, heading for Mars some 400 million kilometers away. Since the approval of the Mars mission in 2016, China had developed the various technologies required, including a deep space network, atmospheric entry, and lander hovering and obstacle avoidance. The Long March 5, the only launch vehicle capable of delivering the spacecraft, was back in service after its critical return to flight in December 2019. As a result, everything was ready when the launch window of July 2020 arrived. On April 24, 2020, CNSA officially announced the Planetary Exploration of China program and named China's first independent Mars mission Tianwen-1 (天问一号). On July 23, 2020, Tianwen-1 was successfully launched atop a Long March 5 rocket into a trans-Mars injection orbit. The spacecraft, consisting of an orbiter, a lander, and a rover, aimed to achieve orbiting, landing, and roving on Mars in a single mission on the nation's first attempt, a profile so complex and risky that international observers widely described it as "ambitious".

After a seven-month journey, on February 10, 2021, Tianwen-1 entered Mars orbit and became China's first operational Mars probe. The payloads on the orbiter were then activated and began surveying Mars in preparation for the landing, and over the following months CNSA released a series of images captured by the orbiter. On April 24, CNSA announced that the first Chinese Mars rover, carried by the Tianwen-1 probe, had been named Zhurong, after the god of fire in ancient Chinese mythology. On May 15, 2021, around 1 am Beijing time, Tianwen-1 initiated its landing sequence by igniting its main engines and lowering its orbit, and the landing module separated at 4 am. The orbiter then returned to its parking orbit while the lander headed for the Martian atmosphere. Three hours later came the most dangerous phase, a nine-minute atmospheric entry; at 7:18 am, the lander touched down on the preselected site in southern Utopia Planitia. On May 25, the Zhurong rover drove onto the Martian surface from the lander. On June 11, CNSA released the first batch of high-resolution images of the landing site captured by the Zhurong rover, marking the success of the Mars landing mission. With Tianwen-1, its first independent Mars mission, China completed the daunting sequence of orbiting, landing, and roving in a single attempt, becoming the second nation, after the United States, to land and drive a Mars rover on the Martian surface. The mission drew the world's attention as another example of China's rapidly expanding presence in outer space.
Because of its great difficulty and inspiring success, the Tianwen-1 development team received the IAF World Space Award in 2022, the second time a Chinese team had received this honor, after the Chang'e-4 mission. On 13 March 2024, China attempted to launch two spacecraft, DRO-A and DRO-B, into distant retrograde orbit around the Moon. An independent project, the mission was managed by the Chinese Academy of Sciences rather than the Chinese Lunar Exploration Program. An upper-stage malfunction initially left the spacecraft stranded in low Earth orbit, short of the intended trajectory. Rescue attempts followed: observers noted that the spacecraft's orbit was subsequently raised significantly to a highly elliptical one, and although their status was not fully disclosed to the public, they appear to have succeeded in reaching their intended orbit.

On 20 March 2024, China launched its relay satellite Queqiao-2 toward lunar orbit, along with two minisatellites, Tiandu-1 and 2. Queqiao-2 will relay communications for the Chang'e 6 (far side of the Moon), Chang'e 7 and Chang'e 8 (lunar south pole region) missions, while Tiandu-1 and 2 will test technologies for a future lunar navigation and positioning constellation. All three probes entered lunar orbit successfully on 24 March 2024 (Tiandu-1 and 2 were attached to each other and separated in lunar orbit on 3 April 2024).

China launched Chang'e 6 on 3 May 2024 to conduct the first lunar sample return from the Apollo Basin on the far side of the Moon, China's second lunar sample-return mission after Chang'e 5's near-side mission four years earlier. It also carried the Chinese Jinchan rover to conduct infrared spectroscopy of the lunar surface and to image the Chang'e 6 lander. The lander-ascender-rover combination separated from the orbiter and returner and landed on the Moon's surface on 1 June 2024 at 22:23 UTC. The ascender launched back into lunar orbit on 3 June 2024 at 23:38 UTC, carrying the samples collected by the lander, and later completed another robotic rendezvous and docking in lunar orbit. The sample container was then transferred to the returner, which landed in Inner Mongolia on 25 June 2024, completing China's far-side extraterrestrial sample-return mission. After releasing the samples for return to Earth, the Chang'e 6 (CE-6) orbiter was successfully inserted into an orbit around the Sun–Earth L2 Lagrange point on 9 September 2024.

According to a 2022 government white paper, China will conduct more human spaceflight, lunar, and planetary exploration missions. In addition, China has initiated the crewed lunar landing phase of its lunar exploration program, which aims to land Chinese astronauts on the Moon by 2030; a new crewed carrier rocket (the Long March 10), a new-generation crew spacecraft, a crewed lunar lander, a lunar EVA spacesuit, a lunar rover and other equipment are under development. CNSA's Tianwen-2 was launched in May 2025 to explore the co-orbital near-Earth asteroid 469219 Kamoʻoalewa and the active asteroid 311P/PanSTARRS, and to collect samples of Kamoʻoalewa's regolith.
Chinese space program and the international community

One of China's priorities in its Belt and Road Initiative is to improve satellite information pathways. China is an attractive partner for space cooperation for other developing countries because it launches their satellites at a reduced cost and often provides financing in the form of policy loans. With respect to African countries, the 2022–2024 action plan for the Forum on China-Africa Cooperation commits China to using space technology to enhance cooperation with African countries and to create centers for Africa-China cooperation on satellite remote sensing applications. African countries are increasingly cooperating with China on satellite launches and specialized training. As of 2022, China had launched two satellites for Ethiopia, two for Nigeria, one for Algeria, one for Sudan, and one for Egypt. China and Namibia jointly operate the China Telemetry, Tracking, and Command Station, established in 2001 in Swakopmund, Namibia, which tracks Chinese satellites and space missions.

China and Brazil have cooperated successfully in the field of space, most notably in the development and launch of Earth-monitoring satellites. As of 2023, the two countries have jointly developed six China-Brazil Earth Resource Satellites. These projects have helped both Brazil and China develop their access to satellite imagery and promoted remote sensing research. Brazil and China's cooperation is a unique example of South-South cooperation between two developing countries in the field of space.

The PRC is a member of the United Nations Committee on the Peaceful Uses of Outer Space and a signatory to all United Nations treaties and conventions on space, with the exception of the 1979 Moon Treaty. The United States government has long resisted the use of PRC launch services by American industry over concerns that alleged transfers of civilian technology could have dual-use military applications in countries such as North Korea, Iran or Syria. Financial retaliatory measures have accordingly been taken on many occasions against several Chinese space companies. The Cox Report, released in 1999, alleged that following decades of intelligence operations against U.S. weapons laboratories conducted by the Ministry of State Security, China stole design information regarding advanced thermonuclear weapons. In 2011, Congress passed a law prohibiting NASA researchers from working with Chinese citizens affiliated with a Chinese state enterprise or entity without FBI certification, or from using NASA funds to host Chinese visitors. In March 2013, the U.S. Congress passed legislation barring Chinese nationals from entering NASA facilities without a waiver from NASA.

The history of the U.S. exclusion policy can be traced back to the Cox Report's allegation that technical information American companies provided to China for its commercial satellites ended up improving Chinese intercontinental ballistic missile technology. Relations were further aggravated in 2007, when China destroyed a defunct meteorological satellite in low Earth orbit to test a ground-based anti-satellite (ASAT) missile. The debris created by the test added to the space junk that litters Earth's orbit, exposing other nations' space assets to the risk of accidental collision.
The United States also fears Chinese application of dual-use space technology for nefarious purposes. China has responded to the exclusion policy by opening its space station to the outside world, welcoming scientists from all countries. American scientists have also boycotted NASA conferences over the rejection of Chinese nationals from such events. In September 2025, NASA prohibited Chinese nationals from working with its programs.

Organization

Initially, the space program of the PRC was organized under the People's Liberation Army, particularly the Second Artillery Corps (now the PLA Rocket Force, PLARF). In the 1990s, the PRC reorganized the space program as part of a general reorganization of the defense industry intended to make it resemble Western defense procurement. The China National Space Administration, an agency within the State Administration of Science, Technology and Industry for National Defense, is now responsible for launches. The Long March rocket is produced by the China Academy of Launch Vehicle Technology, and satellites are produced by the China Aerospace Science and Technology Corporation. The latter organizations are state-owned enterprises; however, it is the intent of the PRC government that they should not be actively state-managed and that they should behave as independent design bureaus.[citation needed] The space program also maintains close links with several other state institutions. The PRC has six satellite launch centers and sites, plus space tracking facilities shared with France, Brazil, Sweden, and Australia.

Notable spaceflight programs

As the Space Race between the two superpowers reached its climax with humans landing on the Moon, Mao Zedong and Zhou Enlai decided on July 14, 1967, that the PRC should not be left behind, and therefore initiated China's own crewed space program. The top-secret Project 714 aimed to put two people into space by 1973 with the Shuguang spacecraft. Nineteen PLAAF pilots were selected for this goal in March 1971. The Shuguang-1 spacecraft, to be launched on the CZ-2A rocket, was designed to carry a crew of two. The program was officially cancelled on May 13, 1972, for economic reasons, though the internal politics of the Cultural Revolution likely motivated the closure.[citation needed]

A short-lived second crewed program was based on the successful implementation of landing technology (third in the world, after the USSR and the United States) by the FSW satellites. It was publicized several times in 1978, with some details and photos released openly, but was abruptly canceled in 1980. It has been argued that this second crewed program was created solely for propaganda purposes and was never intended to produce results.

A new crewed space program was proposed by the Chinese Academy of Sciences in March 1986, as Astronautics Plan 863-2. This consisted of a crewed spacecraft (Project 863-204) used to ferry astronaut crews to a space station (Project 863-205). In September of that year, the Chinese media presented astronauts in training. The various proposed crewed spacecraft were mostly spaceplanes. Project 863 ultimately evolved into the 1992 Project 921.[citation needed]

In 1992, authorization and funding were given for the first phase of Project 921, a plan to launch a crewed spacecraft. The Shenzhou program comprised four uncrewed test flights and two crewed missions. The first was Shenzhou 1 on November 20, 1999. On January 9, 2001, Shenzhou 2 was launched, carrying test animals.
Shenzhou 3 and Shenzhou 4 were launched in 2002, carrying test dummies. These were followed by the successful Shenzhou 5, China's first crewed mission in space, on October 15, 2003, which carried Yang Liwei in orbit for 21 hours and made China the third nation to launch a human into orbit. Shenzhou 6 followed two years later, ending the first phase of Project 921. Missions are launched on the Long March 2F rocket from the Jiuquan Satellite Launch Center. The China Manned Space Agency (CMSA) of the Equipment Development Department of the Central Military Commission provides engineering and administrative support for the crewed Shenzhou missions.

The second phase of Project 921 started with Shenzhou 7, China's first spacewalk mission. Two crewed missions were then planned to the first Chinese space laboratory. The PRC initially designed the Shenzhou spacecraft with docking technologies imported from Russia, making it compatible with the International Space Station (ISS). On September 29, 2011, China launched Tiangong 1, a target module intended as the first step in testing the technology required for a planned space station. On October 31, 2011, a Long March 2F rocket lifted the uncrewed Shenzhou 8 spacecraft, which docked twice with the Tiangong 1 module. The Shenzhou 9 craft took off on 16 June 2012 with a crew of three and docked with the Tiangong-1 laboratory on 18 June 2012 at 06:07 UTC, marking China's first crewed spacecraft docking. Another crewed mission, Shenzhou 10, launched on 11 June 2013, after which the Tiangong 1 target module was expected to be deorbited.

A second space laboratory, Tiangong 2, launched on 15 September 2016 at 22:04:09 (UTC+8). Its launch mass was 8,600 kg, with a length of 10.4 m and a diameter of 3.35 m, much like Tiangong 1. Shenzhou 11 launched and rendezvoused with Tiangong 2 in October 2016, with a further mission, Shenzhou 12, then unconfirmed. Tiangong 2 carried the POLAR gamma-ray burst detector; a space-Earth quantum key distribution and laser communications experiment, used in conjunction with the Mozi "Quantum Science Satellite"; a liquid-bridge thermocapillary convection experiment; and a space materials experiment. Also included were a stereoscopic microwave altimeter, a space plant growth experiment, and a multi-angle wide-spectral imager and multi-spectral limb imaging spectrometer. Onboard TG-2 was also the world's first in-space cold atomic fountain clock.

A larger basic permanent space station (基本型空间站) would be the third and last phase of Project 921: a modular design with an eventual weight of around 60 tons, to be completed sometime before 2022. The first section, designated Tiangong 3, was scheduled for launch after Tiangong 2, but was ultimately not ordered after its goals were merged into Tiangong 2. This phase could also mark the beginning of China's crewed international cooperation, whose existence was officially disclosed for the first time after the launch of Shenzhou 7. The first module of the Tiangong space station, the Tianhe core module, was launched on 29 April 2021 from Wenchang Space Launch Site and was first visited by the Shenzhou 12 crew on 17 June 2021. The Chinese space station was scheduled to be completed in 2022 and fully operational by 2023.

In January 2004, the PRC formally started the implementation phase of its uncrewed Moon exploration project.
According to Sun Laiyan, administrator of the China National Space Administration, the project would involve three phases: orbiting the Moon, landing, and returning samples.[citation needed] On December 14, 2005, it was reported that "an effort to launch lunar orbiting satellites will be supplanted in 2007 by a program aimed at accomplishing an uncrewed lunar landing. A program to return uncrewed space vehicles from the Moon will begin in 2012 and last for five years, until the crewed program gets underway" in 2017, with a crewed Moon landing planned after that. The idea of developing a new Moon rocket of the 1962 Soviet UR-700M class (Project Aelita), able to launch a 500-ton payload into LTO,[dubious – discuss] along with a more modest launch vehicle with a 50-ton LTO payload, was discussed at a 2006 conference by academician Zhang Guitian (张贵田), a liquid-propellant rocket engine specialist who developed the engines of the CZ-2 and CZ-4A rockets. On June 22, 2006, Long Lehao, deputy chief architect of the lunar probe project, laid out a schedule for China's lunar exploration, setting 2024 as the date of China's first moonwalk. In September 2010, it was announced that the country planned to carry out deep space exploration by sending a man to the Moon by 2025. China also hoped to bring a Moon rock sample back to Earth in 2017 and subsequently build an observatory on the Moon's surface. Ye Peijian, commander-in-chief of the Chang'e program and an academician of the Chinese Academy of Sciences, added that China had the "full capacity to accomplish Mars exploration by 2013".

On December 14, 2013, China's Chang'e 3 became the first object to soft-land on the Moon since Luna 24 in 1976. On 20 May 2018, several months before the Chang'e 4 mission, the Queqiao relay satellite was launched from Xichang Satellite Launch Center on a Long March 4C rocket. The spacecraft took 24 days to reach Earth–Moon L2, using a gravity assist at the Moon to save propellant. On 14 June 2018, Queqiao finished its final adjustment burn and entered the mission orbit, about 65,000 kilometres (40,000 mi) from the Moon, the first lunar relay satellite ever placed in this location. On January 3, 2019, Chang'e 4, the China National Space Administration's lunar lander and rover mission, made the first-ever soft landing on the Moon's far side. The rover was able to transmit data back to Earth despite the lack of a direct radio link from the far side, via the dedicated relay satellite sent earlier into lunar orbit. The landing and data transmission are considered landmark achievements for human space exploration.

Yang Liwei declared at the 16th Human in Space Symposium of the International Academy of Astronautics (IAA) in Beijing, on May 22, 2007, that building a lunar base was a crucial step toward a flight to Mars and farther planets. As is customary while a project remains at a very early preparatory research phase, no official crewed Moon program had been announced by the authorities, but its existence was nonetheless revealed by regular intentional leaks to the media. A typical example is the Lunar Roving Vehicle (月球车) shown on a Chinese TV channel (东方卫视) during the 2008 May Day celebrations. On 23 November 2020, China launched the new Moon mission Chang'e 5, which returned to Earth on 16 December 2020 carrying lunar samples. Only two nations, the United States and the former Soviet Union, had ever returned materials from the Moon before, making China the third country to achieve the feat.
China launched Chang'e 6 on 3 May 2024; it conducted the first lunar sample return from the far side of the Moon, China's second lunar sample-return mission after Chang'e 5's near-side mission four years earlier. In 2006, Qi Faren, CAS academician and chief designer of the Shenzhou spacecraft, stated in an interview: 搞航天工程不是要达成升空之旅, 而是要让人可以正常在太空中工作, 为将来探索火星、土星等作好准备。 ("Space programs are not aimed at sending humans into space per se, but at enabling humans to work normally in space and at preparing for the future exploration of Mars, Saturn, and beyond.")

Sun Laiyan, administrator of the China National Space Administration, said on July 20, 2006, that China would begin deep space exploration focused on Mars over the following five years, during the Eleventh Five-Year Plan (2006–2010) period. In April 2020, the Planetary Exploration of China program was announced. The program aims to explore planets of the Solar System, starting with Mars and later expanding to asteroids, comets, Jupiter, and more. The first mission of the program, the Tianwen-1 Mars exploration mission, began on July 23, 2020. A spacecraft consisting of an orbiter, a lander, a rover, a remote camera, and a deployable camera was launched by a Long March 5 rocket from Wenchang. Tianwen-1 was inserted into Mars orbit in February 2021 after a seven-month journey, followed by a successful soft landing of the lander and the Zhurong rover on May 14, 2021.

According to a China Academy of Space Technology (CAST) presentation at the 2015 International Space Development Conference in Toronto, Canada, Chinese interest in space-based solar power began in the period 1990–1995. By 2013, a national goal had been articulated: "the state has decided that power coming from outside of the earth, such as solar power and development of other space energy resources, is to be China's future direction", and the following roadmap was identified: "In 2010, CAST will finish the concept design; in 2020, we will finish the industrial level testing of in-orbit construction and wireless transmissions. In 2025, we will complete the first 100kW SPS demonstration at LEO; and in 2035, the 100MW SPS will have an electric generating capacity. Finally in 2050, the first commercial level SPS system will be in operation at GEO." The article went on to state that "Since SPS development will be a huge project, it will be considered the equivalent of an Apollo program for energy. In the last century, America's leading position in science and technology worldwide was inextricably linked with technological advances associated with the implementation of the Apollo program. Likewise, as China's current achievements in aerospace technology are built upon with its successive generations of satellite projects in space, China will use its capabilities in space science to assure sustainable development of energy from space."

In 2015, the CAST team won the International SunSat Design Competition with its video of a Multi-Rotary Joint concept; the design was presented in detail in a paper for the Online Journal of Space Communication. In 2016, Lt. Gen. Zhang Yulin, deputy chief of the PLA armament development department of the Central Military Commission, suggested that China would next begin to exploit Earth–Moon space for industrial development, with the goal of constructing space-based solar power satellites that would beam energy back to Earth.
In June 2021, Chinese officials confirmed the continuation of plans for a geostationary solar power station by 2050. The updated schedule anticipated a small-scale electricity generation test in 2022, followed by a megawatt-level orbital power station by 2030. The gigawatt-level geostationary station would require over 10,000 tonnes of infrastructure, delivered by more than 100 Long March 9 launches.

List of launchers and projects

China's first deep space probe, the Yinghuo-1 orbiter, was launched in November 2011 along with the joint Fobos-Grunt mission with Russia, but the rocket failed to leave Earth orbit and both probes underwent destructive re-entry on 15 January 2012. In 2018, Chinese researchers proposed a deep space exploration roadmap to explore Mars, an asteroid, Jupiter, and further targets within the 2020–2030 timeframe. A number of robotic missions under this roadmap are current or upcoming.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Nir_Barkat] | [TOKENS: 2038]
Nir Barkat

Nir Barkat (Hebrew: נִיר בַּרְקָת; born 19 October 1959) is an Israeli businessman and politician, currently serving as Minister of Economy. He previously served as mayor of Jerusalem from 2008 to 2018.

Biography

Nir Barkat was born and raised in Jerusalem. His father, Zalman, was a physics professor at the Hebrew University, and his mother was a folk dancing instructor. His grandparents immigrated from Poland and Russia. Barkat joined the 98th Paratroopers Division of the Israel Defense Forces in 1977 and served for six years, including reserve duty, reaching the rank of major. Barkat was wounded in combat in Lebanon during Operation Movil. He earned a BA in computer science from the Hebrew University of Jerusalem, and lives in the Jerusalem neighborhood of Beit HaKerem with his wife, Beverly. They have three daughters.

Business career

Barkat started his career in the high-tech industry by founding a software company called BRM in 1988, which specialized in antivirus software. The company later became an incubator venture firm that invested in companies such as Check Point and Backweb. He subsequently helped found the social investment company IVN, Israel Venture Network. In 2007, Barkat took part in the Israeli version of Dragons' Den, the venture-capital television program in which entrepreneurs pitch their ideas to secure investment from business experts. According to Forbes in 2013, Barkat's net worth was estimated at NIS 450 million (about $122 million), more than the combined wealth of the next three politicians on the list, making him the wealthiest Israeli politician. On 3 October 2021, Barkat was named among 565 Israelis appearing in the Pandora Papers. As mayor, he did not take any salary from the city of Jerusalem.

Political career

Nir Barkat's entry into politics was gradual, following his exposure to and philanthropic investments in Jerusalem's education system. In 1999, the Barkat family began to explore the educational gaps in Jerusalem through their investment in The Snunit Center for the Advancement of Web Based Learning, a non-profit, non-governmental organization which uses web-based resources to improve online education and foster personal and social growth within Israeli society. Barkat saw this investment as the beginning of his interest in entering Jerusalem's municipal politics. His official entry into politics came in January 2003, when he founded the party Yerushalayim Tatzli'ah ("Jerusalem Will Succeed") and ran in the Jerusalem mayoral race, securing 43% of the vote and losing to Uri Lupoliansky. After his initial loss, Barkat served as head of the opposition on the city council until his election as mayor in 2008. During this period he helped form StartUp Jerusalem, a venture to create jobs in the capital. He briefly led the Jerusalem faction of the Kadima party, then a powerhouse in Israeli politics, from 2006 to 2007, but left over disagreements with the proposal to relinquish portions of Jerusalem.

Barkat ran for a second time in November 2008, this time winning the election with 52% of the vote (his main rival, Meir Porush, won 43%). Barkat was described as a secular politician, in contrast with both Lupoliansky and Porush, who are Haredi. He ran on a platform of increasing tourism, finding solutions to the housing crisis, and opposing the light rail.
He also vowed to make the city council more approachable and transparent, and decried the use of the mayor's office as a stepping stone to national politics. Controversies during his first term included the dismissal of city council member Rachel Azaria and his proposal to relinquish predominantly Arab-populated neighborhoods on the outskirts of the city limits. He helped initiate the city's first international marathon in 2011 and has personally participated in races both in Jerusalem and abroad.

In 2013, he ran for a second term, during which he was endorsed by the Labor Party and by a range of prominent Likud activists; he also had the tacit support of Meretz, which withdrew its candidate, Pepe Alalu, in order not to steal votes away from Barkat. His opponent Moshe Lion had the backing of Avigdor Lieberman, head of the Yisrael Beiteinu party, and Aryeh Deri, head of Shas. Barkat was re-elected with 52% of the vote; Lion, a former head of the Jerusalem Development Authority who ran as the Likud candidate, garnered 43%. Lion has since served as a member of the city council and in 2015 joined Barkat's coalition. Following the tense campaign, Barkat was fined NIS 400,000 for improper use of election funds. Since his election as mayor, Barkat served the city for a salary of one shekel a year.

Controversies of his second term included the Formula 1 exhibition, part of the mayor's effort to raise Jerusalem's status as a cultural capital of the world and increase tourism. The Jerusalem Formula One event took place in 2013 and 2014 but drew criticism over street closures that led to school cancellations, over expenditures, and over its appropriateness for the city. Other controversies included a planned addition to the light rail, specifically the Blue Line, which was to run down Emek Refaim Street. Barkat also had a long-running feud with Finance Minister Moshe Kahlon over funding, which led to city-wide strikes several years in a row: Kahlon accused Barkat of wasting funds and of mismanagement, while Barkat argued that Kahlon was withholding funds for political reasons. The resulting strikes caused garbage to pile up throughout the city and brought the threat of mass dismissals of municipal employees.

In December 2015, Barkat joined the Likud party, having previously endorsed Likud leader Benjamin Netanyahu for prime minister in the 2013 and 2015 Knesset elections. In March 2018 he announced his intention to enter national politics rather than seek re-election for a third term as mayor.

Since the mid-2000s, Jerusalem has developed into a regional center for tech start-ups and was named the #1 emerging tech hub by Entrepreneur magazine. Barkat's administration provided incentives, tax breaks, and grants for companies with employees living in the city. By 2016, over 500 start-ups had been established in Jerusalem, bringing in upwards of $243 million in investment in the first nine months of 2015. "'After the election of [Mayor Nir] Barkat, personal activism strengthened in the city. People felt they had influence, and it really connected with the entrepreneurial character', said Dana Mann, a partner in PICO Ventures, and previously a partner in OurCrowd."

Barkat has come under fire from some women's rights activists.[who?] Some women on the Jerusalem City Council have protested illegal modesty signs.[who?]
Jerusalem city councilwoman Rachel Azaria, who brought the case of gender-segregated buses in Jerusalem to the court's attention, was fired by Barkat. Laura Wharton, a member of the Jerusalem City Council, complained about the illegal modesty signs, but claims she was brushed off. Barkat has criticized Women of the Wall for their confrontational efforts to pray at the Kotel. In February 2015, Barkat garnered international attention when he intervened after seeing a Palestinian man trying to stab a Jewish person. Barkat succeeded in physically subduing the attacker, with his mayoral security detail arriving immediately afterward and the victim receiving first aid. The Tzahal Square incident prompted responses from figures such as former Israeli ambassador to the United States Michael Oren, who stated that Barkat had acted "courageously", as well as from commentators on Facebook who shared tongue-in-cheek images depicting Barkat as Batman, Neo, and other film characters. In October 2015, he encouraged Israelis to carry guns as a "duty" in light of increased tensions; his comments were criticised by various commentators. In March 2018 he announced he would not run for another term in the municipal election and would instead join the Likud party to stand for the Knesset in the next elections. He ceased serving as mayor on 4 December 2018. Barkat was appointed Minister of Economy and Industry on 1 January 2023, on behalf of his party, the Likud. After Israeli military police visited the Sde Teiman detention camp in July 2024 to detain nine Israeli soldiers suspected of abusing a Palestinian prisoner, Barkat declared: "I support our fighters and call on the defense minister to immediately put a stop to the despicable show trial against them." In June 2025, Barkat, then serving as Minister of Economy, faced controversy when his neighbor, Dr. Renana Keydar, was arrested during a peaceful protest outside his home. The protest involved singing and holding signs related to hostages, but Keydar, who was not part of the protesting group, was detained for wearing a hat that said "Democracy". The arrest escalated to a strip search, which Keydar described as humiliating and potentially sexually abusive, raising concerns about police conduct and the suppression of dissent. The incident was seen as part of a broader pattern of police action against protesters since the beginning of the Gaza war, highlighting tensions around freedom of expression and the right to protest in Israel. The event sparked outrage and drew comparisons to authoritarian practices, further fueling debates about democratic values and the government's response to dissent. An interview with Keydar, conducted by N12 News and shared on social media, detailed the arrest and her subsequent treatment, amplifying public and media scrutiny of the event. See also References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Tribe_(internet)] | [TOKENS: 2818]
Contents Tribe (internet) An internet tribe or digital tribe is an unofficial online community or organization of people who share a common interest, and who are usually loosely affiliated with each other through social media or other Internet routes. The term is related to "tribe", which traditionally refers to people closely associated in both geography and genealogy. Nowadays, it more closely resembles a virtual community or a personal network, and it is often called a global digital tribe. Most anthropologists agree that a tribe is a (small) society that practices its own customs and culture, and that these define the tribe. Online tribes are divided into clans with their own customs and cultural values, which differentiate their activities from those that occur in 'real life' contexts. People feel more inclined to share and defend their ideas on social networks than they would face to face. Precedents The term "tribe" originated around the time of the Greek city-states and the early formation of the Roman Empire. The Latin term "tribus" has since been transformed to mean "a group of persons forming a community and claiming descent from a common ancestor". Over the years, the range of meanings has grown broader; for example, "Any of various systems of social organization comprising several local villages, bands, districts, lineages, or other groups and sharing a common ancestry, language, culture, and name" (Morris, 1980, p. 1369). Morris (1980) also notes that a tribe is a "group of persons with a common occupation, interest, or habit," and "a large family." Vestiges of ancient tribal communities have been preserved in both large gatherings (like football matches) and small ones (like church communities). The range of groups now referred to as tribal is enormous, even though industrial society eroded the tribal gatherings of earlier societies and redefined community. The existence of social media as we know it today, however, is due to the post-industrial society that has seen the rapid growth of personal computers, mobile phones and the Internet. People can now collaborate, communicate, celebrate, commemorate, give advice and share ideas within these virtual clans, which have once again redefined social behaviour. That internet tribes exist is an expression of a human tribal instinct. History The first attempts at such social communities date back to at least 2003, when tribe.net was launched. Tribes from a technical perspective Not only do Twitter tribes have mutual interests, but they also share potentially subconscious language features, as found in a 2013 study by researchers from Royal Holloway, University of London and Princeton. Dr. John Bryden from the School of Biological Sciences at Royal Holloway states that it is possible to anticipate which community somebody is likely to belong to with up to 80 percent accuracy. This research shows that people tend to join communities based on shared interests and hobbies. To carry out the study, publicly available messages sent via Twitter were recorded, capturing conversations between two or more participants. As a result, each community can be characterised by its most-used words. This approach can enrich the detection of new communities based on word analysis, in order to automatically classify people within social networks.
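The study itself is not presented here as code, but the basic idea of classifying a user by comparing their word usage against per-community word profiles can be sketched as follows. This is a minimal illustration only: the community names, word profiles, scoring rule, and messages below are all invented, not taken from the study.

```python
# Illustrative sketch only: assign a user to the community whose
# characteristic word profile best matches the words they use.
from collections import Counter

# Hypothetical profiles: fraction of each community's members using each word.
community_profiles = {
    "football_fans": {"match": 0.8, "goal": 0.7, "league": 0.5},
    "programmers": {"bug": 0.7, "commit": 0.6, "compile": 0.5},
}

def community_score(user_words, profile):
    # Sum the profile fractions of the words the user actually used.
    return sum(frac for word, frac in profile.items() if word in user_words)

def predict_community(messages):
    words = Counter(w for msg in messages for w in msg.lower().split())
    scores = {name: community_score(words, prof)
              for name, prof in community_profiles.items()}
    return max(scores, key=scores.get)

print(predict_community(["fixed a bug in the build", "new commit pushed"]))
# -> programmers
```

A real system would estimate such profiles from large samples of recorded messages, as the study did, rather than hard-coding them.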
The methods used to identify tribes relied heavily on algorithms and techniques from statistical physics, computational biology and network science. A different approach is taken by Tribefinder, a system able to identify the tribal affiliations of Twitter users using deep learning and machine learning. The system establishes which tribes individuals belong to by analysing their tweets and comparing their vocabulary. These tribal vocabularies are generated beforehand from the vocabulary of tribal influencers and leaders, using keywords that express concepts, ideas and beliefs. The final step, which teaches the system to associate arbitrary individuals with specific tribes, is a deep-learning analysis of the language these influential tribal leaders use. In so doing, classifiers are created using embedding and LSTM (long short-term memory) models. Specifically, these classifiers work by collecting the Twitter feeds of all the users from the tribes that Tribefinder is training on. Embedding is applied to map words into vectors, which are then used as input for the subsequent LSTM models. Tribefinder analyzes an individual's word usage in their tweets and then assigns a tribal affiliation in categories such as alternative realities, lifestyle, and recreation, based on similarity to the specific tribal vocabularies. An in-depth look into the research The published research focused on four main parts: background, results, conclusions and methods. Language is a system of communication consisting of sounds, words, and grammar, or the system of communication used by the people of a particular country or profession. Language is perhaps the most important characteristic that distinguishes human beings from other animals. In addition, it has a wide range of social implications that can be associated with social or cultural groups. People usually group into communities around shared interests, which produces variation in the words they use, since each domain has its own distinctive terms. The hypothesis of the study was therefore that this variation should closely match the community structure of the network. To test this theory, around 250,000 users of the social networking and microblogging site Twitter were monitored in order to analyse whether the groups identified shared the same language features. As Twitter uses unstructured data and users can send messages to any other users, the study had to be based on complex algorithms. These algorithms had to determine the word frequencies within messages between people and link them to the groups those people usually frequented. Detecting community structure is one of the main problems in the study of networked systems. Social networks naturally tend to divide themselves into communities or modules. However, some real-world networks are too large and must be simplified before information can be extracted. An effective way of dealing with this for smaller communities is to use modularity algorithms to partition users into smaller groups. For larger networks, a more efficient algorithm called the 'map equation' decomposes a network into modules by optimally compressing a description of information flows on the network. Each community was then characterised according to the words its members used most, based on a ranking algorithm.
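The embedding-plus-LSTM pipeline that Tribefinder is described as using can be sketched roughly as below, assuming TensorFlow/Keras is available. This is not Tribefinder's actual code: the tribe labels, toy tweets, and hyperparameters are invented for illustration.

```python
# Minimal sketch of an embedding + LSTM text classifier (TensorFlow/Keras).
# Toy data: two invented "tribes" with distinct vocabularies.
import tensorflow as tf
from tensorflow.keras import layers

tweets = tf.constant([
    "vegan recipe with tofu and kale",
    "plant based meal prep ideas",
    "crypto market pumping today",
    "buy the dip hodl bitcoin",
])
labels = tf.constant([0, 0, 1, 1])  # 0 = food tribe, 1 = crypto tribe

vectorize = layers.TextVectorization(max_tokens=1000, output_sequence_length=10)
vectorize.adapt(tweets)
x = vectorize(tweets)  # words -> integer ids

model = tf.keras.Sequential([
    layers.Embedding(input_dim=1000, output_dim=16),  # map word ids to vectors
    layers.LSTM(16),                                  # read the word sequence
    layers.Dense(2, activation="softmax"),            # probability per tribe
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, labels, epochs=10, verbose=0)

# Classify a new, unseen message.
print(model.predict(vectorize(tf.constant(["new bitcoin all time high"]))))
```

In the real system, the training texts come from tribal influencers and leaders, and the trained classifier is then applied to arbitrary users' feeds.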
To determine the significance of word-usage differences, word endings and word lengths were also measured, and these confirmed the pattern found. These measurements also helped in predicting the community membership of users, by comparing their individual word frequencies with community word usage, making it possible to forecast which community a given user is likely to join based on the words they use. The aim of this research was to study the link between community structure in a social network environment and language use within communities. The striking pattern that was found suggests that people from different clans tend to use different words based on their own interests and hobbies. Even though this approach did not manage to cover everyone on Twitter, it has several advantages over ordinary surveys, which cover a smaller range of groups: it is systematic, it is non-intrusive, and it easily produces large volumes of rich data. Extending the study could also reveal other cultural characteristics, for example whether individuals who belong to multiple communities use a different word set in each of them. A process called snowball sampling helped form the sample network: each user's tweets and messages were recorded, and any new users referenced were added to a list from which they were picked to be sampled. Duplicate messages were ignored. To find the words that characterise each clan, the fraction of people within a community who use a certain word was compared with the fraction of people who use that word globally. The difference between communities was also measured by comparing relative word-usage frequencies. Different spellings within tribes Words, and the way we spell them, are in continuous change as we find new ways to communicate. Although traditional dictionaries do not take these changes into account, online ones have adopted many of them. An interesting fact outlined in the research above is that communities tend to use their own distinctive spellings for words. According to Professor Vincent Jansen of Royal Holloway, online communities spell words in different ways, just as people have different regional accents. For example, Justin Bieber fans tend to end words in "ee", as in "pleasee", while school teachers tend to use long words. Moreover, the largest group found in the study was composed of African Americans who were using the words "nigga", "poppin", and "chillin". Members of this community also tend to shorten the ends of words, replacing "ing" with "in" and "er" with "a". The campfire Each tribe has an online platform (such as Flickr or Tumblr), called a campfire, around which it gathers. These campfires tend to enable one or more of three broad tribal activities. However, some brands are building their own tribes around platforms outside of these. Cooperation Cooperation is the action of working together to the same end. Cooperation developed naturally over time, as it helped companies streamline their research costs and better respond to users' requirements. As a result, organisations today are looking for flexible structures that can easily adapt to this rapidly changing environment. Groupware systems cater to these needs. Informal communication predominates, and specialists in certain domains exchange their experience with other people within the groupware environment.
Collaboration and cooperation are available through instant messages; people can discuss, chat and swap ideas. Moreover, people can work together while located remotely from each other. Groupware can be split into three categories, depending on the level of cooperation and technology involved in the process: communication, collaboration and coordination. One of the largest and best-known pieces of cooperation software is Wikipedia. Wikipedia is collaborative software because anyone can edit it. Any user can edit articles, view past revisions and discuss the current state of each article through a forum. Because anyone can change it and information can be found very quickly, it has become one of the 10 most accessed sites on the Internet. Wikipedia has many advantages over other encyclopedias, but also some drawbacks. Communication Communication is the act or an instance of communicating; the imparting or exchange of information, ideas, or feelings. Communication has changed drastically over time, and social networks have changed the way people communicate. Even though people can interact with each other 24/7, there is a new wave of barriers and threats. In the workplace, electronic communication has overtaken face-to-face and voice-to-voice communication by far. This shift has favoured Generation Y, who prefer instant messaging to talking directly to someone. It is often said, as a potential ironic twist, that social media has the real potential to make us less social. However, there are studies confirming that people are becoming more social; it is the style in which they interact with each other that has changed considerably. One of the major drawbacks of social networks is privacy, as people tend to trust others more rapidly and send more open messages about themselves. As a result, personal information can easily be exposed to other persons. Twitter and Facebook are two of the biggest social networks in the world. Facebook is currently the largest social network in the world, with more than 1 billion people using the website. This means that approximately one in seven people on Earth uses Facebook. Facebook users share their stories, images and videos in order to celebrate and commemorate events together. They can also play social games and like other Facebook pages. There is also a section called 'News Feed' where users can see social information from their friends or from the pages that they have liked or shared. Each user has their own profile page, called a 'wall', where they can post all the above-mentioned materials (their friends can do this as well). The biggest advantage of Facebook is that users can make new friends, as well as find old acquaintances and resume socialising with them. One of the most useful features of Facebook is the existence of groups. Users with the same interests can create a new group or take part in existing ones to debate information and exchange ideas. However, there are also groups created to declare an affiliation, such as an obsession with a particular subject. Twitter is another social network; it allows users to send and read short messages called 'tweets'. Even though messages can contain only 280 characters, this is a length well suited to sending status updates to followers. The main advantage of Twitter is that people can gain followers quickly and share ideas and links very fast.
There are networks of influential people who can be connected via Twitter. On Twitter, tribes manifest themselves as the followers of a person, a company or an institution. As a result, Twitter can be used as a marketing tool to make a product visible, on condition that a big tribe of followers is created. To do this, the right community must be built, as finding the right people can be a challenge. There are some steps users can take to make connections and attract followers: searching with Twitter search, following the followers of other users, looking at Twitter Lists, using #hashtags, and finding third-party programs. Conclusion As Seth Godin states, "The Internet eliminated geography". People join tribes or clans because they find and share the same ideas and interests with other people. The main disadvantage of old tribes is that they could not influence group behaviour. New tribes, on the other hand, are self-sustaining and can survive without a leader; they are not necessarily dialogue-based, and they are long-lasting. As demonstrated in this article, tribes have influenced the way languages, organisations and cultures work. They have redefined old concepts with the help of social media and have changed the way people will interact in the future. See also References Further reading
========================================
[SOURCE: https://en.wikipedia.org/wiki/Nuris] | [TOKENS: 1221]
Contents Nuris Nuris (Arabic: نورِس) was a Palestinian Arab village in the District of Jenin. In 1945, Nuris had 570 inhabitants. It was depopulated during the 1948 War, on 29 May 1948, under Operation Gideon. The Israeli moshav of Nurit was built on Nuris' village land in 1950. Location Nuris was located in the Jezreel Valley, 9 kilometers (5.6 mi) northeast of Jenin and southwest of the Jezreel Valley railway. It was linked by dirt roads to the villages of Zir'in and Al-Mazar. There were several springs north of Nuris, most importantly 'Ain Jalut, one of the largest in Palestine. History Remains from the Bronze Age have been found here, as has pottery from the Byzantine era. Nuris was referred to by the Crusaders as "Nurith". Nearby, the Mamluks defeated the Mongols in the Battle of Ain Jalut (1260). In 1517, the village was incorporated into the Ottoman Empire with the rest of Palestine. During the 16th and 17th centuries, it belonged to the Turabay Emirate (1517-1683), which also encompassed the Jezreel Valley, Haifa, Jenin, the Beit She'an Valley, northern Jabal Nablus, Bilad al-Ruha/Ramot Menashe, and the northern part of the Sharon plain. In the 1596 tax records Nuris appeared as part of the nahiya (subdistrict) of Jenin under the liwa' (district) of Lajjun, with a population of 16 Muslim households, an estimated 88 persons. They paid a fixed tax rate of 25% on a number of products, including wheat, barley, olives, goats and beehives; a total of 7,500 akçe. The village was captured and burned by Napoleon's troops after the Battle of Mount Tabor in 1799. Pierre Jacotin named the village Noures on his map from that campaign. British traveller James Silk Buckingham visited the site in the early 19th century and remarked that there were several other settlements in sight, "all inhabited by Mohammedans." In 1838 Edward Robinson noted Nuris during his travels in the region, in the District of Jenin, also called "Haritheh esh-Shemaliyeh". In 1870/1871 (1288 AH), an Ottoman census listed the village in the nahiya of Shafa al-Qibly. In 1882, the PEF's Survey of Western Palestine described the village as small, situated on rocky ground, much hidden between the hills, about 600 ft (180 m) above a valley. Nuris had an elementary school for boys, founded under the Ottomans in 1888, and a mosque. Some ancient ruins remained unexplored as of 1992. In the 1922 census of Palestine, conducted by the British Mandate authorities, Nuris had a population of 364, all Muslims. Part of the area was acquired by the Jewish community as part of the Sursock Purchase. In 1921, the village reportedly had 38 tenant families, and 224 people out of a total population of 364 (1922 census) cultivated 5,500 dunums out of a village area of 27,018. That year, the Sursock family sold some of the village lands to the Palestine Land Development Company. A group of 35 young Jews began to farm the land, which became the core of Kibbutz Ein Harod. Some of the villagers of Nuris received monetary compensation and left the village. Those who remained leased a block of land for a period of six years with the option to purchase when the lease expired. They paid rent at 6% of the published sale price of the land, but later, at the request of the farmers in Nuris, this was changed to one-fifth of the land's total agricultural yield. After the original six-year lease was up, reports in 1928 showed that no villagers had bought the land leased to them.
The leases were extended for three years while ownership was transferred to the Jewish National Fund. In 1921 the average farmer cultivated 24 dunums; by 1929 this had fallen drastically to 4.4, even as the population grew significantly. In the 1931 census, Nuris had a population of 429 people living in a recorded 106 houses. In the 1945 statistics, Nuris had 570 Muslim inhabitants and 163 houses, although its area, at 6,256 dunums, was much smaller than it had been before 1920. The inhabitants were mainly employed in cereal farming, although some land was allocated to irrigation and olive growing. On 19 April 1948, Palmach headquarters ordered the destruction of "enemy bases at Al-Mazar, Nuris and Zir'in". Israeli historian Benny Morris notes that destroying the villages was "part and parcel" of the Haganah operations at this time; however, he also writes that Nuris was not finally depopulated until the end of May. Following the war, the area was incorporated into the State of Israel. A moshav, Nurit, was established to the northwest of the village site in 1950. Palestinian historian Walid Khalidi described the village in 1992: "The site, overgrown with pine and oak trees, is strewn with piles of stones. Part of the surrounding land is fenced in and is used as a grazing area, while another part is cultivated. Cactuses and olive and fig trees grow near the site." References Bibliography External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Viral_phenomenon] | [TOKENS: 4385]
Contents Viral phenomenon Viral phenomena or viral sensations are objects or patterns that are able to replicate themselves or convert other objects into copies of themselves when these objects are exposed to them. Analogous to the way in which viruses propagate, the term viral pertains to a video, image, or written content spreading to numerous online users within a short time period. This concept has become a common way to describe how thoughts, information, and trends move into and through a human population. The popularity of viral media has been fueled by the rapid rise of social network sites, wherein audiences, metaphorically described as experiencing "infection" and "contamination", act as passive carriers rather than playing an active role in 'spreading' content, making such content "go viral". The term viral media differs from spreadable media, as the latter refers to the potential of content to become viral. Memes are one well-known example of informational viral patterns. History The word meme was coined by Richard Dawkins in his 1976 book The Selfish Gene as an attempt to explain memetics: how ideas replicate, mutate, and evolve. When asked to assess this comparison, Lauren Ancel Meyers, a biology professor at the University of Texas, stated that "memes spread through online social networks similarly to the way diseases do through offline populations." This dispersion of cultural movements is shown through the spread of memes online, especially when seemingly innocuous or trivial trends spread and die in rapid fashion. For example, multiple viral videos featuring Vince McMahon promoted misogynistic messages and hate against Jewish people, women, and the LGBTQ+ community. One such video depicted McMahon throwing money into the ring at a WWE event; according to research led by the Institute for Strategic Dialogue, it was taken out of context to support misogynistic views and gain attention for the Men Going Their Own Way movement. This example demonstrates how public figures are turned into viral phenomena. Popular audio and video content on apps like TikTok is also used for memes of public figures. If something goes viral, many people discuss it. Accordingly, Tony D. Sampson defines viral phenomena as spreadable accumulations of events, objects, and affects, built up by the popular discourses surrounding network culture. There is also a relationship to the biological notion of disease spread and epidemiology. In this context, "going viral" is similar to an epidemic, which spreads when, on average, each infected person infects more than one other person. Thus, if a piece of content is shared with more than one person every time it is seen, the result is viral growth. In Understanding Media (1964), philosopher Marshall McLuhan describes photography in particular, and technology in general, as having a potentially "virulent nature." In Jean Baudrillard's 1981 treatise Simulacra and Simulation, the philosopher describes An American Family, arguably the first "reality" television series, as a marker of a new age in which the medium of television has a "viral, endemic, chronic, alarming presence."
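The epidemic analogy above can be made concrete with a toy calculation: treat the average number of new people each viewer shares the content with as a reproduction number R, and track the expected audience generation by generation. The values of R and the generation count below are invented purely for illustration.

```python
# Toy model of the sharing logic described above: each person who sees the
# content shares it with an average of r new people (the reproduction number).
# r > 1 gives exponential growth; r < 1 makes the content die out.
def audience_by_generation(r, generations, seed=1.0):
    sizes = [seed]
    for _ in range(generations):
        sizes.append(sizes[-1] * r)  # expected size of the next generation
    return sizes

print(audience_by_generation(r=2.0, generations=10))  # 1, 2, 4, ... 1024
print(audience_by_generation(r=0.5, generations=10))  # shrinks toward zero
```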
Another formulation of the 'viral' concept includes the term media virus, or viral media, coined by Douglas Rushkoff, who defines it as a type of Trojan horse: "People are duped into passing a hidden agenda while circulating compelling content." Mosotho South African media theorist Thomas Mofolo uses Rushkoff's idea to define viral as a type of virtual collective consciousness that primarily manifests via digital media networks and evolves into offline actions to produce a new social reality. Mofolo bases this definition on a study of how internet users involved in the Tunisian Arab Spring perceived the value of Facebook to their revolution. Mofolo's understanding of the viral was first developed in a study on Global Citizen's #TogetherAtHome campaign and used to formulate a new theoretical framework called Hivemind Impact. Hivemind Impact is a specific type of virality that is simulated via digital media networks with the goal of harnessing the virtual collective consciousness to take action on a social issue. For Mofolo, the viral eventually evolves into McLuhan's 'global village' when the virtual collective consciousness reaches a point of noogenesis that then becomes the noosphere. Content is more likely to reach this point, however, if it embodies certain characteristics that drive consumers to share. Research conducted by Dr. Jonah Berger at the University of Pennsylvania, summarized in his book Contagious: Why Things Catch On, suggests that content's shareability can be increased by activating six key S.T.E.P.P.S. (Social currency, Triggers, Emotion, Public, Practical value, and Stories). Social currency refers to the fact that people are more likely to share things that make them look good rather than bad. Consequently, the more sharing something makes people look smart, special, or in the know, the more likely they are to pass it on. Triggers: top of mind means tip of tongue. Sharing something requires thinking about it first, so the more people are reminded about a particular thing, the more likely they are to share it. Rebecca Black's viral hit "Friday" gained traction from the built-in trigger of the end of the week. Emotion: when we care, we share. The more something activates emotion, particularly high-arousal emotion, the more likely people are to pass it on. An advertisement that tugs on heartstrings is more likely to be shared than an unemotional one. Public: built to show, built to grow. People tend to imitate others, but they can only imitate what they can see, so the easier it is to see what others are doing, the easier it is to imitate. Visible colors, patterns, or logos, as well as things like "I voted" stickers, facilitate imitation. Practical value: news you can use. People want to help others, so the more useful something is, the more likely people are to share it. Ways to save time and money, or useful advice, are all examples of this. Stories are vessels, or carriers of information; they bring products, services, and ideas along for the ride, so building a Trojan horse story can be a helpful way to encourage something to spread. Before writing, and while most people were illiterate, the dominant means of spreading memes was oral culture: folk tales, folk songs, and oral poetry, which mutated over time as each retelling presented an opportunity for change. The printing press provided an easy way to copy written texts instead of handwritten manuscripts.
In particular, pamphlets could be published in only a day or two, unlike books, which took longer. For example, Martin Luther's Ninety-five Theses took only two months to spread throughout Europe. A study of United States newspapers in the 1800s found that human-interest stories, "news you can use" stories and list-focused articles circulated nationally as local papers mailed copies to each other and selected content for reprinting. Chain letters spread by postal mail throughout the 1900s. Urban legends also began as word-of-mouth memes. Like hoaxes, they are examples of falsehoods that people swallow, and, like them, they often achieve broad public notoriety. Beyond vocal sharing, the late 20th century made huge strides in online networking and the ability to share content. In 1979, the dial-up online service provided by the company CompuServe became a key player in online communications and in how information began spreading beyond print. Those with access to a computer in these earliest stages could not have comprehended the full effect that public access to the internet would create. The Columbus Dispatch of Columbus, Ohio, broke barriers when it became the first newspaper to publish in an online format. The success predicted by CompuServe and the Associated Press led some of the largest newspapers to join the movement to publish news online. Content sharing in the journalism world brought new advances to the viral spread of news, which can now travel in a matter of seconds. The creation of the Internet enabled users to select and share content with each other electronically, providing new, faster, and more decentralized channels for spreading memes. Email forwards are essentially text memes, often including jokes, hoaxes, email scams, written versions of urban legends, political messages, and digital chain letters; if widely forwarded they might be called 'viral emails'. User-friendly consumer photo-editing tools like Photoshop and image-editing websites have facilitated the creation of the genre of the image macro, in which a popular image is overlaid with different humorous text phrases. These memes are typically created with the Impact font. The growth of video-sharing websites like YouTube made viral videos possible. It is sometimes difficult to predict which images and videos will "go viral"; sometimes the creation of a new Internet celebrity is a sudden surprise. One example of a viral video is "Numa Numa", a webcam video of then-19-year-old Gary Brolsma lip-syncing and dancing to the Romanian pop song "Dragostea Din Tei". The sharing of text, images, videos, or links to this content has been greatly facilitated by social media such as Facebook and Twitter. Other mimicry memes carried by Internet media include hashtags, language variations like intentional misspellings, and fads like planking. The popularity and widespread distribution of Internet memes have attracted the attention of advertisers, creating the field of viral marketing. A person, group, or company desiring fast, cheap publicity might create a hashtag, image, or video designed to go viral; many such attempts are unsuccessful, but the few posts that "go viral" generate much publicity. Types Viral videos are among the most common type of viral phenomena. A viral video is any clip of animation or film that is spread rapidly through online sharing.
Viral videos can receive millions of views as they are shared on social media sites, reposted to blogs, sent in emails and so on. When a video goes viral it has become very popular; its exposure on the Internet grows exponentially as more and more people discover it and share it with others. An article or an image can also become viral. The classification is probably assigned more as a result of intensive activity and the rate of growth among users in a relatively short amount of time than of simply how many hits something receives. Most viral videos contain humor and fall into several broad categories. With the creation of YouTube, a video-sharing website, there has been a huge surge in the number of viral videos on the Internet, primarily due to the ease of access to these videos and the ease of sharing them via social media websites. The ability to share videos from one person to another with ease means there are many cases of 'overnight' viral videos. Videos on "YouTube, which makes it easy to embed its content elsewhere, have the freedom and mobility once ascribed to papyrus, enabling their rapid circulation across a range of social networks." YouTube has overtaken television in terms of audience size. For instance, American Idol was the most-watched TV show in the U.S. in 2009, while "a video of Scottish woman Susan Boyle auditioning for Britain's Got Talent with her singing was viewed more than 77 million times on YouTube". The capacity to attract an enormous audience on a user-friendly platform is one of the leading reasons why YouTube generates viral videos. YouTube contributes to the spreadability of viral phenomena because the platform itself is based on sharing and contribution: "Sites such as YouTube, eBay, Facebook, Flickr, Craigslist, and Wikipedia, only exist and have value because people use and contribute to them, and they are clearly better the more people are using and contributing to them. This is the essence of Web 2.0." An example of one of the most prolific viral YouTube videos in the promotional category is Kony 2012. On March 5, 2012, the charity organization Invisible Children Inc. posted a short film about the atrocities committed in Uganda by Joseph Kony and his rebel army. Artists use YouTube as one of their main branding and communication platforms to spread videos and make them go viral. Viral YouTube videos can make stars; Justin Bieber, for example, was discovered after his YouTube video covering Chris Brown's song "With You" went viral. Since its launch in 2005, YouTube has become a hub for aspiring singers and musicians, and talent managers look to it to find budding pop stars. According to Visible Measures, the original "Kony 2012" video documentary, and the hundreds of excerpts and responses uploaded by audiences across the Web, collectively garnered 100 million views in a record six days. The speed with which the video spread emphasizes how YouTube acts as a catalyst in the spread of viral media. YouTube is considered to bring together "multiple existing forms of participatory culture", and that trend is useful for business. The power of the discourse of Web 2.0 has been "its erasure of this larger history of participatory practices, with companies acting as if they were 'bestowing' agency onto audiences, making their creative output meaningful by valuing it within the logics of commodity culture." Viral social media platforms such as TikTok use algorithms to recommend content that they expect their users to enjoy. Videos that go viral on these platforms can include a range of content, helpful or hurtful. Social platforms such as TikTok give people a "stage" on which to spread information at an accelerated rate, which may expose people to subjective information that has had no screening by actual humans. This can involve disinformation, misinformation, and malinformation. In some cases, the algorithms used by social media platforms fail to recognize that the content they are pushing is false or harmful, and may continue to promote it even though it violates the platform's terms and conditions. In other cases, the algorithms actively push such content to increase engagement and capture online advertising revenue. This means that ideologies such as fascism, white supremacy, and misogyny may be easily accessed and sometimes pushed into users' feeds. Other harmful content promoted on platforms includes anti-LGBTQ, anti-Black, antisemitic, anti-Muslim, anti-Asian, and anti-migrant and refugee viewpoints. Users who spread disinformation exploit the algorithms and engagement tools of video platforms like YouTube or TikTok to make their content go viral. They employ hashtags that influence the recommendation algorithm: generic hashtags (#foryou, #fyp, etc.) as well as the hashtags of trending topics. Users who want to spread disinformation also intentionally use variations of banned terms to evade content moderation; these misspelled terms carry the same meaning and influence as the originals. Such users rely on other tools that help their videos go viral, including content elements such as point of view, scale, style, and text, as well as posting at the times when content is most likely to go viral. Also, the more emotion content arouses, the more likely it is to go viral. Users who spread disinformation that violates TikTok's terms and conditions have multiple methods of getting around the rules. One is the use of respawned accounts: a user creates a new account after being banned, with a username similar to the previous one so that they can easily be found again. Another is posting a multi-part video series in which, for example, racial slurs and hate speech are spelled out across installments. This not only gets a user's account more views, which can lead the algorithm to push their content further, but also evades the rules set by the developers, since the algorithm has trouble flagging these multi-part videos. Viral marketing is the phenomenon in which people actively assess media or content and decide to spread it to others, such as by making a word-of-mouth recommendation, passing content along through social media, or posting a video to YouTube.
The term was first popularized in 1995, after Hotmail spread its service offer "Get your free web-based email at Hotmail." Viral marketing has become important in building brand recognition, with companies trying to get their customers and other audiences involved in circulating and sharing their content on social media, in both voluntary and involuntary ways. Many brands undertake guerrilla marketing or buzz marketing to gain public attention. Some marketing campaigns seek to engage an audience so that it unwittingly passes along the campaign message. The use of viral marketing is shifting from the notion that content drives its own attention to deliberate attempts to draw attention. Companies are concerned with making their content 'go viral' and with how their customers' communication can circulate it widely. There has been much discussion about the morality of viral marketing. Iain Short (2010) points out that many applications on Twitter and Facebook generate automated marketing messages and post them to audiences' personal timelines without users personally passing them along. Stacy Wood of North Carolina State University has conducted research finding that recommendations from 'everyday people' can have a real impact on brands. Consumers are bombarded by thousands of messages every day, which calls the authenticity and credibility of marketing messages into question; word of mouth from 'everyday people' therefore becomes an incredibly important source of credible information. If word-of-mouth from "the average person" is crucial to the opportunity of influencing others, many questions remain. "What implicit contracts exist between brands and those recommenders? What moral codes and guidelines should brands respect when encouraging, soliciting, or reacting to comments from those audiences they wish to reach? What types of compensation, if any, do audience members deserve for their promotional labor when they provide a testimonial?" An example of effective viral marketing is the unprecedented boost in sales of the Popeyes chicken sandwich. After the Twitter account for Chick-fil-A attempted to undercut Popeyes by suggesting that Popeyes' chicken sandwich was not the "original chicken sandwich", Popeyes responded with a tweet that went viral. After the response amassed 85,000 retweets and 300,000 likes, Popeyes chains began to sell so many more sandwiches that many locations sold out of their stock entirely. This prompted other chicken chains to tweet about their own chicken sandwiches, but none of these efforts became as widespread as Popeyes'. In macroeconomics, "financial contagion" is a proposed socially viral phenomenon wherein disturbances quickly spread across global financial markets. Evaluation by commentators Some social commentators have a negative view of "viral" content, though others are neutral or celebrate the democratization of content as compared to the gatekeepers of older media.
According to the authors of Spreadable Media: Creating Value and Meaning in a Networked Culture, ideas "are transmitted, often without critical assessment, across a broad array of minds", and this uncoordinated flow of information is associated with "bad ideas" or "ruinous fads and foolish fashions". Science fiction sometimes discusses 'viral' content, "describing (generally bad) ideas that spread like germs." For example, the 1992 novel Snow Crash explores the implications of an ancient memetic meta-virus and its modern-day computer virus equivalent: We are all susceptible to the pull of viral ideas. Like mass hysteria. Or a tune that gets into your head that you keep on humming all day until you spread it to someone else. Jokes. Urban legends. Crackpot religions. No matter how smart we get, there is always this deep irrational part that makes us potential hosts for self-replicating information. — Snow Crash (1992) The spread of viral phenomena is also regarded as part of the cultural politics of network culture, or the virality of the age of networks. Network culture enables the audience to create and spread viral content: "Audiences play an active role in 'spreading' content rather than serving as passive carriers of viral media: their choices, investments, agendas, and actions determine what gets valued." Various authors have pointed to the intensification of connectivity brought about by network technologies as a possible trigger for increased chances of infection from wide-ranging social, cultural, political, and economic contagions. For example, the social scientist Jan van Dijk warns of new vulnerabilities that arise when network society encounters "too much connectivity." The proliferation of global transport networks makes this model of society susceptible to the spread of biological diseases. Digital networks become volatile under the destructive potential of computer viruses and worms. Enhanced by the rapidity and extensiveness of technological networks, the spread of social conformity, political rumor, fads, fashions, gossip, and hype threatens to destabilize established political order. Links between viral phenomena that spread on digital networks and the early sociological theories of Gabriel Tarde have been made in digital media theory by Tony D. Sampson (2012; 2016). In this context, Tarde's social imitation thesis is used to argue against the biologically deterministic theories of cultural contagion forwarded in memetics. In its place, Sampson proposes a Tarde-inspired somnambulist media theory of the viral. See also References Further reading
========================================
[SOURCE: https://en.wikipedia.org/wiki/GPT_Store] | [TOKENS: 601]
Contents GPT Store The GPT Store is a platform developed by OpenAI that enables users and developers to create, publish, and monetize GPTs without requiring advanced programming skills. GPTs are custom applications built using the artificial intelligence chatbot known as ChatGPT. History The GPT Store was announced in October 2023 and launched in January 2024. According to OpenAI, the platform aims to democratize access to advanced artificial intelligence and facilitate the creation of custom chatbot applications without requiring advanced programming skills. The platform has garnered attention from developers and companies for its innovative potential and monetization opportunities. Initially available only to paying customers, access to the GPT Store became free in May 2024. Features The GPT Store allows users to create and customize chatbots, known as GPTs, tailored to various needs such as customer service, personal assistance, video and image creation, and more. GPTs are categorized into various sections, including Programming, Education, and Research. The platform is designed to be user-friendly, with intuitive tools that require no programming experience or advanced technical knowledge. Product availability depends on customized settings, such as the visibility level chosen and the creator's profile. Creators of GPTs have the opportunity to monetize their applications through various business models, including subscriptions and pay-per-use. The GPT Store also features a star-based rating system that lets users evaluate GPTs, similar to other app stores such as Apple's App Store and Google Play. Guidelines Guidelines outline the expectations and responsibilities users accept when creating a GPT product or platform, and compliance is continually monitored to minimize violations. These guidelines emphasize principles of customization, transparency, and intellectual freedom, aiming to support diverse communities while ensuring responsible use of the technology. The policies prohibit activities such as promoting harm, harassment, defamation, violence, or terrorism, among other forms of misuse. They also require creators to respect privacy rights by avoiding the disclosure of non-public information without prior authorization. GPTs within the GPT Store must also comply with OpenAI's branding standards, which restrict the use of ChatGPT logos, names, and other visual identifiers without formal permission. Controversy Despite its initial success, the GPT Store has faced criticism concerning potential copyright violations. Some users and companies have expressed concerns about the use of AI-generated content that may infringe on intellectual property rights. For instance, a teacher has alleged that some students created GPTs that provided access to content from copyrighted books. The GPT Store has also attracted false impressions of credibility and dubious categorization due to the large number of online sites that aim to mimic its features, leading to scams and misleading practices. Scam Detector ranked one such imitator, the "chat.gpt store", an illegitimate platform with the lowest credibility rating, creating false impressions of the actual GPT Store. The legitimate platform can only be accessed through the official OpenAI website. References
========================================
[SOURCE: https://en.wikipedia.org/wiki/Electrical_engineering] | [TOKENS: 6450]
Contents Electrical engineering Electrical engineering is an engineering discipline concerned with the study, design, and application of equipment, devices, and systems that use electricity, electronics, and electromagnetism. It emerged as an identifiable occupation in the latter half of the 19th century after the commercialization of the electric telegraph, the telephone, and electrical power generation, distribution, and use. Electrical engineering is divided into a wide range of different fields, including computer engineering, systems engineering, power engineering, telecommunications, radio-frequency engineering, signal processing, instrumentation, control engineering, photovoltaic cells, electronics, and optics and photonics. Many of these disciplines overlap with other engineering branches, spanning a huge number of specializations including hardware engineering, power electronics, electromagnetics and waves, microwave engineering, nanotechnology, electrochemistry, renewable energies, mechatronics/control, and electrical materials science.[a] Electrical engineers also study machine learning and computer science techniques due to significant overlap. Electrical engineers typically hold a degree in electrical engineering, electronic engineering, or electrical and electronic engineering. Practicing engineers may have professional certification and be members of a professional body or an international standards organization. These include the International Electrotechnical Commission (IEC), the National Society of Professional Engineers (NSPE), the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET, formerly the IEE). Electrical engineers work in a very wide range of industries, and the skills required are likewise variable, ranging from circuit theory to the management skills of a project manager. The tools and equipment that an individual engineer may need are similarly variable, ranging from a simple voltmeter to sophisticated design and manufacturing software. History Electricity has been a subject of scientific interest since at least the early 17th century. William Gilbert was a prominent early electrical scientist and was the first to draw a clear distinction between magnetism and static electricity. He is credited with establishing the term "electricity". He also designed the versorium, a device that detects the presence of statically charged objects. In 1762 the Swedish professor Johan Wilcke invented a device, later named the electrophorus, that produced a static electric charge. By 1800 Alessandro Volta had developed the voltaic pile, a forerunner of the electric battery. In the 19th century, research into the subject started to intensify. Notable developments in this century include the work of Hans Christian Ørsted, who discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle; of William Sturgeon, who in 1825 invented the electromagnet; of Joseph Henry and Edward Davy, who invented the electrical relay in 1835; of Georg Ohm, who in 1827 quantified the relationship between the electric current and potential difference in a conductor; of Michael Faraday, the discoverer of electromagnetic induction in 1831; and of James Clerk Maxwell, who in 1873 published a unified theory of electricity and magnetism in A Treatise on Electricity and Magnetism.
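Ohm's result mentioned above is today written as the formula V = IR. A one-line numeric illustration, with arbitrarily chosen values:

```python
# Ohm's law, V = I * R: the potential difference across a conductor equals the
# current through it multiplied by its resistance. Values are illustrative.
current_amps = 2.0       # I
resistance_ohms = 6.0    # R
voltage_volts = current_amps * resistance_ohms  # V = I * R
print(voltage_volts)     # 12.0 volts
```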
In 1782, Georges-Louis Le Sage developed and presented in Berlin probably the world's first form of electric telegraphy, using 24 different wires, one for each letter of the alphabet. This telegraph connected two rooms. It was an electrostatic telegraph that moved gold leaf through electrical conduction. In 1795, Francisco Salva Campillo proposed an electrostatic telegraph system. Between 1803 and 1804, he worked on electrical telegraphy, and in 1804 he presented his report at the Royal Academy of Natural Sciences and Arts of Barcelona. Salva's electrolyte telegraph system was very innovative, though it was greatly influenced by, and based upon, two discoveries made in Europe in 1800: Alessandro Volta's electric battery for generating an electric current, and William Nicholson and Anthony Carlisle's electrolysis of water. Electrical telegraphy may be considered the first example of electrical engineering. Electrical engineering became a profession in the later 19th century. Practitioners had created a global electric telegraph network, and the first professional electrical engineering institutions were founded in the UK and the US to support the new discipline. Francis Ronalds created an electric telegraph system in 1816 and documented his vision of how the world could be transformed by electricity. Over 50 years later, he joined the new Society of Telegraph Engineers (soon to be renamed the Institution of Electrical Engineers), where he was regarded by other members as the first of their cohort. By the end of the 19th century, the world had been forever changed by the rapid communication made possible by the engineering development of land lines, submarine cables, and, from about 1890, wireless telegraphy. Practical applications and advances in such fields created an increasing need for standardized units of measure. They led to the international standardization of the units volt, ampere, coulomb, ohm, farad, and henry. This was achieved at an international conference in Chicago in 1893. The publication of these standards formed the basis of future advances in standardization in various industries, and in many countries the definitions were immediately recognized in relevant legislation. During these years, the study of electricity was largely considered to be a subfield of physics, since early electrical technology was considered electromechanical in nature. The Technische Universität Darmstadt founded the world's first department of electrical engineering in 1882 and introduced the first degree course in electrical engineering in 1883. The first electrical engineering degree program in the United States was started at the Massachusetts Institute of Technology (MIT) in the physics department under Professor Charles Cross, though it was Cornell University that produced the world's first electrical engineering graduates in 1885. The first course in electrical engineering was taught in 1883 in Cornell's Sibley College of Mechanical Engineering and Mechanic Arts. In about 1885, Cornell President Andrew Dickson White established the first Department of Electrical Engineering in the United States. In the same year, University College London founded the first chair of electrical engineering in Great Britain. Professor Mendell P. Weinbach at the University of Missouri established its electrical engineering department in 1886. Afterwards, universities and institutes of technology all over the world gradually started to offer electrical engineering programs to their students.
During these decades the use of electrical engineering increased dramatically. In 1882, Thomas Edison switched on the world's first large-scale electric power network, which provided 110 volts of direct current (DC) to 59 customers on Manhattan Island in New York City. In 1884, Sir Charles Parsons invented the steam turbine, allowing for more efficient electric power generation. Alternating current, with its ability to transmit power more efficiently over long distances via the use of transformers, developed rapidly in the 1880s and 1890s with transformer designs by Károly Zipernowsky, Ottó Bláthy and Miksa Déri (later called ZBD transformers), Lucien Gaulard, John Dixon Gibbs and William Stanley Jr. Practical AC motor designs, including induction motors, were independently invented by Galileo Ferraris and Nikola Tesla and further developed into a practical three-phase form by Mikhail Dolivo-Dobrovolsky and Charles Eugene Lancelot Brown. Charles Steinmetz and Oliver Heaviside contributed to the theoretical basis of alternating current engineering. The spread in the use of AC set off in the United States what has been called the war of the currents, between a George Westinghouse-backed AC system and a Thomas Edison-backed DC power system, with AC eventually adopted as the overall standard. During the development of radio, many scientists and inventors contributed to radio technology and electronics. The mathematical work of James Clerk Maxwell during the 1850s had shown the relationship between different forms of electromagnetic radiation, including the possibility of invisible airborne waves (later called "radio waves"). In his classic physics experiments of 1888, Heinrich Hertz proved Maxwell's theory by transmitting radio waves with a spark-gap transmitter and detecting them using simple electrical devices. Other physicists experimented with these new waves and in the process developed devices for transmitting and detecting them. In 1895, Guglielmo Marconi began work on a way to adapt the known methods of transmitting and detecting these "Hertzian waves" into a purpose-built commercial wireless telegraphic system. Early on, he sent wireless signals over a distance of one and a half miles. In December 1901, he sent wireless waves that were not affected by the curvature of the Earth. Marconi later transmitted wireless signals across the Atlantic between Poldhu, Cornwall, and St. John's, Newfoundland, a distance of 2,100 miles (3,400 km). Millimetre-wave communication was first investigated by Jagadish Chandra Bose during 1894–1896, when he reached an extremely high frequency of up to 60 GHz in his experiments. He also introduced the use of semiconductor junctions to detect radio waves when he patented the radio crystal detector in 1901. In 1897, Karl Ferdinand Braun introduced the cathode-ray tube as part of an oscilloscope, a crucial enabling technology for electronic television. John Fleming invented the first radio tube, the diode, in 1904. Two years later, Robert von Lieben and Lee De Forest independently developed the amplifier tube, called the triode. In 1920, Albert Hull developed the magnetron, which would eventually lead to the development of the microwave oven in 1946 by Percy Spencer. In 1934, the British military began to make strides toward radar (which also uses the magnetron) under the direction of Dr Wimperis, culminating in the operation of the first radar station at Bawdsey in August 1936.
In 1941, Konrad Zuse presented the Z3, the world's first fully functional and programmable computer using electromechanical parts. In 1943, Tommy Flowers designed and built the Colossus, the world's first fully functional, electronic, digital and programmable computer. In 1946, the ENIAC (Electronic Numerical Integrator and Computer) of John Presper Eckert and John Mauchly followed, beginning the computing era. The arithmetic performance of these machines allowed engineers to develop completely new technologies and achieve new objectives. In 1948, Claude Shannon published "A Mathematical Theory of Communication", which mathematically describes the passage of information with uncertainty (electrical noise). The first working transistor was a point-contact transistor invented by John Bardeen and Walter Houser Brattain while working under William Shockley at the Bell Telephone Laboratories (BTL) in 1947. Shockley then invented the bipolar junction transistor in 1948. While early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, they opened the door for more compact devices. The first integrated circuits were the hybrid integrated circuit invented by Jack Kilby at Texas Instruments in 1958 and the monolithic integrated circuit chip invented by Robert Noyce at Fairchild Semiconductor in 1959. The MOSFET (metal–oxide–semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed Atalla and Dawon Kahng at BTL in 1959. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses, and it revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET made it possible to build high-density integrated circuit chips. The earliest experimental MOS IC chip to be fabricated was built by Fred Heiman and Steven Hofstein at RCA Laboratories in 1962. MOS technology enabled Moore's law, the doubling of transistors on an IC chip every two years, predicted by Gordon Moore in 1965. Silicon-gate MOS technology was developed by Federico Faggin at Fairchild in 1968. Since then, the MOSFET has been the basic building block of modern electronics. The mass production of silicon MOSFETs and MOS integrated circuit chips, along with continuous MOSFET scaling miniaturization at an exponential pace (as predicted by Moore's law), has since led to revolutionary changes in technology, economy, culture, and thinking. The Apollo program, which culminated in landing astronauts on the Moon with Apollo 11 in 1969, was enabled by NASA's adoption of advances in semiconductor electronic technology, including MOSFETs in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC). The development of MOS integrated circuit technology in the 1960s led to the invention of the microprocessor in the early 1970s. The first single-chip microprocessor was the Intel 4004, released in 1971. The Intel 4004 was designed and realized by Federico Faggin at Intel with his silicon-gate MOS technology, along with Intel's Marcian Hoff and Stanley Mazor and Busicom's Masatoshi Shima. The microprocessor led to the development of microcomputers and personal computers, and the microcomputer revolution.
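To put a number on the exponential growth Moore's law describes, the short Python sketch below projects transistor counts doubling every two years from the Intel 4004's commonly cited figure of roughly 2,300 transistors; that starting count is an assumption drawn from outside this text, used purely for illustration.

```python
# Moore's law illustration: transistor counts doubling every two years.
# The ~2,300-transistor starting point for the Intel 4004 (1971) is a
# commonly cited figure assumed here for illustration.

def projected_transistors(year, base_year=1971, base_count=2300, doubling=2.0):
    """Project a transistor count under a fixed doubling period (years)."""
    return base_count * 2 ** ((year - base_year) / doubling)

for year in (1971, 1981, 1991, 2001):
    print(year, f"{projected_transistors(year):,.0f}")
# 1971 2,300 | 1981 73,600 | 1991 2,355,200 | 2001 75,366,400
```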
In recent times, the subject of machine learning (including speech systems, computer vision, and reinforcement learning) has had significant overlap with electrical engineering fields such as signal processing, image processing, and control engineering, and as such is often studied by electrical engineers. Machine learning techniques are also used in electrical engineering systems in subfields such as electronic design automation, stochastic and adaptive control, smart grids, and adaptive signal processing. Subfields One of the properties of electricity is that it is very useful for energy transmission as well as for information transmission. These were also the first areas in which electrical engineering was developed. Today, electrical engineering has many subdisciplines, the most common of which are listed below. Although there are electrical engineers who focus exclusively on one of these subdisciplines, many deal with a combination of them. Sometimes, certain fields, such as electronic engineering and computer engineering, are considered disciplines in their own right. Power and energy engineering deals with the generation, transmission, and distribution of electricity as well as the design of a range of related devices, including transformers, electric generators, electric motors, and power electronics, together with high-voltage engineering. In many regions of the world, governments maintain an electrical network called a power grid that connects a variety of generators with users of their energy. Users purchase electrical energy from the grid, avoiding the costly exercise of having to generate their own. Power engineers may work on the design and maintenance of the power grid as well as the power systems that connect to it. Such systems are called on-grid power systems and may supply the grid with additional power, draw power from the grid, or do both. Power engineers may also work on systems that do not connect to the grid, called off-grid power systems, which in some cases are preferable to on-grid systems. Telecommunications engineering focuses on the transmission of information across a communication channel such as a coaxial cable, optical fiber, or free space. Transmissions across free space require information to be encoded in a carrier signal to shift the information to a carrier frequency suitable for transmission; this is known as modulation. Popular analog modulation techniques include amplitude modulation and frequency modulation. The choice of modulation affects the cost and performance of a system, and these two factors must be balanced carefully by the engineer. Once the transmission characteristics of a system are determined, telecommunication engineers design the transmitters and receivers needed for such systems. These two are sometimes combined to form a two-way communication device known as a transceiver. A key consideration in the design of transmitters is their power consumption, as this is closely related to their signal strength. Typically, if the power of the transmitted signal is insufficient once the signal arrives at the receiver's antenna(s), the information contained in the signal will be corrupted by noise, specifically static.
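To make the idea of modulation concrete, here is a minimal sketch of amplitude modulation in Python with NumPy: a low-frequency message signal varies the amplitude of a higher-frequency carrier. The sample rate, frequencies, and modulation index below are arbitrary illustrative values, not parameters of any real system.

```python
# Amplitude modulation (AM) sketch: the carrier's amplitude tracks the
# message signal. All numbers are arbitrary example values.
import numpy as np

fs = 100_000                       # sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)     # 10 ms of samples

f_message = 1_000                  # message frequency (Hz)
f_carrier = 20_000                 # carrier frequency (Hz)
m = 0.5                            # modulation index; 0 < m <= 1 avoids overmodulation

message = np.cos(2 * np.pi * f_message * t)
carrier = np.cos(2 * np.pi * f_carrier * t)

am_signal = (1 + m * message) * carrier   # standard AM waveform
print(am_signal.shape)                    # (1000,) samples
```

Frequency modulation would instead vary the carrier's instantaneous frequency while holding its amplitude constant.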
Control engineering focuses on the modeling of a diverse range of dynamic systems and the design of controllers that will cause these systems to behave in the desired manner. To implement such controllers, control engineers may use electronic circuits, digital signal processors, microcontrollers, and programmable logic controllers (PLCs). Control engineering has a wide range of applications, from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles. It also plays an important role in industrial automation. Control engineers often use feedback when designing control systems. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's power output accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. Control engineers also work in robotics to design autonomous systems using control algorithms which interpret sensory feedback to control actuators that move robots such as autonomous vehicles, autonomous drones, and others used in a variety of industries.
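As a toy illustration of the feedback loop just described, the sketch below simulates a proportional-integral (PI) cruise controller acting on a crude vehicle model. The plant, gains, and setpoint are invented for illustration; a real controller would be tuned to the measured dynamics of the vehicle.

```python
# Toy cruise control: a PI controller drives the throttle from the
# fed-back speed error. The vehicle model and gains are invented.

setpoint = 30.0        # desired speed (m/s)
speed = 20.0           # initial speed (m/s)
kp, ki = 0.8, 0.1      # assumed proportional and integral gains
integral = 0.0
dt = 0.1               # control period (s)

for _ in range(600):   # simulate 60 seconds
    error = setpoint - speed            # feedback: setpoint minus measurement
    integral += error * dt
    throttle = kp * error + ki * integral
    # Crude plant: acceleration from throttle minus drag that grows with speed.
    speed += (throttle - 0.05 * speed) * dt

print(f"speed after 60 s: {speed:.2f} m/s")   # settles near the 30 m/s setpoint
```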
Electronic engineering involves the design and testing of electronic circuits that use the properties of components such as resistors, capacitors, inductors, diodes, and transistors to achieve a particular functionality. The tuned circuit, which allows the user of a radio to filter out all but a single station, is just one example of such a circuit; another is the pneumatic signal conditioner. Prior to the Second World War, the subject was commonly known as radio engineering and was largely restricted to aspects of communications and radar, commercial radio, and early television. Later, in the post-war years, as consumer devices began to be developed, the field grew to include modern television, audio systems, computers, and microprocessors. In the mid-to-late 1950s, the term radio engineering gradually gave way to the name electronic engineering. Before the invention of the integrated circuit in 1959, electronic circuits were constructed from discrete components that could be manipulated by humans. These discrete circuits consumed much space and power and were limited in speed, although they are still common in some applications. By contrast, integrated circuits packed a large number, often millions, of tiny electrical components, mainly transistors, into a small chip around the size of a coin. This allowed for the powerful computers and other electronic devices we see today. Microelectronics engineering deals with the design and microfabrication of very small electronic circuit components for use in an integrated circuit or, sometimes, on their own as general electronic components. The most common microelectronic components are semiconductor transistors, although all main electronic components (resistors, capacitors, etc.) can be created at a microscopic level. Nanoelectronics is the further scaling of devices down to nanometer levels. Modern devices are already in the nanometer regime, with below-100 nm processing having been standard since around 2002. Microelectronic components are created by chemically fabricating wafers of semiconductors such as silicon (at higher frequencies, compound semiconductors like gallium arsenide and indium phosphide can be used) to obtain the desired transport of electric charge and current. The field of microelectronics involves a significant amount of chemistry and materials science and requires the electronic engineer to have a working knowledge of the effects of quantum mechanics. Signal processing deals with the analysis and manipulation of signals. Signals can be either analog, in which case the signal varies continuously according to the information, or digital, in which case the signal varies according to a series of discrete values representing the information. For analog signals, signal processing may involve the amplification and filtering of audio signals for audio equipment or the modulation and demodulation of signals for telecommunications. For digital signals, signal processing may involve the compression, error detection, and error correction of digitally sampled signals. Signal processing is a mathematically oriented area forming the core of digital signal processing, and it is rapidly expanding with new applications in many fields of electrical engineering, such as communications, control, radar, audio engineering, broadcast engineering, power electronics, and biomedical engineering, as many existing analog systems are replaced with their digital counterparts. Analog signal processing is still important in the design of many control systems. DSP processor ICs are found in many types of modern electronic devices, such as digital television sets, radios, hi-fi audio equipment, mobile phones, multimedia players, camcorders and digital cameras, automobile control systems, noise-cancelling headphones, digital spectrum analyzers, missile guidance systems, radar systems, and telematics systems. In such products, DSP may be responsible for noise reduction, speech recognition or synthesis, encoding or decoding digital media, wirelessly transmitting or receiving data, triangulating positions using GPS, and other kinds of image, video, audio, and speech processing.
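As a small, self-contained illustration of digital signal processing, the sketch below applies a moving-average FIR filter to a noisy sampled sine wave; the sample rate, window length, and noise level are arbitrary example values.

```python
# Moving-average FIR filter: a simple digital filter that smooths a
# noisy sampled signal. All parameters are arbitrary example values.
import numpy as np

rng = np.random.default_rng(0)
fs = 1_000                                # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)         # 5 Hz signal
noisy = clean + 0.3 * rng.standard_normal(t.size)

window = 25                               # filter length in samples
kernel = np.ones(window) / window         # equal FIR coefficients
filtered = np.convolve(noisy, kernel, mode="same")

print(f"noise power before: {np.mean((noisy - clean) ** 2):.3f}")
print(f"noise power after:  {np.mean((filtered - clean) ** 2):.3f}")
```

The moving average attenuates high-frequency noise at the cost of slightly smearing the signal; sharper trade-offs are made with purpose-designed FIR or IIR filters.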
Instrumentation engineering deals with the design of devices to measure physical quantities such as pressure, flow, and temperature. The design of such instruments requires a good understanding of physics that often extends beyond electromagnetic theory. For example, flight instruments measure variables such as wind speed and altitude to enable pilots to control aircraft analytically. Similarly, thermocouples use the Peltier–Seebeck effect to measure the temperature difference between two points. Often instrumentation is not used by itself, but instead as the sensors of larger electrical systems. For example, a thermocouple might be used to help ensure a furnace's temperature remains constant. For this reason, instrumentation engineering is often viewed as the counterpart of control engineering. Computer engineering deals with the design of computers and computer systems. This may involve the design of new hardware. Computer engineers may also work on a system's software. However, the design of complex software systems is often the domain of software engineering, which is usually considered a separate discipline. Desktop computers represent a tiny fraction of the devices a computer engineer might work on, as computer-like architectures are now found in a range of embedded devices including video game consoles and DVD players. Computer engineers are involved in many hardware and software aspects of computing. Robots are one of the applications of computer engineering. Photonics and optics deals with the generation, transmission, amplification, modulation, detection, and analysis of electromagnetic radiation. The application of optics deals with the design of optical instruments such as lenses, microscopes, telescopes, and other equipment that uses the properties of electromagnetic radiation. Other prominent applications of optics include electro-optical sensors and measurement systems, lasers, fiber-optic communication systems, and optical disc systems (e.g. CD and DVD). Photonics builds heavily on optical technology, supplemented with modern developments such as optoelectronics (mostly involving semiconductors), laser systems, optical amplifiers, and novel materials (e.g. metamaterials). Related disciplines Mechatronics is an engineering discipline that deals with the convergence of electrical and mechanical systems. Such combined systems are known as electromechanical systems and have widespread adoption. Examples include automated manufacturing systems; heating, ventilation, and air-conditioning systems; and various subsystems of aircraft and automobiles. Electronic systems design is the subject within electrical engineering that deals with the multi-disciplinary design issues of complex electrical and mechanical systems. The term mechatronics is typically used to refer to macroscopic systems, but futurists have predicted the emergence of very small electromechanical devices. Already, such small devices, known as microelectromechanical systems (MEMS), are used in automobiles to tell airbags when to deploy, in digital projectors to create sharper images, and in inkjet printers to create nozzles for high-definition printing. In the future, it is hoped that such devices will help build tiny implantable medical devices and improve optical communication. In aerospace engineering and robotics, recent examples include electric propulsion and ion propulsion. Education Electrical engineers typically possess an academic degree with a major in electrical engineering, electronics engineering, electronics and computer engineering, electrical engineering technology, or electrical and electronic engineering. The same fundamental principles are taught in all programs, though emphasis may vary according to title. The length of study for such a degree is usually four or five years, and the completed degree may be designated as a Bachelor of Science in Electrical/Electronics Engineering Technology, Bachelor of Engineering, Bachelor of Science, Bachelor of Technology, or Bachelor of Applied Science, depending on the university. The bachelor's degree generally includes units covering physics, mathematics, computer science, project management, and a variety of topics in electrical engineering. Initially such topics cover most, if not all, of the subdisciplines of electrical engineering. At many schools, electronic engineering is included as part of an electrical award, sometimes explicitly, such as a Bachelor of Engineering (Electrical and Electronic), but in others, electrical and electronic engineering are both considered to be sufficiently broad and complex that separate degrees are offered. Some electrical engineers choose to study for a postgraduate degree such as a Master of Engineering/Master of Science (MEng/MSc), a Master of Engineering Management, a Doctor of Philosophy (PhD) in Engineering, an Engineering Doctorate (Eng.D.), or an Engineer's degree. The master's and engineer's degrees may consist of either research, coursework, or a mixture of the two. The Doctor of Philosophy and Engineering Doctorate degrees consist of a significant research component and are often viewed as the entry point to academia.
In the United Kingdom and some other European countries, Master of Engineering is often considered to be an undergraduate degree of slightly longer duration than the Bachelor of Engineering rather than a standalone postgraduate degree. Professional practice In most countries, a bachelor's degree in engineering represents the first step towards professional certification, and the degree program itself is certified by a professional body. After completing a certified degree program, the engineer must satisfy a range of requirements (including work experience requirements) before being certified. Once certified, the engineer is designated the title of Professional Engineer (in the United States, Canada, and South Africa), Chartered Engineer or Incorporated Engineer (in India, Pakistan, the United Kingdom, Ireland, and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand), or European Engineer (in much of the European Union). The advantages of licensure vary depending upon location. For example, in the United States and Canada "only a licensed engineer may seal engineering work for public and private clients". This requirement is enforced by state and provincial legislation such as Quebec's Engineers Act. In other countries, no such legislation exists. Practically all certifying bodies maintain a code of ethics that they expect all members to abide by or risk expulsion. In this way, these organizations play an important role in maintaining ethical standards for the profession. Even in jurisdictions where certification has little or no legal bearing on work, engineers are subject to contract law. In cases where an engineer's work fails, he or she may be subject to the tort of negligence and, in extreme cases, the charge of criminal negligence. An engineer's work must also comply with numerous other rules and regulations, such as building codes and legislation pertaining to environmental law. Professional bodies of note for electrical engineers include the Institute of Electrical and Electronics Engineers (IEEE) and the Institution of Engineering and Technology (IET). The IEEE claims to produce 30% of the world's literature in electrical engineering, has over 360,000 members worldwide, and holds over 3,000 conferences annually. The IET publishes 21 journals, has a worldwide membership of over 150,000, and claims to be the largest professional engineering society in Europe. Obsolescence of technical skills is a serious concern for electrical engineers. Membership and participation in technical societies, regular reviews of periodicals in the field, and a habit of continued learning are therefore essential to maintaining proficiency. An MIET (Member of the Institution of Engineering and Technology) is recognised in Europe as an electrical and computer (technology) engineer. In Australia, Canada, and the United States, electrical engineers make up around 0.25% of the labor force.[b] Tools and work From the Global Positioning System to electric power generation, electrical engineers have contributed to the development of a wide range of technologies. They design, develop, test, and supervise the deployment of electrical systems and electronic devices. For example, they may work on the design of telecommunications systems, the operation of electric power stations, the lighting and wiring of buildings, the design of household appliances, or the electrical control of industrial machinery.
Fundamental to the discipline are the sciences of physics and mathematics, as these help to obtain both a qualitative and a quantitative description of how such systems will work. Today most engineering work involves the use of computers, and it is commonplace to use computer-aided design programs when designing electrical systems. Nevertheless, the ability to sketch ideas is still invaluable for quickly communicating with others. Although most electrical engineers will understand basic circuit theory (that is, the interactions of elements such as resistors, capacitors, diodes, transistors, and inductors in a circuit), the theories employed by engineers generally depend upon the work they do. For example, quantum mechanics and solid state physics might be relevant to an engineer working on VLSI (the design of integrated circuits), but are largely irrelevant to engineers working with macroscopic electrical systems. Even circuit theory may not be relevant to a person designing telecommunications systems that use off-the-shelf components. Perhaps the most important technical skills for electrical engineers are reflected in university programs, which emphasize strong numerical skills, computer literacy, and the ability to understand the technical language and concepts that relate to electrical engineering. A wide range of instrumentation is used by electrical engineers. For simple control circuits and alarms, a basic multimeter measuring voltage, current, and resistance may suffice. Where time-varying signals need to be studied, the oscilloscope is also a ubiquitous instrument. In RF engineering and high-frequency telecommunications, spectrum analyzers and network analyzers are used. In some disciplines, safety can be a particular concern with instrumentation. For instance, medical electronics designers must take into account that much lower voltages than normal can be dangerous when electrodes are directly in contact with internal body fluids. Power transmission engineering also has great safety concerns due to the high voltages used; although voltmeters may in principle be similar to their low-voltage equivalents, safety and calibration issues make them very different. Many disciplines of electrical engineering use tests specific to their discipline. Audio electronics engineers use audio test sets consisting of a signal generator and a meter, principally to measure level but also other parameters such as harmonic distortion and noise. Likewise, information technology engineers have their own test sets, often specific to a particular data format, and the same is true of television broadcasting. For many engineers, technical work accounts for only a fraction of the work they do. A lot of time may also be spent on tasks such as discussing proposals with clients, preparing budgets, and determining project schedules. Many senior engineers manage a team of technicians or other engineers, and for this reason project management skills are important. Most engineering projects involve some form of documentation, and strong written communication skills are therefore very important. The workplaces of engineers are just as varied as the types of work they do. Electrical engineers may be found in the pristine laboratory environment of a fabrication plant, on board a naval ship, in the offices of a consulting firm, or on site at a mine. During their working life, electrical engineers may find themselves supervising a wide range of individuals, including scientists, electricians, computer programmers, and other engineers.
Electrical engineering has an intimate relationship with the physical sciences. For instance, the physicist Lord Kelvin played a major role in the engineering of the first transatlantic telegraph cable. Conversely, the engineer Oliver Heaviside produced major work on the mathematics of transmission on telegraph cables. Electrical engineers are often required on major science projects. For instance, large particle accelerators such as those at CERN need electrical engineers to deal with many aspects of the project, including the power distribution, the instrumentation, and the manufacture and installation of the superconducting electromagnets.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_ref-The_New_York_Times_14-1] | [TOKENS: 9291]
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and CompuServe established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use; it became one of the networks that formed the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communication than was possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members, in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began to exhibit growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services.
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users to access specific content or applications without cost. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers, with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion, or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, and the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate, compared to India's 60% and the United States's 90%.
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population having access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain; a small demonstration appears at the end of this passage. Several neologisms exist that refer to Internet users: netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in the reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s, the Internet had been described as "the main source of scientific information" for the majority of the global North population. Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic.
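As a concrete illustration of the mojibake phenomenon mentioned above, the short Python sketch below encodes a Japanese greeting as UTF-8 and then decodes the bytes with the wrong encoding (Latin-1), producing the familiar garbled characters; re-applying the encodings in the right order recovers the original text. The example string is arbitrary.

```python
# Mojibake in two steps: UTF-8 bytes decoded with the wrong encoding
# (Latin-1) display as garbage, but the underlying bytes stay intact.
text = "こんにちは"                       # a Japanese greeting
garbled = text.encode("utf-8").decode("latin-1")
print(garbled)                            # mojibake: ã\x81\x93ã\x82\x93...

# Reversing the mistaken decode recovers the original string.
restored = garbled.encode("latin-1").decode("utf-8")
print(restored == text)                   # True
```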
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP, so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to form easily, communicate cheaply, and share ideas.
A notable product of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online, such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide.
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce, the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing for carrying out their mission, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners who may remain largely anonymous within the larger social context and operate in relative secrecy, while still being linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards.
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer, and for sharing and exchanging business data and logistics; HTTP is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia, and interactive content. Client-side scripts can include animations, games, office applications, and scientific demonstrations.
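To make HTTP, the Web's main access protocol described above, concrete, the sketch below issues a bare HTTP/1.1 GET request over a TCP socket using only the Python standard library. The host example.com is the IANA-reserved demonstration domain, and the request uses plain port-80 HTTP rather than HTTPS purely to keep the example self-contained; running it requires network access.

```python
# A minimal HTTP/1.1 GET request over a raw TCP socket, illustrating the
# request/response exchange that underlies the World Wide Web.
import socket

host = "example.com"  # IANA-reserved demonstration domain
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    f"Connection: close\r\n"
    f"\r\n"
).encode("ascii")

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(4096):   # read until the server closes the connection
        response += chunk

# The status line arrives first, followed by headers and the HTML body.
print(response.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"
```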
Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy and convenient as a traditional telephone while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other, less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities, ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se.
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide Internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, Internet transit providers, and cloud providers, as well as meet-me rooms for connecting customers together, Internet exchange points, and the landing points and terminal equipment of the fiber-optic submarine communication cables that interconnect the Internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123: the application layer, the transport layer, the internet layer, and the link layer.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist: IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations.
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP) or configured manually.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to approximately 4.3 billion (4.3×10⁹) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s, which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is growing around the world, as Internet address registries have urged all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix, and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2⁹⁶ addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols.
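The addressing and forwarding rules just described can be illustrated concretely. The following is a minimal sketch in Python (an illustration added for clarity, not part of the cited material), using only the standard-library ipaddress and socket modules; the hostname, interface name, and route table are assumed examples, and the DNS lookup requires network access.

    import ipaddress
    import socket

    # DNS: resolve a user-entered name to IP addresses (the answer depends
    # on the responding resolver; requires network access).
    addresses = {info[4][0] for info in socket.getaddrinfo("en.wikipedia.org", 80)}
    print(addresses)

    # CIDR: 198.51.100.0/24 leaves 32 - 24 = 8 bits for host addressing.
    net = ipaddress.ip_network("198.51.100.0/24")
    print(net.netmask)        # 255.255.255.0, the equivalent subnet mask
    print(net.num_addresses)  # 256, i.e. 198.51.100.0 through 198.51.100.255
    print(ipaddress.ip_address("198.51.100.7") in net)  # True

    # The netmask rule: a bitwise AND of any member address with the mask
    # yields the routing prefix.
    addr = int(ipaddress.ip_address("198.51.100.7"))
    mask = int(ipaddress.ip_address("255.255.255.0"))
    print(ipaddress.ip_address(addr & mask))  # 198.51.100.0

    # Forwarding: pick the most specific (longest-prefix) matching route,
    # falling back to 0.0.0.0/0, the default route via the default gateway.
    routes = {
        ipaddress.ip_network("198.51.100.0/24"): "eth0",         # directly attached
        ipaddress.ip_network("203.0.113.0/24"): "198.51.100.1",  # static route
        ipaddress.ip_network("0.0.0.0/0"): "198.51.100.254",     # default gateway
    }

    def next_hop(destination: str) -> str:
        dest = ipaddress.ip_address(destination)
        candidates = [n for n in routes if dest in n]
        best = max(candidates, key=lambda n: n.prefixlen)  # longest prefix wins
        return routes[best]

    print(next_hop("198.51.100.42"))  # eth0
    print(next_hop("8.8.8.8"))        # 198.51.100.254 (default route)

Real routers implement the same longest-prefix-match rule in specialised hardware or data structures; the sketch shows only the selection logic.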
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial-of-service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users.[citation needed] Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by German Siemens AG and Finnish Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block specific offensive content on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Figure: global Internet traffic volume in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure.
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
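The factor-of-20,000 spread quoted above can be checked directly from the two extreme intensity figures; a one-line verification in Python (added for illustration, not part of the cited studies):

    # Ratio of the highest to the lowest energy-intensity estimate cited
    # in the literature survey above (kWh per gigabyte transferred).
    low, high = 0.0064, 136.0
    print(high / low)  # 21250.0, i.e. on the order of a factor of 20,000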
========================================
[SOURCE: https://en.wikipedia.org/wiki/PlayStation_(console)#cite_ref-FOOTNOTEKent2001502_86-0] | [TOKENS: 10728]
The PlayStation name had already been trademarked by Yamaha, but Nobuyuki Idei liked it so much that he agreed to acquire it for an undisclosed sum rather than search for an alternative. Sony was keen to obtain a foothold in the rapidly expanding video game market. Having been the primary manufacturer of the MSX home computer format, Sony had wanted to use their experience in consumer electronics to produce their own video game hardware. Although the initial agreement between Nintendo and Sony was about producing a CD-ROM drive add-on, Sony had also planned to develop a SNES-compatible Sony-branded console. This iteration was intended to be more of a home entertainment system, playing both SNES cartridges and a new CD format named the "Super Disc", which Sony would design. Under the agreement, Sony would retain sole international rights to every Super Disc game, giving them a large degree of control despite Nintendo's leading position in the video game market. Furthermore, Sony would also be the sole beneficiary of licensing related to music and film software, which it had been aggressively pursuing as a secondary application. The Play Station was to be announced at the 1991 Consumer Electronics Show (CES) in Las Vegas. However, Nintendo president Hiroshi Yamauchi was wary of Sony's increasing leverage at this point and deemed the original 1988 contract unacceptable upon realising it essentially handed Sony control over all games written on the SNES CD-ROM format. Although Nintendo was dominant in the video game market, Sony possessed a superior research and development department. Wanting to protect Nintendo's existing licensing structure, Yamauchi cancelled all plans for the joint Nintendo–Sony SNES CD attachment without telling Sony. He sent Nintendo of America president Minoru Arakawa (his son-in-law) and chairman Howard Lincoln to Amsterdam to form a more favourable contract with Dutch conglomerate Philips, Sony's rival. This contract would give Nintendo total control over their licences on all Philips-produced machines. Kutaragi and Nobuyuki Idei, Sony's director of public relations at the time, learned of Nintendo's actions two days before the CES was due to begin. Kutaragi telephoned numerous contacts, including Philips, to no avail. On the first day of the CES, Sony announced their partnership with Nintendo and their new console, the Play Station. At 9 am on the next day, in what has been called "the greatest ever betrayal" in the industry, Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips and would abandon their work with Sony. Incensed by Nintendo's renouncement, Ohga and Kutaragi decided that Sony would develop their own console. Nintendo's contract-breaking was met with consternation in the Japanese business community, as they had broken an "unwritten law" of native companies not turning against each other in favour of foreign ones. Sony's American branch considered allying with Sega to produce a CD-ROM-based machine called the Sega Multimedia Entertainment System, but the Sega board of directors in Tokyo vetoed the idea when Sega of America CEO Tom Kalinske presented them the proposal. Kalinske recalled them saying: "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" Sony halted their research, but decided to use what they had developed with Nintendo and Sega to create a console of their own based on the SNES.
Despite the tumultuous events at the 1991 CES, negotiations between Nintendo and Sony were still ongoing. A deal was proposed: the Play Station would still have a port for SNES games, on the condition that it would still use Kutaragi's audio chip and that Nintendo would own the rights and receive the bulk of the profits. Roughly two hundred prototype machines were created, and some software entered development. Many within Sony were still opposed to their involvement in the video game industry, with some resenting Kutaragi for jeopardising the company. Kutaragi remained adamant that Sony not retreat from the growing industry and that a deal with Nintendo would never work. Knowing that they had to take decisive action, Sony severed all ties with Nintendo on 4 May 1992. To determine the fate of the PlayStation project, Ohga chaired a meeting in June 1992, consisting of Kutaragi and several senior Sony board members. Kutaragi unveiled a proprietary CD-ROM-based system he had been secretly working on, which played games with immersive 3D graphics. Kutaragi was confident that his LSI chip could accommodate one million logic gates, which exceeded the capabilities of Sony's semiconductor division at the time. Although the project gained Ohga's enthusiasm, a majority of those present at the meeting remained opposed, among them older Sony executives who saw Nintendo and Sega as "toy" manufacturers. The opponents felt the game industry was too culturally offbeat and asserted that Sony should remain a central player in the audiovisual industry, where companies were familiar with one another and could conduct "civili[s]ed" business negotiations. After Kutaragi reminded him of the humiliation he suffered from Nintendo, Ohga retained the project and became one of Kutaragi's staunchest supporters. Ohga shifted Kutaragi and nine of his team from Sony's main headquarters to Sony Music Entertainment Japan (SMEJ), a subsidiary of the main Sony group, so as to retain the project and maintain relationships with Philips for the MMCD development project. The involvement of SMEJ proved crucial to the PlayStation's early development, as the process of manufacturing games on CD-ROM format was similar to that used for audio CDs, with which Sony's music division had considerable experience. While at SMEJ, Kutaragi worked with Epic/Sony Records founder Shigeo Maruyama and Akira Sato; both later became vice-presidents of the division that ran the PlayStation business. Sony Computer Entertainment (SCE) was jointly established by Sony and SMEJ to handle the company's ventures into the video game industry. On 27 October 1993, Sony publicly announced that it was entering the game console market with the PlayStation. According to Maruyama, there was uncertainty over whether the console should primarily focus on 2D, sprite-based graphics or 3D polygon graphics. After Sony witnessed the success of Sega's Virtua Fighter (1993) in Japanese arcades, the direction of the PlayStation became "instantly clear" and 3D polygon graphics became the console's primary focus. SCE president Teruhisa Tokunaka expressed gratitude for Sega's timely release of Virtua Fighter, as it proved "just at the right time" that making games with 3D imagery was possible. Maruyama claimed that Sony further wanted to emphasise the new console's ability to utilise Red Book audio from the CD-ROM format in its games, alongside high-quality visuals and gameplay.
Wishing to distance the project from the failed enterprise with Nintendo, Sony initially branded the PlayStation the "PlayStation X" (PSX). Sony formed their European and North American divisions, known as Sony Computer Entertainment Europe (SCEE) and Sony Computer Entertainment America (SCEA), in January and May 1995, respectively. The divisions planned to market the new console under the alternative branding "PSX", following negative feedback regarding "PlayStation" in focus group studies. Early advertising prior to the console's launch in North America referenced PSX, but the term was scrapped before launch. In contrast to Nintendo's consoles, the console was not marketed under Sony's name. According to Phil Harrison, much of Sony's upper management feared that the Sony brand would be tarnished if associated with the console, which they considered a "toy". Since Sony had no experience in game development, it had to rely on the support of third-party game developers. This was in contrast to Sega and Nintendo, which had versatile and well-equipped in-house software divisions for their arcade games and could easily port successful games to their home consoles. Recent consoles like the Atari Jaguar and 3DO had suffered low sales due to a lack of developer support, prompting Sony to redouble their efforts in gaining the endorsement of arcade-savvy developers. A team from Epic/Sony visited more than a hundred companies throughout Japan in May 1993 in hopes of attracting game creators with the PlayStation's technological appeal. Sony found that many disliked Nintendo's practices, such as favouring their own games over others. Through a series of negotiations, Sony acquired initial support from Namco, Konami, and Williams Entertainment, as well as 250 other development teams in Japan alone. Namco in particular was interested in developing for the PlayStation, since it rivalled Sega in the arcade market. Securing these companies brought influential games such as Ridge Racer (1993) and Mortal Kombat 3 (1995); Ridge Racer was one of the most popular arcade games at the time, and by December 1993 it had already been confirmed behind closed doors as the PlayStation's first game, despite Namco being a longstanding Nintendo developer. Namco's research managing director Shigeichi Nakamura met with Kutaragi in 1993 to discuss the preliminary PlayStation specifications, with Namco subsequently basing the Namco System 11 arcade board on PlayStation hardware and developing Tekken to compete with Virtua Fighter. The System 11 launched in arcades several months before the PlayStation's release, with the arcade release of Tekken in September 1994. Despite securing the support of various Japanese studios, Sony had no developers of their own by the time the PlayStation was in development. This changed in 1993 when Sony acquired the Liverpudlian company Psygnosis (later renamed SCE Liverpool) for US$48 million, securing their first in-house development team. The acquisition meant that Sony could have more launch games ready for the PlayStation's release in Europe and North America. Ian Hetherington, Psygnosis' co-founder, was disappointed after receiving early builds of the PlayStation and recalled that the console "was not fit for purpose" until his team got involved with it. Hetherington frequently clashed with Sony executives over broader ideas; at one point it was suggested that a television with a built-in PlayStation be produced.
In the months leading up to the PlayStation's launch, Psygnosis had around 500 full-time staff working on games and assisting with software development. The purchase of Psygnosis marked another turning point for the PlayStation, as it played a vital role in creating the console's development kits. While Sony had provided MIPS R4000-based Sony NEWS workstations for PlayStation development, Psygnosis employees disliked the thought of developing on these expensive workstations and asked Bristol-based SN Systems to create an alternative PC-based development system. Andy Beveridge and Martin Day, owners of SN Systems, had previously supplied development hardware for other consoles such as the Mega Drive, Atari ST, and the SNES. When Psygnosis arranged an audience for SN Systems with Sony's Japanese executives at the January 1994 CES in Las Vegas, Beveridge and Day presented their prototype of the condensed development kit, which could run on an ordinary personal computer with two extension boards. Impressed, Sony decided to abandon their plans for a workstation-based development system in favour of SN Systems', thus securing a cheaper and more efficient method for designing software. An order of over 600 systems followed, and SN Systems supplied Sony with additional software such as an assembler, a linker, and a debugger. SN Systems produced development kits for future PlayStation systems, including the PlayStation 2, and was bought by Sony in 2005. Sony strove to make game production as streamlined and inclusive as possible, in contrast to the relatively isolated approach of Sega and Nintendo. Phil Harrison, representative director of SCEE, believed that Sony's emphasis on developer assistance reduced most time-consuming aspects of development. As well as providing programming libraries, SCE headquarters in London, California, and Tokyo housed technical support teams that could work closely with third-party developers if needed. Unlike Nintendo, Sony did not favour their own products over non-Sony ones; Peter Molyneux of Bullfrog Productions admired Sony's open-handed approach to software developers and lauded their decision to use PCs as a development platform, remarking that "[it was] like being released from jail in terms of the freedom you have". Another strategy that helped attract software developers was the PlayStation's use of the CD-ROM format instead of traditional cartridges. Nintendo cartridges were expensive to manufacture, and the company controlled all production, prioritising their own games, while inexpensive compact disc manufacturing occurred at dozens of locations around the world. The PlayStation's architecture and interconnectability with PCs were beneficial to many software developers. The use of the programming language C proved useful, as it safeguarded future compatibility of the machine should its designers decide to make further hardware revisions. Despite the inherent flexibility, some developers found themselves restricted by the console's lack of RAM. While working on beta builds of the PlayStation, Molyneux observed that its MIPS processor was not "quite as bullish" as that of a fast PC, and said that it took his team two weeks to port their PC code to the PlayStation development kits and another fortnight to achieve a four-fold speed increase. An engineer from Ocean Software, one of Europe's largest game developers at the time, found allocating RAM a challenging aspect of development given the 3.5-megabyte restriction.
Kutaragi said that while it would have been easy to double the amount of RAM for the PlayStation, the development team refrained from doing so to keep the retail cost down. Kutaragi saw the biggest challenge in developing the system to be balancing the conflicting goals of high performance, low cost, and ease of programming, and felt he and his team were successful in this regard. The console's technical specifications were finalised in 1993, and its design in 1994. The PlayStation name and final design were confirmed during a press conference on 10 May 1994, although the price and release dates had not yet been disclosed. Sony released the PlayStation in Japan on 3 December 1994, a week after the release of the Sega Saturn, at a price of ¥39,800. Sales in Japan began with a "stunning" success, with long queues in shops. Ohga later recalled that he realised how important the PlayStation had become for Sony when friends and relatives begged for consoles for their children. The PlayStation sold 100,000 units on the first day and two million units within six months, although the Saturn outsold the PlayStation in the first few weeks due to the success of Virtua Fighter. By the end of 1994, 300,000 PlayStation units had been sold in Japan, compared to 500,000 Saturn units. A grey market emerged for PlayStations shipped from Japan to North America and Europe, with buyers of such consoles paying up to £700. "When September 1995 arrived and Sony's Playstation roared out of the gate, things immediately felt different than [sic] they did with the Saturn launch earlier that year. Sega dropped the Saturn $100 to match the Playstation's $299 debut price, but sales weren't even close—Playstations flew out the door as fast as we could get them in stock." Before the release in North America, Sega and Sony presented their consoles at the first Electronic Entertainment Expo (E3) in Los Angeles on 11 May 1995. At their keynote presentation, Sega of America CEO Tom Kalinske revealed that the Saturn would be released immediately to select retailers at a price of $399. Next came Sony's turn: Olaf Olafsson, the head of SCEA, summoned development head Steve Race to the conference stage, who simply said "$299" and left to a round of applause. Attention on the Sony conference was further bolstered by the surprise appearance of Michael Jackson and the showcase of highly anticipated games, including Wipeout (1995), Ridge Racer and Tekken (1994). In addition, Sony announced that no games would be bundled with the console. Although the Saturn had been released early in the United States to gain an advantage over the PlayStation, the surprise launch upset many retailers who were not informed in time, harming sales. Some retailers, such as KB Toys, responded by dropping the Saturn entirely. The PlayStation went on sale in North America on 9 September 1995. It sold more units within two days than the Saturn had in five months, with almost all of the initial shipment of 100,000 units sold in advance and shops across the country running out of consoles and accessories. The well-received Ridge Racer, which some critics considered superior to Sega's arcade counterpart Daytona USA (1994), contributed to the PlayStation's early success, as did Battle Arena Toshinden (1995). There were over 100,000 pre-orders placed and 17 games available on the market by the time of the PlayStation's American launch, in comparison to the Saturn's six launch games.
The PlayStation was released in Europe on 29 September 1995 and in Australia on 15 November 1995. By November it had already outsold the Saturn by three to one in the United Kingdom, where Sony had allocated a £20 million marketing budget during the Christmas season compared to Sega's £4 million. Sony found early success in the United Kingdom by securing listings with independent shop owners as well as prominent High Street chains such as Comet and Argos. Within its first year, the PlayStation secured over 20% of the entire American video game market. From September to the end of 1995, sales in the United States amounted to 800,000 units, giving the PlayStation a commanding lead over the other fifth-generation consoles,[b] though the SNES and Mega Drive from the fourth generation still outsold it. Sony reported that games and consoles were selling at an attach rate of four to one. To meet increasing demand, Sony chartered jumbo jets and ramped up production in Europe and North America. By early 1996, the PlayStation had grossed $2 billion (equivalent to $4.106 billion in 2025) from worldwide hardware and software sales. By late 1996, sales in Europe totalled 2.2 million units, including 700,000 in the UK. Approximately 400 PlayStation games were in development, compared to around 200 games being developed for the Saturn and 60 for the Nintendo 64. In India, the PlayStation was launched as a test market during 1999–2000 through Sony showrooms, selling 100 units. Sony finally launched the console (in its PS One model) countrywide on 24 January 2002, at a price of Rs 7,990 and with 26 games available from the start. The PlayStation also did well in markets where it was never officially released. In Brazil, the registration of the trademark by a third company meant the console could not be released there, and the market was initially taken over by the officially distributed Sega Saturn; as the Sega console withdrew, however, PlayStation imports and large-scale piracy increased. In China, the most popular 32-bit console was the Sega Saturn, but after it left the market the PlayStation grew to a base of 300,000 users by January 2000, although Sony China had no plans to release it. The PlayStation was backed by a successful marketing campaign, allowing Sony to gain an early foothold in Europe and North America. Initially, PlayStation demographics were skewed towards adults, but the audience broadened after the first price drop. While the Saturn was positioned towards 18- to 34-year-olds, the PlayStation was initially marketed exclusively towards teenagers. Executives from both Sony and Sega reasoned that because younger players typically looked up to older, more experienced players, advertising targeted at teens and adults would draw them in too. Additionally, Sony found that adults reacted best to advertising aimed at teenagers; Lee Clow surmised that people who started to grow into adulthood regressed and became "17 again" when they played video games. The console was marketed with advertising slogans stylised as "LIVE IN Y△UR W□RLD. PL✕Y IN ○URS" (Live in Your World. Play in Ours.) and "U R NOT E" (with a red E, read as 'you are not ready'). The four geometric shapes were derived from the symbols for the four buttons on the controller. Clow thought that by invoking such provocative statements, gamers would respond to the contrary and say "'Bullshit.
Let me show you how ready I am.'" As the console's appeal broadened, Sony's marketing efforts expanded from their earlier focus on mature players to specifically target younger children as well. Shortly after the PlayStation's release in Europe, Sony tasked marketing manager Geoff Glendenning with assessing the desires of a new target audience. Sceptical of Nintendo and Sega's reliance on television campaigns, Glendenning theorised that young adults transitioning from fourth-generation consoles would feel neglected by marketing directed at children and teenagers. Recognising the influence early 1990s underground clubbing and rave culture had on young people, especially in the United Kingdom, Glendenning felt that the culture had become mainstream enough to help cultivate PlayStation's emerging identity. Sony partnered with prominent nightclubs such as Ministry of Sound, and with festival promoters, to organise dedicated PlayStation areas where select games could be demonstrated and played. Sheffield-based graphic design studio The Designers Republic was contracted by Sony to produce promotional materials aimed at a fashionable, club-going audience. Psygnosis' Wipeout in particular became associated with nightclub culture, as it was widely featured in venues. By 1997, there were 52 nightclubs in the United Kingdom with dedicated PlayStation rooms. Glendenning recalled that he had discreetly used at least £100,000 a year in slush fund money to invest in impromptu marketing. In 1996, Sony expanded their CD production facilities in the United States due to the high demand for PlayStation games, increasing their monthly output from 4 million discs to 6.5 million discs. This was necessary because PlayStation sales were running at twice the rate of Saturn sales, and its lead dramatically increased when both consoles dropped in price to $199 that year. The PlayStation also outsold the Saturn at a similar ratio in Europe during 1996, with 2.2 million consoles sold in the region by the end of the year. Sales figures for PlayStation hardware and software only increased following the launch of the Nintendo 64. Tokunaka speculated that the Nintendo 64 launch had actually helped PlayStation sales by raising public awareness of the gaming market through Nintendo's added marketing efforts. Despite this, the PlayStation took longer to achieve dominance in Japan. Tokunaka said that, even after the PlayStation and Saturn had been on the market for nearly two years, the competition between them was still "very close", and neither console had led in sales for any meaningful length of time. By 1998, Sega, prompted by their declining market share and significant financial losses, launched the Dreamcast as a last-ditch attempt to stay in the industry. Although its launch was successful, the technically superior 128-bit console was unable to subdue Sony's dominance in the industry. Sony still held 60% of the overall video game market share in North America at the end of 1999. Sega's initial confidence in their new console was undermined when Japanese sales were lower than expected, with disgruntled Japanese consumers reportedly returning their Dreamcasts in exchange for PlayStation software. On 2 March 1999, Sony officially revealed details of the PlayStation 2, which Kutaragi announced would feature a graphics processor designed to push more raw polygons than any console in history, effectively rivalling most supercomputers.
The PlayStation continued to sell strongly at the turn of the new millennium: in July 2000, Sony released the PS One, a smaller, redesigned variant which went on to outsell all other consoles that year, including the PlayStation 2. In 2005, the PlayStation became the first console to ship 100 million units, with the PlayStation 2 later achieving this faster than its predecessor. The combined successes of both PlayStation consoles led to Sega retiring the Dreamcast in 2001 and abandoning the console business entirely. The PlayStation was eventually discontinued on 23 March 2006—over eleven years after its release, and less than a year before the debut of the PlayStation 3. Hardware The main microprocessor is an R3000 CPU made by LSI Logic, operating at a clock rate of 33.8688 MHz and delivering around 30 MIPS. This 32-bit CPU relies heavily on the "cop2" 3D and matrix math coprocessor on the same die to provide the necessary speed to render complex 3D graphics. The role of the separate GPU chip is to draw 2D polygons and apply shading and textures to them: the rasterisation stage of the graphics pipeline. Sony's custom 16-bit sound chip supports ADPCM sources with up to 24 sound channels and offers a sampling rate of up to 44.1 kHz and music sequencing. The console features 2 MB of main RAM, with an additional 1 MB of video RAM. The PlayStation has a maximum colour depth of 16.7 million true colours (24-bit), with 32 levels of transparency and unlimited colour look-up tables. The PlayStation can output composite, S-Video or RGB video signals through its AV Multi connector (with older models also having RCA connectors for composite), displaying resolutions from 256×224 to 640×480 pixels; different games can use different resolutions. Earlier models also had proprietary parallel and serial ports that could be used to connect accessories or multiple consoles together; these were later removed due to a lack of usage. The PlayStation uses a proprietary video compression unit, the MDEC, which is integrated into the CPU and allows for the presentation of full-motion video at a higher quality than other consoles of its generation. Unusually for the time, the PlayStation lacks a dedicated 2D graphics processor; 2D elements are instead calculated as polygons by the Geometry Transfer Engine (GTE) so that they can be processed and displayed on screen by the GPU. The GPU can generate up to 4,000 sprites and 180,000 texture-mapped, light-sourced polygons per second, in addition to 360,000 flat-shaded polygons per second. The PlayStation went through a number of variants during its production run. Externally, the most notable change was the gradual reduction in the number of external connectors on the rear of the unit. This started with the original Japanese launch units; the SCPH-1000, released on 3 December 1994, was the only model that had an S-Video port, which was removed from the next model. Subsequent models saw a reduction in the number of parallel ports, with the final version retaining only one serial port. Sony marketed a development kit for amateur developers known as the Net Yaroze (meaning "Let's do it together" in Japanese). It was launched in June 1996 in Japan and, following public interest, was released the next year in other countries. The Net Yaroze allowed hobbyists to create their own games and upload them via an online forum run by Sony. The console was only available through an ordering service, and came with the documentation and software necessary to program PlayStation games and applications, including a C compiler.
On 7 July 2000, Sony released the PS One (stylised as "PS one" or "PSone"), a smaller, redesigned version of the original PlayStation. It was the highest-selling console through the end of the year, outselling all other consoles—including the PlayStation 2. In 2002, Sony released a 5-inch (130 mm) LCD screen add-on for the PS One, referred to as the "Combo pack". It also included a car cigarette-lighter adaptor, adding an extra layer of portability. Production of the LCD "Combo pack" ceased in 2004, when the popularity of the PlayStation began to wane in markets outside Japan. A total of 28.15 million PS One units had been sold by the time it was discontinued in March 2006. Three iterations of the PlayStation's controller were released over the console's lifespan. The first controller, the PlayStation controller, was released alongside the PlayStation in December 1994. It features four individual directional buttons (as opposed to a conventional D-pad), a pair of shoulder buttons on both sides, Start and Select buttons in the centre, and four face buttons consisting of simple geometric shapes: a green triangle, red circle, blue cross, and pink square (△, ○, ✕, □). Rather than depicting traditional letters or numbers on its buttons, the PlayStation controller established a visual trademark that would be incorporated heavily into the PlayStation brand. Teiyu Goto, the designer of the original PlayStation controller, said that the circle and cross represent "yes" and "no", respectively (though this layout is reversed in Western versions); the triangle symbolises a point of view and the square is equated to a sheet of paper to be used to access menus. The European and North American models of the original PlayStation controller are roughly 10% larger than the Japanese variant, to account for the fact that the average person in those regions has larger hands than the average Japanese person. Sony's first analogue gamepad, the PlayStation Analog Joystick (often erroneously referred to as the "Sony Flightstick"), was first released in Japan in April 1996. Featuring two parallel joysticks, it uses potentiometer technology previously used on consoles such as the Vectrex; instead of relying on binary eight-way switches, the controller detects minute angular changes through the entire range of motion. It also features a thumb-operated digital hat switch on the right joystick, corresponding to the traditional D-pad and used in instances when simple digital movements were necessary. The Analog Joystick sold poorly in Japan due to its high cost and cumbersome size. The increasing popularity of 3D games prompted Sony to add analogue sticks to its controller design to give users more freedom over their movements in virtual 3D environments. The first official analogue controller, the Dual Analog Controller, was revealed to the public in a small glass booth at the 1996 PlayStation Expo in Japan, and released in April 1997 to coincide with the Japanese releases of the analogue-capable games Tobal 2 and Bushido Blade. In addition to the two analogue sticks (which also introduced two new buttons, activated by clicking the sticks in), the Dual Analog Controller features an "Analog" button and LED beneath the "Start" and "Select" buttons which toggles analogue functionality on or off. The controller also features rumble support, though Sony decided that haptic feedback would be removed from all overseas iterations before the United States release.
A Sony spokesman stated that the feature was removed for "manufacturing reasons", although rumours circulated that Nintendo had attempted to legally block the release of the controller outside Japan due to similarities with the Nintendo 64 controller's Rumble Pak. However, a Nintendo spokesman denied that Nintendo took legal action. Next Generation's Chris Charla theorised that Sony dropped vibration feedback to keep the price of the controller down. In November 1997, Sony introduced the DualShock controller. Its name derives from its use of two (dual) vibration motors (shock). Unlike its predecessor, its analogue sticks feature textured rubber grips, and its design has longer handles and slightly different shoulder buttons, with rumble feedback included as standard on all versions. The DualShock later replaced its predecessors as the default controller. Sony released a series of peripherals to add extra layers of functionality to the PlayStation. Such peripherals include memory cards, the PlayStation Mouse, the PlayStation Link Cable, the Multiplayer Adapter (a four-player multitap), the Memory Drive (a disk drive for 3.5-inch floppy disks), the GunCon (a light gun), and the Glasstron (a monoscopic head-mounted display). Released exclusively in Japan, the PocketStation is a memory card peripheral which acts as a miniature personal digital assistant. The device features a monochrome liquid crystal display (LCD), infrared communication capability, a real-time clock, built-in flash memory, and sound capability. Sharing similarities with the Dreamcast's VMU peripheral, the PocketStation was typically distributed with certain PlayStation games, enhancing them with added features. The PocketStation proved popular in Japan, selling over five million units. Sony planned to release the peripheral outside Japan but the release was cancelled, despite receiving promotion in Europe and North America. In addition to playing games, most PlayStation models are equipped to play CD-Audio. The Asian model SCPH-5903 can also play Video CDs. Like most CD players, the PlayStation can play songs in a programmed order, shuffle the playback order of the disc and repeat one song or the entire disc. Later PlayStation models use a music visualisation function called SoundScope. This function, as well as a memory card manager, is accessed by starting the console without inserting a game or closing the CD tray, thereby accessing a graphical user interface (GUI) for the PlayStation BIOS. The GUI for the PS One and PlayStation differ depending on the firmware version: the original PlayStation GUI had a dark blue background with rainbow graffiti used as buttons, while the early PAL PlayStation and PS One GUI had a grey blocked background with two icons in the middle. PlayStation emulation is versatile, and emulators can be run on numerous modern devices. Bleem! was a commercial emulator released for IBM-compatible PCs and the Dreamcast in 1999. It was notable for being aggressively marketed during the PlayStation's lifetime, and was the centre of multiple controversial lawsuits filed by Sony. Bleem! was programmed in assembly language, which allowed it to emulate PlayStation games with improved visual fidelity, enhanced resolutions, and filtered textures that were not possible on original hardware. Sony sued Bleem! two days after its release, citing copyright infringement and accusing the company of engaging in unfair competition and patent infringement by allowing the use of PlayStation BIOSes on a Sega console. Bleem!
were subsequently forced to shut down in November 2001. Sony was aware that using CDs for game distribution could leave games vulnerable to piracy, due to the growing popularity of CD-R discs and optical disc drives with burning capability. To preclude illegal copying, a proprietary process for PlayStation disc manufacturing was developed that, in conjunction with an augmented optical drive in Tiger H/E assembly, prevented burned copies of games from booting on an unmodified console. Specifically, all genuine PlayStation discs were printed with a small section of deliberately irregular data, which the PlayStation's optical pick-up was capable of detecting and decoding. Consoles would not boot game discs without a specific wobble frequency contained in the data of the disc's pregap sector (the same system was also used to encode discs' regional lockouts). This signal was within Red Book CD tolerances, so the actual content of PlayStation discs could still be read by a conventional disc drive; however, such a drive could not detect the wobble frequency, and therefore duplicated the discs without it, since the laser pick-up system of any optical disc drive interprets the wobble as an oscillation of the disc surface and compensates for it in the reading process. Early PlayStations, particularly early 1000 models, can experience skipping full-motion video or emit physical "ticking" noises. The problems stem from poorly placed vents leading to overheating in some environments, causing the plastic mouldings inside the console to warp slightly and create knock-on effects with the laser assembly. The solution is to sit the console on a surface which dissipates heat efficiently, in a well-ventilated area, or to raise the unit slightly from its resting surface. Sony representatives also recommended unplugging the PlayStation when it is not in use, as the system draws a small amount of power (and therefore generates heat) even when turned off. The first batch of PlayStations use a KSM-440AAM laser unit, whose case and movable parts are all built out of plastic. Over time, the plastic lens sled rail wears out—usually unevenly—due to friction. The placement of the laser unit close to the power supply accelerates wear, due to the additional heat, which makes the plastic more vulnerable to friction. Eventually, one side of the lens sled becomes so worn that the laser can tilt, no longer pointing directly at the CD; after this, games will no longer load due to data read errors. Sony fixed the problem by making the sled out of die-cast metal and placing the laser unit further away from the power supply on later PlayStation models. Due to an engineering oversight, the PlayStation does not produce a proper signal on several older models of television, causing the display to flicker or bounce around the screen. Sony decided not to change the console design, since only a small percentage of PlayStation owners used such televisions, and instead gave consumers the option of sending their PlayStation unit to a Sony service centre to have an official modchip installed, allowing play on older televisions.
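The disc-authentication scheme described earlier in this section amounts to a simple boot-time check. The sketch below is a loose conceptual illustration in Python, not Sony's actual firmware logic: decode_wobble is a hypothetical stand-in for the augmented optical pick-up, and the SCEI/SCEA/SCEE licence strings are reported in third-party analyses of the scheme rather than confirmed by this article.

    from dataclasses import dataclass
    from typing import Optional

    # Licence strings reportedly modulated into the pregap wobble of genuine
    # discs, one per region (illustrative assumption).
    REGION_STRINGS = {"SCEI": "Japan", "SCEA": "North America", "SCEE": "Europe"}

    @dataclass
    class Disc:
        # None models a burned copy: a CD writer cannot reproduce the wobble
        # modulation, so there is nothing for the pick-up to decode.
        wobble_code: Optional[str] = None

        def decode_wobble(self) -> Optional[str]:
            # Hypothetical stand-in for the drive hardware that demodulates
            # the wobble while reading the pregap.
            return self.wobble_code

    def disc_boots(disc: Disc, console_region: str) -> bool:
        code = disc.decode_wobble()
        if code is None:
            return False  # burned copy: no wobble code detected
        # Regional lockout: the decoded region must match the console's.
        return REGION_STRINGS.get(code) == console_region

    print(disc_boots(Disc("SCEE"), "Europe"))  # True: genuine PAL disc
    print(disc_boots(Disc("SCEA"), "Europe"))  # False: wrong region
    print(disc_boots(Disc(), "Europe"))        # False: burned copy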
Final Fantasy VII is credited with allowing role-playing games to gain mass-market appeal outside Japan, and is considered one of the most influential and greatest video games ever made. The PlayStation's bestselling game is Gran Turismo (1997), which sold 10.85 million units. After the PlayStation's discontinuation in 2006, the cumulative software shipment was 962 million units. Following its 1994 launch in Japan, early games included Ridge Racer, Crime Crackers, King's Field, Motor Toon Grand Prix, Toh Shin Den (i.e. Battle Arena Toshinden), and Kileak: The Blood. The first two games available at its later North American launch were Jumping Flash! (1995) and Ridge Racer, with Jumping Flash! heralded as an ancestor of 3D graphics in console gaming. Wipeout, Air Combat, Twisted Metal, Warhawk and Destruction Derby were among the popular first-year games, and the first to be reissued as part of Sony's Greatest Hits or Platinum range. At the time of the PlayStation's first Christmas season, Psygnosis had produced around 70% of its launch catalogue; their breakthrough racing game Wipeout was acclaimed for its techno soundtrack and helped raise awareness of Britain's underground music community. Eidos Interactive's action-adventure game Tomb Raider contributed substantially to the success of the console in 1996, with its main protagonist Lara Croft becoming an early gaming icon and garnering unprecedented media promotion. Licensed tie-in video games of popular films were also prevalent; Argonaut Games' 2001 adaptation of Harry Potter and the Philosopher's Stone went on to sell over eight million copies late in the console's lifespan. Third-party developers committed largely to the console's wide-ranging game catalogue even after the launch of the PlayStation 2; some of the notable exclusives in this era include Harry Potter and the Philosopher's Stone, Fear Effect 2: Retro Helix, Syphon Filter 3, C-12: Final Resistance, Dance Dance Revolution Konamix and Digimon World 3.[c] Sony assisted with game reprints as late as 2008 with Metal Gear Solid: The Essential Collection, the last PlayStation game officially released and licensed by Sony. Initially, in the United States, PlayStation games were packaged in long cardboard boxes, similar to non-Japanese 3DO and Saturn games. Sony later switched to the jewel case format typically used for audio CDs and Japanese video games, as this format took up less retailer shelf space (which was at a premium due to the large number of PlayStation games being released), and focus testing showed that most consumers preferred this format. Reception The PlayStation was mostly well received upon release. Critics in the West generally welcomed the new console; the staff of Next Generation reviewed the PlayStation a few weeks after its North American launch, commenting that, while the CPU is "fairly average", the supplementary custom hardware, such as the GPU and sound processor, is stunningly powerful. They praised the PlayStation's focus on 3D, and complimented the comfort of its controller and the convenience of its memory cards. Giving the system 4½ out of 5 stars, they concluded, "To succeed in this extremely cut-throat market, you need a combination of great hardware, great games, and great marketing. Whether by skill, luck, or just deep pockets, Sony has scored three out of three in the first salvo of this war." Albert Kim from Entertainment Weekly praised the PlayStation as a technological marvel rivalling the offerings of Sega and Nintendo.
Famicom Tsūshin scored the console a 19 out of 40, lower than the Saturn's 24 out of 40, in May 1995. In a 1997 year-end review, a team of five Electronic Gaming Monthly editors gave the PlayStation scores of 9.5, 8.5, 9.0, 9.0, and 9.5—for all five editors, the highest score they gave to any of the five consoles reviewed in the issue. They lauded the breadth and quality of the games library, saying it had vastly improved over previous years due to developers mastering the system's capabilities in addition to Sony revising their stance on 2D and role-playing games. They also complimented the low price point of the games compared to the Nintendo 64's, and noted that it was the only console on the market that could be relied upon to deliver a solid stream of games for the coming year, primarily due to third party developers almost unanimously favouring it over its competitors. Legacy SCE was an upstart in the video game industry in late 1994, as the video game market in the early 1990s was dominated by Nintendo and Sega. Nintendo had been the clear leader in the industry since the introduction of the Nintendo Entertainment System in 1985 and the Nintendo 64 was initially expected to maintain this position. The PlayStation's target audience included the generation which was the first to grow up with mainstream video games, along with 18- to 29-year-olds who were not the primary focus of Nintendo. By the late 1990s, Sony had become a highly regarded console brand due to the PlayStation, with a significant lead over second-place Nintendo, while Sega was relegated to a distant third. The PlayStation became the first "computer entertainment platform" to ship over 100 million units worldwide, with many critics attributing the console's success to third-party developers. It remains the sixth best-selling console of all time as of 2025, with a total of 102.49 million units sold. Around 7,900 individual games were published for the console during its 11-year life span, the second-highest number of games ever produced for a single console. Its success was a significant financial boon for Sony, with profits from its video game division coming to represent roughly 23% of the company's operating income. Sony's next-generation PlayStation 2, which is backward compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The PlayStation's lead in installed base and developer support paved the way for the success of its successor, which overcame the earlier launch of Sega's Dreamcast and then fended off competition from Microsoft's newcomer Xbox and Nintendo's GameCube. The PlayStation 2's immense success and the failure of the Dreamcast were among the main factors which led to Sega abandoning the console market. To date, five PlayStation home consoles have been released, which have continued the same numbering scheme, as well as two portable systems. The PlayStation 3 also maintained backward compatibility with original PlayStation discs. Hundreds of PlayStation games have been digitally re-released on the PlayStation Portable, PlayStation 3, PlayStation Vita, PlayStation 4, and PlayStation 5. The PlayStation has often ranked among the best video game consoles. In 2018, Retro Gamer named it the third best console, crediting its sophisticated 3D capabilities as one of the key factors in its mass success, and lauding it as a "game-changer in every sense possible".
In 2009, IGN ranked the PlayStation the seventh best console in their list, noting its appeal towards older audiences to be a crucial factor in propelling the video game industry, as well as its role in transitioning the game industry to the CD-ROM format. Keith Stuart from The Guardian likewise named it the seventh best console in 2020, declaring that its success was so profound it "ruled the 1990s". In January 2025, Lorentio Brodesco announced the nsOne project, an attempt to reverse engineer the PlayStation's motherboard. Brodesco stated that "detailed documentation on the original motherboard was either incomplete or entirely unavailable". The project was successfully crowdfunded via Kickstarter. In June, Brodesco manufactured the first working motherboard, promising a fully routed version with multilayer routing, as well as documentation and design files, in the near future. The success of the PlayStation contributed to the demise of cartridge-based home consoles. While not the first system to use an optical disc format, it was the first highly successful one, and ended up going head-to-head with the cartridge-based Nintendo 64, which the industry had expected to adopt CDs like the PlayStation. After the demise of the Sega Saturn, Nintendo was left as Sony's main competitor in Western markets. Nintendo chose not to use CDs for the Nintendo 64; they were likely concerned with the proprietary cartridge format's ability to help enforce copy protection, given their substantial reliance on licensing and exclusive games for their revenue. Besides their larger capacity, CD-ROMs could be produced in bulk quantities at a much faster rate than ROM cartridges, a week compared to two to three months. Further, the cost of production per unit was far cheaper, allowing Sony to offer games to consumers at about 40% lower prices than ROM cartridges while still earning the same net revenue. In Japan, Sony published fewer copies of a wide variety of games for the PlayStation as a risk-limiting step, a model that had been used by Sony Music for CD audio discs. The production flexibility of CD-ROMs meant that Sony could produce larger volumes of popular games to get onto the market quickly, something that could not be done with cartridges due to their manufacturing lead time. The lower production costs of CD-ROMs also allowed publishers an additional source of profit: budget-priced reissues of games which had already recouped their development costs. Tokunaka remarked in 1996: Choosing CD-ROM is one of the most important decisions that we made. As I'm sure you understand, PlayStation could just as easily have worked with masked ROM [cartridges]. The 3D engine and everything—the whole PlayStation format—is independent of the media. But for various reasons (including the economies for the consumer, the ease of the manufacturing, inventory control for the trade, and also the software publishers) we deduced that CD-ROM would be the best media for PlayStation. The increasing complexity of developing games pushed cartridges to their storage limits and gradually discouraged some third-party developers. Part of the CD format's appeal to publishers was that discs could be produced at a significantly lower cost and offered more production flexibility to meet demand.
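To make the economics described above concrete, here is a minimal sketch in Python. The dollar figures are illustrative assumptions, not documented Sony or Nintendo numbers; only the "about 40% lower" retail price and the rough parity in net revenue come from the text.

```python
# Hypothetical unit economics of a cartridge game versus a CD game.
# All dollar amounts are assumptions for illustration; only the 40% retail
# discount and the near-equal net revenue are claims from the text above.
cart_price = 70.0                    # assumed cartridge retail price
cart_cost = 25.0                     # assumed per-unit cartridge manufacturing cost
cd_price = cart_price * (1 - 0.40)   # "about 40% lower" -> $42 retail
cd_cost = 2.0                        # CD pressing cost was on the order of a dollar or two

print(f"cartridge net: ${cart_price - cart_cost:.2f}")  # $45.00
print(f"CD net:        ${cd_price - cd_cost:.2f}")      # $40.00, roughly comparable
```

Under these assumed numbers the per-unit margin is roughly the same, which is the point the paragraph makes: lower manufacturing costs let the retail price fall without sacrificing net revenue.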
As a result, some third-party developers switched to the PlayStation, including Square and Enix, whose Final Fantasy VII and Dragon Quest VII respectively had been planned for the Nintendo 64 (both companies later merged to form Square Enix). Other developers released far fewer games for the Nintendo 64; Konami, for example, released only thirteen N64 games but over fifty for the PlayStation. Nintendo 64 game releases were less frequent than the PlayStation's, with many being developed by either Nintendo themselves or second-parties such as Rare. The PlayStation Classic is a dedicated video game console made by Sony Interactive Entertainment that emulates PlayStation games. It was announced in September 2018 at the Tokyo Game Show, and released on 3 December 2018, the 24th anniversary of the release of the original console. As a dedicated console, the PlayStation Classic features 20 pre-installed games, which run on the open-source emulator PCSX. The console is bundled with two replica wired PlayStation controllers (those without analogue sticks), an HDMI cable, and a USB Type-A cable. Internally, the console uses a MediaTek MT8167a system on a chip with four Cortex-A35 central processing cores clocked at 1.5 GHz and a PowerVR GE8300 graphics processing unit. It includes 16 GB of eMMC flash storage and 1 GB of DDR3 SDRAM. The PlayStation Classic is 45% smaller than the original console. The PlayStation Classic received negative reviews from critics and was compared unfavourably to Nintendo's rival Nintendo Entertainment System Classic Edition and Super Nintendo Entertainment System Classic Edition. Criticism was directed at its meagre game library, user interface, emulation quality, use of PAL versions for certain games, use of the original controller, and high retail price, though the console's design received praise. The console sold poorly.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_ref-lodders1998_10-0] | [TOKENS: 11899]
Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet", for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, large polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half the Earth's, or twice the Moon's, and its surface area is about the size of all of Earth's dry land. Fine dust is prevalent across the surface and the atmosphere, picked up and spread, under the low Martian gravity, even by the weak winds of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains, and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling beneath the ground; it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall), as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons, like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), while a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period, which continues to the present, remains the dominant influence on the planet's geological processes. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Being visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, being the first spacecraft to orbit any body other than the Moon, Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago; Phobos would be a remnant of that ring. Epochs: the geological history of Mars can be split into many periods, but the three primary ones are the Noachian, the Hesperian, and the Amazonian, as outlined above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity. Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia, and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
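The mass, volume, and gravity figures above are mutually consistent, as a quick back-of-envelope check shows. This sketch assumes standard mean radii (6,371 km for Earth, 3,389.5 km for Mars) and a Mars-to-Earth mass ratio of 0.107, reference values not stated in the text:

```python
# Surface gravity scales as M / R^2; volume scales as R^3.
m_ratio = 0.107               # Mars mass / Earth mass (assumed reference value)
r_ratio = 3389.5 / 6371.0     # Mars radius / Earth radius, ~0.53 ("about half")

print(f"volume ratio:  {r_ratio**3:.2f}")           # ~0.15 -> "about 15% of Earth's volume"
print(f"gravity ratio: {m_ratio / r_ratio**2:.2f}") # ~0.38 -> "about 38% of Earth's surface gravity"
```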
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to increase again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7, and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day or 22 millirads per day recorded during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation at about 0.342 millisieverts per day, featuring lava tubes southwest of Hadriacus Mons with levels potentially as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars shows no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum. The southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, giving a definite height to it is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter. 
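The comparative heights quoted above, and the zero-elevation datum, check out with simple division. A sketch, assuming reference heights of about 10.2 km for Mauna Kea measured from its oceanic base and 8.85 km for Mount Everest above sea level (values not given in the text):

```python
# Ratios behind the height comparisons in the preceding paragraphs.
print(21 / 10.2)       # ~2.1 -> "a little over twice the height of Mauna Kea"
print(26 / 8.85)       # ~2.9 -> "roughly three times the height of Mount Everest"

# The zero-elevation datum: 610.5 Pa expressed as a fraction of Earth's
# sea-level pressure of 101,325 Pa.
print(610.5 / 101325)  # ~0.006 -> the quoted "0.006 atm"
```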
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon, Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps), has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars possibly a planet with a two-plate tectonic arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and they are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust settles out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
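Several of the atmospheric figures in this and the following paragraphs can be reproduced from first principles. A minimal sketch, assuming a mean molar mass of about 43.3 g/mol for the 96% CO2 mix, g = 3.71 m/s², a heat-capacity ratio of about 1.29 for CO2, and representative temperatures (none of these constants are given in the text):

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

# Mean surface pressure as a fraction of Earth's sea-level value
print(600 / 101325)            # ~0.006 -> the quoted "0.6% of Earth's"

# Isothermal scale height H = R*T / (M*g)
M, g, T = 0.0433, 3.71, 210.0  # kg/mol, m/s^2, K (assumed mean temperature)
print(R * T / (M * g) / 1000)  # ~10.9 km, close to the quoted ~10.8 km

# Speed of sound c = sqrt(gamma*R*T/M) in a CO2-dominated atmosphere
gamma, T_day = 1.29, 240.0     # assumed values for CO2 and a mild daytime temperature
print(math.sqrt(gamma * R * T_day / M))  # ~244 m/s, near the measured 240-250 m/s
```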
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process such as serpentinization, involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. The higher concentration of atmospheric CO2 and the lower surface pressure, compared to Earth, may explain why sound is attenuated more strongly on Mars, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound there is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's. Additionally, the orbit of Mars has a larger eccentricity than Earth's; the planet reaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in the southern hemisphere are more extreme and the seasons in the northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area, to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. The seasons also deposit coverings of dry ice on the polar ice caps. Hydrology Mars contains water in substantial amounts, but most of it is dust-covered water ice at the Martian polar ice caps.
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much larger than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies have formed by weathering and no superimposed impact craters have been observed, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars.
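The 11-metre figure above is easy to verify. A minimal sketch, assuming a published estimate of roughly 1.6 million cubic kilometres for the south polar cap's ice volume (a value not given in the text) and a mean Martian radius of 3,389.5 km:

```python
import math

R_mars = 3389.5e3                        # m, assumed mean radius of Mars
surface_area = 4 * math.pi * R_mars**2   # ~1.44e14 m^2, area of the whole sphere
ice_volume = 1.6e6 * 1e9                 # 1.6 million km^3 expressed in m^3

print(ice_volume / surface_area)         # ~11.1 m -> the quoted global layer of 11 metres
```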
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometers). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
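The "five to seven times" range above follows directly from the quoted isotope ratios once the stated uncertainty is carried through:

```python
# Ratio of Martian to terrestrial D/H, with the quoted +/- 1.7e-4 uncertainty.
d_h_mars, err = 9.3e-4, 1.7e-4
d_h_earth = 1.56e-4

print(d_h_mars / d_h_earth)          # ~6.0, the central value
print((d_h_mars - err) / d_h_earth)  # ~4.9, lower bound -> "five..."
print((d_h_mars + err) / d_h_earth)  # ~7.1, upper bound -> "...to seven times"
```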
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet for transfers from Earth, after Venus. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth around opposition, which recurs with a synodic period of 779.94 days. Opposition should not be confused with conjunction, when Earth and Mars are on opposite sides of the Solar System, forming a straight line crossing the Sun. The average time between the successive oppositions of Mars, its synodic period, is 780 days; but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71 with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86 when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit the planet at distances of 9,376 km (5,826 mi) and 23,460 km (14,580 mi) respectively.
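Several of the orbital figures in this section can be reproduced with a few lines of arithmetic. A sketch, assuming sidereal orbital periods of 365.256 and 686.98 days and Mars's standard gravitational parameter of about 4.2828 × 10^4 km³/s² (reference values not given in the text):

```python
import math

# Synodic period from the two sidereal periods: 1/S = 1/P_earth - 1/P_mars
P_earth, P_mars = 365.256, 686.98   # days (assumed reference values)
S = 1 / (1 / P_earth - 1 / P_mars)
print(S)                            # ~779.9 days, the quoted 779.94-day cycle
print(S / 30.44)                    # ~25.6 months -> the 26-month launch window noted later

# Farthest-to-closest distance ratio, hence the angular-size variation
print(401 / 54)                     # ~7.4 -> "more than seven times farther away"

# Orbital periods of the moons via Kepler's third law: T = 2*pi*sqrt(a^3/mu)
mu = 4.2828e4                       # km^3/s^2, Mars's gravitational parameter (assumed)
for name, a_km in (("Phobos", 9376.0), ("Deimos", 23460.0)):
    T_hours = 2 * math.pi * math.sqrt(a_km**3 / mu) / 3600
    print(name, round(T_hours, 1))  # ~7.7 h and ~30.3 h

# Phobos's rise-to-rise interval seen from the surface (Mars rotates in ~24.66 h),
# consistent with the roughly 11-hour figure in the following paragraph.
print(1 / (1 / 7.66 - 1 / 24.66))   # ~11.1 h
```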
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below the synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Analysis of rocks recording tidal processes on the planet suggests that those tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
During Sumerian times, Nergal was a minor deity of little significance, but, during later times, his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by the ancient Egyptian astronomers and, by 1534 BCE, they were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, the Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years. They invented arithmetic methods for making minor corrections to the predicted positions of the planets. In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "the fiery one"); more commonly, however, the Greeks knew the planet by the name of their god of war, Ares. It was the Romans who named the planet Mars, for their own god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars. Ptolemy's model and his collective work on astronomy was presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known by Chinese astronomers by no later than the fourth century BCE. In the East Asian cultures, Mars is traditionally referred to as the "fire star" (火星) based on the Wuxing system. In 1609, Johannes Kepler published a ten-year study of the orbit of Mars, using the diurnal parallax of Mars, measured by Tycho Brahe, to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle, but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum. In 1610, the Italian astronomer Galileo Galilei made the first telescopic astronomical observations, which included Mars. The diurnal parallax of Mars was later measured with telescopes in an effort to determine the Sun-Earth distance; this was first accomplished by Giovanni Domenico Cassini in 1672. The early parallax measurements were hampered by the quality of the instruments. The only occultation of Mars by Venus observed was that of 13 October 1590, seen by Michael Maestlin at Heidelberg. By the 19th century, the resolution of telescopes reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth.
His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by these observations, the orientalist Percival Lowell founded an observatory which had 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894, and the following less favorable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, like Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (consisting of the diminishing of the polar caps and the dark areas formed during Martian summers) in combination with the canals led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed. During observations in 1909 by Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen. The first spacecraft from Earth to visit Mars was Mars 1 of the Soviet Union, which flew by in 1963, but contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to successfully return data from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet from deep space. Once spacecraft visited the planet during the 1960s and 1970s, many previous concepts of Mars were radically overturned. After the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made. Between the shutdown of Viking 1 in 1982 and 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993), and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted to this day. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field and surface minerals. Starting with these missions, a range of new and improved crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA (Europe), the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering the history and dynamics of the Martian hydrosphere and possible traces of ancient life. As of 2023, Mars is host to ten functioning spacecraft. Eight are in orbit, including 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Several further missions to Mars are planned. As of February 2024, debris from these types of missions has reached over seven tons. Most of it consists of crashed and inactive spacecraft as well as discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Key areas include establishing telecommunications, payload delivery and surface imaging. Habitability and habitation During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, but with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. However, even in the 1960s, articles were published on Martian biology, putting aside explanations other than life for the seasonal changes on Mars. The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but Mars's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability. Recent evidence has suggested that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind due to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in a liquid form (water instead sublimes to a gaseous state). Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites and had positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.
A 2014 analysis of Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters are both claimed to be possible evidence for life, as these chemical compounds would quickly break down in the Martian atmosphere. Alternatively, these compounds may instead be replenished by volcanic or other geological means, such as serpentinite. Impact glass, formed by meteor impacts, can preserve signs of life on Earth; such glass has also been found in impact craters on Mars and could likewise have preserved signs of life, if life existed at those sites. The Cheyava Falls rock discovered on Mars in June 2024 has been designated by NASA as a "potential biosignature" and was core sampled by the Perseverance rover for possible return to Earth and further examination. Although the find is highly intriguing, no definitive determination of a biological or abiotic origin can be made with the data currently available. Several plans for a human mission to Mars have been proposed, but none have come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be unfeasible. In addition, in 2021 China was planning a crewed Mars mission for 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX has proceeded with the development of the Starship launch vehicle with the goal of Mars colonization. In plans shared in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years, enabled by the planned mass manufacturing of Starship and sustained initially by resupply from Earth and by in situ resource utilization on Mars, until the colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars. In culture Mars is named after the Roman god of war (Greek Ares), but was also associated with the demi-god Heracles (Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. This association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "The Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century. 
Schiaparelli's "canali" observations combined with Percival Lowell's books on the subject put forward the standard notion of a planet that was a drying, cooling, dying world with ancient civilizations constructing irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars Fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in the spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears." The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculations that its surface conditions might support not just life but intelligent life. This gave rise to many science fiction stories involving these concepts, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; as well as Edgar Rice Burroughs's Barsoom series, C. S. Lewis's novel Out of the Silent Planet (1938), and a number of Robert A. Heinlein stories before the mid-sixties. Since then, depictions of Martians have also extended to animation. A comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in the Looney Tunes animated cartoons of Warner Brothers, and has continued as part of popular culture to the present. After the Mariner and Viking spacecraft had returned pictures of Mars as a lifeless and canal-less world, these ideas about Mars were abandoned; for many science-fiction authors, the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Internet#cite_ref-The_New_York_Times_14-2] | [TOKENS: 9291]
Contents Internet The Internet (or internet)[a] is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that comprises private, public, academic, business, and government networks of local to global scope, linked by electronic, wireless, and optical networking technologies. The Internet carries a vast range of information services and resources, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, discussion groups, internet telephony, streaming media and file sharing. Most traditional communication media, including telephone, radio, television, paper mail, newspapers, and print publishing, have been transformed by the Internet, giving rise to new media such as email, online music, digital newspapers, news aggregators, and audio and video streaming websites. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking services. Online shopping has also grown to occupy a significant market across industries, enabling firms to extend brick and mortar presences to serve larger markets. Business-to-business and financial services on the Internet affect supply chains across entire industries. The origins of the Internet date back to research that enabled the time-sharing of computer resources, the development of packet switching, and the design of computer networks for data communication. The set of communication protocols to enable internetworking on the Internet arose from research and development commissioned in the 1970s by the Defense Advanced Research Projects Agency (DARPA) of the United States Department of Defense in collaboration with universities and researchers across the United States and in the United Kingdom and France. The Internet has no single centralized governance in either technological implementation or policies for access and usage. Each constituent network sets its own policies. The overarching definitions of the two principal name spaces on the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the non-profit Internet Engineering Task Force (IETF). Terminology The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1945 by the United States War Department in a radio operator's manual, and in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks. The word Internet may be capitalized as a proper noun, although this is becoming less common. This reflects the tendency in English to capitalize new terms and move them to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases. 
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web, or the Web, is only one of a large number of Internet services. It is the global collection of web pages, documents and other web resources linked by hyperlinks and URLs. History In the 1960s, computer scientists began developing systems for time-sharing of computer resources. J. C. R. Licklider proposed the idea of a universal network while working at Bolt Beranek & Newman and, later, leading the Information Processing Techniques Office at the Advanced Research Projects Agency (ARPA) of the United States Department of Defense. Research into packet switching,[c] one of the fundamental Internet technologies, started in the work of Paul Baran at RAND in the early 1960s and, independently, Donald Davies at the United Kingdom's National Physical Laboratory in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design of the ARPANET, an experimental resource sharing network proposed by ARPA. ARPANET development began with two network nodes which were interconnected between the University of California, Los Angeles and the Stanford Research Institute on 29 October 1969. The third site was at the University of California, Santa Barbara, followed by the University of Utah. By the end of 1971, 15 sites were connected to the young ARPANET. Thereafter, the ARPANET gradually developed into a decentralized communications network, connecting remote centers and military bases in the United States. Other user networks and research networks, such as the Merit Network and CYCLADES, were developed in the late 1960s and early 1970s. Early international collaborations for the ARPANET were rare. Connections were made in 1973 to Norway (NORSAR and, later, NDRE) and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks, the first internetwork for resource sharing. ARPA projects, the International Network Working Group and commercial initiatives led to the development of various protocols and standards by which multiple separate networks could become a single network, or a network of networks. In 1974, Vint Cerf at Stanford University and Bob Kahn at DARPA published a proposal for "A Protocol for Packet Network Intercommunication". Cerf and his graduate students used the term internet as a shorthand for internetwork in RFC 675. The Internet Experiment Notes and later RFCs repeated this use. The work of Louis Pouzin and Robert Metcalfe had important influences on the resulting TCP/IP design. National PTTs and commercial providers developed the X.25 standard and deployed it on public data networks. The ARPANET initially served as a backbone for the interconnection of regional academic and military networks in the United States to enable resource sharing. Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which facilitated worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. 
The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP and PTT public data networks had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990. The linking of commercial networks and enterprises by the early 1990s, as well as the advent of the World Wide Web, marked the beginning of the transition to the modern Internet. Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and CompuServe established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than were possible with satellites. Later in 1990, Tim Berners-Lee began writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members, in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic. As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic began doubling every 18 months, mirroring the scaling of MOS transistors exemplified by Moore's law. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance. Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near-instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever-greater amounts of online information and knowledge, commerce, entertainment and social networking services. 
During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. As of 31 March 2011[update], the estimated total number of Internet users was 2.095 billion (30% of world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet. Modern smartphones can access the Internet through cellular carrier networks, and internet usage by mobile and tablet devices exceeded desktop usage worldwide for the first time in October 2016. As of 2018[update], 80% of the world's population were covered by a 4G network. The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individual users regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.9 billion in 2012 to 4.8 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most. One solution, zero-rating, is the practice of Internet service providers allowing users free access to specific content or applications. Social impact The Internet has enabled new forms of social interaction, activities, and social associations, giving rise to the scholarly study of the sociology of the Internet. Between 2000 and 2009, the number of Internet users globally rose from 390 million to 1.9 billion. By 2010, 22% of the world's population had access to computers, with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion, or 44 percent of world population, but two-thirds came from the richest countries, with 78 percent of Europeans using the Internet, followed by 57 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world. China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million users. China was followed by India, with some 700 million users, and the United States third with 275 million users. However, in terms of penetration, in 2022, China had a 70% penetration rate compared to India's 60% and the United States's 90%. 
In 2022, 54% of the world's Internet users were based in Asia, 14% in Europe, 7% in North America, 10% in Latin America and the Caribbean, 11% in Africa, 4% in the Middle East and 1% in Oceania. In 2019, Kuwait, Qatar, the Falkland Islands, Bermuda and Iceland had the highest Internet penetration by the number of users, with 93% or more of the population with access. As of 2022, it was estimated that 5.4 billion people use the Internet, more than two-thirds of the world's population. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet. After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). Modern character encoding standards, such as Unicode, allow for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain. Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general, or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation. The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly.[citation needed] Educational material at all levels from pre-school (e.g. CBeebies) to post-doctoral (e.g. scholarly literature through Google Scholar) is available on websites. The internet has facilitated the development of virtual universities and distance education, enabling both formal and informal education. The Internet allows researchers to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results. By the late 2010s the Internet had been described as "the main source of scientific information for the majority of the global North population". Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all sites in terms of traffic. 
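The mojibake glitch mentioned above arises when bytes written in one character encoding are decoded as another. A minimal Python sketch of the effect, using an illustrative string rather than an example from the text:

    # UTF-8 bytes mistakenly decoded as Latin-1 produce mojibake.
    text = "café"
    utf8_bytes = text.encode("utf-8")       # b'caf\xc3\xa9'
    print(utf8_bytes.decode("latin-1"))     # 'cafÃ©', the classic garbling
    # Decoding with the correct encoding recovers the original text.
    print(utf8_bytes.decode("utf-8"))       # 'café'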
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Streaming media is the real-time delivery of digital media for immediate consumption or enjoyment by end users. Streaming companies (such as Netflix, Disney+, Amazon's Prime Video, Mubi, Hulu, and Apple TV+) now dominate the entertainment industry, eclipsing traditional broadcasters. Audio streamers such as Spotify and Apple Music also have significant market share in the audio entertainment market. Video sharing websites are also a major factor in the entertainment ecosystem. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with more than two billion users. It uses a web player to stream and show video files. YouTube users watch hundreds of millions, and upload hundreds of thousands, of videos daily. Other video sharing websites include Vimeo, Instagram and TikTok.[citation needed] Although many governments have attempted to restrict both Internet pornography and online gambling, this has generally failed to stop their widespread popularity. A number of advertising-funded ostensible video sharing websites known as "tube sites" have been created to host shared pornographic video content. Due to laws requiring the documentation of the origin of pornography, these websites now largely operate in conjunction with pornographic movie studios and their own independent creator networks, acting as de facto video streaming services. Major players in this field include the market leader Aylo, the operator of PornHub and numerous other branded sites, as well as other independent operators such as xHamster and Xvideos. As of 2023[update], Internet traffic to pornographic video sites rivalled that of mainstream video streaming and sharing services. Remote work is facilitated by tools such as groupware, virtual private networks, conference calling, videotelephony, and VoIP, so that work may be performed from any location, such as the worker's home.[citation needed] The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites such as DonorsChoose and GlobalGiving allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software, which allows groups to easily form, cheaply communicate, and share ideas. 
A prominent example of such collaboration is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice).[citation needed] Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work.[citation needed] The internet also allows for cloud computing, virtual private networks, remote desktops, and remote work.[citation needed] The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment, ranging from insults and hate speech to, in extreme cases, rape and death threats, in response to posts they have made on social media. Social media companies have been criticized in the past for not doing enough to aid victims of online abuse. Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from pornography or violent content on the Internet. The most popular social networking services commonly forbid users under the age of 13. However, these policies can be circumvented by registering an account with a false birth date, and a significant number of children aged under 13 join such sites.[citation needed] Social networking services for younger children, which claim to provide better levels of protection for children, also exist. Internet usage has been correlated with users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.[citation needed] Cyberslacking can become a drain on corporate resources; employees spend a significant amount of time surfing the Web while at work. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving scan-reading skills while interfering with the deep thinking that leads to true creativity. Electronic business encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion in 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales. While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. 
Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick-and-mortar businesses, resulting in increases in income inequality. A 2013 Institute for Local Self-Reliance report states that brick-and-mortar retailers employ 47 people for every $10 million in sales, while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people. Advertising on popular web pages can be lucrative, and e-commerce continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation. The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing and carrying out their missions, giving rise to Internet activism. Social media websites, such as Facebook and Twitter, helped people organize the Arab Spring by helping activists organize protests, communicate grievances, and disseminate information. Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies. E-government is the use of technological communications devices, such as the Internet, to provide public services to citizens and other persons in a country or region. E-government offers opportunities for more direct and convenient citizen access to government and for government provision of services directly to citizens. Cybersectarianism is a new organizational form that involves highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in collective study via email, online chat rooms, and web-based message boards. 
In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.[citation needed] Applications and services The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services. The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. HyperText Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer and for sharing and exchanging business data and logistics; it is one of many protocols that can be used for communication on the Internet. World Wide Web browser software, such as Microsoft Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, enables users to navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain computer data, including graphics, sounds, text, video, multimedia and interactive content. Client-side scripts can include animations, games, office applications and scientific demonstrations. Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP).[citation needed] VoIP systems now dominate many markets, being as easy to use and as convenient as a traditional telephone, while offering substantial cost savings, especially over long distances. File sharing is the practice of transferring large amounts of data in the form of computer files across the Internet, for example via file servers. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. Access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed—usually fully encrypted—across the Internet. The origin and authenticity of the file received may be checked by a digital signature. Governance The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. 
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the IETF. The IETF conducts standard-setting working groups, open to any individual, about the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices when implementing Internet technologies. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The organization coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet. The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. Regional Internet registries (RIRs) were established for five regions of the world to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.[citation needed] The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the Internet Engineering Task Force (IETF), Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.[citation needed] Infrastructure The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, and modems. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se. 
Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard, running across heterogeneous hardware, with the packets guided to their destinations by IP routers.[citation needed] Internet service providers (ISPs) establish worldwide connectivity between individual networks at various levels of scope. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fiber-optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy.[citation needed] An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.[citation needed] Common methods of Internet access by users include broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology.[citation needed] Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. Most servers that provide internet services are today hosted in data centers, and content is often accessed through high-performance content delivery networks. Colocation centers often host private peering connections between their customers, internet transit providers, cloud providers, meet-me rooms for connecting customers together, Internet exchange points, and landing points and terminal equipment for fiber optic submarine communication cables, connecting the internet. Internet Protocol Suite The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on the first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation, originally documented in RFC 1122 and RFC 1123.[citation needed] The most prominent component of the Internet model is the Internet Protocol. IP enables internetworking, essentially establishing the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.[citation needed] Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe the exchange of data over the network.[citation needed] For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct internet packets to their destinations. 
They consist of fixed-length numbers, which are found within the packet. IP addresses are generally assigned to equipment either automatically via the Dynamic Host Configuration Protocol (DHCP) or by manual configuration.[citation needed] The Domain Name System (DNS) converts user-entered domain names (e.g. "en.wikipedia.org") into IP addresses.[citation needed] Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed in 1981 to address up to ≈4.3 billion (10^9) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted. Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries began to urge all resource managers to plan rapid adoption and conversion. By design, IPv6 is not directly interoperable with IPv4. Instead, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities exist for internetworking, and some nodes have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol.[citation needed] Network infrastructure, however, has been lagging in this development.[citation needed] A subnet or subnetwork is a logical subdivision of an IP network. Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.[citation needed] The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, 198.51.100.0/24 is the prefix of the Internet Protocol version 4 network starting at the given address, having 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing. Addresses in the range 198.51.100.0 to 198.51.100.255 belong to this network. The IPv6 address specification 2001:db8::/32 is a large address block with 2^96 addresses, having a 32-bit routing prefix.[citation needed] For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address. For example, 255.255.255.0 is the subnet mask for the prefix 198.51.100.0/24.[citation needed] Computers and routers use routing tables in their operating system to forward IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. 
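The CIDR and netmask arithmetic described above can be reproduced with Python's standard-library ipaddress module. A minimal sketch using the same example prefixes as the text:

    import ipaddress

    # The IPv4 example from the text: a /24 prefix leaves 8 host bits,
    # i.e. 256 addresses.
    net = ipaddress.ip_network("198.51.100.0/24")
    print(net.netmask)              # 255.255.255.0
    print(net.num_addresses)        # 256
    print(net[0], "-", net[-1])     # 198.51.100.0 - 198.51.100.255

    # The routing prefix is the bitwise AND of any address in the
    # network with the subnet mask.
    addr = ipaddress.ip_address("198.51.100.37")
    print(ipaddress.ip_address(int(addr) & int(net.netmask)))  # 198.51.100.0

    # The IPv6 example: a /32 prefix leaves 128 - 32 = 96 host bits.
    net6 = ipaddress.ip_network("2001:db8::/32")
    print(net6.num_addresses == 2**96)  # True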
End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet.[citation needed] The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet. Security Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information. Malware is malicious software used and distributed via the Internet. It includes computer viruses, which are copied with the help of humans; computer worms, which copy themselves automatically; software for denial of service attacks; ransomware; botnets; and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibility of hackers waging cyber warfare with similar methods on a large scale. Malware poses serious problems to individuals and businesses on the Internet. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016. Cybercrime, which includes malware attacks as well as other crimes committed by computer, was predicted to cost the world economy US$6 trillion in 2021, and is increasing at a rate of 15% per year. Since 2021, malware has been designed to target computer systems that run critical infrastructure such as the electricity distribution network. Malware can be designed to evade antivirus software detection algorithms. The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States, for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Under the Act, all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.[d] The large amount of data gathered from packet capture requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access to certain types of web sites, or communicating via email or chat with certain parties. Agencies such as the Information Awareness Office, NSA, GCHQ and the FBI spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by the Iranian secret police to identify and suppress dissidents. The required hardware and software were allegedly installed by Germany's Siemens AG and Finland's Nokia. Some governments, such as those of Myanmar, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters. 
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet but do not mandate filter software. Many free or commercially available software programs, called content-control software, are available to users to block offensive websites on individual computers or networks, in order to limit access by children to pornographic material or depictions of violence.[citation needed] Performance As the Internet is a heterogeneous network, its physical characteristics, including, for example, the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization. [Chart: global Internet traffic volume, in petabytes per month, 1990–2015.] The volume of Internet traffic is difficult to measure because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.[citation needed] An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to the small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests. Estimates of the Internet's electricity usage have been the subject of controversy; a 2014 peer-reviewed research paper found claims published in the literature during the preceding decade differing by a factor of 20,000, ranging from 0.0064 kilowatt-hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis. In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smartphones and 100 million servers worldwide, as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer-reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. 
The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emissions per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
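For scale, the factor-of-20,000 spread in the energy-intensity estimates cited above can be checked directly from the reported extremes; the 3 GB transfer size below is an illustrative assumption, not a figure from the text:

    # Extremes of the published estimates, in kWh per GB transferred.
    low, high = 0.0064, 136.0
    print(round(high / low))                 # 21250, roughly the factor of 20,000

    # What each extreme implies for a hypothetical 3 GB transfer.
    video_gb = 3
    print(round(low * video_gb, 4), "kWh")   # 0.0192 kWh
    print(high * video_gb, "kWh")            # 408.0 kWh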
========================================
[SOURCE: https://en.wikipedia.org/wiki/Roskosmos] | [TOKENS: 5064]
Contents Roscosmos The State Corporation for Space Activities "Roscosmos",[note 1] commonly known simply as Roscosmos (Russian: Роскосмос), is a state corporation of the Russian Federation responsible for space flights, cosmonautics programs, and aerospace research. Originating from the Soviet space program founded in the 1950s, Roscosmos emerged following the dissolution of the Soviet Union in 1991. It initially began as the Russian Space Agency,[note 2] which was established on 25 February 1992 and restructured in 1999 and 2004 as the Russian Aviation and Space Agency[note 3] and the Federal Space Agency (Roscosmos),[note 4] respectively. In 2015, the Federal Space Agency (Roscosmos) was merged with the United Rocket and Space Corporation, a government corporation, to re-nationalize the space industry of Russia, leading to Roscosmos in its current form. Roscosmos is headquartered in Moscow, with its main Mission Control Center in the nearby city of Korolyov, and the Yuri Gagarin Cosmonaut Training Center located in Star City in Moscow Oblast. Its launch facilities include Baikonur Cosmodrome in Kazakhstan, the world's first and largest spaceport, and Vostochny Cosmodrome, which is being built in the Russian Far East in Amur Oblast. Its director since February 2025 is Dmitry Bakanov. As the main successor to the Soviet space program, Roscosmos' legacy includes the world's first satellite, the first human spaceflight, and the first space station (Salyut). Its current activities include the International Space Station, wherein it is a major partner. On 22 February 2019, Roscosmos announced the construction of its new headquarters in Moscow, the National Space Centre. Its Astronaut Corps is the first in the world's history. History The Soviet space program did not have central executive agencies. Instead, its organizational architecture was multi-centered; it was the design bureaus and the council of designers that had the most say, not the political leadership. The creation of a central agency after the reorganization of the Soviet Union into the Russian Federation was therefore a new development. The Russian Space Agency was formed on 25 February 1992, by a decree of President Yeltsin. Yuri Koptev, who had previously worked on designing Mars landers at NPO Lavochkin, became the agency's first director. In the early years, the agency suffered from a lack of authority as the powerful design bureaus fought to protect their own spheres of operation and to survive. For example, the decision to keep Mir in operation beyond 1999 was not made by the agency, but by the private shareholder board of the Energia design bureau. Another example is that the decision to develop the new Angara rocket was more a function of Khrunichev's ability to attract resources than a conscious long-term decision by the agency. The 1990s saw serious financial problems due to decreased cash flow, which encouraged the space agency to improvise and seek other ways to keep space programs running. 
This resulted in the agency's leading role in commercial satellite launches and space tourism.[citation needed] Scientific missions, such as interplanetary probes or astronomy missions, played a very small role during these years, and although the agency had connections with the Russian aerospace forces, its budget was not part of Russia's defense budget; nevertheless, the agency managed to operate the Mir space station well past its planned lifespan, contributed to the International Space Station, and continued to fly Soyuz and Progress missions. In 1994, Roscosmos renewed the lease on its Baikonur cosmodrome with the government of Kazakhstan. On 31 October 2000, a Soyuz spacecraft lifted off from the Baikonur Cosmodrome at 10:53 a.m. Kazakhstan time. On board were Expedition One Commander William M. (Bill) Shepherd of NASA and cosmonauts Sergei Krikalev and Yuri Gidzenko of Roscosmos. The trio arrived at the International Space Station on 2 November, marking the start of an uninterrupted human presence on the orbiting laboratory. In March 2004, the agency's director Yuri Koptev was replaced by Anatoly Perminov, who had previously served as the first commander of the Space Forces. The Russian economy boomed throughout 2005 on high prices for exports such as oil and gas, and the outlook for funding in 2006 appeared more favorable. This resulted in the Russian Duma approving a budget of 305 billion rubles (about US$11 billion) for the Space Agency from January 2006 until 2015, with overall space expenditures in Russia totaling about 425 billion rubles for the same period. The budget for 2006 was as high as 25 billion rubles (about US$900 million), a 33% increase over the 2005 budget. Under the approved 10-year budget, the agency's funding was to increase by 5–10% per year, providing it with a constant influx of money. In addition to the budget, Roscosmos planned to have over 130 billion rubles flowing into its budget by other means, such as industry investments and commercial space launches. It was around this time that the US-based Planetary Society entered a partnership with Roscosmos. The federal space budget for the year 2009 was left unchanged despite the global economic crisis, standing at about 82 billion rubles ($2.4 billion).[citation needed] In 2011, the government spent 115 billion rubles ($3.8 billion) on the national space programs.[citation needed] The proposed core project budget for 2013 was around 128.3 billion rubles, and the budget for the whole space program was 169.8 billion rubles ($5.6 billion). By 2015, the budget was expected to increase to 199.2 billion rubles. Priorities of the Russian space program include the new Angara rocket family and development of new communications, navigation and remote Earth sensing spacecraft.[citation needed] The GLONASS global navigation satellite system has for many years been one of the top priorities and has been given its own budget line in the federal space budget. In 2007, GLONASS received 9.9 billion rubles ($360 million), and under the terms of a directive signed by Prime Minister Vladimir Putin in 2008, an additional $2.6 billion was to be allocated for its development.[citation needed] Due to International Space Station involvements, up to 50% of Russia's space budget was spent on the crewed space program as of 2009.
Some observers have pointed out that this has a detrimental effect on other aspects of space exploration, and that other space powers spend much smaller proportions of their overall budgets on maintaining a human presence in orbit. Despite the considerably improved budget,[when?] the attention of legislative and executive authorities, positive media coverage and broad support among the population, the Russian space program continues to face several problems. Wages in the space industry are low; the average age of employees is high (46 years in 2007), and much of the equipment is obsolete.[citation needed] On the positive side, many companies in the sector have been able to profit from contracts and partnerships with foreign companies; several new systems, such as new rocket upper stages, have been developed in recent years; investments have been made in production lines, and companies have started to pay more attention to educating a new generation of engineers and technicians. On 29 April 2011, Perminov was replaced with Vladimir Popovkin as the director of Roscosmos. The 65-year-old Perminov was over the legal age for state officials, and had received some criticism after a failed GLONASS launch in December 2010. Popovkin is a former commander of the Russian Space Forces and First Deputy Defense Minister of Russia. Also in 2011, the Fobos-Grunt Mars mission was lost in low Earth orbit, crashing back to Earth in 2012. As a result of a series of reliability problems, and following the failure of a July 2013 Proton M launch, a major reorganization of the Russian space industry was undertaken. The United Rocket and Space Corporation was formed as a joint-stock corporation by the government in August 2013 to consolidate the Russian space sector. Deputy Prime Minister Dmitry Rogozin said "the failure-prone space sector is so troubled that it needs state supervision to overcome its problems." Three days after the Proton M launch failure, the Russian government announced that "extremely harsh measures" would be taken "and spell the end of the [Russian] space industry as we know it." Information indicated then that the government intended to reorganize in such a way as to "preserve and enhance the Roscosmos space agency." More detailed plans released in October 2013 called for a re-nationalization of the "troubled space industry", with sweeping reforms including a new "unified command structure and reducing redundant capabilities, acts that could lead to tens of thousands of layoffs." According to Rogozin, the Russian space sector employed about 250,000 people, while the United States needed only 70,000 to achieve similar results. He said: "Russian space productivity is eight times lower than America's, with companies duplicating one another's work and operating at about 40 percent efficiency." Under the 2013 plan, Roscosmos was to "act as a federal executive body and contracting authority for programs to be implemented by the industry." Despite these reorganization efforts, two more Proton launch vehicle failures occurred in 2014 and 2015. The government reorganized all of Russia's rocket engine companies into a single entity in June 2015; NPO Energomash, as well as all other engine companies, became part of the United Rocket and Space Corporation. The decree to abolish Roscosmos as a state agency was signed by Vladimir Putin in December 2015, and it was replaced by a state-run corporation effective 1 January 2016.
In 2016, the state agency was dissolved and the Roscosmos brand moved to the state corporation, which had been created in 2013 as the United Rocket and Space Corporation with the specific mission to renationalize the Russian space sector. In May 2018, Putin selected Rogozin to be the head of the Russian state space corporation Roscosmos. In 2018, Russian President Vladimir Putin said "it 'is necessary to drastically improve the quality and reliability of space and launch vehicles' ... to preserve Russia's increasingly threatened leadership in space." In November 2018, Alexei Kudrin, head of Russia's financial audit agency, named Roscosmos as the public enterprise with "the highest losses" due to "irrational spending" and outright theft and corruption, under the leadership of Igor Komarov, who had been terminated in May 2018 in favour of Rogozin. In 2020, Roscosmos under Rogozin reneged on its participation in Lunar Gateway, a NASA-led project to place a crewed spaceport in lunar orbit; it had previously signed an agreement with the Americans in September 2017. In March 2021, Roscosmos signed a memorandum on the cooperative construction of a lunar base, the International Lunar Research Station, with the China National Space Administration. In April 2021, Roscosmos announced that it would depart the ISS program after 2024, and that a new space station (the Russian Orbital Service Station) would be constructed starting in 2025. In June 2021, Rogozin complained that sanctions imposed in the wake of the 2014 Russian annexation of Crimea were hurting Roscosmos. In September 2021, Roscosmos reported its revenue and net income, losing 25 billion roubles and 1 billion roubles respectively in 2020, due to the reduction of profit from foreign contracts and increases in show-up pay, stay-at-home days and personnel health expenses during the COVID-19 pandemic. According to Roscosmos, these losses would also impact the corporation for the next two years. In October, Roscosmos placed rocket engine tests at the Chemical Automatics Design Bureau in Voronezh on hold for one month in order to deliver 33 tons of oxygen to local medical centers as part of aid during the COVID-19 pandemic. In December 2021, the Government of Russia confirmed the agreement with Roscosmos for the development of next-generation space systems, the document having been provided to officials in July 2020. Following the Russian invasion of Ukraine on 24 February 2022, Roscosmos launched nine rockets in 2022 and seven in the first half of 2023. In early March 2022, Roscosmos under Rogozin suspended its participation in the ESA's spaceport in Kourou, French Guiana, in a tit-for-tat move over the sanctions imposed in the wake of the invasion. Rogozin also said he would suspend delivery of the RD-181 engine, which is used for the Northrop Grumman Antares-Cygnus space cargo delivery system. In late March 2022, the European Space Agency (ESA) suspended cooperation with Roscosmos on the ExoMars rover mission because of the invasion, and British satellite venture OneWeb signed contracts with ISRO and SpaceX to launch its satellites after friction had developed "with Moscow" and Roscosmos, its previous orbit service provider. The friction had developed over Rogozin's demand that OneWeb ditch its venture capital investment from the UK government.
On 2 May 2022, Rogozin announced that Roscosmos would terminate its involvement in the ISS with 12 months' notice, as stipulated in the international contract that governs the station. This followed the 3 March 2022 announcement that Roscosmos would cease cooperation on scientific experiments aboard the space station, and the 25 March 2022 announcement by Rogozin that "cooperation with Europe is now impossible after sanctions over the Ukraine war." Rogozin was removed from his job as CEO in July 2022 and replaced with Yury Borisov, who seemed to stabilize the relationship with the ISS partners, especially NASA. One complaint against Rogozin had been his reckless talk of terminating the ISS agreement over the war in Ukraine, which he broadcast as early as April 2022. At one point, NASA had bought 71 return trips on Soyuz for almost $4 billion over six years. The global space-launch services market was valued at $12.4 billion in 2021 and was forecast to reach $38 billion by decade's end. An American academic wrote that in the wake of the Russian invasion, Roscosmos' share of that market was likely to decline in favour of new entrants such as Japan and India, as well as commercial entrants like SpaceX and Blue Origin. In June 2023, Roscosmos held a campaign to recruit volunteers for the Uran Battalion, a militia for the Russian invasion of Ukraine. In October 2023, Borisov announced the need for 150 billion rubles to build the Russian space station over the next three years; at completion in 2032, it will have absorbed 609 billion rubles. In February 2024, at the 2023 AGM, Borisov announced the loss of 180 billion rubles in export revenues, chiefly engine sales and launch services, because of Western hostility to the Russian invasion of Ukraine; Roscosmos had lost 90% of its launch service contracts since the start of the war. In late 2025, Roscosmos launched three Iranian satellites into orbit aboard a Soyuz-2.1b rocket from Vostochny Cosmodrome. Analysts described the mission as part of Russia's ongoing cooperation with non-Western partners. Roscosmos and Russia's space industry are facing significant challenges. The country is on track to conduct its fewest orbital launches since 1961. As of 15 August 2024, only nine launches had occurred that year, a sharp decline partly attributed to the loss of Western customers following Russia's invasion of Ukraine. Roscosmos has reported financial losses of 180 billion rubles ($2.1 billion) due to canceled contracts. The agency's first deputy director indicated it might not achieve profitability until 2025. From 2024 onward, Roscosmos headquarters will be located in the new National Space Center in the Moscow district of Fili. Current programs Roscosmos uses a family of several launch rockets, the most famous of them being the R-7, commonly known as the Soyuz rocket, which is capable of launching about 7.5 tons into low Earth orbit (LEO). The Proton rocket (or UR-500K) has a lift capacity of over 20 tons to LEO. Smaller rockets include Rokot and others. Current rocket development encompasses both a new rocket system, Angara, and enhancements of the Soyuz rocket, the Soyuz-2 and Soyuz-2-3. Two modifications of the Soyuz, the Soyuz-2.1a and Soyuz-2.1b, have already been successfully tested, enhancing the launch capacity to 8.5 tons to LEO. Future projects include the Soyuz successor launch rocket.
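The Soyuz seat prices scattered through this article can be cross-checked with trivial arithmetic. The sketch below is illustrative only: it recombines the figures quoted in the text, namely the roughly $4 billion NASA reportedly paid for 71 return trips (above) and the $21 million-per-seat, each-way contract rate described in the ISS program discussion below.

```python
# Cross-checking the Soyuz seat prices quoted in this article.
# All figures are taken from the text; this is arithmetic, not a price list.
one_way_usd = 21e6                 # NASA contract rate per seat, each way
print(f"round trip at contract rates: ${2 * one_way_usd / 1e6:.0f}M")  # $42M

total_usd, return_trips = 4e9, 71  # "almost $4 billion" for 71 return trips
avg = total_usd / return_trips
print(f"implied average per return trip: ${avg / 1e6:.1f}M")           # ~$56.3M
```

The implied average of roughly $56 million per return trip exceeds the $42 million contract rate, consistent with the widely reported escalation of seat prices over the life of the arrangement.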
Roscosmos operates a number of programs for Earth science, communication, and scientific research, such as the Bion-M space medicine satellite series, the Elektro-L meteorological satellite series and the Meteor-M meteorological satellite series. Roscosmos also operates one science satellite (Spektr-RG) and no interplanetary probes. As of 2024 there are plans for scientific robotic missions to one of the Martian moons, as well as an increase in lunar-orbit research satellites to one (Luna-Glob). The agency has also expanded its collaborative efforts with foreign partners, including launch services for Iranian satellites in 2025. Future plans include the following. Resurs-P is a series of Russian commercial Earth observation satellites capable of acquiring high-resolution imagery (resolution up to 1.0 m); the spacecraft are operated by Roscosmos as a replacement for the Resurs-DK No.1 satellite. Gonets is a series of civilian low Earth orbit communication satellites; as of 2016, the system consisted of 13 satellites (12 Gonets-M and 1 Gonets-D1). The highly elliptical orbit "Arktika" space system is to address hydrological and meteorological problems in the Arctic region and the northern areas of the Earth with the help of two "Arktika-M" spacecraft; in the future, the system could be extended with "Arktika-MS" communications satellites and "Arktika-R" radar satellites. Also planned are the launch of two "Obzor-R" (Review-R) Earth remote sensing satellites with AESA radar, and four "Obzor-O" (Review-O) spacecraft to image the Earth's surface in visible and infrared light in a broad swath of 80 km with a resolution of 10 meters; the first two satellites of these projects were planned for launch in 2015.[citation needed] On 19 July 2014, Roscosmos launched the Foton-M4 satellite containing, among other animals and plants, a group of five geckos. The five geckos, four females and one male, were used as part of the Gecko-F4 research program aimed at measuring the effects of weightlessness on the lizards' ability to procreate and develop in the harsh environment. However, soon after the spacecraft exited the atmosphere, mission control lost contact with the vessel, and communication was only reestablished later in the mission. When the satellite returned to Earth after its planned two-month mission had been cut short to 44 days, the space agency researchers reported that all the geckos had perished during the flight. The exact cause of the geckos' deaths was declared unknown by the scientific team in charge of the project. Reports from the Institute of Medical and Biological Problems in Russia have indicated that the lizards had been dead for at least a week prior to their return to Earth. A number of those connected to the mission have theorized that a failure in the vessel's heating system may have caused the cold-blooded reptiles to freeze to death. Included in the mission were a number of fruit flies, plants, and mushrooms, which all survived the mission. Roscosmos is one of the partners in the International Space Station program. It contributed the core space modules Zarya and Zvezda, which were both launched by Proton rockets and later joined by NASA's Unity module. The Rassvet module was launched aboard Space Shuttle Atlantis and is primarily used for cargo storage and as a docking port for visiting spacecraft. The Nauka module is the final planned component of the Russian segment of the ISS; its launch was postponed several times from the initially planned date in 2007, but it was attached to the ISS in July 2021.
Roscosmos is responsible for expedition crew launches on Soyuz-TMA spacecraft and resupplies the space station with Progress space transporters. After the initial ISS contract with NASA expired, Roscosmos and NASA, with the approval of the US government, entered into a space contract running until 2011, according to which Roscosmos would sell NASA seats on Soyuz spacecraft for approximately $21 million per person each way, thus $42 million per person for a round trip to and from the ISS, and provide Progress transport flights at $50 million per Progress, as outlined in the Exploration Systems Architecture Study. Roscosmos announced that under this arrangement, crewed Soyuz flights would be doubled to four per year and Progress flights doubled to eight per year beginning in 2008.[needs update][citation needed] Roscosmos has provided space tourism for fare-paying passengers to the ISS through the Space Adventures company. As of 2009, six space tourists had contracted with Roscosmos and flown into space, each for an estimated fee of at least US$20 million.[needs update] Continued international collaboration in ISS missions was thrown into doubt by the 2022 Russian invasion of Ukraine and related sanctions on Russia, although resupply missions continued in 2022 and 2023. In 2018, Russia agreed to help build the Suffa observatory in Uzbekistan; the observatory was started in 1991 but stalled after the fall of the USSR. One of Roscosmos's projects that was widely covered in the media in 2005 was Kliper, a small lifting-body reusable spacecraft. While Roscosmos had reached out to ESA and JAXA, among others, to share development costs of the project, it stated that it would go forward with the project even without the support of other space agencies. This statement was backed by the approval of its budget for 2006–2015, which included the necessary funding for Kliper. However, the Kliper program was cancelled in July 2006 and replaced by the new Prospective Piloted Transport System (Orel) project. As of August 2023, the first uncrewed and crewed test flights of the Orel spacecraft are expected to occur in 2028. Launch control The Russian Space Forces are the military counterpart of Roscosmos, with mission objectives similar to those of the United States Space Force. The branch was formed after the merger of the space components of the Russian Air Force and the Aerospace Defense Forces (VKO) in 2015. The Space Forces control Russia's Plesetsk Cosmodrome launch facility. Roscosmos and the Space Forces share control of the Baikonur Cosmodrome, where Roscosmos reimburses the VKO for the wages of many of the flight controllers during civilian launches. Roscosmos and the Space Forces also share control of the Yuri Gagarin Cosmonaut Training Center. It has been announced that Russia is to build another spaceport in Tsiolkovsky, Amur Oblast; the Vostochny Cosmodrome was scheduled to be finished by 2018, having launched its first rocket in 2016. Subsidiaries As of 2017, Roscosmos had the following subsidiaries:
========================================
[SOURCE: https://en.wikipedia.org/wiki/Mars#cite_ref-lodders1998_10-2] | [TOKENS: 11899]
Mars Mars is the fourth planet from the Sun. It is also known as the "Red Planet" for its orange-red appearance. Mars is a desert-like rocky planet with a tenuous atmosphere that is primarily carbon dioxide (CO2). At the average surface level the atmospheric pressure is a few thousandths of Earth's, atmospheric temperature ranges from −153 to 20 °C (−243 to 68 °F), and cosmic radiation is high. Mars retains some water, in the ground as well as thinly in the atmosphere, forming cirrus clouds, fog, frost, large polar regions of permafrost and ice caps (with seasonal CO2 snow), but no bodies of liquid surface water. Its surface gravity is roughly a third of Earth's, or double that of the Moon. Its diameter, 6,779 km (4,212 mi), is about half of Earth's, or twice the Moon's, and its surface area is roughly the size of all the dry land of Earth. Fine dust is prevalent across the surface and the atmosphere, picked up and spread, in the low Martian gravity, even by the weak winds of the tenuous atmosphere. The terrain of Mars roughly follows a north-south divide, the Martian dichotomy, with the northern hemisphere mainly consisting of relatively flat, low-lying plains and the southern hemisphere of cratered highlands. Geologically, the planet is fairly active, with marsquakes trembling beneath the ground; it also hosts many enormous extinct volcanoes (the tallest is Olympus Mons, 21.9 km or 13.6 mi tall) as well as one of the largest canyons in the Solar System (Valles Marineris, 4,000 km or 2,500 mi long). Mars has two natural satellites that are small and irregular in shape: Phobos and Deimos. With a significant axial tilt of 25 degrees, Mars experiences seasons like Earth (which has an axial tilt of 23.5 degrees). A Martian solar year is equal to 1.88 Earth years (687 Earth days), and a Martian solar day (sol) is equal to 24.6 hours. Mars formed along with the other planets approximately 4.5 billion years ago. During the Martian Noachian period (4.5 to 3.5 billion years ago), its surface was marked by meteor impacts, valley formation, erosion, the possible presence of water oceans, and the loss of its magnetosphere. The Hesperian period (beginning 3.5 billion years ago and ending 3.3–2.9 billion years ago) was dominated by widespread volcanic activity and flooding that carved immense outflow channels. The Amazonian period continues to the present and dominates the geological processes seen today. Because of Mars's geological history, the possibility of past or present life on Mars remains an area of active scientific investigation, with some possible traces needing further examination. Visible with the naked eye in Earth's sky as a red wandering star, Mars has been observed throughout history, acquiring diverse associations in different cultures. In 1963 the first flight to Mars took place with Mars 1, but communication was lost en route. The first successful flyby exploration of Mars was conducted in 1965 with Mariner 4. In 1971 Mariner 9 entered orbit around Mars, becoming the first spacecraft to orbit any body other than the Moon, the Sun or Earth; following in the same year were the first uncontrolled impact (Mars 2) and the first successful landing (Mars 3) on Mars. Probes have been active on Mars continuously since 1997. At times, more than ten probes have simultaneously operated in orbit or on the surface, more than at any other planet beyond Earth.
Mars is an often proposed target for future crewed exploration missions, though no such mission is currently planned. Natural history Scientists have theorized that during the Solar System's formation, Mars was created as the result of a random process of runaway accretion of material from the protoplanetary disk that orbited the Sun. Mars has many distinctive chemical features caused by its position in the Solar System. Elements with comparatively low boiling points, such as chlorine, phosphorus, and sulfur, are much more common on Mars than on Earth; these elements were probably pushed outward by the young Sun's energetic solar wind. After the formation of the planets, the inner Solar System may have been subjected to the so-called Late Heavy Bombardment. About 60% of the surface of Mars shows a record of impacts from that era, whereas much of the remaining surface is probably underlain by immense impact basins caused by those events. However, more recent modeling has disputed the existence of the Late Heavy Bombardment. There is evidence of an enormous impact basin in the Northern Hemisphere of Mars, spanning 10,600 by 8,500 kilometres (6,600 by 5,300 mi), or roughly four times the size of the Moon's South Pole–Aitken basin, which would be the largest impact basin yet discovered if confirmed. It has been hypothesized that the basin was formed when Mars was struck by a Pluto-sized body about four billion years ago. The event, thought to be the cause of the Martian hemispheric dichotomy, created the smooth Borealis basin that covers 40% of the planet. A 2023 study shows evidence, based on the orbital inclination of Deimos (a small moon of Mars), that Mars may once have had a ring system, 3.5 to 4 billion years ago. This ring system may have been formed from a moon, 20 times more massive than Phobos, orbiting Mars billions of years ago, and Phobos would be a remnant of that ring. Epochs: The geological history of Mars can be split into many periods, but the following are the three primary ones: the Noachian, the Hesperian, and the Amazonian, outlined above. Geological activity is still taking place on Mars. The Athabasca Valles is home to sheet-like lava flows created about 200 million years ago. Water flows in the grabens called the Cerberus Fossae occurred less than 20 million years ago, indicating equally recent volcanic intrusions. The Mars Reconnaissance Orbiter has captured images of avalanches. Physical characteristics Mars is approximately half the diameter of Earth, or twice that of the Moon, with a surface area only slightly less than the total area of Earth's dry land. Mars is less dense than Earth, having about 15% of Earth's volume and 11% of Earth's mass, resulting in about 38% of Earth's surface gravity (see the sketch below). Mars is the only presently known example of a desert planet, a rocky planet with a surface akin to that of Earth's deserts. The red-orange appearance of the Martian surface is caused by iron(III) oxide (nanophase Fe2O3) and the iron(III) oxide-hydroxide mineral goethite. It can look like butterscotch; other common surface colors include golden, brown, tan, and greenish, depending on the minerals present. Like Earth, Mars is differentiated into a dense metallic core overlaid by less dense rocky layers. The outermost layer is the crust, which is on average about 42–56 kilometres (26–35 mi) thick, with a minimum thickness of 6 kilometres (3.7 mi) in Isidis Planitia and a maximum thickness of 117 kilometres (73 mi) in the southern Tharsis plateau. For comparison, Earth's crust averages 27.3 ± 4.8 km in thickness.
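The roughly 38% surface-gravity figure quoted above follows directly from the mass and diameter figures given in this article, since surface gravity scales as g = GM/R². A minimal sketch, using the text's ratios plus standard reference values for Earth's diameter and land area (which are not given in the text):

```python
import math

# Recovering Mars's relative surface gravity from g = GM / R^2.
# Ratios are from the text; Earth's diameter is a standard reference value.
mass_ratio = 0.11            # Mars mass as a fraction of Earth's (text: ~11%)
radius_ratio = 6779 / 12742  # Mars diameter 6,779 km vs Earth's ~12,742 km

gravity_ratio = mass_ratio / radius_ratio**2
print(f"surface gravity: {gravity_ratio:.0%} of Earth's")  # ~39%; text says ~38%

# The lead also claims Mars's surface area matches Earth's dry land:
mars_area_km2 = math.pi * 6779**2        # sphere area: 4*pi*r^2 = pi*d^2
print(f"Mars surface area: {mars_area_km2/1e6:.0f} million km^2")  # ~144
# Earth's land area is ~149 million km^2, so the claim checks out.
```

The one-percentage-point difference from the quoted 38% comes from rounding the mass ratio to 11%; with the more precise 10.7%, the formula lands on 38%.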
The most abundant elements in the Martian crust are silicon, oxygen, iron, magnesium, aluminum, calcium, and potassium. Mars is confirmed to be seismically active; in 2019, it was reported that InSight had detected and recorded over 450 marsquakes and related events. Beneath the crust is a silicate mantle responsible for many of the tectonic and volcanic features on the planet's surface. The upper Martian mantle is a low-velocity zone, where the velocity of seismic waves is lower than in surrounding depth intervals. The mantle appears to be rigid down to a depth of about 250 km, giving Mars a very thick lithosphere compared to Earth. Below this the mantle gradually becomes more ductile, and the seismic wave velocity starts to increase again. The Martian mantle does not appear to have a thermally insulating layer analogous to Earth's lower mantle; instead, below 1050 km in depth, it becomes mineralogically similar to Earth's transition zone. At the bottom of the mantle lies a basal liquid silicate layer approximately 150–180 km thick. The Martian mantle appears to be highly heterogeneous, with dense fragments up to 4 km across, likely injected deep into the planet by colossal impacts ~4.5 billion years ago; high-frequency waves from eight marsquakes slowed as they passed these localized regions, and modeling indicates the heterogeneities are compositionally distinct debris preserved because Mars lacks plate tectonics and has a sluggishly convecting interior that prevents complete homogenization. Mars's iron and nickel core is at least partially molten, and may have a solid inner core. It is around half of Mars's radius, approximately 1650–1675 km, and is enriched in light elements such as sulfur, oxygen, carbon, and hydrogen. The temperature of the core is estimated to be 2000–2400 K, compared to 5400–6230 K for Earth's solid inner core. In 2025, based on data from the InSight lander, a group of researchers reported the detection of a solid inner core 613 ± 67 kilometres (381 ± 42 mi) in radius. Mars is a terrestrial planet with a surface that consists of minerals containing silicon and oxygen, metals, and other elements that typically make up rock. The Martian surface is primarily composed of tholeiitic basalt, although parts are more silica-rich than typical basalt and may be similar to andesitic rocks on Earth, or silica glass. Regions of low albedo suggest concentrations of plagioclase feldspar, with northern low albedo regions displaying higher than normal concentrations of sheet silicates and high-silicon glass. Parts of the southern highlands include detectable amounts of high-calcium pyroxenes. Localized concentrations of hematite and olivine have been found. Much of the surface is deeply covered by finely grained iron(III) oxide dust. The Phoenix lander returned data showing Martian soil to be slightly alkaline and containing elements such as magnesium, sodium, potassium and chlorine. These nutrients are found in soils on Earth, and are necessary for plant growth. Experiments performed by the lander showed that the Martian soil has a basic pH of 7.7 and contains 0.6% perchlorate by weight, a concentration that is toxic to humans. Streaks are common across Mars and new ones appear frequently on steep slopes of craters, troughs, and valleys. The streaks are dark at first and get lighter with age. The streaks can start in a tiny area, then spread out for hundreds of metres. They have been seen to follow the edges of boulders and other obstacles in their path.
The commonly accepted hypotheses include that they are dark underlying layers of soil revealed after avalanches of bright dust or dust devils. Several other explanations have been put forward, including those that involve water or even the growth of organisms. Environmental radiation levels on the surface average 0.64 millisieverts per day, significantly less than the 1.84 millisieverts per day, or 22 millirads per day, experienced during the flight to and from Mars. For comparison, the radiation levels in low Earth orbit, where Earth's space stations orbit, are around 0.5 millisieverts per day. Hellas Planitia has the lowest surface radiation, at about 0.342 millisieverts per day, and features lava tubes southwest of Hadriacus Mons with potentially even lower levels, as low as 0.064 millisieverts per day, comparable to radiation levels during flights on Earth. Although Mars has no evidence of a structured global magnetic field, observations show that parts of the planet's crust have been magnetized, suggesting that alternating polarity reversals of its dipole field have occurred in the past. This paleomagnetism of magnetically susceptible minerals is similar to the alternating bands found on Earth's ocean floors. One hypothesis, published in 1999 and re-examined in October 2005 (with the help of the Mars Global Surveyor), is that these bands suggest plate tectonic activity on Mars four billion years ago, before the planetary dynamo ceased to function and the planet's magnetic field faded. Geography and features Although better remembered for mapping the Moon, Johann Heinrich von Mädler and Wilhelm Beer were the first areographers. They began by establishing that most of Mars's surface features were permanent and by more precisely determining the planet's rotation period. In 1840, Mädler combined ten years of observations and drew the first map of Mars. Features on Mars are named from a variety of sources. Albedo features are named for classical mythology. Craters larger than roughly 50 km are named for deceased scientists and writers and others who have contributed to the study of Mars. Smaller craters are named for towns and villages of the world with populations of less than 100,000. Large valleys are named for the word "Mars" or "star" in various languages; smaller valleys are named for rivers. Large albedo features retain many of the older names but are often updated to reflect new knowledge of the nature of the features. For example, Nix Olympica (the snows of Olympus) has become Olympus Mons (Mount Olympus). The surface of Mars as seen from Earth is divided into two kinds of areas, with differing albedo. The paler plains covered with dust and sand rich in reddish iron oxides were once thought of as Martian "continents" and given names like Arabia Terra (land of Arabia) or Amazonis Planitia (Amazonian plain). The dark features were thought to be seas, hence their names Mare Erythraeum, Mare Sirenum and Aurorae Sinus. The largest dark feature seen from Earth is Syrtis Major Planum. The permanent northern polar ice cap is named Planum Boreum, while the southern cap is called Planum Australe. Mars's equator is defined by its rotation, but the location of its Prime Meridian was specified, as was Earth's (at Greenwich), by the choice of an arbitrary point; Mädler and Beer selected a line for their first maps of Mars in 1830.
After the spacecraft Mariner 9 provided extensive imagery of Mars in 1972, a small crater (later called Airy-0), located in the Sinus Meridiani ("Middle Bay" or "Meridian Bay"), was chosen by Merton E. Davies, Harold Masursky, and Gérard de Vaucouleurs for the definition of 0.0° longitude to coincide with the original selection. Because Mars has no oceans, and hence no "sea level", a zero-elevation surface had to be selected as a reference level; this is called the areoid of Mars, analogous to the terrestrial geoid. Zero altitude was defined by the height at which there is 610.5 Pa (6.105 mbar) of atmospheric pressure. This pressure corresponds to the triple point of water, and it is about 0.6% of the sea level surface pressure on Earth (0.006 atm). For mapping purposes, the United States Geological Survey divides the surface of Mars into thirty cartographic quadrangles, each named for a classical albedo feature it contains. In April 2023, The New York Times reported an updated global map of Mars based on images from the Hope spacecraft. A related, but much more detailed, global Mars map was released by NASA on 16 April 2023. The vast upland region Tharsis contains several massive volcanoes, which include the shield volcano Olympus Mons. The edifice is over 600 km (370 mi) wide. Because the mountain is so large, with complex structure at its edges, assigning it a definite height is difficult. Its local relief, from the foot of the cliffs which form its northwest margin to its peak, is over 21 km (13 mi), a little over twice the height of Mauna Kea as measured from its base on the ocean floor. The total elevation change from the plains of Amazonis Planitia, over 1,000 km (620 mi) to the northwest, to the summit approaches 26 km (16 mi), roughly three times the height of Mount Everest, which in comparison stands at just over 8.8 kilometres (5.5 mi). Consequently, Olympus Mons is either the tallest or second-tallest mountain in the Solar System; the only known mountain which might be taller is the Rheasilvia peak on the asteroid Vesta, at 20–25 km (12–16 mi). The dichotomy of Martian topography is striking: northern plains flattened by lava flows contrast with the southern highlands, pitted and cratered by ancient impacts. It is possible that, four billion years ago, the Northern Hemisphere of Mars was struck by an object one-tenth to two-thirds the size of Earth's Moon. If this is the case, the Northern Hemisphere of Mars would be the site of an impact crater 10,600 by 8,500 kilometres (6,600 by 5,300 mi) in size, or roughly the area of Europe, Asia, and Australia combined, surpassing Utopia Planitia and the Moon's South Pole–Aitken basin as the largest impact crater in the Solar System. Mars is scarred by some 43,000 impact craters with a diameter of 5 kilometres (3.1 mi) or greater. The largest exposed crater is Hellas, which is 2,300 kilometres (1,400 mi) wide and 7,000 metres (23,000 ft) deep, and is a light albedo feature clearly visible from Earth. There are other notable impact features, such as Argyre, which is around 1,800 kilometres (1,100 mi) in diameter, and Isidis, which is around 1,500 kilometres (930 mi) in diameter. Due to the smaller mass and size of Mars, the probability of an object colliding with the planet is about half that of Earth. However, Mars is located closer to the asteroid belt, so it has an increased chance of being struck by materials from that source. Mars is also more likely to be struck by short-period comets, i.e., those that lie within the orbit of Jupiter.
Martian craters can have a morphology that suggests the ground became wet after the meteor impact. The large canyon Valles Marineris (Latin for 'Mariner Valleys', also known as Agathodaemon in the old canal maps) has a length of 4,000 kilometres (2,500 mi) and a depth of up to 7 kilometres (4.3 mi). The length of Valles Marineris is equivalent to the length of Europe and extends across one-fifth the circumference of Mars. By comparison, the Grand Canyon on Earth is only 446 kilometres (277 mi) long and nearly 2 kilometres (1.2 mi) deep. Valles Marineris was formed due to the swelling of the Tharsis area, which caused the crust in the area of Valles Marineris to collapse. In 2012, it was proposed that Valles Marineris is not just a graben, but a plate boundary where 150 kilometres (93 mi) of transverse motion has occurred, making Mars a planet with possibly a two-tectonic-plate arrangement. Images from the Thermal Emission Imaging System (THEMIS) aboard NASA's Mars Odyssey orbiter have revealed seven possible cave entrances on the flanks of the volcano Arsia Mons. The caves, named after loved ones of their discoverers, are collectively known as the "seven sisters". Cave entrances measure from 100 to 252 metres (328 to 827 ft) wide and are estimated to be at least 73 to 96 metres (240 to 315 ft) deep. Because light does not reach the floor of most of the caves, they may extend much deeper than these lower estimates and widen below the surface. "Dena" is the only exception; its floor is visible and was measured to be 130 metres (430 ft) deep. The interiors of these caverns may be protected from micrometeoroids, UV radiation, solar flares and high energy particles that bombard the planet's surface. Martian geysers (or CO2 jets) are putative sites of small gas and dust eruptions that occur in the south polar region of Mars during the spring thaw. "Dark dune spots" and "spiders" – or araneiforms – are the two most visible types of features ascribed to these eruptions. Similarly sized dust will settle out of the thinner Martian atmosphere sooner than it would on Earth. For example, the dust suspended by the 2001 global dust storms on Mars only remained in the Martian atmosphere for 0.6 years, while the dust from Mount Pinatubo took about two years to settle. However, under current Martian conditions, the mass movements involved are generally much smaller than on Earth. Even the 2001 global dust storms on Mars moved only the equivalent of a very thin dust layer – about 3 μm thick if deposited with uniform thickness between 58° north and south of the equator. Dust deposition at the two rover sites has proceeded at a rate of about the thickness of a grain every 100 sols. Atmosphere Mars lost its magnetosphere 4 billion years ago, possibly because of numerous asteroid strikes, so the solar wind interacts directly with the Martian ionosphere, lowering the atmospheric density by stripping away atoms from the outer layer. Both Mars Global Surveyor and Mars Express have detected ionized atmospheric particles trailing off into space behind Mars, and this atmospheric loss is being studied by the MAVEN orbiter. Compared to Earth, the atmosphere of Mars is quite rarefied. Atmospheric pressure on the surface today ranges from a low of 30 Pa (0.0044 psi) on Olympus Mons to over 1,155 Pa (0.1675 psi) in Hellas Planitia, with a mean pressure at the surface level of 600 Pa (0.087 psi). The highest atmospheric density on Mars is equal to that found 35 kilometres (22 mi) above Earth's surface.
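The pressure extremes just quoted (about 30 Pa over Olympus Mons, over 1,155 Pa in Hellas Planitia, around a 600 Pa mean) can be roughly reproduced with the isothermal barometric formula P(h) = P0·exp(−h/H), using the ~10.8 km scale height given in the next paragraph. This is a back-of-the-envelope sketch: the isothermal assumption and the site elevations relative to the zero-elevation datum are approximations, not figures from the text.

```python
import math

# Isothermal barometric estimate: P(h) = P0 * exp(-h / H).
# P0 and H are from the text; the elevations are approximate values
# for each site relative to the Martian zero-elevation datum.
P0 = 600.0     # Pa, mean surface pressure
H = 10.8       # km, atmospheric scale height

sites = {
    "Hellas Planitia floor": -7.0,   # km below datum (depth from text)
    "datum (zero elevation)": 0.0,
    "Olympus Mons summit": 21.3,     # km above datum (approximate)
}
for name, h_km in sites.items():
    p = P0 * math.exp(-h_km / H)
    print(f"{name:24s} {p:7.0f} Pa")
```

The simple model lands within about a percent of the Hellas figure (~1,150 Pa) but only gets the order of magnitude right at the Olympus Mons summit (~80 Pa versus the quoted 30 Pa low), since real temperature structure and seasonal CO2 condensation dominate at the extremes. It also confirms the datum arithmetic: 600 Pa is indeed about 0.6% of Earth's 101,325 Pa.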
The resulting mean surface pressure is only 0.6% of Earth's 101.3 kPa (14.69 psi). The scale height of the atmosphere is about 10.8 kilometres (6.7 mi), which is higher than Earth's 6 kilometres (3.7 mi), because the surface gravity of Mars is only about 38% of Earth's. The atmosphere of Mars consists of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen, along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 μm in diameter which give the Martian sky a tawny color when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it. Despite repeated detections of methane on Mars, there is no scientific consensus as to its origin. One suggestion is that methane exists on Mars and that its concentration fluctuates seasonally. The methane could be produced by a non-biological process, such as serpentinization involving water, carbon dioxide, and the mineral olivine, which is known to be common on Mars, or by Martian life. Compared to Earth, Mars's higher concentration of atmospheric CO2 and lower surface pressure may be why sound is attenuated more there, where natural sources are rare apart from the wind. Using acoustic recordings collected by the Perseverance rover, researchers concluded that the speed of sound on Mars is approximately 240 m/s for frequencies below 240 Hz, and 250 m/s for those above. Auroras have been detected on Mars. Because Mars lacks a global magnetic field, the types and distribution of auroras there differ from those on Earth; rather than being mostly restricted to polar regions, as is the case on Earth, a Martian aurora can encompass the planet. In September 2017, NASA reported that radiation levels on the surface of the planet Mars were temporarily doubled, and were associated with an aurora 25 times brighter than any observed earlier, due to a massive, and unexpected, solar storm in the middle of the month. Mars has seasons, alternating between its northern and southern hemispheres, similar to Earth's. Additionally, the orbit of Mars has, compared to Earth's, a large eccentricity: it approaches perihelion when it is summer in its southern hemisphere and winter in its northern, and aphelion when it is winter in its southern hemisphere and summer in its northern. As a result, the seasons in its southern hemisphere are more extreme and the seasons in its northern are milder than would otherwise be the case. The summer temperatures in the south can be warmer than the equivalent summer temperatures in the north by up to 30 °C (54 °F). Martian surface temperatures vary from lows of about −110 °C (−166 °F) to highs of up to 35 °C (95 °F) in equatorial summer. The wide range in temperatures is due to the thin atmosphere, which cannot store much solar heat, the low atmospheric pressure (about 1% that of the atmosphere of Earth), and the low thermal inertia of Martian soil. The planet is 1.52 times as far from the Sun as Earth, resulting in just 43% of the amount of sunlight. Mars has the largest dust storms in the Solar System, reaching speeds of over 160 km/h (100 mph). These can vary from a storm over a small area to gigantic storms that cover the entire planet. They tend to occur when Mars is closest to the Sun, and have been shown to increase global temperature. The seasons also produce deposits of dry ice (frozen CO2) covering the polar ice caps. Hydrology While Mars contains significant amounts of water, most of it is dust-covered water ice at the Martian polar ice caps.
The volume of water ice in the south polar ice cap, if melted, would be enough to cover most of the surface of the planet to a depth of 11 metres (36 ft). Water in its liquid form cannot persist on the surface due to Mars's low atmospheric pressure, which is less than 1% that of Earth. Only at the lowest of elevations are the pressure and temperature high enough for liquid water to exist for short periods. Although little water is present in the atmosphere, there is enough to produce clouds of water ice and occasional snow and frost, often mixed with carbon dioxide (dry ice) snow. Landforms visible on Mars strongly suggest that liquid water has existed on the planet's surface. Huge linear swathes of scoured ground, known as outflow channels, cut across the surface in about 25 places. These are thought to be a record of erosion caused by the catastrophic release of water from subsurface aquifers, though some of these structures have been hypothesized to result from the action of glaciers or lava. One of the larger examples, Ma'adim Vallis, is 700 kilometres (430 mi) long, much greater than the Grand Canyon, with a width of 20 kilometres (12 mi) and a depth of 2 kilometres (1.2 mi) in places. It is thought to have been carved by flowing water early in Mars's history. The youngest of these channels is thought to have formed only a few million years ago. Elsewhere, particularly on the oldest areas of the Martian surface, finer-scale, dendritic networks of valleys are spread across significant proportions of the landscape. Features of these valleys and their distribution strongly imply that they were carved by runoff resulting from precipitation in early Mars history. Subsurface water flow and groundwater sapping may play important subsidiary roles in some networks, but precipitation was probably the root cause of the incision in almost all cases. Along craters and canyon walls, there are thousands of features that appear similar to terrestrial gullies. The gullies tend to be in the highlands of the Southern Hemisphere and face the Equator; all are poleward of 30° latitude. A number of authors have suggested that their formation process involves liquid water, probably from melting ice, although others have argued for formation mechanisms involving carbon dioxide frost or the movement of dry dust. No partially degraded gullies formed by weathering have been observed, nor have superimposed impact craters, indicating that these are young features, possibly still active. Other geological features, such as deltas and alluvial fans preserved in craters, are further evidence for warmer, wetter conditions at an interval or intervals in earlier Mars history. Such conditions necessarily require the widespread presence of crater lakes across a large proportion of the surface, for which there is independent mineralogical, sedimentological and geomorphological evidence. Further evidence that liquid water once existed on the surface of Mars comes from the detection of specific minerals such as hematite and goethite, both of which sometimes form in the presence of water. The chemical signature of water vapor on Mars was first unequivocally demonstrated in 1963 by spectroscopy using an Earth-based telescope. In 2004, Opportunity detected the mineral jarosite. This forms only in the presence of acidic water, showing that water once existed on Mars.
The Spirit rover found concentrated deposits of silica in 2007 that indicated wet conditions in the past, and in December 2011, the mineral gypsum, which also forms in the presence of water, was found on the surface by NASA's Mars rover Opportunity. It is estimated that the amount of water in the upper mantle of Mars, represented by hydroxyl ions contained within Martian minerals, is equal to or greater than that of Earth, at 50–300 parts per million of water, which is enough to cover the entire planet to a depth of 200–1,000 metres (660–3,280 ft). On 18 March 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples, including the broken fragments of "Tintina" rock and "Sutton Inlier" rock, as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 centimetres (24 in), during the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain. In September 2015, NASA announced that they had found strong evidence of hydrated brine flows in recurring slope lineae, based on spectrometer readings of the darkened areas of slopes. These streaks flow downhill in Martian summer, when the temperature is above −23 °C, and freeze at lower temperatures. These observations supported earlier hypotheses, based on timing of formation and their rate of growth, that these dark streaks resulted from water flowing just below the surface. However, later work suggested that the lineae may be dry, granular flows instead, with at most a limited role for water in initiating the process. A definitive conclusion about the presence, extent, and role of liquid water on the Martian surface remains elusive. Researchers suspect much of the low northern plains of the planet were covered with an ocean hundreds of meters deep, though this theory remains controversial. In March 2015, scientists stated that such an ocean might have been the size of Earth's Arctic Ocean. This finding was derived from the ratio of protium to deuterium in the modern Martian atmosphere compared to that ratio on Earth. The amount of Martian deuterium (D/H = (9.3 ± 1.7) × 10−4) is five to seven times the amount on Earth (D/H = 1.56 × 10−4), suggesting that ancient Mars had significantly higher levels of water. Results from the Curiosity rover had previously found a high ratio of deuterium in Gale Crater, though not significantly high enough to suggest the former presence of an ocean. Other scientists caution that these results have not been confirmed, and point out that Martian climate models have not yet shown that the planet was warm enough in the past to support bodies of liquid water. Near the northern polar cap is the 81.4 kilometres (50.6 mi) wide Korolev Crater, which the Mars Express orbiter found to be filled with approximately 2,200 cubic kilometres (530 cu mi) of water ice. In November 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region. The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior (which is 12,100 cubic kilometres). During observations from 2018 through 2021, the ExoMars Trace Gas Orbiter spotted indications of water, probably subsurface ice, in the Valles Marineris canyon system.
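The "five to seven times" deuterium enrichment quoted above is simply the ratio of the two D/H values given, with the stated ±1.7 uncertainty propagated; a quick check using only the figures from the text:

```python
# Deuterium enrichment of the Martian atmosphere relative to Earth,
# using only the D/H ratios quoted in the text.
mars_dh, mars_dh_err = 9.3e-4, 1.7e-4
earth_dh = 1.56e-4

best = mars_dh / earth_dh
low = (mars_dh - mars_dh_err) / earth_dh
high = (mars_dh + mars_dh_err) / earth_dh
print(f"enrichment: {best:.1f}x (range {low:.1f}x to {high:.1f}x)")
# -> 6.0x, range 4.9x to 7.1x: the "five to seven times" in the text.
```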
Orbital motion Mars's average distance from the Sun is roughly 230 million km (143 million mi), and its orbital period is 687 (Earth) days. The solar day (or sol) on Mars is only slightly longer than an Earth day: 24 hours, 39 minutes, and 35.244 seconds. A Martian year is equal to 1.8809 Earth years, or 1 year, 320 days, and 18.2 hours. The gravitational potential difference, and thus the delta-v needed to transfer between Mars and Earth, is the second lowest of any planet for transfers from Earth. The axial tilt of Mars is 25.19° relative to its orbital plane, which is similar to the axial tilt of Earth. As a result, Mars has seasons like Earth, though on Mars they are nearly twice as long because its orbital period is that much longer. In the present day, the orientation of the north pole of Mars is close to the star Deneb. Mars has a relatively pronounced orbital eccentricity of about 0.09; of the seven other planets in the Solar System, only Mercury has a larger orbital eccentricity. It is known that in the past, Mars has had a much more circular orbit. At one point, 1.35 million Earth years ago, Mars had an eccentricity of roughly 0.002, much less than that of Earth today. Mars's cycle of eccentricity is 96,000 Earth years compared to Earth's cycle of 100,000 years. Mars makes its closest approach to Earth, around opposition, once every synodic period of 779.94 days. Opposition should not be confused with Mars conjunction, when Earth and Mars are on opposite sides of the Sun, forming a straight line through it. The average time between the successive oppositions of Mars, its synodic period, is 780 days, but the number of days between successive oppositions can range from 764 to 812. The distance at close approach varies between about 54 and 103 million km (34 and 64 million mi) due to the planets' elliptical orbits, which causes comparable variation in angular size. At their furthest, Mars and Earth can be as far as 401 million km (249 million mi) apart. Mars comes into opposition from Earth every 2.1 years. The planets come into opposition near Mars's perihelion in 2003, 2018 and 2035, with the 2020 and 2033 events being particularly close to perihelic opposition. The mean apparent magnitude of Mars is +0.71, with a standard deviation of 1.05. Because the orbit of Mars is eccentric, the magnitude at opposition from the Sun can range from about −3.0 to −1.4. The minimum brightness is magnitude +1.86, when the planet is near aphelion and in conjunction with the Sun. At its brightest, Mars (along with Jupiter) is second only to Venus in apparent brightness. Mars usually appears distinctly yellow, orange, or red. When farthest away from Earth, it is more than seven times farther away than when it is closest. Mars is usually close enough for particularly good viewing once or twice at 15-year or 17-year intervals. Optical ground-based telescopes are typically limited to resolving features about 300 kilometres (190 mi) across when Earth and Mars are closest, because of Earth's atmosphere. As Mars approaches opposition, it begins a period of retrograde motion, which means it will appear to move backwards in a looping curve with respect to the background stars. This retrograde motion lasts for about 72 days, and Mars reaches its peak apparent brightness in the middle of this interval. Moons Mars has two relatively small (compared to Earth's) natural moons, Phobos (about 22 km (14 mi) in diameter) and Deimos (about 12 km (7.5 mi) in diameter), which orbit at 9,376 km (5,826 mi) and 23,460 km (14,580 mi) around the planet.
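Two of the orbital figures in this section can be recovered from first principles: the synodic period from the two sidereal orbital periods, and the moons' periods from Kepler's third law applied to the orbital radii just quoted. The sketch below uses a standard value for Mars's gravitational parameter (GM), which is not given in the text.

```python
import math

# 1) Synodic period of Mars: 1/S = 1/P_earth - 1/P_mars.
p_earth, p_mars = 365.256, 686.98          # days (sidereal orbital periods)
synodic = 1 / (1 / p_earth - 1 / p_mars)
print(f"synodic period: {synodic:.1f} days")           # ~779.9 (text: 779.94)

# 2) Moon orbital periods via Kepler's third law: T = 2*pi*sqrt(a^3 / GM).
GM_MARS = 4.2828e13                        # m^3/s^2, standard value (assumed)
for name, a_km in (("Phobos", 9_376), ("Deimos", 23_460)):
    t_h = 2 * math.pi * math.sqrt((a_km * 1e3) ** 3 / GM_MARS) / 3600
    print(f"{name}: {t_h:.2f} h orbital period")        # ~7.66 h and ~30.31 h

# Phobos orbits faster than Mars rotates (sol = 24.66 h), so from the
# surface it rises in the west and reappears roughly every 11 hours:
apparent = 1 / (1 / 7.66 - 1 / 24.66)
print(f"Phobos rise-to-rise interval: {apparent:.1f} h")  # ~11.1 h
```

The computed 7.66-hour Phobos orbit and 11-hour rise-to-rise interval match the behaviour described in the next paragraph, and the same synodic figure also rationalizes the Babylonian relation cited later in this article: 37 synodic periods is 37 × 779.94 ≈ 28,858 days, almost exactly 79 Earth years (79 × 365.25 ≈ 28,855 days).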
The origin of both moons is unclear, although a popular theory states that they were asteroids captured into Martian orbit. Both satellites were discovered in 1877 by Asaph Hall and were named after the characters Phobos (the deity of panic and fear) and Deimos (the deity of terror and dread), twins from Greek mythology who accompanied their father Ares, god of war, into battle. Mars was the Roman equivalent to Ares. In modern Greek, the planet retains its ancient name Ares (Aris: Άρης). From the surface of Mars, the motions of Phobos and Deimos appear different from that of the Earth's satellite, the Moon. Phobos rises in the west, sets in the east, and rises again in just 11 hours. Deimos, being only just outside synchronous orbit – where the orbital period would match the planet's period of rotation – rises as expected in the east, but slowly. Because the orbit of Phobos is below a synchronous altitude, tidal forces from Mars are gradually lowering its orbit. In about 50 million years, it could either crash into Mars's surface or break up into a ring structure around the planet. The origin of the two satellites is not well understood. Their low albedo and carbonaceous chondrite composition have been regarded as similar to asteroids, supporting a capture theory. The unstable orbit of Phobos would seem to point toward a relatively recent capture. But both have circular orbits near the equator, which is unusual for captured objects, and the required capture dynamics are complex. Accretion early in the history of Mars is plausible, but would not account for a composition resembling asteroids rather than Mars itself, if that is confirmed. Mars may have yet-undiscovered moons, smaller than 50 to 100 metres (160 to 330 ft) in diameter, and a dust ring is predicted to exist between Phobos and Deimos. A third possibility for their origin as satellites of Mars is the involvement of a third body or a type of impact disruption. More-recent lines of evidence for Phobos having a highly porous interior, and suggesting a composition containing mainly phyllosilicates and other minerals known from Mars, point toward an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's satellite. Although the visible and near-infrared (VNIR) spectra of the moons of Mars resemble those of outer-belt asteroids, the thermal infrared spectra of Phobos are reported to be inconsistent with chondrites of any class. It is also possible that Phobos and Deimos were fragments of an older moon, formed by debris from a large impact on Mars, and then destroyed by a more recent impact upon the satellite. More recently, a study conducted by a team of researchers from multiple countries suggests that a lost moon, at least fifteen times the size of Phobos, may have existed in the past. Analysis of rocks recording tidal processes on the planet suggests that these tides may have been regulated by such a past moon. Human observations and exploration The history of observations of Mars is marked by oppositions of Mars, when the planet is closest to Earth and hence is most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions of Mars, which are distinguished because Mars is close to perihelion, making it even closer to Earth. The ancient Sumerians named Mars Nergal, the god of war and plague.
Human observations and exploration

The history of observations of Mars is marked by its oppositions, when the planet is closest to Earth and hence most easily visible, which occur every couple of years. Even more notable are the perihelic oppositions, which are distinguished because Mars is then near perihelion, bringing it even closer to Earth.

The ancient Sumerians named Mars Nergal, the god of war and plague. During Sumerian times, Nergal was a minor deity of little significance, but during later times his main cult center was the city of Nineveh. In Mesopotamian texts, Mars is referred to as the "star of judgement of the fate of the dead". The existence of Mars as a wandering object in the night sky was also recorded by ancient Egyptian astronomers, who by 1534 BCE were familiar with the retrograde motion of the planet. By the period of the Neo-Babylonian Empire, Babylonian astronomers were making regular records of the positions of the planets and systematic observations of their behavior. For Mars, they knew that the planet made 37 synodic periods, or 42 circuits of the zodiac, every 79 years, and they invented arithmetic methods for making minor corrections to the predicted positions of the planets.

In Ancient Greece, the planet was known as Πυρόεις (Pyroeis, "the fiery one"), though more commonly the Greek name for the planet now called Mars was Ares. It was the Romans who named the planet Mars, for their god of war, often represented by the sword and shield of the planet's namesake. In the fourth century BCE, Aristotle noted that Mars disappeared behind the Moon during an occultation, indicating that the planet was farther away than the Moon. Ptolemy, a Greek living in Alexandria, attempted to address the problem of the orbital motion of Mars; his model, and his collective work on astronomy, was presented in the multi-volume collection later called the Almagest (from the Arabic for "greatest"), which became the authoritative treatise on Western astronomy for the next fourteen centuries. Literature from ancient China confirms that Mars was known to Chinese astronomers by no later than the fourth century BCE. In East Asian cultures, Mars is traditionally referred to as the "fire star" (火星), based on the Wuxing system.

In 1609, Johannes Kepler published a decade-long study of the orbit of Mars, using the diurnal parallax of Mars measured by Tycho Brahe to make a preliminary calculation of the relative distance to the planet. From Brahe's observations of Mars, Kepler deduced that the planet orbited the Sun not in a circle but in an ellipse. Moreover, Kepler showed that Mars sped up as it approached the Sun and slowed down as it moved farther away, in a manner that later physicists would explain as a consequence of the conservation of angular momentum (pp. 433–437).
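Two of the figures above can be checked against modern values; the sketch below is an illustrative verification, not a historical reconstruction:

# Babylonian 79-year cycle: 37 synodic periods and 42 circuits of the
# zodiac (sidereal orbits) should each span about 79 years.
SYNODIC = 779.94    # days between successive oppositions
SIDEREAL = 686.98   # days for one Mars orbit
YEAR = 365.25       # days per Julian year
print(37 * SYNODIC / YEAR)   # ~79.0 years
print(42 * SIDEREAL / YEAR)  # ~79.0 years

# Kepler's speed variation: with angular momentum conserved, the ratio of
# perihelion to aphelion speed is (1 + e) / (1 - e) for eccentricity e.
ECC = 0.0934  # Mars's orbital eccentricity
print((1 + ECC) / (1 - ECC))  # ~1.21: Mars moves about 21% faster at perihelion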
In 1610, Mars was first observed through a telescope, by the Italian astronomer Galileo Galilei. The diurnal parallax of Mars was later measured telescopically in an effort to determine the Sun–Earth distance, first by Giovanni Domenico Cassini in 1672, though these early parallax measurements were hampered by the quality of the instruments. The only observed occultation of Mars by Venus was that of 13 October 1590, seen by Michael Maestlin at Heidelberg.

By the 19th century, the resolution of telescopes had reached a level sufficient for surface features to be identified. On 5 September 1877, a perihelic opposition of Mars occurred. The Italian astronomer Giovanni Schiaparelli used a 22-centimetre (8.7 in) telescope in Milan to help produce the first detailed map of Mars. His maps notably contained features he called canali – supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth – which, with the possible exception of the natural canyon Valles Marineris, were later shown to be an optical illusion. His term, which means "channels" or "grooves", was popularly mistranslated in English as "canals". Influenced by these observations, the orientalist Percival Lowell founded an observatory equipped with 30- and 45-centimetre (12- and 18-in) telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following, less favorable, oppositions. Lowell published several books on Mars and life on the planet, which had a great influence on the public. The canali were independently observed by other astronomers, such as Henri Joseph Perrotin and Louis Thollon in Nice, using one of the largest telescopes of that time. The seasonal changes (the diminishing of the polar caps and the dark areas that formed during Martian summers), in combination with the canals, led to speculation about life on Mars, and it was a long-held belief that Mars contained vast seas and vegetation. As bigger telescopes were used, fewer long, straight canali were observed; during observations in 1909 by Eugène Antoniadi with an 84-centimetre (33 in) telescope, irregular patterns were observed, but no canali were seen.

The first spacecraft from Earth to visit Mars was the Soviet Union's Mars 1, which made a silent flyby in 1963 after contact was lost en route. NASA's Mariner 4 followed and became the first spacecraft to transmit successfully from Mars; launched on 28 November 1964, it made its closest approach to the planet on 15 July 1965. Mariner 4 detected the weak Martian radiation belt, measured at about 0.1% that of Earth, and captured the first images of another planet taken from deep space. Once spacecraft had visited the planet during the 1960s and 1970s, many previous conceptions of Mars were radically overturned; after the results of the Viking life-detection experiments, the hypothesis of a dead planet was generally accepted. The data from Mariner 9 and Viking allowed better maps of Mars to be made.

After Viking 1 shut down in 1982 and until 1997, Mars was visited only by three unsuccessful probes: two that flew past without making contact (Phobos 1, 1988; Mars Observer, 1993) and one (Phobos 2, 1989) that malfunctioned in orbit before reaching its destination, the moon Phobos. In 1997, Mars Pathfinder became the first successful rover mission beyond the Moon and, together with Mars Global Surveyor (operated until late 2006), began an uninterrupted active robotic presence at Mars that has lasted to this day. Mars Global Surveyor produced complete, extremely detailed maps of the Martian topography, magnetic field, and surface minerals. Starting with these missions, a range of new, improved crewless spacecraft – orbiters, landers, and rovers – have been sent to Mars, with successful missions by NASA (United States), JAXA (Japan), ESA, the United Kingdom, ISRO (India), Roscosmos (Russia), the United Arab Emirates, and CNSA (China) to study the planet's surface, climate, and geology, uncovering elements of the history and dynamics of the Martian hydrosphere and possible traces of ancient life. As of 2023, Mars hosts nine functioning spacecraft. Seven are in orbit: 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, ExoMars Trace Gas Orbiter, the Hope orbiter, and the Tianwen-1 orbiter. Another two are on the surface: the Mars Science Laboratory Curiosity rover and the Perseverance rover. Collected maps are available online at websites including Google Mars.
NASA provides two online tools: Mars Trek, which provides visualizations of the planet using data from 50 years of exploration, and Experience Curiosity, which simulates traveling on Mars in 3-D with Curiosity. Further missions to Mars are planned. As of February 2024, debris from Mars missions amounted to over seven tons, most of it crashed and inactive spacecraft and discarded components. In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars; key areas include establishing telecommunications, payload delivery, and surface imaging.

Habitability and habitation

During the late 19th century, it was widely accepted in the astronomical community that Mars had life-supporting qualities, including the presence of oxygen and water. However, in 1894 W. W. Campbell at Lick Observatory observed the planet and found that "if water vapor or oxygen occur in the atmosphere of Mars it is in quantities too small to be detected by spectroscopes then available". That observation contradicted many of the measurements of the time and was not widely accepted. Campbell and V. M. Slipher repeated the study in 1909 using better instruments, with the same results. It was not until the findings were confirmed by W. S. Adams in 1925 that the myth of the Earth-like habitability of Mars was finally broken. Even so, articles on Martian biology were still being published into the 1960s, setting aside explanations other than life for the seasonal changes on Mars.

The current understanding of planetary habitability – the ability of a world to develop environmental conditions favorable to the emergence of life – favors planets that have liquid water on their surface. Most often this requires the orbit of a planet to lie within the habitable zone, which for the Sun is estimated to extend from within the orbit of Earth to about that of Mars. During perihelion, Mars dips inside this region, but the planet's thin (low-pressure) atmosphere prevents liquid water from existing over large regions for extended periods. The past flow of liquid water demonstrates the planet's potential for habitability, though recent evidence suggests that any water on the Martian surface may have been too salty and acidic to support regular terrestrial life. The environmental conditions on Mars are a challenge to sustaining organic life: the planet has little heat transfer across its surface, poor insulation against bombardment by the solar wind owing to the absence of a magnetosphere, and insufficient atmospheric pressure to retain water in liquid form – water instead sublimes to a gaseous state, as the illustrative check below makes concrete.
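Liquid water requires a surface pressure above the triple point of water, and Mars's mean surface pressure sits near that threshold. The following is a minimal sketch assuming representative mean surface values (about 210 K and 600 Pa for Mars, values not drawn from this article's sources), and it ignores brine chemistry and local topographic pressure variation:

# Liquid water requires conditions above the triple point of water
# (273.16 K, 611.657 Pa); below that pressure, ice sublimes directly
# to vapor and liquid water cannot persist at any temperature.
TRIPLE_T = 273.16   # K
TRIPLE_P = 611.657  # Pa

def liquid_water_possible(temp_k: float, pressure_pa: float) -> bool:
    """Crude necessary condition; ignores the boiling limit and brines."""
    return temp_k > TRIPLE_T and pressure_pa > TRIPLE_P

print(liquid_water_possible(210.0, 600.0))     # Mars mean surface: False
print(liquid_water_possible(288.0, 101325.0))  # Earth mean surface: True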
Mars is nearly, or perhaps totally, geologically dead; the end of volcanic activity has apparently stopped the recycling of chemicals and minerals between the surface and the interior of the planet. Evidence suggests that the planet was once significantly more habitable than it is today, but whether living organisms ever existed there remains unknown. The Viking probes of the mid-1970s carried experiments designed to detect microorganisms in Martian soil at their respective landing sites, and they returned positive results, including a temporary increase in CO2 production on exposure to water and nutrients. This sign of life was later disputed by scientists, resulting in a continuing debate, with NASA scientist Gilbert Levin asserting that Viking may have found life.

A 2014 analysis of the Martian meteorite EETA79001 found chlorate, perchlorate, and nitrate ions in sufficiently high concentrations to suggest that they are widespread on Mars. UV and X-ray radiation would turn chlorate and perchlorate ions into other, highly reactive oxychlorines, indicating that any organic molecules would have to be buried under the surface to survive. Small quantities of methane and formaldehyde detected by Mars orbiters have both been claimed as possible evidence for life, since these compounds would quickly break down in the Martian atmosphere; alternatively, they may be replenished by volcanic or other geological means, such as serpentinization. Impact glass, which on Earth can preserve signs of life, has also been found in impact craters on Mars and could likewise have preserved signs of life, if life existed at those sites. The Cheyava Falls rock, discovered on Mars in June 2024, has been designated by NASA as a "potential biosignature" and was core-sampled by the Perseverance rover for possible return to Earth and further examination. Although the find is highly intriguing, no definitive determination of a biological or abiotic origin can be made with the data currently available.

Several plans for a human mission to Mars have been proposed, but none has come to fruition. The NASA Authorization Act of 2017 directed NASA to study the feasibility of a crewed Mars mission in the early 2030s; the resulting report concluded that this would be infeasible. In 2021, China announced plans for a crewed Mars mission in 2033. Privately held companies such as SpaceX have also proposed plans to send humans to Mars, with the eventual goal of settling the planet. As of 2024, SpaceX was proceeding with development of the Starship launch vehicle with the goal of Mars colonization. In plans shared in April 2024, Elon Musk envisioned the beginning of a Mars colony within the next twenty years, enabled by the planned mass manufacturing of Starship and sustained initially by resupply from Earth and by in-situ resource utilization on Mars, until the colony reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which recurs every 26 months – one Earth–Mars synodic period. The moon Phobos has been proposed as an anchor point for a space elevator. Besides national space agencies and space companies, groups such as the Mars Society and The Planetary Society advocate for human missions to Mars.

In culture

Mars is named after the Roman god of war (the Greek Ares), but was also associated with the demigod Heracles (the Roman Hercules) by ancient Greek astronomers, as detailed by Aristotle. The association between Mars and war dates back at least to Babylonian astronomy, in which the planet was named for the god Nergal, deity of war and destruction. It persisted into modern times, as exemplified by Gustav Holst's orchestral suite The Planets, whose famous first movement labels Mars "the Bringer of War". The planet's symbol, a circle with a spear pointing out to the upper right, is also used as a symbol for the male gender. The symbol dates from at least the 11th century, though a possible predecessor has been found in the Greek Oxyrhynchus Papyri. The idea that Mars was populated by intelligent Martians became widespread in the late 19th century.
Schiaparelli's "canali" observations, combined with Percival Lowell's books on the subject, put forward the standard notion of the planet as a drying, cooling, dying world whose ancient civilizations had constructed irrigation works. Many other observations and proclamations by notable personalities added to what has been termed "Mars fever". In the present day, high-resolution mapping of the surface of Mars has revealed no artifacts of habitation, but pseudoscientific speculation about intelligent life on Mars continues. Reminiscent of the canali observations, these speculations are based on small-scale features perceived in spacecraft images, such as "pyramids" and the "Face on Mars". In his book Cosmos, planetary astronomer Carl Sagan wrote: "Mars has become a kind of mythic arena onto which we have projected our Earthly hopes and fears."

The depiction of Mars in fiction has been stimulated by its dramatic red color and by nineteenth-century scientific speculation that its surface conditions might support not just life but intelligent life. This gave rise to many science-fiction stories built on these ideas, such as H. G. Wells's The War of the Worlds, in which Martians seek to escape their dying planet by invading Earth; Ray Bradbury's The Martian Chronicles, in which human explorers accidentally destroy a Martian civilization; Edgar Rice Burroughs's Barsoom series; C. S. Lewis's novel Out of the Silent Planet (1938); and a number of Robert A. Heinlein stories written before the mid-1960s. Since then, depictions of Martians have extended to animation: a comic figure of an intelligent Martian, Marvin the Martian, appeared in Haredevil Hare (1948) as a character in Warner Brothers' Looney Tunes cartoons and has remained part of popular culture to the present. After the Mariner and Viking spacecraft returned pictures of Mars as a lifeless, canal-less world, these ideas about Mars were abandoned; for many science-fiction authors the new discoveries initially seemed like a constraint, but eventually the post-Viking knowledge of Mars became itself a source of inspiration for works like Kim Stanley Robinson's Mars trilogy.