optical data transfer, e.g. fiber-optic cable
optical storage,
Substituting electrical components with optical ones requires converting the data format from photons to electrons, which makes the system slower.
There are some disagreements between researchers about the future capabilities of optical computers; whether or not they may be able to compete with semiconductor-based electronic computers in terms of speed, power consumption, cost, and size is an open question. Critics note that real-world logic systems require "logic-level restoration, cascadability, fan-out and input–output isolation", all of which are currently provided by electronic transistors at low cost, low power, and high speed. For optical logic to be competitive beyond a few niche applications, major breakthroughs in non-linear optical device technology would be required, or perhaps a change in the nature of computing itself.
A significant challenge to optical computing is that computation is a nonlinear process in which multiple signals must interact. Light, which is an electromagnetic wave, can only interact with another electromagnetic wave in the presence of electrons in a material, and the strength of this interaction is much weaker for electromagnetic waves, such as light, than for the electronic signals in a conventional computer. This may result in the processing elements for an optical computer requiring more power and larger dimensions than those for a conventional electronic computer using transistors.
A further misconception is that since light can travel much faster than the drift velocity of electrons, and at frequencies measured in THz, optical transistors should be capable of extremely high frequencies. However, any electromagnetic wave must obey the transform limit, and therefore the rate at which an optical transistor can respond to a signal is still limited by its spectral bandwidth. In fiber-optic communications, practical limits such as dispersion often constrain channels to bandwidths of tens of GHz, only slightly better than many silicon transistors. Obtaining dramatically faster operation than electronic transistors would therefore require practical methods of transmitting ultrashort pulses down highly dispersive waveguides.
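The bandwidth argument above can be made concrete with a back-of-the-envelope calculation; the 40 GHz channel and the Gaussian pulse shape are illustrative assumptions, not figures from any particular device:

```python
# Transform-limit estimate: a Gaussian pulse has a time-bandwidth
# product of about 0.44, so a device confined to ~40 GHz of spectral
# bandwidth cannot respond much faster than ~11 picoseconds.
TBP = 0.441                       # Gaussian time-bandwidth product
bandwidth_hz = 40e9               # illustrative channel bandwidth
min_pulse_s = TBP / bandwidth_hz
print(min_pulse_s)                # about 1.1e-11 seconds
```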
Photonic logic is the use of photons in logic gates. Switching is obtained using nonlinear optical effects when two or more signals are combined.
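As a toy illustration of this kind of switching (not a model of any real device), an AND-style gate can be sketched as a threshold applied to the combined intensity of two signals; the threshold value here is an arbitrary assumption:

```python
# Two signal intensities combine; a nonlinear, threshold-like response
# determines the output, yielding AND-style behaviour. Illustrative only.
def optical_and(i1, i2, threshold=1.5):
    combined = i1 + i2            # intensities of the combined signals
    return 1 if combined > threshold else 0

print(optical_and(1, 1), optical_and(1, 0), optical_and(0, 0))
```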
Resonators are especially useful in photonic logic, since they allow a build-up of energy from constructive interference, thus enhancing optical nonlinear effects.
Other approaches that have been investigated include photonic logic at a molecular level, using photoluminescent chemicals. In one demonstration, Witlicki et al. performed logical operations using molecules and surface-enhanced Raman spectroscopy (SERS).
The basic idea is to delay light in order to perform useful computations. Of particular interest is solving NP-complete problems, which are difficult for conventional computers.
Two basic properties of light are used in this approach:
When solving a problem with time delays, the following steps must be followed:
The first problem attacked in this way was the Hamiltonian path problem.
The simplest one is the subset sum problem. An optical device solving an instance with four numbers {a1, a2, a3, a4} is depicted below:
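A software sketch of the delay idea (the four numbers below are illustrative, not taken from the original device): each number becomes an optical delay, each pulse is split into a delayed and an undelayed copy, and the arrival times at the output are exactly the subset sums.

```python
# Each number a_i is an optical delay; at each stage a pulse splits into
# a copy delayed by a_i and an undelayed copy, so the set of arrival
# times at the output equals the set of all subset sums.
def subset_sums(numbers):
    arrivals = {0}                      # the undelayed pulse
    for a in numbers:
        arrivals |= {t + a for t in arrivals}
    return arrivals

times = subset_sums([2, 3, 7, 11])      # illustrative instance {a1..a4}
print(12 in times)                      # a pulse arrives at t = 12
                                        # iff some subset sums to 12
```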
The travelling salesman problem has been solved by Shaked et al. using an optical approach. All possible TSP paths were generated and stored in a binary matrix, which was multiplied by a gray-scale vector containing the distances between cities. The multiplication is performed optically using an optical correlator.
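A software stand-in for that matrix formulation (the 4-city distance table is illustrative, and this simulates the arithmetic rather than the optical correlator itself): each row of a binary matrix marks the edges used by one candidate tour, and a single matrix-vector product with the edge-distance vector yields every tour length at once.

```python
from itertools import permutations

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]                  # illustrative distances
n = len(dist)
edges = [(i, j) for i in range(n) for j in range(n) if i != j]
edge_len = [dist[i][j] for (i, j) in edges]

matrix, tours = [], []
for perm in permutations(range(1, n)):  # all tours starting/ending at 0
    tour = (0,) + perm + (0,)
    used = set(zip(tour, tour[1:]))     # edges this tour traverses
    matrix.append([1 if e in used else 0 for e in edges])
    tours.append(tour)

# One binary-matrix x distance-vector product gives all tour lengths.
lengths = [sum(r * d for r, d in zip(row, edge_len)) for row in matrix]
best = tours[lengths.index(min(lengths))]
print(best, min(lengths))
```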
Many computations, particularly in scientific applications, require frequent use of the 2D discrete Fourier transform – for example in solving differential equations describing propagation of waves or transfer of heat. Though modern GPU technologies typically enable high-speed computation of large 2D DFTs, techniques have been developed that can perform continuous Fourier transform optically by utilising the natural Fourier transforming property of lenses. The input is encoded using a liquid crystal spatial light modulator and the result is measured using a conventional CMOS or CCD image sensor. Such optical architectures can offer superior scaling of computational complexity due to the inherently highly interconnected nature of optical propagation, and have been used to solve 2D heat equations.
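A minimal software sketch of the lens's Fourier-transforming property (a tiny hand-rolled 2D DFT; the grid size and input field are illustrative): a uniform input field concentrates into the zero-frequency term, the discrete analogue of a lens focusing collimated light to a point.

```python
import cmath

# Direct 2D discrete Fourier transform of a small square field.
def dft2(field):
    n = len(field)
    out = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0j
            for x in range(n):
                for y in range(n):
                    s += field[x][y] * cmath.exp(
                        -2j * cmath.pi * (u * x + v * y) / n)
            out[u][v] = s
    return out

plane_wave = [[1.0] * 4 for _ in range(4)]   # uniform ("collimated") input
spectrum = dft2(plane_wave)
print(abs(spectrum[0][0]))                   # all energy in the DC term: 16.0
```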
Physical computers whose design was inspired by the theoretical Ising model are called Ising machines.
Yoshihisa Yamamoto's lab at Stanford pioneered building Ising machines using photons. Initially Yamamoto and his colleagues built an Ising machine using lasers, mirrors, and other optical components commonly found on an optical table.
Later a team at Hewlett Packard Labs developed photonic chip design tools and used them to build an Ising machine on a single chip, integrating 1,052 optical components on that single chip.
Some additional companies involved with optical computing development include IBM, Microsoft, Procyon Photonics, Lightelligence, Lightmatter, Optalysys, Xanadu Quantum Technologies, ORCA Computing, PsiQuantum, Quandela, and TundraSystems Global.
A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an electro-mechanical data storage device that stores and retrieves digital data using magnetic storage with one or more rigid, rapidly rotating platters coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored and retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data when powered off. Modern HDDs are typically in the form of a small rectangular box.
Hard disk drives were introduced by IBM in 1956, and were the dominant secondary storage device for general-purpose computers beginning in the early 1960s. HDDs maintained this position into the modern era of servers and personal computers, though personal computing devices produced in large volume, like mobile phones and tablets, rely on flash memory storage devices. More than 224 companies have produced HDDs historically, though after extensive industry consolidation, most units are manufactured by Seagate, Toshiba, and Western Digital. HDDs dominate the volume of storage produced for servers. Though production is growing slowly, sales revenues and unit shipments are declining, because solid-state drives have higher data-transfer rates, higher areal storage density, somewhat better reliability, and much lower latency and access times.
The revenues for SSDs, most of which use NAND flash memory, slightly exceeded those for HDDs in 2018. Flash storage products had more than twice the revenue of hard disk drives as of 2017. Though SSDs have four to nine times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, high capacity and durability are important. As of 2019, the cost per bit of SSDs is falling, and the price premium over HDDs has narrowed.
The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte drive has a capacity of 1,000 gigabytes, where 1 gigabyte = 1,000 megabytes = 1,000,000 kilobytes = 1,000,000,000 bytes. Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. There can be confusion regarding storage capacity, since capacities are stated in decimal gigabytes by HDD manufacturers, whereas the most commonly used operating systems report capacities in powers of 1024, which results in a smaller number than advertised. Performance is specified as the time required to move the heads to a track or cylinder (seek time), the time it takes for the desired sector to move under the head (rotational latency), and finally, the speed at which the data is transmitted (data rate).
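The decimal-versus-binary discrepancy described above is easy to quantify; a drive sold as "1 TB" (10^12 bytes) is reported by an operating system counting in powers of 1024 as roughly 931 "GB" (strictly, GiB):

```python
# Manufacturers count in powers of 1000; many operating systems
# report capacity in powers of 1024, producing a smaller number.
advertised_bytes = 1_000_000_000_000      # 1 TB, decimal
reported_gib = advertised_bytes / 1024**3
print(round(reported_gib, 1))             # 931.3
```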
The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as SATA, USB, SAS, or PATA cables.
The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was approximately the size of two large refrigerators and stored five million six-bit characters on a stack of 52 disks. The 350 had a single arm with two read/write heads, one facing up and the other down, that moved both horizontally between a pair of adjacent platters and vertically from one pair of platters to a second set. Variants of the IBM 350 were the IBM 355, IBM 7300 and IBM 1405.
Over time, as recording densities were greatly increased, further reductions in disk diameter to 3.5" and 2.5" were found to be optimal. Powerful rare-earth magnet materials became affordable during this period and complemented the swing-arm actuator design, making possible the compact form factors of modern HDDs.
As the 1980s began, HDDs were a rare and very expensive additional feature in PCs, but by the late 1980s, their cost had been reduced to the point where they were standard on all but the cheapest computers.
Most HDDs in the early 1980s were sold to PC end users as an external, add-on subsystem. The subsystem was not sold under the drive manufacturer's name but under the subsystem manufacturer's name such as Corvus Systems and Tallgrass Technologies, or under the PC system manufacturer's name such as the Apple ProFile. The IBM PC/XT in 1983 included an internal 10 MB HDD, and soon thereafter, internal HDDs proliferated on personal computers.
External HDDs remained popular for much longer on the Apple Macintosh. Many Macintosh computers made between 1986 and 1998 featured a SCSI port on the back, making external expansion simple. Older compact Macintosh computers did not have user-accessible hard drive bays, so on those models, external SCSI disks were the only reasonable option for expanding upon any internal storage.
HDD improvements have been driven by increasing areal density, listed in the table above. Applications expanded through the 2000s, from the mainframe computers of the late 1950s to most mass storage applications including computers and consumer applications such as storage of entertainment content.
In the 2000s and 2010s, NAND began supplanting HDDs in applications requiring portability or high performance. NAND performance is improving faster than HDDs, and applications for HDDs are eroding. In 2018, the largest hard drive had a capacity of 15 TB, while the largest-capacity SSD had a capacity of 100 TB. As of 2018, HDDs were forecast to reach 100 TB capacities around 2025, but as of 2019, the expected pace of improvement was pared back to 50 TB by 2026. Smaller form factors, 1.8-inch and below, were discontinued around 2010. The cost of solid-state storage, represented by Moore's law, is improving faster than HDDs. NAND has a higher price elasticity of demand than HDDs, and this drives market growth. During the late 2000s and 2010s, the product life cycle of HDDs entered a mature phase, and slowing sales may indicate the onset of the declining phase.
The 2011 Thailand floods damaged manufacturing plants and adversely affected hard disk drive costs between 2011 and 2013.
In 2019, Western Digital closed its last Malaysian HDD factory due to decreasing demand, to focus on SSD production. All three remaining HDD manufacturers have had decreasing demand for their HDDs since 2014.
A modern HDD records data by magnetizing a thin film of ferromagnetic material on both sides of a disk. Sequential changes in the direction of magnetization represent binary data bits. The data is read from the disk by detecting the transitions in magnetization. User data is encoded using an encoding scheme, such as run-length limited encoding, which determines how the data is represented by the magnetic transitions.
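The transition-based idea can be sketched with a simple NRZI-style code, in which a 1 bit is written as a flux reversal and a 0 bit as no reversal; real drives layer run-length limited codes on top of this, so the sketch illustrates only the principle:

```python
# A 1 bit reverses the magnetization level; a 0 bit leaves it unchanged.
def nrzi_encode(bits, start=0):
    level, out = start, []
    for b in bits:
        if b == 1:
            level ^= 1                 # flux reversal
        out.append(level)
    return out

# Reading recovers bits by detecting transitions between levels.
def nrzi_decode(levels, start=0):
    prev, out = start, []
    for lv in levels:
        out.append(1 if lv != prev else 0)
        prev = lv
    return out

data = [1, 0, 1, 1, 0, 0, 1]
print(nrzi_decode(nrzi_encode(data)) == data)   # round-trips: True
```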
A typical HDD design consists of a spindle that holds flat circular disks, called platters, which hold the recorded data. The platters are made from a non-magnetic material, usually aluminum alloy, glass, or ceramic. They are coated with a shallow layer of magnetic material typically 10–20 nm in depth, with an outer layer of carbon for protection. For reference, a standard piece of copy paper is 0.07–0.18 mm thick.
The platters in contemporary HDDs are spun at speeds varying from 4,200 rpm in energy-efficient portable devices, to 15,000 rpm for high-performance servers. The first HDDs spun at 1,200 rpm and, for many years, 3,600 rpm was the norm. As of November 2019, the platters in most consumer-grade HDDs spin at 5,400 or 7,200 rpm.
Information is written to and read from a platter as it rotates past devices called read-and-write heads that are positioned to operate very close to the magnetic surface, with their flying height often in the range of tens of nanometers. The read-and-write head is used to detect and modify the magnetization of the material passing immediately under it.
In modern drives, there is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm moves the heads on an arc across the platters as they spin, allowing each head to access almost the entire surface of the platter as it spins. The arm is moved using a voice coil actuator or, in some older designs, a stepper motor. Early hard disk drives wrote data at some constant bits per second, resulting in all tracks having the same amount of data per track, but modern drives use zone bit recording, increasing the write speed from inner to outer zone and thereby storing more data per track in the outer zones.
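The zone-bit-recording point can be illustrated with simple geometry (the linear density and radii below are illustrative, not real drive parameters): at a roughly constant linear bit density, a track's capacity scales with its circumference, hence with its radius.

```python
import math

bits_per_mm = 1000                     # illustrative linear density
for radius_mm in (15, 30, 45):
    track_bits = 2 * math.pi * radius_mm * bits_per_mm
    print(radius_mm, int(track_bits))  # outer tracks hold more bits
```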
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects, a thermally induced magnetic instability commonly known as the "superparamagnetic limit". To counter this, the platters are coated with two parallel magnetic layers, separated by a three-atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects to allow greater recording densities is perpendicular recording, first shipped in 2005, and as of 2007, used in certain HDDs. Perpendicular recording may be accompanied by changes in the manufacturing of the read/write heads to increase the strength of the magnetic field created by the heads.
In 2004, a higher-density recording media was introduced, consisting of coupled soft and hard magnetic layers. So-called exchange spring media magnetic storage technology, also known as exchange coupled composite media, allows good writability due to the write-assist nature of the soft layer. However, the thermal stability is determined only by the hardest layer and not influenced by the soft layer.
The voice coil itself is shaped rather like an arrowhead and is made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead then interact with the magnetic field of the fixed magnet. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore, the surface of the magnet is half north pole and half south pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head.
The HDD's electronics control the movement of the actuator and the rotation of the disk, and transfer data to and from a disk controller. Feedback of the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. These are either complete concentric circles or segments interspersed with real data. The servo feedback optimizes the signal-to-noise ratio of the GMR sensors by adjusting the voice-coil motor to rotate the arm. More modern servo systems also employ milli and/or micro actuators to position the read/write heads more accurately. The spinning of the disks uses fluid-bearing spindle motors. Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media that have failed.
Modern drives make extensive use of error correction codes, particularly Reed–Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage capacity. For example, a typical 1 TB hard disk with 512-byte sectors provides additional capacity of about 93 GB for the ECC data.
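Rough arithmetic behind the ~93 GB figure above, assuming on the order of 50 ECC bytes stored alongside each 512-byte sector (the 48-byte value is an assumption for illustration; the real per-sector overhead varies by drive):

```python
user_bytes = 10**12                    # 1 TB of user data
sectors = user_bytes // 512            # about 1.95 billion sectors
ecc_bytes_per_sector = 48              # assumed, for illustration
ecc_total_gb = sectors * ecc_bytes_per_sector / 10**9
print(round(ecc_total_gb, 1))          # 93.8, the same order as quoted
```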
In the newest drives, as of 2009, low-density parity-check codes were supplanting Reed–Solomon; LDPC codes enable performance close to the Shannon limit and thus provide the highest storage density available.
Typical hard disk drives attempt to "remap" the data in a physical sector that is failing to a spare physical sector provided by the drive's "spare sector pool", relying on the ECC to recover stored data while the number of errors in a bad sector is still low enough. The S.M.A.R.T. feature counts the total number of errors in the entire HDD fixed by ECC, and the total number of performed sector remappings, as the occurrence of many such errors may predict an HDD failure.
The "No-ID Format", developed by IBM in the mid-1990s, contains information about which sectors are bad and where remapped sectors have been located.
Only a tiny fraction of the detected errors end up uncorrectable. Examples of specified uncorrected bit read error rates include:
Within a given manufacturer's model, the uncorrected bit error rate is typically the same regardless of the capacity of the drive.
The worst type of error is silent data corruption: errors undetected by the disk firmware or the host operating system. Some of these errors may be caused by hard disk drive malfunctions, while others originate elsewhere in the connection between the drive and the host.
The World Wide Web is an information system that enables content sharing over the Internet through user-friendly ways meant to appeal to users beyond IT specialists and hobbyists. It allows documents and other web resources to be accessed over the Internet according to specific rules of the Hypertext Transfer Protocol (HTTP).
The Web was invented by English computer scientist Tim Berners-Lee while at CERN in 1989 and opened to the public in 1991. It was conceived as a "universal linked information system". Documents and other media content are made available to the network through web servers and can be accessed by programs such as web browsers. Servers and resources on the World Wide Web are identified and located through character strings called uniform resource locators (URLs).
The original and still very common document type is a web page formatted in Hypertext Markup Language (HTML). This markup language supports plain text, images, embedded video and audio contents, and scripts that implement complex user interaction. The HTML language also supports hyperlinks which provide immediate access to other web resources. Web navigation, or web surfing, is the common practice of following such hyperlinks across multiple websites. Web applications are web pages that function as application software. The information in the Web is transferred across the Internet using HTTP. Multiple web resources with a common theme and usually a common domain name make up a website. A single web server may provide multiple websites, while some websites, especially the most popular ones, may be provided by multiple servers. Website content is provided by a myriad of companies, organizations, government agencies, and individual users; and comprises an enormous amount of educational, entertainment, commercial, and government information.
The Web has become the world's dominant information systems platform. It is the primary tool that billions of people worldwide use to interact with the Internet.
The Web was invented by English computer scientist Tim Berners-Lee while working at CERN. He was motivated by the problem of storing, updating, and finding documents and data files in that large and constantly changing organization, as well as distributing them to collaborators outside CERN. In his design, Berners-Lee dismissed the common tree-structure approach, used for instance in the existing CERNDOC documentation system and in the Unix filesystem, as well as approaches that relied on tagging files with keywords, as in the VAX/NOTES system. Instead he adopted concepts he had put into practice with his private ENQUIRE system built at CERN. When he became aware of Ted Nelson's hypertext model, in which documents can be linked in unconstrained ways through hyperlinks associated with "hot spots" embedded in the text, it helped to confirm the validity of his concept.
The model was later popularized by Apple's HyperCard system. Unlike HyperCard, Berners-Lee's new system from the outset was meant to support links between multiple databases on independent computers, and to allow simultaneous access by many users from any computer on the Internet. He also specified that the system should eventually handle other media besides text, such as graphics, speech, and video. Links could refer to mutable data files, or even fire up programs on their server computer. He also conceived "gateways" that would allow access through the new system to documents organized in other ways. Finally, he insisted that the system should be decentralized, without any central control or coordination over the creation of links.
Berners-Lee submitted a proposal to CERN in May 1989, without giving the system a name. He got a working system implemented by the end of 1990, including a browser called WorldWideWeb and an HTTP server running at CERN. As part of that development he defined the first version of the HTTP protocol, the basic URL syntax, and implicitly made HTML the primary document format. The technology was released outside CERN to other research institutions starting in January 1991, and then to the whole Internet on 23 August 1991. The Web was a success at CERN, and began to spread to other scientific and academic institutions. Within the next two years, there were 50 websites created.
CERN made the Web protocol and code available royalty free in 1993, enabling its widespread use. After the NCSA released the Mosaic web browser later that year, the Web's popularity grew rapidly as thousands of websites sprang up in less than a year. Mosaic was a graphical browser that could display inline images and submit forms that were processed by the HTTPd server. Marc Andreessen and Jim Clark founded Netscape the following year and released the Navigator browser, which introduced Java and JavaScript to the Web. It quickly became the dominant browser. Netscape became a public company in 1995 which triggered a frenzy for the Web and started the dot-com bubble. Microsoft responded by developing its own browser, Internet Explorer, starting the browser wars. By bundling it with Windows, it became the dominant browser for 14 years.
Berners-Lee founded the World Wide Web Consortium (W3C), which created XML in 1996 and recommended replacing HTML with stricter XHTML. In the meantime, developers began exploiting an IE feature called XMLHttpRequest to make Ajax applications, launching the Web 2.0 revolution. Mozilla, Opera, and Apple rejected XHTML and created the WHATWG, which developed HTML5. In 2009, the W3C conceded and abandoned XHTML. In 2019, it ceded control of the HTML specification to the WHATWG.
The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet.
Tim Berners-Lee states that World Wide Web is officially spelled as three separate words, each capitalised, with no intervening hyphens. Nonetheless, it is often called simply the Web, and also often the web; see Capitalization of Internet for details. In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng, which satisfies www and literally means "10,000-dimensional net", a translation that reflects the design concept and proliferation of the World Wide Web.
Use of the www prefix has been declining, especially when web applications sought to brand their domain names and make them easily pronounceable. As the mobile Web grew in popularity, services like Gmail.com, Outlook.com, Myspace.com, Facebook.com and Twitter.com are most often mentioned without adding "www." to the domain.
In English, www is usually read as double-u double-u double-u. Some users pronounce it dub-dub-dub, particularly in New Zealand. Stephen Fry, in his "Podgrams" series of podcasts, pronounces it wuh wuh wuh. The English writer Douglas Adams once quipped in The Independent on Sunday: "The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for".
The terms Internet and World Wide Web are often used without much distinction. However, the two terms do not mean the same thing. The Internet is a global system of computer networks interconnected through telecommunications and optical networking. In contrast, the World Wide Web is a global collection of documents and other resources, linked by hyperlinks and URIs. Web resources are accessed using HTTP or HTTPS, which are application-level Internet protocols that use the Internet's transport protocols.
Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser or by following a hyperlink to that page or resource. The web browser then initiates a series of background communication messages to fetch and display the requested page. In the 1990s, using a browser to view web pages—and to move from one web page to another through hyperlinks—came to be known as 'browsing', 'web surfing', or 'navigating the Web'. Early studies of this new behavior investigated user patterns in using web browsers. One study, for example, found five user patterns: exploratory surfing, window surfing, evolved surfing, bounded navigation and targeted navigation.
The following example demonstrates the functioning of a web browser when accessing a page at the URL http://example.org/home.html. The browser resolves the server name of the URL into an Internet Protocol address using the globally distributed Domain Name System. This lookup returns an IP address such as 203.0.113.4 or 2001:db8:2e::7334. The browser then requests the resource by sending an HTTP request across the Internet to the computer at that address. It requests service from a specific TCP port number that is well known for the HTTP service, so that the receiving host can distinguish an HTTP request from other network protocols it may be servicing. HTTP normally uses port number 80 and for HTTPS it normally uses port number 443. The content of the HTTP request can be as simple as two lines of text:
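For the URL above, such a minimal request, in standard HTTP/1.1 form, would read:

```http
GET /home.html HTTP/1.1
Host: example.org
```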
The computer receiving the HTTP request delivers it to web server software listening for requests on port 80. If the web server can fulfil the request, it sends an HTTP response back to the browser indicating success:
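A minimal success response might begin, for example:

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
```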
followed by the content of the requested page. Hypertext Markup Language for a basic web page might look like this:
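For instance (an illustrative page, not the actual content served at example.org):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Example.org Homepage</title>
  </head>
  <body>
    <p>Hello, World Wide Web!</p>
  </body>
</html>
```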
The web browser parses the HTML and interprets the markup that surrounds the words to format the text on the screen. Many web pages use HTML to reference the URLs of other resources such as images, other embedded media, scripts that affect page behaviour, and Cascading Style Sheets that affect page layout. The browser makes additional HTTP requests to the web server for these other Internet media types. As it receives their content from the web server, the browser progressively renders the page onto the screen as specified by its HTML and these additional resources.
Hypertext Markup Language is the standard markup language for creating web pages and web applications. With Cascading Style Sheets and JavaScript, it forms a triad of cornerstone technologies for the World Wide Web.
Web browsers receive HTML documents from a web server or from local storage and render the documents into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document.
HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects such as interactive forms may be embedded into the rendered page. HTML provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets. Tags such as <img /> and <input /> directly introduce content into the page. Other tags such as <p> surround and provide information about document text and may include other tags as sub-elements. Browsers do not display the HTML tags, but use them to interpret the content of the page.
HTML can embed programs written in a scripting language such as JavaScript, which affects the behavior and content of web pages. Inclusion of CSS defines the look and layout of content. The World Wide Web Consortium, maintainer of both the HTML and the CSS standards, has encouraged the use of CSS over explicit presentational HTML since 1997.
Most web pages contain hyperlinks to other related pages and perhaps to downloadable files, source documents, definitions and other web resources. In the underlying HTML, a hyperlink looks like this: <a href="http://example.org/home.html">Example.org Homepage</a>.
Such a collection of useful, related resources, interconnected via hypertext links is dubbed a web of information. Publication on the Internet created what Tim Berners-Lee first called the WorldWideWeb in November 1990.
The hyperlink structure of the Web is described by the webgraph: the nodes of the webgraph correspond to the web pages, and the directed edges between them to the hyperlinks. Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link rot, and the hyperlinks affected by it are often called "dead" links. The ephemeral nature of the Web has prompted many efforts to archive websites. The Internet Archive, active since 1996, is the best known of such efforts.
Many hostnames used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts according to the services they provide. The hostname of a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a Usenet news server. These hostnames appear as Domain Name System (DNS) or subdomain names, as in www.example.com. The use of www is not required by any technical or policy standard and many websites do not use it; the first web server was nxoc01.cern.ch. According to Paolo Palazzi, who worked at CERN along with Tim Berners-Lee, the popular use of www as a subdomain was accidental; the World Wide Web project page was intended to be published at www.cern.ch while info.cern.ch was intended to be the CERN home page; however, the DNS records were never switched, and the practice of prepending www to an institution's website domain name was subsequently copied. Many established websites still use the prefix, or they employ other subdomain names such as www2, secure or en for special purposes. Many such web servers are set up so that both the main domain name and the www subdomain refer to the same site; others require one form or the other, or they may map to different websites.

The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since, currently, only a subdomain can be used in a CNAME, the same result cannot be achieved by using the bare domain root.
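The CNAME pattern can be sketched as a zone-file fragment (all names and the address below are hypothetical, using documentation ranges):

```
; www is a CNAME to a web-server cluster name, which DNS rules forbid
; at the bare domain apex; the apex must carry address records directly.
example.com.      IN  A      203.0.113.4
www.example.com.  IN  CNAME  webcluster.example.net.
```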
When a user submits an incomplete domain name in a web browser's address bar, some browsers automatically try prepending "www" and possibly appending ".com", ".org", or ".net", depending on what might be missing. For example, entering "microsoft" may be transformed to http://www.microsoft.com/ and "openoffice" to http://www.openoffice.org. This feature appeared in early versions of Firefox in early 2003, when the browser still had the working title 'Firebird', following an earlier practice in browsers such as Lynx. Microsoft was reportedly granted a US patent for the same idea in 2008, but only for mobile devices.
The scheme specifiers http:// and https:// at the start of a web URI refer to Hypertext Transfer Protocol or HTTP Secure, respectively. They specify the communication protocol to use for the request and response. The HTTP protocol is fundamental to the operation of the World Wide Web, and the added encryption layer in HTTPS is essential when browsers send or retrieve confidential data, such as passwords or banking information. Web browsers usually automatically prepend http:// to user-entered URIs, if omitted.
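The components of a web URI, including its scheme, can be inspected with the WHATWG URL API, available in modern browsers and in Node.js. A small sketch (the address is a hypothetical example):

```javascript
// Parse a web URI and inspect its components (WHATWG URL API).
const secure = new URL("https://www.example.com/accounts?id=42");
console.log(secure.protocol); // "https:"
console.log(secure.hostname); // "www.example.com"

// Browsers typically prepend "http://" when the user omits the scheme;
// the URL constructor, by contrast, rejects scheme-less input outright.
let parsed = null;
try {
  parsed = new URL("www.example.com");
} catch (e) {
  parsed = null; // TypeError: Invalid URL
}
console.log(parsed === null); // true
```

This is why user agents, not the URL grammar itself, are responsible for filling in a default scheme.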
A web page is a document that is suitable for the World Wide Web and web browsers. A web browser displays a web page on a monitor or mobile device.
The term web page usually refers to what is visible, but may also refer to the contents of the computer file itself, which is usually a text file containing hypertext written in HTML or a comparable markup language. Typical web pages provide hypertext for browsing to other web pages via hyperlinks, often referred to simply as links. While presenting a web page, a web browser frequently has to access additional web resources, such as style sheets, scripts, and images.
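As a sketch of what such a text file contains, a minimal HTML page referencing a style sheet, a script, an image, and another page via a hyperlink might look like this (all file names are hypothetical):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Example page</title>
    <link rel="stylesheet" href="style.css">  <!-- style sheet resource -->
    <script src="app.js"></script>            <!-- script resource -->
  </head>
  <body>
    <h1>Hello</h1>
    <img src="logo.png" alt="Logo">           <!-- image resource -->
    <a href="page2.html">Next page</a>        <!-- hyperlink to another page -->
  </body>
</html>
```

Each referenced resource triggers its own HTTP request when the browser renders the page.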
On a network, a web browser can retrieve a web page from a remote web server. The web server may restrict access to a private network such as a corporate intranet. The web browser uses the Hypertext Transfer Protocol to make such requests to the web server.
A static web page is delivered exactly as stored, as web content in the web server's file system. In contrast, a dynamic web page is generated by a web application, usually driven by server-side software. Dynamic web pages are used when each user may require completely different information, for example, bank websites, web email, etc.
A static web page is a web page that is delivered to the user exactly as stored, in contrast to dynamic web pages which are generated by a web application.
Consequently, a static web page displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so.
A server-side dynamic web page is a web page whose construction is controlled by an application server processing server-side scripts. In server-side scripting, parameters determine how the assembly of every new web page proceeds, including the setting up of more client-side processing.
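A minimal sketch of this idea in JavaScript: the same server-side template assembles different HTML for different users. The user data here is hypothetical; a real application would fetch it from a database or session store.

```javascript
// Server-side page assembly: parameters determine the generated page.
function renderAccountPage(user) {
  return [
    "<!DOCTYPE html>",
    "<html><head><title>Account</title></head><body>",
    `<h1>Welcome, ${user.name}</h1>`,
    `<p>Your balance is ${user.balance.toFixed(2)}</p>`,
    "</body></html>",
  ].join("\n");
}

// Two requests, two different pages -- unlike a static page,
// which is byte-for-byte identical for every visitor.
const pageA = renderAccountPage({ name: "Alice", balance: 120.5 });
const pageB = renderAccountPage({ name: "Bob", balance: 9.99 });
console.log(pageA !== pageB); // true
```

The same mechanism can also emit client-side scripts into the page, setting up further processing in the browser.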
A client-side dynamic web page processes the web page using JavaScript running in the browser. JavaScript programs can interact with the document via the Document Object Model (DOM) to query page state and alter it; the browser then re-renders the updated DOM without loading a new page.
A dynamic web page is then reloaded by the user or by a computer program to change some variable content. The updating information could come from the server, or from changes made to that page's DOM. This may or may not truncate the browsing history or create a saved version to go back to, but a dynamic web page update using Ajax technologies will neither create a page to go back to nor truncate the web browsing history forward of the displayed page. Using Ajax technologies, the end user gets one dynamic page managed as a single page in the web browser, while the actual web content rendered on that page can vary. The Ajax engine sits in the browser, requesting from an application server only the parts of the DOM that its client needs.
Dynamic HTML, or DHTML, is the umbrella term for technologies and methods used to create web pages that are not static, though it has fallen out of common use since the popularization of AJAX, a term which is now itself rarely used. Client-side scripting, server-side scripting, or a combination of the two make for the dynamic web experience in a browser.
JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages. The standardised version is ECMAScript. To make web pages more interactive, some web applications also use JavaScript techniques such as Ajax. Client-side script is delivered with the page and can make additional HTTP requests to the server, either in response to user actions such as mouse movements or clicks, or based on elapsed time. The server's responses are used to modify the current page rather than creating a new page with each response, so the server needs only to provide limited, incremental information. Multiple Ajax requests can be handled at the same time, and users can interact with the page while data is retrieved. Web pages may also regularly poll the server to check whether new information is available.
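The incremental-update pattern described above can be sketched as follows. Here `fetchUpdate` is a synchronous stand-in for an HTTP request (in a browser it would be an asynchronous call such as `fetch()`), and the item names are invented for illustration:

```javascript
function fetchUpdate(since) {
  // Hypothetical server stub: returns only the items newer than `since`.
  const allItems = ["news-1", "news-2", "news-3"];
  return allItems.slice(since);
}

function pollOnce(state) {
  const fresh = fetchUpdate(state.items.length);
  state.items.push(...fresh); // merge the incremental response into page state
  return fresh.length;        // how many new items arrived
}

const state = { items: [] };
console.log(pollOnce(state)); // 3  (first poll retrieves everything)
console.log(pollOnce(state)); // 0  (no new data; page state unchanged)
```

The point is that each response carries only the delta, which the script folds into the current page rather than replacing the page wholesale.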
A website is a collection of related web resources, including web pages and multimedia content, typically identified with a common domain name and published on at least one web server. Notable examples are wikipedia.org, google.com, and amazon.com.
A website may be accessible via a public Internet Protocol network, such as the Internet, or a private local area network, by referencing a uniform resource locator that identifies the site.
Websites can have many functions and can be used in various fashions; a website can be a personal website, a corporate website for a company, a government website, an organization website, etc. Websites are typically dedicated to a particular topic or purpose, ranging from entertainment and social networking to providing news and education. All publicly accessible websites collectively constitute the World Wide Web, while private websites, such as a company's website for its employees, are typically a part of an intranet.
Web pages, which are the building blocks of websites, are documents, typically composed in plain text interspersed with formatting instructions of Hypertext Markup Language. They may incorporate elements from other websites with suitable markup anchors. Web pages are accessed and transported with the Hypertext Transfer Protocol, which may optionally employ encryption to provide security and privacy for the user. The user's application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal.
Hyperlinking between web pages conveys to the reader the site structure and guides the navigation of the site, which often starts with a home page containing a directory of the site web content. Some websites require user registration or subscription to access content. Examples of subscription websites include many business sites, news websites, academic journal websites, gaming websites, file-sharing websites, message boards, web-based email, social networking websites, websites providing real-time price quotations for different types of markets, as well as sites providing various other services. End users can access websites on a range of devices, including desktop and laptop computers, tablet computers, smartphones and smart TVs.
A web browser is a software user agent for accessing information on the World Wide Web. To connect to a website's server and display its pages, a user needs to have a web browser program. This is the program that the user runs to download, format, and display a web page on the user's computer.
In addition to allowing users to find, display, and move between web pages, a web browser will usually have features like keeping bookmarks, recording history, managing cookies, and home pages, and may have facilities for recording passwords for logging into websites.
The most popular browsers are Chrome, Firefox, Safari, Internet Explorer, and Edge.
A web server is server software, or hardware dedicated to running that software, that can satisfy World Wide Web client requests. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several related protocols.
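The core of what a web server does can be sketched as a request handler that maps an HTTP method and path onto a stored resource, or a 404 when none exists. The "file system" here is an in-memory object, purely for illustration:

```javascript
// Toy web-server routing: method + path in, status + body out.
const site = {
  "/": "<html><body>Home</body></html>",
  "/about": "<html><body>About</body></html>",
};

function handleRequest(method, path) {
  if (method !== "GET") {
    return { status: 405, body: "Method Not Allowed" };
  }
  const body = site[path];
  return body !== undefined
    ? { status: 200, body }
    : { status: 404, body: "Not Found" };
}

console.log(handleRequest("GET", "/").status);        // 200
console.log(handleRequest("GET", "/missing").status); // 404
```

A real server wraps a handler like this in a network listener (for example Node's `http.createServer`), parses the raw HTTP request into method and path, and serializes the result back onto the socket.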