diff --git "a/Computer_science final.csv" "b/Computer_science final.csv" new file mode 100644--- /dev/null +++ "b/Computer_science final.csv" @@ -0,0 +1,9375 @@ +Number,Text +1,"A computer is a machine that can be programmed to automatically carry out sequences of arithmetic or logical operations . Modern digital electronic computers can perform generic sets of operations known as programs. These programs enable computers to perform a wide range of tasks. The term computer system may refer to a nominally complete computer that includes the hardware, operating system, software, and peripheral equipment needed and used for full operation; or to a group of computers that are linked and function together, such as a computer network or computer cluster." +2,"A broad range of industrial and consumer products use computers as control systems, including simple special-purpose devices like microwave ovens and remote controls, and factory devices like industrial robots. Computers are at the core of general-purpose devices such as personal computers and mobile devices such as smartphones. Computers power the Internet, which links billions of computers and users." +3,"Early computers were meant to be used only for calculations. Simple manual instruments like the abacus have aided people in doing calculations since ancient times. Early in the Industrial Revolution, some mechanical devices were built to automate long, tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II, both electromechanical and using thermionic valves. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET and monolithic integrated circuit chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. 
The speed, power and versatility of computers have been increasing dramatically ever since then, with transistor counts increasing at a rapid pace, leading to the Digital Revolution during the late 20th to early 21st centuries." +4,"Conventionally, a modern computer consists of at least one processing element, typically a central processing unit in the form of a microprocessor, together with some type of computer memory, typically semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices, output devices, and input/output devices that perform both functions. Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved." +5,"According to the Oxford English Dictionary, the first known use of computer was in a 1613 book called The Yong Mans Gleanings by the English writer Richard Brathwait: ""I haue read the truest computer of Times, and the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number."" This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were often hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women." +6,"The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning 'one who calculates'; this is an ""agent noun from compute"".
The Online Etymology Dictionary states that the use of the term to mean ""'calculating machine' is from 1897."" It indicates that the ""modern use"" of the term, to mean 'programmable digital electronic computer', dates from ""1945 under this name; theoretical from 1937, as Turing machine""." +7,"Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was most likely a form of tally stick. Later record keeping aids throughout the Fertile Crescent included calculi which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example." +8,"The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BCE. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money." +9,"The Antikythera mechanism is believed to be the earliest known mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BCE. Devices of comparable complexity to the Antikythera mechanism would not reappear until the fourteenth century." +10,"Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BCE and is often attributed to Hipparchus.
A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD." +11,"The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation." +12,The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. +13,"The slide rule was invented around 1620–1630 by the English clergyman William Oughtred, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Slide rules with special scales are still used for quick performance of routine calculations, such as the E6B circular slide rule used for time and distance calculations on light aircraft." +14,"In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll that could write holding a quill pen. By switching the number and order of its internal wheels different letters, and hence different messages, could be produced. In effect, it could be mechanically ""programmed"" to read instructions. 
Along with two other complex machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still operates." +15,"In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar machine, which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from 0 CE to 4000 CE, keeping track of leap years and varying day length. The tide-predicting machine invented by the Scottish scientist Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location." +16,"The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876, Sir William Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers." +17,"In the 1890s, the Spanish engineer Leonardo Torres Quevedo began to develop a series of advanced analog machines that could solve real and complex roots of polynomials, which were published in 1901 by the Paris Academy of Sciences." +18,"Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the ""father of the computer"", he conceptualized and invented the first mechanical computer in the early 19th century."
+19,"After working on his difference engine, which he designed to aid in navigational calculations and announced in 1822 in a paper to the Royal Astronomical Society titled ""Note on the application of machinery to the computation of astronomical and mathematical tables"", in 1833 he realized that a much more general design, an analytical engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete." +20,"The machine was about a century ahead of its time. All the parts for his machine had to be made by hand – this was a major problem for a device with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to political and financial difficulties as well as his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry Babbage, completed a simplified version of the analytical engine's computing unit in 1888. He gave a successful demonstration of its use in computing tables in 1906." +21,"During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation.
However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers. The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson in 1872. The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the elder brother of the more famous Sir William Thomson." +22,"The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious. By the 1950s, the success of digital electronic computers had spelled the end for most analog computing machines, but analog computers remained in use during the 1950s in some specialized applications such as education and aircraft ." +23,"By 1938, the United States Navy had developed an electromechanical analog computer small enough to use aboard a submarine. This was the Torpedo Data Computer, which used trigonometry to solve the problem of firing a torpedo at a moving target. During World War II similar devices were developed in other countries as well." +24,"Early digital computers were electromechanical; electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created by German engineer Konrad Zuse in 1939 in Berlin, was one of the earliest examples of an electromechanical relay computer." +25,"In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. 
The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code was supplied on punched film while data could be stored in 64 words of memory or supplied from the keyboard. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating-point numbers. Using a binary system, rather than the harder-to-implement decimal system, meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was not itself a universal computer but could be extended to be Turing complete." +26,"Zuse's next computer, the Z4, became the world's first commercial computer; after initial delay due to the Second World War, it was completed in 1950 and delivered to the ETH Zurich. The computer was manufactured by Zuse's own company, Zuse KG, which was founded in 1941 as the first company with the sole purpose of developing computers in Berlin." +27,"Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. The engineer Tommy Flowers, working at the Post Office Research Station in London in the 1930s, began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation five years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes. In the US, John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed and tested the Atanasoff–Berry Computer in 1942, the first ""automatic electronic digital computer"". This design was also all-electronic and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory."
+28,"During World War II, the British code-breakers at Bletchley Park achieved a number of successes at breaking encrypted German military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes which were often run by women. To crack the more sophisticated German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman and his colleagues commissioned Flowers to build the Colossus. He spent eleven months from early February 1943 designing and building the first Colossus. After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February." +29,"Colossus was the world's first electronic digital programmable computer. It used a large number of valves. It had paper-tape input and was capable of being configured to perform a variety of Boolean logical operations on its data, but it was not Turing-complete. Nine Mk II Colossi were built. Colossus Mark I contained 1,500 thermionic valves, but Mark II, with 2,400 valves, was both five times faster and simpler to operate than Mark I, greatly speeding the decoding process." +30,"The ENIAC was the first electronic programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it was much faster, more flexible, and it was Turing-complete. Like the Colossus, a ""program"" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were six women, often known collectively as the ""ENIAC girls""." +31,"It combined the high speed of electronics with the ability to be programmed for many complex problems.
It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High speed memory was limited to 20 words . Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors." +32,"The principle of the modern computer was proposed by Alan Turing in his seminal 1936 paper, On Computable Numbers. Turing proposed a simple device that he called ""Universal Computing machine"" and that is now known as a universal Turing machine. He proved that such a machine is capable of computing anything that is computable by executing instructions stored on tape, allowing the machine to be programmable. The fundamental concept of Turing's design is the stored program, where all the instructions for computing are stored in memory. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine." +33,"Early computing machines had fixed programs. Changing its function required the re-wiring and re-structuring of the machine. With the proposal of the stored-program computer this changed. A stored-program computer includes by design an instruction set and can store in memory a set of instructions that details the computation. The theoretical basis for the stored-program computer was laid out by Alan Turing in his 1936 paper. 
In 1945, Turing joined the National Physical Laboratory and began work on developing an electronic stored-program digital computer. His 1945 report ""Proposed Electronic Calculator"" was the first specification for such a device. John von Neumann at the University of Pennsylvania also circulated his First Draft of a Report on the EDVAC in 1945." +34,"The Manchester Baby was the world's first stored-program computer. It was built at the University of Manchester in England by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948. It was designed as a testbed for the Williams tube, the first random-access digital storage device. Although the computer was described as ""small and primitive"" by a 1998 retrospective, it was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project began at the university to develop it into a practically useful computer, the Manchester Mark 1." +35,"The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer. Built by Ferranti, it was delivered to the University of Manchester in February 1951. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. In October 1947 the directors of British catering company J. Lyons & Company decided to take an active role in promoting the commercial development of computers. Lyons's LEO I computer, modelled closely on the Cambridge EDSAC of 1949, became operational in April 1951 and ran the world's first routine office computer job." +36,Grace Hopper was the first to develop a compiler for a programming language. +37,"The concept of a field-effect transistor was proposed by Julius Edgar Lilienfeld in 1925. 
John Bardeen and Walter Brattain, while working under William Shockley at Bell Labs, built the first working transistor, the point-contact transistor, in 1947, which was followed by Shockley's bipolar junction transistor in 1948. From 1955 onwards, transistors replaced vacuum tubes in computer designs, giving rise to the ""second generation"" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller and require less power than vacuum tubes, so they give off less heat. Junction transistors were much more reliable than vacuum tubes and had a longer, effectively indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialized applications." +38,"At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Their first transistorized computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer. That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell." +39,"The metal–oxide–silicon field-effect transistor, also known as the MOS transistor, was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. It was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses.
With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density integrated circuits. In addition to data processing, it also enabled the practical use of MOS transistors as memory cell storage elements, leading to the development of MOS semiconductor memory, which replaced earlier magnetic-core memory in computers. The MOSFET led to the microcomputer revolution, and became the driving force behind the computer revolution. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of digital electronics." +40,"The next great advance in computing power came with the advent of the integrated circuit . +The idea of the integrated circuit was first conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer. Dummer presented the first public description of an integrated circuit at the Symposium on Progress in Quality Electronic Components in Washington, D.C., on 7 May 1952." +41,"The first working ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. In his patent application of 6 February 1959, Kilby described his new device as ""a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated"". However, Kilby's invention was a hybrid integrated circuit , rather than a monolithic integrated circuit chip. Kilby's IC had external wire connections, which made it difficult to mass-produce." +42,"Noyce also came up with his own idea of an integrated circuit half a year later than Kilby. Noyce's invention was the first true monolithic IC chip. His chip solved many practical problems that Kilby's had not. 
Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. Noyce's monolithic IC was fabricated using the planar process, developed by his colleague Jean Hoerni in early 1959. In turn, the planar process was based on Mohamed M. Atalla's work on semiconductor surface passivation by silicon dioxide in the late 1950s." +43,"Modern monolithic ICs are predominantly MOS integrated circuits, built from MOSFETs. The earliest experimental MOS IC to be fabricated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS IC in 1964, developed by Robert Norman. Following the development of the self-aligned gate MOS transistor by Robert Kerwin, Donald Klein and John Sarace at Bell Labs in 1967, the first silicon-gate MOS IC with self-aligned gates was developed by Federico Faggin at Fairchild Semiconductor in 1968. The MOSFET has since become the most critical device component in modern ICs." +44,"The development of the MOS integrated circuit led to the invention of the microprocessor, and heralded an explosion in the commercial and personal use of computers. While the subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term ""microprocessor"", it is largely undisputed that the first single-chip microprocessor was the Intel 4004, designed and realized by Federico Faggin with his silicon-gate MOS IC technology, along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel. In the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors on a single chip." +45,"Systems on a chip (SoCs) are complete computers on a microchip the size of a coin. They may or may not have integrated RAM and flash memory.
If not integrated, the RAM is usually placed directly above or below the SoC, and the flash memory is usually placed right next to the SoC, all to improve data transfer speeds, as the data signals do not have to travel long distances. Since ENIAC in 1945, computers have advanced enormously, with modern SoCs being the size of a coin while also being hundreds of thousands of times more powerful than ENIAC, integrating billions of transistors, and consuming only a few watts of power." +46,"The first mobile computers were heavy and ran from mains power. The 50 lb IBM 5100 was an early example. Later portables such as the Osborne 1 and Compaq Portable were considerably lighter but still needed to be plugged in. The first laptops, such as the Grid Compass, removed this requirement by incorporating batteries – and with the continued miniaturization of computing resources and advancements in portable battery life, portable computers grew in popularity in the 2000s. The same developments allowed manufacturers to integrate computing resources into cellular mobile phones by the early 2000s." +47,"These smartphones and tablets run on a variety of operating systems and recently became the dominant computing device on the market. These are powered by systems on a chip (SoCs), which are complete computers on a microchip the size of a coin." +48,"Computers can be classified in a number of different ways, including:" +49,"The term hardware covers all of those parts of a computer that are tangible physical objects. Circuits, computer chips, graphic cards, sound cards, memory, motherboard, displays, power supplies, cables, keyboards, printers and ""mice"" input devices are all hardware." +50,"A general-purpose computer has four main components: the arithmetic logic unit, the control unit, the memory, and the input and output devices. These parts are interconnected by buses, often made of groups of wires.
Inside each of these parts are thousands to trillions of small electrical circuits which can be turned off or on by means of an electronic switch. Each circuit represents a bit of information so that when the circuit is on it represents a ""1"", and when off it represents a ""0"". The circuits are arranged in logic gates so that one or more of the circuits may control the state of one or more of the other circuits." +51,"When unprocessed data is sent to the computer with the help of input devices, the data is processed and sent to output devices. The input devices may be hand-operated or automated. The act of processing is mainly regulated by the CPU. Some examples of input devices are:" +52,"The means through which a computer gives output are known as output devices. Some examples of output devices are:" +53,"The control unit manages the computer's various components; it reads and interprets the program instructions, transforming them into control signals that activate other parts of the computer. Control systems in advanced computers may change the order of execution of some instructions to improve performance." +54,"A key component common to all CPUs is the program counter, a special memory cell that keeps track of which location in memory the next instruction is to be read from." +55,"The control system's function is as follows—this is a simplified description, and some of these steps may be performed concurrently or in a different order depending on the type of CPU:" +56,"Since the program counter is just another set of memory cells, it can be changed by calculations done in the ALU. Adding 100 to the program counter would cause the next instruction to be read from a place 100 locations further down the program. Instructions that modify the program counter are often known as ""jumps"" and allow for loops and often conditional instruction execution."
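The program-counter mechanism described above can be sketched in a few lines of Python. This is a toy machine with a made-up four-instruction set (LOAD, ADD, JUMP_IF_NEG, HALT are hypothetical names, not any real CPU's instructions); the point is that a "jump" is nothing more than a write to the program counter.

```python
# Toy fetch-execute loop (illustrative only; not a real ISA).
def run(program, steps=50):
    pc = 0   # program counter: index in memory of the next instruction
    acc = 0  # a single accumulator register
    for _ in range(steps):
        if pc >= len(program):
            break
        op, arg = program[pc]
        pc += 1                      # default: advance to the next instruction
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JUMP_IF_NEG" and acc < 0:
            pc = arg                 # a "jump": overwrite the program counter
        elif op == "HALT":
            break
    return acc

# Count up from -3 to 0 by jumping back to instruction 1 while acc is negative.
prog = [("LOAD", -3), ("ADD", 1), ("JUMP_IF_NEG", 1), ("HALT", 0)]
# run(prog) returns 0
```

Because the conditional jump rewrites the program counter, the same two instructions execute repeatedly — exactly the loop-and-branch behavior the text describes.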
+57,"The sequence of operations that the control unit goes through to process an instruction is in itself like a short computer program, and indeed, in some more complex CPU designs, there is yet another, smaller computer called a microsequencer, which runs a microcode program that causes all of these events to happen." +58,"The control unit, ALU, and registers are collectively known as a central processing unit. Early CPUs were composed of many separate components. Since the 1970s, CPUs have typically been constructed on a single MOS integrated circuit chip called a microprocessor." +59,"The ALU is capable of performing two classes of operations: arithmetic and logic. The set of arithmetic operations that a particular ALU supports may be limited to addition and subtraction, or might include multiplication, division, trigonometry functions such as sine, cosine, etc., and square roots. Some can operate only on whole numbers while others use floating point to represent real numbers, albeit with limited precision. However, any computer that is capable of performing just the simplest operations can be programmed to break down the more complex operations into simple steps that it can perform. Therefore, any computer can be programmed to perform any arithmetic operation—although it will take more time to do so if its ALU does not directly support the operation. An ALU may also compare numbers and return Boolean truth values depending on whether one is equal to, greater than or less than the other. Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These can be useful for creating complicated conditional statements and processing Boolean logic." +60,"Superscalar computers may contain multiple ALUs, allowing them to process several instructions simultaneously. Graphics processors and computers with SIMD and MIMD features often contain ALUs that can perform arithmetic on vectors and matrices."
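The two classes of ALU operations described above — arithmetic and logic, plus comparisons returning Boolean truth values — can be sketched as a single dispatch function. The operation names and the 8-bit word width are illustrative choices for the example, not any particular CPU's instruction set.

```python
# Minimal ALU sketch on 8-bit values (illustrative, not a real design).
MASK = 0xFF  # 8-bit word: results wrap around, like fixed-width hardware

def alu(op, a, b=0):
    if op == "ADD":  return (a + b) & MASK   # arithmetic
    if op == "SUB":  return (a - b) & MASK
    if op == "AND":  return a & b            # Boolean logic
    if op == "OR":   return a | b
    if op == "XOR":  return a ^ b
    if op == "NOT":  return ~a & MASK
    if op == "LT":   return int(a < b)       # comparison -> Boolean truth value
    raise ValueError(f"unsupported operation: {op}")

alu("ADD", 250, 10)          # 4 — overflow wraps within the 8-bit word
alu("XOR", 0b1100, 0b1010)   # 0b0110
```

The masking with `MASK` mirrors the fixed word length of a hardware ALU: there is no "carry out" into a ninth bit, which is why 250 + 10 yields 4 rather than 260.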
+61,"A computer's memory can be viewed as a list of cells into which numbers can be placed or read. Each cell has a numbered ""address"" and can store a single number. The computer can be instructed to ""put the number 123 into the cell numbered 1357"" or to ""add the number that is in cell 1357 to the number that is in cell 2468 and put the answer into cell 1595."" The information stored in memory may represent practically anything. Letters, numbers, even computer instructions can be placed into memory with equal ease. Since the CPU does not differentiate between different types of information, it is the software's responsibility to give significance to what the memory sees as nothing but a series of numbers." +62,"In almost all modern computers, each memory cell is set up to store binary numbers in groups of eight bits. Each byte is able to represent 256 different numbers, either from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used. When negative numbers are required, they are usually stored in two's complement notation. Other arrangements are possible, but are usually not seen outside of specialized applications or historical contexts. A computer can store any kind of information in memory if it can be represented numerically. Modern computers have billions or even trillions of bytes of memory." +63,"The CPU contains a special set of memory cells called registers that can be read and written to much more rapidly than the main memory area. There are typically between two and one hundred registers depending on the type of CPU. Registers are used for the most frequently needed data items to avoid having to access main memory every time data is needed. As data is constantly being worked on, reducing the need to access main memory greatly increases the computer's speed."
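The memory model described above, a list of numbered cells, and the two's complement storage of negative numbers can be sketched as follows. The cell numbers are the ones used in the text; the helper functions are illustrative, not from any library.

```python
# Memory as a list of numbered cells, using the addresses from the text.
memory = [0] * 4096

memory[1357] = 123                          # "put the number 123 into cell 1357"
memory[2468] = 77
memory[1595] = memory[1357] + memory[2468]  # add the two cells, store the answer

# A byte holds 8 bits, so it can represent 2**8 = 256 distinct values.
values_per_byte = 2 ** 8

# Two's complement: a negative number n fits in one byte as 256 + n.
def to_twos_complement(n, bits=8):
    return n & ((1 << bits) - 1)

def from_twos_complement(raw, bits=8):
    return raw - (1 << bits) if raw >= (1 << (bits - 1)) else raw
```

So the byte value 255 means 255 in an unsigned reading but −1 in a two's complement reading; which interpretation applies is, as the text says, the software's responsibility.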
+64,Computer main memory comes in two principal varieties: +65,"RAM can be read and written to any time the CPU commands it, but ROM is preloaded with data and software that never changes; therefore, the CPU can only read from it. ROM is typically used to store the computer's initial start-up instructions. In general, the contents of RAM are erased when the power to the computer is turned off, but ROM retains its data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates loading the computer's operating system from the hard disk drive into RAM whenever the computer is turned on or reset. In embedded computers, which frequently do not have disk drives, all of the required software may be stored in ROM. Software stored in ROM is often called firmware, because it is notionally more like hardware than software. Flash memory blurs the distinction between ROM and RAM, as it retains its data when turned off but is also rewritable. It is typically much slower than conventional ROM and RAM, however, so its use is restricted to applications where high speed is unnecessary." +66,"In more sophisticated computers there may be one or more RAM cache memories, which are slower than registers but faster than main memory. Generally computers with this sort of cache are designed to move frequently needed data into the cache automatically, often without the need for any intervention on the programmer's part." +67,"I/O is the means by which a computer exchanges information with the outside world. Devices that provide input or output to the computer are called peripherals. On a typical personal computer, peripherals include input devices like the keyboard and mouse, and output devices such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve as both input and output devices. Computer networking is another form of I/O.
+I/O devices are often complex computers in their own right, with their own CPU and memory. A graphics processing unit might contain fifty or more tiny computers that perform the calculations necessary to display 3D graphics. Modern desktop computers contain many smaller computers that assist the main CPU in performing I/O. A 2016-era flat screen display contains its own computer circuitry." +68,"While a computer may be viewed as running one gigantic program stored in its main memory, in some systems it is necessary to give the appearance of running several programs simultaneously. This is achieved by multitasking, i.e. having the computer switch rapidly between running each program in turn. One means by which this is done is with a special signal called an interrupt, which can periodically cause the computer to stop executing instructions where it was and do something else instead. By remembering where it was executing prior to the interrupt, the computer can return to that task later. If several programs are running ""at the same time"", then the interrupt generator might be causing several hundred interrupts per second, causing a program switch each time. Since modern computers typically execute instructions several orders of magnitude faster than human perception, it may appear that many programs are running at the same time even though only one is ever executing in any given instant. This method of multitasking is sometimes termed ""time-sharing"" since each program is allocated a ""slice"" of time in turn." +69,"Before the era of inexpensive computers, the principal use for multitasking was to allow many people to share the same computer. Seemingly, multitasking would cause a computer that is switching between several programs to run more slowly, in direct proportion to the number of programs it is running, but most programs spend much of their time waiting for slow input/output devices to complete their tasks.
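The time-sharing described above, each program receiving a "slice" of time in turn, can be sketched with a simple round-robin scheduler. The generator-based "programs" here are hypothetical stand-ins for real processes.

```python
# Round-robin time-sharing sketch: each "program" runs for one slice,
# then goes to the back of the queue, so execution interleaves.

def program(name, steps):
    # A stand-in process that performs `steps` units of work.
    for i in range(steps):
        yield f"{name} step {i}"

def round_robin(programs):
    trace = []
    while programs:
        prog = programs.pop(0)        # take the next program in turn
        try:
            trace.append(next(prog))  # run it for one time slice
            programs.append(prog)     # then send it to the back of the queue
        except StopIteration:
            pass                      # program finished; drop it
    return trace

trace = round_robin([program("A", 2), program("B", 2)])
# Slices interleave A, B, A, B: only one program runs at any instant,
# but both appear to make progress "at the same time".
```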
If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it will not take a ""time slice"" until the event it is waiting for has occurred. This frees up time for other programs to execute so that many programs may be run simultaneously without unacceptable speed loss." +70,"Some computers are designed to distribute their work across several CPUs in a multiprocessing configuration, a technique once employed in only large and powerful machines such as supercomputers, mainframe computers and servers. Multiprocessor and multi-core personal and laptop computers are now widely available, and are being increasingly used in lower-end markets as a result." +71,"Supercomputers in particular often have highly unique architectures that differ significantly from the basic stored-program architecture and from general-purpose computers. They often feature thousands of CPUs, customized high-speed interconnects, and specialized computing hardware. Such designs tend to be useful for only specialized tasks due to the large scale of program organization required to use most of the available resources at once. Supercomputers usually see usage in large-scale simulation, graphics rendering, and cryptography applications, as well as with other so-called ""embarrassingly parallel"" tasks." +72,"Software refers to parts of the computer which do not have a material form, such as programs, data, protocols, etc. Software is that part of a computer system that consists of encoded information or computer instructions, in contrast to the physical hardware from which the system is built. Computer software includes computer programs, libraries and related non-executable data, such as online documentation or digital media. It is often divided into system software and application software. Computer hardware and software require each other and neither can be realistically used on its own. 
When software is stored in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC compatible computer, it is sometimes called ""firmware""." +73,"There are thousands of different programming languages—some intended for general purpose, others useful for only highly specialized applications." +74,"The defining feature of modern computers which distinguishes them from all other machines is that they can be programmed. That is to say that some type of instructions can be given to the computer, and it will process them. Modern computers based on the von Neumann architecture often have machine code in the form of an imperative programming language. In practical terms, a computer program may be just a few instructions or extend to many millions of instructions, as do the programs for word processors and web browsers for example. A typical modern computer can execute billions of instructions per second and rarely makes a mistake over many years of operation. Large computer programs consisting of several million instructions may take teams of programmers years to write, and due to the complexity of the task almost certainly contain errors." +75,This section applies to most common RAM machine–based computers. +76,"In most cases, computer instructions are simple: add one number to another, move some data from one location to another, send a message to some external device, etc. These instructions are read from the computer's memory and are generally carried out in the order they were given. However, there are usually specialized instructions to tell the computer to jump ahead or backwards to some other place in the program and to carry on executing from there. These are called ""jump"" instructions. Furthermore, jump instructions may be made to happen conditionally so that different sequences of instructions may be used depending on the result of some previous calculation or some external event.
Many computers directly support subroutines by providing a type of jump that ""remembers"" the location it jumped from and another instruction to return to the instruction following that jump instruction." +77,"Program execution might be likened to reading a book. While a person will normally read each word and line in sequence, they may at times jump back to an earlier place in the text or skip sections that are not of interest. Similarly, a computer may sometimes go back and repeat the instructions in some section of the program over and over again until some internal condition is met. This is called the flow of control within the program and it is what allows the computer to perform tasks repeatedly without human intervention." +78,"Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such as adding two numbers with just a few button presses. But to add together all of the numbers from 1 to 1,000 would take thousands of button presses and a lot of time, with a near certainty of making a mistake. On the other hand, a computer may be programmed to do this with just a few simple instructions. The following example is written in the MIPS assembly language:" +79,"Once told to run this program, the computer will perform the repetitive addition task without further human intervention. It will almost never make a mistake and a modern PC can complete the task in a fraction of a second." +80,"In most computers, individual instructions are stored as machine code with each instruction being given a unique number. The command to add two numbers together would have one opcode; the command to multiply them would have a different opcode, and so on. The simplest computers are able to perform any of a handful of different instructions; the more complex computers have several hundred to choose from, each with a unique numerical code. Since the computer's memory is able to store numbers, it can also store the instruction codes.
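The MIPS listing referred to above is not reproduced in this extract. As a stand-in, the following Python sketch mirrors the shape such an assembly loop takes: an accumulator register, a counter register, and a conditional branch back to the top of the loop.

```python
# Summing 1 to 1,000 with a loop, structured the way the assembly version
# would be: two "registers" and a conditional branch.

acc = 0             # accumulator register: the running total
n = 1               # counter register: the next number to add
while n <= 1000:    # conditional branch back to the top of the loop
    acc = acc + n   # add the counter into the accumulator
    n = n + 1       # increment the counter
# acc now holds 1 + 2 + ... + 1000
```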
This leads to the important fact that entire programs can be represented as lists of numbers and can themselves be manipulated inside the computer in the same way as numeric data. The fundamental concept of storing programs in the computer's memory alongside the data they operate on is the crux of the von Neumann, or stored program, architecture. In some cases, a computer might store some or all of its program in memory that is kept separate from the data it operates on. This is called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers display some traits of the Harvard architecture in their designs, such as in CPU caches." +81,"While it is possible to write computer programs as long lists of numbers and while this technique was used with many early computers, it is extremely tedious and potentially error-prone to do so in practice, especially for complicated programs. Instead, each basic instruction can be given a short name that is indicative of its function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are collectively known as a computer's assembly language. Converting programs written in assembly language into something the computer can actually understand is usually done by a computer program called an assembler." +82,"Programming languages provide various ways of specifying programs for computers to run. Unlike natural languages, programming languages are designed to permit no ambiguity and to be concise. They are purely written languages and are often difficult to read aloud. They are generally either translated into machine code by a compiler or an assembler before being run, or translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid method of the two techniques." +83,"Machine languages and the assembly languages that represent them are generally unique to the particular architecture of a computer's central processing unit.
For instance, an ARM architecture CPU cannot understand the machine language of an x86 CPU that might be in a PC. Historically, a significant number of other CPU architectures were created and saw extensive use, notably including the MOS Technology 6502 and 6510 in addition to the Zilog Z80." +84,"Although considerably easier than in machine language, writing long programs in assembly language is often difficult and is also error-prone. Therefore, most practical programs are written in more abstract high-level programming languages that are able to express the needs of the programmer more conveniently. High-level languages are usually ""compiled"" into machine language using another computer program called a compiler. High-level languages are less related to the workings of the target computer than assembly language, and more related to the language and structure of the problem to be solved by the final program. It is therefore often possible to use different compilers to translate the same high-level language program into the machine language of many different types of computer. This is part of the means by which software like video games may be made available for different computer architectures such as personal computers and various video game consoles." +85,"Program design of small programs is relatively simple and involves the analysis of the problem, collection of inputs, using the programming constructs within languages, devising or using established procedures and algorithms, providing data for output devices and solutions to the problem as applicable. As problems become larger and more complex, features such as subprograms, modules, formal documentation, and new paradigms such as object-oriented programming are encountered. Large programs involving thousands of lines of code and more require formal software methodologies. The task of developing large software systems presents a significant intellectual challenge.
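The assembler described earlier, which translates mnemonics such as ADD, SUB, MULT and JUMP into numeric opcodes, can be sketched as follows. The opcode numbers here are invented for illustration, not those of any real machine.

```python
# Minimal assembler sketch: mnemonics become numeric opcodes, so the
# assembled program is just a list of numbers the machine can store
# alongside its data (the stored-program idea).

OPCODES = {"ADD": 1, "SUB": 2, "MULT": 3, "JUMP": 4}

def assemble(source):
    machine_code = []
    for line in source.splitlines():
        mnemonic, operand = line.split()
        machine_code.extend([OPCODES[mnemonic], int(operand)])
    return machine_code

# The human-readable program becomes plain numbers in memory.
code = assemble("ADD 7\nMULT 3\nJUMP 0")
```

Because the result is nothing but numbers, it can itself be manipulated like any other data, which is exactly the von Neumann stored-program point made above.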
Producing software with an acceptably high reliability within a predictable schedule and budget has historically been difficult; the academic and professional discipline of software engineering concentrates specifically on this challenge." +86,"Errors in computer programs are called ""bugs"". They may be benign and not affect the usefulness of the program, or have only subtle effects. However, in some cases they may cause the program or the entire system to ""hang"", becoming unresponsive to input such as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit, code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs are usually not the fault of the computer. Since computers merely execute the instructions they are given, bugs are nearly always the result of programmer error or an oversight made in the program's design. Admiral Grace Hopper, an American computer scientist and developer of the first compiler, is credited for having first used the term ""bugs"" in computing after a dead moth was found shorting a relay in the Harvard Mark II computer in September 1947." +87,"Computers have been used to coordinate information between multiple locations since the 1950s. The U.S. military's SAGE system was the first large-scale example of such a system, which led to a number of special-purpose commercial systems such as Sabre. In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. The effort was funded by ARPA, and the computer network that resulted was called the ARPANET. The technologies that made the ARPANET possible spread and evolved." +88,"In time, the network spread beyond academic and military institutions and became known as the Internet.
The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. ""Wireless"" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments." +89,"A computer does not need to be electronic, nor even have a processor, nor RAM, nor even a hard disk. While popular usage of the word ""computer"" is synonymous with a personal electronic computer, a typical modern definition of a computer is: ""A device that computes, especially a programmable electronic machine that performs high-speed mathematical or logical operations or that assembles, stores, correlates, or otherwise processes information."" According to this definition, any device that processes information qualifies as a computer." +90,"There is active research to make non-classical computers out of many promising new types of technology, such as optical computers, DNA computers, neural computers, and quantum computers. Most computers are universal, and are able to calculate any computable function, and are limited only by their memory capacity and operating speed. 
However, different designs of computers can give very different performance for particular problems; for example, quantum computers can potentially break some modern encryption algorithms very quickly." +91,There are many types of computer architectures: +92,"Of all these abstract machines, a quantum computer holds the most promise for revolutionizing computing. Logic gates are a common abstraction which can apply to most of the above digital or analog paradigms. The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, any type of computer is able to perform the same computational tasks, given enough time and storage capacity." +93,"A computer will solve problems in exactly the way it is programmed to, without regard to efficiency, alternative solutions, possible shortcuts, or possible errors in the code. Computer programs that learn and adapt are part of the emerging field of artificial intelligence and machine learning. Artificial intelligence based products generally fall into two major categories: rule-based systems and pattern recognition systems. Rule-based systems attempt to represent the rules used by human experts and tend to be expensive to develop. Pattern-based systems use data about a problem to generate conclusions. Examples of pattern-based systems include voice recognition, font recognition, translation and the emerging field of on-line marketing." +94,"As the use of computers has spread throughout society, there are an increasing number of careers involving computers."
+95,"The need for computers to work well together and to be able to exchange information has spawned the need for many standards organizations, clubs and societies of both a formal and informal nature." +98,"Some speech recognition systems require ""training"" where an individual speaker reads text or isolated vocabulary into the system. The system analyzes the person's specific voice and uses it to fine-tune the recognition of that person's speech, resulting in increased accuracy. Systems that do not use training are called ""speaker-independent"" systems. Systems that use training are called ""speaker dependent""." +99,"Speech recognition applications include voice user interfaces such as voice dialing, call routing, domotic appliance control, search key words, simple data entry, preparation of structured documents, determining speaker characteristics, speech-to-text processing, and aircraft. Automatic pronunciation assessment is used in education such as for spoken language learning." +100,"The term voice recognition or speaker identification refers to identifying the speaker, rather than what they are saying. Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice or it can be used to authenticate or verify the identity of a speaker as part of a security process." +101,"From the technology perspective, speech recognition has a long history with several waves of major innovations. Most recently, the field has benefited from advances in deep learning and big data.
The advances are evidenced not only by the surge of academic papers published in the field, but more importantly by the worldwide industry adoption of a variety of deep learning methods in designing and deploying speech recognition systems." +102,"The key areas of growth were: vocabulary size, speaker independence, and processing speed." +103,Raj Reddy was the first person to take on continuous speech recognition as a graduate student at Stanford University in the late 1960s. Previous systems required users to pause after each word. Reddy's system issued spoken commands for playing chess. +104,"Around this time Soviet researchers invented the dynamic time warping algorithm and used it to create a recognizer capable of operating on a 200-word vocabulary. DTW processed speech by dividing it into short frames, e.g. 10ms segments, and processing each frame as a single unit. Although DTW would be superseded by later algorithms, the technique carried on. Achieving speaker independence remained unsolved during this period." +105,"During the late 1960s Leonard Baum developed the mathematics of Markov chains at the Institute for Defense Analysis. A decade later, at CMU, Raj Reddy's students James Baker and Janet M. Baker began using the hidden Markov model for speech recognition. James Baker had learned about HMMs from a summer job at the Institute of Defense Analysis during his undergraduate education. The use of HMMs allowed researchers to combine different sources of knowledge, such as acoustics, language, and syntax, in a unified probabilistic model." +106,The 1980s also saw the introduction of the n-gram language model. +107,"Much of the progress in the field is owed to the rapidly increasing capabilities of computers. At the end of the DARPA program in 1976, the best computer available to researchers was the PDP-10 with 4 MB of RAM. It could take up to 100 minutes to decode just 30 seconds of speech."
+108,Two practical products were: +109,"By this point, the vocabulary of the typical commercial speech recognition system was larger than the average human vocabulary. Raj Reddy's former student, Xuedong Huang, developed the Sphinx-II system at CMU. The Sphinx-II system was the first to do speaker-independent, large vocabulary, continuous speech recognition and it had the best performance in DARPA's 1992 evaluation. Handling continuous speech with a large vocabulary was a major milestone in the history of speech recognition. Huang went on to found the speech recognition group at Microsoft in 1993. Raj Reddy's student Kai-Fu Lee joined Apple where, in 1992, he helped develop a speech interface prototype for the Apple computer known as Casper." +110,"Lernout & Hauspie, a Belgium-based speech recognition company, acquired several other companies, including Kurzweil Applied Intelligence in 1997 and Dragon Systems in 2000. The L&H speech technology was used in the Windows XP operating system. L&H was an industry leader until an accounting scandal brought an end to the company in 2001. The speech technology from L&H was bought by ScanSoft which became Nuance in 2005. Apple originally licensed software from Nuance to provide speech recognition capability to its digital assistant Siri." +111,"In the 2000s, DARPA sponsored two speech recognition programs: Effective Affordable Reusable Speech-to-Text in 2002 and Global Autonomous Language Exploitation. Four teams participated in the EARS program: IBM, a team led by BBN with LIMSI and Univ. of Pittsburgh, Cambridge University, and a team composed of ICSI, SRI and University of Washington. EARS funded the collection of the Switchboard telephone speech corpus containing 260 hours of recorded conversations from over 500 speakers. The GALE program focused on Arabic and Mandarin broadcast news speech. Google's first effort at speech recognition came in 2007 after hiring some researchers from Nuance.
The first product was GOOG-411, a telephone based directory service. The recordings from GOOG-411 produced valuable data that helped Google improve their recognition systems. Google Voice Search is now supported in over 30 languages." +112,"In the United States, the National Security Agency has made use of a type of speech recognition for keyword spotting since at least 2006. This technology allows analysts to search through large volumes of recorded conversations and isolate mentions of keywords. Recordings can be indexed and analysts can run queries over the database to find conversations of interest. Some government research programs focused on intelligence applications of speech recognition, e.g. DARPA's EARS program and IARPA's Babel program." +113,"By the early 2010s, speech recognition, also called voice recognition, was clearly differentiated from speaker recognition, and speaker independence was considered a major breakthrough. Until then, systems required a ""training"" period. A 1987 ad for a doll had carried the tagline ""Finally, the doll that understands you."" – despite the fact that it was described as ""which children could train to respond to their voice""." +114,"In 2017, Microsoft researchers reached a historical human parity milestone of transcribing conversational telephony speech on the widely benchmarked Switchboard task. Multiple deep learning models were used to optimize speech recognition accuracy. The speech recognition word error rate was reported to be as low as that of 4 professional human transcribers working together on the same benchmark, which was funded by IBM Watson speech team on the same task." +115,Both acoustic modeling and language modeling are important parts of modern statistically based speech recognition algorithms. Hidden Markov models are widely used in many systems. Language modeling is also used in many other natural language processing applications such as document classification or statistical machine translation.
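The word error rate cited above is conventionally computed as the word-level edit distance (substitutions plus insertions plus deletions) divided by the number of reference words. A minimal sketch, not any benchmark's official scorer:

```python
# Word error rate: word-level edit distance divided by reference length,
# computed with the standard dynamic-programming recurrence.

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, misrecognizing one word out of three gives a WER of 1/3.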
+116,"Modern general-purpose speech recognition systems are based on hidden Markov models. These are statistical models that output a sequence of symbols or quantities. HMMs are used in speech recognition because a speech signal can be viewed as a piecewise stationary signal or a short-time stationary signal. In a short time scale, speech can be approximated as a stationary process. Speech can be thought of as a Markov model for many stochastic purposes." +117,"Another reason why HMMs are popular is that they can be trained automatically and are simple and computationally feasible to use. In speech recognition, the hidden Markov model would output a sequence of n-dimensional real-valued vectors, outputting one of these every 10 milliseconds. The vectors would consist of cepstral coefficients, which are obtained by taking a Fourier transform of a short time window of speech and decorrelating the spectrum using a cosine transform, then taking the first coefficients. The hidden Markov model will tend to have in each state a statistical distribution that is a mixture of diagonal covariance Gaussians, which will give a likelihood for each observed vector. Each word, or each phoneme, will have a different output distribution; a hidden Markov model for a sequence of words or phonemes is made by concatenating the individual trained hidden Markov models for the separate words and phonemes." +118,"Described above are the core elements of the most common, HMM-based approach to speech recognition. Modern speech recognition systems use various combinations of a number of standard techniques in order to improve results over the basic approach described above.
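How an HMM recovers a best hidden state sequence from observations can be sketched with the Viterbi algorithm on a toy model. The two states, two observation symbols, and all probabilities below are invented for illustration; real systems, as described above, use Gaussian mixtures over cepstral vectors rather than a tiny discrete alphabet.

```python
# Toy HMM decoding with the Viterbi algorithm: track, for each state, the
# most probable path ending there, and extend it one observation at a time.

def viterbi(observations, states, start_p, trans_p, emit_p):
    # best[s] = probability of the best path ending in state s
    best = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        new_best, new_path = {}, {}
        for s in states:
            prev = max(states, key=lambda p: best[p] * trans_p[p][s])
            new_best[s] = best[prev] * trans_p[prev][s] * emit_p[s][obs]
            new_path[s] = path[prev] + [s]
        best, path = new_best, new_path
    winner = max(states, key=lambda s: best[s])
    return path[winner]

states = ("sil", "speech")
decoded = viterbi(
    ["quiet", "loud", "loud"], states,
    start_p={"sil": 0.8, "speech": 0.2},
    trans_p={"sil": {"sil": 0.6, "speech": 0.4},
             "speech": {"sil": 0.3, "speech": 0.7}},
    emit_p={"sil": {"quiet": 0.9, "loud": 0.1},
            "speech": {"quiet": 0.2, "loud": 0.8}},
)
```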
A typical large-vocabulary system would need context dependency for the phonemes; it would use cepstral normalization to normalize for a different speaker and recording conditions; for further speaker normalization, it might use vocal tract length normalization for male-female normalization and maximum likelihood linear regression for more general speaker adaptation. The features would have so-called delta and delta-delta coefficients to capture speech dynamics and in addition, might use heteroscedastic linear discriminant analysis; or might skip the delta and delta-delta coefficients and use splicing and an LDA-based projection followed perhaps by heteroscedastic linear discriminant analysis or a global semi-tied covariance transform. Many systems use so-called discriminative training techniques that dispense with a purely statistical approach to HMM parameter estimation and instead optimize some classification-related measure of the training data. Examples are maximum mutual information, minimum classification error, and minimum phone error." +119,"Decoding of the speech would probably use the Viterbi algorithm to find the best path, and here there is a choice between dynamically creating a combination hidden Markov model, which includes both the acoustic and language model information, and combining it statically beforehand." +120,"A possible improvement to decoding is to keep a set of good candidates instead of just keeping the best candidate, and to use a better scoring function to rate these good candidates so that we may pick the best one according to this refined score. The set of candidates can be kept either as a list or as a subset of the models. Rescoring is usually done by trying to minimize the Bayes risk: instead of taking the source sentence with maximal probability, we try to take the sentence that minimizes the expectancy of a given loss function with regards to all possible transcriptions.
The loss function is usually the Levenshtein distance, though it can be a different distance for specific tasks; the set of possible transcriptions is, of course, pruned to maintain tractability. Efficient algorithms have been devised to rescore lattices represented as weighted finite state transducers with edit distances themselves represented as a finite state transducer verifying certain assumptions."
+121,Dynamic time warping is an approach that was historically used for speech recognition but has now largely been displaced by the more successful HMM-based approach.
+122,"Dynamic time warping is an algorithm for measuring similarity between two sequences that may vary in time or speed. For instance, similarities in walking patterns would be detected, even if in one video the person was walking slowly and in another he or she was walking more quickly, or even if there were accelerations and decelerations during the course of one observation. DTW has been applied to video, audio, and graphics – indeed, any data that can be turned into a linear representation can be analyzed with DTW."
+123,"A well-known application has been automatic speech recognition, to cope with different speaking speeds. In general, it is a method that allows a computer to find an optimal match between two given sequences with certain restrictions. That is, the sequences are ""warped"" non-linearly to match each other. This sequence alignment method is often used in the context of hidden Markov models."
+124,"Neural networks emerged as an attractive acoustic modeling approach in ASR in the late 1980s. Since then, neural networks have been used in many aspects of speech recognition such as phoneme classification, phoneme classification through multi-objective evolutionary algorithms, isolated word recognition, audiovisual speech recognition, audiovisual speaker recognition and speaker adaptation."
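The dynamic time warping idea described above can be sketched with the classic quadratic-time recurrence. This is a minimal illustration on made-up one-dimensional sequences, not a tuned speech-matching implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping: cost of the best monotonic, non-linear
    alignment of sequence a onto sequence b (classic O(n*m) recurrence)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: advance both, repeat a[i], repeat b[j]
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    return cost[n, m]

# the same "shape" produced at two speeds: DTW warps over the tempo difference
fast = [0.0, 1.0, 2.0, 1.0, 0.0]
slow = [0.0, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.0]
print(dtw_distance(fast, slow))       # small alignment cost
print(dtw_distance(fast, [2.0] * 9))  # much larger: genuinely different signals
```

The same recurrence applied to frames of feature vectors (with a vector distance in place of `abs`) is how DTW was historically used to compare utterances spoken at different speeds.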
+125,"Neural networks make fewer explicit assumptions about feature statistical properties than HMMs and have several qualities making them attractive recognition models for speech recognition. When used to estimate the probabilities of a speech feature segment, neural networks allow discriminative training in a natural and efficient manner. However, in spite of their effectiveness in classifying short-time units such as individual phonemes and isolated words, early neural networks were rarely successful for continuous recognition tasks because of their limited ability to model temporal dependencies." +126,"One approach to this limitation was to use neural networks as a pre-processing, feature transformation or dimensionality reduction, step prior to HMM based recognition. However, more recently, LSTM and related recurrent neural networks , Time Delay Neural Networks, and transformers have demonstrated improved performance in this area." +127,"Deep neural networks and denoising autoencoders are also under investigation. A deep feedforward neural network is an artificial neural network with multiple hidden layers of units between the input and output layers. Similar to shallow neural networks, DNNs can model complex non-linear relationships. DNN architectures generate compositional models, where extra layers enable composition of features from lower layers, giving a huge learning capacity and thus the potential of modeling complex patterns of speech data." +128,"Since 2014, there has been much research interest in ""end-to-end"" ASR. Traditional phonetic-based approaches required separate components and training for the pronunciation, acoustic, and language model. End-to-end models jointly learn all the components of the speech recognizer. This is valuable since it simplifies the training process and deployment process. 
For example, an n-gram language model is required for all HMM-based systems, and a typical n-gram language model often takes several gigabytes of memory, making it impractical to deploy on mobile devices. Consequently, modern commercial ASR systems from Google and Apple are deployed on the cloud and require a network connection, as opposed to running locally on the device."
+129,"The first attempt at end-to-end ASR was with Connectionist Temporal Classification (CTC)-based systems introduced by Alex Graves of Google DeepMind and Navdeep Jaitly of the University of Toronto in 2014. The model consisted of recurrent neural networks and a CTC layer. Jointly, the RNN-CTC model learns the pronunciation and acoustic model together; however, it is incapable of learning the language model due to conditional independence assumptions similar to an HMM. Consequently, CTC models can directly learn to map speech acoustics to English characters, but the models make many common spelling mistakes and must rely on a separate language model to clean up the transcripts. Later, Baidu expanded on the work with extremely large datasets and demonstrated some commercial success in Chinese Mandarin and English. In 2016, University of Oxford presented LipNet, the first end-to-end sentence-level lipreading model, using spatiotemporal convolutions coupled with an RNN-CTC architecture, surpassing human-level performance in a restricted grammar dataset. A large-scale CNN-RNN-CTC architecture was presented in 2018 by Google DeepMind achieving 6 times better performance than human experts."
+130,"An alternative approach to CTC-based models is attention-based models. Attention-based ASR models were introduced simultaneously by Chan et al. of Carnegie Mellon University and Google Brain and Bahdanau et al. of the University of Montreal in 2016. 
The model named ""Listen, Attend and Spell"" , literally ""listens"" to the acoustic signal, pays ""attention"" to different parts of the signal and ""spells"" out the transcript one character at a time. Unlike CTC-based models, attention-based models do not have conditional-independence assumptions and can learn all the components of a speech recognizer including the pronunciation, acoustic and language model directly. This means, during deployment, there is no need to carry around a language model making it very practical for applications with limited memory. By the end of 2016, the attention-based models have seen considerable success including outperforming the CTC models . Various extensions have been proposed since the original LAS model. Latent Sequence Decompositions was proposed by Carnegie Mellon University, MIT and Google Brain to directly emit sub-word units which are more natural than English characters; University of Oxford and Google DeepMind extended LAS to ""Watch, Listen, Attend and Spell"" to handle lip reading surpassing human-level performance." +131,"Typically a manual control input, for example by means of a finger control on the steering-wheel, enables the speech recognition system and this is signaled to the driver by an audio prompt. Following the audio prompt, the system has a ""listening window"" during which it may accept a speech input for recognition. " +132,"Simple voice commands may be used to initiate phone calls, select radio stations or play music from a compatible smartphone, MP3 player or music-loaded flash drive. Voice recognition capabilities vary between car make and model. Some of the most recent car models offer natural-language speech recognition in place of a fixed set of commands, allowing the driver to use full sentences and common phrases. With such systems there is, therefore, no need for the user to memorize a set of fixed command words." 
+133,"Automatic pronunciation assessment is the use of speech recognition to verify the correctness of pronounced speech, as distinguished from manual assessment by an instructor or proctor. Also called speech verification, pronunciation evaluation, and pronunciation scoring, the main application of this technology is computer-aided pronunciation teaching when combined with computer-aided instruction for computer-assisted language learning , speech remediation, or accent reduction. Pronunciation assessment does not determine unknown speech but instead, knowing the expected word in advance, it attempts to verify the correctness of the learner's pronunciation and ideally their intelligibility to listeners, sometimes along with often inconsequential prosody such as intonation, pitch, tempo, rhythm, and stress. Pronunciation assessment is also used in reading tutoring, for example in products such as Microsoft Teams and from Amira Learning. Automatic pronunciation assessment can also be used to help diagnose and treat speech disorders such as apraxia." +134,"Assessing authentic listener intelligibility is essential for avoiding inaccuracies from accent bias, especially in high-stakes assessments; from words with multiple correct pronunciations; and from phoneme coding errors in machine-readable pronunciation dictionaries. In 2022, researchers found that some newer speech to text systems, based on end-to-end reinforcement learning to map audio signals directly into words, produce word and phrase confidence scores very closely correlated with genuine listener intelligibility. In the Common European Framework of Reference for Languages assessment criteria for ""overall phonological control"", intelligibility outweighs formally correct pronunciation at all levels." +135,"In the health care sector, speech recognition can be implemented in front-end or back-end of the medical documentation process. 
Front-end speech recognition is where the provider dictates into a speech-recognition engine, the recognized words are displayed as they are spoken, and the dictator is responsible for editing and signing off on the document. Back-end or deferred speech recognition is where the provider dictates into a digital dictation system, the voice is routed through a speech-recognition machine, and the recognized draft document is routed along with the original voice file to the editor, where the draft is edited and the report finalized. Deferred speech recognition is widely used in the industry currently."
+136,"One of the major issues relating to the use of speech recognition in healthcare is that the American Recovery and Reinvestment Act of 2009 provides for substantial financial benefits to physicians who utilize an EMR according to ""Meaningful Use"" standards. These standards require that a substantial amount of data be maintained by the EMR. The use of speech recognition is more naturally suited to the generation of narrative text, as part of a radiology/pathology interpretation, progress note or discharge summary: the ergonomic gains of using speech recognition to enter structured discrete data are relatively minimal for people who are sighted and who can operate a keyboard and mouse."
+137,"A more significant issue is that most EHRs have not been expressly tailored to take advantage of voice-recognition capabilities. A large part of the clinician's interaction with the EHR involves navigation through the user interface using menus and tab/button clicks, and is heavily dependent on keyboard and mouse: voice-based navigation provides only modest ergonomic benefits. 
By contrast, many highly customized systems for radiology or pathology dictation implement voice ""macros"", where the use of certain phrases – e.g., ""normal report"" – will automatically fill in a large number of default values and/or generate boilerplate, which will vary with the type of the exam – e.g., a chest X-ray vs. a gastrointestinal contrast series for a radiology system."
+138,Prolonged use of speech recognition software in conjunction with word processors has shown benefits to short-term memory restrengthening in brain AVM patients who have been treated with resection. Further research needs to be conducted to determine cognitive benefits for individuals whose AVMs have been treated using radiologic techniques.
+139,"Substantial efforts have been devoted in the last decade to the test and evaluation of speech recognition in fighter aircraft. Of particular note have been the US program in speech recognition for the Advanced Fighter Technology Integration (AFTI)/F-16 aircraft, the program in France for Mirage aircraft, and other programs in the UK dealing with a variety of aircraft platforms. In these programs, speech recognizers have been operated successfully in fighter aircraft, with applications including setting radio frequencies, commanding an autopilot system, setting steer-point coordinates and weapons release parameters, and controlling flight displays."
+140,"Working with Swedish pilots flying in the JAS-39 Gripen cockpit, Englund found recognition deteriorated with increasing g-loads. The report also concluded that adaptation greatly improved the results in all cases and that the introduction of models for breathing was shown to improve recognition scores significantly. Contrary to what might have been expected, no effects of the broken English of the speakers were found. It was evident that spontaneous speech caused problems for the recognizer, as might have been expected. 
A restricted vocabulary, and above all, a proper syntax, could thus be expected to improve recognition accuracy substantially."
+141,"The Eurofighter Typhoon, currently in service with the UK RAF, employs a speaker-dependent system, requiring each pilot to create a template. The system is not used for any safety-critical or weapon-critical tasks, such as weapon release or lowering of the undercarriage, but is used for a wide range of other cockpit functions. Voice commands are confirmed by visual and/or aural feedback. The system is seen as a major design feature in the reduction of pilot workload, and even allows the pilot to assign targets to his aircraft with two simple voice commands or to any of his wingmen with only five commands."
+142,Speaker-independent systems are also being developed and are under test for the F-35 Lightning II and the Alenia Aermacchi M-346 Master lead-in fighter trainer. These systems have produced word accuracy scores in excess of 98%.
+143,"The problems of achieving high recognition accuracy under stress and noise are particularly relevant in the helicopter environment as well as in the jet fighter environment. The acoustic noise problem is actually more severe in the helicopter environment, not only because of the high noise levels but also because the helicopter pilot, in general, does not wear a facemask, which would reduce acoustic noise in the microphone. Substantial test and evaluation programs have been carried out in the past decade in speech recognition system applications in helicopters, notably by the U.S. Army Avionics Research and Development Activity and by the Royal Aerospace Establishment in the UK. Work in France has included speech recognition in the Puma helicopter. There has also been much useful work in Canada. Results have been encouraging, and voice applications have included: control of communication radios, setting of navigation systems, and control of an automated target handover system."
+144,"As in fighter applications, the overriding issue for voice in helicopters is the impact on pilot effectiveness. Encouraging results are reported for the AVRADA tests, although these represent only a feasibility demonstration in a test environment. Much remains to be done both in speech recognition and in overall speech technology in order to consistently achieve performance improvements in operational settings." +145,"Training for air traffic controllers represents an excellent application for speech recognition systems. Many ATC training systems currently require a person to act as a ""pseudo-pilot"", engaging in a voice dialog with the trainee controller, which simulates the dialog that the controller would have to conduct with pilots in a real ATC situation. Speech recognition and synthesis techniques offer the potential to eliminate the need for a person to act as a pseudo-pilot, thus reducing training and support personnel. In theory, Air controller tasks are also characterized by highly structured speech as the primary output of the controller, hence reducing the difficulty of the speech recognition task should be possible. In practice, this is rarely the case. The FAA document 7110.65 details the phrases that should be used by air traffic controllers. While this document gives less than 150 examples of such phrases, the number of phrases supported by one of the simulation vendors speech recognition systems is in excess of 500,000." +146,"The USAF, USMC, US Army, US Navy, and FAA as well as a number of international ATC training organizations such as the Royal Australian Air Force and Civil Aviation Authorities in Italy, Brazil, and Canada are currently using ATC simulators with speech recognition from a number of different vendors." +147,"ASR is now commonplace in the field of telephony and is becoming more widespread in the field of computer gaming and simulation. 
In telephony systems, ASR is now being predominantly used in contact centers by integrating it with IVR systems. Despite the high level of integration with word processing in general personal computing, in the field of document production, ASR has not seen the expected increases in use."
+148,"The improvement of mobile processor speeds has made speech recognition practical in smartphones. Speech is used mostly as a part of a user interface, for creating predefined or custom speech commands."
+149,"People with disabilities can benefit from speech recognition programs. For individuals who are deaf or hard of hearing, speech recognition software is used to automatically generate closed captioning of conversations such as discussions in conference rooms, classroom lectures, and/or religious services."
+150,"Students who are blind or have very low vision can benefit from using the technology to convey words and then hear the computer recite them, as well as use a computer by commanding with their voice, instead of having to look at the screen and keyboard."
+151,"Students who are physically disabled or have a repetitive strain injury or other injuries to the upper extremities can be relieved from having to worry about handwriting, typing, or working with a scribe on school assignments by using speech-to-text programs. They can also utilize speech recognition technology to enjoy searching the Internet or using a computer at home without having to physically operate a mouse and keyboard."
+152,"Speech recognition can allow students with learning disabilities to become better writers. By saying the words aloud, they can increase the fluidity of their writing, and be alleviated of concerns regarding spelling, punctuation, and other mechanics of writing. Also, see Learning disability."
+153,"The use of voice recognition software, in conjunction with a digital audio recorder and a personal computer running word-processing software has proven to be positive for restoring damaged short-term memory capacity, in stroke and craniotomy individuals." +154,"Speech recognition is also very useful for people who have difficulty using their hands, ranging from mild repetitive stress injuries to involve disabilities that preclude using conventional computer input devices. In fact, people who used the keyboard a lot and developed RSI became an urgent early market for speech recognition. Speech recognition is used in deaf telephony, such as voicemail to text, relay services, and captioned telephone. Individuals with learning disabilities who have problems with thought-to-paper communication can possibly benefit from the software but the technology is not bug proof. Also the whole idea of speak to text can be hard for intellectually disabled person's due to the fact that it is rare that anyone tries to learn the technology to teach the person with the disability." +155,"This type of technology can help those with dyslexia but other disabilities are still in question. The effectiveness of the product is the problem that is hindering it from being effective. Although a kid may be able to say a word depending on how clear they say it the technology may think they are saying another word and input the wrong one. Giving them more work to fix, causing them to have to take more time with fixing the wrong word." +156,"The performance of speech recognition systems is usually evaluated in terms of accuracy and speed. Accuracy is usually rated with word error rate , whereas speed is measured with the real time factor. Other measures of accuracy include Single Word Error Rate and Command Success Rate ." +157,"Speech recognition by machine is a very complex problem, however. 
Vocalizations vary in terms of accent, pronunciation, articulation, roughness, nasality, pitch, volume, and speed. Speech is distorted by background noise, echoes, and electrical characteristics. Accuracy of speech recognition may vary with the following:"
+158,"As mentioned earlier in this article, the accuracy of speech recognition may vary depending on the following factors:"
+159,"With discontinuous speech, full sentences separated by silence are used; therefore it becomes easier to recognize the speech, as with isolated speech.
+
+With continuous speech, naturally spoken sentences are used; therefore it becomes harder to recognize the speech, different from both isolated and discontinuous speech."
+160,Constraints are often represented by a grammar.
+161,Speech recognition is a multi-leveled pattern recognition task.
+162,"e.g., known word pronunciations or legal word sequences, which can compensate for errors or uncertainties at a lower level;"
+163,For telephone speech the sampling rate is 8000 samples per second;
+164,"computed every 10 ms, with one 10 ms section called a frame;"
+165,"Analysis of four-step neural network approaches can be explained by further information. Sound is produced by air vibration, which we register by ears, but machines by receivers. A basic sound creates a wave which has two descriptions: amplitude and frequency.
+Accuracy can be computed with the help of word error rate (WER), which can be calculated by aligning the recognized word sequence and the reference word sequence using dynamic string alignment. A problem may occur while computing the word error rate due to the difference between the sequence lengths of the recognized and reference word sequences."
+166,The formula to compute the word error rate is:
+167,"WER = (s + d + i) / n"
+168,"where s is the number of substitutions, d is the number of deletions, i is the number of insertions, and n is the number of words in the reference."
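The dynamic string alignment mentioned above can be made concrete. The following minimal sketch computes WER = (s + d + i) / n via a standard word-level Levenshtein distance; the example sentences are invented for illustration:

```python
def word_error_rate(reference, hypothesis):
    """WER = (s + d + i) / n via dynamic string alignment
    (Levenshtein distance computed over words, not characters)."""
    ref, hyp = reference.split(), hypothesis.split()
    n, m = len(ref), len(hyp)
    # dist[i][j]: edits turning the first i reference words
    # into the first j hypothesis words
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i                                   # i deletions
    for j in range(m + 1):
        dist[0][j] = j                                   # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j - 1] + sub,   # substitution (or match)
                             dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1)         # insertion
    return dist[n][m] / n

# one deletion against six reference words -> WER = 1/6
print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))
```

The minimum-cost alignment jointly determines s, d, and i, which is why the two sequences need not have the same length.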
+169,"While computing, the word recognition rate is used. The formula is:" +170,where h is the number of correctly recognized words: +171,"Speech recognition can become a means of attack, theft, or accidental operation. For example, activation words like ""Alexa"" spoken in an audio or video broadcast can cause devices in homes and offices to start listening for input inappropriately, or possibly take an unwanted action. Voice-controlled devices are also accessible to visitors to the building, or even those outside the building if they can be heard inside. Attackers may be able to gain access to personal information, like calendar, address book contents, private messages, and documents. They may also be able to impersonate the user to send messages or make online purchases." +172,"Two attacks have been demonstrated that use artificial sounds. One transmits ultrasound and attempt to send commands without nearby people noticing. The other adds small, inaudible distortions to other speech or music that are specially crafted to confuse the specific speech recognition system into recognizing music as speech, or to make what sounds like one command to a human sound like a different command to the system." +173,"Popular speech recognition conferences held each year or two include SpeechTEK and SpeechTEK Europe, ICASSP, Interspeech/Eurospeech, and the IEEE ASRU. Conferences in the field of natural language processing, such as ACL, NAACL, EMNLP, and HLT, are beginning to include papers on speech processing. Important journals include the IEEE Transactions on Speech and Audio Processing , Computer Speech and Language, and Speech Communication." +174,"Books like ""Fundamentals of Speech Recognition"" by Lawrence Rabiner can be useful to acquire basic knowledge but may not be fully up to date . 
Another good source can be ""Statistical Methods for Speech Recognition"" by Frederick Jelinek and ""Spoken Language Processing"" by Xuedong Huang et al., ""Computer Speech"" by Manfred R. Schroeder, second edition published in 2004, and ""Speech Processing: A Dynamic and Optimization-Oriented Approach"" published in 2003 by Li Deng and Doug O'Shaughnessy. The updated textbook Speech and Language Processing by Jurafsky and Martin presents the basics and the state of the art for ASR. Speaker recognition also uses the same features, most of the same front-end processing, and classification techniques as is done in speech recognition. A comprehensive textbook, ""Fundamentals of Speaker Recognition"", is an in-depth source for up-to-date details on the theory and practice. A good insight into the techniques used in the best modern systems can be gained by paying attention to government-sponsored evaluations such as those organised by DARPA."
+175,"A good and accessible introduction to speech recognition technology and its history is provided by the general audience book ""The Voice in the Machine. Building Computers That Understand Speech"" by Roberto Pieraccini."
+176,"The most recent book on speech recognition is Automatic Speech Recognition: A Deep Learning Approach written by Microsoft researchers D. Yu and L. Deng and published near the end of 2014, with highly mathematically oriented technical detail on how deep learning methods are derived and implemented in modern speech recognition systems based on DNNs and related deep learning methods. A related book, published earlier in 2014, ""Deep Learning: Methods and Applications"" by L. Deng and D. 
Yu provides a less technical but more methodology-focused overview of DNN-based speech recognition during 2009–2014, placed within the more general context of deep learning applications including not only speech recognition but also image recognition, natural language processing, information retrieval, multimodal processing, and multitask learning."
+177,"In terms of freely available resources, Carnegie Mellon University's Sphinx toolkit is one place to start both to learn about speech recognition and to start experimenting. Another resource is the HTK book. For more recent and state-of-the-art techniques, the Kaldi toolkit can be used. In 2017 Mozilla launched the open source project called Common Voice to gather a big database of voices that would help build the free speech recognition project DeepSpeech, using Google's open source platform TensorFlow. When Mozilla redirected funding away from the project in 2020, it was forked by its original developers as Coqui STT using the same open-source license."
+178,Google Gboard supports speech recognition on all Android applications. It can be activated through the microphone icon.
+179,Commercial cloud-based speech recognition APIs are broadly available.
+180,"For more software resources, see List of speech recognition software."
+181,"Pattern recognition systems are commonly trained from labeled ""training"" data. When no labeled data are available, other algorithms can be used to discover previously unknown patterns. KDD and data mining have a larger focus on unsupervised methods and a stronger connection to business use. Pattern recognition focuses more on the signal and also takes acquisition and signal processing into consideration. It originated in engineering, and the term is popular in the context of computer vision: a leading computer vision conference is named Conference on Computer Vision and Pattern Recognition."
+182,"In machine learning, pattern recognition is the assignment of a label to a given input value. 
In statistics, discriminant analysis was introduced for this same purpose in 1936. An example of pattern recognition is classification, which attempts to assign each input value to one of a given set of classes. Pattern recognition is a more general problem that encompasses other types of output as well. Other examples are regression, which assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a sequence of values; and parsing, which assigns a parse tree to an input sentence, describing the syntactic structure of the sentence."
+183,"Pattern recognition algorithms generally aim to provide a reasonable answer for all possible inputs and to perform ""most likely"" matching of the inputs, taking into account their statistical variation. This is opposed to pattern matching algorithms, which look for exact matches in the input with pre-existing patterns. A common example of a pattern-matching algorithm is regular expression matching, which looks for patterns of a given sort in textual data and is included in the search capabilities of many text editors and word processors."
+184,A modern definition of pattern recognition is:
+185,"Pattern recognition is generally categorized according to the type of learning procedure used to generate the output value. Supervised learning assumes that a set of training data has been provided, consisting of a set of instances that have been properly labeled by hand with the correct output. A learning procedure then generates a model that attempts to meet two sometimes conflicting objectives: perform as well as possible on the training data, and generalize as well as possible to new data. Unsupervised learning, on the other hand, assumes training data that has not been hand-labeled, and attempts to find inherent patterns in the data that can then be used to determine the correct output value for new data instances. 
A combination of the two that has been explored is semi-supervised learning, which uses a combination of labeled and unlabeled data. In cases of unsupervised learning, there may be no training data at all."
+186,"Sometimes different terms are used to describe the corresponding supervised and unsupervised learning procedures for the same type of output. The unsupervised equivalent of classification is normally known as clustering, based on the common perception of the task as involving no training data to speak of, and of grouping the input data into clusters based on some inherent similarity measure, rather than assigning each input instance into one of a set of pre-defined classes. In some fields, the terminology is different. In community ecology, the term classification is used to refer to what is commonly known as ""clustering""."
+187,"The piece of input data for which an output value is generated is formally termed an instance. The instance is formally described by a vector of features, which together constitute a description of all known characteristics of the instance. These feature vectors can be seen as defining points in an appropriate multidimensional space, and methods for manipulating vectors in vector spaces can be correspondingly applied to them, such as computing the dot product or the angle between two vectors. Features typically are either categorical, ordinal, integer-valued or real-valued. Often, categorical and ordinal data are grouped together, and this is also the case for integer-valued and real-valued data. Many algorithms work only in terms of categorical data and require that real-valued or integer-valued data be discretized into groups."
+188,"Many common pattern recognition algorithms are probabilistic in nature, in that they use statistical inference to find the best label for a given instance. 
Unlike other algorithms, which simply output a ""best"" label, probabilistic algorithms often also output a probability of the instance being described by the given label. In addition, many probabilistic algorithms output a list of the N-best labels with associated probabilities, for some value of N, instead of simply a single best label. When the number of possible labels is fairly small, N may be set so that the probability of all possible labels is output. Probabilistic algorithms have many advantages over non-probabilistic algorithms:"
+189,"Feature selection algorithms attempt to directly prune out redundant or irrelevant features. A general introduction to feature selection, which summarizes approaches and challenges, has been given. The complexity of feature selection is, because of its non-monotonous character, an optimization problem: given a total of n features, the powerset consisting of all 2^n - 1 non-empty subsets of features needs to be explored. The branch-and-bound algorithm does reduce this complexity but is intractable for medium to large values of the number of available features n."
+190,"Techniques to transform the raw feature vectors are sometimes used prior to application of the pattern-matching algorithm. Feature extraction algorithms attempt to reduce a large-dimensionality feature vector into a smaller-dimensionality vector that is easier to work with and encodes less redundancy, using mathematical techniques such as principal components analysis (PCA). The distinction between feature selection and feature extraction is that the resulting features after feature extraction has taken place are of a different sort than the original features and may not easily be interpretable, while the features left after feature selection are simply a subset of the original features."
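As an illustrative sketch of feature extraction via principal components analysis (not any particular library's API), the following projects an invented, redundant 5-dimensional feature vector down to 2 extracted features:

```python
import numpy as np

def pca_extract(X, k):
    """Project feature vectors onto the top-k principal components,
    yielding fewer, decorrelated features."""
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = np.cov(Xc, rowvar=False)           # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ top

rng = np.random.default_rng(0)
# 200 instances of a 5-dimensional feature vector whose last 3 features
# are linear combinations of the first 2 (i.e., redundant)
base = rng.normal(size=(200, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 3))])
Z = pca_extract(X, 2)
print(Z.shape)  # (200, 2)
```

Note that the two extracted columns are linear mixtures of all five original features; as the text says, they are generally not interpretable as a subset of the originals, unlike the result of feature selection.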
+191,"The problem of pattern recognition can be stated as follows: Given an unknown function $g:{\mathcal {X}}\rightarrow {\mathcal {Y}}$ that maps input instances ${\boldsymbol {x}}\in {\mathcal {X}}$ to output labels $y\in {\mathcal {Y}}$, along with training data $\mathbf {D} =\{({\boldsymbol {x}}_{1},y_{1}),\dots ,({\boldsymbol {x}}_{n},y_{n})\}$ assumed to represent accurate examples of the mapping, produce a function $h:{\mathcal {X}}\rightarrow {\mathcal {Y}}$ that approximates as closely as possible the correct mapping $g$. In order for this to be a well-defined problem, ""approximates as closely as possible"" needs to be defined rigorously. In decision theory, this is defined by specifying a loss function or cost function that assigns a specific value to the ""loss"" resulting from producing an incorrect label. The goal then is to minimize the expected loss, with the expectation taken over the probability distribution of ${\mathcal {X}}$. In practice, neither the distribution of ${\mathcal {X}}$ nor the ground truth function $g:{\mathcal {X}}\rightarrow {\mathcal {Y}}$ is known exactly; they can only be approximated empirically by collecting a large number of samples of ${\mathcal {X}}$ and hand-labeling them using the correct value of ${\mathcal {Y}}$. The particular loss function depends on the type of label being predicted. For example, in the case of classification, the simple zero-one loss function is often sufficient. 
This corresponds simply to assigning a loss of 1 to any incorrect labeling and implies that the optimal classifier minimizes the error rate on independent test data. The goal of the learning procedure is then to minimize the error rate on a ""typical"" test set." +192,"For a probabilistic pattern recognizer, the problem is instead to estimate the probability of each possible output label given a particular input instance, i.e., to estimate a function of the form" +193,"where the feature vector input is ${\boldsymbol {x}}$, and the function f is typically parameterized by some parameters ${\boldsymbol {\theta }}$. In a discriminative approach to the problem, f is estimated directly. In a generative approach, however, the inverse probability $p({\boldsymbol {x}}|{\text{label}})$ is instead estimated and combined with the prior probability $p({\text{label}}|{\boldsymbol {\theta }})$ using Bayes' rule, as follows:" +194,"When the labels are continuously distributed, the denominator involves integration rather than summation:" +195,"The value of ${\boldsymbol {\theta }}$ is typically learned using maximum a posteriori estimation. This finds the best value that simultaneously meets two conflicting objectives: to perform as well as possible on the training data and to find the simplest possible model. Essentially, this combines maximum likelihood estimation with a regularization procedure that favors simpler models over more complex models. In a Bayesian context, the regularization procedure can be viewed as placing a prior probability $p({\boldsymbol {\theta }})$ on different values of ${\boldsymbol {\theta }}$. 
Mathematically: ${\boldsymbol {\theta }}^{*}=\arg \max _{\boldsymbol {\theta }}p({\boldsymbol {\theta }}|\mathbf {D} )$" +196,"where ${\boldsymbol {\theta }}^{*}$ is the value used for ${\boldsymbol {\theta }}$ in the subsequent evaluation procedure, and $p({\boldsymbol {\theta }}|\mathbf {D} )$, the posterior probability of ${\boldsymbol {\theta }}$, is given by" +197,"In the Bayesian approach to this problem, instead of choosing a single parameter vector ${\boldsymbol {\theta }}^{*}$, the probability of a given label for a new instance ${\boldsymbol {x}}$ is computed by integrating over all possible values of ${\boldsymbol {\theta }}$, weighted according to the posterior probability:" +198,"The first pattern classifier – the linear discriminant presented by Fisher – was developed in the frequentist tradition. The frequentist approach entails that the model parameters are considered unknown, but objective. The parameters are then computed from the collected data. For the linear discriminant, these parameters are precisely the mean vectors and the covariance matrix. Also the probability of each class $p({\text{label}})$ is estimated from the collected dataset. Note that the usage of 'Bayes rule' in a pattern classifier does not make the classification approach Bayesian." +199,"Bayesian statistics has its origin in Greek philosophy, where a distinction was already made between 'a priori' and 'a posteriori' knowledge. Later Kant defined his distinction between what is a priori known – before observation – and the empirical knowledge gained from observations. In a Bayesian pattern classifier, the class probabilities $p({\text{label}})$ can be chosen by the user, which are then a priori. 
Moreover, experience quantified as a priori parameter values can be weighted with empirical observations – using, e.g., the Beta and Dirichlet distributions. The Bayesian approach facilitates a seamless intermixing between expert knowledge in the form of subjective probabilities, and objective observations." +200,Probabilistic pattern classifiers can be used according to a frequentist or a Bayesian approach. +201,"Within medical science, pattern recognition is the basis for computer-aided diagnosis systems. CAD describes a procedure that supports the doctor's interpretations and findings. Other typical applications of pattern recognition techniques are automatic speech recognition, speaker identification, classification of text into several categories, the automatic recognition of handwriting on postal envelopes, automatic recognition of images of human faces, or handwriting image extraction from medical forms. The last two examples form the subtopic image analysis of pattern recognition that deals with digital images as input to pattern recognition systems." +202,"Optical character recognition is an example of the application of a pattern classifier. The method of signing one's name was captured with stylus and overlay starting in 1990. The strokes, speed, relative min, relative max, acceleration and pressure are used to uniquely identify and confirm identity. Banks were first offered this technology, but were content to collect from the FDIC for any bank fraud and did not want to inconvenience customers." +203,Pattern recognition has many real-world applications in image processing. Some examples include: +204,"In psychology, pattern recognition is used to make sense of and identify objects, and is closely related to perception. This explains how the sensory inputs humans receive are made meaningful. Pattern recognition can be thought of in two different ways. The first concerns template matching and the second concerns feature detection. 
A template is a pattern used to produce items of the same proportions. The template-matching hypothesis suggests that incoming stimuli are compared with templates in the long-term memory. If there is a match, the stimulus is identified. Feature detection models, such as the Pandemonium system for classifying letters, suggest that the stimuli are broken down into their component parts for identification. For example, a capital E can be broken down into three horizontal lines and one vertical line." +205,"Algorithms for pattern recognition depend on the type of label output, on whether learning is supervised or unsupervised, and on whether the algorithm is statistical or non-statistical in nature. Statistical algorithms can further be categorized as generative or discriminative." +206,Parametric: +207,Nonparametric: +208,Unsupervised: +209,Two different kinds of rule-based systems emerged within the field of artificial intelligence in the 1970s: +210,The differences and relationships between these two kinds of rule-based system have been a major source of misunderstanding and confusion. +211,"Both kinds of rule-based systems use either forward or backward chaining, in contrast with imperative programs, which execute commands listed sequentially. However, logic programming systems have a logical interpretation, whereas production systems do not." +212,"A classic example of a production rule-based system is the domain-specific expert system that uses rules to make deductions or choices. For example, an expert system might help a doctor choose the correct diagnosis based on a cluster of symptoms, or select tactical moves to play a game." +213,"Rule-based systems can be used to perform lexical analysis to compile or interpret computer programs, or in natural language processing." +214,"Rule-based programming attempts to derive execution instructions from a starting set of data and rules. 
This is a more indirect method than that employed by an imperative programming language, which lists execution steps sequentially." +215,A typical rule-based system has four basic components: +216,"Whereas the matching phase of the inference engine has a logical interpretation, the conflict resolution and action phases do not. Instead, ""their semantics is usually described as a series of applications of various state-changing operators, which often gets quite involved, and they can hardly be regarded as declarative""." +217,"The logic programming family of computer systems includes the programming language Prolog, the database language Datalog and the knowledge representation and problem-solving language Answer Set Programming. In all of these languages, rules are written in the form of clauses: A :- B1, ..., Bn." +218,and are read as declarative sentences in logical form: A if B1 and ... and Bn. +219,"In the simplest case of Horn clauses, which are a subset of first-order logic, all of the A, B1, ..., Bn are atomic formulae." +220,"Although Horn clause logic programs are Turing complete, for many practical applications, it is useful to extend Horn clause programs by allowing negative conditions, implemented by negation as failure. Such extended logic programs have the knowledge representation capabilities of a non-monotonic logic." +221,"The most obvious difference between the two kinds of systems is that production rules are typically written in the forward direction, if A then B, and logic programming rules are typically written in the backward direction, B if A. In the case of logic programming rules, this difference is superficial and purely syntactic. It does not affect the semantics of the rules. Nor does it affect whether the rules are used to reason backwards, Prolog style, to reduce the goal B to the subgoals A, or whether they are used, Datalog style, to derive B from A."
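The Datalog-style derivation of B from A described in the entries above can be illustrated with a minimal propositional forward-chaining loop (the atoms and rules are invented for this sketch; real production systems add conflict resolution and actions on working memory):

```python
# Each rule is (head, body): "head if all atoms in body hold".
# A rule with an empty body is a fact. The atoms are invented examples.
rules = [
    ("mortal", ["human"]),
    ("human", ["greek"]),
    ("greek", []),          # a fact
]

def forward_chain(rules):
    """Derive every atom reachable from the facts (naive fixed-point loop)."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in facts and all(atom in facts for atom in body):
                facts.add(head)
                changed = True
    return facts

print(forward_chain(rules))  # {'greek', 'human', 'mortal'}
```

A backward chainer would instead start from the goal `mortal` and recursively reduce it to subgoals, Prolog style; with these rules both directions derive the same atoms.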
+222,"In the case of production rules, the forward direction of the syntax reflects the stimulus-response character of most production rules, with the stimulus A coming before the response B. Moreover, even in cases when the response is simply to draw a conclusion B from an assumption A, as in modus ponens, the match-resolve-act cycle is restricted to reasoning forwards from A to B. Reasoning backwards in a production system would require the use of an entirely different kind of inference engine." +223,"In his Introduction to Cognitive Science, Paul Thagard includes logic and rules as alternative approaches to modelling human thinking. He does not consider logic programs in general, but he considers Prolog to be, not a rule-based system, but ""a programming language that uses logic representations and deductive techniques""." +224,"He argues that rules, which have the form IF condition THEN action, are ""very similar"" to logical conditionals, but they are simpler and have greater psychological plausibility. Among other differences between logic and rules, he argues that logic uses deduction, but rules use search and can be used to reason either forward or backward. Sentences in logic ""have to be interpreted as universally true"", but rules can be defaults, which admit exceptions. He does not observe that all of these features of rules apply to logic programming systems." +225,"Machine learning is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data, and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance." +226,"Machine learning approaches have been applied to many fields including natural language processing, computer vision, speech recognition, email filtering, agriculture, and medicine. 
In its application across business problems, machine learning is also known as predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods." +227,"The mathematical foundations of ML are provided by mathematical optimization methods. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning." +228,"From a theoretical viewpoint, probably approximately correct learning provides a framework for describing machine learning." +229,"The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period." +230,"Although the earliest machine learning model was introduced in the 1950s when Arthur Samuel invented a program that calculated the winning chance in checkers for each side, the history of machine learning traces back to decades of human desire and effort to study human cognitive processes. In 1949, Canadian psychologist Donald Hebb published the book The Organization of Behavior, in which he introduced a theoretical neural structure formed by certain interactions among nerve cells. Hebb's model of neurons interacting with one another laid the groundwork for how AIs and machine learning algorithms work with nodes, or artificial neurons used by computers to communicate data. Other researchers who have studied human cognitive systems contributed to the modern machine learning technologies as well, including logician Walter Pitts and Warren McCulloch, who proposed the early mathematical models of neural networks to come up with algorithms that mirror human thought processes."
+231,"By the early 1960s an experimental ""learning machine"" with punched tape memory, called Cybertron, had been developed by Raytheon Company to analyze sonar signals, electrocardiograms, and speech patterns using rudimentary reinforcement learning. It was repetitively ""trained"" by a human operator/teacher to recognize patterns and equipped with a ""goof"" button to cause it to re-evaluate incorrect decisions. A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report was given on using teaching strategies so that an artificial neural network learns to recognize 40 characters from a computer terminal." +232,"Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: ""A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E."" This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper ""Computing Machinery and Intelligence"", in which the question ""Can machines think?"" is replaced with the question ""Can machines do what we can do?""." +233,Modern-day machine learning has two objectives. One is to classify data based on models which have been developed; the other purpose is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify the cancerous moles. 
A machine learning algorithm for stock trading may inform the trader of potential future outcomes." +234,"As a scientific endeavor, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed ""neural networks""; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis." +235,"However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favor. Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval. Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as ""connectionism"", by researchers from other disciplines including Hopfield, Rumelhart, and Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation." +236,"Machine learning, reorganized and recognized as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics, fuzzy logic, and probability theory."
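The two objectives named above – classifying data with a learned model, and predicting outcomes for new inputs – can be sketched with a toy nearest-centroid classifier (the data and the particular method are illustrative assumptions, not a reference implementation):

```python
import numpy as np

# Hypothetical labeled training data: two classes in a 2-D feature space.
X = np.array([[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])

# "Classify data based on a model": the model here is one centroid per class.
centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(x):
    # "Make predictions for future outcomes": label a previously unseen input
    # by the class whose centroid lies nearest to it.
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(predict(np.array([0.1, 0.05])), predict(np.array([1.0, 0.9])))
```

Fitting happens once on the training data; prediction then applies the fitted model to inputs that were not part of training, which is the distinction the text draws.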
+237,"An alternative view shows that compression algorithms implicitly map strings into implicit feature space vectors, and that compression-based similarity measures compute similarity within these feature spaces. For each compressor C we define an associated vector space ℵ, such that C maps an input string x to the vector norm ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is precluded by space; instead, three representative lossless compression methods, LZW, LZ77, and PPM, can be examined." +238,"According to AIXI theory, a connection more directly explained in Hutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since you cannot unzip it without both, but there may be an even smaller combined form." +239,"Examples of AI-powered audio/video compression software include VP9, NVIDIA Maxine, AIVC, AccMPEG. Examples of software that can perform AI-powered image compression include OpenCV, TensorFlow, MATLAB's Image Processing Toolbox and High-Fidelity Generative Image Compression." +240,"In unsupervised machine learning, k-means clustering can be utilized to compress data by grouping similar data points into clusters. This technique simplifies handling extensive datasets that lack predefined labels and finds widespread use in fields such as image compression." +241,"Data compression aims to reduce the size of data files, enhancing storage efficiency and speeding up data transmission. K-means clustering, an unsupervised machine learning algorithm, is employed to partition a dataset into a specified number of clusters, k, each represented by the centroid of its points. This process condenses extensive datasets into a more compact set of representative points. 
Particularly beneficial in image and signal processing, k-means clustering aids in data reduction by replacing groups of data points with their centroids, thereby preserving the core information of the original data while significantly decreasing the required storage space." +242,"Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of unknown properties in the data. Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as ""unsupervised learning"" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data." +243,Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances. +244,"The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. 
Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms." +245,"Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder to call the overall field." +246,"Conventional statistical analyses require the a priori selection of a model most suitable for the study data set. In addition, only significant or theoretically relevant variables based on previous experience are included for analysis. In contrast, machine learning is not built on a pre-structured model; rather, the data shape the model by detecting underlying patterns. The more variables used to train the model, the more accurate the ultimate model will be." +247,"Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein ""algorithmic model"" means more or less the machine learning algorithms like Random Forest." +248,"Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning." +249,"Analytical and computational techniques derived from deep-rooted physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics." +250,"A core objective of a learner is to generalize from its experience. Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. 
The training examples come from some generally unknown probability distribution and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases." +251,"The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory via the Probably Approximately Correct Learning model. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalization error." +252,"For the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalization will be poorer." +253,"In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results: Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time." +254,"Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the ""signal"" or ""feedback"" available to the learning system:" +255,"Although each algorithm has advantages and limitations, no single algorithm works for all problems."
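The relationship between hypothesis complexity and training error described in the entries above can be shown with a toy polynomial fit (the quadratic ground truth, the noise level, and the degrees compared are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 20)
y = x**2 + 0.1 * rng.normal(size=x.size)   # assumed quadratic signal + noise

def train_error(degree):
    """Mean squared error on the training set for a polynomial hypothesis."""
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x) - y) ** 2)

# A degree-1 hypothesis is less complex than the underlying function and
# underfits; raising the degree lowers training error; a degree-9 hypothesis
# chases the noise (overfitting) and would generalize worse than degree 2.
print([train_error(d) for d in (1, 2, 9)])
```

Training error alone keeps falling as complexity grows, which is why generalization must be judged on held-out data, as the surrounding text explains.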
+256,"Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs. The data is known as training data, and consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. Through iterative optimization of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs. An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task." +257,"Types of supervised-learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may have any numerical value within a range. As an example, for a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email." +258,"Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification." +259,"Unsupervised learning algorithms find structures in data that has not been labeled, classified or categorized. 
Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. Central applications of unsupervised machine learning include clustering, dimensionality reduction, and density estimation. Unsupervised learning algorithms also streamlined the process of identifying large indel-based haplotypes of a gene of interest from pan-genome." +260,"Cluster analysis is the assignment of a set of observations into subsets so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity." +261,"Semi-supervised learning falls between unsupervised learning and supervised learning. Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce a considerable improvement in learning accuracy." +262,"In weakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets." +263,"Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. 
Due to its generality, the field is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In reinforcement learning, the environment is typically represented as a Markov decision process. Many reinforcement learning algorithms use dynamic programming techniques. Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent." +264,"Dimensionality reduction is a process of reducing the number of random variables under consideration by obtaining a set of principal variables. In other words, it is a process of reducing the dimension of the feature set, also called the ""number of features"". Most of the dimensionality reduction techniques can be considered as either feature elimination or extraction. One of the popular methods of dimensionality reduction is principal component analysis. PCA involves changing higher-dimensional data to a smaller space. This results in a smaller dimension of data, while keeping all original variables in the model without changing the data. 
+The manifold hypothesis proposes that high-dimensional data sets lie along low-dimensional manifolds, and many dimensionality reduction techniques make this assumption, leading to the area of manifold learning and manifold regularization." +265,"Other approaches have been developed which do not fit neatly into this three-fold categorization, and sometimes more than one is used by the same machine learning system. For example, topic modeling, meta-learning." +266,"Self-learning, as a machine learning paradigm, was introduced in 1982 along with a neural network capable of self-learning, named crossbar adaptive array. 
It is learning with no external rewards and no external teacher advice. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions about consequence situations. The system is driven by the interaction between cognition and emotion.
+The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that in each iteration it executes the following machine learning routine:"
267,in situation s perform action a
268,receive a consequence situation s'
269,compute emotion of being in the consequence situation v(s')
270,update crossbar memory w'(a,s) = w(a,s) + v(s')
271,"It is a system with only one input, situation s, and only one output, action a. There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioral environment where it behaves, and the other is the genetic environment, wherefrom it initially and only once receives initial emotions about situations to be encountered in the behavioral environment. After receiving the genome vector from the genetic environment, the CAA learns a goal-seeking behavior, in an environment that contains both desirable and undesirable situations."
272,"Several learning algorithms aim at discovering better representations of the inputs provided during training. Classic examples include principal component analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. 
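The crossbar routine listed above can be sketched schematically. The toy behavioral environment, the genome-supplied emotion values, and the dimensions below are all invented for illustration; only the additive emotion update to the crossbar memory mirrors the text:

```python
import random

n_situations, n_actions = 4, 2
# genome: initial emotions about situations (situation 3 desirable, 0 undesirable)
emotion = [-1.0, 0.0, 0.0, 1.0]
W = [[0.0] * n_situations for _ in range(n_actions)]  # crossbar memory w(a, s)

def consequence(s, a):
    # toy behavioral environment: action 1 moves toward situation 3, action 0 toward 0
    return min(n_situations - 1, s + 1) if a == 1 else max(0, s - 1)

rng = random.Random(0)
for _ in range(200):
    s = rng.randrange(n_situations)
    # in situation s perform action a (chosen at random here, for exploration)
    a = rng.randrange(n_actions)
    # receive a consequence situation s'
    s2 = consequence(s, a)
    # compute emotion of being in the consequence situation
    v = emotion[s2]
    # update crossbar memory: add the consequence emotion to w(a, s)
    W[a][s] += v
```

After many iterations the memory entries for actions leading toward the desirable situation dominate those leading toward the undesirable one, which is the goal-seeking tendency described in the text.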
This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task."
273,"Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labeled input data. Examples include artificial neural networks, multilayer perceptrons, and supervised dictionary learning. In unsupervised feature learning, features are learned with unlabeled input data. Examples include dictionary learning, independent component analysis, autoencoders, matrix factorization and various forms of clustering."
274,"Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros. Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors. Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data."
275,"Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data have not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms."
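As a concrete instance of the sparsity constraint mentioned above, the following toy sketch recovers a sparse representation of a signal over a fixed dictionary using iterative soft-thresholding (ISTA); the dictionary, signal, step size, and penalty are all synthetic:

```python
def soft(z, t):
    # soft-thresholding operator: shrink z toward zero by t
    return max(z - t, 0.0) if z > 0 else min(z + t, 0.0)

# dictionary with 4 atoms in R^3 (columns), stored row-wise for clarity
D = [[1.0, 0.0, 0.0, 0.5],
     [0.0, 1.0, 0.0, 0.5],
     [0.0, 0.0, 1.0, 0.5]]
y = [2.0, 0.0, 0.0]  # signal: essentially the first atom, scaled

m, n = 3, 4
x = [0.0] * n
step_size, lam = 0.4, 0.1  # gradient step and sparsity penalty
for _ in range(1000):
    # residual D x - y
    r = [sum(D[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
    # gradient D^T r
    g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
    # gradient step followed by soft-thresholding keeps most entries at zero
    x = [soft(x[j] - step_size * g[j], step_size * lam) for j in range(n)]
```

The solution loads almost entirely on the first atom, with the remaining coefficients driven to zero by the threshold, which is exactly the "many zeros" behavior sparse coding asks for.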
+276,"Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination of basis functions, and the representation is assumed to be sparse. The method is strongly NP-hard and difficult to solve approximately. A popular heuristic method for sparse dictionary learning is the K-SVD algorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen example belongs. Given a dictionary built for each class, a new example is associated with the class whose dictionary best sparsely represents it. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot."
277,"In data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data. Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties, noise, deviations and exceptions."
278,"In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts of inactivity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods will fail on such data unless aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns."
279,"Three broad categories of anomaly detection techniques exist. 
Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit the least to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labeled as ""normal"" and ""abnormal"" and involve training a classifier. Semi-supervised anomaly detection techniques construct a model representing normal behavior from a given normal training data set and then test the likelihood of a test instance to be generated by the model."
280,"Robot learning is inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning, and finally meta-learning."
281,"Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of ""interestingness""."
282,"Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves ""rules"" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems."
283,"Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale systems in supermarkets. 
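The "interestingness" measures behind association rules are easy to make concrete: support is the fraction of transactions containing an itemset, and confidence is the support of the whole rule divided by the support of its antecedent. The transaction list below is invented:

```python
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
    {"potatoes", "burger"},
]

def support(itemset):
    # fraction of transactions that contain every item in the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    # conditional frequency of the consequent given the antecedent
    return support(antecedent | consequent) / support(antecedent)

sup = support({"onions", "potatoes", "burger"})        # 2 of 5 transactions
conf = confidence({"onions", "potatoes"}, {"burger"})  # 2 of the 3 matching baskets
```

A rule is reported as "strong" when both numbers exceed user-chosen thresholds; algorithms such as Apriori exist to find such itemsets without enumerating every subset.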
For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions."
284,"Learning classifier systems are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions."
285,"Inductive logic programming is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses, such as functional programs."
286,"Inductive logic programming is particularly useful in bioinformatics and natural language processing. 
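ILP proper searches over logic programs; as a much looser propositional analogue of "entails all positive and no negative examples", the following sketch generalizes attribute tuples over positive examples and checks the result against negatives (the attributes and examples are invented):

```python
def generalize(positives):
    # least-general generalization over tuples: keep a value where all
    # positive examples agree, wildcard ('?') where they differ
    hypothesis = list(positives[0])
    for ex in positives[1:]:
        hypothesis = [h if h == v else "?" for h, v in zip(hypothesis, ex)]
    return hypothesis

def covers(hypothesis, example):
    # a hypothesis covers an example if every non-wildcard slot matches
    return all(h in ("?", v) for h, v in zip(hypothesis, example))

positives = [("sunny", "warm", "high"), ("sunny", "warm", "low")]
negatives = [("rainy", "cold", "high")]

h = generalize(positives)  # most specific hypothesis covering all positives
```

Real ILP systems operate over first-order clauses with background knowledge rather than flat attribute tuples, but the induced hypothesis here illustrates the same contract: it covers every positive example and no negative one.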
Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting. Shapiro built the first implementation in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples. The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set."
287,"A machine learning model is a type of mathematical model which, after being ""trained"" on a given dataset, can be used to make predictions or classifications on new data. During training, a learning algorithm iteratively adjusts the model's internal parameters to minimize errors in its predictions. By extension, the term model can refer to several levels of specificity, from a general class of models and their associated learning algorithms to a fully trained model with all its internal parameters tuned."
288,"Various types of models have been used and researched for machine learning systems; picking the best model for a task is called model selection."
289,"Artificial neural networks, or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems ""learn"" to perform tasks by considering examples, generally without being programmed with any task-specific rules."
290,"An ANN is a model based on a collection of connected units or nodes called ""artificial neurons"", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a ""signal"", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. 
In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called ""edges"". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer to the last layer, possibly after traversing the layers multiple times."
291,"The original goal of the ANN approach was to solve problems in the same way that a human brain would. However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis."
292,Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.
293,"Decision tree learning uses a decision tree as a predictive model to go from observations about an item to conclusions about the item's target value. It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. 
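The neuron model described above, a non-linear function of the weighted sum of inputs with units aggregated into layers, can be sketched as a tiny forward pass; the weights below are fixed, illustrative numbers rather than learned ones:

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum of inputs plus bias, passed through a sigmoid non-linearity
    return 1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(weights, inputs)) + bias)))

def layer(inputs, weight_rows, biases):
    # each unit in the layer sees the same inputs with its own weights
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

x = [0.5, -1.0]
hidden = layer(x, [[1.0, -2.0], [0.5, 0.5]], [0.0, 0.1])  # 2-unit hidden layer
output = layer(hidden, [[1.5, -1.0]], [-0.2])             # single output unit
```

Training would adjust the weight and bias values (for example by backpropagation) so that the output approaches target values; this sketch shows only the signal flow from layer to layer.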
Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels, and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making."
294,"Support-vector machines, also known as support-vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other. The resulting model is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVM in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces."
295,"Regression analysis encompasses a large variety of statistical methods to estimate the relationship between input variables and their associated features. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression. 
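The ordinary-least-squares criterion and its ridge extension just mentioned have a simple closed form for one feature plus an intercept, beta = (X^T X + lambda I)^(-1) X^T y. For brevity this sketch also penalizes the intercept, which practical implementations usually do not, and the data are synthetic (y roughly 2x + 1):

```python
def fit(xs, ys, lam=0.0):
    n = len(xs)
    # entries of X^T X for the design matrix with columns [1, x],
    # plus the ridge penalty lam on the diagonal
    a, b, c = n + lam, sum(xs), sum(x * x for x in xs) + lam
    t0, t1 = sum(ys), sum(x * y for x, y in zip(xs, ys))  # X^T y
    det = a * c - b * b
    # 2x2 matrix inverse applied to X^T y
    intercept = (c * t0 - b * t1) / det
    slope = (a * t1 - b * t0) / det
    return intercept, slope

xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.1, 4.9, 7.0]
b0, b1 = fit(xs, ys)             # ordinary least squares
b0r, b1r = fit(xs, ys, lam=5.0)  # ridge shrinks the coefficients
```

The ridge penalty trades a small amount of bias for lower variance: the regularized slope is pulled toward zero relative to the OLS slope, which is the overfitting mitigation the text describes.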
When dealing with non-linear problems, go-to models include polynomial regression, logistic regression or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to higher-dimensional space."
296,"A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams."
297,"A Gaussian process is a stochastic process in which every finite collection of the random variables in the process has a multivariate normal distribution, and it relies on a pre-defined covariance function, or kernel, that models how pairs of points relate to each other depending on their locations."
298,"Given a set of observed points, or input–output examples, the distribution of the output of a new point as a function of its input data can be directly computed by looking at the observed points and the covariances between those points and the new, unobserved point."
299,Gaussian processes are popular surrogate models in Bayesian optimization used to do hyperparameter optimization.
300,"A genetic algorithm is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. 
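The mutation-and-crossover search just described can be sketched on the classic OneMax toy problem (maximize the number of ones in a bit string); the population size, rates, and fitness function are all illustrative:

```python
import random

def evolve(length=20, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    fitness = sum  # OneMax: fitness is simply the count of ones
    pop = [[rng.randrange(2) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # tournament selection: the fitter of two random genotypes reproduces
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)   # single-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:           # mutation: flip one random bit
                i = rng.randrange(length)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Selection pressure plus crossover quickly concentrates ones in the population, while mutation maintains the variation needed to escape local plateaus, mirroring the natural-selection analogy in the text.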
In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms."
301,"The theory of belief functions, also referred to as evidence theory or Dempster–Shafer theory, is a general framework for reasoning with uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories. These theoretical frameworks can be thought of as a kind of learner and have some analogous properties of how evidence is combined, just as a pmf-based Bayesian approach would combine probabilities. However, there are many caveats to these belief functions when compared to Bayesian approaches in order to incorporate ignorance and uncertainty quantification. These belief function approaches that are implemented within the machine learning domain typically leverage a fusion approach of various ensemble methods to better handle the learner's decision boundary, low samples, and ambiguous class issues that standard machine learning approaches tend to have difficulty resolving. However, the computational complexity of these algorithms is dependent on the number of propositions, and can lead to much higher computation time when compared to other machine learning approaches."
302,"Typically, machine learning models require a high quantity of reliable data in order for the models to perform accurate predictions. When training a machine learning model, machine learning engineers need to target and collect a large and representative sample of data. Data from the training set can be as varied as a corpus of text, a collection of images, sensor data, and data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. 
Biased models may result in detrimental outcomes, thereby furthering negative impacts on society or objectives. Algorithmic bias is a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and is notably being integrated within machine learning engineering teams."
303,"Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralizes the training process, allowing for users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by decentralizing the training process to many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google."
304,"There are many applications for machine learning, including:"
305,"In 2006, the media-services provider Netflix held the first ""Netflix Prize"" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million. Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns and they changed their recommendation engine accordingly. In 2010 The Wall Street Journal wrote about the firm Rebellion Research and their use of machine learning to predict the financial crisis. In 2012, co-founder of Sun Microsystems, Vinod Khosla, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software. 
In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognized influences among artists. In 2019 Springer Nature published the first research book created using machine learning. In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19. Machine learning was recently applied to predict the pro-environmental behavior of travelers. Recently, machine learning technology was also applied to optimize a smartphone's performance and thermal behavior based on the user's interaction with the phone. When applied correctly, machine learning algorithms can utilize a wide range of company characteristics to predict stock returns without overfitting. By employing effective feature engineering and combining forecasts, MLAs can generate results that far surpass those obtained from basic linear techniques like OLS."
306,"Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results. Reasons for this are numerous: lack of data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems."
307,"The ""black box theory"" poses yet another significant challenge. Black box refers to a situation where the algorithm or the process of producing an output is entirely opaque, meaning that even the coders of the algorithm cannot audit the pattern that the machine extracted out of the data. The UK House of Lords Select Committee claimed that such an “intelligence system” that could have a “substantial impact on an individual’s life” would not be considered acceptable unless it provided “a full and satisfactory explanation for the decisions” it makes." 
+308,"In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision. Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested. Microsoft's Bing Chat chatbot has been reported to produce hostile and offensive responses against its users."
309,"Machine learning has been used as a strategy to update the evidence related to a systematic review and to manage the increased reviewer burden related to the growth of biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves."
310,"Machine learning approaches in particular can suffer from different data biases. A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on human-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society."
311,"Language models learned from data have been shown to contain human-like biases. In an experiment carried out by ProPublica, an investigative journalism organization, a machine learning algorithm's insight towards the recidivism rates among prisoners falsely flagged “black defendants high risk twice as often as white defendants.” In 2015, Google Photos would often tag black people as gorillas, and in 2018 this still was not well resolved, but Google reportedly was still using the workaround to remove all gorillas from the training data, and thus was not able to recognize real gorillas at all. Similar issues with recognizing non-white people have been found in many other systems. In 2016, Microsoft tested Tay, a chatbot that learned from Twitter, and it quickly picked up racist and sexist language." 
+312,"Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains. Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that ""There's nothing artificial about AI...It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."""
313,"Explainable AI, or Interpretable AI, or Explainable Machine Learning, is artificial intelligence in which humans can understand the decisions or predictions made by the AI. It contrasts with the ""black box"" concept in machine learning where even its designers cannot explain why an AI arrived at a specific decision. By refining the mental models of users of AI-powered systems and dismantling their misconceptions, XAI promises to help users perform more effectively. XAI may be an implementation of the social right to explanation."
314,"Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data but penalizing the theory in accordance with how complex the theory is."
315,"Learners can also disappoint by ""learning the wrong lesson"". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers often do not primarily make judgments from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. 
Modifying these patterns on a legitimate image can result in ""adversarial"" images that the system misclassifies."
316,"Adversarial vulnerabilities can also arise in nonlinear systems, or from non-pattern perturbations. For some systems, it is possible to change the output by only changing a single adversarially chosen pixel. Machine learning models are often vulnerable to manipulation and/or evasion via adversarial machine learning."
317,"Researchers have demonstrated how backdoors can be placed undetectably into classifying machine learning models which are often developed and/or trained by third parties. Parties can change the classification of any input, including in cases for which a type of data/software transparency is provided, possibly including white-box access."
318,"Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training and a test set and evaluates the performance of the training model on the test set. In comparison, the K-fold-cross-validation method randomly partitions the data into K subsets and then K experiments are performed, each respectively considering 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy."
319,"In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning true positive rate and true negative rate respectively. Similarly, investigators sometimes report the false positive rate as well as the false negative rate. However, these rates are ratios that fail to reveal their numerators and denominators. The total operating characteristic is an effective method to express a model's diagnostic ability. 
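The numerators and denominators behind these rates can be made explicit with a small invented confusion matrix:

```python
# counts from a hypothetical binary classifier's confusion matrix
tp, fn = 40, 10  # actual positives split into true positives / false negatives
tn, fp = 45, 5   # actual negatives split into true negatives / false positives

sensitivity = tp / (tp + fn)          # true positive rate: 40/50
specificity = tn / (tn + fp)          # true negative rate: 45/50
false_positive_rate = fp / (fp + tn)  # complement of specificity
false_negative_rate = fn / (fn + tp)  # complement of sensitivity
accuracy = (tp + tn) / (tp + tn + fp + fn)
```

Reported alone, the ratios hide the counts (a sensitivity of 0.8 could come from 4/5 or 40,000/50,000), which is precisely why presentations such as the total operating characteristic keep the numerators and denominators visible.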
TOC shows the numerators and denominators of the previously mentioned rates, thus TOC provides more information than the commonly used receiver operating characteristic and ROC's associated area under the curve."
320,"Machine learning poses a host of ethical questions. Systems that are trained on datasets collected with biases may exhibit these biases upon use, thus digitizing cultural prejudices. For example, in 1988, the UK's Commission for Racial Equality found that St. George's Medical School had been using a computer program trained from data of previous admissions staff and this program had denied nearly 60 candidates who were either women or had non-European-sounding names. Using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants. Another example includes predictive policing company Geolitica's predictive algorithm that resulted in “disproportionately high levels of over-policing in low-income and minority communities” after being trained with historical crime data."
321,"While responsible collection of data and documentation of algorithmic rules used by a system is considered a critical part of machine learning, some researchers blame lack of participation and representation of minority population in the field of AI for machine learning's vulnerability to biases. In fact, according to research carried out by the Computing Research Association in 2021, “female faculty merely make up 16.1%” of all faculty members who focus on AI among several universities around the world. Furthermore, among the group of “new U.S. resident AI PhD graduates,” 45% identified as white, 22.4% as Asian, 3.2% as Hispanic, and 2.4% as African American, which further demonstrates a lack of diversity in the field of AI." 
+322,"AI can be well-equipped to make decisions in technical fields, which rely heavily on data and historical information. These decisions rely on objectivity and logical reasoning. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases."
323,"Other forms of ethical challenges, not related to personal biases, are seen in health care. There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States where there is a long-standing ethical dilemma of improving health care, but also increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is potential for machine learning in health care to provide professionals an additional tool to diagnose, medicate, and plan recovery paths for patients, but this requires these biases to be mitigated."
324,"Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units. By 2019, graphics processing units, often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI. OpenAI estimated the hardware computing used in the largest deep learning projects from AlexNet to AlphaZero, and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months."
325,"A physical neural network or Neuromorphic computer is a type of artificial neural network in which an electrically adjustable material is used to emulate the function of a neural synapse. 
""Physical"" neural network is used to emphasize the reliance on physical hardware used to emulate neurons, as opposed to software-based approaches. More generally, the term is applicable to other artificial neural networks in which a memristor or other electrically adjustable resistance material is used to emulate a neural synapse." +326,"Embedded machine learning is a sub-field of machine learning where the machine learning model is run on embedded systems with limited computing resources, such as wearable computers, edge devices and microcontrollers. Running machine learning models on embedded devices removes the need to transfer and store data on cloud servers for further processing, thereby reducing the data breaches and privacy leaks that can occur when data is transferred, and also minimizes theft of intellectual property, personal data and business secrets. Embedded machine learning can be applied through several techniques, including hardware acceleration, approximate computing, and optimization of machine learning models. Pruning, quantization, knowledge distillation, low-rank factorization, network architecture search and parameter sharing are a few of the techniques used to optimize machine learning models." +327,Software suites containing a variety of machine learning algorithms include the following: +328,"Artificial intelligence, in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software which enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs."
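Quantization, one of the model-optimization techniques listed above for embedded machine learning, can be sketched as mapping 32-bit floats to 8-bit integers. This is a simplified illustration of the idea, not a production scheme; the function names and the affine scale/zero-point formulation are assumptions for the example:

```python
# Simplified post-training quantization sketch: map float weights to 8-bit
# integers with an affine scale/zero-point, then map back (dequantize).
# Storage drops roughly 4x, at the cost of a small rounding error.

def quantize(weights, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0   # guard against all-equal weights
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

q, s, z = quantize([-1.0, 0.0, 0.5, 1.0])
approx = dequantize(q, s, z)   # close to the original weights
```

The embedded device would store and compute with the small integers `q`, reconstructing approximate floats only where needed.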
Some high-profile applications include advanced web search engines; recommendation systems; interacting via human speech; autonomous vehicles; generative and creative tools; and superhuman play and analysis in strategy games. However, many AI applications are not perceived as AI: ""A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore.""" +330,"Alan Turing was the first person to conduct substantial research in the field that he called machine intelligence. Artificial intelligence was founded as an academic discipline in 1956. The field went through multiple cycles of optimism, followed by periods of disappointment and loss of funding, known as AI winter. Funding and interest vastly increased after 2012, when deep learning surpassed all previous AI techniques, and after 2017 with the transformer architecture. This led to the AI boom of the early 2020s, with companies, universities, and laboratories overwhelmingly based in the United States pioneering significant advances in artificial intelligence." +331,"The growing use of artificial intelligence in the 21st century is influencing a societal and economic shift towards increased automation, data-driven decision-making, and the integration of AI systems into various economic sectors and areas of life, impacting job markets, healthcare, government, industry, and education. This raises questions about the long-term effects, ethical implications, and risks of AI, prompting discussions about regulatory policies to ensure the safety and benefits of the technology." +332,"The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.
General intelligence—the ability to complete any task performable by a human on an at least equal level—is among the field's long-term goals." +333,"To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields." +334,The general problem of simulating intelligence has been broken into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research. +335,"Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics." +336,"Many of these algorithms are insufficient for solving large reasoning problems because they experience a ""combinatorial explosion"": they become exponentially slower as the problems grow larger. Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments. Accurate and efficient reasoning is an unsolved problem." +337,"Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery, and other areas." +338,"A knowledge base is a body of knowledge represented in a form that can be used by a program.
An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge. Knowledge bases need to represent things such as: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge; default reasoning; and many other aspects and domains of knowledge." +339,"Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge and the sub-symbolic form of most commonsense knowledge. There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications." +340,"An ""agent"" is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen. In automated planning, the agent has a specific goal. In automated decision making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision making agent assigns a number to each situation that measures how much the agent prefers it. For each possible action, it can calculate the ""expected utility"": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility." +341,"In classical planning, the agent knows exactly what the effect of any action will be. In most real-world problems, however, the agent may not be certain about the situation they are in and may not know for certain what will happen after each possible action. It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked." +342,"In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned or the agent can seek information to improve its preferences.
Information value theory can be used to weigh the value of exploratory or experimental actions. The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain what the outcome will be." +343,"A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way, and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy can be calculated, heuristic, or learned." +344,"Game theory describes the rational behavior of multiple interacting agents, and is used in AI programs that make decisions that involve other agents." +345,Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning. +346,"There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance. Supervised learning requires a human to label the input data first, and comes in two main varieties: classification and regression." +347,"In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as ""good"". Transfer learning is when the knowledge gained from one problem is applied to a new problem. Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning." +348,"Computational learning theory can assess learners by computational complexity, by sample complexity, or by other notions of optimization." +349,"Natural language processing allows programs to read, write and communicate in human languages such as English.
Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering." +350,"Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation unless restricted to small domains called ""micro-worlds"". Margaret Masterman believed that it was meaning, and not grammar, that was the key to understanding languages, and that thesauri and not dictionaries should be the basis of computational language structure." +351,"Modern deep learning techniques for NLP include word embedding, transformers, and others. In 2019, generative pre-trained transformer language models began to generate coherent text, and by 2023 these models were able to achieve human-level scores on the bar exam, SAT test, GRE test, and many other real-world applications." +352,Machine perception is the ability to use input from sensors to deduce aspects of the world. Computer vision is the ability to analyze visual input. +353,"The field includes speech recognition, image classification, facial recognition, object recognition, and robotic perception." +354,"Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process or simulate human feeling, emotion and mood. For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; it makes them appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction." +355,"However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject."
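Textual sentiment analysis, mentioned above as one of affective computing's moderate successes, can be illustrated in its simplest lexicon-based form. The word lists here are toy assumptions for the sketch; modern systems learn such associations from data rather than using hand-written lexicons:

```python
# Toy lexicon-based sentiment scorer: count positive vs. negative words.
# Real systems use learned models; this only illustrates the task itself.

POSITIVE = {"good", "great", "excellent", "happy", "love"}
NEGATIVE = {"bad", "terrible", "awful", "sad", "hate"}

def sentiment(text: str) -> float:
    """Score in [-1, 1]: +1 if all sentiment words are positive, -1 if all
    are negative, 0 if the text contains no sentiment words at all."""
    words = text.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment("I love this great product"))  # 1.0
print(sentiment("terrible, awful service"))    # -1.0
```

Even this crude counter captures the core classification task; the jump to multimodal sentiment analysis replaces the lexicon with models over audio, video and text together.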
+356,A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence. +357,AI research uses a wide variety of techniques to accomplish the goals above. +358,AI can solve many problems by intelligently searching through many possible solutions. There are two very different kinds of search used in AI: state space search and local search. +359,"State space search searches through a tree of possible states to try to find a goal state. For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis." +360,"Simple exhaustive searches are rarely sufficient for most real-world problems: the search space quickly grows to astronomical numbers. The result is a search that is too slow or never completes. ""Heuristics"" or ""rules of thumb"" can help to prioritize choices that are more likely to reach a goal." +361,"Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and counter-moves, looking for a winning position." +362,Local search uses mathematical optimization to find a solution to a problem. It begins with some form of guess and refines it incrementally. +363,Gradient descent is a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize a loss function. Variants of gradient descent are commonly used to train neural networks. +364,"Another type of local search is evolutionary computation, which aims to iteratively improve a set of candidate solutions by ""mutating"" and ""recombining"" them, selecting only the fittest to survive each generation." +365,Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization and ant colony optimization.
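Gradient descent, the local-search method described above, can be shown in minimal form: start from a guess and repeatedly step against the gradient of a loss function. The one-parameter loss, the numerical gradient estimate, and the step size are illustrative choices for this sketch:

```python
# Minimal gradient-descent sketch: minimize loss(x) = (x - 3)^2 by
# repeatedly stepping opposite a numerically estimated gradient.

def loss(x: float) -> float:
    return (x - 3.0) ** 2

def grad(f, x: float, eps: float = 1e-6) -> float:
    # Central-difference estimate of the derivative of f at x.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

x = 0.0                       # initial guess ("some form of guess")
for _ in range(200):
    x -= 0.1 * grad(loss, x)  # refine incrementally; 0.1 is the step size

print(round(x, 4))  # converges to 3.0, the minimizer
```

Training a neural network follows the same loop, only with millions of parameters and gradients computed by backpropagation instead of finite differences.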
+366,"Formal logic is used for reasoning and knowledge representation. Formal logic comes in two main forms: propositional logic and predicate logic." +367,"Deductive reasoning in logic is the process of proving a new statement from other statements that are given and assumed to be true. Proofs can be structured as proof trees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes by inference rules." +368,"Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem. In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved." +369,"Inference in both Horn clause logic and first-order logic is undecidable, and therefore intractable. However, backward reasoning with Horn clauses, which underpins computation in the logic programming language Prolog, is Turing complete. Moreover, its efficiency is competitive with computation in other symbolic programming languages." +370,"Fuzzy logic assigns a ""degree of truth"" between 0 and 1. It can therefore handle propositions that are vague and partially true." +371,"Non-monotonic logics, including logic programming with negation as failure, are designed to handle default reasoning. Other specialized versions of logic have been developed to describe many complex domains." +372,"Many problems in AI require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics.
Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design." +373,"Bayesian networks are a tool that can be used for reasoning, learning, planning and perception." +374,"Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time." +375,"The simplest AI applications can be divided into two types: classifiers, on one hand, and controllers, on the other. Classifiers are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience." +376,"There are many kinds of classifiers in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm. The k-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, when kernel methods such as the support vector machine displaced it. The naive Bayes classifier is reportedly the ""most widely used learner"" at Google, due in part to its scalability. Neural networks are also used as classifiers." +377,"An artificial neural network is based on a collection of nodes also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input layer, at least one hidden layer of nodes and an output layer.
Each node applies a function, and once the weight crosses its specified threshold, the data is transmitted to the next layer. A network is typically called a deep neural network if it has at least two hidden layers." +378,"Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm. Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function." +379,"In feedforward neural networks the signal passes in only one direction. Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short-term memory is the most successful network architecture for recurrent networks. Perceptrons use only a single layer of neurons; deep learning uses multiple layers. Convolutional neural networks strengthen the connections between neurons that are ""close"" to each other—this is especially important in image processing, where a local set of neurons must identify an ""edge"" before the network can identify an object." +380,"Deep learning uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify concepts relevant to a human, such as digits, letters or faces." +381,"Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification and others. The reason that deep learning performs so well in so many applications is not known as of 2023.
+The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough, but because of two factors: the incredible increase in computer power and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet." +382,"Generative pre-trained transformers are large language models that are based on the semantic relationships between words in sentences. Text-based GPT models are pre-trained on a large corpus of text, which can be taken from the internet. The pre-training consists of predicting the next token. Throughout this pre-training, GPT models accumulate knowledge about the world, and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT models are still prone to generating falsehoods called ""hallucinations"", although this can be reduced with RLHF and quality data. They are used in chatbots, which allow users to ask a question or request a task in simple text." +383,"Current models and services include Gemini, ChatGPT, Grok, Claude, Copilot and LLaMA. Multimodal GPT models can process different types of data, such as images, videos, sound and text." +384,"In the late 2010s, graphics processing units, increasingly designed with AI-specific enhancements and used with specialized software such as TensorFlow, displaced the previously dominant central processing units as the means of training large-scale machine learning models. Historically, specialized languages, such as Lisp, Prolog, Python and others, had been used."
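The pre-training objective described above, predicting the next token, can be illustrated at toy scale with bigram counts. Real GPT models use transformer networks over vast corpora and subword tokens; this sketch only shows the shape of the objective, with a hypothetical word-level "corpus":

```python
# Toy next-token predictor: count word bigrams in a tiny corpus, then
# generate by repeatedly emitting the most frequent successor. This is
# GPT-style generation minus the neural network, at word granularity.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1          # "training": count what follows what

def generate(start: str, length: int) -> list[str]:
    out = [start]
    for _ in range(length):
        best = successors[out[-1]].most_common(1)
        if not best:
            break                        # no known successor: stop
        out.append(best[0][0])           # greedy: most likely next token
    return out

print(" ".join(generate("the", 4)))
```

A language model replaces the count table with a learned probability distribution over the whole vocabulary, conditioned on the full preceding context rather than one word.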
+385,"AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines, targeting online advertisements, recommendation systems, driving internet traffic, targeted advertising, virtual assistants, autonomous vehicles, automatic language translation, facial recognition and image labeling." +386,"The application of AI in medicine and medical research has the potential to increase patient care and quality of life. Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients." +387,"For medical research, AI is an important tool for processing and integrating big data. This is particularly important for organoid and tissue engineering development, which use microscopy imaging as a key technique in fabrication. It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research. New AI tools can deepen our understanding of biomedically relevant pathways. For example, AlphaFold 2 demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein. In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria." +388,"Game-playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.
Then in 2017 it defeated Ke Jie, who was the best Go player in the world. Other programs handle imperfect-information games, such as the poker-playing program Pluribus. DeepMind developed increasingly generalistic reinforcement learning models, such as with MuZero, which could be trained to play chess, Go, or Atari games. In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map. In 2021 an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning." +389,"Various countries are deploying AI military applications. The main applications enhance command and control, communications, sensors, integration and interoperability. Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams. AI was incorporated into military operations in Iraq and Syria." +390,"In November 2023, US Vice President Kamala Harris disclosed a declaration signed by 31 nations to set guardrails for the military use of AI. The commitments include using legal reviews to ensure the compliance of military AI with international laws, and being cautious and transparent in the development of this technology." +391,"In the early 2020s, generative AI gained widespread prominence. In March 2023, 58% of US adults had heard about ChatGPT and 14% had tried it. 
The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by a fake photo of Pope Francis wearing a white puffer coat, the fictional arrest of Donald Trump, and a hoax of an attack on the Pentagon, as well as the usage in professional creative arts." +392,"There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported they had incorporated ""AI"" in some offerings or processes. A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management." +393,"In agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water." +394,"Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for ""classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights"" for example for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. It could also be used for activities in space such as space exploration, including analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation." +395,"AI has potential benefits and potential risks. 
AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to ""solve intelligence, and then use that to solve everything else"". However, as the use of AI has become widespread, several unintended consequences and risks have been identified. Systems in production can sometimes fail to factor ethics and bias into their AI training processes, especially when the AI algorithms are inherently unexplainable, as in deep learning." +396,"Machine-learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright." +397,"Technology companies collect a wide range of data from their users, including online activity, geolocation data, video and audio. For example, in order to build speech recognition algorithms, Amazon has recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them. Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy." +398,"AI developers argue that this is the only way to deliver valuable applications, and have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy. Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness. Brian Christian wrote that experts have pivoted ""from the question of 'what they know' to the question of 'what they're doing with it'.""" +399,"Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of ""fair use"".
Website owners who do not wish to have their copyrighted content AI-indexed or 'scraped' can add code to their site, much as they would to opt out of search engine indexing; some AI services, such as OpenAI's, currently honor such opt-outs. Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include ""the purpose and character of the use of the copyrighted work"" and ""the effect upon the potential market for the copyrighted work"". In 2023, leading authors sued AI companies for using their work to train generative AI." +400,"YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement. The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation. This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government. The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem." +401,"In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda. AI pioneer Geoffrey Hinton expressed concern about AI enabling ""authoritarian leaders to manipulate their electorates"" on a large scale, among other risks." +402,"Machine learning applications will be biased if they learn from biased data.
+The developers may not be aware that the bias exists. Bias can be introduced by the way training data is selected and by the way a model is deployed. If a biased algorithm is used to make decisions that can seriously harm people, then the algorithm may cause discrimination. Fairness in machine learning is the study of how to prevent the harm caused by algorithmic bias. It has become a serious area of academic study within AI. Researchers have discovered that it is not always possible to define ""fairness"" in a way that satisfies all stakeholders." +403,"On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as ""gorillas"" because they were black. The system was trained on a dataset that contained very few images of black people, a problem called ""sample size disparity"". Google ""fixed"" this problem by preventing the system from labelling anything as a ""gorilla"". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon." +404,"COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different—the system consistently overestimated the chance that a black person would re-offend and underestimated the chance that a white person would re-offend. In 2017, several researchers showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data." +405,"A program can make biased decisions even if the data does not explicitly mention a problematic feature.
The feature will correlate with other features, and the program will make the same decisions based on these features as it would on ""race"" or ""gender"". +Moritz Hardt said ""the most robust fact in this research area is that fairness through blindness doesn't work.""" +406,"Criticism of COMPAS highlighted that machine learning models are designed to make ""predictions"" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions as recommendations, some of these ""recommendations"" will likely be racist. Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is necessarily descriptive and not prescriptive." +407,"Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women." +408,"At its 2022 Conference on Fairness, Accountability, and Transparency, held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed." +409,"Many AI systems are so complex that their designers cannot explain how they reach their decisions. This is particularly true of deep neural networks, in which there are a large number of non-linear relationships between inputs and outputs. Still, some popular explainability techniques exist." +410,"It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. 
There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different from what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to actually have a strong tendency to classify images with a ruler as ""cancerous"", because pictures of malignancies typically include a ruler to show the scale. Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at ""low risk"" of dying from pneumonia. Having asthma is actually a severe risk factor, but since patients with asthma would usually get much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading." +411,"People who have been harmed by an algorithm's decision have a right to an explanation. Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists. Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used." +412,DARPA established the XAI program in 2014 to try to solve these problems. +413,"There are several possible solutions to the transparency problem. SHAP visualises the contribution of each feature to the output. LIME can locally approximate a model with a simpler, interpretable model. Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned. 
Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network have learned and produce output that can suggest what the network is learning." +414,"Artificial intelligence provides a number of tools that are useful to bad actors, such as authoritarian governments, terrorists, criminals or rogue states." +415,"A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. Even when used in conventional warfare, they are unlikely to be able to reliably choose targets and could potentially kill an innocent person. In 2014, 30 nations supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed. By 2015, over fifty countries were reported to be researching battlefield robots." +416,"AI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating on this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision-making more competitive with liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. All these technologies have been available since 2020 or earlier; AI facial recognition systems are already being used for mass surveillance in China." +417,"There are many other ways that AI is expected to help bad actors, some of which cannot be foreseen. 
For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours." +418,Training AI systems requires an enormous amount of computing power. Usually only Big Tech companies have the financial resources to make such investments. Smaller startups such as Cohere and OpenAI end up buying access to data centers from Google and Microsoft respectively. +419,"Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment." +420,"In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that ""we're in uncharted territory"" with AI. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at ""high risk"" of potential automation, while an OECD report classified only 9% of U.S. jobs as ""high risk"". The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies. In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence." +421,"Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that ""the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution"" is ""worth taking seriously"". 
Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy." +422,"From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement." +423,"It has been argued that AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, ""spell the end of the human race"". This scenario has been common in science fiction, in which a computer or robot suddenly develops a human-like ""self-awareness"" and becomes a malevolent character. These sci-fi scenarios are misleading in several ways." +424,"First, AI does not require human-like ""sentience"" to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it. Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that ""you can't fetch the coffee if you're dead."" In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is ""fundamentally on our side""." +425,"Second, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are made of language; they exist because there are stories that billions of people believe. 
The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive." +426,"The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI. Personalities such as Stephen Hawking, Bill Gates, and Elon Musk have expressed concern about existential risk from AI. +AI pioneers including Fei-Fei Li, Geoffrey Hinton, Yoshua Bengio, Cynthia Breazeal, Rana el Kaliouby, Demis Hassabis, Joy Buolamwini, and Sam Altman have expressed concerns about the risks of AI. In 2023, many leading AI experts issued the joint statement that ""Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war""." +427,"Other researchers, however, spoke in favor of a less dystopian view. AI pioneer Juergen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making ""human lives longer and healthier and easier."" While the tools that are now being used to improve lives can also be used by bad actors, ""they can also be used against the bad actors."" Andrew Ng also argued that ""it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."" Yann LeCun ""scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction."" In the early 2010s, experts argued that the risks are too distant in the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine. However, after 2016, the study of current and future risks and possible solutions became a serious area of research." +428,"Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. 
Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk." +429,"Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. +The field of machine ethics is also called computational morality, +and was founded at an AAAI symposium in 2005." +430,"Other approaches include Wendell Wallach's ""artificial moral agents"" +and Stuart J. Russell's three principles for developing provably beneficial machines." +431,"Artificial intelligence projects can have their ethical permissibility tested while designing, developing, and implementing an AI system. An AI framework such as the Care and Act Framework, developed by the Alan Turing Institute and built on the SUM values, tests projects in four main areas." +432,"Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others; however, these principles are not without criticism, especially with regard to the choice of people who contribute to these frameworks." +433,"Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers." +434,"The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating artificial intelligence; it is therefore related to the broader regulation of algorithms. 
+The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. According to the AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one in 2016 to 37 in 2022 alone. +Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. +Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, US and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. +The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. +In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years. In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics." +435,"In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that ""products and services using AI have more benefits than drawbacks"". A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. +In a 2023 Fox News poll, 35% of Americans thought it ""very important"", and an additional 41% thought it ""somewhat important"", for the federal government to regulate AI, versus 13% responding ""not very important"" and 8% responding ""not at all important""." 
+436,"In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near- and far-term risks of AI and the possibility of mandatory and voluntary regulatory frameworks. 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence." +437,"The study of mechanical or ""formal"" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as ""0"" and ""1"", could simulate any conceivable form of mathematical reasoning. This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an ""electronic brain"". +They developed several areas of research that would become part of AI, +such as McCulloch and Pitts's design for ""artificial neurons"" in 1943, and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that ""machine intelligence"" was plausible." +438,"The field of AI research was founded at a workshop at Dartmouth College in 1956. The attendees became the leaders of AI research in the 1960s. They and their students produced programs that the press described as ""astonishing"": computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English. Artificial intelligence laboratories were set up at a number of British and U.S. universities in the late 1950s and early 1960s." +439,"Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field. 
Herbert Simon predicted, ""machines will be capable, within twenty years, of doing any work a man can do"". Marvin Minsky agreed, writing, ""within a generation ... the problem of creating 'artificial intelligence' will substantially be solved"". They had, however, underestimated the difficulty of the problem. In 1974, both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill and ongoing pressure from the U.S. Congress to fund more productive projects. Minsky's and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether. The ""AI winter"", a period when obtaining funding for AI projects was difficult, followed." +440,"In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began." +441,"Up to this point, most of AI's funding had gone to projects which used high level symbols to represent mental objects like plans, goals, beliefs and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition, and began to look into ""sub-symbolic"" approaches. Rodney Brooks rejected ""representation"" in general and focussed directly on engineering machines that move and survive. 
Judea Pearl, Lotfi Zadeh and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than using precise logic. But the most important development was the revival of ""connectionism"", including neural network research, by Geoffrey Hinton and others. In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks." +442,"AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This ""narrow"" and ""formal"" focus allowed researchers to produce verifiable results and collaborate with other fields. +By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as ""artificial intelligence"". +However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence, which had several well-funded institutions by the 2010s." +443,"Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field. +For many specific tasks, other methods were abandoned. +Deep learning's success was based on both hardware improvements and access to large amounts of data. This success led to an enormous increase in interest and funding in AI. The amount of machine learning research increased by 50% in the years 2015–2019." +444,"In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study." 
+445,"In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2016, AlphaGo, developed by DeepMind, beat world champion Go player Lee Sedol. The program was taught only the rules of the game and developed strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. +These programs, and others, inspired an aggressive AI boom, where large companies began investing billions in AI research. According to 'AI Impacts', about $50 billion annually was invested in ""AI"" around 2022 in the U.S. alone and about 20% of new US Computer Science PhD graduates have specialized in ""AI"". +About 800,000 ""AI""-related US job openings existed in 2022." +446,"Alan Turing wrote in 1950: ""I propose to consider the question 'can machines think'?"" He advised changing the question from whether a machine ""thinks"", to ""whether or not it is possible for machinery to show intelligent behaviour"". He devised the Turing test, which measures the ability of a machine to simulate human conversation. Since we can only observe the behavior of the machine, it does not matter if it is ""actually"" thinking or literally has a ""mind"". Turing notes that we cannot determine these things about other people but ""it is usual to have a polite convention that everyone thinks""." +447,"Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure. However, they are critical that the test requires the machine to imitate humans. ""Aeronautical engineering texts,"" they wrote, ""do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'"" AI founder John McCarthy agreed, writing that ""Artificial intelligence is not, by definition, simulation of human intelligence""." 
+448,"McCarthy defines intelligence as ""the computational part of the ability to achieve goals in the world."" Another AI founder, Marvin Minsky, similarly describes it as ""the ability to solve hard problems"". The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals. These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the ""intelligence"" of the machine—and no other philosophical discussion is required, or may not even be possible." +449,"Another definition has been adopted by Google, a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence." +450,"No established unifying theory or paradigm has guided AI research for most of its history. The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches. This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers." +451,"Symbolic AI simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Such programs were highly successful at ""intelligent"" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: ""A physical symbol system has the necessary and sufficient means of general intelligent action.""" +452,"However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. 
Moravec's paradox is the discovery that high-level ""intelligent"" tasks were easy for AI, but low-level ""instinctive"" tasks were extremely difficult. Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a ""feel"" for the situation, rather than explicit symbolic knowledge. Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him." +453,"The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue that continuing research into symbolic AI will still be necessary to attain general intelligence, in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches." +454,"""Neats"" hope that intelligent behavior can be described using simple, elegant principles. ""Scruffies"" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor; scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s, but eventually was seen as irrelevant. Modern AI has elements of both." +455,"Finding a provably correct or optimal solution is intractable for many important problems. Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks." 
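The soft-computing techniques named above trade provable optimality for robust approximation. As an illustrative sketch of one of them, a minimal genetic algorithm evolves approximate solutions through selection, crossover and mutation (the test function, population size, mutation scale and generation count here are arbitrary illustrative choices, not taken from any particular system):

```python
import random

random.seed(0)  # for reproducibility of this sketch

def genetic_maximize(fitness, bounds, pop_size=50, generations=100):
    # Start from a random population of candidate solutions.
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Crossover and mutation: children are perturbed averages of two parents.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 + random.gauss(0, 0.1)  # Gaussian mutation
            children.append(min(hi, max(lo, child)))    # clamp to the search bounds
        pop = parents + children
    return max(pop, key=fitness)

# Approximately maximize f(x) = -(x - 3)^2 on [-10, 10]; the true optimum is x = 3.
best = genetic_maximize(lambda x: -(x - 3) ** 2, (-10, 10))
```

The result is only approximately optimal, which is exactly the trade-off the paragraph describes: imprecision is tolerated in exchange for a cheap, general-purpose search that needs nothing but a fitness function.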
+456,"AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible in hopes these solutions will lead indirectly to the field's long-term goals. General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The experimental sub-field of artificial general intelligence studies this area exclusively." +457,"It is an open question in the philosophy of mind whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that ""[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."" However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction." +458,"David Chalmers identified two problems in understanding the mind, which he named the ""hard"" and ""easy"" problems of consciousness. The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something. Human information processing is easy to explain; human subjective experience, however, is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like."
+459,Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam. +460,"Philosopher John Searle characterized this position as ""strong AI"": ""The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."" Searle counters this assertion with his Chinese room argument, which attempts to show that, even if a machine perfectly simulates human behavior, there is still no reason to suppose it also has a mind." +461,"It is difficult or impossible to reliably evaluate whether an advanced AI is sentient, and if so, to what degree. But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. Sapience may provide another moral basis for AI rights. Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society." +462,"In 2017, the European Union considered granting ""electronic personhood"" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part in society on their own." +463,"Progress in AI increased interest in the topic. 
Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited." +464,A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. +465,"If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an ""intelligence explosion"" and Vernor Vinge called a ""singularity""." +466,"However, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do." +467,"Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger." +468,"Edward Fredkin argues that ""artificial intelligence is the next stage in evolution"", an idea first proposed by Samuel Butler's ""Darwin among the Machines"" as far back as 1863, and expanded upon by George Dyson in his book of the same name in 1998." +469,"Thought-capable artificial beings have appeared as storytelling devices since antiquity, and have been a persistent theme in science fiction." +470,"A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. 
Clarke's and Stanley Kubrick's 2001: A Space Odyssey, with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator and The Matrix. In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still and Bishop from Aliens are less prominent in popular culture."
+471,"Isaac Asimov introduced the Three Laws of Robotics in many books and stories, most notably the ""Multivac"" series about a super-intelligent computer of the same name. Asimov's laws are often brought up during lay discussions of machine ethics; while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity."
+472,"Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence."
+473,The two most widely used textbooks in 2023.
+474,These were four of the most widely used AI textbooks in 2008:
+475,"A cellular automaton is a discrete model of computation studied in automata theory. Cellular automata are also called cellular spaces, tessellation automata, homogeneous structures, cellular structures, tessellation structures, and iterative arrays. Cellular automata have found application in various areas, including physics, theoretical biology and microstructure modeling."
+476,"A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such as on and off. The grid can be in any finite number of dimensions. 
For each cell, a set of cells called its neighborhood is defined relative to the specified cell. An initial state is selected by assigning a state to each cell. A new generation is created, according to some fixed rule that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighborhood. Typically, the rule for updating the state of cells is the same for each cell and does not change over time, and is applied to the whole grid simultaneously, though exceptions are known, such as the stochastic cellular automaton and asynchronous cellular automaton."
+477,"The concept was originally discovered in the 1940s by Stanislaw Ulam and John von Neumann while they were contemporaries at Los Alamos National Laboratory. While studied by some throughout the 1950s and 1960s, it was not until the 1970s and Conway's Game of Life, a two-dimensional cellular automaton, that interest in the subject expanded beyond academia. In the 1980s, Stephen Wolfram engaged in a systematic study of one-dimensional cellular automata, or what he calls elementary cellular automata; his research assistant Matthew Cook showed that one of these rules is Turing-complete."
+478,"The primary classifications of cellular automata, as outlined by Wolfram, are numbered one to four. They are, in order, automata in which patterns generally stabilize into homogeneity, automata in which patterns evolve into mostly stable or oscillating structures, automata in which patterns evolve in a seemingly chaotic fashion, and automata in which patterns become extremely complex and may last for a long time, with stable local structures. This last class is thought to be computationally universal, or capable of simulating a Turing machine. 
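The synchronous update scheme described above (one fixed rule, applied to every cell of a generation at once) can be sketched in a few lines of Python. This is a minimal illustration with an invented rule, not a rule from the text; periodic boundaries are handled by modular indexing:

```python
# Minimal sketch of one synchronous update of a one-dimensional, two-state
# cellular automaton. The rule here is illustrative: a cell becomes 1 iff
# exactly one cell in its three-cell neighborhood is currently 1.

def step(cells):
    n = len(cells)
    # Every cell is updated simultaneously from the *current* generation;
    # the % n indexing wraps the ends around (periodic boundaries).
    return [
        1 if (cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n]) == 1 else 0
        for i in range(n)
    ]

generation0 = [0, 0, 1, 0, 0, 0]
generation1 = step(generation0)
```

Because the new list is built from the old one before any cell is overwritten, the update is simultaneous, matching the definition above.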
Special types of cellular automata are reversible, where only a single configuration leads directly to a subsequent one, and totalistic, in which the future value of individual cells only depends on the total value of a group of neighboring cells. Cellular automata can simulate a variety of real-world systems, including biological and chemical ones."
+479,"One way to simulate a two-dimensional cellular automaton is with an infinite sheet of graph paper along with a set of rules for the cells to follow. Each square is called a ""cell"" and each cell has two possible states, black and white. The neighborhood of a cell is the nearby, usually adjacent, cells. The two most common types of neighborhoods are the von Neumann neighborhood and the Moore neighborhood. The former, named after the founding cellular automaton theorist, consists of the four orthogonally adjacent cells. The latter includes the von Neumann neighborhood as well as the four diagonally adjacent cells. For such a cell and its Moore neighborhood, there are 512 possible patterns. For each of the 512 possible patterns, the rule table would state whether the center cell will be black or white on the next time interval. Conway's Game of Life is a popular version of this model. Another common neighborhood type is the extended von Neumann neighborhood, which includes the two closest cells in each orthogonal direction, for a total of eight. The general equation for the total number of automata possible is k^(k^s), where k is the number of possible states for a cell, and s is the number of neighboring cells used to determine the cell's next state. Thus, in the two-dimensional system with a Moore neighborhood, the total number of automata possible would be 2^(2^9), or approximately 1.34×10^154."
+480,"It is usually assumed that every cell in the universe starts in the same state, except for a finite number of cells in other states; the assignment of state values is called a configuration. 
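The counting formula k^(k^s) is easy to check numerically. The following short sketch (the function names are ours) reproduces the figures quoted above for a two-state automaton with a Moore neighborhood:

```python
# With k states per cell and s cells feeding the update rule, there are
# k**s distinct neighborhood patterns, and a rule assigns one of the k
# states to each pattern, giving k**(k**s) possible rules.

def num_patterns(k, s):
    return k ** s

def num_automata(k, s):
    return k ** num_patterns(k, s)

# Two states; Moore neighborhood including the cell itself (s = 9):
patterns = num_patterns(2, 9)   # 512 possible neighborhood patterns
rules = num_automata(2, 9)      # 2**512, roughly 1.34e154
```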
More generally, it is sometimes assumed that the universe starts out covered with a periodic pattern, and only a finite number of cells violate that pattern. The latter assumption is common in one-dimensional cellular automata."
+481,"Cellular automata are often simulated on a finite grid rather than an infinite one. In two dimensions, the universe would be a rectangle instead of an infinite plane. The obvious problem with finite grids is how to handle the cells on the edges. How they are handled will affect the values of all the cells in the grid. One possible method is to allow the values in those cells to remain constant. Another method is to define neighborhoods differently for these cells. One could say that they have fewer neighbors, but then one would also have to define new rules for the cells located on the edges. These cells are usually handled with periodic boundary conditions resulting in a toroidal arrangement: when one goes off the top, one comes in at the corresponding position on the bottom, and when one goes off the left, one comes in on the right. This can be visualized as taping the left and right edges of the rectangle to form a tube, then taping the top and bottom edges of the tube to form a torus. Universes of other dimensions are handled similarly. This solves boundary problems with neighborhoods, but another advantage is that it is easily programmable using modular arithmetic functions. For example, in a 1-dimensional cellular automaton like the examples below, the neighborhood of a cell x_i^t is {x_(i−1)^(t−1), x_i^(t−1), x_(i+1)^(t−1)}, where t is the time step, and i is the index in one generation."
+482,"Stanislaw Ulam, while working at the Los Alamos National Laboratory in the 1940s, studied the growth of crystals, using a simple lattice network as his model. At the same time, John von Neumann, Ulam's colleague at Los Alamos, was working on the problem of self-replicating systems. 
Von Neumann's initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model. As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot, and the great cost of providing the robot with a ""sea of parts"" from which to build its replicant. Von Neumann wrote a paper entitled ""The general and logical theory of automata"" for the Hixon Symposium in 1948. Ulam was the one who suggested using a discrete system for creating a reductionist model of self-replication. Nils Aall Barricelli performed many of the earliest explorations of these models of artificial life."
+483,"Ulam and von Neumann created a method for calculating liquid motion in the late 1950s. The driving concept of the method was to consider a liquid as a group of discrete units and calculate the motion of each based on its neighbors' behaviors. Thus was born the first system of cellular automata. Like Ulam's lattice network, von Neumann's cellular automata are two-dimensional, with his self-replicator implemented algorithmically. The result was a universal copier and constructor working within a cellular automaton with a small neighborhood, and with 29 states per cell. Von Neumann gave an existence proof that a particular pattern would make endless copies of itself within the given cellular universe by designing a 200,000 cell configuration that could do so. This design is known as the tessellation model, and is called a von Neumann universal constructor."
+484,"Also in the 1940s, Norbert Wiener and Arturo Rosenblueth developed a model of excitable media with some of the characteristics of a cellular automaton. Their specific motivation was the mathematical description of impulse conduction in cardiac systems. However, their model is not a cellular automaton because the medium in which signals propagate is continuous, and wave fronts are curves. 
A true cellular automaton model of excitable media was developed and studied by J. M. Greenberg and S. P. Hastings in 1978; see Greenberg-Hastings cellular automaton. The original work of Wiener and Rosenblueth contains many insights and continues to be cited in modern research publications on cardiac arrhythmia and excitable systems."
+485,"In the 1960s, cellular automata were studied as a particular type of dynamical system and the connection with the mathematical field of symbolic dynamics was established for the first time. In 1969, Gustav A. Hedlund compiled many results following this point of view in what is still considered a seminal paper for the mathematical study of cellular automata. The most fundamental result is the characterization in the Curtis–Hedlund–Lyndon theorem of the set of global rules of cellular automata as the set of continuous endomorphisms of shift spaces."
+486,"In 1969, German computer pioneer Konrad Zuse published his book Calculating Space, proposing that the physical laws of the universe are discrete by nature, and that the entire universe is the output of a deterministic computation on a single cellular automaton; ""Zuse's Theory"" became the foundation of the field of study called digital physics."
+487,"Also in 1969, computer scientist Alvy Ray Smith completed a Stanford PhD dissertation on Cellular Automata Theory, the first mathematical treatment of CA as a general class of computers. Many papers came from this dissertation: He showed the equivalence of neighborhoods of various shapes, how to reduce a Moore neighborhood to a von Neumann neighborhood, and how to reduce any neighborhood to a von Neumann neighborhood. He proved that two-dimensional CA are computation universal, introduced 1-dimensional CA, and showed that they too are computation universal, even with simple neighborhoods. He showed how to subsume the complex von Neumann proof of construction universality into a consequence of computation universality in a 1-dimensional CA. 
Intended as the introduction to the German edition of von Neumann's book on CA, he wrote a survey of the field with dozens of references to papers, by many authors in many countries over a decade or so of work, often overlooked by modern CA researchers."
+488,"In the 1970s, a two-state, two-dimensional cellular automaton named Game of Life became widely known, particularly among the early computing community. Invented by John Conway and popularized by Martin Gardner in a Scientific American article, its rules are as follows:"
+489,"Any live cell with fewer than two live neighbours dies, as if caused by underpopulation."
+490,Any live cell with two or three live neighbours lives on to the next generation.
+491,"Any live cell with more than three live neighbours dies, as if by overpopulation."
+492,"Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction."
+493,"Despite its simplicity, the system achieves an impressive diversity of behavior, fluctuating between apparent randomness and order. One of the most apparent features of the Game of Life is the frequent occurrence of gliders, arrangements of cells that essentially move themselves across the grid. It is possible to arrange the automaton so that the gliders interact to perform computations, and after much effort it has been shown that the Game of Life can emulate a universal Turing machine. It was viewed as a largely recreational topic, and little follow-up work was done outside of investigating the particularities of the Game of Life and a few related rules in the early 1970s."
+494,"Stephen Wolfram independently began working on cellular automata in mid-1981 after considering how complex patterns seemed to form in nature in violation of the Second Law of Thermodynamics. His investigations were initially spurred by a desire to model systems such as the neural networks found in brains. 
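The four Life rules listed above translate directly into code. This minimal Python sketch uses a small toroidal grid (periodic boundaries), with live = 1 and dead = 0; the grid size and the blinker pattern are our own illustrative choices:

```python
# One generation of Conway's Game of Life, a direct transcription of the
# four rules: survival with 2 or 3 live Moore neighbors, birth with
# exactly 3, death otherwise.

def life_step(grid):
    rows, cols = len(grid), len(grid[0])

    def neighbors(r, c):
        # Count the eight Moore neighbors, wrapping around the edges.
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    new = []
    for r in range(rows):
        row = []
        for c in range(cols):
            n = neighbors(r, c)
            if grid[r][c] == 1:
                row.append(1 if n in (2, 3) else 0)  # survival / death
            else:
                row.append(1 if n == 3 else 0)       # reproduction
        new.append(row)
    return new

# A "blinker": three live cells in a row oscillate with period 2.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
```

Applying `life_step` twice to the blinker returns the original pattern, illustrating the oscillating (class 2-like) structures the text describes.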
He published his first paper in Reviews of Modern Physics investigating elementary cellular automata in June 1983. The unexpected complexity of the behavior of these simple rules led Wolfram to suspect that complexity in nature may be due to similar mechanisms. His investigations, however, led him to realize that cellular automata were poor at modelling neural networks. Additionally, during this period Wolfram formulated the concepts of intrinsic randomness and computational irreducibility, and suggested that rule 110 may be universal—a fact proved later by Wolfram's research assistant Matthew Cook in the 1990s."
+495,"Wolfram, in A New Kind of Science and several papers dating from the mid-1980s, defined four classes into which cellular automata and several other simple computational models can be divided depending on their behavior. While earlier studies in cellular automata tended to try to identify types of patterns for specific rules, Wolfram's classification was the first attempt to classify the rules themselves. In order of complexity the classes are:"
+496,"These definitions are qualitative in nature and there is some room for interpretation. According to Wolfram, ""...with almost any general classification scheme there are inevitably cases which get assigned to one class by one definition and another class by another definition. And so it is with cellular automata: there are occasionally rules...that show some features of one class and some of another."" Wolfram's classification has been empirically matched to a clustering of the compressed lengths of the outputs of cellular automata."
+497,"There have been several attempts to classify cellular automata in formally rigorous classes, inspired by Wolfram's classification. For instance, Culik and Yu proposed three well-defined classes, which are sometimes called Culik–Yu classes; membership in these proved undecidable. 
+Wolfram's class 2 can be partitioned into two subgroups of stable and oscillating rules."
+498,"The idea that there are 4 classes of dynamical system came originally from Nobel Prize-winning chemist Ilya Prigogine, who identified these 4 classes of thermodynamical systems: systems in thermodynamic equilibrium, spatially/temporally uniform systems, chaotic systems, and complex far-from-equilibrium systems with dissipative structures."
+499,"A cellular automaton is reversible if, for every current configuration of the cellular automaton, there is exactly one past configuration. If one thinks of a cellular automaton as a function mapping configurations to configurations, reversibility implies that this function is bijective. If a cellular automaton is reversible, its time-reversed behavior can also be described as a cellular automaton; this fact is a consequence of the Curtis–Hedlund–Lyndon theorem, a topological characterization of cellular automata. For cellular automata in which not every configuration has a preimage, the configurations without preimages are called Garden of Eden patterns."
+500,"For one-dimensional cellular automata there are known algorithms for deciding whether a rule is reversible or irreversible. However, for cellular automata of two or more dimensions reversibility is undecidable; that is, there is no algorithm that takes as input an automaton rule and is guaranteed to determine correctly whether the automaton is reversible. The proof by Jarkko Kari is related to the tiling problem by Wang tiles."
+501,"Reversible cellular automata are often used to simulate such physical phenomena as gas and fluid dynamics, since they obey the laws of thermodynamics. Such cellular automata have rules specially constructed to be reversible. Such systems have been studied by Tommaso Toffoli, Norman Margolus and others. Several techniques can be used to explicitly construct reversible cellular automata with known inverses. 
Two common ones are the second-order cellular automaton and the block cellular automaton, both of which involve modifying the definition of a cellular automaton in some way. Although such automata do not strictly satisfy the definition given above, it can be shown that they can be emulated by conventional cellular automata with sufficiently large neighborhoods and numbers of states, and can therefore be considered a subset of conventional cellular automata. Conversely, it has been shown that every reversible cellular automaton can be emulated by a block cellular automaton."
+502,"A special class of cellular automata are totalistic cellular automata. The state of each cell in a totalistic cellular automaton is represented by a number, and the value of a cell at time t depends only on the sum of the values of the cells in its neighborhood at time t − 1. If the state of the cell at time t depends on both its own state and the total of its neighbors at time t − 1, then the cellular automaton is properly called outer totalistic. Conway's Game of Life is an example of an outer totalistic cellular automaton with cell values 0 and 1; outer totalistic cellular automata with the same Moore neighborhood structure as Life are sometimes called life-like cellular automata."
+503,There are many possible generalizations of the cellular automaton concept.
+504,"One way is by using something other than a rectangular grid. For example, if a plane is tiled with regular hexagons, those hexagons could be used as cells. In many cases the resulting cellular automata are equivalent to those with rectangular grids with specially designed neighborhoods and rules. Another variation would be to make the grid itself irregular, such as with Penrose tiles."
+505,"Also, rules can be probabilistic rather than deterministic. Such cellular automata are called probabilistic cellular automata. 
A probabilistic rule gives, for each pattern at time t, the probabilities that the central cell will transition to each possible state at time t + 1. Sometimes a simpler rule is used; for example: ""The rule is the Game of Life, but on each time step there is a 0.001% probability that each cell will transition to the opposite color."""
+506,"The neighborhood or rules could change over time or space. For example, initially the new state of a cell could be determined by the horizontally adjacent cells, but for the next generation the vertical cells would be used."
+507,"In cellular automata, the new state of a cell is not affected by the new state of other cells. This could be changed so that, for instance, a 2 by 2 block of cells can be determined by itself and the cells adjacent to itself."
+508,"There are continuous automata. These are like totalistic cellular automata, but instead of the rule and states being discrete, continuous functions are used, and the states become continuous. The state of a location is a finite number of real numbers. Certain cellular automata can yield diffusion in liquid patterns in this way."
+509,"Continuous spatial automata have a continuum of locations. The state of a location is a finite number of real numbers. Time is also continuous, and the state evolves according to differential equations. One important example is reaction–diffusion textures, differential equations proposed by Alan Turing to explain how chemical reactions could create the stripes on zebras and spots on leopards. When these are approximated by cellular automata, they often yield similar patterns. MacLennan considers continuous spatial automata as a model of computation."
+510,"There are known examples of continuous spatial automata, which exhibit propagating phenomena analogous to gliders in the Game of Life."
+511,Graph rewriting automata are extensions of cellular automata based on graph rewriting systems. 
+512,"The simplest nontrivial cellular automaton would be one-dimensional, with two possible states per cell, and a cell's neighbors defined as the adjacent cells on either side of it. A cell and its two neighbors form a neighborhood of 3 cells, so there are 2^3 = 8 possible patterns for a neighborhood. A rule consists of deciding, for each pattern, whether the cell will be a 1 or a 0 in the next generation. There are then 2^8 = 256 possible rules."
+513,"These 256 cellular automata are generally referred to by their Wolfram code, a standard naming convention invented by Wolfram that gives each rule a number from 0 to 255. A number of papers have analyzed and compared these 256 cellular automata. The rule 30, rule 90, rule 110, and rule 184 cellular automata are particularly interesting. The images below show the history of rules 30 and 110 when the starting configuration consists of a 1 surrounded by 0s. Each row of pixels represents a generation in the history of the automaton, with t=0 being the top row. Each pixel is colored white for 0 and black for 1."
+514,"Rule 30 exhibits class 3 behavior, meaning even simple input patterns such as that shown lead to chaotic, seemingly random histories."
+515,"Rule 110, like the Game of Life, exhibits what Wolfram calls class 4 behavior, which is neither completely random nor completely repetitive. Localized structures appear and interact in various complicated-looking ways. In the course of the development of A New Kind of Science, as a research assistant to Wolfram in 1994, Matthew Cook proved that some of these structures were rich enough to support universality. This result is interesting because rule 110 is an extremely simple one-dimensional system, and difficult to engineer to perform specific behavior. This result therefore provides significant support for Wolfram's view that class 4 systems are inherently likely to be universal. 
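The Wolfram-code convention just described is mechanical to implement: bit i of the rule number gives the next state for the neighborhood whose (left, center, right) cells read as the 3-bit value i. A minimal Python sketch, evolving rule 30 from a single 1 surrounded by 0s as in the text:

```python
# Decode a Wolfram rule number (0-255) into an update function, then run a
# few generations on a ring of cells (periodic boundaries).

def make_rule(number):
    # Neighborhood (l, c, r) is read as the 3-bit value l*4 + c*2 + r;
    # that bit of the rule number is the cell's next state.
    return lambda l, c, r: (number >> (l * 4 + c * 2 + r)) & 1

def evolve(rule, cells, steps):
    history = [cells]
    n = len(cells)
    for _ in range(steps):
        cells = [rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
                 for i in range(n)]
        history.append(cells)
    return history

rule30 = make_rule(30)
history = evolve(rule30, [0, 0, 0, 1, 0, 0, 0], 2)
```

Printing each row of `history` (say, "█" for 1 and " " for 0) reproduces the triangular patterns shown in the images the text refers to.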
Cook presented his proof at a Santa Fe Institute conference on Cellular Automata in 1998, but Wolfram blocked the proof from being included in the conference proceedings, as Wolfram did not want the proof announced before the publication of A New Kind of Science. In 2004, Cook's proof was finally published in Wolfram's journal Complex Systems, over ten years after Cook came up with it. Rule 110 has been the basis for some of the smallest universal Turing machines."
+516,"An elementary cellular automaton rule is specified by 8 bits, and all elementary cellular automaton rules can be considered to sit on the vertices of the 8-dimensional unit hypercube. This unit hypercube is the cellular automaton rule space. For next-nearest-neighbor cellular automata, a rule is specified by 2^5 = 32 bits, and the cellular automaton rule space is a 32-dimensional unit hypercube. A distance between two rules can be defined by the number of steps required to move from one vertex, which represents the first rule, to another vertex, representing another rule, along the edges of the hypercube. This rule-to-rule distance is also called the Hamming distance."
+517,"Cellular automaton rule space allows one to ask whether rules with similar dynamical behavior are ""close"" to each other. Graphically drawing a high dimensional hypercube on the 2-dimensional plane remains a difficult task, and one crude locator of a rule in the hypercube is the number of 1 bits in the 8-bit string for elementary rules. Drawing the rules in different Wolfram classes in these slices of the rule space shows that class 1 rules tend to have a lower number of 1 bits, and are thus located in one region of the space, whereas class 3 rules tend to have a higher proportion of 1 bits."
+518,"For larger cellular automaton rule space, it is shown that class 4 rules are located between the class 1 and class 3 rules. 
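The rule-to-rule distance described above is simply the Hamming distance between the two 8-bit Wolfram codes, i.e. the number of rule-table entries on which they differ. A one-line sketch, using rules 110 and 30 from earlier as an example:

```python
# Hamming distance between two elementary rules: XOR the 8-bit rule
# numbers and count the 1 bits that remain.

def rule_distance(a, b):
    return bin(a ^ b).count("1")

d = rule_distance(110, 30)  # 110 = 01101110, 30 = 00011110: they differ in 3 bits
```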
This observation is the foundation for the phrase edge of chaos, and is reminiscent of the phase transition in thermodynamics."
+519,Several biological processes occur—or can be simulated—by cellular automata.
+520,Some examples of biological phenomena modeled by cellular automata with a simple state space are:
+521,"Additionally, biological phenomena which require explicit modeling of the agents' velocities may be modeled by cellular automata with a more complex state space and rules, such as biological lattice-gas cellular automata. These include phenomena of great medical importance, such as:"
+522,"The Belousov–Zhabotinsky reaction is a spatio-temporal chemical oscillator that can be simulated by means of a cellular automaton. In the 1950s, A. M. Zhabotinsky discovered that when a thin, homogeneous layer of a mixture of malonic acid, acidified bromate, and a ceric salt was left undisturbed, fascinating geometric patterns such as concentric circles and spirals propagated across the medium. In the ""Computer Recreations"" section of the August 1988 issue of Scientific American, A. K. Dewdney discussed a cellular automaton developed by Martin Gerhardt and Heike Schuster of the University of Bielefeld. This automaton produces wave patterns that resemble those in the Belousov–Zhabotinsky reaction."
+523,"Probabilistic cellular automata are used in statistical and condensed matter physics to study phenomena like fluid dynamics and phase transitions. The Ising model is a prototypical example, in which each cell can be in either of two states called ""up"" and ""down"", making an idealized representation of a magnet. By adjusting the parameters of the model, the proportion of cells being in the same state can be varied, in ways that help explicate how ferromagnets become demagnetized when heated. 
Moreover, results from studying the demagnetization phase transition can be transferred to other phase transitions, like the evaporation of a liquid into a gas; this convenient cross-applicability is known as universality. The phase transition in the two-dimensional Ising model and other systems in its universality class has been of particular interest, as it requires conformal field theory to understand in depth. Other cellular automata that have been of significance in physics include lattice gas automata, which simulate fluid flows."
+524,"Cellular automaton processors are physical implementations of CA concepts, which can process information computationally. Processing elements are arranged in a regular grid of identical cells. The grid is usually a square tiling, or tessellation, of two or three dimensions; other tilings are possible, but not yet used. Cell states are determined only by interactions with adjacent neighbor cells. No means exists to communicate directly with cells farther away. One such cellular automaton processor array configuration is the systolic array. Cell interaction can be via electric charge, magnetism, vibration, or any other physically useful means. This can be done in several ways so that no wires are needed between any elements. This is very unlike processors used in most computers today, which are divided into sections with elements that can communicate with distant elements over wires."
+525,Rule 30 was originally suggested as a possible block cipher for use in cryptography. Two-dimensional cellular automata can be used for constructing a pseudorandom number generator.
+526,"Cellular automata have been proposed for public-key cryptography. The one-way function is the evolution of a finite CA whose inverse is believed to be hard to find. Given the rule, anyone can easily calculate future states, but it appears to be very difficult to calculate previous states. 
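One simple way to realize the pseudorandom-generation idea mentioned above is to iterate rule 30 from a single seed cell and read off the center cell of each generation; sampling rule 30's center column is a commonly described scheme (Wolfram used it as a source of random bits), though the width and seed below are our own illustrative choices:

```python
# Sketch of rule-30-based pseudorandom bit generation: run rule 30 on a
# ring of cells seeded with a single 1, emitting the center cell's value
# at each generation. Width should comfortably exceed the number of steps
# so the wrap-around does not feed back into the center column.

RULE30 = 30

def rule30_bits(width, steps):
    cells = [0] * width
    cells[width // 2] = 1
    bits = []
    for _ in range(steps):
        bits.append(cells[width // 2])
        cells = [(RULE30 >> ((cells[(i - 1) % width] << 2)
                             | (cells[i] << 1)
                             | cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return bits

stream = rule30_bits(101, 16)
```

Recovering the seed from the emitted bits requires inverting many CA steps, which is the asymmetry the cryptographic proposals in the text rely on.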
Cellular automata have also been applied to design error correction codes."
+527,Other problems that can be solved with cellular automata include:
+528,"Cellular automata have been used in generative music and evolutionary music composition, and in procedural terrain generation in video games."
+529,"For a random starting pattern, these maze-generating cellular automata will evolve into complex mazes with well-defined walls outlining corridors. Mazecetric, which has the rule B3/S1234, has a tendency to generate longer and straighter corridors compared with Maze, with the rule B3/S12345. Since these cellular automaton rules are deterministic, each maze generated is uniquely determined by its random starting pattern. This is a significant drawback since the mazes tend to be relatively predictable."
+530,Specific cellular automata rules include:
+531,"In computability theory, the Church–Turing thesis is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability:"
+532,"Church, Kleene, and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief."
+533,"On the other hand, the Church–Turing thesis states that the above three formally-defined classes of computable functions coincide with the informal notion of an effectively calculable function. Although the thesis has near-universal acceptance, it cannot be formally proven, as the concept of effective calculability is only informally defined."
+534,"Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe and what can be efficiently computed. These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind."
+535,"J. B. Rosser addresses the notion of ""effective computability"" as follows: ""Clearly the existence of CC and RC presupposes a precise definition of 'effective'. 'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps"". Thus the adverb-adjective ""effective"" is used in a sense of ""1a: producing a decided, decisive, or desired effect"", and ""capable of producing a result""."
+536,"In the following, the words ""effectively calculable"" will mean ""produced by any intuitively 'effective' means whatsoever"" and ""effectively computable"" will mean ""produced by a Turing-machine or equivalent mechanical device"". Turing's ""definitions"" given in a footnote in his 1938 Ph.D. thesis Systems of Logic Based on Ordinals, supervised by Church, are virtually the same:"
+537,"One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann, which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. 
This quest required that the notion of ""algorithm"" or ""effective calculability"" be pinned down, at least well enough for the quest to begin. But from the very outset Alonzo Church's attempts began with a debate that continues to this day. Was the notion of ""effective calculability"" to be an ""axiom or axioms"" in an axiomatic system, merely a definition that ""identified"" two or more propositions, an empirical hypothesis to be verified by observation of natural events, or just a proposal for the sake of argument ?" +538,"In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. The debate began when Church proposed to Gödel that one should define the ""effectively computable"" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal ""thoroughly unsatisfactory"". Rather, in correspondence with Church , Gödel proposed axiomatizing the notion of ""effective calculability""; indeed, in a 1935 letter to Kleene, Church reported that:" +539,"But Gödel offered no further guidance. Eventually, he would suggest his recursion, modified by Herbrand's suggestion, that Gödel had detailed in his 1934 lectures in Princeton NJ . But he did not think that the two ideas could be satisfactorily identified ""except heuristically""." +540,"Next, it was necessary to identify and prove the equivalence of two notions of effective calculability. Equipped with the λ-calculus and ""general"" recursion, Kleene with help of Church and J. Barkley Rosser produced proofs to show that the two calculi are equivalent. Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well formed formula has a beta normal form." 
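The λ-definable functions mentioned above can be given a concrete flavor with Church numerals, here mimicked by Python lambdas. This is a toy sketch of the idea, not the historical λ-calculus formalism:

```python
# A Church numeral n encodes the number n as "apply f n times":
# n = λf.λx. f(f(...f(x)...))
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))      # successor
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MUL = lambda m: lambda n: lambda f: m(n(f))          # f composed n times, m times

def to_int(n):
    """Decode a Church numeral by counting applications of f."""
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
```

Arithmetic on the naturals comes out as pure function composition, which is the sense in which such number-theoretic functions are λ-definable.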
+541,"Many years later in a letter to Davis , Gödel said that ""he was, at the time of these lectures, not at all convinced that his concept of recursion comprised all possible recursions"". By 1963–1964 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of ""algorithm"" or ""mechanical procedure"" or ""formal system""." +542,"A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper was delivered orally, but had not yet appeared in print. On the other hand, Emil Post's 1936 paper had appeared and was certified independent of Turing's work. Post strongly disagreed with Church's ""identification"" of effective computability with the λ-calculus and recursion, stating:" +543,"Rather, he regarded the notion of ""effective calculability"" as merely a ""working hypothesis"" that might lead by inductive reasoning to a ""natural law"" rather than by ""a definition or an axiom"". This idea was ""sharply"" criticized by Church." +544,Thus Post in his 1936 paper was also discounting Kurt Gödel's suggestion to Church in 1934–1935 that the thesis might be expressed as an axiom or set of axioms. +545,"Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–1937 paper ""On Computable Numbers, with an Application to the Entscheidungsproblem"" appeared. In it he stated another notion of ""effective computability"" with the introduction of his a-machines . And in a proof-sketch added as an ""Appendix"" to his 1936–1937 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided. Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made ""the identification with effectiveness in the ordinary sense evident immediately""." 
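Turing's a-machines, introduced in the 1936–1937 paper just described, can be sketched as a minimal simulator. The encoding below (a rule table keyed by state and scanned symbol, '_' as the blank) is an illustrative convention, not Turing's original notation:

```python
def simulate(tape, rules, state="q0", halt="H", max_steps=10_000):
    """Run a one-tape Turing machine.

    `rules` maps (state, scanned symbol) to (symbol to write, head move,
    next state), where the move is -1 (left), 0 (stay) or +1 (right)."""
    cells = dict(enumerate(tape))   # sparse tape; unwritten cells are blank '_'
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = rules[(state, cells.get(pos, "_"))]
        cells[pos] = write
        pos += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A machine that flips every bit of its input and halts at the first blank.
flip = {
    ("q0", "0"): ("1", +1, "q0"),
    ("q0", "1"): ("0", +1, "q0"),
    ("q0", "_"): ("_", 0, "H"),
}
```

Any function computed by such a table is Turing computable by definition; the Appendix result mentioned above shows the same class is reached by the λ-calculus.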
+546,"In a few years Turing would propose, like Church and Kleene before him, that his formal definition of mechanical computing agent was the correct one. Thus, by 1939, both Church and Turing had individually proposed that their ""formal systems"" should be definitions of ""effective calculability""; neither framed their statements as theses." +547,Rosser formally identified the three notions-as-definitions: +548,"Kleene proposes Thesis I: This left the overt expression of a ""thesis"" to Kleene. In 1943 Kleene proposed his ""Thesis I"":" +549,"Since a precise mathematical definition of the term effectively calculable has been wanting, we can take this thesis ... as a definition of it ..." +550,"The Church–Turing Thesis: Stephen Kleene, in Introduction To Metamathematics, finally goes on to formally name ""Church's Thesis"" and ""Turing's Thesis"", using his theory of recursive realizability. Kleene had by this time switched from presenting his work in the terminology of Church–Kleene lambda definability to that of Gödel–Kleene recursiveness. In this transition, Kleene modified Gödel's general recursive functions to allow for proofs of the unsolvability of problems in the Intuitionism of E. J. Brouwer. In his graduate textbook on logic, ""Church's thesis"" is introduced and basic mathematical results are demonstrated to be unrealizable. Next, Kleene proceeds to present ""Turing's thesis"", where results are shown to be uncomputable, using his simplified derivation of a Turing machine based on the work of Emil Post. Both theses are proven equivalent by use of ""Theorem XXX""." +551,"Kleene finally uses the term ""Church–Turing thesis"" for the first time in a section in which he helps to give clarifications to concepts in Alan Turing's paper ""The Word Problem in Semi-Groups with Cancellation"", as demanded in a critique from William Boone."
+552,"An attempt to understand the notion of ""effective computability"" better led Robin Gandy in 1980 to analyze machine computation . Gandy's curiosity about, and analysis of, cellular automata , parallelism, and crystalline automata, led him to propose four ""principles  ... which it is argued, any machine must satisfy"". His most-important fourth, ""the principle of causality"" is based on the ""finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance"". From these principles and some additional constraints— a lower bound on the linear dimensions of any of the parts, an upper bound on speed of propagation , discrete progress of the machine, and deterministic behavior—he produces a theorem that ""What can be calculated by a device satisfying principles I–IV is computable.""" +553,"In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of ""effective calculability"" with the intent of ""sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework"". In his 1997 and 2002 work Sieg presents a series of constraints on the behavior of a computor—""a human computing agent who proceeds mechanically"". These constraints reduce to:" +554,The matter remains in active discussion within the academic community. +555,"The thesis can be viewed as nothing but an ordinary mathematical definition. Comments by Gödel on the subject suggest this view, e.g. ""the correct definition of mechanical computability was established beyond any doubt by Turing"". The case for viewing the thesis as nothing more than a definition is made explicitly by Robert I. Soare, where it is also argued that Turing's definition of computability is no less likely to be correct than the epsilon-delta definition of a continuous function." +556,"Other formalisms have been proposed for describing effective calculability/computability. 
Kleene adds to the list the functions ""reckonable in the system S1"" of Kurt Gödel 1936, and Emil Post's ""canonical systems"". In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model . Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into ""up-down counters"", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin to the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky : ""... they just wanted to ... convince themselves that there is no way to extend the notion of computable function.""" +557,"All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of ""effective calculability/computability"" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel proposed something stronger than this; he observed that there was something ""absolute"" about the concept of ""reckonable in S1"":" +558,"Proofs in computability theory often invoke the Church–Turing thesis in an informal way to establish the computability of functions while avoiding the details which would be involved in a rigorous, formal proof. To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude ""by the Church–Turing thesis"" that the function is Turing computable ." 
+559,Dirk van Dalen gives the following example for the sake of illustrating this informal use of the Church–Turing thesis: +560,"Proof: Let A be infinite RE. We list the elements of A effectively, n0, n1, n2, n3, ..." +561,"From this list we extract an increasing sublist: put m0 = n0, after finitely many steps we find an nk such that nk > m0, put m1 = nk. We repeat this procedure to find m2 > m1, etc. this yields an effective listing of the subset B={m0, m1, m2,...} of A, with the property mi < mi+1." +562,"Claim. B is decidable. For, in order to test k in B we must check if k = mi for some i. Since the sequence of mi's is increasing we have to produce at most k+1 elements of the list and compare them with k. If none of them is equal to k, then k not in B. Since this test is effective, B is decidable and, by Church's thesis, recursive." +563,"In order to make the above example completely rigorous, one would have to carefully construct a Turing machine, or λ-function, or carefully invoke recursion axioms, or at best, cleverly invoke various theorems of computability theory. But because the computability theorist believes that Turing computability correctly captures what can be computed effectively, and because an effective procedure is spelled out in English for deciding the set B, the computability theorist accepts this as proof that the set is indeed recursive." +564,"The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the physical Church–Turing thesis states: ""All physically computable functions are Turing-computable."": 101" +565,The Church–Turing thesis says nothing about the efficiency with which one model of computation can simulate another. It has been proved for instance that a universal Turing machine only suffers a logarithmic slowdown factor in simulating any Turing machine. 
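Van Dalen's construction above translates almost line by line into code. The sketch below uses a finite toy listing in place of a genuine effective enumeration, and the function names are illustrative; for an infinite RE set A the loop in in_B would still terminate, since some m_i > k must eventually appear:

```python
def increasing_sublist(listing):
    """Extract the increasing sublist B = {m0 < m1 < m2 < ...} from an
    effective listing n0, n1, n2, ... of A (repetitions allowed)."""
    biggest = None
    for n in listing():
        if biggest is None or n > biggest:
            biggest = n
            yield n

def in_B(k, listing):
    """Decide k in B: the m_i are strictly increasing, so producing at
    most k + 1 elements of B suffices to settle membership."""
    for m in increasing_sublist(listing):
        if m == k:
            return True
        if m > k:
            return False
    return False  # unreachable for an infinite A; this toy listing is finite

def listing():
    # Toy stand-in for an effective enumeration of an RE set A.
    return iter([3, 1, 4, 1, 5, 9, 2, 6])
```

The test in in_B is plainly effective, and that observation is exactly where "by Church's thesis" is invoked to conclude that B is recursive.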
+566,"A variation of the Church–Turing thesis addresses whether an arbitrary but ""reasonable"" model of computation can be efficiently simulated. This is called the feasibility thesis, also known as the complexity-theoretic Church–Turing thesis or the extended Church–Turing thesis, which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. It states: ""A probabilistic Turing machine can efficiently simulate any realistic model of computation."" The word 'efficiently' here means up to polynomial-time reductions. This thesis was originally called the computational complexity-theoretic Church–Turing thesis by Ethan Bernstein and Umesh Vazirani. The complexity-theoretic Church–Turing thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time equals deterministic polynomial time, the word 'probabilistic' is optional in the complexity-theoretic Church–Turing thesis. A similar thesis, called the invariance thesis, was introduced by Cees F. Slot and Peter van Emde Boas. It states: ""'Reasonable' machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space."" The thesis originally appeared in a paper at STOC'84, which was the first paper to show that polynomial-time overhead and constant-space overhead could be simultaneously achieved for a simulation of a Random Access Machine on a Turing machine." +567,"If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. 
This would not however invalidate the original Church–Turing thesis, since a quantum computer can always be simulated by a Turing machine, but it would invalidate the classical complexity-theoretic Church–Turing thesis for efficiency reasons. Consequently, the quantum complexity-theoretic Church–Turing thesis states: ""A quantum Turing machine can efficiently simulate any realistic model of computation.""" +568,"Eugene Eberbach and Peter Wegner claim that the Church–Turing thesis is sometimes interpreted too broadly, +stating ""Though Turing machines express the behavior of algorithms, the broader assertion that algorithms precisely capture what can be computed is invalid"". They claim that forms of computation not captured by the thesis are relevant today, +terms which they call super-Turing computation." +569,"Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. B. Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain. There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings:" +570,"The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics." +571,"The universe is not equivalent to a Turing machine , but incomputable physical events are not ""harnessable"" for the construction of a hypercomputer. 
For example, a universe in which physics involves random real numbers, as opposed to computable reals, would fall into this category." +572,"The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. John Lucas and Roger Penrose have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, ""non-algorithmic"" computation." +573,"There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept." +574,"Philosophical aspects of the thesis, regarding both physical and biological computers, are also discussed in Odifreddi's 1989 textbook on recursion theory.: 101–123" +575,"One can formally define functions that are not computable. A well-known example of such a function is the Busy Beaver function. This function takes an input n and returns the largest number of symbols that a Turing machine with n states can print before halting, when run with no input. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis states that this function cannot be effectively computed by any method." +576,"Several computational models allow for the computation of non-computable functions. These are known as +hypercomputers. +Mark Burgin argues that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis. 
His argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community." +577,"An analog signal is any continuous-time signal representing some other quantity, i.e., analogous to another quantity. For example, in an analog audio signal, the instantaneous signal voltage varies continuously with the pressure of the sound waves." +578,"In contrast, a digital signal represents the original time-varying quantity as a sampled sequence of quantized values. Digital sampling imposes some bandwidth and dynamic range constraints on the representation and adds quantization error." +579,"The term analog signal usually refers to electrical signals; however, mechanical, pneumatic, hydraulic, and other systems may also convey or be considered analog signals." +580,"An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. In an electrical signal, the voltage, current, or frequency of the signal may be varied to represent the information." +581,"Any information may be conveyed by an analog signal; such a signal may be a measured response to changes in a physical variable, such as sound, light, temperature, position, or pressure. The physical variable is converted to an analog signal by a transducer. For example, sound striking the diaphragm of a microphone induces corresponding fluctuations in the current produced by a coil in an electromagnetic microphone or the voltage produced by a condenser microphone. 
The voltage or the current is said to be an analog of the sound." +582,"An analog signal is subject to electronic noise and distortion introduced by communication channels, recording and signal processing operations, which can progressively degrade the signal-to-noise ratio . As the signal is transmitted, copied, or processed, the unavoidable noise introduced in the signal path will accumulate as a generation loss, progressively and irreversibly degrading the SNR, until in extreme cases, the signal can be overwhelmed. Noise can show up as hiss and intermodulation distortion in audio signals, or snow in video signals. Generation loss is irreversible as there is no reliable method to distinguish the noise from the signal." +583,"Converting an analog signal to digital form introduces a low-level quantization noise into the signal due to finite resolution of digital systems. Once in digital form, the signal can be transmitted, stored, and processed without introducing additional noise or distortion using error detection and correction." +584,"Noise accumulation in analog systems can be minimized by electromagnetic shielding, balanced lines, low-noise amplifiers and high-quality electrical components." +585,"Abstract machines are typically categorized into two types based on the quantity of operations they can execute simultaneously at any given moment: deterministic abstract machines and non-deterministic abstract machines. A deterministic abstract machine is a system in which a particular beginning state or condition always yields the same outputs. There is no randomness or variation in how inputs are transformed into outputs. In contrast, a non-deterministic abstract machine can provide various outputs for the same input on different executions. Unlike a deterministic algorithm, which gives the same result for the same input regardless of the number of iterations, a non-deterministic algorithm takes various paths to arrive to different outputs. 
Non-deterministic algorithms are helpful for obtaining approximate answers when deriving a precise solution using a deterministic approach is difficult or costly." +586,"Turing machines, for example, are some of the most fundamental abstract machines in computer science. These machines conduct operations on a tape of any length. Their instructions provide for both modifying the symbols and changing the symbol that the machine’s pointer is currently at. For example, a rudimentary Turing machine could have a single command, ""convert symbol to 1 then move right"", and this machine would only produce a string of 1s. This basic Turing machine is deterministic; however, nondeterministic Turing machines that can execute several actions given the same input may also be built." +587,"Any implementation of an abstract machine in the case of physical implementation uses some kind of physical device to execute the instructions of a programming language. An abstract machine, however, can also be implemented in software or firmware at levels between the abstract machine and underlying physical device." +588,"An abstract machine is, intuitively, just an abstraction of the idea of a physical computer. For actual execution, algorithms must be properly formalised using the constructs offered by a programming language. This implies that the algorithms to be executed must be expressed using programming language instructions. The syntax of a programming language enables the construction of programs using a finite set of constructs known as instructions. Most abstract machines share a program store and a state, which often includes a stack and registers. In digital computers, the stack is simply a memory unit with an address register that can count only positive integers . The address register for the stack is known as a stack pointer because its value always refers to the top item on the stack. 
The program consists of a series of instructions, with a program counter indicating the next instruction to be performed. When an instruction is completed, the program counter is advanced. This fundamental control mechanism of an abstract machine is also known as its execution loop. Thus, an abstract machine for a programming language is any collection of data structures and algorithms capable of storing and running programs written in the programming language. It bridges the gap between the high level of a programming language and the low level of an actual machine by providing an intermediate language step for compilation. An abstract machine's instructions are adapted to the unique operations necessary to implement operations of a certain source language or set of source languages." +589,"In the late 1950s, the Association for Computing Machinery and other allied organisations developed many proposals for a Universal Computer Oriented Language (UNCOL), such as Conway's machine. The UNCOL concept is sound, but it has not been widely used because of the poor performance of the generated code, and performance has remained an issue in many areas of computing even after the development of the Java Virtual Machine in the late 1990s. Algol Object Code, the P4-machine, the UCSD P-machine, and Forth are some successful abstract machines of this kind." +590,"Abstract machines for object-oriented programming languages are often stack-based and have special access instructions for object fields and methods. In these machines, memory management is often implicitly performed by a garbage collector. Smalltalk-80, Self, and Java are examples of this implementation." +591,"A string processing language is a computer language that focuses on processing strings rather than numbers. There have been string processing languages in the form of command shells, programming tools, macro processors, and scripting languages for decades. 
Using a suitable abstract machine has two benefits: increased execution speed and enhanced portability. Snobol4 and ML/I are two notable instances of early string processing languages that use an abstract machine to gain machine independence." +592,"The early abstract machines for functional languages, including the SECD machine and Cardelli's Functional Abstract Machine, defined strict evaluation, also known as eager or call-by-value evaluation, in which function arguments are evaluated before the call and precisely once. More recently, the majority of research has been on lazy evaluation, as in the G-machine, Krivine machine, and Three Instruction Machine, in which function arguments are evaluated only if necessary and at most once. One reason is that effective implementation of strict evaluation is now well understood, so the need for a dedicated abstract machine there has diminished." +593,"Predicate calculus is the foundation of logic programming languages. The most well-known logic programming language is Prolog. The rules in Prolog are written in a uniform format known as universally quantified 'Horn clauses'; a computation attempts to discover a proof of the goal from these clauses. The Warren Abstract Machine (WAM), which has become the de facto standard in Prolog program compilation, has been the focus of most study. It provides special-purpose instructions such as data unification instructions and control flow instructions that support backtracking." +594,"A generic abstract machine is made up of a memory and an interpreter. The memory is used to store data and programs, while the interpreter is the component that executes the instructions included in programs." +595,"The interpreter must carry out the operations that are unique to the language it is interpreting. However, given the variety of languages, it is conceivable to identify categories of operations and an ""execution mechanism"" shared by all interpreters. 
The interpreter's operations and accompanying data structures are divided into the following categories:" +596,Operations for processing primitive data: +597,Operations and data structures for controlling the sequence of execution of operations; +598,Operations and data structures for controlling data transfers; +599,Operations and data structures for memory management. +600,"An abstract machine must contain operations for manipulating primitive data types such as strings and integers. For example, integers are nearly universally considered a basic data type for both physical abstract machines and the abstract machines used by many programming languages. The machine carries out the arithmetic operations necessary, such as addition and multiplication, within a single time step." +601,"Operations and structures for ""sequence control"" allow controlling the execution flow of program instructions. When certain conditions are met, it is necessary to change the typical sequential execution of a program. Therefore, the interpreter employs data structures that are modified by operations distinct from those used for data manipulation ." +602,Data transfer operations are used to control how operands and data are transported from memory to the interpreter and vice versa. These operations deal with the store and the retrieval order of operands from the store. +603,"Memory management is concerned with the operations performed in memory to allocate data and applications. In the abstract machine, data and programmes can be held indefinitely, or in the case of programming languages, memory can be allocated or deallocated using a more complex mechanism." +604,"Abstract machine hierarchies are often employed, in which each machine uses the functionality of the level immediately below and adds additional functionality of its own to meet the level immediately above. A hardware computer, constructed with physical electronic devices, can be added at the most basic level. 
Above this level, the abstract microprogrammed machine level may be introduced. The abstract machine supplied by the operating system, which is implemented by a program written in machine language, is located immediately above . On the one hand, the operating system extends the capability of the physical machine by providing higher-level primitives that are not available on the physical machine . The host machine is formed by the abstract machine given by the operating system, on which a high-level programming language is implemented using an intermediary machine, such as the Java Virtual machine and its byte code language. The level given by the abstract machine for the high-level language is not usually the final level of hierarchy. At this point, one or more applications that deliver additional services together may be introduced. A ""web machine"" level, for example, can be added to implement the functionalities necessary to handle Web communications . The ""Web Service"" level is located above this, and it provides the functionalities necessary to make web services communicate, both in terms of interaction protocols and the behaviour of the processes involved. At this level, entirely new languages that specify the behaviour of so-called ""business processes"" based on Web services may be developed . Finally, a specialised application can be found at the highest level which has very specific and limited functionality." +605,"Cell, a cellular architecture containing 9 cores, is the processor used in the PlayStation 3. Another prominent cellular architecture is Cyclops64, a massively parallel architecture currently under development by IBM." +606,"Cellular architectures follow the low-level programming paradigm, which exposes the programmer to much of the underlying hardware. This allows the programmer to greatly optimize their code for the platform, but at the same time makes it more difficult to develop software." 
+607,"In mathematical logic and theoretical computer science, a register machine is a generic class of abstract machines used in a manner similar to a Turing machine. All models of register machines are Turing equivalent." +608,"The register machine gets its name from its use of one or more ""registers"". In contrast to the tape and head used by a Turing machine, the model uses multiple, uniquely addressed registers, each of which holds a single positive integer." +609,"There are at least four sub-classes found in literature, here listed from most primitive to the most like a computer:" +610,Any properly defined register machine model is Turing equivalent. Computational speed is very dependent on the model specifics. +611,"In practical computer science, a related concept known as a virtual machine is occasionally employed to reduce reliance on underlying machine architectures. These virtual machines are also utilized in educational settings. In textbooks, the term ""register machine"" is sometimes used interchangeably to describe a virtual machine." +612,A register machine consists of: +613,"Unbounded registers: a finite set of labelled, discrete registers r0, ..., rn, each considered to be of unbounded extent and each of which holds a single non-negative integer. The registers may do their own arithmetic, or there may be one or more special registers that do the arithmetic, e.g. an ""accumulator"" and/or ""address register"". See also Random-access machine." +614,"Tally counters or marks: discrete, indistinguishable objects or marks of only one sort suitable for the model. In the most-reduced counter machine model, per each arithmetic operation only one object/mark is either added to or removed from its location/tape. 
In some counter machine models, and in most RAM and RASP models, more than one object/mark can be added or removed in one operation, with ""addition"" and usually ""subtraction"", sometimes with ""multiplication"" and/or ""division"". Some models have control operations such as ""copy"" that move ""clumps"" of objects/marks from register to register in one action."
+615,"A limited set of instructions: the instructions tend to divide into two classes, arithmetic and control. Instructions are drawn from the two classes to form ""instruction sets"", such that an instruction set must allow the model to be Turing equivalent. Beyond the choice of arithmetic and control instructions, models differ in their register-addressing method and in whether input-output is provided:"
+616,"Arithmetic: Arithmetic instructions may operate on all registers or on a specific register, such as an accumulator. Typically, they are selected from the following sets, though exceptions exist. Counter machine: { Increment, Decrement, Clear-to-zero }. Reduced RAM, RASP: { Increment, Decrement, Clear-to-zero, Load-immediate-constant k, Add, Proper-Subtract, Increment accumulator, Decrement accumulator, Clear accumulator, Add the contents of register r to the accumulator, Proper-Subtract the contents of register r from the accumulator }. Augmented RAM, RASP: all of the reduced instructions as well as { Multiply, Divide, various Boolean bit-wise operations }."
+617,"Control: Counter machine models optionally include { Copy }. RAM and RASP models usually include { Copy }, or else { Load Accumulator from r, Store accumulator into r, Load Accumulator with an immediate constant }. All models include at least one conditional ""jump"" following the test of a register, such as { Jump-if-zero, Jump-if-not-zero, Jump-if-equal, Jump-if-not-equal }. All models optionally include { unconditional program jump }."
+618,"Register-addressing method: Counter machine: no indirect addressing; immediate operands possible in highly atomized models."
+619,"RAM and RASP: indirect addressing available; immediate operands typical."
+620,Input-output: optional in all models
+621,"State register: A special Instruction Register (IR), distinct from the registers mentioned earlier, stores the current instruction to be executed along with its address in the instruction table. This register, along with its associated table, is located within the finite state machine. In all models the IR is inaccessible to the running program.
In the case of RAM and RASP, for determining the ""address"" of a register, the model can use either the address specified by the table and temporarily stored in the IR (direct addressing) or the contents of the register specified by the instruction in the IR (indirect addressing). It is important to note that the IR is not the ""program counter"" of the RASP. The PC is merely another register, akin to an accumulator but specifically reserved for holding the number of the RASP's current register-based instruction. Thus a RASP possesses two ""instruction/program"" registers: the IR, and a PC for the program stored in the registers. Additionally, aside from the PC, a RASP may also dedicate another register to the ""Program-Instruction Register""."
+622,"Two trends appeared in the early 1950s: the first to characterize the computer as a Turing machine, the second to define computer-like models—models with sequential instruction sequences and conditional jumps—with the power of a Turing machine, i.e. a so-called Turing equivalence. The need for this work arose in the context of two ""hard"" problems: the unsolvable word problem posed by Emil Post—his problem of ""tag""—and the very ""hard"" problem of Hilbert's problems—the 10th question, around Diophantine equations. Researchers were questing for Turing-equivalent models that were less ""logical"" in nature and more ""arithmetic"": 281 : 218"
+623,"The first trend, toward characterizing computers, originated with Hans Hermes, Rózsa Péter, and Heinz Kaphengst; the second trend began with Hao Wang and, as noted above, was furthered along by Zdzislaw Alexander Melzak, Joachim Lambek and Marvin Minsky."
+624,The last five names are listed explicitly in that order by Yuri Matiyasevich. He follows up with:
+625,"Lambek, Melzak, Minsky and Shepherdson and Sturgis independently discovered the same idea at the same time. See the note on precedence below."
+626,The history begins with Wang's model.
+627,"Wang's work followed from Emil Post's paper and led Wang to his definition of his Wang B-machine—a two-symbol Post–Turing machine computation model with only four atomic instructions:"
+628,"To these four, both Wang and then C. Y. Lee added another instruction from the Post set { ERASE }, and then Post's unconditional jump { JUMP_to_instruction_z } (or, to make things easier, the conditional jump JUMP_IF_blank_to_instruction_z, or both). Lee named this a ""W-machine"" model:"
+629,"Wang expressed hope that his model would be ""a rapprochement"": 63  between the theory of Turing machines and the practical world of the computer."
+630,"Wang's work was highly influential. We find him referenced by Minsky, Melzak, and Shepherdson and Sturgis. Indeed, Shepherdson and Sturgis remark that:"
+631,Martin Davis eventually evolved this model into the Post–Turing machine.
+632,Difficulties with the Wang/Post–Turing model:
+633,"Except there was a problem: the Wang model was still a single-tape Turing-like device, however nice its sequential program instruction-flow might be. Both Melzak and Shepherdson and Sturgis observed this:"
+634,"Indeed, as the examples at Turing machine examples, Post–Turing machine and partial function show, the work can be ""complicated""."
+635,"So why not ""cut the tape"" so that each piece is infinitely long but left-ended, and call these three tapes ""Post–Turing tapes""? The individual heads will move left and right. In one sense the heads indicate ""the tops of the stack"" of concatenated marks. Or, as in Minsky and Hopcroft and Ullman: 171ff  the tape is always blank except for a mark at the left end—at no time does a head ever print or erase."
+636,"Care must be taken to write the instructions so that a test-for-zero and jump occurs before decrementing; otherwise the machine will ""fall off the end"" or ""bump against the end"", creating an instance of a partial function."
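The rule just stated, test-for-zero and jump before decrementing, is exactly how ""proper subtraction"" avoids producing a partial function. A minimal sketch (the helper name is mine, not from the sources cited):

```python
# Proper subtraction ("monus") on non-negative counters: always test for zero
# and jump *before* decrementing, so the machine never "falls off the end".

def proper_subtract(a, b):
    """Compute max(a - b, 0) using only decrement and test-for-zero."""
    while True:
        if b == 0:      # test-for-zero FIRST ...
            return a
        if a == 0:      # ... on both counters ...
            return 0
        a -= 1          # ... only then is it safe to decrement
        b -= 1

print(proper_subtract(7, 3))  # 4
print(proper_subtract(3, 7))  # 0, instead of "falling off the end"
```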
+637,"Minsky, and Shepherdson–Sturgis, prove that only a few tapes—as few as one—still allow the machine to be Turing equivalent if the data on the tape is represented as a Gödel number; this number will evolve as the computation proceeds. In the one-tape version with Gödel number encoding, the counter machine must be able to multiply the Gödel number by a constant, and divide by a constant and jump if the remainder is zero. Minsky shows that the need for this bizarre instruction set can be relaxed to { INC, JZDEC } and the convenience instructions { CLR, J } if two tapes are available. A simple Gödelization is still required, however. A similar result appears in Elgot–Robinson with respect to their RASP model."
+638,"Melzak's model is significantly different. He took his own model, flipped the tapes vertically, and called them ""holes in the ground"" to be filled with ""pebble counters"". Unlike Minsky's ""increment"" and ""decrement"", Melzak allowed for proper subtraction of any count of pebbles and ""adds"" of any count of pebbles."
+639,"He defines indirect addressing for his model: 288  and provides two examples of its use;: 89  his ""proof"": 290–292  that his model is Turing equivalent is so sketchy that the reader cannot tell whether or not he intended the indirect addressing to be a requirement for the proof."
+640,"The legacy of Melzak's model is Lambek's simplification and the reappearance of his mnemonic conventions in Cook and Reckhow 1973."
+641,"Lambek took Melzak's ternary model and atomized it down to the two unary instructions—X+, X− if possible else jump—exactly the same two that Minsky had come up with."
+642,"However, like the Minsky model, the Lambek model does execute its instructions in a default-sequential manner—both X+ and X− carry the identifier of the next instruction, and X− also carries the jump-to instruction if the zero-test is successful."
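The two-instruction machine described above, in which every instruction carries the label of its successor (and X− also a jump target for the zero case), can be sketched as a small interpreter. The program encoding below is my own illustration, not Lambek's notation:

```python
# A minimal Lambek/Minsky-style counter machine: X+ increments a register and
# names its successor; X- tests for zero first, jumping to `if_zero` if the
# register is empty, otherwise decrementing and continuing to `nxt`.

def run(program, registers, start="0"):
    """program: label -> ("inc", reg, next), ("dec", reg, next, if_zero),
       or ("halt",).  registers: dict of register name -> non-negative int."""
    label = start
    while True:
        instr = program[label]
        if instr[0] == "halt":
            return registers
        if instr[0] == "inc":                  # X+: add one, go to successor
            _, reg, label = instr
            registers[reg] += 1
        else:                                  # X-: test-then-decrement
            _, reg, nxt, if_zero = instr
            if registers[reg] == 0:
                label = if_zero                # zero-test successful: jump
            else:
                registers[reg] -= 1
                label = nxt

# Add register B into register A (computes A := A + B, leaving B := 0):
add_program = {
    "0": ("dec", "B", "1", "done"),  # if B is already 0, halt
    "1": ("inc", "A", "0"),          # move one unit from B to A
    "done": ("halt",),
}
print(run(add_program, {"A": 3, "B": 4}))  # {'A': 7, 'B': 0}
```

Note how control flow lives entirely inside the instructions themselves: there is no separate program counter arithmetic, exactly as in the default-sequential Minsky/Lambek formulation.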
+643,"A RASP or random-access stored-program machine begins as a counter machine with its ""program of instructions"" placed in its ""registers"". Analogous to, but independent of, the finite state machine's ""Instruction Register"", at least one of the registers and one or more ""temporary"" registers maintain a record of, and operate on, the current instruction's number. The finite state machine's TABLE of instructions is responsible for fetching the current program instruction from the proper register, parsing the program instruction, fetching the operands specified by the program instruction, and executing the program instruction."
+644,"Except there is a problem: if based on the counter machine chassis, this computer-like von Neumann machine will not be Turing equivalent. It cannot compute everything that is computable. Intrinsically the model is bounded by the size of its finite state machine's instructions. The counter machine based RASP can compute any primitive recursive function but not all mu-recursive functions."
+645,"Elgot–Robinson investigate the possibility of allowing their RASP model to ""self-modify"" its program instructions. The idea was an old one, proposed by Burks–Goldstine–von Neumann, and sometimes called ""the computed goto"". Melzak specifically mentions the ""computed goto"" by name but instead provides his model with indirect addressing."
+646,"Computed goto: A RASP program of instructions that modifies the ""goto address"" in a conditional- or unconditional-jump program instruction."
+647,"But this does not solve the problem. What is necessary is a method to fetch the address of a program instruction that lies ""beyond/above"" the upper bound of the finite state machine's instruction register and TABLE."
+648,"Minsky hints at the issue in his investigation of a counter machine equipped with the instructions { CLR, INC, RPT }.
He doesn't tell us how to fix the problem, but he does observe that:"
+649,"But Elgot and Robinson solve the problem: they augment their P0 RASP with an indexed set of instructions—a somewhat more complicated form of indirect addressing. Their P'0 model addresses the registers by adding the contents of the ""base"" register to the ""index"" specified explicitly in the instruction. Thus the indexing P'0 instructions have one more parameter than the non-indexing P0 instructions:"
+650,By 1971 Hartmanis had simplified the indexing to indirection for use in his RASP model.
+651,"Indirect addressing: A pointer-register supplies the finite state machine with the address of the target register required for the instruction. Said another way: the contents of the pointer-register is the address of the ""target"" register to be used by the instruction. If the pointer-register is unbounded, the RAM, and a suitable RASP built on its chassis, will be Turing equivalent. The target register can serve either as a source or destination register, as specified by the instruction."
+652,"Note that the finite state machine does not have to explicitly specify this target register's address. It just says to the rest of the machine: get me the contents of the register pointed to by my pointer-register and then do xyz with it. It must specify explicitly by name, via its instruction, this pointer-register, but it doesn't have to know what number the pointer-register actually contains."
+653,"Cook and Reckhow cite Hartmanis and simplify his model to what they call a random-access machine. In a sense we are back to Melzak, but with a much simpler model than Melzak's."
+654,"Minsky was working at the MIT Lincoln Laboratory and published his work there; his paper was received for publication in the Annals of Mathematics on 15 August 1960, but not published until November 1961. Receipt occurred a full year before the work of Melzak and Lambek was received and published.
Both were Canadians who published in the Canadian Mathematical Bulletin; neither would have had reference to Minsky's work because it was not yet published in a peer-reviewed journal. That Melzak references Wang, and Lambek references Melzak, leads one to hypothesize that their work occurred simultaneously and independently."
+655,"Almost exactly the same thing happened to Shepherdson and Sturgis. Their paper was received in December 1961—just a few months after Melzak and Lambek's work was received. Again, they had little or no opportunity to review the work of Minsky. They were careful to observe in footnotes that papers by Ershov, Kaphengst and Péter had ""recently appeared"".: 219  These papers were published much earlier, but in German and in German journals, so issues of accessibility present themselves."
+656,"The final paper of Shepherdson and Sturgis did not appear in a peer-reviewed journal until 1963. And, as they note in their Appendix A, the 'systems' of Kaphengst, Ershov and Péter are all so similar to the results obtained later as to be indistinguishable from a set of the following:"
+657,"Indeed, Shepherdson and Sturgis conclude"
+658,"By order of publication date, the works of Kaphengst, Ershov and Péter were first."
+659,"Background texts: The following bibliography of source papers includes a number of texts to be used as background. The mathematics that led to the flurry of papers about abstract machines in the 1950s and 1960s can be found in van Heijenoort—an assemblage of original papers spanning the 50 years from Frege to Gödel. Davis's The Undecidable carries the torch onward, beginning with Gödel through Gödel's postscriptum;: 71  the original papers of Alan Turing and Emil Post are included in The Undecidable.
The mathematics of Church, Rosser and Kleene that appear as reprints of original papers in The Undecidable is carried further in Kleene, a mandatory text for anyone pursuing a deeper understanding of the mathematics behind the machines. Both Kleene and Davis are referenced by a number of the papers."
+660,"For a good treatment of the counter machine see Minsky, Chapter 11, ""Models similar to Digital Computers""—he calls the counter machine a ""program computer"". A recent overview is found in van Emde Boas. A recent treatment of the Minsky/Lambek model can be found in Boolos–Burgess–Jeffrey; they reincarnate Lambek's ""abacus model"" to demonstrate the equivalence of Turing machines and partial recursive functions, and they provide a graduate-level introduction to both abstract machine models and the mathematics of recursion theory. Beginning with the first edition of Boolos–Burgess, this model has appeared with virtually the same treatment."
+661,"The papers: The papers begin with Wang and his dramatic simplification of the Turing machine. Turing, Kleene, Davis and in particular Post are cited in Wang; in turn, Wang is referenced by Melzak, Minsky and Shepherdson–Sturgis as they independently reduce the Turing tapes to ""counters"". Melzak provides his pebble-in-holes counter machine model with indirection but doesn't carry the treatment further. The work of Elgot–Robinson defines the RASP—the computer-like random-access stored-program machines—and they appear to be the first to investigate the failure of the bounded counter machine to calculate the mu-recursive functions. This failure—except with the draconian use of Gödel numbers in the manner of Minsky—leads to their definition of ""indexed"" instructions for their RASP model. Elgot–Robinson, and more so Hartmanis, investigate RASPs with self-modifying programs. Hartmanis specifies an instruction set with indirection, citing lecture notes of Cook.
For use in investigations of computational complexity, Cook and his graduate student Reckhow provide the definition of a RAM. The pointer machines are an offshoot of Knuth and, independently, Schönhage."
+662,"For the most part the papers contain mathematics beyond the undergraduate level—in particular the primitive recursive functions and mu-recursive functions, presented elegantly in Kleene and less in depth, but useful nonetheless, in Boolos–Burgess–Jeffrey."
+663,"All texts and papers excepting the four starred have been witnessed. These four are written in German and appear as references in Shepherdson–Sturgis and Elgot–Robinson; Shepherdson–Sturgis offer a brief discussion of their results in Shepherdson–Sturgis' Appendix A. The terminology of at least one paper seems to hark back to the Burks–Goldstine–von Neumann analysis of computer architecture."
+664,"NUMA architectures logically follow in scaling from symmetric multiprocessing architectures. They were developed commercially during the 1990s by Unisys, Convex Computer, Honeywell Information Systems Italy, Silicon Graphics, Sequent Computer Systems, Data General, Digital and ICL. Techniques developed by these companies later featured in a variety of Unix-like operating systems, and to an extent in Windows NT."
+665,"The first commercial implementation of a NUMA-based Unix system was the Symmetrical Multi Processing XPS-100 family of servers, designed by Dan Gielan of VAST Corporation for Honeywell Information Systems Italy."
+666,"Modern CPUs operate considerably faster than the main memory they use. In the early days of computing and data processing, the CPU generally ran slower than its own memory. The performance lines of processors and memory crossed in the 1960s with the advent of the first supercomputers. Since then, CPUs increasingly have found themselves ""starved for data"" and having to stall while waiting for data to arrive from memory.
Many supercomputer designs of the 1980s and 1990s focused on providing high-speed memory access as opposed to faster processors, allowing the computers to work on large data sets at speeds other systems could not approach."
+667,"Limiting the number of memory accesses provided the key to extracting high performance from a modern computer. For commodity processors, this meant installing an ever-increasing amount of high-speed cache memory and using increasingly sophisticated algorithms to avoid cache misses. But the dramatic increase in the size of operating systems and of the applications run on them has generally overwhelmed these cache-processing improvements. Multi-processor systems without NUMA make the problem considerably worse. Now a system can starve several processors at the same time, notably because only one processor can access the computer's memory at a time."
+668,"NUMA attempts to address this problem by providing separate memory for each processor, avoiding the performance hit when several processors attempt to address the same memory. For problems involving spread data, NUMA can improve the performance over a single shared memory by a factor of roughly the number of processors. Another approach to addressing this problem is the multi-channel memory architecture, in which a linear increase in the number of memory channels increases the memory access concurrency linearly."
+669,"Of course, not all data ends up confined to a single task, which means that more than one processor may require the same data. To handle these cases, NUMA systems include additional hardware or software to move data between memory banks. This operation slows the processors attached to those banks, so the overall speed increase due to NUMA heavily depends on the nature of the running tasks."
+670,"AMD implemented NUMA with its Opteron processor, using HyperTransport.
Intel announced NUMA compatibility for its x86 and Itanium servers in late 2007 with its Nehalem and Tukwila CPUs. Both Intel CPU families share a common chipset; the interconnection is called the Intel QuickPath Interconnect, which provides extremely high bandwidth to enable high on-board scalability, and was replaced by a new version called the Intel UltraPath Interconnect with the release of Skylake."
+671,"Nearly all CPU architectures use a small amount of very fast non-shared memory known as cache to exploit locality of reference in memory accesses. With NUMA, maintaining cache coherence across shared memory has a significant overhead. Although simpler to design and build, non-cache-coherent NUMA systems become prohibitively complex to program in the standard von Neumann architecture programming model."
+672,"Typically, ccNUMA uses inter-processor communication between cache controllers to keep a consistent memory image when more than one cache stores the same memory location. For this reason, ccNUMA may perform poorly when multiple processors attempt to access the same memory area in rapid succession. Support for NUMA in operating systems attempts to reduce the frequency of this kind of access by allocating processors and memory in NUMA-friendly ways and by avoiding scheduling and locking algorithms that make NUMA-unfriendly accesses necessary."
+673,"Alternatively, cache coherency protocols such as the MESIF protocol attempt to reduce the communication required to maintain cache coherency. Scalable Coherent Interface is an IEEE standard defining a directory-based cache coherency protocol to avoid scalability limitations found in earlier multiprocessor systems. For example, SCI is used as the basis for the NumaConnect technology."
+674,"One can view NUMA as a tightly coupled form of cluster computing. The addition of virtual memory paging to a cluster architecture can allow the implementation of NUMA entirely in software.
However, the inter-node latency of software-based NUMA remains several orders of magnitude greater than that of hardware-based NUMA."
+675,"Since NUMA largely influences memory access performance, certain software optimizations are needed to allow scheduling threads and processes close to their in-memory data."
+676,"As of 2011, ccNUMA systems are multiprocessor systems based on the AMD Opteron processor, which can be implemented without external logic, and the Intel Itanium processor, which requires the chipset to support NUMA. Examples of ccNUMA-enabled chipsets are the SGI Shub, the Intel E8870, the HP sx2000, and those found in NEC Itanium-based systems. Earlier ccNUMA systems such as those from Silicon Graphics were based on MIPS processors and the DEC Alpha 21364 processor."
+677,"A scalar processor is classified as a single instruction, single data processor in Flynn's taxonomy. The Intel 486 is an example of a scalar processor. It is to be contrasted with a vector processor, where a single instruction operates simultaneously on multiple data items. The difference is analogous to the difference between scalar and vector arithmetic."
+678,"The term scalar in computing dates to the 1970s and 1980s, when vector processors were first introduced. It was originally used to distinguish the older designs from the new vector processors."
+679,"A superscalar processor may execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to redundant functional units on the processor. Each functional unit is not a separate CPU core but an execution resource within a single CPU, such as an arithmetic logic unit, a bit shifter, or a multiplier. The Cortex-M7, like many consumer CPUs today, is a superscalar processor."
+680,"A scalar data type, or just scalar, is any non-composite value."
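The scalar/composite distinction can be illustrated with a small sketch. The classification below follows Python's built-in types and is only one possible convention; languages genuinely disagree, notably about strings:

```python
# Classify values as scalar (non-composite) or composite.  Whether a string
# counts as a scalar is a language convention; here strings are composite.

def is_scalar(value):
    composite_types = (list, tuple, dict, set, str, bytes)
    return value is None or (
        isinstance(value, (bool, int, float, complex))
        and not isinstance(value, composite_types)
    )

print(is_scalar(42))          # True: an integer is a scalar
print(is_scalar(3.14))        # True: so is a floating-point number
print(is_scalar([1, 2, 3]))   # False: an array/list is composite
print(is_scalar("abc"))       # False here, though some languages say True
```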
+681,"Generally, all basic primitive data types are considered scalar:" +682,"Some programming languages also treat strings as scalar types, while other languages treat strings as arrays or objects." +683,A quantum computer is a computer that takes advantage of quantum mechanical phenomena. +684,"On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior, specifically quantum superposition and entanglement, using specialized hardware that supports the preparation and manipulation of quantum states." +685,"Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern ""classical"" computer. In particular, a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the technology is largely experimental and impractical, with several obstacles to useful applications. Moreover, scalable quantum computers do not hold promise for many practical tasks, and for many important tasks quantum speedups are proven impossible." +686,"The basic unit of information in quantum computing is the qubit, similar to the bit in traditional digital electronics. Unlike a classical bit, a qubit can exist in a superposition of its two ""basis"" states. When measuring a qubit, the result is a probabilistic output of a classical bit, therefore making quantum computers nondeterministic in general. If a quantum computer manipulates the qubit in a particular way, wave interference effects can amplify the desired measurement results. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform calculations efficiently and quickly." +687,"Physically engineering high-quality qubits has proven challenging. 
If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations. Paradoxically, perfectly isolating qubits is also undesirable because quantum computations typically need to initialize qubits, perform controlled qubit interactions, and measure the resulting quantum states. Each of those operations introduces errors and suffers from noise, and such inaccuracies accumulate." +688,"In principle, a non-quantum computer can solve the same computational problems as a quantum computer, given enough time. Quantum advantage comes in the form of time complexity rather than computability, and quantum complexity theory shows that some quantum algorithms for carefully selected tasks require exponentially fewer computational steps than the best known non-quantum algorithms. Such tasks can in theory be solved on a large-scale quantum computer whereas classical computers would not finish computations in any reasonable amount of time. However, quantum speedup is not universal or even typical across computational tasks, since basic tasks such as sorting are proven to not allow any asymptotic quantum speedup. Claims of quantum supremacy have drawn significant attention to the discipline, but are demonstrated on contrived tasks, while near-term practical use cases remain limited." +689,"For many years, the fields of quantum mechanics and computer science formed distinct academic communities. Modern quantum theory developed in the 1920s to explain the wave–particle duality observed at atomic scales, and digital computers emerged in the following decades to replace human computers for tedious calculations. Both disciplines had practical applications during World War II; computers played a major role in wartime cryptography, and quantum physics was essential for the nuclear physics used in the Manhattan Project." 
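The superposition, probabilistic measurement, and wave-interference behaviour described above can be sketched with a toy two-amplitude state vector. This is a numerical illustration only, not how physical quantum hardware is programmed:

```python
# Toy state-vector simulation of one qubit.  A state is a pair of complex
# amplitudes (a, b) for |0> and |1>; measuring yields 0 with probability
# |a|^2 and 1 with probability |b|^2.
import math

def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1 + 0j, 0 + 0j)        # the basis state |0>
plus = hadamard(zero)          # equal superposition of |0> and |1>
print(probabilities(plus))     # ~(0.5, 0.5): measurement is a fair coin
back = hadamard(plus)          # interference: amplitudes recombine
print(probabilities(back))     # ~(1.0, 0.0): back to |0> with certainty
```

The second Hadamard shows the interference effect mentioned above: the amplitudes for |1> cancel, so the "random" superposition deterministically returns to |0>.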
+690,"A measurement-based quantum computer decomposes computation into a sequence of Bell state measurements and single-qubit quantum gates applied to a highly entangled initial state, using a technique called quantum gate teleportation."
+691,"An adiabatic quantum computer, based on quantum annealing, decomposes computation into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution."
+692,"Neuromorphic quantum computing is an unconventional type of computing that uses neuromorphic computing to perform quantum operations. It has been suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing. Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional computing approaches that do not follow the von Neumann architecture. Both construct a system that represents the physical problem at hand, and then leverage the respective physical properties of the system to seek a ""minimum"". Neuromorphic quantum computing and quantum computing share similar physical properties during computation."
+693,A topological quantum computer decomposes computation into the braiding of anyons in a 2D lattice.
+694,"A quantum Turing machine is the quantum analog of a Turing machine. All of these models of computation—quantum circuits, one-way quantum computation, adiabatic quantum computation, and topological quantum computation—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical."
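The entangled states underlying several of these models, such as the Bell states used in measurement-based schemes, can be built in the circuit model from a Hadamard followed by a CNOT. A toy two-qubit state-vector sketch (again purely illustrative):

```python
# Toy two-qubit simulation: amplitudes over the basis |00>, |01>, |10>, |11>.
import math

def h_on_first(state):
    """Apply a Hadamard gate to the first qubit."""
    s = 1 / math.sqrt(2)
    a00, a01, a10, a11 = state
    return (s * (a00 + a10), s * (a01 + a11),
            s * (a00 - a10), s * (a01 - a11))

def cnot(state):
    """Flip the second qubit wherever the first qubit is 1."""
    a00, a01, a10, a11 = state
    return (a00, a01, a11, a10)

bell = cnot(h_on_first((1, 0, 0, 0)))   # the Bell state (|00> + |11>)/sqrt(2)
probs = [abs(a) ** 2 for a in bell]
print(probs)  # ~[0.5, 0, 0, 0.5]: the two qubits' measurements always agree
```

Only the |00> and |11> outcomes have non-zero probability, so measuring one qubit fixes the result of measuring the other; this correlation is the entanglement resource exploited by gate teleportation and quantum key distribution.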
+695,"Quantum computing has significant potential applications in the fields of cryptography and cybersecurity. Quantum cryptography, which relies on the principles of quantum mechanics, offers the possibility of secure communication channels that are resistant to eavesdropping. Quantum key distribution protocols, such as BB84, enable the secure exchange of cryptographic keys between parties, ensuring the confidentiality and integrity of communication. Moreover, quantum random number generators can produce high-quality random numbers, which are essential for secure encryption." +696,"However, quantum computing also poses challenges to traditional cryptographic systems. Shor's algorithm, a quantum algorithm for integer factorization, could potentially break widely used public-key cryptography schemes like RSA, which rely on the difficulty of factoring large numbers. Post-quantum cryptography, which involves the development of cryptographic algorithms that are resistant to attacks by both classical and quantum computers, is an active area of research aimed at addressing this concern." +697,"Ongoing research in quantum cryptography and post-quantum cryptography is crucial for ensuring the security of communication and data in the face of evolving quantum computing capabilities. Advances in these fields, such as the development of new QKD protocols, the improvement of QRNGs, and the standardization of post-quantum cryptographic algorithms, will play a key role in maintaining the integrity and confidentiality of information in the quantum era." +698,"Quantum cryptography enables new ways to transmit data securely; for example, quantum key distribution uses entangled quantum states to establish secure cryptographic keys. When a sender and receiver exchange quantum states, they can guarantee that an adversary does not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change. 
With appropriate cryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping."
+699,"Modern fiber-optic cables can transmit quantum information over relatively short distances. Ongoing experimental research aims to develop more reliable hardware, hoping to scale this technology to long-distance quantum networks with end-to-end entanglement. Theoretically, this could enable novel technological applications, such as distributed quantum computing and enhanced quantum sensing."
+700,"Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms."
+701,"Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups. These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, but evidence suggests that this is unlikely. Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, a restricted setting where lower bounds are much easier to prove and where speedups do not necessarily translate to practical problems."
+702,"Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations, have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. 
Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that no quantum algorithm gives a super-polynomial speedup, which is believed to be unlikely." +703,"Some quantum algorithms, like Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms. Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems." +704,"Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, quantum simulation may be an important application of quantum computing. Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider. In June 2023, IBM computer scientists reported that a quantum computer produced better results for a physics problem than a conventional supercomputer." +705,About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertilizer industry . Quantum simulations might be used to understand this process and increase the energy efficiency of production. It is expected that an early use of quantum computing will be modeling that improves the efficiency of the Haber–Bosch process by the mid 2020s although some have predicted it will take longer. +706,"A notable application of quantum computation is for attacks on cryptographic systems that are currently in use. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers . By comparison, a quantum computer could solve this problem exponentially faster using Shor's algorithm to find its factors. 
This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. Specifically, the RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security."
+707,"Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search."
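The factoring attack described above hinges on period finding: the only quantum step in Shor's algorithm is finding the period r of f(x) = a^x mod N, after which the factors follow by classical arithmetic. A minimal sketch of that classical reduction (an assumption of this edit, not code from the source; the brute-force period() merely stands in for the quantum subroutine, and the function names are illustrative):

```python
# Illustrative sketch: the classical reduction around Shor's algorithm.
# On a quantum computer, period() would be the quantum period-finding
# subroutine; here it is brute force, which is exponentially slow but
# shows how a known period yields the factors of N.
from math import gcd

def period(a, n):
    """Smallest r > 0 with a**r == 1 (mod n)."""
    x, r = a % n, 1
    while x != 1:
        x, r = (x * a) % n, r + 1
    return r

def factor_from_period(n, a):
    """Classical post-processing of Shor's algorithm for an odd composite n."""
    if gcd(a, n) != 1:                  # lucky guess: a shares a factor with n
        return gcd(a, n), n // gcd(a, n)
    r = period(a, n)
    if r % 2 == 1:
        return None                     # odd period: retry with another a
    y = pow(a, r // 2, n)               # a nontrivial square root of 1 mod n
    if y == n - 1:
        return None                     # trivial root: retry with another a
    return gcd(y - 1, n), gcd(y + 1, n)

print(period(7, 15))              # 4
print(factor_from_period(15, 7))  # (3, 5)
```

With a = 7 and N = 15 the period is 4, giving the factors 3 and 5; the exponential quantum speedup comes entirely from replacing the loop in period() with quantum Fourier sampling.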
+708,Problems that can be efficiently addressed with Grover's algorithm have the following properties: +709,"There is no searchable structure in the collection of possible answers," +710,"The number of possible answers to check is the same as the number of inputs to the algorithm, and" +711,There exists a boolean function that evaluates each input and determines whether it is the correct answer. +712,"For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs , as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied is a Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies." +713,"Quantum annealing relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state for a simple Hamiltonian, which slowly evolves to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times through the process. Adiabatic optimization may be helpful for solving computational biology problems." +714,"Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks." +715,"For example, the HHL Algorithm, named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts. 
Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks." +716,"Deep generative chemistry models emerge as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models including quantum GANs may eventually be developed into ultimate generative chemistry algorithms." +717,"As of 2023, classical computers outperform quantum computers for all real-world applications. While current quantum computers may speed up solutions to particular mathematical problems, they give no computational advantage for practical tasks. For many tasks there is no promise of useful quantum speedup, and some tasks provably prohibit any quantum speedup in the sense that any speedup is ruled out by proven theorems. Scientists and engineers are exploring multiple technologies for quantum computing hardware and hope to develop scalable quantum architectures, but serious obstacles remain." +718,There are a number of technical challenges in building a large-scale quantum computer. Physicist David DiVincenzo has listed these requirements for a practical quantum computer: +719,"Sourcing parts for quantum computers is also very difficult. Superconducting quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co." +720,The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. 
This has led to the development of quantum controllers that enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge. +721,"One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems in particular, the transverse relaxation time T2 , typically range between nanoseconds and seconds at low temperature. Currently, some quantum computers require their qubits to be cooled to 20 millikelvin in order to prevent significant decoherence. A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds." +722,"As a result, time-consuming tasks may render some quantum algorithms inoperable, as attempting to maintain the state of qubits for a long enough duration will eventually corrupt the superpositions." +723,"These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time." +724,"As described by the threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. 
This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10^−3, assuming the noise is depolarizing."
+725,"Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 bits without error correction. With error correction, the figure would rise to about 10^7 bits. Computation time is about L^2 or about 10^7 steps and at 1 MHz, about 10 seconds. However, the encoding and error-correction overheads increase the size of a real fault-tolerant quantum computer by several orders of magnitude. Careful estimates show that at least 3 million physical qubits would factor a 2,048-bit integer in 5 months on a fully error-corrected trapped-ion quantum computer. In terms of the number of physical qubits, to date, this remains the lowest estimate for a practically useful integer factorization problem of 1,024 bits or larger."
+726,"Another approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, and relying on braid theory to form stable logic gates."
+727,"Physicist John Preskill coined the term quantum supremacy to describe the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers. 
The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark." +728,"In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer. This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed, and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to the gap between Sycamore and classical supercomputers and even beating it." +729,"In December 2020, a group at USTC implemented a type of Boson sampling on 76 photons with a photonic quantum computer, Jiuzhang, to demonstrate quantum supremacy. The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds." +730,"Claims of quantum supremacy have generated hype around quantum computing, but they are based on contrived benchmark tasks that do not directly imply useful real-world applications." +731,"In January 2024, a study published in Physical Review Letters provided direct verification of quantum supremacy experiments by computing exact amplitudes for experimentally generated bitstrings using a new-generation Sunway supercomputer, demonstrating a significant leap in simulation capability built on a multiple-amplitude tensor network contraction algorithm. This development underscores the evolving landscape of quantum computing, highlighting both the progress and the complexities involved in validating quantum supremacy claims." 
+732,"Despite high hopes for quantum computing, significant progress in hardware, and optimism about future applications, a 2023 Nature spotlight article summarised current quantum computers as being ""For now, absolutely nothing"". The article elaborated that quantum computers are yet to be more useful or efficient than conventional computers in any case, though it also argued that in the long term such computers are likely to be useful. A 2023 Communications of the ACM article found that current quantum computing algorithms are ""insufficient for practical quantum advantage without significant improvements across the software/hardware stack"". It argues that the most promising candidates for achieving speedup with quantum computers are ""small-data problems"", for example in chemistry and materials science. However, the article also concludes that a large range of the potential applications it considered, such as machine learning, ""will not achieve quantum advantage with current quantum algorithms in the foreseeable future"", and it identified I/O constraints that make speedup unlikely for ""big data problems, unstructured linear systems, and database search based on Grover's algorithm""."
+733,This state of affairs can be traced to several current and long-term considerations.
+734,"In particular, building computers with large numbers of qubits may be futile if those qubits are not connected well enough and cannot maintain a sufficiently high degree of entanglement for a long time. When trying to outperform conventional computers, quantum computing researchers often look for new tasks that can be solved on quantum computers, but this leaves the possibility that efficient non-quantum techniques will be developed in response, as seen for quantum supremacy demonstrations. Therefore, it is desirable to prove lower bounds on the complexity of the best possible non-quantum algorithms and show that some quantum algorithms asymptotically improve upon those bounds."
+735,"Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales, but also for other reasons." +736,Bill Unruh doubted the practicality of quantum computers in a paper published in 1994. Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle. Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved. Physicist Mikhail Dyakonov has expressed skepticism of quantum computing as follows: +737,"A practical quantum computer must use a physical system as a programmable quantum register. Researchers are exploring several technologies as candidates for reliable qubit implementations. Superconductors and trapped ions are some of the most developed proposals, but experimentalists are considering other hardware possibilities as well." +738,"Any computational problem solvable by a classical computer is also solvable by a quantum computer. Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers." +739,"Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church–Turing thesis." 
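A concrete illustration of the simulability claim above: a classical program can track all 2^n amplitudes of an n-qubit register explicitly. The sketch below is an assumption of this edit, not code from the source; it simulates Grover's search on 3 qubits, and the exponential memory cost of the amplitude list is exactly why such classical simulation provides no quantum speedup.

```python
# Sketch: simulating a small quantum computer classically by storing the
# full statevector. Runs Grover's search for one marked item among 2**n.
import math

def grover(n_qubits, marked):
    n = 2 ** n_qubits
    amp = [1 / math.sqrt(n)] * n                # uniform superposition
    for _ in range(int(math.pi / 4 * math.sqrt(n))):  # ~(pi/4)*sqrt(N) rounds
        amp[marked] *= -1                       # oracle: flip sign of the answer
        mean = sum(amp) / n
        amp = [2 * mean - a for a in amp]       # diffusion: invert about the mean
    return [a * a for a in amp]                 # measurement probabilities

probs = grover(3, marked=5)
print(max(range(8), key=lambda i: probs[i]))    # 5: the marked item dominates
print(round(probs[5], 3))                       # 0.945 after 2 iterations
```

Storage grows as 2^n, so this approach fails long before n reaches the scale where Grover's quadratic speedup would matter, which is consistent with the text's point that simulability concerns computability, not efficiency.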
+740,"While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers."
+741,"The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for ""bounded error, quantum, polynomial time"". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP, the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. It is known that BPP ⊆ BQP, and it is widely suspected that BPP ⊊ BQP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity."
+742,"The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. 
For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP. It is suspected that NP ⊈ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems."
+743,"The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. While building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e., the stored-program concept. Two other early and important examples are:"
+744,"The term ""architecture"" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory. To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements was at the level of ""system architecture"", a term that seemed more useful than ""machine organization""."
+745,"Subsequently, Brooks, a Stretch designer, opened Chapter 2 of a book called Planning a Computer System: Project Stretch by stating, ""Computer architecture, like other architecture, is the art of determining the needs of the user of a structure and then designing to meet those needs as effectively as possible within economic and technological constraints.""" +746,"Brooks went on to help develop the IBM System/360 line of computers, in which ""architecture"" became a noun defining ""what the user needs to know"". Later, computer users came to use the term in many less explicit ways." +747,"The earliest computer architectures were designed on paper and then directly built into the final hardware form. +Later, computer architecture prototypes were physically built in the form of a transistor–transistor logic computer—such as the prototypes of the 6800 and the PA-RISC—tested, and tweaked, before committing to the final hardware form. +As of the 1990s, new computer architectures are typically ""built"", tested, and tweaked—inside some other computer architecture in a computer architecture simulator; or inside a FPGA as a soft microprocessor; or both—before committing to the final hardware form." +748,The discipline of computer architecture has three main subcategories: +749,"There are other technologies in computer architecture. The following technologies are used in bigger companies like Intel, and were estimated in 2002 to count for 1% of all of computer architecture:" +750,"Computer architecture is concerned with balancing the performance, efficiency, cost, and reliability of a computer system. The case of instruction set architecture can be used to illustrate the balance of these competing factors. More complex instruction sets enable programmers to write more space efficient programs, since a single instruction can encode some higher-level abstraction . 
However, longer and more complex instructions take longer for the processor to decode and can be more costly to implement effectively. The increased complexity from a large instruction set also creates more room for unreliability when instructions interact in unexpected ways." +751,"The implementation involves integrated circuit design, packaging, power, and cooling. Optimization of the design requires familiarity with compilers, operating systems to logic design, and packaging." +752,"An instruction set architecture is the interface between the computer's software and hardware and also can be viewed as the programmer's view of the machine. Computers do not understand high-level programming languages such as Java, C++, or most programming languages used. A processor only understands instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate those high level languages into instructions that the processor can understand." +753,"Besides instructions, the ISA defines items in the computer that are available to a program—e.g., data types, registers, addressing modes, and memory. Instructions locate these available items with register indexes and memory addressing modes." +754,"The ISA of a computer is usually described in a small instruction manual, which describes how the instructions are encoded. Also, it may define short mnemonic names for the instructions. The names can be recognized by a software development tool called an assembler. An assembler is a computer program that translates a human-readable form of the ISA into a computer-readable form. Disassemblers are also widely available, usually in debuggers and software programs to isolate and correct malfunctions in binary computer programs." +755,"ISAs vary in quality and completeness. A good ISA compromises between programmer convenience , size of the code , cost of the computer to interpret the instructions , and speed of the computer . 
Memory organization defines how instructions interact with the memory, and how memory interacts with itself." +756,"During design emulation, emulators can run programs written in a proposed instruction set. Modern emulators can measure size, cost, and speed to determine whether a particular ISA is meeting its goals." +757,"Computer organization helps optimize performance-based products. For example, software engineers need to know the processing power of processors. They may need to optimize software in order to gain the most performance for the lowest price. This can require quite a detailed analysis of the computer's organization. For example, in an SD card, the designers might need to arrange the card so that the most data can be processed in the fastest possible way." +758,"Computer organization also helps plan the selection of a processor for a particular project. Multimedia projects may need very rapid data access, while virtual machines may need fast interrupts. Sometimes certain tasks need additional components as well. For example, a computer capable of running a virtual machine needs virtual memory hardware so that the memory of different virtual computers can be kept separated. Computer organization and features also affect power consumption and processor cost." +759,"Once an instruction set and micro-architecture have been designed, a practical machine must be developed. This design process is called the implementation. Implementation is usually not considered architectural design, but rather hardware design engineering. Implementation can be further broken down into several steps:" +760,"For CPUs, the entire implementation process is organized differently and is often referred to as CPU design." +761,"The exact form of a computer system depends on the constraints and goals. Computer architectures usually trade off standards, power versus performance, cost, memory capacity, latency and throughput. 
Sometimes other considerations, such as features, size, weight, reliability, and expandability are also factors." +762,The most common scheme does an in-depth power analysis and figures out how to keep power consumption low while maintaining adequate performance. +763,"Modern computer performance is often described in instructions per cycle , which measures the efficiency of the architecture at any clock frequency; a faster IPC rate means the computer is faster. Older computers had IPC counts as low as 0.1 while modern processors easily reach nearly 1. Superscalar processors may reach three to five IPC by executing several instructions per clock cycle." +764,"Counting machine-language instructions would be misleading because they can do varying amounts of work in different ISAs. The ""instruction"" in the standard measurements is not a count of the ISA's machine-language instructions, but a unit of measurement, usually based on the speed of the VAX computer architecture." +765,"Many people used to measure a computer's speed by the clock rate . This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have greater performance. As a result, manufacturers have moved away from clock speed as a measure of performance." +766,"Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs." +767,There are two main types of speed: latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event . +768,"Performance is affected by a very wide range of design choices — for example, pipelining a processor usually makes latency worse, but makes throughput better. 
Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable and limited time period after the brake pedal is sensed or else failure of the brake will occur."
+769,"Benchmarking takes all these factors into account by measuring the time a computer takes to run through a series of test programs. Although benchmarking shows strengths, it should not be the sole basis for choosing a computer. Often the measured machines split on different measures. For example, one system might handle scientific applications quickly, while another might render video games more smoothly. Furthermore, designers may target and add special features to their products, through hardware or software, that permit a specific benchmark to execute quickly but do not offer similar advantages to general tasks."
+770,Power efficiency is another important measurement in modern computers. Higher power efficiency can often be traded for lower speed or higher cost. The typical measurement when referring to power consumption in computer architecture is MIPS/W.
+771,"Modern circuits require less power per transistor as the number of transistors per chip grows. This is because each transistor that is put in a new chip requires its own power supply and requires new pathways to be built to power it. However, the number of transistors per chip is starting to increase at a slower rate. Therefore, power efficiency is starting to become as important as, if not more important than, fitting more and more transistors into a single chip. Recent processor designs have shown this emphasis as they put more focus on power efficiency rather than cramming as many transistors into a single chip as possible. 
In the world of embedded computers, power efficiency has long been an important goal next to throughput and latency." +772,"Increases in clock frequency have grown more slowly over the past few years, compared to power reduction improvements. This has been driven by the end of Moore's Law and demand for longer battery life and reductions in size for mobile technology. This change in focus from higher clock rates to power consumption and miniaturization can be seen in the significant reductions in power consumption, as much as 50%, that were reported by Intel in their release of the Haswell microarchitecture, where they dropped their power consumption benchmark from 30–40 watts down to 10–20 watts. Comparing this to the processing speed increase of 3 GHz to 4 GHz, it can be seen that the focus in research and development is shifting away from clock frequency and moving towards consuming less power and taking up less space." +773,"Shor proposed multiple similar algorithms for solving the factoring problem, the discrete logarithm problem, and the period-finding problem. ""Shor's algorithm"" usually refers to the factoring algorithm, but may refer to any of the three algorithms. The discrete logarithm algorithm and the factoring algorithm are instances of the period-finding algorithm, and all three are instances of the hidden subgroup problem." +774,"The basic characteristic of a computable function is that there must be a finite procedure telling how to compute the function. The models of computation listed above give different interpretations of what a procedure is and how it is used, but these interpretations share many properties. The fact that these models give equivalent classes of computable functions stems from the fact that each model is capable of reading and mimicking a procedure for any of the other models, much as a compiler is able to read instructions in one computer language and emit instructions in another language." 
+775,"Enderton gives the following characteristics of a procedure for computing a computable function; similar characterizations have been given by Turing, Rogers, and others." +776,Enderton goes on to list several clarifications of these three requirements of the procedure for a computable function: +777,"The procedure must theoretically work for arbitrarily large arguments. It is not assumed that the arguments are smaller than the number of atoms in the Earth, for example." +778,"The procedure is required to halt after finitely many steps in order to produce an output, but it may take arbitrarily many steps before halting. No time limitation is assumed." +779,"Although the procedure may use only a finite amount of storage space during a successful computation, there is no bound on the amount of space that is used. It is assumed that additional storage space can be given to the procedure whenever the procedure asks for it." +780,"To summarise, based on this view, a function is computable if:" +781,The field of computational complexity studies functions with prescribed bounds on the time and/or space allowed in a successful computation. +782,"A set A of natural numbers is called computable if there is a computable, total function f such that for any natural number n, f(n) = 1 if n is in A and f(n) = 0 if n is not in A." +783,"A set of natural numbers is called computably enumerable if there is a computable function f such that for each number n, f(n) is defined if and only if n is in the set. Thus a set is computably enumerable if and only if it is the domain of some computable function. The word enumerable is used because the following are equivalent for a nonempty subset B of the natural numbers:" +784,"One such function, which is provably total but not primitive recursive, is the Ackermann function: since it is recursively defined, it is indeed easy to prove its computability." 
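The Ackermann function mentioned above can be written down directly as a finite procedure. A minimal Python sketch, assuming the common two-argument Ackermann–Péter form:

```python
def ackermann(m: int, n: int) -> int:
    """Ackermann–Péter function: total and computable,
    but it grows too quickly to be primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

The procedure halts after finitely many steps for every input, as required of a computable function, but the number of steps explodes: ackermann(4, 2), for instance, equals 2^65536 − 3 and is far beyond direct recursion.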
+785,"In a sound proof system, every provably total function is indeed total, but the converse is not true: in every first-order proof system that is strong enough and sound, one can prove the existence of total functions that cannot be proven total in the proof system." +786,"If the total computable functions are enumerated via the Turing machines that produce them, then the above statement can be shown, if the proof system is sound, by a similar diagonalization argument to that used above, using the enumeration of provably total functions given earlier. One uses a Turing machine that enumerates the relevant proofs, and for every input n calls fn, the n-th function in this enumeration, by invoking the Turing machine that computes it according to the n-th proof. Such a Turing machine is guaranteed to halt if the proof system is sound." +787,"Every computable function has a finite procedure giving explicit, unambiguous instructions on how to compute it. Furthermore, this procedure has to be encoded in the finite alphabet used by the computational model, so there are only countably many computable functions. For example, functions may be encoded using a string of bits." +788,"The real numbers are uncountable so most real numbers are not computable. See computable number. The set of finitary functions on the natural numbers is uncountable so most are not computable. Concrete examples of such functions are Busy beaver, Kolmogorov complexity, or any function that outputs the digits of a noncomputable number, such as Chaitin's constant." +789,"Similarly, most subsets of the natural numbers are not computable. The halting problem was the first such set to be constructed. The Entscheidungsproblem, proposed by David Hilbert, asked whether there is an effective procedure to determine which mathematical statements are true. Turing and Church independently showed in the 1930s that this set of natural numbers is not computable. 
According to the Church–Turing thesis, there is no effective procedure which can perform these computations." +790,"The notion of computability of a function can be relativized to an arbitrary set of natural numbers A. A function f is defined to be computable in A when it satisfies the definition of a computable function with modifications allowing access to A as an oracle. As with the concept of a computable function, relative computability can be given equivalent definitions in many different models of computation. This is commonly accomplished by supplementing the model of computation with an additional primitive operation which asks whether a given integer is a member of A. We can also talk about f being computable in g by identifying g with its graph." +791,"Hyperarithmetical theory studies those sets that can be computed from a computable ordinal number of iterates of the Turing jump of the empty set. This is equivalent to sets defined by both a universal and existential formula in the language of second-order arithmetic and to some models of hypercomputation. Even more general recursion theories have been studied, such as E-recursion theory in which any set can be used as an argument to an E-recursive function." +792,"Although the Church–Turing thesis states that the computable functions include all functions with algorithms, it is possible to consider broader classes of functions that relax the requirements that algorithms must possess. The field of hypercomputation studies models of computation that go beyond normal Turing computation." +793,A quantum computer is a computer that takes advantage of quantum mechanical phenomena. +794,"On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior, specifically quantum superposition and entanglement, using specialized hardware that supports the preparation and manipulation of quantum states." 
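The superposition behaviour described above can be illustrated with a tiny state-vector calculation; this is a plain classical simulation for illustration, not a quantum-hardware API:

```python
import math

# A qubit state is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1; the basis state |0> is (1, 0).
ket0 = (1 + 0j, 0 + 0j)

def hadamard(state):
    """Apply the Hadamard gate, which maps |0> to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure_probs(state):
    """Born rule: measurement outcome probabilities are squared amplitudes."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

plus = hadamard(ket0)          # (|0> + |1>) / sqrt(2)
probs = measure_probs(plus)    # both outcomes equally likely, about 0.5 each
```

Applying hadamard twice maps |0> back to |0> exactly: the amplitudes of the two paths cancel, a minimal instance of the wave-interference effects that quantum algorithms exploit.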
+795,"Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern ""classical"" computer. In particular, a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the technology is largely experimental and impractical, with several obstacles to useful applications. Moreover, scalable quantum computers do not hold promise for many practical tasks, and for many important tasks quantum speedups are proven impossible." +796,"The basic unit of information in quantum computing is the qubit, similar to the bit in traditional digital electronics. Unlike a classical bit, a qubit can exist in a superposition of its two ""basis"" states. When measuring a qubit, the result is a probabilistic output of a classical bit, therefore making quantum computers nondeterministic in general. If a quantum computer manipulates the qubit in a particular way, wave interference effects can amplify the desired measurement results. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform calculations efficiently and quickly." +797,"Physically engineering high-quality qubits has proven challenging. If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations. Paradoxically, perfectly isolating qubits is also undesirable because quantum computations typically need to initialize qubits, perform controlled qubit interactions, and measure the resulting quantum states. Each of those operations introduces errors and suffers from noise, and such inaccuracies accumulate." +798,"In principle, a non-quantum computer can solve the same computational problems as a quantum computer, given enough time. 
Quantum advantage comes in the form of time complexity rather than computability, and quantum complexity theory shows that some quantum algorithms for carefully selected tasks require exponentially fewer computational steps than the best known non-quantum algorithms. Such tasks can in theory be solved on a large-scale quantum computer whereas classical computers would not finish computations in any reasonable amount of time. However, quantum speedup is not universal or even typical across computational tasks, since basic tasks such as sorting are proven to not allow any asymptotic quantum speedup. Claims of quantum supremacy have drawn significant attention to the discipline, but are demonstrated on contrived tasks, while near-term practical use cases remain limited." +799,"For many years, the fields of quantum mechanics and computer science formed distinct academic communities. Modern quantum theory developed in the 1920s to explain the wave–particle duality observed at atomic scales, and digital computers emerged in the following decades to replace human computers for tedious calculations. Both disciplines had practical applications during World War II; computers played a major role in wartime cryptography, and quantum physics was essential for the nuclear physics used in the Manhattan Project." +800,"An adiabatic quantum computer, based on quantum annealing, decomposes computation into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution." +801,"Neuromorphic quantum computing is an unconventional type of computing that uses neuromorphic computing to perform quantum operations. It was suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing. 
Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional computing approaches and do not follow the von Neumann architecture. They both construct a system that represents the physical problem at hand, and then leverage the respective physical properties of the system to seek the ""minimum"". Neuromorphic quantum computing and quantum computing share similar physical properties during computation." +802,A topological quantum computer decomposes computation into the braiding of anyons in a 2D lattice. +803,"A quantum Turing machine is the quantum analog of a Turing machine. All of these models of computation—quantum circuits, one-way quantum computation, adiabatic quantum computation, and topological quantum computation—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical." +804,"Quantum computing has significant potential applications in the fields of cryptography and cybersecurity. Quantum cryptography, which relies on the principles of quantum mechanics, offers the possibility of secure communication channels that are resistant to eavesdropping. Quantum key distribution (QKD) protocols, such as BB84, enable the secure exchange of cryptographic keys between parties, ensuring the confidentiality and integrity of communication. Moreover, quantum random number generators (QRNGs) can produce high-quality random numbers, which are essential for secure encryption." +805,"However, quantum computing also poses challenges to traditional cryptographic systems. Shor's algorithm, a quantum algorithm for integer factorization, could potentially break widely used public-key cryptography schemes like RSA, which rely on the difficulty of factoring large numbers. 
Post-quantum cryptography, which involves the development of cryptographic algorithms that are resistant to attacks by both classical and quantum computers, is an active area of research aimed at addressing this concern." +806,"Ongoing research in quantum cryptography and post-quantum cryptography is crucial for ensuring the security of communication and data in the face of evolving quantum computing capabilities. Advances in these fields, such as the development of new QKD protocols, the improvement of QRNGs, and the standardization of post-quantum cryptographic algorithms, will play a key role in maintaining the integrity and confidentiality of information in the quantum era." +807,"Quantum cryptography enables new ways to transmit data securely; for example, quantum key distribution uses entangled quantum states to establish secure cryptographic keys. When a sender and receiver exchange quantum states, they can guarantee that an adversary does not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change. With appropriate cryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping." +808,"Modern fiber-optic cables can transmit quantum information over relatively short distances. Ongoing experimental research aims to develop more reliable hardware, hoping to scale this technology to long-distance quantum networks with end-to-end entanglement. Theoretically, this could enable novel technological applications, such as distributed quantum computing and enhanced quantum sensing." +809,"Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms." 
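The basis-sifting step behind QKD protocols such as BB84, described above, can be sketched with a classical simulation; the function name and random-bit setup here are illustrative assumptions, and the sketch does not model the quantum measurement that actually provides eavesdropping detection:

```python
import random

def bb84_sift(n_bits: int, seed: int = 7):
    """Simulate BB84 sifting: keep only the positions where the sender's
    and receiver's randomly chosen bases (0 = rectilinear, 1 = diagonal)
    happen to agree."""
    rng = random.Random(seed)
    sender_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    sender_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    receiver_bases = [rng.randint(0, 1) for _ in range(n_bits)]

    # With matching bases the receiver reads the bit correctly; with
    # mismatched bases the outcome would be random, so it is discarded.
    return [bit for bit, sb, rb in
            zip(sender_bits, sender_bases, receiver_bases) if sb == rb]

shared_key = bb84_sift(16)
```

On average about half of the positions survive sifting; in the real protocol the parties additionally compare a sample of the sifted key, since an eavesdropper measuring in the wrong basis would introduce detectable errors.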
+810,"Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups. These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, but evidence suggests that this is unlikely. Certain oracle problems like Simon's problem and the Bernstein–Vazirani problem do give provable speedups, though this is in the quantum query model, a restricted model where lower bounds are much easier to prove; these query-model speedups do not necessarily translate to speedups for practical problems." +811,"Other problems, including the simulation of quantum physical processes from chemistry and solid-state physics, the approximation of certain Jones polynomials, and the quantum algorithm for linear systems of equations have quantum algorithms appearing to give super-polynomial speedups and are BQP-complete. Because these problems are BQP-complete, an equally fast classical algorithm for them would imply that no quantum algorithm gives a super-polynomial speedup, which is believed to be unlikely." +812,"Some quantum algorithms, like Grover's algorithm and amplitude amplification, give polynomial speedups over corresponding classical algorithms. Though these algorithms give comparably modest quadratic speedup, they are widely applicable and thus give speedups for a wide range of problems." +813,"Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, quantum simulation may be an important application of quantum computing. 
Quantum simulation could also be used to simulate the behavior of atoms and particles under unusual conditions such as the reactions inside a collider. In June 2023, IBM computer scientists reported that a quantum computer produced better results for a physics problem than a conventional supercomputer." +814,"About 2% of the annual global energy output is used for nitrogen fixation to produce ammonia for the Haber process in the agricultural fertilizer industry. Quantum simulations might be used to understand this process and increase the energy efficiency of production. It is expected that an early use of quantum computing will be modeling that improves the efficiency of the Haber–Bosch process by the mid-2020s, although some have predicted it will take longer." +815,"A notable application of quantum computation is for attacks on cryptographic systems that are currently in use. Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of a few prime numbers. By comparison, a quantum computer could solve this problem exponentially faster using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. The RSA, Diffie–Hellman, and elliptic curve Diffie–Hellman algorithms could all be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security." 
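The number-theoretic shell of Shor's algorithm is classical; only the order-finding step needs a quantum computer. A minimal sketch, with order finding done by brute force (exactly the exponential-time step Shor's algorithm accelerates) and with illustrative function names:

```python
from math import gcd

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r ≡ 1 (mod n), found by brute force here;
    Shor's algorithm finds r efficiently on a quantum computer."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n: int, a: int):
    """Given a base a coprime to n, try to split n using the order of a."""
    assert gcd(a, n) == 1
    r = find_order(a, n)
    if r % 2 == 1:
        return None              # odd order: retry with another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None              # trivial square root: retry
    return gcd(y - 1, n), gcd(y + 1, n)
```

For example, factoring 15 with base 7: the order of 7 modulo 15 is 4, and gcd(7² ± 1, 15) yields the factors 3 and 5.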
+816,"Identifying cryptographic systems that may be secure against quantum algorithms is an actively researched topic under the field of post-quantum cryptography. Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory. Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice-based cryptosystems, is a well-studied open problem. It has been proven that applying Grover's algorithm to break a symmetric algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case, meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search." +817,There exists a Boolean function that evaluates each input and determines whether it is the correct answer. +818,"For problems with all these properties, the running time of Grover's algorithm on a quantum computer scales as the square root of the number of inputs, as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied is a Boolean satisfiability problem, where the database through which the algorithm iterates is that of all possible answers. An example and possible application of this is a password cracker that attempts to guess a password. Breaking symmetric ciphers with this algorithm is of interest to government agencies." +819,"Quantum annealing relies on the adiabatic theorem to undertake calculations. 
A system is placed in the ground state for a simple Hamiltonian, which slowly evolves to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times through the process. Adiabatic optimization may be helpful for solving computational biology problems." +820,"Since quantum computers can produce outputs that classical computers cannot produce efficiently, and since quantum computation is fundamentally linear algebraic, some express hope in developing quantum algorithms that can speed up machine learning tasks." +821,"For example, the HHL algorithm, named after its discoverers Harrow, Hassidim, and Lloyd, is believed to provide speedup over classical counterparts. Some research groups have recently explored the use of quantum annealing hardware for training Boltzmann machines and deep neural networks." +822,"Deep generative chemistry models emerge as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome in the future by quantum computers. Quantum computers are naturally good for solving complex quantum many-body problems and thus may be instrumental in applications involving quantum chemistry. Therefore, one can expect that quantum-enhanced generative models including quantum GANs may eventually be developed into ultimate generative chemistry algorithms." +823,"As of 2023, classical computers outperform quantum computers for all real-world applications. While current quantum computers may speed up solutions to particular mathematical problems, they give no computational advantage for practical tasks. 
For many tasks there is no promise of useful quantum speedup, and for some tasks any quantum speedup is provably ruled out. Scientists and engineers are exploring multiple technologies for quantum computing hardware and hope to develop scalable quantum architectures, but serious obstacles remain." +824,There are a number of technical challenges in building a large-scale quantum computer. Physicist David DiVincenzo has listed these requirements for a practical quantum computer: +825,"Sourcing parts for quantum computers is also very difficult. Superconducting quantum computers, like those constructed by Google and IBM, need helium-3, a nuclear research byproduct, and special superconducting cables made only by the Japanese company Coax Co." +826,The control of multi-qubit systems requires the generation and coordination of a large number of electrical signals with tight and deterministic timing resolution. This has led to the development of quantum controllers that enable interfacing with the qubits. Scaling these systems to support a growing number of qubits is an additional challenge. +827,"One of the greatest challenges involved with constructing quantum computers is controlling or removing quantum decoherence. This usually means isolating the system from its environment as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2, typically range between nanoseconds and seconds at low temperature. 
Currently, some quantum computers require their qubits to be cooled to 20 millikelvin in order to prevent significant decoherence. A 2020 study argues that ionizing radiation such as cosmic rays can nevertheless cause certain systems to decohere within milliseconds." +828,"As a result, time-consuming tasks may render some quantum algorithms inoperable, as attempting to maintain the state of qubits for a long enough duration will eventually corrupt the superpositions." +829,"These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time." +830,"As described by the threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often-cited figure for the required error rate in each gate for fault-tolerant computation is 10⁻³, assuming the noise is depolarizing." +831,"Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L², where L is the number of digits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10⁴ bits without error correction. With error correction, the figure would rise to about 10⁷ bits. 
Computation time is about L² or about 10⁷ steps and, at 1 MHz, about 10 seconds. However, the encoding and error-correction overheads increase the size of a real fault-tolerant quantum computer by several orders of magnitude. Careful estimates show that at least 3 million physical qubits would factor a 2,048-bit integer in 5 months on a fully error-corrected trapped-ion quantum computer. In terms of the number of physical qubits, to date, this remains the lowest estimate for a practically useful integer factorization problem of 1,024 bits or larger." +832,"Another approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads, and relying on braid theory to form stable logic gates." +833,"Physicist John Preskill coined the term quantum supremacy to describe the engineering feat of demonstrating that a programmable quantum device can solve a problem beyond the capabilities of state-of-the-art classical computers. The problem need not be useful, so some view the quantum supremacy test only as a potential future benchmark." +834,"In October 2019, Google AI Quantum, with the help of NASA, became the first to claim to have achieved quantum supremacy by performing calculations on the Sycamore quantum computer more than 3,000,000 times faster than they could be done on Summit, generally considered the world's fastest computer. This claim has been subsequently challenged: IBM has stated that Summit can perform samples much faster than claimed, and researchers have since developed better algorithms for the sampling problem used to claim quantum supremacy, giving substantial reductions to the gap between Sycamore and classical supercomputers and even beating it." +835,"In December 2020, a group at USTC implemented a type of Boson sampling on 76 photons with a photonic quantum computer, Jiuzhang, to demonstrate quantum supremacy. 
The authors claim that a classical contemporary supercomputer would require a computational time of 600 million years to generate the number of samples their quantum processor can generate in 20 seconds." +836,"Claims of quantum supremacy have generated hype around quantum computing, but they are based on contrived benchmark tasks that do not directly imply useful real-world applications." +837,"In January 2024, a study published in Physical Review Letters provided direct verification of quantum supremacy experiments by computing exact amplitudes for experimentally generated bitstrings using a new-generation Sunway supercomputer, demonstrating a significant leap in simulation capability built on a multiple-amplitude tensor network contraction algorithm. This development underscores the evolving landscape of quantum computing, highlighting both the progress and the complexities involved in validating quantum supremacy claims." +838,"Despite high hopes for quantum computing, significant progress in hardware, and optimism about future applications, a 2023 Nature spotlight article summarised current quantum computers as being ""For now, absolutely nothing"". The article elaborated that quantum computers are yet to be more useful or efficient than conventional computers in any case, though it also argued that in the long term such computers are likely to be useful. A 2023 Communications of the ACM article found that current quantum computing algorithms are ""insufficient for practical quantum advantage without significant improvements across the software/hardware stack"". It argues that the most promising candidates for achieving speedup with quantum computers are ""small-data problems"", for example in chemistry and materials science. 
However, the article also concludes that a large range of the potential applications it considered, such as machine learning, ""will not achieve quantum advantage with current quantum algorithms in the foreseeable future"", and it identified I/O constraints that make speedup unlikely for ""big data problems, unstructured linear systems, and database search based on Grover's algorithm""." +839,This state of affairs can be traced to several current and long-term considerations. +840,"In particular, building computers with large numbers of qubits may be futile if those qubits are not connected well enough and cannot maintain a sufficiently high degree of entanglement for a long time. When trying to outperform conventional computers, quantum computing researchers often look for new tasks that can be solved on quantum computers, but this leaves the possibility that efficient non-quantum techniques will be developed in response, as seen with quantum supremacy demonstrations. Therefore, it is desirable to prove lower bounds on the complexity of best possible non-quantum algorithms and show that some quantum algorithms asymptotically improve upon those bounds." +841,"Some researchers have expressed skepticism that scalable quantum computers could ever be built, typically because of the issue of maintaining coherence at large scales, but also for other reasons." +842,Bill Unruh doubted the practicality of quantum computers in a paper published in 1994. Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle. Skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved. Physicist Mikhail Dyakonov has expressed skepticism of quantum computing as follows: +843,"A practical quantum computer must use a physical system as a programmable quantum register. Researchers are exploring several technologies as candidates for reliable qubit implementations. 
Superconductors and trapped ions are some of the most developed proposals, but experimentalists are considering other hardware possibilities as well." +844,"Any computational problem solvable by a classical computer is also solvable by a quantum computer. Intuitively, this is because it is believed that all physical phenomena, including the operation of classical computers, can be described using quantum mechanics, which underlies the operation of quantum computers." +845,"Conversely, any problem solvable by a quantum computer is also solvable by a classical computer. It is possible to simulate both quantum and classical computers manually with just some paper and a pen, if given enough time. More formally, any quantum computer can be simulated by a Turing machine. In other words, quantum computers provide no additional power over classical computers in terms of computability. This means that quantum computers cannot solve undecidable problems like the halting problem, and the existence of quantum computers does not disprove the Church–Turing thesis." +846,"While quantum computers cannot solve any problems that classical computers cannot already solve, it is suspected that they can solve certain problems faster than classical computers. For instance, it is known that quantum computers can efficiently factor integers, while this is not believed to be the case for classical computers." +847,"The class of problems that can be efficiently solved by a quantum computer with bounded error is called BQP, for ""bounded error, quantum, polynomial time"". More formally, BQP is the class of problems that can be solved by a polynomial-time quantum Turing machine with an error probability of at most 1/3. As a class of probabilistic problems, BQP is the quantum counterpart to BPP , the class of problems that can be solved by polynomial-time probabilistic Turing machines with bounded error. 
It is known that BPP ⊆ BQP and it is widely suspected that BPP ⊊ BQP, which intuitively would mean that quantum computers are more powerful than classical computers in terms of time complexity."
+848,"The exact relationship of BQP to P, NP, and PSPACE is not known. However, it is known that P ⊆ BQP ⊆ PSPACE; that is, all problems that can be efficiently solved by a deterministic classical computer can also be efficiently solved by a quantum computer, and all problems that can be efficiently solved by a quantum computer can also be solved by a deterministic classical computer with polynomial space resources. It is further suspected that BQP is a strict superset of P, meaning there are problems that are efficiently solvable by quantum computers that are not efficiently solvable by deterministic classical computers. For instance, integer factorization and the discrete logarithm problem are known to be in BQP and are suspected to be outside of P. On the relationship of BQP to NP, little is known beyond the fact that some NP problems that are believed not to be in P are also in BQP. It is suspected that NP ⊈ BQP; that is, it is believed that there are efficiently checkable problems that are not efficiently solvable by a quantum computer. As a direct consequence of this belief, it is also suspected that BQP is disjoint from the class of NP-complete problems."
+849,"The concept of wetware is an application of specific interest to the field of computer manufacturing. 
Moore's law, which states that the number of transistors that can be placed on a silicon chip doubles roughly every two years, has acted as a goal for the industry for decades, but as transistors and integrated circuits approach their physical size limits, meeting this goal has become more difficult, threatening a plateau. Given this difficulty in further shrinking conventional components, wetware provides an unconventional alternative. A wetware computer composed of neurons is an appealing concept because, unlike conventional materials, which operate in binary, a neuron can shift between thousands of states, constantly altering its chemical conformation and redirecting electrical pulses through over 200,000 channels in any of its many synaptic connections. Because of this large difference in the possible settings for any one neuron, compared to the binary limitations of conventional computers, the physical space required is far smaller."
+850,"The concept of wetware is distinct and unconventional and bears only a loose resemblance to both hardware and software from conventional computers. While hardware is understood as the physical architecture of traditional computational devices, built from electrical circuitry and silicon wafers, software represents the encoded architecture of storage and instructions. Wetware is a separate concept that uses the formation of organic molecules, mostly complex cellular structures, to create a computational device such as a computer. In wetware, the ideas of hardware and software are intertwined and interdependent. The molecular and chemical composition of the organic or biological structure would represent not only the physical structure of the wetware but also the software, being continually reprogrammed by the discrete shifts in electrical pulses and chemical concentration gradients as the molecules change their structures to communicate signals. 
The responsiveness of a cell, proteins, and molecules to changing conformations, both within their structures and around them, ties the idea of internal programming and external structure together in a way that is alien to the current model of conventional computer architecture." +851,"The structure of wetware represents a model where the external structure and internal programming are interdependent and unified; meaning that changes to the programming or internal communication between molecules of the device would represent a physical change in the structure. The dynamic nature of wetware borrows from the function of complex cellular structures in biological organisms. The combination of “hardware” and “software” into one dynamic, and interdependent system which uses organic molecules and complexes to create an unconventional model for computational devices is a specific example of applied biorobotics." +852,"Cells in many ways can be seen as their form of naturally occurring wetware, similar to the concept that the human brain is the preexisting model system for complex wetware. In his book Wetware: A Computer in Every Living Cell Dennis Bray explains his theory that cells, which are the most basic form of life, are just a highly complex computational structure, like a computer. To simplify one of his arguments a cell can be seen as a type of computer, using its structured architecture. In this architecture, much like a traditional computer, many smaller components operate in tandem to receive input, process the information, and compute an output. In an overly simplified, non-technical analysis, cellular function can be broken into the following components: Information and instructions for execution are stored as DNA in the cell, RNA acts as a source for distinctly encoded input, processed by ribosomes and other transcription factors to access and process the DNA and to output a protein. 
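Bray's cell-as-computer analogy can be caricatured in a few lines of code: DNA as the stored program, transcription as the instruction fetch, and translation by the ribosome as execution. The sketch below is a toy illustration using a small subset of the real codon table, not a claim about Bray's own formalism.

```python
# Toy sketch of the cell-as-computer analogy: DNA stores the program,
# transcription fetches it as mRNA, translation "executes" it into protein.
# CODON_TABLE is a small illustrative subset of the real genetic code.
CODON_TABLE = {
    "AUG": "M",                              # methionine (start)
    "GCU": "A", "GCC": "A",                  # alanine
    "UGG": "W",                              # tryptophan
    "UUU": "F",                              # phenylalanine
    "UAA": None, "UAG": None, "UGA": None,   # stop codons
}

def transcribe(dna):
    """DNA coding strand -> mRNA (T replaced by U)."""
    return dna.replace("T", "U")

def translate(mrna):
    """Read codons in triplets until a stop codon, emitting amino acids."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa is None:   # stop codon: halt "execution"
            break
        protein.append(aa)
    return "".join(protein)

protein = translate(transcribe("ATGGCTTGGTAA"))  # -> "MAW"
```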
Bray's argument in favor of viewing cells and cellular structures as models of natural computational devices is important when considering the more applied theories of wetware to biorobotics." +853,"Wetware and biorobotics are closely related concepts, which both borrow from similar overall principles. A biorobotic structure can be defined as a system modeled from a preexisting organic complex or model such as cells or more complex structures like organs or whole organisms. Unlike wetware the concept of biorobotics is not always a system composed of organic molecules, but instead could be composed of conventional material which is designed and assembled in a structure similar or derived from a biological model. Biorobotics have many applications and are used to address the challenges of conventional computer architecture. Conceptually, designing a program, robot, or computational device after a preexisting biological model such as a cell, or even a whole organism, provides the engineer or programmer the benefits of incorporating into the structure the evolutionary advantages of the model." +854,"In 1999 William Ditto and his team of researchers at Georgia Institute of Technology and Emory University created a basic form of a wetware computer capable of simple addition by harnessing leech neurons. Leeches were used as a model organism due to the large size of their neuron, and the ease associated with their collection and manipulation. However, these results have never been published in a peer-reviewed journal, prompting questions about the validity of the claims. The computer was able to complete basic addition through electrical probes inserted into the neuron. The manipulation of electrical currents through neurons was not a trivial accomplishment, however. 
Unlike conventional computer architecture, which is based on binary on/off states, neurons are capable of existing in thousands of states and communicate with each other through synaptic connections that each contain over 200,000 channels. Each can be dynamically shifted, in a process called self-organization, to constantly form and reform new connections. The dynamic clamp, a conventional computer program written by Eve Marder, a neurobiologist at Brandeis University, was capable of reading the electrical pulses from the neurons in real time and interpreting them. This program was used to manipulate the electrical signals being input into the neurons so that they represented numbers and communicated with each other to return the sum. While this computer is a very basic example of a wetware structure, it represents a small example with fewer neurons than found in a more complex organ. Ditto believes that by increasing the number of neurons present, the chaotic signals sent between them will self-organize into a more structured pattern, such as the regulation of heart neurons into the constant heartbeat found in humans and other living organisms."
+855,"After his work creating a basic computer from leech neurons, Ditto continued to work not only with organic molecules and wetware but also on the concept of applying the chaotic nature of biological systems and organic molecules to conventional material and logic gates. Chaotic systems have advantages for generating patterns and computing higher-order functions like memory, arithmetic logic, and input/output operations. In his article Construction of a Chaotic Computer Chip, Ditto discusses the programming advantages of chaotic systems, with their greater sensitivity to respond and reconfigure logic gates in his conceptual chaotic chip. The main difference between a chaotic computer chip and a conventional computer chip is the reconfigurability of the chaotic system. 
Unlike a traditional computer chip, where a programmable gate array element must be reconfigured through the switching of many single-purpose logic gates, a chaotic chip can reconfigure all logic gates through the control of the pattern generated by the non-linear chaotic element."
+856,"Cognitive biology evaluates cognition as a basic biological function. W. Tecumseh Fitch, a professor of cognitive biology at the University of Vienna, is a leading theorist on ideas of cellular intentionality. The idea is that not only do whole organisms have a sense of ""aboutness"", or intentionality, but that single cells also carry a sense of intentionality through cells' ability to adapt and reorganize in response to certain stimuli. Fitch discusses the idea of nano-intentionality, specifically with regard to neurons, in their ability to adjust rearrangements to create neural networks. He considers the ability of cells such as neurons to respond independently to stimuli such as damage to be ""intrinsic intentionality"" in cells, explaining that ""[w]hile at a vastly simpler level than intentionality at the human cognitive level, I propose that this basic capacity of living things provides the necessary building blocks for cognition and higher-order intentionality."" Fitch describes the value of his research to specific areas of computer science such as artificial intelligence and computer architecture. He states ""If a researcher aims to make a conscious machine, doing it with rigid switches is barking up the wrong tree."" Fitch believes that an important aspect of the development of areas such as artificial intelligence is wetware with nano-intentionality and an autonomous ability to adapt and restructure itself."
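The reconfigurability of a chaotic element described above can be caricatured in software. The toy sketch below (an illustration of the principle, not Ditto's actual circuit) uses a single nonlinear element, the logistic map, and shows that changing only the threshold and iteration count makes the same element compute AND, OR, or XOR.

```python
def logistic(x):
    # One nonlinear "chaotic element": the logistic map at r = 4.
    return 4.0 * x * (1.0 - x)

def chaotic_gate(a, b, iterations, threshold):
    # Encode the two binary inputs as an initial condition, iterate the
    # map, then threshold the result. Changing only (iterations, threshold)
    # reconfigures which logic function the same element computes.
    x = 0.25 * (a + b)
    for _ in range(iterations):
        x = logistic(x)
    return int(x > threshold)

AND = lambda a, b: chaotic_gate(a, b, iterations=1, threshold=0.9)
OR  = lambda a, b: chaotic_gate(a, b, iterations=1, threshold=0.5)
XOR = lambda a, b: chaotic_gate(a, b, iterations=2, threshold=0.5)
```

One dynamical element plus a handful of control parameters replaces three fixed-purpose gates, which is the reconfigurability argument in miniature.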
+857,"In a review of the above-mentioned research conducted by Fitch, Daniel Dennett, a professor at Tufts University, discusses the importance of the distinction between the concept of hardware and software when evaluating the idea of wetware and organic material such as neurons. Dennett discusses the value of observing the human brain as a preexisting example of wetware. He sees the brain as having ""the competence of a silicon computer to take on an unlimited variety of temporary cognitive roles."" Dennett disagrees with Fitch on certain areas, such as the relationship of software/hardware versus wetware, and what a machine with wetware might be capable of. Dennett highlights the importance of additional research into human cognition to better understand the intrinsic mechanism by which the human brain can operate, to better create an organic computer." +858,"Brain-on-a-chip devices have been developed that are ""aimed at testing and predicting the effects of biological and chemical agents, disease or pharmaceutical drugs on the brain over time"". Wetware computers may be useful for research about brain diseases and brain health/capacities , for drug discovery, for testing genome edits and research about brain aging." +859,"Wetware computers may have substantial ethical implications, for instance related to possible potentials to sentience and suffering and dual-use technology." +860,"Moreover, in some cases the human brain itself may be connected as a kind of ""wetware"" to other information technology systems which may also have large social and ethical implications, including issues related to intimate access to people's brains. For example, in 2021 Chile became the first country to approve neurolaw that establishes rights to personal identity, free will and mental privacy." +861,"The concept of artificial insects may raise substantial ethical questions, including questions related to the decline in insect populations." 
+862,"It is an open question whether human cerebral organoids could develop a degree or form of consciousness. Whether or how it could acquire its moral status with related rights and limits may also be potential future questions. There is research on how consciousness could be detected. As cerebral organoids may acquire human brain-like neural function subjective experience and consciousness may be feasible. Moreover, it may be possible that they acquire such upon transplantation into animals. A study notes that it may, in various cases, be morally permissible ""to create self-conscious animals by engrafting human cerebral organoids, but in the case, the moral status of such animals should be carefully considered""." +863,"While there have been few major developments in the creation of an organic computer since the neuron-based calculator developed by Ditto in the 1990s, research continues to push the field forward, and in 2023 a functioning computer was constructed by researchers at the University of Illinois Urbana-Champaign using 80,000 mouse neurons as processor that can detect light and electrical signals. Projects such as the modeling of chaotic pathways in silicon chips by Ditto have made discoveries in ways of organizing traditional silicon chips and structuring computer architecture to be more efficient and better structured. Ideas emerging from the field of cognitive biology also help to continue to push discoveries in ways of structuring systems for artificial intelligence, to better imitate preexisting systems in humans." +864,"In a proposed fungal computer using basidiomycetes, information is represented by spikes of electrical activity, a computation is implemented in a mycelium network, and an interface is realized via fruit bodies." +865,"Connecting cerebral organoids with other nerve tissues may become feasible in the future, as is the connection of physical artificial neurons and the control of muscle tissue. 
External modules of biological tissue could trigger parallel trains of stimulation back into the brain. All-organic devices could be advantageous because it could be biocompatible which may allow it to be implanted into the human body. This may enable treatments of certain diseases and injuries to the nervous system." +866,Three companies are focusing specifically on wetware computing using living neurons: +867,"Leonard Adleman of the University of Southern California initially developed this field in 1994. Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have occurred and various Turing machines have been proven to be constructible." +868,"Since then the field has expanded into several avenues. In 1995, the idea for DNA-based memory was proposed by Eric Baum who conjectured that a vast amount of data can be stored in a tiny amount of DNA due to its ultra-high density. This expanded the horizon of DNA computing into the realm of memory technology although the in vitro demonstrations were made almost after a decade." +869,"The field of DNA computing can be categorized as a sub-field of the broader DNA nanoscience field started by Ned Seeman about a decade before Len Adleman's demonstration. Ned's original idea in the 1980s was to build arbitrary structures using bottom-up DNA self-assembly for applications in crystallography. However, it morphed into the field of structural DNA self-assembly which as of 2020 is extremely sophisticated. Self-assembled structure from a few nanometers tall all the way up to several tens of micrometers in size have been demonstrated in 2018." +870,"In 1994, Prof. Seeman's group demonstrated early DNA lattice structures using a small set of DNA components. 
While the demonstration by Adleman showed the possibility of DNA-based computers, the design did not scale: as the number of nodes in a graph grows, the number of DNA components required in Adleman's implementation grows exponentially. Therefore, computer scientists and biochemists started exploring tile-assembly, where the goal was to use a small set of DNA strands as tiles to perform arbitrary computations upon growth. Other avenues that were theoretically explored in the late 1990s include DNA-based security and cryptography, the computational capacity of DNA systems, DNA memories and disks, and DNA-based robotics."
+871,"Before 2002, Lila Kari showed that the DNA operations performed by genetic recombination in some organisms are Turing complete."
+872,"In 2003, John Reif's group first demonstrated the idea of a DNA-based walker that traversed along a track, similar to a line follower robot. They used molecular biology as a source of energy for the walker. Since this first demonstration, a wide variety of DNA-based walkers have been demonstrated."
+873,"In 1994, Leonard Adleman presented the first prototype of a DNA computer. The TT-100 was a test tube filled with 100 microliters of a DNA solution. He managed to solve an instance of the directed Hamiltonian path problem. In Adleman's experiment, the Hamiltonian Path Problem was implemented notationally as the ""travelling salesman problem"". For this purpose, different DNA fragments were created, each one of them representing a city that had to be visited. Every one of these fragments is capable of linking with the other fragments created. These DNA fragments were produced and mixed in a test tube. Within seconds, the small fragments form bigger ones, representing the different travel routes. Through a chemical reaction, the DNA fragments representing the longer routes were eliminated. The remains are the solution to the problem, but overall, the experiment lasted a week. 
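Adleman's generate-and-filter procedure can be mimicked in software. In the sketch below (the graph and trial count are invented for illustration), random edge-chaining stands in for massively parallel ligation, and the filter mirrors the chemical selection steps: correct start and end vertices, correct length, and every vertex visited exactly once.

```python
import random

random.seed(0)  # reproducible "test tube"

# A small made-up directed graph; "s" is the start city, "t" the end city.
EDGES = {("s", "a"), ("s", "b"), ("a", "b"), ("b", "a"),
         ("a", "c"), ("b", "c"), ("c", "t")}
VERTICES = ["s", "a", "b", "c", "t"]

def random_path(length):
    # "Ligation": chain random edges starting from a random vertex.
    path = [random.choice(VERTICES)]
    while len(path) < length:
        successors = [v for v in VERTICES if (path[-1], v) in EDGES]
        if not successors:
            break
        path.append(random.choice(successors))
    return path

def is_hamiltonian(path):
    # "Selection": right endpoints, and each vertex appears exactly once.
    return (path[0] == "s" and path[-1] == "t"
            and sorted(path) == sorted(VERTICES))

solutions = {tuple(p)
             for p in (random_path(len(VERTICES)) for _ in range(20000))
             if is_hamiltonian(p)}
```

The brute-force character is the point: the test tube evaluates astronomically many candidate strands at once, whereas a program must enumerate them, which is why the required DNA grows exponentially with problem size.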
However, current technical limitations prevent the evaluation of the results. Therefore, the experiment isn't suitable for the application, but it is nevertheless a proof of concept." +874,First results to these problems were obtained by Leonard Adleman. +875,"In 2002, J. Macdonald, D. Stefanović and M. Stojanović created a DNA computer able to play tic-tac-toe against a human player. The calculator consists of nine bins corresponding to the nine squares of the game. Each bin contains a substrate and various combinations of DNA enzymes. The substrate itself is composed of a DNA strand onto which was grafted a fluorescent chemical group at one end, and the other end, a repressor group. Fluorescence is only active if the molecules of the substrate are cut in half. The DNA enzymes simulate logical functions. For example, such a DNA will unfold if two specific types of DNA strand are introduced to reproduce the logic function AND." +876,"By default, the computer is considered to have played first in the central square. The human player starts with eight different types of DNA strands corresponding to the eight remaining boxes that may be played. To play box number i, the human player pours into all bins the strands corresponding to input #i. These strands bind to certain DNA enzymes present in the bins, resulting, in one of these bins, in the deformation of the DNA enzymes which binds to the substrate and cuts it. The corresponding bin becomes fluorescent, indicating which box is being played by the DNA computer. The DNA enzymes are divided among the bins in such a way as to ensure that the best the human player can achieve is a draw, as in real tic-tac-toe." +877,Kevin Cherry and Lulu Qian at Caltech developed a DNA-based artificial neural network that can recognize 100-bit hand-written digits. 
They achieved this by programming the network in advance with an appropriate set of weights, represented by varying concentrations of weight molecules that are later added to the test tube holding the input DNA strands."
+878,"One of the challenges of DNA computing is its speed. While DNA as a substrate is biologically compatible, i.e. it can be used in places where silicon technology cannot, its computation speed is still very slow. For example, the square-root circuit used as a benchmark in the field took over 100 hours to complete. While newer approaches with external enzyme sources are reporting faster and more compact circuits, Chatterjee et al. demonstrated an interesting idea in the field to speed up computation through localized DNA circuits, a concept being further explored by other groups. This idea, while originally proposed in the field of computer architecture, has been adopted in this field as well. In computer architecture, it is very well known that if instructions are executed in sequence, having them loaded in the cache will inevitably lead to fast performance, also called the principle of locality. This is because with instructions in fast cache memory, there is no need to swap them in and out of main memory, which can be slow. Similarly, in localized DNA computing, the DNA strands responsible for computation are fixed on a breadboard-like substrate, ensuring physical proximity of the computing gates. Such localized DNA computing techniques have been shown to potentially reduce the computation time by orders of magnitude."
+879,"Subsequent research on DNA computing has produced reversible DNA computing, bringing the technology one step closer to the silicon-based computing used in PCs. In particular, John Reif and his group at Duke University have proposed two different techniques to reuse the computing DNA complexes. The first design uses dsDNA gates, while the second design uses DNA hairpin complexes. 
+While both the designs face some issues , this appears to represent a significant breakthrough in the field of DNA computing. Some other groups have also attempted to address the gate reusability problem." +880,"Using strand displacement reactions , reversible proposals are presented in the ""Synthesis Strategy of Reversible Circuits on DNA Computers"" paper for implementing reversible gates and circuits on DNA computers by combining DNA computing and reversible computing techniques. This paper also proposes a universal reversible gate library for synthesizing n-bit reversible circuits on DNA computers with an average length and cost of the constructed circuits better than the previous methods." +881,"There are multiple methods for building a computing device based on DNA, each with its own advantages and disadvantages. Most of these build the basic logic gates associated with digital logic from a DNA basis. Some of the different bases include DNAzymes, deoxyoligonucleotides, enzymes, and toehold exchange." +882,"The most fundamental operation in DNA computing and molecular programming is the strand displacement mechanism. Currently, there are two ways to perform strand displacement:" +883,"Besides simple strand displacement schemes, DNA computers have also been constructed using the concept of toehold exchange. In this system, an input DNA strand binds to a sticky end, or toehold, on another DNA molecule, which allows it to displace another strand segment from the molecule. This allows the creation of modular logic components such as AND, OR, and NOT gates and signal amplifiers, which can be linked into arbitrarily large computers. This class of DNA computers does not require enzymes or any chemical capability of the DNA." +884,"The full stack for DNA computing looks very similar to a traditional computer architecture. At the highest level, a C-like general purpose programming language is expressed using a set of chemical reaction networks . 
This intermediate representation gets translated to domain-level DNA design and then implemented using a set of DNA strands. In 2010, Erik Winfree's group showed that DNA can be used as a substrate to implement arbitrary chemical reactions. This opened the way to design and synthesis of biochemical controllers since the expressive power of CRNs is equivalent to a Turing machine. Such controllers can potentially be used in vivo for applications such as preventing hormonal imbalance." +885,"Catalytic DNA catalyze a reaction when interacting with the appropriate input, such as a matching oligonucleotide. These DNAzymes are used to build logic gates analogous to digital logic in silicon; however, DNAzymes are limited to 1-, 2-, and 3-input gates with no current implementation for evaluating statements in series." +886,"The DNAzyme logic gate changes its structure when it binds to a matching oligonucleotide and the fluorogenic substrate it is bonded to is cleaved free. While other materials can be used, most models use a fluorescence-based substrate because it is very easy to detect, even at the single molecule limit. The amount of fluorescence can then be measured to tell whether or not a reaction took place. The DNAzyme that changes is then ""used"", and cannot initiate any more reactions. Because of this, these reactions take place in a device such as a continuous stirred-tank reactor, where old product is removed and new molecules added." +887,"Two commonly used DNAzymes are named E6 and 8-17. These are popular because they allow cleaving of a substrate in any arbitrary location. Stojanovic and MacDonald have used the E6 DNAzymes to build the MAYA I and MAYA II machines, respectively; Stojanovic has also demonstrated logic gates using the 8-17 DNAzyme. While these DNAzymes have been demonstrated to be useful for constructing logic gates, they are limited by the need for a metal cofactor to function, such as Zn2+ or Mn2+, and thus are not useful in vivo." 
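The claim that chemical reaction networks (CRNs) form a programming abstraction can be illustrated with a classic three-reaction "program", approximate majority, which drives the initially more abundant of two species to take over the whole mixture. The sketch below integrates deterministic mass-action kinetics with forward Euler; it is a toy illustration of the CRN abstraction, not the compiler pipeline described above.

```python
# A tiny "program" in the CRN abstraction: approximate majority.
#   A + B -> 2C,   C + A -> 2A,   C + B -> 2B
# Whichever of A and B starts in the majority ends up as nearly the
# entire mixture. Deterministic mass-action ODEs, forward Euler.
def simulate(a, b, c=0.0, k=1.0, dt=0.01, t_end=100.0):
    for _ in range(int(t_end / dt)):
        r1 = k * a * b   # A + B -> 2C
        r2 = k * c * a   # C + A -> 2A
        r3 = k * c * b   # C + B -> 2B
        a += dt * (-r1 + r2)
        b += dt * (-r1 + r3)
        c += dt * (2 * r1 - r2 - r3)
    return a, b, c

a, b, c = simulate(a=0.6, b=0.4)   # A starts in the majority
```

Note the invariant: the three rate terms sum to zero, so total concentration is conserved, and d(a-b)/dt = c(a-b), so the initial majority can only grow; this is the kind of reasoning one does about a CRN "program" instead of reasoning about instructions.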
+888,"A design called a stem loop, consisting of a single strand of DNA which has a loop at an end, are a dynamic structure that opens and closes when a piece of DNA bonds to the loop part. This effect has been exploited to create several logic gates. These logic gates have been used to create the computers MAYA I and MAYA II which can play tic-tac-toe to some extent." +889,"Enzyme-based DNA computers are usually of the form of a simple Turing machine; there is analogous hardware, in the form of an enzyme, and software, in the form of DNA." +890,"Benenson, Shapiro and colleagues have demonstrated a DNA computer using the FokI enzyme and expanded on their work by going on to show automata that diagnose and react to prostate cancer: under expression of the genes PPAP2B and GSTP1 and an over expression of PIM1 and HPN. Their automata evaluated the expression of each gene, one gene at a time, and on positive diagnosis then released a single strand DNA molecule that is an antisense for MDM2. MDM2 is a repressor of protein 53, which itself is a tumor suppressor. On negative diagnosis it was decided to release a suppressor of the positive diagnosis drug instead of doing nothing. A limitation of this implementation is that two separate automata are required, one to administer each drug. The entire process of evaluation until drug release took around an hour to complete. This method also requires transition molecules as well as the FokI enzyme to be present. The requirement for the FokI enzyme limits application in vivo, at least for use in ""cells of higher organisms"". It should also be pointed out that the 'software' molecules can be reused in this case." +891,"DNA nanotechnology has been applied to the related field of DNA computing. DNA tiles can be designed to contain multiple sticky ends with sequences chosen so that they act as Wang tiles. 
A DX array has been demonstrated whose assembly encodes an XOR operation; this allows the DNA array to implement a cellular automaton which generates a fractal called the Sierpinski gasket. This shows that computation can be incorporated into the assembly of DNA arrays, increasing its scope beyond simple periodic arrays." +892,"DNA computing is a form of parallel computing in that it takes advantage of the many different molecules of DNA to try many different possibilities at once. For certain specialized problems, DNA computers are faster and smaller than any other computer built so far. Furthermore, particular mathematical computations have been demonstrated to work on a DNA computer." +893,"DNA computing does not provide any new capabilities from the standpoint of computability theory, the study of which problems are computationally solvable using different models of computation. +For example, +if the space required for the solution of a problem grows exponentially with the size of the problem on von Neumann machines, it still grows exponentially with the size of the problem on DNA machines. +For very large EXPSPACE problems, the amount of DNA required is too large to be practical." +894,"A partnership between IBM and Caltech was established in 2009 aiming at ""DNA chips"" production. A Caltech group is working on the manufacturing of these nucleic-acid-based integrated circuits. One of these chips can compute whole square roots. A compiler has been written in Perl." +895,"The slow processing speed of a DNA computer is compensated by its potential to make a high amount of multiple parallel computations. This allows the system to take a similar amount of time for a complex calculation as for a simple one. This is achieved by the fact that millions or billions of molecules interact with each other simultaneously. However, it is much harder to analyze the answers given by a DNA computer than by a digital one." 
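The XOR tile assembly mentioned above can be emulated in software: each tile in a new row is the XOR of the two tiles above it, which is exactly Pascal's triangle mod 2, the Sierpinski gasket. This is a toy emulation of the computation the DX array performs, not of the tile chemistry itself.

```python
# Emulate the XOR tile assembly: each new tile is the XOR of its two
# neighbours in the previous row (Pascal's triangle mod 2), so the grown
# "crystal" carries the Sierpinski gasket pattern.
def sierpinski_rows(n):
    row = [1]
    rows = [row]
    for _ in range(n - 1):
        row = [1] + [row[i] ^ row[i + 1] for i in range(len(row) - 1)] + [1]
        rows.append(row)
    return rows

for r in sierpinski_rows(8):
    print("".join("#" if v else "." for v in r).center(16))
```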
+896,"Most research projects focus on replacing current computer components with optical equivalents, resulting in an optical digital computer system processing binary data. This approach appears to offer the best short-term prospects for commercial optical computing, since optical components could be integrated into traditional computers to produce an optical-electronic hybrid. However, optoelectronic devices consume 30% of their energy converting electronic energy into photons and back; this conversion also slows the transmission of messages. All-optical computers eliminate the need for optical-electrical-optical conversions, thus reducing electrical power consumption." +897,"Application-specific devices, such as synthetic-aperture radar and optical correlators, have been designed to use the principles of optical computing. Correlators can be used, for example, to detect and track objects, and to classify serial time-domain optical data." +898,"The fundamental building block of modern electronic computers is the transistor. To replace electronic components with optical ones, an equivalent optical transistor is required. This is achieved by crystal optics . In particular, materials exist where the intensity of incoming light affects the intensity of the light transmitted through the material in a similar manner to the current response of a bipolar transistor. Such an optical transistor can be used to create optical logic gates, which in turn are assembled into the higher level components of the computer's central processing unit . These will be nonlinear optical crystals used to manipulate light beams into controlling other light beams." +899,"Like any computing system, an optical computing system needs three things to function well:" +900,optical processor +901,"optical data transfer, e.g. 
fiber-optic cable" +902,"optical storage," +903,"Substituting electrical components will need data format conversion from photons to electrons, which will make the system slower." +904,"There are some disagreements between researchers about the future capabilities of optical computers; whether or not they may be able to compete with semiconductor-based electronic computers in terms of speed, power consumption, cost, and size is an open question. Critics note that real-world logic systems require ""logic-level restoration, cascadability, fan-out and input–output isolation"", all of which are currently provided by electronic transistors at low cost, low power, and high speed. For optical logic to be competitive beyond a few niche applications, major breakthroughs in non-linear optical device technology would be required, or perhaps a change in the nature of computing itself." +905,"A significant challenge to optical computing is that computation is a nonlinear process in which multiple signals must interact. Light, which is an electromagnetic wave, can only interact with another electromagnetic wave in the presence of electrons in a material, and the strength of this interaction is much weaker for electromagnetic waves, such as light, than for the electronic signals in a conventional computer. This may result in the processing elements for an optical computer requiring more power and larger dimensions than those for a conventional electronic computer using transistors." +906,"A further misconception is that since light can travel much faster than the drift velocity of electrons, and at frequencies measured in THz, optical transistors should be capable of extremely high frequencies. However, any electromagnetic wave must obey the transform limit, and therefore the rate at which an optical transistor can respond to a signal is still limited by its spectral bandwidth. 
In fiber-optic communications, practical limits such as dispersion often constrain channels to bandwidths of tens of GHz, only slightly better than many silicon transistors. Obtaining dramatically faster operation than electronic transistors would therefore require practical methods of transmitting ultrashort pulses down highly dispersive waveguides."
+907,Photonic logic is the use of photons in logic gates. Switching is obtained using nonlinear optical effects when two or more signals are combined.
+908,"Resonators are especially useful in photonic logic, since they allow a build-up of energy from constructive interference, thus enhancing optical nonlinear effects."
+909,"Other approaches that have been investigated include photonic logic at a molecular level, using photoluminescent chemicals. In a demonstration, Witlicki et al. performed logical operations using molecules and SERS."
+910,"The basic idea is to delay light in order to perform useful computations. Of particular interest is solving NP-complete problems, as those are difficult for conventional computers."
+911,There are two basic properties of light that are actually used in this approach:
+912,"When solving a problem with time delays, the following steps must be followed:"
+913,The first problem attacked in this way was the Hamiltonian path problem.
+914,"The simplest one is the subset sum problem. An optical device has been built that solves an instance with four numbers {a1, a2, a3, a4}."
+915,"The travelling salesman problem has been solved by Shaked et al. using an optical approach. All possible TSP paths have been generated and stored in a binary matrix, which was multiplied with another gray-scale vector containing the distances between cities. The multiplication is performed optically by using an optical correlator."
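The time-delay approach to the subset sum problem described above can be simulated conventionally. In the sketch below (an illustrative Python model, not the optical apparatus itself), each number in the instance becomes a delay element: at each element the signal is split, one copy passing through undelayed and the other delayed by that number, so pulses arrive at the detector at times equal to every possible subset sum. The instance is solved by checking whether any pulse arrives at the target time.

```python
def subset_sum_delays(numbers):
    """Arrival times of all pulses after a cascade of optical delay elements.

    Each element splits the signal: one copy is undelayed, the other is
    delayed by the element's number, so the set of arrival times is
    exactly the set of subset sums."""
    arrivals = {0}  # a single pulse enters at time 0
    for n in numbers:
        arrivals |= {t + n for t in arrivals}
    return arrivals

# An instance with four numbers, as in the device described above.
times = subset_sum_delays([3, 5, 7, 11])
print(sorted(times))
print(15 in times)  # a pulse arriving at time 15 means some subset sums to 15
```

Note that the number of distinct arrival times can grow as 2^n, which is one way of seeing that the optical approach trades time for an exponential spread of signals rather than escaping the problem's hardness.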
+916,"Many computations, particularly in scientific applications, require frequent use of the 2D discrete Fourier transform – for example in solving differential equations describing propagation of waves or transfer of heat. Though modern GPU technologies typically enable high-speed computation of large 2D DFTs, techniques have been developed that can perform continuous Fourier transform optically by utilising the natural Fourier transforming property of lenses. The input is encoded using a liquid crystal spatial light modulator and the result is measured using a conventional CMOS or CCD image sensor. Such optical architectures can offer superior scaling of computational complexity due to the inherently highly interconnected nature of optical propagation, and have been used to solve 2D heat equations." +917,Physical computers whose design was inspired by the theoretical Ising model are called Ising machines. +918,"Yoshihisa Yamamoto's lab at Stanford pioneered building Ising machines using photons. Initially Yamamoto and his colleagues built an Ising machine using lasers, mirrors, and other optical components commonly found on an optical table." +919,"Later a team at Hewlett Packard Labs developed photonic chip design tools and used them to build an Ising machine on a single chip, integrating 1,052 optical components on that single chip." +920,"Some additional companies involved with optical computing development include IBM, Microsoft, Procyon Photonics, Lightelligence, Lightmatter, Optalysys, Xanadu Quantum Technologies, ORCA Computing, PsiQuantum, Quandela , and TundraSystems Global." +921,Media related to Optical computing at Wikimedia Commons +922,"A hard disk drive , hard disk, hard drive, or fixed disk, is an electro-mechanical data storage device that stores and retrieves digital data using magnetic storage with one or more rigid rapidly rotating platters coated with magnetic material. 
The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored and retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data when powered off. Modern HDDs are typically in the form of a small rectangular box." +923,"Hard disk drives were introduced by IBM in 1956, and were the dominant secondary storage device for general-purpose computers beginning in the early 1960s. HDDs maintained this position into the modern era of servers and personal computers, though personal computing devices produced in large volume, like mobile phones and tablets, rely on flash memory storage devices. More than 224 companies have produced HDDs historically, though after extensive industry consolidation, most units are manufactured by Seagate, Toshiba, and Western Digital. HDDs dominate the volume of storage produced for servers. Though production is growing slowly , sales revenues and unit shipments are declining, because solid-state drives have higher data-transfer rates, higher areal storage density, somewhat better reliability, and much lower latency and access times." +924,"The revenues for SSDs, most of which use NAND flash memory, slightly exceeded those for HDDs in 2018. Flash storage products had more than twice the revenue of hard disk drives as of 2017. Though SSDs have four to nine times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, high capacity and durability are important. As of 2019, the cost per bit of SSDs is falling, and the price premium over HDDs has narrowed." +925,"The primary characteristics of an HDD are its capacity and performance. 
Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte drive has a capacity of 1,000 gigabytes, where 1 gigabyte = 1,000 megabytes = 1,000,000 kilobytes = 1,000,000,000 bytes. Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. There can be confusion regarding storage capacity, since capacities are stated in decimal gigabytes by HDD manufacturers, whereas the most commonly used operating systems report capacities in powers of 1024, which results in a smaller number than advertised. Performance is specified as the time required to move the heads to a track or cylinder (seek time), the time it takes for the desired sector to move under the head (rotational latency), and finally, the speed at which the data is transmitted (data rate)."
+926,"The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as SATA, USB, SAS, or PATA cables."
+927,"The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was approximately the size of two large refrigerators and stored five million six-bit characters on a stack of 52 disks. The 350 had a single arm with two read/write heads, one facing up and the other down, that moved both horizontally between a pair of adjacent platters and vertically from one pair of platters to a second set. Variants of the IBM 350 were the IBM 355, IBM 7300 and IBM 1405."
+928,"Over time, as recording densities were greatly increased, further reductions in disk diameter to 3.5"" and 2.5"" were found to be optimum. Powerful rare earth magnet materials became affordable during this period, and were complementary to the swing arm actuator design to make possible the compact form factors of modern HDDs."
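The decimal-versus-binary discrepancy described above is easy to reproduce with a short calculation. The Python snippet below shows why a drive advertised as 1 TB (10^12 bytes) is reported as roughly 931 "gigabytes" by an operating system that divides by powers of 1024:

```python
advertised_bytes = 1_000_000_000_000  # 1 TB as stated by the manufacturer (10^12 bytes)

decimal_gb = advertised_bytes / 1000**3   # manufacturer's decimal gigabytes
binary_gib = advertised_bytes / 1024**3   # what many operating systems report

print(f"{decimal_gb:.0f} GB advertised")   # 1000 GB
print(f"{binary_gib:.1f} GiB reported")    # about 931.3 GiB
```

The 7% gap is purely a difference in units, before any space consumed by the file system or reserved for error correction is subtracted.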
+929,"As the 1980s began, HDDs were a rare and very expensive additional feature in PCs, but by the late 1980s, their cost had been reduced to the point where they were standard on all but the cheapest computers." +930,"Most HDDs in the early 1980s were sold to PC end users as an external, add-on subsystem. The subsystem was not sold under the drive manufacturer's name but under the subsystem manufacturer's name such as Corvus Systems and Tallgrass Technologies, or under the PC system manufacturer's name such as the Apple ProFile. The IBM PC/XT in 1983 included an internal 10 MB HDD, and soon thereafter, internal HDDs proliferated on personal computers." +931,"External HDDs remained popular for much longer on the Apple Macintosh. Many Macintosh computers made between 1986 and 1998 featured a SCSI port on the back, making external expansion simple. Older compact Macintosh computers did not have user-accessible hard drive bays , so on those models, external SCSI disks were the only reasonable option for expanding upon any internal storage." +932,"HDD improvements have been driven by increasing areal density, listed in the table above. Applications expanded through the 2000s, from the mainframe computers of the late 1950s to most mass storage applications including computers and consumer applications such as storage of entertainment content." +933,"In the 2000s and 2010s, NAND began supplanting HDDs in applications requiring portability or high performance. NAND performance is improving faster than HDDs, and applications for HDDs are eroding. In 2018, the largest hard drive had a capacity of 15 TB, while the largest capacity SSD had a capacity of 100 TB. As of 2018, HDDs were forecast to reach 100 TB capacities around 2025, but as of 2019, the expected pace of improvement was pared back to 50 TB by 2026. Smaller form factors, 1.8-inches and below, were discontinued around 2010. 
The cost of solid-state storage , represented by Moore's law, is improving faster than HDDs. NAND has a higher price elasticity of demand than HDDs, and this drives market growth. During the late 2000s and 2010s, the product life cycle of HDDs entered a mature phase, and slowing sales may indicate the onset of the declining phase." +934,The 2011 Thailand floods damaged the manufacturing plants and impacted hard disk drive cost adversely between 2011 and 2013. +935,"In 2019, Western Digital closed its last Malaysian HDD factory due to decreasing demand, to focus on SSD production. All three remaining HDD manufacturers have had decreasing demand for their HDDs since 2014." +936,"A modern HDD records data by magnetizing a thin film of ferromagnetic material on both sides of a disk. Sequential changes in the direction of magnetization represent binary data bits. The data is read from the disk by detecting the transitions in magnetization. User data is encoded using an encoding scheme, such as run-length limited encoding, which determines how the data is represented by the magnetic transitions." +937,"A typical HDD design consists of a spindle that holds flat circular disks, called platters, which hold the recorded data. The platters are made from a non-magnetic material, usually aluminum alloy, glass, or ceramic. They are coated with a shallow layer of magnetic material typically 10–20 nm in depth, with an outer layer of carbon for protection. For reference, a standard piece of copy paper is 0.07–0.18 mm thick." +938,"The platters in contemporary HDDs are spun at speeds varying from 4,200 RPM in energy-efficient portable devices, to 15,000 rpm for high-performance servers. The first HDDs spun at 1,200 rpm and, for many years, 3,600 rpm was the norm. As of November 2019, the platters in most consumer-grade HDDs spin at 5,400 or 7,200 RPM." 
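Spindle speed translates directly into rotational latency: on average, the desired sector is half a revolution away from the head when a request arrives. A quick calculation (illustrative Python) for the speeds mentioned above:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, converted to milliseconds."""
    seconds_per_rev = 60.0 / rpm
    return seconds_per_rev / 2 * 1000

for rpm in (4200, 5400, 7200, 15000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms average latency")
```

A 15,000 rpm server drive thus averages 2 ms of rotational latency versus about 4.17 ms at 7,200 RPM, which is one reason high-performance drives spin faster despite the extra power and noise.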
+939,"Information is written to and read from a platter as it rotates past devices called read-and-write heads that are positioned to operate very close to the magnetic surface, with their flying height often in the range of tens of nanometers. The read-and-write head is used to detect and modify the magnetization of the material passing immediately under it."
+940,"In modern drives, there is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm moves the heads on an arc across the platters as they spin, allowing each head to access almost the entire surface of the platter. The arm is moved using a voice coil actuator or, in some older designs, a stepper motor. Early hard disk drives wrote data at a constant number of bits per second, resulting in all tracks having the same amount of data, but modern drives use zone bit recording, increasing the write speed from inner to outer zone and thereby storing more data per track in the outer zones."
+941,"In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects: thermally induced magnetic instability, commonly known as the ""superparamagnetic limit"". To counter this, the platters are coated with two parallel magnetic layers, separated by a three-atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects to allow greater recording densities is perpendicular recording, first shipped in 2005, and as of 2007 used in certain HDDs. Perpendicular recording may be accompanied by changes in the manufacturing of the read/write heads to increase the strength of the magnetic field created by the heads."
+942,"In 2004, a higher-density recording medium was introduced, consisting of coupled soft and hard magnetic layers.
So-called exchange spring media magnetic storage technology, also known as exchange coupled composite media, allows good writability due to the write-assist nature of the soft layer. However, the thermal stability is determined only by the hardest layer and not influenced by the soft layer." +943,"The voice coil itself is shaped rather like an arrowhead and is made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead then interact with the magnetic field of the fixed magnet. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore, the surface of the magnet is half north pole and half south pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head." +944,The HDD's electronics controls the movement of the actuator and the rotation of the disk and transfers data to/from a disk controller. Feedback of the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. These are either complete concentric circles or segments interspersed with real data . The servo feedback optimizes the signal-to-noise ratio of the GMR sensors by adjusting the voice coil motor to rotate the arm. A more modern servo system also employs milli and/or micro actuators to more accurately position the read/write heads. The spinning of the disks uses fluid-bearing spindle motors. 
Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media that have failed.
+945,"Modern drives make extensive use of error correction codes, particularly Reed–Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage capacity. For example, a typical 1 TB hard disk with 512-byte sectors provides additional capacity of about 93 GB for the ECC data."
+946,"In the newest drives, as of 2009, low-density parity-check codes were supplanting Reed–Solomon; LDPC codes enable performance close to the Shannon limit and thus provide the highest storage density available."
+947,"Typical hard disk drives attempt to ""remap"" the data in a physical sector that is failing to a spare physical sector provided by the drive's ""spare sector pool"", relying on the ECC to recover stored data while the number of errors in a bad sector is still low enough. The S.M.A.R.T. feature counts the total number of errors in the entire HDD fixed by ECC, and the total number of performed sector remappings, as the occurrence of many such errors may predict an HDD failure."
+948,"The ""No-ID Format"", developed by IBM in the mid-1990s, contains information about which sectors are bad and where remapped sectors have been located."
+949,Only a tiny fraction of the detected errors end up as uncorrectable. Examples of specified uncorrected bit read error rates include:
+950,"Within a given manufacturer's model, the uncorrected bit error rate is typically the same regardless of the capacity of the drive."
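The ECC overhead figure quoted above can be sanity-checked with simple arithmetic. Assuming the drive exposes 10^12 bytes of user data in 512-byte sectors (the per-sector ECC size below is derived from the quoted total and is an illustrative assumption, as the exact figure varies by drive):

```python
user_bytes = 1_000_000_000_000          # 1 TB of user data
sector_size = 512                       # bytes of user data per sector
ecc_total = 93_000_000_000              # about 93 GB of ECC, per the figure above

sectors = user_bytes // sector_size     # number of 512-byte user sectors
ecc_per_sector = ecc_total / sectors    # ECC bytes accompanying each sector

print(f"{sectors:,} sectors")                              # 1,953,125,000 sectors
print(f"{ecc_per_sector:.1f} ECC bytes per 512-byte sector")
```

That works out to roughly 48 ECC bytes per 512-byte sector, i.e. around 9% overhead hidden from the user in exchange for reliable operation at high recording densities.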
+951,"The worst type of error is silent data corruption: errors undetected by the disk firmware or the host operating system. Some of these errors may be caused by hard disk drive malfunctions, while others originate elsewhere in the connection between the drive and the host."
+952,The World Wide Web is an information system that enables content sharing over the Internet in user-friendly ways meant to appeal to users beyond IT specialists and hobbyists. It allows documents and other web resources to be accessed over the Internet according to specific rules of the Hypertext Transfer Protocol.
+953,"The Web was invented by English computer scientist Tim Berners-Lee while at CERN in 1989 and opened to the public in 1991. It was conceived as a ""universal linked information system"". Documents and other media content are made available to the network through web servers and can be accessed by programs such as web browsers. Servers and resources on the World Wide Web are identified and located through character strings called uniform resource locators."
+954,"The original and still very common document type is a web page formatted in Hypertext Markup Language. This markup language supports plain text, images, embedded video and audio contents, and scripts that implement complex user interaction. The HTML language also supports hyperlinks, which provide immediate access to other web resources. Web navigation, or web surfing, is the common practice of following such hyperlinks across multiple websites. Web applications are web pages that function as application software. The information in the Web is transferred across the Internet using HTTP. Multiple web resources with a common theme and usually a common domain name make up a website. A single web server may provide multiple websites, while some websites, especially the most popular ones, may be provided by multiple servers.
Website content is provided by a myriad of companies, organizations, government agencies, and individual users; and comprises an enormous amount of educational, entertainment, commercial, and government information." +955,The Web has become the world's dominant information systems platform. It is the primary tool that billions of people worldwide use to interact with the Internet. +956,"The Web was invented by English computer scientist Tim Berners-Lee while working at CERN. He was motivated by the problem of storing, updating, and finding documents and data files in that large and constantly changing organization, as well as distributing them to collaborators outside CERN. In his design, Berners-Lee dismissed the common tree structure approach, used for instance in the existing CERNDOC documentation system and in the Unix filesystem, as well as approaches that relied in tagging files with keywords, as in the VAX/NOTES system. Instead he adopted concepts he had put into practice with his private ENQUIRE system built at CERN. When he became aware of Ted Nelson's hypertext model , in which documents can be linked in unconstrained ways through hyperlinks associated with ""hot spots"" embedded in the text, it helped to confirm the validity of his concept." +957,"The model was later popularized by Apple's HyperCard system. Unlike Hypercard, Berners-Lee's new system from the outset was meant to support links between multiple databases on independent computers, and to allow simultaneous access by many users from any computer on the Internet. He also specified that the system should eventually handle other media besides text, such as graphics, speech, and video. Links could refer to mutable data files, or even fire up programs on their server computer. He also conceived ""gateways"" that would allow access through the new system to documents organized in other ways . 
Finally, he insisted that the system should be decentralized, without any central control or coordination over the creation of links." +958,"Berners-Lee submitted a proposal to CERN in May 1989, without giving the system a name. He got a working system implemented by the end of 1990, including a browser called WorldWideWeb and an HTTP server running at CERN. As part of that development he defined the first version of the HTTP protocol, the basic URL syntax, and implicitly made HTML the primary document format. The technology was released outside CERN to other research institutions starting in January 1991, and then to the whole Internet on 23 August 1991. The Web was a success at CERN, and began to spread to other scientific and academic institutions. Within the next two years, there were 50 websites created." +959,"CERN made the Web protocol and code available royalty free in 1993, enabling its widespread use. After the NCSA released the Mosaic web browser later that year, the Web's popularity grew rapidly as thousands of websites sprang up in less than a year. Mosaic was a graphical browser that could display inline images and submit forms that were processed by the HTTPd server. Marc Andreessen and Jim Clark founded Netscape the following year and released the Navigator browser, which introduced Java and JavaScript to the Web. It quickly became the dominant browser. Netscape became a public company in 1995 which triggered a frenzy for the Web and started the dot-com bubble. Microsoft responded by developing its own browser, Internet Explorer, starting the browser wars. By bundling it with Windows, it became the dominant browser for 14 years." +960,"Berners-Lee founded the World Wide Web Consortium which created XML in 1996 and recommended replacing HTML with stricter XHTML. In the meantime, developers began exploiting an IE feature called XMLHttpRequest to make Ajax applications and launched the Web 2.0 revolution. 
Mozilla, Opera, and Apple rejected XHTML and created the WHATWG which developed HTML5. In 2009, the W3C conceded and abandoned XHTML. In 2019, it ceded control of the HTML specification to the WHATWG." +961,The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet. +962,"Tim Berners-Lee states that World Wide Web is officially spelled as three separate words, each capitalised, with no intervening hyphens. Nonetheless, it is often called simply the Web, and also often the web; see Capitalization of Internet for details. In Mandarin Chinese, World Wide Web is commonly translated via a phono-semantic matching to wàn wéi wǎng , which satisfies www and literally means ""10,000-dimensional net"", a translation that reflects the design concept and proliferation of the World Wide Web." +963,"Use of the www prefix has been declining, especially when web applications sought to brand their domain names and make them easily pronounceable. As the mobile Web grew in popularity, services like Gmail.com, Outlook.com, Myspace.com, Facebook.com and Twitter.com are most often mentioned without adding ""www."" to the domain." +964,"In English, www is usually read as double-u double-u double-u. Some users pronounce it dub-dub-dub, particularly in New Zealand. Stephen Fry, in his ""Podgrams"" series of podcasts, pronounces it wuh wuh wuh. The English writer Douglas Adams once quipped in The Independent on Sunday : ""The World Wide Web is the only thing I know of whose shortened form takes three times longer to say than what it's short for""." +965,"The terms Internet and World Wide Web are often used without much distinction. However, the two terms do not mean the same thing. The Internet is a global system of computer networks interconnected through telecommunications and optical networking. 
In contrast, the World Wide Web is a global collection of documents and other resources, linked by hyperlinks and URIs. Web resources are accessed using HTTP or HTTPS, which are application-level Internet protocols that use the Internet's transport protocols." +966,"Viewing a web page on the World Wide Web normally begins either by typing the URL of the page into a web browser or by following a hyperlink to that page or resource. The web browser then initiates a series of background communication messages to fetch and display the requested page. In the 1990s, using a browser to view web pages—and to move from one web page to another through hyperlinks—came to be known as 'browsing,' 'web surfing' , or 'navigating the Web'. Early studies of this new behavior investigated user patterns in using web browsers. One study, for example, found five user patterns: exploratory surfing, window surfing, evolved surfing, bounded navigation and targeted navigation." +967,The following example demonstrates the functioning of a web browser when accessing a page at the URL http://example.org/home.html. The browser resolves the server name of the URL into an Internet Protocol address using the globally distributed Domain Name System . This lookup returns an IP address such as 203.0.113.4 or 2001:db8:2e::7334. The browser then requests the resource by sending an HTTP request across the Internet to the computer at that address. It requests service from a specific TCP port number that is well known for the HTTP service so that the receiving host can distinguish an HTTP request from other network protocols it may be servicing. HTTP normally uses port number 80 and for HTTPS it normally uses port number 443. The content of the HTTP request can be as simple as two lines of text: +968,The computer receiving the HTTP request delivers it to web server software listening for requests on port 80. 
If the webserver can fulfil the request it sends an HTTP response back to the browser indicating success: +969,followed by the content of the requested page. Hypertext Markup Language for a basic web page might look like this: +970,"The web browser parses the HTML and interprets the markup that surrounds the words to format the text on the screen. Many web pages use HTML to reference the URLs of other resources such as images, other embedded media, scripts that affect page behaviour, and Cascading Style Sheets that affect page layout. The browser makes additional HTTP requests to the web server for these other Internet media types. As it receives their content from the web server, the browser progressively renders the page onto the screen as specified by its HTML and these additional resources." +971,"Hypertext Markup Language is the standard markup language for creating web pages and web applications. With Cascading Style Sheets and JavaScript, it forms a triad of cornerstone technologies for the World Wide Web." +972,Web browsers receive HTML documents from a web server or from local storage and render the documents into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document. +973,"HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects such as interactive forms may be embedded into the rendered page. HTML provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets. Tags such as and directly introduce content into the page. Other tags such as
<p> surround and provide information about document text and may include other tags as sub-elements. Browsers do not display the HTML tags, but use them to interpret the content of the page." +974,"HTML can embed programs written in a scripting language such as JavaScript, which affects the behavior and content of web pages. Inclusion of CSS defines the look and layout of content. The World Wide Web Consortium, maintainer of both the HTML and the CSS standards, has encouraged the use of CSS over explicit presentational HTML since 1997." +975,"Most web pages contain hyperlinks to other related pages and perhaps to downloadable files, source documents, definitions and other web resources. In the underlying HTML, a hyperlink looks like this: <a href=""http://example.org/home.html"">Example.org Homepage</a>." +976,"Such a collection of useful, related resources, interconnected via hypertext links, is dubbed a web of information. Publication on the Internet created what Tim Berners-Lee first called the WorldWideWeb in November 1990." +977,"The hyperlink structure of the web is described by the web graph: the nodes of the web graph correspond to the web pages, and the directed edges between them to the hyperlinks. Over time, many web resources pointed to by hyperlinks disappear, relocate, or are replaced with different content. This makes hyperlinks obsolete, a phenomenon referred to in some circles as link rot, and the hyperlinks affected by it are often called ""dead"" links. The ephemeral nature of the Web has prompted many efforts to archive websites. The Internet Archive, active since 1996, is the best known of such efforts." +978,"Many hostnames used for the World Wide Web begin with www because of the long-standing practice of naming Internet hosts according to the services they provide. The hostname of a web server is often www, in the same way that it may be ftp for an FTP server, and news or nntp for a Usenet news server. These hostnames appear as Domain Name System or subdomain names, as in www.example.com. 
The use of www is not required by any technical or policy standard and many web sites do not use it; the first web server was nxoc01.cern.ch. According to Paolo Palazzi, who worked at CERN along with Tim Berners-Lee, the popular use of www as subdomain was accidental; the World Wide Web project page was intended to be published at www.cern.ch while info.cern.ch was intended to be the CERN home page; however the DNS records were never switched, and the practice of prepending www to an institution's website domain name was subsequently copied. Many established websites still use the prefix, or they employ other subdomain names such as www2, secure or en for special purposes. Many such web servers are set up so that both the main domain name and the www subdomain refer to the same site; others require one form or the other, or they may map to different web sites. The use of a subdomain name is useful for load balancing incoming web traffic by creating a CNAME record that points to a cluster of web servers. Since, currently, only a subdomain can be used in a CNAME, the same result cannot be achieved by using the bare domain root." +979,"When a user submits an incomplete domain name to a web browser in its address bar input field, some web browsers automatically try adding the prefix ""www"" to the beginning of it and possibly "".com"", "".org"" and "".net"" at the end, depending on what might be missing. For example, entering ""microsoft"" may be transformed to http://www.microsoft.com/ and ""openoffice"" to http://www.openoffice.org. This feature started appearing in early versions of Firefox, when it still had the working title 'Firebird' in early 2003, from an earlier practice in browsers such as Lynx. It is reported that Microsoft was granted a US patent for the same idea in 2008, but only for mobile devices." +980,"The scheme specifiers http:// and https:// at the start of a web URI refer to Hypertext Transfer Protocol or HTTP Secure, respectively. 
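The role of the scheme specifier can be seen with Python's standard urllib.parse module; a generic illustration, with a hypothetical URL:

```python
from urllib.parse import urlsplit

# The scheme tells the browser which protocol to speak;
# the remainder of the URL names the host and the resource path.
parts = urlsplit("https://www.example.com/home.html")
print(parts.scheme)  # https
print(parts.netloc)  # www.example.com
print(parts.path)    # /home.html
```

A browser performs essentially this split before deciding whether to open a plain TCP connection (http) or a TLS-encrypted one (https).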
They specify the communication protocol to use for the request and response. The HTTP protocol is fundamental to the operation of the World Wide Web, and the added encryption layer in HTTPS is essential when browsers send or retrieve confidential data, such as passwords or banking information. Web browsers usually automatically prepend http:// to user-entered URIs, if omitted." +981,A web page is a document that is suitable for the World Wide Web and web browsers. A web browser displays a web page on a monitor or mobile device. +982,"The term web page usually refers to what is visible, but may also refer to the contents of the computer file itself, which is usually a text file containing hypertext written in HTML or a comparable markup language. Typical web pages provide hypertext for browsing to other web pages via hyperlinks, often referred to as links. Web browsers will frequently have to access multiple web resource elements, such as reading style sheets, scripts, and images, while presenting each web page." +983,"On a network, a web browser can retrieve a web page from a remote web server. The web server may restrict access to a private network such as a corporate intranet. The web browser uses the Hypertext Transfer Protocol to make such requests to the web server." +984,"A static web page is delivered exactly as stored, as web content in the web server's file system. In contrast, a dynamic web page is generated by a web application, usually driven by server-side software. Dynamic web pages are used when each user may require completely different information, for example, bank websites, web email etc." +985,"A static web page is a web page that is delivered to the user exactly as stored, in contrast to dynamic web pages which are generated by a web application." 
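The static/dynamic contrast can be sketched with two toy handler functions; the page content, user names, and fixed date below are hypothetical, not from the source:

```python
import datetime

STORED_PAGE = "<html><body>Hello</body></html>"  # a file on disk, served verbatim

def serve_static(_request_user: str) -> str:
    # A static page: every user receives the same stored bytes.
    return STORED_PAGE

def serve_dynamic(request_user: str) -> str:
    # A dynamic page: the application assembles a fresh document per request.
    today = datetime.date(2024, 1, 1)  # fixed here so the sketch is deterministic
    return f"<html><body>Hello {request_user}, it is {today}</body></html>"

assert serve_static("alice") == serve_static("bob")    # identical for everyone
assert serve_dynamic("alice") != serve_dynamic("bob")  # varies per request
```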
+986,"Consequently, a static web page displays the same information for all users, from all contexts, subject to modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so." +987,"A server-side dynamic web page is a web page whose construction is controlled by an application server processing server-side scripts. In server-side scripting, parameters determine how the assembly of every new web page proceeds, including the setting up of more client-side processing." +988,"A client-side dynamic web page processes the web page using JavaScript running in the browser. JavaScript programs can interact with the document via the Document Object Model, or DOM, to query page state and alter it. The same client-side techniques can then dynamically update or change the DOM." +989,"A dynamic web page is then reloaded by the user or by a computer program to change some variable content. The updating information could come from the server, or from changes made to that page's DOM. This may or may not truncate the browsing history or create a saved version to go back to, but a dynamic web page update using Ajax technologies will neither create a page to go back to nor truncate the web browsing history forward of the displayed page. Using Ajax technologies, the end user gets one dynamic page managed as a single page in the web browser while the actual web content rendered on that page can vary. The Ajax engine sits in the browser, requesting parts of the DOM for its client from an application server." +990,"Dynamic HTML, or DHTML, is the umbrella term for technologies and methods used to create web pages that are not static web pages, though it has fallen out of common use since the popularization of AJAX, a term which is now itself rarely used. 
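The Ajax pattern described above, in which the server returns only a changed fragment rather than a whole new page, can be sketched server-side; the endpoint name and data here are hypothetical:

```python
import json

# Server-side state the page displays; in a real application
# this would live in a database rather than a module-level list.
messages = ["first post", "second post"]

def ajax_endpoint(already_have: int) -> str:
    # Return only the items the client has not yet seen, as a small
    # JSON fragment, instead of re-rendering the entire page.
    return json.dumps({"new_items": messages[already_have:]})

# The browser-side script would merge this fragment into the existing DOM.
print(ajax_endpoint(1))  # {"new_items": ["second post"]}
```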
Client-side-scripting, server-side scripting, or a combination of these make for the dynamic web experience in a browser." +991,"JavaScript is a scripting language that was initially developed in 1995 by Brendan Eich, then of Netscape, for use within web pages. The standardised version is ECMAScript. To make web pages more interactive, some web applications also use JavaScript techniques such as Ajax . Client-side script is delivered with the page that can make additional HTTP requests to the server, either in response to user actions such as mouse movements or clicks, or based on elapsed time. The server's responses are used to modify the current page rather than creating a new page with each response, so the server needs only to provide limited, incremental information. Multiple Ajax requests can be handled at the same time, and users can interact with the page while data is retrieved. Web pages may also regularly poll the server to check whether new information is available." +992,"A website is a collection of related web resources including web pages, multimedia content, typically identified with a common domain name, and published on at least one web server. Notable examples are wikipedia.org, google.com, and amazon.com." +993,"A website may be accessible via a public Internet Protocol network, such as the Internet, or a private local area network , by referencing a uniform resource locator that identifies the site." +994,"Websites can have many functions and can be used in various fashions; a website can be a personal website, a corporate website for a company, a government website, an organization website, etc. Websites are typically dedicated to a particular topic or purpose, ranging from entertainment and social networking to providing news and education. All publicly accessible websites collectively constitute the World Wide Web, while private websites, such as a company's website for its employees, are typically a part of an intranet." 
+995,"Web pages, which are the building blocks of websites, are documents, typically composed in plain text interspersed with formatting instructions of Hypertext Markup Language . They may incorporate elements from other websites with suitable markup anchors. Web pages are accessed and transported with the Hypertext Transfer Protocol , which may optionally employ encryption to provide security and privacy for the user. The user's application, often a web browser, renders the page content according to its HTML markup instructions onto a display terminal." +996,"Hyperlinking between web pages conveys to the reader the site structure and guides the navigation of the site, which often starts with a home page containing a directory of the site web content. Some websites require user registration or subscription to access content. Examples of subscription websites include many business sites, news websites, academic journal websites, gaming websites, file-sharing websites, message boards, web-based email, social networking websites, websites providing real-time price quotations for different types of markets, as well as sites providing various other services. End users can access websites on a range of devices, including desktop and laptop computers, tablet computers, smartphones and smart TVs." +997,"A web browser is a software user agent for accessing information on the World Wide Web. To connect to a website's server and display its pages, a user needs to have a web browser program. This is the program that the user runs to download, format, and display a web page on the user's computer." +998,"In addition to allowing users to find, display, and move between web pages, a web browser will usually have features like keeping bookmarks, recording history, managing cookies , and home pages and may have facilities for recording passwords for logging into web sites." +999,"The most popular browsers are Chrome, Firefox, Safari, Internet Explorer, and Edge." 
+1000,"A Web server is server software, or hardware dedicated to running said software, that can satisfy World Wide Web client requests. A web server can, in general, contain one or more websites. A web server processes incoming network requests over HTTP and several other related protocols." +1001,"The primary function of a web server is to store, process and deliver web pages to clients. The communication between client and server takes place using the Hypertext Transfer Protocol . Pages delivered are most frequently HTML documents, which may include images, style sheets and scripts in addition to the text content." +1002,"A user agent, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server's secondary storage, but this is not necessarily the case and depends on how the webserver is implemented." +1003,"While the primary function is to serve content, full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files." +1004,"Many generic web servers also support server-side scripting using Active Server Pages , PHP , or other scripting languages. This means that the behavior of the webserver can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to generate HTML documents dynamically as opposed to returning static documents. The former is primarily used for retrieving or modifying information from databases. The latter is typically much faster and more easily cached but cannot deliver dynamic content." +1005,"Web servers can also frequently be found embedded in devices such as printers, routers, webcams and serving only a local network. 
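However a web server is deployed, its core behaviour described above, looking up the requested resource and returning it or an error, can be sketched as a simple mapping; the paths and documents below are hypothetical:

```python
# Hypothetical mapping from request paths to stored documents.
SITE = {"/home.html": "<html><body>home</body></html>"}

def handle_request(path: str) -> tuple[int, str]:
    # Return (status code, body): 200 with the resource if it exists,
    # otherwise a 404 error document, mirroring the behaviour in the text.
    if path in SITE:
        return 200, SITE[path]
    return 404, "<html><body>404 Not Found</body></html>"

status, body = handle_request("/home.html")
print(status)  # 200
print(handle_request("/missing.html")[0])  # 404
```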
The web server may then be used as a part of a system for monitoring or administering the device in question. This usually means that no additional software has to be installed on the client computer since only a web browser is required ." +1006,"An HTTP cookie is a small piece of data sent from a website and stored on the user's computer by the user's web browser while the user is browsing. Cookies were designed to be a reliable mechanism for websites to remember stateful information or to record the user's browsing activity . They can also be used to remember arbitrary pieces of information that the user previously entered into form fields such as names, addresses, passwords, and credit card numbers." +1007,"Cookies perform essential functions in the modern web. Perhaps most importantly, authentication cookies are the most common method used by web servers to know whether the user is logged in or not, and which account they are logged in with. Without such a mechanism, the site would not know whether to send a page containing sensitive information or require the user to authenticate themselves by logging in. The security of an authentication cookie generally depends on the security of the issuing website and the user's web browser, and on whether the cookie data is encrypted. Security vulnerabilities may allow a cookie's data to be read by a hacker, used to gain access to user data, or used to gain access to the website to which the cookie belongs ." +1008,"Tracking cookies, and especially third-party tracking cookies, are commonly used as ways to compile long-term records of individuals' browsing histories – a potential privacy concern that prompted European and U.S. lawmakers to take action in 2011. European law requires that all websites targeting European Union member states gain ""informed consent"" from users before storing non-essential cookies on their device." 
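The cookie mechanics described above can be illustrated with Python's standard http.cookies module; the cookie name and value are hypothetical:

```python
from http.cookies import SimpleCookie

# Server side: issue a session cookie to be sent in the response headers.
response = SimpleCookie()
response["session_id"] = "abc123"
response["session_id"]["httponly"] = True  # keep it away from page scripts
print(response["session_id"].OutputString())

# Browser side: on later requests the cookie comes back in a Cookie header,
# which the server parses to recognise the logged-in user.
incoming = SimpleCookie("session_id=abc123")
print(incoming["session_id"].value)  # abc123
```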
+1009,"Google Project Zero researcher Jann Horn describes ways cookies can be read by intermediaries, like Wi-Fi hotspot providers. He recommends using the browser in incognito mode in such circumstances." +1010,"A web search engine or Internet search engine is a software system that is designed to carry out web search , which means to search the World Wide Web in a systematic way for particular information specified in a web search query. The search results are generally presented in a line of results, often referred to as search engine results pages . The information may be a mix of web pages, images, videos, infographics, articles, research papers, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler. Internet content that is not capable of being searched by a web search engine is generally described as the deep web." +1011,"The deep web, invisible web, or hidden web are parts of the World Wide Web whose contents are not indexed by standard web search engines. The opposite term to the deep web is the surface web, which is accessible to anyone using the Internet. Computer scientist Michael K. Bergman is credited with coining the term deep web in 2001 as a search indexing term." +1012,"The content of the deep web is hidden behind HTTP forms, and includes many very common uses such as web mail, online banking, and services that users must pay for, and which is protected by a paywall, such as video on demand, some online magazines and newspapers, among others." +1013,The content of the deep web can be located and accessed by a direct URL or IP address and may require a password or other security access past the public website page. 
+1014,"A web cache is a server computer located either on the public Internet or within an enterprise that stores recently accessed web pages to improve response time for users when the same content is requested within a certain time after the original request. Most web browsers also implement a browser cache by writing recently obtained data to a local data storage device. HTTP requests by a browser may ask only for data that has changed since the last access. Web pages and resources may contain expiration information to control caching to secure sensitive data, such as in online banking, or to facilitate frequently updated sites, such as news media. Even sites with highly dynamic content may permit basic resources to be refreshed only occasionally. Web site designers find it worthwhile to collate resources such as CSS data and JavaScript into a few site-wide files so that they can be cached efficiently. Enterprise firewalls often cache Web resources requested by one user for the benefit of many users. Some search engines store cached content of frequently accessed websites." +1015,"For criminals, the Web has become a venue to spread malware and engage in a range of cybercrimes, including identity theft, fraud, espionage and intelligence gathering. Web-based vulnerabilities now outnumber traditional computer security concerns, and as measured by Google, about one in ten web pages may contain malicious code. Most web-based attacks take place on legitimate websites, and most, as measured by Sophos, are hosted in the United States, China and Russia. The most common of all malware threats is SQL injection attacks against websites. Through HTML and URIs, the Web was vulnerable to attacks like cross-site scripting that came with the introduction of JavaScript and were exacerbated to some degree by Web 2.0 and Ajax web design that favours the use of scripts. Today by one estimate, 70% of all websites are open to XSS attacks on their users. 
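As an aside on the SQL injection attacks mentioned above, the standard defence is the parameterized query; a minimal sqlite3 sketch, with a hypothetical table and payload:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT)")
db.execute("INSERT INTO users VALUES ('alice')")

hostile = "' OR '1'='1"  # a classic injection payload from a form field

# Unsafe: string formatting splices the payload into the SQL itself,
# so the WHERE clause becomes a tautology and matches every row.
unsafe = db.execute(f"SELECT name FROM users WHERE name = '{hostile}'").fetchall()

# Safe: the ? placeholder passes the payload as data, never as SQL.
safe = db.execute("SELECT name FROM users WHERE name = ?", (hostile,)).fetchall()

print(len(unsafe), len(safe))  # 1 0
```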
Phishing is another common threat to the Web. In February 2013, RSA estimated the global losses from phishing at $1.5 billion in 2012. Two of the well-known phishing methods are Covert Redirect and Open Redirect." +1016,"Proposed solutions vary. Large security companies like McAfee already design governance and compliance suites to meet post-9/11 regulations, and some, like Finjan, have recommended active real-time inspection of programming code and all content regardless of its source. Some have argued that enterprises should see Web security as a business opportunity rather than a cost centre, while others call for ""ubiquitous, always-on digital rights management"" enforced in the infrastructure to replace the hundreds of companies that secure data and networks. Jonathan Zittrain has said users sharing responsibility for computing safety is far preferable to locking down the Internet." +1017,"Every time a client requests a web page, the server can identify the request's IP address. Web servers usually log IP addresses in a log file. Also, unless set not to do so, most web browsers record requested web pages in a viewable history feature, and usually cache much of the content locally. Unless the server-browser communication uses HTTPS encryption, web requests and responses travel in plain text across the Internet and can be viewed, recorded, and cached by intermediate systems. Another way to hide personally identifiable information is by using a virtual private network. A VPN encrypts online traffic and masks the original IP address, lowering the chance of user identification." +1018,"When a web page asks for, and the user supplies, personally identifiable information—such as their real name, address, e-mail address, etc.—web-based entities can associate current web traffic with that individual. 
If the website uses HTTP cookies, username, and password authentication, or other tracking techniques, it can relate other web visits, before and after, to the identifiable information provided. In this way, a web-based organization can develop and build a profile of the individual people who use its site or sites. It may be able to build a record for an individual that includes information about their leisure activities, their shopping interests, their profession, and other aspects of their demographic profile. These profiles are of potential interest to marketers, advertisers, and others. Depending on the website's terms and conditions and the local laws that apply information from these profiles may be sold, shared, or passed to other organizations without the user being informed. For many ordinary people, this means little more than some unexpected e-mails in their in-box or some uncannily relevant advertising on a future web page. For others, it can mean that time spent indulging an unusual interest can result in a deluge of further targeted marketing that may be unwelcome. Law enforcement, counterterrorism, and espionage agencies can also identify, target, and track individuals based on their interests or proclivities on the Web." +1019,"Social networking sites usually try to get users to use their real names, interests, and locations, rather than pseudonyms, as their executives believe that this makes the social networking experience more engaging for users. On the other hand, uploaded photographs or unguarded statements can be identified to an individual, who may regret this exposure. Employers, schools, parents, and other relatives may be influenced by aspects of social networking profiles, such as text posts or digital photos, that the posting individual did not intend for these audiences. Online bullies may make use of personal information to harass or stalk users. 
Modern social networking websites allow fine-grained control of the privacy settings for each posting, but these can be complex and not easy to find or use, especially for beginners. Photographs and videos posted onto websites have caused particular problems, as they can add a person's face to an online profile. With modern and potential facial recognition technology, it may then be possible to relate that face with other, previously anonymous, images, events, and scenarios that have been imaged elsewhere. Due to image caching, mirroring, and copying, it is difficult to remove an image from the World Wide Web." +1020,"Web standards include many interdependent standards and specifications, some of which govern aspects of the Internet, not just the World Wide Web. Even when not web-focused, such standards directly or indirectly affect the development and administration of websites and web services. Considerations include the interoperability, accessibility and usability of web pages and web sites." +1021,"Web standards, in the broader sense, consist of the following:" +1022,Web standards are not fixed sets of rules but are constantly evolving sets of finalized technical specifications of web technologies. Web standards are developed by standards organizations—groups of interested and often competing parties chartered with the task of standardization—not technologies developed and declared to be a standard by a single individual or company. It is crucial to distinguish those specifications that are under development from the ones that already reached the final development status . +1023,"There are methods for accessing the Web in alternative mediums and formats to facilitate use by individuals with disabilities. These disabilities may be visual, auditory, physical, speech-related, cognitive, neurological, or some combination. Accessibility features also help people with temporary disabilities, like a broken arm, or ageing users as their abilities change. 
The Web is receiving information as well as providing information and interacting with society. The World Wide Web Consortium claims that it is essential that the Web be accessible, so it can provide equal access and equal opportunity to people with disabilities. Tim Berners-Lee once noted, ""The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect."" Many countries regulate web accessibility as a requirement for websites. International co-operation in the W3C Web Accessibility Initiative led to simple guidelines that web content authors as well as software developers can use to make the Web accessible to persons who may or may not be using assistive technology." +1024,"The W3C Internationalisation Activity assures that web technology works in all languages, scripts, and cultures. Beginning in 2004 or 2005, Unicode gained ground and eventually in December 2007 surpassed both ASCII and Western European as the Web's most frequently used character encoding. Originally RFC 3986 allowed resources to be identified by URI in a subset of US-ASCII. RFC 3987 allows more characters—any character in the Universal Character Set—and now a resource can be identified by IRI in any language." +1025,The Advanced Research Projects Agency Network was the first wide-area packet-switched network with distributed control and one of the first computer networks to implement the TCP/IP protocol suite. Both technologies became the technical foundation of the Internet. The ARPANET was established by the Advanced Research Projects Agency of the United States Department of Defense. +1026,"Building on the ideas of J. C. R. Licklider, Bob Taylor initiated the ARPANET project in 1966 to enable resource sharing between remote computers. Taylor appointed Larry Roberts as program manager. Roberts made the key decisions about the request for proposal to build the network. 
He incorporated Donald Davies' concepts and designs for packet switching, and sought input from Paul Baran. ARPA awarded the contract to build the network to Bolt Beranek & Newman. The design was led by Bob Kahn who developed the first protocol for the network. Roberts engaged Leonard Kleinrock at UCLA to develop mathematical methods for analyzing the packet network technology." +1027,"The first computers were connected in 1969 and the Network Control Protocol was implemented in 1970, development of which was led by Steve Crocker at UCLA and other graduate students, including Jon Postel and Vint Cerf. The network was declared operational in 1971. Further software development enabled remote login and file transfer, which was used to provide an early form of email. The network expanded rapidly and operational control passed to the Defense Communications Agency in 1975." +1028,"Bob Kahn moved to DARPA and, together with Vint Cerf at Stanford University, formulated the Transmission Control Program, which incorporated concepts from the French CYCLADES project directed by Louis Pouzin. As this work progressed, a protocol was developed by which multiple separate networks could be joined into a network of networks. Version 4 of TCP/IP was installed in the ARPANET for production use in January 1983 after the Department of Defense made it standard for all military computer networking." +1029,"Access to the ARPANET was expanded in 1981 when the National Science Foundation funded the Computer Science Network . In the early 1980s, the NSF funded the establishment of national supercomputing centers at several universities and provided network access and network interconnectivity with the NSFNET project in 1986. The ARPANET was formally decommissioned in 1990, after partnerships with the telecommunication and computer industry had assured private sector expansion and commercialization of an expanded worldwide network, known as the Internet." 
+1030,"Historically, voice and data communications were based on methods of circuit switching, as exemplified in the traditional telephone network, wherein each telephone call is allocated a dedicated end-to-end electronic connection between the two communicating stations. The connection is established by switching systems that connected multiple intermediate call legs between these systems for the duration of the call." +1031,"The traditional model of the circuit-switched telecommunication network was challenged in the early 1960s by Paul Baran at the RAND Corporation, who had been researching systems that could sustain operation during partial destruction, such as by nuclear war. He developed the theoretical model of distributed adaptive message block switching. However, the telecommunication establishment rejected the development in favor of existing models. Donald Davies at the United Kingdom's National Physical Laboratory independently arrived at a similar concept in 1965." +1032,"The earliest ideas for a computer network intended to allow general communications among computer users were formulated by computer scientist J. C. R. Licklider of Bolt Beranek and Newman , in April 1963, in memoranda discussing the concept of the ""Intergalactic Computer Network"". Those ideas encompassed many of the features of the contemporary Internet. In October 1963, Licklider was appointed head of the Behavioral Sciences and Command and Control programs at the Defense Department's Advanced Research Projects Agency . He convinced Ivan Sutherland and Bob Taylor that this network concept was very important and merited development, although Licklider left ARPA before any contracts were assigned for development." 
+1033,"Sutherland and Taylor continued their interest in creating the network, in part, to allow ARPA-sponsored researchers at various corporate and academic locales to utilize computers provided by ARPA, and, in part, to quickly distribute new software and other computer science results. Taylor had three computer terminals in his office, each connected to separate computers, which ARPA was funding: one for the System Development Corporation Q-32 in Santa Monica, one for Project Genie at the University of California, Berkeley, and another for Multics at the Massachusetts Institute of Technology. Taylor recalls the circumstance: ""For each of these three terminals, I had three different sets of user commands. So, if I was talking online with someone at S.D.C., and I wanted to talk to someone I knew at Berkeley, or M.I.T., about this, I had to get up from the S.D.C. terminal, go over and log into the other terminal and get in touch with them. I said, 'Oh Man!', it's obvious what to do: If you have these three terminals, there ought to be one terminal that goes anywhere you want to go. That idea is the ARPANET""." +1034,"Donald Davies' work caught the attention of ARPANET developers at Symposium on Operating Systems Principles in October 1967. He gave the first public presentation, having coined the term packet switching, in August 1968 and incorporated it into the NPL network in England. The NPL network and ARPANET were the first two networks in the world to use packet switching. Roberts said the packet switching networks built in the 1970s were similar ""in nearly all respects"" to Davies' original 1965 design." +1035,"In February 1966, Bob Taylor successfully lobbied ARPA's Director Charles M. Herzfeld to fund a network project. Herzfeld redirected funds in the amount of one million dollars from a ballistic missile defense program to Taylor's budget. 
Taylor hired Larry Roberts as a program manager in the ARPA Information Processing Techniques Office in January 1967 to work on the ARPANET. Roberts met Paul Baran in February 1967, but did not discuss networks."
+1036,"Roberts asked Frank Westervelt to explore the initial design questions for a network. In April 1967, ARPA held a design session on technical standards. The initial standards for identification and authentication of users, transmission of characters, and error checking and retransmission procedures were discussed. Roberts' proposal was that all mainframe computers would connect to one another directly. The other investigators were reluctant to dedicate these computing resources to network administration. Wesley Clark proposed that minicomputers should be used as an interface to create a message switching network. Roberts modified the ARPANET plan to incorporate Clark's suggestion and named the minicomputers Interface Message Processors (IMPs)."
+1037,"The plan was presented at the inaugural Symposium on Operating Systems Principles in October 1967. Donald Davies' work on packet switching and the NPL network, presented by a colleague, came to the attention of the ARPA investigators at this conference. Roberts applied Davies' concept of packet switching for the ARPANET, and sought input from Paul Baran. The NPL network was using line speeds of 768 kbit/s, and the proposed line speed for the ARPANET was upgraded from 2.4 kbit/s to 50 kbit/s."
+1038,"By mid-1968, Roberts and Barry Wessler wrote a final version of the IMP specification, based on a report that ARPA had commissioned the Stanford Research Institute to write, detailing the specifications of the ARPANET communications network. Roberts gave a report to Taylor on 3 June, who approved it on 21 June. After approval by ARPA, a Request for Quotation was sent to 140 potential bidders. 
Most computer science companies regarded the ARPA proposal as outlandish, and only twelve submitted bids to build a network; of the twelve, ARPA regarded only four as top-rank contractors. At year's end, ARPA considered only two contractors and awarded the contract to build the network to BBN in January 1969."
+1039,"The initial, seven-person BBN team was much aided by the technical specificity of their response to the ARPA RFQ, and thus quickly produced the first working system. The ""IMP guys"" were led by Frank Heart and the theoretical design of the network was led by Bob Kahn; the team included Dave Walden, Severo Ornstein, William Crowther and several others. The BBN-proposed network closely followed Roberts' ARPA plan: a network composed of small computers, the IMPs, that functioned as gateways interconnecting local resources. Routing, flow control, software design and network control were developed by the BBN IMP team. At each site, the IMPs performed store-and-forward packet switching functions and were interconnected with leased lines via telecommunication data sets, with initial data rates of 56 kbit/s. The host computers were connected to the IMPs via custom serial communication interfaces. The system, including the hardware and the packet switching software, was designed and installed in nine months. The BBN team continued to interact with the NPL team, with meetings between them taking place in the U.S. and the U.K."
+1040,"The first-generation IMPs were built by BBN Technologies using a ruggedized version of the Honeywell DDP-516 computer, configured with 24 kB of expandable magnetic-core memory, and a 16-channel Direct Multiplex Control direct memory access unit. The DMC established custom interfaces with each of the host computers and modems. In addition to the front-panel lamps, the DDP-516 computer also featured a special set of 24 indicator lamps showing the status of the IMP communication channels. 
Each IMP could support up to four local hosts and could communicate with up to six remote IMPs via early Digital Signal 0 leased telephone lines. The network connected one computer in Utah with three in California. Later, the Department of Defense allowed the universities to join the network for sharing hardware and software resources."
+1041,"According to Charles Herzfeld, ARPA Director:"
+1042,"According to Stephen J. Lukasik, who as deputy director and Director of DARPA was the person who signed most of the checks for the ARPANET's development, the goal was ""command and control"":"
+1043,"The ARPANET did use distributed computation, and incorporated frequent re-computation of routing tables. These features increased the survivability of the network in the event of significant interruption. Furthermore, the ARPANET was designed to survive subordinate-network losses. However, the Internet Society agrees with Herzfeld in a footnote in their online article, A Brief History of the Internet:"
+1044,"Paul Baran, the first to put forward a theoretical model for communication using packet switching, conducted the RAND study referenced above. Though the ARPANET did not exactly share Baran's project's goal, he said his work did contribute to the development of the ARPANET. Minutes taken by Elmer Shapiro of Stanford Research Institute at the ARPANET design meeting of 9–10 October 1967 indicate that a version of Baran's routing method may be used, consistent with the NPL team's proposal at the Symposium on Operating Systems Principles in Gatlinburg."
+1045,"The first four nodes were designated as a testbed for developing and debugging the 1822 protocol, which was a major undertaking. While they were connected electronically in 1969, network applications were not possible until the Network Control Protocol was implemented in 1970, enabling the first two host-host protocols, remote login and file transfer, which were specified and implemented between 1969 and 1973. 
The network was declared operational in 1971. Network traffic began to grow once email was established at the majority of sites by around 1973."
+1046,The first four IMPs were:
+1047,"The first successful host-to-host connection on the ARPANET was made between Stanford Research Institute and UCLA, by SRI programmer Bill Duvall and UCLA student programmer Charley Kline, at 10:30 pm PST on 29 October 1969. Kline connected from UCLA's SDS Sigma 7 Host computer to the Stanford Research Institute's SDS 940 Host computer. Kline typed the command ""login,"" but initially the SDS 940 crashed after he typed two characters. About an hour later, after Duvall adjusted parameters on the machine, Kline tried again and successfully logged in. Hence, the first two characters successfully transmitted over the ARPANET were ""lo"". The first permanent ARPANET link was established on 21 November 1969, between the IMP at UCLA and the IMP at the Stanford Research Institute. By 5 December 1969, the initial four-node network was established."
+1048,"Elizabeth Feinler created the first Resource Handbook for ARPANET in 1969, which led to the development of the ARPANET directory. The directory, built by Feinler and a team, made it possible to navigate the ARPANET."
+1049,"Roberts engaged Howard Frank to consult on the topological design of the network. Frank made recommendations to increase throughput and reduce costs in a scaled-up network. By March 1970, the ARPANET reached the East Coast of the United States, when an IMP at BBN in Cambridge, Massachusetts was connected to the network. Thereafter, the ARPANET grew: 9 IMPs by June 1970 and 13 IMPs by December 1970, then 18 by September 1971; 29 IMPs by August 1972, and 40 by September 1973. By June 1974, there were 46 IMPs, and in July 1975, the network numbered 57 IMPs. By 1981, the number was 213 host computers, with another host connecting approximately every twenty days."
+1050,"Support for inter-IMP circuits of up to 230.4 kbit/s was added in 1970, although considerations of cost and IMP processing power meant this capability was not actively used."
+1051,"Larry Roberts saw the ARPANET and NPL projects as complementary and sought in 1970 to connect them via a satellite link. Peter Kirstein's research group at University College London was subsequently chosen in 1971 in place of NPL for the UK connection. In June 1973, a transatlantic satellite link connected ARPANET to the Norwegian Seismic Array, via the Tanum Earth Station in Sweden, and onward via a terrestrial circuit to a TIP at UCL. UCL provided a gateway for interconnection of the ARPANET with British academic networks, the first international resource sharing network, and carried out some of the earliest experimental research work on internetworking."
+1052,"1971 saw the start of the use of the non-ruggedized Honeywell 316 as an IMP. It could also be configured as a Terminal Interface Processor, which provided terminal server support for up to 63 ASCII serial terminals through a multi-line controller in place of one of the hosts. The 316 featured a greater degree of integration than the 516, which made it less expensive and easier to maintain. The 316 was configured with 40 kB of core memory for a TIP. The size of core memory was later increased, to 32 kB for the IMPs, and 56 kB for TIPs, in 1973."
+1053,The ARPANET was demonstrated at the International Conference on Computer Communications in October 1972.
+1054,"In 1975, BBN introduced IMP software running on the Pluribus multi-processor. These appeared at a few sites. In 1981, BBN introduced IMP software running on its own C/30 processor product."
+1055,"In 1968, Roberts contracted with Kleinrock to measure the performance of the network and find areas for improvement. 
Building on his earlier work on queueing theory, Kleinrock specified mathematical models of the performance of packet-switched networks, which underpinned the development of the ARPANET as it expanded rapidly in the early 1970s."
+1056,"ARPA was intended to fund advanced research. The ARPANET was a research project that was communications-oriented, rather than user-oriented in design. Nonetheless, in the summer of 1975, operational control of the ARPANET passed to the Defense Communications Agency. At about this time, the first ARPANET encryption devices were deployed to support classified traffic."
+1057,"The ARPANET Completion Report, published in 1981 jointly by BBN and DARPA, concludes that:"
+1058,Access to the ARPANET was expanded in 1981 when the National Science Foundation funded the Computer Science Network.
+1059,"The transatlantic connectivity with NORSAR and UCL later evolved into the SATNET. The ARPANET, SATNET and PRNET were interconnected in 1977."
+1060,The DoD made TCP/IP the standard communication protocol for all military computer networking in 1980. NORSAR and University College London left the ARPANET and began using TCP/IP over SATNET in early 1982.
+1061,"On January 1, 1983, known as flag day, TCP/IP protocols became the standard for the ARPANET, replacing the earlier Network Control Protocol."
+1062,"In September 1984, work was completed on restructuring the ARPANET, giving U.S. military sites their own Military Network (MILNET) for unclassified defense department communications. Both networks carried unclassified information and were connected at a small number of controlled gateways which would allow total separation in the event of an emergency. MILNET was part of the Defense Data Network."
+1063,"Separating the civil and military networks reduced the 113-node ARPANET by 68 nodes. After MILNET was split away, the ARPANET would continue to be used as an Internet backbone for researchers, but be slowly phased out."
+1064,"In 1985, the NSF funded the establishment of national supercomputing centers at several universities and provided network access and network interconnectivity with the NSFNET project in 1986. NSFNET became the Internet backbone for government agencies and universities."
+1065,"The ARPANET project was formally decommissioned in 1990. The original IMPs and TIPs were phased out as the ARPANET was shut down after the introduction of the NSFNET, but some IMPs remained in service as late as July 1990."
+1066,"In the wake of the decommissioning of the ARPANET on 28 February 1990, Vinton Cerf wrote the following lamentation, entitled ""Requiem of the ARPANET"":"
+1068,"The technological advancements and practical applications achieved through the ARPANET were instrumental in shaping modern computer networking, including the Internet. Development and implementation of the concepts of packet switching, decentralized communication, and the development of protocols like TCP/IP laid the foundation for a global network that revolutionized communication, information sharing and collaborative research across the world."
+1069,"The ARPANET was related to many other research projects, which either influenced the ARPANET design, or which were ancillary projects or spun out of the ARPANET."
+1070,"Senator Al Gore authored the High Performance Computing and Communication Act of 1991, commonly referred to as ""The Gore Bill"", after hearing the 1988 concept for a National Research Network submitted to Congress by a group chaired by Leonard Kleinrock. The bill was passed on 9 December 1991 and led to the National Information Infrastructure which Gore called the information superhighway."
+1071,"The ARPANET project was honored with two IEEE Milestones, both dedicated in 2009."
+1072,"Because it was never a goal for the ARPANET to support IMPs from vendors other than BBN, the IMP-to-IMP protocol and message format were not standardized. 
However, the IMPs did nonetheless communicate amongst themselves to perform link-state routing, to do reliable forwarding of messages, and to provide remote monitoring and management functions to ARPANET's Network Control Center. Initially, each IMP had a 6-bit identifier and supported up to 4 hosts, which were identified with a 2-bit index. An ARPANET host address, therefore, consisted of both the port index on its IMP and the identifier of the IMP, which was written with either port/IMP notation or as a single byte; for example, the address of MIT-DMG could be written as either 1/6 or 70. An upgrade in early 1976 extended the host and IMP numbering to 8-bit and 16-bit, respectively." +1073,"In addition to primary routing and forwarding responsibilities, the IMP ran several background programs, titled TTY, DEBUG, PARAMETER-CHANGE, DISCARD, TRACE, and STATISTICS. These were given host numbers in order to be addressed directly and provided functions independently of any connected host. For example, ""TTY"" allowed an on-site operator to send ARPANET packets manually via the teletype connected directly to the IMP." +1074,"The starting point for host-to-host communication on the ARPANET in 1969 was the 1822 protocol, which defined the transmission of messages to an IMP. The message format was designed to work unambiguously with a broad range of computer architectures. An 1822 message essentially consisted of a message type, a numeric host address, and a data field. To send a data message to another host, the transmitting host formatted a data message containing the destination host's address and the data message being sent, and then transmitted the message through the 1822 hardware interface. The IMP then delivered the message to its destination address, either by delivering it to a locally connected host, or by delivering it to another IMP. 
When the message was ultimately delivered to the destination host, the receiving IMP would transmit a Ready for Next Message acknowledgment to the sending host's IMP."
+1075,"Unlike modern Internet datagrams, the ARPANET was designed to reliably transmit 1822 messages, and to inform the host computer when it loses a message; the contemporary IP is unreliable, whereas the TCP is reliable. Nonetheless, the 1822 protocol proved inadequate for handling multiple connections among different applications residing in a host computer. This problem was addressed with the Network Control Protocol, which provided a standard method to establish reliable, flow-controlled, bidirectional communications links among different processes in different host computers. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept later incorporated in the OSI model."
+1076,"NCP was developed under the leadership of Steve Crocker, then a graduate student at UCLA. Crocker created and led the Network Working Group, which was made up of a collection of graduate students at universities and research laboratories, including Jon Postel and Vint Cerf at UCLA. They were sponsored by ARPA to carry out the development of the ARPANET and the software for the host computers that supported applications."
+1077,"NCP provided a standard set of network services that could be shared by several applications running on a single host computer. This led to the evolution of application protocols that operated, more or less, independently of the underlying network service, and permitted independent advances in the underlying protocols."
+1078,"The various application protocols such as TELNET for remote time-sharing access and File Transfer Protocol, the latter used to enable rudimentary electronic mail, were developed and eventually ported to run over the TCP/IP protocol suite. 
In the 1980s, FTP for email was replaced by the Simple Mail Transfer Protocol and, later, POP and IMAP."
+1079,"Telnet was developed in 1969, beginning with RFC 15, and extended in RFC 855."
+1080,"The original specification for the File Transfer Protocol was written by Abhay Bhushan and published as RFC 114 on 16 April 1971. By 1973, the File Transfer Protocol specification had been defined and implemented, enabling file transfers over the ARPANET."
+1081,"In 1971, Ray Tomlinson of BBN sent the first network e-mail. An ARPA study in 1973, a year after network e-mail was introduced to the ARPANET community, found that three-quarters of the traffic over the ARPANET consisted of email messages. E-mail remained a very large part of the overall ARPANET traffic."
+1082,"The Network Voice Protocol specifications were defined in 1977, and implemented. But, because of technical shortcomings, conference calls over the ARPANET never worked well; the contemporary Voice over Internet Protocol was decades away."
+1083,"Stephen J. Lukasik directed DARPA to focus on internetworking research in the early 1970s. Bob Kahn moved from BBN to DARPA in 1972, first as program manager for the ARPANET, under Larry Roberts, then as director of the IPTO when Roberts left to found Telenet. Kahn worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. Vint Cerf joined the International Networking Working Group in 1972 and became its Chair. This group considered how to interconnect packet switching networks with different specifications, that is, internetworking. Research led by Bob Kahn at DARPA and Vint Cerf at Stanford University resulted in the formulation of the Transmission Control Program, which incorporated concepts from the French CYCLADES project directed by Louis Pouzin. Its specification was written by Cerf with Yogen Dalal and Carl Sunshine at Stanford in December 1974. 
The following year, testing began through concurrent implementations at Stanford, BBN and University College London. At first a monolithic design, the software was redesigned as a modular protocol stack in version 3 in 1978. Version 4 was installed in the ARPANET for production use in January 1983, replacing NCP. The development of the complete Internet protocol suite by 1989, as outlined in RFC 1122 and RFC 1123, and partnerships with the telecommunication and computer industry, laid the foundation for the adoption of TCP/IP as a comprehensive protocol suite as the core component of the emerging Internet."
+1084,"The Purdy Polynomial hash algorithm was developed for the ARPANET to protect passwords in 1971 at the request of Larry Roberts, head of ARPA at that time. It computed a polynomial of degree 2^24 + 17 modulo the 64-bit prime p = 2^64 − 59. The algorithm was later used by Digital Equipment Corporation to hash passwords in the VMS operating system and is still being used for this purpose."
+1085,"Because of its government funding, certain forms of traffic were discouraged or prohibited."
+1086,"Leonard Kleinrock claims to have committed the first illegal act on the Internet, having sent a request for return of his electric razor after a meeting in England in 1973. At the time, use of the ARPANET for personal reasons was unlawful."
+1087,"In 1978, against the rules of the network, Gary Thuerk of Digital Equipment Corporation sent out the first mass email to approximately 400 potential clients via the ARPANET. He claims that this resulted in $13 million worth of sales in DEC products, and highlighted the potential of email marketing."
+1088,A 1982 handbook on computing at MIT's AI Lab stated regarding network etiquette:
+1089,The Defense Advanced Research Projects Agency is a research and development agency of the United States Department of Defense responsible for the development of emerging technologies for use by the military. 
+1090,"Originally known as the Advanced Research Projects Agency (ARPA), the agency was created on February 7, 1958, by President Dwight D. Eisenhower in response to the Soviet launching of Sputnik 1 in 1957. By collaborating with academia, industry, and government partners, DARPA formulates and executes research and development projects to expand the frontiers of technology and science, often beyond immediate U.S. military requirements."
+1091,"The Economist has called DARPA the agency that shaped the modern world, with technologies like ""weather satellites, GPS, drones, stealth technology, voice interfaces, the personal computer and the internet on the list of innovations for which DARPA can claim at least partial credit."" Its track record of success has inspired governments around the world to launch similar research and development agencies."
+1092,"DARPA is independent of other military research and development and reports directly to senior Department of Defense management. DARPA comprises approximately 220 government employees in six technical offices, including nearly 100 program managers, who together oversee about 250 research and development programs."
+1093,"The name of the organization first changed from its founding name, ARPA, to DARPA, in March 1972, changing back to ARPA in February 1993, then reverted to DARPA in March 1996."
+1094,"The agency's current director, appointed in March 2021, is Stefanie Tompkins."
+1095,"As of 2021, their mission statement is ""to make pivotal investments in breakthrough technologies for national security""."
+1096,"The Advanced Research Projects Agency was suggested by the President's Science Advisory Committee to President Dwight D. Eisenhower in a meeting called after the launch of Sputnik. 
ARPA was formally authorized by President Eisenhower in 1958 for the purpose of forming and executing research and development projects to expand the frontiers of technology and science, and able to reach far beyond immediate military requirements. The two relevant acts are the Supplemental Military Construction Authorization and Department of Defense Directive 5105.15, in February 1958. It was placed within the Office of the Secretary of Defense and counted approximately 150 people. Its creation was directly attributed to the launching of Sputnik and to U.S. realization that the Soviet Union had developed the capacity to rapidly exploit military technology. Initial funding of ARPA was $520 million. ARPA's first director, Roy Johnson, left a $160,000 management job at General Electric for an $18,000 job at ARPA. Herbert York from Lawrence Livermore National Laboratory was hired as his scientific assistant." +1097,"Johnson and York were both keen on space projects, but when NASA was established later in 1958 all space projects and most of ARPA's funding were transferred to it. Johnson resigned and ARPA was repurposed to do ""high-risk"", ""high-gain"", ""far out"" basic research, a posture that was enthusiastically embraced by the nation's scientists and research universities. ARPA's second director was Brigadier General Austin W. Betts, who resigned in early 1961 and was succeeded by Jack Ruina who served until 1963. Ruina, the first scientist to administer ARPA, managed to raise its budget to $250 million. It was Ruina who hired J. C. R. Licklider as the first administrator of the Information Processing Techniques Office, which played a vital role in creation of ARPANET, the basis for the future Internet." 
+1098,"Additionally, the political and defense communities recognized the need for a high-level Department of Defense organization to formulate and execute R&D projects that would expand the frontiers of technology beyond the immediate and specific requirements of the Military Services and their laboratories. In pursuit of this mission, DARPA has developed and transferred technology programs encompassing a wide range of scientific disciplines that address the full spectrum of national security needs."
+1099,"From 1958 to 1965, ARPA's emphasis centered on major national issues, including space, ballistic missile defense, and nuclear test detection. During 1960, all of its civilian space programs were transferred to the National Aeronautics and Space Administration and the military space programs to the individual services."
+1100,"This allowed ARPA to concentrate its efforts on the Project Defender, Project Vela, and Project AGILE programs, and to begin work on computer processing, behavioral sciences, and materials sciences. The DEFENDER and AGILE programs formed the foundation of DARPA sensor, surveillance, and directed energy R&D, particularly in the study of radar, infrared sensing, and x-ray/gamma ray detection."
+1101,"ARPA at this point played an early role in Transit, a predecessor to the Global Positioning System. ""Fast-forward to 1959 when a joint effort between DARPA and the Johns Hopkins Applied Physics Laboratory began to fine-tune the early explorers' discoveries. TRANSIT, sponsored by the Navy and developed under the leadership of Richard Kirschner at Johns Hopkins, was the first satellite positioning system."""
+1102,"During the late 1960s, with the transfer of these mature programs to the Services, ARPA redefined its role and concentrated on a diverse set of relatively small, essentially exploratory research programs. 
The agency was renamed the Defense Advanced Research Projects Agency in 1972, and during the early 1970s, it emphasized directed energy programs, information processing, and tactical technologies."
+1103,"Concerning information processing, DARPA made great progress, initially through its support of the development of time-sharing. All modern operating systems rely on concepts invented for the Multics system, developed by a cooperation among Bell Labs, General Electric and MIT, which DARPA supported by funding Project MAC at MIT with an initial two-million-dollar grant."
+1104,"DARPA supported the evolution of the ARPANET, Packet Radio Network, Packet Satellite Network and ultimately, the Internet and research in the artificial intelligence fields of speech recognition and signal processing, including parts of Shakey the robot. DARPA also supported the early development of both hypertext and hypermedia. DARPA funded one of the first two hypertext systems, Douglas Engelbart's NLS computer system, as well as The Mother of All Demos. DARPA later funded the development of the Aspen Movie Map, which is generally seen as the first hypermedia system and an important precursor of virtual reality."
+1105,The Mansfield Amendment of 1973 expressly limited appropriations for defense research to projects with direct military application.
+1106,"The resulting ""brain drain"" is credited with boosting the development of the fledgling personal computer industry. Some young computer scientists left the universities for startups and private research laboratories such as Xerox PARC."
+1107,"Between 1976 and 1981, DARPA's major projects were dominated by air, land, sea, and space technology, tactical armor and anti-armor programs, infrared sensing for space-based surveillance, high-energy laser technology for space-based missile defense, antisubmarine warfare, advanced cruise missiles, advanced aircraft, and defense applications of advanced computing."
+1108,"Many of the successful programs were transitioned to the Services, such as the foundation technologies in automatic target recognition, space-based sensing, propulsion, and materials that were transferred to the Strategic Defense Initiative Organization, later known as the Ballistic Missile Defense Organization, now titled the Missile Defense Agency."
+1109,"During the 1980s, the attention of the Agency was centered on information processing and aircraft-related programs, including the National Aerospace Plane or Hypersonic Research Program. The Strategic Computing Program enabled DARPA to exploit advanced processing and networking technologies and to rebuild and strengthen relationships with universities after the Vietnam War. In addition, DARPA began to pursue new concepts for small, lightweight satellites and directed new programs regarding defense manufacturing, submarine technology, and armor/anti-armor."
+1110,"In 1981, two engineers, Robert McGhee and Kenneth Waldron, started to develop the Adaptive Suspension Vehicle, nicknamed the ""Walker"", at the Ohio State University under a research contract from DARPA. The vehicle was 17 feet long, 8 feet wide, and 10.5 feet high, and had six legs to support its three-ton aluminum body, with which it was designed to carry cargo over difficult terrain. However, DARPA lost interest in the ASV after problems with cold-weather tests."
+1111,"On February 4, 2004, the agency shut down its so-called ""LifeLog Project"". The project's aim would have been ""to gather in a single place just about everything an individual says, sees or does""."
+1112,"On October 28, 2009, the agency broke ground on a new facility in Arlington County, Virginia, a few miles from The Pentagon."
+1113,"In fall 2011, DARPA hosted the 100-Year Starship Symposium with the aim of getting the public to start thinking seriously about interstellar travel." 
+1114,"On June 5, 2016, NASA and DARPA announced plans to build new X-planes, with NASA aiming to create a whole series of X-planes over the next 10 years."
+1115,"Between 2014 and 2016, DARPA shepherded the first machine-to-machine computer security competition, the Cyber Grand Challenge, bringing a group of top-notch computer security experts to search for security vulnerabilities, exploit them, and create fixes that patch those vulnerabilities in a fully automated fashion. It is one of DARPA's prize competitions to spur innovation."
+1116,"In June 2018, DARPA leaders demonstrated a number of new technologies that were developed within the framework of the GXV-T program. The goal of this program is to create a lightly armored combat vehicle of modest dimensions which, through maneuverability and other techniques, can successfully resist modern anti-tank weapon systems."
+1117,"In September 2020, DARPA and the US Air Force announced that the Hypersonic Air-breathing Weapon Concept was ready for free-flight tests within the next year."
+1118,Victoria Coleman became the director of DARPA in November 2020.
+1119,"In recent years, DARPA officials have contracted out core functions to corporations. For example, during fiscal year 2020, Chenega ran physical security on DARPA's premises, System High Corp. carried out program security, and Agile Defense ran unclassified IT services. General Dynamics runs classified IT services. Strategic Analysis Inc. provided support services regarding engineering, science, mathematics, and front office and administrative work."
+1120,"DARPA has six technical offices that manage the agency's research portfolio, and two additional offices that manage special projects. 
All offices report to the DARPA director, including:"
+1121,A 1991 reorganization created several offices which existed throughout the early 1990s:
+1122,A 2010 reorganization merged two offices:
+1123,"A list of DARPA's active and archived projects is available on the agency's website. Because of the agency's fast pace, programs constantly start and stop based on the needs of the U.S. government. Structured information about some of DARPA's contracts and projects is publicly available."
+1124,"DARPA publishes a list of current research programs, and a list of archived programs."
+1125,"DARPA is well known as a high-tech government agency, and as such has many appearances in popular fiction. Some realistic references to DARPA in fiction are as ""ARPA"" in Tom Swift and the Visitor from Planet X, in episodes of the television program The West Wing, the television program Numb3rs, and the Netflix film Spectral."
+1126,The system's parent company is organized into three business units:
+1127,"Sabre is headquartered in Southlake, Texas, and has many employees in various locations around the world."
+1128,"The name of the travel reservation system is an abbreviation for ""Semi-automated Business Research Environment"", and was originally styled in all-capital letters as SABRE. It was developed to automate the way American Airlines booked reservations."
+1129,"In the 1950s, American Airlines was facing a serious challenge in its ability to quickly handle airline reservations in an era that witnessed high growth in passenger volumes in the airline industry. Before the introduction of SABRE, the airline's system for booking flights was entirely manual, having evolved from techniques originally developed at its Little Rock, Arkansas, reservations center in the 1920s. In this manual system, a team of eight operators would sort through a rotating file with cards for every flight.
When a seat was booked, the operators would place a mark on the side of the card, and knew visually whether it was full. This part of the process was not especially slow, at least when there were not many flights, but the entire end-to-end task of looking for a flight, reserving a seat, and then writing up the ticket could take up to three hours in some cases, and 90 minutes on average. The system also had limited room to scale: it was restricted to about eight operators because that was the maximum that could fit around the file, so the only way to handle more queries was to add more layers of hierarchy to filter down requests into batches."
+1130,"American Airlines had already attacked the problem to some degree, and was in the process of introducing its new Magnetronic Reservisor, an electromechanical computer, in 1952 to replace the card files. This computer consisted of a single magnetic drum, with each memory location holding the number of seats left on a particular flight. Using this system, a large number of operators could access information simultaneously, so ticket agents could be told via phone whether a seat was available. On the downside, a staff member was needed at each end of the phone line, and handling the ticket took considerable effort and filing. Something much more highly automated was needed if American Airlines was going to enter the jet age, booking many times more seats."
+1131,"During the testing phase of the Reservisor a high-ranking IBM salesman, Blair Smith, was flying on an American Airlines flight from Los Angeles back to IBM in New York City in 1953. He found himself sitting next to American Airlines president C. R. Smith. Noting that they shared a family name, they began talking."
+1132,"Just prior to this chance meeting, IBM had been working with the United States Air Force on their Semi Automatic Ground Environment project.
SAGE used a series of large computers to coordinate the message flow from radar sites to interceptors, dramatically reducing the time needed to direct an attack on an incoming bomber. The system used teleprinter machines located around the world to feed information into the system, which then sent orders back to teleprinters located at the fighter bases. It was one of the first online systems."
+1133,"Smith and Watson observed that the SAGE system's basic architecture was suitable for use in American Airlines' booking services. Teleprinters would be placed at American Airlines' ticketing offices to send in requests and receive responses directly, without the need for anyone on the other end of the phone. The number of available seats on the aircraft could be tracked automatically, and if a seat was available the ticket agent could be notified. Booking simply took one more command, updating the availability and, if desired, could be followed by printing a ticket."
+1134,"Thirty days later IBM sent a research proposal to American Airlines, suggesting that they join forces to study the problem. A team was set up consisting of IBM engineers led by John Siegfried and a large number of American Airlines' staff led by Malcolm Perry, taken from booking, reservations, and ticket sales, calling the effort the Semi-Automated Business Research Environment, or SABRE."
+1135,"A formal development arrangement was signed in 1957. The first experimental system went online in 1960, based on two IBM 7090 mainframes in a new data center located in Briarcliff Manor, New York. The system was a success. Up to this point, it had cost $40 million to develop and install. The SABRE system built by IBM in the 1960s was specified to process a very large volume of transactions, such as handling 83,000 daily phone calls. The system took over all booking functions in 1964, by which time the name had changed to SABRE."
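The transaction at the heart of both the Reservisor and SABRE, checking a per-flight seat count and decrementing it on a successful booking, can be sketched in a few lines. The following is a minimal Python illustration of that idea only, not SABRE's actual design; the class and flight names are hypothetical.

```python
# Illustrative sketch of an airline seat-inventory transaction, the operation
# the Magnetronic Reservisor drum (one counter per flight) and later SABRE
# automated. Names are hypothetical; real systems add record locking, fare
# classes, and passenger-name records.

class SeatInventory:
    """One seat counter per flight, like one drum location per flight."""

    def __init__(self, capacity_by_flight):
        self._seats = dict(capacity_by_flight)

    def seats_left(self, flight):
        return self._seats[flight]

    def book(self, flight):
        """Reserve one seat: check availability, then decrement the counter."""
        if self._seats[flight] > 0:
            self._seats[flight] -= 1
            return True
        return False

inventory = SeatInventory({"AA1": 2})
assert inventory.book("AA1")      # first seat sold
assert inventory.book("AA1")      # second seat sold
assert not inventory.book("AA1")  # flight now full; booking refused
```

The key advance over the manual card file is that the check and the decrement happen in one shared place, so any number of agents can query availability without crowding around a single rotating file.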
+1136,"In 1972, SABRE was migrated to IBM System/360 systems in a new underground location in Tulsa, Oklahoma. Max Hopper joined American Airlines in 1972 as director of SABRE, and pioneered its use. Originally used only by American Airlines, the system was opened to travel agents in 1976."
+1137,"With SABRE up and running, IBM offered its expertise to other airlines, and soon developed Deltamatic for Delta Air Lines on the IBM 7074, and PANAMAC for Pan American World Airways using an IBM 7080. In 1968, IBM generalized this work into PARS, which ran on any member of the IBM System/360 family and thus could support an airline of any size. The operating system component of PARS evolved into ACP, and later into TPF. Application programs were originally written in assembly language, later in SabreTalk, a proprietary dialect of PL/I, and now in C and C++."
+1138,"By the 1980s, SABRE offered airline reservations through the CompuServe Information Service, and General Electric's GEnie under the Eaasy SABRE brand. This service was extended to America Online in the 1990s."
+1139,"American and Sabre separated on March 15, 2000. Sabre had been a publicly traded corporation, Sabre Holdings, stock symbol TSG on the New York Stock Exchange, until taken private in March 2007. The corporation introduced a new logo and changed from the all-caps acronym ""SABRE"" to the mixed-case ""Sabre Holdings"" when the new corporation was formed. The Travelocity website, introduced in 1996, was owned by Sabre Holdings. Travelocity was acquired by Expedia in January 2015. Sabre Holdings' three remaining business units, Sabre Travel Network, Sabre Airline Solutions and Sabre Hospitality, today make up a global travel technology company."
+1140,"In 1982, Advertising Age reported that ""United Airlines operates a similar system, Apollo, while Eastern operates Mars and Delta operates Datas."" Braniff International's Cowboy system was considered by Electronic Data Systems for building an airline-neutral system."
+1141,"A 1982 study by American Airlines found that travel agents selected the flight appearing on the first line more than half the time. Ninety-two percent of the time, the selected flight was on the first screen. This provided a huge incentive for American to manipulate its ranking formula, or even corrupt the search algorithm outright, to favor American flights over its competitors in flight search results, and the airline did not resist the temptation."
+1142,"At first this was limited to juggling the relative importance of factors such as the length of the flight, how close the actual departure time was to the desired time, and whether the flight had a connection, but with each success American became bolder. In late 1981, New York Air added a flight from La Guardia to Detroit, challenging American in an important market. Before long, the new airline's flights suddenly started appearing at the bottom of the screen. Its reservations dried up, and it was forced to cut back from eight Detroit flights a day to none."
+1143,"On one occasion, Sabre deliberately withheld Continental's discount fares on 49 routes where American competed. A Sabre staffer had been directed to work on a program that would automatically suppress any discount fares loaded into the system."
+1144,"Congress investigated these practices, and in 1983 Bob Crandall, president of American, vocally defended the airline's preferential treatment of its own offerings in the system. ""The preferential display of our flights, and the corresponding increase in our market share, is the competitive raison d'être for having created the system in the first place,"" he told them. The U.S.
government disagreed, and in 1984 it outlawed the biasing practices for the search results."
+1145,"The fairness rules were eliminated or allowed to expire in 2010. By then, none of the major distribution systems was majority owned by the airlines."
+1146,"In 1987, Sabre's success in selling to European travel agents was inhibited by the refusal of big European carriers, led by British Airways, to grant the system ticketing authority for their flights, even though Sabre had obtained IATA Billing and Settlement Plan clearance for the UK in 1986. American brought a High Court action alleging that, after the arrival of Sabre on its doorstep, British Airways immediately offered financial incentives to travel agents who continued to use Travicom and would tie any override commissions to it. Travicom was created by Videcom, British Airways and British Caledonian and launched in 1976 as the world's first multi-access reservations system, based on Videcom technology that eventually became part of Galileo UK. It connected 49 subscribing international airlines to thousands of travel agents in the UK. It allowed agents and airlines to communicate via a common distribution language and network, handling 97% of UK airline business trade bookings by 1987."
+1147,"British Airways eventually bought out the stakes in Travicom held by Videcom and British Caledonian, to become the sole owner. Although Sabre's vice-president in London, David Schwarte, made representations to the U.S. Department of Transportation and the British Monopolies Commission, British Airways defended the use of Travicom as a truly non-discriminatory system in flight selection because an agent had access to some 50 carriers worldwide, including Sabre, for flight information."
+1148,"The processing power behind SAGE was supplied by the largest discrete component-based computer ever built, the IBM-manufactured AN/FSQ-7.
Each SAGE Direction Center housed an FSQ-7 which occupied an entire floor, approximately 22,000 square feet not including supporting equipment. The FSQ-7 was actually two computers, ""A"" side and ""B"" side. Computer processing was switched from ""A"" side to ""B"" side on a regular basis, allowing maintenance on the unused side. Information was fed to the DCs from a network of radar stations as well as readiness information from various defense sites. The computers, based on the raw radar data, developed ""tracks"" for the reported targets, and automatically calculated which defenses were within range. Operators used light guns to select targets on-screen for further information, select one of the available defenses, and issue commands to attack. These commands would then be automatically sent to the defense site via teleprinter."
+1149,"Connecting the various sites was an enormous network of telephones, modems and teleprinters. Later additions to the system allowed SAGE's tracking data to be sent directly to CIM-10 Bomarc missiles and some of the US Air Force's interceptor aircraft in-flight, directly updating their autopilots to maintain an intercept course without operator intervention. Each DC also forwarded data to a Combat Center for ""supervision of the several sectors within the division""."
+1150,"SAGE became operational in the late 1950s and early 1960s at a combined cost of billions of dollars. It was noted that the deployment cost more than the Manhattan Project—which it was, in a way, defending against. Throughout its development, there were continual concerns about its real ability to deal with large attacks, and the Operation Sky Shield tests showed that only about one-fourth of enemy bombers would have been intercepted. Nevertheless, SAGE was the backbone of NORAD's air defense system into the 1980s, by which time the tube-based FSQ-7s were increasingly costly to maintain and completely outdated.
Today the same command and control task is carried out by microcomputers, based on the same basic underlying data."
+1151,"Just prior to World War II, Royal Air Force tests with the new Chain Home radars had demonstrated that relaying information to the fighter aircraft directly from the radar sites was not feasible. The radars determined the map coordinates of the enemy, but could generally not see the fighters at the same time. This meant the fighters had to be able to determine where to fly to perform an interception but were often unaware of their own exact location and unable to calculate an interception while also flying their aircraft."
+1152,"The solution was to send all of the radar information to a central control station where operators collated the reports into single tracks, and then reported these tracks to the airbases, or sectors. The sectors used additional systems to track their own aircraft, plotting both on a single large map. Operators viewing the map could then see what direction their fighters would have to fly to approach their targets and relay that simply by telling them to fly along a certain heading or vector. This Dowding system was the first large-scale ground-controlled interception system, covering the entirety of the UK. It proved enormously successful during the Battle of Britain, and is credited as being a key part of the RAF's success."
+1153,"The system was slow, often providing information that was up to five minutes out of date. Against propeller-driven bombers flying at perhaps 225 miles per hour this was not a serious concern, but it was clear the system would be of little use against jet-powered bombers flying at perhaps 600 miles per hour. The system was also extremely expensive in manpower terms, requiring hundreds of telephone operators, plotters and trackers in addition to the radar operators. This was a serious drain on manpower, making it difficult to expand the network."
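The vectoring step described above, reading the fighter and target plots off the sector map and passing the pilot a compass heading, amounts to a simple bearing calculation. The sketch below illustrates that geometry only; the grid coordinates and function names are hypothetical, not the Dowding system's actual procedure.

```python
# Illustrative sketch of the controller's vectoring step: given two plotted
# positions on a map grid, compute the compass heading to pass to the pilot.
import math

def heading_to_target(fighter_xy, target_xy):
    """Compass bearing in degrees (0 = north, clockwise) from fighter to
    target, given (east, north) map-grid positions."""
    dx = target_xy[0] - fighter_xy[0]  # east component
    dy = target_xy[1] - fighter_xy[1]  # north component
    # atan2(east, north) gives the angle measured clockwise from north
    return math.degrees(math.atan2(dx, dy)) % 360

# A target due east of the fighter yields the controller's "vector 090"
print(round(heading_to_target((0, 0), (10, 0))))  # 90
```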
+1154,"The idea of using a computer to handle the task of taking reports and developing tracks had been explored beginning late in the war. By 1944, analog computers had been installed at the CH stations to automatically convert radar readings into map locations, eliminating two people. Meanwhile, the Royal Navy began experimenting with the Comprehensive Display System, another analog computer that took X and Y locations from a map and automatically generated tracks from repeated inputs. Similar systems began development with the Royal Canadian Navy, DATAR, and the US Navy, the Naval Tactical Data System. A similar system was also specified for the Nike SAM project, specifically a US version of CDS, coordinating the defense over a battle area so that multiple batteries did not fire on a single target. All of these systems were relatively small in geographic scale, generally tracking within a city-sized area."
+1155,"When the Soviet Union tested its first atomic bomb in August 1949, the topic of air defense of the US became important for the first time. A study group, the ""Air Defense Systems Engineering Committee"", was set up under the direction of Dr. George Valley to consider the problem, and is known to history as the ""Valley Committee""."
+1156,"Their December report noted a key problem in air defense using ground-based radars. A bomber approaching a radar station would detect the signals from the radar long before the reflection off the bomber was strong enough to be detected by the station. The committee suggested that when this occurred, the bomber would descend to low altitude, thereby greatly limiting the radar horizon, allowing the bomber to fly past the station undetected. Although flying at low altitude greatly increased fuel consumption, the team calculated that the bomber would only need to do this for about 10% of its flight, making the fuel penalty acceptable."
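The committee's low-altitude concern follows directly from line-of-sight geometry: the radar horizon grows only with the square root of the target's altitude, so a descending bomber slips under distant coverage. A back-of-envelope check using the standard 4/3-earth-radius correction for atmospheric refraction (an illustration, not the committee's actual calculation):

```python
# Approximate radar horizon versus target altitude, showing why low-level
# flight defeats a sparse radar net. Illustrative only.
import math

EFFECTIVE_EARTH_RADIUS_M = (4 / 3) * 6_371_000  # 4/3 factor models refraction

def radar_horizon_km(altitude_m):
    """Approximate line-of-sight radar horizon, in km, for a target at the
    given altitude in metres."""
    return math.sqrt(2 * EFFECTIVE_EARTH_RADIUS_M * altitude_m) / 1000

print(round(radar_horizon_km(10_000)))  # bomber at cruise altitude: ~412 km
print(round(radar_horizon_km(150)))     # low-level penetration: ~50 km
```

Dropping from cruise altitude to a few hundred feet cuts the detection range by nearly an order of magnitude, which is why the report concluded that only a dense net of stations with overlapping coverage could close the gap.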
+1157,"The only solution to this problem was to build a huge number of stations with overlapping coverage. At that point the problem became one of managing the information. Manual plotting was ruled out as too slow, and a computerized solution was the only possibility. To handle this task, the computer would need to be fed information directly, eliminating any manual translation by phone operators, and it would have to be able to analyze that information and automatically develop tracks. A system tasked with defending cities against the predicted future Soviet bomber fleet would have to be dramatically more powerful than the models used in the NTDS or DATAR." +1158,"The Committee then had to consider whether or not such a computer was possible. The Valley Committee was introduced to Jerome Wiesner, associate director of the Research Laboratory of Electronics at MIT. Wiesner noted that the Servomechanisms Laboratory had already begun development of a machine that might be fast enough. This was the Whirlwind I, originally developed for the Office of Naval Research as a general purpose flight simulator that could simulate any current or future aircraft by changing its software." +1159,"Wiesner introduced the Valley Committee to Whirlwind's project lead, Jay Forrester, who convinced him that Whirlwind was sufficiently capable. In September 1950, an early microwave early-warning radar system at Hanscom Field was connected to Whirlwind using a custom interface developed by Forrester's team. An aircraft was flown past the site, and the system digitized the radar information and successfully sent it to Whirlwind. With this demonstration, the technical concept was proven. Forrester was invited to join the committee." 
+1160,"With this successful demonstration, Louis Ridenour, chief scientist of the Air Force, wrote a memo stating ""It is now apparent that the experimental work necessary to develop, test, and evaluate the systems proposals made by ADSEC will require a substantial amount of laboratory and field effort."" Ridenour approached MIT President James Killian with the aim of beginning a development lab similar to the war-era Radiation Laboratory that made enormous progress in radar technology. Killian was initially uninterested, desiring to return the school to its peacetime civilian charter. Ridenour eventually convinced Killian the idea was sound by describing the way the lab would lead to the development of a local electronics industry based on the needs of the lab and the students who would leave the lab to start their own companies. Killian agreed to at least consider the issue, and began Project Charles to consider the size and scope of such a lab." +1161,"Project Charles was placed under the direction of Francis Wheeler Loomis and included 28 scientists, about half of whom were already associated with MIT. Their study ran from February to August 1951, and in their final report they stated that ""We endorse the concept of a centralized system as proposed by the Air Defense Systems Engineering Committee, and we agree that the central coordinating apparatus of this system should be a high-speed electronic digital computer."" The report went on to describe a new lab that would be used for generic technology development for the Air Force, Army and Navy, and would be known as Project Lincoln." +1162,"Loomis took over direction of Project Lincoln and began planning by following the lead of the earlier RadLab. By September 1951, only months after the Charles report, Project Lincoln had more than 300 employees. By the end of the summer of 1952 this had risen to 1300, and after another year, 1800. 
The only building suitable for classified work at that point was Building 22, with room for a few hundred people at most, although some relief was found by moving the non-classified portions of the project, administration and similar, to Building 20. But this was clearly insufficient space. After considering a variety of locations, a site at Laurence G. Hanscom Field was selected, with the groundbreaking taking place in 1951."
+1163,"The terms of the National Security Act were formulated during 1947, leading to the creation of the US Air Force out of the former US Army Air Force. During April of the same year, US Air Force staff identified a specific requirement for automatic radar-detection equipment that would relay information to an air defense control system capable of functioning without human operators. The December 1949 ""Air Defense Systems Engineering Committee"" led by Dr. George Valley had recommended computerized networking for ""radar stations guarding the northern air approaches to the United States"". After a January 1950 meeting, Valley and Jay Forrester proposed using the Whirlwind I for air defense. On August 18, 1950, when the ""1954 Interceptor"" requirements were issued, the USAF noted that manual techniques of aircraft warning and control would impose ""intolerable"" delays, and in December it published Electronic Air Defense Environment for 1954. During February–August 1951 at the new Lincoln Laboratory, the USAF conducted Project Claude, which concluded that an improved air defense system was needed."
+1164,"In a test for the US military at Bedford, Massachusetts on 20 April 1951, data produced by a radar was transmitted through telephone lines to a computer for the first time, showing the detection of a mock enemy aircraft. This first test was directed by C. Robert Wieser."
+1165,"The ""Summer Study Group"" of scientists in 1952 recommended ""computerized air direction centers…to be ready by 1954."""
+1166,"IBM's ""Project High"" assisted under their October 1952 Whirlwind subcontract with Lincoln Laboratory, and a 1952 USAF Project Lincoln ""fullscale study"" of ""a large scale integrated ground control system"" resulted in the SAGE approval ""first on a trial basis in 1953"". The USAF had decided by April 10, 1953, to cancel the competing ADIS, and the University of Michigan's Aeronautical Research Center withdrew in the spring. Air Research and Development Command planned to ""finalize a production contract for the Lincoln Transition System"". Similarly, the July 22, 1953, report by the Bull Committee identified completing the Mid-Canada Line radars as the top priority and, ""on a second-priority-basis: the Lincoln automated system""."
+1167,"The Priority Permanent System with the initial radar stations was completed in 1952 as a ""manual air defense system"". The Permanent System radar stations included 3 subsequent phases of deployments and by June 30, 1957, had 119 ""Fixed CONUS"" radars, 29 ""Gap-filler low altitude"" radars, and 23 control centers. At ""the end of 1957, ADC operated 182 radar stations [and] 17 control centers … 32 had been added during the last half of the year as low-altitude, unmanned gap-filler radars. The total consisted of 47 gap-filler stations, 75 Permanent System radars, 39 semimobile radars, 19 Pinetree stations,…1 Lashup-era radar and a single Texas Tower"". ""On 31 December 1958, USAF ADC had 187 operational land-based radar stations""."
+1168,"Systems scientist Jay Forrester was instrumental in directing the development of the key concept of an interception system during his work at the Servomechanisms Laboratory of MIT.
The concept of the system, according to the Lincoln Laboratory site, was to ""develop a digital computer that could receive vast quantities of data from multiple radars and perform real-time processing to produce targeting information for intercepting aircraft and missiles."""
+1169,"The AN/FSQ-7 was developed by the Lincoln Laboratory's Digital Computer Laboratory and Division 6, working closely with IBM as the manufacturer. Each FSQ-7 actually consisted of two nearly identical computers operating in ""duplex"" for redundancy. The design used an improved version of the Whirlwind I magnetic core memory and was an extension of the Whirlwind II computer program, renamed AN/FSQ-7 in 1953 to comply with Air Force nomenclature. It has been suggested the FSQ-7 was based on the IBM 701 but, while the 701 was investigated by MIT engineers, its design was ultimately rejected due to high error rates and generally being ""inadequate to the task."" IBM's contributions were essential to the success of the FSQ-7, and IBM benefited immensely from its association with the SAGE project, most evidently during development of the IBM 704."
+1170,"On October 28, 1953, the Air Force Council recommended 1955 funding for ""ADC to convert to the Lincoln automated system"". The ""experimental SAGE subsector, located in Lexington, Mass., was completed in 1955…with a prototype AN/FSQ-7…known as XD-1"". In 1955, Air Force personnel began IBM training at the Kingston, New York, prototype facility, and the ""4620th Air Defense Wing was established at Lincoln Laboratory""."
+1171,"On May 3, 1956, General Partridge presented CINCNORAD's Operational Concept for Control of Air Defense Weapons to the Armed Forces Policy Council, and a June 1956 symposium presentation identified advanced programming methods of SAGE code. For SAGE consulting, Western Electric and Bell Telephone Laboratories formed the Air Defense Engineering Service, which was contracted in January 1954.
IBM delivered the FSQ-7 computer's prototype in June 1956, and Kingston's XD-2 with dual computers guided a Cape Canaveral BOMARC to a successful aircraft intercept on August 7, 1958. Initially contracted to RCA, the AN/FSQ-7 production units were started by IBM in 1958. IBM's production contract developed 56 SAGE computers for $500 million, compared with the $2 billion cost of the WWII Manhattan Project."
+1172,"General Operational Requirements 79 and 97 were ""the basic USAF documents guiding development and improvement of ground environment"". Prior to fielding the AN/FSQ-7 centrals, the USAF initially deployed ""pre-SAGE semiautomatic intercept systems"" to Air Defense Direction Centers (ADDCs). On April 22, 1958, NORAD approved Nike AADCPs to be collocated with the USAF manual ADDCs at Duncanville Air Force Station TX, Olathe Air Force Station KS, Belleville Air Force Station IL, and Osceola Air Force Station KS."
+1173,"In 1957, SAGE System groundbreaking took place at McChord AFB for DC-12, where the ""electronic brain"" began arriving in November 1958, and the ""first SAGE regional battle post began operating in Syracuse, New York in early 1959"". BOMARC ""crew training was activated January 1, 1958"", and AT&T ""hardened many of its switching centers, putting them in deep underground bunkers"". The North American Defense Objectives Plan submitted to Canada in December 1958 scheduled 5 Direction Centers and 1 Combat Center to be complete in Fiscal Year 1959, 12 DCs and 3 CCs complete at the end of FY 60, 19 DC/4 CC in FY 61, 25/6 in FY 62, and 30/10 in FY 63.
On June 30, NORAD ordered that ""Air Defense Sectors were to be designated as NORAD sectors""."
+1174,"SAGE Geographic Reorganization: The SAGE Geographic Reorganization Plan of July 25, 1958, by NORAD was ""to provide a means for the orderly transition and phasing from the manual to the SAGE system."" The plan identified deactivation of the Eastern, Central, and Western Region/Defense Forces on July 1, 1960, and ""current manual boundaries"" were to be moved to the new ""eight SAGE divisions"" as soon as possible. Manual divisions ""not to get SAGE computers were to be phased out"" along with their Manual Air Defense Control Centers at the headquarters base: ""9th Geiger Field… 32d, Syracuse AFS… 35th, Dobbins AFB… 58th, Wright-Patterson AFB… 85th, Andrews AFB"". The 26th SAGE Division, the first of the SAGE divisions, became operational at Hancock Field on 1 January 1959, after the redesignation of AC&W Squadrons started October 1. Additional sectors included the Los Angeles Air Defense Sector, designated in February 1959. A June 23 JCS memorandum approved the new ""March 1959 Reorganization Plan"" for HQ NORAD/CONAD/ADC."
+1175,"Project Wild Goose teams of Air Materiel Command personnel installed the Ground Air Transmit Receive stations for the SAGE TDDL around 1960. By the middle of 1960, AMC had determined that about 800,000 man-hours would be required to bring the F-106 fleet to the point where it would be a valuable adjunct to the air defense system. Part of the work was accomplished by the Sacramento Air Materiel Area. The remainder was done at ADC bases by roving AMC field assistance teams supported by ADC maintenance personnel.
After a September 1959 experimental ATABE test between an ""abbreviated"" AN/FSQ-7 staged at Fort Banks and the Lexington XD-1, the 1961 ""SAGE/Missile Master test program"" conducted large-scale field testing of the ATABE ""mathematical model"" using radar tracks of actual SAC and ADC aircraft flying mock penetrations into defense sectors. Similarly conducted was the joint SAC-NORAD Sky Shield II exercise, followed by Sky Shield III on 2 September 1962. On July 15, 1963, ESD's CMC Management Office assumed ""responsibilities in connection with BMEWS, Space Track, SAGE, and BUIC."" The Chidlaw Building's computerized NORAD/ADC Combined Operations Center in 1963 became the highest echelon of the SAGE computer network when operations moved from Ent AFB's 1954 manual Command Center to the partially underground ""war room"". Also in 1963, radar stations were renumbered and the vacuum-tube SAGE System was completed."
+1176,"On ""June 26, 1958,…the New York sector became operational"", and on December 1, 1958, the Syracuse sector's DC-03 was operational. Construction of CFB North Bay in Canada was started in 1959 for a bunker ~700 feet underground, and by 1963 the system had 3 Combat Centers. The 23 SAGE centers included 1 in Canada, and the ""SAGE control centers reached their full 22 site deployments in 1961"". The completed Minot AFB blockhouse received an AN/FSQ-7, but never received the FSQ-8."
+1177,The SAGE system included a direction center assigned to air defense sectors as they were defined at the time.
+1178,"*Some of the originally planned 32 DCs were never completed and DCs were planned at installations for additional sectors: Calypso/Raleigh NC, England/Shreveport LA, Fort Knox KY, Kirtland/Albuquerque NM, Robins/Miami, Scott/St. Louis, Webb/San Antonio TX."
+1179,"The environment allowed radar station personnel to monitor the radar data and systems' status and to use the range height equipment to process height requests from Direction Center personnel. DCs received the Long Range Radar Input from the sector's radar stations, and DC personnel monitored the radar tracks and IFF data provided by the stations, requested height-finder radar data on targets, and monitored the computer's evaluation of which fighter aircraft or Bomarc missile site could reach the threat first. The DC's ""NORAD sector commander's operational staff"" could designate fighter intercept of a target or, using the Senior Director's keyed console in the Weapons Direction room, launch a Bomarc intercept with automatic Q-7 guidance of the surface-to-air missile to a final homing dive."
+1180,"The NORAD sector direction center's air defense artillery director consoles served the ADA battle staff officer, and the NSDC automatically communicated crosstelling of ""SAGE reference track data"" to/from adjacent sectors' DCs and to 10 Nike Missile Master AADCPs. Forwardtelling automatically communicated data from multiple DCs to a 3-story Combat Center, usually at one of the sector's DCs, for coordinating the air battle in the NORAD region, which forwarded data to the NORAD Command Center. NORAD's integration of air warning data along with space surveillance, intelligence, and other data allowed attack assessment of an Air Defense Emergency for alerting the SAC command centers, The Pentagon/Raven Rock NMCC/ANMCC, and the public via CONELRAD radio stations."
+1181,"The Burroughs 416L SAGE component was the Cold War network connecting the IBM-supplied computer systems at the various DCs; it created the display and control environment for operating the separate radars and provided outbound command guidance for ground-controlled interception by air defense aircraft in the ""SAGE Defense System"".
Burroughs Corporation was a prime contractor for SAGE network interface equipment, which included 134 Burroughs AN/FST-2 Coordinate Data Transmitting Sets at radar stations and other sites, the IBM-supplied AN/FSQ-7 at 23 Direction Centers, and the AN/FSQ-8 Combat Control Computers at 8 Combat Centers. The 2 computers of each AN/FSQ-7, together weighing 275 short tons, used about ⅓ of the DC's 2nd floor space and, at ~$50 per instruction, ran approximately 125,000 ""computer instructions [to] support actual operational air-defense mission"" processing. The AN/FSQ-7 at Luke AFB had additional memory and was used as a ""computer center for all other"" DCs. Project 416L was the predecessor of the USAF, NORAD, SAC, and other military organizations' ""Big L"" computer systems."
+1182,Network communications:
+1183,"The SAGE network of computers connected by a ""Digital Radar Relay"" used AT&T voice lines, microwave towers, switching centers, etc.; AT&T's ""main underground station"" was in Kansas, with other bunkers in Connecticut, California, Iowa, and Maryland. CDTS modems at automated radar stations transmitted range and azimuth, and the Air Movements Identification Service provided air traffic data to the SAGE System. Radar tracks received by telephone call could be entered via consoles of the 4th floor ""Manual Inputs"" room adjacent to the ""Communication Recording-Monitoring and VHF"" room. In 1966, SAGE communications were integrated into the AUTOVON Network."
+1184,"SAGE Sector Warning Networks provided the radar netting communications for each DC and eventually also allowed transfer of command guidance to autopilots of TDDL-equipped interceptors for vectoring to targets via the Ground to Air Data Link Subsystem and the Ground Air Transmit Receive network of radio sites for ""HF/VHF/UHF voice & TDDL"", each generally co-located at a CDTS site.
SAGE Direction Centers and Combat Centers were also nodes of NORAD's Alert Network Number 1, and SAC Emergency War Order Traffic included ""Positive Control/Noah's Ark instructions"" through northern NORAD radio sites to confirm or recall SAC bombers if ""SAC decided to launch the alert force before receiving an execution order from the JCS""."
+1185,"A SAGE System ergonomic test at Luke AFB in 1964 ""showed conclusively that the wrong timing of human and technical operations was leading to frequent truncation of the flight path tracking system"". SAGE software development was ""grossly underestimated"": ""the biggest mistake [in] the SAGE computer program was [the] jump from the 35,000 instructions … to the more than 100,000 instructions on the"" AN/FSQ-8. NORAD conducted a Sage/Missile Master Integration/ECM-ECCM Test in 1963, and although SAGE used AMIS input of air traffic information, the 1959 plan developed by the July 1958 USAF Air Defense Systems Integration Division for SAGE Air Traffic Integration was cancelled by the DoD."
+1186,"SAGE radar stations, including 78 DEW Line sites in December 1961, provided radar tracks to DCs and had frequency diversity radars. United States Navy picket ships also provided radar tracks, and seaward radar coverage was provided. By the late 1960s, EC-121 Warning Star aircraft based at Otis AFB MA and McClellan AFB CA provided radar tracks via automatic data link to the SAGE System. Civil Aeronautics Administration radars were at some stations, and the ARSR-1 Air Route Surveillance Radar rotation rate had to be modified ""for SAGE Modes III and IV""."
+1187,"ADC aircraft such as the F-94 Starfire, F-89 Scorpion, F-101B Voodoo, and F-4 Phantom were controlled by SAGE GCI. The F-104 Starfighter was ""too small to be equipped with data link equipment"" and used voice-commanded GCI, but the F-106 Delta Dart was equipped for the automated data link.
The ADL was designed to allow interceptors that reached targets to transmit real-time tactical friendly and enemy movements and to determine whether sector defense reinforcement was necessary."
+1188,Familiarization flights allowed SAGE weapons directors to fly on two-seat interceptors to observe GCI operations. Surface-to-air missile installations for CIM-10 Bomarc interceptors were displayed on SAGE consoles.
+1189,"Partially solid-state AN/FST-2B and later AN/FYQ-47 computers replaced the AN/FST-2, and sectors without AN/FSQ-7 centrals requiring a ""weapon direction control device"" for USAF air defense used the solid-state AN/GSG-5 CCCS instead of the AN/GPA-73 recommended by ADC in June 1958. Back-Up Interceptor Control with CCCS dispersed to radar stations for survivability allowed a diminished but functional SAGE capability. In 1962, Burroughs ""won the contract to provide a military version of its D825"" modular data processing system for BUIC II. BUIC II was first used at North Truro Z-10 in 1966, and the Hamilton AFB BUIC II was installed in the former MCC building when it was converted to a SAGE Combat Center in 1966. On June 3, 1963, the Direction Centers at Marysville CA, Marquette/K I Sawyer AFB MI, Stewart AFB NY, and Moses Lake WA were planned for closing, and at the end of 1969 only 6 CONUS SAGE DCs remained, all with vacuum-tube AN/FSQ-7 centrals. In 1966, NORAD Combined Operations Center operations at Chidlaw transferred to the Cheyenne Mountain Operations Center, and in December 1963 the DoD approved solid-state replacement of Martin AN/FSG-1 centrals with the AN/GSG-5 and the subsequent Hughes AN/TSQ-51. The ""416L/M/N Program Office"" at Hanscom Field had deployed the BUIC III by 1971, and the initial BUIC systems were phased out in 1974–75.
ADC had been renamed Aerospace Defense Command on January 15, 1968, and its general surveillance radar stations transferred to ADTAC in 1979 when the ADC major command was broken up."
+1190,"For airborne command posts, ""as early as 1962 the Air Force began exploring possibilities for an Airborne Warning and Control System"", and the Strategic Defense Architecture planned an integrated air defense and air traffic control network. The USAF declared full operational capability of the first seven Joint Surveillance System ROCCs on December 23, 1980, with Hughes AN/FYQ-93 systems, and many of the SAGE radar stations became Joint Surveillance System sites. The North Bay AN/FSQ-7 was dismantled and sent to Boston's Computer Museum. In 1996, AN/FSQ-7 components were moved to Moffett Federal Airfield for storage and later moved to the Computer History Museum in Mountain View, California. The last AN/FSQ-7 centrals were demolished at McChord AFB and Luke AFB. Decommissioned AN/FSQ-7 equipment was also used as science fiction cinema and TV series props."
+1191,"SAGE histories include a 1983 special issue of the Annals of the History of Computing, and various personal histories were published, e.g., Valley in 1985 and Jacobs in 1986. In 1998, the SAGE System was identified as 1 of 4 ""Monumental Projects"", and a SAGE lecture presented the vintage film In Your Defense followed by anecdotal information from Les Earnest, Jim Wong, and Paul Edwards. In 2013, a copy of a 1950s cover girl image programmed for SAGE display was identified as the ""earliest known figurative computer art"". Company histories identifying employees' roles in SAGE include the 1981 System Builders: The Story of SDC and the 1998 Architects of Information Advantage: The MITRE Corporation Since 1958."
+1192,"The contract to build the Mark II was signed with Harvard in February 1945, after the successful demonstration of the Mark I in 1944.
It was completed and debugged in 1947, and delivered to the US Navy Proving Ground at Dahlgren, Virginia in March 1948, becoming fully operational by the end of that year."
+1193,"The Mark II was constructed with high-speed electromagnetic relays instead of the electro-mechanical counters used in the Mark I, making it much faster than its predecessor. It weighed 25 short tons and occupied over 4,000 square feet of floor space. Its addition time was 0.125 seconds and the multiplication time was 0.750 seconds. This was a factor of 2.6 faster for addition and a factor of 8 faster for multiplication compared to the Mark I. It was the second machine to have floating-point hardware. A unique feature of the Mark II was that it had built-in hardware for several functions such as the reciprocal, square root, logarithm, exponential, and some trigonometric functions. These took between five and twelve seconds to execute. Additionally, the Mark II was composed of two sub-computers that could either work in tandem or operate on separate functions, to cross-check results and debug malfunctions."
+1194,"The Mark I and Mark II were not stored-program computers – they read instructions of the program one at a time from a tape and executed them. The Mark II had a peculiar programming method that was devised to ensure that the contents of a register were available when needed. The tape containing the program could encode only eight instructions, so what a particular instruction code meant depended on when it was executed. Each second was divided up into several periods, and a coded instruction could mean different things in different periods. An addition could be started in any of eight periods in the second, a multiplication could be started in any of four periods of the second, and a transfer of data could be started in any of twelve periods of the second.
Although this system worked, it made the programming complicated, and it reduced the efficiency of the machine somewhat."
+1195,"The Mark II is also known for being the computer with the first recorded instance of an actual bug disrupting its operation. The insect was extracted from the machine's electronics and taped to the log book, with the note ""first actual case of bug being found"", on September 9, 1947."
+1196,There are several methods of classifying exploits. The most common is by how the exploit communicates to the vulnerable software.
+1197,A remote exploit works over a network and exploits the security vulnerability without any prior access to the vulnerable system.
+1198,"A local exploit requires prior access to the vulnerable system and usually increases the privileges of the person running the exploit past those granted by the system administrator. Exploits against client applications also exist, usually consisting of modified servers that send an exploit if accessed with a client application. A common form of exploit against client applications is the browser exploit."
+1199,"Exploits against client applications may also require some interaction with the user and thus may be used in combination with the social engineering method. Another classification is by the action against the vulnerable system; unauthorized data access, arbitrary code execution, and denial of service are examples."
+1200,"Many exploits are designed to provide superuser-level access to a computer system. However, it is also possible to use several exploits, first to gain low-level access, then to escalate privileges repeatedly until one reaches the highest administrative level. In this case the attacker chains several exploits together to perform one attack; this is known as an exploit chain."
+1201,"After an exploit is made known to the authors of the affected software, the vulnerability is often fixed through a patch and the exploit becomes unusable.
That is the reason why some black hat hackers as well as military or intelligence agencies' hackers do not publish their exploits but keep them private."
+1202,Exploits unknown to everyone except the people who found and developed them are referred to as zero day or “0day” exploits.
+1203,"Exploits are used by hackers to bypass security controls and manipulate system vulnerabilities. Researchers have estimated that this costs the global economy over $450 billion every year. In response, organizations are using cyber threat intelligence to protect their vulnerabilities."
+1204,"Exploits are commonly categorized and named by the type of vulnerability they exploit, whether they are local/remote, and the result of running the exploit. One scheme that offers zero day exploits is exploit as a service."
+1205,"A zero-click attack is an exploit that requires no user interaction to operate – that is to say, no key-presses or mouse clicks. FORCEDENTRY, discovered in 2021, is an example of a zero-click attack."
+1206,These exploits are commonly the most sought-after exploits because the target typically has no way of knowing they have been compromised at the time of exploitation.
+1207,"In 2022, NSO Group was reportedly selling zero-click exploits to governments for breaking into individuals' phones."
+1208,"Pivoting is a method used by hackers and penetration testers to expand the attack surface of a target organization. A compromised system can be used to attack other systems on the same network that are not directly reachable from the Internet due to restrictions such as a firewall. There tend to be more machines reachable from inside a network as compared to Internet-facing hosts. For example, if an attacker compromises a web server on a corporate network, the attacker can then use the compromised web server to attack any reachable system on the network. These types of attacks are often called multi-layered attacks. Pivoting is also known as island hopping."
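The pivoting workflow just described, in which a compromised host reaches machines that a firewall hides from the attacker's own machine, can be sketched as a minimal TCP connect scanner. This is an illustrative sketch, not part of any source; the function name `scan_host` and the internal address in the final comment are hypothetical.

```python
import socket

def scan_host(host, ports, timeout=0.2):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

# Run from a compromised web server, a call such as
# scan_host("10.0.0.5", [22, 80, 445]) can reach an internal host
# (hypothetical address) that is unreachable from the Internet.
```

Running the scanner from inside the network is the whole point of the pivot: the compromised machine's network position, not any new vulnerability, is what exposes the additional hosts.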
+1209,Pivoting can further be distinguished into proxy pivoting and VPN pivoting:
+1210,"Typically, the proxy or VPN applications enabling pivoting are executed on the target computer as the payload of an exploit."
+1211,"Pivoting is usually done by infiltrating a part of a network infrastructure and using a scanner to find other connected devices to attack. By attacking a vulnerable piece of networking, an attacker could infect most or all of a network and gain complete control."
+1212,"In computing, a crash, or system crash, occurs when a computer program such as a software application or an operating system stops functioning properly and exits. On some operating systems or individual applications, a crash reporting service will report the crash and any details relating to it, usually to the developer of the application. If the program is a critical part of the operating system, the entire system may crash or hang, often resulting in a kernel panic or fatal system error."
+1213,"Most crashes are the result of a software bug. Typical causes include accessing invalid memory addresses, incorrect address values in the program counter, buffer overflow, overwriting a portion of the affected program code due to an earlier bug, executing invalid machine instructions, or triggering an unhandled exception. The original software bug that started this chain of events is typically considered to be the cause of the crash, which is discovered through the process of debugging. The original bug can be far removed from the code that actually triggered the crash."
+1214,"In early personal computers, attempting to write data to hardware addresses outside the system's main memory could cause hardware damage. Some crashes are exploitable and let a malicious program or hacker execute arbitrary code, allowing the replication of viruses or the acquisition of data which would normally be inaccessible."
+1215,An application typically crashes when it performs an operation that is not allowed by the operating system. The operating system then triggers an exception or signal in the application. Unix applications traditionally responded to the signal by dumping core. Most Windows and Unix GUI applications respond by displaying a dialogue box with the option to attach a debugger if one is installed. Some applications attempt to recover from the error and continue running instead of exiting.
+1216,An application can also contain code to crash after detecting a severe error.
+1217,Typical errors that result in application crashes include:
+1218,"A ""crash to desktop"" is said to occur when a program unexpectedly quits, abruptly taking the user back to the desktop. Usually, the term is applied only to crashes where no error is displayed, hence all the user sees as a result of the crash is the desktop. Often there is no apparent action that causes a crash to desktop. During normal function, the program may freeze for a shorter period of time, and then close by itself. Also during normal function, the program may show a black screen and repeatedly play the last few seconds of sound that was being played before it crashes to desktop. Other times it may appear to be triggered by a certain action, such as loading an area."
+1219,"Crash to desktop bugs are considered particularly problematic for users. Since they frequently display no error message, it can be very difficult to track down the source of the problem, especially if the times they occur and the actions taking place right before the crash do not appear to have any pattern or common ground. One way to track down the source of the problem for games is to run them in windowed mode. Windows Vista has a feature that can help track down the cause of a CTD problem when it occurs on any program. Windows XP included a similar feature as well."
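The crash-reporting behavior described above, where an unhandled exception is intercepted and its details recorded for the developer, can be sketched with Python's process-wide exception hook. This is a minimal illustration; `crash_log` and `crash_reporter` are hypothetical names, and a real reporting service would transmit the report rather than keep it in memory.

```python
import sys
import traceback

crash_log = []  # stand-in for a report transmitted to the developer

def crash_reporter(exc_type, exc_value, exc_tb):
    # Invoked by the interpreter for any unhandled exception, just before exit.
    crash_log.append({
        "type": exc_type.__name__,
        "message": str(exc_value),
        "trace": "".join(traceback.format_exception(exc_type, exc_value, exc_tb)),
    })

sys.excepthook = crash_reporter  # install the hook process-wide
```

Once the hook is installed, any exception that propagates out of the program lands in `crash_log` with its type, message, and full traceback, which is essentially the information a crash dialogue or core dump preserves.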
+1220,"Some computer programs, such as StepMania and BBC's Bamzooki, also crash to desktop if in full-screen, but display the error in a separate window when the user has returned to the desktop."
+1221,"The software running the web server behind a website may crash, rendering it inaccessible entirely or providing only an error message instead of normal content."
+1222,"For example: if a site is using an SQL database for a script and that SQL database server crashes, then PHP will display a connection error."
+1223,An operating system crash commonly occurs when a hardware exception occurs that cannot be handled. Operating system crashes can also occur when internal sanity-checking logic within the operating system detects that the operating system has lost its internal self-consistency.
+1224,"Modern multi-tasking operating systems, such as Linux and macOS, usually remain unharmed when an application program crashes."
+1225,"Some operating systems, e.g., z/OS, have facilities for Reliability, availability and serviceability and the OS can recover from the crash of a critical component, whether due to hardware failure, e.g., uncorrectable ECC error, or to software failure, e.g., a reference to an unassigned page."
+1226,"An Abnormal end or ABEND is an abnormal termination of software, or a program crash. Errors or crashes on the Novell NetWare network operating system are usually called ABENDs. Communities of NetWare administrators sprang up around the Internet, such as abend.org."
+1227,"This usage derives from the ABEND macro on IBM OS/360, ..., z/OS operating systems. Usually capitalized, but may appear as ""abend"". Some common ABEND codes are System ABEND 0C7 and System ABEND 0CB. Abends can be ""soft"" or ""hard"". The term is jocularly claimed to be derived from the German word ""Abend"" meaning ""evening""."
+1228,"Depending on the application, the crash report may contain the user's sensitive and private information.
Moreover, many software bugs which cause crashes are also exploitable for arbitrary code execution and other types of privilege escalation. For example, a stack buffer overflow can overwrite the return address of a subroutine with an invalid value, which will cause, e.g., a segmentation fault, when the subroutine returns. However, if an exploit overwrites the return address with a valid value, the code in that address will be executed."
+1229,"When crashes are collected in the field using a crash reporter, the next step for developers is to be able to reproduce them locally. For this, several techniques exist:
+STAR uses symbolic execution,
+EvoCrash performs evolutionary search."
+1230,"Hangs have varied causes and symptoms, including software or hardware defects, such as an infinite loop or long-running uninterruptible computation, resource exhaustion, under-performing hardware, external events such as a slow computer network, misconfiguration, and compatibility problems. The fundamental reason is typically resource exhaustion: resources necessary for some part of the system to run are not available, due to being in use by other processes or simply insufficient. Often the cause is an interaction of multiple factors, making ""hang"" a loose umbrella term rather than a technical one."
+1231,"A hang may be temporary if caused by a condition that resolves itself, such as slow hardware, or it may be permanent and require manual intervention, as in the case of a hardware or software logic error. Many modern operating systems provide the user with a means to forcibly terminate a hung program without rebooting or logging out; some operating systems, such as those designed for mobile devices, may even do this automatically. In more severe hangs affecting the whole system, the only solution might be to reboot the machine, usually by power cycling with an off/on or reset button."
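The forcible-termination mechanism described above can be sketched with Python's multiprocessing module; `hung_task`, `run_with_timeout`, and the 0.5-second deadline are illustrative choices, with an infinite loop standing in for a genuinely hung program.

```python
import multiprocessing
import time

def hung_task():
    # Simulates a hung program: an infinite loop that never returns.
    while True:
        time.sleep(0.01)

def run_with_timeout(timeout=0.5):
    p = multiprocessing.Process(target=hung_task)
    p.start()
    p.join(timeout)       # give the task a chance to finish on its own
    if p.is_alive():      # still running after the deadline: treat it as hung
        p.terminate()     # forcible termination, like an "end task" button
        p.join()
        return "terminated"
    return "finished"

if __name__ == "__main__":
    run_with_timeout()
```

Because the stuck work runs in a separate process, killing it leaves the supervising program unharmed, which is exactly why operating systems can offer this escape hatch without a reboot.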
+1232,"A hang differs from a crash, in which the failure is immediate and unrelated to the responsiveness of inputs."
+1233,"In a multitasking operating system, it is possible for an individual process or thread to get stuck, such as blocking on a resource or getting into an infinite loop, though the effect on the overall system varies significantly. In a cooperative multitasking system, any thread that gets stuck without yielding will hang the system, as it will wedge itself as the running thread and prevent other threads from running."
+1234,"By contrast, modern operating systems primarily use pre-emptive multitasking, such as Windows 2000 and its successors, as well as Linux and Apple Inc.'s macOS. In these cases, a single thread getting stuck will not necessarily hang the system, as the operating system will preempt it when its time slice expires, allowing another thread to run. If a thread does hang, the scheduler may switch to another group of interdependent tasks so that all processes will not hang. However, a stuck thread will still consume resources: at least an entry in scheduling, and if it is running, it will consume processor cycles and power when it is scheduled, slowing the system though it does not hang it."
+1235,"However, even with preemptive multitasking, a system can hang, and a misbehaved or malicious task can hang the system, primarily by monopolizing some other resource, such as IO or memory, even though processor time cannot be monopolized. For example, a process that blocks the file system will often hang the system."
+1236,Moving a window on top of a hanging program during a hang may leave a trail of window artifacts, as the hung program fails to redraw.
+1237,"Hardware can cause a computer to hang, either because it is intermittent or because it is mismatched with other hardware in the computer. Hardware can also become defective over time due to dirt or heat damage."
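The cooperative-multitasking failure mode described above, where one task that never yields wedges the whole system, can be illustrated with a toy round-robin scheduler over Python generators; all names here are illustrative, and `max_steps` merely bounds the demonstration.

```python
def well_behaved(name, log):
    # A cooperative task: does a little work, then yields control back.
    for i in range(3):
        log.append((name, i))
        yield

def round_robin(tasks, max_steps=100):
    # Toy cooperative scheduler: runs each task until it yields.
    # A task that looped without ever yielding would never return from
    # next(), wedging itself as the running task and hanging the scheduler.
    steps = 0
    while tasks and steps < max_steps:
        task = tasks.pop(0)
        try:
            next(task)
            tasks.append(task)  # re-queue the task after it yields
        except StopIteration:
            pass                # the task finished; drop it
        steps += 1
    return steps
```

With two well-behaved tasks the scheduler interleaves them and finishes; replacing one task body with `while True: pass` would spin forever inside `next()`, which is the cooperative hang the text describes. Preemptive multitasking avoids this by interrupting the stuck task when its time slice expires.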
+1238,"A hang can also occur because the programmer has written incorrect termination conditions for a loop or, in a co-operative multitasking operating system, forgotten to yield to other tasks. Said differently, many software-related hangs are caused by threads waiting for an event to occur which will never occur. This is also known as an infinite loop."
+1239,"Another cause of hangs is a race condition in communication between processes. One process may send a signal to a second process then stop execution until it receives a response. If the second process is busy the signal will be forced to wait until the process can get to it. However, if the second process was busy sending a signal to the first process then both processes would wait forever for the other to respond to signals and never see the other’s signal. If the processes are uninterruptible they will hang and have to be shut down. If at least one of the processes is a critical kernel process the whole system may hang and have to be restarted."
+1240,"A computer may seem to hang when in fact it is simply processing very slowly. This can be caused by too many programs running at once, not enough memory or memory fragmentation, slow hardware access, slow system APIs, etc. It can also be caused by hidden programs which were installed surreptitiously, such as spyware."
+1241,"In many cases programs may appear to be hung, but are making slow progress, and waiting a few minutes will allow the task to complete."
+1242,"Modern operating systems provide a mechanism for terminating hung processes, for instance, with the Unix kill command, or through a graphical means such as the Task Manager's ""end task"" button in Windows. Older systems, such as those running MS-DOS, early versions of Windows, or Classic Mac OS, often needed to be completely restarted in the event of a hang."
+1243,"On embedded devices where human interaction is limited, a watchdog timer can reboot the computer in the event of a hang." +1244,"A software bug is an error, flaw or fault in the design, development, or operation of computer software that causes it to produce an incorrect or unexpected result, or to behave in unintended ways. The process of finding and correcting bugs is termed ""debugging"" and often uses formal techniques or tools to pinpoint bugs. Since the 1950s, some computer systems have been designed to detect or auto-correct various software errors during operations." +1245,"Bugs in software can arise from mistakes and errors made in interpreting and extracting users' requirements, planning a program's design, writing its source code, and from interaction with humans, hardware and programs, such as operating systems or libraries. A program with many, or serious, bugs is often described as buggy. Bugs can trigger errors that may have ripple effects. The effects of bugs may be subtle, such as unintended text formatting, through to more obvious effects such as causing a program to crash, freezing the computer, or causing damage to hardware. Other bugs qualify as security bugs and might, for example, enable a malicious user to bypass access controls in order to obtain unauthorized privileges." +1246,"Some software bugs have been linked to disasters. Bugs in code that controlled the Therac-25 radiation therapy machine were directly responsible for patient deaths in the 1980s. In 1996, the European Space Agency's US$1 billion prototype Ariane 5 rocket was destroyed less than a minute after launch due to a bug in the on-board guidance computer program. In 1994, an RAF Chinook helicopter crashed, killing 29; this was initially blamed on pilot error, but was later thought to have been caused by a software bug in the engine-control computer. 
Buggy software caused the early 21st century British Post Office scandal, the most widespread miscarriage of justice in British legal history."
+1247,"In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that ""software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product""."
+1248,"The Middle English word bugge is the basis for the terms ""bugbear"" and ""bugaboo"", both terms for a monster."
+1249,"The term ""bug"" to describe defects has been a part of engineering jargon since the 1870s and predates electronics and computers; it may have originally been used in hardware engineering to describe mechanical malfunctions. For instance, Thomas Edison wrote in a letter to an associate in 1878:"
+1250,"Baffle Ball, the first mechanical pinball game, was advertised as being ""free of bugs"" in 1931. Problems with military gear during World War II were referred to as bugs. In a book published in 1942, Louise Dickinson Rich, speaking of a powered ice cutting machine, said, ""Ice sawing was suspended until the creator could be brought in to take the bugs out of his darling."""
+1251,"Isaac Asimov used the term ""bug"" to relate to issues with a robot in his short story ""Catch That Rabbit"", published in 1944."
+1252,"The term ""bug"" was used in an account by computer pioneer Grace Hopper, who publicized the cause of a malfunction in an early electromechanical computer. A typical version of the story is:"
+1253,"Hopper was not present when the bug was found, but it became one of her favorite stories. The date in the log book was September 9, 1947.
The operators who found it, including William ""Bill"" Burke, later of the Naval Weapons Laboratory, Dahlgren, Virginia, were familiar with the engineering term and amusedly kept the insect with the notation ""First actual case of bug being found."" This log book, complete with attached moth, is part of the collection of the Smithsonian National Museum of American History." +1254,"The related term ""debug"" also appears to predate its usage in computing: the Oxford English Dictionary's etymology of the word contains an attestation from 1945, in the context of aircraft engines." +1255,"The concept that software might contain errors dates back to Ada Lovelace's 1843 notes on the analytical engine, in which she speaks of the possibility of program ""cards"" for Charles Babbage's analytical engine being erroneous:" +1256,"While the use of the term ""bug"" to describe software errors is common, many have suggested that it should be abandoned. One argument is that the word ""bug"" is divorced from a sense that a human being caused the problem, and instead implies that the defect arose on its own, leading to a push to abandon the term ""bug"" in favor of terms such as ""defect"", with limited success." +1257,"The term ""bug"" may also be used to cover up an intentional design decision. In 2011, after receiving scrutiny from US Senator Al Franken for recording and storing users' locations in unencrypted files, Apple called the behavior a bug. However, Justin Brookman of the Center for Democracy and Technology directly challenged that portrayal, stating ""I'm glad that they are fixing what they call bugs, but I take exception with their strong denial that they track users.""" +1258,"In software engineering, mistake metamorphism refers to the evolution of a defect in the final stage of software deployment. 
The transformation of a ""mistake"" committed by an analyst in the early stages of the software development lifecycle into a ""defect"" in the final stage of the cycle has been called 'mistake metamorphism'."
+1259,"Different stages of a ""mistake"" in the entire cycle may be described as ""mistakes"", ""anomalies"", ""faults"", ""failures"", ""errors"", ""exceptions"", ""crashes"", ""glitches"", ""bugs"", ""defects"", ""incidents"", or ""side effects""."
+1260,The software industry has put much effort into reducing bug counts. These efforts include:
+1261,"Bugs usually appear when the programmer makes a logic error. Various innovations in programming style and defensive programming are designed to make these bugs less likely, or easier to spot. Some typos, especially of symbols or operators, allow the program to operate incorrectly, while others such as a missing symbol or misspelled name may prevent the program from operating. Compiled languages can reveal some typos when the source code is compiled."
+1262,"Several schemes assist in managing programmer activity so that fewer bugs are produced. Software engineering applies many techniques to prevent defects. For example, formal program specifications state the exact behavior of programs so that design bugs may be eliminated. Formal specifications are impractical for anything but the shortest programs, because of problems of combinatorial explosion and indeterminacy."
+1263,Unit testing involves writing a test for every function that a program is to perform.
+1264,In test-driven development unit tests are written before the code and the code is not considered complete until all tests complete successfully.
+1265,Agile software development involves frequent software releases with relatively small changes. Defects are revealed by user feedback.
+1266,"Open source development allows anyone to examine source code. A school of thought popularized by Eric S.
Raymond as Linus's law says that popular open-source software has more chance of having few or no bugs than other software, because ""given enough eyeballs, all bugs are shallow"". This assertion has been disputed, however: computer security specialist Elias Levy wrote that ""it is easy to hide vulnerabilities in complex, little understood and undocumented source code,"" because, ""even if people are reviewing the code, that doesn't mean they're qualified to do so."" An example of an open-source software bug was the 2008 OpenSSL vulnerability in Debian."
+1267,"Programming languages include features to help prevent bugs, such as static type systems, restricted namespaces and modular programming. For example, when a programmer writes LET REAL_VALUE PI = ""THREE AND A BIT"", although this may be syntactically correct, the code fails a type check. Compiled languages catch this without having to run the program. Interpreted languages catch such errors at runtime. Some languages deliberately exclude features that easily lead to bugs, at the expense of slower performance: the general principle being that it is almost always better to write simpler, slower code than inscrutable code that runs slightly faster, especially considering that maintenance cost is substantial. For example, the Java programming language does not support pointer arithmetic; implementations of some languages such as Pascal and scripting languages often have runtime bounds checking of arrays, at least in a debugging build."
+1268,"Tools for code analysis help developers by inspecting the program text beyond the compiler's capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable, these tools exploit the fact that human programmers tend to make certain kinds of simple mistakes often when writing software."
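The distinction drawn above between compile-time and runtime type checking can be sketched in Python; the function name and values here are invented for illustration. A static checker such as mypy would reject the bad call before the program runs, while plain CPython only discovers the mistake during execution:

```python
import math

def circle_area(radius: float) -> float:
    # The annotation documents the intended type; a static checker
    # (e.g. mypy) flags a string argument without running the program.
    return math.pi * radius ** 2

print(circle_area(2.0))  # 12.566370614359172

try:
    circle_area("three and a bit")  # type: ignore[arg-type]
except TypeError as exc:
    # An interpreted run only discovers the mistake here, at runtime.
    print("caught at runtime:", exc)
```

This mirrors the LET REAL_VALUE PI = ""THREE AND A BIT"" example: the code is syntactically valid, but the value's type does not match its declared use.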
+1269,"Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly , or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten." +1270,"Software testers are people whose primary task is to find bugs, or write code to support testing. On some efforts, more resources may be spent on testing than in developing the program." +1271,Measurements during testing can provide an estimate of the number of likely bugs remaining; this becomes more reliable the longer a product is tested and developed. +1272,"Finding and fixing bugs, or debugging, is a major part of computer programming. Maurice Wilkes, an early computing pioneer, described his realization in the late 1940s that much of the rest of his life would be spent finding mistakes in his programs." +1273,"Usually, the most difficult part of debugging is finding the bug. Once it is found, correcting it is usually relatively easy. Programs known as debuggers help programmers locate bugs by executing code line by line, watching variable values, and other features to observe program behavior. Without a debugger, code may be added so that messages or values may be written to a console or to a window or log file to trace program execution or show values." +1274,"However, even with the aid of a debugger, locating bugs is something of an art. It is not uncommon for a bug in one section of a program to cause failures in a completely different section, thus making it especially difficult to track , in an apparently unrelated part of the system." +1275,"Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmer. Such logic errors require a section of the program to be overhauled or rewritten. 
As a part of code review, stepping through the code and imagining or transcribing the execution process may often find errors without ever reproducing the bug as such."
+1276,"More typically, the first step in locating a bug is to reproduce it reliably. Once the bug is reproducible, the programmer may use a debugger or other tool while reproducing the error to find the point at which the program went astray."
+1277,"Some bugs are revealed by inputs that may be difficult for the programmer to re-create. One cause of the Therac-25 radiation machine deaths was a bug that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may stop occurring whenever the setup is augmented to help find the bug, such as running the program with a debugger; these are called heisenbugs."
+1278,"Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, interest in automated aids to debugging rose, such as static code analysis by abstract interpretation."
+1279,"Some classes of bugs have nothing to do with the code. Faulty documentation or hardware may lead to problems in system use, even though the code matches the documentation. In some cases, changes to the code eliminate the problem even though the code then no longer matches the documentation. Embedded systems frequently work around hardware bugs, since to make a new version of a ROM is much cheaper than remanufacturing the hardware, especially if they are commodity items."
+1280,"To facilitate reproducible research on testing and debugging, researchers use curated benchmarks of bugs:"
+1281,"Bug management includes the process of documenting, categorizing, assigning, reproducing, correcting and releasing the corrected code. 
Proposed changes to software – bugs as well as enhancement requests and even entire releases – are commonly tracked and managed using bug tracking systems or issue tracking systems. The items added may be called defects, tickets, issues, or, following the agile development paradigm, stories and epics. Categories may be objective, subjective or a combination, such as version number, area of the software, severity and priority, as well as what type of issue it is, such as a feature request or a bug."
+1282,"A bug triage reviews bugs and decides whether and when to fix them. The decision is based on the bug's priority, and factors such as development schedules. The triage is not meant to investigate the cause of bugs, but rather the cost of fixing them. The triage happens regularly, and goes through bugs opened or reopened since the previous meeting. The attendees of the triage process typically are the project manager, development manager, test manager, build manager, and technical experts."
+1283,"Severity is the intensity of the impact the bug has on system operation. This impact may include data loss, financial loss, loss of goodwill and wasted effort. Severity levels are not standardized. Impacts differ across industry. A crash in a video game has a totally different impact than a crash in a web browser, or real time monitoring system. For example, bug severity levels might be ""crash or hang"", ""no workaround"", ""has workaround"", ""visual defect"", or ""documentation error"". Some software publishers use more qualified severities such as ""critical"", ""high"", ""low"", ""blocker"" or ""trivial"". The severity of a bug may be a separate category from its priority for fixing, and the two may be quantified and managed separately."
+1284,"Priority controls where a bug falls on the list of planned changes. The priority is decided by each software producer. Priorities may be numerical, such as 1 through 5, or named, such as ""critical"", ""high"", ""low"", or ""deferred"". 
These rating scales may be similar or even identical to severity ratings, but are evaluated as a combination of the bug's severity with its estimated effort to fix; a bug with low severity but easy to fix may get a higher priority than a bug with moderate severity that requires excessive effort to fix. Priority ratings may be aligned with product releases, such as ""critical"" priority indicating all the bugs that must be fixed before the next software release."
+1285,"A bug severe enough to delay or halt the release of the product is called a ""show stopper"" or ""showstopper bug"". It is named so because it ""stops the show"" – causes unacceptable product failure."
+1286,"It is common practice to release software with known, low-priority bugs. Bugs of sufficiently high priority may warrant a special release of part of the code containing only modules with those fixes. These are known as patches. Most releases include a mixture of behavior changes and multiple bug fixes. Releases that emphasize bug fixes are known as maintenance releases, to differentiate them from major releases that emphasize feature additions or changes."
+1287,Reasons that a software publisher opts not to patch or even fix a particular bug include:
+1288,"In software development, a mistake or error may be introduced at any stage. Bugs arise from oversight or misunderstanding by a software team during specification, design, coding, configuration, data entry or documentation. For example, in a relatively simple program to alphabetize a list of words, the design might fail to consider what should happen when a word contains a hyphen. Or when converting an abstract design into code, the coder might inadvertently create an off-by-one error, such as a ""<"" where ""<="" was intended, and fail to sort the last word in a list."
+1289,"Another category of bug is the race condition, which may occur when programs have multiple components executing at the same time. 
If the components interact in a different order than the developer intended, they could interfere with each other and stop the program from completing its tasks. These bugs may be difficult to detect or anticipate, since they may not occur during every execution of a program."
+1290,"Conceptual errors are a developer's misunderstanding of what the software must do. The resulting software may perform according to the developer's understanding, but not what is really needed. Other types:"
+1291,"In operations on numerical values, problems can arise that result in unexpected output, slowing of a process, or crashing. These can be from a lack of awareness of the qualities of the data storage, such as a loss of precision due to rounding, numerically unstable algorithms, arithmetic overflow and underflow, or from lack of awareness of how calculations are handled by different programming languages, such as division by zero, which in some languages may throw an exception and in others may return a special value such as NaN or infinity."
+1292,"Control flow bugs are those found in processes with valid logic, but that lead to unintended results, such as infinite loops and infinite recursion, incorrect comparisons for conditional statements such as using the incorrect comparison operator, and off-by-one errors."
+1293,"The amount and type of damage a software bug may cause naturally affects decision-making, processes and policy regarding software quality. In applications such as human spaceflight, aviation, nuclear power, health care, public transport or automotive safety, since software flaws have the potential to cause human injury or even death, such software will have far more scrutiny and quality control than, for example, an online shopping website. In applications such as banking, where software flaws have the potential to cause serious financial damage to a bank or its customers, quality control is also more important than, say, a photo editing application."
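The race condition described above can be sketched in Python (the function and names are invented for illustration): several threads increment a shared counter, and the lock makes each read-modify-write atomic. Removing the lock makes the final count nondeterministic, since updates can be silently lost.

```python
import threading

counter = 0
lock = threading.Lock()

def increment_many(n: int) -> None:
    global counter
    for _ in range(n):
        # Without the lock, two threads can both read the old value,
        # both add one, and both write back -- losing an update.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment_many, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 with the lock; possibly less without it
```

This also illustrates why such bugs "may not occur during every execution": the unlocked version often happens to produce the right answer, masking the defect during testing.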
+1294,"Other than the damage caused by bugs, some of their cost is due to the effort invested in fixing them. In 1978, Lientz et al. showed that the median of projects invest 17 percent of the development effort in bug fixing. In 2020, research on GitHub repositories showed the median is 20%." +1295,"In 1994, NASA's Goddard Space Flight Center managed to reduce their average number of errors from 4.5 per 1000 lines of code down to 1 per 1000 SLOC." +1296,"Another study in 1990 reported that exceptionally good software development processes can achieve deployment failure rates as low as 0.1 per 1000 SLOC. This figure is iterated in literature such as Code Complete by Steve McConnell, and the NASA study on Flight Software Complexity. Some projects even attained zero defects: the firmware in the IBM Wheelwriter typewriter which consists of 63,000 SLOC, and the Space Shuttle software with 500,000 SLOC." +1297,"A number of software bugs have become well-known, usually due to their severity: examples include various space and military aircraft crashes. Possibly the most famous bug is the Year 2000 problem or Y2K bug, which caused many programs written long before the transition from 19xx to 20xx dates to malfunction. For instance, the date ""25 December 04,"" referring to 2004, may have been incorrectly treated as 1904, or the year ""2000"" may have been incorrectly displayed as ""19100."" As a result of a major effort taken by programmers near the end of the 20th century, the most severe consequences of this bug were averted. " +1298,The 2012 stock trading disruption involved one such incompatibility between the old API and a new API. +1299,"The Open Technology Institute, run by the group, New America, released a report ""Bugs in the System"" in August 2016 stating that U.S. policymakers should make reforms to help researchers identify and address software bugs. 
The report ""highlights the need for reform in the field of software vulnerability discovery and disclosure."" One of the report's authors said that Congress has not done enough to address cyber software vulnerability, even though Congress has passed a number of bills to combat the larger issue of cyber security." +1300,"Government researchers, companies, and cyber security experts are the people who typically discover software flaws. The report calls for reforming computer crime and copyright laws." +1301,"Software is a collection of programs and data that tell a computer how to perform specific tasks. Software often includes associated software documentation. This is in contrast to hardware, from which the system is built and which actually performs the work." +1302,"At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit or a graphics processing unit . Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example, displaying some text on a computer screen, causing state changes that should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to ""jump"" to a different instruction or is interrupted by the operating system. As of 2024, most personal computers, smartphone devices, and servers have processors with multiple execution units, or multiple processors performing computation together, so computing has become a much more concurrent activity than in the past." +1303,"The majority of software is written in high-level programming languages. 
They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. Software may also be written in a low-level assembly language that has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler."
+1304,"An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show how the engine would calculate Bernoulli numbers. Because of the proofs and the algorithm, she is considered the first computer programmer."
+1305,"The first theory about software, prior to the creation of computers as we know them today, was proposed by Alan Turing in his 1936 essay, On Computable Numbers, with an Application to the Entscheidungsproblem. This eventually led to the creation of the academic fields of computer science and software engineering; both fields study software and its creation. Computer science is the theoretical study of computers and software, whereas software engineering is the application of engineering principles to the development of software."
+1306,"In 2000, Fred Shapiro, a librarian at the Yale Law School, published a letter revealing that John Wilder Tukey's 1958 paper ""The Teaching of Concrete Mathematics"" contained the earliest known usage of the term ""software"" found in a search of JSTOR's electronic archives, predating the Oxford English Dictionary's citation by two years. This led many to credit Tukey with coining the term, particularly in obituaries published that same year, although Tukey never claimed credit for any such coinage. In 1995, Paul Niquette claimed he had originally coined the term in October 1953, although he could not find any documents supporting his claim. 
The earliest known publication of the term ""software"" in an engineering context was in August 1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum."
+1307,"On virtually all computer platforms, software can be grouped into a few broad categories."
+1308,"Based on the goal, computer software can be divided into:"
+1309,"Programming tools are also software in the form of programs or applications that developers use to create, debug, maintain, or otherwise support software."
+1310,"Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, that can be combined to accomplish a task; or they may form an integrated development environment, which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using either individual tools or an IDE."
+1311,"People who use modern general purpose computers usually see three layers of software performing a variety of tasks: platform, application, and user software."
+1312,"Computer software has to be ""loaded"" into the computer's storage. Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions."
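The execution model described above, where each instruction changes the machine's state and a ""jump"" alters control flow, can be sketched as a toy fetch-decode-execute loop. The instruction set here is invented purely for illustration:

```python
def run(program):
    """Execute a list of (opcode, operand) pairs on a one-register machine."""
    acc = 0   # accumulator: the machine's state
    pc = 0    # program counter: index of the next instruction
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JNZ" and acc != 0:
            pc = arg          # jump: change control flow explicitly
            continue
        elif op == "HALT":
            break
        pc += 1               # otherwise fall through to the next instruction
    return acc

# Count 3 down to 0 by repeatedly jumping back to the ADD instruction.
prog = [("LOAD", 3), ("ADD", -1), ("JNZ", 1), ("HALT", 0)]
print(run(prog))  # 0
```

Real processors do the same thing in hardware: instructions execute in sequence unless a jump or interrupt redirects the program counter.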
+1313,"Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using ""pointers"" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together." +1314,"Software quality is very important, especially for commercial and system software. If software is faulty, it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called ""bugs"" which are often discovered during alpha and beta testing. Software is often also a victim to what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs." +1315,"Many bugs are discovered and fixed through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that ""every program has at least one more bug"" . In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing, and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be large. Programs containing command software enable hardware engineering and system operations to function much easier together." +1316,"The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies." 
+1317,Proprietary software can be divided into two types:
+1318,"Open-source software comes with a free software license, granting the recipient the rights to modify and redistribute the software."
+1319,"Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since all useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law was applied to various aspects of the writing of the software code."
+1320,"Software patents are controversial in the software industry with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example the patent for aspect-oriented programming, which purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. 
In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continuing to allow software patents."
+1321,"Design and implementation of software vary depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the latter has much more basic functionality."
+1322,"Software is usually developed in integrated development environments like Eclipse, IntelliJ and Microsoft Visual Studio that can simplify the process and compile the software. As noted in a different section, software is usually created on top of existing software and the application programming interface that the underlying software provides, like GTK+, JavaBeans or Swing. Libraries can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library like Form1.Close and Form1.Show to close or open the application. Without these APIs, the programmer would need to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their own APIs so that many applications are written using their software libraries that usually have numerous APIs in them."
+1323,"Data structures such as hash tables, arrays, and binary trees, and algorithms such as quicksort, can be useful for creating software."
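The quicksort algorithm mentioned above can be made concrete with a short Python sketch. This is a simple, non-in-place formulation chosen for clarity rather than efficiency:

```python
def quicksort(items):
    """Sort a list by partitioning around a pivot and recursing on each side."""
    if len(items) <= 1:
        return list(items)            # base case: already sorted
    pivot, rest = items[0], items[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Production implementations typically sort in place and choose pivots more carefully to avoid worst-case behavior on already-sorted input.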
+1324,"Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods." +1325,"A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning. More informal terms for programmer also exist such as ""coder"" and ""hacker"" – although use of the latter word may cause confusion, because it is more often used to mean someone who illegally breaks into computer systems." +1326,"A video game console is an electronic device that outputs a video signal or image to display a video game that can be played with a game controller. These may be home consoles, which are generally placed in a permanent location connected to a television or other display devices and controlled with a separate game controller, or handheld consoles, which include their own display unit and controller functions built into the unit and which can be played anywhere. Hybrid consoles combine elements of both home and handheld consoles." +1327,"Video game consoles are a specialized form of a home computer geared towards video game playing, designed with affordability and accessibility to the general public in mind, but lacking in raw computing power and customization. Simplicity is achieved in part through the use of game cartridges or other simplified methods of distribution, easing the effort of launching a game. However, this leads to ubiquitous proprietary formats that create competition for market share. More recent consoles have shown further confluence with home computers, making it easy for developers to release games on multiple platforms. Further, modern consoles can serve as replacements for media players with capabilities to play films and music from optical media or streaming media services." 
+1328,"Video game consoles are usually sold on a 5–7 year cycle called a generation, with consoles made with similar technical capabilities or made around the same time period grouped into one generation. The industry has developed a razor and blades model: manufacturers often sell consoles at low prices, sometimes at a loss, while primarily making a profit from the licensing fees for each game sold. Planned obsolescence then draws consumers into buying the next console generation. While numerous manufacturers have come and gone in the history of the console market, there have always been two or three dominant leaders in the market, with the current market led by Sony , Microsoft , and Nintendo . Previous console developers include Sega, Atari, Coleco, Mattel, NEC, SNK, Fujitsu, and 3D0." +1329,"The first video game consoles were produced in the early 1970s. Ralph H. Baer devised the concept of playing simple, spot-based games on a television screen in 1966, which later became the basis of the Magnavox Odyssey in 1972. Inspired by the table tennis game on the Odyssey, Nolan Bushnell, Ted Dabney, and Allan Alcorn at Atari, Inc. developed the first successful arcade game, Pong, and looked to develop that into a home version, which was released in 1975. The first consoles were capable of playing only a very limited number of games built into the hardware. Programmable consoles using swappable ROM cartridges were introduced with the Fairchild Channel F in 1976, though popularized with the Atari 2600 released in 1977." +1330,"Handheld consoles emerged from technology improvements in handheld electronic games as these shifted from mechanical to electronic/digital logic, and away from light-emitting diode indicators to liquid-crystal displays that resembled video screens more closely. Early examples include the Microvision in 1979 and Game & Watch in 1980, and the concept was fully realized by the Game Boy in 1989." 
+1331,"Both home and handheld consoles have become more advanced following global changes in technology. These technological shifts include including improved electronic and computer chip manufacturing to increase computational power at lower costs and size, the introduction of 3D graphics and hardware-based graphic processors for real-time rendering, digital communications such as the Internet, wireless networking and Bluetooth, and larger and denser media formats as well as digital distribution." +1332,"Following the same type of Moore's law progression, home consoles are grouped into generations; each lasting approximately five years. Consoles within each generation share similar specifications and features, such as processor word size. While no one grouping of consoles by generation is universally accepted, one breakdown of generations, showing representative consoles, of each is shown below." +1333,"Most consoles are considered programmable consoles and have the means for the player to switch between different games. Traditionally, this has been done by switching a physical game cartridge or game card or through using optical media. It is now common to download games through digital distribution and store them on internal or external digital storage devices." +1334,Dedicated consoles were very popular in the first generation until they were gradually replaced by second generation that use ROM cartridges. The fourth generation gradually merged to optical media. It is now common to download games through digital distribution and store them on internal or external digital storage devices. +1335,"Some consoles are considered dedicated consoles, in which games available for the console are ""baked"" onto the hardware, either by being programmed via the circuitry or set in the read-only flash memory of the console. Thus, the console's game library cannot be added to or changed directly by the user. 
The user can typically switch between games on dedicated consoles using hardware switches on the console, or through in-game menus. Dedicated consoles were common in the first generation of home consoles, such as the Magnavox Odyssey and the home console version of Pong, and more recently have been used for retro-consoles such as the NES Classic Edition and Sega Genesis Mini." +1336,"Home video game consoles are meant to be connected to a television or other type of monitor, with power supplied through an outlet. This requires the unit to be used in a fixed location, typically at home in one's living room. Separate game controllers, connected through wired or wireless connections, are used to provide input to the game. Early examples include the Atari 2600, the Nintendo Entertainment System, and the Sega Genesis; newer examples include the Wii U, the PlayStation 5, and the Xbox Series X. Specific types of home consoles include:" +1337,"Handheld game consoles are devices that typically include a built-in screen and game controller in their case, and contain a rechargeable battery or battery compartment. This allows the unit to be carried around and played anywhere, in contrast to a home game console. Examples include the Game Boy, the PlayStation Portable, and the Nintendo 3DS." +1338,"Hybrid video game consoles are devices that can be used either as a handheld or as a home console. They have either a wired connection or docking station that connects the console unit to a television screen and fixed power source, and the potential to use a separate controller. However, they can also be used as a handheld. While prior handhelds like the Sega Nomad and PlayStation Portable, or home consoles such as the Wii U, have had these features, some consider the Nintendo Switch to be the first true hybrid console." 
+1339,"A microconsole is a home video game console that is typically powered by low-cost computing hardware, making the console lower-priced compared to other home consoles on the market. The majority of microconsoles, with a few exceptions such as the PlayStation TV and OnLive Game System, are Android-based digital media players that are bundled with gamepads and marketed as gaming devices. Such microconsoles can be connected to the television to play video games downloaded from an application store such as Google Play." +1340,"During the later part of video game history, there have been specialized consoles using computing components to offer multiple games to players. Most of these plug directly into one's television, and thus are often called plug-and-play consoles. They are also considered dedicated consoles since it is generally impossible for an average consumer to access the computing components, though tech-savvy consumers often have found ways to hack the console to install additional functionality, voiding the manufacturer's warranty. Plug-and-play consoles usually come with the console unit itself, one or more controllers, and the required components for power and video hookup. Many recent plug-and-play releases have been for distributing a number of retro games for a specific console platform. Examples of these include the Atari Flashback series, the NES Classic Edition, the Sega Genesis Mini, and handheld retro consoles such as the Nintendo Game & Watch color-screen series." +1341,"Early console hardware was designed as customized printed circuit boards, selecting existing integrated circuit chips that performed known functions, or programmable chips like erasable programmable read-only memory chips that could perform certain functions. Persistent computer memory was expensive, so dedicated consoles were generally limited to the use of processor registers for storage of the state of a game, thus limiting the complexities of such titles. 
Pong in both its arcade and home format had a handful of logic and calculation chips that used the current input of the players' paddles and resistors storing the ball's position to update the game's state, which was then sent to the display device. Even with more advanced integrated circuits of the time, designers were limited to what could be done through the electrical process rather than through programming as normally associated with video game development." +1342,"Improvements in console hardware followed with improvements in microprocessor technology and semiconductor device fabrication. Manufacturing processes have been able to reduce the feature size on chips, allowing more transistors and other components to fit on a chip, and at the same time increasing the circuit speeds and the potential frequency the chip can run at, as well as reducing thermal dissipation. Chips were able to be made on larger dies, further increasing the number of features and effective processing power. Random-access memory became more practical with the higher density of transistors per chip, but to address the correct blocks of memory, processors needed to be updated to use larger word sizes and allow for larger bandwidth in chip communications. All these improvements did increase the cost of manufacturing but at a rate far less than the gains in overall processing power, which helped to make home computers and consoles inexpensive for the consumer, all related to Moore's law of technological improvements." +1343,"For the consoles of the 1980s and 1990s, these improvements were evident in marketing during the ""bit wars"" of the late 1980s and 1990s, where console manufacturers had focused on their console's processor's word size as a selling point. Consoles since the 2000s are more similar to personal computers, building in memory, storage features, and networking capabilities to avoid the limitations of the past. 
The confluence with personal computers eased software development for both computer and console games, allowing developers to target both platforms. However, consoles differ from computers as most of the hardware components are preselected and customized between the console manufacturer and hardware component provider to assure a consistent performance target for developers. Whereas personal computer motherboards are designed to allow consumers to add their desired selection of hardware components, the fixed set of hardware for consoles enables console manufacturers to optimize the size and design of the motherboard and hardware, often integrating key hardware components into the motherboard circuitry itself. Often, multiple components such as the central processing unit and graphics processing unit can be combined into a single chip, otherwise known as a system on a chip, which is a further reduction in size and cost. In addition, consoles tend to focus on components that give the unit high game performance such as the CPU and GPU, and as a tradeoff to keep their prices in expected ranges, use less memory and storage space compared to typical personal computers." +1344,"In comparison to the early years of the industry, when most consoles were made directly by the company selling the console, many consoles of today are generally constructed through a value chain that includes component suppliers, such as AMD and Nvidia for CPU and GPU functions, and contract manufacturers such as Foxconn and Flextronics, electronics manufacturing services whose factories assemble those components into the final consoles. Completed consoles are then usually tested, distributed, and repaired by the company itself. Microsoft and Nintendo both use this approach for their consoles, while Sony maintains all production in-house with the exception of their component suppliers." 
+1345,Some of the common elements that can be found within console hardware include: +1346,"All game consoles require player input through a game controller to provide a method to move the player character in a specific direction and a variation of buttons to perform other in-game actions such as jumping or interacting with the game world. Though controllers have become more feature-rich over the years, they still provide less control over a game compared to personal computers or mobile gaming. The type of controller available to a game can fundamentally change the style of how a console game will or can be played. However, this has also inspired changes in game design to create games that accommodate the comparatively limited controls available on consoles." +1347,Controllers have come in a variety of styles over the history of consoles. Some common types include: +1348,"Numerous other controller types exist, including those that support motion controls, touchscreen support on handhelds and some consoles, and specialized controllers for specific types of games, such as racing wheels for racing games, light guns for shooting games, and musical instrument controllers for rhythm games. Some newer consoles also include optional support for mouse and keyboard devices. Some older consoles, such as the 1988 Sega Genesis, also known as the Mega Drive, and the 1993 3DO Interactive Multiplayer, supported optional mice, both with special mice made for them; but the 3DO mouse, like the console itself, sold poorly, and the Sega mouse had very limited game support. The Sega also supported the optional Menacer, a wireless infrared light gun, a type of controller that was at one point popular for games. It also supported the BatterUP, a baseball bat-shaped controller." +1349,"A controller may be attached to the console through a wired connection, or, in some unique cases like the Famicom, hardwired to the console, or connected through a wireless connection. 
Controllers require power, either provided by the console via the wired connection, or from batteries or a rechargeable battery pack for wireless connections. On handheld units, controllers are nominally built in, though some newer handhelds also allow separate wireless controllers to be used." +1350,"While the first game consoles were dedicated game systems, with the games programmed into the console's hardware, the Fairchild Channel F introduced the ability to store games in a form separate from the console's internal circuitry, thus allowing the consumer to purchase new games to play on the system. Since the Channel F, nearly all game consoles have featured the ability to purchase and swap games in some form, though those forms have changed with improvements in technology." +1351,"While magnetic storage, such as tape drives and floppy disks, had been popular for software distribution with early personal computers in the 1980s and 1990s, this format did not see much use in console systems. There were some attempts, such as the Bally Astrocade and APF-M1000 using tape drives, as well as the Disk System for the Nintendo Famicom, and the Nintendo 64DD for the Nintendo 64, but these had limited applications, as magnetic media was more fragile and volatile than game cartridges." +1352,"In addition to built-in internal storage, newer consoles often give the consumer the ability to use external storage media to save game data, downloaded games, or other media files from the console. Early iterations of external storage were achieved through the use of flash-based memory cards, first used by the Neo Geo but popularized with the PlayStation. Nintendo continues to support this approach by extending the storage capabilities of the 3DS and Switch, standardizing on the current SD card format. As consoles began incorporating the use of USB ports, support for USB external hard drives was also added, such as with the Xbox 360." 
+1353,"With Internet-enabled consoles, console manufacturers offer both free and paid-subscription services that provide value-added services atop the basic functions of the console. Free services generally offer user identity services and access to a digital storefront, while paid services allow players to play online games, interact with other users through social networking, use cloud saves for supported games, and gain access to free titles on a rotating basis. Examples of such services include the Xbox network, PlayStation Network, and Nintendo Switch Online." +1354,"Certain consoles saw various add-ons or accessories that were designed to attach to the existing console to extend its functionality. The best example of this was through the various CD-ROM add-ons for consoles of the fourth generation such as the TurboGrafx CD, Atari Jaguar CD, and the Sega CD. Other examples of add-ons include the 32X for the Sega Genesis, intended to allow owners of the aging console to play newer games but hampered by several technical faults, and the Game Boy Player for the GameCube to allow it to play Game Boy games." +1355,Consumers can often purchase a range of accessories for consoles outside of the above categories. These can include: +1356,"Console or game development kits are specialized hardware units that typically include the same components as the console and additional chips and components to allow the unit to be connected to a computer or other monitoring device for debugging purposes. A console manufacturer will make the console's dev kit available to registered developers months ahead of the console's planned launch to give developers time to prepare their games for the new system. These initial kits will usually be offered under special confidentiality clauses to protect trade secrets of the console's design, and will be sold at a high cost to the developer as part of keeping this confidentiality. 
Newer consoles that share features in common with personal computers may no longer use specialized dev kits, though developers are still expected to register and purchase access to software development kits from the manufacturer. For example, any consumer Xbox One can be used for game development after paying a fee to Microsoft to register one's intent to do so." +1357,"Since the release of the Nintendo Famicom / Nintendo Entertainment System, most video game console manufacturers employ a strict licensing scheme that limits what games can be developed for their systems. Developers and their publishers must pay a fee, typically based on a royalty per unit sold, back to the manufacturer. The cost varies by manufacturer but was estimated to be about US$3−10 per unit in 2012. With additional fees, such as branding rights, this has generally worked out to be an industry-wide 30% royalty rate paid to the console manufacturer for every game sold. This is in addition to the cost of acquiring the dev kit to develop for the system." +1358,"The licensing fee may be collected in a few different ways. In the case of Nintendo, the company generally has controlled the production of game cartridges with its lockout chips and optical media for its systems, and thus charges the developer or publisher for each copy it makes as an upfront fee. This also allows Nintendo to review the game's content prior to release and veto games it does not believe appropriate to include on its system. This led to over 700 unlicensed games for the NES, and numerous others on other Nintendo cartridge-based systems, from developers that found ways to bypass the hardware lockout chips and sell games without paying any royalties to Nintendo, such as Atari through its subsidiary Tengen. This licensing approach was similarly used by most other cartridge-based console manufacturers using lockout chip technology." 
+1359,"With optical media, where the console manufacturer may not have direct control on the production of the media, the developer or publisher typically must establish a licensing agreement to gain access to the console's proprietary storage format for the media as well as to use the console and manufacturer's logos and branding for the game's packaging, paid back through royalties on sales. In the transition to digital distribution, where now the console manufacturer runs digital storefronts for games, license fees apply to registering a game for distribution on the storefront – again gaining access to the console's branding and logo – with the manufacturer taking its cut of each sale as its royalty. In both cases, this still gives console manufacturers the ability to review and reject games it believes unsuitable for the system and deny licensing rights." +1360,"With the rise of indie game development, the major console manufacturers have all developed entry level routes for these smaller developers to be able to publish onto consoles at far lower costs and reduced royalty rates. Programs like Microsoft's ID@Xbox give developers most of the needed tools for free after validating the small development size and needs of the team." +1361,Similar licensing concepts apply for third-party accessory manufacturers. +1362,"Consoles, like most consumer electronic devices, have limited lifespans. There is great interest in preservation of older console hardware for archival and historical purposes, as games from older consoles, as well as arcade and personal computers, remain of interest. Computer programmers and hackers have developed emulators that can be run on personal computers or other consoles that simulate the hardware of older consoles that allow games from that console to be run. 
The development of software emulators of console hardware has been established to be legal, but there are unanswered legal questions surrounding copyrights, including acquiring a console's firmware and copies of a game's ROM image, which laws such as the United States' Digital Millennium Copyright Act make illegal save for certain archival purposes. Even though emulation itself is legal, Nintendo is recognized to be highly protective of any attempts to emulate its systems and has taken early legal action to shut down such projects." +1363,"To help support older games and console transitions, manufacturers started to support backward compatibility on consoles in the same family. Sony was the first to do this on a home console with the PlayStation 2, which was able to play original PlayStation content, and backward compatibility subsequently became a sought-after feature across many consoles that followed. Backward compatibility functionality has included direct support for previous console games on the newer consoles such as within the Xbox console family, the distribution of emulated games such as Nintendo's Virtual Console, or using cloud gaming services for these older games as with the PlayStation Now service." +1364,"Consoles may be shipped in a variety of configurations, but typically will include one base configuration that includes the console, one controller, and sometimes a pack-in game. Manufacturers may offer alternate stock keeping unit options that include additional controllers and accessories or different pack-in games. Special console editions may feature unique cases or faceplates with art dedicated to a specific video game or series and are bundled with that game as a special incentive for its fans. Pack-in games are typically first-party games, often featuring the console's primary mascot characters." +1365,"The more recent console generations have also seen multiple versions of the same base console system either offered at launch or presented as a mid-generation refresh. 
In some cases, these simply replace some parts of the hardware with cheaper or more efficient parts, or otherwise streamline the console's design for production going forward; the PlayStation 3 underwent several such hardware refreshes during its lifetime due to technological improvements such as significant reduction of the process node size for the CPU and GPU. In these cases, the hardware revision model will be marked on packaging so that consumers can verify which version they are acquiring." +1366,"In other cases, the hardware changes create multiple lines within the same console family. The base console unit in all revisions shares fundamental hardware, but options like internal storage space and RAM size may be different. Those systems with more storage and RAM would be marked as a higher performance variant available at a higher cost, while the original unit would remain as a budget option. For example, within the Xbox One family, Microsoft released the mid-generation Xbox One X as a higher performance console, the Xbox One S as the lower-cost base console, and a special Xbox One S All-Digital Edition revision that removed the optical drive on the basis that users could download all games digitally, offered at an even lower cost than the Xbox One S. In these cases, developers can often optimize games to work better on the higher-performance console with patches to the retail version of the game. In the case of the Nintendo 3DS, the New Nintendo 3DS featured upgraded memory and processors, with some new games that could only be run on the upgraded units and could not be run on an older base unit. 
There have also been a number of ""slimmed-down"" console options with reduced hardware components that significantly lowered the price at which the console could be sold to the consumer, but that either left certain features off the console, such as the Wii Mini, which lacked any online components compared to the Wii, or required the consumer to purchase additional accessories and wiring if they did not already own them, such as the New-Style NES, which was not bundled with the required RF hardware to connect to a television." +1367,"Consoles when originally launched in the 1970s and 1980s were about US$200−300, and with the introduction of the ROM cartridge, each game averaged about US$30−40. Over time the launch price of base console units has generally risen to about US$400−500, with the average game costing US$60. Exceptionally, the period of transition from ROM cartridges to optical media in the early 1990s saw several consoles with high price points exceeding US$400 and going as high as US$700. As a result, sales of these first optical media consoles were generally poor." +1368,"When adjusted for inflation, the price of consoles has generally followed a downward trend, from US$800−1,000 in the early generations down to US$500−600 for current consoles. This is typical for any computer technology, with the improvements in computing performance and capabilities outpacing the additional costs to achieve those gains. Further, within the United States, the price of consoles has generally remained consistent, being within 0.8% to 1% of the median household income, based on the United States Census data for the console's launch year." +1369,"Since the Nintendo Entertainment System, console pricing has stabilized on the razor and blades model, where the consoles are sold at little to no profit for the manufacturer, but they gain revenue from each game sold due to console licensing fees and other value-added services around the console. 
Console manufacturers have even been known to take losses on the sale of consoles at the start of a console's launch with the expectation to recover with revenue sharing and later price recovery on the console as they switch to less expensive components and manufacturing processes without changing the retail price. Consoles have been generally designed to have a five-year product lifetime, though manufacturers have considered their entries in the more recent generations to have longer lifetimes of seven to potentially ten years." +1370,"The competition within the video game console market as a subset of the video game industry is an area of interest to economics with its relatively modern history, its rapid growth to rival that of the film industry, and frequent changes compared to other sectors." +1371,"Effects of unregulated competition on the market were twice seen early in the industry. The industry had its first crash in 1977 following the release of the Magnavox Odyssey, Atari's home versions of Pong and the Coleco Telstar, which led other third-party manufacturers, using inexpensive General Instrument processor chips, to make their own home consoles which flooded the market by 1977.: 81–89  The video game crash of 1983 was fueled by multiple factors including competition from lower-cost personal computers, but unregulated competition was also a factor, as numerous third-party game developers, attempting to follow on the success of Activision in developing third-party games for the Atari 2600 and Intellivision, flooded the market with poor quality games, and made it difficult for even quality games to sell. Nintendo implemented a lockout chip, the Checking Integrated Circuit, on releasing the Nintendo Entertainment System in Western territories, as a means to control which games were published for the console. As part of their licensing agreements, Nintendo further prevented developers from releasing the same game on a different console for a period of two years. 
This served as one of the first means of securing console exclusivity for games that existed beyond the technical limitations of console development." +1372,"The Nintendo Entertainment System also brought the concept of a video game mascot as the representation of a console system, used as a means to sell and promote the unit; for the NES this was Mario. The use of mascots in businesses had been a tradition in Japan, and this had already proven successful in arcade games like Pac-Man. Mario was used to serve as an identity for the NES as a humor-filled, playful console. Mario caught on quickly when the NES was released in the West, and when the next generation of consoles arrived, other manufacturers pushed their own mascots to the forefront of their marketing, most notably Sega with the use of Sonic the Hedgehog. The Nintendo and Sega rivalry that involved their mascots' flagship games served as part of the fourth console generation's ""console wars"". Since then, manufacturers have typically positioned their mascot and other first-party games as key titles in console bundles used to drive sales of consoles at launch or at key sales periods such as near Christmas." +1373,"Another type of competitive edge used by console manufacturers around the same time was the notion of ""bits"", or the size of the word used by the main CPU. The TurboGrafx-16 was the first console to push its bit size, advertising itself as a ""16-bit"" console, though this only referred to part of its architecture while its CPU was still an 8-bit unit. Despite this, manufacturers found consumers became fixated on the notion of bits as a console selling point, and over the fourth, fifth and sixth generations, these ""bit wars"" played heavily into console advertising. The use of bits waned as CPU architectures no longer needed to increase their word size and instead had other means to improve performance such as through multicore CPUs." 
+1374,"Generally, increased console numbers give rise to more consumer options and better competition, but the exclusivity of titles made the choice of console an ""all-or-nothing"" decision for most consumers. Further, with the number of available consoles growing with the fifth and sixth generations, game developers became pressured to choose which systems to focus on, and ultimately narrowed their target choice of platforms to those that were the best-selling. This caused a contraction in the market, with major players like Sega leaving the hardware business after the Dreamcast but continuing in the software area. Effectively, each console generation was shown to have two or three dominant players." +1375,"Competition in the console market in the 2010s and 2020s is considered an oligopoly among three main manufacturers: Nintendo, Sony, and Microsoft. The three use a combination of first-party games exclusive to their console and negotiate exclusive agreements with third-party developers to have their games be exclusive for at least an initial period of time to drive consumers to their console. They have also worked with CPU and GPU manufacturers to tune and customize computer hardware to make it more amenable and effective for video games, leading to the lower-cost hardware needed for video game consoles. Finally, console manufacturers also work with retailers to help with promotion of consoles, games, and accessories. While there is little margin between the manufacturer's suggested retail price and the retailer's cost on console hardware for the retailer to profit from, these deals with the manufacturers can secure better profits on sales of game and accessory bundles in exchange for premier product placement. These all form network effects, with each manufacturer seeking to maximize the size of their network of partners to increase their overall position in the competition." 
+1376,"Of the three, Microsoft and Sony, both with their own hardware manufacturing capabilities, maintain a leading-edge approach, each attempting to gain a first-mover advantage over the other through the adoption of new console technology. Nintendo is more reliant on its suppliers and thus, instead of trying to compete feature for feature with Microsoft and Sony, has instead taken a ""blue ocean"" strategy since the Nintendo DS and Wii." +1377,"In computer science, a high-level programming language is a programming language with strong abstraction from the details of the computer. In contrast to low-level programming languages, it may use natural language elements, be easier to use, or may automate significant areas of computing systems, making the process of developing a program simpler and more understandable than when using a lower-level language. The amount of abstraction provided defines how ""high-level"" a programming language is." +1378,"Note that languages are not strictly interpreted languages or compiled languages. Rather, implementations of language behavior use interpreting or compiling. For example, ALGOL 60 and Fortran have both been interpreted. Similarly, Java shows the difficulty of trying to apply these labels to languages, rather than to implementations; Java is compiled to bytecode which is then executed by either interpreting or compiling it. Moreover, compiling, transcompiling, and interpreting are not strictly limited to only a description of the compiler artifact." +1379,"Alternatively, it is possible for a high-level language to be directly implemented by a computer – the computer directly executes the HLL code. This is known as a high-level language computer architecture – the computer architecture itself is designed to be targeted by a specific high-level language. The Burroughs large systems were target machines for ALGOL 60, for example." 
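The abstraction gap described above can be made concrete with a small illustration. This is a hypothetical sketch in Python, not drawn from the text: it contrasts a high-level built-in with the explicit accumulator-and-index bookkeeping that a lower-level language would force the programmer to manage directly.

```python
# Illustrative sketch: a high-level language hides machine-level
# bookkeeping (registers, addresses, explicit loops) behind built-ins.
values = [3, 1, 4, 1, 5]

# High-level formulation: one expression, no visible loop or accumulator.
total = sum(values)

# Closer to a low-level formulation: an explicit accumulator and index,
# the kind of detail an assembly or C programmer manages by hand.
total_manual = 0
i = 0
while i < len(values):
    total_manual += values[i]
    i += 1

assert total == total_manual == 14  # both formulations compute 14
```

Both versions do the same work; the difference is how much of that work the language expresses for the programmer, which is the sense in which one language is "higher-level" than another.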
+1380,"x86 is a family of complex instruction set computer instruction set architectures initially developed by Intel based on the Intel 8086 microprocessor and its 8088 variant. The 8086 was introduced in 1978 as a fully 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than can be covered by a plain 16-bit address. The term ""x86"" came into being because the names of several successors to Intel's 8086 processor end in ""86"", including the 80186, 80286, 80386 and 80486 processors. Colloquially, their names were ""186"", ""286"", ""386"" and ""486""." +1381,"The term is not synonymous with IBM PC compatibility, as this implies a multitude of other computer hardware. Embedded systems and general-purpose computers used x86 chips before the PC-compatible market started, some of them before the IBM PC debut." +1382,"As of June 2022, most desktop and laptop computers sold are based on the x86 architecture family, while mobile categories such as smartphones or tablets are dominated by ARM. At the high end, x86 continues to dominate computation-intensive workstation and cloud computing segments. The fastest supercomputer in the TOP500 list for June 2022 was the first exascale system, Frontier, built using AMD Epyc CPUs based on the x86 ISA; it broke the 1 exaFLOPS barrier in May 2022." +1383,"In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 usually represented any 8086-compatible CPU. Today, however, x86 usually implies binary compatibility with the 32-bit instruction set of the 80386 as well. This is because this instruction set has become something of a lowest common denominator for many modern operating systems, and probably also because the term became common after the introduction of the 80386 in 1985." 
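The memory segmentation mentioned above can be sketched numerically. The following is a hedged illustration of the standard real-mode segment:offset calculation of the 8086, not code from the text: a 16-bit segment value is shifted left 4 bits and added to a 16-bit offset, yielding a 20-bit physical address and thus 1 MiB of addressable memory from 16-bit registers.

```python
# Sketch of real-mode 8086 address translation (segment:offset scheme).
def physical_address(segment: int, offset: int) -> int:
    # Shift the segment left by 4 bits (multiply by 16), add the offset,
    # and wrap to the 20-bit physical address space (1 MiB).
    return ((segment << 4) + offset) & 0xFFFFF

# The classic reset vector F000:FFF0 maps to physical address 0xFFFF0.
assert physical_address(0xF000, 0xFFF0) == 0xFFFF0

# Different segment:offset pairs can alias the same physical address.
assert physical_address(0x0001, 0x0000) == physical_address(0x0000, 0x0010)
```

The scheme lets a 16-bit processor address more memory than a plain 16-bit pointer allows, at the cost of the segment-register bookkeeping that later 32-bit x86 designs made largely unnecessary.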
+1384,"A few years after the introduction of the 8086 and 8088, Intel added some complexity to its naming scheme and terminology as the ""iAPX"" prefix of the ambitious but ill-fated Intel iAPX 432 processor was tried on the more successful 8086 family of chips, applied as a kind of system-level prefix. An 8086 system, including coprocessors such as the 8087 and 8089, and simpler Intel-specific system chips, was thereby described as an iAPX 86 system. There were also the terms iRMX, iSBC, and iSBX, all together under the heading Microsystem 80. However, this naming scheme was quite temporary, lasting for a few years during the early 1980s." +1385,"Although the 8086 was primarily developed for embedded systems and small multi-user or single-user computers, largely as a response to the successful 8080-compatible Zilog Z80, the x86 line soon grew in features and processing power. Today, x86 is ubiquitous in both stationary and portable personal computers, and is also used in midrange computers, workstations, servers, and most new supercomputer clusters of the TOP500 list. A large amount of software, including a long list of x86 operating systems, runs on x86-based hardware." +1386,"Modern x86 is relatively uncommon in embedded systems, however, and small low-power applications and low-cost microprocessor markets, such as home appliances and toys, lack significant x86 presence. Simple 8- and 16-bit based architectures are common here, as well as simpler RISC architectures like RISC-V, although the x86-compatible VIA C7, VIA Nano, AMD's Geode, Athlon Neo and Intel Atom are examples of 32- and 64-bit designs used in some relatively low-power and low-cost segments." +1387,"There have been several attempts, including by Intel, to end the market dominance of the ""inelegant"" x86 architecture designed directly from the first simple 8-bit microprocessors. Examples of this are the iAPX 432, the Intel i960, Intel i860 and the Intel/Hewlett-Packard Itanium architecture. 
However, the continuous refinement of x86 microarchitectures, circuitry and semiconductor manufacturing would make it hard to replace x86 in many segments. AMD's 64-bit extension of x86 and the scalability of x86 chips in the form of modern multi-core CPUs underline x86 as an example of how continuous refinement of established industry standards can resist the competition from completely new architectures." +1388,"The table below lists processor models and model series implementing various architectures in the x86 family, in chronological order. Each line item is characterized by significantly improved or commercially successful processor microarchitecture designs." +1389,"At various times, companies such as IBM, VIA, NEC, AMD, TI, STM, Fujitsu, OKI, Siemens, Cyrix, Intersil, C&T, NexGen, UMC, and DM&P started to design or manufacture x86 processors intended for personal computers and embedded systems. Other companies that designed or manufactured x86 or x87 processors include ITT Corporation, National Semiconductor, ULSI System Technology, and Weitek." +1390,"Such x86 implementations were seldom simple copies but often employed different internal microarchitectures and different solutions at the electronic and physical levels. Quite naturally, early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386 and i486 compatible processors, often named similarly to Intel's original chips." +1391,"After the fully pipelined i486, in 1993 Intel introduced the Pentium brand name for their new set of superscalar x86 designs. 
With the x86 naming scheme now legally cleared, other x86 vendors had to choose different names for their x86-compatible products, and initially some chose to continue with variations of the numbering scheme: IBM partnered with Cyrix to produce the 5x86 and then the very efficient 6x86 and 6x86MX lines of Cyrix designs, which were the first x86 microprocessors implementing register renaming to enable speculative execution." +1392,"AMD meanwhile designed and manufactured the advanced but delayed 5k86 (marketed as the K5), which, internally, was closely based on AMD's earlier 29K RISC design; similar to NexGen's Nx586, it used a strategy such that dedicated pipeline stages decode x86 instructions into uniform and easily handled micro-operations, a method that has remained the basis for most x86 designs to this day." +1393,"Some early versions of these microprocessors had heat dissipation problems. The 6x86 was also affected by a few minor compatibility problems, the Nx586 lacked a floating-point unit and pin-compatibility, while the K5 had somewhat disappointing performance when it was introduced." +1394,"Customer ignorance of alternatives to the Pentium series further contributed to these designs being comparatively unsuccessful, despite the fact that the K5 had very good Pentium compatibility and the 6x86 was significantly faster than the Pentium on integer code. AMD later managed to grow into a serious contender with the K6 set of processors, which gave way to the very successful Athlon and Opteron." +1395,"There were also other contenders, such as Centaur Technology, Rise Technology, and Transmeta. VIA Technologies' energy-efficient C3 and C7 processors, which were designed by the Centaur company, were sold for many years following their release in 2005. Centaur's 2008 design, the VIA Nano, was their first processor with superscalar and speculative execution. 
It was introduced at about the same time as Intel introduced the Intel Atom, its first ""in-order"" processor after the P5 Pentium." +1396,"Many additions and extensions have been added to the original x86 instruction set over the years, almost consistently with full backward compatibility. The architecture family has been implemented in processors from Intel, Cyrix, AMD, VIA Technologies and many other companies; there are also open implementations, such as the Zet SoC platform. Nevertheless, of those, only Intel, AMD, VIA Technologies, and DM&P Electronics hold x86 architectural licenses, and from these, only the first two actively produce modern 64-bit designs, leading to what has been called a ""duopoly"" of Intel and AMD in x86 processors." +1397,"However, in 2014 the Shanghai-based Chinese company Zhaoxin, a joint venture between a Chinese company and VIA Technologies, began designing VIA-based x86 processors for desktops and laptops. The release of its newest ""7"" family of x86 processors, which are not quite as fast as AMD or Intel chips but are still state of the art, had been planned for 2021; as of March 2022 the release had not taken place, however." +1398,"The instruction set architecture has twice been extended to a larger word size. In 1985, Intel released the 32-bit 80386, which gradually replaced the earlier 16-bit chips in computers during the following years; this extended programming model was originally referred to as the i386 architecture but Intel later dubbed it IA-32 when introducing its IA-64 architecture." +1399,"In 1999–2003, AMD extended this 32-bit architecture to 64 bits and referred to it as x86-64 in early documents and later as AMD64. Intel soon adopted AMD's architectural extensions under the name IA-32e, later using the name EM64T and finally using Intel 64. Microsoft and Sun Microsystems/Oracle also use the term ""x64"", while many Linux distributions and the BSDs use the ""amd64"" term. 
Microsoft Windows, for example, designates its 32-bit versions as ""x86"" and 64-bit versions as ""x64"", while installation files of 64-bit Windows versions are required to be placed into a directory called ""AMD64""." +1400,"In 2023, Intel proposed a major change to the architecture referred to as x86-S, which aims to remove support for legacy execution modes and instructions. A processor implementing this proposal would start execution directly in long mode and would only support 64-bit operating systems. 32-bit code would only be supported for user applications running in ring 3, and would use the same simplified segmentation as long mode." +1401,"The x86 architecture is a variable instruction length, primarily ""CISC"" design with emphasis on backward compatibility. The instruction set is not typical CISC, however, but basically an extended version of the simple eight-bit 8008 and 8080 architectures. Byte-addressing is enabled and words are stored in memory with little-endian byte order. Memory access to unaligned addresses is allowed for almost all instructions. The largest native size for integer arithmetic and memory addresses is 16, 32 or 64 bits depending on architecture generation. Multiple scalar values can be handled simultaneously via the SIMD unit present in later generations, as described below. Immediate addressing offsets and immediate data may be expressed as 8-bit quantities for the frequently occurring cases or contexts where a −128..127 range is enough. Typical instructions are therefore 2 or 3 bytes in length." +1402,"To further conserve encoding space, most registers are expressed in opcodes using three or four bits, the latter via an opcode prefix in 64-bit mode, while at most one operand to an instruction can be a memory location. However, this memory operand may also be the destination, while the other operand, the source, can be either a register or an immediate. 
Among other factors, this contributes to a code size that rivals eight-bit machines and enables efficient use of instruction cache memory. The relatively small number of general registers has made register-relative addressing an important method of accessing operands, especially on the stack. Much work has therefore been invested in making such accesses as fast as register accesses—i.e., a one-cycle instruction throughput, in most circumstances where the accessed data is available in the top-level cache." +1403,"A dedicated floating-point processor with 80-bit internal registers, the 8087, was developed for the original 8086. This microprocessor subsequently developed into the extended 80387, and later processors incorporated a backward-compatible version of this functionality on the same microprocessor as the main processor. In addition to this, modern x86 designs also contain a SIMD unit where instructions can work in parallel on 128-bit words, each containing two or four floating-point numbers, or alternatively 2, 4, 8 or 16 integers." +1404,"The presence of wide SIMD registers means that existing x86 processors can load or store up to 128 bits of memory data in a single instruction and also perform bitwise operations on full 128-bit quantities in parallel. Intel's Sandy Bridge processors added the Advanced Vector Extensions instructions, widening the SIMD registers to 256 bits. The Intel Initial Many Core Instructions implemented by the Knights Corner Xeon Phi processors, and the AVX-512 instructions implemented by the Knights Landing Xeon Phi processors and by Skylake-X processors, use 512-bit wide SIMD registers." +1405,"During execution, current x86 processors employ a few extra decoding steps to split most instructions into smaller pieces called micro-operations. These are then handed to a control unit that buffers and schedules them in compliance with x86 semantics so that they can be executed, partly in parallel, by one of several execution units. 
These modern x86 designs are thus pipelined, superscalar, and also capable of out-of-order and speculative execution, which means they may execute multiple x86 instructions simultaneously, and not necessarily in the same order as given in the instruction stream. +Some Intel CPUs and AMD CPUs are also capable of simultaneous multithreading with two threads per core. Some Intel CPUs support transactional memory." +1406,"When introduced, in the mid-1990s, this method was sometimes referred to as a ""RISC core"" or as ""RISC translation"", partly for marketing reasons, but also because these micro-operations share some properties with certain types of RISC instructions. However, traditional microcode also inherently shares many of the same properties; the new method differs mainly in that the translation to micro-operations now occurs asynchronously. Not having to synchronize the execution units with the decode steps opens up possibilities for more analysis of the code stream, and therefore permits detection of operations that can be performed in parallel, simultaneously feeding more than one execution unit." +1407,The latest processors also do the opposite when appropriate; they combine certain x86 sequences into a more complex micro-op which fits the execution model better and thus can be executed faster or with fewer machine resources involved. +1408,"Another way to try to improve performance is to cache the decoded micro-operations, so the processor can directly access the decoded micro-operations from a special cache, instead of decoding them again. Intel followed this approach with the Execution Trace Cache feature in their NetBurst microarchitecture and later in the Decoded Stream Buffer." +1409,Transmeta used a completely different method in their Crusoe x86-compatible CPUs. They used just-in-time translation to convert x86 instructions to the CPU's native VLIW instruction set. 
Transmeta argued that their approach allows for more power-efficient designs since the CPU can forgo the complicated decode step of more traditional x86 implementations. +1410,"Addressing modes for 16-bit processor modes can be summarized by the formula: address = segment : [BX or BP] + [SI or DI] + displacement, with an 8- or 16-bit displacement." +1411,"Addressing modes for 32-bit x86 processor modes can be summarized by the formula: address = segment : base + index × scale + displacement, where scale is 1, 2, 4 or 8 and almost any general-purpose register can serve as base or index." +1412,"Addressing modes for the 64-bit processor mode can be summarized by the formula: address = base + index × scale + displacement, with the same scale factors, or alternatively RIP + displacement." +1413,Instruction relative addressing in 64-bit code simplifies the implementation of position-independent code. +1414,"The 8086 had 64 KB of eight-bit I/O space, and a 64 KB stack in memory supported by computer hardware. Only words can be pushed to the stack. The stack grows toward numerically lower addresses, with SS:SP pointing to the most recently pushed item. There are 256 interrupts, which can be invoked by both hardware and software. The interrupts can cascade, using the stack to store the return address." +1415,"The original Intel 8086 and 8088 have fourteen 16-bit registers. Four of them are general-purpose registers, although each may have an additional purpose; for example, only CX can be used as a counter with the loop instruction. Each can be accessed as two separate bytes. Two pointer registers have special roles: SP points to the ""top"" of the stack, and BP is often used to point at some other place in the stack, typically above the local variables. The registers SI, DI, BX and BP are address registers, and may also be used for array indexing." +1416,"One of four possible 'segment registers' is used to form a memory address. In the original 8086 / 8088 / 80186 / 80188, every address was built from a segment register and one of the general-purpose registers. For example, ds:si is the notation for an address formed as 16 × ds + si, to allow 20-bit addressing rather than 16 bits, although this changed in later processors. At that time only certain combinations were supported." 
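The segment-plus-offset arithmetic just described can be sketched in a few lines of Python. This is a minimal illustration, not an emulator; the helper names are mine, not part of any real tool.

```python
# Sketch of 8086 real-mode address formation: a 16-bit effective address
# (base + index + displacement, wrapping at 64 KB) is combined with a
# segment register as segment * 16 + offset, giving a 20-bit address.

def effective_address(base=0, index=0, disp=0):
    """16-bit effective address: (BX or BP) + (SI or DI) + displacement."""
    return (base + index + disp) & 0xFFFF

def physical_address(segment, offset):
    """8086 physical address: segment * 16 + offset, wrapping at 1 MB."""
    return (segment * 16 + offset) & 0xFFFFF

# ds:si with DS = 0x1234 and SI = 0x0010 addresses 0x12340 + 0x10 = 0x12350
print(hex(physical_address(0x1234, effective_address(index=0x0010))))
```

The `& 0xFFFFF` mask mirrors the 20-bit address bus of the 8086: `0xFFFF:0x0010` wraps around to physical address 0, which is exactly the behavior later controlled by the A20 gate.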
+1417,"The FLAGS register contains flags such as carry flag, overflow flag and zero flag. Finally, the instruction pointer points to the next instruction that will be fetched from memory and then executed; this register cannot be directly accessed by a program." +1418,"The Intel 80186 and 80188 are essentially an upgraded 8086 or 8088 CPU, respectively, with on-chip peripherals added, and they have the same CPU registers as the 8086 and 8088." +1419,"The 8086, 8088, 80186, and 80188 can use an optional floating-point coprocessor, the 8087. The 8087 appears to the programmer as part of the CPU and adds eight 80-bit wide registers, ST(0) to ST(7), each of which can hold numeric data in one of seven formats: 32-, 64-, or 80-bit floating point, 16-, 32-, or 64-bit integer, and 80-bit packed decimal integer. It also has its own 16-bit status register accessible through the fstsw instruction, and it is common to simply use some of its bits for branching by copying it into the normal FLAGS." +1420,"In the Intel 80286, to support protected mode, three special registers hold descriptor table addresses, and a fourth task register is used for task switching. The 80287 is the floating-point coprocessor for the 80286 and has the same registers as the 8087 with the same data formats." +1421,"With the advent of the 32-bit 80386 processor, the 16-bit general-purpose registers, base registers, index registers, instruction pointer, and FLAGS register, but not the segment registers, were expanded to 32 bits. The nomenclature represented this by prefixing an ""E"" to the register names in x86 assembly language. Thus, the AX register corresponds to the lower 16 bits of the new 32-bit EAX register, SI corresponds to the lower 16 bits of ESI, and so on. The general-purpose registers, base registers, and index registers can all be used as the base in addressing modes, and all of those registers except for the stack pointer can be used as the index in addressing modes." 
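The register aliasing described above (AX as the low 16 bits of EAX, with AH and AL as the high and low bytes of AX) can be modeled with simple bit masking. The class and property names below are illustrative only.

```python
# Sketch of 80386-style sub-register aliasing: one 32-bit value viewed
# as a 32-bit register (EAX), its low 16 bits (AX), and two bytes (AH, AL).

class Reg32:
    def __init__(self, value=0):
        self.value = value & 0xFFFFFFFF   # e.g. EAX

    @property
    def x(self):                          # e.g. AX, low 16 bits of EAX
        return self.value & 0xFFFF

    @property
    def h(self):                          # e.g. AH, high byte of AX
        return (self.value >> 8) & 0xFF

    @property
    def l(self):                          # e.g. AL, low byte of AX
        return self.value & 0xFF

eax = Reg32(0x12345678)
print(hex(eax.x), hex(eax.h), hex(eax.l))  # 0x5678 0x56 0x78
```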
+1422,"Two new segment registers were added. With a greater number of registers, instructions and operands, the machine code format was expanded. To provide backward compatibility, segments with executable code can be marked as containing either 16-bit or 32-bit instructions. Special prefixes allow inclusion of 32-bit instructions in a 16-bit segment or vice versa." +1423,"The 80386 had an optional floating-point coprocessor, the 80387; it had eight 80-bit wide registers, ST(0) to ST(7), like the 8087 and 80287. The 80386 could also use an 80287 coprocessor. With the 80486 and all subsequent x86 models, the floating-point processing unit is integrated on-chip." +1424,"The Pentium MMX added eight 64-bit MMX integer vector registers. With the Pentium III, Intel added a 32-bit Streaming SIMD Extensions control/status register and eight 128-bit SSE floating-point registers." +1425,"Starting with the AMD Opteron processor, the x86 architecture extended the 32-bit registers into 64-bit registers in a way similar to how the 16-to-32-bit extension took place. An R-prefix identifies the 64-bit registers, and eight additional 64-bit general registers were also introduced in the creation of x86-64. Also, eight more SSE vector registers were added. However, these extensions are only usable in 64-bit mode, which is one of the two modes only available in long mode. The addressing modes were not dramatically changed from 32-bit mode, except that addressing was extended to 64 bits, virtual addresses are now sign-extended to 64 bits, and other selector details were dramatically reduced. In addition, an addressing mode was added to allow memory references relative to RIP, to ease the implementation of position-independent code, used in shared libraries in some operating systems." +1426,SIMD registers XMM0–XMM15. +1427,SIMD registers YMM0–YMM15. Lower half of each of the YMM registers maps onto the corresponding XMM register. +1428,SIMD registers ZMM0–ZMM31. 
Lower half of each of the ZMM registers maps onto the corresponding YMM register. +1429,"x86 processors that have a protected mode, i.e. the 80286 and later processors, also have three descriptor registers and a task register." +1430,"32-bit x86 processors also include various special/miscellaneous registers such as control registers, debug registers, test registers, and model-specific registers." +1431,"AVX-512 has eight extra 64-bit mask registers K0–K7 for selecting elements in a vector register. Depending on the vector register and element widths, only a subset of bits of the mask register may be used by a given instruction." +1432,"Although the main registers are ""general-purpose"" in the 32-bit and 64-bit versions of the instruction set and can be used for anything, it was originally envisioned that they be used for the following purposes:" +1433,Segment registers: +1434,No particular purposes were envisioned for the other 8 registers available only in 64-bit mode. +1435,"Some instructions compile and execute more efficiently when using these registers for their designed purpose. For example, using AL as an accumulator and adding an immediate byte value to it produces the efficient add-to-AL opcode of 04h, whilst using the BL register produces the generic and longer add-to-register opcode of 80C3h. Another example is double-precision division and multiplication that works specifically with the AX and DX registers." +1436,"Modern compilers benefited from the introduction of the SIB byte, which allows registers to be treated uniformly. However, using the SIB byte universally is non-optimal, as it produces longer encodings than only using it selectively when necessary. Some special instructions lost priority in the hardware design and became slower than equivalent small code sequences. A notable example is the LODSW instruction." +1437,Note: The ?PL registers are only available in 64-bit mode. +1438,Note: The ?IL registers are only available in 64-bit mode. 
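The encoding difference mentioned above can be made concrete by emitting the machine-code bytes directly. `ADD AL, imm8` has a dedicated one-byte opcode (04h), while the generic `ADD r/m8, imm8` form needs the 80h opcode plus a ModRM byte (C3h selects BL with the /0 = ADD extension); the helper names here are mine.

```python
# Sketch of the two encodings for adding an immediate byte to a register:
# the accumulator gets a short form, any other register the generic form.

def add_al_imm8(imm):
    """ADD AL, imm8 — dedicated accumulator opcode, 2 bytes total."""
    return bytes([0x04, imm & 0xFF])

def add_bl_imm8(imm):
    """ADD BL, imm8 — generic opcode 80h + ModRM C3h, 3 bytes total."""
    return bytes([0x80, 0xC3, imm & 0xFF])

print(add_al_imm8(5).hex(), add_bl_imm8(5).hex())  # 0405 80c305
```

The one-byte saving per instruction is why compilers prefer the accumulator for such operations, as the surrounding text notes.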
+1439,"Real Address mode, commonly called Real mode, is an operating mode of 8086 and later x86-compatible CPUs. Real mode is characterized by a 20-bit segmented memory address space (giving 1 MB of addressable memory), direct software access to peripheral hardware, and no concept of memory protection or multitasking at the hardware level. All x86 CPUs in the 80286 series and later start up in real mode at power-on; 80186 CPUs and earlier had only one operational mode, which is equivalent to real mode in later chips." +1440,"In order to use more than 64 KB of memory, the segment registers must be used. This created great complications for compiler implementors, who introduced odd pointer modes such as ""near"", ""far"" and ""huge"" to leverage the implicit nature of segmented architecture to different degrees, with some pointers containing 16-bit offsets within implied segments and other pointers containing segment addresses and offsets within segments. It is technically possible to use up to 256 KB of memory for code and data, with up to 64 KB for code, by setting all four segment registers once and then only using 16-bit offsets to address memory, but this puts substantial restrictions on the way data can be addressed and memory operands can be combined, and it violates the architectural intent of the Intel designers, which is for separate data items to be contained in separate segments and addressed by their own segment addresses, in new programs that are not ported from earlier 8-bit processors with 16-bit address spaces." +1441,Unreal mode is used by some 16-bit operating systems and some 32-bit boot loaders. +1442,"The System Management Mode is only used by the system firmware, not by operating systems and applications software. SMM code runs in SMRAM." 
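One consequence of the ""far"" and ""huge"" pointer modes mentioned above is that many different segment:offset pairs name the same physical byte, so compilers that supported ""huge"" pointers normalized them before comparison. The sketch below illustrates this with hypothetical helper names.

```python
# Sketch of real-mode far-pointer aliasing and "huge" pointer normalization:
# segment * 16 + offset means e.g. 1000:0010 and 1001:0000 are the same byte.

def physical(segment, offset):
    return (segment * 16 + offset) & 0xFFFFF   # 20-bit wraparound

def normalize(segment, offset):
    """Canonical form with offset < 16, as in 'huge' pointer arithmetic."""
    linear = (segment * 16 + offset) & 0xFFFFF
    return (linear >> 4), (linear & 0xF)

# Two aliases of the same location, and its normalized form:
print(hex(physical(0x1000, 0x0010)), normalize(0x1000, 0x0123))
```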
+1443,"In addition to real mode, the Intel 80286 supports protected mode, expanding addressable physical memory to 16 MB and addressable virtual memory to 1 GB, and providing protected memory, which prevents programs from corrupting one another. This is done by using the segment registers only for storing an index into a descriptor table that is stored in memory. There are two such tables, the Global Descriptor Table and the Local Descriptor Table , each holding up to 8192 segment descriptors, each segment giving access to 64 KB of memory. In the 80286, a segment descriptor provides a 24-bit base address, and this base address is added to a 16-bit offset to create an absolute address. The base address from the table fulfills the same role that the literal value of the segment register fulfills in real mode; the segment registers have been converted from direct registers to indirect registers. Each segment can be assigned one of four ring levels used for hardware-based computer security. Each segment descriptor also contains a segment limit field which specifies the maximum offset that may be used with the segment. Because offsets are 16 bits, segments are still limited to 64 KB each in 80286 protected mode." +1444,"Each time a segment register is loaded in protected mode, the 80286 must read a 6-byte segment descriptor from memory into a set of hidden internal registers. Thus, loading segment registers is much slower in protected mode than in real mode, and changing segments very frequently is to be avoided. Actual memory operations using protected mode segments are not slowed much because the 80286 and later have hardware to check the offset against the segment limit in parallel with instruction execution." +1445,"The Intel 80386 extended offsets and also the segment limit field in each segment descriptor to 32 bits, enabling a segment to span the entire memory space. 
It also introduced support in protected mode for paging, a mechanism making it possible to use paged virtual memory. Paging allows the CPU to map any page of the virtual memory space to any page of the physical memory space. To do this, it uses additional mapping tables in memory called page tables. Protected mode on the 80386 can operate with paging either enabled or disabled; the segmentation mechanism is always active and generates virtual addresses that are then mapped by the paging mechanism if it is enabled. The segmentation mechanism can also be effectively disabled by setting all segments to have a base address of 0 and a size limit equal to the whole address space; this also requires a minimally sized segment descriptor table of only four descriptors." +1446,"Paging is used extensively by modern multitasking operating systems. Linux, 386BSD and Windows NT were developed for the 386 because it was the first Intel architecture CPU to support paging and 32-bit segment offsets. The 386 architecture became the basis of all further development in the x86 series." +1447,"x86 processors that support protected mode boot into real mode for backward compatibility with the older 8086 class of processors. Upon power-on, the processor initializes in real mode, and then begins executing instructions. Operating system boot code, which might be stored in read-only memory, may place the processor into protected mode to enable paging and other features. However, segment arithmetic, a common practice in real mode code, is not allowed in protected mode." +1448,"There is also a sub-mode of operation in 32-bit protected mode called virtual 8086 mode, also known as V86 mode. This is basically a special hybrid operating mode that allows real mode programs and operating systems to run while under the control of a protected mode supervisor operating system. 
This allows for a great deal of flexibility in running both protected mode programs and real mode programs simultaneously. This mode is exclusively available for the 32-bit version of protected mode; it does not exist in the 16-bit version of protected mode, or in long mode." +1449,"In the mid 1990s, it was obvious that the 32-bit address space of the x86 architecture was limiting its performance in applications requiring large data sets. A 32-bit address space would allow the processor to directly address only 4 GB of data, a size surpassed by applications such as video processing and database engines. Using 64-bit addresses, it is possible to directly address 16 EiB of data, although most 64-bit architectures do not support access to the full 64-bit address space; for example, AMD64 supports only 48 bits from a 64-bit address, split into four paging levels." +1450,"In 1999, AMD published a complete specification for a 64-bit extension of the x86 architecture which they called x86-64 with claimed intentions to produce. That design is currently used in almost all x86 processors, with some exceptions intended for embedded systems." +1451,"Mass-produced x86-64 chips for the general market were available four years later, in 2003, after the time was spent for working prototypes to be tested and refined; about the same time, the initial name x86-64 was changed to AMD64. The success of the AMD64 line of processors coupled with lukewarm reception of the IA-64 architecture forced Intel to release its own implementation of the AMD64 instruction set. Intel had previously implemented support for AMD64 but opted not to enable it in hopes that AMD would not bring AMD64 to market before Itanium's new IA-64 instruction set was widely adopted. It branded its implementation of AMD64 as EM64T, and later rebranded it Intel 64." 
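The 48-bit limit mentioned above implies the AMD64 ""canonical address"" rule: bits 48–63 of a virtual address must be copies of bit 47 (sign extension), otherwise a reference faults. A small check of that rule, with illustrative function and parameter names, might look like this:

```python
# Sketch of the AMD64 canonical-address check: with 48 implemented bits,
# the upper 16 bits must be a sign extension of bit 47.

def is_canonical(addr, implemented_bits=48):
    """True if all bits above (implemented_bits - 1) are copies of that bit."""
    top = addr >> (implemented_bits - 1)
    # The remaining high bits must be all zeros or all ones.
    return top == 0 or top == (1 << (64 - implemented_bits + 1)) - 1

print(is_canonical(0x0000_7FFF_FFFF_FFFF))  # True  (top of the lower half)
print(is_canonical(0x0000_8000_0000_0000))  # False (inside the "hole")
```

This is why 64-bit address spaces have a large non-canonical ""hole"" between the lower and upper halves, even though pointers are 64 bits wide.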
+1452,"In its literature and product version names, Microsoft and Sun refer to AMD64/Intel 64 collectively as x64 in the Windows and Solaris operating systems. Linux distributions refer to it either as ""x86-64"", its variant ""x86_64"", or ""amd64"". BSD systems use ""amd64"" while macOS uses ""x86_64""." +1453,"Long mode is mostly an extension of the 32-bit instruction set, but unlike the 16–to–32-bit transition, many instructions were dropped in the 64-bit mode. This does not affect actual binary backward compatibility, but it changes the way assemblers and compilers for new code have to work." +1454,"This was the first time that a major extension of the x86 architecture was initiated and originated by a manufacturer other than Intel. It was also the first time that Intel accepted technology of this nature from an outside source." +1455,"Early x86 processors could be extended with floating-point hardware in the form of a series of floating-point numerical co-processors with names like 8087, 80287 and 80387, abbreviated x87. This was also known as the NPX (numeric processor extension), an apt name since the coprocessors, while used mainly for floating-point calculations, also performed integer operations on both binary and decimal formats. With very few exceptions, the 80486 and subsequent x86 processors then integrated this x87 functionality on chip, which made the x87 instructions a de facto integral part of the x86 instruction set." +1456,"Each x87 register, known as ST(0) through ST(7), is 80 bits wide and stores numbers in the IEEE floating-point standard double extended precision format. These registers are organized as a stack with ST(0) as the top. This was done in order to conserve opcode space, and the registers are therefore randomly accessible only for either operand in a register-to-register instruction; ST(0) must always be one of the two operands, either the source or the destination, regardless of whether the other operand is ST(x) or a memory operand. 
However, random access to the stack registers can be obtained through an instruction which exchanges any specified ST(x) with ST(0)." +1457,"The operations include arithmetic and transcendental functions, including trigonometric and exponential functions, and instructions that load common constants (such as 0; 1; e, the base of the natural logarithm; log2(10); and log10(2)) into one of the stack registers. While the integer ability is often overlooked, the x87 can operate on larger integers with a single instruction than the 8086, 80286, 80386, or any x86 CPU without 64-bit extensions can, and repeated integer calculations even on small values can be accelerated by executing integer instructions on the x86 CPU and the x87 in parallel." +1458,MMX is a SIMD instruction set designed by Intel and introduced in 1997 for the Pentium MMX microprocessor. The MMX instruction set was developed from a similar concept first used on the Intel i860. It is supported on most subsequent IA-32 processors by Intel and other vendors. MMX is typically used for video processing. +1459,"MMX added 8 new registers to the architecture, known as MM0 through MM7. In reality, these new registers were just aliases for the existing x87 FPU stack registers. Hence, anything that was done to the floating-point stack would also affect the MMX registers. Unlike the FP stack, these MMn registers were fixed, not relative, and therefore they were randomly accessible. The instruction set did not adopt the stack-like semantics so that existing operating systems could still correctly save and restore the register state when multitasking without modifications." +1460,"Each of the MMn registers is 64 bits wide. However, one of the main concepts of the MMX instruction set is the concept of packed data types, which means that instead of using the whole register for a single 64-bit integer, one may use it to contain two 32-bit integers, four 16-bit integers or eight 8-bit integers. 
Given that the MMX's 64-bit MMn registers are aliased to the FPU stack and each of the floating-point registers are 80 bits wide, the upper 16 bits of the floating-point registers are unused in MMX. These bits are set to all ones by any MMX instruction, which correspond to the floating-point representation of NaNs or infinities." +1461,"In 1997, AMD introduced 3DNow!. The introduction of this technology coincided with the rise of 3D entertainment applications and was designed to improve the CPU's vector processing performance of graphic-intensive applications. 3D video game developers and 3D graphics hardware vendors use 3DNow! to enhance their performance on AMD's K6 and Athlon series of processors." +1462,"3DNow! was designed to be the natural evolution of MMX from integers to floating point. As such, it uses exactly the same register naming convention as MMX, that is MM0 through MM7. The only difference is that instead of packing integers into these registers, two single-precision floating-point numbers are packed into each register. The advantage of aliasing the FPU registers is that the same instruction and data structures used to save the state of the FPU registers can also be used to save 3DNow! register states. Thus no special modifications are required to be made to operating systems which would otherwise not know about them." +1463,"In 1999, Intel introduced the Streaming SIMD Extensions instruction set, following in 2000 with SSE2. The first addition allowed offloading of basic floating-point operations from the x87 stack and the second made MMX almost obsolete and allowed the instructions to be realistically targeted by conventional compilers. Introduced in 2004 along with the Prescott revision of the Pentium 4 processor, SSE3 added specific memory and thread-handling instructions to boost the performance of Intel's HyperThreading technology. 
AMD licensed the SSE3 instruction set and implemented most of the SSE3 instructions for its revision E and later Athlon 64 processors. The Athlon 64 does not support HyperThreading and lacks those SSE3 instructions used only for HyperThreading."
+1464,"SSE discarded all legacy connections to the FPU stack. This also meant that this instruction set discarded all legacy connections to previous generations of SIMD instruction sets like MMX. But it freed the designers up, allowing them to use larger registers, not limited by the size of the FPU registers. The designers created eight 128-bit registers, named XMM0 through XMM7. However, the downside was that operating systems had to be aware of this new set of instructions in order to be able to save their register states. So Intel created a slightly modified version of Protected mode, called Enhanced mode, which enables the usage of SSE instructions, whereas they stay disabled in regular Protected mode. An OS that is aware of SSE will activate Enhanced mode, whereas an unaware OS will only enter into traditional Protected mode."
+1465,"SSE is a SIMD instruction set that works only on floating-point values, like 3DNow!. However, unlike 3DNow! it severs all legacy connections to the FPU stack. Because it has larger registers than 3DNow!, SSE can pack twice the number of single-precision floats into its registers. The original SSE was limited to only single-precision numbers, like 3DNow!. SSE2 introduced the capability to pack double-precision numbers too, which 3DNow! had no possibility of doing, since a double-precision number is 64 bits in size, the full size of a single 3DNow! MMn register. At 128 bits, the SSE XMMn registers could pack two double-precision floats into one register. Thus SSE2 is much more suitable for scientific calculations than either SSE1 or 3DNow!, which were limited to only single precision. SSE3 does not introduce any additional registers."
+1466,"The Advanced Vector Extensions doubled the size of SSE registers to 256-bit YMM registers. It also introduced the VEX coding scheme to accommodate the larger registers, plus a few instructions to permute elements. AVX2 did not introduce extra registers, but was notable for the addition of masking, gather, and shuffle instructions."
+1467,"AVX-512 features yet another expansion to 32 512-bit ZMM registers and a new EVEX scheme. Unlike its predecessors, which were monolithic extensions, it is divided into many subsets that specific models of CPUs can choose to implement."
+1468,"Physical Address Extension or PAE was first added in the Intel Pentium Pro, and later by AMD in the Athlon processors, to allow up to 64 GB of RAM to be addressed. Without PAE, physical RAM in 32-bit protected mode is usually limited to 4 GB. PAE defines a different page table structure with wider page table entries and a third level of page table, allowing additional bits of physical address. Although the initial implementations on 32-bit processors theoretically supported up to 64 GB of RAM, chipset and other platform limitations often restricted what could actually be used. x86-64 processors define page table structures that theoretically allow up to 52 bits of physical address, although again, chipset and other platform concerns prevent such a large physical address space from being realized. On x86-64 processors PAE mode must be active before the switch to long mode, and must remain active while long mode is active, so while in long mode there is no ""non-PAE"" mode. PAE mode does not affect the width of linear or virtual addresses."
+1469,"By the 2000s, 32-bit x86 processors' limits in memory addressing were an obstacle to their use in high-performance computing clusters and powerful desktop workstations. The aged 32-bit x86 was competing with much more advanced 64-bit RISC architectures which could address much more memory.
Intel and the whole x86 ecosystem needed 64-bit memory addressing if x86 was to survive the 64-bit computing era, as workstation and desktop software applications were soon to start hitting the limits of 32-bit memory addressing. However, Intel felt that it was the right time to make a bold step and use the transition to 64-bit desktop computers as a transition away from the x86 architecture in general, an experiment which ultimately failed."
+1470,"In 2001, Intel attempted to introduce a non-x86 64-bit architecture named IA-64 in its Itanium processor, initially aiming for the high-performance computing market, hoping that it would eventually replace the 32-bit x86. While IA-64 was incompatible with x86, the Itanium processor did provide emulation abilities for translating x86 instructions into IA-64, but this affected the performance of x86 programs so badly that it was rarely, if ever, actually useful to the users: programmers would have to rewrite x86 programs for the IA-64 architecture or their performance on Itanium would be orders of magnitude worse than on a true x86 processor. The market rejected the Itanium processor since it broke backward compatibility, preferring to continue using x86 chips, and very few programs were rewritten for IA-64."
+1471,"AMD decided to take another path toward 64-bit memory addressing, making sure backward compatibility would not suffer. In April 2003, AMD released the first x86 processor with 64-bit general-purpose registers, the Opteron, capable of addressing much more than 4 GB of virtual memory using the new x86-64 extension.
The 64-bit extensions to the x86 architecture were enabled only in the newly introduced long mode; therefore, 32-bit and 16-bit applications and operating systems could simply continue using an AMD64 processor in protected or other modes, without even the slightest sacrifice of performance and with full compatibility back to the original instructions of the 16-bit Intel 8086. The market responded positively, adopting the 64-bit AMD processors for both high-performance applications and business or home computers."
+1472,"Seeing the market rejecting the incompatible Itanium processor and Microsoft supporting AMD64, Intel had to respond and introduced its own x86-64 processor, the Prescott Pentium 4, in July 2004. As a result, the Itanium processor with its IA-64 instruction set is rarely used and x86, through its x86-64 incarnation, is still the dominant CPU architecture in non-embedded computers."
+1473,"x86-64 also introduced the NX bit, which offers some protection against security bugs caused by buffer overruns."
+1474,"As a result of AMD's 64-bit contribution to the x86 lineage and its subsequent acceptance by Intel, the 64-bit RISC architectures ceased to be a threat to the x86 ecosystem and almost disappeared from the workstation market. x86-64 began to be utilized in powerful supercomputers, a market which was previously the natural habitat for 64-bit RISC designs. The great leap toward 64-bit computing and the maintenance of backward compatibility with 32-bit and 16-bit software enabled the x86 architecture to become an extremely flexible platform today, with x86 chips being utilized from small low-power systems to fast gaming desktop computers, and even dominating large supercomputing clusters, effectively leaving only the ARM 32-bit and 64-bit RISC architecture as a competitor in the smartphone and tablet market."
+1475,"Prior to 2005, x86 architecture processors were unable to meet the Popek and Goldberg requirements – a specification for virtualization created in 1974 by Gerald J. Popek and Robert P. Goldberg. However, both proprietary and open-source x86 virtualization hypervisor products were developed using software-based virtualization. Proprietary systems include Hyper-V, Parallels Workstation, VMware ESX, VMware Workstation, VMware Workstation Player and Windows Virtual PC, while free and open-source systems include QEMU, Kernel-based Virtual Machine, VirtualBox, and Xen."
+1476,The introduction of the AMD-V and Intel VT-x instruction sets in 2005 allowed x86 processors to meet the Popek and Goldberg virtualization requirements.
+1477,"APX is a set of extensions that doubles the number of general-purpose registers from 16 to 32 and adds new features to improve general-purpose performance. These extensions have been called ""generational"" and ""the biggest x86 addition since 64 bits"". Intel contributed APX support to GNU Compiler Collection 14."
+1478,"According to the architecture specification, the main features of APX are:"
+1479,"Extended GPRs for general-purpose instructions are encoded using a 2-byte REX2 prefix, while new instructions and extended operands for existing AVX/AVX2/AVX-512 instructions are encoded with an extended EVEX prefix which has four variants used for different groups of instructions."
+1480,"A handheld game console, or simply handheld console, is a small, portable self-contained video game console with a built-in screen, game controls and speakers. Handheld game consoles are smaller than home video game consoles and contain the console, screen, speakers, and controls in one unit, allowing players to carry them and play them at any time or place."
+1481,"In 1976, Mattel introduced the first handheld electronic game with the release of Auto Race.
Later, several companies—including Coleco and Milton Bradley—made their own single-game, lightweight table-top or handheld electronic game devices. The first commercially successful handheld console was Merlin, from 1978, which sold more than 5 million units. The first handheld game console with interchangeable cartridges was the Milton Bradley Microvision, released in 1979."
+1482,"Nintendo is credited with popularizing the handheld console concept with the release of the Game Boy in 1989 and continues to dominate the handheld console market. The first internet-enabled handheld console and the first with a touchscreen was the Game.com, released by Tiger Electronics in 1997. The Nintendo DS, released in 2004, introduced touchscreen controls and wireless online gaming to a wider audience, becoming the best-selling handheld console with over 150 million units sold worldwide."
+1483,"This table describes handheld game consoles by generation, with over 1 million sales. No handheld achieved this prior to the fourth generation of game consoles. This list does not include dedicated consoles, such as LCD games and the Tamagotchi."
+1484,"The origins of handheld game consoles are found in handheld and tabletop electronic game devices of the 1970s and early 1980s. These electronic devices are capable of playing only a single game, they fit in the palm of the hand or on a tabletop, and they may make use of a variety of video displays such as LED, VFD, or LCD. In 1978, handheld electronic games were described by Popular Electronics magazine as ""nonvideo electronic games"" and ""non-TV games"" as distinct from devices that required use of a television screen. Handheld electronic games, in turn, find their origins in the synthesis of previous handheld and tabletop electro-mechanical devices such as Waco's Electronic Tic-Tac-Toe and Cragstan's Periscope-Firing Range, and the emerging optoelectronic-display-driven calculator market of the early 1970s.
This synthesis happened in 1976, when ""Mattel began work on a line of calculator-sized sports games that became the world's first handheld electronic games. The project began when Michael Katz, Mattel's new product category marketing director, told the engineers in the electronics group to design a game the size of a calculator, using LED technology."""
+1485,"The result was the 1976 release of Auto Race, followed by Football in 1977. The two games were so successful that, according to Katz, ""these simple electronic handheld games turned into a '$400 million category.'"" Mattel would later win the honor of being recognized by the industry for innovation in handheld game device displays. Soon, other manufacturers including Coleco, Parker Brothers, Milton Bradley, Entex, and Bandai began following up with their own tabletop and handheld electronic games."
+1486,"In 1979 the LCD-based Microvision, designed by Smith Engineering and distributed by Milton Bradley, became the first handheld game console and the first to use interchangeable game cartridges. The Microvision game Cosmic Hunter also introduced the concept of a directional pad on handheld gaming devices, and is operated by using the thumb to manipulate the on-screen character in any of four directions."
+1487,"In 1979, Gunpei Yokoi, traveling on a bullet train, saw a bored businessman playing with an LCD calculator by pressing the buttons. Yokoi then conceived the idea of a watch that doubled as a miniature game machine for killing time. Starting in 1980, Nintendo began to release a series of electronic games designed by Yokoi called the Game & Watch games. Taking advantage of the technology used in the credit-card-sized calculators that had appeared on the market, Yokoi designed the series of LCD-based games to include a digital time display in the corner of the screen.
For later, more complicated Game & Watch games, Yokoi invented a cross-shaped directional pad or ""D-pad"" for control of on-screen characters. Yokoi also included his directional pad on the NES controllers, and the cross-shaped thumb controller soon became standard on game console controllers and has been ubiquitous across the video game industry ever since. When Yokoi began designing Nintendo's first handheld game console, he came up with a device that married the elements of his Game & Watch devices and the Famicom console, including both items' D-pad controller. The result was the Nintendo Game Boy."
+1488,"In 1982, the Bandai LCD Solarpower was the first solar-powered gaming device. Some of its games, such as the horror-themed game Terror House, feature two LCD panels, one stacked on the other, for an early 3D effect. In 1983, Takara Tomy's Tomytronic 3D simulated 3D by having two LCD panels that were lit by external light through a window on top of the device, making it the first dedicated home video 3D hardware."
+1489,"The late 1980s and early 1990s saw the beginnings of the modern-day handheld game console industry, after the demise of the Microvision. As backlit LCD game consoles with color graphics consume a lot of power, they were not battery-friendly like the non-backlit original Game Boy, whose monochrome graphics allowed longer battery life. By this point, rechargeable battery technology had not yet matured and so the more advanced game consoles of the time such as the Sega Game Gear and Atari Lynx did not have nearly as much success as the Game Boy."
+1490,"Even though third-party rechargeable batteries were available for the battery-hungry alternatives to the Game Boy, these batteries employed a nickel-cadmium process and had to be completely discharged before being recharged to ensure maximum efficiency; lead-acid batteries could be used with automobile circuit limiters, but these batteries had mediocre portability.
The later NiMH batteries, which do not share this requirement for maximum efficiency, were not released until the late 1990s, years after the Game Gear, Atari Lynx, and original Game Boy had been discontinued. At the time when these technologically superior handhelds faced strict technical limitations, batteries had very low mAh ratings, since batteries with high energy density were not yet available."
+1491,"Modern game systems such as the Nintendo DS and PlayStation Portable have rechargeable lithium-ion batteries with proprietary shapes. Other seventh-generation consoles, such as the GP2X, use standard alkaline batteries. Because the mAh rating of alkaline batteries has increased since the 1990s, the power needed for handhelds like the GP2X may be supplied by relatively few batteries."
+1492,"Nintendo released the Game Boy on April 21, 1989. The design team headed by Gunpei Yokoi had also been responsible for the Game & Watch system, as well as the Nintendo Entertainment System games Metroid and Kid Icarus. The Game Boy came under scrutiny from Nintendo president Hiroshi Yamauchi, who said that the monochrome screen was too small and the processing power inadequate. The design team had felt that low initial cost and battery economy were more important concerns, and when compared to the Microvision, the Game Boy was a huge leap forward."
+1493,"Yokoi recognized that the Game Boy needed a killer app—at least one game that would define the console, and persuade customers to buy it. In June 1988, Minoru Arakawa, then-CEO of Nintendo of America, saw a demonstration of the game Tetris at a trade show. Nintendo purchased the rights for the game, and packaged it with the Game Boy system as a launch title. It was almost an immediate hit. By the end of the year more than a million units were sold in the US. As of March 31, 2005, the Game Boy and Game Boy Color combined had sold over 118 million units worldwide."
+1494,"In 1987, Epyx created the Handy Game, a device that would become the Atari Lynx in 1989. It is the first color handheld console ever made, as well as the first with a backlit screen. It also features networking support with up to 17 other players, and advanced hardware that allows the zooming and scaling of sprites. The Lynx can also be turned upside down to accommodate left-handed players. However, all these features came at a very high price point, which drove consumers to seek cheaper alternatives. The Lynx is also very unwieldy, consumes batteries very quickly, and lacked the third-party support enjoyed by its competitors. Due to its high price, short battery life, production shortages, a dearth of compelling games, and Nintendo's aggressive marketing campaign, and despite a redesign in 1991, the Lynx became a commercial failure. Despite this, companies like Telegames helped to keep the system alive long past its commercial relevance, and after new owner Hasbro released the rights to develop for the system to the public domain, independent developers like Songbird managed to release new commercial games for it every year until 2004's Winter Games."
+1495,"The TurboExpress is a portable version of the TurboGrafx, released in 1990 for $249.99. Its Japanese equivalent is the PC Engine GT."
+1496,"It is the most advanced handheld of its time and can play all the TurboGrafx-16's games. It has a 66 mm screen, the same as the original Game Boy, but in a much higher resolution, and can display 64 sprites at once, 16 per scanline, in 512 colors, although the hardware can only handle 481 simultaneous colors. It has 8 kilobytes of RAM. The Turbo runs the HuC6280 CPU at 1.79 or 7.16 MHz."
+1497,"The optional ""TurboVision"" TV tuner includes RCA audio/video input, allowing users to use the TurboExpress as a video monitor. The ""TurboLink"" allowed two-player play.
Falcon, a flight simulator, included a ""head-to-head"" dogfight mode that can only be accessed via TurboLink. However, very few TG-16 games offered co-op play modes especially designed with the TurboExpress in mind."
+1498,The Bitcorp Gamate is one of the first handheld game systems created in response to the Nintendo Game Boy. It was released in Asia in 1990 and distributed worldwide by 1991.
+1499,"Like the Sega Game Gear, it was horizontal in orientation and, like the Game Boy, required 4 AA batteries. Unlike many later Game Boy clones, its internal components were professionally assembled. Unfortunately, the system's fatal flaw is its screen. Even by the standards of the day, its screen is rather difficult to use, suffering from ghosting problems similar to those that were common complaints with first-generation Game Boys. Likely because of this, sales were quite poor, and Bitcorp closed by 1992. However, new games continued to be published for the Asian market, possibly as late as 1994. The total number of games released for the system remains unknown."
+1500,"Gamate games were designed for stereo sound, but the console is only equipped with a mono speaker."
+1501,"The Game Gear is the third color handheld console, after the Lynx and the TurboExpress, produced by Sega. Released in Japan in 1990 and in North America and Europe in 1991, it is based on the Master System, which gave Sega the ability to quickly create Game Gear games from its large library of games for the Master System. While never reaching the level of success enjoyed by Nintendo, the Game Gear proved to be a fairly durable competitor, lasting longer than any other Game Boy rival."
+1502,"While the Game Gear is most frequently seen in black or navy blue, it was also released in a variety of additional colors: red, light blue, yellow, clear, and violet. All of these variations were released in small quantities and frequently only in the Asian market."
+1503,"Following Sega's success with the Game Gear, it began development on a successor during the early 1990s, which was intended to feature a touchscreen interface, many years before the Nintendo DS. However, such a technology was very expensive at the time, and the handheld itself was estimated to have cost around $289 were it to be released. Sega eventually chose to shelve the idea and instead release the Genesis Nomad, a handheld version of the Genesis, as the successor."
+1504,"The Watara Supervision was released in 1992 in an attempt to compete with the Nintendo Game Boy. The first model was designed very much like a Game Boy, but it is grey in color and has a slightly larger screen. The second model was made with a hinge across the center and can be bent slightly to provide greater comfort for the user. While the system did enjoy a modest degree of success, it never impacted the sales of Nintendo or Sega. The Supervision was redesigned a final time as ""The Magnum"". Released in limited quantities, it was roughly equivalent to the Game Boy Pocket. It was available in three colors: yellow, green and grey. Watara designed many of the games itself, but did receive some third-party support, most notably from Sachen."
+1505,"A TV adapter was available in both PAL and NTSC formats that could translate the Supervision's black-and-white palette into 4 colors, similar in some regards to the Super Game Boy from Nintendo."
+1506,"The Hartung Game Master is an obscure handheld released at an unknown point in the early 1990s. Its graphics fidelity was much lower than that of most of its contemporaries, displaying just 64×64 pixels. It was available in black, white, and purple, and was frequently rebranded by its distributors, such as Delplay, Videojet and Systema."
+1507,"The exact number of games released is not known, but is likely around 20. The system most frequently turns up in Europe and Australia."
+1508,"By this time, the lack of significant development in Nintendo's product line began allowing more advanced systems such as the Neo Geo Pocket Color and the WonderSwan Color to be developed."
+1509,"The Nomad was released in October 1995 in North America only. The release was six years into the market span of the Genesis, with an existing library of more than 500 Genesis games. According to former Sega of America research and development head Joe Miller, the Nomad was not intended to be the Game Gear's replacement; he believed that there was little planning from Sega of Japan for the new handheld. Sega was supporting five different consoles: Saturn, Genesis, Game Gear, Pico, and the Master System, as well as the Sega CD and 32X add-ons. In Japan, the Mega Drive had never been successful and the Saturn was more successful than Sony's PlayStation, so Sega Enterprises CEO Hayao Nakayama decided to focus on the Saturn. By 1999, the Nomad was being sold at less than a third of its original price."
+1510,"The Game Boy Pocket is a redesigned version of the original Game Boy having the same features. It was released in 1996. Notably, this variation is smaller and lighter. It comes in a variety of colors: red, yellow, green, black, clear, silver, blue, and pink. It has space for two AAA batteries, which provide approximately 10 hours of game play. The screen was changed to a true black-and-white display, rather than the ""pea soup"" monochromatic display of the original Game Boy. Although, like its predecessor, the Game Boy Pocket has no backlight to allow play in a darkened area, it did notably improve visibility and pixel response time."
+1511,"The first model of the Game Boy Pocket did not have an LED to show battery levels, but the feature was added due to public demand. The Game Boy Pocket was not a new software platform and played the same software as the original Game Boy model."
+1512,"The Game.com is a handheld game console released by Tiger Electronics in September 1997. It featured many new ideas for handheld consoles and was aimed at an older target audience, sporting PDA-style features and functions such as a touch screen and stylus. However, Tiger hoped it would also challenge Nintendo's Game Boy and gain a following among younger gamers too. Unlike other handheld game consoles, the first Game.com consoles included two slots for game cartridges, which would not happen again until the Tapwave Zodiac and the DS and DS Lite, and could be connected to a 14.4 kbit/s modem. Later models had only a single cartridge slot."
+1513,"The Game Boy Color is Nintendo's successor to the Game Boy and was released on October 21, 1998, in Japan and in November of the same year in the United States. It features a color screen, and is slightly bigger than the Game Boy Pocket. The processor is twice as fast as a Game Boy's and has twice as much memory. It also had an infrared communications port for wireless linking, which did not appear in later versions of the Game Boy, such as the Game Boy Advance."
+1514,"The Game Boy Color was a response to pressure from game developers for a new system, as they felt that the Game Boy, even in its latest incarnation, the Game Boy Pocket, was insufficient. The resulting product was backward compatible, a first for a handheld console system, and leveraged the large library of games and large installed base of the predecessor system. This became a major feature of the Game Boy line, since it allowed each new launch to begin with a significantly larger library than any of its competitors. As of March 31, 2005, the Game Boy and Game Boy Color combined had sold 118.69 million units worldwide."
+1515,"The console is capable of displaying up to 56 different colors simultaneously on screen from its palette of 32,768, and can add basic four-color shading to games that had been developed for the original Game Boy.
It can also give the sprites and backgrounds separate colors, for a total of more than four colors."
+1516,"The Neo Geo Pocket Color was released in 1999 in Japan, and later that year in the United States and Europe. It is a 16-bit color handheld game console designed by SNK, the maker of the Neo Geo home console and arcade machine. It came after SNK's original Neo Geo Pocket monochrome handheld, which debuted in 1998 in Japan."
+1517,"In 2000, following SNK's purchase by Japanese pachinko manufacturer Aruze, the Neo Geo Pocket Color was dropped from both the US and European markets, purportedly due to commercial failure."
+1518,"The system seemed well on its way to being a success in the U.S. It was more successful than any Game Boy competitor since Sega's Game Gear, but was hurt by several factors, such as SNK's infamous lack of communication with third-party developers and anticipation of the Game Boy Advance. The cost-cutting decision to ship U.S. games in cardboard boxes, rather than the hard plastic cases that Japanese and European releases were shipped in, may have also hurt US sales."
+1519,"The WonderSwan Color is a handheld game console designed by Bandai. It was released on December 9, 2000, in Japan. Although the WonderSwan Color was slightly larger and heavier than the original WonderSwan, the color version featured 512 KB of RAM and a larger color LCD screen. In addition, the WonderSwan Color is compatible with the original WonderSwan library of games."
+1520,"Prior to the WonderSwan's release, Nintendo had virtually a monopoly in the Japanese video game handheld market. After the release of the WonderSwan Color, Bandai took approximately 8% of the market share in Japan, partly due to its low price of 6800 yen. Another reason for the WonderSwan's success in Japan was the fact that Bandai managed to get a deal with Square to port over the original Famicom Final Fantasy games with improved graphics and controls.
However, with the popularity of the Game Boy Advance and the reconciliation between Square and Nintendo, the WonderSwan Color and its successor, the SwanCrystal, quickly lost their competitive advantage."
+1521,"The 2000s saw a major leap in innovation, particularly in the second half with the release of the DS and PSP."
+1522,"In 2001, Nintendo released the Game Boy Advance, which added two shoulder buttons, a larger screen, and more computing power than the Game Boy Color."
+1523,"The design was revised two years later when the Game Boy Advance SP, a more compact version, was released. The SP features a ""clamshell"" design, as well as a frontlit color display and rechargeable battery. Despite the smaller form factor, the screen remained the same size as that of the original. In 2005, the Game Boy Micro was released. This revision sacrifices screen size and backwards compatibility with previous Game Boys for a dramatic reduction in total size and a brighter backlit screen. A new SP model with a backlit screen was released in some regions around the same time."
+1524,"Along with the GameCube, the GBA also introduced the concept of ""connectivity"": using a handheld system as a console controller. A handful of games use this feature, most notably Animal Crossing, Pac-Man Vs., Final Fantasy Crystal Chronicles, The Legend of Zelda: Four Swords Adventures, The Legend of Zelda: The Wind Waker, Metroid Prime, and Sonic Adventure 2: Battle."
+1525,"As of December 31, 2007, the GBA, GBA SP, and the Game Boy Micro combined have sold 80.72 million units worldwide."
+1526,"The original GP32 was released in 2001 by the South Korean company Game Park a few months after the launch of the Game Boy Advance. It featured a 32-bit, 133 MHz CPU, an MP3 and DivX player, and an e-book reader. SmartMedia cards were used for storage, and could hold up to 128 MB of content downloaded through a USB cable from a PC. The GP32 was redesigned in 2003.
A front-lit screen was added and the new version was called the GP32 FLU. In summer 2004, another redesign, the GP32 BLU, added a backlit screen. This version of the handheld was planned for release outside South Korea and was released in some European markets, such as Spain. While not a commercial success on a level with mainstream handhelds, it ended up being used mainly as a platform for user-made applications and emulators of other systems, being popular with developers and more technically adept users."
+1527,"Nokia released the N-Gage in 2003. It was designed as a combination MP3 player, cellphone, PDA, radio, and gaming device. The system received much criticism alleging defects in its physical design and layout, including its vertically oriented screen and the requirement of removing the battery to change game cartridges. The best known of these was ""sidetalking"", or the act of placing the phone speaker and receiver on an edge of the device instead of one of the flat sides, causing the user to appear as if they are speaking into a taco."
+1528,"The N-Gage QD was later released to address the design flaws of the original. However, certain features available in the original N-Gage, including MP3 playback, FM radio reception, and USB connectivity, were removed."
+1529,"The second generation of N-Gage launched on April 3, 2008, in the form of a service for selected Nokia smartphones."
+1530,"In 2003, Tapwave released the Zodiac. It was designed to be a PDA-handheld game console hybrid. It supported photos, movies, music, Internet, and documents. The Zodiac used a special version of Palm OS 5, 5.2T, that supported the special gaming buttons and graphics chip. Two versions were available, the Zodiac 1 and 2, differing in memory and looks. The Zodiac line ended in July 2005 when Tapwave declared bankruptcy."
+1531,"The Nintendo DS was released in November 2004.
Among its new features were two screens, a touchscreen, wireless connectivity, and a microphone port. As with the Game Boy Advance SP, the DS features a clamshell design, with the two screens aligned vertically on either side of the hinge." +1532,"The DS's lower screen is touch sensitive, designed to be pressed with a stylus, a user's finger or a special ""thumb pad"". More traditional controls include four face buttons, two shoulder buttons, a D-pad, and ""Start"" and ""Select"" buttons. The console also features online capabilities via the Nintendo Wi-Fi Connection and ad-hoc wireless networking for multiplayer games with up to sixteen players. It is backwards-compatible with all Game Boy Advance games, but like the Game Boy Micro, it is not compatible with games designed for the Game Boy or Game Boy Color." +1533,"In January 2006, Nintendo revealed an updated version of the DS: the Nintendo DS Lite, with a smaller form factor, a cleaner design, longer battery life, and brighter, higher-quality displays with adjustable brightness. It is also able to connect wirelessly with Nintendo's Wii console." +1534,"On October 2, 2008, Nintendo announced the Nintendo DSi, with larger, 3.25-inch screens and two integrated cameras. It has an SD card storage slot in place of the Game Boy Advance slot, plus internal flash memory for storing downloaded games. It was released on November 1, 2008, in Japan, April 2, 2009, in Australia, April 3, 2009, in Europe, and April 5, 2009, in North America. On October 29, 2009, Nintendo announced a larger version of the DSi, called the DSi XL, which was released on November 21, 2009, in Japan, March 5, 2010, in Europe, March 28, 2010, in North America, and April 15, 2010, in Australia." +1535,"As of December 31, 2009, the Nintendo DS, Nintendo DS Lite, and Nintendo DSi combined have sold 125.13 million units worldwide." 
+1536,"The GameKing is a handheld game console released by the Chinese company TimeTop in 2004. The first model, while original in design, owes a large debt to Nintendo's Game Boy Advance. The second model, the GameKing 2, is believed to be inspired by Sony's PSP. This model was also upgraded with a backlit screen, albeit with a distracting background transparency. A color model, the GameKing 3, apparently exists, but was only made for a brief time and was difficult to purchase outside of Asia. Whether intentionally or not, the GameKing has the most primitive graphics of any handheld released since the Game Boy of 1989." +1537,"As many of the games have an ""old school"" simplicity, the device has developed a small cult following. The GameKing's speaker is quite loud, and the cartridges' sophisticated looping soundtracks are seemingly at odds with its primitive graphics." +1538,"TimeTop made at least one additional device sometimes labeled as ""GameKing"", but while it seems to possess more advanced graphics, it is essentially an emulator that plays a handful of multi-carts. Outside of Asia, however, the GameKing remains relatively unheard of due to the enduring popularity of Japanese handhelds such as those manufactured by Nintendo and Sony." +1539,"The PlayStation Portable is a handheld game console manufactured and marketed by Sony Computer Entertainment. Development of the console was first announced during E3 2003, and it was unveiled on May 11, 2004, at a Sony press conference before E3 2004. The system was released in Japan on December 12, 2004, in North America on March 24, 2005, and in the PAL region on September 1, 2005." +1540,"The PlayStation Portable is the first handheld video game console to use an optical disc format, Universal Media Disc (UMD), for distribution of its games. UMD Video discs with movies and television shows were also released. The PSP utilized the Sony/SanDisk Memory Stick Pro Duo format as its primary storage medium. 
Other distinguishing features of the console include its large viewing screen, multi-media capabilities, and connectivity with the PlayStation 3, other PSPs, and the Internet." +1541,"Tiger's Gizmondo came out in the UK during March 2005 and was released in the U.S. during October 2005. It was designed to play music, movies, and games, take and store photos with its camera, and provide GPS functions. It also has Internet capabilities and a phone for sending text and multimedia messages. Email was promised at launch but was never delivered before Gizmondo's, and ultimately Tiger Telematics', downfall in early 2006. Users obtained an unreleased second service pack hoping to find such functionality; however, Service Pack B did not activate the e-mail functionality." +1542,"The GP2X is an open-source, Linux-based handheld video game console and media player created by GamePark Holdings of South Korea, designed for homebrew developers as well as commercial developers. It is commonly used to run emulators for game consoles such as Neo-Geo, Genesis, Master System, Game Gear, Amstrad CPC, Commodore 64, Nintendo Entertainment System, TurboGrafx-16, MAME and others." +1543,"A new version called the ""F200"" was released October 30, 2007, and features a touchscreen, among other changes. It was followed by the GP2X Wiz and GP2X Caanoo." +1544,"The Dingoo A320 is a micro-sized gaming handheld that resembles the Game Boy Micro and is open to game development. It also supports music, radio, emulators and video playback with its own interface much like the PSP, and includes an onboard radio and recording program. It is currently available in two colors — white and black. Other similar products from the same manufacturer are the Dingoo A330, Dingoo A360, Dingoo A380, and Dingoo A320E." +1545,"The PSP Go is a version of the PlayStation Portable handheld game console manufactured by Sony. 
It was released on October 1, 2009, in North American and European territories, and on November 1 in Japan. It was revealed prior to E3 2009 through Sony's Qore VOD service. Although its design is significantly different from other PSPs, it is not intended to replace the PSP 3000, which Sony continued to manufacture, sell, and support. On April 20, 2011, the manufacturer announced that the PSP Go would be discontinued so that the company could concentrate on the PlayStation Vita. Sony later said that only the European and Japanese versions were being cut, and that the console would still be available in the US. +Unlike previous PSP models, the PSP Go does not feature a UMD drive, but instead has 16 GB of internal flash memory to store games, video, pictures, and other media. This can be extended by up to 32 GB with the use of a Memory Stick Micro flash card. Also unlike previous PSP models, the PSP Go's rechargeable battery is not removable or replaceable by the user. The unit is 43% lighter and 56% smaller than the original PSP-1000, and 16% lighter and 35% smaller than the PSP-3000. It has a 3.8"" 480 × 272 LCD. The screen slides up to reveal the main controls. The overall shape and sliding mechanism are similar to those of Sony's mylo COM-2 internet device." +1546,"The Pandora is a handheld game console/UMPC/PDA hybrid designed to take advantage of existing open source software and to be a target for home-brew development. It runs a full distribution of Linux, and in functionality is like a small PC with gaming controls. It is developed by OpenPandora, which is made up of former distributors and community members of the GP32 and GP2X handhelds." +1547,"OpenPandora began taking pre-orders for one batch of 4000 devices in November 2008 and after manufacturing delays, began shipping to customers on May 21, 2010." +1548,"The FC-16 Go is a portable Super NES hardware clone manufactured by Yobo Gameware in 2009. 
It features a 3.5-inch display, two wireless controllers, and CRT cables that allow cartridges to be played on a television screen. Unlike other Super NES clone consoles, it has region tabs that only allow NTSC North American cartridges to be played. Later revisions feature stereo sound output, larger shoulder buttons, and a slightly re-arranged button, power, and A/V output layout." +1549,"The Nintendo 3DS is the successor to Nintendo's DS handheld. The autostereoscopic device is able to project stereoscopic three-dimensional effects without requiring active shutter or passive polarized glasses, which most current 3D televisions need to display the 3D effect. The 3DS was released in Japan on February 26, 2011; in Europe on March 25, 2011; in North America on March 27, 2011; and in Australia on March 31, 2011. The system features backward compatibility with Nintendo DS series software, including Nintendo DSi software except titles that require the Game Boy Advance slot. It also features an online service called the Nintendo eShop, launched on June 6, 2011, in North America and June 7, 2011, in Europe and Japan, which allows owners to download games, demos, applications and information on upcoming film and game releases. On November 24, 2011, a limited edition Legend of Zelda 25th Anniversary 3DS was released that contained a unique Cosmo Black unit decorated with gold Legend of Zelda related imagery, along with a copy of The Legend of Zelda: Ocarina of Time 3D." +1550,"There are also other models, including the Nintendo 2DS and the New Nintendo 3DS (the latter, like the original Nintendo 3DS, also available in a larger variant), as well as the New Nintendo 2DS XL." +1551,"The Sony Ericsson Xperia PLAY is a handheld game console smartphone produced by Sony Ericsson under the Xperia smartphone brand. The device runs Android 2.3 Gingerbread, and is the first to be part of the PlayStation Certified program, which means that it can play PlayStation Suite games. 
The device is a horizontally sliding phone with its original form resembling the Xperia X10, while the slider below resembles the slider of the PSP Go. The slider features a D-pad on the left side, a set of standard PlayStation buttons on the right, a long rectangular touchpad in the middle, start and select buttons on the bottom right corner, a menu button on the bottom left corner, and two shoulder buttons on the back of the device. It is powered by a 1 GHz Qualcomm Snapdragon processor, a Qualcomm Adreno 205 GPU, and features a display measuring 4.0 inches, an 8-megapixel camera, 512 MB RAM, 8 GB internal storage, and a micro-USB connector. It supports microSD cards, versus the Memory Stick variants used in PSP consoles. The device was revealed officially for the first time in a Super Bowl ad on Sunday, February 6, 2011. On February 13, 2011, at Mobile World Congress 2011, it was announced that the device would be shipping globally in March 2011, with a launch lineup of around 50 software titles." +1552,"The PlayStation Vita is the successor to Sony's PlayStation Portable handheld series. It was released in Japan on December 17, 2011, and in Europe, Australia, and North and South America on February 22, 2012." +1553,"The handheld includes two analog sticks, a 5-inch OLED/LCD multi-touch capacitive touchscreen, and supports Bluetooth, Wi-Fi and optional 3G. Internally, the PS Vita features a four-core ARM Cortex-A9 MPCore processor and a four-core SGX543MP4+ graphics processing unit, as well as LiveArea software as its main user interface, which succeeds the XrossMediaBar." +1554,"The device is fully backwards-compatible with PlayStation Portable games digitally released on the PlayStation Network via the PlayStation Store. However, PSone Classics and PS2 titles were not compatible at the time of the primary public release in Japan. The Vita's dual analog sticks will be supported on selected PSP games. 
The graphics for PSP releases will be up-scaled, with a smoothing filter to reduce pixelation." +1555,"On September 20, 2018, Sony announced at Tokyo Game Show 2018 that the Vita would be discontinued in 2019, ending its hardware production. Production of Vita hardware officially ended on March 1, 2019." +1556,"The Razer Switchblade was a prototype pocket-sized gaming device, similar in size to a Nintendo DSi XL, designed to run Windows 7. It featured a multi-touch LCD screen and an adaptive keyboard that changed keys depending on the game the user would play. It was also to feature a full mouse." +1557,"It was first unveiled on January 5, 2011, at the Consumer Electronics Show. The Switchblade won The Best of CES 2011 People's Voice award. It remained in development with no announced release date, and the project has likely been suspended indefinitely." +1558,"Project Shield is a handheld system developed by Nvidia and announced at CES 2013. It runs on Android 4.2 and uses the Nvidia Tegra 4 SoC. The hardware includes a 5-inch multitouch screen with support for HD graphics. The console allows for the streaming of games running on a compatible desktop PC or laptop." +1559,"Nvidia Shield Portable has received a mixed reception from critics. Generally, reviewers praised the performance of the device, but criticized the cost and lack of worthwhile games. Engadget's review noted the system's ""extremely impressive PC gaming"", but also that due to its high price, the device was ""a hard sell as a portable game console"", especially when compared to similar handhelds on the market. 
CNET's Eric Franklin states in his review of the device that ""The Nvidia Shield is an extremely well made device, with performance that pretty much obliterates any mobile product before it; but like most new console launches, there is currently a lack of available games worth your time."" Eurogamer's comprehensive review provides a detailed account of the device and its features, concluding: ""In the here and now, the first-gen Shield Portable is a gloriously niche, luxury product - the most powerful Android system on the market by a clear stretch and possessing a unique link to PC gaming that's seriously impressive in beta form, and can only get better.""" +1560,"The Nintendo Switch is a hybrid console that can either be used in handheld form or inserted into a docking station attached to a television to play on a bigger screen. The Switch features two detachable wireless controllers, called Joy-Con, which can be used individually or attached to a grip to provide a traditional gamepad form. A handheld-only revision named Nintendo Switch Lite was released on September 20, 2019." +1561,"The Switch Lite had sold about 1.95 million units worldwide by September 30, 2019, only 10 days after its launch." +1562,"Evercade is a handheld game console developed and manufactured by UK company Blaze Entertainment. It focuses on retrogaming with ROM cartridges that each contain a number of emulated games. Development began in 2018, and the console was released in May 2020, after a few delays. Upon its launch, the console offered 10 game cartridges with a combined total of 122 games." +1563,"Arc System Works, Atari, Data East, Interplay Entertainment, Bandai Namco Entertainment and Piko Interactive have released emulated versions of their games for the Evercade. Pre-existing homebrew games have also been re-released for the console by Mega Cat Studios. 
The Evercade is capable of playing games originally released for the Atari 2600, the Atari 7800, the Atari Lynx, the NES, the SNES, and the Sega Genesis/Mega Drive." +1564,"The Analogue Pocket is an FPGA-based handheld game console designed and manufactured by Analogue. It is designed to play games made for handhelds of the fourth, fifth and sixth generations of video game consoles. The console features a design reminiscent of the Game Boy, with additional buttons for the supported platforms. It features a 3.5"" 1600x1440 LTPS LCD display, an SD card port, and a link cable port compatible with Game Boy link cables. The Analogue Pocket uses an Altera Cyclone V processor, and is compatible with original Game Boy, Game Boy Color and Game Boy Advance cartridges out of the box. With cartridge adapters the Analogue Pocket can play Game Gear, Neo Geo Pocket, Neo Geo Pocket Color and Atari Lynx game cartridges. The Analogue Pocket includes an additional FPGA, allowing third-party FPGA development. The Analogue Pocket was released in December 2021." +1565,"The Steam Deck is a handheld computer device developed by Valve, which runs SteamOS 3.0, a tailored distro of Arch Linux, and includes support for Proton, a compatibility layer that allows most Microsoft Windows games to be played on the Linux-based operating system. In terms of hardware, the Deck includes a custom AMD APU based on their Zen 2 and RDNA 2 architectures, with the CPU being a four-core/eight-thread unit and the GPU running on eight compute units with a total estimated performance of 1.6 TFLOPS. Both the CPU and GPU use variable clock frequencies, with the CPU running between 2.4 and 3.5 GHz and the GPU between 1.0 and 1.6 GHz based on current processor needs. Valve stated that the CPU has comparable performance to Ryzen 3000 desktop computer processors and the GPU performance to the Radeon RX 6000 series. The Deck includes 16 GB of LPDDR5 RAM in a quad channel configuration." 
+1566,"Valve revealed the Steam Deck on July 15, 2021, with pre-orders opening the next day. The Deck was expected to ship in December 2021 to the US, Canada, the EU and the UK but was delayed to February 2022, with other regions to follow in 2022. Pre-orders were limited to those with Steam accounts opened before June 2021 to prevent resellers from controlling access to the device. Pre-order reservations through the Steam storefront on July 16, 2021, briefly crashed the servers due to demand. While initial shipments were still planned for February 2022, Valve reported to new purchasers that wider availability would come later, with the 64 GB model and 256 GB NVMe model due in Q2 2022, and the 512 GB NVMe model by Q3 2022. The Steam Deck was released on February 25, 2022." +1567,"ARM is a family of RISC instruction set architectures for computer processors. Arm Ltd. develops the ISAs and licenses them to other companies, who build the physical devices that use the instruction set. It also designs and licenses cores that implement these ISAs." +1568,"Due to their low costs, low power consumption, and low heat generation, ARM processors are useful for light, portable, battery-powered devices, including smartphones, laptops, and tablet computers, as well as embedded systems. However, ARM processors are also used for desktops and servers, including the world's fastest supercomputer from 2020 to 2022. With over 230 billion ARM chips produced as of 2022, ARM is the most widely used family of instruction set architectures." +1569,"There have been several generations of the ARM design. The original ARM1 used a 32-bit internal structure but had a 26-bit address space that limited it to 64 MB of main memory. This limitation was removed in the ARMv3 series, which has a 32-bit address space, and several additional generations up to ARMv7 remained 32-bit. 
Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set. Arm Ltd. has also released a series of additional instruction sets for different roles; the ""Thumb"" extension adds both 32- and 16-bit instructions for improved code density, while Jazelle added instructions for directly handling Java bytecode. More recent changes include the addition of simultaneous multithreading for improved performance or fault tolerance." +1570,"Acorn Computers' first widely successful design was the BBC Micro, introduced in December 1981. This was a relatively conventional machine based on the MOS Technology 6502 CPU but ran at roughly double the performance of competing designs like the Apple II due to its use of faster dynamic random-access memory (DRAM). Typical DRAM of the era ran at about 2 MHz; Acorn arranged a deal with Hitachi for a supply of faster 4 MHz parts." +1571,"Machines of the era generally shared memory between the processor and the framebuffer, which allowed the processor to quickly update the contents of the screen without having to perform separate input/output operations. As the timing of the video display is exacting, the video hardware had to have priority access to that memory. Due to a quirk of the 6502's design, the CPU left the memory untouched for half of the time. Thus, by running the CPU at 1 MHz, the video system could read data during those down times, using the full 2 MHz bandwidth of the RAM. In the BBC Micro, the use of 4 MHz RAM allowed the same technique to be used, but running at twice the speed. This allowed it to outperform any similar machine on the market." 
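The bandwidth-sharing arithmetic described above can be sketched in a few lines. The following is a minimal, illustrative Python model (the `shared_bandwidth` helper and its strict 50/50 split are assumptions for the sketch, not a cycle-accurate simulation of either machine):

```python
# Illustrative sketch: a 6502-style CPU touches memory only during one
# half of each clock period, so DRAM running at twice the CPU clock can
# serve the video hardware on the other half with neither side waiting.

def shared_bandwidth(ram_accesses_per_sec, cpu_fraction=0.5):
    """Split total RAM accesses between CPU and video on alternating phases."""
    cpu = int(ram_accesses_per_sec * cpu_fraction)  # CPU's half of the cycles
    video = ram_accesses_per_sec - cpu              # video fills the rest
    return cpu, video

# Apple II-style machine: 2 MHz DRAM, 1 MHz CPU -> 1 MHz effective each.
cpu, video = shared_bandwidth(2_000_000)
assert (cpu, video) == (1_000_000, 1_000_000)

# BBC Micro: 4 MHz DRAM, 2 MHz CPU -> the same split at double the speed.
cpu, video = shared_bandwidth(4_000_000)
assert (cpu, video) == (2_000_000, 2_000_000)
```

The point of the sketch is the ratio: doubling the DRAM speed doubles both the CPU's and the video system's effective bandwidth without any arbitration logic.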
Its introduction changed the desktop computer market radically: what had been largely a hobby and gaming market emerging over the prior five years began to change to a must-have business tool where the earlier 8-bit designs simply could not compete. Even newer 32-bit designs were also coming to market, such as the Motorola 68000 and National Semiconductor NS32016." +1573,"Acorn began considering how to compete in this market and produced a new paper design named the Acorn Business Computer. They set themselves the goal of producing a machine with ten times the performance of the BBC Micro, but at the same price. This would outperform and underprice the PC. At the same time, the recent introduction of the Apple Lisa brought the graphical user interface concept to a wider audience and suggested the future belonged to machines with a GUI. The Lisa, however, cost $9,995, as it was packed with support chips, large amounts of memory, and a hard disk drive, all very expensive then." +1574,"The engineers then began studying all of the CPU designs available. Their conclusion about the existing 16-bit designs was that they were a lot more expensive and were still ""a bit crap"", offering only slightly higher performance than their BBC Micro design. They also almost always demanded a large number of support chips to operate even at that level, which drove up the cost of the computer as a whole. These systems would simply not hit the design goal. They also considered the new 32-bit designs, but these cost even more and had the same issues with support chips. According to Sophie Wilson, all the processors tested at that time performed about the same, with about a 4 Mbit/s bandwidth." +1575,"Two key events led Acorn down the path to ARM. One was the publication of a series of reports from the University of California, Berkeley, which suggested that a simple chip design could nevertheless have extremely high performance, much higher than the latest 32-bit designs on the market. 
The second was a visit by Steve Furber and Sophie Wilson to the Western Design Center, a company run by Bill Mensch and his sister, which had become the logical successor to the MOS team and was offering new versions like the WDC 65C02. The Acorn team saw high school students producing chip layouts on Apple II machines, which suggested that anyone could do it. In contrast, a visit to another design firm working on a modern 32-bit CPU revealed a team with over a dozen members who were already on revision H of their design and yet it still contained bugs. This cemented their late 1983 decision to begin their own CPU design, the Acorn RISC Machine." +1576,"The original Berkeley RISC designs were in some sense teaching systems, not designed specifically for outright performance. To the RISC's basic register-heavy and load/store concepts, ARM added a number of the well-received design notes of the 6502. Primary among them was the ability to quickly serve interrupts, which allowed the machines to offer reasonable input/output performance with no added external hardware. To offer interrupts with performance similar to that of the 6502, the ARM design limited its physical address space to 64 MB of total addressable space, requiring 26 bits of address. As instructions were 4 bytes long, and required to be aligned on 4-byte boundaries, the lower 2 bits of an instruction address were always zero. This meant the program counter only needed to be 24 bits, allowing it to be stored along with the eight bits of processor flags in a single 32-bit register. That meant that upon receiving an interrupt, the entire machine state could be saved in a single operation, whereas had the PC been a full 32-bit value, it would have required separate operations to store the PC and the status flags. This decision halved the interrupt overhead." 
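The combined register described above can be made concrete with a small sketch. This Python model follows the layout commonly documented for the early ARM designs (mode bits in bits 0–1, the word-aligned program counter in bits 2–25, and the six status/interrupt-mask flags in bits 26–31); the helper names are illustrative, not ARM's own:

```python
# Illustrative sketch of the early ARM combined PC + status register (R15).
# Because instructions are word-aligned, the two low PC bits are always
# zero, so the processor mode bits can reuse bit positions 0-1.

def pack_r15(pc, flags, mode):
    assert pc % 4 == 0 and pc < (1 << 26), "PC must be word-aligned and 26-bit"
    assert flags < (1 << 6) and mode < 4
    return (flags << 26) | (pc & 0x03FFFFFC) | mode

def unpack_r15(r15):
    return (r15 & 0x03FFFFFC,  # program counter (bits 2-25, word-aligned)
            r15 >> 26,         # flag bits (bits 26-31)
            r15 & 0x3)         # mode bits (bits 0-1)

r15 = pack_r15(pc=0x8000, flags=0b100000, mode=0b11)
assert r15 == 0x80008003
assert unpack_r15(r15) == (0x8000, 0b100000, 0b11)
```

Because PC, flags, and mode all live in one 32-bit value, saving machine state on an interrupt is a single register store, which is the halving of interrupt overhead the passage describes.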
+1577,"Another change, and among the most important in terms of practical real-world performance, was the modification of the instruction set to take advantage of page mode DRAM. Recently introduced, page mode allowed subsequent accesses of memory to run twice as fast if they were roughly in the same location, or ""page"", in the DRAM chip. Berkeley's design did not consider page mode and treated all memory equally. The ARM design added special vector-like memory access instructions, the ""S-cycles"", that could be used to fill or save multiple registers in a single page using page mode. This doubled memory performance when they could be used, and was especially important for graphics performance." +1578,The Berkeley RISC designs used register windows to reduce the number of register saves and restores performed in procedure calls; the ARM design did not adopt this. +1579,"Wilson developed the instruction set, writing a simulation of the processor in BBC BASIC that ran on a BBC Micro with a second 6502 processor. This convinced Acorn engineers they were on the right track. Wilson approached Acorn's CEO, Hermann Hauser, and requested more resources. Hauser gave his approval and assembled a small team to design the actual processor based on Wilson's ISA. The official Acorn RISC Machine project started in October 1983." +1580,"Acorn chose VLSI Technology as the ""silicon partner"", as they were a source of ROMs and custom chips for Acorn. Acorn provided the design and VLSI provided the layout and production. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985. Known as ARM1, these versions ran at 6 MHz." +1581,"The first ARM application was as a second processor for the BBC Micro, where it helped in developing simulation software to finish development of the support chips, and sped up the CAD software used in ARM2 development. Wilson subsequently rewrote BBC BASIC in ARM assembly language. 
The in-depth knowledge gained from designing the instruction set enabled the code to be very dense, making ARM BBC BASIC an extremely good test for any ARM emulator." +1582,"The result of the simulations on the ARM1 boards led to the late 1986 introduction of the ARM2 design running at 8 MHz, and the early 1987 speed-bumped version at 10 to 12 MHz. A significant change in the underlying architecture was the addition of a Booth multiplier, whereas formerly multiplication had to be carried out in software. Further, a new Fast Interrupt reQuest mode, FIQ for short, allowed registers 8 through 14 to be replaced as part of the interrupt itself. This meant FIQ requests did not have to save out their registers, further speeding interrupts." +1583,"According to the Dhrystone benchmark, the ARM2 was roughly seven times the performance of a typical 7 MHz 68000-based system like the Amiga or Macintosh SE. It was twice as fast as an Intel 80386 running at 16 MHz, and about the same speed as a multi-processor VAX-11/784 superminicomputer. The only systems that beat it were the Sun SPARC and MIPS R2000 RISC-based workstations. Further, as the CPU was designed for high-speed I/O, it dispensed with many of the support chips seen in these machines; notably, it lacked any dedicated direct memory access controller, which was often found on workstations. The graphics system was also simplified based on the same set of underlying assumptions about memory and timing. The result was a dramatically simplified design, offering performance on par with expensive workstations but at a price point similar to contemporary desktops." +1584,"The ARM2 featured a 32-bit data bus, 26-bit address space and 27 32-bit registers, of which 16 are accessible at any one time. The ARM2 had a transistor count of just 30,000, compared to Motorola's six-year-older 68000 model with around 68,000. 
Much of this simplicity came from the lack of microcode, which represents about one-quarter to one-third of the 68000's transistors, and the lack of a cache. This simplicity enabled the ARM2 to have low power consumption and simpler thermal packaging, through having fewer powered transistors, yet offering better performance than the contemporary, 1987, IBM PS/2 Model 50, which initially utilised an Intel 80286, offering 1.8 MIPS @ 10 MHz, and later in 1987, the 2 MIPS of the PS/2 70, with its Intel 386 DX @ 16 MHz." +1585,"A successor, ARM3, was produced with a 4 KB cache, which further improved performance. The address bus was extended to 32 bits in the ARM6, but program code still had to lie within the first 64 MB of memory in 26-bit compatibility mode, due to the reserved bits for the status flags." +1586,"In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the ARM core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd., which became ARM Ltd. when its parent company, Arm Holdings plc, floated on the London Stock Exchange and Nasdaq in 1998. The new Apple–ARM work would eventually evolve into the ARM6, first released in early 1992. Apple used the ARM6-based ARM610 as the basis for their Apple Newton PDA." +1587,"In 1994, Acorn used the ARM610 as the main central processing unit in their RiscPC computers. DEC licensed the ARMv4 architecture and produced the StrongARM. At 233 MHz, this CPU drew only one watt. This work was later passed to Intel as part of a lawsuit settlement, and Intel took the opportunity to supplement their i960 line with the StrongARM. Intel later developed its own high-performance implementation named XScale, which it has since sold to Marvell. The transistor count of the ARM core remained essentially the same throughout these changes; ARM2 had 30,000 transistors, while ARM6 grew only to 35,000." 
+1588,"In 2005, about 98% of all mobile phones sold used at least one ARM processor. In 2010, producers of chips based on ARM architectures reported shipments of 6.1 billion ARM-based processors, representing 95% of smartphones, 35% of digital televisions and set-top boxes, and 10% of mobile computers. In 2011, the 32-bit ARM architecture was the most widely used architecture in mobile devices and the most popular 32-bit one in embedded systems. In 2013, 10 billion were produced and ""ARM-based chips are found in nearly 60 percent of the world's mobile devices""." +1589,"Arm Ltd.'s primary business is selling IP cores, which licensees use to create microcontrollers, CPUs, and systems-on-chips based on those cores. The original design manufacturer combines the ARM core with other parts to produce a complete device, typically one that can be built in existing semiconductor fabrication plants at low cost and still deliver substantial performance. The most successful implementation has been the ARM7TDMI with hundreds of millions sold. Atmel has been a precursor design center in the ARM7TDMI-based embedded system." +1590,"The ARM architectures used in smartphones, PDAs and other mobile devices range from ARMv5 to ARMv8-A." +1591,"In 2009, some manufacturers introduced netbooks based on ARM architecture CPUs, in direct competition with netbooks based on Intel Atom." +1592,"Arm Ltd. offers a variety of licensing terms, varying in cost and deliverables. Arm Ltd. provides to all licensees an integratable hardware description of the ARM core as well as a complete software development toolset, and the right to sell manufactured silicon containing the ARM CPU." +1593,"SoC packages integrating ARM's core designs include Nvidia Tegra's first three generations, CSR plc's Quatro family, ST-Ericsson's Nova and NovaThor, Silicon Labs's Precision32 MCU, Texas Instruments's OMAP products, Samsung's Hummingbird and Exynos products, Apple's A4, A5, and A5X, and NXP's i.MX." 
+1594,"Fabless licensees, who wish to integrate an ARM core into their own chip design, are usually only interested in acquiring a ready-to-manufacture verified semiconductor intellectual property core. For these customers, Arm Ltd. delivers a gate netlist description of the chosen ARM core, along with an abstracted simulation model and test programs to aid design integration and verification. More ambitious customers, including integrated device manufacturers and foundry operators, choose to acquire the processor IP in synthesizable RTL form. With the synthesizable RTL, the customer has the ability to perform architectural level optimisations and extensions. This allows the designer to achieve exotic design goals not otherwise possible with an unmodified netlist. While Arm Ltd. does not grant the licensee the right to resell the ARM architecture itself, licensees may freely sell manufactured products such as chip devices, evaluation boards and complete systems. Merchant foundries can be a special case; not only are they allowed to sell finished silicon containing ARM cores, they generally hold the right to re-manufacture ARM cores for other customers." +1595,"Arm Ltd. prices its IP based on perceived value. Lower-performing ARM cores typically have lower licence costs than higher-performing cores. In implementation terms, a synthesisable core costs more than a hard macro core. Complicating price matters, a merchant foundry that holds an ARM licence, such as Samsung or Fujitsu, can offer fab customers reduced licensing costs. In exchange for acquiring the ARM core through the foundry's in-house design services, the customer can reduce or eliminate payment of ARM's upfront licence fee." +1596,"Compared to dedicated semiconductor foundries without in-house design services, Fujitsu/Samsung charge two to three times more per manufactured wafer. For low to mid volume applications, a design service foundry offers lower overall pricing.
For high volume mass-produced parts, the long-term cost reduction achievable through lower wafer pricing reduces the impact of ARM's NRE costs, making the dedicated foundry a better choice." +1597,"Companies that have developed chips with cores designed by Arm include Amazon.com's Annapurna Labs subsidiary, Analog Devices, Apple, AppliedMicro, Atmel, Broadcom, Cavium, Cypress Semiconductor, Freescale Semiconductor, Huawei, Intel, Maxim Integrated, Nvidia, NXP, Qualcomm, Renesas, Samsung Electronics, STMicroelectronics, Texas Instruments, and Xilinx." +1598,"In February 2016, ARM announced the Built on ARM Cortex Technology licence, often shortened to Built on Cortex licence. This licence allows companies to partner with ARM and make modifications to ARM Cortex designs. These design modifications will not be shared with other companies. These semi-custom core designs also have brand freedom, for example Kryo 280." +1599,Companies that are current licensees of Built on ARM Cortex Technology include Qualcomm. +1600,"Companies can also obtain an ARM architectural licence for designing their own CPU cores using the ARM instruction sets. These cores must comply fully with the ARM architecture. Companies that have designed cores that implement an ARM architecture include Apple, AppliedMicro, Broadcom, Cavium, Digital Equipment Corporation, Intel, Nvidia, Qualcomm, Samsung Electronics, Fujitsu, and NUVIA Inc." +1601,"On 16 July 2019, ARM announced ARM Flexible Access. ARM Flexible Access provides unlimited access to included ARM intellectual property for development. Per-product licence fees are required once a customer reaches foundry tapeout or prototyping." +1602,75% of ARM's most recent IP over the last two years is included in ARM Flexible Access. As of October 2019: +1603,"Arm provides a list of vendors who implement ARM cores in their designs, including microprocessors and microcontrollers."
+1604,"ARM cores are used in a number of products, particularly PDAs and smartphones. Some computing examples are Microsoft's first-generation Surface, Surface 2, and Pocket PC devices, Apple's iPads, Asus's Eee Pad Transformer tablet computers, and several Chromebook laptops. Others include Apple's iPhone smartphones and iPod portable media players, Canon PowerShot digital cameras, the Nintendo Switch hybrid console, the Wii security processor and 3DS handheld game consoles, and TomTom turn-by-turn navigation systems." +1605,"In 2005, Arm took part in the development of Manchester University's computer SpiNNaker, which used ARM cores to simulate the human brain." +1606,"ARM chips are also used in Raspberry Pi, BeagleBoard, BeagleBone, PandaBoard, and other single-board computers, because they are very small, inexpensive, and consume very little power." +1607,"The 32-bit ARM architecture, such as ARMv7-A, was the most widely used architecture in mobile devices as of 2011." +1608,"Since 1995, various versions of the ARM Architecture Reference Manual have been the primary source of documentation on the ARM processor architecture and instruction set, distinguishing interfaces that all ARM processors are required to support from implementation details that may vary. The architecture has evolved over time, and version seven of the architecture, ARMv7, defines three architecture ""profiles"":" +1609,"Although the architecture profiles were first defined for ARMv7, ARM subsequently defined the ARMv6-M architecture as a subset of the ARMv7-M profile with fewer instructions." +1610,"Except in the M-profile, the 32-bit ARM architecture specifies several CPU modes, depending on the implemented architecture features. At any moment in time, the CPU can be in only one mode, but it can switch modes due to external events or programmatically."
+1611,"The original ARM implementation was hardwired without microcode, like the much simpler 8-bit 6502 processor used in prior Acorn microcomputers." +1612,The 32-bit ARM architecture includes the following RISC features: +1613,"To compensate for the simpler design, compared with processors like the Intel 80286 and Motorola 68020, some additional design features were used:" +1614,"ARM includes integer arithmetic operations for add, subtract, and multiply; some versions of the architecture also support divide operations." +1615,"ARM supports 32-bit × 32-bit multiplies with either a 32-bit result or 64-bit result, though Cortex-M0 / M0+ / M1 cores do not support 64-bit results. Some ARM cores also support 16-bit × 16-bit and 32-bit × 16-bit multiplies." +1616,The divide instructions are only included in the following ARM architectures: +1617,Registers R0 through R7 are the same across all CPU modes; they are never banked. +1618,Registers R8 through R12 are the same across all CPU modes except FIQ mode. FIQ mode has its own distinct R8 through R12 registers. +1619,"R13 and R14 are banked across all privileged CPU modes except system mode. That is, each mode that can be entered because of an exception has its own R13 and R14. These registers generally contain the stack pointer and the return address from function calls, respectively." +1620,Aliases: +1621,The Current Program Status Register has the following 32 bits. +1622,"Almost every ARM instruction has a conditional execution feature called predication, which is implemented with a 4-bit condition code selector. To allow for unconditional execution, one of the four-bit codes causes the instruction to be always executed. Most other CPU architectures only have condition codes on branch instructions."
+1623,"Though the predicate takes up four of the 32 bits in an instruction code, and thus cuts down significantly on the encoding bits available for displacements in memory access instructions, it avoids branch instructions when generating code for small if statements. Apart from eliminating the branch instructions themselves, this preserves the fetch/decode/execute pipeline at the cost of only one cycle per skipped instruction." +1624,"An algorithm that provides a good example of conditional execution is the subtraction-based Euclidean algorithm for computing the greatest common divisor. In the C programming language, the algorithm can be written as:" +1625,The same algorithm can be rewritten in a way closer to target ARM instructions as: +1626,and coded in assembly language as: +1627,"which avoids the branches around the then and else clauses. If r0 and r1 are equal then neither of the SUB instructions will be executed, eliminating the need for a conditional branch to implement the while check at the top of the loop, for example, had SUBLE been used." +1628,One of the ways that Thumb code provides a more dense encoding is to remove the four-bit selector from non-branch instructions. +1629,"Another feature of the instruction set is the ability to fold shifts and rotates into the data processing instructions, so that, for example, the statement in the C language:" +1630,"could be rendered as a one-word, one-cycle instruction:" +1631,This results in the typical ARM program being denser than expected with fewer memory accesses; thus the pipeline is used more efficiently. +1632,"The ARM processor also has features rarely seen in other RISC architectures, such as PC-relative addressing and pre- and post-increment addressing modes." +1633,"The ARM instruction set has increased over time. Some early ARM processors, for example, have no instruction to store a two-byte quantity."
+1634,"The ARM7 and earlier implementations have a three-stage pipeline; the stages being fetch, decode, and execute. Higher-performance designs, such as the ARM9, have deeper pipelines: Cortex-A8 has thirteen stages. Additional implementation changes for higher performance include a faster adder and more extensive branch prediction logic. The difference between the ARM7DI and ARM7DMI cores, for example, was an improved multiplier; hence the added ""M""." +1635,"The ARM architecture provides a non-intrusive way of extending the instruction set using ""coprocessors"" that can be addressed using MCR, MRC, MRRC, MCRR, and similar instructions. The coprocessor space is divided logically into 16 coprocessors with numbers from 0 to 15, coprocessor 15 being reserved for some typical control functions like managing the caches and MMU operation on processors that have one." +1636,"In ARM-based machines, peripheral devices are usually attached to the processor by mapping their physical registers into ARM memory space, into the coprocessor space, or by connecting to another device that in turn attaches to the processor. Coprocessor accesses have lower latency, so some peripherals—for example, an XScale interrupt controller—are accessible in both ways: through memory and through coprocessors." +1637,"In other cases, chip designers only integrate hardware using the coprocessor mechanism. For example, an image processing engine might be a small ARM7TDMI core combined with a coprocessor that has specialised operations to support a specific set of HDTV transcoding primitives." +1638,"All modern ARM processors include hardware debugging facilities, allowing software debuggers to perform operations such as halting, stepping, and breakpointing of code starting from reset. These facilities are built using JTAG support, though some newer cores optionally support ARM's own two-wire ""SWD"" protocol. 
In ARM7TDMI cores, the ""D"" represented JTAG debug support, and the ""I"" represented the presence of an ""EmbeddedICE"" debug module. For ARM7 and ARM9 core generations, EmbeddedICE over JTAG was a de facto debug standard, though not architecturally guaranteed." +1639,"The ARMv7 architecture defines basic debug facilities at an architectural level. These include breakpoints, watchpoints and instruction execution in a ""Debug Mode""; similar facilities were also available with EmbeddedICE. Both ""halt mode"" and ""monitor"" mode debugging are supported. The actual transport mechanism used to access the debug facilities is not architecturally specified, but implementations generally include JTAG support." +1640,"There is a separate ARM ""CoreSight"" debug architecture, which is not architecturally required by ARMv7 processors." +1641,"The Debug Access Port is an implementation of an ARM Debug Interface. There are two different supported implementations, the Serial Wire JTAG Debug Port and the Serial Wire Debug Port. CMSIS-DAP is a standard interface that describes how various debugging software on a host PC can communicate over USB to firmware running on a hardware debugger, which in turn talks over SWD or JTAG to a CoreSight-enabled ARM Cortex CPU." +1642,"To improve the ARM architecture for digital signal processing and multimedia applications, DSP instructions were added to the instruction set. These are signified by an ""E"" in the name of the ARMv5TE and ARMv5TEJ architectures. E-variants also imply T, D, M, and I." +1643,"The new instructions are common in digital signal processor architectures. They include variations on signed multiply–accumulate, saturated add and subtract, and count leading zeros." +1644,"First introduced in 1999, this extension of the core instruction set contrasted with ARM's earlier DSP coprocessor known as Piccolo, which employed a distinct, incompatible instruction set whose execution involved a separate program counter.
Piccolo instructions employed a distinct register file of sixteen 32-bit registers, with some instructions combining registers for use as 48-bit accumulators and other instructions addressing 16-bit half-registers. Some instructions were able to operate on two such 16-bit values in parallel. Communication with the Piccolo register file involved load to Piccolo and store from Piccolo coprocessor instructions via two buffers of eight 32-bit entries. Described as reminiscent of other approaches, notably Hitachi's SH-DSP and Motorola's 68356, Piccolo did not employ dedicated local memory and relied on the bandwidth of the ARM core for DSP operand retrieval, impacting concurrent performance. Piccolo's distinct instruction set also proved not to be a ""good compiler target""." +1645,"Introduced in the ARMv6 architecture, this was a precursor to Advanced SIMD, also named Neon." +1646,"Jazelle DBX is a technique that allows Java bytecode to be executed directly in the ARM architecture as a third execution state alongside the existing ARM and Thumb modes. Support for this state is signified by the ""J"" in the ARMv5TEJ architecture, and in ARM9EJ-S and ARM7EJ-S core names. Support for this state is required starting in ARMv6, though newer cores only include a trivial implementation that provides no hardware acceleration." +1647,"To improve compiled code density, processors since the ARM7TDMI have featured the Thumb compressed instruction set, which has its own state. When in this state, the processor executes the Thumb instruction set, a compact 16-bit encoding for a subset of the ARM instruction set. Most of the Thumb instructions are directly mapped to normal ARM instructions. The space saving comes from making some of the instruction operands implicit and limiting the number of possibilities compared to the ARM instructions executed in the ARM instruction set state." +1648,"In Thumb, the 16-bit opcodes have less functionality.
For example, only branches can be conditional, and many opcodes are restricted to accessing only half of all of the CPU's general-purpose registers. The shorter opcodes give improved code density overall, even though some operations require extra instructions. In situations where the memory port or bus width is constrained to less than 32 bits, the shorter Thumb opcodes allow increased performance compared with 32-bit ARM code, as less program code may need to be loaded into the processor over the constrained memory bandwidth." +1649,"Unlike processor architectures with variable length instructions, such as the Cray-1 and Hitachi SuperH, the ARM and Thumb instruction sets exist independently of each other. Embedded hardware, such as the Game Boy Advance, typically has a small amount of RAM accessible with a full 32-bit datapath; the majority is accessed via a 16-bit or narrower secondary datapath. In this situation, it usually makes sense to compile Thumb code and hand-optimise a few of the most CPU-intensive sections using full 32-bit ARM instructions, placing these wider instructions into the 32-bit bus accessible memory." +1650,"The first processor with a Thumb instruction decoder was the ARM7TDMI. All processors supporting 32-bit instruction sets, starting with ARM9, and including XScale, have included a Thumb instruction decoder. It includes instructions adopted from the Hitachi SuperH, which was licensed by ARM. ARM's smallest processor families implement only the 16-bit Thumb instruction set for maximum performance in lowest cost applications. ARM processors that don't support 32-bit addressing also omit Thumb." +1651,"Thumb-2 technology was introduced in the ARM1156 core, announced in 2003. Thumb-2 extends the limited 16-bit instruction set of Thumb with additional 32-bit instructions to give the instruction set more breadth, thus producing a variable-length instruction set.
A stated aim for Thumb-2 was to achieve code density similar to Thumb with performance similar to the ARM instruction set on 32-bit memory." +1652,"Thumb-2 extends the Thumb instruction set with bit-field manipulation, table branches and conditional execution. At the same time, the ARM instruction set was extended to maintain equivalent functionality in both instruction sets. A new ""Unified Assembly Language"" supports generation of either Thumb or ARM instructions from the same source code; versions of Thumb seen on ARMv7 processors are essentially as capable as ARM code. This requires a bit of care, and use of a new ""IT"" instruction, which permits up to four successive instructions to execute based on a tested condition, or on its inverse. When compiling into ARM code, this is ignored, but when compiling into Thumb it generates an actual instruction. For example:" +1653,"All ARMv7 chips support the Thumb instruction set. All chips in the Cortex-A series that support ARMv7, all Cortex-R series, and all ARM11 series support both ""ARM instruction set state"" and ""Thumb instruction set state"", while chips in the Cortex-M series support only the Thumb instruction set." +1654,"ThumbEE, which was marketed as Jazelle RCT, was announced in 2005 and deprecated in 2011. It first appeared in the Cortex-A8 processor. ThumbEE is a fourth instruction set state, making small changes to the Thumb-2 extended instruction set. These changes make the instruction set particularly suited to code generated at runtime in managed Execution Environments. ThumbEE is a target for languages such as Java, C#, Perl, and Python, and allows JIT compilers to output smaller compiled code without reducing performance." +1655,"New features provided by ThumbEE include automatic null pointer checks on every load and store instruction, an instruction to perform an array bounds check, and special instructions that call a handler.
In addition, because it utilises Thumb-2 technology, ThumbEE provides access to registers r8–r15. Handlers are small sections of frequently called code, commonly used to implement high level languages, such as allocating memory for a new object. These changes come from repurposing a handful of opcodes, and knowing the core is in the new ThumbEE state." +1656,"On 23 November 2011, Arm deprecated any use of the ThumbEE instruction set, and Armv8 removes support for ThumbEE." +1657,"VFP technology is a floating-point unit coprocessor extension to the ARM architecture. It provides low-cost single-precision and double-precision floating-point computation fully compliant with the ANSI/IEEE Std 754-1985 Standard for Binary Floating-Point Arithmetic. VFP provides floating-point computation suitable for a wide spectrum of applications such as PDAs, smartphones, voice compression and decompression, three-dimensional graphics and digital audio, printers, set-top boxes, and automotive applications. The VFP architecture was intended to support execution of short ""vector mode"" instructions but these operated on each vector element sequentially and thus did not offer the performance of true single instruction, multiple data vector parallelism. This vector mode was therefore removed shortly after its introduction, to be replaced with the much more powerful Advanced SIMD, also named Neon." +1658,"Some devices such as the ARM Cortex-A8 have a cut-down VFPLite module instead of a full VFP module, and require roughly ten times more clock cycles per float operation. Pre-Armv8 architecture implemented floating-point/SIMD with the coprocessor interface. Other floating-point and/or SIMD units found in ARM-based processors using the coprocessor interface include FPA, FPE, iwMMXt, some of which were implemented in software by trapping but could have been implemented in hardware. They provide some of the same functionality as VFP but are not opcode-compatible with it.
FPA10 also provides extended precision, but implements correct rounding only in single precision." +1659,"In Debian Linux and derivatives such as Ubuntu and Linux Mint, armhf refers to the ARMv7 architecture including the additional VFP3-D16 floating-point hardware extension above. Software packages and cross-compiler tools use the armhf vs. arm/armel suffixes to differentiate." +1660,"The Advanced SIMD extension is a combined 64- and 128-bit SIMD instruction set that provides standardised acceleration for media and signal processing applications. Neon is included in all Cortex-A8 devices, but is optional in Cortex-A9 devices. Neon can execute MP3 audio decoding on CPUs running at 10 MHz, and can run the GSM adaptive multi-rate speech codec at 13 MHz. It features a comprehensive instruction set, separate register files, and independent execution hardware. Neon supports 8-, 16-, 32-, and 64-bit integer and single-precision floating-point data and SIMD operations for handling audio and video processing as well as graphics and gaming processing. In Neon, the SIMD supports up to 16 operations at the same time. The Neon hardware shares the same floating-point registers as used in VFP. Devices such as the ARM Cortex-A8 and Cortex-A9 support 128-bit vectors, but will execute with 64 bits at a time, whereas newer Cortex-A15 devices can execute 128 bits at a time." +1661,"A quirk of Neon in Armv7 devices is that it flushes all subnormal numbers to zero, and as a result the GCC compiler will not use it unless -funsafe-math-optimizations, which allows losing denormals, is turned on. ""Enhanced"" Neon defined since Armv8 does not have this quirk, but as of GCC 8.2 the same flag is still required to enable Neon instructions. On the other hand, GCC does consider Neon safe on AArch64 for Armv8." +1662,"ProjectNe10 is ARM's first open-source project. The Ne10 library is a set of common, useful functions written in both Neon and C.
The library was created to allow developers to use Neon optimisations without learning Neon, but it also serves as a set of highly optimised Neon intrinsic and assembly code examples for common DSP, arithmetic, and image processing routines. The source code is available on GitHub." +1663,Helium is the M-Profile Vector Extension. It adds more than 150 scalar and vector instructions. +1664,"The Security Extensions, marketed as TrustZone Technology, is in ARMv6KZ and later application profile architectures. It provides a low-cost alternative to adding another dedicated security core to an SoC, by providing two virtual processors backed by hardware-based access control. This lets the application core switch between two states, referred to as worlds, to prevent information leaking from the more trusted world to the less trusted world. This world switch is generally orthogonal to all other capabilities of the processor, thus each world can operate independently of the other while using the same core. Memory and peripherals are then made aware of the operating world of the core and may use this to provide access control to secrets and code on the device." +1665,"Typically, a rich operating system is run in the less trusted world, with smaller security-specialised code in the more trusted world, aiming to reduce the attack surface. Typical applications include DRM functionality for controlling the use of media on ARM-based devices, and preventing any unapproved use of the device." +1666,"In practice, since the specific implementation details of proprietary TrustZone implementations have not been publicly disclosed for review, it is unclear what level of assurance is provided for a given threat model, but they are not immune from attack." +1667,Open Virtualization is an open source implementation of the trusted world architecture for TrustZone. +1668,"AMD has licensed and incorporated TrustZone technology into its Secure Processor Technology.
Enabled in some but not all products, AMD's APUs include a Cortex-A5 processor for handling secure processing. In fact, the Cortex-A5 TrustZone core had been included in earlier AMD products, but was not enabled due to time constraints." +1669,"Samsung Knox uses TrustZone for purposes such as detecting modifications to the kernel, storing certificates and attesting keys." +1670,"The Security Extension, marketed as TrustZone for Armv8-M Technology, was introduced in the Armv8-M architecture. While containing similar concepts to TrustZone for Armv8-A, it has a different architectural design, as world switching is performed using branch instructions instead of using exceptions. It also supports safe interleaved interrupt handling from either world regardless of the current security state. Together these features provide low-latency calls to the secure world and responsive interrupt handling. ARM provides a reference stack of secure world code in the form of Trusted Firmware for M and PSA Certified." +1671,"As of ARMv6, the ARM architecture supports no-execute page protection, which is referred to as XN, for eXecute Never." +1672,"The Large Physical Address Extension, which extends the physical address size from 32 bits to 40 bits, was added to the Armv7-A architecture in 2011." +1673,"The physical address size may be even larger in processors based on the 64-bit architecture. For example, it is 44 bits in Cortex-A75 and Cortex-A65AE." +1674,"The Armv8-R and Armv8-M architectures, announced after the Armv8-A architecture, share some features with Armv8-A. However, Armv8-M does not include any 64-bit AArch64 instructions, and Armv8-R originally did not include any AArch64 instructions; those instructions were added to Armv8-R later." +1675,"The Armv8.1-M architecture, announced in February 2019, is an enhancement of the Armv8-M architecture.
It brings new features including:" +1676,"Announced in October 2011, Armv8-A represents a fundamental change to the ARM architecture. It adds an optional 64-bit architecture named ""AArch64"" and the associated new ""A64"" instruction set. AArch64 provides user-space compatibility with Armv7-A, the 32-bit architecture, therein referred to as ""AArch32"" and the old 32-bit instruction set, now named ""A32"". The Thumb instruction set is referred to as ""T32"" and has no 64-bit counterpart. Armv8-A allows 32-bit applications to be executed in a 64-bit OS, and a 32-bit OS to be under the control of a 64-bit hypervisor. ARM announced their Cortex-A53 and Cortex-A57 cores on 30 October 2012. Apple was the first to release an Armv8-A compatible core in a consumer product. AppliedMicro, using an FPGA, was the first to demo Armv8-A. The first Armv8-A SoC from Samsung is the Exynos 5433 used in the Galaxy Note 4, which features two clusters of four Cortex-A57 and Cortex-A53 cores in a big.LITTLE configuration; but it runs only in AArch32 mode." +1677,"To both AArch32 and AArch64, Armv8-A makes VFPv3/v4 and advanced SIMD standard. It also adds cryptography instructions supporting AES, SHA-1/SHA-256 and finite field arithmetic. AArch64 was introduced in Armv8-A and its subsequent revision. AArch64 is not included in the 32-bit Armv8-R and Armv8-M architectures." +1678,"Optional AArch64 support was added to the Armv8-R profile, with the first ARM core implementing it being the Cortex-R82. It adds the A64 instruction set." +1679,"Announced in March 2021, the updated architecture places a focus on secure execution and compartmentalisation." +1680,"Arm SystemReady, formerly named Arm ServerReady, is a certification program that helps bring generic off-the-shelf operating systems and hypervisors to Arm-based systems, from datacenter servers to industrial edge and IoT devices.
The key building blocks of the program are the specifications for minimum hardware and firmware requirements that the operating systems and hypervisors can rely upon. These specifications are:" +1681,These specifications are co-developed by Arm and its partners in the System Architecture Advisory Committee. +1682,The Architecture Compliance Suite is a set of test tools that help to check compliance with these specifications. The Arm SystemReady Requirements Specification documents the requirements of the certifications. +1683,This program was introduced by Arm in 2020 at the first DevSummit event. Its predecessor Arm ServerReady was introduced in 2018 at the Arm TechCon event. This program currently includes four bands: +1684,"PSA Certified, formerly named Platform Security Architecture, is an architecture-agnostic security framework and evaluation scheme. It is intended to help secure Internet of Things devices built on system-on-a-chip processors. It was introduced to increase security where a full trusted execution environment is too large or complex." +1685,"The architecture was introduced by Arm in 2017 at the annual TechCon event. Although the scheme is architecture agnostic, it was first implemented on Arm Cortex-M processor cores intended for microcontroller use. PSA Certified includes freely available threat models and security analyses that demonstrate the process for deciding on security features in common IoT products. It also provides freely downloadable application programming interface packages, architectural specifications, open-source firmware implementations, and related test suites." +1686,"Following the development of the architecture security framework in 2017, the PSA Certified assurance scheme launched two years later at Embedded World in 2019. PSA Certified offers a multi-level security evaluation scheme for chip vendors, OS providers and IoT device makers. The Embedded World presentation introduced chip vendors to Level 1 Certification.
A draft of Level 2 protection was presented at the same time. Level 2 certification became a usable standard in February 2020." +1687,"The certification was created by PSA Joint Stakeholders to enable a security-by-design approach for a diverse set of IoT products. PSA Certified specifications are implementation and architecture agnostic; as a result, they can be applied to any chip, software or device. The certification also removes industry fragmentation for IoT product manufacturers and developers." +1688,"The first 32-bit ARM-based personal computer, the Acorn Archimedes, was originally intended to run an ambitious operating system called ARX. The machines shipped with RISC OS, which was also used on later ARM-based systems from Acorn and other vendors. Some early Acorn machines were also able to run a Unix port called RISC iX." +1689,"The 32-bit ARM architecture is supported by a large number of embedded and real-time operating systems, including:" +1690,"The 32-bit ARM architecture was formerly the primary hardware environment for most mobile device operating systems, such as the following, though as of March 2024 many of these platforms, such as Android and Apple iOS, have evolved to the 64-bit ARM architecture:" +1691,"Formerly, but now discontinued:" +1692,The 32-bit ARM architecture is supported by RISC OS and by multiple Unix-like operating systems including: +1693,"Windows applications recompiled for ARM and linked with Winelib, from the Wine project, can run on 32-bit or 64-bit ARM in Linux, FreeBSD, or other compatible operating systems. x86 binaries, e.g. when not specially compiled for ARM, have been demonstrated on ARM using QEMU with Wine, but do not work at full speed or with the same capability as with Winelib." +1694,"The form, design, and implementation of CPUs have changed over time, but their fundamental operation remains almost unchanged.
Principal components of a CPU include the arithmetic–logic unit that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching, decoding and execution of instructions by directing the coordinated operations of the ALU, registers, and other components. Modern CPUs devote a lot of semiconductor area to caches and instruction-level parallelism to increase performance and to CPU modes to support operating systems and virtualization."
+1695,"Most modern CPUs are implemented on integrated circuit microprocessors, with one or more CPUs on a single IC chip. Microprocessor chips with multiple CPUs are called multi-core processors. The individual physical CPUs, called processor cores, can also be multithreaded to support CPU-level multithreading."
+1696,"An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer; such integrated devices are variously called microcontrollers or systems on a chip."
+1697,"Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called ""fixed-program computers"". The ""central processing unit"" term has been in use since as early as 1955. Since the term ""CPU"" is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer."
+1698,"The idea of a stored-program computer had already been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that ENIAC could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed a paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. 
EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC was not the first stored-program computer; the Manchester Baby, which was a small-scale experimental stored-program computer, ran its first program on 21 June 1948 and the Manchester Mark 1 ran its first program during the night of 16–17 June 1949."
+1699,"Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers, and has rapidly accelerated with the popularization of the integrated circuit. The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys."
+1700,"While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. 
The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications; for instance, the Atmel AVR microcontrollers are Harvard-architecture processors." +1701,"Relays and vacuum tubes were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Vacuum-tube computers such as EDVAC tended to average eight hours between failures, whereas relay computers—such as the slower but earlier Harvard Mark I—failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with." +1702,"The design complexity of CPUs increased as various technologies facilitated the building of smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable, and fragile switching elements, like vacuum tubes and relays. With this improvement, more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete components." 
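The von Neumann versus Harvard distinction described above can be sketched in a few lines of Python. This is an illustration of the concept only; the class names and methods are invented for this example and do not model any real machine.

```python
# Hypothetical sketch of the two memory models (names invented for illustration).

class VonNeumannMachine:
    """Instructions and data share a single memory space."""
    def __init__(self, memory):
        self.memory = memory

    def fetch_instruction(self, addr):
        return self.memory[addr]   # same memory as data

    def load_data(self, addr):
        return self.memory[addr]   # same memory as instructions


class HarvardMachine:
    """Instructions and data live in physically separate memories."""
    def __init__(self, program, data):
        self.program = program     # dedicated instruction memory
        self.data = data           # dedicated data memory

    def fetch_instruction(self, addr):
        return self.program[addr]

    def load_data(self, addr):
        return self.data[addr]
```

In the von Neumann model a program can even overwrite its own instructions, since both live in the one `memory`; the Harvard model rules that out by construction.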
+1703,"In 1964, IBM introduced its IBM System/360 computer architecture that was used in a series of computers capable of running the same programs with different speeds and performances. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM used the concept of a microprogram, which still sees widespread use in modern CPUs. The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is continued by similar modern computers like the IBM zSeries. In 1965, Digital Equipment Corporation introduced another influential computer aimed at the scientific and research markets—the PDP-8."
+1704,"Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. Thanks to the increased reliability and dramatically increased speed of the switching elements, which were almost exclusively transistors by this time, CPU clock rates in the tens of megahertz were easily obtained during this period. Additionally, while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like single instruction, multiple data vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc. and Fujitsu Ltd."
+1705,"During this period, a method of manufacturing many interconnected transistors in a compact space was developed. The integrated circuit allowed a large number of transistors to be manufactured on a single semiconductor-based die, or ""chip"". At first, only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. 
CPUs based on these ""building block"" ICs are generally referred to as ""small-scale integration"" devices. SSI ICs, such as the ones used in the Apollo Guidance Computer, usually contained up to a few dozen transistors. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs."
+1706,"IBM's System/370, follow-on to the System/360, used SSI ICs rather than Solid Logic Technology discrete-transistor modules. DEC's PDP-8/I and KI10 PDP-10 also switched from the individual transistors used by the PDP-8 and PDP-10 to SSI ICs, and their extremely popular PDP-11 line was originally built with SSI ICs, but was eventually implemented with LSI components once these became practical."
+1707,"Lee Boysel published influential articles, including a 1967 ""manifesto"", which described how to build the equivalent of a 32-bit mainframe computer from a relatively small number of large-scale integration circuits. The only way to build LSI chips, which are chips with a hundred or more gates, was to build them using a metal–oxide–semiconductor semiconductor manufacturing process. However, some companies continued to build processors out of bipolar transistor–transistor logic chips because bipolar junction transistors were faster than MOS chips up until the 1970s. In the 1960s, MOS ICs were slower and initially considered useful only in applications that required low power. Following the development of silicon-gate MOS technology by Federico Faggin at Fairchild Semiconductor in 1968, MOS ICs largely replaced bipolar TTL as the standard chip technology in the early 1970s."
+1708,"As the microelectronic technology advanced, an increasing number of transistors were placed on ICs, decreasing the number of individual ICs needed for a complete CPU. MSI and LSI ICs increased transistor counts to hundreds, and then thousands. 
By 1968, the number of ICs required to build a complete CPU had been reduced to 24 ICs of eight different types, with each IC containing roughly 1000 MOSFETs. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits." +1709,"Since microprocessors were first introduced they have almost completely overtaken all other central processing unit implementation methods. The first commercially available microprocessor, made in 1971, was the Intel 4004, and the first widely used microprocessor, made in 1974, was the Intel 8080. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual success of the ubiquitous personal computer, the term CPU is now applied almost exclusively to microprocessors. Several CPUs can be combined in a single processing chip." +1710,"Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size, as a result of being implemented on a single die, means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, the ability to construct exceedingly small transistors on an IC has increased the complexity and number of transistors in a single CPU many fold. This widely observed trend is described by Moore's law, which had proven to be a fairly accurate predictor of the growth of CPU complexity until 2016." 
+1711,"While the complexity, size, construction and general form of CPUs have changed enormously since 1950, the basic design and function has not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As Moore's law no longer holds, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the use of parallelism and other methods that extend the usefulness of the classical von Neumann model."
+1712,"The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions that is called a program. The instructions to be executed are kept in some kind of computer memory. Nearly all CPUs follow the fetch, decode and execute steps in their operation, which are collectively known as the instruction cycle."
+1713,"After the execution of an instruction, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If a jump instruction was executed, the program counter will be modified to contain the address of the instruction that was jumped to and program execution continues normally. In more complex CPUs, multiple instructions can be fetched, decoded and executed simultaneously. This section describes what is generally referred to as the ""classic RISC pipeline"", which is quite common among the simple CPUs used in many electronic devices. It largely ignores the important role of CPU cache, and therefore the memory access stage of the pipeline." 
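The instruction cycle described above can be sketched as a short Python loop. The tiny instruction set (LOAD, ADD, JUMP, HALT) is invented for this illustration; real instruction sets are far richer, but the fetch, decode and execute structure, including a jump overwriting the program counter, is the same.

```python
# Minimal fetch-decode-execute loop over a hypothetical toy ISA.
def run(program):
    pc = 0      # program counter: address of the next instruction
    acc = 0     # a single accumulator register
    while pc < len(program):
        opcode, operand = program[pc]   # fetch
        pc += 1                         # point at the next-in-sequence instruction
        # decode + execute
        if opcode == "LOAD":
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "JUMP":
            pc = operand                # a jump simply overwrites the PC
        elif opcode == "HALT":
            break
    return acc

print(run([("LOAD", 5), ("ADD", 7), ("HALT", 0)]))  # prints 12
```

Note how the PC is incremented immediately after the fetch, so sequential execution is the default and JUMP is just an assignment to `pc`.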
+1714,"Some instructions manipulate the program counter rather than producing result data directly; such instructions are generally called ""jumps"" and facilitate program behavior like loops, conditional program execution, and the existence of functions. In some processors, some other instructions change the state of bits in a ""flags"" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, in such processors a ""compare"" instruction evaluates two values and sets or clears bits in the flags register to indicate which one is greater or whether they are equal; one of these flags could then be used by a later jump instruction to determine program flow."
+1715,"Fetch involves retrieving an instruction from program memory. The instruction's location in program memory is determined by the program counter, which stores a number that identifies the address of the next instruction to be fetched. After an instruction is fetched, the PC is incremented by the length of the instruction so that it will contain the address of the next instruction in the sequence. Often, the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures."
+1716,"The instruction that the CPU fetches from memory determines what the CPU will do. In the decode step, performed by binary decoder circuitry known as the instruction decoder, the instruction is converted into signals that control other parts of the CPU."
+1717,"The way in which the instruction is interpreted is defined by the CPU's instruction set architecture. 
Often, one group of bits within the instruction, called the opcode, indicates which operation is to be performed, while the remaining fields usually provide supplemental information required for the operation, such as the operands. Those operands may be specified as a constant value, or as the location of a value that may be a processor register or a memory address, as determined by some addressing mode."
+1718,"In some CPU designs the instruction decoder is implemented as a hardwired, unchangeable binary decoder circuit. In others, a microprogram is used to translate instructions into sets of CPU configuration signals that are applied sequentially over multiple clock pulses. In some cases the memory that stores the microprogram is rewritable, making it possible to change the way in which the CPU decodes instructions."
+1719,"After the fetch and decode steps, the execute step is performed. Depending on the CPU architecture, this may consist of a single action or a sequence of actions. During each action, control signals electrically enable or disable various parts of the CPU so they can perform all or part of the desired operation. The action is then completed, typically in response to a clock pulse. Very often the results are written to an internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but less expensive and higher capacity main memory."
+1720,"For example, if an instruction that performs addition is to be executed, registers containing operands are activated, as are the parts of the arithmetic logic unit that perform addition. When the clock pulse occurs, the operands flow from the source registers into the ALU, and the sum appears at its output. On subsequent clock pulses, other components are enabled to move the output to storage. If the resulting sum is too large, an arithmetic overflow flag will be set, influencing the next operation." 
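As a concrete illustration of opcode and operand fields, and of a flag set by an oversized sum, consider a made-up 16-bit instruction word with a 4-bit opcode, a 6-bit register field and a 6-bit immediate. Neither this format nor the 8-bit ALU below corresponds to any real CPU; both are hypothetical.

```python
# Hypothetical 16-bit instruction word: [15..12 opcode][11..6 register][5..0 immediate]
def decode(word):
    opcode = (word >> 12) & 0xF    # which operation to perform
    reg    = (word >> 6) & 0x3F    # destination register field
    imm    = word & 0x3F           # immediate operand
    return opcode, reg, imm

# 8-bit addition that also reports the overflow-style flag described above:
# the flag is set when the true sum no longer fits in the 8-bit word.
def add8(a, b):
    total = a + b
    overflow = total > 0xFF
    return total & 0xFF, overflow  # wrapped result plus the flag
```

For example, `decode(0b0001_000010_000011)` yields opcode 1, register 2, immediate 3, and `add8(200, 100)` wraps 300 to 44 with the flag set.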
+1721,"Hardwired into a CPU's circuitry is a set of basic operations it can perform, called an instruction set. Such operations may involve, for example, adding or subtracting two numbers, comparing two numbers, or jumping to a different part of a program. Each instruction is represented by a unique combination of bits, known as the machine language opcode. While processing an instruction, the CPU decodes the opcode into control signals, which orchestrate the behavior of the CPU. A complete machine language instruction consists of an opcode and, in many cases, additional bits that specify arguments for the operation. Going up the complexity scale, a machine language program is a collection of machine language instructions that the CPU executes."
+1722,"The actual mathematical operation for each instruction is performed by a combinational logic circuit within the CPU's processor known as the arithmetic–logic unit or ALU. In general, a CPU executes an instruction by fetching it from memory, using its ALU to perform an operation, and then storing the result to memory. Besides the instructions for integer mathematics and logic operations, various other machine instructions exist, such as those for loading data from memory and storing it back, branching operations, and mathematical operations on floating-point numbers performed by the CPU's floating-point unit."
+1723,"The control unit is a component of the CPU that directs the operation of the processor. It tells the computer's memory, arithmetic and logic unit and input and output devices how to respond to the instructions that have been sent to the processor."
+1724,"It directs the operation of the other units by providing timing and control signals. Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices. John von Neumann included the control unit as part of the von Neumann architecture. 
In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction."
+1725,"The arithmetic logic unit is a digital circuit within the processor that performs integer arithmetic and bitwise logic operations. The inputs to the ALU are the data words to be operated on, status information from previous operations, and a code from the control unit indicating which operation to perform. Depending on the instruction being executed, the operands may come from internal CPU registers, external memory, or constants generated by the ALU itself."
+1726,"When all input signals have settled and propagated through the ALU circuitry, the result of the performed operation appears at the ALU's outputs. The result consists of both a data word, which may be stored in a register or memory, and status information that is typically stored in a special, internal CPU register reserved for this purpose."
+1727,Modern CPUs typically contain more than one ALU to improve performance.
+1728,"The address generation unit, sometimes also called the address computation unit, is an execution unit inside the CPU that calculates addresses used by the CPU to access main memory. By having address calculations handled by separate circuitry that operates in parallel with the rest of the CPU, the number of CPU cycles required for executing various machine instructions can be reduced, bringing performance improvements."
+1729,"While performing various operations, CPUs need to calculate memory addresses required for fetching data from the memory; for example, in-memory positions of array elements must be calculated before the CPU can fetch the data from actual memory locations. Those address-generation calculations involve different integer arithmetic operations, such as addition, subtraction, modulo operations, or bit shifts. 
Often, calculating a memory address involves more than one general-purpose machine instruction, and such instructions do not necessarily decode and execute quickly. By incorporating an AGU into a CPU design, together with introducing specialized instructions that use the AGU, various address-generation calculations can be offloaded from the rest of the CPU, and can often be executed quickly in a single CPU cycle."
+1730,"Capabilities of an AGU depend on a particular CPU and its architecture. Thus, some AGUs implement and expose more address-calculation operations, while some also include more advanced specialized instructions that can operate on multiple operands at a time. Some CPU architectures include multiple AGUs so more than one address-calculation operation can be executed simultaneously, which brings further performance improvements due to the superscalar nature of advanced CPU designs. For example, Intel incorporates multiple AGUs into its Sandy Bridge and Haswell microarchitectures, which increase bandwidth of the CPU memory subsystem by allowing multiple memory-access instructions to be executed in parallel."
+1731,"Many microprocessors have a memory management unit, translating logical addresses into physical RAM addresses, providing memory protection and paging abilities, useful for virtual memory. Simpler processors, especially microcontrollers, usually don't include an MMU."
+1732,"A CPU cache is a hardware cache used by the central processing unit of a computer to reduce the average cost to access data from the main memory. A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have different independent caches, including instruction and data caches, where the data cache is usually organized as a hierarchy of more cache levels."
+1733,"All modern CPUs have multiple levels of CPU caches. 
The first CPUs that used a cache had only one level of cache; unlike later level 1 caches, it was not split into L1d and L1i. Almost all current CPUs with caches have a split L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split and acts as a common repository for the already split L1 cache. Every core of a multi-core processor has a dedicated L2 cache, which is usually not shared between the cores. The L3 cache, and higher-level caches, are shared between the cores and are not split. An L4 cache is currently uncommon, and is generally on dynamic random-access memory, rather than on static random-access memory, on a separate die or chip. That was also the case historically with L1, while bigger chips have allowed integration of it and generally all cache levels, with the possible exception of the last level. Each extra level of cache tends to be bigger and is optimized differently."
+1734,"Other types of caches exist, such as the translation lookaside buffer that is part of the memory management unit that most CPUs have."
+1735,"Caches are generally sized in powers of two: 2, 8, 16, etc. KiB or MiB, although the IBM z13 has a 96 KiB L1 instruction cache."
+1736,"Most CPUs are synchronous circuits, which means they employ a clock signal to pace their sequential operations. The clock signal is produced by an external oscillator circuit that generates a consistent number of pulses each second in the form of a periodic square wave. The frequency of the clock pulses determines the rate at which a CPU executes instructions and, consequently, the faster the clock, the more instructions the CPU will execute each second."
+1737,"To ensure proper operation of the CPU, the clock period is longer than the maximum time needed for all signals to propagate through the CPU. 
By setting the clock period to a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the ""edges"" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism."
+1738,"However, architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided to avoid delaying a single signal significantly enough to cause the CPU to malfunction. Another major issue, as clock rates increase dramatically, is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does energy consumption, causing the CPU to require more heat dissipation in the form of CPU cooling solutions."
+1739,"One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components. However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. 
One notable recent CPU design that uses extensive clock gating is the IBM PowerPC-based Xenon used in the Xbox 360; this reduces the power requirements of the Xbox 360."
+1740,"Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire asynchronous CPUs have been built without using a global clock signal. Two notable examples of this are the ARM-compliant AMULET and the MIPS R3000-compatible MiniMIPS."
+1741,"Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers."
+1742,Many modern CPUs have a die-integrated power-managing module which regulates on-demand voltage supply to the CPU circuitry allowing it to keep a balance between performance and power consumption.
+1743,"Every CPU represents numerical values in a specific way. For example, some early digital computers represented numbers as familiar decimal numeral system values, and others have employed more unusual representations such as ternary. Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a ""high"" or ""low"" voltage." 
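The requirement above, that the clock period exceed the worst-case propagation delay, puts a hard ceiling on clock frequency: the maximum frequency is the reciprocal of the period. A back-of-the-envelope sketch, with purely illustrative numbers not taken from any particular CPU:

```python
# The clock period must be at least the worst-case propagation delay
# (plus any design margin), so the maximum frequency is its reciprocal.
def max_clock_hz(worst_case_delay_s, margin_s=0.0):
    period_s = worst_case_delay_s + margin_s
    return 1.0 / period_s

# A 10 ns worst-case signal path caps the clock at roughly 100 MHz.
print(max_clock_hz(10e-9))
```

This also shows why the whole CPU "waits on its slowest elements": one long path anywhere in the design lengthens the period for every component.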
+1744,"Related to numeric representation is the size and precision of integer numbers that a CPU can represent. In the case of a binary CPU, this is measured by the number of bits that the CPU can process in one operation, which is commonly called word size, bit width, data path width, integer precision, or integer size. A CPU's integer size determines the range of integer values on which it can directly operate. For example, an 8-bit CPU can directly manipulate integers represented by eight bits, which have a range of 256 discrete integer values."
+1745,"Integer range can also affect the number of memory locations the CPU can directly address. For example, if a binary CPU uses 32 bits to represent a memory address then it can directly address 2^32 memory locations. To circumvent this limitation and for various other reasons, some CPUs use mechanisms that allow additional memory to be addressed."
+1746,"CPUs with larger word sizes require more circuitry and consequently are physically larger, cost more and consume more power. As a result, smaller 4- or 8-bit microcontrollers are commonly used in modern applications even though CPUs with much larger word sizes are available. When higher performance is required, however, the benefits of a larger word size may outweigh the disadvantages. A CPU can have internal data paths shorter than the word size to reduce size and cost. For example, even though the IBM System/360 instruction set architecture was a 32-bit instruction set, the System/360 Model 30 and Model 40 had 8-bit data paths in the arithmetic logical unit, so that a 32-bit add required four cycles, one for each 8 bits of the operands, and, even though the Motorola 68000 series instruction set was a 32-bit instruction set, the Motorola 68000 and Motorola 68010 had 16-bit data paths in the arithmetic logical unit, so that a 32-bit add required two cycles." 
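The relationship between word size, representable values and directly addressable memory described above is simple powers-of-two arithmetic, which a few lines of Python can check (the helper names are invented for this illustration):

```python
# An n-bit word distinguishes 2**n values; an n-bit address reaches 2**n locations.
def integer_values(bits):
    return 2 ** bits

def addressable_locations(address_bits):
    return 2 ** address_bits

print(integer_values(8))            # 256 values for an 8-bit CPU
print(addressable_locations(32))    # 4294967296 locations for 32-bit addresses
```

The 32-bit case is where the familiar 4 GiB address-space limit comes from, since 2^32 bytes is exactly 4 GiB.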
+1747,"To gain some of the advantages afforded by both lower and higher bit lengths, many instruction sets have different bit widths for integer and floating-point data, allowing CPUs implementing that instruction set to have different bit widths for different portions of the device. For example, the IBM System/360 instruction set was primarily 32 bit, but supported 64-bit floating-point values to facilitate greater accuracy and range in floating-point numbers. The System/360 Model 65 had an 8-bit adder for decimal and fixed-point binary arithmetic and a 60-bit adder for floating-point arithmetic. Many later CPU designs use similar mixed bit width, especially when the processor is meant for general-purpose use where a reasonable balance of integer and floating-point capability is required."
+1748,"The description of the basic operation of a CPU offered in the previous section describes the simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates on and executes one instruction on one or two pieces of data at a time; that is, less than one instruction per clock cycle."
+1749,"This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction is executed at a time, the entire CPU must wait for that instruction to complete before proceeding to the next instruction. As a result, the subscalar CPU gets ""hung up"" on instructions which take more than one clock cycle to complete execution. Even adding a second execution unit does not improve performance much; rather than one pathway being hung up, now two pathways are hung up and the number of unused transistors is increased. This design, wherein the CPU's execution resources can operate on only one instruction at a time, can only possibly reach scalar performance. However, the performance is nearly always subscalar." 
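The subscalar behavior described above is usually quantified as instructions per cycle (IPC): with multi-cycle instructions and no overlap, IPC stays below 1. A small illustrative sketch (the figures are hypothetical, not measurements):

```python
# Instructions-per-cycle for a CPU that finishes instructions strictly one
# after another: with multi-cycle instructions and no overlap, IPC < 1.
def ipc(instructions_retired, total_cycles):
    return instructions_retired / total_cycles

# e.g. 100 instructions taking 3 cycles each, with no overlap:
print(ipc(100, 300))   # about 0.33, i.e. subscalar
```

Scalar execution corresponds to an IPC of exactly 1; the superscalar designs discussed below aim to push it above 1.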
+1750,"Attempts to achieve scalar and better performance have resulted in a variety of design methodologies that cause the CPU to behave less linearly and more in parallel. When referring to parallelism in CPUs, two terms are generally used to classify these design techniques: instruction-level parallelism (ILP), which seeks to increase the rate at which instructions are executed within a CPU, and task-level parallelism (TLP), which aims to increase the number of threads or processes that a CPU can execute simultaneously."
+1751,"Each methodology differs both in the way it is implemented and in the relative effectiveness it affords in increasing the CPU's performance for an application."
+1752,"One of the simplest methods for increased parallelism is to begin the first steps of instruction fetching and decoding before the prior instruction finishes executing. This is a technique known as instruction pipelining, and is used in almost all modern general-purpose CPUs. Pipelining allows multiple instructions to be executed at a time by breaking the execution pathway into discrete stages. This separation can be compared to an assembly line, in which an instruction is made more complete at each stage until it exits the execution pipeline and is retired."
+1753,"Pipelining does, however, introduce the possibility for a situation where the result of the previous operation is needed to complete the next operation; a condition often termed a data dependency conflict. Therefore, pipelined processors must check for these sorts of conditions and delay a portion of the pipeline if necessary. A pipelined processor can become very nearly scalar, inhibited only by pipeline stalls."
+1754,"Improvements in instruction pipelining led to further decreases in the idle time of CPU components. Designs that are said to be superscalar include a long instruction pipeline and multiple identical execution units, such as load–store units, arithmetic–logic units, floating-point units and address generation units. In a superscalar pipeline, instructions are read and passed to a dispatcher, which decides whether or not the instructions can be executed in parallel.
If so, they are dispatched to execution units, resulting in their simultaneous execution. In general, the number of instructions that a superscalar CPU will complete in a cycle is dependent on the number of instructions it is able to dispatch simultaneously to execution units."
+1755,"Most of the difficulty in the design of a superscalar CPU architecture lies in creating an effective dispatcher. The dispatcher needs to be able to quickly determine whether instructions can be executed in parallel, as well as dispatch them in such a way as to keep as many execution units busy as possible. This requires that the instruction pipeline is filled as often as possible and requires significant amounts of CPU cache. It also makes hazard-avoiding techniques like branch prediction, speculative execution, register renaming, out-of-order execution and transactional memory crucial to maintaining high levels of performance. By attempting to predict which branch a conditional instruction will take, the CPU can minimize the number of times that the entire pipeline must wait until a conditional instruction is completed. Speculative execution often provides modest performance increases by executing portions of code that may not be needed after a conditional operation completes. Out-of-order execution somewhat rearranges the order in which instructions are executed to reduce delays due to data dependencies. Also, in the case of single instruction stream, multiple data stream, where a large amount of data of the same type has to be processed, modern processors can disable parts of the pipeline so that when a single instruction is executed many times, the CPU skips the fetch and decode phases and thus greatly increases performance on certain occasions, especially in highly repetitive program engines such as video creation software and photo processing."
+1756,"When a fraction of the CPU is superscalar, the part that is not suffers a performance penalty due to scheduling stalls.
The Intel P5 Pentium had two superscalar ALUs which could accept one instruction per clock cycle each, but its FPU could not. Thus the P5 was integer superscalar but not floating point superscalar. Intel's successor to the P5 architecture, P6, added superscalar abilities to its floating-point features." +1757,"Simple pipelining and superscalar design increase a CPU's ILP by allowing it to execute instructions at rates surpassing one instruction per clock cycle. Most modern CPU designs are at least somewhat superscalar, and nearly all general purpose CPUs designed in the last decade are superscalar. In later years some of the emphasis in designing high-ILP computers has been moved out of the CPU's hardware and into its software interface, or instruction set architecture . The strategy of the very long instruction word causes some ILP to become implied directly by the software, reducing the CPU's work in boosting ILP and thereby reducing design complexity." +1758,"Another strategy of achieving performance is to execute multiple threads or processes in parallel. This area of research is known as parallel computing. In Flynn's taxonomy, this strategy is known as multiple instruction stream, multiple data stream ." +1759,"One technology used for this purpose is multiprocessing . The initial type of this technology is known as symmetric multiprocessing , where a small number of CPUs share a coherent view of their memory system. In this scheme, each CPU has additional hardware to maintain a constantly up-to-date view of memory. By avoiding stale views of memory, the CPUs can cooperate on the same program and programs can migrate from one CPU to another. To increase the number of cooperating CPUs beyond a handful, schemes such as non-uniform memory access and directory-based coherence protocols were introduced in the 1990s. SMP systems are limited to a small number of CPUs while NUMA systems have been built with thousands of processors. 
Initially, multiprocessing was built using multiple discrete CPUs and boards to implement the interconnect between the processors. When the processors and their interconnect are all implemented on a single chip, the technology is known as chip-level multiprocessing and the single chip as a multi-core processor."
+1760,"It was later recognized that finer-grain parallelism existed within a single program. A single program might have several threads that could be executed separately or in parallel. Some of the earliest examples of this technology implemented input/output processing such as direct memory access as a separate thread from the computation thread. A more general approach to this technology was introduced in the 1970s when systems were designed to run multiple computation threads in parallel. This technology is known as multi-threading. The approach is considered more cost-effective than multiprocessing, as only a small number of components within a CPU are replicated to support MT as opposed to the entire CPU in the case of MP. In MT, the execution units and the memory system including the caches are shared among multiple threads. The downside of MT is that the hardware support for multithreading is more visible to software than that of MP and thus supervisor software like operating systems has to undergo larger changes to support MT. One type of MT that was implemented is known as temporal multithreading, where one thread is executed until it is stalled waiting for data to return from external memory. In this scheme, the CPU would then quickly context switch to another thread which is ready to run, with the switch often done in one CPU clock cycle, as in the UltraSPARC T1. Another type of MT is simultaneous multithreading, where instructions from multiple threads are executed in parallel within one CPU clock cycle."
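The shared-memory cooperation described above can be sketched with ordinary software threads (a toy Python illustration; real hardware multithreading happens below the level of any one program):

```python
import threading

# Four threads cooperating on one shared memory location, in the spirit
# of the coherent shared-memory schemes described above. The lock keeps
# each read-modify-write of the shared value consistent.
counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: every thread saw and updated the same memory
```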
+1761,"For several decades from the 1970s to early 2000s, the focus in designing high performance general purpose CPUs was largely on achieving high ILP through technologies such as pipelining, caches, superscalar execution, out-of-order execution, etc. This trend culminated in large, power-hungry CPUs such as the Intel Pentium 4. By the early 2000s, CPU designers were thwarted from achieving higher performance from ILP techniques due to the growing disparity between CPU operating frequencies and main memory operating frequencies as well as escalating CPU power dissipation owing to more esoteric ILP techniques." +1762,"CPU designers then borrowed ideas from commercial computing markets such as transaction processing, where the aggregate performance of multiple programs, also known as throughput computing, was more important than the performance of a single thread or process." +1763,"This reversal of emphasis is evidenced by the proliferation of dual and more core processor designs and notably, Intel's newer designs resembling its less superscalar P6 architecture. Late designs in several processor families exhibit CMP, including the x86-64 Opteron and Athlon 64 X2, the SPARC UltraSPARC T1, IBM POWER4 and POWER5, as well as several video game console CPUs like the Xbox 360's triple-core PowerPC design, and the PlayStation 3's 7-core Cell microprocessor." +1764,"A less common but increasingly important paradigm of processors deals with data parallelism. The processors discussed earlier are all referred to as some type of scalar device. As the name implies, vector processors deal with multiple pieces of data in the context of one instruction. This contrasts with scalar processors, which deal with one piece of data for every instruction. Using Flynn's taxonomy, these two schemes of dealing with data are generally referred to as single instruction stream, multiple data stream and single instruction stream, single data stream , respectively. 
The great utility in creating processors that deal with vectors of data lies in optimizing tasks that tend to require the same operation to be performed on a large set of data. Some classic examples of these types of tasks include multimedia applications , as well as many types of scientific and engineering tasks. Whereas a scalar processor must complete the entire process of fetching, decoding and executing each instruction and value in a set of data, a vector processor can perform a single operation on a comparatively large set of data with one instruction. This is only possible when the application tends to require many steps which apply one operation to a large set of data." +1765,"Most early vector processors, such as the Cray-1, were associated almost exclusively with scientific research and cryptography applications. However, as multimedia has largely shifted to digital media, the need for some form of SIMD in general-purpose processors has become significant. Shortly after inclusion of floating-point units started to become commonplace in general-purpose processors, specifications for and implementations of SIMD execution units also began to appear for general-purpose processors. Some of these early SIMD specifications – like HP's Multimedia Acceleration eXtensions and Intel's MMX – were integer-only. This proved to be a significant impediment for some software developers, since many of the applications that benefit from SIMD primarily deal with floating-point numbers. Progressively, developers refined and remade these early designs into some of the common modern SIMD specifications, which are usually associated with one instruction set architecture . Some notable modern examples include Intel's Streaming SIMD Extensions and the PowerPC-related AltiVec ." +1766,"Many modern architectures often include hardware performance counters , which enables low-level collection, benchmarking, debugging or analysis of running software metrics. 
HPC may also be used to discover and analyze unusual or suspicious activity of the software, such as return-oriented programming or sigreturn-oriented programming exploits. This is usually done by software-security teams to assess and find malicious binary programs."
+1767,"Many major vendors provide software interfaces that can be used to collect data from CPU registers in order to get metrics. Operating system vendors also provide software like perf to record, benchmark, or trace CPU events for running kernels and applications."
+1768,"Hardware counters provide a low-overhead method for collecting comprehensive performance metrics related to a CPU's core elements – a significant advantage over software profilers. Additionally, they generally eliminate the need to modify the underlying source code of a program. Because hardware designs differ between architectures, the specific types and interpretations of hardware counters will also change."
+1769,Most modern CPUs have privileged modes to support operating systems and virtualization.
+1770,Cloud computing can use virtualization to provide virtual central processing units for separate users.
+1771,"A host is the virtual equivalent of a physical machine, on which a virtual system is operating. When there are several physical machines operating in tandem and managed as a whole, the grouped computing and memory resources form a cluster. In some systems, it is possible to dynamically add hosts to and remove them from a cluster. Resources available at a host and cluster level can be partitioned into resource pools with fine granularity."
+1772,"The performance or speed of a processor depends on, among many other factors, the clock rate and the instructions per clock (IPC), which together determine the instructions per second (IPS) that the CPU can perform.
+Many reported IPS values have represented ""peak"" execution rates on artificial instruction sequences with few branches, whereas realistic workloads consist of a mix of instructions and applications, some of which take longer to execute than others. The performance of the memory hierarchy also greatly affects processor performance, an issue barely considered in IPS calculations. Because of these problems, various standardized tests, often called ""benchmarks"" for this purpose, such as SPECint, have been developed to attempt to measure the real effective performance in commonly used applications."
+1773,"Processing performance of computers is increased by using multi-core processors, which essentially is plugging two or more individual processors into one integrated circuit. Ideally, a dual core processor would be nearly twice as powerful as a single core processor. In practice, the performance gain is far smaller, only about 50%, due to imperfect software algorithms and implementation. Increasing the number of cores in a processor increases the workload that can be handled. This means that the processor can now handle numerous asynchronous events, interrupts, etc., which can take a toll on the CPU when overwhelmed. These cores can be thought of as different floors in a processing plant, with each floor handling a different task. Sometimes, these cores will handle the same tasks as cores adjacent to them if a single core is not enough to handle the information. Multi-core CPUs enhance a computer's ability to run several tasks simultaneously by providing additional processing power. However, the increase in speed is not directly proportional to the number of cores added. This is because the cores need to interact through specific channels, and this inter-core communication consumes a portion of the available processing speed."
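The sub-linear scaling described above is often modeled with Amdahl's law: if a fraction of the work is serial (for example, inter-core communication), the speedup flattens as cores are added. A small sketch, with an invented parallel fraction:

```python
# Amdahl's-law sketch: speedup from n cores when only part of the
# workload can run in parallel. The 2/3 fraction below is invented.
def speedup(cores, parallel_fraction):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# With two thirds of the work parallelizable, a second core yields
# roughly the ~50% gain mentioned above rather than a 2x speedup.
print(round(speedup(2, 2 / 3), 2))  # 1.5
print(round(speedup(8, 2 / 3), 2))  # 2.4: far from 8x
```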
+1774,"Due to specific capabilities of modern CPUs, such as simultaneous multithreading and uncore, which involve sharing of actual CPU resources while aiming at increased utilization, monitoring performance levels and hardware use gradually became a more complex task. As a response, some CPUs implement additional hardware logic that monitors actual use of various parts of a CPU and provides various counters accessible to software; an example is Intel's Performance Counter Monitor technology." +1775,"Low-level languages can convert to machine code without a compiler or interpreter—second-generation programming languages use a simpler processor called an assembler—and the resulting code runs directly on the processor. A program written in a low-level language can be made to run very quickly, with a small memory footprint. An equivalent program in a high-level language can be less efficient and use more memory. Low-level languages are simple, but considered difficult to use, due to numerous technical details that the programmer must remember. By comparison, a high-level programming language isolates execution semantics of a computer architecture from the specification of the program, which simplifies development." +1776,"Machine code is the only language a computer can process directly without a previous transformation. Currently, programmers almost never write programs directly in machine code, because it requires attention to numerous details that a high-level programming language handles automatically. Furthermore, unlike programming in an assembly language, it requires memorizing or looking up numerical codes for every instruction, and is extremely difficult to modify." +1777,"True machine code is a stream of raw, usually binary, data. 
A programmer coding in ""machine code"" normally codes instructions and data in a more readable form such as decimal, octal, or hexadecimal, which is translated to internal format by a program called a loader or toggled into the computer's memory from a front panel."
+1778,"Although few programs are written in machine language, programmers often become adept at reading it through working with core dumps or debugging from the front panel."
+1779,"Example of a function in hexadecimal representation of x86-64 machine code to calculate the nth Fibonacci number, with each line corresponding to one instruction:"
+1780,"Second-generation languages provide one abstraction level on top of the machine code. In the early days of coding on computers like TX-0 and PDP-1, the first thing MIT hackers did was to write assemblers.
+Assembly language has little semantics or formal specification, being only a mapping of human-readable symbols, including symbolic addresses, to opcodes, addresses, numeric constants, strings and so on. Typically, one machine instruction is represented as one line of assembly code. Assemblers produce object files that can link with other object files or be loaded on their own."
+1781,Most assemblers provide macros to generate common sequences of instructions.
+1782,"Example: The same Fibonacci number calculator as above, but in x86-64 assembly language using AT&T syntax:"
+1783,"In this code example, the registers of the x86-64 processor are named and manipulated directly. The function loads its 32-bit argument from %edi in accordance with the System V application binary interface for x86-64 and performs its calculation by manipulating values in the %eax, %ecx, %esi, and %edi registers until it has finished and returns. Note that in this assembly language, there is no concept of returning a value.
The result having been stored in the %eax register, again in accordance with the System V application binary interface, the ret instruction simply removes the top 64-bit element on the stack and causes the next instruction to be fetched from that location, with the result of the function being stored in %eax. x86-64 assembly language imposes no standard for passing values to a function or returning values from a function; those are defined by an application binary interface, such as the System V ABI for a particular instruction set."
+1784,Compare this with the same function in C:
+1785,This code is similar in structure to the assembly language example but there are significant differences in terms of abstraction:
+1786,These abstractions make the C code compilable without modification on any architecture for which a C compiler has been written. The x86 assembly language code is specific to the x86-64 architecture and the System V application binary interface for that architecture.
+1787,"During the late 1960s and 1970s, high-level languages that included some degree of access to low-level programming functions, such as PL/S, BLISS, BCPL, extended ALGOL and ESPOL, and C, were introduced. One method for this is inline assembly, in which assembly code is embedded in a high-level language that supports this feature. Some of these languages also allow architecture-dependent compiler optimization directives to adjust the way a compiler uses the target processor architecture."
+1788,An interpreter generally uses one of the following strategies for program execution: parse the source code and perform its behavior directly;
+1789,Translate source code into some efficient intermediate representation or object code and immediately execute that;
+1790,Explicitly execute stored precompiled bytecode made by a compiler and matched with the interpreter's Virtual Machine.
+1791,"Early versions of Lisp programming language and minicomputer and microcomputer BASIC dialects would be examples of the first type.
Perl, Raku, Python, MATLAB, and Ruby are examples of the second, while UCSD Pascal is an example of the third type. Source programs are compiled ahead of time and stored as machine-independent code, which is then linked at run-time and executed by an interpreter and/or compiler. Some systems, such as Smalltalk and contemporary versions of BASIC and Java, may combine the second and third types. Interpreters of various types have also been constructed for many languages traditionally associated with compilation, such as Algol, Fortran, Cobol, C and C++."
+1792,"While interpretation and compilation are the two main means by which programming languages are implemented, they are not mutually exclusive, as most interpreting systems also perform some translation work, just like compilers. The terms ""interpreted language"" or ""compiled language"" signify that the canonical implementation of that language is an interpreter or a compiler, respectively. A high-level language is ideally an abstraction independent of particular implementations."
+1793,"Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time. Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed. The first interpreted high-level language was Lisp. Lisp was first implemented by Steve Russell on an IBM 704 computer. Russell had read John McCarthy's paper, ""Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I"", and realized that the Lisp eval function could be implemented in machine code. The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, ""evaluate Lisp expressions""."
+1794,"An interpreter usually consists of a set of known commands it can execute, and a list of these commands in the order a programmer wishes to execute them.
Each command contains the data the programmer wants to mutate, and information on how to mutate the data. For example, an interpreter might read ADD Books, 5 and interpret it as a request to add five to the Books variable."
+1795,"Interpreters have a wide variety of instructions which are specialized to perform different tasks, but most provide instructions for basic mathematical operations, branching, and memory management, making most interpreters Turing complete. Many interpreters are also closely integrated with a garbage collector and debugger."
+1796,Programs written in a high-level language are either directly executed by some kind of interpreter or converted into machine code by a compiler for the CPU to execute.
+1797,"While compilers generally produce machine code directly executable by computer hardware, they can often produce an intermediate form called object code. This is basically the same machine-specific code but augmented with a symbol table with names and tags to make executable blocks identifiable and relocatable. Compiled programs will typically use building blocks kept in a library of such object code modules. A linker is used to combine library files with the object file of the application to form a single executable file. The object files that are used to generate an executable file are thus often produced at different times, and sometimes even in different languages."
+1798,"A simple interpreter written in a low-level language may have similar machine code blocks implementing functions of the high-level language stored, and executed when a function's entry in a lookup table points to that code. However, an interpreter written in a high-level language typically uses another approach, such as generating and then walking a parse tree, or by generating and executing intermediate software-defined instructions, or both."
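The ADD Books, 5 example above can be sketched as a minimal command interpreter (the command set here is invented for illustration):

```python
# A toy interpreter: each command names an operation, the variable to
# mutate, and an operand, as in the "ADD Books, 5" example above.
def interpret(program, env):
    operations = {"ADD": lambda a, b: a + b, "SUB": lambda a, b: a - b}
    for line in program:
        op, rest = line.split(None, 1)
        var, amount = (part.strip() for part in rest.split(","))
        env[var] = operations[op](env.get(var, 0), int(amount))
    return env

env = interpret(["ADD Books, 5", "SUB Books, 2"], {})
print(env)  # {'Books': 3}
```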
+1799,"Thus, both compilers and interpreters generally turn source code into tokens, both may generate a parse tree, and both may generate immediate instructions . The basic difference is that a compiler system, including a linker, generates a stand-alone machine code program, while an interpreter system instead performs the actions described by the high-level program." +1800,"A compiler can thus make almost all the conversions from source code semantics to the machine level once and for all while an interpreter has to do some of this conversion work every time a statement or function is executed. However, in an efficient interpreter, much of the translation work is factored out and done only the first time a program, module, function, or even statement, is run, thus quite akin to how a compiler works. However, a compiled program still runs much faster, under most circumstances, in part because compilers are designed to optimize code, and may be given ample time for this. This is especially true for simpler high-level languages without dynamic data structures, checks, or type checking." +1801,"In traditional compilation, the executable output of the linkers is typically relocatable when run under a general operating system, much like the object code modules are but with the difference that this relocation is done dynamically at run time, i.e. when the program is loaded for execution. On the other hand, compiled and linked programs for small embedded systems are typically statically allocated, often hard coded in a NOR flash memory, as there is often no secondary storage and no operating system in this sense." +1802,"Historically, most interpreter systems have had a self-contained editor built in. This is becoming more common also for compilers , although some programmers prefer to use an editor of their choice and run the compiler, linker and other tools manually. 
Historically, compilers predate interpreters because hardware at that time could not support both the interpreter and interpreted code and the typical batch environment of the time limited the advantages of interpretation."
+1803,"During the software development cycle, programmers make frequent changes to source code. When using a compiler, each time a change is made to the source code, they must wait for the compiler to translate the altered source files and link all of the binary code files together before the program can be executed. The larger the program, the longer the wait. By contrast, a programmer using an interpreter does a lot less waiting, as the interpreter usually just needs to translate the code being worked on to an intermediate representation, thus requiring much less time before the changes can be tested. Effects are evident upon saving the source code and reloading the program. Compiled code is generally less readily debugged as editing, compiling, and linking are sequential processes that have to be conducted in the proper sequence with a proper set of commands. For this reason, many compilers also have an executive aid, such as a Makefile and the make program. The Makefile lists compiler and linker command lines and program source code files, but might take a simple command line menu input which selects the third group of instructions and then issues the commands to the compiler and linker, feeding the specified source code files."
+1804,"A compiler converts source code into binary instructions for a specific processor's architecture, thus making it less portable. This conversion is made just once, on the developer's environment, and after that the same binary can be distributed to the user's machines where it can be executed without further translation. A cross compiler can generate binary code for the user machine even if it has a different processor than the machine where the code is compiled."
+1805,"An interpreted program can be distributed as source code. It needs to be translated in each final machine, which takes more time but makes the program distribution independent of the machine's architecture. However, the portability of interpreted source code is dependent on the target machine actually having a suitable interpreter. If the interpreter needs to be supplied along with the source, the overall installation process is more complex than delivery of a monolithic executable, since the interpreter itself is part of what needs to be installed." +1806,"The fact that interpreted code can easily be read and copied by humans can be of concern from the point of view of copyright. However, various systems of encryption and obfuscation exist. Delivery of intermediate code, such as bytecode, has a similar effect to obfuscation, but bytecode could be decoded with a decompiler or disassembler." +1807,The main disadvantage of interpreters is that an interpreted program typically runs more slowly than if it had been compiled. The difference in speeds could be tiny or great; often an order of magnitude and sometimes more. It generally takes longer to run a program under an interpreter than to run the compiled code but it can take less time to interpret it than the total time required to compile and run it. This is especially important when prototyping and testing code when an edit-interpret-debug cycle can often be much shorter than an edit-compile-run-debug cycle. +1808,"Interpreting code is slower than running the compiled code because the interpreter must analyze each statement in the program each time it is executed and then perform the desired action, whereas the compiled code just performs the action within a fixed context determined by the compilation. This run-time analysis is known as ""interpretive overhead"". 
Access to variables is also slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at compile time."
+1809,"There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler. Some systems allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed. Many interpreters do not execute the source code as it stands but convert it into some more compact internal form. Many BASIC interpreters replace keywords with single byte tokens which can be used to find the instruction in a jump table. A few interpreters, such as the PBASIC interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where command tokens occupy perhaps 5 bits, nominally ""16-bit"" constants are stored in a variable-length code requiring 3, 6, 10, or 18 bits, and address operands include a ""bit offset"". Many BASIC interpreters can store and read back their own tokenized internal representation."
+1810,"An interpreter might well use the same lexical analyzer and parser as the compiler and then interpret the resulting abstract syntax tree. Example data type definitions for the latter, and a toy interpreter for syntax trees obtained from C expressions are shown in the box."
+1811,"Interpretation cannot be used as the sole method of execution: even though an interpreter can itself be interpreted and so on, a directly executed program is needed somewhere at the bottom of the stack because the code being interpreted is not, by definition, the same as the machine code that the CPU can execute."
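A toy syntax-tree interpreter of the kind just described can be sketched as follows (the node shapes here are invented stand-ins, not the original box's C definitions):

```python
# Toy syntax-tree interpreter: leaves are integer constants, interior
# nodes are (operator, left, right) tuples. Evaluation walks the tree.
def evaluate(node):
    if isinstance(node, int):
        return node
    op, left, right = node
    a, b = evaluate(left), evaluate(right)
    return {"+": a + b, "-": a - b, "*": a * b}[op]

tree = ("*", ("+", 1, 2), 4)  # the expression (1 + 2) * 4
print(evaluate(tree))  # 12
```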
+1812,"There is a spectrum of possibilities between interpreting and compiling, depending on the amount of analysis performed before the program is executed. For example, Emacs Lisp is compiled to bytecode, which is a highly compressed and optimized representation of the Lisp source, but is not machine code. This ""compiled"" code is then interpreted by a bytecode interpreter. The compiled code in this case is machine code for a virtual machine, which is implemented not in hardware, but in the bytecode interpreter. Such compiling interpreters are sometimes also called compreters. In a bytecode interpreter each instruction starts with a byte, and therefore bytecode interpreters have up to 256 instructions, although not all may be used. Some bytecodes may take multiple bytes, and may be arbitrarily complicated."
+1813,"Control tables, which do not necessarily ever need to pass through a compiling phase, dictate appropriate algorithmic control flow via customized interpreters in similar fashion to bytecode interpreters."
+1814,"Threaded code interpreters are similar to bytecode interpreters but instead of bytes they use pointers. Each ""instruction"" is a word that points to a function or an instruction sequence, possibly followed by a parameter. The threaded code interpreter either loops fetching instructions and calling the functions they point to, or fetches the first instruction and jumps to it, and every instruction sequence ends with a fetch and jump to the next instruction. Unlike bytecode there is no effective limit on the number of different instructions other than available memory and address space. The classic example of threaded code is the Forth code used in Open Firmware systems: the source language is compiled into ""F code"", which is then interpreted by a virtual machine."
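The bytecode dispatch loop described above can be sketched as follows; the opcode numbering and instruction set are invented for illustration. Each instruction begins with a one-byte opcode (hence at most 256 distinct instructions), and PUSH shows a multi-byte instruction carrying an operand.

```python
# Invented opcodes for a minimal stack-based bytecode interpreter.
PUSH, ADD, MUL, HALT = 0x01, 0x02, 0x03, 0x00

def run(bytecode):
    """Fetch-decode-execute loop over a bytes object."""
    stack, pc = [], 0
    while True:
        op = bytecode[pc]
        pc += 1
        if op == PUSH:                       # multi-byte: opcode + operand byte
            stack.append(bytecode[pc])
            pc += 1
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack.pop()

# (2 + 3) * 4
program = bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT])
print(run(program))  # 20
```

A threaded-code variant would replace the integer opcodes with direct references to the handler functions, removing the dispatch `if`/`elif` chain.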
+1815,"In the spectrum between interpreting and compiling, another approach is to transform the source code into an optimized abstract syntax tree, then execute the program following this tree structure, or use it to generate native code just-in-time. In this approach, each sentence needs to be parsed just once. As an advantage over bytecode, the AST keeps the global program structure and relations between statements, and when compressed provides a more compact representation. Thus, using AST has been proposed as a better intermediate format for just-in-time compilers than bytecode. Also, it allows the system to perform better analysis during runtime."
+1816,"However, for interpreters an AST causes more overhead than bytecode, because nodes related to syntax perform no useful work, the representation is less sequential, and visiting the tree carries its own overhead."
+1817,"Further blurring the distinction between interpreters, bytecode interpreters and compilation is just-in-time compilation, a technique in which the intermediate representation is compiled to native machine code at runtime. This confers the efficiency of running native code, at the cost of startup time and increased memory use when the bytecode or AST is first compiled. The earliest published JIT compiler is generally attributed to work on LISP by John McCarthy in 1960. Adaptive optimization is a complementary technique in which the interpreter profiles the running program and compiles its most frequently executed parts into native code. The latter technique is a few decades old, appearing in languages such as Smalltalk in the 1980s."
+1818,"Just-in-time compilation has gained mainstream attention amongst language implementers in recent years, with Java, the .NET Framework, most modern JavaScript implementations, and MATLAB now including JIT compilers."
+1819,"Making the distinction between compilers and interpreters yet more vague is a special interpreter design known as a template interpreter. Rather than implementing the execution of code by virtue of a large switch statement containing every possible bytecode, while operating on a software stack or a tree walk, a template interpreter maintains a large array of bytecodes mapped directly to corresponding native machine instructions that can be executed on the host hardware as key-value pairs, known as a ""template"". When a particular code segment is executed, the interpreter simply loads or jumps to the opcode's mapping in the template and directly runs it on the hardware. Due to its design, the template interpreter strongly resembles a just-in-time compiler rather than a traditional interpreter; however, it is technically not a JIT, because it merely translates code from the language into native calls one opcode at a time rather than creating optimized sequences of CPU-executable instructions from the entire code segment. Because the interpreter simply passes calls directly to the hardware rather than implementing them itself, it is much faster than every other interpreter type, even bytecode interpreters, and to an extent less prone to bugs; as a tradeoff, it is more difficult to maintain, because the interpreter must support translation to multiple different architectures instead of a platform-independent virtual machine or stack. To date, the only template interpreter implementations of widely known languages are the interpreter within Java's official reference implementation, the Sun HotSpot Java Virtual Machine, and the Ignition interpreter in the Google V8 JavaScript execution engine."
+1820,A self-interpreter is a programming language interpreter written in a programming language which can interpret itself; an example is a BASIC interpreter written in BASIC. 
Self-interpreters are related to self-hosting compilers. +1821,"If no compiler exists for the language to be interpreted, creating a self-interpreter requires the implementation of the language in a host language . By having a first interpreter such as this, the system is bootstrapped and new versions of the interpreter can be developed in the language itself. It was in this way that Donald Knuth developed the TANGLE interpreter for the language WEB of the de-facto standard TeX typesetting system." +1822,"Defining a computer language is usually done in relation to an abstract machine or as a mathematical function . A language may also be defined by an interpreter in which the semantics of the host language is given. The definition of a language by a self-interpreter is not well-founded , but a self-interpreter tells a reader about the expressiveness and elegance of a language. It also enables the interpreter to interpret its source code, the first step towards reflective interpreting." +1823,"An important design dimension in the implementation of a self-interpreter is whether a feature of the interpreted language is implemented with the same feature in the interpreter's host language. An example is whether a closure in a Lisp-like language is implemented using closures in the interpreter language or implemented ""manually"" with a data structure explicitly storing the environment. The more features implemented by the same feature in the host language, the less control the programmer of the interpreter has; for example, a different behavior for dealing with number overflows cannot be realized if the arithmetic operations are delegated to corresponding operations in the host language." +1824,"Some languages such as Lisp and Prolog have elegant self-interpreters. Much research on self-interpreters has been conducted in the Scheme programming language, a dialect of Lisp. In general, however, any Turing-complete language allows writing of its own interpreter. 
Lisp is such a language, because Lisp programs are lists of symbols and other lists. XSLT is such a language, because XSLT programs are written in XML. A sub-domain of metaprogramming is the writing of domain-specific languages."
+1825,"Clive Gifford introduced a measure of the quality of a self-interpreter: the limit of the ratio between the computer time spent running a stack of N self-interpreters and the time spent running a stack of N − 1 self-interpreters, as N goes to infinity. This value does not depend on the program being run."
+1826,The book Structure and Interpretation of Computer Programs presents examples of meta-circular interpretation for Scheme and its dialects. Other examples of languages with a self-interpreter are Forth and Pascal.
+1827,"Microcode is a very commonly used technique ""that imposes an interpreter between the hardware and the architectural level of a computer"". As such, the microcode is a layer of hardware-level instructions that implement higher-level machine code instructions or internal state machine sequencing in many digital processing elements. Microcode is used in general-purpose central processing units, as well as in more specialized processors such as microcontrollers, digital signal processors, channel controllers, disk controllers, network interface controllers, network processors, graphics processing units, and in other hardware."
+1828,"Microcode typically resides in special high-speed memory and translates machine instructions, state machine data or other input into sequences of detailed circuit-level operations. It separates the machine instructions from the underlying electronics so that instructions can be designed and altered more freely. It also facilitates the building of complex multi-step instructions, while reducing the complexity of computer circuits. Writing microcode is often called microprogramming and the microcode in a particular processor implementation is sometimes called a microprogram."
+1829,"More extensive microcoding allows small and simple microarchitectures to emulate more powerful architectures with wider word length, more execution units and so on, which is a relatively simple way to achieve software compatibility between different products in a processor family."
+1830,"Even a processor without microcode can itself be considered a parsing, immediate-execution interpreter, one written in a general-purpose hardware description language such as VHDL as a system that parses the machine code instructions and immediately executes them."
+1831,"Interpreters, such as those for Java, Perl, and Tcl, are now essential for a wide range of computational tasks, including binary emulation and Internet applications. Despite their flexibility, interpreter performance remains a concern, particularly on systems with limited hardware resources. Instrumentation and tracing techniques provide insight into interpreter implementations and into processor resource utilization during execution, for example through evaluations of interpreters targeting the MIPS instruction set and of languages such as Tcl, Perl, and Java. Comparisons with compiled code show that performance characteristics are shaped by interpreter complexity: interpreter performance depends more on the nuances and resource needs of the interpreter itself than on the particular application being interpreted."
+1832,"In computer programming, assembly language, often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction, but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported."
+1833,"The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, Coding for A.R.C.. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term ""assembler"" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean ""a program that assembles another program consisting of several sections into a single program"". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time." +1834,"Because assembly depends on the machine code instructions, each assembly language is specific to a particular computer architecture." +1835,"Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, much more complicated tasks than assembling." +1836,"In the first decades of computing, it was commonplace for both systems programming and application programming to take place entirely in assembly language. While still irreplaceable for some purposes, the majority of programming is now conducted in higher-level interpreted and compiled languages. 
In ""No Silver Bullet"", Fred Brooks summarised the effects of the switch away from assembly language programming: ""Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility.""" +1837,"Today, it is typical to use small amounts of assembly language code within larger systems implemented in a higher-level language, for performance reasons or to interact directly with hardware in ways unsupported by the higher-level language. For instance, just under 2% of version 4.9 of the Linux kernel source code is written in assembly; more than 97% is written in C." +1838,"Assembly language uses a mnemonic to represent, e.g., each low-level machine instruction or opcode, each directive, typically also each architectural register, flag, etc. Some of the mnemonics may be built in and some user defined. Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging." +1839,"Some are column oriented, with specific fields in specific columns; this was very common for machines using punched cards in the 1950s and early 1960s. Some assemblers have free-form syntax, with fields separated by delimiters, e.g., punctuation, white space. 
Some assemblers are hybrid, with, e.g., labels, in a specific column and other fields separated by delimiters; this became more common than column oriented syntax in the 1960s." +1840,"An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of called subroutines." +1841,"Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Called jump-sizing, most of them are able to perform jump-instruction replacements in any number of passes, on request. Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as possible." +1842,"Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples." +1843,"There may be several assemblers with different syntax for a particular CPU or instruction set architecture. 
For instance, an instruction to add memory data to a register in an x86-family processor might be add eax,[ebx], in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations."
+1844,There are two types of assemblers based on how many passes through the source are needed to produce the object file.
+1845,"In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more ""no-operation"" instructions in a later pass or the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target."
+1846,"The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory, rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers, with much larger memories, had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process faster."
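The two-pass scheme can be sketched for a toy instruction set (invented here, with fixed one-word instructions): pass 1 assigns addresses and collects label definitions into a symbol table, and pass 2 emits code with every label, including forward references, resolved.

```python
def assemble(lines):
    """Toy two-pass assembler: labels end with ':', instructions are 1 word."""
    symtab, addr = {}, 0
    # Pass 1: assign an address to each instruction and record labels.
    for line in lines:
        if line.endswith(':'):
            symtab[line[:-1]] = addr       # label marks the current address
        else:
            addr += 1                      # every instruction occupies 1 word
    # Pass 2: emit (opcode, operands), substituting label addresses.
    code = []
    for line in lines:
        if line.endswith(':'):
            continue
        op, *operands = line.split()
        code.append((op, [symtab.get(o, o) for o in operands]))
    return code

prog = ['JMP FWD',     # forward reference: FWD is unknown during pass 1
        'BKWD:',
        'NOP',
        'FWD:',
        'JMP BKWD']    # backward reference: BKWD is already known
print(assemble(prog))  # [('JMP', [2]), ('NOP', []), ('JMP', [1])]
```

A one-pass assembler processing the same program would have to emit the forward `JMP` before knowing where FWD lies, forcing the pessimistic padding or errata described above.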
+1847,"Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2."
+1848,More sophisticated high-level assemblers provide language abstractions such as:
+1849,See Language design below for more details.
+1850,"A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements, comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be ""implied"", which means the data upon which the instruction operates is implicitly defined by the instruction itself; such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed."
+1851,"For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001."
+1852,This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.
+1853,"Here, B0 means 'Move a copy of the following value into AL', and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. 
Assembly language for the 8086 family provides the mnemonic MOV for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember."
+1854,"In some assembly languages the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate addresses. Other assemblers may use separate opcode mnemonics such as L for ""move memory to register"", ST for ""move register to memory"", LR for ""move register to register"", MVI for ""move immediate operand to memory"", etc."
+1855,"If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data, depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The hexadecimal form of this instruction is:"
+1856,"The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded to specify that both operands are registers, the source is AH, and the destination is AL."
+1857,"In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant, so only the 88 instruction can be applicable."
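How the assembler chooses between the B0 and 88 encodings by examining the operand can be sketched as follows. This is a minimal sketch restricted to moves into AL, using the standard x86 3-bit codes for the 8-bit registers; it handles only the two operand forms discussed in the text.

```python
# Standard 3-bit identifiers for the x86 8-bit registers.
REG8 = {'AL': 0b000, 'CL': 0b001, 'DL': 0b010, 'BL': 0b011,
        'AH': 0b100, 'CH': 0b101, 'DH': 0b110, 'BH': 0b111}

def assemble_mov_al(src):
    """Pick the encoding of MOV AL, src by operand type."""
    if src in REG8:
        # 88 /r: register-to-register byte move.
        # ModRM = 11 (register mode) | source reg << 3 | destination reg (AL).
        return bytes([0x88, 0b11000000 | (REG8[src] << 3) | REG8['AL']])
    # B0 + imm8: move an immediate byte into AL.
    return bytes([0xB0, int(src.rstrip('hH'), 16)])

print(assemble_mov_al('61h').hex())  # b061
print(assemble_mov_al('AH').hex())   # 88e0
```

The operand alone determines the opcode: `61h` is a numeric constant, so only B0 applies; `AH` is a register name, so only 88 applies, exactly as the text describes.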
+1858,"Assembly languages are always designed so that this sort of lack of ambiguity is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH. " +1859,"Returning to the original example, while the x86 opcode 10110000 copies an 8-bit value into the AL register, 10110001 moves it into CL and 10110010 does so into DL. Assembly language examples for these follow." +1860,The syntax of MOV can also be more complex as the following examples show. +1861,"In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which." +1862,"Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a ""branch if greater or equal"" instruction, an assembler may provide a pseudoinstruction that expands to the machine's ""set if less than"" and ""branch if zero "". Most full-featured assemblers also provide a rich macro language which is used by vendors and programmers to generate more complex code and data sequences. 
Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments." +1863,"Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences." +1864,"Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation." +1865,"Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics , some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. 
A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic , there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products." +1866,"There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation . A typical assembly language consists of 3 types of instruction statements that are used to define program operations:" +1867,"Instructions in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction , and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate , registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP for BC with a mask of 0." 
+1868,"Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPU's do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode to encode the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions." +1869,"Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes." +1870,Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn. +1871,"There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops." +1872,"Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler ""directing it to perform operations other than assembling instructions"". Directives affect how the assembler operates and ""may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters"". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data." 
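Pseudo-opcode expansion of this kind can be sketched as a simple rewrite table: the assembler replaces a recognized source statement with the real instruction or instructions it stands for. The table below mixes the 8086 and Z80 examples from the text purely for illustration; a real assembler targets one architecture.

```python
# Illustrative pseudo-opcode table (mixing architectures for demonstration):
# 8086: nop is a pseudo-opcode encoding xchg ax,ax;
# Z80 (some assemblers): ld hl,bc expands to two 8-bit moves.
PSEUDO = {
    'nop':      ['xchg ax,ax'],
    'ld hl,bc': ['ld l,c', 'ld h,b'],
}

def expand(statements):
    """Rewrite each statement via the pseudo-opcode table, or pass it through."""
    out = []
    for stmt in statements:
        out.extend(PSEUDO.get(stmt, [stmt]))
    return out

print(expand(['ld hl,bc', 'nop', 'ret']))
# ['ld l,c', 'ld h,b', 'xchg ax,ax', 'ret']
```

As the text notes, a disassembler running the other way may recognize `xchg ax,ax` and print it as `nop`, but it cannot recover the original `ld hl,bc` from the two expanded moves.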
+1873,"The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values." +1874,"Symbolic assemblers let programmers associate arbitrary names with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols ." +1875,"Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses." +1876,"Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The ""raw"" assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made." 
+1877,"Many assemblers support predefined macros, and others support programmer-defined macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined, its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file. Macros in this sense date to IBM autocoders of the 1950s." +1878,"Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, and conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code, e.g., AIF and COPY in HLASM." +1879,"In assembly language, the term ""macro"" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive typically is used to create short single-line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy ""programs"" by themselves, executed by interpretation by the assembler during assembly." +1880,"Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher level languages. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters and other similar features."
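The expand-then-reprocess behavior described above can be sketched as a toy macro processor in Python. The syntax, the macro name `ldhl` and the parameter names are invented for the example; only the Z80 expansion of ld hl,bc into two real instructions comes from the text:

```python
# Toy macro processor: a macro body is a list of template lines in
# which parameter names are textually replaced by the arguments.
# Expanded lines are fed back through the processor, as an assembler
# would reprocess them "as if they existed in the source code file".

macros = {}

def define(name, params, body):
    macros[name] = (params, body)

def expand(line):
    op, _, rest = line.partition(" ")
    if op not in macros:
        return [line]                       # ordinary statement
    params, body = macros[op]
    args = [a.strip() for a in rest.split(",")]
    out = []
    for template in body:
        for p, a in zip(params, args):
            template = template.replace(p, a)
        out.extend(expand(template))        # reprocess expanded lines
    return out

# A Z80-style built-in: "ld hl,bc" becomes two real instructions.
define("ldhl", ["SRC_H", "SRC_L"], ["ld l,SRC_L", "ld h,SRC_H"])
print(expand("ldhl b, c"))   # ['ld l,c', 'ld h,b']
```

Because expansion is purely textual, the body can contain further macro calls, which is how nested and recursive macro definitions arise in real assemblers.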
+1881,"Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or ""unrolled"" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a ""sort"" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4, which was written in the SNOBOL Implementation Language, an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time." +1882,"Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. 
This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine and with IBM's ""real time transaction processing"" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems and credit card systems today." +1883,"It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly-time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements." +1884,"This is because, as was realized in the 1960s, the concept of ""macro processing"" is independent of the concept of ""assembly"", the former being, in modern terms, closer to text processing than to generating object code. The concept of macro processing appeared, and appears, in the C programming language, which supports ""preprocessor instructions"" to set variables, and make conditional tests on their values. Unlike certain previous macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or ""go to"", the latter allowing programs to loop." +1885,"Despite the power of macro processing, it fell into disuse in many high level languages while remaining a perennial for assemblers." +1886,"Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. 
The most famous class of resulting bugs was the use of a parameter that was itself an expression rather than a simple name when the macro writer expected a name. In the macro:" +1887,"the intention was that the caller would provide the name of a variable, and the ""global"" variable or constant b would be used to multiply ""a"". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters." +1888,"Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills, and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s. IBM's High Level Assembler Toolkit includes such a macro package." +1889,"A curious design was A-Natural, a ""stream-oriented"" assembler for 8080/Z80 processors from Whitesmiths Ltd. The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-Natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans." +1890,"There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. 
In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages." +1891,"Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package:" +1892,"Assembly languages were not available at the time when the stored-program computer was introduced. Kathleen Booth ""is credited with inventing assembly language"" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London following consultation by Andrew Booth with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study." +1893,"In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first ""assembler"". Reports on the EDSAC introduced the term ""assembly"" for the process of combining fields into an instruction word. SOAP was an assembly language for the IBM 650 computer written by Stan Poley in 1955." +1894,"Assembly languages eliminate much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming. However, by the late 1950s, their use had largely been supplanted by higher-level languages, in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems."
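The by-name parameter pitfall discussed earlier (the load a-c*b expansion) is easy to reproduce with plain textual replacement. In this Python sketch the macro body, the parameter name PARAM, and the variable values are hypothetical stand-ins chosen so the precedence error is visible:

```python
# Purely textual substitution, as in a macro assembler: the parameter
# name is replaced by whatever text the caller supplied, so passing
# the expression a-c yields a-c*b, which groups as a-(c*b) rather
# than the intended (a-c)*b.

def expand_foo(arg):
    # hypothetical macro body: load PARAM*b
    return "load " + "PARAM*b".replace("PARAM", arg)

a, b, c = 10, 2, 3                    # invented values for the demo
naive = expand_foo("a-c")             # textual result: 'load a-c*b'
safe = expand_foo("(a-c)")            # caller parenthesizes the argument

print(naive, "->", eval(naive[5:]))   # a-(c*b) == 4, not intended
print(safe, "->", eval(safe[5:]))     # (a-c)*b == 14, as intended
```

Parenthesizing either the formal parameter in the definition or the argument at the call site removes the ambiguity, which is exactly the defensive practice the text recommends.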
+1895,"Numerous programs have been written entirely in assembly language. The Burroughs MCP was the first operating system that was not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language, an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL, FORTRAN and some PL/I eventually displaced much of this work, although a number of large organizations retained assembly-language application infrastructures well into the 1990s." +1896,"Assembly language has long been the primary development language for 8-bit home computers such as the Atari 8-bit family, Apple II, MSX, ZX Spectrum, and Commodore 64. Interpreted BASIC dialects on these systems offer insufficient execution speed and insufficient facilities to take full advantage of the available hardware. These systems have severe resource constraints, idiosyncratic memory and display architectures, and provide limited system services. There are also few high-level language compilers suitable for microcomputer use. Similarly, assembly language is the default choice for 8-bit consoles such as the Atari 2600 and Nintendo Entertainment System." +1897,"Key software for IBM PC compatibles was written in assembly language, such as MS-DOS, Turbo Pascal, and the Lotus 1-2-3 spreadsheet. As computer speed grew exponentially, assembly language became a tool for speeding up parts of programs, such as the rendering of Doom, rather than a dominant development language. In the 1990s, assembly language was used to get performance out of systems such as the Sega Saturn and as the primary language for arcade hardware based on the TMS34010 integrated CPU/GPU such as Mortal Kombat and NBA Jam." +1898,There has been debate over the usefulness and performance of assembly language relative to high-level languages. 
+1899,"Although assembly language has specific niche uses where it is important, there are other tools for optimization." +1900,"As of July 2017, the TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite the counter-examples that can be found. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers, as well as for assembly programmers. Moreover, increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging. This has made raw code execution speed a non-issue for many programmers." +1901,There are some situations in which developers might choose to use assembly language: +1902,"Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. 
Therefore, studying a single assembly language is sufficient to learn: I) the basic concepts; II) how to recognize situations where the use of assembly language might be appropriate; and III) how efficient executable code can be created from high-level languages." +1903,"In neuropsychology, linguistics, and philosophy of language, a natural language or ordinary language is any language that occurs naturally in a human community by a process of use, repetition, and change without conscious planning or premeditation. It can take different forms, namely either a spoken language or a sign language. Natural languages are distinguished from constructed and formal languages such as those used to program computers or to study logic." +1904,Natural language can be broadly defined as different from constructed and formal languages. +1905,"All varieties of world languages are natural languages, including those that are associated with linguistic prescriptivism or language regulation. An official language with a regulating academy such as Standard French, overseen by the Académie Française, is classified as a natural language, as its prescriptive aspects do not make it constructed enough to be a constructed language or controlled enough to be a controlled natural language." +1906,"Controlled natural languages are subsets of natural languages whose grammars and dictionaries have been restricted in order to reduce ambiguity and complexity. This may be accomplished by decreasing usage of superlative or adverbial forms, or irregular verbs. Typical purposes for developing and implementing a controlled natural language are to aid understanding by non-native speakers or to ease computer processing. An example of a widely-used controlled natural language is Simplified Technical English, which was originally developed for aerospace and avionics industry manuals."
+1907,"Being constructed, international auxiliary languages such as Esperanto and Interlingua are not considered natural languages, except possibly in the case of true native speakers of such languages. Natural languages evolve, through fluctuations in vocabulary and syntax, to incrementally improve human communication. In contrast, Esperanto was created by Polish ophthalmologist L. L. Zamenhof in the late 19th century." +1908,"Some natural languages have become organically ""standardized"" through the synthesis of two or more pre-existing natural languages over a relatively short period of time through the development of a pidgin, which is not considered a language, into a stable creole language. A creole such as Haitian Creole has its own grammar, vocabulary and literature. It is spoken by over 10 million people worldwide and is one of the two official languages of the Republic of Haiti." +1909,"As of 1996, there were 350 attested families with one or more native speakers of Esperanto. Latino sine flexione, another international auxiliary language, is no longer widely spoken." +1910,A programming language is a system of notation for writing computer programs. +1911,"Programming languages are described in terms of their syntax and semantics , usually defined by a formal language. Languages usually provide features such as a type system, variables and mechanisms for error handling. An implementation of a programming language in the form of a compiler or interpreter allows programs to be executed, either directly or by producing an executable." +1912,"Computer architecture has strongly influenced the design of programming languages, with the most common type developed to perform well on the popular von Neumann architecture. While early programming languages were closely tied to the hardware, over time, they have developed more abstraction to hide implementation details for greater simplicity." 
+1913,"Thousands of programming languages—often classified as imperative, functional, logic, or object-oriented—have been developed for a wide variety of uses. Many aspects of programming language design involve tradeoffs—for example, exception handling simplifies error handling, but at a performance cost. Programming language theory is the subfield of computer science that studies the design, implementation, analysis, characterization, and classification of programming languages." +1914,There are a variety of criteria that may be considered when defining what constitutes a programming language. +1915,"The term computer language is sometimes used interchangeably with programming language. However, the usage of both terms varies among authors, including the exact scope of each. One usage describes programming languages as a subset of computer languages. Similarly, languages used in computing that have a different goal than expressing computer programs are generically designated computer languages. For instance, markup languages are sometimes referred to as computer languages to emphasize that they are not meant to be used for programming. +One way of classifying computer languages is by the computations they are capable of expressing, as described by the theory of computation. The majority of practical programming languages are Turing complete, and all Turing complete languages can implement the same set of algorithms. ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete, yet are often called programming languages. However, some authors restrict the term ""programming language"" to Turing complete languages." +1916,"Another usage regards programming languages as theoretical constructs for programming abstract machines and computer languages as the subset thereof that runs on physical computers, which have finite hardware resources. John C. 
Reynolds emphasizes that formal specification languages are just as much programming languages as are the languages intended for execution. He also argues that textual and even graphical input formats that affect the behavior of a computer are programming languages, despite the fact they are commonly not Turing-complete, and remarks that ignorance of programming language concepts is the reason for many flaws in input formats." +1917,"In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Programming languages differ from natural languages in that natural languages are only used for interaction between people, while programming languages also allow humans to communicate instructions to machines." +1918,"The domain of the language is also worth consideration. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined. XSLT, for example, is a Turing complete language entirely using XML syntax. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset." +1919,Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The practical necessity that a programming language supports adequate abstractions is expressed by the abstraction principle. This principle is sometimes formulated as a recommendation to the programmer to make proper use of such abstractions. +1920,"The first programmable computers were invented at the end of the 1940s, and with them, the first programming languages. The earliest computers were programmed in first-generation programming languages , machine language . This code was very difficult to debug and was not portable between different computer systems. 
In order to improve the ease of programming, assembly languages were invented, diverging from the machine language to make programs easier to understand for humans, although they did not increase portability." +1921,"Initially, hardware resources were scarce and expensive, while human resources were cheaper. Therefore, cumbersome languages that were time-consuming to use but were closer to the hardware, and thus more efficient, were favored. The introduction of high-level programming languages revolutionized programming. These languages abstracted away the details of the hardware, instead being designed to express algorithms that could be understood more easily by humans. For example, arithmetic expressions could now be written in symbolic notation and later translated into machine code that the hardware could execute. In 1957, Fortran was invented. Often considered the first compiled high-level programming language, Fortran has remained in use into the twenty-first century." +1922,"Around 1960, the first mainframes—general purpose computers—were developed, although they could only be operated by professionals and the cost was extreme. The data and instructions were input by punch cards, meaning that no input could be added while the program was running. The languages developed at this time were therefore designed for minimal interaction. After the invention of the microprocessor, computers in the 1970s became dramatically cheaper. New computers also allowed more user interaction, which was supported by newer programming languages." +1923,"Lisp, implemented in 1958, was the first functional programming language. Unlike Fortran, it supports recursion and conditional expressions, and it also introduced dynamic memory management on a heap and automatic garbage collection. For the next decades, Lisp dominated artificial intelligence applications. In 1978, another functional language, ML, introduced inferred types and polymorphic parameters."
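The two Lisp features named above can be made concrete with a small example. Python is used here purely as a stand-in notation; Lisp itself would express the same computation with its (if ...) special form and a recursive definition:

```python
# Recursion plus a conditional *expression* (an "if" that yields a
# value rather than merely directing control flow): the two features
# the text credits Lisp with supporting, in contrast to early Fortran.

def factorial(n: int) -> int:
    return 1 if n <= 1 else n * factorial(n - 1)

print(factorial(5))  # 120
```

The point is that the whole body is a single expression whose value depends on a condition, a style that conditional statements alone cannot express directly.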
+1924,"After ALGOL was released in 1958 and 1960, it became the standard in computing literature for describing algorithms. Although its commercial success was limited, most popular imperative languages—including C, Pascal, Ada, C++, Java, and C#—are directly or indirectly descended from ALGOL 60. Its innovations adopted by later programming languages included greater portability and the first use of a context-free BNF grammar. Simula, the first language to support object-oriented programming, also descends from ALGOL and achieved commercial success. C, another ALGOL descendant, has sustained popularity into the twenty-first century. C allows access to lower-level machine operations more than other contemporary languages. Its power and efficiency, generated in part with flexible pointer operations, come at the cost of making it more difficult to write correct code." +1925,"Prolog, designed in 1972, was the first logic programming language, communicating with a computer using formal logic notation. With logic programming, the programmer specifies a desired result and allows the interpreter to decide how to achieve it." +1926,"During the 1980s, the invention of the personal computer transformed the roles for which programming languages were used. New languages introduced in the 1980s included C++, a superset of C that can compile C programs but also supports classes and inheritance. Ada and other new languages introduced support for concurrency. The Japanese government invested heavily into the so-called fifth-generation languages that added support for concurrency to logic programming constructs, but these languages were outperformed by other concurrency-supporting languages."
Java, based on C++ and designed for increased portability across systems and security, enjoyed large-scale success because these features are essential for many Internet applications. Another development was that of dynamically typed scripting languages—Python, JavaScript, PHP, and Ruby—designed to quickly produce small programs that coordinate existing applications. Due to their integration with HTML, they have also been used for building web pages hosted on servers." +1928,"During the 2000s, there was a slowdown in the development of new programming languages that achieved widespread popularity. One innovation was service-oriented programming, designed to exploit distributed systems whose components are connected by a network. Services are similar to objects in object-oriented programming, but run in a separate process. C# and F# cross-pollinated ideas between imperative and functional programming. After 2010, several new languages—Rust, Go, Swift, Zig and Carbon—competed for the performance-critical software for which C had historically been used. Most of the new programming languages use static typing, while a few, such as Ring and Julia, use dynamic typing." +1929,"Some of the new programming languages are classified as visual programming languages, such as Scratch, LabVIEW and PWCT, and some, such as Ballerina, mix textual and visual programming. This trend has led to projects that help in developing new VPLs, such as Blockly by Google. Many game engines, such as Unreal and Unity, have added support for visual scripting as well." +1930,"All programming languages have some primitive building blocks for the description of data and the processes or transformations applied to them. These primitives are defined by syntactic and semantic rules, which describe their structure and meaning, respectively." +1931,"A programming language's surface form is known as its syntax. 
Most programming languages are purely textual; they use sequences of text including words, numbers, and punctuation, much like written natural languages. On the other hand, some programming languages are more graphical in nature, using visual relationships between symbols to specify a program." +1932,"The syntax of a language describes the possible combinations of symbols that form a syntactically correct program. The meaning given to a combination of symbols is handled by semantics. Since most languages are textual, this article discusses textual syntax." +1933,"The programming language syntax is usually defined using a combination of regular expressions and Backus–Naur form. Below is a simple grammar, based on Lisp:" +1934,This grammar specifies the following: +1935,"The following are examples of well-formed token sequences in this grammar: 12345, () and (a b c232)." +1936,"Not all syntactically correct programs are semantically correct. Many syntactically correct programs are nonetheless ill-formed, per the language's rules; and may result in an error on translation or execution. In some cases, such programs may exhibit undefined behavior. Even when a program is well-defined within a language, it may still have a meaning that is not intended by the person who wrote it." +1937,"Using natural language as an example, it may not be possible to assign a meaning to a grammatically correct sentence or the sentence may be false:" +1938,"The following C language fragment is syntactically correct, but performs operations that are not semantically defined:" +1939,"If the type declaration on the first line were omitted, the program would trigger an error on the undefined variable p during compilation. However, the program would still be syntactically correct since type declarations provide only semantic information." +1940,"The grammar needed to specify a programming language can be classified by its position in the Chomsky hierarchy. 
The syntax of most programming languages can be specified using a Type-2 grammar, i.e., they are context-free grammars. Some languages, including Perl and Lisp, contain constructs that allow execution during the parsing phase. Languages that have constructs that allow the programmer to alter the behavior of the parser make syntax analysis an undecidable problem, and generally blur the distinction between parsing and execution. In contrast to Lisp's macro system and Perl's BEGIN blocks, which may contain general computations, C macros are merely string replacements and do not require code execution." +1941,"The term semantics refers to the meaning of languages, as opposed to their form." +1942,"Static semantics defines restrictions on the structure of valid texts that are hard or impossible to express in standard syntactic formalisms. For compiled languages, static semantics essentially include those semantic rules that can be checked at compile time. Examples include checking that every identifier is declared before it is used or that the labels on the arms of a case statement are distinct. Many important restrictions of this type, like checking that identifiers are used in the appropriate context, or that subroutine calls have the appropriate number and type of arguments, can be enforced by defining them as rules in a logic called a type system. Other forms of static analyses like data flow analysis may also be part of static semantics. Programming languages such as Java and C# have definite assignment analysis, a form of data flow analysis, as part of their respective static semantics." +1943,"Once data has been specified, the machine must be instructed to perform operations on the data. For example, the semantics may define the strategy by which expressions are evaluated to values, or the manner in which control structures conditionally execute statements. 
The dynamic semantics of a language defines how and when the various constructs of a language should produce a program behavior. There are many ways of defining execution semantics. Natural language is often used to specify the execution semantics of languages commonly used in practice. A significant amount of academic research goes into formal semantics of programming languages, which allows execution semantics to be specified in a formal manner. Results from this field of research have seen limited application to programming language design and implementation outside academia." +1944,"A type system defines how a programming language classifies values and expressions into types, how it can manipulate those types and how they interact. The goal of a type system is to verify and usually enforce a certain level of correctness in programs written in that language by detecting certain incorrect operations. Any decidable type system involves a trade-off: while it rejects many incorrect programs, it can also prohibit some correct, albeit unusual programs. In order to bypass this downside, a number of languages have type loopholes, usually unchecked casts that may be used by the programmer to explicitly allow a normally disallowed operation between different types. In most typed languages, the type system is used only to type check programs, but a number of languages, usually functional ones, infer types, relieving the programmer from the need to write type annotations. The formal design and study of type systems is known as type theory." +1945,"A language is typed if the specification of every operation defines types of data to which the operation is applicable. For example, the data represented by ""this text between the quotes"" is a string, and in many programming languages, dividing a number by a string has no meaning and will not be executed. 
The invalid operation may be detected when the program is compiled and will be rejected by the compiler with a compilation error message, or it may be detected while the program is running, resulting in a run-time exception. Many languages allow a function called an exception handler to handle this exception and, for example, always return ""-1"" as the result."
+1946,"A special case of typed languages is the single-typed languages. These are often scripting or markup languages, such as REXX or SGML, and have only one data type, most commonly character strings, which are used for both symbolic and numeric data."
+1947,"In contrast, an untyped language, such as most assembly languages, allows any operation to be performed on any data, generally sequences of bits of various lengths. High-level untyped languages include BCPL, Tcl, and some varieties of Forth."
+1948,"In practice, while few languages are considered typed from the point of view of type theory, most modern languages offer a degree of typing. Many production languages provide means to bypass or subvert the type system, trading type safety for finer control over the program's execution."
+1949,"In static typing, all expressions have their types determined before a program executes, typically at compile-time. For example, 1 and (2 + 2) are integer expressions; they cannot be passed to a function that expects a string or stored in a variable that is defined to hold dates."
+1950,"Statically-typed languages can be either manifestly typed or type-inferred. In the first case, the programmer must explicitly write types at certain textual positions. In the second case, the compiler infers the types of expressions and declarations based on context. Most mainstream statically-typed languages, such as C++, C#, and Java, are manifestly typed. Complete type inference has traditionally been associated with functional languages such as Haskell and ML. 
However, many manifestly-typed languages support partial type inference; for example, C++, Java, and C# all infer types in certain limited cases. Additionally, some programming languages allow for some types to be automatically converted to other types; for example, an int can be used where the program expects a float." +1951,"Dynamic typing, also called latent typing, determines the type-safety of operations at run time; in other words, types are associated with run-time values rather than textual expressions. As with type-inferred languages, dynamically-typed languages do not require the programmer to write explicit type annotations on expressions. Among other things, this may permit a single variable to refer to values of different types at different points in the program execution. However, type errors cannot be automatically detected until a piece of code is actually executed, potentially making debugging more difficult. Lisp, Smalltalk, Perl, Python, JavaScript, Ruby, Ring and Julia are all examples of dynamically-typed languages." +1952,"Weak typing allows a value of one type to be treated as another, for example treating a string as a number. This can occasionally be useful, but it can also allow some kinds of program faults to go undetected at compile time and even at run time." +1953,Strong typing prevents these program faults. An attempt to perform an operation on the wrong type of value raises an error. Strongly-typed languages are often termed type-safe or safe. +1954,"An alternative definition for ""weakly typed"" refers to languages, such as Perl, Ring and JavaScript, which permit a large number of implicit type conversions. In JavaScript, for example, the expression 2 * x implicitly converts x to a number, and this conversion succeeds even if x is null, undefined, an Array, or a string of letters. Such implicit conversions are often useful, but they can mask programming errors. 
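Python illustrates both points at once: it is dynamically typed (a variable may be rebound to values of different types) yet comparatively strongly typed (mixing types without an explicit conversion raises an error rather than silently converting). A small sketch:

```python
x = 2        # x currently refers to an int
x = 'two'    # ...and now to a str; legal under dynamic typing

try:
    total = 2 * x + 1    # 2 * 'two' repeats the string; adding 1 then fails
except TypeError:
    total = None         # the type error surfaces only at run time
print(total)             # None

total = 2 * int('21') + 1    # an explicit conversion states the intent
print(total)                 # 43
```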
Strong and static are now generally considered orthogonal concepts, but usage in the literature differs. Some use the term strongly typed to mean strongly, statically typed, or, even more confusingly, to mean simply statically typed. Thus C has been called both strongly typed and weakly, statically typed."
+1955,"It may seem odd to some professional programmers that C could be ""weakly, statically typed"". However, the use of the generic pointer, the void* pointer, does allow casting pointers to other pointers without needing to do an explicit cast. This is extremely similar to casting an array of bytes to any kind of datatype in C without using an explicit cast."
+1956,"Most programming languages have an associated core library, which is conventionally made available by all implementations of the language. Core libraries typically include definitions for commonly used algorithms, data structures, and mechanisms for input and output."
+1957,"The line between a language and its core library differs from language to language. In some cases, the language designers may treat the library as a separate entity from the language. However, a language's core library is often treated as part of the language by its users, and some language specifications even require that this library be made available in all implementations. Indeed, some languages are designed so that the meanings of certain syntactic constructs cannot even be described without referring to the core library. For example, in Java, a string literal is defined as an instance of the java.lang.String class; similarly, in Smalltalk, an anonymous function expression constructs an instance of the library's BlockContext class. 
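Python behaves the same way: a string literal denotes an instance of the built-in class str, and core built-ins such as len are assumed present in every conforming implementation. A quick check:

```python
# A string literal is an instance of the core library's str class, and
# built-in functions such as len operate on it.
s = 'hello'
print(type(s) is str)       # True
print(isinstance(s, str))   # True
print(len(s))               # 5
```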
Conversely, Scheme contains multiple coherent subsets that suffice to construct the rest of the language as library macros, and so the language designers do not even bother to say which portions of the language must be implemented as language constructs, and which must be implemented as parts of a library."
+1958,"In computing, multiple instructions can be executed simultaneously. Many programming languages support instruction-level and subprogram-level concurrency. By the twenty-first century, additional processing power on computers was increasingly coming from the use of additional processors, which requires programmers to design software that makes use of multiple processors simultaneously to achieve improved performance. Interpreted languages such as Python and Ruby do not support the concurrent use of multiple processors. Other programming languages do support managing data shared between different threads by controlling the order of execution of key instructions via the use of semaphores, controlling access to shared data via monitors, or enabling message passing between threads."
+1959,"Many programming languages include exception handlers, a section of code triggered by runtime errors that can deal with them in two main ways."
+1960,"Some programming languages support dedicating a block of code to run regardless of whether an exception occurs before the code is reached; this is called finalization."
+1961,"There is a tradeoff between increased ability to handle exceptions and reduced performance. For example, even though array index errors are common, C does not check them for performance reasons. Although programmers can write code to catch user-defined exceptions, this can clutter a program. Standard libraries in some languages, such as C, use their return values to indicate an exception. Some languages and their compilers have the option of turning on and off error handling capability, either temporarily or permanently."
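Both facilities can be sketched in Python, where except introduces an exception handler and finally marks a finalization block that runs whether or not an exception occurred:

```python
log = []

def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        return float('inf')    # the handler deals with the runtime error
    finally:
        log.append('cleanup')  # finalization: runs on every call

print(safe_divide(6, 3))   # 2.0
print(safe_divide(6, 0))   # inf
print(log)                 # ['cleanup', 'cleanup']
```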
+1962,"Programming languages share properties with natural languages related to their purpose as vehicles for communication, having a syntactic form separate from its semantics, and showing language families of related languages branching one from another. But as artificial constructs, they also differ in fundamental ways from languages that have evolved through usage. A significant difference is that a programming language can be fully described and studied in its entirety since it has a precise and finite definition. By contrast, natural languages have changing meanings given by their users in different communities. While constructed languages are also artificial languages designed from the ground up with a specific purpose, they lack the precise and complete semantic definition that a programming language has." +1963,"Many programming languages have been designed from scratch, altered to meet new needs, and combined with other languages. Many have eventually fallen into disuse. Although there have been attempts to design one ""universal"" programming language that serves all purposes, all of them have failed to be generally accepted as filling this role. The need for diverse programming languages arises from the diversity of contexts in which languages are used:" +1964,"One common trend in the development of programming languages has been to add more ability to solve problems using a higher level of abstraction. The earliest programming languages were tied very closely to the underlying hardware of the computer. As new programming languages have developed, features have been added that let programmers express ideas that are more remote from simple translation into underlying hardware instructions. Because programmers are less tied to the complexity of the computer, their programs can do more computing with less effort from the programmer. This lets them write more functionality per time unit." 
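The trend toward higher abstraction can be seen in miniature by computing the same sum twice in Python, once with an explicit, machine-like loop and once with a single built-in that states the intent directly:

```python
values = [3, 1, 4, 1, 5]

total = 0                  # lower-level style: spell out every step
for i in range(len(values)):
    total = total + values[i]
print(total)               # 14

print(sum(values))         # 14 -- the same computation, one expression
```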
+1965,"Natural-language programming has been proposed as a way to eliminate the need for a specialized language for programming. However, this goal remains distant and its benefits are open to debate. Edsger W. Dijkstra took the position that the use of a formal language is essential to prevent the introduction of meaningless constructs, and dismissed natural-language programming as ""foolish"". Alan Perlis was similarly dismissive of the idea. Hybrid approaches have been taken in Structured English and SQL." +1966,A language's designers and users must construct a number of artifacts that govern and enable the practice of programming. The most important of these artifacts are the language specification and implementation. +1967,"The specification of a programming language is an artifact that the language users and the implementors can use to agree upon whether a piece of source code is a valid program in that language, and if so what its behavior shall be." +1968,"A programming language specification can take several forms, including the following:" +1969,"An implementation of a programming language provides a way to write programs in that language and execute them on one or more configurations of hardware and software. There are, broadly, two approaches to programming language implementation: compilation and interpretation. It is generally possible to implement a language using either technique." +1970,"The output of a compiler may be executed by hardware or a program called an interpreter. In some implementations that make use of the interpreter approach, there is no distinct boundary between compiling and interpreting. For instance, some implementations of BASIC compile and then execute the source one line at a time." +1971,Programs that are executed directly on the hardware usually run much faster than those that are interpreted in software. +1972,"One technique for improving the performance of interpreted programs is just-in-time compilation. 
Here the virtual machine, just before execution, translates the blocks of bytecode that are about to be used into machine code, for direct execution on the hardware."
+1973,"Although most of the most commonly used programming languages have fully open specifications and implementations, many programming languages exist only as proprietary programming languages with the implementation available only from a single vendor, which may claim that such a proprietary language is their intellectual property. Proprietary programming languages are commonly domain-specific languages or internal scripting languages for a single product; some proprietary languages are used only internally within a vendor, while others are available to external users."
+1974,"Some programming languages exist on the border between proprietary and open; for example, Oracle Corporation asserts proprietary rights to some aspects of the Java programming language, and Microsoft's C# programming language, which has open implementations of most parts of the system, also has Common Language Runtime as a closed environment."
+1975,"Many proprietary languages are widely used, in spite of their proprietary nature; examples include MATLAB, VBScript, and Wolfram Language. Some languages may make the transition from closed to open; for example, Erlang was originally Ericsson's internal programming language."
+1976,"Open source programming languages are particularly helpful for open science applications, enhancing the capacity for replication and code sharing."
+1977,"Thousands of different programming languages have been created, mainly in the computing field. Individual software projects commonly use five programming languages or more."
+1978,"Programming languages differ from most other forms of human expression in that they require a greater degree of precision and completeness. 
When using a natural language to communicate with other people, human authors and speakers can be ambiguous and make small errors, and still expect their intent to be understood. However, figuratively speaking, computers ""do exactly what they are told to do"", and cannot ""understand"" what code the programmer intended to write. The combination of the language definition, a program, and the program's inputs must fully specify the external behavior that occurs when the program is executed, within the domain of control of that program. On the other hand, ideas about an algorithm can be communicated to humans without the precision required for execution by using pseudocode, which interleaves natural language with code written in a programming language." +1979,"A programming language provides a structured mechanism for defining pieces of data, and the operations or transformations that may be carried out automatically on that data. A programmer uses the abstractions present in the language to represent the concepts involved in a computation. These concepts are represented as a collection of the simplest elements available . Programming is the process by which programmers combine these primitives to compose new programs, or adapt existing ones to new uses or a changing environment." +1980,"Programs for a computer might be executed in a batch process without human interaction, or a user might type commands in an interactive session of an interpreter. In this case the ""commands"" are simply programs, whose execution is chained together. When a language can run its commands through an interpreter , without compiling, it is called a scripting language." +1981,"Determining which is the most widely used programming language is difficult since the definition of usage varies by context. One language may occupy the greater number of programmer hours, a different one has more lines of code, and a third may consume the most CPU time. 
Some languages are very popular for particular kinds of applications. For example, COBOL is still strong in the corporate data center, often on large mainframes; Fortran in scientific and engineering applications; Ada in aerospace, transportation, military, real-time, and embedded applications; and C in embedded applications and operating systems. Other languages are regularly used to write many different kinds of applications." +1982,"Various methods of measuring language popularity, each subject to a different bias over what is measured, have been proposed:" +1983,"Combining and averaging information from various internet sites, stackify.com reported the ten most popular programming languages : Java, C, C++, Python, C#, JavaScript, VB .NET, R, PHP, and MATLAB." +1984,"As of February 2024, the top five programming languages as measured by TIOBE index are Python, C, C++, Java and C#. TIOBE provide a list of top 100 programming languages according to popularity and update this list every month." +1985,"A dialect of a programming language or a data exchange language is a variation or extension of the language that does not change its intrinsic nature. With languages such as Scheme and Forth, standards may be considered insufficient, inadequate, or illegitimate by implementors, so often they will deviate from the standard, making a new dialect. In other cases, a dialect is created for use in a domain-specific language, often a subset. In the Lisp world, most languages that use basic S-expression syntax and Lisp-like semantics are considered Lisp dialects, although they vary wildly as do, say, Racket and Clojure. As it is common for one language to have several dialects, it can become quite difficult for an inexperienced programmer to find the right documentation. The BASIC language has many dialects." +1986,"Programming languages are often placed into four main categories: imperative, functional, logic, and object oriented." 
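Two of these categories can be contrasted directly in Python, which supports both styles; the task (squaring the even numbers) is arbitrary:

```python
numbers = [1, 2, 3, 4, 5, 6]

# imperative style: mutate state step by step
squares = []
for n in numbers:
    if n % 2 == 0:
        squares.append(n * n)
print(squares)   # [4, 16, 36]

# functional style: compose expressions, no mutation
functional = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, numbers)))
print(functional)   # [4, 16, 36]
```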
+1987,"Although markup languages are not programming languages, some have extensions that support limited programming. Additionally, there are special-purpose languages that are not easily compared to other programming languages." +1988,"In computer programming, assembly language , often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction , but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported." +1989,"The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, Coding for A.R.C.. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term ""assembler"" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean ""a program that assembles another program consisting of several sections into a single program"". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time." +1990,"Because assembly depends on the machine code instructions, each assembly language is specific to a particular computer architecture." +1991,"Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. 
Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, much more complicated tasks than assembling." +1992,"In the first decades of computing, it was commonplace for both systems programming and application programming to take place entirely in assembly language. While still irreplaceable for some purposes, the majority of programming is now conducted in higher-level interpreted and compiled languages. In ""No Silver Bullet"", Fred Brooks summarised the effects of the switch away from assembly language programming: ""Surely the most powerful stroke for software productivity, reliability, and simplicity has been the progressive use of high-level languages for programming. Most observers credit that development with at least a factor of five in productivity, and with concomitant gains in reliability, simplicity, and comprehensibility.""" +1993,"Today, it is typical to use small amounts of assembly language code within larger systems implemented in a higher-level language, for performance reasons or to interact directly with hardware in ways unsupported by the higher-level language. For instance, just under 2% of version 4.9 of the Linux kernel source code is written in assembly; more than 97% is written in C." +1994,"Assembly language uses a mnemonic to represent, e.g., each low-level machine instruction or opcode, each directive, typically also each architectural register, flag, etc. Some of the mnemonics may be built in and some user defined. Many operations require one or more operands in order to form a complete instruction. 
Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging." +1995,"Some are column oriented, with specific fields in specific columns; this was very common for machines using punched cards in the 1950s and early 1960s. Some assemblers have free-form syntax, with fields separated by delimiters, e.g., punctuation, white space. Some assemblers are hybrid, with, e.g., labels, in a specific column and other fields separated by delimiters; this became more common than column oriented syntax in the 1960s." +1996,"An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of called subroutines." +1997,"Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. 
In a feature called jump-sizing, most of them are able to perform jump-instruction replacements in any number of passes, on request. Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as possible."
+1998,"Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples."
+1999,"There may be several assemblers with different syntax for a particular CPU or instruction set architecture. For instance, an instruction to add memory data to a register in an x86-family processor might be add eax,[ebx], in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations."
+2000,There are two types of assemblers based on how many passes through the source are needed to produce the object file.
+2001,"In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more ""no-operation"" instructions in a later pass or the errata. 
In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target." +2002,"The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory , rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories , had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process faster." +2003,"Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2." +2004,More sophisticated high-level assemblers provide language abstractions such as: +2005,See Language design below for more details. +2006,"A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements , comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be ""implied"", which means the data upon which the instruction operates is implicitly defined by the instruction itself—such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed." +2007,"For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. 
The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001."
+2008,This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.
+2009,"Here, B0 means 'Move a copy of the following value into AL', and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember."
+2010,"In some assembly languages the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate addresses. Other assemblers may use separate opcode mnemonics such as L for ""move memory to register"", ST for ""move register to memory"", LR for ""move register to register"", MVI for ""move immediate operand to memory"", etc."
+2011,"If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data, depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The hexadecimal form of this instruction is:"
+2012,"The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded to specify that both operands are registers, the source is AH, and the destination is AL."
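The operand-driven selection described above can be sketched as a toy Python function (not a real assembler); it distinguishes only the two MOV forms discussed here, and the operand classification is deliberately naive:

```python
# Toy sketch: pick an opcode for MOV by examining the source operand.
REGISTERS = {'AL', 'AH', 'CL', 'DL'}

def mov_opcode(dest, src):
    if src in REGISTERS:
        return 0x88   # register-to-register form, e.g. MOV AL, AH -> 88h E0h
    return 0xB0       # immediate-to-AL form, e.g. MOV AL, 61h -> B0h 61h

print(hex(mov_opcode('AL', 'AH')))    # 0x88
print(hex(mov_opcode('AL', '61h')))   # 0xb0
```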
+2013,"In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant , so only the 88 instruction can be applicable." +2014,"Assembly languages are always designed so that this sort of lack of ambiguity is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH. " +2015,"Returning to the original example, while the x86 opcode 10110000 copies an 8-bit value into the AL register, 10110001 moves it into CL and 10110010 does so into DL. Assembly language examples for these follow." +2016,The syntax of MOV can also be more complex as the following examples show. +2017,"In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which." +2018,"Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions which expand into several machine language instructions to provide commonly needed functionality. 
For example, for a machine that lacks a ""branch if greater or equal"" instruction, an assembler may provide a pseudoinstruction that expands to the machine's ""set if less than"" and ""branch if zero "". Most full-featured assemblers also provide a rich macro language which is used by vendors and programmers to generate more complex code and data sequences. Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments." +2019,"Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences." +2020,"Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation." +2021,"Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics , some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. 
The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic, there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products." +2022,"There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation. A typical assembly language consists of three types of instruction statements that are used to define program operations:" +2023,"Instructions in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction, and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate, registers specified in the instruction or implied, or the addresses of data located elsewhere in storage.
This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP for BC with a mask of 0." +2024,"Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode to encode the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions." +2025,"Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes." +2026,Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn. +2027,"There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops."
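The operand-driven opcode selection described earlier, together with a nop pseudo-opcode that encodes xchg ax,ax, can be sketched in a toy assembler. The opcode values used (B0+r for an immediate move into an 8-bit register, 88 /r for a register-to-register move, 90 for xchg ax,ax) are real x86 encodings, but the assembler itself is only a minimal illustration, not a real tool:

```python
# Toy x86-subset assembler: the mnemonic MOV maps to different opcodes
# depending on whether the source operand is a register or a hex constant.
REG8 = {"AL": 0, "CL": 1, "DL": 2, "BL": 3, "AH": 4, "CH": 5, "DH": 6, "BH": 7}

def assemble(line):
    mnem, _, rest = line.partition(" ")
    ops = [o.strip().upper() for o in rest.split(",")] if rest else []
    if mnem.upper() == "NOP":                     # pseudo-opcode: nop == xchg ax,ax (90h)
        return bytes([0x90])
    if mnem.upper() == "MOV" and len(ops) == 2:
        dst, src = ops
        if dst in REG8 and src.endswith("H") and src[0].isdigit():
            # hex constants must start with a digit, so they can never
            # be confused with a register name such as AH
            return bytes([0xB0 + REG8[dst], int(src[:-1], 16)])  # B0+r, imm8
        if dst in REG8 and src in REG8:
            # register-to-register form: 88 /r with mod=11 in the ModRM byte
            return bytes([0x88, 0xC0 | (REG8[src] << 3) | REG8[dst]])
    raise ValueError("unsupported: " + line)

print(assemble("MOV AL, 61h").hex())   # b061
print(assemble("MOV AH, AL").hex())    # 88c4
print(assemble("NOP").hex())           # 90
```

The same MOV mnemonic yields opcode B0 in the first call and 88 in the second, resolved purely by examining the operands, as the text describes.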
+2028,"Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler ""directing it to perform operations other than assembling instructions"". Directives affect how the assembler operates and ""may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters"". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data." +2029,"The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values." +2030,"Symbolic assemblers let programmers associate arbitrary names with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols." +2031,"Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses."
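The label handling described above is commonly implemented as a two-pass scheme: the first pass binds each name to an address, the second resolves references to those names. A minimal sketch follows; the one-instruction-equals-one-address-unit assumption and the mnemonics are illustrative, not any real ISA:

```python
# Two-pass symbol resolution: pass 1 assigns each label an address,
# pass 2 substitutes those addresses into operands that name a label.
def two_pass(lines, instr_size=1):
    symbols, addr = {}, 0
    for line in lines:                      # pass 1: build the symbol table
        if line.endswith(":"):
            symbols[line[:-1]] = addr       # bind label name to current address
        else:
            addr += instr_size
    code = []
    for line in lines:                      # pass 2: emit, resolving names
        if not line.endswith(":"):
            op, _, arg = line.partition(" ")
            code.append((op, symbols.get(arg, arg)))
    return symbols, code

symbols, code = two_pass(["loop:", "dec cx", "jnz loop", "ret"])
print(symbols)   # {'loop': 0}
print(code)      # [('dec', 'cx'), ('jnz', 0), ('ret', '')]
```

The programmer writes jnz loop; the assembler, not the programmer, computes that loop means address 0, which is the self-documenting property the text describes.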
+2032,"Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The ""raw"" assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made." +2033,"Many assemblers support predefined macros, and others support programmer-defined macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined, its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file. Macros in this sense date to IBM autocoders of the 1950s." +2034,"Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code, e.g., AIF and COPY in HLASM." +2035,"In assembly language, the term ""macro"" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive typically is used to create short single line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy ""programs"" by themselves, executed by interpretation by the assembler during assembly."
+2036,"Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher-level languages. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters and other similar features." +2037,"Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or ""unrolled"" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a ""sort"" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4, which was written in the SNOBOL Implementation Language, an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time."
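An unrolled loop generated from a macro argument, as described above, might expand like this (Python standing in for a macro assembler; the emitted x86-style mnemonics are only illustrative):

```python
# A parameterized "macro" that unrolls a byte-copy loop at expansion time,
# the way a macro assembler emits repeated instructions from one invocation.
def copy_unrolled(n):
    lines = []
    for i in range(n):                      # expansion happens at assembly time,
        lines.append(f"mov al, [si+{i}]")   # not at run time: no loop counter or
        lines.append(f"mov [di+{i}], al")   # branch appears in the generated code
    return lines

print("\n".join(copy_unrolled(3)))
```

One macro call with argument 3 yields six straight-line instructions; no run-time test survives into the output, which is the point of the sort-key example in the text.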
+2038,"Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine and with IBM's ""real time transaction processing"" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems and credit card systems today." +2039,"It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly-time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements." +2040,"This is because, as was realized in the 1960s, the concept of ""macro processing"" is independent of the concept of ""assembly"", the former being, in modern terms, closer to word processing or text processing than to generating object code. The concept of macro processing appeared, and appears, in the C programming language, which supports ""preprocessor instructions"" to set variables, and make conditional tests on their values. Unlike certain previous macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or ""go to"", the latter allowing programs to loop."
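The point above, that macro processing is essentially text processing, can be sketched with a toy by-name expander. The macro foo with body load a*b is a hypothetical example, chosen to show how an expression argument is pasted in verbatim:

```python
import re

# Toy by-name macro expander: arguments are substituted into the body as
# raw text, exactly as an assembler macro processor (or the C preprocessor)
# would do.
macros = {"foo": (["a"], "load a*b")}   # foo multiplies its argument by b

def expand(text):
    def repl(match):
        name, raw_args = match.groups()
        params, body = macros[name]
        for param, arg in zip(params, [s.strip() for s in raw_args.split(",")]):
            # \b keeps the 'a' inside 'load' from being replaced
            body = re.sub(rf"\b{re.escape(param)}\b", arg, body)
        return body
    return re.sub(r"(\w+)\(([^()]*)\)", repl, text)

print(expand("foo(x)"))     # load x*b
print(expand("foo(x-c)"))   # load x-c*b  -- parses as x-(c*b), not (x-c)*b
```

The second call shows why purely textual substitution is dangerous with expression arguments: the expansion is correct as text but wrong as arithmetic, the classic pitfall that parenthesizing formal or actual parameters avoids.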
+2041,"Despite the power of macro processing, it fell into disuse in many high-level languages while remaining a perennial feature of assemblers." +2042,"Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting from this was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro:" +2043,"the intention was that the caller would provide the name of a variable, and the ""global"" variable or constant b would be used to multiply ""a"". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters." +2044,"Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills, and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s. IBM's High Level Assembler Toolkit includes such a macro package." +2045,"A curious design was A-Natural, a ""stream-oriented"" assembler for 8080/Z80 processors from Whitesmiths Ltd. The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order.
Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-Natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans." +2046,"There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages." +2047,"Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package:" +2048,"Assembly languages were not available at the time when the stored-program computer was introduced. Kathleen Booth ""is credited with inventing assembly language"" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London, following consultation by Andrew Booth with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study." +2049,"In late 1948, the Electronic Delay Storage Automatic Calculator had an assembler integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first ""assembler"". Reports on the EDSAC introduced the term ""assembly"" for the process of combining fields into an instruction word. SOAP was an assembly language for the IBM 650 computer written by Stan Poley in 1955." +2050,"Assembly languages eliminate much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses. They were once widely used for all sorts of programming.
However, by the 1980s, their use had largely been supplanted by higher-level languages, in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems." +2051,"Numerous programs have been written entirely in assembly language. The Burroughs MCP was the first operating system not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language, an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL, FORTRAN and some PL/I eventually displaced much of this work, although a number of large organizations retained assembly-language application infrastructures well into the 1990s." +2052,"Assembly language has long been the primary development language for 8-bit home computers such as the Atari 8-bit family, Apple II, MSX, ZX Spectrum, and Commodore 64. Interpreted BASIC dialects on these systems offer insufficient execution speed and insufficient facilities to take full advantage of the available hardware. These systems have severe resource constraints, idiosyncratic memory and display architectures, and provide limited system services. There are also few high-level language compilers suitable for microcomputer use. Similarly, assembly language is the default choice for 8-bit consoles such as the Atari 2600 and Nintendo Entertainment System." +2053,"Key software for IBM PC compatibles was written in assembly language, such as MS-DOS, Turbo Pascal, and the Lotus 1-2-3 spreadsheet.
As computer speed grew exponentially, assembly language became a tool for speeding up parts of programs, such as the rendering of Doom, rather than a dominant development language. In the 1990s, assembly language was used to get performance out of systems such as the Sega Saturn and as the primary language for arcade hardware based on the TMS34010 integrated CPU/GPU such as Mortal Kombat and NBA Jam." +2054,There has been debate over the usefulness and performance of assembly language relative to high-level languages. +2055,"Although assembly language has specific niche uses where it is important, there are other tools for optimization." +2056,"As of July 2017, the TIOBE index of programming language popularity ranks assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite the counter-examples that can be found. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers, as well as for assembly programmers. Moreover, increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging. This has made raw code execution speed a non-issue for many programmers." +2057,There are some situations in which developers might choose to use assembly language: +2058,"Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important.
Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets. Therefore, studying a single assembly language is sufficient to learn the basic concepts, to recognize situations where the use of assembly language might be appropriate, and to see how efficient executable code can be created from high-level languages." +2059,"A mnemonic makes use of elaborative encoding, retrieval cues and imagery as specific tools to encode information in a way that allows for efficient storage and retrieval. It aids original information in becoming associated with something more accessible or meaningful—which in turn provides better retention of the information." +2060,"Commonly encountered mnemonics are often used for lists and in auditory form such as short poems, acronyms, initialisms or memorable phrases. They can also be used for other types of information and in visual or kinesthetic forms. Their use is based on the observation that the human mind more easily remembers spatial, personal, surprising, physical, sexual, humorous and otherwise ""relatable"" information rather than more abstract or impersonal forms of information." +2061,"Ancient Greeks and Romans distinguished between two types of memory: the ""natural"" memory and the ""artificial"" memory. The former is inborn and is the one that everyone uses instinctively. The latter in contrast has to be trained and developed through the learning and practice of a variety of mnemonic techniques." +2062,Mnemonic systems are techniques or strategies consciously used to improve memory.
They help use information already stored in long-term memory to make memorization an easier task." +2063,"Mnemonic is derived from the Ancient Greek word μνημονικός, which means 'of memory' or 'relating to memory'. It is related to Mnemosyne, the name of the goddess of memory in Greek mythology. Both of these words are derived from μνήμη, 'remembrance, memory'. Mnemonics in antiquity were most often considered in the context of what is today known as the art of memory." +2064,"The general name of mnemonics, or memoria technica, was the name applied to devices for aiding the memory, to enable the mind to reproduce a relatively unfamiliar idea, and especially a series of dissociated ideas, by connecting it, or them, in some artificial whole, the parts of which are mutually suggestive. Mnemonic devices were much cultivated by Greek sophists and philosophers and are frequently referred to by Plato and Aristotle." +2065,Philosopher Charmadas was famous for his outstanding memory and for his ability to memorize whole books and then recite them. +2066,"In later times, the poet Simonides was credited for development of these techniques, perhaps for no reason other than that the power of his memory was famous. Cicero, who attaches considerable importance to the art, but more to the principle of order as the best help to memory, speaks of Carneades of Athens and Metrodorus of Scepsis as distinguished examples of people who used well-ordered images to aid the memory. The Romans valued such helps in order to support facility in public speaking." +2067,"The Greek and the Roman system of mnemonics was founded on the use of mental places and signs or pictures, known as ""topical"" mnemonics. The most usual method was to choose a large house, of which the apartments, walls, windows, statues, furniture, etc., were each associated with certain names, phrases, events or ideas, by means of symbolic pictures.
To recall these, an individual had only to search over the apartments of the house until discovering the places where images had been placed by the imagination." +2068,"In accordance with this system, if it were desired to fix a historic date in memory, it was localised in an imaginary town divided into a certain number of districts, each with ten houses, each house with ten rooms, and each room with a hundred quadrates or memory-places, partly on the floor, partly on the four walls, partly on the ceiling. Therefore, if it were desired to fix in the memory the date of the invention of printing, an imaginary book, or some other symbol of printing, would be placed in the thirty-sixth quadrate or memory-place of the fourth room of the first house of the historic district of the town. Except that the rules of mnemonics are referred to by Martianus Capella, nothing further is known regarding the practice until the 13th century." +2069,"Among the voluminous writings of Roger Bacon is a tractate De arte memorativa. Ramon Llull devoted special attention to mnemonics in connection with his ars generalis. The first important modification of the method of the Romans was that invented by the German poet Conrad Celtes, who, in his Epitoma in utramque Ciceronis rhetoricam cum arte memorativa nova, used letters of the alphabet for associations, rather than places. About the end of the 15th century, Peter of Ravenna provoked such astonishment in Italy by his mnemonic feats that he was believed by many to be a necromancer. His Phoenix artis memoriae went through as many as nine editions, the seventh being published at Cologne in 1608." +2070,"About the end of the 16th century, Lambert Schenkel, who taught mnemonics in France, Italy and Germany, similarly surprised people with his memory. He was denounced as a sorcerer by the University of Louvain, but in 1593 he published his tractate De memoria at Douai with the sanction of that celebrated theological faculty.
The most complete account of his system is given in two works by his pupil Martin Sommer, published in Venice in 1619. In 1618 John Willis published Mnemonica; sive ars reminiscendi, containing a clear statement of the principles of topical or local mnemonics. Giordano Bruno included a memoria technica in his treatise De umbris idearum, as part of his study of the ars generalis of Llull. Other writers of this period are the Florentine Publicius; Johannes Romberch; Hieronimo Morafiot, Ars memoriae; and B. Porta, Ars reminiscendi." +2071,"In 1648 Stanislaus Mink von Wennsshein revealed what he called the ""most fertile secret"" in mnemonics—using consonants for figures, thus expressing numbers by words, in order to create associations more readily remembered. The philosopher Gottfried Wilhelm Leibniz adopted an alphabet very similar to that of Wennsshein for his scheme of a form of writing common to all languages." +2072,"Wennsshein's method was adopted with slight changes afterward by the majority of subsequent ""original"" systems. It was modified and supplemented by Richard Grey, a priest who published a Memoria technica in 1730. The principal part of Grey's method is briefly this:" +2073,"Wennsshein's method is comparable to a Hebrew system by which letters also stand for numerals, and therefore words for dates." +2074,"To assist in retaining the mnemonical words in the memory, they were formed into memorial lines. Such strange words, in difficult hexameter scansion, are by no means easy to memorise. The vowel or consonant, which Grey connected with a particular figure, was chosen arbitrarily." +2075,"A later modification was made in 1806 by Gregor von Feinaigle, a German monk from Salem near Constance. While living and working in Paris, he expounded a system of mnemonics in which the numerical figures are represented by letters chosen due to some similarity to the figure or an accidental connection with it.
This alphabet was supplemented by a complicated system of localities and signs. Feinaigle, who apparently did not publish any written documentation of this method, travelled to England in 1811. The following year one of his pupils published The New Art of Memory, giving Feinaigle's system. In addition, it contains valuable historical material about previous systems." +2076,"Other mnemonists later published simplified forms, as the more complicated mnemonics were generally abandoned. Methods founded chiefly on the so-called laws of association were taught with some success in Germany." +2077,"A wide range of mnemonics are used for several purposes. The most commonly used mnemonics are those for lists, numerical sequences, foreign-language acquisition, and medical treatment for patients with memory deficits." +2078,A common mnemonic technique for remembering a list is to create an easily remembered acronym. Another is to create a memorable phrase with words which share the same first letter as the list members. Mnemonic techniques can be applied to most memorization of novel materials. +2079,Some common examples for first-letter mnemonics: +2080,"Mnemonic phrases or poems can be used to encode numeric sequences by various methods; one common one is to create a new phrase in which the number of letters in each word represents the corresponding digit of pi. For example, the first 15 digits of the mathematical constant pi can be encoded as ""Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics""; ""Now"", having 3 letters, represents the first number, 3. Piphilology is the practice dedicated to creating mnemonics for pi." +2081,"Another is used for ""calculating"" the multiples of 9 up to 9 × 10 using one's fingers. Begin by holding out both hands with all fingers stretched out. Now count left to right the number of fingers that indicates the multiple.
For example, to figure 9 × 4, count four fingers from the left, ending at your left-hand index finger. Bend this finger down and count the remaining fingers. Fingers to the left of the bent finger represent tens, fingers to the right are ones. There are three fingers to the left and six to the right, which indicates 9 × 4 = 36. This works for 9 × 1 up through 9 × 10." +2082,"For remembering the rules in adding and multiplying two signed numbers, Balbuena and Buayan made the letter strategies LAUS and LPUN, respectively." +2083,"PUIMURI is a Finnish mnemonic regarding electricity: the first and last three letters can be arranged into the equations P = U × I and U = R × I." +2084,"Mnemonics may be helpful in learning foreign languages, for example by transposing difficult foreign words with words in a language the learner knows already, also called ""cognates"", which are very common in Romance languages and other Germanic languages. A useful such technique is to find linkwords, words that have the same pronunciation in a known language as the target word, and associate them visually or auditorially with the target word." +2085,"For example, in trying to assist the learner to remember ohel, the Hebrew word for tent, the linguist Ghil'ad Zuckermann proposes the memorable sentence ""Oh hell, there's a raccoon in my tent"". The memorable sentence ""There's a fork in Ma's leg"" helps the learner remember that the Hebrew word for fork is mazleg. Similarly, to remember the Hebrew word bayit, meaning house, one can use the sentence ""that's a lovely house, I'd like to buy it."" The linguist Michel Thomas taught students to remember that estar is the Spanish word for to be by using the phrase ""to be a star""." +2086,"Another Spanish example is using the mnemonic ""Vin Diesel Has Ten Weapons"" to teach irregular command verbs in the you form.
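Both numeric mnemonics described earlier, the letter-count encoding of pi's digits and the 9× finger method, can be checked with a short script (illustrative only; the finger rule is expressed as arithmetic, with the bent finger's left neighbours as tens and right neighbours as ones):

```python
# Digits of pi recovered from word lengths in the classic mnemonic phrase.
phrase = ("Now I need a drink, alcoholic of course, after the heavy "
          "lectures involving quantum mechanics")
digits = [len(word.strip(",.")) for word in phrase.split()]
print(digits)  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]

# The 9 x n finger trick: bend finger n of ten; the n-1 fingers to its
# left are tens and the 10-n fingers to its right are ones.
def nine_times(n):
    return (n - 1) * 10 + (10 - n)

assert all(nine_times(n) == 9 * n for n in range(1, 11))
print(nine_times(4))  # 36
```

The finger rule works because (n-1)·10 + (10-n) simplifies algebraically to 9n.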
Spanish verb forms and tenses are regularly seen as the hardest part of learning the language. With a high number of verb tenses, and many verb forms that are not found in English, Spanish verbs can be hard to remember and then conjugate. The use of mnemonics has been proven to help students better learn foreign languages, and this holds true for Spanish verbs. A particularly hard verb form to remember is command verbs. Command verbs in Spanish are conjugated differently depending on who the command is being given to. The phrase, when pronounced with a Spanish accent, is used to remember ""Ven Di Sal Haz Ten Ve Pon Sé"", all of the irregular Spanish command verbs in the you form. This mnemonic helps students attempting to memorize different verb tenses. 
+Another technique is for learners of gendered languages to associate their mental images of words with a colour that matches the gender in the target language. An example here is to remember the Spanish word for ""foot"", pie, with the image of a foot stepping on a pie, which then spills blue filling." +2087,"For French verbs which use être as an auxiliary verb for compound tenses: DR and MRS VANDERTRAMPP: descendre, rester, monter, revenir, sortir, venir, arriver, naître, devenir, entrer, rentrer, tomber, retourner, aller, mourir, partir, passer." +2088,"Masculine countries in French: ""Neither can a breeze make a sane Japanese chilly in the USA."" Netherlands, Canada, Brazil, Mexico, Senegal, Japan, Chile, & USA." +2089,"Mnemonics can be used in aiding patients with memory deficits that could be caused by head injuries, strokes, epilepsy, multiple sclerosis and other neurological conditions." +2090,"In a study conducted by Doornhein and De Haan, the patients were treated with six different memory strategies including the mnemonics technique.
The results concluded that there were significant improvements on the immediate and delayed subtests of the RBMT, delayed recall on the Appointments test, and relatives' ratings on the MAC for the patients who received mnemonics treatment. However, in the case of stroke patients, the results did not reach statistical significance."
2091,"Academic study of the use of mnemonics has shown their effectiveness. In one such experiment, subjects of different ages who applied mnemonic techniques to learn novel vocabulary outperformed control groups that applied contextual learning and free-learning styles."
2092,"Mnemonics were seen to be more effective for groups of people who struggled with or had weak long-term memory, like the elderly. Five years after a mnemonic training study, a research team followed up with 112 community-dwelling older adults, 60 years of age and over. Delayed recall of a word list was assessed prior to, and immediately following mnemonic training, and at the 5-year follow-up. Overall, there was no significant difference between word recall prior to training and that exhibited at follow-up. However, pre-training performance, gains in performance immediately post-training, and use of the mnemonic predicted performance at follow-up. Individuals who self-reported using the mnemonic exhibited the highest performance overall, with scores significantly higher than at pre-training. The findings suggest that mnemonic training has long-term benefits for some older adults, particularly those who continue to employ the mnemonic."
2093,"This contrasts with surveys of medical students, in which only approximately 20% reported frequently using mnemonic acronyms."
2094,"In humans, the process of aging particularly affects the medial temporal lobe and hippocampus, in which the episodic memory is synthesized. The episodic memory stores information about items, objects, or features with spatiotemporal contexts. 
Since mnemonics aid better in remembering spatial or physical information rather than more abstract forms, their effect may vary according to a subject's age and how well the subject's medial temporal lobe and hippocampus function."
2095,"This could be further explained by one recent study which indicates a general deficit in the memory for spatial locations in aged adults compared to young adults. At first, the difference in target recognition was not significant."
2096,"The researchers then divided the aged adults into two groups, aged unimpaired and aged impaired, according to neuropsychological testing. With the aged groups split, there was an apparent deficit in target recognition in aged impaired adults compared to both young adults and aged unimpaired adults. This further supports the varying effectiveness of mnemonics in different age groups."
2097,"Moreover, earlier research based on the same notion presented results similar to those of Reagh et al. in a verbal mnemonics discrimination task."
2098,"Studies have suggested that the short-term memory of adult humans can hold only a limited number of items; grouping items into larger chunks such as in a mnemonic might be part of what permits the retention of a larger total amount of information in short-term memory, which in turn can aid in the creation of long-term memories."
2099,The dictionary definition of mnemonic at Wiktionary
2100,"The Harvard Mark I, or IBM Automatic Sequence Controlled Calculator, was one of the earliest general-purpose electromechanical computers used in the war effort during the last part of World War II."
2101,"One of the first programs to run on the Mark I was initiated on 29 March 1944 by John von Neumann. At that time, von Neumann was working on the Manhattan Project, and needed to determine whether implosion was a viable choice to detonate the atomic bomb that would be used a year later. 
The Mark I also computed and printed mathematical tables, which had been the initial goal of British inventor Charles Babbage for his ""analytical engine"" in 1837." +2102,"According to Edmund Berkeley, the operators of the Mark I often called the machine “Bessy, the Bessel engine,” after Bessel functions." +2103,"The Mark I was disassembled in 1959; part of it was given to IBM, part went to the Smithsonian Institution, and part entered the Harvard Collection of Historical Scientific Instruments. For decades, Harvard's portion was on display in the lobby of the Aiken Computation Lab. About 1997, it was moved to the Harvard Science Center. In 2021, it was moved again, to the lobby of Harvard's new Science and Engineering Complex in Allston, Massachusetts." +2104,"The original concept was presented to IBM by Howard Aiken in November 1937. After a feasibility study by IBM engineers, the company chairman Thomas Watson Sr. personally approved the project and its funding in February 1939." +2105,"Howard Aiken had started to look for a company to design and build his calculator in early 1937. After two rejections, he was shown a demonstration set that Charles Babbage’s son had given to Harvard University 70 years earlier. This led him to study Babbage and to add references to the Analytical Engine to his proposal; the resulting machine ""brought Babbage’s principles of the Analytical Engine almost to full realization, while adding important new features.""" +2106,"The ASCC was developed and built by IBM at their Endicott plant and shipped to Harvard in February 1944. It began computations for the US Navy Bureau of Ships in May and was officially presented to the university on August 7, 1944." +2107,"Although not the first working computer, the machine was the first to automate the execution of complex calculations, making it a significant step forward for computing." +2108,"The ASCC was built from switches, relays, rotating shafts, and clutches. 
It used 765,000 electromechanical components and hundreds of miles of wire, comprising a volume of 816 cubic feet – 51 feet in length, 8 feet in height, and 2 feet deep. It weighed about 9,445 pounds. The basic calculating units had to be synchronized and powered mechanically, so they were operated by a 50-foot drive shaft coupled to a 5-horsepower electric motor, which served as the main power source and system clock. From the IBM Archives:"
2109,The enclosure for the Mark I was designed by futuristic American industrial designer Norman Bel Geddes at IBM's expense. Aiken was annoyed that the cost was not used to build additional computer equipment.
2110,"The Mark I had 60 sets of 24 switches for manual data entry and could store 72 numbers, each 23 decimal digits long. It could do 3 additions or subtractions in a second. A multiplication took 6 seconds, a division took 15.3 seconds, and a logarithm or a trigonometric function took over one minute."
2111,"The Mark I read its instructions from a 24-channel punched paper tape. It executed the current instruction and then read the next one. A separate tape could contain numbers for input, but the tape formats were not interchangeable. Instructions could not be executed from the storage registers. Because instructions were not stored in working memory, it is widely claimed that the Harvard Mark I was the origin of the Harvard architecture. However, this is disputed in The Myth of the Harvard Architecture, published in the IEEE Annals of the History of Computing, which shows that the term 'Harvard architecture' did not come into use until the 1970s, was only retrospectively applied to the Harvard machines, and could only be applied to the Mark III and IV, not to the Mark I or II."
2112,"The main sequence mechanism was unidirectional. This meant that complex programs had to be physically lengthy. 
A program loop was accomplished by loop unrolling or by joining the end of the paper tape containing the program back to the beginning of the tape. At first, conditional branching in the Mark I was performed manually. Later modifications in 1946 introduced automatic program branching. The first programmers of the Mark I were computing pioneers Richard Milton Bloch, Robert Campbell, and Grace Hopper. There was also a small technical team whose assignment was to actually operate the machine; some had been IBM employees before being required to join the Navy to work on the machine. This technical team was not informed of the overall purpose of their work while at Harvard."
2113,"The 24 channels of the input tape were divided into three fields of eight channels. Each storage location, each set of switches, and the registers associated with the input, output, and arithmetic units were assigned a unique identifying index number. These numbers were represented in binary on the control tape. The first field was the binary index of the result of the operation, the second was the source datum for the operation, and the third field was a code for the operation to be performed."
2114,"In 1928 L.J. Comrie was the first to turn IBM ""punched-card equipment to scientific use: computation of astronomical tables by the method of finite differences, as envisioned by Babbage 100 years earlier for his Difference Engine"". Very soon after, IBM started to modify its tabulators to facilitate this kind of computation. One of these tabulators, built in 1931, was The Columbia Difference Tabulator."
2115,"John von Neumann had a team at Los Alamos that used ""modified IBM punched-card machines"" to determine the effects of the implosion. In March 1944, he proposed to run certain problems regarding implosion on the Mark I, and in 1944 he arrived with two mathematicians to write a simulation program to study the implosion of the first atomic bomb." 
+2116,"""Von Neumann joined the Manhattan Project in 1943, working on the immense number of calculations needed to build the atomic bomb. He showed that the implosion design, which would later be used in the Trinity and Fat Man bombs, was likely faster and more efficient than the gun design.""" +2117,"Aiken published a press release announcing the Mark I listing himself as the sole “inventor”. James W. Bryce was the only IBM person mentioned, even though several IBM engineers including Clair Lake and Frank Hamilton had helped to build various elements. IBM chairman Thomas J. Watson was enraged, and only reluctantly attended the dedication ceremony on August 7, 1944. Aiken, in turn, decided to build further machines without IBM's help, and the ASCC came to be generally known as the ""Harvard Mark I"". IBM went on to build its Selective Sequence Electronic Calculator to both test new technology and provide more publicity for the company's efforts." +2118,"The Mark I was followed by the Harvard Mark II , Mark III/ADEC , and Harvard Mark IV – all the work of Aiken. The Mark II was an improvement over the Mark I, although it still was based on electromechanical relays. The Mark III used mostly electronic components—vacuum tubes and crystal diodes—but also included mechanical components: rotating magnetic drums for storage, plus relays for transferring data between drums. The Mark IV was all-electronic, replacing the remaining mechanical components with magnetic core memory. The Mark II and Mark III were delivered to the US Navy base at Dahlgren, Virginia. The Mark IV was built for the US Air Force, but it stayed at Harvard." +2119,"The Mark I was disassembled in 1959, and portions of it went on display in the Science Center, as part of the Harvard Collection of Historical Scientific Instruments. It was relocated to the new Science and Engineering Complex in Allston in July 2021. 
Other sections of the original machine had much earlier been transferred to IBM and the Smithsonian Institution." +2120,"In computing, an opcode is the portion of a machine language instruction that specifies the operation to be performed. Beside the opcode itself, most instructions also specify the data they will process, in the form of operands. In addition to opcodes used in the instruction set architectures of various CPUs, which are hardware devices, they can also be used in abstract computing machines as part of their byte code specifications." +2121,"Specifications and format of the opcodes are laid out in the instruction set architecture of the processor in question, which may be a general CPU or a more specialized processing unit. Opcodes for a given instruction set can be described through the use of an opcode table detailing all possible opcodes. Apart from the opcode itself, an instruction normally also has one or more specifiers for operands on which the operation should act, although some operations may have implicit operands, or none at all. There are instruction sets with nearly uniform fields for opcode and operand specifiers, as well as others with a more complicated, variable-length structure. Instruction sets can be extended through the use of opcode prefixes which add a subset of new instructions made up of existing opcodes following reserved byte sequences." +2122,"Depending on architecture, the operands may be register values, values in the stack, other memory values, I/O ports , etc., specified and accessed using more or less complex addressing modes. The types of operations include arithmetic, data copying, logical operations, and program control, as well as special instructions ." +2123,"Assembly language, or just assembly, is a low-level programming language, which uses mnemonic instructions and operands to represent machine code. This enhances the readability while still giving precise control over the machine instructions. 
Most programming is currently done using high-level programming languages, which are typically easier for humans to understand and write. These languages need to be compiled by a system-specific compiler, or run through other compiled programs." +2124,"Opcodes can also be found in so-called byte codes and other representations intended for a software interpreter rather than a hardware device. These software-based instruction sets often employ slightly higher-level data types and operations than most hardware counterparts, but are nevertheless constructed along similar lines. Examples include the byte code found in Java class files which are then interpreted by the Java Virtual Machine , the byte code used in GNU Emacs for compiled Lisp code, .NET Common Intermediate Language , and many others." +2125,"In computer programming, machine code is computer code consisting of machine language instructions, which are used to control a computer's central processing unit . Although decimal computers were once common, the contemporary marketplace is dominated by binary computers; for those computers, machine code is ""the binary representation of a computer program which is actually read and interpreted by the computer. A program in machine code consists of a sequence of machine instructions .""" +2126,"Each instruction causes the CPU to perform a very specific task, such as a load, a store, a jump, or an arithmetic logic unit operation on one or more units of data in the CPU's registers or memory." +2127,"Early CPUs had specific machine code that might break backward compatibility with each new CPU released. The notion of an instruction set architecture  defines and specifies the behavior and encoding in memory of the instruction set of the system, without specifying its exact implementation. 
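The opcode-dispatch loop at the heart of such a software interpreter can be sketched with a toy accumulator machine. The opcode numbering and instruction set below are invented for illustration, not taken from any real byte code:

```python
# Toy software instruction set: each instruction is an (opcode, operand) pair.
# The opcode values are made up; real byte codes assign numeric codes the
# same way, documented in an opcode table.
LOAD, ADD, STORE, HALT = 0, 1, 2, 3

def run(program, memory):
    acc = 0   # implicit accumulator operand
    pc = 0    # index of the next instruction
    while True:
        opcode, operand = program[pc]
        pc += 1
        if opcode == LOAD:      # acc <- memory[operand]
            acc = memory[operand]
        elif opcode == ADD:     # acc <- acc + memory[operand]
            acc += memory[operand]
        elif opcode == STORE:   # memory[operand] <- acc
            memory[operand] = acc
        elif opcode == HALT:
            return memory

# Compute memory[2] = memory[0] + memory[1].
memory = [7, 5, 0]
program = [(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)]
print(run(program, memory))  # [7, 5, 12]
```

The accumulator here is an implicit operand: only the memory address appears in the instruction, as on the single-accumulator machines discussed later in this text.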
This acts as an abstraction layer, enabling compatibility within the same family of CPUs, so that machine code written or generated according to the ISA for the family will run on all CPUs in the family, including future CPUs." +2128,"In general, each architecture family has its own ISA, and hence its own specific machine code language. There are exceptions, such as the VAX architecture, which included optional support of the PDP-11 instruction set and IA-64, which included optional support of the IA-32 instruction set. Another example is the PowerPC 615, a processor designed to natively process both PowerPC and x86 instructions." +2129,"Machine code is a strictly numerical language, and is the lowest-level interface to the CPU intended for a programmer. Assembly language provides a direct mapping between the numerical machine code and a human-readable version where numerical opcodes and operands are replaced by readable strings . While it is possible to write programs directly in machine code, managing individual bits and calculating numerical addresses and constants manually is tedious and error-prone. For this reason, programs are very rarely written directly in machine code in modern contexts, but may be done for low-level debugging, program patching and assembly language disassembly." +2130,"The majority of practical programs today are written in higher-level languages. Those programs are either translated into machine code by a compiler, or are interpreted by an interpreter, usually after being translated into an intermediate code, such as a bytecode, that is then interpreted." +2131,"Machine code is by definition the lowest level of programming detail visible to the programmer, but internally many processors use microcode or optimize and transform machine code instructions into sequences of micro-ops. 
Microcode and micro-ops are not generally considered to be machine code; except on some machines, the user cannot write microcode or micro-ops, and the operation of microcode and the transformation of machine-code instructions into micro-ops happens transparently to the programmer except for performance related side effects." +2132,"Every processor or processor family has its own instruction set. Instructions are patterns of bits, digits, or characters that correspond to machine commands. Thus, the instruction set is specific to a class of processors using the same architecture. Successor or derivative processor designs often include instructions of a predecessor and may add new additional instructions. Occasionally, a successor design will discontinue or alter the meaning of some instruction code , affecting code compatibility to some extent; even compatible processors may show slightly different behavior for some instructions, but this is rarely a problem. Systems may also differ in other details, such as memory arrangement, operating systems, or peripheral devices. Because a program normally relies on such factors, different systems will typically not run the same machine code, even when the same type of processor is used." +2133,"A processor's instruction set may have fixed-length or variable-length instructions. How the patterns are organized varies with the particular architecture and type of instruction. Most instructions have one or more opcode fields that specify the basic instruction type , the operation , and other fields that may give the type of the operand, the addressing mode, the addressing offset or index, or the operand value itself ." +2134,"Not all machines or individual instructions have explicit operands. On a machine with a single accumulator, the accumulator is implicitly both the left operand and result of most arithmetic instructions. 
Some other architectures, such as the x86 architecture, have accumulator versions of common instructions, with the accumulator regarded as one of the general registers by longer instructions. A stack machine has most or all of its operands on an implicit stack. Special purpose instructions also often lack explicit operands; for example, CPUID in the x86 architecture writes values into four implicit destination registers. This distinction between explicit and implicit operands is important in code generators, especially in the register allocation and live range tracking parts. A good code optimizer can track implicit as well as explicit operands, which may allow more frequent constant propagation, constant folding of registers, and other code enhancements.
2135,"A computer program is a list of instructions that can be executed by a central processing unit. A program is executed so that the CPU running it can solve a problem and thus accomplish a result. While simple processors execute instructions one after another, superscalar processors are able, under certain circumstances, to execute two or more instructions simultaneously. As an example, the original Intel Pentium from 1993 can execute at most two instructions per clock cycle when its pipeline is full."
2136,"Program flow may be influenced by special 'jump' instructions that transfer execution to an address other than the next numerically sequential address. Whether these conditional jumps occur is dependent upon a condition such as a value being greater than, less than, or equal to another value."
2137,"A much more human-friendly rendition of machine language, called assembly language, uses mnemonic codes to refer to machine code instructions, rather than using the instructions' numeric values directly, and uses symbolic names to refer to storage locations and sometimes registers. 
For example, on the Zilog Z80 processor, the machine code 00000101, which causes the CPU to decrement the B general-purpose register, would be represented in assembly language as DEC B."
2138,"The MIPS architecture provides a specific example of a machine code whose instructions are always 32 bits long.: 299  The general type of instruction is given by the op field, the highest 6 bits. J-type and I-type instructions are fully specified by op. R-type instructions include an additional field funct to determine the exact operation. The fields used in these types are:"
2139,"rs, rt, and rd indicate register operands; shamt gives a shift amount; and the address or immediate fields contain an operand directly.: 299–301"
2140,"For example, adding the registers 1 and 2 and placing the result in register 6 is encoded:: 554"
2141,"Load a value into register 8, taken from the memory cell 68 cells after the location listed in register 3:: 552"
2142,Jumping to the address 1024:: 552
2143,"On processor architectures with variable-length instruction sets it is, within the limits of the control-flow resynchronizing phenomenon known as the Kruskal count, sometimes possible through opcode-level programming to deliberately arrange the resulting code so that two code paths share a common fragment of opcode sequences. These are called overlapping instructions, overlapping opcodes, overlapping code, overlapped code, instruction scission, or jump into the middle of an instruction, and represent a form of superposition."
2144,"In the 1970s and 1980s, overlapping instructions were sometimes used to preserve memory space. One example was the implementation of error tables in Microsoft's Altair BASIC, where interleaved instructions mutually shared their instruction bytes. 
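The three MIPS examples above can be reproduced with a short encoder sketch. The field layout follows the op/rs/rt/rd/shamt/funct description given earlier; the numeric opcode and funct values used here (add funct 0b100000, lw op 0b100011, j op 0b000010) are the standard MIPS I encodings:

```python
def r_type(op, rs, rt, rd, shamt, funct):
    # Pack the six R-type fields into one 32-bit word.
    return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

def i_type(op, rs, rt, imm):
    # I-type: op, two registers, and a 16-bit immediate/offset field.
    return (op << 26) | (rs << 21) | (rt << 16) | (imm & 0xFFFF)

def j_type(op, target):
    # J-type: the 26-bit field holds the word address (byte address / 4).
    return (op << 26) | ((target >> 2) & 0x3FFFFFF)

# add $6, $1, $2   -> op = 0, funct = 0b100000
add_word = r_type(0, 1, 2, 6, 0, 0b100000)
# lw $8, 68($3)    -> op = 0b100011
lw_word = i_type(0b100011, 3, 8, 68)
# j 1024           -> op = 0b000010
j_word = j_type(0b000010, 1024)

print(hex(add_word))  # 0x223020
print(hex(lw_word))   # 0x8c680044
print(hex(j_word))    # 0x8000100
```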
The technique is rarely used today, but it may still be necessary where extreme optimization for size at the byte level is required, such as in the implementation of boot loaders which have to fit into boot sectors."
2145,It is also sometimes used as a code obfuscation technique as a measure against disassembly and tampering.
2146,The principle is also utilized in shared code sequences of fat binaries which must run on multiple instruction-set-incompatible processor platforms.
2147,This property is also used to find unintended instructions called gadgets in existing code repositories and is utilized in return-oriented programming as an alternative to code injection for exploits such as return-to-libc attacks.
2148,"In some computers, the machine code of the architecture is implemented by an even more fundamental underlying layer called microcode, providing a common machine language interface across a line or family of different models of computer with widely different underlying dataflows. This is done to facilitate porting of machine language programs between different models. An example of this use is the IBM System/360 family of computers and their successors. With dataflow path widths of 8 bits to 64 bits and beyond, they nevertheless present a common architecture at the machine language level across the entire line."
2149,"Using microcode to implement an emulator enables the computer to present the architecture of an entirely different computer. The System/360 line used this to allow porting programs from earlier IBM machines to the new family of computers, e.g. an IBM 1401/1440/1460 emulator on the IBM S/360 model 40."
2150,"Machine code is generally different from bytecode, which is either executed by an interpreter or itself compiled into machine code for faster execution. An exception is when a processor is designed to use a particular bytecode directly as its machine code, such as is the case with Java processors."
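The overlapping-instruction idea described above can be illustrated with a toy variable-length instruction set; the two opcodes and their sizes below are invented for illustration:

```python
# Toy variable-length instruction set (invented):
#   0x01 nn  -> PUSH nn  (two bytes)
#   0x02     -> NOP      (one byte)
SIZES = {0x01: 2, 0x02: 1}

def decode(code, start):
    # Linear-sweep decoder: return the instruction stream seen from 'start'.
    stream, pc = [], start
    while pc < len(code):
        opcode = code[pc]
        size = SIZES[opcode]
        stream.append(tuple(code[pc:pc + size]))
        pc += size
    return stream

code = bytes([0x01, 0x02, 0x01, 0x02, 0x02])
print(decode(code, 0))  # [(1, 2), (1, 2), (2,)]
print(decode(code, 1))  # [(2,), (1, 2), (2,)]
```

Jumping one byte into the sequence yields a different but equally valid instruction stream, so the two code paths share the same underlying bytes, which is exactly the effect exploited by overlapping opcodes and return-oriented-programming gadgets.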
+2151,Machine code and assembly code are sometimes called native code when referring to platform-dependent parts of language features or libraries.
2152,"From the point of view of the CPU, machine code is stored in RAM, but is typically also kept in a set of caches for performance reasons. There may be different caches for instructions and data, depending on the architecture."
2153,"The CPU knows what machine code to execute, based on its internal program counter. The program counter points to a memory address and is changed based on special instructions which may cause programmatic branches. The program counter is typically set to a hard-coded value when the CPU is first powered on, and will hence execute whatever machine code happens to be at this address."
2154,"Similarly, the program counter can be set to execute whatever machine code is at some arbitrary address, even if this is not valid machine code. This will typically trigger an architecture-specific protection fault."
2155,"In a paging-based system, page permissions often tell the CPU whether the current page actually holds machine code, via an execute bit; pages have multiple such permission bits for various housekeeping functionality. For example, on Unix-like systems memory pages can be toggled to be executable with the mprotect system call, and on Windows, VirtualProtect can be used to achieve a similar result. If an attempt is made to execute machine code on a non-executable page, an architecture-specific fault will typically occur. Treating data as machine code, or finding new ways to use existing machine code, by various techniques, is the basis of some security vulnerabilities."
2156,"Similarly, in a segment-based system, segment descriptors can indicate whether a segment can contain executable code and in what rings that code can run."
2157,"From the point of view of a process, the code space is the part of its address space where the code in execution is stored. 
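The mprotect call mentioned above can be exercised from Python via ctypes. This is a sketch assuming a Unix-like system; the PROT_* constant values are those used on Linux, and the code only changes page permissions without executing anything:

```python
import ctypes
import mmap

# Allocate a writable page, then flip it to read+execute with mprotect:
# the transition a loader performs before jumping into freshly written
# machine code. The PROT_* values are assumed (Linux); nothing is executed.
PROT_READ, PROT_WRITE, PROT_EXEC = 1, 2, 4

page_size = mmap.PAGESIZE
buf = mmap.mmap(-1, page_size)   # anonymous mapping, page-aligned
buf[0] = 0xC3                    # x86-64 'ret', standing in for generated code

libc = ctypes.CDLL(None, use_errno=True)
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
res = libc.mprotect(ctypes.c_void_p(addr), ctypes.c_size_t(page_size),
                    PROT_READ | PROT_EXEC)
print('mprotect returned', res)  # 0 indicates success
```

After the call, any further write to the page would fault, and an attempt to execute a page lacking PROT_EXEC would likewise trigger the architecture-specific fault described above.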
In multitasking systems this comprises the program's code segment and usually shared libraries. In multi-threading environment, different threads of one process share code space along with data space, which reduces the overhead of context switching considerably as compared to process switching." +2158,"Pamela Samuelson wrote that machine code is so unreadable that the United States Copyright Office cannot identify whether a particular encoded program is an original work of authorship; however, the US Copyright Office does allow for copyright registration of computer programs and a program's machine code can sometimes be decompiled in order to make its functioning more easily understandable to humans. However, the output of a decompiler or disassembler will be missing the comments and symbolic references, so while the output may be easier to read than the object code, it will still be more difficult than the original source code. This problem does not exist for object-code formats like SQUOZE, where the source code is included in the file." +2159,"Cognitive science professor Douglas Hofstadter has compared machine code to genetic code, saying that ""Looking at a program written in machine language is vaguely comparable to looking at a DNA molecule atom by atom.""" +2160,"MIPS is a family of reduced instruction set computer instruction set architectures : A-1 : 19  developed by MIPS Computer Systems, now MIPS Technologies, based in the United States." +2161,"There are multiple versions of MIPS: including MIPS I, II, III, IV, and V; as well as five releases of MIPS32/64 . The early MIPS architectures were 32-bit; 64-bit versions were developed later. As of April 2017, the current version of MIPS is MIPS32/64 Release 6. MIPS32/64 primarily differs from MIPS I–V by defining the privileged kernel mode System Control Coprocessor in addition to the user mode architecture." +2162,"The MIPS architecture has several optional extensions. 
MIPS-3D, which is a simple set of floating-point SIMD instructions dedicated to common 3D tasks; MDMX, which is a more extensive integer SIMD instruction set using the 64-bit floating-point registers; MIPS16e, which adds compression to the instruction stream to make programs take up less room; and MIPS MT, which adds multithreading capability."
2163,"Computer architecture courses in universities and technical schools often study the MIPS architecture. The architecture greatly influenced later RISC architectures such as Alpha. In March 2021, MIPS announced that the development of the MIPS architecture had ended as the company was making the transition to RISC-V."
2164,"The first version of the MIPS architecture was designed by MIPS Computer Systems for its R2000 microprocessor, the first MIPS implementation. Both MIPS and the R2000 were introduced together in 1985. When MIPS II was introduced, MIPS was renamed MIPS I to distinguish it from the new version.: 32"
2165,"MIPS Computer Systems' R6000 microprocessor was the first MIPS II implementation.: 8  Designed for servers, the R6000 was fabricated and sold by Bipolar Integrated Technology, but was a commercial failure. During the mid-1990s, many new 32-bit MIPS processors for embedded systems were MIPS II implementations because the introduction of the 64-bit MIPS III architecture in 1991 left MIPS II as the newest 32-bit MIPS architecture until MIPS32 was introduced in 1999.: 19"
2166,"MIPS Computer Systems' R4000 microprocessor was the first MIPS III implementation. It was designed for use in personal, workstation, and server computers. MIPS Computer Systems aggressively promoted the MIPS architecture and R4000, establishing the Advanced Computing Environment consortium to advance its Advanced RISC Computing standard, which aimed to establish MIPS as the dominant personal computing platform. 
ARC found little success in personal computers, but the R4000 was widely used in workstation and server computers, especially by its largest user, Silicon Graphics. Other uses of the R4000 included high-end embedded systems and supercomputers. MIPS III was eventually implemented by a number of embedded microprocessors. Quantum Effect Design's R4600 and its derivatives were widely used in high-end embedded systems and low-end workstations and servers. MIPS Technologies' R4200 was designed for embedded systems, laptops, and personal computers. A derivative, the R4300i, fabricated by NEC Electronics, was used in the Nintendo 64 game console. The Nintendo 64, along with the PlayStation, were among the highest-volume users of MIPS architecture processors in the mid-1990s."
2167,"The first MIPS IV implementation was the MIPS Technologies R8000 microprocessor chipset. The design of the R8000 began at Silicon Graphics, Inc. and it was only used in high-end workstations and servers for scientific and technical applications where high performance on large floating-point workloads was important. Later implementations were the MIPS Technologies R10000 and the Quantum Effect Devices R5000 and RM7000. The R10000, fabricated and sold by NEC Electronics and Toshiba, and its derivatives were used by NEC, Pyramid Technology, Silicon Graphics, and Tandem Computers in workstations, servers, and supercomputers. The R5000 and R7000 found use in high-end embedded systems, personal computers, and low-end workstations and servers. A derivative of the R5000 from Toshiba, the R5900, was used in Sony Computer Entertainment's Emotion Engine, which powered its PlayStation 2 game console."
2168,"Announced on October 21, 1996, at the Microprocessor Forum 1996 alongside the MIPS Digital Media Extensions extension, MIPS V was designed to improve the performance of 3D graphics transformations. 
In the mid-1990s, a major use of non-embedded MIPS microprocessors was graphics workstations from Silicon Graphics. MIPS V was complemented by the integer-only MDMX extension to provide a complete system for improving the performance of 3D graphics applications. MIPS V implementations were never introduced. On May 12, 1997, Silicon Graphics announced the H1 and H2 microprocessors. The former was to have been the first MIPS V implementation and was due to be introduced in the first half of 1999. The H1 and H2 projects were later combined and eventually canceled in 1998. While there have not been any MIPS V implementations, MIPS64 Release 1 was based on MIPS V and retains all of its features as an optional Coprocessor 1 feature called Paired-Single." +2169,"When MIPS Technologies was spun out of Silicon Graphics in 1998, it refocused on the embedded market. Through MIPS V, each successive version was a strict superset of the previous version, but this property was found to be a problem, and the architecture definition was changed to define a 32-bit and a 64-bit architecture: MIPS32 and MIPS64. Both were introduced in 1999. MIPS32 is based on MIPS II with some additional features from MIPS III, MIPS IV, and MIPS V; MIPS64 is based on MIPS V. NEC, Toshiba, and SiByte each obtained licenses for MIPS64 as soon as it was announced. Philips, LSI Logic, IDT, Raza Microelectronics, Inc., Cavium, Loongson Technology, and Ingenic Semiconductor have since joined them. MIPS32/MIPS64 Release 5 was announced on December 6, 2012. According to the Product Marketing Director at MIPS, Release 4 was skipped because the number four is perceived as unlucky in many Asian cultures." +2170,"In December 2018, Wave Computing, the new owner of the MIPS architecture, announced that the MIPS ISA would be open-sourced in a program dubbed the MIPS Open initiative. 
The program was intended to open up access to the most recent versions of both the 32-bit and 64-bit designs, making them available without any licensing or royalty fees, as well as granting participants licenses to existing MIPS patents." +2171,"In March 2019, one version of the architecture was made available under a royalty-free license, but later that year the program was shut down again." +2172,"In March 2021, Wave Computing announced that the development of the MIPS architecture had ceased. The company has joined the RISC-V foundation, and future processor designs will be based on the RISC-V architecture. In spite of this, some licensees, such as Loongson, continue with new extensions of MIPS-compatible ISAs on their own." +2173,"In January 2024, Loongson won a case over rights to use the MIPS architecture." +2174,"MIPS is a modular architecture supporting up to four coprocessors. In MIPS terminology, CP0 is the System Control Coprocessor, CP1 is an optional floating-point unit, and CP2/3 are optional implementation-defined coprocessors. For example, in the PlayStation video game console, CP2 is the Geometry Transformation Engine, which accelerates the processing of geometry in 3D computer graphics." +2175,"MIPS is a load/store architecture; except for the load/store instructions used to access memory, all instructions operate on the registers." +2176,"MIPS I has thirty-two 32-bit general-purpose registers. Register $0 is hardwired to zero, and writes to it are discarded. Register $31 is the link register. For integer multiplication and division instructions, which run asynchronously from other instructions, a pair of 32-bit registers, HI and LO, is provided. There is a small set of instructions for copying data between the general-purpose registers and the HI/LO registers." +2177,The program counter has 32 bits. The two low-order bits always contain zero since MIPS I instructions are 32 bits long and are aligned to their natural word boundaries. 
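The register conventions just described can be sketched in a few lines of Python. This is an illustrative model only, not part of any MIPS toolchain; the class and method names are invented for this example:

```python
# Illustrative model of MIPS I integer register state (invented class name).
class MipsRegisters:
    def __init__(self):
        self.gpr = [0] * 32   # thirty-two 32-bit general-purpose registers
        self.hi = 0           # HI/LO receive multiply/divide results
        self.lo = 0

    def write(self, index, value):
        # Register $0 is hardwired to zero: writes to it are discarded.
        if index != 0:
            self.gpr[index] = value & 0xFFFFFFFF

    def read(self, index):
        return self.gpr[index]

regs = MipsRegisters()
regs.write(0, 123)          # discarded: $0 always reads as zero
regs.write(31, 0x00400008)  # $31 is the link register
print(regs.read(0), hex(regs.read(31)))
```

Masking every write with 0xFFFFFFFF mirrors the 32-bit register width; the special-casing of index 0 is the whole of the "$0 is hardwired to zero" rule.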
+2178,"Instructions are divided into three types: R, I, and J. Every instruction starts with a 6-bit opcode. In addition to the opcode, R-type instructions specify three registers, a shift amount field, and a function field; I-type instructions specify two registers and a 16-bit immediate value; J-type instructions follow the opcode with a 26-bit jump target." +2179,The following are the three formats used for the core instruction set: +2180,"MIPS I has instructions that load and store 8-bit bytes, 16-bit halfwords, and 32-bit words. Only one addressing mode is supported: base + displacement. Since MIPS I is a 32-bit architecture, loading quantities narrower than 32 bits requires the datum to be either sign-extended or zero-extended to 32 bits. The load instructions suffixed by ""unsigned"" perform zero extension; otherwise, sign extension is performed. Load instructions source the base from the contents of a GPR and write the result to another GPR. Store instructions source the base from the contents of a GPR and the store data from another GPR. All load and store instructions compute the memory address by summing the base with the sign-extended 16-bit immediate. MIPS I requires all memory accesses to be aligned to their natural word boundaries; otherwise, an exception is signaled. To support efficient unaligned memory accesses, there are load/store word instructions suffixed by ""left"" or ""right"". All load instructions are followed by a load delay slot. The instruction in the load delay slot cannot use the data loaded by the load instruction. The load delay slot can be filled with an instruction that is not dependent on the load; a nop is substituted if such an instruction cannot be found." +2181,"MIPS I has instructions to perform addition and subtraction. These instructions source their operands from two GPRs and write the result to a third GPR. Alternatively, addition can source one of the operands from a 16-bit immediate. 
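The sign- and zero-extension rules for loads and immediates can be made concrete with a short Python sketch (hypothetical helper names, for illustration only): a 16-bit value is widened to 32 bits, and a load/store effective address is the base register plus the sign-extended immediate.

```python
def sign_extend16(value):
    """Sign-extend a 16-bit value to its two's-complement integer value."""
    value &= 0xFFFF
    return value - 0x10000 if value & 0x8000 else value

def zero_extend16(value):
    """Zero-extend a 16-bit value: the 'unsigned' load behavior."""
    return value & 0xFFFF

def effective_address(base, offset16):
    """MIPS load/store address: base GPR + sign-extended 16-bit immediate,
    wrapped to 32 bits."""
    return (base + sign_extend16(offset16)) & 0xFFFFFFFF

print(sign_extend16(0xFFFC))              # -4
print(zero_extend16(0xFFFC))              # 65532
print(hex(effective_address(0x1000, 0xFFFC)))  # 0xffc: base minus 4
```

The same sign-extension applies to the immediate operand of the addition instructions described above.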
The instructions for addition and subtraction have two variants: by default, an exception is signaled if the result overflows; instructions with the ""unsigned"" suffix do not signal an exception. The overflow check interprets the result as a 32-bit two's complement integer. MIPS I has instructions to perform bitwise logical AND, OR, XOR, and NOR. These instructions source their operands from two GPRs and write the result to a third GPR. The AND, OR, and XOR instructions can alternatively source one of the operands from a 16-bit immediate. The set-on-relation instructions write one or zero to the destination register according to whether the specified relation is true or false. These instructions source their operands from two GPRs, or from one GPR and a 16-bit immediate, and write the result to a third GPR. By default, the operands are interpreted as signed integers. The variants of these instructions that are suffixed with ""unsigned"" interpret the operands as unsigned integers." +2182,The Load Upper Immediate instruction copies the 16-bit immediate into the high-order 16 bits of a GPR. It is used in conjunction with the Or Immediate instruction to load a 32-bit immediate into a register. +2183,"MIPS I has instructions to perform left and right logical shifts and right arithmetic shifts. The operand is obtained from a GPR, and the result is written to another GPR. The shift distance is obtained from either a GPR or a 5-bit ""shift amount"" field." +2184,"MIPS I has instructions for signed and unsigned integer multiplication and division. These instructions source their operands from two GPRs and write their results to a pair of 32-bit registers called HI and LO, since they may execute separately from the other CPU instructions. For multiplication, the high- and low-order halves of the 64-bit product are written to HI and LO. For division, the quotient is written to LO and the remainder to HI. 
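The HI/LO split of multiply and divide results can be modeled in Python (invented helper names, illustrative only). One detail made explicit in the sketch: MIPS signed division truncates toward zero, whereas Python's `//` floors, so the quotient is computed from magnitudes.

```python
MASK32 = 0xFFFFFFFF

def mult(rs, rt):
    """Signed 32x32 multiply: the 64-bit product is split between HI and LO."""
    product = rs * rt
    return (product >> 32) & MASK32, product & MASK32  # (HI, LO)

def div(rs, rt):
    """Signed divide: quotient to LO, remainder to HI. MIPS truncates
    toward zero, unlike Python's flooring // operator."""
    q = abs(rs) // abs(rt)
    if (rs < 0) != (rt < 0):
        q = -q
    r = rs - q * rt
    return r & MASK32, q & MASK32  # (HI, LO)

print(mult(0x10000, 0x10000))  # product 2**32 overflows LO into HI
print(div(-7, 2))              # remainder -1 in HI, quotient -3 in LO
```

Both results are masked to 32 bits, matching the width of the HI and LO registers.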
To access the results, a pair of instructions is provided to copy the contents of HI or LO to a GPR. These instructions are interlocked: reads of HI and LO do not proceed past an unfinished arithmetic instruction that will write to HI and LO. Another pair of instructions copies the contents of a GPR to HI and LO. These instructions are used to restore HI and LO to their original state after exception handling. An instruction that reads HI or LO must be separated from the next instruction that writes them by at least two instructions." +2185,"All MIPS I control flow instructions are followed by a branch delay slot. Unless the branch delay slot is filled by an instruction performing useful work, a nop is substituted. MIPS I branch instructions compare the contents of a GPR against zero or another GPR as signed integers and branch if the specified condition is true. Control is transferred to the address computed by shifting the 16-bit offset left by two bits, sign-extending the 18-bit result, and adding the 32-bit sign-extended result to the address of the instruction in the branch delay slot (the branch instruction's address plus 4). Jumps have two versions: absolute and register-indirect. Absolute jumps compute the address to which control is transferred by shifting the 26-bit instr_index left by two bits and concatenating the 28-bit result with the four high-order bits of the address of the instruction in the branch delay slot. Register-indirect jumps transfer control to the instruction at the address sourced from a GPR. The address sourced from the GPR must be word-aligned, else an exception is signaled after the instruction in the branch delay slot is executed. Branch and jump instructions that link save the return address to GPR 31. The ""Jump and Link Register"" instruction permits the return address to be saved to any writable GPR." +2186,MIPS I has two instructions for software to signal an exception: System Call and Breakpoint. 
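The branch target computation can be checked with a small Python model (illustrative only; per the MIPS architecture manuals, the shifted, sign-extended offset is added to the address of the delay-slot instruction, i.e., the branch address plus 4):

```python
def branch_target(branch_pc, offset16):
    """MIPS I branch target: shift the 16-bit offset left two bits,
    sign-extend the 18-bit result, and add it to the delay-slot address."""
    disp = (offset16 & 0xFFFF) << 2
    if disp & 0x20000:          # sign bit of the 18-bit shifted offset
        disp -= 0x40000
    return (branch_pc + 4 + disp) & 0xFFFFFFFF

print(hex(branch_target(0x00400000, 0x0003)))  # forward 3 instructions
print(hex(branch_target(0x00400000, 0xFFFF)))  # offset -1: branch to itself
```

An offset of -1 lands back on the branch instruction itself, which is why tight spin loops on MIPS encode that offset.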
System Call is used by user mode software to make kernel calls; Breakpoint is used to transfer control to a debugger via the kernel's exception handler. Both instructions have a 20-bit Code field that can contain operating environment-specific information for the exception handler. +2187,"MIPS has 32 floating-point registers. Two registers are paired for double-precision numbers. Odd-numbered registers cannot be used for arithmetic or branching; they serve only as the second half of a double-precision register pair, leaving 16 usable registers for most instructions." +2188,"Single precision is denoted by the .s suffix, while double precision is denoted by the .d suffix." +2189,"MIPS II removed the load delay slot and added several sets of instructions. For shared-memory multiprocessing, the Synchronize Shared Memory, Load Linked Word, and Store Conditional Word instructions were added. A set of trap-on-condition instructions was added; these instructions cause an exception if the evaluated condition is true. All existing branch instructions were given branch-likely versions that execute the instruction in the branch delay slot only if the branch is taken. These instructions improve performance in certain cases by allowing useful instructions to fill the branch delay slot. Doubleword load and store instructions for COP1–3 were added. Consistent with other memory access instructions, these loads and stores required the doubleword to be naturally aligned." +2190,"The instruction set for the floating-point coprocessor also had several instructions added to it. An IEEE 754-compliant floating-point square root instruction was added. It supported both single- and double-precision operands. A set of instructions that convert single- and double-precision floating-point numbers to 32-bit words was added. 
These complemented the existing conversion instructions by allowing the IEEE rounding mode to be specified by the instruction instead of by the Floating Point Control and Status Register." +2191,"MIPS III is a backwards-compatible extension of MIPS II that added support for 64-bit memory addressing and integer operations. The 64-bit data type is called a doubleword, and MIPS III extended the general-purpose registers, HI/LO registers, and program counter to 64 bits to support it. New instructions were added to load and store doublewords, to perform integer addition, subtraction, multiplication, division, and shift operations on them, and to move doublewords between the GPRs and HI/LO registers. For shared-memory multiprocessing, the Load Linked Doubleword and Store Conditional Doubleword instructions were added. Existing instructions originally defined to operate on 32-bit words were redefined, where necessary, to sign-extend the 32-bit results to permit words and doublewords to be treated identically by most instructions. Among those instructions redefined was Load Word. In MIPS III it sign-extends words to 64 bits. To complement Load Word, a version that zero-extends was added." +2192,"The R instruction format's inability to specify the full shift distance for 64-bit shifts required MIPS III to provide three 64-bit versions of each MIPS I shift instruction. The first version is a 64-bit version of the original shift instructions, used to specify constant shift distances of 0–31 bits. The second version is similar to the first, but adds 32 to the shift amount field's value so that constant shift distances of 32–63 bits can be specified. The third version obtains the shift distance from the six low-order bits of a GPR." +2193,MIPS III added a supervisor privilege level between the existing kernel and user privilege levels. This feature only affected the implementation-defined System Control Processor. 
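The three 64-bit shift variants can be sketched in Python; the function names follow the actual left-shift mnemonics (DSLL, DSLL32, DSLLV), but the bodies are an illustrative model, not an implementation:

```python
MASK64 = (1 << 64) - 1

def dsll(value, shamt):
    """Doubleword shift left logical: constant distances 0-31."""
    return (value << (shamt & 0x1F)) & MASK64

def dsll32(value, shamt):
    """Same 5-bit field, but 32 is added to it: constant distances 32-63."""
    return (value << ((shamt & 0x1F) + 32)) & MASK64

def dsllv(value, rs):
    """Variable shift: distance taken from the six low-order bits of a GPR."""
    return (value << (rs & 0x3F)) & MASK64

print(hex(dsll(1, 4)))    # 0x10
print(hex(dsll32(1, 4)))  # 1 << 36
print(hex(dsllv(1, 68)))  # 68 & 63 = 4, so 0x10
```

The split into two constant-shift opcodes exists only because the R format's 5-bit shift amount field cannot encode distances above 31.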
+2194,"MIPS III removed the Coprocessor 3 support instructions and reused its opcodes for the new doubleword instructions. The remaining coprocessors gained instructions to move doublewords between coprocessor registers and the GPRs. The floating-point general registers were extended to 64 bits, and the requirement for instructions to use even-numbered registers only was removed. This is incompatible with earlier versions of the architecture; a bit in the floating-point control/status register is used to operate the MIPS III floating-point unit in a MIPS I- and II-compatible mode. The floating-point control registers were not extended for compatibility. The only new floating-point instructions added were those to copy doublewords between the CPU and FPU, and to convert single- and double-precision floating-point numbers into doubleword integers and vice versa." +2195,"MIPS IV is the fourth version of the architecture. It is a superset of MIPS III and is compatible with all existing versions of MIPS. MIPS IV was designed mainly to improve floating-point performance. To improve access to operands, an indexed addressing mode for FP loads and stores was added, as were prefetch instructions for performing memory prefetching and specifying cache hints." +2196,"MIPS IV added several features to improve instruction-level parallelism. To alleviate the bottleneck caused by a single condition bit, seven condition code bits were added to the floating-point control and status register, bringing the total to eight. FP comparison and branch instructions were redefined so they could specify which condition bit was written or read; and the delay slot formerly required between an FP comparison and an FP branch that reads the condition bit it writes was removed. Support for partial predication was added in the form of conditional move instructions for both GPRs and FPRs, and an implementation could choose between having precise or imprecise exceptions for IEEE 754 traps."
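The conditional-move instructions mentioned above have simple semantics; MIPS IV's MOVZ and MOVN can be sketched in Python (illustrative model, not an implementation):

```python
def movz(rd, rs, rt):
    """MOVZ rd, rs, rt: rd receives rs when rt == 0; otherwise rd keeps its value."""
    return rs if rt == 0 else rd

def movn(rd, rs, rt):
    """MOVN rd, rs, rt: rd receives rs when rt != 0; otherwise rd keeps its value."""
    return rs if rt != 0 else rd

# A branch-free maximum, the kind of sequence a compiler might emit
# under partial predication (hypothetical example):
a, b = 3, 9
result = a                            # start with a
result = movn(result, b, int(b > a))  # overwrite with b when b > a
print(result)
```

Because the move either happens or quietly does not, no branch (and therefore no branch misprediction) is involved; that is the point of partial predication.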
+2197,"MIPS IV added several new FP arithmetic instructions for both single- and double-precision floating-point numbers: fused multiply-add or multiply-subtract, reciprocal, and reciprocal square root. The FP fused multiply-add or multiply-subtract instructions perform either one or two roundings, to exceed or meet IEEE 754 accuracy requirements. The FP reciprocal and reciprocal square-root instructions do not comply with IEEE 754 accuracy requirements, and produce results that differ from the required accuracy by one or two units in the last place. These instructions serve applications where instruction latency is more important than accuracy." +2198,"MIPS V added a new data type, the Paired Single, which consists of two single-precision floating-point numbers stored in the existing 64-bit floating-point registers. Variants of existing floating-point instructions for arithmetic, compare, and conditional move were added to operate on this data type in a SIMD fashion. New instructions were added for loading, rearranging, and converting PS data. It was the first instruction set to exploit floating-point SIMD with existing resources." +2199,"The first release of MIPS32, based on MIPS II, added conditional moves, prefetch instructions, and other features from the R4000 and R5000 families of 64-bit processors. The first release of MIPS64 adds a MIPS32 mode to run 32-bit code. The MUL and MADD instructions, previously available in some implementations, were added to the MIPS32 and MIPS64 specifications, as were cache control instructions. For the purpose of cache control, both SYNC and SYNCI instructions were provided." +2200,MIPS32/MIPS64 Release 6 in 2014 added the following: +2201,Removed infrequently used instructions: +2202,"Reorganized the instruction encoding, freeing space for future expansions." +2203,"The microMIPS32/64 architectures are supersets of the MIPS32 and MIPS64 architectures designed to replace the MIPS16e ASE. 
A disadvantage of MIPS16e is that it requires a mode switch before any of its 16-bit instructions can be processed. microMIPS adds versions of the most frequently used 32-bit instructions that are encoded as 16-bit instructions. This allows programs to intermix 16- and 32-bit instructions without having to switch modes. microMIPS was introduced alongside MIPS32/64 Release 3, and each subsequent release of MIPS32/64 has a corresponding microMIPS32/64 version. A processor may implement microMIPS32/64 or both microMIPS32/64 and its corresponding MIPS32/64 subset. Starting with MIPS32/64 Release 6, support for MIPS16e ended, and microMIPS is the only form of code compression in MIPS." +2204,"The base MIPS32 and MIPS64 architectures can be supplemented with a number of optional architectural extensions, which are collectively referred to as application-specific extensions. These ASEs provide features that improve the efficiency and performance of certain workloads, such as digital signal processing." +2205,"MIPS has had several calling conventions, especially on the 32-bit platform." +2206,"The O32 ABI is the most commonly used ABI, owing to its status as the original System V ABI for MIPS. It is strictly stack-based, with only four registers $a0-$a3 available to pass arguments. Space on the stack is reserved in case the callee needs to save its arguments, but the registers are not stored there by the caller. The return value is stored in register $v0; a second return value may be stored in $v1. The ABI took shape in 1990 and was last updated in 1994. The perceived slowness of passing arguments on the stack, along with an antique floating-point model with only 16 registers, has encouraged the proliferation of many other calling conventions. It is only defined for 32-bit MIPS, but GCC has created a 64-bit variation called O64." +2207,"For 64-bit, the N64 ABI by Silicon Graphics is most commonly used. 
The most important improvement is that eight registers are now available for argument passing; it also increases the number of floating-point registers to 32. There is also an ILP32 version called N32, which uses 32-bit pointers for smaller code, analogous to the x32 ABI. Both run under the 64-bit mode of the CPU. The N32 and N64 ABIs pass the first eight arguments to a function in the registers $a0-$a7; subsequent arguments are passed on the stack. The return value is stored in register $v0; a second return value may be stored in $v1. In both the N32 and N64 ABIs, all registers are considered to be 64 bits wide." +2208,"A few attempts have been made to replace O32 with a 32-bit ABI that resembles N32 more closely. A 1995 conference came up with MIPS EABI, for which the 32-bit version was quite similar. EABI inspired MIPS Technologies to propose a more radical ""NUBI"" ABI that would additionally reuse argument registers for the return value. MIPS EABI is supported by GCC but not LLVM, and neither supports NUBI." +2209,"For all of O32 and N32/N64, the return address is stored in the $ra register. This is automatically set with the use of the JAL or JALR instructions. The function prologue of a MIPS subroutine pushes the return address to the stack." +2210,"On both O32 and N32/N64 the stack grows downwards, but the N32/N64 ABIs require 64-bit alignment for all stack entries. The frame pointer is optional and in practice rarely used, except when the stack allocation in a function is determined at runtime, for example, by calling alloca." +2211,"For N32 and N64, the return address is typically stored 8 bytes before the stack pointer, although this may be optional." +2212,"For the N32 and N64 ABIs, a function must preserve the $s0-$s7 registers, the global pointer, the stack pointer, and the frame pointer. The O32 ABI is the same, except that the calling function is required to save the $gp register instead of the called function."
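The difference in argument passing between O32 and N32/N64 can be modeled with a deliberately simplified Python sketch. This is an assumption-laden illustration: it handles integer arguments only, ignores floating-point registers, aggregates, and O32's reserved stack slots for the first four arguments, and the function name is invented.

```python
def argument_location(index, abi):
    """Where the index-th (0-based) integer argument lands, simplified:
    O32 uses $a0-$a3, N32/N64 use $a0-$a7; the rest go on the stack."""
    if abi == "o32":
        regs = ["$a0", "$a1", "$a2", "$a3"]
    elif abi in ("n32", "n64"):
        regs = ["$a0", "$a1", "$a2", "$a3", "$a4", "$a5", "$a6", "$a7"]
    else:
        raise ValueError(f"unknown ABI: {abi}")
    if index < len(regs):
        return regs[index]
    return f"stack word {index - len(regs)}"

print(argument_location(5, "o32"))  # the 6th argument spills to the stack
print(argument_location(5, "n64"))  # the same argument rides in $a5
```

The sketch makes the practical consequence visible: a six-argument call that touches memory under O32 stays entirely in registers under N32/N64.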
+2213,"For multi-threaded code, the thread local storage pointer is typically stored in special hardware register $29 and is accessed by using the rdhwr instruction. At least one vendor is known to store this information in the $k0 register, which is normally reserved for kernel use, but this is not standard." +2214,"The $k0 and $k1 registers are reserved for kernel use and should not be used by applications, since these registers can be changed at any time by the kernel due to interrupts, context switches, or other events." +2215,"Registers that are preserved across a call are registers that will not be changed by a system call or procedure call. For example, $s-registers must be saved to the stack by a procedure that needs to use them, and $sp and $fp are adjusted by constants on procedure entry and restored before return. By contrast, $ra is changed automatically by any normal function call, and $t-registers must be saved by the program before any procedure call." +2216,The userspace calling convention of position-independent code on Linux additionally requires that when a function is called, the $t9 register must contain the address of that function. This convention dates back to the System V ABI supplement for MIPS. +2217,"MIPS processors are used in embedded systems such as residential gateways and routers. Originally, MIPS was designed for general-purpose computing. During the 1980s and 1990s, MIPS processors for personal, workstation, and server computers were used by many companies, such as Digital Equipment Corporation, MIPS Computer Systems, NEC, Pyramid Technology, SiCortex, Siemens Nixdorf, Silicon Graphics, and Tandem Computers." +2218,"Historically, video game consoles such as the Nintendo 64, Sony PlayStation, PlayStation 2, and PlayStation Portable used MIPS processors. MIPS processors also used to be popular in supercomputers during the 1990s, but all such systems have dropped off the TOP500 list. 
These uses were complemented by embedded applications at first, but during the 1990s, MIPS became a major presence in the embedded processor market, and by the 2000s, most MIPS processors were for these applications." +2219,"In the mid- to late 1990s, it was estimated that one in three RISC microprocessors produced was a MIPS processor." +2220,"By the late 2010s, MIPS machines were still commonly used in embedded markets, including automotive applications, wireless routers, LTE modems, and microcontrollers. They have mostly faded out of the personal, server, and application space." +2221,"Open Virtual Platforms includes OVPsim, a simulator freely available for non-commercial use; a library of models of processors, peripherals, and platforms; and APIs which enable users to develop their own models. The models in the library are open source, written in C, and include the MIPS 4K, 24K, 34K, 74K, 1004K, 1074K, M14K, microAptiv, interAptiv, and proAptiv 32-bit cores and the MIPS 64-bit 5K range of cores. These models are created and maintained by Imperas and, in partnership with MIPS Technologies, have been tested and assigned the MIPS-Verified mark. Sample MIPS-based platforms include both bare-metal environments and platforms for booting unmodified Linux binary images. These platforms and emulators are available as source or binaries, and are fast, free for non-commercial usage, and easy to use. OVPsim is developed and maintained by Imperas; it is very fast and built to handle multicore homogeneous and heterogeneous architectures and systems." +2222,"There is a freely available MIPS32 simulator called SPIM for use in education. EduMIPS64 is a GPL graphical cross-platform MIPS64 CPU simulator, written in Java/Swing. It supports a wide subset of the MIPS64 ISA and allows the user to graphically see what happens in the pipeline when an assembly program is run by the CPU."
+2223,"MARS is another GUI-based MIPS emulator designed for use in education, specifically for use with Patterson and Hennessy's Computer Organization and Design." +2224,"WebMIPS is a browser-based MIPS simulator with a visual representation of a generic pipelined processor. This simulator is quite useful for register tracking during step-by-step execution." +2225,"QtMips provides a simple five-stage pipeline visualization as well as a cache-principle visualization for basic computer architecture courses. Windows, Linux, macOS, and online versions are available." +2226,More advanced free emulators are available from the GXemul and QEMU projects. These emulate the various MIPS III and IV microprocessors in addition to entire computer systems which use them. +2227,"Commercial simulators are available especially for the embedded use of MIPS processors, for example Wind River Simics, Imperas, VaST Systems, and CoWare." +2228,The Creator simulator is portable and allows the user to learn various assembly languages of different processors. +2229,"An electronic calculator is typically a portable electronic device used to perform calculations, ranging from basic arithmetic to complex mathematics." +2230,"The first solid-state electronic calculator was created in the early 1960s. Pocket-sized devices became available in the 1970s, especially after the Intel 4004, the first microprocessor, was developed by Intel for the Japanese calculator company Busicom." +2231,"Modern electronic calculators vary from cheap, give-away, credit-card-sized models to sturdy desktop models with built-in printers. They became popular in the mid-1970s as the incorporation of integrated circuits reduced their size and cost. By the end of that decade, prices had dropped to the point where a basic calculator was affordable to most, and they became common in schools."
+2232,"Computer operating systems as far back as early Unix have included interactive calculator programs such as dc and hoc, and interactive BASIC could be used to do calculations on most 1970s and 1980s home computers. Calculator functions are included in most smartphones, tablets, and personal digital assistant-type devices." +2233,"In addition to general-purpose calculators, there are those designed for specific markets. For example, there are scientific calculators, which include trigonometric and statistical calculations. Some calculators even have the ability to do computer algebra. Graphing calculators can be used to graph functions defined on the real line, or higher-dimensional Euclidean space. As of 2016, basic calculators cost little, but scientific and graphing models tend to cost more." +2234,"With the very wide availability of smartphones and the like, dedicated hardware calculators, while still widely used, are less common than they once were. In 1986, calculators still represented an estimated 41% of the world's general-purpose hardware capacity to compute information. By 2007, this had diminished to less than 0.05%." +2235,"Electronic calculators contain a keyboard with buttons for digits and arithmetical operations; some even contain ""00"" and ""000"" buttons to make larger or smaller numbers easier to enter. Most basic calculators assign only one digit or operation to each button; however, in more specific calculators, a button can perform multiple functions by way of key combinations." +2236,Calculators usually have liquid-crystal displays as output in place of historical light-emitting diode displays and vacuum fluorescent displays; details are provided in the section Technical improvements. +2237,"Large-sized figures are often used to improve readability; a decimal separator is used instead of, or in addition to, vulgar fractions. Various symbols for function commands may also be shown on the display. 
Fractions such as 1⁄3 are displayed as decimal approximations, for example rounded to 0.33333333. Also, some fractions can be difficult to recognize in decimal form; as a result, many scientific calculators are able to work in vulgar fractions or mixed numbers." +2238,"Calculators also have the ability to save numbers into computer memory. Basic calculators usually store only one number at a time; more specific types are able to store many numbers represented in variables. Usually these variables are named ans. The variables can also be used for constructing formulas. Some models have the ability to extend memory capacity to store more numbers; the extended memory address is termed an array index." +2239,"Power sources of calculators are batteries, solar cells, or mains electricity, turning on with a switch or button. Some models even have no turn-off button, but they provide some way to power down, for example by switching off automatically after a period of inactivity. Crank-powered calculators were also common in the early computer era." +2240,"The following keys are common to most pocket calculators. While the arrangement of the digits is standard, the positions of other keys vary from model to model; the illustration is an example." +2241,The arrangement of digits on calculator and other numeric keypads, with the 7-8-9 keys two rows above the 1-2-3 keys, is derived from mechanical calculators and cash registers. It is notably different from the layout of telephone Touch-Tone keypads, which have the 1-2-3 keys on top and the 7-8-9 keys on the third row. +2242,"In general, a basic electronic calculator consists of the following components:" +2243,"The clock rate of a processor chip refers to the frequency at which the central processing unit is running. It is used as an indicator of the processor's speed and is measured in clock cycles per second or hertz. For basic calculators, the speed can vary from a few hundred hertz to the kilohertz range."
+2244,"A basic explanation as to how calculations are performed in a simple four-function calculator:" +2245,"To perform the calculation 25 + 9, one presses keys in the following sequence on most calculators: 2 5 + 9 =." +2246,Other functions are usually performed using repeated additions or subtractions. +2247,"Most pocket calculators do all their calculations in binary-coded decimal rather than binary. BCD is common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware; a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to and from binary." +2248,"The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, smaller code results when representing numbers internally in BCD format, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities." +2249,"Where calculators have additional functions, software algorithms are required to produce high-precision results. Sometimes significant design effort is needed to fit all the desired functions in the limited memory space available in the calculator chip, with acceptable calculation time."
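The digit-at-a-time BCD arithmetic described above can be illustrated with a small Python sketch (a hypothetical helper using the standard add-6 decimal adjustment); it reproduces the 25 + 9 key-sequence example:

```python
def bcd_add(a, b):
    """Add two packed-BCD numbers digit by digit, as a calculator chip
    might: each 4-bit nibble holds one decimal digit, and a nibble sum
    above 9 is corrected by adding 6 and carrying into the next digit."""
    result, carry, shift = 0, 0, 0
    while a or b or carry:
        digit = (a & 0xF) + (b & 0xF) + carry
        carry = 0
        if digit > 9:
            digit = (digit + 6) & 0xF  # decimal adjust: skip nibble values 10-15
            carry = 1
        result |= digit << shift
        shift += 4
        a >>= 4
        b >>= 4
    return result

# 25 + 9 = 34, with each operand and the result encoded as packed BCD
print(hex(bcd_add(0x25, 0x09)))  # 0x34
```

Because each nibble of the result is already a decimal digit, driving a seven-segment display needs no binary-to-decimal conversion, which is exactly the simplification the passage describes.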
+2250,"The first known tools used to aid arithmetic calculations were bones, pebbles, counting boards, and the abacus, known to have been used by Sumerians and Egyptians before 2000 BC. Except for the Antikythera mechanism, development of computing tools arrived near the start of the 17th century: the geometric-military compass, logarithms and Napier bones, and the slide rule." +2251,"The 17th century saw the invention of the mechanical calculator by Wilhelm Schickard in 1623, and later by Blaise Pascal in 1642, a device that was at times somewhat over-promoted as being able to perform all four arithmetic operations with minimal human intervention. Pascal's calculator could add and subtract two numbers directly and thus, if the tedium could be borne, multiply and divide by repetition. Schickard's machine, constructed two decades earlier, used a clever set of mechanised multiplication tables to ease the process of multiplication and division with the adding machine as a means of completing this operation. There is a debate about whether Pascal or Schickard should be credited as the inventor of the calculating machine, owing to the differences between the two inventions. Schickard and Pascal were followed by Gottfried Leibniz, who spent forty years designing a four-operation mechanical calculator, the stepped reckoner, inventing in the process his Leibniz wheel, but who could not get a fully operational machine built. There were also five unsuccessful attempts to design a calculating clock in the 17th century." +2252,"The 18th century saw the arrival of some notable improvements, first by Poleni with the first fully functional calculating clock and four-operation machine, but these machines were almost always one of a kind. Luigi Torchi invented the first direct multiplication machine in 1834: this was also the second key-driven machine in the world, following that of James White. 
It was not until the 19th century and the Industrial Revolution that real developments began to occur. Although machines capable of performing all four arithmetic functions existed prior to the 19th century, the refinement of manufacturing and fabrication processes on the eve of the Industrial Revolution made large-scale production of more compact and modern units possible. The Arithmometer, invented in 1820 as a four-operation mechanical calculator, was released to production in 1851 as an adding machine and became the first commercially successful unit; forty years later, by 1890, about 2,500 arithmometers had been sold, plus a few hundred more from two arithmometer clone makers, and Felt and Tarrant, the only other competitor in true commercial production, had sold 100 comptometers." +2253,"It wasn't until 1902 that the familiar push-button user interface was developed, with the introduction of the Dalton Adding Machine, developed by James L. Dalton in the United States." +2254,"In 1921, Edith Clarke invented the ""Clarke calculator"", a simple graph-based calculator for solving line equations involving hyperbolic functions. This allowed electrical engineers to simplify calculations for inductance and capacitance in power transmission lines." +2255,"The Curta calculator was developed in 1948 and, although costly, became popular for its portability. This purely mechanical hand-held device could do addition, subtraction, multiplication and division. By the early 1970s electronic pocket calculators ended the manufacture of mechanical calculators, although the Curta remains a popular collectable item." +2256,"The first mainframe computers, initially using vacuum tubes and later transistors in the logic circuits, appeared in the 1940s and 1950s. Electronic circuits developed for computers also had application to electronic calculators."
+2257,"The Casio Computer Company, in Japan, released the Model 14-A calculator in 1957, which was the world's first all-electric compact calculator. It did not use electronic logic but was based on relay technology, and was built into a desk. The IBM 608 plugboard programmable calculator was IBM's first all-transistor product, released in 1957; this was a console type system, with input and output on punched cards, and replaced the earlier, larger, vacuum-tube IBM 603." +2258,"In October 1961, the world's first all-electronic desktop calculator, the British Bell Punch/Sumlock Comptometer ANITA was announced. This machine used vacuum tubes, cold-cathode tubes and Dekatrons in its circuits, with 12 cold-cathode ""Nixie"" tubes for its display. Two models were displayed, the Mk VII for continental Europe and the Mk VIII for Britain and the rest of the world, both for delivery from early 1962. The Mk VII was a slightly earlier design with a more complicated mode of multiplication, and was soon dropped in favour of the simpler Mark VIII. The ANITA had a full keyboard, similar to mechanical comptometers of the time, a feature that was unique to it and the later Sharp CS-10A among electronic calculators. The ANITA weighed roughly 33 pounds due to its large tube system. Bell Punch had been producing key-driven mechanical calculators of the comptometer type under the names ""Plus"" and ""Sumlock"", and had realised in the mid-1950s that the future of calculators lay in electronics. They employed the young graduate Norbert Kitz, who had worked on the early British Pilot ACE computer project, to lead the development. The ANITA sold well since it was the only electronic desktop calculator available, and was silent and quick." +2259,"The tube technology of the ANITA was superseded in June 1963 by the U.S. 
manufactured Friden EC-130, which had an all-transistor design, a stack of four 13-digit numbers displayed on a 5-inch cathode ray tube, and introduced reverse Polish notation to the calculator market for a price of $2200, which was about three times the cost of an electromechanical calculator of the time. Like Bell Punch, Friden was a manufacturer of mechanical calculators that had decided that the future lay in electronics. In 1964 more all-transistor electronic calculators were introduced: Sharp introduced the CS-10A, which weighed 25 kilograms and cost 500,000 yen, and Industria Macchine Elettroniche of Italy introduced the IME 84, to which several extra keyboard and display units could be connected so that several people could make use of it. The Victor 3900 was the first to use integrated circuits in place of individual transistors, but production problems delayed sales until 1966." +2260,"There followed a series of electronic calculator models from these and other manufacturers, including Canon, Mathatronics, Olivetti, SCM, Sony, Toshiba, and Wang. The early calculators used hundreds of germanium transistors, which were cheaper than silicon transistors, on multiple circuit boards. Display types used were CRT, cold-cathode Nixie tubes, and filament lamps. Memory technology was usually based on delay-line memory or magnetic-core memory, though the Toshiba ""Toscal"" BC-1411 appears to have used an early form of dynamic RAM built from discrete components. Already there was a desire for smaller and less power-hungry machines." +2261,"Bulgaria's ELKA 6521, introduced in 1965, was developed by the Central Institute for Calculation Technologies and built at the Elektronika factory in Sofia. The name derives from ELektronen KAlkulator, and it weighed around 8 kg. It was the first calculator in the world to include a square root function. Later that same year, the ELKA 22 and the ELKA 25, which had a built-in printer, were released. 
Several other models were developed until the first pocket model, the ELKA 101, was released in 1974. The writing on it was in Roman script, and it was exported to western countries." +2262,"The first desktop programmable calculators were produced in the mid-1960s. They included the Mathatronics Mathatron and the Olivetti Programma 101, which were solid-state, desktop, printing, floating-point, algebraic-entry, programmable, stored-program electronic calculators. Both could be programmed by the end user and print out their results. The Programma 101 saw much wider distribution and had the added feature of offline storage of programs via magnetic cards." +2263,Another early programmable desktop calculator was the Casio produced in 1967. It featured a Nixie tube display and had transistor electronics and ferrite-core memory. +2264,"The Monroe Epic programmable calculator came on the market in 1967. A large, printing, desk-top unit, with an attached floor-standing logic tower, it could be programmed to perform many computer-like functions. However, the only branch instruction was an implied unconditional branch at the end of the operation stack, returning the program to its starting instruction. Thus, it was not possible to include any conditional branch logic. During this era, the absence of the conditional branch was sometimes used to distinguish a programmable calculator from a computer." +2265,"The first Soviet programmable desktop calculator, the mains-powered ISKRA 123, was released at the start of the 1970s." +2266,"The electronic calculators of the mid-1960s were large and heavy desktop machines due to their use of hundreds of transistors on several circuit boards, with a large power consumption that required an AC power supply. There were great efforts to put the logic required for a calculator into fewer and fewer integrated circuits, and calculator electronics was one of the leading edges of semiconductor development. U.S. 
semiconductor manufacturers led the world in large scale integration semiconductor development, squeezing more and more functions into individual integrated circuits. This led to alliances between Japanese calculator manufacturers and U.S. semiconductor companies: Canon Inc. with Texas Instruments, Hayakawa Electric with North-American Rockwell Microelectronics , Busicom with Mostek and Intel, and General Instrument with Sanyo." +2267,"By 1970, a calculator could be made using just a few chips of low power consumption, allowing portable models powered from rechargeable batteries. The first handheld calculator was a 1967 prototype called Cal Tech, whose development was led by Jack Kilby at Texas Instruments in a research project to produce a portable calculator. It could add, multiply, subtract, and divide, and its output device was a paper tape. As a result of the ""Cal-Tech"" project, Texas Instruments was granted master patents on portable calculators." +2268,"The first commercially produced portable calculators appeared in Japan in 1970, and were soon marketed around the world. These included the Sanyo ICC-0081 ""Mini Calculator"", the Canon Pocketronic, and the Sharp QT-8B ""micro Compet"". The Canon Pocketronic was a development from the ""Cal-Tech"" project. It had no traditional display; numerical output was on thermal paper tape." +2269,"Sharp put in great efforts in size and power reduction and introduced in January 1971 the Sharp EL-8, also marketed as the Facit 1111, which was close to being a pocket calculator. It weighed 1.59 pounds , had a vacuum fluorescent display, rechargeable NiCad batteries, and initially sold for US$395." +2270,"However, integrated circuit development efforts culminated in early 1971 with the introduction of the first ""calculator on a chip"", the MK6010 by Mostek, followed by Texas Instruments later in the year. 
Although these early hand-held calculators were very costly, these advances in electronics, together with developments in display technology, led within a few years to the cheap pocket calculator available to all." +2271,"In 1971, Pico Electronics and General Instrument also introduced their first collaboration in ICs, a full single-chip calculator IC for the Monroe Royal Digital III calculator. Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. Pico and GI went on to have significant success in the burgeoning handheld calculator market." +2272,"The first truly pocket-sized electronic calculator was the Busicom LE-120A ""HANDY"", which was marketed early in 1971. Made in Japan, this was also the first calculator to use an LED display, the first hand-held calculator to use a single integrated circuit, the Mostek MK6010, and the first electronic calculator to run off replaceable batteries. Using four AA-size cells, the LE-120A measures 4.9 by 2.8 by 0.9 inches." +2273,"The first European-made pocket-sized calculator, the DB 800, was made in May 1971 by Digitron in Buje, Croatia, with four functions, an eight-digit display, and special characters for a negative number and a warning that the calculation has too many digits to display." +2274,"The first American-made pocket-sized calculator, the Bowmar 901B, measuring 5.2 by 3.0 by 1.5 inches, came out in the autumn of 1971, with four functions and an eight-digit red LED display, for US$240, while in August 1972 the four-function Sinclair Executive became the first slimline pocket calculator, measuring 5.4 by 2.2 by 0.35 inches and weighing 2.5 ounces. It retailed for around £79. By the end of the decade, similar calculators were priced at less than £5. Following protracted development over the course of two years, including a botched partnership with Texas Instruments, Eldorado Electrodata released five pocket calculators in 1972. 
One called the Touch Magic was ""no bigger than a pack of cigarettes"" according to Administrative Management." +2275,"The first Soviet-made pocket-sized calculator, the Elektronika B3-04, was developed by the end of 1973 and sold at the start of 1974." +2276,"One of the first low-cost calculators was the Sinclair Cambridge, launched in August 1973. It retailed for £29.95, or £5 less in kit form, and later models included some scientific functions. The Sinclair calculators were successful because they were far cheaper than the competition; however, their design led to slow and less accurate computations of transcendental functions." +2277,"Meanwhile, Hewlett-Packard had been developing a pocket calculator. Launched in early 1972, it was unlike the other basic four-function pocket calculators then available in that it was the first pocket calculator with scientific functions that could replace a slide rule. The $395 HP-35, along with nearly all later HP engineering calculators, uses reverse Polish notation, also called postfix notation. A calculation like ""8 plus 5"" is, using RPN, performed by pressing 8, Enter↑, 5, and +, instead of the algebraic infix notation: 8, +, 5, =. It had 35 buttons and was based on the Mostek Mk6020 chip." +2278,"The first Soviet scientific pocket-sized calculator, the ""B3-18"", was completed by the end of 1975." +2279,"In 1973, Texas Instruments introduced the SR-10, an algebraic-entry pocket calculator using scientific notation, for $150. Shortly afterwards, the SR-11 featured an added key for entering pi. It was followed the next year by the SR-50, which added log and trig functions to compete with the HP-35, and in 1977 by the mass-marketed TI-30 line, which is still produced." +2280,"In 1978, a new company, Calculated Industries, arose, focused on specialized markets. 
Their first calculator, the Loan Arranger, was a pocket calculator marketed to the real-estate industry, with preprogrammed functions to simplify the process of calculating payments and future values. In 1985, CI launched a calculator for the construction industry called the Construction Master, which came preprogrammed with common construction calculations. This would be the first in a line of construction-related calculators." +2281,"The first programmable pocket calculator was the HP-65, in 1974; it had a capacity of 100 instructions, and could store and retrieve programs with a built-in magnetic card reader. Two years later the HP-25C introduced continuous memory, i.e., programs and data were retained in CMOS memory during power-off. In 1979, HP released the first alphanumeric, programmable, expandable calculator, the HP-41C. It could be expanded with random-access memory and read-only memory modules, and peripherals like bar code readers, microcassette and floppy disk drives, paper-roll thermal printers, and miscellaneous communication interfaces." +2282,"The first Soviet battery-powered programmable pocket calculator, the Elektronika B3-21, was developed by the end of 1976 and released at the start of 1977. Its successor, the Elektronika B3-34, was not backward compatible with the B3-21, even though it kept the reverse Polish notation. The B3-34 thus defined a new command set, which was used in a series of later programmable Soviet calculators. Despite very limited capabilities, people managed to write all kinds of programs for them, including adventure games and libraries of calculus-related functions for engineers. Hundreds, perhaps thousands, of programs were written for these machines, from practical scientific and business software, which were used in real-life offices and labs, to fun games for children. The Elektronika MK-52 calculator was used in the Soviet spacecraft program as a backup for the onboard computer."
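The reverse Polish notation kept by the B3-34, and used by the HP calculators described earlier, is easy to evaluate with a stack, which is why RPN machines needed no parentheses. A minimal sketch (not any machine's actual firmware):

```python
def eval_rpn(tokens):
    """Evaluate a sequence of RPN tokens such as ['8', '5', '+']."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()           # second operand is on top of the stack
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # numbers are simply pushed
    return stack[-1]

# "8 Enter 5 +" on an RPN machine corresponds to:
print(eval_rpn(["8", "5", "+"]))      # 13.0
```

Pressing Enter↑ plays the role of pushing a number onto the stack; each operator key pops its operands and pushes the result.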
+2283,"This series of calculators was also noted for a large number of highly counter-intuitive, mysterious, undocumented features somewhat similar to the ""synthetic programming"" of the American HP-41, which were exploited by applying normal arithmetic operations to error messages, jumping to nonexistent addresses, and other methods. A number of respected monthly publications, including the popular science magazine Nauka i Zhizn, featured special columns dedicated to optimization methods for calculator programmers and updates on undocumented features for hackers, which grew into a whole esoteric science with many branches, named ""yeggogology"". The error messages on those calculators appear as the Russian word ""YEGGOG"" which, unsurprisingly, translates to ""Error""." +2284,"A similar hacker culture in the US revolved around the HP-41, which was also noted for a large number of undocumented features and was much more powerful than the B3-34." +2285,"Through the 1970s the hand-held electronic calculator underwent rapid development. The red LED and blue/green vacuum fluorescent displays consumed a lot of power, and the calculators either had a short battery life or were large so that they could take larger, higher-capacity batteries. In the early 1970s liquid-crystal displays were in their infancy, and there was a great deal of concern that they only had a short operating lifetime. Busicom introduced the Busicom LE-120A ""HANDY"" calculator, the first pocket-sized calculator and the first with an LED display, and announced the Busicom LC with an LCD. However, there were problems with this display and the calculator never went on sale. The first successful calculators with LCDs were manufactured by Rockwell International and sold from 1972 by other companies under such names as Dataking LC-800, Harden DT/12, Ibico 086, Lloyds 40, Lloyds 100, Prismatic 500, and Rapid Data Rapidman 1208LC. 
The LCDs were an early form using the Dynamic Scattering Mode DSM with the numbers appearing as bright against a dark background. To present a high-contrast display these models illuminated the LCD using a filament lamp and solid plastic light guide, which negated the low power consumption of the display. These models appear to have been sold only for a year or two." +2286,"A more successful series of calculators using a reflective DSM-LCD was launched in 1972 by Sharp Inc with the Sharp EL-805, which was a slim pocket calculator. This, and another few similar models, used Sharp's Calculator On Substrate technology. An extension of one glass plate needed for the liquid crystal display was used as a substrate to mount the needed chips based on a new hybrid technology. The COS technology may have been too costly since it was only used in a few models before Sharp reverted to conventional circuit boards." +2287,"In the mid-1970s the first calculators appeared with field-effect, twisted nematic LCDs with dark numerals against a grey background, though the early ones often had a yellow filter over them to cut out damaging ultraviolet rays. The advantage of LCDs is that they are passive light modulators reflecting light, which require much less power than light-emitting displays such as LEDs or VFDs. This led the way to the first credit-card-sized calculators, such as the Casio Mini Card LC-78 of 1978, which could run for months of normal use on button cells." +2288,"There were also improvements to the electronics inside the calculators. All of the logic functions of a calculator had been squeezed into the first ""calculator on a chip"" integrated circuits in 1971, but this was leading edge technology of the time and yields were low and costs were high. Many calculators continued to use two or more ICs, especially the scientific and the programmable ones, into the late 1970s." 
+2289,"The power consumption of the integrated circuits was also reduced, especially with the introduction of CMOS technology. First appearing in the Sharp ""EL-801"" in 1972, CMOS ICs have logic cells whose transistors use appreciable power only when they change state. The LED and VFD displays often required added driver transistors or ICs, whereas LCDs were more amenable to being driven directly by the calculator IC itself." +2290,"With this low power consumption came the possibility of using solar cells as the power source, realised around 1978 by calculators such as the Royal Solar 1, Sharp EL-8026, and Teal Photon." +2291,"At the start of the 1970s, hand-held electronic calculators were very costly, at two or three weeks' wages, and so were a luxury item. The high price was due to their construction requiring many mechanical and electronic components which were costly to produce, and production runs that were too small to exploit economies of scale. Many firms saw that there were good profits to be made in the calculator business with the margin on such high prices. However, the cost of calculators fell as components and their production methods improved, and the effect of economies of scale was felt." +2292,"By 1976, the cost of the cheapest four-function pocket calculator had dropped to a few dollars, about 1/20 of the cost five years before. The results of this were that the pocket calculator was affordable, and that it was now difficult for the manufacturers to make a profit from calculators, leading to many firms dropping out of the business or closing down. The firms that survived making calculators tended to be those with high outputs of higher-quality calculators, or those producing high-specification scientific and programmable calculators." +2293,"The first calculator capable of symbolic computing was the HP-28C, released in 1987. It could, for example, solve quadratic equations symbolically. 
The first graphing calculator was the Casio fx-7000G, released in 1985." +2294,"The two leading manufacturers, HP and TI, released increasingly feature-laden calculators during the 1980s and 1990s. At the turn of the millennium, the line between a graphing calculator and a handheld computer was not always clear, as some very advanced calculators such as the TI-89, the Voyage 200 and the HP-49G could differentiate and integrate functions, solve differential equations, run word processing and PIM software, and connect by wire or IR to other calculators/computers." +2295,"The HP 12c financial calculator, introduced in 1981, is still produced with few changes. It featured the reverse Polish notation mode of data entry. In 2003 several new models were released, including an improved version of the HP 12c, the ""HP 12c platinum edition"", which added more memory, more built-in functions, and an algebraic mode of data entry." +2296,"Calculated Industries competed with the HP 12c in the mortgage and real estate markets by differentiating the key labeling, changing ""I"", ""PV"" and ""FV"" to easier labeling terms such as ""Int"", ""Term"" and ""Pmt"", and by not using reverse Polish notation. However, CI's more successful calculators involved a line of construction calculators, which evolved and expanded from the 1990s to the present. According to Mark Bollman, a mathematics and calculator historian and associate professor of mathematics at Albion College, the ""Construction Master is the first in a long and profitable line of CI construction calculators"" which carried them through the 1980s, 1990s, and to the present." +2297,"In most countries, students use calculators for schoolwork. There was some initial resistance to the idea out of fear that basic or elementary arithmetic skills would suffer. 
There remains disagreement about the importance of the ability to perform calculations in the head, with some curricula restricting calculator use until a certain level of proficiency has been obtained, while others concentrate more on teaching estimation methods and problem-solving. Research suggests that inadequate guidance in the use of calculating tools can restrict the kind of mathematical thinking that students engage in. Others have argued that calculator use can even cause core mathematical skills to atrophy, or that such use can prevent understanding of advanced algebraic concepts. In December 2011 the UK's Minister of State for Schools, Nick Gibb, voiced concern that children can become ""too dependent"" on the use of calculators. As a result, the use of calculators is to be included as part of a review of the curriculum. In the United States, many math educators and boards of education have enthusiastically endorsed the National Council of Teachers of Mathematics standards and actively promoted the use of classroom calculators from kindergarten through high school." +2298,"Personal computers often come with a calculator utility program that emulates the appearance and functions of a calculator, using the graphical user interface to portray a calculator. Examples include the Windows Calculator, Apple's Calculator, and KDE's KCalc. Most personal digital assistants and smartphones also have such a feature." +2299,"The fundamental difference between a calculator and a computer is that a computer can be programmed in a way that allows the program to take different branches according to intermediate results, while calculators are pre-designed with specific functions built in. The distinction is not clear-cut: some devices classed as programmable calculators have programming functions, sometimes with support for programming languages."
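The difference described in the last paragraph can be made concrete with an illustrative sketch (not from the source): a fixed-function calculator always performs the same predesigned operation, while a stored program can take different branches depending on intermediate results.

```python
def fixed_function_add(a, b):
    """Like a non-programmable calculator: one built-in operation, no branching."""
    return a + b

def gcd_program(a, b):
    """Like a stored program: which branch is taken depends on intermediate results."""
    while b != 0:             # the intermediate remainder decides whether to loop again
        a, b = b, a % b
    return a

print(fixed_function_add(25, 9))   # 34
print(gcd_program(48, 18))         # 6
```

The Euclidean-algorithm loop is exactly the kind of data-dependent control flow that the Monroe Epic, lacking a conditional branch, could not express.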
+2300,"For instance, instead of a hardware multiplier, a calculator might implement floating-point mathematics with code in read-only memory, and compute trigonometric functions with the CORDIC algorithm because CORDIC does not require much multiplication. Bit-serial logic designs are more common in calculators, whereas bit-parallel designs dominate general-purpose computers, because a bit-serial design minimizes chip complexity but takes many more clock cycles. This distinction blurs with high-end calculators, which use processor chips associated with computer and embedded-systems design, such as the Z80, MC68000, and ARM architectures, as well as some custom designs specialized for the calculator market." +2301,"In computer programming, a function, subprogram, procedure, method, routine or subroutine is a callable unit that has a well-defined behavior and can be invoked by other software units to exhibit that behavior." +2302,"Callable units provide a powerful programming tool. The primary purpose is to allow for the decomposition of a large and/or complicated problem into chunks that have relatively low cognitive load and to assign the chunks meaningful names. Judicious application can reduce the cost of developing and maintaining software, while increasing its quality and reliability." +2303,"Callable units are present at multiple levels of abstraction in the programming environment. For example, a programmer may write a function in source code that is compiled to machine code that implements similar semantics. There is a callable unit in the source code and an associated one in the machine code, but they are different kinds of callable units – with different implications and features." +2304,"The idea of a callable unit was initially conceived by John Mauchly and Kathleen Antonelli during their work on ENIAC and recorded in a January 1947 Harvard symposium on ""Preparation of Problems for EDVAC-type Machines"". 
Maurice Wilkes, David Wheeler, and Stanley Gill are generally credited with the formal invention of this concept, which they termed a closed sub-routine, contrasted with an open subroutine or macro. However, Alan Turing had discussed subroutines in a 1945 paper on design proposals for the NPL ACE, going so far as to invent the concept of a return address stack." +2305,"The idea of a subroutine was worked out after computing machines had already existed for some time. The arithmetic and conditional jump instructions were planned ahead of time and have changed relatively little, but the special instructions used for procedure calls have changed greatly over the years. The earliest computers and microprocessors, such as the Manchester Baby and the RCA 1802, did not have a single subroutine call instruction. Subroutines could be implemented, but they required programmers to use a call sequence—a series of instructions—at each call site." +2306,Subroutines were implemented in Konrad Zuse's Z4 in 1945. +2307,"In 1945, Alan M. Turing used the terms ""bury"" and ""unbury"" as a means of calling and returning from subroutines." +2308,"In January 1947 John Mauchly presented general notes at 'A Symposium of Large Scale Digital Calculating Machinery' under the joint sponsorship of Harvard University and the Bureau of Ordnance, United States Navy. Here he discusses serial and parallel operation, suggesting:" +2309,"In other words, one can designate subroutine A as division and subroutine B as complex multiplication and subroutine C as the evaluation of a standard error of a sequence of numbers, and so on through the list of subroutines needed for a particular problem. ... All these subroutines will then be stored in the machine, and all one needs to do is make a brief reference to them by number, as they are indicated in the coding."
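Mauchly's scheme reads naturally in modern terms: subroutines stored ""in the machine"" and invoked by number. A sketch of the idea (the names and the statistics formula, the usual standard error of the mean, are ours, not Mauchly's):

```python
import math

def divide(a, b):                      # "subroutine A": division
    return a / b

def complex_multiply(a, b):            # "subroutine B": complex multiplication
    return complex(a) * complex(b)

def standard_error(xs):                # "subroutine C": standard error of a sequence
    n = len(xs)
    mean = sum(xs) / n
    variance = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return math.sqrt(variance / n)

# "All these subroutines will then be stored in the machine, and all one
# needs to do is make a brief reference to them by number":
subroutines = {1: divide, 2: complex_multiply, 3: standard_error}

print(subroutines[1](10, 4))               # 2.5
print(subroutines[2](1 + 2j, 3 + 4j))      # (-5+10j)
print(subroutines[3]([2.0, 4.0, 4.0, 6.0]))
```

The numbered table is the modern descendant of a subroutine library: the caller needs only the number (or name), not the routine's internal coding.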
+2310,Kay McNulty had worked closely with John Mauchly on the ENIAC team and developed an idea for subroutines for the ENIAC computer she was programming during World War II. She and the other ENIAC programmers used the subroutines to help calculate missile trajectories. +2311,Goldstine and von Neumann wrote a paper dated 16 August 1948 discussing the use of subroutines. +2312,"Some very early computers and microprocessors, such as the IBM 1620, the Intel 4004 and Intel 8008, and the PIC microcontrollers, have a single-instruction subroutine call that uses a dedicated hardware stack to store return addresses—such hardware supports only a few levels of subroutine nesting, but can support recursive subroutines. Machines before the mid-1960s—such as the UNIVAC I, the PDP-1, and the IBM 1130—typically used a calling convention which saved the instruction counter in the first memory location of the called subroutine. This allows arbitrarily deep levels of subroutine nesting but does not support recursive subroutines. The IBM System/360 had a subroutine call instruction that placed the saved instruction counter value into a general-purpose register; this can be used to support arbitrarily deep subroutine nesting and recursive subroutines. The Burroughs B5000 was one of the first computers to store subroutine return data on a stack." +2313,"The DEC PDP-6 was one of the first accumulator-based machines to have a subroutine call instruction that saved the return address in a stack addressed by an accumulator or index register. The later PDP-10, PDP-11 and VAX-11 lines followed suit; this feature also supports both arbitrarily deep subroutine nesting and recursive subroutines." +2314,"In the very early assemblers, subroutine support was limited. Subroutines were not explicitly separated from each other or from the main program, and indeed the source code of a subroutine could be interspersed with that of other subprograms. 
Some assemblers would offer predefined macros to generate the call and return sequences. By the 1960s, assemblers usually had much more sophisticated support for both inline and separately assembled subroutines that could be linked together."
+2315,One of the first programming languages to support user-written subroutines and functions was FORTRAN II. The IBM FORTRAN II compiler was released in 1958. ALGOL 58 and other early programming languages also supported procedural programming.
+2316,"Even with this cumbersome approach, subroutines proved very useful. They allowed the use of the same code in many different programs. Memory was a very scarce resource on early computers, and subroutines allowed significant savings in the size of programs."
+2317,"Many early computers loaded the program instructions into memory from a punched paper tape. Each subroutine could then be provided by a separate piece of tape, loaded or spliced before or after the main program; and the same subroutine tape could then be used by many different programs. A similar approach was used in computers that loaded program instructions from punched cards. The name subroutine library originally meant a library, in the literal sense, which kept indexed collections of tapes or decks of cards for collective use."
+2318,"To remove the need for self-modifying code, computer designers eventually provided an indirect jump instruction, whose operand, instead of being the return address itself, was the location of a variable or processor register containing the return address."
+2319,"On those computers, instead of modifying the function's return jump, the calling program would store the return address in a variable so that when the function completed, it would execute an indirect jump that would direct execution to the location given by the predefined variable."
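The return-address-in-a-variable scheme just described can be illustrated with a small simulation. This is a hypothetical sketch, not code from the original text: the switch statement plays the role of instruction memory, `pc` is the program counter, and the names `run_machine` and SQUARE are invented.

```cpp
#include <cassert>

// Hypothetical simulation of the pre-stack calling scheme described in
// the text: the caller stores a return address in a variable, and the
// subroutine "returns" with an indirect jump through that variable.
int run_machine() {
    int return_address = 0;  // the variable that holds the return address
    int result = 0;          // simulated result register
    int pc = 0;              // simulated program counter

    while (true) {
        switch (pc) {
        case 0:                   // main program: call SQUARE
            return_address = 1;   // remember where to resume
            pc = 10;              // jump to the subroutine
            break;
        case 1:                   // resume point after the call
            return result;
        case 10:                  // SQUARE subroutine: compute 7 * 7
            result = 7 * 7;
            pc = return_address;  // indirect jump back to the caller
            break;
        }
    }
}
```

Here `run_machine()` returns 49. Note that a second, nested "call" would overwrite `return_address`, which is exactly why fixed-location return addresses cannot support recursion.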
+2320,"Another advance was the jump to subroutine instruction, which combined the saving of the return address with the calling jump, thereby minimizing overhead significantly."
+2321,"In the IBM System/360, for example, the branch instructions BAL or BALR, designed for procedure calling, would save the return address in a processor register specified in the instruction, by convention register 14. To return, the subroutine had only to execute an indirect branch instruction through that register. If the subroutine needed that register for some other purpose, it would save the register's contents to a private memory location or a register stack."
+2322,"In systems such as the HP 2100, the JSB instruction would perform a similar task, except that the return address was stored in the memory location that was the target of the branch. Execution of the procedure would actually begin at the next memory location. In the HP 2100 assembly language, one would write, for example"
+2323,to call a subroutine called MYSUB from the main program. The subroutine would be coded as
+2324,"The JSB instruction placed the address of the NEXT instruction into the location specified as its operand, and then branched to the NEXT location after that. The subroutine could then return to the main program by executing the indirect jump JMP MYSUB,I which branched to the location stored at location MYSUB."
+2325,"Compilers for Fortran and other languages could easily make use of these instructions when available. This approach supported multiple levels of calls; however, since the return address, parameters, and return values of a subroutine were assigned fixed memory locations, it did not allow for recursive calls."
+2326,"Incidentally, a similar method was used by Lotus 1-2-3, in the early 1980s, to discover the recalculation dependencies in a spreadsheet. Namely, a location was reserved in each cell to store the return address. 
Since circular references are not allowed for natural recalculation order, this allows a tree walk without reserving space for a stack in memory, which was very limited on small computers such as the IBM PC."
+2327,"Most modern implementations of a function call use a call stack, a special case of the stack data structure, to implement function calls and returns. Each procedure call creates a new entry, called a stack frame, at the top of the stack; when the procedure returns, its stack frame is deleted from the stack, and its space may be used for other procedure calls. Each stack frame contains the private data of the corresponding call, which typically includes the procedure's parameters and internal variables, and the return address."
+2328,"The call sequence can be implemented by a sequence of ordinary instructions (an approach still used in reduced instruction set computing and very long instruction word architectures), but many traditional machines designed since the late 1960s have included special instructions for that purpose."
+2329,"The call stack is usually implemented as a contiguous area of memory. It is an arbitrary design choice whether the bottom of the stack is the lowest or highest address within this area, so that the stack may grow forwards or backwards in memory; however, many architectures chose the latter."
+2330,"Some designs, notably some Forth implementations, used two separate stacks, one mainly for control information and the other for data. The former was, or worked like, a call stack and was only indirectly accessible to the programmer through other language constructs while the latter was more directly accessible."
+2331,"When stack-based procedure calls were first introduced, an important motivation was to save precious memory. With this scheme, the compiler does not have to reserve separate space in memory for the private data of each procedure. At any moment, the stack contains only the private data of the calls that are currently active. 
Because of the ways in which programs were usually assembled from libraries, it was not uncommon to find programs that include thousands of functions, of which only a handful are active at any given moment. For such programs, the call stack mechanism could save significant amounts of memory. Indeed, the call stack mechanism can be viewed as the earliest and simplest method for automatic memory management."
+2332,"However, another advantage of the call stack method is that it allows recursive function calls, since each nested call to the same procedure gets a separate instance of its private data."
+2333,"In a multi-threaded environment, there is generally more than one stack. An environment that fully supports coroutines or lazy evaluation may use data structures other than stacks to store their activation records."
+2334,"One disadvantage of the call stack mechanism is the increased cost of a procedure call and its matching return. The extra cost includes incrementing and decrementing the stack pointer, and accessing the local variables and parameters by frame-relative addresses, instead of absolute addresses. The cost may be realized in increased execution time, or increased processor complexity, or both."
+2335,"This overhead is most obvious and objectionable in leaf procedures or leaf functions, which return without making any procedure calls themselves. To reduce that overhead, many modern compilers try to delay the use of a call stack until it is really needed. For example, the call of a procedure P may store the return address and parameters of the called procedure in certain processor registers, and transfer control to the procedure's body by a simple jump. If the procedure P returns without making any other call, the call stack is not used at all. If P needs to call another procedure Q, it will then use the call stack to save the contents of any registers that will be needed after Q returns."
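The stack-frame mechanism described above can be sketched with an explicit stack of frames. This is an illustrative reconstruction, not code from the original text; the names `Frame` and `factorial` and the `phase` field are invented. It shows how each active call keeps its private data (here, the parameter `n`) in its own frame, and how a frame is deleted when its call "returns":

```cpp
#include <cassert>
#include <stack>

// Each Frame is one simulated procedure call on the simulated call stack.
struct Frame {
    int n;      // the call's parameter (its private data)
    int phase;  // 0 = just entered, 1 = resumed after the nested call
};

int factorial(int n) {
    std::stack<Frame> frames;  // the simulated call stack
    frames.push({n, 0});       // the initial call creates the first frame
    int ret = 1;               // simulated return-value register

    while (!frames.empty()) {
        Frame &f = frames.top();
        if (f.phase == 0 && f.n > 1) {
            f.phase = 1;
            frames.push({f.n - 1, 0});   // "call" factorial(n - 1)
        } else {
            ret = (f.n > 1) ? f.n * ret : 1;
            frames.pop();                // "return": the frame is deleted
        }
    }
    return ret;
}
```

`factorial(5)` yields 120. At any moment only the frames of the active calls occupy memory, matching the space-saving argument in the text.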
+2336,"In general, a callable unit is a list of instructions that, starting at the first instruction, executes sequentially except as directed via its internal logic. It can be invoked many times during the execution of a program. Execution continues at the next instruction after the call instruction when it returns control."
+2337,"The features of implementations of callable units evolved over time and vary by context.
This section describes features of the various common implementations."
+2338,"Most modern programming languages provide features to define and call functions, along with syntax for accessing such features."
+2339,"Some languages, such as Pascal, Fortran, Ada and many dialects of BASIC, use a different name for a callable unit that returns a value vs. one that does not.
Other languages, such as C, C++, C# and Lisp, use only one name for a callable unit, function. The C-family languages use the keyword void to indicate no return value."
+2340,"If declared to return a value, a call can be embedded in an expression in order to consume the return value. For example, a square root callable unit might be called like y = sqrt(x)."
+2341,"A callable unit that does not return a value is called as a stand-alone statement like print. This syntax can also be used for a callable unit that returns a value, but the return value will be ignored."
+2342,"Some older languages require a keyword for calls that do not consume a return value, like CALL print."
+2343,"Most implementations, especially in modern languages, support parameters which the callable declares as formal parameters. A caller passes actual parameters, a.k.a. arguments, to match. Different programming languages provide different conventions for passing arguments."
+2344,"In some languages, such as BASIC, a callable has different syntax for a callable that returns a value vs. one that does not.
In other languages, the syntax is the same regardless. 
In some of these languages an extra keyword is used to declare no return value; for example void in C, C++ and C#.
In some languages, such as Python, the difference is whether the body contains a return statement with a value, and a particular callable may return with or without a value based on control flow."
+2345,"In many contexts, a callable may have side effect behavior such as modifying passed or global data, reading from or writing to a peripheral device, accessing a file, halting the program or the machine, or temporarily pausing program execution."
+2346,"In strictly functional programming languages such as Haskell, a function can have no side effects, which means it cannot change the state of the program. Functions always return the same result for the same input. Such languages typically only support functions that return a value, since there is no value in a function that has neither return value nor side effect."
+2347,Most contexts support local variables – memory owned by a callable to hold intermediate values. These variables are typically stored in the call's activation record on the call stack along with other information such as the return address.
+2348,"If supported by the language, a callable may call itself, causing its execution to suspend while another nested execution of the same callable executes. Recursion is a useful means to simplify some complex algorithms and break down complex problems. Recursive languages provide a new copy of local variables on each call. If the programmer desires the recursive callable to use the same variables instead of using locals, they typically declare them in a shared context such as static or global."
+2349,"Languages going back to ALGOL, PL/I and C and modern languages, almost invariably use a call stack, usually supported by the instruction sets to provide an activation record for each call. That way, a nested call can modify its local variables without affecting the variables of any suspended calls."
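The shared-context alternative to locals mentioned above can be sketched as follows. This is a hypothetical example (the names `call_count` and `depth` are invented): one variable declared in a shared (global) context is the same variable in every nested call, while the parameter stays private to each call.

```cpp
#include <cassert>

// Shared context: one copy for all calls, declared outside the callable.
int call_count = 0;

int depth(int n) {
    ++call_count;             // every nested call updates the same counter
    if (n == 0)
        return 0;
    return 1 + depth(n - 1);  // n is a local: each call has its own copy
}
```

Calling `depth(3)` returns 3 and leaves `call_count` at 4, one increment per nested call.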
+2350,Recursion allows direct implementation of functionality defined by mathematical induction and recursive divide and conquer algorithms. Here is an example of a recursive function in C/C++ to find Fibonacci numbers:
+2351,"Early languages like Fortran did not initially support recursion because only one set of variables and return address were allocated for each callable. Early computer instruction sets made storing return addresses and variables on a stack difficult. Machines with index registers or general-purpose registers, e.g., CDC 6000 series, PDP-6, GE 635, System/360, UNIVAC 1100 series, could use one of those registers as a stack pointer."
+2352,"Some languages, e.g., Ada, Pascal, PL/I, Python, support declaring and defining a function inside, e.g., a function body, such that the name of the inner is only visible within the body of the outer."
+2353,"If a callable can be executed properly even when another execution of the same callable is already in progress, that callable is said to be reentrant. A reentrant callable is also useful in multi-threaded situations since multiple threads can call the same callable without fear of interfering with each other. In the IBM CICS transaction processing system, quasi-reentrant was a slightly less restrictive, but similar, requirement for application programs that were shared by many threads."
+2354,"Some languages support overloading – allowing multiple callables with the same name in the same scope, but operating on different types of input. Consider the square root function applied to real number, complex number and matrix input. The algorithm for each type of input is different, and the return value may have a different type. By writing three separate callables with the same name, i.e. sqrt, the resulting code may be easier to write and to maintain since each one has a name that is relatively easy to understand and to remember instead of giving longer and more complicated names like sqrt_real, sqrt_complex, sqrt_matrix."
+2355,Overloading is supported in many languages that support strong typing. Often the compiler selects the overload to call based on the type of the input arguments or it fails if the input arguments do not select an overload. Older and weakly-typed languages generally do not support overloading.
+2356,"Here is an example of overloading in C++, two functions Area that accept different types:"
+2357,PL/I has the GENERIC attribute to define a generic name for a set of entry references called with different types of arguments. Example:
+2358,"Multiple argument definitions may be specified for each entry. A call to ""gen_name"" will result in a call to ""name"" when the argument is FIXED BINARY, ""flame"" when FLOAT, etc. If the argument matches none of the choices ""pathname"" will be called."
+2359,"A closure is a callable plus values of some of its variables captured from the environment in which it was created. Closures were a notable feature of the Lisp programming language, introduced by John McCarthy. Depending on the implementation, closures can serve as a mechanism for side-effects."
+2360,"Besides its happy path behavior, a callable may need to inform the caller about an exceptional condition that occurred during its execution."
+2361,"Most modern languages support exceptions, which allow for exceptional control flow that pops the call stack until an exception handler is found to handle the condition."
+2362,Languages that do not support exceptions can use the return value to indicate success or failure of a call. Another approach is to use a well-known location like a global variable for success indication. A callable writes the value and the caller reads it after a call.
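The recursive C/C++ Fibonacci function referenced earlier is missing from this text; a minimal sketch of what such a function might look like:

```cpp
#include <cassert>

// Recursive Fibonacci: each nested call gets its own copy of n on the
// call stack, as described in the text.
unsigned int fib(unsigned int n) {
    if (n <= 1)
        return n;                    // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);  // two nested recursive calls
}
```

For example, `fib(10)` yields 55.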
+2363,"In the IBM System/360, where return code was expected from a subroutine, the return value was often designed to be a multiple of 4—so that it could be used as a direct branch table index into a branch table often located immediately after the call instruction to avoid extra conditional tests, further improving efficiency. In the System/360 assembly language, one would write, for example:"
+2364,"A call has runtime overhead, which may include but is not limited to:"
+2365,Various techniques are employed to minimize the runtime cost of calls.
+2366,"Some optimizations for minimizing call overhead may seem straightforward, but cannot be used if the callable has side effects. For example, in the expression (f(x)-1)/(f(x)+1), the function f cannot be called only once with its value used two times since the two calls may return different results. Moreover, in the few languages which define the order of evaluation of the division operator's operands, the value of x must be fetched again before the second call, since the first call may have changed it. Determining whether a callable has a side effect is difficult – indeed, undecidable by virtue of Rice's theorem. So, while this optimization is safe in a purely functional programming language, a compiler for a language not limited to functional programming typically assumes the worst case, that every callable may have side effects."
+2367,"Inlining eliminates calls for particular callables. The compiler replaces each call with the compiled code of the callable. Not only does this avoid the call overhead, but it also allows the compiler to optimize code of the caller more effectively by taking into account the context and arguments at that call. Inlining, however, usually increases the compiled code size, except when only called once or the body is very short, like one line."
+2368,"Callables can be defined within a program, or separately in a library that can be used by multiple programs."
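The two overloaded C++ Area functions mentioned earlier are likewise absent from this text. A plausible sketch (the signatures are assumptions): same name, different parameter lists, so the compiler selects the overload from the argument types.

```cpp
#include <cassert>

const double kPi = 3.141592653589793;

// Circle: one argument selects this overload.
double Area(double radius) {
    return kPi * radius * radius;
}

// Rectangle: two arguments select this overload.
double Area(double width, double height) {
    return width * height;
}
```

`Area(1.0)` picks the circle overload (returning pi), while `Area(2.0, 3.0)` picks the rectangle one and yields 6.0.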
+2369,"A compiler translates call and return statements into machine instructions according to a well-defined calling convention. For code compiled by the same or a compatible compiler, functions can be compiled separately from the programs that call them. The instruction sequences corresponding to call and return statements are called the procedure's prologue and epilogue."
+2370,"A built-in function, or builtin function, or intrinsic function, is a function for which the compiler generates code at compile time or provides in a way other than for other functions. A built-in function does not need to be defined like other functions since it is built in to the programming language."
+2371,Advantages of breaking a program into functions include:
+2372,"Compared to using in-line code, invoking a function imposes some computational overhead in the call mechanism."
+2373,"A function typically requires standard housekeeping code – both at the entry to, and exit from, the function."
+2374,Many programming conventions have been developed regarding callables.
+2375,"With respect to naming, many developers name a callable with a phrase starting with a verb when it does a certain task, with an adjective when it makes an inquiry, and with a noun when it is used to substitute variables."
+2376,"Some programmers suggest that a callable should perform exactly one task, and if it performs more than one task, it should be split up into multiple callables. They argue that callables are key components in software maintenance, and their roles in the program must remain distinct."
+2377,"Proponents of modular programming advocate that each callable should have minimal dependency on the rest of the codebase. For example, the use of global variables is generally deemed unwise, because it adds coupling between all callables that use the global variables. If such coupling is not necessary, they advise refactoring callables to accept passed parameters instead."
+2378,"Early BASIC variants require each line to have a unique number that orders the lines for execution. They provide no separation of the code that is callable, no mechanism for passing arguments or returning a value, and all variables are global. They provide the command GOSUB, where sub is short for sub procedure, subprocedure or subroutine. Control jumps to the specified line number and then continues on the next line on return."
+2379,This code repeatedly asks the user to enter a number and reports the square root of the value. Lines 100-130 are the callable.
+2380,"In Microsoft Small Basic, targeted to the student first learning how to program in a text-based language, a callable unit is called a subroutine.
The Sub keyword denotes the start of a subroutine and is followed by a name identifier. Subsequent lines are the body, which ends with the EndSub keyword.
"
+2381,"This can be called as SayHello.
"
+2382,"In later versions of Visual Basic, including the latest product line and VB6, the term procedure is used for the callable unit concept. The keyword Sub is used to return no value and Function to return a value. When used in the context of a class, a procedure is a method.
"
+2383,"Each parameter has a data type that can be specified, but if not, defaults to Object for later versions based on .NET and variant for VB6."
+2384,"VB supports parameter passing conventions by value and by reference via the keywords ByVal and ByRef, respectively.
Unless ByRef is specified, an argument is passed ByVal. Therefore, ByVal is rarely explicitly specified."
+2385,"For a simple type like a number these conventions are relatively clear. Passing ByRef allows the procedure to modify the passed variable whereas passing ByVal does not. For an object, semantics can confuse programmers since an object is always treated as a reference. Passing an object ByVal copies the reference, not the state of the object. 
The called procedure can modify the state of the object via its methods yet cannot modify the object reference of the actual parameter."
+2386,"This does not return a value and has to be called stand-alone, like DoSomething"
+2387,"This returns the value 5, and a call can be part of an expression like y = x + GiveMeFive"
+2388,"This has a side-effect – it modifies the variable passed by reference and could be called for variable v like AddTwo. Given v is 5 before the call, it will be 7 after."
+2389,"In C and C++, a callable unit is called a function.
A function definition starts with the name of the type of value that it returns or void to indicate that it does not return a value. This is followed by the function name, formal arguments in parentheses, and body lines in braces."
+2390,"In C++, a function declared in a class is called a member function or method. A function outside of a class can be called a free function to distinguish it from a member function.
"
+2391,"This function does not return a value and is always called stand-alone, like doSomething"
+2392,This function returns the integer value 5. The call can be stand-alone or in an expression like y = x + giveMeFive
+2393,"This function has a side-effect – it modifies the value passed by address to the input value plus 2. It could be called for variable v as addTwo where the ampersand tells the compiler to pass the address of a variable. Given v is 5 before the call, it will be 7 after."
+2394,This function requires C++ – it would not compile as C. It has the same behavior as the preceding example but passes the actual parameter by reference rather than passing its address. A call such as addTwo does not include an ampersand since the compiler handles passing by reference without syntax in the call.
+2395,"In PL/I a called procedure may be passed a descriptor providing information about the argument, such as string lengths and array bounds. 
This allows the procedure to be more general and eliminates the need for the programmer to pass such information. By default PL/I passes arguments by reference. A function to change the sign of each element of a two-dimensional array might look like:"
+2396,This could be called with various arrays as follows:
+2397,"In Python, the keyword def denotes the start of a function definition. The statements of the function body follow as indented on subsequent lines and end at the line that is indented the same as the first line or end of file."
+2398,"The first function returns greeting text that includes the name passed by the caller. The second function calls the first and is called like greet_martin to write ""Welcome Martin"" to the console."
+2399,"In the procedural interpretation of logic programs, logical implications behave as goal-reduction procedures. A rule of the form:"
+2400,which has the logical reading:
+2401,behaves as a procedure that reduces goals that unify with A to subgoals that are instances of B.
+2402,"Consider, for example, the Prolog program:"
+2403,"Notice that the motherhood function, X = mother, is represented by a relation, as in a relational database. However, relations in Prolog function as callable units."
+2404,"For example, the procedure call ?- parent_child produces the output X = elizabeth. But the same procedure can be called with other input-output patterns. For example:"
+2405,"A branch instruction can be either an unconditional branch, which always results in branching, or a conditional branch, which may or may not cause branching depending on some condition. Also, depending on how it specifies the address of the new instruction sequence, a branch instruction is generally classified as direct, indirect or relative, meaning that the instruction contains the target address, or it specifies where the target address is to be found, or it specifies the difference between the current and target addresses."
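The two Python functions described earlier are missing their code here. A minimal sketch consistent with the description (the names greet and greet_martin are taken from the text; the parameter name is an assumption):

```python
# def starts a function definition; the indented lines form the body.
def greet(name):
    # Returns greeting text that includes the caller-supplied name.
    return "Welcome " + name

def greet_martin():
    # Calls the first function and writes the result to the console.
    print(greet("Martin"))
```

Calling `greet_martin()` writes "Welcome Martin" to the console.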
+2406,"Branch instructions can alter the contents of the CPU's Program Counter. The PC maintains the memory address of the next machine instruction to be fetched and executed. Therefore, a branch, if executed, causes the CPU to execute code from a new memory address, changing the program logic according to the algorithm planned by the programmer."
+2407,"One type of machine level branch is the jump instruction. These may or may not result in the PC being loaded or modified with some new, different value other than what it ordinarily would have been. Jumps typically have unconditional and conditional forms where the latter may be taken or not taken depending on some condition."
+2408,"The second type of machine level branch is the call instruction which is used to implement subroutines. Like jump instructions, calls may or may not modify the PC according to condition codes; however, additionally, a return address is saved in a secure place in memory. Upon completion of the subroutine, this return address is restored to the PC, and so program execution resumes with the instruction following the call instruction."
+2409,"The third type of machine level branch is the return instruction. This ""pops"" a return address off the stack and loads it into the PC register, thus returning control to the calling routine. Return instructions may also be conditionally executed. This description pertains to ordinary practice; however, the machine programmer has considerable powers to manipulate the return address on the stack, and so redirect program execution in any number of different ways."
+2410,"Depending on the processor, jump and call instructions may alter the contents of the PC register in different ways. An absolute address may be loaded, or the current contents of the PC may have some value added or subtracted from its current value, making the destination address relative to the current place in the program. 
The source of the displacement value may vary, such as an immediate value embedded within the instruction, or the contents of a processor register or memory location, or the contents of some location added to an index value."
+2411,"The term branch can also be used when referring to programs in high-level programming languages. In these languages, branches usually take the form of conditional statements of various forms that encapsulate the instruction sequence that will be executed if the conditions are satisfied. Unconditional branch instructions such as GOTO are used to unconditionally jump to a different instruction sequence. If the algorithm requires a conditional branch, the GOTO is preceded by an IF-THEN statement specifying the condition. All high level languages support algorithms that can re-use code as a loop, a control structure that repeats a sequence of instructions until some condition is satisfied that causes the loop to terminate. Loops also qualify as branch instructions. At the machine level, loops are implemented as ordinary conditional jumps that redirect execution to repeating code."
+2412,"In CPUs with flag registers, an earlier instruction sets a condition in the flag register. The earlier instruction may be arithmetic, or a logic instruction. It is often close to the branch, though not necessarily the instruction immediately before the branch. The stored condition is then used in a branch such as jump if overflow-flag set. This temporary information is often stored in a flag register but may also be located elsewhere. A flag register design is simple in slower, simple computers. In fast computers a flag register can place a bottleneck on speed, because instructions that could otherwise operate in parallel need to set the flag bits in a particular sequence."
+2413,"There are also machines where the condition may be checked by the jump instruction itself, such as branch