601
|
Operations and structures for "sequence control" allow controlling the execution flow of program instructions. When certain conditions are met, it is necessary to change the typical sequential execution of a program. Therefore, the interpreter employs data structures that are modified by operations distinct from those used for data manipulation.
|
602
|
Data transfer operations are used to control how operands and data are transported from memory to the interpreter and vice versa. These operations deal with the order in which operands are stored to and retrieved from memory.
|
603
|
Memory management is concerned with the operations performed in memory to allocate data and programs. In the abstract machine, data and programs can be held indefinitely; in the case of programming languages, memory can be allocated or deallocated using a more complex mechanism.
|
604
|
Abstract machine hierarchies are often employed, in which each machine uses the functionality of the level immediately below and adds additional functionality of its own to serve the level immediately above. A hardware computer, constructed with physical electronic devices, lies at the most basic level. Above this level, the abstract microprogrammed machine level may be introduced. The abstract machine supplied by the operating system, which is implemented by a program written in machine language, is located immediately above. The operating system extends the capability of the physical machine by providing higher-level primitives that are not available on the physical machine. The host machine is formed by the abstract machine given by the operating system, on which a high-level programming language is implemented using an intermediary machine, such as the Java Virtual Machine and its bytecode language. The level given by the abstract machine for the high-level language is not usually the final level of the hierarchy. At this point, one or more applications that together deliver additional services may be introduced. A "web machine" level, for example, can be added to implement the functionalities necessary to handle Web communications. The "Web Service" level is located above this, and it provides the functionalities necessary to make web services communicate, both in terms of interaction protocols and the behaviour of the processes involved. At this level, entirely new languages that specify the behaviour of so-called "business processes" based on Web services may be developed. Finally, a specialised application can be found at the highest level, with very specific and limited functionality.
|
605
|
Cell, a cellular architecture containing 9 cores, is the processor used in the PlayStation 3. Another prominent cellular architecture is Cyclops64, a massively parallel architecture currently under development by IBM.
|
606
|
Cellular architectures follow the low-level programming paradigm, which exposes the programmer to much of the underlying hardware. This allows the programmer to greatly optimize their code for the platform, but at the same time makes it more difficult to develop software.
|
607
|
In mathematical logic and theoretical computer science, a register machine is a generic class of abstract machines used in a manner similar to a Turing machine. All models of register machines are Turing equivalent.
|
608
|
The register machine gets its name from its use of one or more "registers". In contrast to the tape and head used by a Turing machine, the model uses multiple, uniquely addressed registers, each of which holds a single positive integer.
|
609
|
There are at least four sub-classes found in the literature, listed here from the most primitive to the most computer-like:
|
610
|
Any properly defined register machine model is Turing equivalent. Computational speed is very dependent on the model specifics.
|
611
|
In practical computer science, a related concept known as a virtual machine is occasionally employed to reduce reliance on underlying machine architectures. These virtual machines are also utilized in educational settings. In textbooks, the term "register machine" is sometimes used interchangeably to describe a virtual machine.
|
612
|
A register machine consists of:
|
613
|
An unbounded number of labelled, discrete registers, each unbounded in extent: a set of registers r0 … rn, each considered to be of infinite extent and each of which holds a single non-negative integer. The registers may do their own arithmetic, or there may be one or more special registers that do the arithmetic, e.g. an "accumulator" and/or "address register". See also Random-access machine.
|
614
|
Tally counters or marks: discrete, indistinguishable objects or marks of only one sort suitable for the model. In the most-reduced counter machine model, per each arithmetic operation only one object/mark is either added to or removed from its location/tape. In some counter machine models and most RAM and RASP models more than one object/mark can be added or removed in one operation with "addition" and usually "subtraction"; sometimes with "multiplication" and/or "division". Some models have control operations such as "copy" that move "clumps" of objects/marks from register to register in one action.
|
615
|
A limited set of instructions: the instructions tend to divide into two classes, arithmetic and control. The instructions are drawn from the two classes to form "instruction-sets", such that an instruction set must allow the model to be Turing equivalent.
Arithmetic: Arithmetic instructions may operate on all registers or on a specific register, such as an accumulator. Typically, they are selected from the following sets, though exceptions exist: Counter machine: { Increment , Decrement , Clear-to-zero } Reduced RAM, RASP: { Increment , Decrement , Clear-to-zero , Load-immediate-constant k, Add , Proper-Subtract , Increment accumulator, Decrement accumulator, Clear accumulator, Add the contents of register r to the accumulator, Proper-Subtract the contents of register r from the accumulator } Augmented RAM, RASP: Includes all of the reduced instructions as well as: { Multiply, Divide, various Boolean bit-wise operations }
Control: Counter machine models: Optionally include { Copy }. RAM and RASP models: Most include { Copy }, or { Load Accumulator from r, Store accumulator into r, Load Accumulator with an immediate constant }. All models: Include at least one conditional "jump" following the test of a register, such as { Jump-if-zero, Jump-if-not-zero , Jump-if-equal, Jump-if-not-equal }. All models optionally include: { unconditional program jump }.
Register-addressing method:
Counter machine: no indirect addressing, immediate operands possible in highly atomized models
RAM and RASP: indirect addressing available, immediate operands typical
Input-output: optional in all models
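The instruction classes above can be made concrete with a small sketch. The following toy interpreter is an illustration only, not any specific author's formulation: the instruction names (INC, DEC, JZ, HALT) and the addition program are invented for the example, and DEC performs the "proper subtract" mentioned above (zero stays zero).

```python
# A minimal counter-machine interpreter: registers hold non-negative
# integers, control flow is default-sequential, and the instruction
# set is { INC r, DEC r, JZ r z, HALT }.
def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "INC":
            registers[args[0]] += 1
        elif op == "DEC":            # proper subtract: 0 stays 0
            r = args[0]
            registers[r] = max(0, registers[r] - 1)
        elif op == "JZ":             # jump to z if register r is zero
            r, z = args
            if registers[r] == 0:
                pc = z
                continue
        elif op == "HALT":
            break
        pc += 1
    return registers

# r1 := r1 + r2 (destroying r2). Note the test-for-zero comes *before*
# the decrement, so the machine never "falls off the end" of a counter.
add = [
    ("JZ", 2, 4),   # 0: if r2 == 0 goto 4 (HALT)
    ("DEC", 2),     # 1: r2 -= 1
    ("INC", 1),     # 2: r1 += 1
    ("JZ", 0, 0),   # 3: unconditional jump to 0 (r0 is kept at zero)
    ("HALT",),      # 4
]
result = run(add, {0: 0, 1: 3, 2: 4})   # r1 ends at 7
```

The unconditional jump on line 3 is simulated with a JZ on a register that is never incremented, a common trick in reduced instruction sets.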
|
621
|
State register: A special Instruction Register, distinct from the registers mentioned earlier, stores the current instruction to be executed along with its address in the instruction table. This register, along with its associated table, is located within the finite state machine. The IR is inaccessible in all models. In the case of RAM and RASP, for determining the "address" of a register, the model can choose either the address specified by the table and temporarily stored in the IR for direct addressing, or the contents of the register specified by the instruction in the IR for indirect addressing. It is important to note that the IR is not the "program counter" of the RASP. The PC is merely another register akin to an accumulator, but specifically reserved for holding the number of the RASP's current register-based instruction. Thus, a RASP possesses two "instruction/program" registers: the IR, and a PC for the program stored in the registers. Additionally, aside from the PC, a RASP may also dedicate another register to the "Program-Instruction Register".
|
622
|
Two trends appeared in the early 1950s—the first to characterize the computer as a Turing machine, the second to define computer-like models—models with sequential instruction sequences and conditional jumps—with the power of a Turing machine, i.e. so-called Turing equivalence. This work was carried out in the context of two "hard" problems: the unsolvable word problem posed by Emil Post—his problem of "tag"—and the very "hard" problem of Hilbert's problems—the 10th question, around Diophantine equations. Researchers were questing for Turing-equivalent models that were less "logical" in nature and more "arithmetic".
|
623
|
The first trend—toward characterizing computers—originated with Hans Hermes, Rózsa Péter, and Heinz Kaphengst; the second trend originated with Hao Wang and, as noted above, was furthered along by Zdzislaw Alexander Melzak, Joachim Lambek and Marvin Minsky.
|
624
|
The last five names are listed explicitly in that order by Yuri Matiyasevich. He follows up with:
|
625
|
Lambek, Melzak, Minsky and Shepherdson and Sturgis independently discovered the same idea at the same time. See note on precedence below.
|
626
|
The history begins with Wang's model.
|
627
|
Wang's work followed from Emil Post's paper and led Wang to his definition of his Wang B-machine—a two-symbol Post–Turing machine computation model with only four atomic instructions:
|
628
|
To these four, both Wang and then C. Y. Lee added another instruction from the Post set { ERASE }, and then Post's unconditional jump { JUMP_to_instruction_z } (or, to make things easier, the conditional jump JUMP_IF_blank_to_instruction_z, or both). Lee named this model a "W-machine":
|
629
|
Wang expressed hope that his model would be "a rapprochement" between the theory of Turing machines and the practical world of the computer.
|
630
|
Wang's work was highly influential. We find him referenced by Minsky, Melzak, and Shepherdson and Sturgis. Indeed, Shepherdson and Sturgis remark that:
|
631
|
Martin Davis eventually evolved this model into the Post–Turing machine.
|
632
|
Difficulties with the Wang/Post–Turing model:
|
633
|
Except there was a problem: the Wang model was still a single-tape Turing-like device, however nice its sequential program instruction-flow might be. Both Melzak and Shepherdson and Sturgis observed this:
|
634
|
Indeed, as examples at Turing machine examples, Post–Turing machine and partial function show, the work can be "complicated".
|
635
|
So why not 'cut the tape' so each is infinitely long but left-ended, and call these three tapes "Post–Turing tapes"? The individual heads will move left and right. In one sense the heads indicate "the tops of the stack" of concatenated marks. Or, in Minsky and Hopcroft and Ullman, the tape is always blank except for a mark at the left end—at no time does a head ever print or erase.
|
636
|
Care must be taken to write the instructions so that a test-for-zero and jump occurs before decrementing, otherwise the machine will "fall off the end" or "bump against the end"—creating an instance of a partial function.
|
637
|
Minsky and Shepherdson–Sturgis prove that only a few tapes—as few as one—still allow the machine to be Turing equivalent if the data on the tape is represented as a Gödel number; this number will evolve as the computation proceeds. In the one-tape version with Gödel number encoding, the counter machine must be able to multiply the Gödel number by a constant, and divide by a constant and jump if the remainder is zero. Minsky shows that the need for this bizarre instruction set can be relaxed to { INC, JZDEC } and the convenience instructions { CLR, J } if two tapes are available. A simple Gödelization is still required, however. A similar result appears in Elgot–Robinson with respect to their RASP model.
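The Gödel-number trick can be sketched in a few lines. In this illustrative encoding (the prime bases and helper names are the sketch's own assumptions, not any paper's notation), two counters a and b are packed into a single integer G = 2^a · 3^b; incrementing a counter multiplies by its prime, decrementing divides, and "jump if zero" reduces to a remainder test:

```python
# Two counters folded into one Gödel number G = 2**a * 3**b.
PRIMES = {"a": 2, "b": 3}

def inc(G, reg):
    # Incrementing counter `reg` multiplies G by its prime.
    return G * PRIMES[reg]

def dec(G, reg):
    # Proper subtract: dividing only when the prime still divides G.
    p = PRIMES[reg]
    return G // p if G % p == 0 else G

def is_zero(G, reg):
    # Counter `reg` is zero iff its prime no longer divides G.
    return G % PRIMES[reg] != 0

G = 1                         # both counters start at zero
G = inc(inc(G, "a"), "a")     # a = 2            -> G = 4
G = inc(G, "b")               # b = 1            -> G = 12
G = dec(dec(G, "a"), "a")     # a back to 0      -> G = 3
```

This is exactly why the single-tape machine needs multiply-by-constant, divide-by-constant, and jump-on-remainder: those three operations simulate INC, DEC, and the zero test on every virtual counter at once.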
|
638
|
Melzak's model is significantly different. He took his own model, flipped the tapes vertically, called them "holes in the ground" to be filled with "pebble counters". Unlike Minsky's "increment" and "decrement", Melzak allowed for proper subtraction of any count of pebbles and "adds" of any count of pebbles.
|
639
|
He defines indirect addressing for his model and provides two examples of its use; his "proof" that his model is Turing equivalent is so sketchy that the reader cannot tell whether or not he intended the indirect addressing to be a requirement for the proof.
|
640
|
The legacy of Melzak's model is Lambek's simplification and the reappearance of its mnemonic conventions in Cook and Reckhow 1973.
|
641
|
Lambek took Melzak's ternary model and atomized it down to the two unary instructions—X+, X− if possible else jump—exactly the same two that Minsky had come up with.
|
642
|
However, like the Minsky model, the Lambek model does execute its instructions in a default-sequential manner—both X+ and X− carry the identifier of the next instruction, and X− also carries the jump-to instruction if the zero-test is successful.
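The explicit next-instruction identifiers can be sketched as data. In this toy rendering (the labels, tuple layout, and the "move register" program are invented for illustration), each X+ names its successor, and each X− names one successor for "decremented" and another for "was zero":

```python
# Lambek/Minsky-style program: control flow is carried by each
# instruction rather than by sequential fall-through.
def run(program, regs, start="start"):
    label = start
    while label != "halt":
        instr = program[label]
        if instr[0] == "X+":                 # ("X+", reg, next)
            _, r, label = instr
            regs[r] += 1
        else:                                # ("X-", reg, next, if_zero)
            _, r, nxt, if_zero = instr
            if regs[r] > 0:
                regs[r] -= 1
                label = nxt                  # zero-test failed: continue
            else:
                label = if_zero              # zero-test succeeded: jump

# Move the contents of register 2 into register 1.
program = {
    "start": ("X-", 2, "bump", "halt"),
    "bump":  ("X+", 1, "start"),
}
regs = {1: 0, 2: 5}
run(program, regs)                           # regs becomes {1: 5, 2: 0}
```

Because every instruction carries its successor, the program is a labelled graph rather than a numbered list, which is precisely the "default-sequential manner" contrast the text draws.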
|
643
|
A RASP or random-access stored-program machine begins as a counter machine with its "program of instructions" placed in its "registers". Analogous to, but independent of, the finite state machine's "Instruction Register", at least one of the registers and one or more "temporary" registers maintain a record of, and operate on, the current instruction's number. The finite state machine's TABLE of instructions is responsible for fetching the current program instruction from the proper register, parsing the program instruction, fetching operands specified by the program instruction, and executing the program instruction.
|
644
|
Except there is a problem: if based on the counter machine chassis, this computer-like von Neumann machine will not be Turing equivalent. It cannot compute everything that is computable. Intrinsically, the model is bounded by the size of its finite state machine's instructions. The counter machine-based RASP can compute any primitive recursive function but not all mu-recursive functions.
|
645
|
Elgot–Robinson investigate the possibility of allowing their RASP model to "self-modify" its program instructions. The idea was an old one, proposed by Burks–Goldstine–von Neumann, and sometimes called "the computed goto". Melzak specifically mentions the "computed goto" by name but instead provides his model with indirect addressing.
|
646
|
Computed goto: A RASP program of instructions that modifies the "goto address" in a conditional- or unconditional-jump program instruction.
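Because a stored program lives in ordinary storage, one instruction can overwrite the target field of a later jump. The sketch below is a toy illustration of that idea (the opcode names and the patching instruction are invented for the example):

```python
# A program that rewrites the target of its own jump before executing it.
program = [
    ("SET_JUMP_TARGET", 3, 5),   # 0: patch instruction 3's target to 5
    ("NOP",),                    # 1
    ("NOP",),                    # 2
    ("JUMP", 1),                 # 3: target will have been rewritten
    ("PRINT", "not reached"),    # 4
    ("PRINT", "reached 5"),      # 5
]
trace = []
pc = 0
while pc < len(program):
    op, *args = program[pc]
    if op == "SET_JUMP_TARGET":
        idx, target = args
        program[idx] = ("JUMP", target)   # the self-modification step
    elif op == "JUMP":
        pc = args[0]
        continue
    elif op == "PRINT":
        trace.append(args[0])
    pc += 1
```

When run, instruction 3 jumps to 5, not to its original target 1, so only "reached 5" lands in the trace: the jump address became a computed quantity rather than a fixed part of the program text.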
|
647
|
But this does not solve the problem. What is necessary is a method to fetch the address of a program instruction that lies "beyond/above" the upper bound of the finite state machine's instruction register and TABLE.
|
648
|
Minsky hints at the issue in his investigation of a counter machine equipped with the instructions { CLR , INC , and RPT }. He doesn't tell us how to fix the problem, but he does observe that:
|
649
|
But Elgot and Robinson solve the problem: they augment their P0 RASP with an indexed set of instructions—a somewhat more complicated form of indirect addressing. Their P'0 model addresses the registers by adding the contents of the "base" register to the "index" specified explicitly in the instruction. Thus the indexing P'0 instructions have one more parameter than the non-indexing P0 instructions:
|
650
|
By 1971, Hartmanis had simplified the indexing to indirection for use in his RASP model.
|
651
|
Indirect addressing: A pointer-register supplies the finite state machine with the address of the target register required for the instruction. Said another way: The contents of the pointer-register is the address of the "target" register to be used by the instruction. If the pointer-register is unbounded, the RAM, and a suitable RASP built on its chassis, will be Turing equivalent. The target register can serve either as a source or destination register, as specified by the instruction.
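One level of pointer-chasing is all indirect addressing amounts to, and it can be sketched in a couple of lines (register numbering and the helper name are illustrative assumptions):

```python
# Registers modeled as a mapping from register number to contents.
regs = {0: 0, 1: 3, 2: 0, 3: 42}

def load_indirect(regs, pointer_reg):
    # The instruction names the pointer register; the *contents* of
    # that register select the actual target register.
    target = regs[pointer_reg]     # e.g. regs[1] == 3 ...
    return regs[target]            # ... so the fetch reads regs[3]

value = load_indirect(regs, 1)     # reads regs[regs[1]], i.e. regs[3] == 42
```

The finite state machine only ever names the pointer register; because the pointer's contents are unbounded, the reachable target addresses are unbounded too, which is the source of Turing equivalence claimed above.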
|
652
|
Note that the finite state machine does not have to explicitly specify this target register's address. It just says to the rest of the machine: get me the contents of the register pointed to by my pointer-register and then do xyz with it. It must specify this pointer-register explicitly by name via its instruction, but it doesn't have to know what number the pointer-register actually contains.
|
653
|
Cook and Reckhow cite Hartmanis and simplify his model to what they call a random-access machine. In a sense we are back to Melzak, but with a much simpler model than Melzak's.
|
654
|
Minsky was working at the MIT Lincoln Laboratory and published his work there; his paper was received for publishing in the Annals of Mathematics on 15 August 1960, but not published until November 1961. Receipt occurred a full year before the work of Melzak and Lambek was received and published. That both were Canadians who published in the Canadian Mathematical Bulletin, that neither would have had reference to Minsky's work because it was not yet published in a peer-reviewed journal, and that Melzak references Wang while Lambek references Melzak, leads one to hypothesize that their work occurred simultaneously and independently.
|
655
|
Almost exactly the same thing happened to Shepherdson and Sturgis. Their paper was received in December 1961—just a few months after Melzak and Lambek's work was received. Again, they had little or no benefit of reviewing the work of Minsky. They were careful to observe in footnotes that papers by Ershov, Kaphengst and Péter had "recently appeared". These were published much earlier, but they appeared in German-language journals, so issues of accessibility present themselves.
|
656
|
The final paper of Shepherdson and Sturgis did not appear in a peer-reviewed journal until 1963. And as they note in their Appendix A, the 'systems' of Kaphengst, Ershov, and Péter are all so similar to the results obtained later as to be indistinguishable from a set of the following:
|
657
|
Indeed, Shepherdson and Sturgis conclude
|
658
|
By order of publishing date, the work of Kaphengst, Ershov, and Péter came first.
|
659
|
Background texts: The following bibliography of source papers includes a number of texts to be used as background. The mathematics that led to the flurry of papers about abstract machines in the 1950s and 1960s can be found in van Heijenoort—an assemblage of original papers spanning the 50 years from Frege to Gödel. Davis's The Undecidable carries the torch onward, beginning with Gödel and continuing through Gödel's postscriptum; the original papers of Alan Turing and Emil Post are included in The Undecidable. The mathematics of Church, Rosser and Kleene that appears as reprints of original papers in The Undecidable is carried further in Kleene, a mandatory text for anyone pursuing a deeper understanding of the mathematics behind the machines. Both Kleene and Davis are referenced by a number of the papers.
|
660
|
For a good treatment of the counter machine see Minsky, Chapter 11, "Models similar to Digital Computers"—he calls the counter machine a "program computer". A recent overview is found in van Emde Boas. A recent treatment of the Minsky/Lambek model can be found in Boolos–Burgess–Jeffrey; they reincarnate Lambek's "abacus model" to demonstrate the equivalence of Turing machines and partial recursive functions, and they provide a graduate-level introduction to both abstract machine models and the mathematics of recursion theory. Beginning with the first edition, Boolos–Burgess presented this model with virtually the same treatment.
|
661
|
The papers: The papers begin with Wang and his dramatic simplification of the Turing machine. Turing, Kleene, Davis and in particular Post are cited in Wang; in turn, Wang is referenced by Melzak, Minsky and Shepherdson–Sturgis as they independently reduce the Turing tapes to "counters". Melzak provides his pebble-in-holes counter machine model with indirection but doesn't carry the treatment further. The work of Elgot–Robinson defines the RASP—the computer-like random-access stored-program machines—and appears to be the first to investigate the failure of the bounded counter machine to calculate the mu-recursive functions. This failure—except with the draconian use of Gödel numbers in the manner of Minsky—leads to their definition of "indexed" instructions for their RASP model. Elgot–Robinson and, more so, Hartmanis investigate RASPs with self-modifying programs. Hartmanis specifies an instruction set with indirection, citing lecture notes of Cook. For use in investigations of computational complexity, Cook and his graduate student Reckhow provide the definition of a RAM. The pointer machines are an offshoot of Knuth and, independently, Schönhage.
|
662
|
For the most part the papers contain mathematics beyond the undergraduate level—in particular the primitive recursive functions and mu recursive functions presented elegantly in Kleene and less in depth, but useful nonetheless, in Boolos–Burgess–Jeffrey .
|
663
|
All texts and papers excepting the four starred have been witnessed. These four are written in German and appear as references in Shepherdson–Sturgis and Elgot–Robinson; Shepherdson–Sturgis offer a brief discussion of their results in Shepherdson–Sturgis' Appendix A. The terminology of at least one paper seems to hark back to the Burks–Goldstine–von Neumann analysis of computer architecture.
|
664
|
NUMA architectures logically follow in scaling from symmetric multiprocessing architectures. They were developed commercially during the 1990s by Unisys, Convex Computer, Honeywell Information Systems Italy, Silicon Graphics, Sequent Computer Systems, Data General, Digital and ICL. Techniques developed by these companies later featured in a variety of Unix-like operating systems, and to an extent in Windows NT.
|
665
|
The first commercial implementation of a NUMA-based Unix system was the Symmetrical Multi Processing XPS-100 family of servers, designed by Dan Gielan of VAST Corporation for Honeywell Information Systems Italy.
|
666
|
Modern CPUs operate considerably faster than the main memory they use. In the early days of computing and data processing, the CPU generally ran slower than its own memory. The performance lines of processors and memory crossed in the 1960s with the advent of the first supercomputers. Since then, CPUs increasingly have found themselves "starved for data" and having to stall while waiting for data to arrive from memory. Many supercomputer designs of the 1980s and 1990s focused on providing high-speed memory access as opposed to faster processors, allowing the computers to work on large data sets at speeds other systems could not approach.
|
667
|
Limiting the number of memory accesses provided the key to extracting high performance from a modern computer. For commodity processors, this meant installing an ever-increasing amount of high-speed cache memory and using increasingly sophisticated algorithms to avoid cache misses. But the dramatic increase in size of the operating systems and of the applications run on them has generally overwhelmed these cache-processing improvements. Multi-processor systems without NUMA make the problem considerably worse. Now a system can starve several processors at the same time, notably because only one processor can access the computer's memory at a time.
|
668
|
NUMA attempts to address this problem by providing separate memory for each processor, avoiding the performance hit when several processors attempt to address the same memory. For problems involving spread data, NUMA can improve the performance over a single shared memory by a factor of roughly the number of processors. Another approach to addressing this problem is the multi-channel memory architecture, in which a linear increase in the number of memory channels increases the memory-access concurrency linearly.
|
669
|
Of course, not all data ends up confined to a single task, which means that more than one processor may require the same data. To handle these cases, NUMA systems include additional hardware or software to move data between memory banks. This operation slows the processors attached to those banks, so the overall speed increase due to NUMA heavily depends on the nature of the running tasks.
|
670
|
AMD implemented NUMA with its Opteron processor, using HyperTransport. Intel announced NUMA compatibility for its x86 and Itanium servers in late 2007 with its Nehalem and Tukwila CPUs. Both Intel CPU families share a common chipset; the interconnection is called Intel QuickPath Interconnect, which provides extremely high bandwidth to enable high on-board scalability. It was replaced by a new version called Intel UltraPath Interconnect with the release of Skylake.
|
671
|
Nearly all CPU architectures use a small amount of very fast non-shared memory known as cache to exploit locality of reference in memory accesses. With NUMA, maintaining cache coherence across shared memory has a significant overhead. Although simpler to design and build, non-cache-coherent NUMA systems become prohibitively complex to program in the standard von Neumann architecture programming model.
|
672
|
Typically, ccNUMA uses inter-processor communication between cache controllers to keep a consistent memory image when more than one cache stores the same memory location. For this reason, ccNUMA may perform poorly when multiple processors attempt to access the same memory area in rapid succession. Support for NUMA in operating systems attempts to reduce the frequency of this kind of access by allocating processors and memory in NUMA-friendly ways and by avoiding scheduling and locking algorithms that make NUMA-unfriendly accesses necessary.
|
673
|
Alternatively, cache coherency protocols such as the MESIF protocol attempt to reduce the communication required to maintain cache coherency. Scalable Coherent Interface is an IEEE standard defining a directory-based cache coherency protocol to avoid scalability limitations found in earlier multiprocessor systems. For example, SCI is used as the basis for the NumaConnect technology.
|
674
|
One can view NUMA as a tightly coupled form of cluster computing. The addition of virtual memory paging to a cluster architecture can allow the implementation of NUMA entirely in software. However, the inter-node latency of software-based NUMA remains several orders of magnitude greater than that of hardware-based NUMA.
|
675
|
Since NUMA largely influences memory access performance, certain software optimizations are needed to allow scheduling threads and processes close to their in-memory data.
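One such optimization can be sketched with the Linux-only CPU-affinity API in Python's standard library: pinning a process to the CPUs of one node so that, under the kernel's first-touch allocation policy, the memory it touches tends to stay local. The CPU numbering here is illustrative, not portable, and the API is absent on non-Linux platforms, so the sketch guards for that:

```python
import os

# Pin the current process to CPU 0 (illustrative: a real NUMA-aware
# program would pick all CPUs of one node, e.g. via libnuma/numactl).
affinity_supported = hasattr(os, "sched_setaffinity")
if affinity_supported:
    os.sched_setaffinity(0, {0})         # pid 0 means "this process"
    pinned = os.sched_getaffinity(0)     # read back the effective mask
else:
    pinned = None                        # e.g. macOS: API not available
```

Schedulers and allocators in NUMA-aware operating systems perform this kind of placement automatically; explicit pinning is mainly useful for long-running, memory-bound workers.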
|
676
|
As of 2011, ccNUMA systems are multiprocessor systems based on the AMD Opteron processor, which can be implemented without external logic, and the Intel Itanium processor, which requires the chipset to support NUMA. Examples of ccNUMA-enabled chipsets are the SGI Shub , the Intel E8870, the HP sx2000 , and those found in NEC Itanium-based systems. Earlier ccNUMA systems such as those from Silicon Graphics were based on MIPS processors and the DEC Alpha 21364 processor.
|
677
|
A scalar processor is classified as a single instruction, single data (SISD) processor in Flynn's taxonomy. The Intel 486 is an example of a scalar processor. It is to be contrasted with a vector processor, where a single instruction operates simultaneously on multiple data items. The difference is analogous to the difference between scalar and vector arithmetic.
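The scalar/vector contrast can be shown in miniature (a conceptual sketch: Python lists stand in for registers, and the comprehension models a single vector instruction rather than real SIMD hardware):

```python
# Element-wise addition of two 4-element "registers".
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]

# Scalar style: one addition per instruction, issued in a loop.
scalar_sum = []
for i in range(len(a)):
    scalar_sum.append(a[i] + b[i])

# Vector style: conceptually one instruction applied across all lanes.
vector_sum = [x + y for x, y in zip(a, b)]
```

Both produce the same result; the difference a vector processor exploits is that the second form exposes all four additions as one operation the hardware can perform in parallel.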
|
678
|
The term scalar in computing dates to the 1970s and 1980s, when vector processors were first introduced. It was originally used to distinguish the older designs from the new vector processors.
|
679
|
A superscalar processor may execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to redundant functional units on the processor. Each functional unit is not a separate CPU core but an execution resource within a single CPU such as an arithmetic logic unit, a bit shifter, or a multiplier. The Cortex-M7, like many consumer CPUs today, is a superscalar processor.
|
680
|
A scalar data type, or just scalar, is any non-composite value.
|
681
|
Generally, all basic primitive data types are considered scalar:
|
682
|
Some programming languages also treat strings as scalar types, while other languages treat strings as arrays or objects.
|
683
|
A quantum computer is a computer that takes advantage of quantum mechanical phenomena.
|
684
|
On small scales, physical matter exhibits properties of both particles and waves, and quantum computing leverages this behavior, specifically quantum superposition and entanglement, using specialized hardware that supports the preparation and manipulation of quantum states.
|
685
|
Classical physics cannot explain the operation of these quantum devices, and a scalable quantum computer could perform some calculations exponentially faster than any modern "classical" computer. In particular, a large-scale quantum computer could break widely used encryption schemes and aid physicists in performing physical simulations; however, the current state of the technology is largely experimental and impractical, with several obstacles to useful applications. Moreover, scalable quantum computers do not hold promise for many practical tasks, and for many important tasks quantum speedups are proven impossible.
|
686
|
The basic unit of information in quantum computing is the qubit, similar to the bit in traditional digital electronics. Unlike a classical bit, a qubit can exist in a superposition of its two "basis" states. When measuring a qubit, the result is a probabilistic output of a classical bit, therefore making quantum computers nondeterministic in general. If a quantum computer manipulates the qubit in a particular way, wave interference effects can amplify the desired measurement results. The design of quantum algorithms involves creating procedures that allow a quantum computer to perform calculations efficiently and quickly.
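The probabilistic readout of a qubit can be illustrated with a toy state-vector sketch (plain Python, no quantum SDK assumed; the amplitudes describe the equal superposition (|0⟩ + |1⟩)/√2, and measurement returns a classical bit with probability equal to the squared amplitude):

```python
import math
import random

# A single qubit in equal superposition: amplitudes 1/sqrt(2) each.
amp0 = amp1 = 1 / math.sqrt(2)
p0 = amp0 ** 2                    # Born rule: probability of outcome 0

def measure():
    # Sampling a classical bit according to the squared amplitudes.
    return 0 if random.random() < p0 else 1

random.seed(0)                    # fixed seed for a repeatable demo
counts = sum(measure() for _ in range(10_000))   # ~5000 ones expected
```

This captures only the "probabilistic output" aspect described above; interference effects, which quantum algorithms use to bias the outcome distribution, require tracking complex amplitudes across multiple qubits and gates.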
|
687
|
Physically engineering high-quality qubits has proven challenging. If a physical qubit is not sufficiently isolated from its environment, it suffers from quantum decoherence, introducing noise into calculations. Paradoxically, perfectly isolating qubits is also undesirable because quantum computations typically need to initialize qubits, perform controlled qubit interactions, and measure the resulting quantum states. Each of those operations introduces errors and suffers from noise, and such inaccuracies accumulate.
|
688
|
In principle, a non-quantum computer can solve the same computational problems as a quantum computer, given enough time. Quantum advantage comes in the form of time complexity rather than computability, and quantum complexity theory shows that some quantum algorithms for carefully selected tasks require exponentially fewer computational steps than the best known non-quantum algorithms. Such tasks can in theory be solved on a large-scale quantum computer whereas classical computers would not finish computations in any reasonable amount of time. However, quantum speedup is not universal or even typical across computational tasks, since basic tasks such as sorting are proven to not allow any asymptotic quantum speedup. Claims of quantum supremacy have drawn significant attention to the discipline, but are demonstrated on contrived tasks, while near-term practical use cases remain limited.
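The kind of speedup described above can be made concrete with Grover's search algorithm: a classical search over N unsorted items needs O(N) oracle queries, while Grover's algorithm needs only about (π/4)·√N iterations. This is a toy state-vector simulation in plain Python for N = 8 (the marked index 5 is an arbitrary choice for illustration):

```python
import math

# Toy Grover search over N = 8 items with one marked index.
N, marked = 8, 5
state = [1 / math.sqrt(N)] * N  # uniform superposition over all items

iterations = round(math.pi / 4 * math.sqrt(N))  # 2 iterations for N = 8
for _ in range(iterations):
    state[marked] *= -1                     # oracle: flip sign of marked amplitude
    mean = sum(state) / N
    state = [2 * mean - a for a in state]   # diffusion: inversion about the mean

probs = [a * a for a in state]
print(f"P(marked) after {iterations} iterations: {probs[marked]:.3f}")  # close to 1
```

After just two iterations the marked item's measurement probability exceeds 0.9, whereas a classical search would on average examine half the items; the gap grows as √N for larger N.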
|
689
|
For many years, the fields of quantum mechanics and computer science formed distinct academic communities. Modern quantum theory developed in the 1920s to explain the wave–particle duality observed at atomic scales, and digital computers emerged in the following decades to replace human computers for tedious calculations. Both disciplines had practical applications during World War II; computers played a major role in wartime cryptography, and quantum physics was essential for the nuclear physics used in the Manhattan Project.
|
690
|
A measurement-based quantum computer decomposes computation into a sequence of Bell state measurements and single-qubit quantum gates applied to a highly entangled initial state, using a technique called quantum gate teleportation.
|
691
|
An adiabatic quantum computer, based on quantum annealing, decomposes computation into a slow continuous transformation of an initial Hamiltonian into a final Hamiltonian, whose ground states contain the solution.
|
692
|
Neuromorphic quantum computing is an unconventional type of computing that uses neuromorphic computing to perform quantum operations. It has been suggested that quantum algorithms, which are algorithms that run on a realistic model of quantum computation, can be computed equally efficiently with neuromorphic quantum computing. Both traditional quantum computing and neuromorphic quantum computing are physics-based unconventional approaches to computation and do not follow the von Neumann architecture. Both construct a system that represents the physical problem at hand and then leverage the physical properties of that system to seek the "minimum", so the two share similar physical behaviour during computation.
|
693
|
A topological quantum computer decomposes computation into the braiding of anyons in a 2D lattice.
|
694
|
A quantum Turing machine is the quantum analog of a Turing machine. All of these models of computation—quantum circuits, one-way quantum computation, adiabatic quantum computation, and topological quantum computation—have been shown to be equivalent to the quantum Turing machine; given a perfect implementation of one such quantum computer, it can simulate all the others with no more than polynomial overhead. This equivalence need not hold for practical quantum computers, since the overhead of simulation may be too large to be practical.
|
695
|
Quantum computing has significant potential applications in the fields of cryptography and cybersecurity. Quantum cryptography, which relies on the principles of quantum mechanics, offers the possibility of secure communication channels that are resistant to eavesdropping. Quantum key distribution protocols, such as BB84, enable the secure exchange of cryptographic keys between parties, ensuring the confidentiality and integrity of communication. Moreover, quantum random number generators can produce high-quality random numbers, which are essential for secure encryption.
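The sifting step of the BB84 protocol mentioned above can be sketched classically: Alice sends random bits encoded in random bases, Bob measures in random bases, and only the positions where their bases happened to match are kept as the shared key. This is a simplified plain-Python model (a fixed random seed makes the run reproducible), not an implementation of an actual quantum channel:

```python
import random

random.seed(0)  # deterministic run for illustration

# Toy BB84 sketch without an eavesdropper.
n = 16
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("ZX") for _ in range(n)]
bob_bases   = [random.choice("ZX") for _ in range(n)]

# With matching bases Bob reads Alice's bit exactly; with mismatched
# bases his outcome is a coin flip and the position is later discarded.
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: keep only positions where the bases matched.
key_a = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
key_b = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
assert key_a == key_b  # sifted keys agree when nobody eavesdrops
print("shared key:", key_a)
```

On average half the positions survive sifting, so roughly n/2 raw bits yield the shared key material.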
|
696
|
However, quantum computing also poses challenges to traditional cryptographic systems. Shor's algorithm, a quantum algorithm for integer factorization, could potentially break widely used public-key cryptography schemes like RSA, which rely on the difficulty of factoring large numbers. Post-quantum cryptography, which involves the development of cryptographic algorithms that are resistant to attacks by both classical and quantum computers, is an active area of research aimed at addressing this concern.
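Shor's algorithm splits into a quantum part, which finds the multiplicative order r of a random base a modulo N, and a purely classical post-processing step that turns r into factors. The sketch below finds the order by brute force, which only works for tiny N; the exponential quantum speedup lies entirely in replacing that brute-force loop:

```python
import math

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), found by brute force."""
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    return r

N, a = 15, 7
r = order(a, N)       # r = 4 for a = 7, N = 15
assert r % 2 == 0     # the post-processing step requires an even order

# gcd(a^(r/2) +/- 1, N) yields nontrivial factors of N.
f1 = math.gcd(pow(a, r // 2) - 1, N)
f2 = math.gcd(pow(a, r // 2) + 1, N)
print(f"{N} = {f1} * {f2}")  # prints "15 = 3 * 5"
```

Because RSA's security rests on the difficulty of recovering such factors for very large N, an efficient quantum order-finding routine would break it.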
|
697
|
Ongoing research in quantum cryptography and post-quantum cryptography is crucial for ensuring the security of communication and data in the face of evolving quantum computing capabilities. Advances in these fields, such as the development of new quantum key distribution (QKD) protocols, the improvement of quantum random number generators (QRNGs), and the standardization of post-quantum cryptographic algorithms, will play a key role in maintaining the integrity and confidentiality of information in the quantum era.
|
698
|
Quantum cryptography enables new ways to transmit data securely; for example, quantum key distribution uses entangled quantum states to establish secure cryptographic keys. When a sender and receiver exchange quantum states, they can guarantee that an adversary does not intercept the message, as any unauthorized eavesdropper would disturb the delicate quantum system and introduce a detectable change. With appropriate cryptographic protocols, the sender and receiver can thus establish shared private information resistant to eavesdropping.
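The detectability of eavesdropping described above can be illustrated with a classical model of the intercept-resend attack on BB84: an eavesdropper who measures each qubit in a random basis and resends it disturbs the state whenever her basis differs from the sender's, raising the error rate in the sifted key to about 25%. This is a statistical sketch in plain Python under those simplified assumptions:

```python
import random

random.seed(1)  # deterministic run for illustration

# Intercept-resend attack: Eve measures in a random basis and resends.
n = 20000
errors = sifted = 0
for _ in range(n):
    bit = random.randint(0, 1)
    a_basis, e_basis, b_basis = (random.choice("ZX") for _ in range(3))
    # Eve reads the bit correctly only when her basis matches Alice's.
    eve_bit = bit if e_basis == a_basis else random.randint(0, 1)
    # Bob measures the state Eve resent (prepared in Eve's basis).
    bob_bit = eve_bit if b_basis == e_basis else random.randint(0, 1)
    if a_basis == b_basis:              # position kept after sifting
        sifted += 1
        errors += bob_bit != bit
print(f"error rate in sifted key: {errors / sifted:.2f}")  # about 0.25
```

Without an eavesdropper the sifted keys would agree perfectly, so by publicly comparing a sample of key bits the parties can detect an error rate far above the channel's noise floor and abort.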
|
699
|
Modern fiber-optic cables can transmit quantum information over relatively short distances. Ongoing experimental research aims to develop more reliable hardware, hoping to scale this technology to long-distance quantum networks with end-to-end entanglement. Theoretically, this could enable novel technological applications, such as distributed quantum computing and enhanced quantum sensing.
|
700
|
Progress in finding quantum algorithms typically focuses on this quantum circuit model, though exceptions like the quantum adiabatic algorithm exist. Quantum algorithms can be roughly categorized by the type of speedup achieved over corresponding classical algorithms.
|