behaves as a procedure that reduces goals that unify with A to subgoals that are instances of B.
Consider, for example, the Prolog program:
Notice that the motherhood function, X = mother, is represented by a relation, as in a relational database. However, relations in Prolog function as callable units.
For example, the procedure call ?- parent_child produces the output X = elizabeth. But the same procedure can be called with other input-output patterns. For example:
A branch instruction can be either an unconditional branch, which always results in branching, or a conditional branch, which may or may not cause branching depending on some condition. Also, depending on how it specifies the address of the new instruction sequence, a branch instruction is generally classified as direct, indirect or relative, meaning that the instruction contains the target address, or it specifies where the target address is to be found, or it specifies the difference between the current and target addresses.
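The three classifications can be sketched in Python. This is a hypothetical helper, not any real ISA's semantics: `branch_target`, the mode names, and the memory layout are invented for illustration.

```python
# Hypothetical sketch of the three ways a branch can specify its target.
def branch_target(mode, operand, pc, memory):
    if mode == "direct":      # the instruction contains the target address
        return operand
    if mode == "indirect":    # the instruction says where the target is to be found
        return memory[operand]
    if mode == "relative":    # the instruction holds the difference from the current address
        return pc + operand
    raise ValueError(mode)

memory = {100: 2048}
assert branch_target("direct", 500, pc=0, memory=memory) == 500
assert branch_target("indirect", 100, pc=0, memory=memory) == 2048
assert branch_target("relative", -4, pc=20, memory=memory) == 16
```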
Branch instructions can alter the contents of the CPU's Program Counter (PC). The PC maintains the memory address of the next machine instruction to be fetched and executed. Therefore, a branch, if executed, causes the CPU to execute code from a new memory address, changing the program logic according to the algorithm planned by the programmer.
One type of machine level branch is the jump instruction. These may or may not result in the PC being loaded with a new value different from the one it would ordinarily have received. Jumps typically have unconditional and conditional forms, where the latter may be taken or not taken depending on some condition.
The second type of machine level branch is the call instruction, which is used to implement subroutines. Like jump instructions, calls may or may not modify the PC according to condition codes; additionally, however, a return address is saved in a secure place in memory. Upon completion of the subroutine, this return address is restored to the PC, and so program execution resumes with the instruction following the call instruction.
The third type of machine level branch is the return instruction. This "pops" a return address off the stack and loads it into the PC register, thus returning control to the calling routine. Return instructions may also be conditionally executed. This description pertains to ordinary practice; however, the machine programmer has considerable powers to manipulate the return address on the stack, and so redirect program execution in any number of different ways.
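The interplay of jump, call, and return described above can be sketched with a toy model in Python. The class and its word-addressed PC are invented for illustration; a real CPU saves the return address on a stack in memory or in a link register.

```python
# Toy CPU sketch: jump, call, and return manipulating a program counter.
class ToyCPU:
    def __init__(self):
        self.pc = 0
        self.stack = []   # saved return addresses

    def jump(self, target):
        self.pc = target                # plain jump: just load the PC

    def call(self, target):
        self.stack.append(self.pc + 1)  # save the address after the call
        self.pc = target

    def ret(self):
        self.pc = self.stack.pop()      # "pop" return address back into the PC

cpu = ToyCPU()
cpu.pc = 10
cpu.call(100)    # enter a subroutine at address 100
assert cpu.pc == 100 and cpu.stack == [11]
cpu.ret()        # resume after the call instruction
assert cpu.pc == 11
```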
Depending on the processor, jump and call instructions may alter the contents of the PC register in different ways. An absolute address may be loaded, or the current contents of the PC may have some value added or subtracted from its current value, making the destination address relative to the current place in the program. The source of the displacement value may vary, such as an immediate value embedded within the instruction, or the contents of a processor register or memory location, or the contents of some location added to an index value.
The term branch can also be used when referring to programs in high-level programming languages. In these, branches usually take the form of conditional statements that encapsulate the instruction sequence to be executed if the conditions are satisfied. Unconditional branch instructions such as GOTO are used to jump unconditionally to a different instruction sequence. If the algorithm requires a conditional branch, the GOTO is preceded by an IF-THEN statement specifying the condition. All high-level languages support loops: control structures that repeat a sequence of instructions until some condition is satisfied, causing the loop to terminate. Loops also rely on branch instructions: at the machine level, loops are implemented as ordinary conditional jumps that redirect execution to the repeating code.
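The lowering of a high-level loop into machine-level conditional jumps can be sketched in Python; the dispatch loop, label numbers L0–L3, and the summing example are all invented for illustration.

```python
# A while-loop lowered to jumps, mimicking:  while n > 0: total += n; n -= 1
def sum_down(n):
    total = 0
    pc = 0
    while True:
        if pc == 0:                  # L0: conditional jump to exit when n == 0
            pc = 3 if n == 0 else 1
        elif pc == 1:                # L1: loop body
            total += n; n -= 1; pc = 2
        elif pc == 2:                # L2: unconditional jump back to the test
            pc = 0
        else:                        # L3: loop exit
            return total

assert sum_down(4) == 10   # 4 + 3 + 2 + 1
```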
In CPUs with flag registers, an earlier instruction sets a condition in the flag register. The earlier instruction may be an arithmetic or a logic instruction; it is often close to the branch, though not necessarily the instruction immediately before it. The stored condition is then used in a branch such as jump if overflow-flag set. This temporary information is often stored in a flag register but may also be located elsewhere. A flag register design works well in slower, simpler computers. In fast computers a flag register can place a bottleneck on speed, because instructions that could otherwise operate in parallel need to set the flag bits in a particular sequence.
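The flag-register pattern can be sketched in Python. Conventions vary by architecture; the 8-bit width and the borrow-sets-carry convention here follow the x86 style discussed later, and the function name is invented.

```python
# An earlier subtract instruction records conditions in a flag register;
# a later branch instruction tests the stored flags.
def sub_with_flags(a, b, flags):
    result = (a - b) & 0xFF           # 8-bit wraparound result
    flags["zero"] = result == 0       # zero flag
    flags["carry"] = a < b            # borrow convention: carry set on borrow
    return result

flags = {}
sub_with_flags(5, 5, flags)
# ... other instructions that do not touch the flags may run here ...
taken = flags["zero"]                 # "jump if zero-flag set"
assert taken
```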
There are also machines where the condition may be checked by the jump instruction itself, such as branch <label> if register X negative. In simple computer designs, comparison branches execute more arithmetic and can use more power than flag register branches. In fast computer designs comparison branches can run faster than flag register branches, because comparison branches can access the registers with more parallelism, using the same CPU mechanisms as a calculation.
Some early and simple CPU architectures, still found in microcontrollers, may not implement a conditional jump, but rather only a conditional "skip the next instruction" operation. A conditional jump or call is thus implemented as a conditional skip of an unconditional jump or call instruction.
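The skip-plus-jump construction can be sketched in Python; the instruction names and the interpreter loop are hypothetical, loosely in the style of such microcontrollers.

```python
# A conditional jump synthesized from "skip next instruction" + unconditional jump.
def run(program, acc):
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "SKIP_IF_ZERO":     # skip the next instruction when acc == 0
            pc += 2 if acc == 0 else 1
        elif op == "JMP":            # unconditional jump
            pc = arg
        elif op == "INC":
            acc += 1; pc += 1
        elif op == "HALT":
            break
    return acc

# "jump to 3 if acc != 0" becomes SKIP_IF_ZERO placed over an unconditional JMP
prog = [("SKIP_IF_ZERO", None), ("JMP", 3), ("HALT", None),
        ("INC", None), ("HALT", None)]
assert run(prog, 0) == 0   # acc == 0: the JMP is skipped, halt immediately
assert run(prog, 1) == 2   # acc != 0: the JMP is taken, INC executes
```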
Depending on the computer architecture, the assembly language mnemonic for a jump instruction is typically some shortened form of the word jump or the word branch, often along with other informative letters representing the condition. Sometimes other details are included as well, such as the range of the jump or a special addressing mode that should be used to locate the actual effective offset.
This table lists the machine level branch or jump instructions found in several well-known architectures:
* x86, the PDP-11, VAX, and some others set the carry flag to signal borrow and clear the carry flag to signal no borrow. ARM, 6502, the PIC, and some others do the opposite for subtractive operations. This inverted function of the carry flag for certain instructions is marked, that is, borrow = not carry, in some parts of the table; if not otherwise noted, borrow ≡ carry. However, carry on additive operations is handled the same way by most architectures.
To achieve high performance, modern processors are pipelined. They consist of multiple parts that each partially process an instruction, feed their results to the next stage in the pipeline, and start working on the next instruction in the program. This design expects instructions to execute in a particular unchanging sequence. Conditional branch instructions make it impossible to know this sequence. So conditional branches can cause "stalls" in which the pipeline has to be restarted on a different part of the program.
Several techniques improve speed by reducing stalls from conditional branches.
Historically, branch prediction collected statistics and used the result to optimize code. A programmer would compile a test version of a program and run it with test data. The test code counted how the branches were actually taken. The statistics from the test code were then used by the compiler to optimize the branches of released code. The optimization would arrange that the fastest branch direction would always be the most frequently taken control flow path. To permit this, CPUs must be designed with predictable branch timing. Some CPUs have instruction sets that were designed with "branch hints" so that a compiler can tell a CPU how each branch is to be taken.
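The profiling step can be sketched in Python. The branch condition `x < 0` and the test data are invented; real tooling instruments the compiled test build rather than re-running the condition like this.

```python
# Profile-guided sketch: count how one branch went during a test run,
# then report which direction the "compiler" should lay out as the fast path.
from collections import Counter

def profile(data):
    counts = Counter()
    for x in data:
        counts["taken" if x < 0 else "not_taken"] += 1
    return counts

counts = profile([-3, -1, 4, -2, -5])
preferred = max(counts, key=counts.get)   # optimize for the common direction
assert preferred == "taken"
```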
The problem with software branch prediction is that it requires a complex software development process.
So that any software can benefit, hardware branch predictors move the statistics into the electronics. Branch predictors are parts of a processor that guess the outcome of a conditional branch. The processor's logic then gambles on the guess by beginning to execute the expected instruction flow. An example of a simple hardware branch-prediction scheme is to assume that all backward branches are taken and all forward branches are not taken. Better branch predictors are developed and validated statistically by running them in simulation on a variety of test programs. Good predictors usually count the outcomes of previous executions of a branch. Faster, more expensive computers can then run faster by investing in better branch-prediction electronics. In a CPU with hardware branch prediction, branch hints let the compiler's presumably superior branch prediction override the hardware's more simplistic one.
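A predictor that "counts the outcomes of previous executions" can be sketched as the classic two-bit saturating counter; the class below is an illustrative model, with states 0–1 predicting "not taken" and 2–3 predicting "taken".

```python
# Two-bit saturating counter: one mispredicted branch does not flip a
# strongly-held prediction; two in a row do.
class TwoBitPredictor:
    def __init__(self):
        self.state = 2                    # start weakly "taken"

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True, True, False, True]      # one "not taken" does not flip it
correct = sum(p.predict() == o or p.update(o) for o in [])  # placeholder removed
correct = 0
for o in outcomes:
    correct += (p.predict() == o)
    p.update(o)
assert correct == 3
```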
Some logic can be written without branches or with fewer branches. It is often possible to use bitwise operations, conditional moves, or other predication instead of branches. In fact, branch-free code is a must in cryptography because of timing attacks.
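A branchless selection via bitwise operations can be sketched in Python, assuming 32-bit two's-complement masks; the function name is invented, and real constant-time code would be written in a lower-level language.

```python
# Branchless select: returns a if cond_bit == 1, else b, with no data-dependent
# branch — the condition is turned into an all-ones or all-zeros mask.
def ct_select(cond_bit, a, b):
    mask = -cond_bit & 0xFFFFFFFF                  # 0xFFFFFFFF or 0x00000000
    return (a & mask) | (b & ~mask & 0xFFFFFFFF)   # pick one side via masking

assert ct_select(1, 7, 9) == 7
assert ct_select(0, 7, 9) == 9
```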
Another technique is the branch delay slot. In this approach, at least one instruction following a branch is always executed, with some exceptions such as the legacy MIPS architecture's likely/unlikely branch instructions. The computer can therefore use this instruction to do useful work whether or not its pipeline stalls. This approach was historically popular in RISC computers. In a family of compatible CPUs, it complicates multicycle CPUs, faster CPUs with longer-than-expected pipelines, and superscalar CPUs.
The RA-machine's equivalent of the universal Turing machine – with its program in the registers as well as its data – is called the random-access stored-program machine or RASP-machine. It is an example of the so-called von Neumann architecture and is closest to the common notion of a computer.
Together with the Turing machine and counter-machine models, the RA-machine and RASP-machine models are used for computational complexity analysis. Van Emde Boas calls these three, together with the pointer machine, "sequential machine" models, to distinguish them from "parallel random-access machine" models.
An RA-machine consists of the following:
For a humorous description of a similar concept, see the "programming language" Branflakes.
The concept of a random-access machine starts with the simplest model of all, the so-called counter machine model. Two additions move it away from the counter machine, however. The first enhances the machine with the convenience of indirect addressing; the second moves the model toward the more conventional accumulator-based computer with the addition of one or more auxiliary registers, the most common of which is called "the accumulator".
A random-access machine is an abstract computational-machine model identical to a multiple-register counter machine with the addition of indirect addressing. As directed by the instruction from its finite state machine's TABLE, the machine derives the "target" register's address either directly from the instruction itself, or indirectly from the contents of the "pointer" register specified in the instruction.
By definition: A register is a location with both an address and a content – a single natural number. For precision we will use the quasi-formal symbolism from Boolos-Burgess-Jeffrey to specify a register, its contents, and an operation on a register:
Definition: A direct instruction is one that specifies in the instruction itself the address of the source or destination register whose contents will be the subject of the instruction.
Definition: An indirect instruction is one that specifies a "pointer register", the contents of which is the address of a "target" register. The target register can be either a source or a destination. A register can address itself indirectly.
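The direct/indirect distinction can be sketched in Python; the dictionary-of-registers model and the `inc` helper are invented for illustration.

```python
# Direct INC names the register itself; indirect INC names a pointer
# register whose contents name the target register.
def inc(registers, addr, indirect=False):
    target = registers[addr] if indirect else addr
    registers[target] = registers.get(target, 0) + 1

r = {1: 3, 3: 10}
inc(r, 1)                  # direct: increment register 1 itself
assert r[1] == 4
inc(r, 1, indirect=True)   # indirect: register 1 now points at register 4
assert r.get(4, 0) == 1
```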
Definition: The contents of the source register is used by the instruction. The source register's address can be specified either directly by the instruction, or indirectly by the pointer register specified by the instruction.
Definition: The contents of the pointer register is the address of the "target" register.
Definition: The contents of the pointer register points to the target register – the "target" may be either a source or a destination register.
Definition: The destination register is where the instruction deposits its result. The destination register's address can be specified either directly by the instruction, or indirectly by the pointer register specified by the instruction. The source and destination registers can be one and the same.
The register machine has, for a memory external to its finite-state machine, an unbounded collection of discrete and uniquely labelled locations with unbounded capacity, called "registers". These registers hold only natural numbers. Per a list of sequential instructions in the finite state machine's TABLE, a few types of primitive operations operate on the contents of these "registers". Finally, a conditional expression in the form of an IF-THEN-ELSE is available to test the contents of one or two registers and "branch/jump" the finite state machine out of the default instruction sequence.
Base model 1: The model closest to Minsky's visualization and to Lambek:
Base model 2: The "successor" model:
Base model 3: Used by Elgot-Robinson in their investigation of bounded and unbounded RASPs – the "successor" model with COPY in the place of CLEAR:
The three base sets 1, 2, and 3 above are equivalent in the sense that one can create the instructions of one set using the instructions of another set (declare a reserved register, e.g. call it "0", to contain the number 0). The choice of model will depend on which set an author finds easiest to use in a demonstration, a proof, etc.
Moreover, from base sets 1, 2, or 3 we can create any of the primitive recursive functions (see Boolos-Burgess-Jeffrey). However, building the primitive recursive functions is difficult because the instruction sets are so ... primitive. One solution is to expand a particular set with "convenience instructions" from another set:
Again, all of this is for convenience only; none of this increases the model's intrinsic power.
For example: the most expanded set would include each unique instruction from the three sets, plus unconditional jump J, i.e.:
Most authors pick one or the other of the conditional jumps; e.g. Shepherdson-Sturgis use the above set minus JE.
In our daily lives the notion of an "indirect operation" is not unusual.
Indirection specifies a location, identified as the pirate chest in "Tom_&_Becky's_cave...", that acts as a pointer to any other location: its contents provide the "address" of the target location "under_Thatcher's_front_porch" where the real action is occurring.
In the following one must remember that these models are abstract models with two fundamental differences from anything physically real: unbounded numbers of registers each with unbounded capacities. The problem appears most dramatically when one tries to use a counter-machine model to build a RASP that is Turing equivalent and thus compute any partial mu recursive function:
So how do we address a register beyond the bounds of the finite state machine? One approach would be to modify the program-instructions so that they contain more than one command. But this too can be exhausted unless an instruction is of unbounded size. So why not use just one "über-instruction" – one really really big number – that contains all the program instructions encoded into it! This is how Minsky solves the problem, but the Gödel numbering he uses represents a great inconvenience to the model, and the result is nothing at all like our intuitive notion of a "stored program computer".
Elgot and Robinson come to a similar conclusion with respect to a RASP that is "finitely determined". Indeed, it can access an unbounded number of registers, but only if the RASP allows "self modification" of its program instructions and has encoded its "data" in a Gödel number.
In the context of a more computer-like model using his RPT instruction, Minsky tantalizes us with a solution to the problem, but offers no firm resolution. He asserts:
He offers us a bounded RPT that, together with CLR and INC, can compute any primitive recursive function, and he offers the unbounded RPT quoted above as playing the role of the μ operator; together with CLR and INC it can compute the mu recursive functions. But he does not discuss "indirection" or the RAM model per se.
From the references in Hartmanis it appears that Cook firmed up the notion of indirect addressing. This becomes clearer in the paper of Cook and Reckhow – Cook was Reckhow's Master's thesis advisor. Hartmanis' model – quite similar to Melzak's model – uses two- and three-register adds and subtracts and two-parameter copies; Cook and Reckhow's model reduces the number of parameters to one call-out by use of an accumulator "AC".
The solution in a nutshell: Design our machine/model with unbounded indirection – provide an unbounded "address" register that can potentially name any register no matter how many there are. For this to work, in general, the unbounded register requires an ability to be cleared and then incremented by a potentially infinite loop. In this sense the solution represents the unbounded μ operator that can, if necessary, hunt ad infinitum along the unbounded string of registers until it finds what it is looking for. The pointer register is exactly like any other register with one exception: under the circumstances called "indirect addressing" it provides its contents, rather than the address-operand in the state machine's TABLE, to be the address of the target register.
If we eschew the Minsky approach of one monster number in one register, and specify that our machine model will be "like a computer" we have to confront this problem of indirection if we are to compute the recursive functions – both total and partial varieties.
Our simpler counter-machine model can do a "bounded" form of indirection – and thereby compute the sub-class of primitive recursive functions – by using a primitive recursive "operator" called "definition by cases" (Kleene p. 229 and Boolos-Burgess-Jeffrey p. 74). Such "bounded indirection" is a laborious, tedious affair. "Definition by cases" requires the machine to determine/distinguish the contents of the pointer register by attempting, time after time until success, to match that contents against a number/name that the case operator explicitly declares. Thus definition by cases starts from e.g. the lower-bound address and continues ad nauseam toward the upper-bound address, attempting to make a match:
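This bounded, case-by-case matching can be sketched in Python; the dictionary-of-registers model and the function name are invented, and the explicit `range` bound stands in for the explicitly declared cases.

```python
# Bounded "definition by cases": recover the register named by pointer
# register q by testing its contents against 0, 1, ..., bound in turn.
def indirect_fetch(registers, q, bound):
    for n in range(bound + 1):       # case_0, case_1, ..., case_last
        if registers[q] == n:        # does the pointer name register n?
            return registers[n]      # match: read that register directly
    return 0                         # "default" case: no match within the bound

r = {0: 5, 1: 9, 2: 3, 3: 42}        # pointer register 2 holds the address 3
assert indirect_fetch(r, q=2, bound=3) == 42
```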
"Bounded" indirection will not allow us to compute the partial recursive functions – for those we need unbounded indirection aka the μ operator.
To be Turing equivalent the counter machine needs either to use the unfortunate single-register Minsky Gödel number method, or to be augmented with an ability to explore the ends of its register string, ad infinitum if necessary (pp. 316ff, Chapter XII Partial Recursive Functions, in particular pp. 323-325). See more on this in the example below.
For unbounded indirection we require a "hardware" change in our machine model. Once we make this change the model is no longer a counter machine, but rather a random-access machine.
Now when e.g. INC is specified, the finite state machine's instruction will have to specify where the address of the register of interest will come from. This where can be either the state machine's instruction that provides an explicit label, or the pointer-register whose contents is the address of interest. Whenever an instruction specifies a register address it now will also need to specify an additional parameter "i/d" – "indirect/direct". In a sense this new "i/d" parameter is a "switch" that flips one way to get the direct address as specified in the instruction or the other way to get the indirect address from the pointer register. This "mutually exclusive but exhaustive choice" is yet another example of "definition by cases", and the arithmetic equivalent shown in the example below is derived from the definition in Kleene p. 229.
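The "i/d" switch can be sketched as a parameter on address resolution; the helper name and flag values are invented for illustration.

```python
# The "i/d" switch: direct takes the address from the instruction operand,
# indirect takes it from the contents of the named pointer register.
def resolve(registers, addr, id_flag):
    return registers[addr] if id_flag == "i" else addr

r = {2: 7}
assert resolve(r, 2, "d") == 2    # direct: the operand is the address
assert resolve(r, 2, "i") == 7    # indirect: register 2's contents is the address
```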
Probably the most useful of the added instructions is COPY. Indeed, Elgot-Robinson provide their models P0 and P'0 with the COPY instructions, and Cook-Reckhow provide their accumulator-based model with only two indirect instructions – COPY to accumulator indirectly, COPY from accumulator indirectly.
A plethora of instructions: Because any instruction acting on a single register can be augmented with its indirect "dual", the inclusion of indirect instructions will double the number of single-parameter/register instructions (e.g. INC). Worse, every two-parameter/register instruction will have 4 possible varieties, e.g.:
In a similar manner every three-register instruction that involves two source registers rs1, rs2 and a destination register rd will result in 8 varieties, for example the addition:
If we designate one register to be the "accumulator" and place strong restrictions on the various instructions allowed then we can greatly reduce the plethora of direct and indirect operations. However, one must be sure that the resulting reduced instruction-set is sufficient, and we must be aware that the reduction will come at the expense of more instructions per "significant" operation.
Historical convention dedicates a register to the accumulator, an "arithmetic organ" that literally accumulates its number during a sequence of arithmetic operations:
However, the accumulator comes at the expense of more instructions per arithmetic "operation", in particular with respect to what are called 'read-modify-write' instructions such as "increment indirectly the contents of the register pointed to by register r2". "A" designates the "accumulator" register A:
If we stick with a specific name for the accumulator, e.g. "A", we can imply the accumulator in the instructions, for example:
However, when we write the CPY instructions without the accumulator called out the instructions are ambiguous or they must have empty parameters:
Historically what has happened is that these two CPY instructions have received distinctive names; however, no convention exists. Tradition (e.g. Knuth's imaginary MIX computer) uses the two names LOAD and STORE. Here we are adding the "i/d" parameter:
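A minimal accumulator-machine sketch in Python, assuming the LOAD/STORE names and the "i/d" switch just introduced; the class, mnemonics, and register numbering are illustrative, not any author's formal model.

```python
# Accumulator machine: LOAD moves a register into A, STORE moves A out,
# and each can resolve its address directly or through a pointer register.
class AccMachine:
    def __init__(self):
        self.A = 0
        self.r = {}

    def _addr(self, addr, id_flag):
        return self.r.get(addr, 0) if id_flag == "i" else addr

    def load(self, addr, id_flag="d"):     # LOAD: register -> A
        self.A = self.r.get(self._addr(addr, id_flag), 0)

    def store(self, addr, id_flag="d"):    # STORE: A -> register
        self.r[self._addr(addr, id_flag)] = self.A

    def add(self, addr):                   # ADD: A + register -> A
        self.A += self.r.get(addr, 0)

m = AccMachine()
m.r = {1: 10, 2: 32, 3: 1}
m.load(1); m.add(2); m.store(4)    # r4 := r1 + r2
assert m.r[4] == 42
m.load(3, "i")                     # indirect: r3 points at r1
assert m.A == 10
```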
The typical accumulator-based model will have all its two-variable arithmetic and constant operations (e.g. ADD, SUB) use the accumulator's contents together with a specified register's contents. The one-variable operations (e.g. INC, DEC, and CLR) require only the accumulator. Both instruction types deposit the result in the accumulator.
If we so choose, we can abbreviate the mnemonics because at least one source register and the destination register is always the accumulator A. Thus we have:
If our model has an unbounded accumulator can we bound all the other registers? Not until we provide for at least one unbounded register from which we derive our indirect addresses.
The minimalist approach is to use the accumulator A itself.
Another approach is to declare a specific register the "indirect address register" and confine indirection relative to this register. Again, our new register has no conventional name – perhaps "N" from "iNdex", or "iNdirect", or "address Number".
For maximum flexibility, as we have done for the accumulator A, we will consider N just another register subject to increment, decrement, clear, test, direct copy, etc. Again we can shrink the instruction to a single parameter that provides for direction and indirection, for example:
Why is this such an interesting approach? At least two reasons:
An instruction set with no parameters:
Schönhage does this to produce his RAM0 instruction set. See section below.
Reduce a RAM to a Post-Turing machine:
Posing as minimalists, we reduce all the registers except the accumulator A and indirection register N, e.g. r = { r0, r1, r2, ... }, to an unbounded string of bounded-capacity pigeon-holes. These do nothing but hold bounded numbers, e.g. a lone bit with value { 0, 1 }. Likewise we shrink the accumulator to a single bit. We restrict any arithmetic to the registers { A, N }, and use indirect operations to pull the contents of registers into the accumulator and to write 0 or 1 from the accumulator to a register:
We push further and eliminate A altogether by the use of two "constant" registers called "ERASE" and "PRINT": =0, =1.
Rename the COPY instructions, call INC = RIGHT and DEC = LEFT, and we have the same instructions as the Post-Turing machine, plus an extra CLRN:
In the section above we informally showed that a RAM with an unbounded indirection capability produces a Post–Turing machine. The Post–Turing machine is Turing equivalent, so we have shown that the RAM with indirection is Turing equivalent.
We give here a slightly more formal demonstration. Begin by designing our model with three reserved registers "E", "P", and "N", plus an unbounded set of registers 1, 2, ..., n to the right. The registers 1, 2, ..., n will be considered "the squares of the tape". Register "N" points to "the scanned square" that "the head" is currently observing. The "head" can be thought of as residing in the conditional jump – observe that it uses indirect addressing. As we decrement or increment "N", the head "moves left" or "right" along the squares. We move the contents of "E"=0 or "P"=1 to the "scanned square" pointed to by N, using the indirect CPY.
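The construction can be sketched in Python; the class and method names are invented, and Python's dictionary stands in for the unbounded register string.

```python
# Registers 1..n act as tape squares, register N is the head position,
# and the constant registers "E"/"P" supply the symbols 0 and 1.
class TapeRAM:
    def __init__(self):
        self.E, self.P = 0, 1     # constant registers: erase / print
        self.N = 1                # head position (a register number)
        self.tape = {}            # registers 1, 2, ..., one bit each

    def right(self):  self.N += 1                  # INC N
    def left(self):   self.N = max(1, self.N - 1)  # DEC N, floored at the left end
    def print_(self): self.tape[self.N] = self.P   # CPY P -> <N> (indirect)
    def erase(self):  self.tape[self.N] = self.E   # CPY E -> <N> (indirect)
    def scan(self):   return self.tape.get(self.N, 0)

t = TapeRAM()
t.print_(); t.right(); t.right(); t.print_(); t.left()
assert (t.tape.get(1), t.tape.get(2, 0), t.tape.get(3)) == (1, 0, 1)
assert t.N == 2 and t.scan() == 0
```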
The fact that our tape is left-ended presents us with a minor problem: Whenever LEFT occurs our instructions will have to test whether or not the contents of "N" is zero; if so, we should leave its count at "0".
The following table both defines the Post-Turing instructions in terms of their RAM equivalent instructions and gives an example of their functioning. The location of the head along the tape of registers r0–r5 ... is shown shaded:
Throughout this demonstration we have to keep in mind that the instructions in the finite state machine's TABLE are bounded, i.e. finite:
We will build the indirect CPY with the CASE operator. The address of the target register will be specified by the contents of register "q"; once the CASE operator has determined what this number is, CPY will directly deposit the contents of the register with that number into register "φ". We will need an additional register that we will call "y" – it serves as an up-counter.
The CASE "operator" is described in Kleene and in Boolos-Burgess-Jeffrey; the latter authors emphasize its utility. The following definition is per Kleene, but modified to reflect the familiar "IF-THEN-ELSE" construction.
The CASE operator "returns" a natural number into φ depending on which "case" is satisfied, starting with "case_0" and going successively through "case_last"; if no case is satisfied, then the number called "default" is returned into φ:
Definition by cases φ:
Kleene requires that the "predicates" Qn that do the testing be mutually exclusive – "predicates" are functions that produce only { true, false } as output; Boolos-Burgess-Jeffrey add the requirement that the cases be "exhaustive".
We begin with a number in register q that represents the address of the target register. But what is this number? The "predicates" will test it to find out, one trial after another: JE followed by INC. Once the number is identified explicitly, the CASE operator directly/explicitly copies the contents of this register to φ:
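The construction of indirect CPY from the CASE operator, with the up-counter "y" introduced above, can be sketched in Python; the function and the dictionary model are invented for illustration.

```python
# Indirect CPY built from CASE: up-counter y tries 0, 1, 2, ... against
# pointer register q (JE followed by INC); on a match, a direct copy
# deposits the identified register's contents into phi.
def indirect_cpy(registers, q, phi, last):
    y = 0                                   # up-counter register "y"
    while y <= last:                        # case_0 .. case_last
        if registers[q] == y:               # JE: does q name register y?
            registers[phi] = registers[y]   # direct copy once identified
            return
        y += 1                              # INC y: try the next case
    registers[phi] = 0                      # bounded: "default" if no case matches

r = {0: 11, 1: 22, 2: 1, 3: 0}              # pointer register 2 holds 1
indirect_cpy(r, q=2, phi=3, last=2)         # so phi := contents of register 1
assert r[3] == 22
```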
Case_0 looks like this:
Case_n looks like this; remember, each instance of "n", "n+1", ..., "last" must be an explicit natural number:
Case_last stops the induction and bounds the CASE operator:
If the CASE could continue ad infinitum it would be the mu operator. But it can't – its finite state machine's "state register" has reached its maximum count or its table has run out of instructions; it is a finite machine, after all.
The commonly encountered Cook and Reckhow model is a bit like the ternary-register Melzak model.
Schönhage describes a very primitive, atomized model chosen for his proof of the equivalence of his SMM pointer machine model:
RAM1 model: Schönhage demonstrates how his construction can be used to form the more common, usable form of a "successor"-like RAM: