A stack frame may be built in many different ways; however, the caller and callee must agree on the sequence of steps. The steps below describe the calling convention used on most MIPS machines. This convention comes into play at three points during a procedure call: immediately before the caller invokes the callee, just as the callee starts executing, and immediately before the callee returns to the caller. In the first part, the caller puts the procedure call arguments in standard places and invokes the callee. To do this, the caller takes the following steps:

1. Pass arguments. By convention, the first four arguments are passed in registers $a0–$a3. Any remaining arguments are pushed on the stack and appear at the beginning of the called procedure's stack frame.

2. Save caller-saved registers. The called procedure can use these registers ($a0–$a3 and $t0–$t9) without first saving their values. If the caller expects to use one of these registers after a call, it must save its value before the call.

3. Execute a jal instruction (see Section 2.8 of Chapter 2), which jumps to the callee's first instruction and saves the return address in register $ra.

FIGURE B.6.2 Layout of a stack frame. The frame pointer ($fp) points to the first word in the currently executing procedure's stack frame. The stack pointer ($sp) points to the last word of the frame. The first four arguments are passed in registers, so the fifth argument is the first one stored on the stack. (The figure shows arguments 5 and 6, the saved registers, and the local variables lying between $fp and $sp; the stack grows from higher toward lower memory addresses.)
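To make the caller's side concrete, here is a minimal sketch of a call that follows these three steps. It is not from the text: the callee name sum5 and the argument values are invented, and it assumes the common layout in which the stack word for a fifth argument sits above 16 bytes reserved for the four register arguments.

    subu $sp, $sp, 24      # Argument-build area: room for $a0-$a3 plus
                           # the fifth argument, kept double-word aligned
    li   $a0, 1            # Arguments 1-4 are passed in registers
    li   $a1, 2
    li   $a2, 3
    li   $a3, 4
    li   $t0, 5
    sw   $t0, 16($sp)      # Argument 5 is the first one stored on the stack
                           # ($t0 is caller-saved; save it first if its value
                           # is still needed after the call)
    jal  sum5              # Jump to sum5 and save the return address in $ra
    addu $sp, $sp, 24      # Release the argument-build area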
Before a called routine starts running, it must take the following steps to set up its stack frame:

1. Allocate memory for the frame by subtracting the frame's size from the stack pointer.

2. Save callee-saved registers in the frame. A callee must save the values in these registers ($s0–$s7, $fp, and $ra) before altering them, since the caller expects to find these registers unchanged after the call. Register $fp is saved by every procedure that allocates a new stack frame. However, register $ra only needs to be saved if the callee itself makes a call. Any other callee-saved registers that are used must also be saved.

3. Establish the frame pointer by adding the stack frame's size minus 4 to $sp and storing the sum in register $fp.

Hardware/Software Interface: The MIPS register use convention provides callee- and caller-saved registers because both types of registers are advantageous in different circumstances. Callee-saved registers are better used to hold long-lived values, such as variables from a user's program. These registers are only saved during a procedure call if the callee expects to use the register. On the other hand, caller-saved registers are better used to hold short-lived quantities that do not persist across a call, such as immediate values in an address calculation. During a call, the callee can also use these registers for short-lived temporaries.

Finally, the callee returns to the caller by executing the following steps:

1. If the callee is a function that returns a value, place the returned value in register $v0.

2. Restore all callee-saved registers that were saved upon procedure entry.

3. Pop the stack frame by adding the frame size to $sp.

4. Return by jumping to the address in register $ra.

Elaboration: A programming language that does not permit recursive procedures (procedures that call themselves either directly or indirectly through a chain of calls) need not allocate frames on a stack. In a nonrecursive language, each procedure's frame may be statically allocated, since only one invocation of a procedure can be active at a time. Older versions of Fortran prohibited recursion, because statically allocated frames produced faster code on some older machines. However, on load-store architectures like MIPS, stack frames may be just as fast, because a frame pointer register points directly
to the active stack frame, which permits a single load or store instruction to access values in the frame. In addition, recursion is a valuable programming technique.

Procedure Call Example

As an example, consider the C routine

    main ()
    {
      printf ("The factorial of 10 is %d\n", fact (10));
    }

    int fact (int n)
    {
      if (n < 1)
        return (1);
      else
        return (n * fact (n - 1));
    }

which computes and prints 10! (the factorial of 10, 10! = 10 × 9 × ... × 1). fact is a recursive routine that computes n! by multiplying n times (n - 1)!. The assembly code for this routine illustrates how programs manipulate stack frames.

Upon entry, the routine main creates its stack frame and saves the two callee-saved registers it will modify: $fp and $ra. The frame is larger than required for these two registers because the calling convention requires the minimum size of a stack frame to be 24 bytes. This minimum frame can hold four argument registers ($a0–$a3) and the return address $ra, padded to a double-word boundary (24 bytes). Since main also needs to save $fp, its stack frame must be two words larger (remember: the stack pointer is kept doubleword aligned).

    .text
    .globl main
    main:
      subu  $sp,$sp,32       # Stack frame is 32 bytes long
      sw    $ra,20($sp)      # Save return address
      sw    $fp,16($sp)      # Save old frame pointer
      addiu $fp,$sp,28       # Set up frame pointer

The routine main then calls the factorial routine and passes it the single argument 10. After fact returns, main calls the library routine printf and passes it both a format string and the result returned from fact:
      li    $a0,10           # Put argument (10) in $a0
      jal   fact             # Call factorial function
      la    $a0,$LC          # Put format string in $a0
      move  $a1,$v0          # Move fact result to $a1
      jal   printf           # Call the print function

Finally, after printing the factorial, main returns. But first, it must restore the registers it saved and pop its stack frame:

      lw    $ra,20($sp)      # Restore return address
      lw    $fp,16($sp)      # Restore frame pointer
      addiu $sp,$sp,32       # Pop stack frame
      jr    $ra              # Return to caller

    .rdata
    $LC:
    .ascii "The factorial of 10 is %d\n\000"

The factorial routine is similar in structure to main. First, it creates a stack frame and saves the callee-saved registers it will use. In addition to saving $ra and $fp, fact also saves its argument ($a0), which it will use for the recursive call:

    .text
    fact:
      subu  $sp,$sp,32       # Stack frame is 32 bytes long
      sw    $ra,20($sp)      # Save return address
      sw    $fp,16($sp)      # Save frame pointer
      addiu $fp,$sp,28       # Set up frame pointer
      sw    $a0,0($fp)       # Save argument (n)

The heart of the fact routine performs the computation from the C program. It tests whether the argument is greater than 0. If not, the routine returns the value 1. If the argument is greater than 0, the routine recursively calls itself to compute fact(n-1) and multiplies that value times n:

      lw    $v0,0($fp)       # Load n
      bgtz  $v0,$L2          # Branch if n > 0
      li    $v0,1            # Return 1
      j     $L1              # Jump to code to return
    $L2:
      lw    $v1,0($fp)       # Load n
      subu  $v0,$v1,1        # Compute n - 1
      move  $a0,$v0          # Move value to $a0
      jal   fact             # Call factorial function
      lw    $v1,0($fp)       # Load n
      mul   $v0,$v0,$v1      # Compute fact(n-1) * n

Finally, the factorial routine restores the callee-saved registers and returns the value in register $v0:

    $L1:                     # Result is in $v0
      lw    $ra, 20($sp)     # Restore $ra
      lw    $fp, 16($sp)     # Restore $fp
      addiu $sp, $sp, 32     # Pop stack
      jr    $ra              # Return to caller

EXAMPLE: Stack in Recursive Procedure

Figure B.6.3 shows the stack at the call fact(7). main runs first, so its frame is deepest on the stack. main calls fact(10), whose stack frame is next on the stack. Each invocation recursively invokes fact to compute the next-lowest factorial. The stack frames parallel the LIFO order of these calls. What does the stack look like when the call to fact(10) returns?

FIGURE B.6.3 Stack frames during the call of fact(7). (The stack holds, from deepest to shallowest, the frames for main, fact(10), fact(9), fact(8), and fact(7). Each fact frame contains the old $ra, $fp, and $a0, while main's frame contains the old $ra and $fp. The stack grows toward lower addresses.)
ANSWER: The stack contains only the frame for main, which holds its saved $ra and $fp; the frames for all of the recursive calls to fact have been popped.

Elaboration: The difference between the MIPS compiler and the gcc compiler is that the MIPS compiler usually does not use a frame pointer, so this register is available as another callee-saved register, $s8. This change saves a couple of instructions in the procedure call and return sequence. However, it complicates code generation, because a procedure must access its stack frame with $sp, whose value can change during a procedure's execution if values are pushed on the stack.

Another Procedure Call Example

As another example, consider the following routine that computes the tak function, which is a widely used benchmark created by Ikuo Takeuchi. This function does not compute anything useful, but is a heavily recursive program that illustrates the MIPS calling convention.

    int tak (int x, int y, int z)
    {
      if (y < x)
        return 1 + tak (tak (x - 1, y, z),
                        tak (y - 1, z, x),
                        tak (z - 1, x, y));
      else
        return z;
    }

    int main ()
    {
      tak(18, 12, 6);
    }

The assembly code for this program is shown below. The tak function first saves its return address in its stack frame and its arguments in callee-saved registers, since the routine may make calls that need to use registers $a0–$a2 and $ra. The function uses callee-saved registers, since they hold values that persist over the
lifetime of the function, which includes several calls that could potentially modify registers.

    .text
    .globl tak
    tak:
      subu  $sp, $sp, 40
      sw    $ra, 32($sp)
      sw    $s0, 16($sp)     # x
      move  $s0, $a0
      sw    $s1, 20($sp)     # y
      move  $s1, $a1
      sw    $s2, 24($sp)     # z
      move  $s2, $a2
      sw    $s3, 28($sp)     # temporary

The routine then begins execution by testing if y < x. If not, it branches to label L1, which is shown below.

      bge   $s1, $s0, L1     # if (y >= x), goto L1

If y < x, then it executes the body of the routine, which contains four recursive calls. The first call uses almost the same arguments as its parent:

      addiu $a0, $s0, -1
      move  $a1, $s1
      move  $a2, $s2
      jal   tak              # tak (x - 1, y, z)
      move  $s3, $v0

Note that the result from the first recursive call is saved in register $s3, so that it can be used later.

The function now prepares arguments for the second recursive call.

      addiu $a0, $s1, -1
      move  $a1, $s2
      move  $a2, $s0
      jal   tak              # tak (y - 1, z, x)

In the instructions below, the result from this recursive call is saved in register $s0. But first, we need to read, for the last time, the saved value of the first argument from this register.
      addiu $a0, $s2, -1
      move  $a1, $s0
      move  $a2, $s1
      move  $s0, $v0
      jal   tak              # tak (z - 1, x, y)

After the three inner recursive calls, we are ready for the final recursive call. After the call, the function's result is in $v0 and control jumps to the function's epilogue.

      move  $a0, $s3
      move  $a1, $s0
      move  $a2, $v0
      jal   tak              # tak (tak(...), tak(...), tak(...))
      addiu $v0, $v0, 1
      j     L2

The code at label L1 is the else branch of the if-then-else statement. It just moves the value of argument z into the return register and falls into the function epilogue.

    L1:
      move  $v0, $s2

The code below is the function epilogue, which restores the saved registers and returns the function's result to its caller.

    L2:
      lw    $ra, 32($sp)
      lw    $s0, 16($sp)
      lw    $s1, 20($sp)
      lw    $s2, 24($sp)
      lw    $s3, 28($sp)
      addiu $sp, $sp, 40
      jr    $ra

The main routine calls the tak function with its initial arguments, then takes the computed result (7) and prints it using SPIM's system call for printing integers.

    .globl main
    main:
      subu  $sp, $sp, 24
      sw    $ra, 16($sp)
      li    $a0, 18
      li    $a1, 12
      li    $a2, 6
      jal   tak              # tak(18, 12, 6)
      move  $a0, $v0
      li    $v0, 1           # print_int syscall
      syscall
      lw    $ra, 16($sp)
      addiu $sp, $sp, 24
      jr    $ra

B.7 Exceptions and Interrupts

Section 4.9 of Chapter 4 describes the MIPS exception facility, which responds both to exceptions caused by errors during an instruction's execution and to external interrupts caused by I/O devices. This section describes exception and interrupt handling in more detail.(1) An interrupt handler is a piece of code that is run as a result of an exception or an interrupt.

In MIPS processors, a part of the CPU called coprocessor 0 records the information the software needs to handle exceptions and interrupts. The MIPS simulator SPIM does not implement all of coprocessor 0's registers, since many are not useful in a simulator or are part of the memory system, which SPIM does not implement. However, SPIM does provide the following coprocessor 0 registers:

    Register name   Register number   Usage
    BadVAddr        8                 memory address at which an offending
                                      memory reference occurred
    Count           9                 timer
    Compare         11                value compared against timer that causes
                                      interrupt when they match
    Status          12                interrupt mask and enable bits
    Cause           13                exception type and pending interrupt bits
    EPC             14                address of instruction that caused exception
    Config          16                configuration of machine

(1) This section discusses exceptions in the MIPS-32 architecture, which is what SPIM implements in Version 7.0 and later. Earlier versions of SPIM implemented the MIPS-1 architecture, which handled exceptions slightly differently. Converting programs from these versions to run on MIPS-32 should not be difficult, as the changes are limited to the Status and Cause register fields and the replacement of the rfe instruction by the eret instruction.
These seven registers are part of coprocessor 0's register set. They are accessed by the mfc0 and mtc0 instructions. After an exception, register EPC contains the address of the instruction that was executing when the exception occurred. If the exception was caused by an external interrupt, then the instruction will not have started executing. All other exceptions are caused by the execution of the instruction at EPC, except when the offending instruction is in the delay slot of a branch or jump. In that case, EPC points to the branch or jump instruction and the BD bit is set in the Cause register. When that bit is set, the exception handler must look at EPC + 4 for the offending instruction. However, in either case, an exception handler properly resumes the program by returning to the instruction at EPC.

If the instruction that caused the exception made a memory access, register BadVAddr contains the referenced memory location's address. The Count register is a timer that increments at a fixed rate (by default, every 10 milliseconds) while SPIM is running. When the value in the Count register equals the value in the Compare register, a hardware interrupt at priority level 5 occurs.

Figure B.7.1 shows the subset of the Status register fields implemented by the MIPS simulator SPIM. The interrupt mask field contains a bit for each of the six hardware and two software interrupt levels. A mask bit that is 1 allows interrupts at that level to interrupt the processor. A mask bit that is 0 disables interrupts at that level. When an interrupt arrives, it sets its interrupt pending bit in the Cause register, even if the mask bit is disabled. When an interrupt is pending, it will interrupt the processor when its mask bit is subsequently enabled.

The user mode bit is 0 if the processor is running in kernel mode and 1 if it is running in user mode. On SPIM, this bit is fixed at 1, since the SPIM processor does not implement kernel mode. The exception level bit is normally 0, but is set to 1 after an exception occurs. When this bit is 1, interrupts are disabled and the EPC is not updated if another exception occurs. This bit prevents an exception handler from being disturbed by an interrupt or exception, but it should be reset when the handler finishes. If the interrupt enable bit is 1, interrupts are allowed. If it is 0, they are disabled.

Figure B.7.2 shows the subset of Cause register fields that SPIM implements. The branch delay bit is 1 if the last exception occurred in an instruction executed in the delay slot of a branch. The interrupt pending bits become 1 when an interrupt
is raised at a given hardware or software level. The exception code field describes the cause of an exception through the following codes:

    Number   Name   Cause of exception
    0        Int    interrupt (hardware)
    4        AdEL   address error exception (load or instruction fetch)
    5        AdES   address error exception (store)
    6        IBE    bus error on instruction fetch
    7        DBE    bus error on data load or store
    8        Sys    syscall exception
    9        Bp     breakpoint exception
    10       RI     reserved instruction exception
    11       CpU    coprocessor unimplemented
    12       Ov     arithmetic overflow exception
    13       Tr     trap
    15       FPE    floating point

FIGURE B.7.1 The Status register. (Fields, from high to low bits: interrupt mask in bits 15–8, user mode in bit 4, exception level in bit 1, and interrupt enable in bit 0.)

FIGURE B.7.2 The Cause register. (Fields: branch delay in bit 31, pending interrupts in bits 15–8, and exception code in bits 6–2.)

Exceptions and interrupts cause a MIPS processor to jump to a piece of code, at address 80000180hex (in the kernel, not user address space), called an exception handler. This code examines the exception's cause and jumps to an appropriate point in the operating system. The operating system responds to an exception either by terminating the process that caused the exception or by performing some action. A process that causes an error, such as executing an unimplemented instruction, is killed by the operating system. On the other hand, other exceptions
such as page faults are requests from a process to the operating system to perform a service, such as bringing in a page from disk. The operating system processes these requests and resumes the process. The final type of exceptions are interrupts from external devices. These generally cause the operating system to move data to or from an I/O device and resume the interrupted process. The code in the example below is a simple exception handler, which invokes a routine to print a message at each exception (but not interrupts). This code is similar to the exception handler (exceptions.s) used by the SPIM simulator.

EXAMPLE: Exception Handler

The exception handler first saves register $at, which is used in pseudoinstructions in the handler code, then saves $a0 and $a1, which it later uses to pass arguments. The exception handler cannot store the old values from these registers on the stack, as would an ordinary routine, because the cause of the exception might have been a memory reference that used a bad value (such as 0) in the stack pointer. Instead, the exception handler stores these registers in an exception handler register ($k1, since it can't access memory without using $at) and two memory locations (save0 and save1). If the exception routine itself could be interrupted, two locations would not be enough, since the second exception would overwrite values saved during the first exception. However, this simple exception handler finishes running before it enables interrupts, so the problem does not arise.

    .ktext 0x80000180
      move  $k1, $at         # Save $at register
      sw    $a0, save0       # Handler is not re-entrant and can't use
      sw    $a1, save1       # stack to save $a0, $a1
                             # Don't need to save $k0/$k1

The exception handler then moves the Cause and EPC registers into CPU registers. The Cause and EPC registers are not part of the CPU register set. Instead, they are registers in coprocessor 0, which is the part of the CPU that handles exceptions. The instruction

      mfc0  $k0, $13

moves coprocessor 0's register 13 (the Cause register) into CPU register $k0. Note that the exception handler need not save registers $k0 and $k1, because user programs are not supposed to use these registers. The exception handler uses the value from the Cause register to test whether the exception was caused by an interrupt (see the preceding table). If so, the exception is ignored. If the exception was not an interrupt, the handler calls print_excp to print a message.
      mfc0  $k0, $13         # Move Cause into $k0
      srl   $a0, $k0, 2      # Extract ExcCode field
      andi  $a0, $a0, 0xf
      beqz  $a0, done        # Branch if ExcCode is Int (0)
      move  $a0, $k0         # Move Cause into $a0
      mfc0  $a1, $14         # Move EPC into $a1
      jal   print_excp       # Print exception error message

Before returning, the exception handler clears the Cause register; resets the Status register to enable interrupts and clear the EXL bit, which allows subsequent exceptions to change the EPC register; and restores registers $a0, $a1, and $at. It then executes the eret (exception return) instruction, which returns to the instruction pointed to by EPC. This exception handler returns to the instruction following the one that caused the exception, so as to not re-execute the faulting instruction and cause the same exception again.

    done:
      mfc0  $k0, $14         # Bump EPC
      addiu $k0, $k0, 4      # Do not re-execute
                             # faulting instruction
      mtc0  $k0, $14         # EPC
      mtc0  $0, $13          # Clear Cause register
      mfc0  $k0, $12         # Fix Status register
      andi  $k0, 0xfffd      # Clear EXL bit
      ori   $k0, 0x1         # Enable interrupts
      mtc0  $k0, $12
      lw    $a0, save0       # Restore registers
      lw    $a1, save1
      move  $at, $k1
      eret                   # Return to EPC

    .kdata
    save0: .word 0
    save1: .word 0
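To see the handler in action, a program need only raise an exception. The fragment below is our example, not the book's: it raises an arithmetic overflow, exception code 12 (Ov) in the preceding table. With the handler installed, print_excp reports the exception and execution resumes at the instruction after the faulting one.

    .text
    .globl main
    main:
      li    $t0, 0x7fffffff  # Largest positive 32-bit integer
      addi  $t1, $t0, 1      # Signed overflow: raises exception 12 (Ov)
      jr    $ra              # The handler resumes execution here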
Elaboration: On real MIPS processors, the return from an exception handler is more complex. The exception handler cannot always jump to the instruction following EPC. For example, if the instruction that caused the exception was in a branch instruction's delay slot (see Chapter 4), the next instruction to execute may not be the following instruction in memory.

B.8 Input and Output

SPIM simulates one I/O device: a memory-mapped console on which a program can read and write characters. When a program is running, SPIM connects its own terminal (or a separate console window in the X-window version xspim or the Windows version PCSpim) to the processor. A MIPS program running on SPIM can read the characters that you type. In addition, if the MIPS program writes characters to the terminal, they appear on SPIM's terminal or console window. One exception to this rule is control-C: this character is not passed to the program, but instead causes SPIM to stop and return to command mode. When the program stops running (for example, because you typed control-C or because the program hit a breakpoint), the terminal is reconnected to SPIM so you can type SPIM commands.

To use memory-mapped I/O (see below), spim or xspim must be started with the -mapped_io flag. PCSpim can enable memory-mapped I/O through a command line flag or the "Settings" dialog.

The terminal device consists of two independent units: a receiver and a transmitter. The receiver reads characters from the keyboard. The transmitter displays characters on the console. The two units are completely independent. This means, for example, that characters typed at the keyboard are not automatically echoed on the display. Instead, a program echoes a character by reading it from the receiver and writing it to the transmitter.

A program controls the terminal with four memory-mapped device registers, as shown in Figure B.8.1. "Memory-mapped" means that each register appears as a special memory location. The Receiver Control register is at location ffff0000hex. Only two of its bits are actually used. Bit 0 is called "ready": if it is 1, it means that a character has arrived from the keyboard but has not yet been read from the Receiver Data register. The ready bit is read-only: writes to it are ignored. The ready bit changes from 0 to 1 when a character is typed at the keyboard, and it changes from 1 to 0 when the character is read from the Receiver Data register.
Bit 1 of the Receiver Control register is the keyboard "interrupt enable." This bit may be both read and written by a program. The interrupt enable is initially 0. If it is set to 1 by a program, the terminal requests an interrupt at hardware level 1 whenever a character is typed and the ready bit becomes 1. However, for the interrupt to affect the processor, interrupts must also be enabled in the Status register (see Section B.7). All other bits of the Receiver Control register are unused.

The second terminal device register is the Receiver Data register (at address ffff0004hex). The low-order eight bits of this register contain the last character typed at the keyboard. All other bits contain 0s. This register is read-only and changes only when a new character is typed at the keyboard. Reading the Receiver Data register resets the ready bit in the Receiver Control register to 0. The value in this register is undefined if the ready bit in the Receiver Control register is 0.

FIGURE B.8.1 The terminal is controlled by four device registers, each of which appears as a memory location at the given address. Only a few bits of these registers are actually used. The others always read as 0s and are ignored on writes. (Receiver Control, 0xffff0000: ready in bit 0, interrupt enable in bit 1. Receiver Data, 0xffff0004: received byte in bits 7–0. Transmitter Control, 0xffff0008: ready in bit 0, interrupt enable in bit 1. Transmitter Data, 0xffff000c: transmitted byte in bits 7–0.)

The third terminal device register is the Transmitter Control register (at address ffff0008hex). Only the low-order two bits of this register are used. They behave much like the corresponding bits of the Receiver Control register. Bit 0 is called "ready"
and is read-only. If this bit is 1, the transmitter is ready to accept a new character for output. If it is 0, the transmitter is still busy writing the previous character. Bit 1 is "interrupt enable" and is readable and writable. If this bit is set to 1, then the terminal requests an interrupt at hardware level 0 whenever the transmitter is ready for a new character and the ready bit becomes 1.

The final device register is the Transmitter Data register (at address ffff000chex). When a value is written into this location, its low-order eight bits (i.e., an ASCII character as in Figure 2.15 in Chapter 2) are sent to the console. When the Transmitter Data register is written, the ready bit in the Transmitter Control register is reset to 0. This bit stays 0 until enough time has elapsed to transmit the character to the terminal; then the ready bit becomes 1 again. The Transmitter Data register should only be written when the ready bit of the Transmitter Control register is 1. If the transmitter is not ready, writes to the Transmitter Data register are ignored (the write appears to succeed but the character is not output).

Real computers require time to send characters to a console or terminal. These time lags are simulated by SPIM. For example, after the transmitter starts to write a character, the transmitter's ready bit becomes 0 for a while. SPIM measures time in instructions executed, not in real clock time. This means that the transmitter does not become ready again until the processor executes a fixed number of instructions. If you stop the machine and look at the ready bit, it will not change. However, if you let the machine run, the bit eventually changes back to 1.
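Putting the four registers together, the following sketch (ours, not from the text) echoes each typed character back to the display by polling the two ready bits. Run it with memory-mapped I/O enabled (the -mapped_io flag).

    .text
    .globl main
    main:
      lui   $t0, 0xffff      # $t0 = 0xffff0000, base of the device registers
    wait_rx:
      lw    $t1, 0($t0)      # Receiver Control
      andi  $t1, $t1, 1      # Isolate the ready bit
      beqz  $t1, wait_rx     # Spin until a character arrives
      lw    $a0, 4($t0)      # Receiver Data: read the character
    wait_tx:
      lw    $t1, 8($t0)      # Transmitter Control
      andi  $t1, $t1, 1      # Isolate the ready bit
      beqz  $t1, wait_tx     # Spin until the transmitter is free
      sw    $a0, 12($t0)     # Transmitter Data: write the character
      j     wait_rx          # Echo the next character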
B.9 SPIM

SPIM is a software simulator that runs assembly language programs written for processors that implement the MIPS-32 architecture, specifically Release 1 of this architecture with a fixed memory mapping, no caches, and only coprocessors 0 and 1.(2) SPIM's name is just MIPS spelled backwards. SPIM can read and immediately execute assembly language files. SPIM is a self-contained system for running MIPS programs. It contains a debugger and provides a few operating system-like services. SPIM is much slower than a real computer (100 or more times). However, its low cost and wide availability cannot be matched by real hardware!

An obvious question is, "Why use a simulator when most people have PCs that contain processors that run significantly faster than SPIM?" One reason is that the processors in PCs are Intel 80x86s, whose architecture is far less regular and far more complex to understand and program than MIPS processors. The MIPS architecture may be the epitome of a simple, clean RISC machine.

In addition, simulators can provide a better environment for assembly programming than an actual machine, because they can detect more errors and provide a better interface than an actual computer.

Finally, simulators are useful tools in studying computers and the programs that run on them. Because they are implemented in software, not silicon, simulators can be examined and easily modified to add new instructions, build new systems such as multiprocessors, or simply collect data.

(2) Earlier versions of SPIM (before 7.0) implemented the MIPS-1 architecture used in the original MIPS R2000 processors. This architecture is almost a proper subset of the MIPS-32 architecture, with the difference being the manner in which exceptions are handled. MIPS-32 also introduced approximately 60 new instructions, which are supported by SPIM. Programs that ran on the earlier versions of SPIM and did not use exceptions should run unmodified on newer versions of SPIM. Programs that used exceptions will require minor changes.

Simulation of a Virtual Machine

The basic MIPS architecture is difficult to program directly because of delayed branches, delayed loads, and restricted address modes. This difficulty is tolerable, since these computers were designed to be programmed in high-level languages and present an interface designed for compilers rather than assembly language programmers. A good part of the programming complexity results from delayed instructions. A delayed branch requires two cycles to execute (see the Elaborations on pages 343 and 381 of Chapter 4). In the second cycle, the instruction immediately following the branch executes. This instruction can perform useful work that normally would have been done before the branch. It can also be a nop (no operation) that does nothing. Similarly, delayed loads require two cycles to bring a value from memory, so the instruction immediately following a load cannot use the value (see Section 4.2 of Chapter 4).

MIPS wisely chose to hide this complexity by having its assembler implement a virtual machine. This virtual computer appears to have nondelayed branches and loads and a richer instruction set than the actual hardware. The assembler reorganizes (rearranges) instructions to fill the delay slots. The virtual computer also provides pseudoinstructions, which appear as real instructions in assembly language programs. The hardware, however, knows nothing about pseudoinstructions, so the assembler must translate them into equivalent sequences of actual machine instructions. For example, the MIPS hardware only provides instructions to branch when a register is equal to or not equal to 0. Other conditional branches, such as one that branches when one register is greater than another, are synthesized by comparing the two registers and branching when the result of the comparison is true (nonzero).
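For instance, a branch-on-greater-than pseudoinstruction is expanded along roughly these lines (a sketch of the idea; the assembler performs the expansion using its reserved register $at):

    bgt  $t0, $t1, target     # pseudoinstruction: branch if $t0 > $t1

    # becomes approximately:
    slt  $at, $t1, $t0        # $at = 1 if $t1 < $t0 (that is, $t0 > $t1)
    bne  $at, $zero, target   # branch when the comparison was true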
By default, SPIM simulates the richer virtual machine, since this is the machine that most programmers will find useful. However, SPIM can also simulate the delayed branches and loads in the actual hardware. Below, we describe the virtual machine and only mention in passing features that do not belong to the actual hardware. In doing so, we follow the convention of MIPS assembly language programmers (and compilers), who routinely use the extended machine as if it were implemented in silicon.

Getting Started with SPIM

The rest of this appendix introduces SPIM and the MIPS R2000 assembly language. Many details should never concern you; however, the sheer volume of information can sometimes obscure the fact that SPIM is a simple, easy-to-use program. This section starts with a quick tutorial on using SPIM, which should enable you to load, debug, and run simple MIPS programs.

SPIM comes in different versions for different types of computer systems. The one constant is the simplest version, called spim, which is a command-line-driven program that runs in a console window. It operates like most programs of this type: you type a line of text, hit the return key, and spim executes your command. Despite its lack of a fancy interface, spim can do everything that its fancy cousins can do.

There are two fancy cousins to spim. The version that runs in the X-windows environment of a UNIX or Linux system is called xspim. xspim is an easier program to learn and use than spim, because its commands are always visible on the screen and because it continually displays the machine's registers and memory. The other fancy version is called PCSpim and runs on Microsoft Windows. The UNIX and Windows versions of SPIM are on the CD (click on Tutorials). Tutorials on xspim, PCSpim, spim, and SPIM command-line options are on the CD (click on Software). If you are going to run SPIM on a PC running Microsoft Windows, you should first look at the tutorial on PCSpim on the CD. If you are going to run SPIM on a computer running UNIX or Linux, you should read the tutorial on xspim (click on Tutorials).

Surprising Features

Although SPIM faithfully simulates the MIPS computer, SPIM is a simulator, and certain things are not identical to an actual computer. The most obvious differences are that instruction timing and the memory systems are not identical. SPIM does not simulate caches or memory latency, nor does it accurately reflect floating-point operation or multiply and divide instruction delays. In addition, the floating-point instructions do not detect many error conditions, which would cause exceptions on a real machine.
Another surprise (which occurs on the real machine as well) is that a pseudoinstruction expands to several machine instructions. When you single-step or examine memory, the instructions that you see are different from the source program. The correspondence between the two sets of instructions is fairly simple, since SPIM does not reorganize instructions to fill delay slots.

Byte Order

Processors can number bytes within a word so the byte with the lowest number is either the leftmost or rightmost one. The convention used by a machine is called its byte order. MIPS processors can operate with either big-endian or little-endian byte order. For example, in a big-endian machine, the directive .byte 0, 1, 2, 3 would result in a memory word containing

    Byte #   0   1   2   3

while in a little-endian machine, the word would contain

    Byte #   3   2   1   0

SPIM operates with both byte orders. SPIM's byte order is the same as the byte order of the underlying machine that runs the simulator. For example, on an Intel 80x86, SPIM is little-endian, while on a Macintosh or Sun SPARC, SPIM is big-endian.

System Calls

SPIM provides a small set of operating system-like services through the system call (syscall) instruction. To request a service, a program loads the system call code (see Figure B.9.1) into register $v0 and arguments into registers $a0–$a3 (or $f12 for floating-point values). System calls that return values put their results in register $v0 (or $f0 for floating-point results). For example, the following code prints "the answer = 5":

    .data
    str:  .asciiz "the answer = "
    .text
      li    $v0, 4           # system call code for print_str
      la    $a0, str         # address of string to print
      syscall                # print the string
      li    $v0, 1           # system call code for print_int
      li    $a0, 5           # integer to print
      syscall                # print it

The print_int system call is passed an integer and prints it on the console. print_float prints a single floating-point number; print_double prints a double precision number; and print_string is passed a pointer to a null-terminated string, which it writes to the console.

The system calls read_int, read_float, and read_double read an entire line of input up to and including the newline. Characters following the number are ignored. read_string has the same semantics as the UNIX library routine fgets. It reads up to n - 1 characters into a buffer and terminates the string with a null byte. If fewer than n - 1 characters are on the current line, read_string reads up to and including the newline and again null-terminates the string.

FIGURE B.9.1 System services.

    Service        Code   Arguments                               Result
    print_int      1      $a0 = integer
    print_float    2      $f12 = float
    print_double   3      $f12 = double
    print_string   4      $a0 = string
    read_int       5                                              integer (in $v0)
    read_float     6                                              float (in $f0)
    read_double    7                                              double (in $f0)
    read_string    8      $a0 = buffer, $a1 = length
    sbrk           9      $a0 = amount                            address (in $v0)
    exit           10
    print_char     11     $a0 = char
    read_char      12                                             char (in $v0)
    open           13     $a0 = filename (string),                file descriptor (in $a0)
                          $a1 = flags, $a2 = mode
    read           14     $a0 = file descriptor,                  num chars read (in $a0)
                          $a1 = buffer, $a2 = length
    write          15     $a0 = file descriptor,                  num chars written (in $a0)
                          $a1 = buffer, $a2 = length
    close          16     $a0 = file descriptor
    exit2          17     $a0 = result
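As a further illustration (our sketch, not the book's), the following fragment reads an integer from the console, doubles it, and prints the result, using the codes from Figure B.9.1:

      li    $v0, 5           # system call code for read_int
      syscall                # the integer read is returned in $v0
      add   $a0, $v0, $v0    # double it; $a0 is print_int's argument
      li    $v0, 1           # system call code for print_int
      syscall                # print the doubled value
      li    $v0, 10          # system call code for exit
      syscall                # stop the program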
Warning: Programs that use these syscalls to read from the terminal should not use memory-mapped I/O (see Section B.8).

sbrk returns a pointer to a block of memory containing n additional bytes. exit stops the program SPIM is running. exit2 terminates the SPIM program, and the argument to exit2 becomes the value returned when the SPIM simulator itself terminates. print_char and read_char write and read a single character. open, read, write, and close are the standard UNIX library calls.

B.10 MIPS R2000 Assembly Language

A MIPS processor consists of an integer processing unit (the CPU) and a collection of coprocessors that perform ancillary tasks or operate on other types of data, such as floating-point numbers (see Figure B.10.1). SPIM simulates two coprocessors. Coprocessor 0 handles exceptions and interrupts. Coprocessor 1 is the floating-point unit. SPIM simulates most aspects of this unit.

Addressing Modes

MIPS is a load-store architecture, which means that only load and store instructions access memory. Computation instructions operate only on values in registers. The bare machine provides only one memory-addressing mode: c(rx), which uses the sum of the immediate c and register rx as the address. The virtual machine provides the following addressing modes for load and store instructions:

    Format                  Address computation
    (register)              contents of register
    imm                     immediate
    imm (register)          immediate + contents of register
    label                   address of label
    label +/- imm           address of label + or - immediate
    label +/- imm (register)
                            address of label + or - (immediate + contents
                            of register)

Most load and store instructions operate only on aligned data. A quantity is aligned if its memory address is a multiple of its size in bytes. Therefore, a halfword
B-46 Appendix B Assemblers, Linkers, and the SPIM Simulator object must be stored at even addresses, and a full word object must be stored at addresses that are a multiple of four. However, MIPS provides some instructions to manipulate unaligned data (lwl, lwr, swl, and swr). Elaboration: The MIPS assembler (and SPIM) synthesizes the more complex address ing modes by producing one or more instructions before the load or store to compute a complex address. For example, suppose that the label table referred to memory loca tion 0´10000004 and a program contained the instruction ld $a0, table + 4($a1) The assembler would translate this instruction into the instructions FIGURE B.10.1 MIPS R2000 CPU and FPU. CPU Registers $0 $31 Arithmetic unit Multiply divide Lo Hi Coprocessor 1 (FPU) Registers $0 $31 Arithmetic unit Registers BadVAddr Coprocessor 0 (traps and memory) Status Cause EPC Memory
    lui  $at, 4096
    addu $at, $at, $a1
    lw   $a0, 8($at)

The first instruction loads the upper bits of the label's address into register $at, which is the register that the assembler reserves for its own use. The second instruction adds the contents of register $a1 to the label's partial address. Finally, the load instruction uses the hardware address mode to add the sum of the lower bits of the label's address and the offset from the original instruction to the value in register $at.

Assembler Syntax

Comments in assembler files begin with a sharp sign (#). Everything from the sharp sign to the end of the line is ignored.

Identifiers are a sequence of alphanumeric characters, underbars (_), and dots (.) that do not begin with a number. Instruction opcodes are reserved words that cannot be used as identifiers. Labels are declared by putting them at the beginning of a line followed by a colon, for example:

    .data
    item: .word 1
    .text
    .globl main            # Must be global
    main: lw $t0, item

Numbers are base 10 by default. If they are preceded by 0x, they are interpreted as hexadecimal. Hence, 256 and 0x100 denote the same value.

Strings are enclosed in double quotes ("). Special characters in strings follow the C convention:

    newline   \n
    tab       \t
    quote     \"

SPIM supports a subset of the MIPS assembler directives:

    .align n        Align the next datum on a 2^n byte boundary. For example,
                    .align 2 aligns the next value on a word boundary. .align 0
                    turns off automatic alignment of .half, .word, .float, and
                    .double directives until the next .data or .kdata directive.

    .ascii str      Store the string str in memory, but do not null-terminate it.
    .asciiz str     Store the string str in memory and null-terminate it.

    .byte b1,..., bn
                    Store the n values in successive bytes of memory.

    .data <addr>    Subsequent items are stored in the data segment. If the
                    optional argument addr is present, subsequent items are
                    stored starting at address addr.

    .double d1,..., dn
                    Store the n floating-point double precision numbers in
                    successive memory locations.

    .extern sym size
                    Declare that the datum stored at sym is size bytes large and
                    is a global label. This directive enables the assembler to
                    store the datum in a portion of the data segment that is
                    efficiently accessed via register $gp.

    .float f1,..., fn
                    Store the n floating-point single precision numbers in
                    successive memory locations.

    .globl sym      Declare that label sym is global and can be referenced from
                    other files.

    .half h1,..., hn
                    Store the n 16-bit quantities in successive memory halfwords.

    .kdata <addr>   Subsequent data items are stored in the kernel data segment.
                    If the optional argument addr is present, subsequent items
                    are stored starting at address addr.

    .ktext <addr>   Subsequent items are put in the kernel text segment. In SPIM,
                    these items may only be instructions or words (see the .word
                    directive below). If the optional argument addr is present,
                    subsequent items are stored starting at address addr.

    .set noat and .set at
                    The first directive prevents SPIM from complaining about
                    subsequent instructions that use register $at. The second
                    directive re-enables the warning. Since pseudoinstructions
                    expand into code that uses register $at, programmers must be
                    very careful about leaving values in this register.

    .space n        Allocate n bytes of space in the current segment (which must
                    be the data segment in SPIM).
    .text <addr>    Subsequent items are put in the user text segment. In SPIM,
                    these items may only be instructions or words (see the .word
                    directive below). If the optional argument addr is present,
                    subsequent items are stored starting at address addr.

    .word w1,..., wn
                    Store the n 32-bit quantities in successive memory words.

SPIM does not distinguish various parts of the data segment (.data, .rdata, and .sdata).

Encoding MIPS Instructions

Figure B.10.2 explains how a MIPS instruction is encoded in a binary number. Each column contains instruction encodings for a field (a contiguous group of bits) from an instruction. The numbers at the left margin are values for a field. For example, the j opcode has a value of 2 in the opcode field. The text at the top of a column names a field and specifies which bits it occupies in an instruction. For example, the op field is contained in bits 26–31 of an instruction. This field encodes most instructions. However, some groups of instructions use additional fields to distinguish related instructions. For example, the different floating-point instructions are specified by bits 0–5. The arrows from the first column show which opcodes use these additional fields.

Instruction Format

The rest of this appendix describes both the instructions implemented by actual MIPS hardware and the pseudoinstructions provided by the MIPS assembler. The two types of instructions are easily distinguished. Actual instructions depict the fields in their binary representation. For example, in

    Addition (with overflow)
    add rd, rs, rt        0 | rs | rt | rd | 0 | 0x20
                          6    5    5    5   5    6

the add instruction consists of six fields. Each field's size in bits is the small number below the field. This instruction begins with 6 bits of 0s. Register specifiers begin with an r, so the next field is a 5-bit register specifier called rs. This is the same register that is the second argument in the symbolic assembly at the left of this line. Another common field is imm16, which is a 16-bit immediate number.
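As a worked example (ours, not the book's), consider encoding add $t0, $t1, $t2. Register $t1 is register 9, $t2 is register 10, and $t0 is register 8, so the six fields are 0, 9, 10, 8, 0, and 0x20. Writing each field in binary with its proper width gives

    000000 01001 01010 01000 00000 100000

which, read as one 32-bit word, is 0x012A4020.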
FIGURE B.10.2 MIPS opcode map. The values of each field are shown to its left. The first column shows the values in base 10, and the second shows base 16 for the op field (bits 31 to 26) in the third column. This op field completely specifies the MIPS operation except for six op values: 0, 1, 16, 17, 18, and 19. These operations are determined by other fields, identified by pointers. The last field (funct) uses "f" to mean "s" if rs = 16 and op = 17 or "d" if rs = 17 and op = 17. The second field (rs) uses "z" to mean "0", "1", "2", or "3" if op = 16, 17, 18, or 19, respectively. If rs = 16, the operation is specified elsewhere: if z = 0, the operations are specified in the fourth field (bits 4 to 0); if z = 1, then the operations are in the last field with f = s. If rs = 17 and z = 1, then the operations are in the last field with f = d. (The figure tabulates the encodings of the op, rs, rt, and funct fields for every MIPS instruction.)
Pseudoinstructions follow roughly the same conventions, but omit instruction encoding information. For example:

    Multiply (without overflow)
    mul rdest, rsrc1, src2                    pseudoinstruction

In pseudoinstructions, rdest and rsrc1 are registers and src2 is either a register or an immediate value. In general, the assembler and SPIM translate a more general form of an instruction (e.g., add $v1, $a0, 0x55) to a specialized form (e.g., addi $v1, $a0, 0x55).

Arithmetic and Logical Instructions

    Absolute value
    abs rdest, rsrc                           pseudoinstruction

Put the absolute value of register rsrc in register rdest.

    Addition (with overflow)
    add rd, rs, rt        0 | rs | rt | rd | 0 | 0x20
                          6    5    5    5   5    6

    Addition (without overflow)
    addu rd, rs, rt       0 | rs | rt | rd | 0 | 0x21
                          6    5    5    5   5    6

Put the sum of registers rs and rt into register rd.

    Addition immediate (with overflow)
    addi rt, rs, imm      8 | rs | rt | imm
                          6    5    5    16

    Addition immediate (without overflow)
    addiu rt, rs, imm     9 | rs | rt | imm
                          6    5    5    16

Put the sum of register rs and the sign-extended immediate into register rt.
    AND
    and rd, rs, rt        0 | rs | rt | rd | 0 | 0x24
                          6    5    5    5   5    6

Put the logical AND of registers rs and rt into register rd.

    AND immediate
    andi rt, rs, imm      0xc | rs | rt | imm
                          6     5    5    16

Put the logical AND of register rs and the zero-extended immediate into register rt.

    Count leading ones
    clo rd, rs            0x1c | rs | 0 | rd | 0 | 0x21
                          6      5    5   5    5    6

    Count leading zeros
    clz rd, rs            0x1c | rs | 0 | rd | 0 | 0x20
                          6      5    5   5    5    6

Count the number of leading ones (zeros) in the word in register rs and put the result into register rd. If a word is all ones (zeros), the result is 32.

    Divide (with overflow)
    div rs, rt            0 | rs | rt | 0 | 0x1a
                          6    5    5   10    6

    Divide (without overflow)
    divu rs, rt           0 | rs | rt | 0 | 0x1b
                          6    5    5   10    6

Divide register rs by register rt. Leave the quotient in register lo and the remainder in register hi. Note that if an operand is negative, the remainder is unspecified by the MIPS architecture and depends on the convention of the machine on which SPIM is run.
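Because div and divu leave their results in lo and hi, a program retrieves them with the mflo and mfhi instructions; for example (our sketch):

      li    $t0, 17
      li    $t1, 5
      div   $t0, $t1         # lo = 3 (quotient), hi = 2 (remainder)
      mflo  $t2              # $t2 = 17 / 5 = 3
      mfhi  $t3              # $t3 = 17 mod 5 = 2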
    Divide (with overflow)
    div rdest, rsrc1, src2                    pseudoinstruction

    Divide (without overflow)
    divu rdest, rsrc1, src2                   pseudoinstruction

Put the quotient of register rsrc1 and src2 into register rdest.

    Multiply
    mult rs, rt           0 | rs | rt | 0 | 0x18
                          6    5    5   10    6

    Unsigned multiply
    multu rs, rt          0 | rs | rt | 0 | 0x19
                          6    5    5   10    6

Multiply registers rs and rt. Leave the low-order word of the product in register lo and the high-order word in register hi.

    Multiply (without overflow)
    mul rd, rs, rt        0x1c | rs | rt | rd | 0 | 2
                          6      5    5    5    5   6

Put the low-order 32 bits of the product of rs and rt into register rd.

    Multiply (with overflow)
    mulo rdest, rsrc1, src2                   pseudoinstruction

    Unsigned multiply (with overflow)
    mulou rdest, rsrc1, src2                  pseudoinstruction

Put the low-order 32 bits of the product of register rsrc1 and src2 into register rdest.
    Multiply add
    madd rs, rt           0x1c | rs | rt | 0 | 0
                          6      5    5   10   6

    Unsigned multiply add
    maddu rs, rt          0x1c | rs | rt | 0 | 1
                          6      5    5   10   6

Multiply registers rs and rt and add the resulting 64-bit product to the 64-bit value in the concatenated registers lo and hi.

    Multiply subtract
    msub rs, rt           0x1c | rs | rt | 0 | 4
                          6      5    5   10   6

    Unsigned multiply subtract
    msubu rs, rt          0x1c | rs | rt | 0 | 5
                          6      5    5   10   6

Multiply registers rs and rt and subtract the resulting 64-bit product from the 64-bit value in the concatenated registers lo and hi.

    Negate value (with overflow)
    neg rdest, rsrc                           pseudoinstruction

    Negate value (without overflow)
    negu rdest, rsrc                          pseudoinstruction

Put the negative of register rsrc into register rdest.

    NOR
    nor rd, rs, rt        0 | rs | rt | rd | 0 | 0x27
                          6    5    5    5   5    6

Put the logical NOR of registers rs and rt into register rd.
    NOT
    not rdest, rsrc                           pseudoinstruction

Put the bitwise logical negation of register rsrc into register rdest.

    OR
    or rd, rs, rt         0 | rs | rt | rd | 0 | 0x25
                          6    5    5    5   5    6

Put the logical OR of registers rs and rt into register rd.

    OR immediate
    ori rt, rs, imm       0xd | rs | rt | imm
                          6     5    5    16

Put the logical OR of register rs and the zero-extended immediate into register rt.

    Remainder
    rem rdest, rsrc1, rsrc2                   pseudoinstruction

    Unsigned remainder
    remu rdest, rsrc1, rsrc2                  pseudoinstruction

Put the remainder of register rsrc1 divided by register rsrc2 into register rdest. Note that if an operand is negative, the remainder is unspecified by the MIPS architecture and depends on the convention of the machine on which SPIM is run.

    Shift left logical
    sll rd, rt, shamt     0 | rs | rt | rd | shamt | 0
                          6    5    5    5     5     6

    Shift left logical variable
    sllv rd, rt, rs       0 | rs | rt | rd | 0 | 4
                          6    5    5    5   5   6
    Shift right arithmetic
    sra rd, rt, shamt     0 | rs | rt | rd | shamt | 3
                          6    5    5    5     5     6

    Shift right arithmetic variable
    srav rd, rt, rs       0 | rs | rt | rd | 0 | 7
                          6    5    5    5   5   6

    Shift right logical
    srl rd, rt, shamt     0 | rs | rt | rd | shamt | 2
                          6    5    5    5     5     6

    Shift right logical variable
    srlv rd, rt, rs       0 | rs | rt | rd | 0 | 6
                          6    5    5    5   5   6

Shift register rt left (right) by the distance indicated by immediate shamt or the register rs and put the result in register rd. Note that argument rs is ignored for sll, sra, and srl.

    Rotate left
    rol rdest, rsrc1, rsrc2                   pseudoinstruction

    Rotate right
    ror rdest, rsrc1, rsrc2                   pseudoinstruction

Rotate register rsrc1 left (right) by the distance indicated by rsrc2 and put the result in register rdest.

    Subtract (with overflow)
    sub rd, rs, rt        0 | rs | rt | rd | 0 | 0x22
                          6    5    5    5   5    6
    Subtract (without overflow)
    subu rd, rs, rt       0 | rs | rt | rd | 0 | 0x23
                          6    5    5    5   5    6

Put the difference of registers rs and rt into register rd.

    Exclusive OR
    xor rd, rs, rt        0 | rs | rt | rd | 0 | 0x26
                          6    5    5    5   5    6

Put the logical XOR of registers rs and rt into register rd.

    XOR immediate
    xori rt, rs, imm      0xe | rs | rt | imm
                          6     5    5    16

Put the logical XOR of register rs and the zero-extended immediate into register rt.

Constant-Manipulating Instructions

    Load upper immediate
    lui rt, imm           0xf | 0 | rt | imm
                          6     5   5    16

Load the lower halfword of the immediate imm into the upper halfword of register rt. The lower bits of the register are set to 0.

    Load immediate
    li rdest, imm                             pseudoinstruction

Move the immediate imm into register rdest.

Comparison Instructions

    Set less than
    slt rd, rs, rt        0 | rs | rt | rd | 0 | 0x2a
                          6    5    5    5   5    6
    Set less than unsigned
    sltu rd, rs, rt       0 | rs | rt | rd | 0 | 0x2b
                          6    5    5    5   5    6

Set register rd to 1 if register rs is less than rt, and to 0 otherwise.

    Set less than immediate
    slti rt, rs, imm      0xa | rs | rt | imm
                          6     5    5    16

    Set less than unsigned immediate
    sltiu rt, rs, imm     0xb | rs | rt | imm
                          6     5    5    16

Set register rt to 1 if register rs is less than the sign-extended immediate, and to 0 otherwise.

    Set equal
    seq rdest, rsrc1, rsrc2                   pseudoinstruction

Set register rdest to 1 if register rsrc1 equals rsrc2, and to 0 otherwise.

    Set greater than equal
    sge rdest, rsrc1, rsrc2                   pseudoinstruction

    Set greater than equal unsigned
    sgeu rdest, rsrc1, rsrc2                  pseudoinstruction

Set register rdest to 1 if register rsrc1 is greater than or equal to rsrc2, and to 0 otherwise.

    Set greater than
    sgt rdest, rsrc1, rsrc2                   pseudoinstruction
Set greater than unsigned sgtu rdest, rsrc1, rsrc2 pseudoinstruction Set register rdest to 1 if register rsrc1 is greater than rsrc2, and to 0 otherwise. Set less than equal sle rdest, rsrc1, rsrc2 pseudoinstruction Set less than equal unsigned sleu rdest, rsrc1, rsrc2 pseudoinstruction Set register rdest to 1 if register rsrc1 is less than or equal to rsrc2, and to 0 otherwise. Set not equal sne rdest, rsrc1, rsrc2 pseudoinstruction Set register rdest to 1 if register rsrc1 is not equal to rsrc2, and to 0 otherwise. Branch Instructions Branch instructions use a signed 16-bit instruction offset field; hence, they can jump 2¹⁵ − 1 instructions (not bytes) forward or 2¹⁵ instructions backward. The jump instruction contains a 26-bit address field. In actual MIPS processors, branch instructions are delayed branches, which do not transfer control until the instruction following the branch (its “delay slot”) has executed (see Chapter 4). Delayed branches affect the offset calculation, since it must be computed relative to the address of the delay slot instruction (PC + 4), which is when the branch occurs. SPIM does not simulate this delay slot, unless the -bare or -delayed_branch flags are specified. In assembly code, offsets are not usually specified as numbers. Instead, instructions branch to a label, and the assembler computes the distance between the branch and the target instructions. In MIPS-32, all actual (not pseudo) conditional branch instructions have a “likely” variant (for example, beq’s likely variant is beql), which does not execute B.10 MIPS R2000 Assembly Language B-59
B-60 Appendix B Assemblers, Linkers, and the SPIM Simulator the instruction in the branch’s delay slot if the branch is not taken. Do not use these instructions; they may be removed in subsequent versions of the architecture. SPIM implements these instructions, but they are not described further. Branch instruction b label pseudoinstruction Unconditionally branch to the instruction at the label. Branch coprocessor false bc1f cc label 0x11 8 cc 0 Offset 6 5 3 2 16 Branch coprocessor true bc1t cc label 0x11 8 cc 1 Offset 6 5 3 2 16 Conditionally branch the number of instructions specified by the offset if the floating-point coprocessor’s condition flag numbered cc is false (true). If cc is omitted from the instruction, condition code flag 0 is assumed. Branch on equal beq rs, rt, label 4 rs rt Offset 6 5 5 16 Conditionally branch the number of instructions specified by the offset if register rs equals rt. Branch on greater than equal zero bgez rs, label 1 rs 1 Offset 6 5 5 16 Conditionally branch the number of instructions specified by the offset if register rs is greater than or equal to 0.
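As a small illustration of branching to labels (a sketch; the counter value is arbitrary), the assembler computes the offsets for the b and beq instructions below:

        li    $t0, 5             # loop counter
loop:   addi  $t0, $t0, -1       # decrement the counter
        beq   $t0, $zero, done   # exit when the counter reaches 0
        b     loop               # unconditional branch back to the label
done:   move  $t1, $t0           # $t1 is 0 here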
Branch on greater than equal zero and link bgezal rs, label 1 rs 0x11 Offset 6 5 5 16 Conditionally branch the number of instructions specified by the offset if register rs is greater than or equal to 0. Save the address of the next instruction in register 31. Branch on greater than zero bgtz rs, label 7 rs 0 Offset 6 5 5 16 Conditionally branch the number of instructions specified by the offset if register rs is greater than 0. Branch on less than equal zero blez rs, label 6 rs 0 Offset 6 5 5 16 Conditionally branch the number of instructions specified by the offset if register rs is less than or equal to 0. Branch on less than zero and link bltzal rs, label 1 rs 0x10 Offset 6 5 5 16 Conditionally branch the number of instructions specified by the offset if register rs is less than 0. Save the address of the next instruction in register 31. Branch on less than zero bltz rs, label 1 rs 0 Offset 6 5 5 16 Conditionally branch the number of instructions specified by the offset if register rs is less than 0. B.10 MIPS R2000 Assembly Language B-61
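The comparing branch pseudoinstructions listed below (bgt, ble, and so on) are typically synthesized from a set instruction and an actual branch. A rough sketch of one plausible expansion, using the assembler-reserved temporary register $at (the exact expansion is up to the assembler):

        # bgt $t0, $t1, target might expand to:
        slt   $at, $t1, $t0        # $at = 1 if $t1 < $t0, i.e., $t0 > $t1
        bne   $at, $zero, target   # branch when the comparison succeeded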
B-62 Appendix B Assemblers, Linkers, and the SPIM Simulator Branch on not equal bne rs, rt, label 5 rs rt Offset 6 5 5 16 Conditionally branch the number of instructions specified by the offset if register rs is not equal to rt. Branch on equal zero beqz rsrc, label pseudoinstruction Conditionally branch to the instruction at the label if rsrc equals 0. Branch on greater than equal bge rsrc1, rsrc2, label pseudoinstruction Branch on greater than equal unsigned bgeu rsrc1, rsrc2, label pseudoinstruction Conditionally branch to the instruction at the label if register rsrc1 is greater than or equal to rsrc2. Branch on greater than bgt rsrc1, src2, label pseudoinstruction Branch on greater than unsigned bgtu rsrc1, src2, label pseudoinstruction Conditionally branch to the instruction at the label if register rsrc1 is greater than src2. Branch on less than equal ble rsrc1, src2, label pseudoinstruction
Branch on less than equal unsigned bleu rsrc1, src2, label pseudoinstruction Conditionally branch to the instruction at the label if register rsrc1 is less than or equal to src2. Branch on less than blt rsrc1, rsrc2, label pseudoinstruction Branch on less than unsigned bltu rsrc1, rsrc2, label pseudoinstruction Conditionally branch to the instruction at the label if register rsrc1 is less than rsrc2. Branch on not equal zero bnez rsrc, label pseudoinstruction Conditionally branch to the instruction at the label if register rsrc is not equal to 0. Jump Instructions Jump j target 2 target 6 26 Unconditionally jump to the instruction at target. Jump and link jal target 3 target 6 26 Unconditionally jump to the instruction at target. Save the address of the next instruction in register $ra. B.10 MIPS R2000 Assembly Language B-63
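A minimal procedure-call sketch using jal together with jr (described next); the doubling routine here is purely illustrative:

main:   li    $a0, 21            # argument in $a0
        jal   double_it          # $ra receives the address of the next instruction
        move  $t0, $v0           # $t0 now holds 42
        j     over               # skip over the procedure body
double_it:
        add   $v0, $a0, $a0      # return value = 2 * argument
        jr    $ra                # jump back through the saved return address
over:   nop                      # execution continues here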
B-64 Appendix B Assemblers, Linkers, and the SPIM Simulator Jump and link register jalr rs, rd 0 rs 0 rd 0 9 6 5 5 5 5 6 Unconditionally jump to the instruction whose address is in register rs. Save the address of the next instruction in register rd (which defaults to 31). Jump register jr rs 0 rs 0 8 6 5 15 6 Unconditionally jump to the instruction whose address is in register rs. Trap Instructions Trap if equal teq rs, rt 0 rs rt 0 0x34 6 5 5 10 6 If register rs is equal to register rt, raise a Trap exception. Trap if equal immediate teqi rs, imm 1 rs 0xc imm 6 5 5 16 If register rs is equal to the sign-extended value imm, raise a Trap exception. Trap if not equal tne rs, rt 0 rs rt 0 0x36 6 5 5 10 6 If register rs is not equal to register rt, raise a Trap exception. Trap if not equal immediate tnei rs, imm 1 rs 0xe imm 6 5 5 16 If register rs is not equal to the sign-extended value imm, raise a Trap exception.
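One common use of the trap instructions is a guard that a compiler might emit before a division, sketched here (register contents hypothetical):

        teq   $t1, $zero         # raise a Trap exception if the divisor is 0
        div   $t0, $t1           # safe to divide; quotient in lo, remainder in hi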
Trap if greater equal tge rs, rt 0 rs rt 0 0x30 6 5 5 10 6 Unsigned trap if greater equal tgeu rs, rt 0 rs rt 0 0x31 6 5 5 10 6 If register rs is greater than or equal to register rt, raise a Trap exception. Trap if greater equal immediate tgei rs, imm 1 rs 8 imm 6 5 5 16 Unsigned trap if greater equal immediate tgeiu rs, imm 1 rs 9 imm 6 5 5 16 If register rs is greater than or equal to the sign-extended value imm, raise a Trap exception. Trap if less than tlt rs, rt 0 rs rt 0 0x32 6 5 5 10 6 Unsigned trap if less than tltu rs, rt 0 rs rt 0 0x33 6 5 5 10 6 If register rs is less than register rt, raise a Trap exception. Trap if less than immediate tlti rs, imm 1 rs 0xa imm 6 5 5 16 B.10 MIPS R2000 Assembly Language B-65
B-66 Appendix B Assemblers, Linkers, and the SPIM Simulator Unsigned trap if less than immediate tltiu rs, imm 1 rs 0xb imm 6 5 5 16 If register rs is less than the sign-extended value imm, raise a Trap exception. Load Instructions Load address la rdest, address pseudoinstruction Load computed address—not the contents of the location—into register rdest. Load byte lb rt, address 0x20 rs rt Offset 6 5 5 16 Load unsigned byte lbu rt, address 0x24 rs rt Offset 6 5 5 16 Load the byte at address into register rt. The byte is sign-extended by lb, but not by lbu. Load halfword lh rt, address 0x21 rs rt Offset 6 5 5 16 Load unsigned halfword lhu rt, address 0x25 rs rt Offset 6 5 5 16 Load the 16-bit quantity (halfword) at address into register rt. The halfword is sign-extended by lh, but not by lhu.
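The sign-extension distinction matters whenever byte or halfword data may have the high bit set. A small sketch, using the .data/.text directives described earlier in this appendix (the label byte_val is hypothetical):

        .data
byte_val: .byte 0x80             # a byte whose sign bit is set
        .text
        la    $t0, byte_val
        lb    $t1, 0($t0)        # $t1 = 0xffffff80 (sign-extended)
        lbu   $t2, 0($t0)        # $t2 = 0x00000080 (zero-extended)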
Load word lw rt, address 0x23 rs rt Offset 6 5 5 16 Load the 32-bit quantity (word) at address into register rt. Load word coprocessor 1 lwc1 ft, address 0x31 rs rt Offset 6 5 5 16 Load the word at address into register ft in the floating-point unit. Load word left lwl rt, address 0x22 rs rt Offset 6 5 5 16 Load word right lwr rt, address 0x26 rs rt Offset 6 5 5 16 Load the left (right) bytes from the word at the possibly unaligned address into register rt. Load doubleword ld rdest, address pseudoinstruction Load the 64-bit quantity at address into registers rdest and rdest + 1. Unaligned load halfword ulh rdest, address pseudoinstruction B.10 MIPS R2000 Assembly Language B-67
B-68 Appendix B Assemblers, Linkers, and the SPIM Simulator Unaligned load halfword unsigned ulhu rdest, address pseudoinstruction Load the 16-bit quantity (halfword) at the possibly unaligned address into register rdest. The halfword is sign-extended by ulh, but not ulhu. Unaligned load word ulw rdest, address pseudoinstruction Load the 32-bit quantity (word) at the possibly unaligned address into register rdest. Load linked ll rt, address 0x30 rs rt Offset 6 5 5 16 Load the 32-bit quantity (word) at address into register rt and start an atomic read-modify-write operation. This operation is completed by a store conditional (sc) instruction, which will fail if another processor writes into the block containing the loaded word. Since SPIM does not simulate multiple processors, the store conditional operation always succeeds. Store Instructions Store byte sb rt, address 0x28 rs rt Offset 6 5 5 16 Store the low byte from register rt at address. Store halfword sh rt, address 0x29 rs rt Offset 6 5 5 16 Store the low halfword from register rt at address.
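Together, ll and the store conditional (sc) instruction described below form the usual atomic read-modify-write idiom. A minimal sketch of an atomic increment, assuming $a0 holds the address of the shared word:

try:    ll    $t0, 0($a0)        # load linked
        addi  $t0, $t0, 1        # modify the loaded value
        sc    $t0, 0($a0)        # try to store; $t0 becomes 1 on success, 0 on failure
        beq   $t0, $zero, try    # retry if the atomic operation was broken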
Store word sw rt, address 0x2b rs rt Offset 6 5 5 16 Store the word from register rt at address. Store word coprocessor 1 swc1 ft, address 0x31 rs ft Offset 6 5 5 16 Store the floating-point value in register ft of floating-point coprocessor at address. Store double coprocessor 1 sdc1 ft, address 0x3d rs ft Offset 6 5 5 16 Store the doubleword floating-point value in registers ft and ft + 1 of floating-point coprocessor at address. Register ft must be even numbered. Store word left swl rt, address 0x2a rs rt Offset 6 5 5 16 Store word right swr rt, address 0x2e rs rt Offset 6 5 5 16 Store the left (right) bytes from register rt at the possibly unaligned address. Store doubleword sd rsrc, address pseudoinstruction Store the 64-bit quantity in registers rsrc and rsrc + 1 at address. B.10 MIPS R2000 Assembly Language B-69
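A short sketch contrasting the store widths (assuming $a0 holds a word-aligned address):

        li    $t0, 0x12345678
        sw    $t0, 0($a0)        # stores all four bytes
        sh    $t0, 4($a0)        # stores only the low halfword, 0x5678
        sb    $t0, 6($a0)        # stores only the low byte, 0x78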
B-70 Appendix B Assemblers, Linkers, and the SPIM Simulator Unaligned store halfword ush rsrc, address pseudoinstruction Store the low halfword from register rsrc at the possibly unaligned address. Unaligned store word usw rsrc, address pseudoinstruction Store the word from register rsrc at the possibly unaligned address. Store conditional sc rt, address 0x38 rs rt Offset 6 5 5 16 Store the 32-bit quantity (word) in register rt into memory at address and complete an atomic read-modify-write operation. If this atomic operation is successful, the memory word is modified and register rt is set to 1. If the atomic operation fails because another processor wrote to a location in the block containing the addressed word, this instruction does not modify memory and writes 0 into register rt. Since SPIM does not simulate multiple processors, the instruction always succeeds. Data Movement Instructions Move move rdest, rsrc pseudoinstruction Move register rsrc to rdest. Move from hi mfhi rd 0 0 rd 0 0x10 6 10 5 5 6
Move from lo mflo rd 0 0 rd 0 0x12 6 10 5 5 6 The multiply and divide unit produces its result in two additional registers, hi and lo. These instructions move values to and from these registers. The multiply, divide, and remainder pseudoinstructions that make this unit appear to operate on the general registers move the result after the computation finishes. Move the hi (lo) register to register rd. Move to hi mthi rs 0 rs 0 0x11 6 5 15 6 Move to lo mtlo rs 0 rs 0 0x13 6 5 15 6 Move register rs to the hi (lo) register. Move from coprocessor 0 mfc0 rt, rd 0x10 0 rt rd 0 6 5 5 5 11 Move from coprocessor 1 mfc1 rt, fs 0x11 0 rt fs 0 6 5 5 5 11 Coprocessors have their own register sets. These instructions move values between these registers and the CPU’s registers. Move register rd in a coprocessor (register fs in the FPU) to CPU register rt. The floating-point unit is coprocessor 1. B.10 MIPS R2000 Assembly Language B-71
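A typical hi/lo sequence retrieves both halves of a full 64-bit product (a sketch with hypothetical operands):

        mult  $t0, $t1           # 64-bit product goes to hi/lo
        mflo  $t2                # low 32 bits of the product
        mfhi  $t3                # high 32 bits of the product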
B-72 Appendix B Assemblers, Linkers, and the SPIM Simulator Move double from coprocessor 1 mfc1.d rdest, frsrc1 pseudoinstruction Move floating-point registers frsrc1 and frsrc1 + 1 to CPU registers rdest and rdest + 1. Move to coprocessor 0 mtc0 rd, rt 0x10 4 rt rd 0 6 5 5 5 11 Move to coprocessor 1 mtc1 rd, fs 0x11 4 rt fs 0 6 5 5 5 11 Move CPU register rt to register rd in a coprocessor (register fs in the FPU). Move conditional not zero movn rd, rs, rt 0 rs rt rd 0xb 6 5 5 5 11 Move register rs to register rd if register rt is not 0. Move conditional zero movz rd, rs, rt 0 rs rt rd 0xa 6 5 5 5 11 Move register rs to register rd if register rt is 0. Move conditional on FP false movf rd, rs, cc 0 rs cc 0 rd 0 1 6 5 3 2 5 5 6 Move CPU register rs to register rd if FPU condition code flag number cc is 0. If cc is omitted from the instruction, condition code flag 0 is assumed.
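The conditional moves support simple branch-free idioms. A sketch of an absolute-value computation using movn (register contents hypothetical):

        subu  $t1, $zero, $t0    # $t1 = -$t0
        slt   $t2, $t0, $zero    # $t2 = 1 if $t0 is negative
        movn  $t0, $t1, $t2      # replace $t0 with -$t0 only when it was negative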
Move conditional on FP true movt rd, rs, cc 0 rs cc 1 rd 0 1 6 5 3 2 5 5 6 Move CPU register rs to register rd if FPU condition code flag number cc is 1. If cc is omitted from the instruction, condition code flag 0 is assumed. Floating-Point Instructions The MIPS has a floating-point coprocessor (numbered 1) that operates on single precision (32-bit) and double precision (64-bit) floating-point numbers. This coprocessor has its own registers, which are numbered $f0–$f31. Because these registers are only 32 bits wide, two of them are required to hold doubles, so only floating-point registers with even numbers can hold double precision values. The floating-point coprocessor also has eight condition code (cc) flags, numbered 0–7, which are set by compare instructions and tested by branch (bc1f or bc1t) and conditional move instructions. Values are moved in or out of these registers one word (32 bits) at a time by lwc1, swc1, mtc1, and mfc1 instructions or one double (64 bits) at a time by ldc1 and sdc1, described above, or by the l.s, l.d, s.s, and s.d pseudoinstructions described below. In the actual instructions below, the format field in bits 21–25 is 0x10 for single precision and 0x11 for double precision. In the pseudoinstructions below, fdest is a floating-point register (e.g., $f2). Floating-point absolute value double abs.d fd, fs 0x11 1 0 fs fd 5 6 5 5 5 5 6 Floating-point absolute value single abs.s fd, fs 0x11 0 0 fs fd 5 Compute the absolute value of the floating-point double (single) in register fs and put it in register fd. Floating-point addition double add.d fd, fs, ft 0x11 0x11 ft fs fd 0 6 5 5 5 5 6 B.10 MIPS R2000 Assembly Language B-73
B-74 Appendix B Assemblers, Linkers, and the SPIM Simulator Floating-point addition single add.s fd, fs, ft 0x11 0x10 ft fs fd 0 6 5 5 5 5 6 Compute the sum of the floating-point doubles (singles) in registers fs and ft and put it in register fd. Floating-point ceiling to word ceil.w.d fd, fs 0x11 0x11 0 fs fd 0xe 6 5 5 5 5 6 ceil.w.s fd, fs 0x11 0x10 0 fs fd 0xe Compute the ceiling of the floating-point double (single) in register fs, convert to a 32-bit fixed-point value, and put the resulting word in register fd. Compare equal double c.eq.d cc fs, ft 0x11 0x11 ft fs cc 0 FC 2 6 5 5 5 3 2 2 4 Compare equal single c.eq.s cc fs, ft 0x11 0x10 ft fs cc 0 FC 2 6 5 5 5 3 2 2 4 Compare the floating-point double (single) in register fs against the one in ft and set the floating-point condition flag cc to 1 if they are equal. If cc is omitted, condition code flag 0 is assumed. Compare less than equal double c.le.d cc fs, ft 0x11 0x11 ft fs cc 0 FC 0xe 6 5 5 5 3 2 2 4 Compare less than equal single c.le.s cc fs, ft 0x11 0x10 ft fs cc 0 FC 0xe 6 5 5 5 3 2 2 4
Compare the floating-point double (single) in register fs against the one in ft and set the floating-point condition flag cc to 1 if the first is less than or equal to the second. If cc is omitted, condition code flag 0 is assumed. Compare less than double c.lt.d cc fs, ft 0x11 0x11 ft fs cc 0 FC 0xc 6 5 5 5 3 2 2 4 Compare less than single c.lt.s cc fs, ft 0x11 0x10 ft fs cc 0 FC 0xc 6 5 5 5 3 2 2 4 Compare the floating-point double (single) in register fs against the one in ft and set the condition flag cc to 1 if the first is less than the second. If cc is omitted, condition code flag 0 is assumed. Convert single to double cvt.d.s fd, fs 0x11 0x10 0 fs fd 0x21 6 5 5 5 5 6 Convert integer to double cvt.d.w fd, fs 0x11 0x14 0 fs fd 0x21 6 5 5 5 5 6 Convert the single precision floating-point number or integer in register fs to a double precision number and put it in register fd. Convert double to single cvt.s.d fd, fs 0x11 0x11 0 fs fd 0x20 6 5 5 5 5 6 Convert integer to single cvt.s.w fd, fs 0x11 0x14 0 fs fd 0x20 6 5 5 5 5 6 Convert the double precision floating-point number or integer in register fs to a single precision number and put it in register fd. B.10 MIPS R2000 Assembly Language B-75
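A minimal conversion sketch: move an integer into the FPU and widen it to a double (the value 21 is arbitrary):

        li      $t0, 21
        mtc1    $t0, $f0         # raw integer bits into FPU register $f0
        cvt.d.w $f2, $f0         # convert the word to a double in $f2/$f3
        add.d   $f4, $f2, $f2    # $f4/$f5 = 42.0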
B-76 Appendix B Assemblers, Linkers, and the SPIM Simulator Convert double to integer cvt.w.d fd, fs 0x11 0x11 0 fs fd 0x24 6 5 5 5 5 6 Convert single to integer cvt.w.s fd, fs 0x11 0x10 0 fs fd 0x24 6 5 5 5 5 6 Convert the double or single precision floating-point number in register fs to an integer and put it in register fd. Floating-point divide double div.d fd, fs, ft 0x11 0x11 ft fs fd 3 6 5 5 5 5 6 Floating-point divide single div.s fd, fs, ft 0x11 0x10 ft fs fd 3 6 5 5 5 5 6 Compute the quotient of the floating-point doubles (singles) in registers fs and ft and put it in register fd. Floating-point floor to word floor.w.d fd, fs 0x11 0x11 0 fs fd 0xf 6 5 5 5 5 6 floor.w.s fd, fs 0x11 0x10 0 fs fd 0xf Compute the floor of the floating-point double (single) in register fs and put the resulting word in register fd. Load floating-point double l.d fdest, address pseudoinstruction
Load floating-point single l.s fdest, address pseudoinstruction Load the floating-point double (single) at address into register fdest. Move floating-point double mov.d fd, fs 0x11 0x11 0 fs fd 6 6 5 5 5 5 6 Move floating-point single mov.s fd, fs 0x11 0x10 0 fs fd 6 6 5 5 5 5 6 Move the floating-point double (single) from register fs to register fd. Move conditional floating-point double false movf.d fd, fs, cc 0x11 0x11 cc 0 fs fd 0x11 6 5 3 2 5 5 6 Move conditional floating-point single false movf.s fd, fs, cc 0x11 0x10 cc 0 fs fd 0x11 6 5 3 2 5 5 6 Move the floating-point double (single) from register fs to register fd if condition code flag cc is 0. If cc is omitted, condition code flag 0 is assumed. Move conditional floating-point double true movt.d fd, fs, cc 0x11 0x11 cc 1 fs fd 0x11 6 5 3 2 5 5 6 Move conditional floating-point single true movt.s fd, fs, cc 0x11 0x10 cc 1 fs fd 0x11 6 5 3 2 5 5 6 B.10 MIPS R2000 Assembly Language B-77
B-78 Appendix B Assemblers, Linkers, and the SPIM Simulator Move the floating-point double (single) from register fs to register fd if condition code flag cc is 1. If cc is omitted, condition code flag 0 is assumed. Move conditional floating-point double not zero movn.d fd, fs, rt 0x11 0x11 rt fs fd 0x13 6 5 5 5 5 6 Move conditional floating-point single not zero movn.s fd, fs, rt 0x11 0x10 rt fs fd 0x13 6 5 5 5 5 6 Move the floating-point double (single) from register fs to register fd if processor register rt is not 0. Move conditional floating-point double zero movz.d fd, fs, rt 0x11 0x11 rt fs fd 0x12 6 5 5 5 5 6 Move conditional floating-point single zero movz.s fd, fs, rt 0x11 0x10 rt fs fd 0x12 6 5 5 5 5 6 Move the floating-point double (single) from register fs to register fd if processor register rt is 0. Floating-point multiply double mul.d fd, fs, ft 0x11 0x11 ft fs fd 2 6 5 5 5 5 6 Floating-point multiply single mul.s fd, fs, ft 0x11 0x10 ft fs fd 2 6 5 5 5 5 6 Compute the product of the floating-point doubles (singles) in registers fs and ft and put it in register fd. Negate double neg.d fd, fs 0x11 0x11 0 fs fd 7 6 5 5 5 5 6
Negate single neg.s fd, fs 0x11 0x10 0 fs fd 7 6 5 5 5 5 6 Negate the floating-point double (single) in register fs and put it in register fd. Floating-point round to word round.w.d fd, fs 0x11 0x11 0 fs fd 0xc 6 5 5 5 5 6 round.w.s fd, fs 0x11 0x10 0 fs fd 0xc Round the floating-point double (single) value in register fs, convert to a 32-bit fixed-point value, and put the resulting word in register fd. Square root double sqrt.d fd, fs 0x11 0x11 0 fs fd 4 6 5 5 5 5 6 Square root single sqrt.s fd, fs 0x11 0x10 0 fs fd 4 6 5 5 5 5 6 Compute the square root of the floating-point double (single) in register fs and put it in register fd. Store floating-point double s.d fdest, address pseudoinstruction Store floating-point single s.s fdest, address pseudoinstruction Store the floating-point double (single) in register fdest at address. Floating-point subtract double sub.d fd, fs, ft 0x11 0x11 ft fs fd 1 6 5 5 5 5 6 B.10 MIPS R2000 Assembly Language B-79
B-80 Appendix B Assemblers, Linkers, and the SPIM Simulator Floating-point subtract single sub.s fd, fs, ft 0x11 0x10 ft fs fd 1 6 5 5 5 5 6 Compute the difference of the floating-point doubles (singles) in registers fs and ft and put it in register fd. Floating-point truncate to word trunc.w.d fd, fs 0x11 0x11 0 fs fd 0xd 6 5 5 5 5 6 trunc.w.s fd, fs 0x11 0x10 0 fs fd 0xd Truncate the floating-point double (single) value in register fs, convert to a 32-bit fixed-point value, and put the resulting word in register fd. Exception and Interrupt Instructions Exception return eret 0x10 1 0 0x18 6 1 19 6 Set the EXL bit in coprocessor 0’s Status register to 0 and return to the instruction pointed to by coprocessor 0’s EPC register. System call syscall 0 0 0xc 6 20 6 Register $v0 contains the number of the system call (see Figure B.9.1) provided by SPIM. Break break code 0 code 0xd 6 20 6 Cause exception code. Exception 1 is reserved for the debugger. No operation nop 0 0 0 0 0 0 6 5 5 5 5 6 Do nothing.
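Putting the pieces together, here is a complete SPIM program that prints an integer and exits, using the system calls from Figure B.9.1 (print_int is call 1 and exit is call 10 in SPIM):

        .text
        .globl main
main:   li    $v0, 1             # system call 1: print_int
        li    $a0, 42            # the integer to print
        syscall
        li    $v0, 10            # system call 10: exit
        syscall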
B.11 Concluding Remarks Programming in assembly language requires a programmer to trade helpful features of high-level languages—such as data structures, type checking, and control constructs—for complete control over the instructions that a computer executes. External constraints on some applications, such as response time or program size, require a programmer to pay close attention to every instruction. However, the cost of this level of attention is assembly language programs that are longer, more time-consuming to write, and more difficult to maintain than high-level language programs. Moreover, three trends are reducing the need to write programs in assembly language. The first trend is toward the improvement of compilers. Modern compilers produce code that is typically comparable to the best handwritten code—and is sometimes better. The second trend is the introduction of new processors that are not only faster, but in the case of processors that execute multiple instructions simultaneously, also more difficult to program by hand. In addition, the rapid evolution of the modern computer favors high-level language programs that are not tied to a single architecture. Finally, we witness a trend toward increasingly complex applications, characterized by complex graphic interfaces and many more features than their predecessors. Large applications are written by teams of programmers and require the modularity and semantic checking features provided by high-level languages. Further Reading Aho, A., R. Sethi, and J. Ullman [1985]. Compilers: Principles, Techniques, and Tools, Reading, MA: Addison-Wesley. Slightly dated and lacking in coverage of modern architectures, but still the standard reference on compilers. Sweetman, D. [1999]. See MIPS Run, San Francisco, CA: Morgan Kaufmann Publishers. A complete, detailed, and engaging introduction to the MIPS instruction set and assembly language programming on these machines. Detailed documentation on the MIPS-32 architecture is available on the Web: MIPS32™ Architecture for Programmers Volume I: Introduction to the MIPS32™ Architecture (http://mips.com/content/Documentation/MIPSDocumentation/ProcessorArchitecture/ArchitectureProgrammingPublicationsforMIPS32/MD00082-2B-MIPS32INT-AFP-02.00.pdf/getDownload) MIPS32™ Architecture for Programmers Volume II: The MIPS32™ Instruction Set (http://mips.com/content/Documentation/MIPSDocumentation/ProcessorArchitecture/ArchitectureProgrammingPublicationsforMIPS32/MD00086-2B-MIPS32BIS-AFP-02.00.pdf/getDownload) MIPS32™ Architecture for Programmers Volume III: The MIPS32™ Privileged Resource Architecture (http://mips.com/content/Documentation/MIPSDocumentation/ProcessorArchitecture/ArchitectureProgrammingPublicationsforMIPS32/MD00090-2B-MIPS32PRA-AFP-02.00.pdf/getDownload) B.11 Concluding Remarks B-81
B-82 Appendix B Assemblers, Linkers, and the SPIM Simulator B.12 Exercises B.1 [5] <§B.5> Section B.5 described how memory is partitioned on most MIPS systems. Propose another way of dividing memory that meets the same goals. B.2 [20] <§B.6> Rewrite the code for fact to use fewer instructions. B.3 [5] <§B.7> Is it ever safe for a user program to use registers $k0 or $k1? B.4 [25] <§B.7> Section B.7 contains code for a very simple exception handler. One serious problem with this handler is that it disables interrupts for a long time. This means that interrupts from a fast I/O device may be lost. Write a better exception handler that is interruptable and enables interrupts as quickly as possible. B.5 [15] <§B.7> The simple exception handler always jumps back to the instruction following the exception. This works fine unless the instruction that causes the exception is in the delay slot of a branch. In that case, the next instruction is the target of the branch. Write a better handler that uses the EPC register to determine which instruction should be executed after the exception. B.6 [5] <§B.9> Using SPIM, write and test an adding machine program that repeatedly reads in integers and adds them into a running sum. The program should stop when it gets an input that is 0, printing out the sum at that point. Use the SPIM system calls described on pages B-43 and B-45. B.7 [5] <§B.9> Using SPIM, write and test a program that reads in three integers and prints out the sum of the largest two of the three. Use the SPIM system calls described on pages B-43 and B-45. You can break ties arbitrarily. B.8 [5] <§B.9> Using SPIM, write and test a program that reads in a positive integer using the SPIM system calls. If the integer is not positive, the program should terminate with the message “Invalid Entry”; otherwise the program should print out the names of the digits of the integers, delimited by exactly one space. For example, if the user entered “728,” the output would be “Seven Two Eight.” B.9 [25] <§B.9> Write and test a MIPS assembly language program to compute and print the first 100 prime numbers. A number n is prime if no numbers except 1 and n divide it evenly. You should implement two routines: ■ test_prime(n): Return 1 if n is prime and 0 if n is not prime. ■ main(): Iterate over the integers, testing if each is prime. Print the first 100 numbers that are prime. Test your programs by running them on SPIM.
B.10 [10] <§§B.6, B.9> Using SPIM, write and test a recursive program for solving the classic mathematical recreation, the Towers of Hanoi puzzle. (This will require the use of stack frames to support recursion.) The puzzle consists of three pegs (1, 2, and 3) and n disks (the number n can vary; typical values might be in the range from 1 to 8). Disk 1 is smaller than disk 2, which is in turn smaller than disk 3, and so forth, with disk n being the largest. Initially, all the disks are on peg 1, starting with disk n on the bottom, disk n − 1 on top of that, and so forth, up to disk 1 on the top. The goal is to move all the disks to peg 2. You may only move one disk at a time, that is, the top disk from any of the three pegs onto the top of either of the other two pegs. Moreover, there is a constraint: You must not place a larger disk on top of a smaller disk. The C program below can be used to help write your assembly language program (print_string, print_int, and read_int stand for the corresponding SPIM system calls):

/* move n smallest disks from start to finish using extra */
void hanoi(int n, int start, int finish, int extra) {
    if (n != 0) {
        hanoi(n - 1, start, extra, finish);
        print_string("Move disk ");
        print_int(n);
        print_string(" from peg ");
        print_int(start);
        print_string(" to peg ");
        print_int(finish);
        print_string(".\n");
        hanoi(n - 1, extra, finish, start);
    }
}

main() {
    int n;
    print_string("Enter number of disks> ");
    n = read_int();
    hanoi(n, 1, 2, 3);
    return 0;
}

B.12 Exercises B-83
1 INTRODUCTION A modern computer consists of one or more processors, some main memory, disks, printers, a keyboard, a mouse, a display, network interfaces, and various other input/output devices. All in all, a complex system. If every application programmer had to understand how all these things work in detail, no code would ever get written. Furthermore, managing all these components and using them optimally is an exceedingly challenging job. For this reason, computers are equipped with a layer of software called the operating system, whose job is to provide user programs with a better, simpler, cleaner model of the computer and to handle managing all the resources just mentioned. Operating systems are the subject of this book. Most readers will have had some experience with an operating system such as Windows, Linux, FreeBSD, or OS X, but appearances can be deceiving. The program that users interact with, usually called the shell when it is text based and the GUI (Graphical User Interface)—which is pronounced ‘‘gooey’’—when it uses icons, is actually not part of the operating system, although it uses the operating system to get its work done. A simple overview of the main components under discussion here is given in Fig. 1-1. Here we see the hardware at the bottom. The hardware consists of chips, boards, disks, a keyboard, a monitor, and similar physical objects. On top of the hardware is the software. Most computers have two modes of operation: kernel mode and user mode. The operating system, the most fundamental piece of software, runs in kernel mode (also called supervisor mode). In this mode it has 1
2 INTRODUCTION CHAP. 1 complete access to all the hardware and can execute any instruction the machine is capable of executing. The rest of the software runs in user mode, in which only a subset of the machine instructions is available. In particular, those instructions that affect control of the machine or do I/O (Input/Output) are forbidden to user-mode programs. We will come back to the difference between kernel mode and user mode repeatedly throughout this book. It plays a crucial role in how operating systems work. Figure 1-1. Where the operating system fits in. The user interface program, shell or GUI, is the lowest level of user-mode software, and allows the user to start other programs, such as a Web browser, email reader, or music player. These programs, too, make heavy use of the operating system. The placement of the operating system is shown in Fig. 1-1. It runs on the bare hardware and provides the base for all the other software. An important distinction between the operating system and normal (user-mode) software is that if a user does not like a particular email reader, he† is free to get a different one or write his own if he so chooses; he is not free to write his own clock interrupt handler, which is part of the operating system and is protected by hardware against attempts by users to modify it. This distinction, however, is sometimes blurred in embedded systems (which may not have kernel mode) or interpreted systems (such as Java-based systems that use interpretation, not hardware, to separate the components). Also, in many systems there are programs that run in user mode but help the operating system or perform privileged functions. For example, there is often a program that allows users to change their passwords. It is not part of the operating system and does not run in kernel mode, but it clearly carries out a sensitive function and has to be protected in a special way. In some systems, this idea is carried to an extreme, and pieces of what is traditionally considered to be the operating † ‘‘He’’ should be read as ‘‘he or she’’ throughout the book.
SEC. 1.1 WHAT IS AN OPERATING SYSTEM? 3 system (such as the file system) run in user space. In such systems, it is difficult to draw a clear boundary. Everything running in kernel mode is clearly part of the operating system, but some programs running outside it are arguably also part of it, or at least closely associated with it. Operating systems differ from user (i.e., application) programs in ways other than where they reside. In particular, they are huge, complex, and long-lived. The source code of the heart of an operating system like Linux or Windows is on the order of five million lines of code or more. To conceive of what this means, think of printing out five million lines in book form, with 50 lines per page and 1000 pages per volume (larger than this book). It would take 100 volumes to list an operating system of this size—essentially an entire bookcase. Can you imagine getting a job maintaining an operating system and on the first day having your boss bring you to a bookcase with the code and say: ‘‘Go learn that.’’ And this is only for the part that runs in the kernel. When essential shared libraries are included, Windows is well over 70 million lines of code or 10 to 20 bookcases. And this excludes basic application software (things like Windows Explorer, Windows Media Player, and so on). It should be clear now why operating systems live a long time—they are very hard to write, and having written one, the owner is loath to throw it out and start again. Instead, such systems evolve over long periods of time. Windows 95/98/Me was basically one operating system and Windows NT/2000/XP/Vista/Windows 7 is a different one. They look similar to the users because Microsoft made very sure that the user interface of Windows 2000/XP/Vista/Windows 7 was quite similar to that of the system it was replacing, mostly Windows 98. Nevertheless, there were very good reasons why Microsoft got rid of Windows 98. We will come to these when we study Windows in detail in Chap. 11. Besides Windows, the other main example we will use throughout this book is UNIX and its variants and clones. It, too, has evolved over the years, with versions like System V, Solaris, and FreeBSD being derived from the original system, whereas Linux is a fresh code base, although very closely modeled on UNIX and highly compatible with it. We will use examples from UNIX throughout this book and look at Linux in detail in Chap. 10. In this chapter we will briefly touch on a number of key aspects of operating systems, including what they are, their history, what kinds are around, some of the basic concepts, and their structure. We will come back to many of these important topics in later chapters in more detail. 1.1 WHAT IS AN OPERATING SYSTEM? It is hard to pin down what an operating system is other than saying it is the software that runs in kernel mode—and even that is not always true. Part of the problem is that operating systems perform two essentially unrelated functions:
4 INTRODUCTION CHAP. 1 providing application programmers (and application programs, naturally) a clean abstract set of resources instead of the messy hardware ones and managing these hardware resources. Depending on who is doing the talking, you might hear mostly about one function or the other. Let us now look at both. 1.1.1 The Operating System as an Extended Machine The architecture (instruction set, memory organization, I/O, and bus structure) of most computers at the machine-language level is primitive and awkward to program, especially for input/output. To make this point more concrete, consider modern SATA (Serial ATA) hard disks used on most computers. A book (Anderson, 2007) describing an early version of the interface to the disk—what a programmer would have to know to use the disk—ran over 450 pages. Since then, the interface has been revised multiple times and is more complicated than it was in 2007. Clearly, no sane programmer would want to deal with this disk at the hardware level. Instead, a piece of software, called a disk driver, deals with the hardware and provides an interface to read and write disk blocks, without getting into the details. Operating systems contain many drivers for controlling I/O devices. But even this level is much too low for most applications. For this reason, all operating systems provide yet another layer of abstraction for using disks: files. Using this abstraction, programs can create, write, and read files, without having to deal with the messy details of how the hardware actually works. This abstraction is the key to managing all this complexity. Good abstractions turn a nearly impossible task into two manageable ones. The first is defining and implementing the abstractions. The second is using these abstractions to solve the problem at hand. One abstraction that almost every computer user understands is the file, as mentioned above. It is a useful piece of information, such as a digital photo, saved email message, song, or Web page. It is much easier to deal with photos, emails, songs, and Web pages than with the details of SATA (or other) disks. The job of the operating system is to create good abstractions and then implement and manage the abstract objects thus created. In this book, we will talk a lot about abstractions. They are one of the keys to understanding operating systems. This point is so important that it is worth repeating in different words. With all due respect to the industrial engineers who so carefully designed the Macintosh, hardware is ugly. Real processors, memories, disks, and other devices are very complicated and present difficult, awkward, idiosyncratic, and inconsistent interfaces to the people who have to write software to use them. Sometimes this is due to the need for backward compatibility with older hardware. Other times it is an attempt to save money. Often, however, the hardware designers do not realize (or care) how much trouble they are causing for the software. One of the major tasks of the operating system is to hide the hardware and present programs (and their programmers) with nice, clean, elegant, consistent, abstractions to work with instead. Operating systems turn the ugly into the beautiful, as shown in Fig. 1-2.
SEC. 1.1 WHAT IS AN OPERATING SYSTEM? 5 Figure 1-2. Operating systems turn ugly hardware into beautiful abstractions. It should be noted that the operating system’s real customers are the application programs (via the application programmers, of course). They are the ones who deal directly with the operating system and its abstractions. In contrast, end users deal with the abstractions provided by the user interface, either a command-line shell or a graphical interface. While the abstractions at the user interface may be similar to the ones provided by the operating system, this is not always the case. To make this point clearer, consider the normal Windows desktop and the line-oriented command prompt. Both are programs running on the Windows operating system and use the abstractions Windows provides, but they offer very different user interfaces. Similarly, a Linux user running Gnome or KDE sees a very different interface than a Linux user working directly on top of the underlying X Window System, but the underlying operating system abstractions are the same in both cases. In this book, we will study the abstractions provided to application programs in great detail, but say rather little about user interfaces. That is a large and important subject, but one only peripherally related to operating systems. 1.1.2 The Operating System as a Resource Manager The concept of an operating system as primarily providing abstractions to application programs is a top-down view. An alternative, bottom-up, view holds that the operating system is there to manage all the pieces of a complex system. Modern computers consist of processors, memories, timers, disks, mice, network interfaces, printers, and a wide variety of other devices. In the bottom-up view, the job of the operating system is to provide for an orderly and controlled allocation of the processors, memories, and I/O devices among the various programs wanting them. Modern operating systems allow multiple programs to be in memory and run at the same time. Imagine what would happen if three programs running on some computer all tried to print their output simultaneously on the same printer. The first
6 INTRODUCTION CHAP. 1 few lines of printout might be from program 1, the next few from program 2, then some from program 3, and so forth. The result would be utter chaos. The operating system can bring order to the potential chaos by buffering all the output destined for the printer on the disk. When one program is finished, the operating system can then copy its output from the disk file where it has been stored for the printer, while at the same time the other program can continue generating more output, oblivious to the fact that the output is not really going to the printer (yet). When a computer (or network) has more than one user, the need for managing and protecting the memory, I/O devices, and other resources is even more apparent, since the users might otherwise interfere with one another. In addition, users often need to share not only hardware, but information (files, databases, etc.) as well. In short, this view of the operating system holds that its primary task is to keep track of which programs are using which resource, to grant resource requests, to account for usage, and to mediate conflicting requests from different programs and users. Resource management includes multiplexing (sharing) resources in two different ways: in time and in space. When a resource is time multiplexed, different programs or users take turns using it. First one of them gets to use the resource, then another, and so on. For example, with only one CPU and multiple programs that want to run on it, the operating system first allocates the CPU to one program, then, after it has run long enough, another program gets to use the CPU, then another, and then eventually the first one again. Determining how the resource is time multiplexed—who goes next and for how long—is the task of the operating system. Another example of time multiplexing is sharing the printer. When multiple print jobs are queued up for printing on a single printer, a decision has to be made about which one is to be printed next. The other kind of multiplexing is space multiplexing. Instead of the customers taking turns, each one gets part of the resource. For example, main memory is normally divided up among several running programs, so each one can be resident at the same time (for example, in order to take turns using the CPU). Assuming there is enough memory to hold multiple programs, it is more efficient to hold several programs in memory at once rather than give one of them all of it, especially if it only needs a small fraction of the total. Of course, this raises issues of fairness, protection, and so on, and it is up to the operating system to solve them. Another resource that is space multiplexed is the disk. In many systems a single disk can hold files from many users at the same time. Allocating disk space and keeping track of who is using which disk blocks is a typical operating system task. 1.2 HISTORY OF OPERATING SYSTEMS Operating systems have been evolving through the years. In the following sections we will briefly look at a few of the highlights. Since operating systems have historically been closely tied to the architecture of the computers on which they
SEC. 1.2 HISTORY OF OPERATING SYSTEMS 7 run, we will look at successive generations of computers to see what their operating systems were like. This mapping of operating system generations to computer generations is crude, but it does provide some structure where there would otherwise be none. The progression given below is largely chronological, but it has been a bumpy ride. Each development did not wait until the previous one nicely finished before getting started. There was a lot of overlap, not to mention many false starts and dead ends. Take this as a guide, not as the last word. The first true digital computer was designed by the English mathematician Charles Babbage (1792–1871). Although Babbage spent most of his life and fortune trying to build his ‘‘analytical engine,’’ he never got it working properly because it was purely mechanical, and the technology of his day could not produce the required wheels, gears, and cogs to the high precision that he needed. Needless to say, the analytical engine did not have an operating system. As an interesting historical aside, Babbage realized that he would need software for his analytical engine, so he hired a young woman named Ada Lovelace, who was the daughter of the famed British poet Lord Byron, as the world’s first programmer. The programming language Ada® is named after her.
8 INTRODUCTION CHAP. 1 straightforward mathematical and numerical calculations, such as grinding out tables of sines, cosines, and logarithms, or computing artillery trajectories. By the early 1950s, the routine had improved somewhat with the introduction of punched cards. It was now possible to write programs on cards and read them in instead of using plugboards; otherwise, the procedure was the same. 1.2.2 The Second Generation (1955–65): Transistors and Batch Systems The introduction of the transistor in the mid-1950s changed the picture radi- cally. Computers became reliable enough that they could be manufactured and sold to paying customers with the expectation that they would continue to function long enough to get some useful work done. For the first time, there was a clear separa- tion between designers, builders, operators, programmers, and maintenance per- sonnel. These machines, now called mainframes, were locked away in large, specially air-conditioned computer rooms, with staffs of professional operators to run them. Only large corporations or major government agencies or universities could afford the multimillion-dollar price tag. To run a job (i.e., a program or set of programs), a programmer would first write the program on paper (in FORTRAN or assem- bler), then punch it on cards. He would then bring the card deck down to the input room and hand it to one of the operators and go drink coffee until the output was ready. When the computer finished whatever job it was currently running, an operator would go over to the printer and tear off the output and carry it over to the output room, so that the programmer could collect it later. Then he would take one of the card decks that had been brought from the input room and read it in. If the FOR- TRAN compiler was needed, the operator would have to get it from a file cabinet and read it in. Much computer time was wasted while operators were walking around the machine room. Given the high cost of the equipment, it is not surprising that people quickly looked for ways to reduce the wasted time. The solution generally adopted was the batch system. The idea behind it was to collect a tray full of jobs in the input room and then read them onto a magnetic tape using a small (relatively) inexpen- sive computer, such as the IBM 1401, which was quite good at reading cards, copying tapes, and printing output, but not at all good at numerical calculations. Other, much more expensive machines, such as the IBM 7094, were used for the real computing. This situation is shown in Fig. 1-3. After about an hour of collecting a batch of jobs, the cards were read onto a magnetic tape, which was carried into the machine room, where it was mounted on a tape drive. The operator then loaded a special program (the ancestor of today’s operating system), which read the first job from tape and ran it. The output was written onto a second tape, instead of being printed. After each job finished, the operating system automatically read the next job from the tape and began running
SEC. 1.2 HISTORY OF OPERATING SYSTEMS 9 1401 7094 1401 (a) (b) (c) (d) (e) (f) Card reader Tape drive Input tape Output tape System tape Printer Figure 1-3. An early batch system. (a) Programmers bring cards to 1401. (b) 1401 reads batch of jobs onto tape. (c) Operator carries input tape to 7094. (d) 7094 does computing. (e) Operator carries output tape to 1401. (f) 1401 prints output. it. When the whole batch was done, the operator removed the input and output tapes, replaced the input tape with the next batch, and brought the output tape to a 1401 for printing off line (i.e., not connected to the main computer). The structure of a typical input job is shown in Fig. 1-4. It started out with a $JOB card, specifying the maximum run time in minutes, the account number to be charged, and the programmer’s name. Then came a $FORTRAN card, telling the operating system to load the FORTRAN compiler from the system tape. It was di- rectly followed by the program to be compiled, and then a $LOAD card, directing the operating system to load the object program just compiled. (Compiled pro- grams were often written on scratch tapes and had to be loaded explicitly.) Next came the $RUN card, telling the operating system to run the program with the data following it. Finally, the $END card marked the end of the job. These primitive control cards were the forerunners of modern shells and command-line inter- preters. Large second-generation computers were used mostly for scientific and engin- eering calculations, such as solving the partial differential equations that often oc- cur in physics and engineering. They were largely programmed in FORTRAN and assembly language. Typical operating systems were FMS (the Fortran Monitor System) and IBSYS, IBM’s operating system for the 7094. 1.2.3 The Third Generation (1965–1980): ICs and Multiprogramming By the early 1960s, most computer manufacturers had two distinct, incompati- ble, product lines. On the one hand, there were the word-oriented, large-scale sci- entific computers, such as the 7094, which were used for industrial-strength nu- merical calculations in science and engineering. On the other hand, there were the
10 INTRODUCTION CHAP. 1 $JOB, 10,7710802, MARVIN TANENBAUM $FORTRAN $LOAD $RUN $END Data for program FORTRAN program Figure 1-4. Structure of a typical FMS job. character-oriented, commercial computers, such as the 1401, which were widely used for tape sorting and printing by banks and insurance companies. Developing and maintaining two completely different product lines was an ex- pensive proposition for the manufacturers. In addition, many new computer cus- tomers initially needed a small machine but later outgrew it and wanted a bigger machine that would run all their old programs, but faster. IBM attempted to solve both of these problems at a single stroke by introduc- ing the System/360. The 360 was a series of software-compatible machines rang- ing from 1401-sized models to much larger ones, more powerful than the mighty 7094. The machines differed only in price and performance (maximum memory, processor speed, number of I/O devices permitted, and so forth). Since they all had the same architecture and instruction set, programs written for one machine could run on all the others—at least in theory. (But as Yogi Berra reputedly said: ‘‘In theory, theory and practice are the same; in practice, they are not.’’) Since the 360 was designed to handle both scientific (i.e., numerical) and commercial computing, a single family of machines could satisfy the needs of all customers. In subsequent years, IBM came out with backward compatible successors to the 360 line, using more modern technology, known as the 370, 4300, 3080, and 3090. The zSeries is the most recent descendant of this line, although it has diverged considerably from the original. The IBM 360 was the first major computer line to use (small-scale) ICs (Inte- grated Circuits), thus providing a major price/performance advantage over the second-generation machines, which were built up from individual transistors. It
SEC. 1.2 HISTORY OF OPERATING SYSTEMS 11 was an immediate success, and the idea of a family of compatible computers was soon adopted by all the other major manufacturers. The descendants of these ma- chines are still in use at computer centers today. Now adays they are often used for managing huge databases (e.g., for airline reservation systems) or as servers for World Wide Web sites that must process thousands of requests per second. The greatest strength of the ‘‘single-family’’ idea was simultaneously its great- est weakness. The original intention was that all software, including the operating system, OS/360, had to work on all models. It had to run on small systems, which often just replaced 1401s for copying cards to tape, and on very large systems, which often replaced 7094s for doing weather forecasting and other heavy comput- ing. It had to be good on systems with few peripherals and on systems with many peripherals. It had to work in commercial environments and in scientific environ- ments. Above all, it had to be efficient for all of these different uses. There was no way that IBM (or anybody else for that matter) could write a piece of software to meet all those conflicting requirements. The result was an enormous and extraordinarily complex operating system, probably two to three orders of magnitude larger than FMS. It consisted of millions of lines of assembly language written by thousands of programmers, and contained thousands upon thousands of bugs, which necessitated a continuous stream of new releases in an attempt to correct them. Each new release fixed some bugs and introduced new ones, so the number of bugs probably remained constant over time. One of the designers of OS/360, Fred Brooks, subsequently wrote a witty and incisive book (Brooks, 1995) describing his experiences with OS/360. While it would be impossible to summarize the book here, suffice it to say that the cover shows a herd of prehistoric beasts stuck in a tar pit. The cover of Silberschatz et al. (2012) makes a similar point about operating systems being dinosaurs. Despite its enormous size and problems, OS/360 and the similar third-genera- tion operating systems produced by other computer manufacturers actually satis- fied most of their customers reasonably well. They also popularized several key techniques absent in second-generation operating systems. Probably the most im- portant of these was multiprogramming. On the 7094, when the current job paused to wait for a tape or other I/O operation to complete, the CPU simply sat idle until the I/O finished. With heavily CPU-bound scientific calculations, I/O is infrequent, so this wasted time is not significant. With commercial data processing, the I/O wait time can often be 80 or 90% of the total time, so something had to be done to avoid having the (expensive) CPU be idle so much. The solution that evolved was to partition memory into several pieces, with a different job in each partition, as shown in Fig. 1-5. While one job was waiting for I/O to complete, another job could be using the CPU. If enough jobs could be held in main memory at once, the CPU could be kept busy nearly 100% of the time. Having multiple jobs safely in memory at once requires special hardware to protect each job against snooping and mischief by the other ones, but the 360 and other third-generation systems were equipped with this hardware.
Figure 1-5. A multiprogramming system with three jobs in memory, each in its own memory partition above the operating system.

Another major feature present in third-generation operating systems was the ability to read jobs from cards onto the disk as soon as they were brought to the computer room. Then, whenever a running job finished, the operating system could load a new job from the disk into the now-empty partition and run it. This technique is called spooling (from Simultaneous Peripheral Operation On Line) and was also used for output. With spooling, the 1401s were no longer needed, and much carrying of tapes disappeared.

Although third-generation operating systems were well suited for big scientific calculations and massive commercial data-processing runs, they were still basically batch systems. Many programmers pined for the first-generation days when they had the machine all to themselves for a few hours, so they could debug their programs quickly. With third-generation systems, the time between submitting a job and getting back the output was often several hours, so a single misplaced comma could cause a compilation to fail, and the programmer to waste half a day. Programmers did not like that very much.

This desire for quick response time paved the way for timesharing, a variant of multiprogramming, in which each user has an online terminal. In a timesharing system, if 20 users are logged in and 17 of them are thinking or talking or drinking coffee, the CPU can be allocated in turn to the three jobs that want service. Since people debugging programs usually issue short commands (e.g., compile a five-page procedure†) rather than long ones (e.g., sort a million-record file), the computer can provide fast, interactive service to a number of users and perhaps also work on big batch jobs in the background when the CPU is otherwise idle. The first general-purpose timesharing system, CTSS (Compatible Time Sharing System), was developed at M.I.T. on a specially modified 7094 (Corbató et al., 1962). However, timesharing did not really become popular until the necessary protection hardware became widespread during the third generation.

After the success of the CTSS system, M.I.T., Bell Labs, and General Electric (at that time a major computer manufacturer) decided to embark on the development of a "computer utility," that is, a machine that would support some hundreds of simultaneous timesharing users.

†We will use the terms "procedure," "subroutine," and "function" interchangeably in this book.
Their model was the electricity system—when you need electric power, you just stick a plug in the wall, and within reason, as much power as you need will be there. The designers of this system, known as MULTICS (MULTiplexed Information and Computing Service), envisioned one huge machine providing computing power for everyone in the Boston area. The idea that machines 10,000 times faster than their GE-645 mainframe would be sold (for well under $1000) by the millions only 40 years later was pure science fiction. Sort of like the idea of supersonic trans-Atlantic undersea trains now.

MULTICS was a mixed success. It was designed to support hundreds of users on a machine only slightly more powerful than an Intel 386-based PC, although it had much more I/O capacity. This is not quite as crazy as it sounds, since in those days people knew how to write small, efficient programs, a skill that has subsequently been completely lost. There were many reasons that MULTICS did not take over the world, not the least of which is that it was written in the PL/I programming language, and the PL/I compiler was years late and barely worked at all when it finally arrived. In addition, MULTICS was enormously ambitious for its time, much like Charles Babbage's analytical engine in the nineteenth century.

To make a long story short, MULTICS introduced many seminal ideas into the computer literature, but turning it into a serious product and a major commercial success was a lot harder than anyone had expected. Bell Labs dropped out of the project, and General Electric quit the computer business altogether. However, M.I.T. persisted and eventually got MULTICS working. It was ultimately sold as a commercial product by the company (Honeywell) that bought GE's computer business and was installed by about 80 major companies and universities worldwide. While their numbers were small, MULTICS users were fiercely loyal. General Motors, Ford, and the U.S. National Security Agency, for example, shut down their MULTICS systems only in the late 1990s, 30 years after MULTICS was released, after years of trying to get Honeywell to update the hardware.

By the end of the 20th century, the concept of a computer utility had fizzled out, but it may well come back in the form of cloud computing, in which relatively small computers (including smartphones, tablets, and the like) are connected to servers in vast and distant data centers where all the computing is done, with the local computer just handling the user interface. The motivation here is that most people do not want to administrate an increasingly complex and finicky computer system and would prefer to have that work done by a team of professionals, for example, people working for the company running the data center. E-commerce is already evolving in this direction, with various companies running email on multiprocessor servers to which simple client machines connect, very much in the spirit of the MULTICS design.

Despite its lack of commercial success, MULTICS had a huge influence on subsequent operating systems (especially UNIX and its derivatives, FreeBSD, Linux, iOS, and Android). It is described in several papers and a book (Corbató et al., 1972; Corbató and Vyssotsky, 1965; Daley and Dennis, 1968; Organick, 1972; and Saltzer, 1974).
It also has an active Website, located at www.multicians.org, with much information about the system, its designers, and its users.

Another major development during the third generation was the phenomenal growth of minicomputers, starting with the DEC PDP-1 in 1961. The PDP-1 had only 4K of 18-bit words, but at $120,000 per machine (less than 5% of the price of a 7094), it sold like hotcakes. For certain kinds of nonnumerical work, it was almost as fast as the 7094 and gave birth to a whole new industry. It was quickly followed by a series of other PDPs (unlike IBM's family, all incompatible) culminating in the PDP-11.

One of the computer scientists at Bell Labs who had worked on the MULTICS project, Ken Thompson, subsequently found a small PDP-7 minicomputer that no one was using and set out to write a stripped-down, one-user version of MULTICS. This work later developed into the UNIX operating system, which became popular in the academic world, with government agencies, and with many companies.

The history of UNIX has been told elsewhere (e.g., Salus, 1994). Part of that story will be given in Chap. 10. For now, suffice it to say that because the source code was widely available, various organizations developed their own (incompatible) versions, which led to chaos. Two major versions developed, System V, from AT&T, and BSD (Berkeley Software Distribution) from the University of California at Berkeley. These had minor variants as well. To make it possible to write programs that could run on any UNIX system, IEEE developed a standard for UNIX, called POSIX, that most versions of UNIX now support. POSIX defines a minimal system-call interface that conformant UNIX systems must support. In fact, some other operating systems now also support the POSIX interface.

As an aside, it is worth mentioning that in 1987, the author released a small clone of UNIX, called MINIX, for educational purposes. Functionally, MINIX is very similar to UNIX, including POSIX support. Since that time, the original version has evolved into MINIX 3, which is highly modular and focused on very high reliability. It has the ability to detect and replace faulty or even crashed modules (such as I/O device drivers) on the fly without a reboot and without disturbing running programs. Its focus is on providing very high dependability and availability. A book describing its internal operation and listing the source code in an appendix is also available (Tanenbaum and Woodhull, 2006). The MINIX 3 system is available for free (including all the source code) over the Internet at www.minix3.org.

The desire for a free production (as opposed to educational) version of MINIX led a Finnish student, Linus Torvalds, to write Linux. This system was directly inspired by and developed on MINIX and originally supported various MINIX features (e.g., the MINIX file system). It has since been extended in many ways by many people but still retains some underlying structure common to MINIX and to UNIX. Readers interested in a detailed history of Linux and the open source movement might want to read Glyn Moody's (2001) book. Most of what will be said about UNIX in this book thus applies to System V, MINIX, Linux, and other versions and clones of UNIX as well.
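To give a feel for the minimal system-call interface POSIX defines, here is a short example in ordinary portable C (an illustration, not code from any particular system) that copies a file using only the POSIX calls open, read, write, and close. The same source compiles unchanged on System V derivatives, MINIX, Linux, and the BSDs, which is precisely what the standard was for.

    #include <fcntl.h>
    #include <unistd.h>

    /* Copy src to dst using only calls from the minimal POSIX interface. */
    int copy(const char *src, const char *dst)
    {
        char buf[4096];
        ssize_t n;

        int in = open(src, O_RDONLY);
        if (in < 0)
            return -1;
        int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (out < 0) {
            close(in);
            return -1;
        }
        while ((n = read(in, buf, sizeof buf)) > 0)
            write(out, buf, n);          /* each call traps into the kernel */
        close(in);
        close(out);
        return 0;
    }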
1.2.4 The Fourth Generation (1980–Present): Personal Computers

With the development of LSI (Large Scale Integration) circuits—chips containing thousands of transistors on a square centimeter of silicon—the age of the personal computer dawned. In terms of architecture, personal computers (initially called microcomputers) were not all that different from minicomputers of the PDP-11 class, but in terms of price they certainly were different. Where the minicomputer made it possible for a department in a company or university to have its own computer, the microprocessor chip made it possible for a single individual to have his or her own personal computer.

In 1974, when Intel came out with the 8080, the first general-purpose 8-bit CPU, it wanted an operating system for the 8080, in part to be able to test it. Intel asked one of its consultants, Gary Kildall, to write one. Kildall and a friend first built a controller for the newly released Shugart Associates 8-inch floppy disk and hooked the floppy disk up to the 8080, thus producing the first microcomputer with a disk. Kildall then wrote a disk-based operating system called CP/M (Control Program for Microcomputers) for it. Since Intel did not think that disk-based microcomputers had much of a future, when Kildall asked for the rights to CP/M, Intel granted his request. Kildall then formed a company, Digital Research, to further develop and sell CP/M.

In 1977, Digital Research rewrote CP/M to make it suitable for running on the many microcomputers using the 8080, Zilog Z80, and other CPU chips. Many application programs were written to run on CP/M, allowing it to completely dominate the world of microcomputing for about 5 years.

In the early 1980s, IBM designed the IBM PC and looked around for software to run on it. People from IBM contacted Bill Gates to license his BASIC interpreter. They also asked him if he knew of an operating system to run on the PC. Gates suggested that IBM contact Digital Research, then the world's dominant operating systems company. Making what was surely the worst business decision in recorded history, Kildall refused to meet with IBM, sending a subordinate instead. To make matters even worse, his lawyer even refused to sign IBM's nondisclosure agreement covering the not-yet-announced PC. Consequently, IBM went back to Gates asking if he could provide them with an operating system.

When IBM came back, Gates realized that a local computer manufacturer, Seattle Computer Products, had a suitable operating system, DOS (Disk Operating System). He approached them and asked to buy it (allegedly for $75,000), which they readily accepted. Gates then offered IBM a DOS/BASIC package, which IBM accepted. IBM wanted certain modifications, so Gates hired the person who wrote DOS, Tim Paterson, as an employee of Gates' fledgling company, Microsoft, to make them. The revised system was renamed MS-DOS (MicroSoft Disk Operating System) and quickly came to dominate the IBM PC market. A key factor here was Gates' (in retrospect, extremely wise) decision to sell MS-DOS to computer companies for bundling with their hardware, compared to Kildall's attempt to sell CP/M to end users one at a time (at least initially).
After all this transpired, Kildall died suddenly and unexpectedly from causes that have not been fully disclosed.

By the time the successor to the IBM PC, the IBM PC/AT, came out in 1983 with the Intel 80286 CPU, MS-DOS was firmly entrenched and CP/M was on its last legs. MS-DOS was later widely used on the 80386 and 80486. Although the initial version of MS-DOS was fairly primitive, subsequent versions included more advanced features, including many taken from UNIX. (Microsoft was well aware of UNIX, even selling a microcomputer version of it called XENIX during the company's early years.)

CP/M, MS-DOS, and other operating systems for early microcomputers were all based on users typing in commands from the keyboard. That eventually changed due to research done by Doug Engelbart at Stanford Research Institute in the 1960s. Engelbart invented the Graphical User Interface, complete with windows, icons, menus, and mouse. These ideas were adopted by researchers at Xerox PARC and incorporated into machines they built.

One day, Steve Jobs, who co-invented the Apple computer in his garage, visited PARC, saw a GUI, and instantly realized its potential value, something Xerox management famously did not. This strategic blunder of gargantuan proportions led to a book entitled Fumbling the Future (Smith and Alexander, 1988). Jobs then embarked on building an Apple with a GUI. This project led to the Lisa, which was too expensive and failed commercially. Jobs' second attempt, the Apple Macintosh, was a huge success, not only because it was much cheaper than the Lisa, but also because it was user friendly, meaning that it was intended for users who not only knew nothing about computers but furthermore had absolutely no intention whatsoever of learning. In the creative world of graphic design, professional digital photography, and professional digital video production, Macintoshes are very widely used and their users are very enthusiastic about them. In 1999, Apple adopted a kernel derived from Carnegie Mellon University's Mach microkernel, which was originally developed to replace the kernel of BSD UNIX. Thus, Mac OS X is a UNIX-based operating system, albeit with a very distinctive interface.

When Microsoft decided to build a successor to MS-DOS, it was strongly influenced by the success of the Macintosh. It produced a GUI-based system called Windows, which originally ran on top of MS-DOS (i.e., it was more like a shell than a true operating system). For about 10 years, from 1985 to 1995, Windows was just a graphical environment on top of MS-DOS. However, starting in 1995 a freestanding version, Windows 95, was released that incorporated many operating system features into it, using the underlying MS-DOS system only for booting and running old MS-DOS programs. In 1998, a slightly modified version of this system, called Windows 98, was released. Nevertheless, both Windows 95 and Windows 98 still contained a large amount of 16-bit Intel assembly language.

Another Microsoft operating system, Windows NT (where the NT stands for New Technology), was compatible with Windows 95 at a certain level but was a complete rewrite from scratch internally.
It was a full 32-bit system. The lead designer for Windows NT was David Cutler, who was also one of the designers of the VAX VMS operating system, so some ideas from VMS are present in NT. In fact, so many ideas from VMS were present in it that the owner of VMS, DEC, sued Microsoft. The case was settled out of court for an amount of money requiring many digits to express. Microsoft expected that the first version of NT would kill off MS-DOS and all other versions of Windows since it was a vastly superior system, but it fizzled. Only with Windows NT 4.0 did it finally catch on in a big way, especially on corporate networks. Version 5 of Windows NT was renamed Windows 2000 in early 1999. It was intended to be the successor to both Windows 98 and Windows NT 4.0.

That did not quite work out either, so Microsoft came out with yet another version of Windows 98 called Windows Me (Millennium Edition). In 2001, a slightly upgraded version of Windows 2000, called Windows XP, was released. That version had a much longer run (6 years), basically replacing all previous versions of Windows.

Still the spawning of versions continued unabated. After Windows 2000, Microsoft broke up the Windows family into a client and a server line. The client line was based on XP and its successors, while the server line included Windows Server 2003 and Windows 2008. A third line, for the embedded world, appeared a little later. All of these versions of Windows forked off their variations in the form of service packs. It was enough to drive some administrators (and writers of operating systems textbooks) balmy.

Then in January 2007, Microsoft finally released the successor to Windows XP, called Vista. It came with a new graphical interface, improved security, and many new or upgraded user programs. Microsoft hoped it would replace Windows XP completely, but it never did. Instead, it received much criticism and a bad press, mostly due to the high system requirements, restrictive licensing terms, and support for Digital Rights Management, techniques that made it harder for users to copy protected material.

With the arrival of Windows 7, a new and much less resource-hungry version of the operating system, many people decided to skip Vista altogether. Windows 7 did not introduce too many new features, but it was relatively small and quite stable. In less than three weeks, Windows 7 had obtained more market share than Vista in seven months. In 2012, Microsoft launched its successor, Windows 8, an operating system with a completely new look and feel, geared for touch screens. The company hopes that the new design will become the dominant operating system on a much wider variety of devices: desktops, laptops, notebooks, tablets, phones, and home theater PCs. So far, however, the market penetration is slow compared to Windows 7.

The other major contender in the personal computer world is UNIX (and its various derivatives). UNIX is strongest on network and enterprise servers but is also often present on desktop computers, notebooks, tablets, and smartphones.
On x86-based computers, Linux is becoming a popular alternative to Windows for students and increasingly many corporate users.

As an aside, throughout this book we will use the term x86 to refer to all modern processors based on the family of instruction-set architectures that started with the 8086 in the 1970s. There are many such processors, manufactured by companies like AMD and Intel, and under the hood they often differ considerably: processors may be 32 bits or 64 bits with few or many cores and pipelines that may be deep or shallow, and so on. Nevertheless, to the programmer, they all look quite similar and they can all still run 8086 code that was written 35 years ago. Where the difference is important, we will refer to explicit models instead—and use x86-32 and x86-64 to indicate 32-bit and 64-bit variants.

FreeBSD is also a popular UNIX derivative, originating from the BSD project at Berkeley. All modern Macintosh computers run a modified version of FreeBSD (OS X). UNIX is also standard on workstations powered by high-performance RISC chips. Its derivatives are widely used on mobile devices, such as those running iOS 7 or Android.

Many UNIX users, especially experienced programmers, prefer a command-based interface to a GUI, so nearly all UNIX systems support a windowing system called the X Window System (also known as X11) produced at M.I.T. This system handles the basic window management, allowing users to create, delete, move, and resize windows using a mouse. Often a complete GUI, such as Gnome or KDE, is available to run on top of X11, giving UNIX a look and feel something like the Macintosh or Microsoft Windows, for those UNIX users who want such a thing.

An interesting development that began taking place during the mid-1980s is the growth of networks of personal computers running network operating systems and distributed operating systems (Tanenbaum and Van Steen, 2007). In a network operating system, the users are aware of the existence of multiple computers and can log in to remote machines and copy files from one machine to another. Each machine runs its own local operating system and has its own local user (or users).

Network operating systems are not fundamentally different from single-processor operating systems. They obviously need a network interface controller and some low-level software to drive it, as well as programs to achieve remote login and remote file access, but these additions do not change the essential structure of the operating system.

A distributed operating system, in contrast, is one that appears to its users as a traditional uniprocessor system, even though it is actually composed of multiple processors. The users should not be aware of where their programs are being run or where their files are located; that should all be handled automatically and efficiently by the operating system.

True distributed operating systems require more than just adding a little code to a uniprocessor operating system, because distributed and centralized systems differ in certain critical ways.
Distributed systems, for example, often allow applications to run on several processors at the same time, thus requiring more complex processor scheduling algorithms in order to optimize the amount of parallelism. Communication delays within the network often mean that these (and other) algorithms must run with incomplete, outdated, or even incorrect information. This situation differs radically from that in a single-processor system in which the operating system has complete information about the system state.

1.2.5 The Fifth Generation (1990–Present): Mobile Computers

Ever since detective Dick Tracy started talking to his "two-way radio wrist watch" in the 1940s comic strip, people have craved a communication device they could carry around wherever they went. The first real mobile phone appeared in 1946 and weighed some 40 kilos. You could take it wherever you went as long as you had a car in which to carry it.

The first true handheld phone appeared in the 1970s and, at roughly one kilogram, was positively featherweight. It was affectionately known as "the brick." Pretty soon everybody wanted one. Today, mobile phone penetration is close to 90% of the global population. We can make calls not just with our portable phones and wrist watches, but soon with eyeglasses and other wearable items. Moreover, the phone part is no longer that interesting. We receive email, surf the Web, text our friends, play games, navigate around heavy traffic—and do not even think twice about it.

While the idea of combining telephony and computing in a phone-like device has been around since the 1970s also, the first real smartphone did not appear until the mid-1990s when Nokia released the N9000, which literally combined two, mostly separate devices: a phone and a PDA (Personal Digital Assistant). In 1997, Ericsson coined the term smartphone for its GS88 "Penelope."

Now that smartphones have become ubiquitous, the competition between the various operating systems is fierce and the outcome is even less clear than in the PC world. At the time of writing, Google's Android is the dominant operating system with Apple's iOS a clear second, but this was not always the case and all may be different again in just a few years. If anything is clear in the world of smartphones, it is that it is not easy to stay king of the mountain for long.

After all, most smartphones in the first decade after their inception were running Symbian OS. It was the operating system of choice for popular brands like Samsung, Sony Ericsson, Motorola, and especially Nokia. However, other operating systems like RIM's Blackberry OS (introduced for smartphones in 2002) and Apple's iOS (released for the first iPhone in 2007) started eating into Symbian's market share. Many expected that RIM would dominate the business market, while iOS would be the king of the consumer devices. Symbian's market share plummeted. In 2011, Nokia ditched Symbian and announced it would focus on Windows Phone as its primary platform. For some time, Apple and RIM were the toast of the town (although not nearly as dominant as Symbian had been), but it did not take very long for Android, a Linux-based operating system released by Google in 2008, to overtake all its rivals.
For phone manufacturers, Android had the advantage that it was open source and available under a permissive license. As a result, they could tinker with it and adapt it to their own hardware with ease. Also, it has a huge community of developers writing apps, mostly in the familiar Java programming language. Even so, the past years have shown that the dominance may not last, and Android's competitors are eager to claw back some of its market share. We will look at Android in detail in Sec. 10.8.

1.3 COMPUTER HARDWARE REVIEW

An operating system is intimately tied to the hardware of the computer it runs on. It extends the computer's instruction set and manages its resources. To work, it must know a great deal about the hardware, at least about how the hardware appears to the programmer. For this reason, let us briefly review computer hardware as found in modern personal computers. After that, we can start getting into the details of what operating systems do and how they work.

Conceptually, a simple personal computer can be abstracted to a model resembling that of Fig. 1-6. The CPU, memory, and I/O devices are all connected by a system bus and communicate with one another over it. Modern personal computers have a more complicated structure, involving multiple buses, which we will look at later. For the time being, this model will be sufficient. In the following sections, we will briefly review these components and examine some of the hardware issues that are of concern to operating system designers. Needless to say, this will be a very compact summary. Many books have been written on the subject of computer hardware and computer organization. Two well-known ones are by Tanenbaum and Austin (2012) and Patterson and Hennessy (2013).

Figure 1-6. Some of the components of a simple personal computer: a CPU (with MMU), memory, and video, keyboard, USB, and hard disk controllers, all attached to a common bus, with a monitor, keyboard, USB printer, and hard disk drive as the attached devices.
1.3.1 Processors

The "brain" of the computer is the CPU. It fetches instructions from memory and executes them. The basic cycle of every CPU is to fetch the first instruction from memory, decode it to determine its type and operands, execute it, and then fetch, decode, and execute subsequent instructions. The cycle is repeated until the program finishes. In this way, programs are carried out.

Each CPU has a specific set of instructions that it can execute. Thus an x86 processor cannot execute ARM programs and an ARM processor cannot execute x86 programs. Because accessing memory to get an instruction or data word takes much longer than executing an instruction, all CPUs contain some registers inside to hold key variables and temporary results. Thus the instruction set generally contains instructions to load a word from memory into a register, and store a word from a register into memory. Other instructions combine two operands from registers, memory, or both into a result, such as adding two words and storing the result in a register or in memory.

In addition to the general registers used to hold variables and temporary results, most computers have several special registers that are visible to the programmer. One of these is the program counter, which contains the memory address of the next instruction to be fetched. After that instruction has been fetched, the program counter is updated to point to its successor.

Another register is the stack pointer, which points to the top of the current stack in memory. The stack contains one frame for each procedure that has been entered but not yet exited. A procedure's stack frame holds those input parameters, local variables, and temporary variables that are not kept in registers.

Yet another register is the PSW (Program Status Word). This register contains the condition code bits, which are set by comparison instructions, the CPU priority, the mode (user or kernel), and various other control bits. User programs may normally read the entire PSW but typically may write only some of its fields. The PSW plays an important role in system calls and I/O.

The operating system must be fully aware of all the registers. When time multiplexing the CPU, the operating system will often stop the running program to (re)start another one. Every time it stops a running program, the operating system must save all the registers so they can be restored when the program runs later.

To improve performance, CPU designers have long abandoned the simple model of fetching, decoding, and executing one instruction at a time. Many modern CPUs have facilities for executing more than one instruction at the same time. For example, a CPU might have separate fetch, decode, and execute units, so that while it is executing instruction n, it could also be decoding instruction n + 1 and fetching instruction n + 2. Such an organization is called a pipeline and is illustrated in Fig. 1-7(a) for a pipeline with three stages. Longer pipelines are common. In most pipeline designs, once an instruction has been fetched into the pipeline, it must be executed, even if the preceding instruction was a conditional branch that was taken. Pipelines cause compiler writers and operating system writers great headaches because they expose the complexities of the underlying machine to them and they have to deal with them.

Figure 1-7. (a) A three-stage pipeline. (b) A superscalar CPU.
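The pipeline idea is easy to see in miniature. The sketch below is a toy model in C (not how real hardware is built) that pushes five instructions through a three-stage pipeline one clock cycle at a time. While instruction n is executing, n + 1 is being decoded and n + 2 fetched, just as described above.

    #include <stdio.h>

    #define STAGES     3
    #define NUM_INSTR  5

    int main(void)
    {
        const char *stage_name[STAGES] = { "fetch", "decode", "execute" };
        int pipe[STAGES];                 /* instruction in each stage, -1 = empty */

        for (int s = 0; s < STAGES; s++)
            pipe[s] = -1;

        /* Run until the last instruction drains out of the execute stage. */
        for (int cycle = 0, next = 0; pipe[STAGES - 1] != NUM_INSTR - 1; cycle++) {
            /* Each instruction advances one stage per clock cycle. */
            for (int s = STAGES - 1; s > 0; s--)
                pipe[s] = pipe[s - 1];
            pipe[0] = (next < NUM_INSTR) ? next++ : -1;

            printf("cycle %d:", cycle);
            for (int s = 0; s < STAGES; s++)
                if (pipe[s] >= 0)
                    printf("  %s=i%d", stage_name[s], pipe[s]);
            printf("\n");
        }
        return 0;
    }

Running it shows the pipeline taking two cycles to fill, running full for three, and draining for two: 7 cycles for 5 instructions instead of the 15 a purely sequential machine would need.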
Even more advanced than a pipeline design is a superscalar CPU, shown in Fig. 1-7(b). In this design, multiple execution units are present, for example, one for integer arithmetic, one for floating-point arithmetic, and one for Boolean operations. Two or more instructions are fetched at once, decoded, and dumped into a holding buffer until they can be executed. As soon as an execution unit becomes available, it looks in the holding buffer to see if there is an instruction it can handle, and if so, it removes the instruction from the buffer and executes it. An implication of this design is that program instructions are often executed out of order. For the most part, it is up to the hardware to make sure the result produced is the same one a sequential implementation would have produced, but an annoying amount of the complexity is foisted onto the operating system, as we shall see.

Most CPUs, except very simple ones used in embedded systems, have two modes, kernel mode and user mode, as mentioned earlier. Usually, a bit in the PSW controls the mode. When running in kernel mode, the CPU can execute every instruction in its instruction set and use every feature of the hardware. On desktop and server machines, the operating system normally runs in kernel mode, giving it access to the complete hardware. On most embedded systems, a small piece runs in kernel mode, with the rest of the operating system running in user mode.

User programs always run in user mode, which permits only a subset of the instructions to be executed and a subset of the features to be accessed. Generally, all instructions involving I/O and memory protection are disallowed in user mode. Setting the PSW mode bit to enter kernel mode is also forbidden, of course.

To obtain services from the operating system, a user program must make a system call, which traps into the kernel and invokes the operating system. The TRAP instruction switches from user mode to kernel mode and starts the operating system. When the work has been completed, control is returned to the user program at the instruction following the system call. We will explain the details of the system call mechanism later in this chapter. For the time being, think of it as a special kind of procedure call that has the additional property of switching from user mode to kernel mode.
As a note on typography, we will use the lower-case Helvetica font to indicate system calls in running text, like this: read.

It is worth noting that computers have traps other than the instruction for executing a system call. Most of the other traps are caused by the hardware to warn of an exceptional situation such as an attempt to divide by 0 or a floating-point underflow. In all cases the operating system gets control and must decide what to do. Sometimes the program must be terminated with an error. Other times the error can be ignored (an underflowed number can be set to 0). Finally, when the program has announced in advance that it wants to handle certain kinds of conditions, control can be passed back to the program to let it deal with the problem.

Multithreaded and Multicore Chips

Moore's law states that the number of transistors on a chip doubles every 18 months. This "law" is not some kind of law of physics, like conservation of momentum, but is an observation by Intel cofounder Gordon Moore of how fast process engineers at the semiconductor companies are able to shrink their transistors. Moore's law has held for over three decades now and is expected to hold for at least one more. After that, the number of atoms per transistor will become too small and quantum mechanics will start to play a big role, preventing further shrinkage of transistor sizes.

The abundance of transistors is leading to a problem: what to do with all of them? We saw one approach above: superscalar architectures, with multiple functional units. But as the number of transistors increases, even more is possible. One obvious thing to do is put bigger caches on the CPU chip. That is definitely happening, but eventually the point of diminishing returns will be reached.

The obvious next step is to replicate not only the functional units, but also some of the control logic. The Intel Pentium 4 introduced this property, called multithreading or hyperthreading (Intel's name for it), to the x86 processor, and several other CPU chips also have it—including the SPARC, the Power5, the Intel Xeon, and the Intel Core family. To a first approximation, what it does is allow the CPU to hold the state of two different threads and then switch back and forth on a nanosecond time scale. (A thread is a kind of lightweight process, which, in turn, is a running program; we will get into the details in Chap. 2.) For example, if one of the processes needs to read a word from memory (which takes many clock cycles), a multithreaded CPU can just switch to another thread. Multithreading does not offer true parallelism. Only one process at a time is running, but thread-switching time is reduced to the order of a nanosecond.

Multithreading has implications for the operating system because each thread appears to the operating system as a separate CPU. Consider a system with two actual CPUs, each with two threads. The operating system will see this as four CPUs. If there is only enough work to keep two CPUs busy at a certain point in time, it may inadvertently schedule two threads on the same CPU, with the other CPU completely idle. This choice is far less efficient than using one thread on each CPU.
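This effect is easy to observe. On most UNIX-like systems a program can ask how many CPUs the operating system sees, and on a multithreaded chip the answer counts hardware threads, not physical cores. (The sysconf flag used below is widely available on Linux and the BSDs, though not required by POSIX.)

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* On a 2-core chip with 2 threads per core this reports 4. */
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        printf("the operating system sees %ld CPUs\n", ncpus);
        return 0;
    }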
Beyond multithreading, many CPU chips now have four, eight, or more complete processors or cores on them. The multicore chips of Fig. 1-8 effectively carry four minichips on them, each with its own independent CPU. (The caches will be explained below.) Some processors, like Intel Xeon Phi and the Tilera TilePro, already sport more than 60 cores on a single chip. Making use of such a multicore chip will definitely require a multiprocessor operating system.

Incidentally, in terms of sheer numbers, nothing beats a modern GPU (Graphics Processing Unit). A GPU is a processor with, literally, thousands of tiny cores. They are very good for many small computations done in parallel, like rendering polygons in graphics applications. They are not so good at serial tasks. They are also hard to program. While GPUs can be useful for operating systems (e.g., encryption or processing of network traffic), it is not likely that much of the operating system itself will run on the GPUs.

Figure 1-8. (a) A quad-core chip with a shared L2 cache. (b) A quad-core chip with separate L2 caches.

1.3.2 Memory

The second major component in any computer is the memory. Ideally, a memory should be extremely fast (faster than executing an instruction so that the CPU is not held up by the memory), abundantly large, and dirt cheap. No current technology satisfies all of these goals, so a different approach is taken. The memory system is constructed as a hierarchy of layers, as shown in Fig. 1-9. The top layers have higher speed, smaller capacity, and greater cost per bit than the lower ones, often by factors of a billion or more.

The top layer consists of the registers internal to the CPU. They are made of the same material as the CPU and are thus just as fast as the CPU. Consequently, there is no delay in accessing them. The storage capacity available in them is typically 32 × 32 bits on a 32-bit CPU and 64 × 64 bits on a 64-bit CPU. Less than 1 KB in both cases. Programs must manage the registers (i.e., decide what to keep in them) themselves, in software.
Figure 1-9. A typical memory hierarchy. The numbers are very rough approximations.

    Level           Typical access time    Typical capacity
    Registers       1 nsec                 <1 KB
    Cache           2 nsec                 4 MB
    Main memory     10 nsec                1-8 GB
    Magnetic disk   10 msec                1-4 TB

Next comes the cache memory, which is mostly controlled by the hardware. Main memory is divided up into cache lines, typically 64 bytes, with addresses 0 to 63 in cache line 0, 64 to 127 in cache line 1, and so on. The most heavily used cache lines are kept in a high-speed cache located inside or very close to the CPU. When the program needs to read a memory word, the cache hardware checks to see if the line needed is in the cache. If it is, called a cache hit, the request is satisfied from the cache and no memory request is sent over the bus to the main memory. Cache hits normally take about two clock cycles. Cache misses have to go to memory, with a substantial time penalty. Cache memory is limited in size due to its high cost. Some machines have two or even three levels of cache, each one slower and bigger than the one before it.

Caching plays a major role in many areas of computer science, not just caching lines of RAM. Whenever a resource can be divided into pieces, some of which are used much more heavily than others, caching is often used to improve performance. Operating systems use it all the time. For example, most operating systems keep (pieces of) heavily used files in main memory to avoid having to fetch them from the disk repeatedly. Similarly, the results of converting long path names like /home/ast/projects/minix3/src/kernel/clock.c into the disk address where the file is located can be cached to avoid repeated lookups. Finally, when the address of a Web page (URL) is converted to a network address (IP address), the result can be cached for future use. Many other uses exist.

In any caching system, several questions come up fairly soon, including:

1. When to put a new item into the cache.
2. Which cache line to put the new item in.
3. Which item to remove from the cache when a slot is needed.
4. Where to put a newly evicted item in the larger memory.
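As an illustration of how these questions get answered in software (a sketch with invented names, not code from any real kernel), here is a direct-mapped cache in C for path-name-to-disk-address translations of the kind just mentioned. It inserts on every miss (question 1), picks the slot by hashing the name (question 2), evicts whatever previously occupied that slot (question 3), and needs no write-back step because it only mirrors data that lives elsewhere (question 4).

    #include <stdlib.h>
    #include <string.h>

    #define SLOTS 1024

    struct entry {
        char *path;          /* NULL means the slot is empty     */
        long  disk_addr;     /* cached translation for that path */
    };

    static struct entry cache[SLOTS];

    /* Question 2: the slot is chosen by hashing the name. */
    static unsigned slot_of(const char *path)
    {
        unsigned h = 5381;
        while (*path)
            h = h * 33 + (unsigned char)*path++;
        return h % SLOTS;
    }

    /* Returns the disk address, doing the slow lookup only on a miss. */
    long lookup(const char *path, long (*slow_lookup)(const char *))
    {
        struct entry *e = &cache[slot_of(path)];

        if (e->path != NULL && strcmp(e->path, path) == 0)
            return e->disk_addr;            /* cache hit */

        free(e->path);                      /* question 3: evict the old occupant */
        e->path = strdup(path);
        e->disk_addr = slow_lookup(path);   /* question 1: insert on every miss */
        return e->disk_addr;
    }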
Not every question is relevant to every caching situation. For caching lines of main memory in the CPU cache, a new item will generally be entered on every cache miss. The cache line to use is generally computed by using some of the high-order bits of the memory address referenced. For example, with 4096 cache lines of 64 bytes and 32-bit addresses, bits 6 through 17 might be used to specify the cache line, with bits 0 to 5 the byte within the cache line. In this case, the item to remove is the same one as the new data goes into, but in other systems it might not be. Finally, when a cache line is rewritten to main memory (if it has been modified since it was cached), the place in memory to rewrite it to is uniquely determined by the address in question.

Caches are such a good idea that modern CPUs have two of them. The first-level or L1 cache is always inside the CPU and usually feeds decoded instructions into the CPU's execution engine. Most chips have a second L1 cache for very heavily used data words. The L1 caches are typically 16 KB each. In addition, there is often a second cache, called the L2 cache, that holds several megabytes of recently used memory words. The difference between the L1 and L2 caches lies in the timing. Access to the L1 cache is done without any delay, whereas access to the L2 cache involves a delay of one or two clock cycles.

On multicore chips, the designers have to decide where to place the caches. In Fig. 1-8(a), a single L2 cache is shared by all the cores. This approach is used in Intel multicore chips. In contrast, in Fig. 1-8(b), each core has its own L2 cache. This approach is used by AMD. Each strategy has its pros and cons. For example, the Intel shared L2 cache requires a more complicated cache controller, but the AMD way makes keeping the L2 caches consistent more difficult.

Main memory comes next in the hierarchy of Fig. 1-9. This is the workhorse of the memory system. Main memory is usually called RAM (Random Access Memory). Old-timers sometimes call it core memory, because computers in the 1950s and 1960s used tiny magnetizable ferrite cores for main memory. They have been gone for decades but the name persists. Currently, memories are hundreds of megabytes to several gigabytes and growing rapidly. All CPU requests that cannot be satisfied out of the cache go to main memory.

In addition to the main memory, many computers have a small amount of nonvolatile random-access memory. Unlike RAM, nonvolatile memory does not lose its contents when the power is switched off. ROM (Read Only Memory) is programmed at the factory and cannot be changed afterward. It is fast and inexpensive. On some computers, the bootstrap loader used to start the computer is contained in ROM. Also, some I/O cards come with ROM for handling low-level device control.

EEPROM (Electrically Erasable PROM) and flash memory are also nonvolatile, but in contrast to ROM can be erased and rewritten. However, writing them takes orders of magnitude more time than writing RAM, so they are used in the same way ROM is, only with the additional feature that it is now possible to correct bugs in programs they hold by rewriting them in the field.
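Returning for a moment to the address arithmetic described above for a direct-mapped cache: it is just shifting and masking. A small sketch in C using the numbers from the example (4096 lines of 64 bytes, 32-bit addresses):

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_SIZE 64      /* bytes per cache line (2^6)  */
    #define NUM_LINES 4096    /* lines in the cache  (2^12)  */

    int main(void)
    {
        uint32_t addr = 0x12345678;    /* an arbitrary 32-bit address */

        uint32_t byte_in_line = addr & (LINE_SIZE - 1);       /* bits 0-5   */
        uint32_t line = (addr / LINE_SIZE) % NUM_LINES;       /* bits 6-17  */
        uint32_t tag  = addr / (LINE_SIZE * NUM_LINES);       /* bits 18-31 */

        printf("address 0x%08x -> line %u, byte %u, tag 0x%x\n",
               addr, line, byte_in_line, tag);
        return 0;
    }

The tag (the remaining high-order bits) is what the cache hardware stores alongside each line so it can tell which of the many addresses mapping to the same line is actually present.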
Flash memory is also commonly used as the storage medium in portable electronic devices. It serves as film in digital cameras and as the disk in portable music players, to name just two uses. Flash memory is intermediate in speed between RAM and disk. Also, unlike disk memory, if it is erased too many times, it wears out.

Yet another kind of memory is CMOS, which is volatile. Many computers use CMOS memory to hold the current time and date. The CMOS memory and the clock circuit that increments the time in it are powered by a small battery, so the time is correctly updated, even when the computer is unplugged. The CMOS memory can also hold the configuration parameters, such as which disk to boot from. CMOS is used because it draws so little power that the original factory-installed battery often lasts for several years. However, when it begins to fail, the computer can appear to have Alzheimer's disease, forgetting things that it has known for years, like which hard disk to boot from.

1.3.3 Disks

Next in the hierarchy is magnetic disk (hard disk). Disk storage is two orders of magnitude cheaper than RAM per bit and often two orders of magnitude larger as well. The only problem is that the time to randomly access data on it is close to three orders of magnitude slower. The reason is that a disk is a mechanical device, as shown in Fig. 1-10.

Figure 1-10. Structure of a disk drive, with multiple platter surfaces and one read/write head per surface, all moved together by the arm.

A disk consists of one or more metal platters that rotate at 5400, 7200, 10,800 RPM or more. A mechanical arm pivots over the platters from the corner, similar to the pickup arm on an old 33-RPM phonograph for playing vinyl records.
Information is written onto the disk in a series of concentric circles. At any given arm position, each of the heads can read an annular region called a track. Together, all the tracks for a given arm position form a cylinder.

Each track is divided into some number of sectors, typically 512 bytes per sector. On modern disks, the outer cylinders contain more sectors than the inner ones. Moving the arm from one cylinder to the next takes about 1 msec. Moving it to a random cylinder typically takes 5 to 10 msec, depending on the drive. Once the arm is on the correct track, the drive must wait for the needed sector to rotate under the head, an additional delay of 5 msec to 10 msec, depending on the drive's RPM. Once the sector is under the head, reading or writing occurs at a rate of 50 MB/sec on low-end disks to 160 MB/sec on faster ones.

Sometimes you will hear people talk about disks that are really not disks at all, like SSDs (Solid State Disks). SSDs do not have moving parts, do not contain platters in the shape of disks, and store data in (flash) memory. The only way in which they resemble disks is that they also store a lot of data which is not lost when the power is off.

Many computers support a scheme known as virtual memory, which we will discuss at some length in Chap. 3. This scheme makes it possible to run programs larger than physical memory by placing them on the disk and using main memory as a kind of cache for the most heavily executed parts. This scheme requires remapping memory addresses on the fly to convert the address the program generated to the physical address in RAM where the word is located. This mapping is done by a part of the CPU called the MMU (Memory Management Unit), as shown in Fig. 1-6.

The presence of caching and the MMU can have a major impact on performance. In a multiprogramming system, when switching from one program to another, sometimes called a context switch, it may be necessary to flush all modified blocks from the cache and change the mapping registers in the MMU. Both of these are expensive operations, and programmers try hard to avoid them. We will see some of the implications of their tactics later.
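Putting the disk numbers given above together makes clear why the mechanical delays dominate. The following back-of-the-envelope calculation (mid-range values assumed, not a specification of any real drive) shows that reading a single 512-byte sector costs milliseconds of seeking and rotation but only microseconds of actual transfer:

    #include <stdio.h>

    int main(void)
    {
        double seek_ms     = 7.5;      /* random seek: 5-10 msec, take the middle  */
        double rotate_ms   = 7.5;      /* rotational delay: 5-10 msec              */
        double rate_mb_s   = 100.0;    /* transfer rate: 50-160 MB/sec, take 100   */
        double sector_b    = 512.0;

        double transfer_ms = sector_b / (rate_mb_s * 1e6) * 1e3;
        printf("seek %.1f + rotate %.1f + transfer %.4f = %.1f msec\n",
               seek_ms, rotate_ms, transfer_ms,
               seek_ms + rotate_ms + transfer_ms);
        return 0;
    }

With these assumed figures the total is about 15 msec, of which roughly 0.005 msec is spent actually moving the data.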
1.3.4 I/O Devices

The CPU and memory are not the only resources that the operating system must manage. I/O devices also interact heavily with the operating system. As we saw in Fig. 1-6, I/O devices generally consist of two parts: a controller and the device itself. The controller is a chip or a set of chips that physically controls the device. It accepts commands from the operating system, for example, to read data from the device, and carries them out.

In many cases, the actual control of the device is complicated and detailed, so it is the job of the controller to present a simpler (but still very complex) interface to the operating system. For example, a disk controller might accept a command to read sector 11,206 from disk 2. The controller then has to convert this linear sector number to a cylinder, sector, and head. This conversion may be complicated by the fact that outer cylinders have more sectors than inner ones and that some bad sectors have been remapped onto other ones. Then the controller has to determine which cylinder the disk arm is on and give it a command to move in or out the requisite number of cylinders. It has to wait until the proper sector has rotated under the head and then start reading and storing the bits as they come off the drive, removing the preamble and computing the checksum. Finally, it has to assemble the incoming bits into words and store them in memory. To do all this work, controllers often contain small embedded computers that are programmed to do their work.

The other piece is the actual device itself. Devices have fairly simple interfaces, both because they cannot do much and to make them standard. The latter is needed so that any SATA disk controller can handle any SATA disk, for example. SATA stands for Serial ATA and ATA in turn stands for AT Attachment. In case you are curious what AT stands for, this was IBM's second-generation "Personal Computer Advanced Technology" built around the then-extremely-potent 6-MHz 80286 processor that the company introduced in 1984. What we learn from this is that the computer industry has a habit of continuously enhancing existing acronyms with new prefixes and suffixes. We also learned that an adjective like "advanced" should be used with great care, or you will look silly thirty years down the line.

SATA is currently the standard type of disk on many computers. Since the actual device interface is hidden behind the controller, all that the operating system sees is the interface to the controller, which may be quite different from the interface to the device.

Because each type of controller is different, different software is needed to control each one. The software that talks to a controller, giving it commands and accepting responses, is called a device driver. Each controller manufacturer has to supply a driver for each operating system it supports. Thus a scanner may come with drivers for OS X, Windows 7, Windows 8, and Linux, for example.

To be used, the driver has to be put into the operating system so it can run in kernel mode. Drivers can actually run outside the kernel, and operating systems like Linux and Windows nowadays do offer some support for doing so. The vast majority of the drivers still run below the kernel boundary. Only very few current systems, such as MINIX 3, run all drivers in user space. Drivers in user space must be allowed to access the device in a controlled way, which is not straightforward.

There are three ways the driver can be put into the kernel. The first way is to relink the kernel with the new driver and then reboot the system. Many older UNIX systems work like this. The second way is to make an entry in an operating system file telling it that it needs the driver and then reboot the system. At boot time, the operating system goes and finds the drivers it needs and loads them. Windows works this way. The third way is for the operating system to be able to accept new drivers while running and install them on the fly without the need to reboot. This way used to be rare but is becoming much more common now. Hot-pluggable devices, such as USB and IEEE 1394 devices (discussed below), always need dynamically loaded drivers.
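Driver interfaces differ from one operating system to the next, but a common pattern is a table of function pointers that the kernel calls without knowing which controller is underneath. A minimal sketch in C, with invented names and loosely in the style of UNIX-like systems:

    #include <stddef.h>

    /* Operations a block-device driver supplies; the kernel calls these
       without knowing which controller sits behind them. */
    struct driver_ops {
        int (*init)(void);
        int (*read)(long sector, void *buf, size_t n);
        int (*write)(long sector, const void *buf, size_t n);
    };

    /* A hypothetical SATA driver filling in the table. The bodies are
       stubs here; real ones would program the controller's registers. */
    static int sata_init(void) { return 0; }

    static int sata_read(long sector, void *buf, size_t n)
    {
        (void)sector; (void)buf; (void)n;   /* real code talks to the hardware */
        return 0;
    }

    static int sata_write(long sector, const void *buf, size_t n)
    {
        (void)sector; (void)buf; (void)n;
        return 0;
    }

    const struct driver_ops sata_driver = { sata_init, sata_read, sata_write };

The payoff of this design is that the file system above the driver can issue sata_driver.read(...) style calls without caring whether the device is SATA, SCSI, or something else entirely.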
Every controller has a small number of registers that are used to communicate with it. For example, a minimal disk controller might have registers for specifying the disk address, memory address, sector count, and direction (read or write). To activate the controller, the driver gets a command from the operating system, then translates it into the appropriate values to write into the device registers. The collection of all the device registers forms the I/O port space, a subject we will come back to in Chap. 5.

On some computers, the device registers are mapped into the operating system's address space (the addresses it can use), so they can be read and written like ordinary memory words. On such computers, no special I/O instructions are required and user programs can be kept away from the hardware by not putting these memory addresses within their reach (e.g., by using base and limit registers). On other computers, the device registers are put in a special I/O port space, with each register having a port address. On these machines, special IN and OUT instructions are available in kernel mode to allow drivers to read and write the registers. The former scheme eliminates the need for special I/O instructions but uses up some of the address space. The latter uses no address space but requires special instructions. Both systems are widely used.

Input and output can be done in three different ways. In the simplest method, a user program issues a system call, which the kernel then translates into a procedure call to the appropriate driver. The driver then starts the I/O and sits in a tight loop continuously polling the device to see if it is done (usually there is some bit that indicates that the device is still busy). When the I/O has completed, the driver puts the data (if any) where they are needed and returns. The operating system then returns control to the caller. This method is called busy waiting and has the disadvantage of tying up the CPU polling the device until it is finished.

The second method is for the driver to start the device and ask it to give an interrupt when it is finished. At that point the driver returns. The operating system then blocks the caller if need be and looks for other work to do. When the controller detects the end of the transfer, it generates an interrupt to signal completion.

Interrupts are very important in operating systems, so let us examine the idea more closely. In Fig. 1-11(a) we see a three-step process for I/O. In step 1, the driver tells the controller what to do by writing into its device registers. The controller then starts the device. When the controller has finished reading or writing the number of bytes it has been told to transfer, it signals the interrupt controller chip using certain bus lines in step 2. If the interrupt controller is ready to accept the interrupt (which it may not be if it is busy handling a higher-priority one), it asserts a pin on the CPU chip telling it, in step 3. In step 4, the interrupt controller puts the number of the device on the bus so the CPU can read it and know which device has just finished (many devices may be running at the same time).
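To make the first method, busy waiting, concrete, here is a sketch in C of polling a memory-mapped controller. The register layout, address, and command code are all invented for illustration; real controllers each have their own.

    #include <stdint.h>

    /* Hypothetical memory-mapped disk controller registers. */
    #define DISK_BASE   0xFEDC0000u
    #define STATUS_BUSY 0x01u

    struct disk_regs {
        volatile uint32_t command;
        volatile uint32_t sector;
        volatile uint32_t status;
    };

    static struct disk_regs *disk = (struct disk_regs *)DISK_BASE;

    /* Busy-waiting read: start the device, then poll until it is done.
       Assume the controller sets the busy bit for the whole transfer. */
    void disk_read_polled(uint32_t sector)
    {
        disk->sector  = sector;
        disk->command = 1;                  /* 1 = read, by assumption here */
        while (disk->status & STATUS_BUSY)
            ;                               /* spin: the CPU does nothing else */
    }

The empty while loop is exactly the cost the text describes: the CPU is tied up doing nothing useful until the device clears its busy bit.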
Figure 1-11. (a) The steps in starting an I/O device and getting an interrupt. (b) Interrupt processing involves taking the interrupt, running the interrupt handler, and returning to the user program.

Once the CPU has decided to take the interrupt, the program counter and PSW are typically then pushed onto the current stack and the CPU switched into kernel mode. The device number may be used as an index into part of memory to find the address of the interrupt handler for this device. This part of memory is called the interrupt vector. Once the interrupt handler (part of the driver for the interrupting device) has started, it removes the stacked program counter and PSW and saves them, then queries the device to learn its status. When the handler is all finished, it returns to the previously running user program to the first instruction that was not yet executed. These steps are shown in Fig. 1-11(b).

The third method for doing I/O makes use of special hardware: a DMA (Direct Memory Access) chip that can control the flow of bits between memory and some controller without constant CPU intervention. The CPU sets up the DMA chip, telling it how many bytes to transfer, the device and memory addresses involved, and the direction, and lets it go. When the DMA chip is done, it causes an interrupt, which is handled as described above. DMA and I/O hardware in general will be discussed in more detail in Chap. 5.

Interrupts can (and often do) happen at highly inconvenient moments, for example, while another interrupt handler is running. For this reason, the CPU has a way to disable interrupts and then reenable them later. While interrupts are disabled, any devices that finish continue to assert their interrupt signals, but the CPU is not interrupted until interrupts are enabled again. If multiple devices finish while interrupts are disabled, the interrupt controller decides which one to let through first, usually based on static priorities assigned to each device. The highest-priority device wins and gets to be serviced first. The others must wait.
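In software terms, the interrupt vector just described can be pictured as an array of handler pointers indexed by device number. The sketch below is purely conceptual; the real mechanism is partly in hardware and differs from one architecture to the next.

    #define NDEVICES 32

    /* One handler per device number, installed by each device's driver. */
    typedef void (*handler_t)(void);
    static handler_t interrupt_vector[NDEVICES];

    /* Conceptually what happens after the CPU takes an interrupt: the
       device number supplied by the interrupt controller selects the
       handler, which is part of that device's driver. */
    void dispatch_interrupt(int device)
    {
        if (interrupt_vector[device] != 0)
            interrupt_vector[device]();
    }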
|
OS_Page_31_Chunk31
|
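The vectored dispatch just described can be sketched in C as a table of handler pointers indexed by device number. This is only a conceptual sketch with invented names; real kernels do the state saving and restoring in assembly language, since it involves the program counter and PSW directly.

```c
#include <stddef.h>

#define NDEVICES 32

typedef void (*int_handler_t)(void);

/* The interrupt vector: one handler address per device number,
 * filled in by each driver at initialization time. */
static int_handler_t interrupt_vector[NDEVICES];

/* Conceptually invoked after the hardware has pushed the program
 * counter and PSW onto the stack and switched into kernel mode. */
void dispatch_interrupt(int device_number)
{
    if (device_number >= 0 && device_number < NDEVICES &&
        interrupt_vector[device_number] != NULL)
        interrupt_vector[device_number]();  /* run the driver's handler */
    /* Returning restores the saved program counter and PSW, so the
     * interrupted program resumes at the first instruction that was
     * not yet executed. */
}
```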
1.3.5 Buses

The organization of Fig. 1-6 was used on minicomputers for years and also on the original IBM PC. However, as processors and memories got faster, the ability of a single bus (and certainly the IBM PC bus) to handle all the traffic was strained to the breaking point. Something had to give. As a result, additional buses were added, both for faster I/O devices and for CPU-to-memory traffic. As a consequence of this evolution, a large x86 system currently looks something like Fig. 1-12.

[Figure 1-12. The structure of a large x86 system: two cores (each with its own cache) sharing a larger cache and a GPU, DDR3 memory attached through memory controllers, a graphics device on PCIe, and a Platform Controller Hub reached over DMI that fans out to PCIe slots, SATA, USB 2.0 and 3.0 ports, Gigabit Ethernet, and more PCIe devices.]

This system has many buses (e.g., cache, memory, PCIe, PCI, USB, SATA, and DMI), each with a different transfer rate and function. The operating system must be aware of all of them for configuration and management. The main bus is the PCIe (Peripheral Component Interconnect Express) bus. The PCIe bus was invented by Intel as a successor to the older PCI bus, which in turn was a replacement for the original ISA (Industry Standard Architecture) bus. Capable of transferring tens of gigabits per second, PCIe is much faster than its predecessors. It is also very different in nature. Up to its creation in 2004, most buses were parallel and shared. A shared bus architecture means that multiple devices use the same wires to transfer data. Thus, when multiple devices have data to send, you need an arbiter to determine who can use the bus. In contrast, PCIe makes use of dedicated, point-to-point connections. A parallel bus architecture as used in traditional PCI means that you send each word of data over multiple wires. For instance, in regular PCI buses, a single 32-bit number is sent over 32 parallel wires. In contrast to this, PCIe uses a serial bus architecture and sends all bits in
|
OS_Page_32_Chunk32
|
a message through a single connection, known as a lane, much like a network packet. This is much simpler, because you do not have to ensure that all 32 bits arrive at the destination at exactly the same time. Parallelism is still used, because you can have multiple lanes in parallel. For instance, we may use 32 lanes to carry 32 messages in parallel. As the speed of peripheral devices like network cards and graphics adapters increases rapidly, the PCIe standard is upgraded every 3–5 years. For instance, 16 lanes of PCIe 2.0 offer 64 gigabits per second: each lane signals at 5 gigatransfers/sec, and the 8b/10b encoding it uses leaves 4 Gbps of payload per lane, so 16 lanes carry 16 × 4 = 64 Gbps. Upgrading to PCIe 3.0 gives you twice that speed, and PCIe 4.0 doubles it again.

Meanwhile, we still have many legacy devices designed for the older PCI standard. As we see in Fig. 1-12, these devices are hooked up to a separate hub processor. In the future, when we consider PCI no longer merely old, but ancient, it is possible that all PCI devices will attach to yet another hub that in turn connects them to the main hub, creating a tree of buses.

In this configuration, the CPU talks to memory over a fast DDR3 bus, to an external graphics device over PCIe, and to all other devices via a hub over a DMI (Direct Media Interface) bus. The hub in turn connects all the other devices, using the Universal Serial Bus to talk to USB devices, the SATA bus to interact with hard disks and DVD drives, and PCIe to transfer Ethernet frames. We have already mentioned the older PCI devices that use a traditional PCI bus. Moreover, each of the cores has a dedicated cache, and a much larger cache is shared between them. Each of these caches introduces another bus.

The USB (Universal Serial Bus) was invented to attach all the slow I/O devices, such as the keyboard and mouse, to the computer. However, calling a modern USB 3.0 device humming along at 5 Gbps "slow" may not come naturally for the generation that grew up with 8-Mbps ISA as the main bus in the first IBM PCs. USB uses a small connector with four to eleven wires (depending on the version), some of which supply electrical power to the USB devices or connect to ground. USB is a centralized bus in which a root device polls all the I/O devices every 1 msec to see if they have any traffic. USB 1.0 could handle an aggregate load of 12 Mbps, USB 2.0 increased the speed to 480 Mbps, and USB 3.0 tops out at no less than 5 Gbps. Any USB device can be connected to a computer and it will function immediately, without requiring a reboot, something pre-USB devices required, much to the consternation of a generation of frustrated users.

The SCSI (Small Computer System Interface) bus is a high-performance bus intended for fast disks, scanners, and other devices needing considerable bandwidth. Nowadays, we find them mostly in servers and workstations. They can run at up to 640 MB/sec.

To work in an environment such as that of Fig. 1-12, the operating system has to know what peripheral devices are connected to the computer and configure them. This requirement led Intel and Microsoft to design a PC system called plug and play, based on a similar concept first implemented in the Apple Macintosh. Before plug and play, each I/O card had a fixed interrupt request level and fixed addresses for its I/O registers. For example, the keyboard was interrupt 1 and used
|
OS_Page_33_Chunk33
|
I/O addresses 0x60 to 0x64, the floppy disk controller was interrupt 6 and used I/O addresses 0x3F0 to 0x3F7, and the printer was interrupt 7 and used I/O addresses 0x378 to 0x37A, and so on. So far, so good. The trouble came when the user bought a sound card and a modem card and both happened to use, say, interrupt 4. They would conflict and would not work together. The solution was to include DIP switches or jumpers on every I/O card and instruct the user to please set them to select an interrupt level and I/O device addresses that did not conflict with any others in the user's system. Teenagers who devoted their lives to the intricacies of the PC hardware could sometimes do this without making errors. Unfortunately, nobody else could, leading to chaos.

What plug and play does is have the system automatically collect information about the I/O devices, centrally assign interrupt levels and I/O addresses, and then tell each card what its numbers are. This work is closely related to booting the computer, so let us look at that. It is not completely trivial.

1.3.6 Booting the Computer

Very briefly, the boot process is as follows. Every PC contains a parentboard (formerly called a motherboard before political correctness hit the computer industry). On the parentboard is a program called the system BIOS (Basic Input Output System). The BIOS contains low-level I/O software, including procedures to read the keyboard, write to the screen, and do disk I/O, among other things. Nowadays, it is held in a flash RAM, which is nonvolatile but which can be updated by the operating system when bugs are found in the BIOS.

When the computer is booted, the BIOS is started. It first checks to see how much RAM is installed and whether the keyboard and other basic devices are installed and responding correctly. It continues by scanning the PCIe and PCI buses to detect all the devices attached to them. If the devices present are different from when the system was last booted, the new devices are configured.

The BIOS then determines the boot device by trying a list of devices stored in the CMOS memory. The user can change this list by entering a BIOS configuration program just after booting. Typically, an attempt is made to boot from a CD-ROM (or sometimes USB) drive, if one is present. If that fails, the system boots from the hard disk. The first sector from the boot device is read into memory and executed. This sector contains a program that normally examines the partition table at the end of the boot sector to determine which partition is active (the classic layout of this table is sketched below). Then a secondary boot loader is read in from that partition. This loader reads in the operating system from the active partition and starts it.
|
OS_Page_34_Chunk34
|
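The partition table mentioned above has a well-known layout on classic PCs: the 512-byte boot sector ends with four 16-byte partition entries starting at offset 446, followed by the signature bytes 0x55 0xAA. A sketch of one entry and of the scan for the active partition:

```c
#include <stdint.h>

/* One of the four 16-byte entries in the classic MBR partition table.
 * The fields happen to pack into 16 bytes with no padding on typical
 * compilers. */
struct mbr_partition_entry {
    uint8_t  boot_flag;     /* 0x80 = active (bootable), 0x00 = inactive */
    uint8_t  start_chs[3];  /* legacy cylinder/head/sector of first sector */
    uint8_t  type;          /* partition type code (e.g., 0x83 = Linux) */
    uint8_t  end_chs[3];    /* legacy CHS of last sector */
    uint32_t start_lba;     /* first sector, as a logical block address */
    uint32_t num_sectors;   /* size of the partition in sectors */
};

/* Return the index of the active partition, or -1 if none is marked. */
int find_active_partition(const struct mbr_partition_entry table[4])
{
    for (int i = 0; i < 4; i++)
        if (table[i].boot_flag == 0x80)
            return i;
    return -1;
}
```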
The operating system then queries the BIOS to get the configuration information. For each device, it checks to see if it has the device driver. If not, it asks the user to insert a CD-ROM containing the driver (supplied by the device's manufacturer) or to download it from the Internet. Once it has all the device drivers, the operating system loads them into the kernel. Then it initializes its tables, creates whatever background processes are needed, and starts up a login program or GUI.

1.4 THE OPERATING SYSTEM ZOO

Operating systems have been around now for over half a century. During this time, quite a variety of them have been developed, not all of them widely known. In this section we will briefly touch upon nine of them. We will come back to some of these different kinds of systems later in the book.

1.4.1 Mainframe Operating Systems

At the high end are the operating systems for mainframes, those room-sized computers still found in major corporate data centers. These computers differ from personal computers in terms of their I/O capacity. A mainframe with 1000 disks and millions of gigabytes of data is not unusual; a personal computer with these specifications would be the envy of its friends. Mainframes are also making something of a comeback as high-end Web servers, servers for large-scale electronic commerce sites, and servers for business-to-business transactions.

The operating systems for mainframes are heavily oriented toward processing many jobs at once, most of which need prodigious amounts of I/O. They typically offer three kinds of services: batch, transaction processing, and timesharing. A batch system is one that processes routine jobs without any interactive user present. Claims processing in an insurance company or sales reporting for a chain of stores is typically done in batch mode. Transaction-processing systems handle large numbers of small requests, for example, check processing at a bank or airline reservations. Each unit of work is small, but the system must handle hundreds or thousands per second. Timesharing systems allow multiple remote users to run jobs on the computer at once, such as querying a big database. These functions are closely related; mainframe operating systems often perform all of them. An example mainframe operating system is OS/390, a descendant of OS/360. However, mainframe operating systems are gradually being replaced by UNIX variants such as Linux.

1.4.2 Server Operating Systems

One level down are the server operating systems. They run on servers, which are either very large personal computers, workstations, or even mainframes. They serve multiple users at once over a network and allow the users to share hardware and software resources. Servers can provide print service, file service, or Web
|
OS_Page_35_Chunk35
|
service. Internet providers run many server machines to support their customers, and Websites use servers to store the Web pages and handle the incoming requests. Typical server operating systems are Solaris, FreeBSD, Linux, and Windows Server 201x.

1.4.3 Multiprocessor Operating Systems

An increasingly common way to get major-league computing power is to connect multiple CPUs into a single system. Depending on precisely how they are connected and what is shared, these systems are called parallel computers, multicomputers, or multiprocessors. They need special operating systems, but often these are variations on the server operating systems, with special features for communication, connectivity, and consistency.

With the recent advent of multicore chips for personal computers, even conventional desktop and notebook operating systems are starting to deal with at least small-scale multiprocessors, and the number of cores is likely to grow over time. Luckily, quite a bit is known about multiprocessor operating systems from years of previous research, so using this knowledge in multicore systems should not be hard. The hard part will be having applications make use of all this computing power. Many popular operating systems, including Windows and Linux, run on multiprocessors.

1.4.4 Personal Computer Operating Systems

The next category is the personal computer operating system. Modern ones all support multiprogramming, often with dozens of programs started up at boot time. Their job is to provide good support to a single user. They are widely used for word processing, spreadsheets, games, and Internet access. Common examples are Linux, FreeBSD, Windows 7, Windows 8, and Apple's OS X. Personal computer operating systems are so widely known that probably little introduction is needed. In fact, many people are not even aware that other kinds exist.

1.4.5 Handheld Computer Operating Systems

Continuing on down to smaller and smaller systems, we come to tablets, smartphones, and other handheld computers. A handheld computer, originally known as a PDA (Personal Digital Assistant), is a small computer that can be held in your hand during operation. Smartphones and tablets are the best-known examples. As we have already seen, this market is currently dominated by Google's Android and Apple's iOS, but they have many competitors. Most of these devices boast multicore CPUs, GPS, cameras and other sensors, copious amounts of memory, and sophisticated operating systems. Moreover, all of them have more third-party applications ("apps") than you can shake a (USB) stick at.
|
OS_Page_36_Chunk36
|
1.4.6 Embedded Operating Systems

Embedded systems run on the computers that control devices that are not generally thought of as computers and which do not accept user-installed software. Typical examples are microwave ovens, TV sets, cars, DVD recorders, traditional phones, and MP3 players. The main property that distinguishes embedded systems from handhelds is the certainty that no untrusted software will ever run on them. You cannot download new applications to your microwave oven: all the software is in ROM. This means that there is no need for protection between applications, leading to design simplification. Systems such as Embedded Linux, QNX, and VxWorks are popular in this domain.

1.4.7 Sensor-Node Operating Systems

Networks of tiny sensor nodes are being deployed for numerous purposes. These nodes are tiny computers that communicate with each other and with a base station using wireless communication. Sensor networks are used to protect the perimeters of buildings, guard national borders, detect fires in forests, measure temperature and precipitation for weather forecasting, glean information about enemy movements on battlefields, and much more.

The sensors are small battery-powered computers with built-in radios. They have limited power and must work for long periods of time unattended outdoors, frequently in environmentally harsh conditions. The network must be robust enough to tolerate failures of individual nodes, which happen with ever-increasing frequency as the batteries begin to run down.

Each sensor node is a real computer, with a CPU, RAM, ROM, and one or more environmental sensors. It runs a small but real operating system, usually one that is event driven, responding to external events or making measurements periodically based on an internal clock. The operating system has to be small and simple because the nodes have little RAM and battery lifetime is a major issue. Also, as with embedded systems, all the programs are loaded in advance; users do not suddenly start programs they downloaded from the Internet, which makes the design much simpler. TinyOS is a well-known operating system for a sensor node.

1.4.8 Real-Time Operating Systems

Another type of operating system is the real-time system. These systems are characterized by having time as a key parameter. For example, in industrial process-control systems, real-time computers have to collect data about the production process and use it to control machines in the factory. Often there are hard deadlines that must be met. For example, if a car is moving down an assembly line, certain actions must take place at certain instants of time. If, for example, a welding robot welds too early or too late, the car will be ruined. If the action absolutely must occur at a certain moment (or within a certain range), we have a hard real-time system. Many of these are found in industrial process control, avionics, military, and similar application areas. These systems must provide absolute guarantees that a certain action will occur by a certain time (a sketch of a typical time-triggered control loop appears below).
|
OS_Page_37_Chunk37
|
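One common pattern in hard real-time code is to schedule work at absolute instants, so that timing jitter cannot accumulate from one iteration to the next. The sketch below uses the POSIX calls clock_gettime and clock_nanosleep with an absolute deadline; weld_step is a hypothetical control action standing in for the welding robot above, not a real API.

```c
#include <time.h>

/* Hypothetical control action; in a real system this would command
 * the welding hardware. */
extern void weld_step(void);

/* Run weld_step() at fixed absolute instants, period_ns apart. */
void control_loop(long period_ns)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);      /* starting point */
    for (;;) {
        next.tv_nsec += period_ns;              /* next absolute deadline */
        while (next.tv_nsec >= 1000000000L) {   /* normalize into tv_sec */
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* Sleep until exactly that instant, then act. Sleeping to an
         * absolute time avoids accumulating per-iteration drift. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        weld_step();    /* must finish before the next deadline */
    }
}
```

Note that this only expresses the schedule; a hard real-time guarantee also requires an operating system (or a library OS of the kind discussed below) that bounds how late the wakeup can be.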
A soft real-time system is one where missing an occasional deadline, while not desirable, is acceptable and does not cause any permanent damage. Digital audio or multimedia systems fall in this category. Smartphones are also soft real-time systems.

Since meeting deadlines is crucial in (hard) real-time systems, sometimes the operating system is simply a library linked in with the application programs, with everything tightly coupled and no protection between parts of the system. An example of this type of real-time system is eCos.

The categories of handhelds, embedded systems, and real-time systems overlap considerably. Nearly all of them have at least some soft real-time aspects. The embedded and real-time systems run only software put in by the system designers; users cannot add their own software, which makes protection easier. The handhelds and embedded systems are intended for consumers, whereas real-time systems are more for industrial usage. Nevertheless, they have a certain amount in common.

1.4.9 Smart Card Operating Systems

The smallest operating systems run on smart cards, which are credit-card-sized devices containing a CPU chip. They have very severe processing power and memory constraints. Some are powered by contacts in the reader into which they are inserted, but contactless smart cards are inductively powered, which greatly limits what they can do. Some of them can handle only a single function, such as electronic payments, but others can handle multiple functions. Often these are proprietary systems.

Some smart cards are Java oriented. This means that the ROM on the smart card holds an interpreter for the Java Virtual Machine (JVM). Java applets (small programs) are downloaded to the card and are interpreted by the JVM interpreter. Some of these cards can handle multiple Java applets at the same time, leading to multiprogramming and the need to schedule them. Resource management and protection also become an issue when two or more applets are present at the same time. These issues must be handled by the (usually extremely primitive) operating system present on the card.

1.5 OPERATING SYSTEM CONCEPTS

Most operating systems provide certain basic concepts and abstractions such as processes, address spaces, and files that are central to understanding them. In the following sections, we will look at some of these basic concepts ever so briefly, as
|
OS_Page_38_Chunk38
|
an introduction. We will come back to each of them in great detail later in this book. To illustrate these concepts we will, from time to time, use examples, generally drawn from UNIX. Similar examples typically exist in other systems as well, however, and we will study some of them later.

1.5.1 Processes

A key concept in all operating systems is the process. A process is basically a program in execution. Associated with each process is its address space, a list of memory locations from 0 to some maximum, which the process can read and write. The address space contains the executable program, the program's data, and its stack. Also associated with each process is a set of resources, commonly including registers (among them the program counter and stack pointer), a list of open files, outstanding alarms, lists of related processes, and all the other information needed to run the program. A process is fundamentally a container that holds all the information needed to run a program. We will come back to the process concept in much more detail in Chap. 2.

For the time being, the easiest way to get a good intuitive feel for a process is to think about a multiprogramming system. The user may have started a video editing program and instructed it to convert a one-hour video to a certain format (something that can take hours) and then gone off to surf the Web. Meanwhile, a background process that wakes up periodically to check for incoming email may have started running. Thus we have (at least) three active processes: the video editor, the Web browser, and the email receiver. Periodically, the operating system decides to stop running one process and start running another, perhaps because the first one has used up more than its share of CPU time in the past second or two.

When a process is suspended temporarily like this, it must later be restarted in exactly the same state it had when it was stopped. This means that all information about the process must be explicitly saved somewhere during the suspension. For example, the process may have several files open for reading at once. Associated with each of these files is a pointer giving the current position (i.e., the number of the byte or record to be read next). When a process is temporarily suspended, all these pointers must be saved so that a read call executed after the process is restarted will read the proper data. In many operating systems, all the information about each process, other than the contents of its own address space, is stored in an operating system table called the process table, which is an array of structures, one for each process currently in existence.

Thus, a (suspended) process consists of its address space, usually called the core image (in honor of the magnetic core memories used in days of yore), and its process table entry, which contains the contents of its registers and many other items needed to restart the process later (a sketch of what such an entry might hold appears below).
|
OS_Page_39_Chunk39
|
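As a rough illustration, a process-table entry might look something like the structure below. The field names and sizes are invented for this sketch; every real kernel has its own, much larger, version.

```c
#define MAX_OPEN_FILES 16

/* Where the process is in its lifecycle. */
enum proc_state { RUNNING, READY, BLOCKED };

/* The saved position in one open file: the byte or record to be read
 * next, which must survive a suspension. */
struct open_file {
    int  in_use;
    long position;
};

struct process_entry {
    int pid;                          /* process identifier */
    int uid, gid;                     /* owner and group */
    enum proc_state state;
    unsigned long pc, sp;             /* saved program counter and stack pointer */
    unsigned long regs[32];           /* other saved registers */
    struct open_file files[MAX_OPEN_FILES];
    int parent;                       /* index of the parent's entry */
    /* ... outstanding alarms, accounting information, and so on ... */
};
```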
The key process-management system calls are those dealing with the creation and termination of processes. Consider a typical example. A process called the command interpreter or shell reads commands from a terminal. The user has just typed a command requesting that a program be compiled. The shell must now create a new process that will run the compiler. When that process has finished the compilation, it executes a system call to terminate itself.

If a process can create one or more other processes (referred to as child processes) and these processes in turn can create child processes, we quickly arrive at the process tree structure of Fig. 1-13. Related processes that are cooperating to get some job done often need to communicate with one another and synchronize their activities. This communication is called interprocess communication, and will be addressed in detail in Chap. 2.

[Figure 1-13. A process tree. Process A created two child processes, B and C. Process B created three child processes, D, E, and F.]

Other process system calls are available to request more memory (or release unused memory), wait for a child process to terminate, and overlay its program with a different one.

Occasionally, there is a need to convey information to a running process that is not sitting around waiting for this information. For example, a process that is communicating with another process on a different computer does so by sending messages to the remote process over a computer network. To guard against the possibility that a message or its reply is lost, the sender may request that its own operating system notify it after a specified number of seconds, so that it can retransmit the message if no acknowledgement has been received yet. After setting this timer, the program may continue doing other work.

When the specified number of seconds has elapsed, the operating system sends an alarm signal to the process. The signal causes the process to temporarily suspend whatever it was doing, save its registers on the stack, and start running a special signal-handling procedure, for example, to retransmit a presumably lost message. When the signal handler is done, the running process is restarted in the state it was in just before the signal. Signals are the software analog of hardware interrupts and can be generated by a variety of causes in addition to timers expiring. Many traps detected by hardware, such as executing an illegal instruction or using an invalid address, are also converted into signals to the guilty process.
|
OS_Page_40_Chunk40
|
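The timer-and-signal mechanism just described maps directly onto the classic UNIX calls alarm and signal. A minimal sketch, with the retransmission itself elided:

```c
#include <signal.h>
#include <unistd.h>

/* Run by the operating system when the timer expires, in the same
 * spirit as retransmitting a presumably lost message. */
static void on_alarm(int sig)
{
    (void)sig;   /* the signal number is not needed here */
    /* retransmit the message, perhaps rearming the timer with alarm() */
}

int main(void)
{
    signal(SIGALRM, on_alarm);   /* install the signal handler */
    alarm(10);                   /* ask for a SIGALRM in 10 seconds */

    /* ... continue doing other work; if no acknowledgement has arrived
     * by then, on_alarm() will run 10 seconds from now ... */
    pause();                     /* here: just wait for the signal */
    return 0;
}
```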
Each person authorized to use a system is assigned a UID (User IDentification) by the system administrator. Every process started has the UID of the person who started it. A child process has the same UID as its parent. Users can be members of groups, each of which has a GID (Group IDentification).

One UID, called the superuser (in UNIX), or Administrator (in Windows), has special power and may override many of the protection rules. In large installations, only the system administrator knows the password needed to become superuser, but many of the ordinary users (especially students) devote considerable effort seeking flaws in the system that allow them to become superuser without the password. We will study processes and interprocess communication in Chap. 2.

1.5.2 Address Spaces

Every computer has some main memory that it uses to hold executing programs. In a very simple operating system, only one program at a time is in memory. To run a second program, the first one has to be removed and the second one placed in memory.

More sophisticated operating systems allow multiple programs to be in memory at the same time. To keep them from interfering with one another (and with the operating system), some kind of protection mechanism is needed. While this mechanism has to be in the hardware, it is controlled by the operating system.

The above viewpoint is concerned with managing and protecting the computer's main memory. A different, but equally important, memory-related issue is managing the address space of the processes. Normally, each process has some set of addresses it can use, typically running from 0 up to some maximum. In the simplest case, the maximum amount of address space a process has is less than the main memory. In this way, a process can fill up its address space and there will be enough room in main memory to hold it all.

However, on many computers addresses are 32 or 64 bits, giving an address space of 2^32 or 2^64 bytes, respectively (4 GB in the 32-bit case). What happens if a process has more address space than the computer has main memory and the process wants to use it all? In the first computers, such a process was just out of luck. Nowadays, a technique called virtual memory exists, as mentioned earlier, in which the operating system keeps part of the address space in main memory and part on disk and shuttles pieces back and forth between them as needed. In essence, the operating system creates the abstraction of an address space as the set of addresses a process may reference. The address space is decoupled from the machine's physical memory and may be either larger or smaller than the physical memory. Management of address spaces and physical memory forms an important part of what an operating system does, so all of Chap. 3 is devoted to this topic.

1.5.3 Files

Another key concept supported by virtually all operating systems is the file system. As noted before, a major function of the operating system is to hide the peculiarities of the disks and other I/O devices and present the programmer with a
|
OS_Page_41_Chunk41
|