values of the attributes at each pixel location. The value of a given attribute U in an (x, y) plane can be expressed using plane equations of the form

\[
U(x, y) = A_U x + B_U y + C_U
\]

where A_U, B_U, and C_U are interpolation parameters associated with each attribute U. The interpolation parameters are all represented as single precision floating-point numbers.

Because a pixel shader processor needs both a function evaluator and an attribute interpolator, a single SFU can be designed that performs both functions efficiently. Both functions use a sum-of-products operation to interpolate results, and the number of terms to be summed is very similar in the two cases.

Texture Operations

Texture mapping and filtering is another key set of specialized floating-point arithmetic operations in a GPU. The operations used for texture mapping include:

1. Receive the texture address (s, t) for the current screen pixel (x, y), where s and t are single precision floating-point numbers.
2. Compute the level of detail to identify the correct texture MIP-map level.
3. Compute the trilinear interpolation fraction.
4. Scale the texture address (s, t) for the selected MIP-map level.
5. Access memory and retrieve the desired texels (texture elements).
6. Perform the filtering operation on the texels.

MIP-map: from the Latin phrase multum in parvo, or much in a small space. A MIP-map contains precalculated images of different resolutions, used to increase rendering speed and reduce artifacts.

Texture mapping requires a significant amount of floating-point computation for full-speed operation, much of which is done at 16-bit half precision. As an example, the GeForce 8800 Ultra delivers about 500 GFLOPS of proprietary-format floating-point computation for texture mapping instructions, in addition to its conventional IEEE single precision floating-point instructions. For more details on texture mapping and filtering, see Foley and van Dam [1995].

Performance

The floating-point addition and multiplication arithmetic hardware is fully pipelined, and latency is optimized to balance delay and area. While pipelined, the throughput of the special functions is less than that of the floating-point addition and multiplication operations. Quarter-speed throughput for the special functions is typical in modern GPUs, with one SFU shared by four SP cores. In contrast, CPUs typically have significantly lower throughput for similar functions, such as division and square root, albeit with more accurate results. The attribute interpolation hardware is typically fully pipelined to enable full-speed pixel shaders.
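To make the plane-equation arithmetic concrete, the following C sketch evaluates one attribute at a pixel; the struct and function names are illustrative, not the hardware's actual interface.

    /* Plane-equation attribute interpolation, U(x,y) = A_U*x + B_U*y + C_U.
       Illustrative only: a real GPU evaluates this in dedicated interpolator
       hardware, one small sum of products per attribute per pixel. */
    typedef struct {
        float A, B, C;    /* single precision interpolation parameters */
    } PlaneEq;

    float interpolate_attribute(PlaneEq u, float x, float y)
    {
        return u.A * x + u.B * y + u.C;   /* two multiplies, two adds */
    }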
Double Precision

Newer GPUs such as the Tesla T10P also support IEEE 754 64-bit double precision operations in hardware. Standard floating-point arithmetic operations in double precision include addition, multiplication, and conversions between different floating-point and integer formats. The 2008 IEEE 754 floating-point standard includes a specification for the fused-multiply-add (FMA) operation, as discussed in Chapter 3. The FMA operation performs a floating-point multiplication followed by an addition, with a single rounding. The fused multiplication and addition operations retain full accuracy in intermediate calculations. This behavior enables more accurate floating-point computations involving the accumulation of products, including dot products, matrix multiplication, and polynomial evaluation. The FMA instruction also enables efficient software implementations of exactly rounded division and square root, removing the need for a hardware division or square root unit.

A double precision hardware FMA unit implements 64-bit addition, multiplication, conversions, and the FMA operation itself.

FIGURE A.6.2 Double precision fused-multiply-add (FMA) unit. Hardware to implement floating-point A × B + C for double precision. [Block diagram: a 53 × 53 multiplier array producing sum and carry, exponent-difference logic, an alignment shifter and inversion for the addend C, a 161-bit 3-2 CSA, a carry-propagate adder, a complementer, a normalizer, and a rounder producing the 53-bit result.]
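The accuracy benefit of the single rounding can be demonstrated in ordinary C with the C99 fma() function, which has the same multiply-then-add-with-one-rounding semantics described above; this is a sketch of the idea, not the GPU's own code path.

    #include <math.h>

    /* Dot product accumulated with fused multiply-add: each product
       a[i]*b[i] is formed and added to sum with a single rounding,
       so no precision is lost in the intermediate product. */
    double dot_fma(const double *a, const double *b, int n)
    {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum = fma(a[i], b[i], sum);
        return sum;
    }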
The architecture of a double precision FMA unit enables full-speed denormalized number support on both inputs and outputs. Figure A.6.2 shows a block diagram of an FMA unit.

As shown in Figure A.6.2, the significands of A and B are multiplied to form a 106-bit product, with the results left in carry-save form. In parallel, the 53-bit addend C is conditionally inverted and aligned to the 106-bit product. The sum and carry results of the 106-bit product are summed with the aligned addend through a 161-bit-wide carry-save adder (CSA). The carry-save output is then summed in a carry-propagate adder to produce an unrounded result in nonredundant two's complement form. The result is conditionally recomplemented, so as to return a result in sign-magnitude form. The complemented result is normalized, and then rounded to fit within the target format.

A.7 Real Stuff: The NVIDIA GeForce 8800

The NVIDIA GeForce 8800 GPU, introduced in November 2006, is a unified vertex and pixel processor design that also supports parallel computing applications written in C using the CUDA parallel programming model. It is the first implementation of the Tesla unified graphics and computing architecture described in Section A.4 and in Lindholm, Nickolls, Oberman, and Montrym [2008]. A family of Tesla architecture GPUs addresses the different needs of laptops, desktops, workstations, and servers.

Streaming Processor Array (SPA)

The GeForce 8800 GPU shown in Figure A.7.1 contains 128 streaming processor (SP) cores organized as 16 streaming multiprocessors (SMs). Two SMs share a texture unit in each texture/processor cluster (TPC). An array of eight TPCs makes up the streaming processor array (SPA), which executes all graphics shader programs and computing programs.

The host interface unit communicates with the host CPU via the PCI-Express bus, checks command consistency, and performs context switching. The input assembler collects geometric primitives (points, lines, triangles). The work distribution blocks dispatch vertices, pixels, and compute thread arrays to the TPCs in the SPA. The TPCs execute vertex and geometry shader programs and computing programs. Output geometric data is sent to the viewport/clip/setup/raster/zcull block to be rasterized into pixel fragments that are then redistributed back into the SPA to execute pixel shader programs. Shaded pixels are sent across the interconnection network for processing by the ROP units. The network also routes texture memory read requests from the SPA to DRAM and reads data from DRAM through a level-2 cache back to the SPA.
FIGURE A.7.1 NVIDIA Tesla unified graphics and computing GPU architecture. This GeForce 8800 has 128 streaming processor (SP) cores in 16 streaming multiprocessors (SM), arranged in eight texture/processor clusters (TPC). The processors connect with six 64-bit-wide DRAM partitions via an interconnection network. Other GPUs implementing the Tesla architecture vary the number of SP cores, SMs, DRAM partitions, and other units. [Block diagram: the host CPU and system memory attach through a bridge to the host interface; the input assembler feeds vertex, pixel, and compute work distribution units; the SPA's eight TPCs (each with two SMs, shared memories, and a texture unit with a Tex L1 cache) connect through the interconnection network to six ROP/L2 units and DRAM partitions, plus the display interface and high-definition video processors.]

Texture/Processor Cluster (TPC)

Each TPC contains a geometry controller, an SM controller (SMC), two streaming multiprocessors (SMs), and a texture unit, as shown in Figure A.7.2.

The geometry controller maps the logical graphics vertex pipeline into recirculation on the physical SMs by directing all primitive and vertex attribute and topology flow in the TPC.

The SMC controls multiple SMs, arbitrating the shared texture unit, load/store path, and I/O path. The SMC serves three graphics workloads simultaneously: vertex, geometry, and pixel.

The texture unit processes a texture instruction for one vertex, geometry, or pixel quad, or for four compute threads, per cycle. Texture instruction sources are texture coordinates, and the outputs are weighted samples, typically a four-component (RGBA) floating-point color. The texture unit is deeply pipelined. Although it
contains a streaming cache to capture filtering locality, it streams hits mixed with misses without stalling.

Streaming Multiprocessor (SM)

The SM is a unified graphics and computing multiprocessor that executes vertex, geometry, and pixel-fragment shader programs and parallel computing programs. The SM consists of eight SP thread processor cores, two SFUs, a multithreaded instruction fetch and issue unit (MT issue), an instruction cache, a read-only constant cache, and a 16 KB read/write shared memory. It executes scalar instructions for individual threads.

The GeForce 8800 Ultra clocks the SP cores and SFUs at 1.5 GHz, for a peak of 36 GFLOPS per SM. To optimize power and area efficiency, some SM non-datapath units operate at half the SP clock rate.

FIGURE A.7.2 Texture/processor cluster (TPC) and a streaming multiprocessor (SM). Each SM has eight streaming processor (SP) cores, two SFUs, and a shared memory. [Block diagram: the TPC's geometry controller and SMC above its two SMs and the texture unit with Tex L1 cache; the expanded SM shows its I-cache, MT issue unit, C-cache, eight SP cores, two SFUs, and shared memory.]
To efficiently execute hundreds of parallel threads while running several different programs, the SM is hardware multithreaded. It manages and executes up to 768 concurrent threads in hardware with zero scheduling overhead. Each thread has its own thread execution state and can execute an independent code path. A warp consists of up to 32 threads of the same type—vertex, geometry, pixel, or compute. The SIMT design, previously described in Section A.4, shares the SM instruction fetch and issue unit efficiently across 32 threads but requires a full warp of active threads for full performance efficiency.

The SM schedules and executes multiple warp types concurrently. Each issue cycle, the scheduler selects one of the 24 warps to execute a SIMT warp instruction. An issued warp instruction executes as four sets of 8 threads over four processor cycles. The SP and SFU units execute instructions independently, and by issuing instructions between them on alternate cycles, the scheduler can keep both fully occupied. A scoreboard qualifies each warp for issue each cycle. The instruction scheduler prioritizes all ready warps and selects the one with highest priority for issue. Prioritization considers warp type, instruction type, and "fairness" to all warps executing in the SM.

The SM executes cooperative thread arrays (CTAs) as multiple concurrent warps which access a shared memory region allocated dynamically for the CTA.

Instruction Set

Threads execute scalar instructions, unlike previous GPU vector instruction architectures. Scalar instructions are simpler and compiler friendly. Texture instructions remain vector based, taking a source coordinate vector and returning a filtered color vector.

The register-based instruction set includes all the floating-point and integer arithmetic, transcendental, logical, flow control, memory load/store, and texture instructions listed in the PTX instruction table of Figure A.4.3. Memory load/store instructions use integer byte addressing with register-plus-offset address arithmetic. For computing, the load/store instructions access three read-write memory spaces: local memory for per-thread, private, temporary data; shared memory for low-latency per-CTA data shared by the threads of the CTA; and global memory for data shared by all threads. Computing programs use the fast barrier synchronization bar.sync instruction to synchronize threads within a CTA that communicate with each other via shared and global memory. The latest Tesla architecture GPUs implement PTX atomic memory operations, which facilitate parallel reductions and parallel data structure management.
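In CUDA source code, the bar.sync barrier appears as __syncthreads(). The following minimal kernel sketch (the kernel name and the 256-thread limit are our assumptions) shows the usual pattern of threads in a CTA communicating through shared memory across a barrier.

    // Each thread publishes a value in shared memory, waits at the CTA
    // barrier (compiled to bar.sync), then reads a neighbor's value.
    // Assumes blockDim.x <= 256 and n a multiple of blockDim.x.
    __global__ void neighbor_exchange(const float *in, float *out, int n)
    {
        __shared__ float buf[256];
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        if (i < n) buf[threadIdx.x] = in[i];
        __syncthreads();   // all shared memory writes are now visible

        if (i < n) {
            int neighbor = (threadIdx.x + 1) % blockDim.x;
            out[i] = buf[neighbor];
        }
    }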
Streaming Processor (SP)

The multithreaded SP core is the primary thread processor, as introduced in Section A.4. Its register file provides 1024 scalar 32-bit registers for up to 96 threads (more threads than the example SP of Section A.4). Its floating-point add and multiply operations are compatible with the IEEE 754 standard for single precision FP numbers, including not-a-number (NaN) and infinity values. The add and multiply operations use IEEE round-to-nearest-even as the default rounding mode. The SP core also implements all of the 32-bit and 64-bit integer arithmetic, comparison, conversion, and logical PTX instructions in Figure A.4.3. The processor is fully pipelined, and latency is optimized to balance delay and area.

Special Function Unit (SFU)

The SFU supports computation of both transcendental functions and planar attribute interpolation. As described in Section A.6, it uses quadratic interpolation based on enhanced minimax approximations to approximate the reciprocal, reciprocal square root, log2 x, 2^x, and sin/cos functions at one result per cycle. The SFU also supports pixel attribute interpolation such as color, depth, and texture coordinates at four samples per cycle.

Rasterization

Geometry primitives from the SMs go in their original round-robin input order to the viewport/clip/setup/raster/zcull block. The viewport and clip units clip the primitives to the view frustum and to any enabled user clip planes, and then transform the vertices into screen (pixel) space. Surviving primitives then go to the setup unit, which generates edge equations for the rasterizer. A coarse-rasterization stage generates all pixel tiles that are at least partially inside the primitive. The zcull unit maintains a hierarchical z surface, rejecting pixel tiles if they are conservatively known to be occluded by previously drawn pixels. The rejection rate is up to 256 pixels per clock. Pixels that survive zcull then go to a fine-rasterization stage that generates detailed coverage information and depth values.

The depth test and update can be performed ahead of the fragment shader, or after, depending on the current state. The SMC assembles surviving pixels into warps to be processed by an SM running the current pixel shader. The SMC then sends surviving pixels and associated data to the ROP.

Raster Operations Processor (ROP) and Memory System

Each ROP is paired with a specific memory partition. For each pixel fragment emitted by a pixel shader program, ROPs perform depth and stencil testing and updates, and in parallel, color blending and updates. Lossless color compression (up to 8:1) and depth compression (up to 8:1) are used to reduce DRAM bandwidth. Each ROP has a peak rate of four pixels per clock and supports 16-bit floating-point and 32-bit floating-point HDR formats. ROPs support double-rate depth processing when color writes are disabled.
Antialiasing support includes up to 16× multisampling and supersampling. The coverage-sampling antialiasing (CSAA) algorithm computes and stores Boolean coverage at up to 16 samples and compresses redundant color, depth, and stencil information into the memory footprint and bandwidth of four or eight samples for improved performance.

The DRAM memory data bus width is 384 pins, arranged in six independent partitions of 64 pins each. Each partition supports double-data-rate DDR2 and graphics-oriented GDDR3 protocols at up to 1.0 GHz, yielding a bandwidth of about 16 GB/s per partition, or 96 GB/s in total. The memory controllers support a wide range of DRAM clock rates, protocols, device densities, and data bus widths. Texture and load/store requests can occur between any TPC and any memory partition, so an interconnection network routes requests and responses.

Scalability

The Tesla unified architecture is designed for scalability. Varying the number of SMs, TPCs, ROPs, caches, and memory partitions provides the right balance for different performance and cost targets in GPU market segments. Scalable link interconnect (SLI) connects multiple GPUs, providing further scalability.

Performance

The GeForce 8800 Ultra clocks the SP thread processor cores and SFUs at 1.5 GHz, for a theoretical operation peak of 576 GFLOPS. The GeForce 8800 GTX has a 1.35 GHz processor clock and a corresponding peak of 518 GFLOPS.

The following three sections compare the performance of a GeForce 8800 GPU with that of a multicore CPU on three different applications—dense linear algebra, fast Fourier transforms, and sorting. The GPU programs and libraries are compiled CUDA C code. The CPU code uses the single precision multithreaded Intel MKL 10.0 library to leverage SSE instructions and multiple cores.

Dense Linear Algebra Performance

Dense linear algebra computations are fundamental in many applications. Volkov and Demmel [2008] present GPU and CPU performance results for single precision dense matrix-matrix multiplication (the SGEMM routine) and LU, QR, and Cholesky matrix factorizations. Figure A.7.3 compares GFLOPS rates on SGEMM dense matrix-matrix multiplication for a GeForce 8800 GTX GPU with a quad-core CPU. Figure A.7.4 compares GFLOPS rates on matrix factorization for a GPU with a quad-core CPU.

Because SGEMM matrix-matrix multiply and similar BLAS3 routines are the bulk of the work in matrix factorization, their performance sets an upper bound on the factorization rate.
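The peak figures quoted above follow from the clock rates and core counts; as a check (our arithmetic, assuming each SP can dual-issue a multiply-add plus a multiply, i.e., 3 flops per cycle):

\[
128 \times 3 \times 1.5\,\text{GHz} = 576\ \text{GFLOPS (Ultra)}, \qquad
128 \times 3 \times 1.35\,\text{GHz} \approx 518\ \text{GFLOPS (GTX)}.
\]

Counting only the multiply-add pair (2 flops per SP per cycle), the GTX multiply-add peak is 128 × 2 × 1.35 GHz ≈ 346 GFLOPS, the baseline against which the SGEMM results below are measured.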
FIGURE A.7.3 SGEMM dense matrix-matrix multiplication performance rates. The graph shows single precision GFLOPS rates achieved in multiplying square N×N matrices (solid lines) and thin N×64 and 64×N matrices (dashed lines), for N from 64 to 8192. Adapted from Figure 6 of Volkov and Demmel [2008]. The black lines are a 1.35 GHz GeForce 8800 GTX using Volkov's SGEMM code (now in NVIDIA CUBLAS 2.0) on matrices in GPU memory. The blue lines are a quad-core 2.4 GHz Intel Core2 Quad Q6600, 64-bit Linux, Intel MKL 10.0 on matrices in CPU memory.

FIGURE A.7.4 Dense matrix factorization performance rates. The graph shows GFLOPS rates achieved in LU, Cholesky, and QR matrix factorizations using the GPU and using the CPU alone, for matrix orders from 64 to 16,384. Adapted from Figure 7 of Volkov and Demmel [2008]. The black lines are a 1.35 GHz NVIDIA GeForce 8800 GTX, CUDA 1.1, Windows XP, attached to a 2.67 GHz Intel Core2 Duo E6700, including all CPU–GPU data transfer times. The blue lines are a quad-core 2.4 GHz Intel Core2 Quad Q6600, 64-bit Linux, Intel MKL 10.0.

As the matrix order increases beyond 200 to 400, the factorization
problem becomes large enough that SGEMM can leverage the GPU parallelism and overcome the CPU–GPU system and copy overhead. Volkov's SGEMM matrix-matrix multiply achieves 206 GFLOPS, about 60% of the GeForce 8800 GTX peak multiply-add rate, while the QR factorization reached 192 GFLOPS, about 4.3 times the quad-core CPU.

FFT Performance

Fast Fourier Transforms are used in many applications. Large transforms and multidimensional transforms are partitioned into batches of smaller 1D transforms.

Figure A.7.5 compares the in-place 1D complex single precision FFT performance of a 1.35 GHz GeForce 8800 GTX (dating from late 2006) with a 2.8 GHz quad-core Intel Xeon E5462 series (code named "Harpertown," dating from late 2007).

FIGURE A.7.5 Fast Fourier Transform throughput performance. The graph compares the performance of batched one-dimensional in-place complex FFTs, for transform sizes from 128 to 4,194,304 elements, on a 1.35 GHz GeForce 8800 GTX with a quad-core 2.8 GHz Intel Xeon E5462 series (code named "Harpertown"), 6 MB L2 cache, 4 GB memory, 1600 MHz FSB, Red Hat Linux, Intel MKL 10.0.

CPU performance was measured using the Intel Math Kernel Library (MKL) 10.0 FFT with four threads. GPU performance was measured using the NVIDIA CUFFT 2.1 library and batched 1D radix-16 decimation-in-frequency FFTs. Both CPU and GPU throughput performance was measured using batched FFTs, with a batch size of 2^24/n, where n is the transform size; thus, the workload for every transform size was 128 MB. To determine the GFLOPS rate, the number of operations per transform was taken as 5n log2 n.
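As a sketch of how such rates are computed (our helper, not the benchmark's actual code), the reported GFLOPS follow from the 5n log2 n operation count and the fixed 2^24-element workload:

    #include <math.h>

    /* GFLOPS for a batch of 1D complex FFTs of size n measured over
       'seconds': batch = 2^24 / n, and each transform is counted as
       5 n log2(n) floating-point operations, as in the text. */
    double fft_gflops(double n, double seconds)
    {
        double batch = 16777216.0 / n;            /* 2^24 / n */
        double flops = 5.0 * n * log2(n) * batch;
        return flops / (seconds * 1e9);
    }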
Sorting Performance

In contrast to the applications just discussed, sorting requires far more substantial coordination among parallel threads, and parallel scaling is correspondingly harder to obtain. Nevertheless, a variety of well-known sorting algorithms can be efficiently parallelized to run well on the GPU. Satish et al. [2008] detail the design of sorting algorithms in CUDA, and the results they report for radix sort are summarized below.

Figure A.7.6 compares the parallel sorting performance of a GeForce 8800 Ultra with an 8-core Intel Clovertown system, both of which date to early 2007. The CPU cores are distributed between two physical sockets. Each socket contains a multichip module with twin Core2 chips, and each chip has a 4 MB L2 cache. All sorting routines were designed to sort key-value pairs where both keys and values are 32-bit integers. The primary algorithm being studied is radix sort, although the quicksort-based parallel_sort() procedure provided by Intel's Threading Building Blocks is also included for comparison. Of the two CPU-based radix sort codes, one was implemented using only the scalar instruction set, and the other utilizes carefully hand-tuned assembly language routines that take advantage of the SSE2 SIMD vector instructions.

FIGURE A.7.6 Parallel sorting performance. This graph compares sorting rates, in millions of key-value pairs per second, for parallel radix sort implementations (GPU radix sort, CPU radix sort in scalar and SIMD versions, and the CPU quicksort) on a 1.5 GHz GeForce 8800 Ultra and an 8-core 2.33 GHz Intel Core2 Xeon E5345 system, for sequence sizes from roughly one thousand to one hundred million elements.

The graph itself shows the achieved sorting rate—defined as the number of elements sorted divided by the time to sort—for a range of sequence sizes. It is apparent
from this graph that the GPU radix sort achieved the highest sorting rate for all sequences of 8K elements and larger. In this range, it is on average 2.6 times faster than the quicksort-based routine and roughly 2 times faster than the radix sort routines, all of which were using the eight available CPU cores. The CPU radix sort performance varies widely, likely due to poor cache locality of its global permutations.

A.8 Real Stuff: Mapping Applications to GPUs

The advent of multicore CPUs and manycore GPUs means that mainstream processor chips are now parallel systems. Furthermore, their parallelism continues to scale with Moore's law. The challenge is to develop mainstream visual computing and high-performance computing applications that transparently scale their parallelism to leverage the increasing number of processor cores, much as 3D graphics applications transparently scale their parallelism to GPUs with widely varying numbers of cores. This section presents examples of mapping scalable parallel computing applications to the GPU using CUDA.

Sparse Matrices

A wide variety of parallel algorithms can be written in CUDA in a fairly straightforward manner, even when the data structures involved are not simple regular grids. Sparse matrix-vector multiplication (SpMV) is a good example of an important numerical building block that can be parallelized quite directly using the abstractions provided by CUDA. The kernels we discuss below, when combined with the provided CUBLAS vector routines, make writing iterative solvers such as the conjugate gradient method straightforward.

A sparse n × n matrix is one in which the number of nonzero entries m is only a small fraction of the total. Sparse matrix representations seek to store only the nonzero elements of a matrix. Since it is fairly typical that a sparse n × n matrix will contain only m = O(n) nonzero elements, this represents a substantial savings in storage space and processing time.

One of the most common representations for general unstructured sparse matrices is the compressed sparse row (CSR) representation. The m nonzero elements of the matrix A are stored in row-major order in an array Av. A second array Aj records the corresponding column index for each entry of Av. Finally, an array Ap of n + 1 elements records the extent of each row in the previous arrays; the entries for row i in Aj and Av extend from index Ap[i] up to, but not including, index Ap[i + 1]. This implies that Ap[0] will always be 0 and Ap[n] will always be the number of nonzero elements in the matrix. Figure A.8.1 shows an example of the CSR representation of a simple matrix.
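A direct C rendering of this layout (the struct name is ours; the array names match the text) might be:

    /* Compressed sparse row (CSR) storage for an n x n matrix
       with m nonzero elements. */
    typedef struct {
        unsigned int n;     /* matrix dimension                          */
        unsigned int *Ap;   /* n+1 entries; row i spans Ap[i]..Ap[i+1]-1 */
        unsigned int *Aj;   /* m column indices                          */
        float        *Av;   /* m nonzero values in row-major order      */
    } CsrMatrix;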
Given a matrix A in CSR form and a vector x, we can compute a single row of the product y = Ax using the multiply_row() procedure shown in Figure A.8.2. Computing the full product is then simply a matter of looping over all rows and computing the result for that row using multiply_row(), as in the serial C code shown in Figure A.8.3.

FIGURE A.8.1 Compressed sparse row (CSR) matrix.

a. Sample matrix A:

        3 0 1 0
        0 0 0 0
        0 2 4 1
        1 0 0 1

b. CSR representation of the matrix:

        Av[7] = { 3 1 2 4 1 1 1 }
        Aj[7] = { 0 2 1 2 3 0 3 }
        Ap[5] = { 0 2 2 5 7 }

FIGURE A.8.2 Serial C code for a single row of sparse matrix-vector multiply.

    float multiply_row(unsigned int rowsize,
                       unsigned int *Aj,  // column indices for row
                       float *Av,         // nonzero entries for row
                       float *x)          // the RHS vector
    {
        float sum = 0;
        for(unsigned int column = 0; column < rowsize; ++column)
            sum += Av[column] * x[Aj[column]];
        return sum;
    }

This algorithm can be translated into a parallel CUDA kernel quite easily. We simply spread the loop in csrmul_serial() over many parallel threads. Each thread will compute exactly one row of the output vector y. The code for this kernel is shown in Figure A.8.4. Note that it looks extremely similar to the serial loop used in the csrmul_serial() procedure. There are really only two points of difference. First, the row index for each thread is computed from the block and thread indices assigned to each thread, eliminating the for-loop. Second, we have a conditional that only evaluates a row product if the row index is within the bounds of the matrix (this is necessary since the number of rows n need not be a multiple of the block size used in launching the kernel).
FIGURE A.8.3 Serial code for sparse matrix-vector multiply.

    void csrmul_serial(unsigned int *Ap, unsigned int *Aj,
                       float *Av, unsigned int num_rows,
                       float *x, float *y)
    {
        for(unsigned int row = 0; row < num_rows; ++row)
        {
            unsigned int row_begin = Ap[row];
            unsigned int row_end   = Ap[row+1];

            y[row] = multiply_row(row_end - row_begin,
                                  Aj + row_begin,
                                  Av + row_begin,
                                  x);
        }
    }

FIGURE A.8.4 CUDA version of sparse matrix-vector multiply.

    __global__
    void csrmul_kernel(unsigned int *Ap, unsigned int *Aj,
                       float *Av, unsigned int num_rows,
                       float *x, float *y)
    {
        unsigned int row = blockIdx.x*blockDim.x + threadIdx.x;

        if( row < num_rows )
        {
            unsigned int row_begin = Ap[row];
            unsigned int row_end   = Ap[row+1];

            y[row] = multiply_row(row_end - row_begin,
                                  Aj + row_begin,
                                  Av + row_begin,
                                  x);
        }
    }

Assuming that the matrix data structures have already been copied to the GPU device memory, launching this kernel will look like:

    unsigned int blocksize = 128;  // or any size up to 512
    unsigned int nblocks   = (num_rows + blocksize - 1) / blocksize;
    csrmul_kernel<<<nblocks,blocksize>>>(Ap, Aj, Av, num_rows, x, y);
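For completeness, here is a hedged sketch of the host-side setup this launch assumes, using the standard CUDA runtime API (error checking omitted; num_nonzeros stands for the value of Ap[num_rows]):

    unsigned int *dAp, *dAj;
    float *dAv, *dx, *dy;

    // Allocate device copies of the CSR arrays and vectors.
    cudaMalloc((void**)&dAp, (num_rows+1) * sizeof(unsigned int));
    cudaMalloc((void**)&dAj, num_nonzeros * sizeof(unsigned int));
    cudaMalloc((void**)&dAv, num_nonzeros * sizeof(float));
    cudaMalloc((void**)&dx,  num_rows * sizeof(float));
    cudaMalloc((void**)&dy,  num_rows * sizeof(float));

    // Copy inputs to the device, launch, and copy the result back.
    cudaMemcpy(dAp, Ap, (num_rows+1) * sizeof(unsigned int), cudaMemcpyHostToDevice);
    cudaMemcpy(dAj, Aj, num_nonzeros * sizeof(unsigned int), cudaMemcpyHostToDevice);
    cudaMemcpy(dAv, Av, num_nonzeros * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dx,  x,  num_rows * sizeof(float), cudaMemcpyHostToDevice);

    csrmul_kernel<<<nblocks, blocksize>>>(dAp, dAj, dAv, num_rows, dx, dy);
    cudaMemcpy(y, dy, num_rows * sizeof(float), cudaMemcpyDeviceToHost);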
The pattern that we see here is a very common one. The original serial algorithm is a loop whose iterations are independent of each other. Such loops can be parallelized quite easily by simply assigning one or more iterations of the loop to each parallel thread. The programming model provided by CUDA makes expressing this type of parallelism particularly straightforward.

This general strategy of decomposing computations into blocks of independent work, and more specifically breaking up independent loop iterations, is not unique to CUDA. This is a common approach used in one form or another by various parallel programming systems, including OpenMP and Intel's Threading Building Blocks.

Caching in Shared Memory

The SpMV algorithms outlined above are fairly simplistic. There are a number of optimizations that can be made in both the CPU and GPU codes to improve performance, including loop unrolling, matrix reordering, and register blocking. The parallel kernels can also be reimplemented in terms of the data parallel scan operations presented by Sengupta et al. [2007].

One of the important architectural features exposed by CUDA is the presence of the per-block shared memory, a small on-chip memory with very low latency. Taking advantage of this memory can deliver substantial performance improvements. One common way of doing so is to use shared memory as a software-managed cache to hold frequently reused data.

In the context of sparse matrix multiplication, we observe that several rows of A may use a particular array element x[i]. In many common cases, and particularly when the matrix has been reordered, the rows using x[i] will be rows near row i. We can therefore implement a simple caching scheme and expect to achieve some performance benefit. The block of threads processing rows i through j will load x[i] through x[j] into its shared memory. We will unroll the multiply_row() loop and fetch elements of x from the cache whenever possible. The resulting code is shown in Figure A.8.5. Shared memory can also be used to make other optimizations, such as fetching Ap[row+1] from an adjacent thread rather than refetching it from memory.

Because the Tesla architecture provides an explicitly managed on-chip shared memory, rather than an implicitly active hardware cache, it is fairly common to add this sort of optimization. Although this can impose some additional development burden on the programmer, it is relatively minor, and the potential performance benefits can be substantial. In the example shown in Figure A.8.5, even this fairly simple use of shared memory returns a roughly 20% performance improvement on representative matrices derived from 3D surface meshes. The availability of an explicitly managed memory in lieu of an implicit cache also has the advantage that caching and prefetching policies can be specifically tailored to the application's needs.
FIGURE A.8.5 Shared memory version of sparse matrix-vector multiply.

    __global__
    void csrmul_cached(unsigned int *Ap, unsigned int *Aj,
                       float *Av, unsigned int num_rows,
                       const float *x, float *y)
    {
        // Cache the rows of x[] corresponding to this block.
        __shared__ float cache[blocksize];

        unsigned int block_begin = blockIdx.x * blockDim.x;
        unsigned int block_end   = block_begin + blockDim.x;
        unsigned int row         = block_begin + threadIdx.x;

        // Fetch and cache our window of x[].
        if( row < num_rows ) cache[threadIdx.x] = x[row];
        __syncthreads();

        if( row < num_rows )
        {
            unsigned int row_begin = Ap[row];
            unsigned int row_end   = Ap[row+1];
            float sum = 0, x_j;

            for(unsigned int col = row_begin; col < row_end; ++col)
            {
                unsigned int j = Aj[col];

                // Fetch x_j from our cache when possible.
                if( j >= block_begin && j < block_end )
                    x_j = cache[j - block_begin];
                else
                    x_j = x[j];

                sum += Av[col] * x_j;
            }

            y[row] = sum;
        }
    }
These are fairly simple kernels whose purpose is to illustrate basic techniques in writing CUDA programs, rather than how to achieve maximal performance. Numerous possible avenues for optimization are available, several of which are explored by Williams et al. [2007] on a handful of different multicore architectures. Nevertheless, it is still instructive to examine the comparative performance of even these simplistic kernels.

On a 2 GHz Intel Core2 Xeon E5335 processor, the csrmul_serial() kernel runs at roughly 202 million nonzeros processed per second, for a collection of Laplacian matrices derived from 3D triangulated surface meshes. Parallelizing this kernel with the parallel_for construct provided by Intel's Threading Building Blocks produces parallel speed-ups of 2.0, 2.1, and 2.3 running on two, four, and eight cores of the machine, respectively. On a GeForce 8800 Ultra, the csrmul_kernel() and csrmul_cached() kernels achieve processing rates of roughly 772 and 920 million nonzeros per second, corresponding to parallel speed-ups of 3.8 and 4.6 times over the serial performance of a single CPU core.

Scan and Reduction

Parallel scan, also known as parallel prefix sum, is one of the most important building blocks for data-parallel algorithms [Blelloch, 1990]. Given a sequence a of n elements

\[
[a_0, a_1, \ldots, a_{n-1}]
\]

and a binary associative operator ⊕, the scan function computes the sequence

\[
\mathrm{scan}(a, \oplus) = [a_0,\ (a_0 \oplus a_1),\ \ldots,\ (a_0 \oplus a_1 \oplus \cdots \oplus a_{n-1})]
\]

As an example, if we take ⊕ to be the usual addition operator, then applying scan to the input array

    a = [3 1 7 0 4 1 6 3]

will produce the sequence of partial sums:

    scan(a, +) = [3 4 11 11 15 16 22 25]

This scan operator is an inclusive scan, in the sense that element i of the output sequence incorporates element a_i of the input. Incorporating only previous elements would yield an exclusive scan operator, also known as a prefix-sum operation.

The serial implementation of this operation is extremely simple. It is simply a loop that iterates once over the entire sequence, as shown in Figure A.8.6. At first glance, it might appear that this operation is inherently serial. However, it can actually be implemented in parallel efficiently. The key observation is that
because addition is associative, we are free to change the order in which elements are added together. For instance, we can imagine adding pairs of consecutive elements in parallel, and then adding these partial sums, and so on.

One simple scheme for doing this is from Hillis and Steele [1989]. An implementation of their algorithm in CUDA is shown in Figure A.8.7. It assumes that the input array x[] contains exactly one element per thread of the thread block. It performs log2 n iterations of a loop collecting partial sums together.

To understand the action of this loop, consider Figure A.8.8, which illustrates the simple case for n = 8 threads and elements. Each level of the diagram represents one step of the loop. The lines indicate the location from which the data is being fetched. For each element of the output (i.e., the final row of the diagram) we are building a summation tree over the input elements. The edges highlighted in blue show the form of this summation tree for the final element. The leaves of this tree are all the initial elements. Tracing back from any output element shows that it incorporates all input values up to and including itself.

FIGURE A.8.6 Template for serial plus-scan.

    template<class T>
    __host__ T plus_scan(T *x, unsigned int n)
    {
        for(unsigned int i = 1; i < n; ++i)
            x[i] = x[i-1] + x[i];
        return x[n-1];   // the scan is computed in place; return the total
    }

FIGURE A.8.7 CUDA template for parallel plus-scan.

    template<class T>
    __device__ T plus_scan(T *x)
    {
        unsigned int i = threadIdx.x;
        unsigned int n = blockDim.x;

        for(unsigned int offset = 1; offset < n; offset *= 2)
        {
            T t;

            if(i >= offset) t = x[i-offset];
            __syncthreads();

            if(i >= offset) x[i] = t + x[i];
            __syncthreads();
        }

        return x[i];
    }
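Because Figure A.8.7 is a __device__ template operating on one value per thread, it must be invoked from a kernel; a minimal wrapper (our sketch, with an assumed 512-element limit) stages one block's worth of data in shared memory first:

    // One thread block scans exactly blockDim.x elements in place.
    // Assumes blockDim.x <= 512 and that in[] holds blockDim.x values.
    __global__ void scan_block(int *in, int *out)
    {
        __shared__ int x[512];

        x[threadIdx.x] = in[threadIdx.x];
        __syncthreads();                  // all elements loaded before scanning

        out[threadIdx.x] = plus_scan(x);  // plus_scan from Figure A.8.7
    }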
While simple, this algorithm is not as efficient as we would like. Examining the serial implementation, we see that it performs O(n) additions. The parallel implementation, in contrast, performs O(n log n) additions. For this reason, it is not work efficient, since it does more work than the serial implementation to compute the same result. Fortunately, there are other techniques for implementing scan that are work efficient. Details on more efficient implementation techniques, and on the extension of this per-block procedure to multiblock arrays, are provided by Sengupta et al. [2007].

FIGURE A.8.8 Tree-based parallel scan data references. [Diagram: four rows of the eight elements x[0]..x[7]; successive rows apply x[i] += x[i-1], x[i] += x[i-2], and x[i] += x[i-4], with edges showing the summation tree built over the inputs.]

In some instances, we may only be interested in computing the sum of all elements in an array, rather than the sequence of all prefix sums returned by scan. This is the parallel reduction problem. We could simply use a scan algorithm to perform this computation, but reduction can generally be implemented more efficiently than scan.

Figure A.8.9 shows the code for computing a reduction using addition. In this example, each thread simply loads one element of the input sequence (i.e., it initially sums a subsequence of length 1). At the end of the reduction, we want thread 0 to hold the sum of all elements initially loaded by the threads of its block. The loop in this kernel implicitly builds a summation tree over the input elements, much like the scan algorithm above. At the end of this loop, thread 0 holds the sum of all the values loaded by this block.

If we want the final value of the location pointed to by total to contain the total of all elements in the array, we must combine the partial sums of all the blocks in the grid. One strategy would be to have each block write its partial sum into a second array and then launch the reduction kernel again, repeating the process until we had reduced the sequence to a single value. A more attractive alternative supported by the Tesla GPU architecture is to use the atomicAdd()
primitive, an efficient atomic read-modify-write primitive supported by the memory subsystem. This eliminates the need for additional temporary arrays and repeated kernel launches.

FIGURE A.8.9 CUDA implementation of plus-reduction.

    __global__
    void plus_reduce(int *input, unsigned int N, int *total)
    {
        unsigned int tid = threadIdx.x;
        unsigned int i   = blockIdx.x*blockDim.x + threadIdx.x;

        // Each block loads its elements into shared memory, padding
        // with 0 if N is not a multiple of blocksize.
        __shared__ int x[blocksize];

        x[tid] = (i < N) ? input[i] : 0;
        __syncthreads();

        // Every thread now holds 1 input value in x[].
        //
        // Build summation tree over elements.
        for(int s = blockDim.x/2; s > 0; s = s/2)
        {
            if(tid < s) x[tid] += x[tid + s];
            __syncthreads();
        }

        // Thread 0 now holds the sum of all input values
        // to this block. Have it add that sum to the running total.
        if( tid == 0 ) atomicAdd(total, x[tid]);
    }

Parallel reduction is an essential primitive for parallel programming and highlights the importance of per-block shared memory and low-cost barriers in making cooperation among threads efficient. This degree of data shuffling among threads would be prohibitively expensive if done in off-chip global memory.
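Because the kernel of Figure A.8.9 accumulates into *total with atomicAdd(), the host must zero that location before the launch. A minimal host-side sequence (our sketch; error checking omitted) is:

    int total = 0;
    int *d_input, *d_total;

    cudaMalloc((void**)&d_input, N * sizeof(int));
    cudaMalloc((void**)&d_total, sizeof(int));
    cudaMemcpy(d_input, input, N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemset(d_total, 0, sizeof(int));   // running total starts at zero

    plus_reduce<<<(N + blocksize - 1)/blocksize, blocksize>>>(d_input, N, d_total);
    cudaMemcpy(&total, d_total, sizeof(int), cudaMemcpyDeviceToHost);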
Radix Sort

One important application of scan primitives is in the implementation of sorting routines. The code in Figure A.8.10 implements a radix sort of integers across a single thread block. It accepts as input an array values containing one 32-bit integer for each thread of the block. For efficiency, this array should be stored in per-block shared memory, but this is not required for the sort to behave correctly.

This is a fairly simple implementation of radix sort. It assumes the availability of a procedure partition_by_bit() that will partition the given array such that all values with a 0 in the designated bit will come before all values with a 1 in that bit. To produce the correct output, this partitioning must be stable.

FIGURE A.8.10 CUDA code for radix sort.

    __device__ void radix_sort(unsigned int *values)
    {
        for(int bit = 0; bit < 32; ++bit)
        {
            partition_by_bit(values, bit);
            __syncthreads();
        }
    }

Implementing the partitioning procedure is a simple application of scan. Thread i holds the value x_i and must calculate the correct output index at which to write this value. To do so, it needs to calculate (1) the number of threads j < i for which the designated bit is 1 and (2) the total number of values for which the designated bit is 0. The CUDA code for partition_by_bit() is shown in Figure A.8.11.

FIGURE A.8.11 CUDA code to partition data on a bit-by-bit basis, as part of radix sort.

    __device__ void partition_by_bit(unsigned int *values,
                                     unsigned int bit)
    {
        unsigned int i    = threadIdx.x;
        unsigned int size = blockDim.x;
        unsigned int x_i  = values[i];
        unsigned int p_i  = (x_i >> bit) & 1;

        values[i] = p_i;
        __syncthreads();

        // Compute number of T bits up to and including p_i.
        // Record the total number of F bits as well.
        unsigned int T_before = plus_scan(values);
        unsigned int T_total  = values[size-1];
        unsigned int F_total  = size - T_total;
        __syncthreads();

        // Write every x_i to its proper place.
        if( p_i )
            values[T_before-1 + F_total] = x_i;
        else
            values[i - T_before] = x_i;
    }
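To see the index arithmetic at work, trace a small hypothetical case: four threads with values = [3, 1, 2, 0] and bit = 0. The bit values are p = [1, 1, 0, 0], the inclusive plus-scan gives T_before = [1, 2, 2, 2], so T_total = 2 and F_total = 2. Thread 2 writes 2 to index 2 − 2 = 0, thread 3 writes 0 to index 3 − 2 = 1, thread 0 writes 3 to index (1 − 1) + 2 = 2, and thread 1 writes 1 to index (2 − 1) + 2 = 3, yielding [2, 0, 3, 1]: every value with a 0 bit precedes every value with a 1 bit, and the original order is preserved within each group, so the partition is stable.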
A similar strategy can be applied for implementing a radix sort kernel that sorts an array of large length, rather than just a one-block array. The fundamental step remains the scan procedure, although when the computation is partitioned across multiple kernels, we must double-buffer the array of values rather than doing the partitioning in place. Details on performing radix sorts on large arrays efficiently are provided by Satish, Harris, and Garland [2008].

N-Body Applications on a GPU¹

Nyland, Harris, and Prins [2007] describe a simple yet useful computational kernel with excellent GPU performance—the all-pairs N-body algorithm. It is a time-consuming component of many scientific applications. N-body simulations calculate the evolution of a system of bodies in which each body continuously interacts with every other body. One example is an astrophysical simulation in which each body represents an individual star, and the bodies gravitationally attract each other. Other examples are protein folding, where N-body simulation is used to calculate electrostatic and van der Waals forces; turbulent fluid flow simulation; and global illumination in computer graphics.

¹ Adapted from Nyland, Harris, and Prins [2007], "Fast N-Body Simulation with CUDA," Chapter 31 of GPU Gems 3.

The all-pairs N-body algorithm calculates the total force on each body in the system by computing each pair-wise force in the system, summing for each body. Many scientists consider this method to be the most accurate, with the only loss of precision coming from the floating-point hardware operations. The drawback is its O(n²) computational complexity, which is far too large for systems with more than 10⁶ bodies. To overcome this high cost, several simplifications have been proposed to yield O(n log n) and O(n) algorithms; examples are the Barnes-Hut algorithm, the fast multipole method, and particle-mesh Ewald summation. All of the fast methods still rely on the all-pairs method as a kernel for accurate computation of short-range forces; thus it continues to be important.

N-Body Mathematics

For gravitational simulation, we calculate the body-body force using elementary physics. Between two bodies indexed by i and j, the 3D force vector is

\[
\mathbf{f}_{ij} = G \, \frac{m_i m_j}{\|\mathbf{r}_{ij}\|^2} \times \frac{\mathbf{r}_{ij}}{\|\mathbf{r}_{ij}\|}
\]

The force magnitude is calculated in the left term, while the direction is computed in the right (a unit vector pointing from one body to the other). Given a list of interacting bodies (an entire system or a subset), the calculation is simple: for all pairs of interactions, compute the force and sum for each body. Once the total forces are calculated, they are used to update each body's position and velocity, based on the previous position and velocity. The calculation of the forces has complexity O(n²), while the update is O(n).
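The pair-wise function invoked by the loops below is not listed in this excerpt. The following CUDA sketch implements the formula above; the signature is inferred from the calls in Figures A.8.12–A.8.14, G is assumed folded into the masses, and the small softening term eps2 (which avoids division by zero when i = j) is our assumption, a standard device in all-pairs codes.

    // Accumulate body j's gravitational acceleration on body i into acc.
    // bi and bj hold a position in .x/.y/.z and a mass in .w.
    __device__ float3 body_body_interaction(float3 acc, float4 bi, float4 bj)
    {
        const float eps2 = 1e-9f;          // softening factor (assumption)
        float3 r;
        r.x = bj.x - bi.x;
        r.y = bj.y - bi.y;
        r.z = bj.z - bi.z;

        float dist2    = r.x*r.x + r.y*r.y + r.z*r.z + eps2;
        float inv_dist = rsqrtf(dist2);                  // 1 / ||r||
        float s = bj.w * inv_dist * inv_dist * inv_dist; // m_j / ||r||^3

        acc.x += r.x * s;
        acc.y += r.y * s;
        acc.z += r.z * s;
        return acc;
    }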
The serial force-calculation code uses two nested for-loops iterating over pairs of bodies. The outer loop selects the body for which the total force is being calculated, and the inner loop iterates over all the bodies. The inner loop calls a function that computes the pair-wise force, then adds the force into a running sum.

FIGURE A.8.12 Serial code to compute all pair-wise forces on N bodies.

    void accel_on_all_bodies()
    {
        int i, j;

        for (i = 0; i < N; i++) {
            float3 acc = {0.0f, 0.0f, 0.0f};  // reset the sum for each body
            for (j = 0; j < N; j++) {
                acc = body_body_interaction(acc, body[i], body[j]);
            }
            accel[i] = acc;
        }
    }

To compute the forces in parallel, we assign one thread to each body, since the calculation of force on each body is independent of the calculation on all other bodies. Once all of the forces are computed, the positions and velocities of the bodies can be updated.

The code for the serial and parallel versions is shown in Figure A.8.12 and Figure A.8.13. The serial version has two nested for-loops. The conversion to CUDA, like many other examples, converts the serial outer loop to a per-thread kernel where each thread computes the total force on a single body. The CUDA kernel computes a global thread ID for each thread, replacing the iterator variable of the serial outer loop. Both kernels finish by storing the total acceleration in a global array used to compute the new position and velocity values in a subsequent step.

FIGURE A.8.13 CUDA thread code to compute the total force on a single body.

    __global__ void accel_on_one_body()
    {
        int i = threadIdx.x + blockDim.x * blockIdx.x;
        int j;
        float3 acc = {0.0f, 0.0f, 0.0f};

        for (j = 0; j < N; j++) {
            acc = body_body_interaction(acc, body[i], body[j]);
        }
        accel[i] = acc;
    }
The outer loop is replaced by a CUDA kernel grid that launches N threads, one for each body.

Optimization for GPU Execution

The CUDA code shown is functionally correct, but it is not efficient, as it ignores key architectural features. Better performance can be achieved with three main optimizations. First, shared memory can be used to avoid identical memory reads between threads. Second, using multiple threads per body improves performance for small values of N. Third, loop unrolling reduces loop overhead.

Using Shared Memory

Shared memory can hold a subset of body positions, much like a cache, eliminating redundant global memory requests between threads. We optimize the code shown above to have each of p threads in a thread block load one position into shared memory (for a total of p positions). Once all the threads have loaded a value into shared memory, ensured by __syncthreads(), each thread can then perform p interactions (using the data in shared memory). This is repeated N/p times to complete the force calculation for each body, which reduces the number of requests to memory by a factor of p (typically in the range 32–128).

The function called accel_on_one_body() requires a few changes to support this optimization. The modified code is shown in Figure A.8.14.

FIGURE A.8.14 CUDA code to compute the total force on each body, using shared memory to improve performance.

    __shared__ float4 shPosition[256];
    ...
    __global__ void accel_on_one_body()
    {
        int i = threadIdx.x + blockDim.x * blockIdx.x;
        int j, k;
        int p = blockDim.x;
        float3 acc = {0.0f, 0.0f, 0.0f};
        float4 myBody = body[i];

        for (j = 0; j < N; j += p) {   // Outer loop jumps by p each time
            shPosition[threadIdx.x] = body[j+threadIdx.x];
            __syncthreads();
            for (k = 0; k < p; k++) {  // Inner loop accesses p positions
                acc = body_body_interaction(acc, myBody, shPosition[k]);
            }
            __syncthreads();
        }
        accel[i] = acc;
    }
The loop that formerly iterated over all bodies now jumps by the block dimension p. Each iteration of the outer loop loads p successive positions into shared memory (one position per thread). The threads synchronize, and then p force calculations are computed by each thread. A second synchronization is required to ensure that new values are not loaded into shared memory before all threads have completed the force calculations with the current data.

Using shared memory reduces the memory bandwidth required to less than 10% of the total bandwidth that the GPU can sustain (using less than 5 GB/s). This optimization keeps the application busy performing computation rather than waiting on memory accesses, as it would have been without the use of shared memory. The performance for varying values of N is shown in Figure A.8.15.

FIGURE A.8.15 Performance measurements of the N-body application on a GeForce 8800 GTX and a GeForce 9600. The graph plots GFLOPS against the number of bodies (512 to 32,768) for one, two, and four threads per body on each GPU. The 8800 has 128 stream processors at 1.35 GHz, while the 9600 has 64 at 0.80 GHz (about 30% of the 8800). The peak performance is 242 GFLOPS. For a GPU with more processors, the problem needs to be bigger to achieve full performance (the 9600 peaks around 2048 bodies, while the 8800 doesn't reach its peak until 16,384 bodies). For small N, more than one thread per body can significantly improve performance, but eventually incurs a performance penalty as N grows.

Using Multiple Threads per Body

Figure A.8.15 shows performance degradation for problems with small values of N (N < 4096) on the GeForce 8800 GTX. Many research efforts that rely on N-body calculations focus on small N (for long simulation times), making it a target of our optimization efforts. Our presumption to explain the lower performance was that there was simply not enough work to keep the GPU busy when N is small. The solution is to allocate more threads per body. We change the thread-block dimensions from (p, 1, 1) to (p, q, 1), where q threads divide the work of a single body into equal parts. By allocating the additional threads within the same thread block, partial results can be stored in shared memory. When all the force calculations are
done, the q partial results can be collected and summed to compute the final result. Using two or four threads per body leads to large improvements for small N. As an example, the performance on the 8800 GTX jumps by 110% when N = 1024 (one thread achieves 90 GFLOPS, where four achieve 190 GFLOPS). Performance degrades slightly for large N, so we use this optimization only for N smaller than 4096. The performance increases are shown in Figure A.8.15 for a GPU with 128 processors and a smaller GPU with 64 processors clocked at two-thirds the speed.

Performance Comparison

The performance of the N-body code is shown in Figure A.8.15 and Figure A.8.16. In Figure A.8.15, the performance of high- and medium-performance GPUs is shown, along with the performance improvements achieved by using multiple threads per body. The performance on the faster GPU ranges from 90 to just under 250 GFLOPS. Figure A.8.16 shows nearly identical code (C++ versus CUDA) running on Intel Core2 CPUs. The CPU performance is about 1% of the GPU's, in the range of 0.2 to 2 GFLOPS, remaining nearly constant over the wide range of problem sizes.

FIGURE A.8.16 Performance measurements of the N-body code on a CPU. The graph shows single precision N-body performance (GFLOPS on the y-axis, for 512 to 32,768 bodies) using Intel Core2 CPUs, denoted by their CPU model number. Note the dramatic reduction in GFLOPS performance, demonstrating how much faster the GPU is compared to the CPU. The performance on the CPU is generally independent of problem size, except for an anomalously low performance when N = 16,384 on the X9775 CPU. The graph also shows the results of running the CUDA version of the code (using the CUDA-for-CPU compiler) on a single CPU core, where it outperforms the C++ code by 24%. As a programming language, CUDA exposes parallelism and locality that a compiler can exploit. The Intel CPUs are a 3.2 GHz Extreme X9775 (code named "Penryn"), a 2.66 GHz E8200 (code named "Wolfdale"), a desktop pre-Penryn CPU, and a 1.83 GHz T2400 (code named "Yonah"), a 2007 laptop CPU. The Penryn version of the Core 2 architecture is particularly interesting for N-body calculations with its 4-bit divider, allowing division and square root operations to execute four times faster than on previous Intel CPUs.
The graph also shows the results of compiling the CUDA version of the code for a CPU, where the performance improves by 24%. CUDA, as a programming language, exposes parallelism, allowing the compiler to make better use of the SSE vector unit on a single core. The CUDA version of the N-body code naturally maps to multicore CPUs as well (with grids of blocks), where it achieves nearly perfect scaling on an eight-core system with N = 4096 (ratios of 2.0, 3.97, and 7.94 on two, four, and eight cores, respectively).

Results

With a modest effort, we developed a computational kernel that improves GPU performance over multicore CPUs by a factor of up to 157. Execution time for the N-body code running on a recent CPU from Intel (Penryn X9775 at 3.2 GHz, single core) took more than 3 seconds per frame to run the same code that runs at a 44 Hz frame rate on a GeForce 8800 GPU. On pre-Penryn CPUs, the code requires 6–16 seconds, and on older Core2 and Pentium 4 processors, the time is about 25 seconds. We must divide the apparent increase in performance in half, as the CPU requires only half as many calculations to compute the same result (using the optimization that the forces on a pair of bodies are equal in strength and opposite in direction).

How can the GPU speed up the code by such a large amount? The answer requires inspecting architectural details. The pair-wise force calculation requires 20 floating-point operations, composed mostly of addition and multiplication instructions (some of which can be combined using a multiply-add instruction), but there are also division and square root instructions for vector normalization. Intel CPUs take many cycles for single precision division and square root instructions,² although this has improved in the latest Penryn CPU family with its faster 4-bit divider.³ Additionally, the limitations in register capacity lead to many MOV instructions in the x86 code (presumably to/from L1 cache). In contrast, the GeForce 8800 executes a reciprocal square-root thread instruction in four clocks; see Section A.6 for special function accuracy. It has a larger register file (per thread) and a shared memory that can be accessed as an instruction operand. Finally, the CUDA compiler emits 15 instructions for one iteration of the loop, compared with more than 40 instructions from a variety of x86 CPU compilers. Greater parallelism, faster execution of complex instructions, more register space, and an efficient compiler all combine to explain the dramatic performance improvement of the N-body code between the CPU and the GPU.

² The x86 SSE instructions reciprocal-square-root (RSQRT*) and reciprocal (RCP*) were not considered, as their accuracy is too low to be comparable.

³ Intel Corporation, Intel 64 and IA-32 Architectures Optimization Reference Manual, November 2007, Order Number 248966-016. Also available at www3.intel.com/design/processor/manuals/248966.pdf.
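As a sanity check on these rates (our arithmetic, not the original's): with 20 floating-point operations per pair-wise interaction, 16,384 bodies at 44 frames per second sustain

\[
20 \times 16{,}384^2 \times 44 \approx 2.4 \times 10^{11}\ \text{flops per second},
\]

or roughly 240 GFLOPS, consistent with the figures reported below.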
On a GeForce 8800, the all-pairs N-body algorithm delivers more than 240 GFLOPS of performance, compared to less than 2 GFLOPS on recent sequential processors. Compiling and executing the CUDA version of the code on a CPU demonstrates that the problem scales well to multicore CPUs, but is still significantly slower than a single GPU.

We coupled the GPU N-body simulation with a graphical display of the motion, and can interactively display 16K bodies interacting at 44 frames per second. This allows astrophysical and biophysical events to be displayed and navigated at interactive rates. Additionally, we can parameterize many settings, such as noise reduction, damping, and integration techniques, immediately displaying their effects on the dynamics of the system. This provides scientists with stunning visual imagery, boosting their insights on otherwise invisible systems (too large or small, too fast or too slow), allowing them to create better models of physical phenomena.

Figure A.8.17 shows a time-series display of an astrophysical simulation of 16K bodies, with each body acting as a galaxy.

FIGURE A.8.17 12 images captured during the evolution of an N-body system with 16,384 bodies.

The initial configuration is a
spherical shell of bodies rotating about the z-axis. One phenomenon of interest to astrophysicists is the clustering that occurs, along with the merging of galaxies over time. For the interested reader, the CUDA code for this application is available in the CUDA SDK from www.nvidia.com/CUDA.

A.9 Fallacies and Pitfalls

GPUs have evolved and changed so rapidly that many fallacies and pitfalls have arisen. We cover a few here.

Fallacy: GPUs are just SIMD vector multiprocessors. It is easy to draw the false conclusion that GPUs are simply SIMD vector multiprocessors. GPUs do have an SPMD-style programming model, in that a programmer can write a single program that is executed in multiple thread instances with multiple data. The execution of these threads is not purely SIMD or vector, however; it is single-instruction multiple-thread (SIMT), described in Section A.4. Each GPU thread has its own scalar registers, thread private memory, thread execution state, thread ID, independent execution and branch path, and effective program counter, and can address memory independently. Although a group of threads (e.g., a warp of 32 threads) executes more efficiently when the PCs for the threads are the same, this is not necessary. So, the multiprocessors are not purely SIMD. The thread execution model is MIMD with barrier synchronization and SIMT optimizations. Execution is more efficient if individual thread load/store memory accesses can be coalesced into block accesses, as well. However, this is not strictly necessary. In a purely SIMD vector architecture, memory/register accesses for different threads must be aligned in a regular vector pattern. A GPU has no such restriction for register or memory accesses; however, execution is more efficient if warps of threads access local blocks of data.

In a further departure from a pure SIMD model, an SIMT GPU can execute more than one warp of threads concurrently. In graphics applications, there may be multiple groups of vertex programs, pixel programs, and geometry programs running in the multiprocessor array concurrently. Computing programs may also execute different programs concurrently in different warps.

Fallacy: GPU performance cannot grow faster than Moore's law. Moore's law is simply a rate. It is not a "speed of light" limit for any other rate. Moore's law describes an expectation that over time, as semiconductor technology advances and transistors become smaller, the manufacturing cost per transistor will decline
exponentially. Put another way, given a constant manufacturing cost, the number of transistors will increase exponentially. Gordon Moore [1965] predicted that this progression would provide roughly two times the number of transistors for the same manufacturing cost every year, and later revised it to doubling every two years. Although Moore made the initial prediction in 1965 when there were just 50 components per integrated circuit, it has proved remarkably consistent. The reduction of transistor size has historically had other benefits, such as lower power per transistor and faster clock speeds at constant power.

This increasing bounty of transistors is used by chip architects to build processors, memory, and other components. For some time, CPU designers have used the extra transistors to increase processor performance at a rate similar to Moore's law, so much so that many people think that processor performance growth of two times every 18–24 months is Moore's law. In fact, it is not. Microprocessor designers spend some of the new transistors on processor cores, improving the architecture and design, and pipelining for more clock speed. The rest of the new transistors are used for providing more cache, to make memory access faster. In contrast, GPU designers use almost none of the new transistors to provide more cache; most of the transistors are used for improving the processor cores and adding more processor cores.

GPUs get faster by four mechanisms. First, GPU designers reap the Moore's law bounty directly by applying exponentially more transistors to building more parallel, and thus faster, processors. Second, GPU designers can improve on the architecture over time, increasing the efficiency of the processing. Third, Moore's law assumes constant cost, so the Moore's law rate can clearly be exceeded by spending more for larger chips with more transistors. Fourth, GPU memory systems have increased their effective bandwidth at a pace nearly comparable to the processing rate, by using faster memories, wider memories, data compression, and better caches. The combination of these four approaches has historically allowed GPU performance to double regularly, roughly every 12 to 18 months. This rate, exceeding the rate of Moore's law, has been demonstrated on graphics applications for approximately ten years and shows no sign of significant slowdown. The most challenging rate limiter appears to be the memory system, but competitive innovation is advancing that rapidly too.

Fallacy: GPUs only render 3D graphics; they can't do general computation. GPUs are built to render 3D graphics as well as 2D graphics and video. To meet the demands of graphics software developers as expressed in the interfaces and performance/feature requirements of the graphics APIs, GPUs have become massively parallel programmable floating-point processors. In the graphics domain, these processors are programmed through the graphics APIs and with arcane graphics programming languages (GLSL, Cg, and HLSL, in OpenGL and Direct3D).
However, there is nothing preventing GPU architects from exposing the parallel processor cores to programmers without the graphics API or the arcane graphics languages. In fact, the Tesla architecture family of GPUs exposes the processors through a software environment known as CUDA, which allows programmers to develop general application programs using the C language and soon C++. GPUs are Turing-complete processors, so they can run any program that a CPU can run, although perhaps less well. And perhaps faster.

Fallacy: GPUs cannot run double precision floating-point programs fast. In the past, GPUs could not run double precision floating-point programs at all, except through software emulation. And that's not very fast at all. GPUs have made the progression from indexed arithmetic representation (lookup tables for colors) to 8-bit integers per color component, to fixed-point arithmetic, to single precision floating-point, and recently added double precision. Modern GPUs perform virtually all calculations in single precision IEEE floating-point arithmetic, and are beginning to use double precision in addition. For a small additional cost, a GPU can support double precision floating-point as well as single precision floating-point. Today, double precision runs more slowly than single precision, about five to ten times slower. For incremental additional cost, double precision performance can be increased relative to single precision in stages, as more applications demand it.

Fallacy: GPUs don't do floating-point correctly. GPUs, at least in the Tesla architecture family of processors, perform single precision floating-point processing at a level prescribed by the IEEE 754 floating-point standard. So, in terms of accuracy, GPUs are the equal of any other IEEE 754-compliant processors. Today, GPUs do not implement some of the specific features described in the standard, such as handling denormalized numbers and providing precise floating-point exceptions. However, the recently introduced Tesla T10P GPU provides full IEEE rounding, fused-multiply-add, and denormalized number support for double precision.

Pitfall: Just use more threads to cover longer memory latencies. CPU cores are typically designed to run a single thread at full speed. To run at full speed, every instruction and its data need to be available when it is time for that instruction to run. If the next instruction is not ready or the data required for that instruction is not available, the instruction cannot run and the processor stalls. External memory is distant from the processor, so it takes many cycles of wasted execution to fetch data from memory. Consequently, CPUs require large local caches to keep running
without stalling. Memory latency is long, so it is avoided by striving to run in the cache. At some point, program working set demands may be larger than any cache. Some CPUs have used multithreading to tolerate latency, but the number of threads per core has generally been limited to a small number.

The GPU strategy is different. GPU cores are designed to run many threads concurrently, but only one instruction from any thread at a time. Another way to say this is that a GPU runs each thread slowly, but in aggregate runs the threads efficiently. Each thread can tolerate some amount of memory latency, because other threads can run. The downside of this is that multiple threads—many threads—are required to cover the memory latency. In addition, if memory accesses are scattered or not correlated among threads, the memory system will get progressively slower in responding to each individual request. Eventually, even the multiple threads will not be able to cover the latency. So, the pitfall is that for the "just use more threads" strategy to work for covering latency, you have to have enough threads, and the threads have to be well-behaved in terms of locality of memory access.

Fallacy: O(n) algorithms are difficult to speed up. No matter how fast the GPU is at processing data, the steps of transferring data to and from the device may limit the performance of algorithms with O(n) complexity (with a small amount of work per datum). The highest transfer rate over the PCIe bus is approximately 48 GB/second when DMA transfers are used, and slightly less for non-DMA transfers. The CPU, in contrast, has typical access speeds of 8–12 GB/second to system memory. Example problems, such as vector addition, will be limited by the transfer of the inputs to the GPU and the returning of output from the computation.

There are three ways to overcome the cost of transferring data. First, try to leave the data on the GPU for as long as possible, instead of moving the data back and forth for different steps of a complicated algorithm. CUDA deliberately leaves data alone in the GPU between launches to support this. Second, the GPU supports the concurrent operations of copy-in, copy-out, and computation, so data can be streamed in and out of the device while it is computing; a sketch of this technique follows below. This model is useful for any data stream that can be processed as it arrives. Examples are video processing, network routing, data compression/decompression, and even simpler computations such as large vector mathematics. The third suggestion is to use the CPU and GPU together, improving performance by assigning a subset of the work to each, treating the system as a heterogeneous computing platform. The CUDA programming model supports allocation of work to one or more GPUs along with continued use of the CPU without the use of threads (via asynchronous GPU functions), so it is relatively simple to keep all GPUs and a CPU working concurrently to solve problems even faster.
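As a sketch of the second technique, the loop below streams a large array through the GPU in chunks, double-buffering on the device so that the copy in one stream can overlap the kernel in the other. The kernel process, the chunk size, and the buffer layout are assumptions of this illustration (chunk is assumed to be a multiple of 256, and nchunks * chunk elements fit in the host buffers); the host buffers must be page-locked (allocated with cudaMallocHost) for the asynchronous copies to overlap with computation.

    __global__ void process(const float *in, float *out);  // hypothetical kernel

    void stream_through_gpu(float *h_in, float *h_out,  // page-locked host buffers
                            float *d_in, float *d_out,  // device room for 2 chunks
                            int nchunks, int chunk)
    {
        cudaStream_t s[2];
        cudaStreamCreate(&s[0]);
        cudaStreamCreate(&s[1]);
        for (int i = 0; i < nchunks; i++) {
            int b = i % 2;                              // double-buffer index
            size_t bytes = chunk * sizeof(float);
            cudaMemcpyAsync(d_in + (size_t)b * chunk, h_in + (size_t)i * chunk,
                            bytes, cudaMemcpyHostToDevice, s[b]);
            process<<<chunk / 256, 256, 0, s[b]>>>(d_in + (size_t)b * chunk,
                                                   d_out + (size_t)b * chunk);
            cudaMemcpyAsync(h_out + (size_t)i * chunk, d_out + (size_t)b * chunk,
                            bytes, cudaMemcpyDeviceToHost, s[b]);
        }
        cudaStreamSynchronize(s[0]);                    // drain both pipelines
        cudaStreamSynchronize(s[1]);
        cudaStreamDestroy(s[0]);
        cudaStreamDestroy(s[1]);
    }

Within one stream the copy-in, kernel, and copy-out are ordered; across the two streams they are free to overlap, which is exactly the behavior the fallacy discussion relies on.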
A.10 Concluding Remarks

GPUs are massively parallel processors and have become widely used, not only for 3D graphics, but also for many other applications. This wide application was made possible by the evolution of graphics devices into programmable processors. The graphics application programming model for GPUs is usually an API such as DirectX™ or OpenGL™. For more general-purpose computing, the CUDA programming model uses an SPMD (single-program multiple data) style, executing a program with many parallel threads.

GPU parallelism will continue to scale with Moore's law, mainly by increasing the number of processors. Only the parallel programming models that can readily scale to hundreds of processor cores and thousands of threads will be successful in supporting manycore GPUs and CPUs. Also, only those applications that have many largely independent parallel tasks will be accelerated by massively parallel manycore architectures.

Parallel programming models for GPUs are becoming more flexible, for both graphics and parallel computing. For example, CUDA is evolving rapidly in the direction of full C/C++ functionality. Graphics APIs and programming models will likely adapt parallel computing capabilities and models from CUDA. Its SPMD-style threading model is scalable, and is a convenient, succinct, and easily learned model for expressing large amounts of parallelism.

Driven by these changes in the programming models, GPU architecture is in turn becoming more flexible and more programmable. GPU fixed-function units are becoming accessible from general programs, along the lines of how CUDA programs already use texture intrinsic functions to perform texture lookups using the GPU texture instruction and texture unit. GPU architecture will continue to adapt to the usage patterns of both graphics and other application programmers. GPUs will continue to expand to include more processing power through additional processor cores, as well as increasing the thread and memory bandwidth available for programs. In addition, the programming models must evolve to include programming heterogeneous manycore systems including both GPUs and CPUs.

Acknowledgments

This appendix is the work of several authors at NVIDIA. We gratefully acknowledge the significant contributions of Michael Garland, John Montrym, Doug Voorhies, Lars Nyland, Erik Lindholm, Paulius Micikevicius, Massimiliano Fatica, Stuart Oberman, and Vasily Volkov.
A.11 Historical Perspective and Further Reading

This section, which appears on the CD, surveys the history of programmable real-time graphics processing units (GPUs) from the early 1980s through today, as they declined in price by two orders of magnitude and increased in performance by two orders of magnitude. It traces the evolution of the GPU from fixed-function pipelines to programmable graphics processors, with perspectives on GPU computing, unified graphics and computing processors, visual computing, and scalable GPUs.
APPENDIX B

Assemblers, Linkers, and the SPIM Simulator

James R. Larus
Microsoft Research, Microsoft

Fear of serious injury cannot alone justify suppression of free speech and assembly.
Louis Brandeis, Whitney v. California, 1927
B.1 Introduction B-3
B.2 Assemblers B-10
B.3 Linkers B-18
B.4 Loading B-19
B.5 Memory Usage B-20
B.6 Procedure Call Convention B-22
B.7 Exceptions and Interrupts B-33
B.8 Input and Output B-38
B.9 SPIM B-40
B.10 MIPS R2000 Assembly Language B-45
B.11 Concluding Remarks B-81
B.12 Exercises B-82

B.1 Introduction

Encoding instructions as binary numbers is natural and efficient for computers. Humans, however, have a great deal of difficulty understanding and manipulating these numbers. People read and write symbols (words) much better than long sequences of digits. Chapter 2 showed that we need not choose between numbers and words, because computer instructions can be represented in many ways. Humans can write and read symbols, and computers can execute the equivalent binary numbers.

This appendix describes the process by which a human-readable program is translated into a form that a computer can execute, provides a few hints about writing assembly programs, and explains how to run these programs on SPIM, a simulator that executes MIPS programs. UNIX, Windows, and Mac OS X versions of the SPIM simulator are available on the CD.

Assembly language is the symbolic representation of a computer's binary encoding—the machine language. Assembly language is more readable than machine language, because it uses symbols instead of bits. The symbols in assembly language name commonly occurring bit patterns, such as opcodes and register specifiers, so people can read and remember them.

machine language: Binary representation used for communication within a computer system.

In addition, assembly language
permits programmers to use labels to identify and name particular memory words that hold instructions or data.

FIGURE B.1.1 The process that produces an executable file. An assembler translates a file of assembly language into an object file, which is linked with other files and libraries into an executable file. [The figure shows several source files, each translated by an assembler into an object file; the linker combines the object files with a program library to produce the executable file.]

A tool called an assembler translates assembly language into binary instructions. Assemblers provide a friendlier representation than a computer's 0s and 1s, which simplifies writing and reading programs. Symbolic names for operations and locations are one facet of this representation. Another facet is programming facilities that increase a program's clarity. For example, macros, discussed in Section B.2, enable a programmer to extend the assembly language by defining new operations.

An assembler reads a single assembly language source file and produces an object file containing machine instructions and bookkeeping information that helps combine several object files into a program. Figure B.1.1 illustrates how a program is built. Most programs consist of several files—also called modules—that are written, compiled, and assembled independently. A program may also use prewritten routines supplied in a program library. A module typically contains references to subroutines and data defined in other modules and in libraries. The code in a module cannot be executed when it contains unresolved references to labels in other object files or libraries. Another tool, called a linker, combines a collection of object and library files into an executable file, which a computer can run.

assembler: A program that translates a symbolic version of instructions into the binary version.

macro: A pattern-matching and replacement facility that provides a simple mechanism to name a frequently used sequence of instructions.

unresolved reference: A reference that requires more information from an outside source to be complete.

linker: Also called link editor. A systems program that combines independently assembled machine language programs and resolves all undefined labels into an executable file.

To see the advantage of assembly language, consider the following sequence of figures, all of which contain a short subroutine that computes and prints the sum of the squares of integers from 0 to 100. Figure B.1.2 shows the machine language that a MIPS computer executes. With considerable effort, you could use the opcode and instruction format tables in Chapter 2 to translate the instructions into a symbolic program similar to that shown in Figure B.1.3. This form of the routine is much
easier to read, because operations and operands are written with symbols rather than with bit patterns. However, this assembly language is still difficult to follow, because memory locations are named by their address rather than by a symbolic label.

Figure B.1.4 shows assembly language that labels memory addresses with mnemonic names. Most programmers prefer to read and write this form. Names that begin with a period, for example .data and .globl, are assembler directives that tell the assembler how to translate a program but do not produce machine instructions. Names followed by a colon, such as str: or main:, are labels that name the next memory location. This program is as readable as most assembly language programs (except for a glaring lack of comments), but it is still difficult to follow, because many simple operations are required to accomplish simple tasks and because assembly language's lack of control flow constructs provides few hints about the program's operation.

assembler directive: An operation that tells the assembler how to translate a program but does not produce machine instructions; always begins with a period.

By contrast, the C routine in Figure B.1.5 is both shorter and clearer, since variables have mnemonic names and the loop is explicit rather than constructed with branches. In fact, the C routine is the only one that we wrote. The other forms of the program were produced by a C compiler and assembler.

In general, assembly language plays two roles (see Figure B.1.6). The first role is the output language of compilers.

    00100111101111011111111111100000
    10101111101111110000000000010100
    10101111101001000000000000100000
    10101111101001010000000000100100
    10101111101000000000000000011000
    10101111101000000000000000011100
    10001111101011100000000000011100
    10001111101110000000000000011000
    00000001110011100000000000011001
    00100101110010000000000000000001
    00101001000000010000000001100101
    10101111101010000000000000011100
    00000000000000000111100000010010
    00000011000011111100100000100001
    00010100001000001111111111110111
    10101111101110010000000000011000
    00111100000001000001000000000000
    10001111101001010000000000011000
    00001100000100000000000011101100
    00100100100001000000010000110000
    10001111101111110000000000010100
    00100111101111010000000000100000
    00000011111000000000000000001000
    00000000000000000001000000100001

FIGURE B.1.2 MIPS machine language code for a routine to compute and print the sum of the squares of integers between 0 and 100.

A compiler translates a program written in a
high-level language (such as C or Pascal) into an equivalent program in machine or assembly language. The high-level language is called the source language, and the compiler's output is its target language.

source language: The high-level language in which a program is originally written.

Assembly language's other role is as a language in which to write programs. This role used to be the dominant one. Today, however, because of larger main memories and better compilers, most programmers write in a high-level language and rarely, if ever, see the instructions that a computer executes. Nevertheless, assembly language is still important to write programs in which speed or size is critical or to exploit hardware features that have no analogues in high-level languages.

Although this appendix focuses on MIPS assembly language, assembly programming on most other machines is very similar. The additional instructions and addressing modes in CISC machines, such as the VAX, can make assembly programs shorter but do not change the process of assembling a program or provide assembly language with the advantages of high-level languages, such as type-checking and structured control flow.

    addiu $29, $29, -32
    sw    $31, 20($29)
    sw    $4, 32($29)
    sw    $5, 36($29)
    sw    $0, 24($29)
    sw    $0, 28($29)
    lw    $14, 28($29)
    lw    $24, 24($29)
    multu $14, $14
    addiu $8, $14, 1
    slti  $1, $8, 101
    sw    $8, 28($29)
    mflo  $15
    addu  $25, $24, $15
    bne   $1, $0, -9
    sw    $25, 24($29)
    lui   $4, 4096
    lw    $5, 24($29)
    jal   1048812
    addiu $4, $4, 1072
    lw    $31, 20($29)
    addiu $29, $29, 32
    jr    $31
    move  $2, $0

FIGURE B.1.3 The same routine written in assembly language. However, the code for the routine does not label registers or memory locations nor include comments.
            .text
            .align 2
            .globl main
    main:
            subu $sp, $sp, 32
            sw   $ra, 20($sp)
            sd   $a0, 32($sp)
            sw   $0, 24($sp)
            sw   $0, 28($sp)
    loop:
            lw   $t6, 28($sp)
            mul  $t7, $t6, $t6
            lw   $t8, 24($sp)
            addu $t9, $t8, $t7
            sw   $t9, 24($sp)
            addu $t0, $t6, 1
            sw   $t0, 28($sp)
            ble  $t0, 100, loop
            la   $a0, str
            lw   $a1, 24($sp)
            jal  printf
            move $v0, $0
            lw   $ra, 20($sp)
            addu $sp, $sp, 32
            jr   $ra
            .data
            .align 0
    str:
            .asciiz "The sum from 0 .. 100 is %d\n"

FIGURE B.1.4 The same routine written in assembly language with labels, but no comments. The commands that start with periods are assembler directives (see pages B-47–49). .text indicates that succeeding lines contain instructions. .data indicates that they contain data. .align n indicates that the items on the succeeding lines should be aligned on a 2^n-byte boundary. Hence, .align 2 means the next item should be on a word boundary. .globl main declares that main is a global symbol that should be visible to code stored in other files. Finally, .asciiz stores a null-terminated string in memory.

When to Use Assembly Language

The primary reason to program in assembly language, as opposed to an available high-level language, is because the speed or size of a program is critically important. For example, consider a computer that controls a piece of machinery, such as a car's brakes. A computer that is incorporated in another device, such as a car, is called an embedded computer. This type of computer needs to respond rapidly and predictably to events in the outside world. Because a compiler introduces
uncertainty about the time cost of operations, programmers may find it difficult to ensure that a high-level language program responds within a definite time interval—say, 1 millisecond after a sensor detects that a tire is skidding. An assembly language programmer, on the other hand, has tight control over which instructions execute. In addition, in embedded applications, reducing a program's size, so that it fits in fewer memory chips, reduces the cost of the embedded computer.

A hybrid approach, in which most of a program is written in a high-level language and time-critical sections are written in assembly language, builds on the strengths of both languages. Programs typically spend most of their time executing a small fraction of the program's source code. This observation is just the principle of locality that underlies caches (see Section 5.1 in Chapter 5).

Program profiling measures where a program spends its time and can find the time-critical parts of a program. In many cases, this portion of the program can be made faster with better data structures or algorithms. Sometimes, however, significant performance improvements only come from recoding a critical portion of a program in assembly language.

    #include <stdio.h>

    int main (int argc, char *argv[])
    {
        int i;
        int sum = 0;

        for (i = 0; i <= 100; i = i + 1)
            sum = sum + i * i;
        printf ("The sum from 0 .. 100 is %d\n", sum);
    }

FIGURE B.1.5 The routine written in the C programming language.

FIGURE B.1.6 Assembly language either is written by a programmer or is the output of a compiler. [The figure shows a high-level language program passing through a compiler to produce an assembly language program, which—like programs written directly in assembly—passes through the assembler and linker to produce a program for the computer.]
This improvement is not necessarily an indication that the high-level language's compiler has failed. Compilers typically are better than programmers at producing uniformly high-quality machine code across an entire program. Programmers, however, understand a program's algorithms and behavior at a deeper level than a compiler and can expend considerable effort and ingenuity improving small sections of the program. In particular, programmers often consider several procedures simultaneously while writing their code. Compilers typically compile each procedure in isolation and must follow strict conventions governing the use of registers at procedure boundaries. By retaining commonly used values in registers, even across procedure boundaries, programmers can make a program run faster.

Another major advantage of assembly language is the ability to exploit specialized instructions—for example, string copy or pattern-matching instructions. Compilers, in most cases, cannot determine that a program loop can be replaced by a single instruction. However, the programmer who wrote the loop can replace it easily with a single instruction. Currently, a programmer's advantage over a compiler has become difficult to maintain as compilation techniques improve and machines' pipelines increase in complexity (Chapter 4).

The final reason to use assembly language is that no high-level language is available on a particular computer. Many older or specialized computers do not have a compiler, so a programmer's only alternative is assembly language.

Drawbacks of Assembly Language

Assembly language has many disadvantages that strongly argue against its widespread use. Perhaps its major disadvantage is that programs written in assembly language are inherently machine-specific and must be totally rewritten to run on another computer architecture. The rapid evolution of computers discussed in Chapter 1 means that architectures become obsolete. An assembly language program remains tightly bound to its original architecture, even after the computer is eclipsed by new, faster, and more cost-effective machines.

Another disadvantage is that assembly language programs are longer than the equivalent programs written in a high-level language. For example, the C program in Figure B.1.5 is 11 lines long, while the assembly program in Figure B.1.4 is 31 lines long. In more complex programs, the ratio of assembly to high-level language (its expansion factor) can be much larger than the factor of three in this example. Unfortunately, empirical studies have shown that programmers write roughly the same number of lines of code per day in assembly as in high-level languages. This means that programmers are roughly x times more productive in a high-level language, where x is the assembly language expansion factor.
To compound the problem, longer programs are more difficult to read and understand, and they contain more bugs. Assembly language exacerbates the problem because of its complete lack of structure. Common programming idioms, such as if-then statements and loops, must be built from branches and jumps. The resulting programs are hard to read, because the reader must reconstruct every higher-level construct from its pieces and each instance of a statement may be slightly different. For example, look at Figure B.1.4 and answer these questions: What type of loop is used? What are its lower and upper bounds?

Elaboration: Compilers can produce machine language directly instead of relying on an assembler. These compilers typically execute much faster than those that invoke an assembler as part of compilation. However, a compiler that generates machine language must perform many tasks that an assembler normally handles, such as resolving addresses and encoding instructions as binary numbers. The tradeoff is between compilation speed and compiler simplicity.

Elaboration: Despite these considerations, some embedded applications are written in a high-level language. Many of these applications are large and complex programs that must be extremely reliable. Assembly language programs are longer and more difficult to write and read than high-level language programs. This greatly increases the cost of writing an assembly language program and makes it extremely difficult to verify the correctness of this type of program. In fact, these considerations led the Department of Defense, which pays for many complex embedded systems, to develop Ada, a new high-level language for writing embedded systems.

B.2 Assemblers

An assembler translates a file of assembly language statements into a file of binary machine instructions and binary data. The translation process has two major parts. The first step is to find memory locations with labels so that the relationship between symbolic names and addresses is known when instructions are translated. The second step is to translate each assembly statement by combining the numeric equivalents of opcodes, register specifiers, and labels into a legal instruction.

As shown in Figure B.1.1, the assembler produces an output file, called an object file, which contains the machine instructions, data, and bookkeeping information. An object file typically cannot be executed, because it references procedures or data in other files.

external label: Also called global label. A label referring to an object that can be referenced from files other than the one in which it is defined.

A label is external (also called global) if the labeled object can
be referenced from files other than the one in which it is defined. A label is local if the object can be used only within the file in which it is defined. In most assemblers, labels are local by default and must be explicitly declared global. Subroutines and global variables require external labels since they are referenced from many files in a program. Local labels hide names that should not be visible to other modules—for example, static functions in C, which can only be called by other functions in the same file. In addition, compiler-generated names—for example, a name for the instruction at the beginning of a loop—are local so that the compiler need not produce unique names in every file.

local label: A label referring to an object that can be used only within the file in which it is defined.

Local and Global Labels

EXAMPLE: Consider the program in Figure B.1.4. The subroutine has an external (global) label main. It also contains two local labels—loop and str—that are only visible within this assembly language file. Finally, the routine also contains an unresolved reference to an external label printf, which is the library routine that prints values. Which labels in Figure B.1.4 could be referenced from another file?

ANSWER: Only global labels are visible outside a file, so the only label that could be referenced from another file is main.

Since the assembler processes each file in a program individually and in isolation, it only knows the addresses of local labels. The assembler depends on another tool, the linker, to combine a collection of object files and libraries into an executable file by resolving external labels. The assembler assists the linker by providing lists of labels and unresolved references.

However, even local labels present an interesting challenge to an assembler. Unlike names in most high-level languages, assembly labels may be used before they are defined. In the example in Figure B.1.4, the label str is used by the la instruction before it is defined. The possibility of a forward reference, like this one, forces an assembler to translate a program in two steps: first find all labels and then produce instructions. In the example, when the assembler sees the la instruction, it does not know where the word labeled str is located or even whether str labels an instruction or datum.

forward reference: A label that is used before it is defined.
An assembler's first pass reads each line of an assembly file and breaks it into its component pieces. These pieces, which are called lexemes, are individual words, numbers, and punctuation characters. For example, the line

    ble $t0, 100, loop

contains six lexemes: the opcode ble, the register specifier $t0, a comma, the number 100, a comma, and the symbol loop.

If a line begins with a label, the assembler records in its symbol table the name of the label and the address of the memory word that the instruction occupies. The assembler then calculates how many words of memory the instruction on the current line will occupy. By keeping track of the instructions' sizes, the assembler can determine where the next instruction goes. To compute the size of a variable-length instruction, like those on the VAX, an assembler has to examine it in detail. However, fixed-length instructions, like those on MIPS, require only a cursory examination. The assembler performs a similar calculation to compute the space required for data statements. When the assembler reaches the end of an assembly file, the symbol table records the location of each label defined in the file.

symbol table: A table that matches names of labels to the addresses of the memory words that instructions occupy.

The assembler uses the information in the symbol table during a second pass over the file, which actually produces machine code. The assembler again examines each line in the file. If the line contains an instruction, the assembler combines the binary representations of its opcode and operands (register specifiers or memory address) into a legal instruction. The process is similar to the one used in Section 2.5 in Chapter 2. Instructions and data words that reference an external symbol defined in another file cannot be completely assembled (they are unresolved), since the symbol's address is not in the symbol table. An assembler does not complain about unresolved references, since the corresponding label is likely to be defined in another file.

The BIG Picture: Assembly language is a programming language. Its principal difference from high-level languages such as BASIC, Java, and C is that assembly language provides only a few, simple types of data and control flow. Assembly language programs do not specify the type of value held in a variable. Instead, a programmer must apply the appropriate operations (e.g., integer or floating-point addition) to a value. In addition, in assembly language, programs must implement all control flow with go tos. Both factors make assembly language programming for any machine—MIPS or x86—more difficult and error-prone than writing in a high-level language.
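The two passes can be pictured with a toy symbol table in C. This is not from the book's toolchain, just a sketch of the bookkeeping: pass 1 calls define_label as it scans, advancing the location counter by 4 bytes per fixed-length MIPS instruction; pass 2 calls lookup while encoding instructions.

    #include <string.h>

    #define MAX_SYMS 1024

    struct symbol { char name[32]; unsigned addr; };
    static struct symbol symtab[MAX_SYMS];
    static int nsyms = 0;

    /* Pass 1: record a label and the address of the word it names. */
    void define_label(const char *name, unsigned addr)
    {
        strncpy(symtab[nsyms].name, name, sizeof symtab[nsyms].name - 1);
        symtab[nsyms].addr = addr;
        nsyms++;
    }

    /* Pass 2: resolve a label; returns 0 for an unresolved (external)
       reference, which is recorded and left for the linker to fix up. */
    int lookup(const char *name, unsigned *addr)
    {
        for (int i = 0; i < nsyms; i++)
            if (strcmp(symtab[i].name, name) == 0) {
                *addr = symtab[i].addr;
                return 1;
            }
        return 0;
    }

A forward reference such as the la to str works under this scheme precisely because lookup is never called until pass 1 has already seen the whole file.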
Elaboration: If an assembler's speed is important, this two-step process can be done in one pass over the assembly file with a technique known as backpatching. In its pass over the file, the assembler builds a (possibly incomplete) binary representation of every instruction. If the instruction references a label that has not yet been defined, the assembler records the label and instruction in a table. When a label is defined, the assembler consults this table to find all instructions that contain a forward reference to the label. The assembler goes back and corrects their binary representation to incorporate the address of the label. Backpatching speeds assembly because the assembler only reads its input once. However, it requires an assembler to hold the entire binary representation of a program in memory so instructions can be backpatched. This requirement can limit the size of programs that can be assembled. The process is complicated by machines with several types of branches that span different ranges of instructions. When the assembler first sees an unresolved label in a branch instruction, it must either use the largest possible branch or risk having to go back and readjust many instructions to make room for a larger branch.

backpatching: A method for translating from assembly language to machine instructions in which the assembler builds a (possibly incomplete) binary representation of every instruction in one pass over a program and then returns to fill in previously undefined labels.

Object File Format

Assemblers produce object files. An object file on UNIX contains six distinct sections (see Figure B.2.1):

■ The object file header describes the size and position of the other pieces of the file.
■ The text segment contains the machine language code for routines in the source file. These routines may be unexecutable because of unresolved references.
■ The data segment contains a binary representation of the data in the source file. The data also may be incomplete because of unresolved references to labels in other files.
■ The relocation information identifies instructions and data words that depend on absolute addresses. These references must change if portions of the program are moved in memory.
■ The symbol table associates addresses with external labels in the source file and lists unresolved references.
■ The debugging information contains a concise description of the way the program was compiled, so a debugger can find which instruction addresses correspond to lines in a source file and print the data structures in readable form.

The assembler produces an object file that contains a binary representation of the program and data and additional information to help link pieces of a program.

text segment: The segment of a UNIX object file that contains the machine language code for routines in the source file.

data segment: The segment of a UNIX object or executable file that contains a binary representation of the initialized data used by the program.

relocation information: The segment of a UNIX object file that identifies instructions and data words that depend on absolute addresses.

absolute address: A variable's or routine's actual address in memory.
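A hypothetical C rendering of the header makes the bookkeeping concrete: it simply records the size of each of the other five sections so that a linker can find them. Real UNIX formats (a.out, COFF, ELF) differ in detail; this struct is only an illustration.

    struct obj_header {
        unsigned magic;         /* identifies the file as an object file */
        unsigned text_size;     /* bytes of machine code                 */
        unsigned data_size;     /* bytes of initialized data             */
        unsigned reloc_size;    /* bytes of relocation entries           */
        unsigned symtab_size;   /* bytes of symbol table entries         */
        unsigned debug_size;    /* bytes of debugging information        */
    };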
This relocation information is necessary because the assembler does not know which memory locations a procedure or piece of data will occupy after it is linked with the rest of the program. Procedures and data from a file are stored in a contiguous piece of memory, but the assembler does not know where this memory will be located. The assembler also passes some symbol table entries to the linker. In particular, the assembler must record which external symbols are defined in a file and what unresolved references occur in a file.

FIGURE B.2.1 Object file. A UNIX assembler produces an object file with six distinct sections: the object file header, text segment, data segment, relocation information, symbol table, and debugging information.

Elaboration: For convenience, assemblers assume each file starts at the same address (for example, location 0) with the expectation that the linker will relocate the code and data when they are assigned locations in memory. The assembler produces relocation information, which contains an entry describing each instruction or data word in the file that references an absolute address. On MIPS, only the subroutine call, load, and store instructions reference absolute addresses. Instructions that use PC-relative addressing, such as branches, need not be relocated.

Additional Facilities

Assemblers provide a variety of convenience features that help make assembler programs shorter and easier to write, but do not fundamentally change assembly language. For example, data layout directives allow a programmer to describe data in a more concise and natural manner than its binary representation. In Figure B.1.4, the directive

    .asciiz "The sum from 0 .. 100 is %d\n"

stores characters from the string in memory. Contrast this line with the alternative of writing each character as its ASCII value (Figure 2.15 in Chapter 2 describes the ASCII encoding for characters):

    .byte 84, 104, 101, 32, 115, 117, 109, 32
    .byte 102, 114, 111, 109, 32, 48, 32, 46
    .byte 46, 32, 49, 48, 48, 32, 105, 115
    .byte 32, 37, 100, 10, 0

The .asciiz directive is easier to read because it represents characters as letters, not binary numbers. An assembler can translate characters to their binary representation much faster and more accurately than a human can.
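The translation is mechanical, as the toy C program below suggests; it prints the .byte lines for a string, eight values per line, ending with the terminating 0 that .asciiz appends. It is a sketch of what an assembler does internally, not part of any real assembler.

    #include <stdio.h>

    /* Emit .byte directives for a null-terminated string. */
    void emit_asciiz(const char *s)
    {
        int col = 0;
        do {
            printf(col == 0 ? ".byte %d" : ", %d", *s);
            col = (col + 1) % 8;           /* eight values per line */
            if (col == 0)
                printf("\n");
        } while (*s++);                    /* stops after emitting the 0 */
        if (col != 0)
            printf("\n");
    }

Calling emit_asciiz("The sum from 0 .. 100 is %d\n") reproduces the four .byte lines shown above.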
Data layout directives specify data in a human-readable form that the assembler translates to binary. Other layout directives are described in Section B.10.

String Directive

EXAMPLE: Define the sequence of bytes produced by this directive:

    .asciiz "The quick brown fox jumps over the lazy dog"

ANSWER:

    .byte 84, 104, 101, 32, 113, 117, 105, 99
    .byte 107, 32, 98, 114, 111, 119, 110, 32
    .byte 102, 111, 120, 32, 106, 117, 109, 112
    .byte 115, 32, 111, 118, 101, 114, 32, 116
    .byte 104, 101, 32, 108, 97, 122, 121, 32
    .byte 100, 111, 103, 0

A macro is a pattern-matching and replacement facility that provides a simple mechanism to name a frequently used sequence of instructions. Instead of repeatedly typing the same instructions every time they are used, a programmer invokes the macro and the assembler replaces the macro call with the corresponding sequence of instructions. Macros, like subroutines, permit a programmer to create and name a new abstraction for a common operation. Unlike subroutines, however, macros do not cause a subroutine call and return when the program runs, since a macro call is replaced by the macro's body when the program is assembled. After this replacement, the resulting assembly is indistinguishable from the equivalent program written without macros.

Macros

EXAMPLE: As an example, suppose that a programmer needs to print many numbers. The library routine printf accepts a format string and one or more values to print as its arguments. A programmer could print the integer in register $7 with the following instructions:

             .data
    int_str: .asciiz "%d"
             .text
             la $a0, int_str   # Load string address
                               # into first arg
             mov $a1, $7       # Load value into
                               # second arg
             jal printf        # Call the printf routine

The .data directive tells the assembler to store the string in the program's data segment, and the .text directive tells the assembler to store the instructions in its text segment. However, printing many numbers in this fashion is tedious and produces a verbose program that is difficult to understand. An alternative is to introduce a macro, print_int, to print an integer:

             .data
    int_str: .asciiz "%d"
             .text
             .macro print_int($arg)
             la $a0, int_str   # Load string address into
                               # first arg
             mov $a1, $arg     # Load macro's parameter
                               # ($arg) into second arg
             jal printf        # Call the printf routine
             .end_macro

             print_int($7)

The macro has a formal parameter, $arg, that names the argument to the macro. When the macro is expanded, the argument from a call is substituted for the formal parameter throughout the macro's body. Then the assembler replaces the call with the macro's newly expanded body. In the first call on print_int, the argument is $7, so the macro expands to the code

    la $a0, int_str
    mov $a1, $7
    jal printf

In a second call on print_int, say, print_int($t0), the argument is $t0, so the macro expands to

    la $a0, int_str
    mov $a1, $t0
    jal printf

formal parameter: A variable that is the argument to a procedure or macro; replaced by that argument once the macro is expanded.

What does the call print_int($a0) expand to?
ANSWER:

    la $a0, int_str
    mov $a1, $a0
    jal printf

This example illustrates a drawback of macros. A programmer who uses this macro must be aware that print_int uses register $a0 and so cannot correctly print the value in that register.

Hardware/Software Interface: Some assemblers also implement pseudoinstructions, which are instructions provided by an assembler but not implemented in hardware. Chapter 2 contains many examples of how the MIPS assembler synthesizes pseudoinstructions and addressing modes from the spartan MIPS hardware instruction set. For example, Section 2.7 in Chapter 2 describes how the assembler synthesizes the blt instruction from two other instructions: slt and bne. By extending the instruction set, the MIPS assembler makes assembly language programming easier without complicating the hardware. Many pseudoinstructions could also be simulated with macros, but the MIPS assembler can generate better code for these instructions because it can use a dedicated register ($at) and is able to optimize the generated code.

Elaboration: Assemblers conditionally assemble pieces of code, which permits a programmer to include or exclude groups of instructions when a program is assembled. This feature is particularly useful when several versions of a program differ by a small amount. Rather than keep these programs in separate files—which greatly complicates fixing bugs in the common code—programmers typically merge the versions into a single file. Code particular to one version is conditionally assembled, so it can be excluded when other versions of the program are assembled.

If macros and conditional assembly are useful, why do assemblers for UNIX systems rarely, if ever, provide them? One reason is that most programmers on these systems write programs in higher-level languages like C. Most of the assembly code is produced by compilers, which find it more convenient to repeat code rather than define macros. Another reason is that other tools on UNIX—such as cpp, the C preprocessor, or m4, a general macro processor—can provide macros and conditional assembly for assembly language programs.
B.3 Linkers

Separate compilation permits a program to be split into pieces that are stored in different files. Each file contains a logically related collection of subroutines and data structures that form a module in a larger program. A file can be compiled and assembled independently of other files, so changes to one module do not require recompiling the entire program. As we discussed above, separate compilation necessitates the additional step of linking to combine object files from separate modules and fix their unresolved references.

separate compilation: Splitting a program across many files, each of which can be compiled without knowledge of what is in the other files.

The tool that merges these files is the linker (see Figure B.3.1). It performs three tasks:

■ Searches the program libraries to find library routines used by the program
■ Determines the memory locations that code from each module will occupy and relocates its instructions by adjusting absolute references
■ Resolves references among files

A linker's first task is to ensure that a program contains no undefined labels. The linker matches the external symbols and unresolved references from a program's files. An external symbol in one file resolves a reference from another file if both refer to a label with the same name. Unmatched references mean a symbol was used but not defined anywhere in the program. Unresolved references at this stage in the linking process do not necessarily mean a programmer made a mistake. The program could have referenced a library routine whose code was not in the object files passed to the linker.

After matching symbols in the program, the linker searches the system's program libraries to find predefined subroutines and data structures that the program references. The basic libraries contain routines that read and write data, allocate and deallocate memory, and perform numeric operations. Other libraries contain routines to access a database or manipulate terminal windows. A program that references an unresolved symbol that is not in any library is erroneous and cannot be linked. When the program uses a library routine, the linker extracts the routine's code from the library and incorporates it into the program text segment. This new routine, in turn, may depend on other library routines, so the linker continues to fetch other library routines until no external references are unresolved or a routine cannot be found.

If all external references are resolved, the linker next determines the memory locations that each module will occupy; a small sketch of the relocation step follows.
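The relocation part of the second task can be sketched in a few lines of C. The entry format here is hypothetical and much simpler than a real MIPS relocation record, which must also distinguish 26-bit jump targets from split high/low address halves; the point is only that each recorded absolute reference is patched by the distance the module moved.

    /* One relocation entry: the byte offset, within the module's text
       segment, of a word that contains an absolute address. */
    struct reloc { unsigned offset; };

    /* Patch every recorded absolute reference after the module has been
       moved by 'delta' bytes from its assembled origin (e.g., from 0). */
    void relocate(unsigned *text, const struct reloc *r, int nrelocs,
                  unsigned delta)
    {
        for (int i = 0; i < nrelocs; i++)
            text[r[i].offset / 4] += delta;   /* words are 4 bytes on MIPS */
    }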
Since the files were assembled in isolation, the assembler could not know where a module's instructions or data would be placed relative to other modules. When the linker places a module in memory, all absolute references must be relocated to reflect its true location. Since the linker has relocation information that identifies all relocatable references, it can efficiently find and backpatch these references.

The linker produces an executable file that can run on a computer. Typically, this file has the same format as an object file, except that it contains no unresolved references or relocation information.

FIGURE B.3.1 The linker searches a collection of object files and program libraries to find nonlocal routines used in a program, combines them into a single executable file, and resolves references between routines in different files. [The figure shows two object files—one whose main routine contains unresolved jal calls to sub and printf, and one defining sub—combined by the linker with printf from the C library into an executable file in which every call target is resolved.]

B.4 Loading

A program that links without an error can be run. Before being run, the program resides in a file on secondary storage, such as a disk. On UNIX systems, the operating
system kernel brings a program into memory and starts it running. To start a program, the operating system performs the following steps:

1. It reads the executable file's header to determine the size of the text and data segments.
2. It creates a new address space for the program. This address space is large enough to hold the text and data segments, along with a stack segment (see Section B.5).
3. It copies instructions and data from the executable file into the new address space.
4. It copies arguments passed to the program onto the stack.
5. It initializes the machine registers. In general, most registers are cleared, but the stack pointer must be assigned the address of the first free stack location (see Section B.5).
6. It jumps to a start-up routine that copies the program's arguments from the stack to registers and calls the program's main routine. If the main routine returns, the start-up routine terminates the program with the exit system call.

B.5 Memory Usage

The next few sections elaborate the description of the MIPS architecture presented earlier in the book. Earlier chapters focused primarily on hardware and its relationship with low-level software. These sections focus primarily on how assembly language programmers use MIPS hardware. These sections describe a set of conventions followed on many MIPS systems. For the most part, the hardware does not impose these conventions. Instead, they represent an agreement among programmers to follow the same set of rules so that software written by different people can work together and make effective use of MIPS hardware.

Systems based on MIPS processors typically divide memory into three parts (see Figure B.5.1). The first part, near the bottom of the address space (starting at address 400000hex), is the text segment, which holds the program's instructions. The second part, above the text segment, is the data segment, which is further divided into two parts. Static data (starting at address 10000000hex) contains objects whose size is known to the compiler and whose lifetime—the interval during which a program can access them—is the program's entire execution.

static data: The portion of memory that contains data whose size is known to the compiler and whose lifetime is the program's entire execution.

For example, in C, global variables are statically allocated, since they can be referenced
FIGURE B.5.1 Layout of memory. From the bottom of the address space upward: reserved space, the text segment (starting at 400000hex), the data segment (static data starting at 10000000hex, with dynamic data above it), and the stack segment growing down from 7fff fffchex.

Immediately above static data is dynamic data. This data, as its name implies, is allocated by the program as it executes. In C programs, the malloc library routine finds and returns a new block of memory.

Hardware/Software Interface

Because the data segment begins far above the program at address 10000000hex, load and store instructions cannot directly reference data objects with their 16-bit offset fields (see Section 2.5 in Chapter 2). For example, to load the word in the data segment at address 10010020hex into register $v0 requires two instructions:

lui $s0, 0x1001       # 0x1001 means 1001 base 16
lw $v0, 0x0020($s0)   # 0x10010000 + 0x0020 = 0x10010020

(The 0x before a number means that it is a hexadecimal value. For example, 0x8000 is 8000hex or 32,768ten.)

To avoid repeating the lui instruction at every load and store, MIPS systems typically dedicate a register ($gp) as a global pointer to the static data segment. This register contains address 10008000hex, so load and store instructions can use their signed 16-bit offset fields to access the first 64 KB of the static data segment. With this global pointer, we can rewrite the example as a single instruction:

lw $v0, 0x8020($gp)

Of course, a global pointer register makes addressing locations 10000000hex–10010000hex faster than other heap locations. The MIPS compiler usually stores global variables in this area, because these variables have fixed locations and fit better than other global data, such as arrays.
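Because the 16-bit offset field is sign-extended, $gp-relative addressing spans the 64 KB window from 32 KB below to 32 KB above $gp. A sketch of the two extremes, assuming the conventional $gp value of 10008000hex described above:

lw $v0, -0x8000($gp)   # 0x10008000 - 0x8000 = 0x10000000, bottom of static data
lw $v1, 0x7ffc($gp)    # 0x10008000 + 0x7ffc = 0x1000fffc, top of the window

Note that an offset written as 0x8020 is negative when sign-extended, so on bare hardware it addresses below $gp; an assembler may accept larger offsets only by expanding the load into a lui sequence.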
Since a compiler cannot predict how much memory a program will allocate, the operating system expands the dynamic data area to meet demand. As the upward arrow in Figure B.5.1 indicates, malloc expands the dynamic area with the sbrk system call, which causes the operating system to add more pages to the program's virtual address space (see Section 5.4 in Chapter 5) immediately above the dynamic data segment.

The third part, the program stack segment, resides at the top of the virtual address space (starting at address 7fffffffhex). Like dynamic data, the maximum size of a program's stack is not known in advance. As the program pushes values onto the stack, the operating system expands the stack segment down toward the data segment.

stack segment The portion of memory used by a program to hold procedure call frames.

This three-part division of memory is not the only possible one. However, it has two important characteristics: the two dynamically expandable segments are as far apart as possible, and they can grow to use a program's entire address space.

B.6 Procedure Call Convention

Conventions governing the use of registers are necessary when procedures in a program are compiled separately. To compile a particular procedure, a compiler must know which registers it may use and which registers are reserved for other procedures. Rules for using registers are called register use or procedure call conventions. As the name implies, these rules are, for the most part, conventions followed by software rather than rules enforced by hardware. However, most compilers and programmers try very hard to follow these conventions, because violating them causes insidious bugs.

register use convention Also called procedure call convention. A software protocol governing the use of registers by procedures.

The calling convention described in this section is the one used by the gcc compiler. The native MIPS compiler uses a more complex convention that is slightly faster.

The MIPS CPU contains 32 general-purpose registers that are numbered 0–31:

■ Register $0 always contains the hardwired value 0.
■ Registers $at (1), $k0 (26), and $k1 (27) are reserved for the assembler and operating system and should not be used by user programs or compilers.
■ Registers $a0–$a3 (4–7) are used to pass the first four arguments to routines (remaining arguments are passed on the stack). Registers $v0 and $v1 (2, 3) are used to return values from functions.
■ Registers $t0–$t9 (8–15, 24, 25) are caller-saved registers that are used to hold temporary quantities that need not be preserved across calls (see Section 2.8 in Chapter 2).
■ Registers $s0–$s7 (16–23) are callee-saved registers that hold long-lived values that should be preserved across calls.
■ Register $gp (28) is a global pointer that points to the middle of a 64K block of memory in the static data segment.
■ Register $sp (29) is the stack pointer, which points to the last location on the stack. Register $fp (30) is the frame pointer. The jal instruction writes register $ra (31), the return address from a procedure call. These two registers are explained in the next section.

The two-letter abbreviations and names for these registers (for example, $sp for the stack pointer) reflect the registers' intended uses in the procedure call convention. In describing this convention, we will use the names instead of register numbers. Figure B.6.1 lists the registers and describes their intended uses.

Procedure Calls

This section describes the steps that occur when one procedure (the caller) invokes another procedure (the callee). Programmers who write in a high-level language (like C or Pascal) never see the details of how one procedure calls another, because the compiler takes care of this low-level bookkeeping. However, assembly language programmers must explicitly implement every procedure call and return.

Most of the bookkeeping associated with a call is centered around a block of memory called a procedure call frame. This memory is used for a variety of purposes:

■ To hold values passed to a procedure as arguments
■ To save registers that a procedure may modify, but which the procedure's caller does not want changed
■ To provide space for variables local to a procedure

In most programming languages, procedure calls and returns follow a strict last-in, first-out (LIFO) order, so this memory can be allocated and deallocated on a stack, which is why these blocks of memory are sometimes called stack frames.

Figure B.6.2 shows a typical stack frame. The frame consists of the memory between the frame pointer ($fp), which points to the first word of the frame, and the stack pointer ($sp), which points to the last word of the frame. The stack grows down from higher memory addresses, so the frame pointer points above the stack pointer.

caller-saved register A register saved by the routine making a procedure call, if it needs the register's value after the call.

callee-saved register A register saved by the routine being called, if it uses the register.

procedure call frame A block of memory that is used to hold values passed to a procedure as arguments, to save registers that a procedure may modify but that the procedure's caller does not want changed, and to provide space for variables local to a procedure.
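A minimal sketch of how the two classes of registers differ in practice (helper is a hypothetical routine): the caller preserves a $t register around a call itself, while the callee preserves any $s register it uses:

# Caller-saved: the caller keeps $t0 alive across the call.
        addiu $sp, $sp, -4
        sw    $t0, 0($sp)      # save $t0; helper is free to overwrite it
        jal   helper
        lw    $t0, 0($sp)      # restore $t0 after the call
        addiu $sp, $sp, 4

# Callee-saved: the callee preserves $s0 before using it.
helper: addiu $sp, $sp, -4
        sw    $s0, 0($sp)      # save $s0 before modifying it
        move  $s0, $a0         # body may now use $s0 freely
        lw    $s0, 0($sp)      # restore $s0 for the caller
        addiu $sp, $sp, 4
        jr    $ra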
The executing procedure uses the frame pointer to quickly access values in its stack frame. For example, an argument in the stack frame can be loaded into register $v0 with the instruction

lw $v0, 0($fp)

Register name   Number   Usage
$zero           0        constant 0
$at             1        reserved for assembler
$v0–$v1         2–3      expression evaluation and results of a function
$a0–$a3         4–7      arguments 1–4
$t0–$t7         8–15     temporaries (not preserved across call)
$s0–$s7         16–23    saved temporaries (preserved across call)
$t8–$t9         24–25    temporaries (not preserved across call)
$k0–$k1         26–27    reserved for OS kernel
$gp             28       pointer to global area
$sp             29       stack pointer
$fp             30       frame pointer
$ra             31       return address (used by function call)

FIGURE B.6.1 MIPS registers and usage convention.
A stack frame may be built in many different ways; however, the caller and callee must agree on the sequence of steps. The steps below describe the calling convention used on most MIPS machines. This convention comes into play at three points during a procedure call: immediately before the caller invokes the callee, just as the callee starts executing, and immediately before the callee returns to the caller. In the first part, immediately before the call, the caller takes the following steps:

1. Pass arguments. By convention, the first four arguments are passed in registers $a0–$a3. Any remaining arguments are pushed on the stack and appear at the beginning of the called procedure's stack frame.
2. Save caller-saved registers. The called procedure can use these registers ($a0–$a3 and $t0–$t9) without first saving their value. If the caller expects to use one of these registers after a call, it must save its value before the call.
3. Execute a jal instruction (see Section 2.8 of Chapter 2), which jumps to the callee's first instruction and saves the return address in register $ra.

FIGURE B.6.2 Layout of a stack frame. The frame pointer ($fp) points to the first word in the currently executing procedure's stack frame. The stack pointer ($sp) points to the last word of the frame. The first four arguments are passed in registers, so the fifth argument is the first one stored on the stack. From higher to lower memory addresses, the frame holds arguments 6 and 5, saved registers, and local variables; $fp points to the top of the frame, $sp to the bottom, and the stack grows toward lower addresses.
Before a called routine starts running, it must take the following steps to set up its stack frame:

1. Allocate memory for the frame by subtracting the frame's size from the stack pointer.
2. Save callee-saved registers in the frame. A callee must save the values in these registers ($s0–$s7, $fp, and $ra) before altering them, since the caller expects to find these registers unchanged after the call. Register $fp is saved by every procedure that allocates a new stack frame. However, register $ra only needs to be saved if the callee itself makes a call. The other callee-saved registers that the procedure uses must also be saved.
3. Establish the frame pointer by adding the stack frame's size minus 4 to $sp and storing the sum in register $fp.

Hardware/Software Interface

The MIPS register use convention provides callee- and caller-saved registers, because both types of registers are advantageous in different circumstances. Callee-saved registers are better used to hold long-lived values, such as variables from a user's program. These registers are only saved during a procedure call if the callee expects to use the register. On the other hand, caller-saved registers are better used to hold short-lived quantities that do not persist across a call, such as immediate values in an address calculation. During a call, the callee can also use these registers for short-lived temporaries.

Finally, the callee returns to the caller by executing the following steps:

1. If the callee is a function that returns a value, place the returned value in register $v0.
2. Restore all callee-saved registers that were saved upon procedure entry.
3. Pop the stack frame by adding the frame size to $sp.
4. Return by jumping to the address in register $ra.

Elaboration: A programming language that does not permit recursive procedures (procedures that call themselves either directly or indirectly through a chain of calls) need not allocate frames on a stack. In a nonrecursive language, each procedure's frame may be statically allocated, since only one invocation of a procedure can be active at a time. Older versions of Fortran prohibited recursion, because statically allocated frames produced faster code on some older machines. However, on load-store architectures like MIPS, stack frames may be just as fast, because a frame pointer register points directly to the active stack frame, which permits a single load or store instruction to access values in the frame. In addition, recursion is a valuable programming technique.

recursive procedures Procedures that call themselves either directly or indirectly through a chain of calls.
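Before the full example in the next section, here is a minimal sketch tying these steps together for a hypothetical routine with a 32-byte frame (the numbers in the comments refer to the setup and return steps above):

proc:   subu  $sp, $sp, 32     # setup 1: allocate the frame
        sw    $ra, 20($sp)     # setup 2: save the callee-saved registers used
        sw    $fp, 16($sp)
        addiu $fp, $sp, 28     # setup 3: $fp = $sp + frame size - 4
                               # ... procedure body goes here ...
        li    $v0, 0           # return 1: place the result in $v0
        lw    $ra, 20($sp)     # return 2: restore saved registers
        lw    $fp, 16($sp)
        addiu $sp, $sp, 32     # return 3: pop the frame
        jr    $ra              # return 4: jump to the return address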
Procedure Call Example

As an example, consider the C routine

main ()
{
  printf ("The factorial of 10 is %d\n", fact (10));
}

int fact (int n)
{
  if (n < 1)
    return (1);
  else
    return (n * fact (n - 1));
}

which computes and prints 10! (the factorial of 10, 10! = 10 × 9 × ... × 1). fact is a recursive routine that computes n! by multiplying n times (n - 1)!. The assembly code for this routine illustrates how programs manipulate stack frames.

Upon entry, the routine main creates its stack frame and saves the two callee-saved registers it will modify: $fp and $ra. The frame is larger than required for these two registers because the calling convention requires the minimum size of a stack frame to be 24 bytes. This minimum frame can hold four argument registers ($a0–$a3) and the return address $ra, padded to a double-word boundary (24 bytes). Since main also needs to save $fp, its stack frame must be two words larger (remember: the stack pointer is kept doubleword aligned).

.text
.globl main
main:
  subu  $sp,$sp,32    # Stack frame is 32 bytes long
  sw    $ra,20($sp)   # Save return address
  sw    $fp,16($sp)   # Save old frame pointer
  addiu $fp,$sp,28    # Set up frame pointer

The routine main then calls the factorial routine and passes it the single argument 10. After fact returns, main calls the library routine printf and passes it both a format string and the result returned from fact:
  li   $a0,10         # Put argument (10) in $a0
  jal  fact           # Call factorial function
  la   $a0,$LC        # Put format string in $a0
  move $a1,$v0        # Move fact result to $a1
  jal  printf         # Call the print function

Finally, after printing the factorial, main returns. But first, it must restore the registers it saved and pop its stack frame:

  lw    $ra,20($sp)   # Restore return address
  lw    $fp,16($sp)   # Restore frame pointer
  addiu $sp,$sp,32    # Pop stack frame
  jr    $ra           # Return to caller

.rdata
$LC: .ascii "The factorial of 10 is %d\n\000"

The factorial routine is similar in structure to main. First, it creates a stack frame and saves the callee-saved registers it will use. In addition to saving $ra and $fp, fact also saves its argument ($a0), which it will use for the recursive call:

.text
fact:
  subu  $sp,$sp,32    # Stack frame is 32 bytes long
  sw    $ra,20($sp)   # Save return address
  sw    $fp,16($sp)   # Save frame pointer
  addiu $fp,$sp,28    # Set up frame pointer
  sw    $a0,0($fp)    # Save argument (n)

The heart of the fact routine performs the computation from the C program. It tests whether the argument is greater than 0. If not, the routine returns the value 1. If the argument is greater than 0, the routine recursively calls itself to compute fact(n-1) and multiplies that value times n:

  lw   $v0,0($fp)     # Load n
  bgtz $v0,$L2        # Branch if n > 0
  li   $v0,1          # Return 1
  j    $L1            # Jump to code to return
$L2:
  lw   $v1,0($fp)     # Load n
  subu $v0,$v1,1      # Compute n - 1
  move $a0,$v0        # Move value to $a0
  jal  fact           # Call factorial function
  lw   $v1,0($fp)     # Load n
  mul  $v0,$v0,$v1    # Compute fact(n-1) * n

Finally, the factorial routine restores the callee-saved registers and returns the value in register $v0:

$L1:                  # Result is in $v0
  lw    $ra, 20($sp)  # Restore $ra
  lw    $fp, 16($sp)  # Restore $fp
  addiu $sp, $sp, 32  # Pop stack
  jr    $ra           # Return to caller

EXAMPLE

Stack in Recursive Procedure

Figure B.6.3 shows the stack at the call fact(7). main runs first, so its frame is deepest on the stack. main calls fact(10), whose stack frame is next on the stack. Each invocation recursively invokes fact to compute the next-lowest factorial. The stack frames parallel the LIFO order of these calls. What does the stack look like when the call to fact(10) returns?

FIGURE B.6.3 Stack frames during the call of fact(7). From the bottom of the stack: main's frame (holding the old $ra and $fp), then frames for fact(10) through fact(7), each holding the caller's old $ra, $fp, and $a0; the stack grows toward lower addresses.
ANSWER

When the call to fact(10) returns, all the recursive invocations of fact have returned and popped their frames, so the stack contains only main's frame, holding the old $ra and $fp.

Elaboration: The difference between the MIPS compiler and the gcc compiler is that the MIPS compiler usually does not use a frame pointer, so this register is available as another callee-saved register, $s8. This change saves a couple of instructions in the procedure call and return sequence. However, it complicates code generation, because a procedure must access its stack frame with $sp, whose value can change during a procedure's execution if values are pushed on the stack.

Another Procedure Call Example

As another example, consider the following routine that computes the tak function, a widely used benchmark created by Ikuo Takeuchi. This function does not compute anything useful, but it is a heavily recursive program that illustrates the MIPS calling convention.

int tak (int x, int y, int z)
{
  if (y < x)
    return 1 + tak (tak (x - 1, y, z),
                    tak (y - 1, z, x),
                    tak (z - 1, x, y));
  else
    return z;
}

int main ()
{
  tak(18, 12, 6);
}

The assembly code for this program is shown below. The tak function first saves its return address in its stack frame and its arguments in callee-saved registers, since the routine may make calls that need to use registers $a0–$a2 and $ra. The function uses callee-saved registers, since they hold values that persist over the
lifetime of the function, which includes several calls that could potentially modify registers.

.text
.globl tak
tak:
  subu $sp, $sp, 40
  sw   $ra, 32($sp)
  sw   $s0, 16($sp)   # x
  move $s0, $a0
  sw   $s1, 20($sp)   # y
  move $s1, $a1
  sw   $s2, 24($sp)   # z
  move $s2, $a2
  sw   $s3, 28($sp)   # temporary

The routine then begins execution by testing if y < x. If not, it branches to label L1, which is shown below.

  bge $s1, $s0, L1    # Branch to L1 if y >= x (i.e., not y < x)

If y < x, then it executes the body of the routine, which contains four recursive calls. The first call uses almost the same arguments as its parent:

  addiu $a0, $s0, -1
  move  $a1, $s1
  move  $a2, $s2
  jal   tak           # tak (x - 1, y, z)
  move  $s3, $v0

Note that the result from the first recursive call is saved in register $s3, so that it can be used later.

The function now prepares arguments for the second recursive call.

  addiu $a0, $s1, -1
  move  $a1, $s2
  move  $a2, $s0
  jal   tak           # tak (y - 1, z, x)

In the instructions below, the result from this recursive call is saved in register $s0. But first we need to read, for the last time, the saved value of the first argument from this register.
  addiu $a0, $s2, -1
  move  $a1, $s0      # last use of the saved first argument (x)
  move  $a2, $s1
  move  $s0, $v0      # save the second call's result in $s0
  jal   tak           # tak (z - 1, x, y)

After the three inner recursive calls, we are ready for the final recursive call. After the call, the function's result is in $v0 and control jumps to the function's epilogue.

  move  $a0, $s3
  move  $a1, $s0
  move  $a2, $v0
  jal   tak           # tak (tak(...), tak(...), tak(...))
  addiu $v0, $v0, 1
  j     L2

The code at label L1 is the else branch of the if-then-else statement. It just moves the value of argument z into the return register and falls into the function epilogue.

L1:
  move $v0, $s2

The code below is the function epilogue, which restores the saved registers and returns the function's result to its caller.

L2:
  lw    $ra, 32($sp)
  lw    $s0, 16($sp)
  lw    $s1, 20($sp)
  lw    $s2, 24($sp)
  lw    $s3, 28($sp)
  addiu $sp, $sp, 40
  jr    $ra

The main routine calls the tak function with its initial arguments, then takes the computed result (7) and prints it using SPIM's system call for printing integers.

.globl main
main:
  subu $sp, $sp, 24
  sw   $ra, 16($sp)
  li   $a0, 18
  li   $a1, 12
  li    $a2, 6
  jal   tak           # tak(18, 12, 6)
  move  $a0, $v0
  li    $v0, 1        # print_int syscall
  syscall
  lw    $ra, 16($sp)
  addiu $sp, $sp, 24
  jr    $ra

B.7 Exceptions and Interrupts

Section 4.9 of Chapter 4 describes the MIPS exception facility, which responds both to exceptions caused by errors during an instruction's execution and to external interrupts caused by I/O devices. This section describes exception and interrupt handling in more detail.1

In MIPS processors, a part of the CPU called coprocessor 0 records the information the software needs to handle exceptions and interrupts. The MIPS simulator SPIM does not implement all of coprocessor 0's registers, since many are not useful in a simulator or are part of the memory system, which SPIM does not implement. However, SPIM does provide the following coprocessor 0 registers:

Register name   Number   Usage
BadVAddr        8        memory address at which an offending memory reference occurred
Count           9        timer
Compare         11       value compared against timer; causes an interrupt when they match
Status          12       interrupt mask and enable bits
Cause           13       exception type and pending interrupt bits
EPC             14       address of instruction that caused exception
Config          16       configuration of machine

1. This section discusses exceptions in the MIPS-32 architecture, which is what SPIM implements in Version 7.0 and later. Earlier versions of SPIM implemented the MIPS-1 architecture, which handled exceptions slightly differently. Converting programs from these versions to run on MIPS-32 should not be difficult, as the changes are limited to the Status and Cause register fields and the replacement of the rfe instruction by the eret instruction.

interrupt handler A piece of code that is run as a result of an exception or an interrupt.
These seven registers are part of coprocessor 0's register set. They are accessed by the mfc0 and mtc0 instructions. After an exception, register EPC contains the address of the instruction that was executing when the exception occurred. If the exception was caused by an external interrupt, then the instruction will not have started executing. All other exceptions are caused by the execution of the instruction at EPC, except when the offending instruction is in the delay slot of a branch or jump. In that case, EPC points to the branch or jump instruction and the BD bit is set in the Cause register. When that bit is set, the exception handler must look at EPC + 4 for the offending instruction. However, in either case, an exception handler properly resumes the program by returning to the instruction at EPC.

If the instruction that caused the exception made a memory access, register BadVAddr contains the referenced memory location's address.

The Count register is a timer that increments at a fixed rate (by default, every 10 milliseconds) while SPIM is running. When the value in the Count register equals the value in the Compare register, a hardware interrupt at priority level 5 occurs.

Figure B.7.1 shows the subset of the Status register fields implemented by the MIPS simulator SPIM. The interrupt mask field contains a bit for each of the six hardware and two software interrupt levels. A mask bit that is 1 allows interrupts at that level to interrupt the processor. A mask bit that is 0 disables interrupts at that level. When an interrupt arrives, it sets its interrupt pending bit in the Cause register, even if the mask bit is disabled. When an interrupt is pending, it will interrupt the processor when its mask bit is subsequently enabled.

The user mode bit is 0 if the processor is running in kernel mode and 1 if it is running in user mode. On SPIM, this bit is fixed at 1, since the SPIM processor does not implement kernel mode. The exception level bit is normally 0, but is set to 1 after an exception occurs. When this bit is 1, interrupts are disabled and the EPC is not updated if another exception occurs. This bit prevents an exception handler from being disturbed by an interrupt or exception, but it should be reset when the handler finishes. If the interrupt enable bit is 1, interrupts are allowed. If it is 0, they are disabled.

Figure B.7.2 shows the subset of Cause register fields that SPIM implements. The branch delay bit is 1 if the last exception occurred in an instruction executed in the delay slot of a branch. The interrupt pending bits become 1 when an interrupt
is raised at a given hardware or software level. The exception code field describes the cause of an exception through the following codes:

Number  Name  Cause of exception
0       Int   interrupt (hardware)
4       AdEL  address error exception (load or instruction fetch)
5       AdES  address error exception (store)
6       IBE   bus error on instruction fetch
7       DBE   bus error on data load or store
8       Sys   syscall exception
9       Bp    breakpoint exception
10      RI    reserved instruction exception
11      CpU   coprocessor unimplemented
12      Ov    arithmetic overflow exception
13      Tr    trap
15      FPE   floating point

Exceptions and interrupts cause a MIPS processor to jump to a piece of code, at address 80000180hex (in the kernel, not user address space), called an exception handler. This code examines the exception's cause and jumps to an appropriate point in the operating system. The operating system responds to an exception either by terminating the process that caused the exception or by performing some action. A process that causes an error, such as executing an unimplemented instruction, is killed by the operating system. On the other hand, other exceptions such as page faults are requests from a process to the operating system to perform a service, such as bringing in a page from disk.

FIGURE B.7.1 The Status register. The fields implemented by SPIM: interrupt mask (bits 15–8), user mode (bit 4), exception level (bit 1), and interrupt enable (bit 0).

FIGURE B.7.2 The Cause register. The fields implemented by SPIM: branch delay (bit 31), pending interrupts (bits 15–8), and exception code (bits 6–2).
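For example, a program can enable the timer's level-5 interrupt by setting the corresponding mask bit and the interrupt enable bit in the Status register. A hedged sketch, assuming hardware level 5 corresponds to mask bit 15 as described above:

mfc0 $t0, $12          # read the Status register (coprocessor 0 register 12)
ori  $t0, $t0, 0x8000  # set interrupt mask bit 15 (hardware level 5)
ori  $t0, $t0, 0x1     # set the interrupt enable bit
mtc0 $t0, $12          # write Status back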
The operating system processes these requests and resumes the process. The final type of exceptions are interrupts from external devices. These generally cause the operating system to move data to or from an I/O device and resume the interrupted process.

The code in the example below is a simple exception handler, which invokes a routine to print a message at each exception (but not at interrupts). This code is similar to the exception handler (exceptions.s) used by the SPIM simulator.

EXAMPLE

Exception Handler

The exception handler first saves register $at, which is used in pseudoinstructions in the handler code, then saves $a0 and $a1, which it later uses to pass arguments. The exception handler cannot store the old values from these registers on the stack, as would an ordinary routine, because the cause of the exception might have been a memory reference that used a bad value (such as 0) in the stack pointer. Instead, the exception handler stores these registers in an exception handler register ($k1, since it can't access memory without using $at) and two memory locations (save0 and save1). If the exception routine itself could be interrupted, two locations would not be enough, since the second exception would overwrite values saved during the first exception. However, this simple exception handler finishes running before it enables interrupts, so the problem does not arise.

.ktext 0x80000180
  move $k1, $at       # Save $at register
  sw   $a0, save0     # Handler is not re-entrant and can't use
  sw   $a1, save1     #   stack to save $a0, $a1
                      # Don't need to save $k0/$k1

The exception handler then moves the Cause and EPC registers into CPU registers. The Cause and EPC registers are not part of the CPU register set. Instead, they are registers in coprocessor 0, which is the part of the CPU that handles exceptions. The instruction

mfc0 $k0, $13

moves coprocessor 0's register 13 (the Cause register) into CPU register $k0. Note that the exception handler need not save registers $k0 and $k1, because user programs are not supposed to use these registers. The exception handler uses the value from the Cause register to test whether the exception was caused by an interrupt (see the preceding table). If so, the exception is ignored. If the exception was not an interrupt, the handler calls print_excp to print a message.
  mfc0 $k0, $13       # Move Cause into $k0
  srl  $a0, $k0, 2    # Extract ExcCode field
  andi $a0, $a0, 0xf
  beqz $a0, done      # Branch if ExcCode is Int (0): ignore interrupts
  move $a0, $k0       # Move Cause into $a0
  mfc0 $a1, $14       # Move EPC into $a1
  jal  print_excp     # Print exception error message

Before returning, the exception handler clears the Cause register; resets the Status register to enable interrupts and clear the EXL bit, which allows subsequent exceptions to change the EPC register; and restores registers $a0, $a1, and $at. It then executes the eret (exception return) instruction, which returns to the instruction pointed to by EPC. This exception handler returns to the instruction following the one that caused the exception, so as not to re-execute the faulting instruction and cause the same exception again.

done:
  mfc0 $k0, $14          # Bump EPC
  addiu $k0, $k0, 4      #   so as not to re-execute the faulting instruction
  mtc0 $k0, $14          # Write updated EPC back
  mtc0 $0, $13           # Clear Cause register
  mfc0 $k0, $12          # Fix Status register:
  andi $k0, $k0, 0xfffd  #   clear EXL bit
  ori  $k0, $k0, 0x1     #   enable interrupts
  mtc0 $k0, $12
  lw   $a0, save0        # Restore registers
  lw   $a1, save1
  move $at, $k1
  eret                   # Return to EPC

.kdata
save0: .word 0
save1: .word 0
Elaboration: On real MIPS processors, the return from an exception handler is more complex. The exception handler cannot always jump to the instruction following EPC. For example, if the instruction that caused the exception was in a branch instruction's delay slot (see Chapter 4), the next instruction to execute may not be the following instruction in memory.

B.8 Input and Output

SPIM simulates one I/O device: a memory-mapped console on which a program can read and write characters. When a program is running, SPIM connects its own terminal (or a separate console window in the X-window version xspim or the Windows version PCSpim) to the processor. A MIPS program running on SPIM can read the characters that you type. In addition, if the MIPS program writes characters to the terminal, they appear on SPIM's terminal or console window. One exception to this rule is control-C: this character is not passed to the program, but instead causes SPIM to stop and return to command mode. When the program stops running (for example, because you typed control-C or because the program hit a breakpoint), the terminal is reconnected to SPIM so you can type SPIM commands.

To use memory-mapped I/O (see below), spim or xspim must be started with the -mapped_io flag. PCSpim can enable memory-mapped I/O through a command line flag or the "Settings" dialog.

The terminal device consists of two independent units: a receiver and a transmitter. The receiver reads characters from the keyboard. The transmitter displays characters on the console. The two units are completely independent. This means, for example, that characters typed at the keyboard are not automatically echoed on the display. Instead, a program echoes a character by reading it from the receiver and writing it to the transmitter.

A program controls the terminal with four memory-mapped device registers, as shown in Figure B.8.1. "Memory-mapped" means that each register appears as a special memory location. The Receiver Control register is at location ffff0000hex. Only two of its bits are actually used. Bit 0 is called "ready": if it is 1, it means that a character has arrived from the keyboard but has not yet been read from the Receiver Data register. The ready bit is read-only: writes to it are ignored. The ready bit changes from 0 to 1 when a character is typed at the keyboard, and it changes from 1 to 0 when the character is read from the Receiver Data register.
Bit 1 of the Receiver Control register is the keyboard "interrupt enable." This bit may be both read and written by a program. The interrupt enable is initially 0. If it is set to 1 by a program, the terminal requests an interrupt at hardware level 1 whenever a character is typed and the ready bit becomes 1. However, for the interrupt to affect the processor, interrupts must also be enabled in the Status register (see Section B.7). All other bits of the Receiver Control register are unused.

The second terminal device register is the Receiver Data register (at address ffff0004hex). The low-order eight bits of this register contain the last character typed at the keyboard. All other bits contain 0s. This register is read-only and changes only when a new character is typed at the keyboard. Reading the Receiver Data register resets the ready bit in the Receiver Control register to 0. The value in this register is undefined if the ready bit in the Receiver Control register is 0.

The third terminal device register is the Transmitter Control register (at address ffff0008hex). Only the low-order two bits of this register are used, and they behave much like the corresponding bits of the Receiver Control register.

FIGURE B.8.1 The terminal is controlled by four device registers, each of which appears as a memory location at the given address. Only a few bits of these registers are actually used; the others always read as 0s and are ignored on writes. Receiver control (0xffff0000): ready (bit 0) and interrupt enable (bit 1). Receiver data (0xffff0004): received byte (bits 7–0). Transmitter control (0xffff0008): ready (bit 0) and interrupt enable (bit 1). Transmitter data (0xffff000c): transmitted byte (bits 7–0).
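As a sketch of how these registers work together (polling only, no interrupts; the addresses and ready bits are those in Figure B.8.1), the following loop echoes each typed character back to the console:

        lui  $t0, 0xffff       # $t0 = 0xffff0000, base of the device registers
echo:   lw   $t1, 0($t0)       # receiver control
        andi $t1, $t1, 1       # test the receiver's ready bit
        beqz $t1, echo         # spin until a character arrives
        lw   $a0, 4($t0)       # read the character from receiver data
wait:   lw   $t1, 8($t0)       # transmitter control
        andi $t1, $t1, 1       # test the transmitter's ready bit
        beqz $t1, wait         # spin until the transmitter is free
        sw   $a0, 12($t0)      # write the character to transmitter data
        j    echo              # echo the next character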
Bit 0 of the Transmitter Control register is called "ready" and is read-only. If this bit is 1, the transmitter is ready to accept a new character for output. If it is 0, the transmitter is still busy writing the previous character. Bit 1 is "interrupt enable" and is readable and writable. If this bit is set to 1, then the terminal requests an interrupt at hardware level 0 whenever the transmitter is ready for a new character and the ready bit becomes 1.

The final device register is the Transmitter Data register (at address ffff000chex). When a value is written into this location, its low-order eight bits (i.e., an ASCII character, as in Figure 2.15 in Chapter 2) are sent to the console. When the Transmitter Data register is written, the ready bit in the Transmitter Control register is reset to 0. This bit stays 0 until enough time has elapsed to transmit the character to the terminal; then the ready bit becomes 1 again.

The Transmitter Data register should only be written when the ready bit of the Transmitter Control register is 1. If the transmitter is not ready, writes to the Transmitter Data register are ignored (the write appears to succeed, but the character is not output).

Real computers require time to send characters to a console or terminal. These time lags are simulated by SPIM. For example, after the transmitter starts to write a character, the transmitter's ready bit becomes 0 for a while. SPIM measures time in instructions executed, not in real clock time. This means that the transmitter does not become ready again until the processor executes a fixed number of instructions. If you stop the machine and look at the ready bit, it will not change. However, if you let the machine run, the bit eventually changes back to 1.

B.9 SPIM

SPIM is a software simulator that runs assembly language programs written for processors that implement the MIPS-32 architecture, specifically Release 1 of this architecture with a fixed memory mapping, no caches, and only coprocessors 0 and 1.2 SPIM's name is just MIPS spelled backwards. SPIM can read and immediately execute assembly language files. SPIM is a self-contained system for running MIPS programs.

2. Earlier versions of SPIM (before 7.0) implemented the MIPS-1 architecture used in the original MIPS R2000 processors. This architecture is almost a proper subset of the MIPS-32 architecture, the difference being the manner in which exceptions are handled. MIPS-32 also introduced approximately 60 new instructions, which are supported by SPIM. Programs that ran on earlier versions of SPIM and did not use exceptions should run unmodified on newer versions. Programs that used exceptions will require minor changes.
It contains a debugger and provides a few operating system-like services. SPIM is much slower than a real computer (100 or more times). However, its low cost and wide availability cannot be matched by real hardware!

An obvious question is, "Why use a simulator when most people have PCs that contain processors that run significantly faster than SPIM?" One reason is that the processors in PCs are Intel 80x86s, whose architecture is far less regular and far more complex to understand and program than that of MIPS processors. The MIPS architecture may be the epitome of a simple, clean RISC machine. In addition, simulators can provide a better environment for assembly programming than an actual machine, because they can detect more errors and provide a better interface than an actual computer. Finally, simulators are useful tools in studying computers and the programs that run on them. Because they are implemented in software, not silicon, simulators can be examined and easily modified to add new instructions, build new systems such as multiprocessors, or simply collect data.

Simulation of a Virtual Machine

The basic MIPS architecture is difficult to program directly because of delayed branches, delayed loads, and restricted address modes. This difficulty is tolerable, since these computers were designed to be programmed in high-level languages and present an interface designed for compilers rather than assembly language programmers. A good part of the programming complexity results from delayed instructions. A delayed branch requires two cycles to execute (see the Elaborations on pages 343 and 381 of Chapter 4). In the second cycle, the instruction immediately following the branch executes. This instruction can perform useful work that normally would have been done before the branch. It can also be a nop (no operation) that does nothing. Similarly, delayed loads require two cycles to bring a value from memory, so the instruction immediately following a load cannot use the value (see Section 4.2 of Chapter 4).

MIPS wisely chose to hide this complexity by having its assembler implement a virtual machine. This virtual computer appears to have nondelayed branches and loads and a richer instruction set than the actual hardware. The assembler reorganizes (rearranges) instructions to fill the delay slots. The virtual computer also provides pseudoinstructions, which appear as real instructions in assembly language programs. The hardware, however, knows nothing about pseudoinstructions, so the assembler must translate them into equivalent sequences of actual machine instructions. For example, the MIPS hardware only provides instructions to branch when a register is equal to or not equal to 0. Other conditional branches, such as one that branches when one register is greater than another, are synthesized by comparing the two registers and branching when the result of the comparison is true (nonzero).

virtual machine A virtual computer that appears to have nondelayed branches and loads and a richer instruction set than the actual hardware.
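For example, a pseudoinstruction that branches when one register is greater than another might expand as follows (a sketch; the assembler's actual expansion may differ):

bgt $t0, $t1, target       # pseudoinstruction: branch if $t0 > $t1

# expands into:
slt $at, $t1, $t0          # $at = 1 if $t1 < $t0, that is, if $t0 > $t1
bne $at, $zero, target     # branch when the comparison was true (nonzero)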
By default, SPIM simulates the richer virtual machine, since this is the machine that most programmers will find useful. However, SPIM can also simulate the delayed branches and loads in the actual hardware. Below, we describe the virtual machine and only mention in passing features that do not belong to the actual hardware. In doing so, we follow the convention of MIPS assembly language programmers (and compilers), who routinely use the extended machine as if it were implemented in silicon.

Getting Started with SPIM

The rest of this appendix introduces SPIM and the MIPS R2000 assembly language. Many details should never concern you; however, the sheer volume of information can sometimes obscure the fact that SPIM is a simple, easy-to-use program. This section starts with a quick tutorial on using SPIM, which should enable you to load, debug, and run simple MIPS programs.

SPIM comes in different versions for different types of computer systems. The one constant is the simplest version, called spim, which is a command-line-driven program that runs in a console window. It operates like most programs of this type: you type a line of text, hit the return key, and spim executes your command. Despite its lack of a fancy interface, spim can do everything that its fancy cousins can do.

There are two fancy cousins to spim. The version that runs in the X-windows environment of a UNIX or Linux system is called xspim. xspim is an easier program to learn and use than spim, because its commands are always visible on the screen and because it continually displays the machine's registers and memory. The other fancy version is called PCSpim and runs on Microsoft Windows. The UNIX and Windows versions of SPIM are on the CD (click on Software). Tutorials on xspim, PCSpim, spim, and SPIM command-line options are on the CD (click on Tutorials). If you are going to run SPIM on a PC running Microsoft Windows, you should first look at the tutorial on PCSpim on the CD. If you are going to run SPIM on a computer running UNIX or Linux, you should read the tutorial on xspim (click on Tutorials).

Surprising Features

Although SPIM faithfully simulates the MIPS computer, SPIM is a simulator, and certain things are not identical to an actual computer. The most obvious differences are that instruction timing and the memory systems are not identical. SPIM does not simulate caches or memory latency, nor does it accurately reflect floating-point operation or multiply and divide instruction delays. In addition, the floating-point instructions do not detect many error conditions, which would cause exceptions on a real machine.
Another surprise (which occurs on the real machine as well) is that a pseudoinstruction expands to several machine instructions. When you single-step or examine memory, the instructions that you see are different from the source program. The correspondence between the two sets of instructions is fairly simple, since SPIM does not reorganize instructions to fill delay slots.

Byte Order

Processors can number bytes within a word so the byte with the lowest number is either the leftmost or the rightmost one. The convention used by a machine is called its byte order. MIPS processors can operate with either big-endian or little-endian byte order. For example, in a big-endian machine, the directive .byte 0, 1, 2, 3 would result in a memory word whose bytes, from left to right, hold 0, 1, 2, 3, while in a little-endian machine, the word's bytes, from left to right, hold 3, 2, 1, 0.

SPIM operates with both byte orders. SPIM's byte order is the same as the byte order of the underlying machine that runs the simulator. For example, on an Intel 80x86, SPIM is little-endian, while on a Macintosh or Sun SPARC, SPIM is big-endian.

System Calls

SPIM provides a small set of operating system-like services through the system call (syscall) instruction. To request a service, a program loads the system call code (see Figure B.9.1) into register $v0 and arguments into registers $a0–$a3 (or $f12 for floating-point values). System calls that return values put their results in register $v0 (or $f0 for floating-point results). For example, the following code prints "the answer = 5":

.data
str: .asciiz "the answer = "
.text
  li $v0, 4      # system call code for print_str
  la $a0, str    # address of string to print
  syscall        # print the string
  li $v0, 1      # system call code for print_int
  li $a0, 5      # integer to print
  syscall        # print it

The print_int system call is passed an integer and prints it on the console. print_float prints a single floating-point number; print_double prints a double precision number; and print_string is passed a pointer to a null-terminated string, which it writes to the console.

The system calls read_int, read_float, and read_double read an entire line of input up to and including the newline. Characters following the number are ignored. read_string has the same semantics as the UNIX library routine fgets. It reads up to n − 1 characters into a buffer and terminates the string with a null byte. If fewer than n − 1 characters are on the current line, read_string reads up to and including the newline and again null-terminates the string.

Service        Code  Arguments                                           Result
print_int      1     $a0 = integer
print_float    2     $f12 = float
print_double   3     $f12 = double
print_string   4     $a0 = string
read_int       5                                                         integer (in $v0)
read_float     6                                                         float (in $f0)
read_double    7                                                         double (in $f0)
read_string    8     $a0 = buffer, $a1 = length
sbrk           9     $a0 = amount                                        address (in $v0)
exit           10
print_char     11    $a0 = char
read_char      12                                                        char (in $v0)
open           13    $a0 = filename (string), $a1 = flags, $a2 = mode    file descriptor (in $a0)
read           14    $a0 = file descriptor, $a1 = buffer, $a2 = length   num chars read (in $a0)
write          15    $a0 = file descriptor, $a1 = buffer, $a2 = length   num chars written (in $a0)
close          16    $a0 = file descriptor
exit2          17    $a0 = result

FIGURE B.9.1 System services.
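As a further sketch of this call-and-return pattern, the following fragment reads an integer from the console and echoes it back:

li   $v0, 5      # system call code for read_int
syscall          # the integer typed by the user is returned in $v0
move $a0, $v0    # make it the argument of print_int
li   $v0, 1      # system call code for print_int
syscall          # print the integer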
Warning: Programs that use these syscalls to read from the terminal should not use memory-mapped I/O (see Section B.8).

sbrk returns a pointer to a block of memory containing n additional bytes. exit stops the program SPIM is running. exit2 terminates the SPIM program, and the argument to exit2 becomes the value returned when the SPIM simulator itself terminates. print_char and read_char write and read a single character. open, read, write, and close are the standard UNIX library calls.

B.10 MIPS R2000 Assembly Language

A MIPS processor consists of an integer processing unit (the CPU) and a collection of coprocessors that perform ancillary tasks or operate on other types of data, such as floating-point numbers (see Figure B.10.1). SPIM simulates two coprocessors. Coprocessor 0 handles exceptions and interrupts. Coprocessor 1 is the floating-point unit. SPIM simulates most aspects of this unit.

Addressing Modes

MIPS is a load-store architecture, which means that only load and store instructions access memory. Computation instructions operate only on values in registers. The bare machine provides only one memory-addressing mode: c(rx), which uses the sum of the immediate c and register rx as the address. The virtual machine provides the following addressing modes for load and store instructions:

Format                  Address computation
(register)              contents of register
imm                     immediate
imm (register)          immediate + contents of register
label                   address of label
label ± imm             address of label + or − immediate
label ± imm (register)  address of label + or − (immediate + contents of register)

Most load and store instructions operate only on aligned data. A quantity is aligned if its memory address is a multiple of its size in bytes. Therefore, a halfword
object must be stored at even addresses, and a full word object must be stored at addresses that are a multiple of four. However, MIPS provides some instructions to manipulate unaligned data (lwl, lwr, swl, and swr).

FIGURE B.10.1 MIPS R2000 CPU and FPU. The CPU contains registers $0–$31, an arithmetic unit, and a multiply/divide unit with its Lo and Hi registers. Coprocessor 1 (the FPU) contains registers $0–$31 and its own arithmetic unit. Coprocessor 0 (traps and memory) contains the BadVAddr, Status, Cause, and EPC registers and connects to memory.

Elaboration: The MIPS assembler (and SPIM) synthesizes the more complex addressing modes by producing one or more instructions before the load or store to compute a complex address. For example, suppose that the label table referred to memory location 0x10000004 and a program contained the instruction

lw $a0, table + 4($a1)

The assembler would translate this instruction into the instructions
  lui  $at, 4096
  addu $at, $at, $a1
  lw   $a0, 8($at)

The first instruction loads the upper bits of the label's address into register $at, which is the register that the assembler reserves for its own use. The second instruction adds the contents of register $a1 to the label's partial address. Finally, the load instruction uses the hardware addressing mode to add the sum of the lower bits of the label's address and the offset from the original instruction to the value in register $at.

Assembler Syntax

Comments in assembler files begin with a sharp sign (#). Everything from the sharp sign to the end of the line is ignored.

Identifiers are a sequence of alphanumeric characters, underbars (_), and dots (.) that do not begin with a number. Instruction opcodes are reserved words that cannot be used as identifiers. Labels are declared by putting them at the beginning of a line followed by a colon, for example:

.data
item: .word 1
.text
.globl main        # Must be global
main: lw $t0, item

Numbers are base 10 by default. If they are preceded by 0x, they are interpreted as hexadecimal. Hence, 256 and 0x100 denote the same value.

Strings are enclosed in double quotes ("). Special characters in strings follow the C convention:

■ newline \n
■ tab \t
■ quote \"

SPIM supports a subset of the MIPS assembler directives:

.align n        Align the next datum on a 2^n byte boundary. For example, .align 2 aligns the next value on a word boundary. .align 0 turns off automatic alignment of .half, .word, .float, and .double directives until the next .data or .kdata directive.

.ascii str      Store the string str in memory, but do not null-terminate it.
.asciiz str         Store the string str in memory and null-terminate it.

.byte b1, ..., bn   Store the n values in successive bytes of memory.

.data <addr>        Subsequent items are stored in the data segment. If the optional argument addr is present, subsequent items are stored starting at address addr.

.double d1, ..., dn Store the n floating-point double precision numbers in successive memory locations.

.extern sym size    Declare that the datum stored at sym is size bytes large and is a global label. This directive enables the assembler to store the datum in a portion of the data segment that is efficiently accessed via register $gp.

.float f1, ..., fn  Store the n floating-point single precision numbers in successive memory locations.

.globl sym          Declare that label sym is global and can be referenced from other files.

.half h1, ..., hn   Store the n 16-bit quantities in successive memory halfwords.

.kdata <addr>       Subsequent data items are stored in the kernel data segment. If the optional argument addr is present, subsequent items are stored starting at address addr.

.ktext <addr>       Subsequent items are put in the kernel text segment. In SPIM, these items may only be instructions or words (see the .word directive below). If the optional argument addr is present, subsequent items are stored starting at address addr.

.set noat / .set at The first directive prevents SPIM from complaining about subsequent instructions that use register $at. The second directive re-enables the warning. Since pseudoinstructions expand into code that uses register $at, programmers must be very careful about leaving values in this register.

.space n            Allocate n bytes of space in the current segment (which must be the data segment in SPIM).
.text <addr>        Subsequent items are put in the user text segment. In SPIM, these items may only be instructions or words (see the .word directive below). If the optional argument addr is present, subsequent items are stored starting at address addr.

.word w1, ..., wn   Store the n 32-bit quantities in successive memory words.

SPIM does not distinguish various parts of the data segment (.data, .rdata, and .sdata).

Encoding MIPS Instructions

Figure B.10.2 explains how a MIPS instruction is encoded in a binary number. Each column contains instruction encodings for a field (a contiguous group of bits) from an instruction. The numbers at the left margin are values for a field. For example, the j opcode has a value of 2 in the opcode field. The text at the top of a column names a field and specifies which bits it occupies in an instruction. For example, the op field is contained in bits 26–31 of an instruction. This field encodes most instructions. However, some groups of instructions use additional fields to distinguish related instructions. For example, the different floating-point instructions are specified by bits 0–5. The arrows from the first column show which opcodes use these additional fields.

Instruction Format

The rest of this appendix describes both the instructions implemented by actual MIPS hardware and the pseudoinstructions provided by the MIPS assembler. The two types of instructions are easily distinguished. Actual instructions depict the fields in their binary representation. For example, in

Addition (with overflow)

add rd, rs, rt
  0 | rs | rt | rd | 0 | 0x20
  6 |  5 |  5 |  5 | 5 |    6

the add instruction consists of six fields. Each field's size in bits is the small number below the field. This instruction begins with six bits of 0s. Register specifiers begin with an r, so the next field is a 5-bit register specifier called rs. This is the same register that is the second argument in the symbolic assembly at the left of this line. Another common field is imm16, which is a 16-bit immediate number.
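As a worked example of this encoding, consider add $t0, $s1, $s2. From Figure B.6.1, $t0, $s1, and $s2 are registers 8, 17, and 18, so the six fields assemble to:

  op     | rs    | rt    | rd    | shamt | funct
  000000 | 10001 | 10010 | 01000 | 00000 | 100000

Concatenated, these 32 bits are 0000 0010 0011 0010 0100 0000 0010 0000, or 0x02324020.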
FIGURE B.10.2 MIPS opcode map. The values of each field are shown to its left. The first column shows the values in base 10, and the second shows base 16 for the op field (bits 31 to 26) in the third column. This op field completely specifies the MIPS operation except for six op values: 0, 1, 16, 17, 18, and 19. These operations are determined by other fields, identified by pointers. The last field (funct) uses "f" to mean "s" if rs = 16 and op = 17 or "d" if rs = 17 and op = 17. The second field (rs) uses "z" to mean "0", "1", "2", or "3" if op = 16, 17, 18, or 19, respectively. If rs = 16, the operation is specified elsewhere: if z = 0, the operations are specified in the fourth field (bits 4 to 0); if z = 1, then the operations are in the last field with f = s. If rs = 17 and z = 1, then the operations are in the last field with f = d.
[Figure body omitted: the tables of op, rs, rt, and funct field encodings did not survive extraction.]
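As a small illustration of the directives and encodings above, the following SPIM fragment places a word in the data segment and instructions in the text segment. The label names and the explicit starting address are illustrative, not required; 0x00400000 is SPIM's customary text start.

        .data                    # assemble into the data segment
val:    .word 0x12345678         # one 32-bit quantity
        .text 0x00400000         # user text segment, starting at this address
main:   lw    $t0, val           # the assembler synthesizes the addressing
        addi  $t0, $t0, 1        # encoded as op = 8, rs, rt, imm16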
Pseudoinstructions follow roughly the same conventions, but omit instruction encoding information. For example:

Multiply (without overflow)
mul rdest, rsrc1, src2      pseudoinstruction

In pseudoinstructions, rdest and rsrc1 are registers and src2 is either a register or an immediate value. In general, the assembler and SPIM translate a more general form of an instruction (e.g., add $v1, $a0, 0x55) to a specialized form (e.g., addi $v1, $a0, 0x55).

Arithmetic and Logical Instructions

Absolute value
abs rdest, rsrc     pseudoinstruction

Put the absolute value of register rsrc in register rdest.

Addition (with overflow)
add rd, rs, rt      0      rs     rt     rd     0      0x20
                    6      5      5      5      5      6

Addition (without overflow)
addu rd, rs, rt     0      rs     rt     rd     0      0x21
                    6      5      5      5      5      6

Put the sum of registers rs and rt into register rd.

Addition immediate (with overflow)
addi rt, rs, imm    8      rs     rt     imm
                    6      5      5      16

Addition immediate (without overflow)
addiu rt, rs, imm   9      rs     rt     imm
                    6      5      5      16

Put the sum of register rs and the sign-extended immediate into register rt.
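To see how a pseudoinstruction is realized, here is one plausible expansion of abs $t0, $t1. The exact sequence is up to the assembler; this is a sketch, not SPIM's definitive output.

        addu  $t0, $zero, $t1    # copy the operand
        bgez  $t1, skip          # already non-negative? then done
        sub   $t0, $zero, $t1    # otherwise negate it
skip: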
AND
and rd, rs, rt      0      rs     rt     rd     0      0x24
                    6      5      5      5      5      6

Put the logical AND of registers rs and rt into register rd.

AND immediate
andi rt, rs, imm    0xc    rs     rt     imm
                    6      5      5      16

Put the logical AND of register rs and the zero-extended immediate into register rt.

Count leading ones
clo rd, rs          0x1c   rs     0      rd     0      0x21
                    6      5      5      5      5      6

Count leading zeros
clz rd, rs          0x1c   rs     0      rd     0      0x20
                    6      5      5      5      5      6

Count the number of leading ones (zeros) in the word in register rs and put the result into register rd. If a word is all ones (zeros), the result is 32.

Divide (with overflow)
div rs, rt          0      rs     rt     0      0x1a
                    6      5      5      10     6

Divide (without overflow)
divu rs, rt         0      rs     rt     0      0x1b
                    6      5      5      10     6

Divide register rs by register rt. Leave the quotient in register lo and the remainder in register hi. Note that if an operand is negative, the remainder is unspecified by the MIPS architecture and depends on the convention of the machine on which SPIM is run.
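Because div leaves its results in the special registers lo and hi, the quotient and remainder are retrieved with mflo and mfhi. A minimal sketch, with arbitrary register choices:

        li    $t0, 17
        li    $t1, 5
        div   $t0, $t1           # lo = 17 / 5 = 3, hi = 17 % 5 = 2
        mflo  $t2                # $t2 = quotient
        mfhi  $t3                # $t3 = remainder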
Divide (with overflow)
div rdest, rsrc1, src2      pseudoinstruction

Divide (without overflow)
divu rdest, rsrc1, src2     pseudoinstruction

Put the quotient of register rsrc1 and src2 into register rdest.

Multiply
mult rs, rt         0      rs     rt     0      0x18
                    6      5      5      10     6

Unsigned multiply
multu rs, rt        0      rs     rt     0      0x19
                    6      5      5      10     6

Multiply registers rs and rt. Leave the low-order word of the product in register lo and the high-order word in register hi.

Multiply (without overflow)
mul rd, rs, rt      0x1c   rs     rt     rd     0      2
                    6      5      5      5      5      6

Put the low-order 32 bits of the product of rs and rt into register rd.

Multiply (with overflow)
mulo rdest, rsrc1, src2     pseudoinstruction

Unsigned multiply (with overflow)
mulou rdest, rsrc1, src2    pseudoinstruction

Put the low-order 32 bits of the product of register rsrc1 and src2 into register rdest.
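The practical difference between mult and mul, in a short sketch (register choices are arbitrary):

        mult  $t0, $t1           # full 64-bit product left in hi:lo
        mflo  $t2                # low-order 32 bits
        mfhi  $t3                # high-order 32 bits
        mul   $t4, $t0, $t1      # same low-order 32 bits, written directly to $t4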
Multiply add
madd rs, rt         0x1c   rs     rt     0      0
                    6      5      5      10     6

Unsigned multiply add
maddu rs, rt        0x1c   rs     rt     0      1
                    6      5      5      10     6

Multiply registers rs and rt and add the resulting 64-bit product to the 64-bit value in the concatenated registers lo and hi.

Multiply subtract
msub rs, rt         0x1c   rs     rt     0      4
                    6      5      5      10     6

Unsigned multiply subtract
msubu rs, rt        0x1c   rs     rt     0      5
                    6      5      5      10     6

Multiply registers rs and rt and subtract the resulting 64-bit product from the 64-bit value in the concatenated registers lo and hi.

Negate value (with overflow)
neg rdest, rsrc     pseudoinstruction

Negate value (without overflow)
negu rdest, rsrc    pseudoinstruction

Put the negative of register rsrc into register rdest.

NOR
nor rd, rs, rt      0      rs     rt     rd     0      0x27
                    6      5      5      5      5      6

Put the logical NOR of registers rs and rt into register rd.
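madd makes multiply-accumulate sequences natural. A sketch that sums two products into hi:lo, first clearing the accumulator with mtlo and mthi (which move a register into lo and hi):

        mtlo  $zero              # clear the 64-bit accumulator
        mthi  $zero
        madd  $t0, $t1           # hi:lo += $t0 * $t1
        madd  $t2, $t3           # hi:lo += $t2 * $t3
        mflo  $v0                # low word of the accumulated sum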
NOT
not rdest, rsrc     pseudoinstruction

Put the bitwise logical negation of register rsrc into register rdest.

OR
or rd, rs, rt       0      rs     rt     rd     0      0x25
                    6      5      5      5      5      6

Put the logical OR of registers rs and rt into register rd.

OR immediate
ori rt, rs, imm     0xd    rs     rt     imm
                    6      5      5      16

Put the logical OR of register rs and the zero-extended immediate into register rt.

Remainder
rem rdest, rsrc1, rsrc2     pseudoinstruction

Unsigned remainder
remu rdest, rsrc1, rsrc2    pseudoinstruction

Put the remainder of register rsrc1 divided by register rsrc2 into register rdest. Note that if an operand is negative, the remainder is unspecified by the MIPS architecture and depends on the convention of the machine on which SPIM is run.

Shift left logical
sll rd, rt, shamt   0      rs     rt     rd     shamt  0
                    6      5      5      5      5      6

Shift left logical variable
sllv rd, rt, rs     0      rs     rt     rd     0      4
                    6      5      5      5      5      6
Shift right arithmetic
sra rd, rt, shamt   0      rs     rt     rd     shamt  3
                    6      5      5      5      5      6

Shift right arithmetic variable
srav rd, rt, rs     0      rs     rt     rd     0      7
                    6      5      5      5      5      6

Shift right logical
srl rd, rt, shamt   0      rs     rt     rd     shamt  2
                    6      5      5      5      5      6

Shift right logical variable
srlv rd, rt, rs     0      rs     rt     rd     0      6
                    6      5      5      5      5      6

Shift register rt left (right) by the distance indicated by immediate shamt or the register rs and put the result in register rd. Note that argument rs is ignored for sll, sra, and srl.

Rotate left
rol rdest, rsrc1, rsrc2     pseudoinstruction

Rotate right
ror rdest, rsrc1, rsrc2     pseudoinstruction

Rotate register rsrc1 left (right) by the distance indicated by rsrc2 and put the result in register rdest.

Subtract (with overflow)
sub rd, rs, rt      0      rs     rt     rd     0      0x22
                    6      5      5      5      5      6
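The R2000 has no rotate instruction, so the assembler synthesizes ror from shifts. One plausible expansion of ror $t1, $t0, $t2, using the assembler-reserved temporary $at, is sketched below; SPIM's actual sequence may differ. It relies on the variable shifts using only the low 5 bits of the count, so -n is equivalent to 32 - n modulo 32.

        srlv  $t1, $t0, $t2      # x >> n (only the low 5 bits of $t2 matter)
        subu  $at, $zero, $t2    # -n, which equals 32 - n modulo 32
        sllv  $at, $t0, $at      # x << (32 - n)
        or    $t1, $t1, $at      # combine the two pieces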
Subtract (without overflow)
subu rd, rs, rt     0      rs     rt     rd     0      0x23
                    6      5      5      5      5      6

Put the difference of registers rs and rt into register rd.

Exclusive OR
xor rd, rs, rt      0      rs     rt     rd     0      0x26
                    6      5      5      5      5      6

Put the logical XOR of registers rs and rt into register rd.

XOR immediate
xori rt, rs, imm    0xe    rs     rt     imm
                    6      5      5      16

Put the logical XOR of register rs and the zero-extended immediate into register rt.

Constant-Manipulating Instructions

Load upper immediate
lui rt, imm         0xf    0      rt     imm
                    6      5      5      16

Load the lower halfword of the immediate imm into the upper halfword of register rt. The lower bits of the register are set to 0.

Load immediate
li rdest, imm       pseudoinstruction

Move the immediate imm into register rdest.

Comparison Instructions

Set less than
slt rd, rs, rt      0      rs     rt     rd     0      0x2a
                    6      5      5      5      5      6
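Building a full 32-bit constant is the classic use of lui, and is roughly how li is expanded when its immediate does not fit in 16 bits; a sketch:

        lui   $t0, 0x1234        # $t0 = 0x12340000
        ori   $t0, $t0, 0x5678   # $t0 = 0x12345678 (ori zero-extends)
        li    $t0, 0x12345678    # or simply let the assembler do it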
Set less than unsigned
sltu rd, rs, rt     0      rs     rt     rd     0      0x2b
                    6      5      5      5      5      6

Set register rd to 1 if register rs is less than rt, and to 0 otherwise.

Set less than immediate
slti rt, rs, imm    0xa    rs     rt     imm
                    6      5      5      16

Set less than unsigned immediate
sltiu rt, rs, imm   0xb    rs     rt     imm
                    6      5      5      16

Set register rt to 1 if register rs is less than the sign-extended immediate, and to 0 otherwise.

Set equal
seq rdest, rsrc1, rsrc2     pseudoinstruction

Set register rdest to 1 if register rsrc1 equals rsrc2, and to 0 otherwise.

Set greater than equal
sge rdest, rsrc1, rsrc2     pseudoinstruction

Set greater than equal unsigned
sgeu rdest, rsrc1, rsrc2    pseudoinstruction

Set register rdest to 1 if register rsrc1 is greater than or equal to rsrc2, and to 0 otherwise.

Set greater than
sgt rdest, rsrc1, rsrc2     pseudoinstruction
Set greater than unsigned
sgtu rdest, rsrc1, rsrc2    pseudoinstruction

Set register rdest to 1 if register rsrc1 is greater than rsrc2, and to 0 otherwise.

Set less than equal
sle rdest, rsrc1, rsrc2     pseudoinstruction

Set less than equal unsigned
sleu rdest, rsrc1, rsrc2    pseudoinstruction

Set register rdest to 1 if register rsrc1 is less than or equal to rsrc2, and to 0 otherwise.

Set not equal
sne rdest, rsrc1, rsrc2     pseudoinstruction

Set register rdest to 1 if register rsrc1 is not equal to rsrc2, and to 0 otherwise.

Branch Instructions

Branch instructions use a signed 16-bit instruction offset field; hence, they can jump 2^15 − 1 instructions (not bytes) forward or 2^15 instructions backward. The jump instruction contains a 26-bit address field. In actual MIPS processors, branch instructions are delayed branches, which do not transfer control until the instruction following the branch (its "delay slot") has executed (see Chapter 4). Delayed branches affect the offset calculation, since it must be computed relative to the address of the delay slot instruction (PC + 4), which is when the branch occurs. SPIM does not simulate this delay slot unless the -bare or -delayed_branch flags are specified.

In assembly code, offsets are not usually specified as numbers. Instead, instructions branch to a label, and the assembler computes the distance between the branch and the target instructions.

In MIPS-32, all actual (not pseudo) conditional branch instructions have a "likely" variant (for example, beq's likely variant is beql), which does not execute
the instruction in the branch's delay slot if the branch is not taken. Do not use these instructions; they may be removed in subsequent versions of the architecture. SPIM implements these instructions, but they are not described further.

Branch instruction
b label             pseudoinstruction

Unconditionally branch to the instruction at the label.

Branch coprocessor false
bc1f cc label       0x11   8      cc     0      Offset
                    6      5      3      2      16

Branch coprocessor true
bc1t cc label       0x11   8      cc     1      Offset
                    6      5      3      2      16

Conditionally branch the number of instructions specified by the offset if the floating-point coprocessor's condition flag numbered cc is false (true). If cc is omitted from the instruction, condition code flag 0 is assumed.

Branch on equal
beq rs, rt, label   4      rs     rt     Offset
                    6      5      5      16

Conditionally branch the number of instructions specified by the offset if register rs equals rt.

Branch on greater than equal zero
bgez rs, label      1      rs     1      Offset
                    6      5      5      16

Conditionally branch the number of instructions specified by the offset if register rs is greater than or equal to 0.
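The comparison and branch instructions combine to implement richer conditions. For example, "branch if $t0 < $t1 (signed)" is written with slt and bne, which is roughly what the blt pseudoinstruction described below expands to:

        slt   $at, $t0, $t1      # $at = 1 if $t0 < $t1
        bne   $at, $zero, target # taken when the comparison succeeded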
Branch on greater than equal zero and link
bgezal rs, label    1      rs     0x11   Offset
                    6      5      5      16

Conditionally branch the number of instructions specified by the offset if register rs is greater than or equal to 0. Save the address of the next instruction in register 31.

Branch on greater than zero
bgtz rs, label      7      rs     0      Offset
                    6      5      5      16

Conditionally branch the number of instructions specified by the offset if register rs is greater than 0.

Branch on less than equal zero
blez rs, label      6      rs     0      Offset
                    6      5      5      16

Conditionally branch the number of instructions specified by the offset if register rs is less than or equal to 0.

Branch on less than and link
bltzal rs, label    1      rs     0x10   Offset
                    6      5      5      16

Conditionally branch the number of instructions specified by the offset if register rs is less than 0. Save the address of the next instruction in register 31.

Branch on less than zero
bltz rs, label      1      rs     0      Offset
                    6      5      5      16

Conditionally branch the number of instructions specified by the offset if register rs is less than 0.
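A typical use of these compare-against-zero branches is loop control; a minimal countdown loop (the register choice is illustrative):

        li    $t0, 10            # loop counter
loop:   addi  $t0, $t0, -1       # loop body would go here
        bgtz  $t0, loop          # repeat while $t0 > 0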
Branch on not equal
bne rs, rt, label   5      rs     rt     Offset
                    6      5      5      16

Conditionally branch the number of instructions specified by the offset if register rs is not equal to rt.

Branch on equal zero
beqz rsrc, label    pseudoinstruction

Conditionally branch to the instruction at the label if rsrc equals 0.

Branch on greater than equal
bge rsrc1, rsrc2, label     pseudoinstruction

Branch on greater than equal unsigned
bgeu rsrc1, rsrc2, label    pseudoinstruction

Conditionally branch to the instruction at the label if register rsrc1 is greater than or equal to rsrc2.

Branch on greater than
bgt rsrc1, src2, label      pseudoinstruction

Branch on greater than unsigned
bgtu rsrc1, src2, label     pseudoinstruction

Conditionally branch to the instruction at the label if register rsrc1 is greater than src2.

Branch on less than equal
ble rsrc1, src2, label      pseudoinstruction
Branch on less than equal unsigned
bleu rsrc1, src2, label     pseudoinstruction

Conditionally branch to the instruction at the label if register rsrc1 is less than or equal to src2.

Branch on less than
blt rsrc1, rsrc2, label     pseudoinstruction

Branch on less than unsigned
bltu rsrc1, rsrc2, label    pseudoinstruction

Conditionally branch to the instruction at the label if register rsrc1 is less than rsrc2.

Branch on not equal zero
bnez rsrc, label    pseudoinstruction

Conditionally branch to the instruction at the label if register rsrc is not equal to 0.

Jump Instructions

Jump
j target            2      target
                    6      26

Unconditionally jump to the instruction at target.

Jump and link
jal target          3      target
                    6      26

Unconditionally jump to the instruction at target. Save the address of the next instruction in register $ra.
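jal and jr together implement procedure call and return; a minimal sketch (the procedure name and registers are illustrative):

main:   li    $a0, 21
        jal   double             # call; $ra gets the address of the next instruction
        move  $t0, $v0           # execution resumes here with $t0 = 42
        b     done               # skip over the procedure body

double: add   $v0, $a0, $a0     # leaf procedure: return $a0 + $a0
        jr    $ra               # jump back through $ra
done: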
Jump and link register
jalr rs, rd         0      rs     0      rd     0      9
                    6      5      5      5      5      6

Unconditionally jump to the instruction whose address is in register rs. Save the address of the next instruction in register rd (which defaults to 31).

Jump register
jr rs               0      rs     0      8
                    6      5      15     6

Unconditionally jump to the instruction whose address is in register rs.

Trap Instructions

Trap if equal
teq rs, rt          0      rs     rt     0      0x34
                    6      5      5      10     6

If register rs is equal to register rt, raise a Trap exception.

Trap if equal immediate
teqi rs, imm        1      rs     0xc    imm
                    6      5      5      16

If register rs is equal to the sign-extended value imm, raise a Trap exception.

Trap if not equal
tne rs, rt          0      rs     rt     0      0x36
                    6      5      5      10     6

If register rs is not equal to register rt, raise a Trap exception.

Trap if not equal immediate
tnei rs, imm        1      rs     0xe    imm
                    6      5      5      16

If register rs is not equal to the sign-extended value imm, raise a Trap exception.
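Trap instructions are commonly emitted as inline checks, for instance guarding a division against a zero divisor; a sketch:

        teq   $t1, $zero         # trap if the divisor is zero
        div   $t0, $t1           # only reached when $t1 is nonzero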
Trap if greater equal
tge rs, rt          0      rs     rt     0      0x30
                    6      5      5      10     6

Unsigned trap if greater equal
tgeu rs, rt         0      rs     rt     0      0x31
                    6      5      5      10     6

If register rs is greater than or equal to register rt, raise a Trap exception.

Trap if greater equal immediate
tgei rs, imm        1      rs     8      imm
                    6      5      5      16

Unsigned trap if greater equal immediate
tgeiu rs, imm       1      rs     9      imm
                    6      5      5      16

If register rs is greater than or equal to the sign-extended value imm, raise a Trap exception.

Trap if less than
tlt rs, rt          0      rs     rt     0      0x32
                    6      5      5      10     6

Unsigned trap if less than
tltu rs, rt         0      rs     rt     0      0x33
                    6      5      5      10     6

If register rs is less than register rt, raise a Trap exception.

Trap if less than immediate
tlti rs, imm        1      rs     0xa    imm
                    6      5      5      16
Unsigned trap if less than immediate
tltiu rs, imm       1      rs     0xb    imm
                    6      5      5      16

If register rs is less than the sign-extended value imm, raise a Trap exception.

Load Instructions

Load address
la rdest, address   pseudoinstruction

Load computed address (not the contents of the location) into register rdest.

Load byte
lb rt, address      0x20   rs     rt     Offset
                    6      5      5      16

Load unsigned byte
lbu rt, address     0x24   rs     rt     Offset
                    6      5      5      16

Load the byte at address into register rt. The byte is sign-extended by lb, but not by lbu.

Load halfword
lh rt, address      0x21   rs     rt     Offset
                    6      5      5      16

Load unsigned halfword
lhu rt, address     0x25   rs     rt     Offset
                    6      5      5      16

Load the 16-bit quantity (halfword) at address into register rt. The halfword is sign-extended by lh, but not by lhu.
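la pairs naturally with the byte loads when walking through memory, and makes the sign-extension difference between lb and lbu visible; a sketch (the label and data are illustrative):

        .data
str:    .byte 0x41, 0xFF         # two example bytes
        .text
        la    $t0, str           # $t0 = address of str
        lb    $t1, 1($t0)        # $t1 = 0xFFFFFFFF (sign-extended)
        lbu   $t2, 1($t0)        # $t2 = 0x000000FF (zero-extended)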