[ { "section": "Chapter 1 ", "content": "RISC-V (pronounced \"risk-five\") is a new instruction-set architecture (ISA) that was originally designed to support computer architecture research and education, but which we now hope will also become a standard free and open architecture for industry implementations. Our goals in defining RISC-V include: a completely open ISA that is freely available to academia and industry; a real ISA suitable for direct native hardware implementation, not just simulation or binary translation; an ISA that avoids over-architecting for a particular microarchitecture style (e.g., microcoded, in-order, decoupled, out-of-order) or implementation technology (e.g., full-custom, ASIC, FPGA), but which allows efficient implementation in any of these; an ISA separated into a small base integer ISA, usable by itself as a base for customized accelerators or for educational purposes, and optional standard extensions to support general-purpose software development; support for the revised 2008 IEEE-754 floating-point standard [7]; an ISA supporting extensive ISA extensions and specialized variants; both 32-bit and 64-bit address space variants for applications, operating system kernels, and hardware implementations; an ISA with support for highly parallel multicore or manycore implementations, including heterogeneous multiprocessors; optional variable-length instructions to both expand available instruction encoding space and to support an optional dense instruction encoding for improved performance, static code size, and energy efficiency; a fully virtualizable ISA to ease hypervisor development; and an ISA that simplifies experiments with new privileged architecture designs. Commentary on our design decisions is formatted as in this paragraph; this non-normative text can be skipped if the reader is only interested in the specification itself. The name RISC-V was chosen to represent the fifth major RISC ISA design from UC Berkeley (RISC-I [15], RISC-II [8], SOAR [21], and SPUR [11] were the first four). We also pun on the use of the Roman numeral V to signify variations and vectors, as support for a range of architecture research, including various data-parallel accelerators, was an explicit goal of the ISA design. The RISC-V ISA is defined avoiding implementation details as much as possible (although commentary is included on implementation-driven decisions), and should be read as the software-visible interface to a wide variety of implementations rather than as the design of a particular hardware artifact. The RISC-V manual is structured in two volumes. This volume covers the design of the base unprivileged instructions, including optional unprivileged ISA extensions. Unprivileged instructions are those that are generally usable in all privilege modes in all privileged architectures, though behavior might vary depending on privilege mode and privilege architecture. The second volume provides the design of the first (classic) privileged architecture. The manuals use IEC 80000-13:2008 conventions, with a byte of 8 bits. In the unprivileged ISA design, we tried to remove any dependence on particular microarchitectural features, such as cache line size, or on privileged architecture details, such as page translation. This is both for simplicity and to allow maximum flexibility for alternative microarchitectures or alternative privileged architectures.", "url": "RV32ISPEC.pdf#segment0", "timestamp": "2023-09-18 14:50:12", "segment": "segment0", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "1.1 RISC-V Hardware Platform Terminology ", "content": "A RISC-V hardware platform can contain one or more RISC-V-compatible processing cores together with other non-RISC-V-compatible cores, fixed-function accelerators, various physical memory structures, I/O devices, and an interconnect structure to allow the components to communicate. A component is termed a core if it contains an independent instruction fetch unit. A RISC-V-compatible core might support multiple RISC-V-compatible hardware threads, or harts, through multithreading. A RISC-V core might have additional specialized instruction-set extensions or an added coprocessor. We use the term coprocessor to refer to a unit that is attached to a RISC-V core and is mostly sequenced by a RISC-V instruction stream, but which contains additional architectural state and instruction-set extensions, and possibly some limited autonomy relative to the primary RISC-V instruction stream. We use the term accelerator to refer to either a non-programmable fixed-function unit or a core that can operate autonomously but is specialized for certain tasks. In RISC-V systems, we expect many programmable accelerators will be RISC-V-based cores with specialized instruction-set extensions and/or customized coprocessors. An important class of RISC-V accelerators are I/O accelerators, which offload I/O processing tasks from the main application cores.
The system-level organization of a RISC-V hardware platform can range from a single-core microcontroller to a many-thousand-node cluster of shared-memory manycore server nodes. Even small systems-on-a-chip might be structured as a hierarchy of multicomputers and/or multiprocessors to modularize development effort or to provide secure isolation between subsystems.", "url": "RV32ISPEC.pdf#segment1", "timestamp": "2023-09-18 14:50:12", "segment": "segment1", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "1.2 RISC-V Software Execution Environments and Harts ", "content": "The behavior of a RISC-V program depends on the execution environment in which it runs. A RISC-V execution environment interface (EEI) defines the initial state of the program, the number and type of harts in the environment including the privilege modes supported by the harts, the accessibility and attributes of memory and I/O regions, the behavior of all legal instructions executed on each hart (i.e., the ISA is one component of the EEI), and the handling of any interrupts or exceptions raised during execution, including environment calls. Examples of EEIs include the Linux application binary interface (ABI) and the RISC-V supervisor binary interface (SBI). The implementation of a RISC-V execution environment can be pure hardware, pure software, or a combination of hardware and software. For example, opcode traps and software emulation can be used to implement functionality not provided in hardware. Examples of execution environment implementations include: bare-metal hardware platforms, where harts are directly implemented by physical processor threads and instructions have full access to the physical address space (the hardware platform defines an execution environment that begins at power-on reset); RISC-V operating systems, which provide multiple user-level execution environments by multiplexing user-level harts onto available physical processor threads and by controlling access to memory via virtual memory; RISC-V hypervisors, which provide multiple supervisor-level execution environments for guest operating systems; and RISC-V emulators, such as Spike, QEMU, or rv8, which emulate RISC-V harts on an underlying x86 system and can provide either a user-level or a supervisor-level execution environment. A bare hardware platform can be considered to define an EEI, where the accessible harts, memory, and other devices populate the environment, and the initial state is that at power-on reset.
Generally, most software is designed to use a more abstract interface to the hardware, as more abstract EEIs provide greater portability across different hardware platforms. Often EEIs are layered on top of one another, where one higher-level EEI uses another lower-level EEI. From the perspective of software running in a given execution environment, a hart is a resource that autonomously fetches and executes RISC-V instructions within that execution environment. In this respect, a hart behaves like a hardware thread resource even if time-multiplexed onto real hardware by the execution environment. Some EEIs support the creation and destruction of additional harts, for example, via environment calls to fork new harts. The execution environment is responsible for ensuring the eventual forward progress of each of its harts. For a given hart, that responsibility is suspended while the hart is exercising a mechanism that explicitly waits for an event, such as the wait-for-interrupt instruction defined in Volume II of this specification, and that responsibility ends if the hart is terminated. The following events constitute forward progress: the retirement of an instruction; a trap, as defined in Section 1.6; and any other event defined by an extension to constitute forward progress. The term hart was introduced in the work on Lithe [13, 14] to provide a term to represent an abstract execution resource as opposed to a software thread programming abstraction. The important distinction between a hardware thread (hart) and a software thread context is that the software running inside an execution environment is not responsible for causing progress of each of its harts; that is the responsibility of the outer execution environment. So the environment's harts operate like hardware threads from the perspective of the software inside the execution environment. An execution environment implementation might time-multiplex a set of guest harts onto fewer host harts provided by its own execution environment, but must do so in a way that guest harts operate like independent hardware threads. In particular, if there are more guest harts than host harts, the execution environment must be able to preempt the guest harts and must not wait indefinitely for guest software on a guest hart to yield control of the guest hart.", "url": "RV32ISPEC.pdf#segment2", "timestamp": "2023-09-18 14:50:12", "segment": "segment2", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "1.3 RISC-V ISA Overview ", "content": "The RISC-V ISA is defined as a base integer ISA, which must be present in any implementation,
plus optional extensions to the base ISA. The base integer ISAs are very similar to those of the early RISC processors, except with no branch delay slots and with support for optional variable-length instruction encodings. A base is carefully restricted to a minimal set of instructions sufficient to provide a reasonable target for compilers, assemblers, linkers, and operating systems (with additional privileged operations), and so provides a convenient ISA and software toolchain skeleton around which more customized processor ISAs can be built. Although it is convenient to speak of the RISC-V ISA, RISC-V is actually a family of related ISAs, of which there are currently four base ISAs. Each base integer instruction set is characterized by the width of the integer registers and the corresponding size of the address space, and by the number of integer registers. There are two primary base integer variants, RV32I and RV64I, described in Chapters 2 and 5, which provide 32-bit and 64-bit address spaces respectively. We use the term XLEN to refer to the width of an integer register in bits (either 32 or 64). Chapter 4 describes the RV32E subset variant of the RV32I base instruction set, which has been added to support small microcontrollers and which has half the number of integer registers. Chapter 6 sketches a future RV128I variant of the base integer instruction set supporting a flat 128-bit address space (XLEN=128). The base integer instruction sets use a two's-complement representation for signed integer values. Although 64-bit address spaces are a requirement for larger systems, we believe 32-bit address spaces will remain adequate for many embedded and client devices for decades to come, and will be desirable to lower memory traffic and energy consumption. In addition, 32-bit address spaces are sufficient for educational purposes. A larger flat 128-bit address space might eventually be required, so we ensured this could be accommodated within the RISC-V ISA framework. The four base ISAs in RISC-V are treated as distinct base ISAs. A common question is why there is not a single ISA, and in particular, why RV32I is not a strict subset of RV64I. Some earlier ISA designs (SPARC, MIPS) adopted a strict superset policy when increasing address space size to support running existing 32-bit binaries on new 64-bit hardware. The main advantage of explicitly separating base ISAs is that each base ISA can be optimized for its needs without requiring support for all the operations needed for other base ISAs. For example, RV64I can omit instructions and CSRs that are only needed to cope with the narrower registers in RV32I, and the RV32I variants can use encoding space otherwise reserved for instructions only required by the wider address-space variants. The main disadvantage of not treating the designs as a single ISA is that it complicates the hardware needed to emulate one base ISA on another (e.g., RV32I on RV64I). However, differences in addressing and illegal-instruction traps generally mean that a mode switch would be required in hardware in any case, even with full superset instruction encodings, and the different RISC-V base ISAs are similar enough that supporting multiple versions is relatively low-cost. Although some have proposed that a strict superset design would allow legacy 32-bit libraries to be linked with 64-bit code, this is impractical in practice, even with compatible encodings, due to the differences in software calling conventions and system-call interfaces. The RISC-V privileged architecture provides fields in misa to control the unprivileged ISA at each level to support emulating different base ISAs on the same hardware. We note that newer SPARC and MIPS ISA revisions have deprecated support for running 32-bit code unchanged on 64-bit systems. A related question is why there is a different encoding for 32-bit adds in RV32I (ADD) and RV64I (ADDW). The ADDW opcode could have been used for 32-bit adds in RV32I and an ADDD for 64-bit adds in RV64I, instead of the existing design, which uses the same opcode ADD for 32-bit adds in RV32I and 64-bit adds in RV64I, with a different opcode ADDW for 32-bit adds in RV64I. This would also have been more consistent with the use of the same LW opcode for 32-bit load in both RV32I and RV64I. The very first versions of the RISC-V ISA did have a variant of this alternate design, but the RISC-V design was changed to the current choice in January 2011. Our focus was on supporting 32-bit integers in the 64-bit ISA, not on providing compatibility with the 32-bit ISA, and the motivation was to remove the asymmetry that arose from not all opcodes in RV32I having a W suffix (e.g., ADDW, but AND not ANDW). In hindsight, this was perhaps not well justified and was a consequence of designing both ISAs at the same time as opposed to adding one later to sit on top of another, and also of a belief that we had to fold platform requirements into the ISA spec, which would imply all RV32I instructions would be required in RV64I. It is too late to change the encoding now, but this is also of little practical consequence for the reasons stated above. It has been noted that we could enable the W variants as an extension to RV32I systems to provide a common encoding across RV64I and a future RV32 variant. RISC-V has been designed to support extensive customization and specialization. The base integer ISA can be extended with one or more optional instruction-set extensions.
We divide each RISC-V instruction-set encoding space (and related encoding spaces such as the CSRs) into three disjoint categories: standard, reserved, and custom. Standard encodings are defined by the Foundation and shall not conflict with other standard extensions for the same base ISA. Reserved encodings are currently not defined but are saved for future standard extensions. We use the term non-standard to describe an extension that is not defined by the Foundation. Custom encodings shall never be used by standard extensions and are made available for vendor-specific non-standard extensions. We use the term non-conforming to describe a non-standard extension that uses either a standard or a reserved encoding (i.e., custom extensions are not non-conforming). Non-standard instruction-set extensions are generally not shared but may provide slightly different functionality depending on the base ISA. Chapter 26 describes various ways of extending the RISC-V ISA. We have also developed a naming convention for RISC-V base instructions and instruction-set extensions, described in detail in Chapter 27. To support more general software development, a set of standard extensions are defined to provide integer multiply/divide, atomic operations, and single- and double-precision floating-point arithmetic. The base integer ISA is named I (prefixed by RV32 or RV64 depending on integer register width) and contains integer computational instructions, integer loads, integer stores, and control-flow instructions. The standard integer multiplication and division extension, named M, adds instructions to multiply and divide values held in the integer registers. The standard atomic instruction extension, denoted by A, adds instructions that atomically read, modify, and write memory for inter-processor synchronization. The standard single-precision floating-point extension, denoted by F, adds floating-point registers, single-precision computational instructions, and single-precision loads and stores. The standard double-precision floating-point extension, denoted by D, expands the floating-point registers and adds double-precision computational instructions, loads, and stores. The standard C compressed instruction extension provides narrower 16-bit forms of common instructions. Beyond the base integer ISA and the standard GC extensions, we believe it is rare that a new instruction will provide a significant benefit for all applications, although it may be very beneficial for a certain domain. As energy efficiency concerns are forcing greater specialization, we believe it is important to simplify the required portion of an ISA specification. Whereas other architectures usually treat their ISA as a single entity, which changes to a new version as instructions are added over time, RISC-V will endeavor to keep the base and each standard extension constant over time, and instead layer new instructions as further optional extensions. For example, the base integer ISAs will continue as fully supported standalone ISAs, regardless of any subsequent extensions.", "url": "RV32ISPEC.pdf#segment3", "timestamp": "2023-09-18 14:50:13", "segment": "segment3", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "1.4 Memory ", "content": "A RISC-V hart has a single byte-addressable address space of 2^XLEN bytes for all memory accesses. A word of memory is defined as 32 bits (4 bytes). Correspondingly, a halfword is 16 bits (2 bytes), a doubleword is 64 bits (8 bytes), and a quadword is 128 bits (16 bytes). The memory address space is circular, so that the byte at address 2^XLEN - 1 is adjacent to the byte at address zero. Accordingly, memory address computations done by the hardware ignore overflow and instead wrap around modulo 2^XLEN. The execution environment determines the mapping of hardware resources into a hart's address space. Different address ranges of a hart's address space may (1) be vacant, or (2) contain main memory, or (3) contain one or more I/O devices. Reads and writes of I/O devices may have visible side effects, but accesses to main memory cannot. Although it is possible for the execution environment to call everything in a hart's address space an I/O device, it is usually expected that some portion will be specified as main memory. When a RISC-V platform has multiple harts, the address spaces of any two harts may be entirely the same, or entirely different, or may be partly different but sharing some subset of resources, mapped into the same or different address ranges. For a purely bare-metal environment, all harts may see an identical address space, accessed entirely by physical addresses. However, when the execution environment includes an operating system employing address translation, it is common for each hart to be given a virtual address space that is largely or entirely its own. Executing each RISC-V machine instruction entails one or more memory accesses, subdivided into implicit and explicit accesses. For each instruction executed, an implicit memory read (instruction fetch) is done to obtain the encoded instruction to execute.
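The modular wrap-around of address arithmetic described above can be sketched in a few lines of Python (an illustrative model only; the function name and the choice of XLEN are ours, not the specification's):

```python
XLEN = 32  # example width; a hart's address space is 2**XLEN bytes

def effective_address(base, offset):
    # Address computations ignore overflow and wrap modulo 2**XLEN,
    # so byte address 2**XLEN - 1 is adjacent to byte address zero.
    return (base + offset) % (1 << XLEN)
```

Python's modulo of a negative sum yields the correctly wrapped non-negative address, mirroring the hardware behavior.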
Many RISC-V instructions perform no further memory accesses beyond instruction fetch. Specific load and store instructions perform an explicit read or write of memory at an address determined by the instruction. The execution environment may dictate that instruction execution performs other implicit memory accesses (such as to implement address translation) beyond those documented for the unprivileged ISA. The execution environment determines what portions of the non-vacant address space are accessible for each kind of memory access. For example, the set of locations that can be implicitly read for instruction fetch may or may not overlap with the set of locations that can be explicitly read by a load instruction, and the set of locations that can be explicitly written by a store instruction may be only a subset of the locations that can be read. Ordinarily, if an instruction attempts to access memory at an inaccessible address, an exception is raised for the instruction. Vacant locations in the address space are never accessible. Except when specified otherwise, implicit reads that do not raise an exception and that have no side effects may occur arbitrarily early and speculatively, even before the machine could possibly prove that the read will be needed. For instance, a valid implementation could attempt to read all of main memory at the earliest opportunity, cache as many fetchable (executable) bytes as possible for later instruction fetches, and avoid reading main memory for instruction fetches ever again. To ensure that certain implicit reads are ordered only after writes to the same memory locations, software must execute specific fence or cache-control instructions defined for this purpose (such as the FENCE.I instruction defined in Chapter 3). The memory accesses (implicit or explicit) made by a hart may appear to occur in a different order as perceived by another hart or by any other agent that can access the same memory. This perceived reordering of memory accesses is always constrained, however, by the applicable memory consistency model. The default memory consistency model for RISC-V is the RISC-V Weak Memory Ordering (RVWMO), defined in Chapter 14 and in appendices. Optionally, an implementation may adopt the stronger model of Total Store Ordering, as defined in Chapter 23. The execution environment may also add constraints that further limit the perceived reordering of memory accesses. Since the RVWMO model is the weakest model allowed for any RISC-V implementation, software written for that model is compatible with the actual memory consistency rules of all RISC-V implementations. As with implicit reads, software must execute fence or cache-control instructions to ensure specific ordering of memory accesses beyond the requirements of the assumed memory consistency model and execution environment.", "url": "RV32ISPEC.pdf#segment4", "timestamp": "2023-09-18 14:50:13", "segment": "segment4", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "1.5 Base Instruction-Length Encoding ", "content": "The base RISC-V ISA has fixed-length 32-bit instructions that must be naturally aligned on 32-bit boundaries. However, the standard RISC-V encoding scheme is designed to support ISA extensions with variable-length instructions, where each instruction can be any number of 16-bit instruction parcels in length and parcels are naturally aligned on 16-bit boundaries. The standard compressed ISA extension described in Chapter 16 reduces code size by providing compressed 16-bit instructions and relaxes the alignment constraints to allow all instructions (16 bit and 32 bit) to be aligned on any 16-bit boundary to improve code density. We use the term IALIGN (measured in bits) to refer to the instruction-address alignment constraint the implementation enforces. IALIGN is 32 bits in the base ISA, but some ISA extensions, including the compressed ISA extension, relax IALIGN to 16 bits. IALIGN may not take on any value other than 16 or 32. We use the term ILEN (measured in bits) to refer to the maximum instruction length supported by an implementation, which is always a multiple of IALIGN. For implementations supporting only a base instruction set, ILEN is 32 bits; implementations supporting longer instructions have larger values of ILEN. Figure 1.1 illustrates the standard RISC-V instruction-length encoding convention. All the 32-bit instructions in the base ISA have their lowest two bits set to 11. The optional compressed 16-bit instruction-set extensions have their lowest two bits equal to 00, 01, or 10. A portion of the 32-bit instruction-encoding space has been tentatively allocated for instructions longer than 32 bits, but the entirety of this space is reserved at this time and the following proposal for encoding instructions longer than 32 bits is not considered frozen. Standard instruction-set extensions encoded with more than 32 bits have additional low-order bits set to 1, with the conventions for 48-bit and 64-bit lengths shown in Figure 1.1. Instruction lengths between 80 bits and 176 bits are encoded using a 3-bit field in bits [14:12] giving the number of 16-bit words in addition to the first five 16-bit words. Encodings with bits [14:12] set to 111 are reserved for future longer instruction encodings.
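The length-encoding convention of Figure 1.1 can be expressed as a small decoder over the first 16-bit parcel. This is a sketch of the convention as stated above (the helper name is ours); lengths of 192 bits and up are reserved, so the sketch returns None for them:

```python
def riscv_inst_length_bits(parcel):
    # parcel: the first (lowest-addressed) 16-bit parcel of an instruction.
    if parcel & 0b11 != 0b11:
        return 16                      # compressed: low two bits are 00, 01, or 10
    if parcel & 0b11111 != 0b11111:
        return 32                      # base: bits [1:0] = 11, bits [4:2] != 111
    if parcel & 0b111111 == 0b011111:
        return 48                      # bits [5:0] = 011111
    if parcel & 0b1111111 == 0b0111111:
        return 64                      # bits [6:0] = 0111111
    nnn = (parcel >> 12) & 0b111       # bits [14:12] give extra 16-bit words
    if nnn != 0b111:
        return (5 + nnn) * 16          # 80..176 bits in 16-bit steps
    return None                        # bits [14:12] = 111: reserved
```

Note that only the first parcel is needed, which is exactly why the length-encoding bits are required to appear first in halfword address order.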
Given the code size and energy savings of a compressed format, we wanted to build in support for a compressed format to the ISA encoding scheme rather than adding it as an afterthought, but to allow simpler implementations we did not want to make the compressed format mandatory. We also wanted to optionally allow longer instructions to support experimentation and larger instruction-set extensions. Although our encoding convention required a tighter encoding of the core RISC-V ISA, this has several beneficial effects. An implementation of the standard IMAFD ISA need only hold the most-significant 30 bits in instruction caches (a 6.25% saving). On instruction cache refills, any instructions encountered with either low bit clear should be recoded into illegal 30-bit instructions before storing in the cache to preserve illegal-instruction exception behavior. Perhaps more importantly, by condensing our base ISA into a subset of the 32-bit instruction word, we leave more space available for non-standard and custom extensions. In particular, the base RV32I ISA uses less than 1/8 of the encoding space in the 32-bit instruction word. As described in Chapter 26, an implementation that does not require support for the standard compressed instruction extension can map 3 additional non-conforming 30-bit instruction spaces into the 32-bit fixed-width format, while preserving support for standard >=32-bit instruction-set extensions. Further, if the implementation also does not need instructions longer than 32 bits, it can recover a further four major opcodes for non-conforming extensions. Encodings with bits [15:0] all zeros are defined as illegal instructions. These instructions are considered to be of minimal length: 16 bits if any 16-bit instruction-set extension is present, otherwise 32 bits. The encoding with bits [ILEN-1:0] all ones is also illegal; this instruction is considered to be ILEN bits long. We consider it a feature that any length of instruction containing all zero bits is not legal, as this quickly traps erroneous jumps into zeroed memory regions. Similarly, we also reserve the instruction encoding containing all ones to be an illegal instruction, to catch the other common pattern observed with unprogrammed non-volatile memory devices, disconnected memory buses, or broken memory devices. Software can rely on a naturally aligned 32-bit word containing zero to act as an illegal instruction on all RISC-V implementations, to be used by software where an illegal instruction is explicitly desired. Defining a corresponding known illegal value for all ones is more difficult due to the variable-length encoding: software cannot generally use the illegal value of ILEN bits of all 1s, as software might not know ILEN for the eventual target machine (e.g., if software is compiled into a standard binary library used by many different machines). Defining a 32-bit word of all ones as illegal was also considered, as all machines must support a 32-bit instruction size, but this would require that the instruction-fetch unit on machines with ILEN greater than 32 report an illegal-instruction exception rather than an access-fault exception when such an instruction borders a protection boundary, complicating variable-instruction-length fetch and decode. RISC-V base ISAs have either little-endian or big-endian memory systems, with the privileged architecture further defining bi-endian operation. Instructions are stored in memory as a sequence of 16-bit little-endian parcels, regardless of memory system endianness. Parcels forming one instruction are stored at increasing halfword addresses, with the lowest-addressed parcel holding the lowest-numbered bits in the instruction specification. We originally chose little-endian byte ordering for the RISC-V memory system because little-endian systems are currently dominant commercially (all x86 systems; iOS, Android, and Windows for ARM). A minor point is that we have also found little-endian memory systems to be more natural for hardware designers. However, certain application areas, such as IP networking, operate on big-endian data structures, and certain legacy code bases have been built assuming big-endian processors, so we have defined big-endian and bi-endian variants of RISC-V. We have to fix the order in which instruction parcels are stored in memory, independent of memory system endianness, to ensure that the length-encoding bits always appear first in halfword address order. This allows the length of a variable-length instruction to be quickly determined by an instruction-fetch unit examining only the first few bits of the first 16-bit instruction parcel. We further make the instruction parcels themselves little-endian to decouple the instruction encoding from the memory system endianness altogether. This design benefits both software tooling and bi-endian hardware. Otherwise, for instance, a RISC-V assembler or disassembler would always need to know the intended active endianness, despite the fact that in bi-endian systems the endianness mode might change dynamically during execution. In contrast, by giving instructions a fixed endianness, it is sometimes possible for carefully written software to be endianness-agnostic even in binary form, much like position-independent code.
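Since instruction parcels are little-endian regardless of memory-system endianness, serializing an instruction for storage is mechanical. A minimal Python sketch of this parcel ordering (the helper name is ours, not the specification's):

```python
import struct

def encode_instruction(inst, nbytes=4):
    # Split the instruction into 16-bit parcels, lowest-numbered bits first,
    # and store each parcel little-endian ('<H'), at increasing addresses.
    parcels = [(inst >> (16 * i)) & 0xFFFF for i in range(nbytes // 2)]
    return bytes().join(struct.pack('<H', p) for p in parcels)
```

On a little-endian host this coincides with a plain little-endian word store, which is why the fixed parcel order only becomes visible on big-endian memory systems.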
The choice to have instructions be only little-endian does have consequences, however, for RISC-V software that encodes or decodes machine instructions. Big-endian JIT compilers, for example, must swap the byte order when storing to instruction memory. Once we had decided to fix on a little-endian instruction encoding, this naturally led to placing the length-encoding bits in the LSB positions of the instruction format to avoid breaking up opcode fields.", "url": "RV32ISPEC.pdf#segment5", "timestamp": "2023-09-18 14:50:14", "segment": "segment5", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%201.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "1.6 Exceptions, Traps, and Interrupts ", "content": "We use the term exception to refer to an unusual condition occurring at run time associated with an instruction in the current RISC-V hart. We use the term interrupt to refer to an external asynchronous event that may cause a RISC-V hart to experience an unexpected transfer of control. We use the term trap to refer to the transfer of control to a trap handler caused by either an exception or an interrupt. The instruction descriptions in the following chapters describe conditions that can raise an exception during execution. The general behavior of most RISC-V EEIs is that a trap to some handler occurs when an exception is signaled on an instruction (except for floating-point exceptions, which, in the standard floating-point extensions, do not cause traps). The manner in which interrupts are generated, routed to, and enabled by a hart depends on the EEI. Our use of exception and trap is compatible with that in the IEEE-754 floating-point standard. How traps are handled and made visible to software running on the hart depends on the enclosing execution environment. From the perspective of software running inside an execution environment, traps encountered by a hart at runtime can have four different effects. Contained trap: the trap is visible to, and handled by, software running inside the execution environment. For example, in an EEI providing both supervisor and user mode on harts, an ECALL by a user-mode hart will generally result in a transfer of control to a supervisor-mode handler running on the same hart; similarly, in the same environment, when a hart is interrupted, an interrupt handler will be run in supervisor mode on the hart. Requested trap: the trap is a synchronous exception that is an explicit call to the execution environment requesting an action on behalf of software
inside the execution environment, for example, a system call. In this case, execution may or may not resume on the hart after the requested action is taken by the execution environment; for example, a system call could remove the hart or cause an orderly termination of the entire execution environment. Invisible trap: the trap is handled transparently by the execution environment and execution resumes normally after the trap is handled. Examples include emulating missing instructions, handling non-resident page faults in a demand-paged virtual-memory system, and handling device interrupts for a different job in a multiprogrammed machine. In these cases, the software running inside the execution environment is not aware of the trap (we ignore timing effects in these definitions). Fatal trap: the trap represents a fatal failure and causes the execution environment to terminate execution. Examples include failing a virtual-memory page-protection check and allowing a watchdog timer to expire. Each EEI can define how execution is terminated and reported to the external environment. The EEI defines for each trap whether it is handled precisely, though the recommendation is to maintain preciseness where possible. Contained and requested traps can be observed to be imprecise by software inside the execution environment. Invisible traps, by definition, cannot be observed to be precise or imprecise by software running inside the execution environment. Fatal traps can be observed to be imprecise by software running inside the execution environment, if known-errorful instructions do not cause immediate termination. Because this document describes unprivileged instructions, traps are rarely mentioned. Architectural means to handle contained traps are defined in the privileged architecture manual, along with other features that support richer EEIs. Unprivileged instructions that are defined solely to cause requested traps are documented here. Invisible traps are, by their nature, out of scope for this document. Instruction encodings that are not defined here and not defined by some other means may cause a fatal trap.", "url": "RV32ISPEC.pdf#segment6", "timestamp": "2023-09-18 14:50:14", "segment": "segment6", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%201.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "1.7 UNSPECIFIED Behaviors and Values ", "content": "The architecture fully describes what implementations must do and any constraints on what they may do. In cases where the architecture intentionally does not constrain implementations, the term UNSPECIFIED is explicitly used. The term UNSPECIFIED refers to a behavior or value that is intentionally unconstrained. The definition of these behaviors or values is open to extensions, platform standards, or implementations. Extensions, platform standards, or implementation documentation may provide normative content to further constrain cases that the base architecture defines as UNSPECIFIED. Like the base architecture, extensions should fully describe allowable behavior and values, and use the term UNSPECIFIED for cases that are intentionally unconstrained. These cases may be constrained or defined by other extensions, platform standards, or implementations.", "url": "RV32ISPEC.pdf#segment7", "timestamp": "2023-09-18 14:50:14", "segment": "segment7", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 2 ", "content": "RV32I Base Integer Instruction Set, Version 2.1. This chapter describes version 2.1 of the RV32I base integer instruction set. RV32I was designed to be sufficient to form a compiler target and to support modern operating system environments. The ISA was also designed to reduce the hardware required in a minimal implementation. RV32I contains 40 unique instructions, though a simple implementation might cover the ECALL/EBREAK instructions with a single SYSTEM hardware instruction that always traps, and might be able to implement the FENCE instruction as a NOP, reducing the base instruction count to 38 total. RV32I can emulate almost any other ISA extension (except the A extension, which requires additional hardware support for atomicity). In practice, a hardware implementation including the machine-mode privileged architecture will also require the 6 CSR instructions. Subsets of the base integer ISA might be useful for pedagogical purposes, but the base has been defined such that there should be little incentive to subset a real hardware implementation beyond omitting support for misaligned memory accesses and treating all SYSTEM instructions as a single trap. Most of the commentary on RV32I also applies to the RV64I base.", "url": "RV32ISPEC.pdf#segment8", "timestamp": "2023-09-18 14:50:14", "segment": "segment8", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "2.1 Programmers\u2019 Model for Base Integer ISA ", "content": "Figure 2.1 shows the unprivileged state for the base integer ISA.
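As a concrete illustration of the register state that Figure 2.1 depicts, a minimal Python model (not part of the specification; the class and method names are hypothetical) of the 32 x registers with x0 hardwired to zero:

```python
class RegFile:
    # Sketch of RV32I integer register state: 32 x-registers,
    # with x0 hardwired to zero (writes to it are discarded).
    def __init__(self):
        self.x = [0] * 32

    def read(self, i):
        return self.x[i]

    def write(self, i, value):
        if i != 0:                          # x0 ignores all writes
            self.x[i] = value & 0xFFFFFFFF  # keep the low XLEN=32 bits
```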
For RV32I, the 32 x registers are each 32 bits wide, i.e., XLEN=32. Register x0 is hardwired with all bits equal to 0. The general-purpose registers x1-x31 hold values that various instructions interpret as a collection of Boolean values, or as two's-complement signed binary integers or unsigned binary integers. There is one additional unprivileged register: the program counter pc holds the address of the current instruction. There is no dedicated stack pointer or subroutine return address link register in the base integer ISA; the instruction encoding allows any x register to be used for these purposes. However, the standard software calling convention uses register x1 to hold the return address for a call, with register x5 available as an alternate link register. The standard calling convention uses register x2 as the stack pointer. Hardware might choose to accelerate function calls and returns that use x1 or x5 (see the descriptions of the JAL and JALR instructions). The optional compressed 16-bit instruction format is designed around the assumption that x1 is the return address register and x2 is the stack pointer. Software using other conventions will operate correctly but may have greater code size. The number of available architectural registers can have large impacts on code size, performance, and energy consumption. Although 16 registers would arguably be sufficient for an integer ISA running compiled code, it is impossible to encode a complete ISA with 16 registers in 16-bit instructions using a 3-address format. Although a 2-address format would be possible, it would increase instruction count and lower efficiency. We wanted to avoid intermediate instruction sizes (such as Xtensa's 24-bit instructions) to simplify base hardware implementations, and once a 32-bit instruction size was adopted, it was straightforward to support 32 integer registers. A larger number of integer registers also helps performance on high-performance code, where there can be extensive use of loop unrolling, software pipelining, and cache tiling. For these reasons, we chose a conventional size of 32 integer registers for the base ISA. Dynamic register usage tends to be dominated by a few frequently accessed registers, and regfile implementations can be optimized to reduce access energy for the frequently accessed registers [20]. The optional compressed 16-bit instruction format mostly only accesses 8 registers and hence can provide a dense instruction encoding, while additional instruction-set extensions could support a much larger register space (either flat or hierarchical) if desired. For resource-constrained embedded applications, we have defined the RV32E subset, which has 16 registers (Chapter 4).", "url": "RV32ISPEC.pdf#segment9", "timestamp": "2023-09-18 14:50:14", "segment": "segment9", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%202.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "2.2 Base Instruction Formats ", "content": "In the base RV32I ISA, there are four core instruction formats (R/I/S/U), as shown in Figure 2.2. All are a fixed 32 bits in length and must be aligned on a four-byte boundary in memory. An instruction-address-misaligned exception is generated on a taken branch or unconditional jump if the target address is not four-byte aligned. This exception is reported on the branch or jump instruction, not on the target instruction. No instruction-address-misaligned exception is generated for a conditional branch that is not taken. The alignment constraint for base ISA instructions is relaxed to a two-byte boundary when instruction extensions with 16-bit lengths or other odd multiples of 16-bit lengths are added (i.e., IALIGN=16). Instruction-address-misaligned exceptions are reported on the branch or jump that would cause instruction misalignment to help debugging, and to simplify hardware design for systems with IALIGN=32, where these are the only places where misalignment can occur. The behavior upon decoding a reserved instruction is UNSPECIFIED. Some platforms may require that opcodes reserved for standard use raise an illegal-instruction exception. Other platforms may permit reserved opcode space to be used for non-conforming extensions. The RISC-V ISA keeps the source (rs1 and rs2) and destination (rd) registers at the same position in all formats to simplify decoding. Except for the 5-bit immediates used in CSR instructions (Chapter 9), immediates are always sign-extended, and are generally packed towards the leftmost available bits in the instruction and have been allocated to reduce hardware complexity. In particular, the sign bit for all immediates is always in bit 31 of the instruction to speed sign-extension circuitry. Decoding register specifiers is usually on the critical paths in implementations, and so the instruction format was chosen to keep all register specifiers at the same position in all formats at the expense of having to move immediate bits across formats (a property shared with RISC-IV, aka. SPUR [11]).
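The sign-extension rule above, with the sign bit drawn from the top of the immediate field, can be sketched for the 12-bit I-type immediate (the helper name is ours, not the specification's):

```python
def sign_extend12(imm12):
    # Sign-extend a 12-bit immediate field; bit 11 is the sign bit,
    # as all RISC-V immediates (except CSR immediates) are sign-extended.
    imm12 &= 0xFFF
    return imm12 - 0x1000 if imm12 & 0x800 else imm12
```

In hardware the same effect is achieved by replicating instruction bit 31 across the upper result bits, which is why keeping the sign bit in a fixed position speeds decoding.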
In practice, most immediates are either small or require all XLEN bits. We chose an asymmetric immediate split (12 bits in regular instructions plus a special load-upper-immediate instruction with 20 bits) to increase the opcode space available for regular instructions. Immediates are sign-extended because we did not observe a benefit to using zero-extension for some immediates as in the MIPS ISA and wanted to keep the ISA as simple as possible.

2.3 Immediate Encoding Variants

There are a further two variants of the instruction formats (B/J) based on the handling of immediates, as shown in Figure 2.3.

The only difference between the S and B formats is that the 12-bit immediate field is used to encode branch offsets in multiples of 2 in the B format. Instead of shifting all bits in the instruction-encoded immediate left by one in hardware as is conventionally done, the middle bits (imm[10:1]) and the sign bit stay in fixed positions, while the lowest bit in the S format (inst[7]) encodes a high-order bit in the B format.

Similarly, the only difference between the U and J formats is that the 20-bit immediate is shifted left by 12 bits to form U immediates and by 1 bit to form J immediates. The location of instruction bits in the U and J format immediates is chosen to maximize overlap with the other formats and with each other.

Figure 2.4 shows the immediates produced by each of the base instruction formats, labeled to show which instruction bit (inst[y]) produces each bit of the immediate value.

Sign-extension is one of the most critical operations on immediates (particularly for XLEN>32), and in RISC-V the sign bit for all immediates is always held in bit 31 of the instruction to allow sign-extension to proceed in parallel with instruction decoding.

Although more complex implementations might have separate adders for branch and jump calculations and so would not benefit from keeping the location of immediate bits constant across types of instruction, we wanted to reduce the hardware cost of the simplest implementations. By rotating bits in the instruction encoding of the B and J immediates instead of using dynamic hardware muxes to multiply the immediate by 2, we reduce instruction signal fanout and immediate mux costs by around a factor of 2. The scrambled immediate encoding will add negligible time to static or ahead-of-time compilation.
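The scrambled B-type immediate layout described above (sign bit in inst[31], imm[11] carried in inst[7], imm[0] hardwired to zero) can be made concrete with a small decoding sketch. This is purely illustrative, not part of the specification:

```python
# Decode the B-type branch immediate from a 32-bit RV32I instruction word.
def b_imm(inst):
    imm = ((inst >> 31) & 0x1) << 12    # sign bit always comes from inst[31]
    imm |= ((inst >> 7) & 0x1) << 11    # inst[7] supplies the high-order imm[11]
    imm |= ((inst >> 25) & 0x3F) << 5   # inst[30:25] -> imm[10:5] (fixed position)
    imm |= ((inst >> 8) & 0xF) << 1     # inst[11:8] -> imm[4:1]; imm[0] is always 0
    # Sign-extend the 13-bit immediate.
    return imm - (1 << 13) if imm & (1 << 12) else imm

# All immediate bits set encodes an offset of -2 bytes.
print(b_imm((1 << 31) | (0x3F << 25) | (0xF << 8) | (1 << 7)))  # -2
print(b_imm(1 << 9))                                            # +4
```

Note how no instruction bit needs to move relative to the S format except inst[7]; the hardware multiply-by-2 is achieved entirely by the encoding.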
For dynamically generated instructions there is some small additional overhead, but the most common short forward branches have straightforward immediate encodings.

2.4 Integer Computational Instructions

Most integer computational instructions operate on XLEN bits of values held in the integer register file. Integer computational instructions are either encoded as register-immediate operations using the I-type format or as register-register operations using the R-type format. The destination is register rd for both register-immediate and register-register instructions. No integer computational instructions cause arithmetic exceptions.

We did not include special instruction-set support for overflow checks on integer arithmetic operations in the base instruction set, as many overflow checks can be cheaply implemented using RISC-V branches. Overflow checking for unsigned addition requires only a single additional branch instruction after the addition:

    add t0, t1, t2
    bltu t0, t1, overflow

For signed addition, if one operand's sign is known, overflow checking requires only a single branch after the addition:

    addi t0, t1, +imm
    blt t0, t1, overflow

This covers the common case of addition with an immediate operand. For general signed addition, three additional instructions after the addition are required, leveraging the observation that the sum should be less than one of the operands if and only if the other operand is negative:

    add t0, t1, t2
    slti t3, t2, 0
    slt t4, t0, t1
    bne t3, t4, overflow

In RV64I, checks of 32-bit signed additions can be optimized further by comparing the results of ADD and ADDW on the operands.

Integer Register-Immediate Instructions

ADDI adds the sign-extended 12-bit immediate to register rs1. Arithmetic overflow is ignored and the result is simply the low XLEN bits of the result. ADDI rd, rs1, 0 is used to implement the MV rd, rs1 assembler pseudoinstruction.

SLTI (set less than immediate) places the value 1 in register rd if register rs1 is less than the sign-extended immediate when both are treated as signed numbers, else 0 is written to rd. SLTIU is similar but compares the values as unsigned numbers (i.e.,
the immediate is first sign-extended to XLEN bits and then treated as an unsigned number). Note, SLTIU rd, rs1, 1 sets rd to 1 if rs1 equals zero, otherwise it sets rd to 0 (assembler pseudoinstruction SEQZ rd, rs).

ANDI, ORI, XORI are logical operations that perform bitwise AND, OR, and XOR on register rs1 and the sign-extended 12-bit immediate and place the result in rd. Note, XORI rd, rs1, -1 performs a bitwise logical inversion of register rs1 (assembler pseudoinstruction NOT rd, rs).

Shifts by a constant are encoded as a specialization of the I-type format. The operand to be shifted is in rs1, and the shift amount is encoded in the lower 5 bits of the I-immediate field. The right shift type is encoded in bit 30. SLLI is a logical left shift (zeros are shifted into the lower bits); SRLI is a logical right shift (zeros are shifted into the upper bits); and SRAI is an arithmetic right shift (the original sign bit is copied into the vacated upper bits).

LUI (load upper immediate) is used to build 32-bit constants and uses the U-type format. LUI places the U-immediate value in the top 20 bits of the destination register rd, filling in the lowest 12 bits with zeros.

AUIPC (add upper immediate to pc) is used to build pc-relative addresses and uses the U-type format. AUIPC forms a 32-bit offset from the 20-bit U-immediate, filling in the lowest 12 bits with zeros, adds this offset to the address of the AUIPC instruction, then places the result in register rd.

The AUIPC instruction supports two-instruction sequences to access arbitrary offsets from the PC for both control-flow transfers and data accesses. The combination of an AUIPC and the 12-bit immediate in a JALR can transfer control to any 32-bit PC-relative address, while an AUIPC plus the 12-bit immediate offset in regular load or store instructions can access any 32-bit PC-relative data address. The current PC can be obtained by setting the U-immediate to 0. Although a JAL +4 instruction could also be used to obtain the local PC (of the instruction following the JAL), it might cause pipeline breaks in simpler microarchitectures or pollute BTB structures in more complex microarchitectures.

Integer Register-Register Operations

RV32I defines several arithmetic R-type operations. All operations read the rs1 and rs2 registers as source operands and write the result into register rd. The funct7 and funct3 fields select the type of operation.

ADD performs the addition of rs1 and rs2. SUB performs the subtraction of rs2 from rs1. Overflows are ignored and the low XLEN bits of results are written to the destination rd. SLT and SLTU perform signed and unsigned compares respectively.
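The branch-based overflow checks shown at the start of this section can be mirrored in a short, non-normative Python sketch; the function names are illustrative, and each function follows one of the assembler sequences above line for line:

```python
# Illustrative model of the RV32I branch-based overflow checks (not normative).
XLEN = 32
MASK = (1 << XLEN) - 1

def add_wrap(a, b):
    # Hardware ADD: result is the low XLEN bits; overflow is ignored.
    return (a + b) & MASK

def signed(pattern):
    return pattern - (1 << XLEN) if pattern & (1 << (XLEN - 1)) else pattern

def unsigned_overflow(a, b):
    # Mirrors: add t0, t1, t2 ; bltu t0, t1, overflow
    return add_wrap(a, b) < a

def signed_overflow(a, b):
    # Mirrors: add t0,t1,t2 ; slti t3,t2,0 ; slt t4,t0,t1 ; bne t3,t4,overflow
    t0 = add_wrap(a, b)
    t3 = int(signed(b) < 0)            # is the second operand negative?
    t4 = int(signed(t0) < signed(a))   # did the sum drop below the first operand?
    return t3 != t4                    # overflow iff the two disagree

print(unsigned_overflow(0xFFFFFFFF, 1))   # True: wraps past 2^32 - 1
print(signed_overflow(0x7FFFFFFF, 1))     # True: INT_MAX + 1 overflows
print(signed_overflow(0xFFFFFFFF, 0xFFFFFFFF))  # False: -1 + -1 = -2 is fine
```

The signed check exploits exactly the observation quoted above: the sum is (wrongly) less than one operand if and only if the other operand was non-negative, and vice versa.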
They write 1 to rd if rs1 < rs2, 0 otherwise. Note, SLTU rd, x0, rs2 sets rd to 1 if rs2 is not equal to zero, otherwise it sets rd to zero (assembler pseudoinstruction SNEZ rd, rs). AND, OR, and XOR perform bitwise logical operations.

SLL, SRL, and SRA perform logical left, logical right, and arithmetic right shifts on the value in register rs1 by the shift amount held in the lower 5 bits of register rs2.

NOP Instruction

The NOP instruction does not change any architecturally visible state, except for advancing the pc and incrementing any applicable performance counters. NOP is encoded as ADDI x0, x0, 0.

NOPs can be used to align code segments to microarchitecturally significant address boundaries, or to leave space for inline code modifications. Although there are many possible ways to encode a NOP, we define a canonical NOP encoding to allow microarchitectural optimizations as well as for more readable disassembly output. The other NOP encodings are made available for HINT instructions (Section 2.9).

ADDI was chosen for the NOP encoding as this is most likely to take fewest resources to execute across a range of systems (if not optimized away in decode). In particular, the instruction only reads one register. Also, an ADDI functional unit is more likely to be available in a superscalar design as adds are the most common operation. In particular, address-generation functional units can execute ADDI using the same hardware needed for base+offset address calculations, while register-register ADD or logical/shift operations require additional hardware.

2.5 Control Transfer Instructions

RV32I provides two types of control transfer instructions: unconditional jumps and conditional branches.
Control transfer instructions in RV32I do not have architecturally visible delay slots.

Unconditional Jumps

The jump and link (JAL) instruction uses the J-type format, where the J-immediate encodes a signed offset in multiples of 2 bytes. The offset is sign-extended and added to the address of the jump instruction to form the jump target address. Jumps can therefore target a ±1 MiB range. JAL stores the address of the instruction following the jump (pc+4) into register rd. The standard software calling convention uses x1 as the return address register and x5 as an alternate link register.

The alternate link register supports calling millicode routines (e.g., those to save and restore registers in compressed code) while preserving the regular return address register. The register x5 was chosen as the alternate link register as it maps to a temporary in the standard calling convention, and has an encoding that is only one bit different than the regular link register.

Plain unconditional jumps (assembler pseudoinstruction J) are encoded as a JAL with rd=x0.

The indirect jump instruction JALR (jump and link register) uses the I-type encoding. The target address is obtained by adding the sign-extended 12-bit I-immediate to the register rs1, then setting the least-significant bit of the result to zero. The address of the instruction following the jump (pc+4) is written to register rd. Register x0 can be used as the destination if the result is not required.

The unconditional jump instructions all use PC-relative addressing to help support position-independent code. The JALR instruction was defined to enable a two-instruction sequence to jump anywhere in a 32-bit absolute address range. A LUI instruction can first load rs1 with the upper 20 bits of a target address, then JALR can add in the lower bits. Similarly, AUIPC then JALR can jump anywhere in a 32-bit PC-relative address range.

Note that the JALR instruction does not treat the 12-bit immediate as multiples of 2 bytes, unlike the conditional branch instructions. This avoids one more immediate format in hardware. In practice, most uses of JALR will have either a zero immediate or be paired with a LUI or AUIPC, so the slight reduction in range is not significant.

Clearing the least-significant bit when calculating the JALR target address both simplifies the hardware slightly and allows the low bit of function pointers to be used to store auxiliary information. Although there is potentially a slight loss of error checking in this case, in practice jumps to an incorrect instruction address will usually quickly raise an exception.

When used with a base rs1=x0, JALR can be used to implement a single-
instruction subroutine call to the lowest or highest 2 KiB address region from anywhere in the address space, which could be used to implement fast calls to a small runtime library. Alternatively, an ABI could dedicate a general-purpose register to point to a library elsewhere in the address space.

The JAL and JALR instructions will generate an instruction-address-misaligned exception if the target address is not aligned to a four-byte boundary.

Instruction-address-misaligned exceptions are not possible on machines that support extensions with 16-bit aligned instructions, such as the compressed instruction-set extension, C.

Return-address prediction stacks are a common feature of high-performance instruction-fetch units, but require accurate detection of instructions used for procedure calls and returns to be effective. For RISC-V, hints as to the instructions' usage are encoded implicitly via the register numbers used. A JAL instruction should push the return address onto a return-address stack (RAS) only when rd is x1 or x5. JALR instructions should push/pop a RAS as shown in Table 2.1.

Some other ISAs added explicit hint bits to their indirect-jump instructions to guide return-address stack manipulation. We instead use implicit hinting tied to register numbers and the calling convention to reduce the encoding space used for these hints.

When two different link registers (x1 and x5) are given as rs1 and rd, then the RAS is both popped and pushed to support coroutines. If rs1 and rd are the same link register (either x1 or x5), the RAS is only pushed, to enable macro-op fusion of the sequences: lui ra, imm20; jalr ra, imm12(ra) and auipc ra, imm20; jalr ra, imm12(ra).

Conditional Branches

All branch instructions use the B-type instruction format. The 12-bit B-immediate encodes signed offsets in multiples of 2 bytes. The offset is sign-extended and added to the address of the branch instruction to give the target address. The conditional branch range is ±4 KiB.

Branch instructions compare two registers. BEQ and BNE take the branch if registers rs1 and rs2 are equal or unequal respectively. BLT and BLTU take the branch if rs1 is less than rs2, using signed and unsigned comparison respectively. BGE and BGEU take the branch if rs1 is greater than or equal to rs2, using signed and unsigned comparison respectively. Note, BGT, BGTU, BLE, and BLEU can be synthesized by reversing the operands to BLT, BLTU, BGE, and BGEU, respectively.

Signed array bounds may be checked with a single BLTU instruction, since any negative index will compare greater than any nonnegative bound.

Software should be optimized
such that the sequential code path is the most common path, with less-frequently taken code paths placed out of line. Software should also assume that backward branches will be predicted taken and forward branches as not taken, at least the first time they are encountered. Dynamic predictors should quickly learn any predictable branch behavior.

Unlike some other architectures, the RISC-V jump (JAL with rd=x0) instruction should always be used for unconditional branches instead of a conditional branch instruction with an always-true condition. RISC-V jumps are also PC-relative and support a much wider offset range than branches, and will not pollute conditional-branch prediction tables.

The conditional branches were designed to include arithmetic comparison operations between two registers (as also done in PA-RISC, Xtensa, and MIPS R6), rather than use condition codes (as in x86, ARM, SPARC, PowerPC), or to only compare one register against zero (as in Alpha, MIPS), or two registers only for equality (as in MIPS). This design was motivated by the observation that a combined compare-and-branch instruction fits into a regular pipeline, avoids additional condition code state or use of a temporary register, and reduces static code size and dynamic instruction fetch traffic. Another point is that comparisons against zero require non-trivial circuit delay (especially after the move to static logic in advanced processes) and so are almost as expensive as arithmetic magnitude compares. Another advantage of a fused compare-and-branch instruction is that branches are observed earlier in the front-end instruction stream, and so can be predicted earlier. There is perhaps an advantage to a design with condition codes in the case where multiple branches can be taken based on the same condition codes, but we believe this case to be relatively rare.

We considered but did not include static branch hints in the instruction encoding. These can reduce the pressure on dynamic predictors, but require more instruction encoding space and software profiling for best results, and can result in poor performance if production runs do not match profiling runs.

We considered but did not include conditional moves or predicated instructions, which can effectively replace unpredictable short forward branches. Conditional moves are the simpler of the two, but are difficult to use with conditional code that might cause exceptions (memory accesses and floating-point operations). Predication adds additional flag state to a system, additional instructions to set and clear flags, and additional encoding overhead on every instruction. Both conditional move and predicated instructions add
complexity to out-of-order microarchitectures, adding an implicit third source operand due to the need to copy the original value of the destination architectural register into the renamed destination physical register if the predicate is false. Also, static compile-time decisions to use predication instead of branches can result in lower performance on inputs not included in the compiler training set, especially given that unpredictable branches are rare, and becoming rarer as branch prediction techniques improve.

We note that various microarchitectural techniques exist to dynamically convert unpredictable short forward branches into internally predicated code to avoid the cost of flushing pipelines on a branch mispredict [6, 10, 9], and these have been implemented in commercial processors [17]. The simplest techniques just reduce the penalty of recovering from a mispredicted short forward branch by only flushing instructions in the branch shadow instead of the entire fetch pipeline, or by fetching instructions from both sides using wide instruction fetch or idle instruction fetch slots. More complex techniques for out-of-order cores add internal predicates on instructions in the branch shadow, with the internal predicate value written by the branch instruction, allowing the branch and following instructions to be executed speculatively and out-of-order with respect to other code [17].

The conditional branch instructions will generate an instruction-address-misaligned exception if the target address is not aligned to a four-byte boundary and the branch condition evaluates to true. If the branch condition evaluates to false, the instruction-address-misaligned exception will not be raised. Instruction-address-misaligned exceptions are not possible on machines that support extensions with 16-bit aligned instructions, such as the compressed instruction-set extension, C.
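The bounds-check idiom mentioned earlier in this section, where a single unsigned compare (BLTU) replaces both the negative-index and upper-bound checks, can be sketched in Python. This is illustrative only; the function name is not from the specification:

```python
# Signed array bounds checked with one unsigned compare, as a BLTU would do.
XLEN = 32
MASK = (1 << XLEN) - 1

def in_bounds(index, length):
    # A signed index is valid iff 0 <= index < length. Reinterpreting the
    # index as unsigned folds both checks into one compare: any negative
    # index becomes a huge unsigned value, larger than any non-negative bound.
    return (index & MASK) < length

print(in_bounds(3, 10))    # True
print(in_bounds(-1, 10))   # False: -1 reads as 0xFFFFFFFF unsigned
print(in_bounds(10, 10))   # False: at or past the bound
```

This is why the text says a negative index "will compare greater than any nonnegative bound" under unsigned comparison.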
"riscv-spec-20191213" }, { "section": "2.6 Load and Store Instructions ", "content": "rv32i loadstore architecture load store instructions access memory arithmetic instructions operate cpu registers rv32i provides 32bit address space byteaddressed eei define portions address space legal access instructions eg addresses might read support word access loads destination x0 must still raise exceptions cause side effects even though load value discarded eei define whether memory system littleendian bigendian riscv endian ness byteaddress invariant system endianness byteaddress invariant following property holds byte stored memory address endianness bytesized load address endianness returns stored value littleendian configuration multibyte stores write leastsignificant register byte lowest memory byte address followed register bytes ascending order significance loads similarly transfer contents lesser memory byte addresses lesssignificant register bytes bigendian configuration multibyte stores write mostsignificant register byte lowest memory byte address followed register bytes descending order significance loads similarly transfer contents greater memory byte addresses lesssignificant register bytes load store instructions transfer value registers memory loads encoded itype format stores stype effective address obtained adding register rs1 signextended 12bit offset loads copy value memory register rd stores copy value register rs2 memory lw instruction loads 32bit value memory rd lh loads 16bit value memory signextends 32bits storing rd lhu loads 16bit value memory zero extends 32bits storing rd lb lbu defined analogously 8bit values sw sh sb instructions store 32bit 16bit 8bit values low bits register rs2 memory regardless eei loads stores whose effective addresses naturally aligned shall raise addressmisaligned exception loads stores effective address naturally aligned referenced datatype ie fourbyte boundary 32bit accesses twobyte boundary 16bit accesses behavior 
An EEI may guarantee that misaligned loads and stores are fully supported, in which case software running inside the execution environment will never experience a contained or fatal address-misaligned trap. In this case, misaligned loads and stores can be handled in hardware, or via invisible traps into the execution environment implementation, or possibly a combination of hardware and invisible traps depending on the address.

An EEI may not guarantee that misaligned loads and stores are handled invisibly. In this case, loads and stores that are not naturally aligned may either complete execution successfully or raise an exception. The exception raised can be either an address-misaligned exception or an access-fault exception. For a memory access that would otherwise be able to complete except for the misalignment, an access-fault exception can be raised instead of an address-misaligned exception if the misaligned access should not be emulated, e.g., if accesses to the memory region have side effects.

When an EEI does not guarantee that misaligned loads and stores are handled invisibly, the EEI must define if exceptions caused by address misalignment result in a contained trap (allowing software running inside the execution environment to handle the trap) or a fatal trap (terminating execution).

Misaligned accesses are occasionally required when porting legacy code, and they help performance of applications using any form of packed-SIMD extension or handling externally packed data structures. Our rationale for allowing EEIs to choose to support misaligned accesses via the regular load and store instructions is to simplify the addition of misaligned hardware support. One option would have been to disallow misaligned accesses in the base ISA and then provide some separate ISA support for misaligned accesses, either special instructions to help software handle misaligned accesses and/or a new hardware addressing mode for misaligned accesses. Special instructions are difficult to use, complicate the ISA, and often add new processor state (e.g., SPARC VIS align address offset register) or complicate access to existing processor state (e.g., MIPS LWL/LWR partial register writes). In addition, for loop-oriented packed-SIMD code, the extra overhead when operands are misaligned motivates software to provide multiple forms of loop depending on operand alignment, which complicates code generation and adds to loop startup overhead. New misaligned hardware addressing modes take considerable space
in the instruction encoding or require very simplified addressing modes (e.g., register indirect only).

Even when misaligned loads and stores complete successfully, these accesses might run extremely slowly depending on the implementation (e.g., when implemented via an invisible trap). Furthermore, whereas naturally aligned loads and stores are guaranteed to execute atomically, misaligned loads and stores might not, and hence require additional synchronization to ensure atomicity.

We do not mandate atomicity for misaligned accesses so that execution environment implementations can use an invisible machine trap and a software handler to handle some or all misaligned accesses. If hardware misaligned support is provided, software can exploit this by simply using regular load and store instructions. Hardware can then automatically optimize accesses depending on whether runtime addresses are aligned.

2.7 Memory Ordering Instructions

The FENCE instruction is used to order device I/O and memory accesses as viewed by other RISC-V harts and external devices or coprocessors. Any combination of device input (I), device output (O), memory reads (R), and memory writes (W) may be ordered with respect to any combination of the same. Informally, no other RISC-V hart or external device can observe any operation in the successor set following a FENCE before any operation in the predecessor set preceding the FENCE. Chapter 14 provides a precise description of the RISC-V memory consistency model.

The EEI will define what I/O operations are possible, and in particular, which memory addresses, when accessed by load and store instructions, will be treated and ordered as device input and device output operations respectively rather than memory reads and writes. For example, memory-mapped I/O devices will typically be accessed with uncached loads and stores that are ordered using the I and O bits rather than the R and W bits. Instruction-set extensions might also describe new I/O instructions that will also be ordered using the I and O bits in a FENCE.

The fence mode field fm defines the semantics of the FENCE. A FENCE with fm=0000 orders all memory operations in its predecessor set before all memory operations in its successor set.

The optional FENCE.TSO instruction is encoded as a FENCE
instruction with fm=1000, predecessor=RW, and successor=RW. FENCE.TSO orders all load operations in its predecessor set before all memory operations in its successor set, and all store operations in its predecessor set before all store operations in its successor set. This leaves non-AMO store operations in the FENCE.TSO's predecessor set unordered with non-AMO loads in its successor set.

The FENCE.TSO encoding was added as an optional extension to the original base FENCE instruction encoding. The base definition requires that implementations ignore any set bits and treat the FENCE as global, and so this is a backwards-compatible extension.

The unused fields in the FENCE instructions (rs1 and rd) are reserved for finer-grain fences in future extensions. For forward compatibility, base implementations shall ignore these fields, and standard software shall zero these fields. Likewise, many fm and predecessor/successor set settings in Table 2.2 are also reserved for future use. Base implementations shall treat all such reserved configurations as normal fences with fm=0000, and standard software shall only use non-reserved configurations.

We chose a relaxed memory model to allow high performance from simple machine implementations and from likely future coprocessor or accelerator extensions. We separate out I/O ordering from memory R/W ordering to avoid unnecessary serialization within a device-driver hart and also to support alternative non-memory paths to control added coprocessors or I/O devices. Simple implementations may additionally ignore the predecessor and successor fields and always execute a conservative fence on all operations.

2.8 Environment Call and Breakpoints

SYSTEM instructions are used to access system functionality that might require privileged access and are encoded using the I-type instruction format. These can be divided into two main classes: those that atomically read-modify-write control and status registers (CSRs), and all other potentially privileged instructions. CSR instructions are described in Chapter 9, and the base unprivileged instructions are described in the following section.

The SYSTEM instructions are defined to allow simpler
implementations to always trap to a single software trap handler. More sophisticated implementations might execute more of each SYSTEM instruction in hardware.

These two instructions cause a precise requested trap to the supporting execution environment.

The ECALL instruction is used to make a service request to the execution environment. The EEI will define how parameters for the service request are passed, but usually these will be in defined locations in the integer register file.

The EBREAK instruction is used to return control to a debugging environment.

ECALL and EBREAK were previously named SCALL and SBREAK. The instructions have the same functionality and encoding, but were renamed to reflect that they can be used more generally than to call a supervisor-level operating system or debugger.

EBREAK was primarily designed to be used by a debugger to cause execution to stop and fall back into the debugger. EBREAK is also used by the standard gcc compiler to mark code paths that should not be executed.

Another use of EBREAK is to support semihosting, where the execution environment includes a debugger that can provide services over an alternate system call interface built around the EBREAK instruction. Because the RISC-V base ISAs do not provide more than one EBREAK instruction, RISC-V semihosting uses a special sequence of instructions to distinguish a semihosting EBREAK from a debugger-inserted EBREAK:

    slli x0, x0, 0x1f   # Entry NOP
    ebreak              # Break to debugger
    srai x0, x0, 7      # NOP encoding the semihosting call number 7

Note these three instructions must be 32-bit-wide instructions, i.e., they mustn't be among the compressed 16-bit instructions described in Chapter 16. The shift NOP instructions are still considered available for use as HINTs.

Semihosting is a form of service call, and would be more naturally encoded as an ECALL using an existing ABI, but this would require the debugger to be able to intercept ECALLs, which is a newer addition to the debug standard. We intend to move over to using ECALLs with a standard ABI, in which case semihosting can share a service ABI with an existing standard. We note that ARM processors have also moved to using SVC instead of BKPT for semihosting calls in newer designs.

2.9 HINT Instructions

RV32I reserves a large encoding space for HINT instructions,
which are usually used to communicate performance hints to the microarchitecture. Like the NOP instruction, HINTs do not change any architecturally visible state, except for advancing the pc and any applicable performance counters. Implementations are always allowed to ignore the encoded hints.

Most HINTs are encoded as integer computational instructions with rd=x0. The HINT encodings have been chosen so that simple implementations can ignore HINTs altogether, and instead execute a HINT as a regular computational instruction that happens not to mutate the architectural state. For example, ADD is a HINT if the destination register is x0; the five-bit rs1 and rs2 fields encode arguments to the HINT. However, a simple implementation can simply execute the HINT as an ADD of rs1 and rs2 that writes x0, which has no architecturally visible effect.

Table 2.3 lists all RV32I HINT code points. 91% of the HINT space is reserved for standard HINTs, but none are presently defined. The remainder of the HINT space is reserved for custom HINTs: no standard HINTs will ever be defined in this subspace.

No standard hints are presently defined. We anticipate standard hints to eventually include memory-system spatial and temporal locality hints, branch prediction hints, thread-scheduling hints, security tags, and instrumentation flags for simulation/emulation.

Chapter 3: "Zifencei" Instruction-Fetch Fence, Version 2.0

This chapter defines the "Zifencei" extension, which includes the FENCE.I instruction that provides explicit synchronization between writes to instruction memory and instruction fetches on the same hart. Currently, this instruction is the only standard mechanism to ensure that stores visible to a hart will also be visible to its instruction fetches.

We considered but did not include a "store instruction word" instruction as in MAJC [19]. JIT compilers may generate a large trace of instructions before a single FENCE.I, and amortize any instruction cache snooping/invalidation overhead by writing translated instructions to memory regions that are known not to reside in the I-cache.

The FENCE.I instruction was designed to support a wide variety of implementations. A simple implementation can flush the local instruction cache and the instruction
pipeline when the FENCE.I is executed. A more complex implementation might snoop the instruction (data) cache on every data (instruction) cache miss, or use an inclusive unified private L2 cache to invalidate lines from the primary instruction cache when they are being written by a local store instruction. If instruction and data caches are kept coherent in this way, or if the memory system consists of only uncached RAMs, then just the fetch pipeline needs to be flushed at a FENCE.I.

The FENCE.I instruction was previously part of the base I instruction set. Two main issues are driving moving this out of the mandatory base, although at the time of writing it is still the only standard method for maintaining instruction-fetch coherence.

First, it has been recognized that on some systems, FENCE.I will be expensive to implement, and alternate mechanisms are being discussed in the memory model task group. In particular, for designs that have an incoherent instruction cache and an incoherent data cache, or where the instruction cache refill does not snoop a coherent data cache, both caches must be completely flushed when a FENCE.I instruction is encountered. This problem is exacerbated when there are multiple levels of I-cache and D-cache in front of a unified cache or outer memory system.

Second, the instruction is not powerful enough to make it available at user level in a Unix-like operating system environment. The FENCE.I only synchronizes the local hart, and the OS can reschedule the user hart to a different physical hart after the FENCE.I. This would require the OS to execute an additional FENCE.I as part of every context migration. For this reason, the standard Linux ABI has removed FENCE.I from user-level and now requires a system call to maintain instruction-fetch coherence, which allows the OS to minimize the number of FENCE.I executions required on current systems and provides forward-compatibility with future improved instruction-fetch coherence mechanisms.

Future approaches to instruction-fetch coherence under discussion include providing more restricted versions of FENCE.I that only target a given address specified in rs1, and/or allowing software to use an ABI that relies on machine-mode cache-maintenance operations.

The FENCE.I instruction is used to synchronize the instruction and data streams. RISC-V does not guarantee that stores to instruction memory will be made visible to instruction fetches on a RISC-V hart until that hart executes a FENCE.I instruction. A FENCE.I instruction ensures that a subsequent instruction fetch on a RISC-V hart will see any previous data stores already visible to the same RISC-V hart. FENCE.I does not ensure that other RISC-V harts' instruction fetches will observe the local hart's
stores in a multiprocessor system. To make a store to instruction memory visible to all RISC-V harts, the writing hart has to execute a data FENCE before requesting that all remote RISC-V harts execute a FENCE.I.

The unused fields in the FENCE.I instruction (imm[11:0], rs1, and rd) are reserved for finer-grain fences in future extensions. For forward compatibility, base implementations shall ignore these fields, and standard software shall zero these fields.

Because FENCE.I only orders stores with a hart's own instruction fetches, application code should only rely upon FENCE.I if the application thread will not be migrated to a different hart. The EEI can provide mechanisms for efficient multiprocessor instruction-stream synchronization.

Chapter 4: RV32E Base Integer Instruction Set, Version 1.9

This chapter describes a draft proposal for the RV32E base integer instruction set, designed for embedded systems. RV32E is a reduced version of RV32I; the only change is to reduce the number of integer registers to 16. This chapter only outlines the differences between RV32E and RV32I, and so should be read after Chapter 2.

RV32E was designed to provide an even smaller base core for embedded microcontrollers. Although we had mentioned the possibility of such a subset in version 2.0 of this document, we initially resisted defining it. However, given the demand for the smallest possible 32-bit microcontroller, and in the interests of preempting fragmentation in this space, we have defined RV32E as a fourth standard base ISA in addition to RV32I, RV64I, and RV128I. There is also interest in defining an RV64E to reduce context state for highly threaded 64-bit processors.

4.1 RV32E Programmers' Model

RV32E reduces the integer register count to 16 general-purpose registers (x0-x15), where x0 is a dedicated zero register.

We have found that in small RV32I core designs, the upper 16 registers consume around one quarter of the total area of the core excluding memories, thus their removal saves around 25% core area with a corresponding core power reduction.
This change requires a different calling convention and ABI. In particular, RV32E is only used with a soft-float calling convention. A new embedded ABI is under consideration that would work across RV32E and RV32I.

4.2 RV32E Instruction Set

RV32E uses the same instruction-set encoding as RV32I, except that only registers x0-x15 are provided. Future standard extensions will not make use of the instruction bits freed up by the reduced register-specifier fields; these are available for custom extensions.

RV32E can be combined with all current standard extensions. Defining the F, D, or Q extensions with a 16-entry floating-point register file when combined with RV32E was considered but, as we decided not to support systems with reduced floating-point register state, we instead intend to define a "Zfinx" extension, which makes floating-point computations use the integer registers, removing the floating-point loads, stores, and moves between floating-point and integer registers.

Chapter 5: RV64I Base Integer Instruction Set, Version 2.1

This chapter describes the RV64I base integer instruction set, which builds upon the RV32I variant described in Chapter 2. This chapter presents only the differences with RV32I, so it should be read in conjunction with the earlier chapter.

5.1 Register State

RV64I widens the integer registers and supported user address space to 64 bits (XLEN=64 in Figure 2.1).

5.2 Integer Computational Instructions

Most integer computational instructions operate on XLEN-bit values. Additional instruction variants are provided to manipulate 32-bit values in RV64I, indicated by a 'W' suffix to the opcode. These "*W" instructions ignore the upper 32 bits of
inputs always produce 32bit signed values ie bits xlen1 31 equal compiler calling convention maintain invariant 32bit values held signextended format 64bit registers even 32bit unsigned integers extend bit 31 bits 63 32 consequently conversion unsigned signed 32bit integers noop conversion signed 32bit integer signed 64bit integer existing 64bit wide sltu unsigned branch compares still operate correctly unsigned 32bit integers invariant similarly existing 64bit wide logical operations 32bit signextended integers preserve signextension property new instructions add wsubwsxxw required addition shifts ensure reasonable performance 32bit values integer registerimmediate instructions addiw rv64i instruction adds signextended 12bit immediate register rs1 produces proper signextension 32bit result rd overflows ignored result low 32 bits result signextended 64 bits note addiw rd rs1 0 writes signextension lower 32 bits register rs1 register rd assembler pseudoinstruction sextw shifts constant encoded specialization itype format using instruction opcode rv32i operand shifted rs1 shift amount encoded lower 6 bits iimmediate field rv64i right shift type encoded bit 30 slli logical left shift zeros shifted lower bits srli logical right shift zeros shifted upper bits srai arithmetic right shift original sign bit copied vacated upper bits slliw srliw sraiw rv64ionly instructions analogously defined operate 32bit values produce signed 32bit results slliw srliw sraiw encodings imm 5 0 reserved previously slliw srliw sraiw imm 5 0 defined cause illegal in struction exceptions whereas marked reserved backwardscompatible change lui load upper immediate uses opcode rv32i lui places 20bit uimmediate bits 3112 register rd places zero lowest 12 bits 32bit result signextended 64 bits auipc add upper immediate pc uses opcode rv32i auipc used build pc relative addresses uses utype format auipc appends 12 loworder zero bits 20bit uimmediate signextends result 64 bits adds address auipc 
instruction places result register rd integer registerregister operations addw subw rv64ionly instructions defined analogously add sub operate 32bit values produce signed 32bit results overflows ignored low 32bits result signextended 64bits written destination register sll srl sra perform logical left logical right arithmetic right shifts value register rs1 shift amount held register rs2 rv64i low 6 bits rs2 considered shift amount sllw srlw sraw rv64ionly instructions analogously defined operate 32bit values produce signed 32bit results shift amount given rs2 40", "url": "RV32ISPEC.pdf#segment24", "timestamp": "2023-09-18 14:50:17", "segment": "segment24", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(5.2).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra2(5.2).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra3(5.2).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra4(5.2).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "5.3 Load and Store Instructions ", "content": "rv64i extends address space 64 bits execution environment define portions address space legal access ld instruction loads 64bit value memory register rd rv64i lw instruction loads 32bit value memory signextends 64 bits storing register rd rv64i lwu instruction hand zeroextends 32bit value memory rv64i lh lhu defined analogously 16bit values lb lbu 8bit values sd sw sh sb instructions store 64bit 32bit 16bit 8bit values low bits register rs2 memory respectively", "url": "RV32ISPEC.pdf#segment25", "timestamp": "2023-09-18 14:50:17", "segment": "segment25", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(5.3).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "5.4 HINT Instructions ", "content": "instructions microarchitectural hints rv32i see section 29 also hints rv64i 
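The sign-extension rule for the RV64I '*W' instructions described above can be checked with a short model (an illustrative sketch only, not normative): every W-form result takes its low 32 bits and replicates bit 31 into bits 63 through 32.

```python
# Toy model (not normative) of the RV64 "W" result rule: compute in
# 32 bits, ignore overflow, then sign-extend bit 31 into bits 63:32.

MASK64 = (1 << 64) - 1

def sext32(x):
    """Sign-extend the low 32 bits of x to a 64-bit value."""
    x &= 0xFFFFFFFF
    return (x | ~0xFFFFFFFF) & MASK64 if x >> 31 else x

def addw(rs1, rs2):
    # ADDW: 32-bit add, overflow ignored, result sign-extended to 64 bits.
    return sext32((rs1 + rs2) & 0xFFFFFFFF)

def sraiw(rs1, shamt):
    # SRAIW: arithmetic right shift of the low 32 bits (shamt in 0..31),
    # copying the original bit 31 into the vacated upper bits.
    v = rs1 & 0xFFFFFFFF
    signed = v - (1 << 32) if v >> 31 else v
    return sext32((signed >> shamt) & 0xFFFFFFFF)

# 0x7fffffff + 1 wraps to -2**31, held sign-extended in a 64-bit register:
r = addw(0x7FFFFFFF, 1)   # 0xFFFFFFFF80000000
```

This also illustrates the invariant the calling convention maintains: a 32-bit value in a 64-bit register always has bits 63:31 equal, so 64-bit compares and branches work on it unchanged.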
The additional computational instructions in RV64I expand both the standard and custom HINT encoding spaces. Table 5.1 lists all RV64I HINT code points. 91% of the HINT space is reserved for standard HINTs, but none are presently defined. The remainder of the HINT space is reserved for custom HINTs; no standard HINTs will ever be defined in this subspace.", "url": "RV32ISPEC.pdf#segment26", "timestamp": "2023-09-18 14:50:17", "segment": "segment26", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%205.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "Chapter 6 ", "content": "RV128I Base Integer Instruction Set, Version 1.7. 'There is only one mistake that can be made in computer design that is difficult to recover from: not having enough address bits for memory addressing and memory management.' Bell and Strecker, ISCA-3, 1976. This chapter describes RV128I, a variant of the RISC-V ISA supporting a flat 128-bit address space. The variant is a straightforward extrapolation of the existing RV32I and RV64I designs. The primary reason to extend integer register width is to support larger address spaces. It is not clear when a flat address space larger than 64 bits will be required. At the time of writing, the fastest supercomputer in the world as measured by the Top500 benchmark had over 1 PB of DRAM, and would require over 50 bits of address space if all the DRAM resided in a single address space. Some warehouse-scale computers already contain even larger quantities of DRAM, and new dense solid-state non-volatile memories and fast interconnect technologies might drive a demand for even larger memory spaces. Exascale systems research is targeting 100 PB memory systems, which occupy 57 bits of address space. At historic rates of growth, it is possible that greater than 64 bits of address space might be required before 2030. History suggests that whenever it becomes clear that more than 64 bits of address space is needed, architects will repeat intensive debates about alternatives to extending the address space, including segmentation, 96-bit address spaces, and software workarounds, until, finally, flat 128-bit address spaces will be adopted as the simplest and best solution. We have not frozen the RV128 spec at this time, as there might need to be evolution of the design based on actual usage of 128-bit address spaces. RV128I builds upon RV64I in the same way RV64I builds upon RV32I, with integer registers extended to 128 bits (i.e., XLEN=128). Most integer computational instructions are unchanged as they are defined to operate on XLEN bits. The RV64I 'W' integer instructions that operate on 32-bit values in the low bits of a register are retained but now sign extend their results from bit 31 to bit 127. A new set of integer instructions is added that operate on 64-bit values held in the low bits of the 128-bit integer registers and sign extend their results from bit 63 to bit 127. These instructions consume two major opcodes (OP-IMM-64 and OP-64) in the standard 32-bit encoding. To improve compatibility with RV64, in a reverse of how RV32 to RV64 was handled, we might change the decoding around to rename the RV64I ADD as a 64-bit ADDD, and add a 128-bit ADDQ in what was previously the OP-64 major opcode (renamed the OP-128 major opcode). Shifts by an immediate (SLLI/SRLI/SRAI) are now encoded using the low 7 bits of the I-immediate, and variable shifts (SLL/SRL/SRA) use the low 7 bits of the shift-amount source register. An LDU (load double unsigned) instruction is added using the existing LOAD major opcode, along with new LQ and SQ instructions to load and store quadword values. SQ is added to the STORE major opcode, while LQ is added to the MISC-MEM major opcode. The floating-point instruction set is unchanged, although the 128-bit Q floating-point extension can now support FMV.X.Q and FMV.Q.X instructions, together with additional FCVT instructions to and from the 128-bit integer format.", "url": "RV32ISPEC.pdf#segment27", "timestamp": "2023-09-18 14:50:17", "segment": "segment27", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 7 ", "content": "'M' Standard Extension for Integer Multiplication and Division, Version 2.0. This chapter describes the standard integer multiplication and division instruction extension, which is named 'M' and contains instructions that multiply or divide values held in two integer registers. We separate integer multiply and divide out from the base to simplify low-end implementations, or for applications where integer multiply and divide operations are either infrequent or better handled in attached accelerators.", "url": "RV32ISPEC.pdf#segment28", "timestamp": "2023-09-18 14:50:17", "segment": "segment28", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "7.1 Multiplication Operations ", "content": "MUL performs an XLEN-bit by XLEN-bit multiplication of rs1 by rs2 and places the lower XLEN bits in the destination register. MULH, MULHU, and MULHSU perform the same multiplication but return the upper XLEN bits of the full 2xXLEN-bit product, for signed-signed, unsigned-unsigned, and signed rs1 by unsigned rs2 multiplication, respectively. If both the high and low bits of the same product are required, then the recommended code sequence is: MULH[[S]U] rdh, rs1, rs2; MUL rdl, rs1, rs2 (source register specifiers must be in the same order, and rdh cannot be the same as rs1 or rs2). Microarchitectures can then fuse these into a single multiply operation instead of performing two separate multiplies. MULHSU is used in multi-word signed multiplication to multiply the most-significant word of the multiplicand (which contains the sign bit) with the less-significant words of the multiplier (which are unsigned). MULW is an RV64 instruction that multiplies the lower 32 bits of the source registers, placing the sign-extension of the lower 32 bits of the result into the destination register. In RV64, MUL can be used to obtain the upper 32 bits of the 64-bit product, but signed arguments must be proper 32-bit signed values, whereas unsigned arguments must have their upper 32 bits clear. If the arguments are not known to be sign- or zero-extended, an alternative is to shift both arguments left by 32 bits, then use MULH[[S]U].", "url": "RV32ISPEC.pdf#segment28", "timestamp": "2023-09-18 14:50:17", "segment": "segment28", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(7.1).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "7.2 Division Operations ", "content": "DIV and DIVU perform an XLEN-bit by XLEN-bit signed and unsigned integer division of rs1 by rs2, rounding towards zero. REM and REMU provide the remainder of the corresponding division operation. For REM, the sign of the result equals the sign of the dividend. For both signed and unsigned division, it holds that dividend = divisor x quotient + remainder. If both the quotient and remainder are required from the same division, the recommended code sequence is: DIV[U] rdq, rs1, rs2; REM[U] rdr, rs1, rs2 (rdq cannot be the same as rs1 or rs2). Microarchitectures can then fuse these into a single divide operation instead of performing two separate divides. DIVW and DIVUW are RV64 instructions that divide the lower 32 bits of rs1 by the lower 32 bits of rs2, treating them as signed and unsigned integers respectively, placing the 32-bit quotient in rd, sign-extended to 64 bits. REMW and REMUW are RV64 instructions that provide the corresponding signed and unsigned remainder operations respectively. Both REMW and REMUW always sign-extend the 32-bit result to 64 bits, including on a divide by zero. The semantics for division by zero and division overflow are summarized in Table 7.1. The quotient of division by zero has all bits set, and the remainder of division by zero equals the dividend. Signed division overflow occurs only when the most-negative integer is divided by -1. The quotient of a signed division with overflow is equal to the dividend, and the remainder is zero. Unsigned division overflow cannot occur. We considered raising exceptions on integer divide by zero, with these exceptions causing a trap in most execution environments. However, this would be the only arithmetic trap in the standard ISA (floating-point exceptions set flags and write default values, but do not cause traps) and would require language implementors to interact with the execution environment's trap handlers for this case. Further, where language standards mandate that a divide-by-zero exception must cause an immediate control flow change, only a single branch instruction needs to be added to each divide operation, and this branch instruction can be inserted after the divide and should normally be very predictably not taken, adding little runtime overhead. The value of all bits set is returned for both unsigned and signed divide by zero to simplify the divider circuitry. The value of all 1s is both the natural value to return for unsigned divide, representing the largest unsigned number, and also the natural result for a simple unsigned divider implementation. Signed division is often implemented using an unsigned division circuit, and specifying the same overflow result simplifies the hardware.", "url": "RV32ISPEC.pdf#segment29", "timestamp": "2023-09-18 14:50:17", "segment": "segment29", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(7.2).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%207.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "Chapter 8 ", "content": "'A' Standard Extension for Atomic Instructions, Version 2.1. The standard atomic-instruction extension, named 'A', contains instructions that atomically read-modify-write memory to support synchronization between multiple RISC-V harts running in the same memory space. The two forms of atomic instruction provided are load-reserved/store-conditional instructions and atomic fetch-and-op memory instructions. Both types of atomic instruction support various memory consistency orderings including unordered, acquire, release, and sequentially consistent semantics. These instructions allow RISC-V to support the RCsc
memory consistency model [5]. After much debate, the language community and architecture community appear to have finally settled on release consistency as the standard memory consistency model, and so the RISC-V atomic support is built around this model.", "url": "RV32ISPEC.pdf#segment30", "timestamp": "2023-09-18 14:50:18", "segment": "segment30", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "8.1 Specifying Ordering of Atomic Instructions ", "content": "The base RISC-V ISA has a relaxed memory model, with the FENCE instruction used to impose additional ordering constraints. The address space is divided by the execution environment into memory and I/O domains, and the FENCE instruction provides options to order accesses to one or both of these two address domains. To provide more efficient support for release consistency [5], each atomic instruction has two bits, aq and rl, used to specify additional memory ordering constraints as viewed by other RISC-V harts. The bits order accesses to one of the two address domains, memory or I/O, depending on which address domain the atomic instruction is accessing. No ordering constraint is implied to accesses to the other domain, and a FENCE instruction should be used to order across both domains. If both bits are clear, no additional ordering constraints are imposed on the atomic memory operation. If only the aq bit is set, the atomic memory operation is treated as an acquire access, i.e., no following memory operations on this RISC-V hart can be observed to take place before the acquire memory operation. If only the rl bit is set, the atomic memory operation is treated as a release access, i.e., the release memory operation cannot be observed to take place before any earlier memory operations on this RISC-V hart. If both the aq and rl bits are set, the atomic memory operation is sequentially consistent and cannot be observed to happen before any earlier memory operations or after any later memory operations in the same RISC-V hart and to the same address domain.", "url": "RV32ISPEC.pdf#segment31", "timestamp": "2023-09-18 14:50:18", "segment": "segment31", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "8.2 Load-Reserved/Store-Conditional Instructions ", "content": "Complex atomic memory operations on a single memory word or doubleword are performed with the load-reserved (LR) and store-conditional (SC) instructions. LR.W loads a word from the address in rs1, places the sign-extended value in rd, and registers a reservation set, a set of bytes that subsumes the bytes in the addressed word. SC.W conditionally writes a word in rs2 to the address in rs1: the SC.W succeeds only if the reservation is still valid and the reservation set contains the bytes being written. If the SC.W succeeds, the instruction writes the word in rs2 to memory, and it writes zero to rd. If the SC.W fails, the instruction does not write to memory, and it writes a nonzero value to rd. Regardless of success or failure, executing an SC.W instruction invalidates any reservation held by this hart. LR.D and SC.D act analogously on doublewords and are only available on RV64. For RV64, LR.W and SC.W sign-extend the value placed in rd. Both compare-and-swap (CAS) and LR/SC can be used to build lock-free data structures. After extensive discussion, we opted for LR/SC for several reasons: 1) CAS suffers from the ABA problem, which LR/SC avoids because it monitors all accesses to the address rather than only checking for changes in the data value; 2) CAS would also require a new integer instruction format to support three source operands (address, compare value, swap value) as well as a different memory system message format, which would complicate microarchitectures; 3) furthermore, to avoid the ABA problem, other systems provide a double-wide CAS (DW-CAS) to allow a counter to be tested and incremented along with a data word, which requires reading five registers and writing two in one instruction, and also a new larger memory system message type, further complicating implementations; 4) LR/SC provides a more efficient implementation of many primitives as it only requires one load as opposed to two with CAS (one load before the CAS instruction to obtain a value for speculative computation, then a second load as part of the CAS instruction to check if the value is unchanged before updating). The main disadvantage of LR/SC over CAS is livelock, which we avoid, under certain circumstances, with an architected guarantee of eventual forward progress as described below. Another concern is whether the influence of the current x86 architecture, with its DW-CAS, will complicate porting of synchronization libraries and other software that assumes DW-CAS is the basic machine primitive. A possible mitigating factor is the recent addition of transactional memory instructions to x86, which might cause a move away from DW-CAS. More generally, a multi-word atomic primitive is desirable, but there is still considerable debate about what form this should take, and guaranteeing forward progress adds complexity to a system. Our current thoughts are to include a small limited-capacity transactional memory buffer along the lines of the original transactional memory proposals as an optional standard extension. The failure code with value 1 is reserved to encode an unspecified failure. Other failure codes are reserved at this time. Portable software should only assume the failure code will be nonzero. We reserve a failure code of 1 to mean 'unspecified' so that simple implementations may return this value using the existing mux required for the SLT/SLTU instructions. More specific failure codes might be defined in future versions or extensions to the ISA. For LR and SC, the A extension requires that the address held in rs1 be naturally aligned to the size of the operand (i.e., eight-byte aligned for 64-bit words and four-byte aligned for 32-bit words). If the address is not naturally aligned, an address-misaligned exception or an access-fault exception will be generated. The access-fault exception can be generated for a memory access that would otherwise be able to complete except for the misalignment, if the misaligned access should not be emulated. Emulating misaligned LR/SC sequences is impractical in most systems. Misaligned LR/SC sequences also raise the possibility of accessing multiple reservation sets at once, which present definitions do not provide for. An implementation can register an arbitrarily large reservation set on each LR, provided the reservation set includes all bytes of the addressed data word or doubleword. An SC can only pair with the most recent LR in program order. An SC may succeed only if no store from another hart to the bytes in the reservation set can be observed to have occurred between the LR and the SC, and if there is no other SC between the LR and itself in program order. An SC may succeed only if no write from a device other than a hart to the bytes accessed by the LR instruction can be observed to have occurred between the LR and SC. Note this LR might have a different effective address and data size, but reserved the SC's address as part of the reservation set. Following this model, in systems with memory translation, an SC is allowed to succeed if the earlier LR reserved the same location using an alias with a different virtual address, but it is also allowed to fail in this case. To accommodate legacy devices and buses, writes from devices other than RISC-V harts are only required to invalidate reservations when they overlap the bytes accessed by the LR; these writes are not required to invalidate the reservation when they access other bytes in the reservation set. The SC must fail if the address is not within the reservation set of the most recent LR in program order. The SC must fail if a store to the reservation set from another hart can be observed to occur between the LR and SC. The SC must fail if a write from some other device to the bytes accessed by the LR can be observed to occur between the LR and SC (if such a device writes the reservation set but does not write the bytes accessed by the LR, the SC may or may not fail). An SC must fail if there is another SC (to any address) between the LR and the SC in program order. The precise statement of the atomicity requirements for successful LR/SC sequences is defined by the Atomicity Axiom in Section 14.1. The platform should provide a means to determine the size and shape of the reservation set. A platform specification may constrain the size and shape of the reservation set. For example, the Unix platform is expected to require of main memory that the reservation set be of fixed size, contiguous, naturally aligned, and no greater than the virtual memory page size. A store-conditional instruction to a scratch word of memory should be used to forcibly invalidate any existing load reservation: during a preemptive context switch, and, if necessary, when changing virtual to physical address mappings, such as when migrating pages that might contain an active reservation. The invalidation of a hart's reservation when it executes an LR or SC implies that a hart can only hold one reservation at a time, and that an SC can only pair with the most recent LR, and the LR with the next following SC, in program order. This is a restriction to the Atomicity Axiom in Section 14.1 that ensures software runs correctly, as expected, on common implementations that operate in this manner. An SC instruction can never be observed by another RISC-V hart before the LR instruction that established the reservation. An LR/SC sequence can be given acquire semantics by setting the aq bit on the LR instruction. An LR/SC sequence can be given release semantics by setting the rl bit on the SC instruction. Setting the aq bit on the LR instruction, and setting both the aq and the rl bit on the SC instruction, makes the LR/SC sequence sequentially consistent, meaning that it cannot be reordered with earlier or later memory operations from the same hart. If neither bit is set on both LR and SC, the LR/SC sequence can be observed to occur before or after surrounding memory operations from the same RISC-V hart. This can be appropriate when the LR/SC sequence is used to implement a parallel reduction operation. Software should not set the rl bit on an LR instruction unless the aq bit is also set, nor should software set the aq bit on an SC instruction unless the rl bit is also set. LR.rl and SC.aq instructions are not guaranteed to provide any stronger ordering than those with both bits clear, but may result in lower performance. LR/SC can be used to construct lock-free data structures. An example using LR/SC to implement a compare-and-swap function is shown in Figure 8.1. If inlined, compare-and-swap functionality need only take four instructions.", "url": "RV32ISPEC.pdf#segment32", "timestamp": "2023-09-18 14:50:18", "segment": "segment32", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(8.2).jpg?raw=true",
"https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%208.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "8.3 Eventual Success of Store-Conditional Instructions ", "content": "The standard A extension defines constrained LR/SC loops, which have the following properties: The loop comprises only an LR/SC sequence and code to retry the sequence in the case of failure, and must comprise at most 16 instructions placed sequentially in memory. An LR/SC sequence begins with an LR instruction and ends with an SC instruction. The dynamic code executed between the LR and SC instructions can only contain instructions from the base 'I' instruction set, excluding loads, stores, backward jumps, taken backward branches, JALR, FENCE, FENCE.I, and SYSTEM instructions. If the 'C' extension is supported, then compressed forms of the aforementioned instructions are also permitted. The code to retry a failing LR/SC sequence can contain backwards jumps and/or branches to repeat the LR/SC sequence, but otherwise has the same constraint as the code between the LR and SC. The LR and SC addresses must lie within a memory region with the LR/SC eventuality property. The execution environment is responsible for communicating which regions have this property. The SC must be to the same effective address and of the same data size as the latest LR executed by this hart. LR/SC sequences that do not lie within constrained LR/SC loops are unconstrained. Unconstrained LR/SC sequences might succeed on some attempts on some implementations, but might never succeed on other implementations. We restricted the length of LR/SC loops to fit within 64 contiguous instruction bytes in the base ISA to avoid undue restrictions on instruction cache and TLB size and associativity. Similarly, we disallowed other loads and stores within the loops to avoid restrictions on data-cache associativity in simple implementations that track the reservation within a private cache. The restrictions on branches and jumps limit the time that can be spent in the sequence. Floating-point operations and integer multiply/divide were disallowed to simplify the operating system's emulation of these instructions on implementations lacking appropriate hardware support. Software is not forbidden from using unconstrained LR/SC sequences, but portable software must detect the case that the sequence repeatedly fails, then fall back to an alternate code sequence that does not rely on an unconstrained LR/SC sequence. Implementations are permitted to unconditionally fail any unconstrained LR/SC sequence. If a hart H enters a constrained LR/SC loop, the execution environment must guarantee that one of the following events eventually occurs: (1) H or some other hart executes a successful SC to the reservation set of the LR instruction in H's constrained LR/SC loop; (2) some other hart executes an unconditional store or AMO instruction to the reservation set of the LR instruction in H's constrained LR/SC loop, or some other device in the system writes to that reservation set; (3) H executes a branch or jump that exits the constrained LR/SC loop; or (4) H traps. Note that these definitions permit an implementation to fail an SC instruction occasionally for any reason, provided the aforementioned guarantee is not violated. As a consequence of the eventuality guarantee, if some harts in an execution environment are executing constrained LR/SC loops, and no other harts or devices in the execution environment execute an unconditional store or AMO to that reservation set, then at least one hart will eventually exit its constrained LR/SC loop. By contrast, if other harts or devices continue to write to that reservation set, it is not guaranteed that any hart will exit its LR/SC loop. Loads and load-reserved instructions do not by themselves impede the progress of other harts' LR/SC sequences. We note this constraint implies, among other things, that loads and load-reserved instructions executed by other harts (possibly within the same core) cannot impede LR/SC progress indefinitely. For example, cache evictions caused by another hart sharing the cache cannot impede LR/SC progress indefinitely. Typically, this implies reservations are tracked independently of evictions from any shared cache. Similarly, cache misses caused by speculative execution within a hart cannot impede LR/SC progress indefinitely. These definitions admit the possibility that SC instructions may spuriously fail for implementation reasons, provided progress is eventually made. One advantage of CAS is that it guarantees that some hart eventually makes progress, whereas an LR/SC atomic sequence could livelock indefinitely on some systems. To avoid this concern, we added an architectural guarantee of livelock freedom for certain LR/SC sequences. Earlier versions of this specification imposed a stronger starvation-freedom guarantee. However, the weaker livelock-freedom guarantee is sufficient to implement the C11 and C++11 languages, and is substantially easier to provide in some microarchitectural styles.", "url": "RV32ISPEC.pdf#segment33", "timestamp": "2023-09-18 14:50:19", "segment": "segment33", "image_urls": [], "Book": "riscv-spec-20191213" }, {
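The LR/SC compare-and-swap pattern of Figure 8.1 can be modeled in a few lines (an illustrative sketch only, not the spec's assembly sequence): LR records a reservation, a store by any other agent to the reserved address invalidates it, and SC writes only while the reservation is still valid.

```python
# Toy model (not normative) of LR/SC and a compare-and-swap retry loop:
# an intervening store by another agent invalidates the reservation,
# so the SC fails and the loop retries.

class Memory:
    def __init__(self):
        self.data = {}
        self.reservation = {}              # hart id -> reserved address

    def lr(self, hart, addr):
        self.reservation[hart] = addr      # register a reservation
        return self.data.get(addr, 0)

    def sc(self, hart, addr, value):
        ok = self.reservation.get(hart) == addr
        if ok:
            self.data[addr] = value
        self.reservation.pop(hart, None)   # any SC clears the reservation
        return 0 if ok else 1              # 0 = success, nonzero = failure

    def store(self, addr, value):
        self.data[addr] = value
        # A store by any agent invalidates overlapping reservations.
        for h, a in list(self.reservation.items()):
            if a == addr:
                del self.reservation[h]

def compare_and_swap(mem, hart, addr, expected, desired):
    """Retry loop in the style of Figure 8.1; returns the old value."""
    while True:
        old = mem.lr(hart, addr)
        if old != expected:
            return old                     # compare failed, no write
        if mem.sc(hart, addr, desired) == 0:
            return old                     # swap performed

mem = Memory()
mem.store(0x100, 5)
old = compare_and_swap(mem, "h0", 0x100, 5, 7)   # succeeds, old value 5
```

Because the retry loop here re-executes LR on every iteration, it also illustrates why the eventuality guarantee above matters: without it, a stream of conflicting stores could make the SC fail forever.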
"section": "8.4 Atomic Memory Operations ", "content": "The atomic memory operation (AMO) instructions perform read-modify-write operations for multiprocessor synchronization and are encoded with an R-type instruction format. These AMO instructions atomically load a data value from the address in rs1, place the value into register rd, apply a binary operator to the loaded value and the original value in rs2, then store the result back to the address in rs1. AMOs can either operate on 64-bit (RV64 only) or 32-bit words in memory. For RV64, 32-bit AMOs always sign-extend the value placed in rd. For AMOs, the A extension requires that the address held in rs1 be naturally aligned to the size of the operand (i.e., eight-byte aligned for 64-bit words and four-byte aligned for 32-bit words). If the address is not naturally aligned, an address-misaligned exception or an access-fault exception will be generated. The access-fault exception can be generated for a memory access that would otherwise be able to complete except for the misalignment, if the misaligned access should not be emulated. The 'Zam' extension, described in Chapter 22, relaxes this requirement and specifies the semantics of misaligned AMOs. The operations supported are swap, integer add, bitwise AND, bitwise OR, bitwise XOR, and signed and unsigned integer maximum and minimum. Without ordering constraints, these AMOs can be used to implement parallel reduction operations, where typically the return value would be discarded by writing to x0. We provided fetch-and-op style atomic primitives as they scale to highly parallel systems better than LR/SC or CAS. A simple microarchitecture can implement AMOs using the LR/SC primitives, provided the implementation can guarantee the AMO eventually completes. More complex implementations might also implement AMOs at memory controllers, and can optimize away fetching the original value when the destination is x0. The set of AMOs was chosen to support the C11/C++11 atomic memory operations efficiently, and also to support parallel reductions in memory. Another use of AMOs is to provide atomic updates to memory-mapped device registers (e.g., setting, clearing, or toggling bits) in the I/O space. To help implement multiprocessor synchronization, the AMOs optionally provide release consistency semantics. If the aq bit is set, then no later memory operations in this RISC-V hart can be observed to take place before the AMO. Conversely, if the rl bit is set, then other RISC-V harts will not observe the AMO before memory accesses preceding the AMO in this RISC-V hart. Setting both the aq and the rl bit on an AMO makes the sequence sequentially consistent, meaning that it cannot be reordered with earlier or later memory operations from the same hart. The AMOs were designed to implement the C11 and C++11 memory models efficiently. Although a FENCE R,RW instruction suffices to implement the acquire operation and FENCE RW,W suffices to implement release, both imply additional unnecessary ordering as compared to AMOs with the corresponding aq or rl bit set. An example code sequence for a critical section guarded by a test-and-test-and-set spinlock is shown in Figure 8.2. Note the first AMO is marked aq to order the lock acquisition before the critical section, and the second AMO is marked rl to order the critical section before the lock relinquishment. We recommend the use of the AMO Swap idiom shown above for both lock acquire and release to simplify the implementation of speculative lock elision [16]. The instructions in the 'A' extension can also be used to provide sequentially consistent loads and stores. A sequentially consistent load can be implemented as an LR with both aq and rl set. A sequentially consistent store can be implemented as an AMOSWAP that writes the old value to x0 and has both aq and rl set.", "url": "RV32ISPEC.pdf#segment34", "timestamp": "2023-09-18 14:50:19", "segment": "segment34", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(8.4).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%208.2.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "Chapter 9 ", "content": "'Zicsr', Control and Status Register (CSR) Instructions, Version 2.0. RISC-V defines a separate address space of 4096 Control and Status registers associated with each hart. This chapter defines the full set of CSR instructions that operate on these CSRs. While CSRs are primarily used by the privileged architecture, there are several uses in unprivileged code, including for counters and timers, and for the floating-point status. The counters and timers are no longer considered mandatory parts of the standard base ISAs, and so the CSR instructions required to access them have been moved out of the base ISA chapter into this separate chapter.", "url": "RV32ISPEC.pdf#segment35", "timestamp": "2023-09-18 14:50:19", "segment": "segment35", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "9.1 CSR Instructions ", "content": "The CSR instructions atomically read-modify-write a single CSR, whose CSR specifier is encoded in the 12-bit csr field of the instruction held in
bits 31-20. The immediate forms use a 5-bit zero-extended immediate encoded in the rs1 field. The CSRRW (Atomic Read/Write CSR) instruction atomically swaps values in the CSRs and integer registers. CSRRW reads the old value of the CSR, zero-extends the value to XLEN bits, then writes it to integer register rd. The initial value in rs1 is written to the CSR. If rd=x0, then the instruction shall not read the CSR and shall not cause any of the side effects that might occur on a CSR read. The CSRRS (Atomic Read and Set Bits in CSR) instruction reads the value of the CSR, zero-extends the value to XLEN bits, and writes it to integer register rd. The initial value in integer register rs1 is treated as a bit mask that specifies bit positions to be set in the CSR. Any bit that is high in rs1 will cause the corresponding bit to be set in the CSR, if that CSR bit is writable. Other bits in the CSR are unaffected (though CSRs might have side effects when written). The CSRRC (Atomic Read and Clear Bits in CSR) instruction reads the value of the CSR, zero-extends the value to XLEN bits, and writes it to integer register rd. The initial value in integer register rs1 is treated as a bit mask that specifies bit positions to be cleared in the CSR. Any bit that is high in rs1 will cause the corresponding bit to be cleared in the CSR, if that CSR bit is writable. Other bits in the CSR are unaffected. For both CSRRS and CSRRC, if rs1=x0, then the instruction will not write to the CSR at all, and so shall not cause any of the side effects that might otherwise occur on a CSR write, nor raise illegal-instruction exceptions on accesses to read-only CSRs. Both CSRRS and CSRRC always read the addressed CSR and cause any read side effects regardless of the values of the rs1 and rd fields. Note that if rs1 specifies a register other than x0 holding a zero value, the instruction will still attempt to write the unmodified value back to the CSR and will cause any attendant side effects. A CSRRW with rs1=x0 will attempt to write zero to the destination CSR. The CSRRWI, CSRRSI, and CSRRCI variants are similar to CSRRW, CSRRS, and CSRRC respectively, except they update the CSR using an XLEN-bit value obtained by zero-extending a 5-bit unsigned immediate (uimm[4:0]) field encoded in the rs1 field, instead of a value from an integer register. For CSRRSI and CSRRCI, if the uimm[4:0] field is zero, then these instructions will not write to the CSR, and shall not cause any of the side effects that might otherwise occur on a CSR write. For CSRRWI, if rd=x0, then the instruction shall not read the CSR and shall not cause any of the side effects that might occur on a CSR read. Both CSRRSI and CSRRCI will always read the CSR and cause any read side effects regardless of the rd and rs1 fields. The CSRs defined so far do not have any architectural side effects on reads beyond raising illegal-instruction exceptions on disallowed accesses. Custom extensions might add CSRs for which reads have side effects. Some CSRs, such as the instructions-retired counter, instret, may be modified as a side effect of instruction execution. In these cases, if a CSR access instruction reads a CSR, it reads the value prior to the execution of the instruction. If a CSR access instruction writes such a CSR, the write is done instead of the increment. In particular, a value written to instret by one instruction will be the value read by the following instruction. The assembler pseudoinstruction to read a CSR, CSRR rd, csr, is encoded as CSRRS rd, csr, x0. The assembler pseudoinstruction to write a CSR, CSRW csr, rs1, is encoded as CSRRW x0, csr, rs1, while CSRWI csr, uimm, is encoded as CSRRWI x0, csr, uimm. Further assembler pseudoinstructions are defined to set and clear bits in a CSR when the old value is not required: CSRS/CSRC csr, rs1 and CSRSI/CSRCI csr, uimm. CSR Access Ordering: For a given hart, explicit and implicit CSR accesses are performed in program order with respect to those instructions whose execution behavior is affected by the state accessed by the CSR. In particular, a CSR access is performed after the execution of any prior instructions in program order whose behavior modifies or is modified by the CSR state, and before the execution of any subsequent instructions in program order whose behavior modifies or is modified by the CSR state. Furthermore, a CSR read access instruction returns the accessed CSR state before the execution of the instruction, while a CSR write access instruction updates the accessed CSR state after the execution of the instruction. Where the above program order does not hold, CSR accesses are weakly ordered, and the local hart or other harts may observe the CSR accesses in an order different from program order. In addition, CSR accesses are not ordered with respect to explicit memory accesses, unless a CSR access modifies the execution behavior of the instruction that performs the explicit memory access, or unless a CSR access and an explicit memory access are ordered by either the syntactic dependencies defined by the memory model or the ordering requirements defined by the Memory-Ordering PMAs section in Volume II of this manual. To enforce ordering in all other cases, software should execute a FENCE instruction between the relevant accesses. For the purposes of the FENCE instruction, CSR read accesses are classified as device input, and CSR write accesses are classified as device output. Informally, the CSR space acts as a weakly ordered memory-mapped I/O region, as defined by the Memory-Ordering PMAs section in Volume II of this manual. As a result, the order of CSR accesses with respect to all other accesses is constrained by the same mechanisms that constrain the order of memory-mapped I/O accesses to such a region. These CSR-ordering constraints are imposed primarily to support ordering main memory and memory-mapped I/O accesses with respect to reads of the time CSR. With the exception of the time, cycle, and mcycle CSRs, the CSRs defined thus far in Volumes I and II of this specification are not directly accessible by other harts or devices and cannot cause side effects visible to other harts or devices. Thus, accesses to CSRs other than the aforementioned three can be freely reordered with respect to FENCE instructions without violating this specification. For CSR accesses that cause side effects, the ordering constraints apply to the order of the initiation of those side effects but do not necessarily apply to the order of the completion of those side effects. The hardware platform may define that accesses to certain CSRs are strongly ordered, as defined by the Memory-Ordering PMAs section in Volume II of this manual. Accesses to strongly ordered CSRs have stronger ordering constraints with respect to accesses to both weakly ordered CSRs and accesses to memory-mapped I/O regions.", "url": "RV32ISPEC.pdf#segment36", "timestamp": "2023-09-18 14:50:19", "segment": "segment36", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(9.1).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%209.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "Chapter 10 ", "content": "Counters. RISC-V ISAs provide a set of up to 32 64-bit performance counters and timers that are accessible via unprivileged XLEN-bit read-only CSR registers 0xC00-0xC1F (with the upper 32 bits accessed via CSR registers 0xC80-0xC9F on RV32). The first three of these (CYCLE, TIME, and INSTRET) have dedicated functions (cycle count, real-time clock, and instructions retired, respectively), while the remaining counters, if implemented, provide programmable event counting.", "url": "RV32ISPEC.pdf#segment37", "timestamp": "2023-09-18 14:50:19", "segment": "segment37", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "10.1 Base Counters and Timers ", "content": "RV32I provides a number of 64-bit read-only user-level counters, which are mapped into the 12-bit CSR address space and accessed in 32-bit pieces using CSRRS instructions. In RV64I, the CSR instructions can manipulate 64-bit CSRs.
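The CSRRW/CSRRS/CSRRC read-modify-write behaviors described in Section 9.1 can be sketched briefly (an illustrative model only, not normative): each instruction returns the old CSR value, and for the set/clear forms a zero mask, standing in here for rs1=x0, suppresses the write.

```python
# Toy model (not normative) of the Zicsr read-modify-write semantics:
# each instruction returns the old CSR value and applies its update.

csrs = {}

def csrrw(csr, value):
    old = csrs.get(csr, 0)
    csrs[csr] = value                 # unconditional swap
    return old

def csrrs(csr, mask):
    old = csrs.get(csr, 0)
    if mask:                          # mask 0 stands in for rs1=x0: no write
        csrs[csr] = old | mask        # set the masked bits
    return old

def csrrc(csr, mask):
    old = csrs.get(csr, 0)
    if mask:
        csrs[csr] = old & ~mask       # clear the masked bits
    return old

FCSR = 0x003                          # fcsr, an unprivileged CSR address
csrrw(FCSR, 0b1010)                   # write an initial value
old = csrrs(FCSR, 0b0101)             # set bits; returns the old value 0b1010
```

The model deliberately omits details the spec calls out, such as rd=x0 suppressing the read and its side effects, and a non-x0 register that happens to hold zero still performing the write.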
In particular, the RDCYCLE, RDTIME, and RDINSTRET pseudoinstructions read the full 64 bits of the cycle, time, and instret counters, so the RDCYCLEH, RDTIMEH, and RDINSTRETH instructions are not required for RV64I. Some execution environments might prohibit access to counters to impede timing side-channel attacks.

The RDCYCLE pseudoinstruction reads the low XLEN bits of the cycle CSR, which holds a count of the number of clock cycles executed by the processor core on which the hart is running, from an arbitrary start time in the past. RDCYCLEH is an RV32I-only instruction that reads bits 63-32 of the same cycle counter. The underlying 64-bit counter should never overflow in practice. The rate at which the cycle counter advances will depend on the implementation and operating environment; the execution environment should provide a means to determine the current rate (cycles/second) at which it is incrementing.

RDCYCLE is intended to return the number of cycles executed by the processor core, not the hart. Precisely defining what a "core" is can be difficult given some implementation choices (e.g., the AMD Bulldozer). Precisely defining what a "clock cycle" is can also be difficult given the range of implementations (including software emulations), but the intent is that RDCYCLE be used for performance monitoring along with the other performance counters. In particular, where there is one hart per core, one would expect cycle-count/instructions-retired to measure CPI for that hart.

Cores don't have to be exposed to software at all, and an implementor might choose to pretend multiple harts on one physical core are running on separate cores with one hart each, and provide separate cycle counters for each hart. This might make sense in a simple barrel processor (e.g., the CDC 6600 peripheral processors), where inter-hart timing interactions are non-existent or minimal.

Where there is more than one hart per core and dynamic multithreading, it is not generally possible to separate out cycles per hart (especially with SMT). It might be possible to define a separate performance counter that tried to capture the number of cycles a particular hart was running, but the definition would have to be fuzzy to cover all possible threading implementations. For example, should it count only cycles in which an instruction was issued to execution for this hart, and/or cycles in which an instruction retired, or also include cycles in which the hart occupied machine resources but could not execute due to stalls while other harts went ahead with execution? Likely, several counters would be needed to get understandable performance statistics. This complexity of defining a per-hart cycle count, together with the need in any case for a total per-core cycle count when tuning multithreaded code, led to standardizing only the per-core cycle counter, which also happens to work well for the common single-hart-per-core case.

Standardizing what happens during "sleep" is not practical, given that what "sleep" means is not standardized across execution environments. But if the entire core is paused (entirely clock-gated or powered down in deep sleep), then it is not executing clock cycles, and the cycle count shouldn't be increasing per the spec. Many details (e.g., whether clock cycles required to reset a processor after waking up from a power-down event should be counted) are considered execution-environment-specific. Even though there is no precise definition that works for all platforms, this is still a useful facility for most platforms, and an imprecise, common, "usually correct" standard here is better than no standard. The intent of RDCYCLE was primarily performance monitoring and tuning, and the specification was written with that goal in mind.

The RDTIME pseudoinstruction reads the low XLEN bits of the time CSR, which counts wall-clock real time that has passed from an arbitrary start time in the past. RDTIMEH is an RV32I-only instruction that reads bits 63-32 of the same real-time counter. The underlying 64-bit counter should never overflow in practice. The execution environment should provide a means of determining the period of the real-time counter (seconds/tick); the period must be constant. The real-time clocks of all harts in a single user application should be synchronized to within one tick of the real-time clock, and the environment should provide a means to determine the accuracy of the clock.

On some simple platforms, cycle count might represent a valid implementation of RDTIME, in which case the platform should implement RDTIME as an alias for RDCYCLE, to make code more portable, rather than have software use RDCYCLE to measure wall-clock time.

The RDINSTRET pseudoinstruction reads the low XLEN bits of the instret CSR, which counts the number of instructions retired by this hart from some arbitrary start point in the past. RDINSTRETH is an RV32I-only instruction that reads bits 63-32 of the same instruction counter. The underlying 64-bit counter should never overflow in practice.

We recommend provision of these basic counters in implementations, as they are essential for basic performance analysis, adaptive and dynamic optimization, and allowing an application to work with real-time streams. Additional counters should be provided to help diagnose performance problems, and these should be made accessible from user-level application code with low overhead.

The counters are required to be 64 bits wide, even on RV32, as otherwise it is difficult for software to determine whether the values have overflowed. For a low-end implementation, the upper 32 bits of each counter can be implemented using software counters incremented by a trap handler triggered by overflow of the lower 32 bits. The sample code in Figure 10.1 shows how the full 64-bit width value can be safely read using individual 32-bit instructions.
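Reading a 64-bit counter on RV32 takes two 32-bit reads (e.g., RDCYCLEH then RDCYCLE); a carry between the two reads is detected by rereading the high half. A minimal Python model of that hi/lo/hi retry loop, where read_hi and read_lo are hypothetical stand-ins for the CSR-read instructions:

```python
# Model of safely reading a 64-bit counter via two 32-bit halves.
# A real hart would retry when the upper half changed between reads.
counter = 0x1_FFFF_FFFF  # example 64-bit counter value

def read_lo() -> int:
    return counter & 0xFFFF_FFFF          # stands in for RDCYCLE

def read_hi() -> int:
    return (counter >> 32) & 0xFFFF_FFFF  # stands in for RDCYCLEH

def read_counter64() -> int:
    while True:
        hi = read_hi()
        lo = read_lo()
        if hi == read_hi():  # no carry into the upper half while we read
            return (hi << 32) | lo

print(hex(read_counter64()))  # 0x1ffffffff
```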
In some applications, it is important to be able to read multiple counters at the same instant in time. When run under a multitasking environment, a user thread can suffer a context switch while attempting to read the counters. One solution is for the user thread to read the real-time counter before and after reading the other counters, to determine whether a context switch occurred in the middle of the sequence, in which case the reads can be retried. We considered adding output latches to allow a user thread to snapshot the counter values atomically, but this would increase the size of the user context, especially for implementations with a richer set of counters.

10.2 Hardware Performance Counters

CSR space is allocated for 29 additional unprivileged 64-bit hardware performance counters, hpmcounter3-hpmcounter31. On RV32, the upper 32 bits of these performance counters are accessible via the additional CSRs hpmcounter3h-hpmcounter31h. These counters count platform-specific events and are configured via additional privileged registers. The number and width of the additional counters, and the set of events they count, are platform-specific.

The privileged architecture manual describes the privileged CSRs controlling access to these counters and the setting of the events to be counted. It would be useful to eventually standardize event settings to count ISA-level metrics (for example, the number of floating-point instructions executed), and possibly a few common microarchitectural metrics (such as L1 instruction cache misses).

Chapter 11: "F" Standard Extension for Single-Precision Floating-Point, Version 2.2

This chapter describes the standard instruction-set extension for single-precision floating-point, which is named "F" and adds single-precision floating-point computational instructions compliant with the IEEE 754-2008 arithmetic standard [7]. The F extension depends on the "Zicsr" extension for control and status register access.

11.1 F Register State

The F extension adds 32 floating-point registers, f0-f31, each 32 bits wide, and a floating-point control and status register, fcsr, which contains the operating mode and exception status of the floating-point unit. This additional state is shown in Figure 11.1. We use the term FLEN to describe the width of the floating-point registers in the RISC-V ISA; FLEN=32 for the F single-precision floating-point extension. Most floating-point instructions operate on values in the floating-point register file. Floating-point load and store instructions transfer floating-point values between registers and memory. Instructions to transfer values to and from the integer register file are also provided.

We considered a unified register file for both integer and floating-point values, as this simplifies software register allocation and calling conventions and reduces total user state. However, a split organization increases the total number of registers accessible with a given instruction width, simplifies provision of enough regfile ports for wide superscalar issue, supports decoupled floating-point-unit architectures, and simplifies use of internal floating-point encoding techniques. Compiler support and calling conventions for split register file architectures are well understood, and using dirty bits on floating-point register file state can reduce context-switch overhead.

11.2 Floating-Point Control and Status Register

The floating-point control and status register, fcsr, is a RISC-V control and status register (CSR). It is a 32-bit read/write register that selects the dynamic rounding mode for floating-point arithmetic operations and holds the accrued exception flags, as shown in Figure 11.2.

The fcsr register can be read and written with the FRCSR and FSCSR instructions, which are assembler pseudoinstructions built on the underlying CSR access instructions. FRCSR reads fcsr by copying it into integer register rd. FSCSR swaps the value in fcsr by copying the original value into integer register rd, and then writing a new value obtained from integer register rs1 into fcsr.

The fields within fcsr can also be accessed individually through different CSR addresses, and separate assembler pseudoinstructions are defined for these accesses. The FRRM instruction reads the Rounding Mode field frm and copies it into the least-significant three bits of integer register rd, with zero in all other bits. FSRM swaps the value in frm by copying the original value into integer register rd, and then writing a new value obtained from the three least-significant bits of integer register rs1 into frm. FRFLAGS and FSFLAGS are defined analogously for the Accrued Exception Flags field, fflags.

Bits 31-8 of fcsr are reserved for other standard extensions, including the "L" standard extension for decimal floating-point. If these extensions are not present, implementations shall ignore writes to these bits and supply a zero value when read. Standard software should preserve the contents of these bits.

Floating-point operations use either a static rounding mode encoded in the instruction, or a dynamic rounding mode held in frm. Rounding modes are encoded as shown in Table 11.1. A value of 111 in the instruction's rm field selects the dynamic rounding mode held in frm. If frm is set to an invalid value (101-111), any subsequent attempt to execute a floating-point operation with a dynamic rounding mode will raise an illegal instruction exception. Some instructions, including widening conversions, have an rm field but are nevertheless unaffected by the rounding mode; software should set their rm field to RNE (000).

The C99 language standard effectively mandates the provision of a dynamic rounding mode register. In typical implementations, writes to the dynamic rounding mode CSR state will serialize the pipeline. Static rounding modes are used to implement specialized arithmetic operations that often have to switch frequently between different rounding modes.

The accrued exception flags indicate the exception conditions that have arisen on any floating-point arithmetic instruction since the field was last reset by software, as shown in Table 11.2. The base RISC-V ISA does not support generating a trap on the setting of a floating-point exception flag.
As allowed by the standard, we do not support traps on floating-point exceptions in the base ISA, but instead require explicit checks of the flags in software. We considered adding branches controlled directly by the contents of the floating-point accrued exception flags, but ultimately chose to omit these instructions to keep the ISA simple.

11.3 NaN Generation and Propagation

Except where otherwise stated, if the result of a floating-point operation is NaN, it is the canonical NaN. The canonical NaN has a positive sign and all significand bits clear except the MSB, a.k.a. the quiet bit. For single-precision floating-point, this corresponds to the pattern 0x7fc00000.

We considered propagating NaN payloads, as recommended by the standard, but this decision would have increased hardware cost. Moreover, since this feature is optional in the standard, it cannot be used in portable code. Implementors are free to provide a NaN payload propagation scheme as a nonstandard extension enabled by a nonstandard operating mode; however, the canonical NaN scheme described above must always be supported as the default mode.

We require implementations to return the standard-mandated default values in the case of exceptional conditions, without any intervention on the part of user-level software (unlike the Alpha ISA floating-point trap barriers). We believe full hardware handling of exceptional cases will become common, and so wish to avoid complicating the user-level ISA to optimize other approaches. Implementations can always trap to machine-mode software handlers to provide the exceptional default values.
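The canonical NaN encoding from Section 11.3 can be checked numerically. A plain-Python decomposition of 0x7fc00000, not spec text:

```python
import math
import struct

# The canonical single-precision NaN: positive sign, all-ones exponent, and
# only the MSB (the "quiet bit") of the significand set.
CANONICAL_NAN = 0x7FC00000

sign = CANONICAL_NAN >> 31                # 0: positive sign
exponent = (CANONICAL_NAN >> 23) & 0xFF   # 0xff: all ones
significand = CANONICAL_NAN & 0x7FFFFF    # 0x400000: quiet bit only

# Reinterpreting the bit pattern as a float confirms it is a NaN.
value = struct.unpack('<f', struct.pack('<I', CANONICAL_NAN))[0]
print(sign, hex(exponent), hex(significand), math.isnan(value))
# 0 0xff 0x400000 True
```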
11.4 Subnormal Arithmetic

Operations on subnormal numbers are handled in accordance with the IEEE 754-2008 standard. In the parlance of the IEEE standard, tininess is detected after rounding. Detecting tininess after rounding results in fewer spurious underflow signals.

11.5 Single-Precision Load and Store Instructions

Floating-point loads and stores use the same base+offset addressing mode as the integer base ISA, with a base address in register rs1 and a 12-bit signed byte offset. The FLW instruction loads a single-precision floating-point value from memory into floating-point register rd. FSW stores a single-precision value from floating-point register rs2 to memory.

FLW and FSW are only guaranteed to execute atomically if the effective address is naturally aligned. FLW and FSW do not modify the bits being transferred; in particular, the payloads of non-canonical NaNs are preserved.

11.6 Single-Precision Floating-Point Computational Instructions

Floating-point arithmetic instructions with one or two source operands use the R-type format with the OP-FP major opcode. FADD.S and FMUL.S perform single-precision floating-point addition and multiplication, respectively, between rs1 and rs2. FSUB.S performs the single-precision floating-point subtraction of rs2 from rs1. FDIV.S performs the single-precision floating-point division of rs1 by rs2. FSQRT.S computes the square root of rs1. In each case, the result is written to rd. The 2-bit floating-point format field fmt is encoded as shown in Table 11.3; it is set to 00 for all instructions in the F extension. All floating-point operations that perform rounding can select the rounding mode using the rm field with the encoding shown in Table 11.1.

Floating-point minimum-number and maximum-number instructions, FMIN.S and FMAX.S, write, respectively, the smaller or larger of rs1 and rs2 to rd. For the purposes of these instructions, the value -0.0 is considered to be less than the value +0.0. If both inputs are NaNs, the result is the canonical NaN. If only one operand is a NaN, the result is the non-NaN operand. Signaling NaN inputs set the invalid operation exception flag, even when the result is not NaN.

Note: in version 2.2 of the F extension, the FMIN.S and FMAX.S instructions were amended to implement the proposed IEEE 754-201x minimumNumber and maximumNumber operations, rather than the IEEE 754-2008 minNum and maxNum operations. These operations differ in their handling of signaling NaNs.

Floating-point fused multiply-add instructions require a new standard instruction format, R4-type, whose instructions specify three source registers (rs1, rs2, and rs3) and a destination register (rd). This format is used only by the floating-point fused multiply-add instructions.

FMADD.S multiplies the values in rs1 and rs2, adds the value in rs3, and writes the final result to rd; FMADD.S computes (rs1 x rs2) + rs3. FMSUB.S multiplies the values in rs1 and rs2, subtracts the value in rs3, and writes the final result to rd; FMSUB.S computes (rs1 x rs2) - rs3. FNMSUB.S multiplies the values in rs1 and rs2, negates the product, adds the value in rs3, and writes the final result to rd; FNMSUB.S computes -(rs1 x rs2) + rs3. FNMADD.S multiplies the values in rs1 and rs2, negates the product, subtracts the value in rs3, and writes the final result to rd; FNMADD.S computes -(rs1 x rs2) - rs3.

The FNMSUB and FNMADD instructions are counterintuitively named, owing to the naming of the corresponding instructions in MIPS-IV. The MIPS instructions were defined to negate the sum, rather than negating the product as the RISC-V instructions do, so the naming scheme was more rational at the time. The two definitions differ with respect to signed-zero results. The RISC-V definition matches the behavior of the x86 and ARM fused multiply-add instructions, but unfortunately the RISC-V FNMSUB and FNMADD instruction names are swapped compared to x86 and ARM.

The fused multiply-add (FMA) instructions consume a large part of the 32-bit instruction encoding space. Some alternatives were considered: restricting FMA to dynamic rounding modes only, but static rounding modes are useful in code that exploits the lack of product rounding; or using rd to provide rs3, but this would require additional move instructions in some common sequences. The current design still leaves a large portion of the 32-bit encoding space open while avoiding having FMA be non-orthogonal.

The fused multiply-add instructions must set the invalid operation exception flag when the multiplicands are infinity and zero, even when the addend is a quiet NaN. (The IEEE 754-2008 standard permits, but does not require, raising the invalid exception for the operation infinity x 0 + qNaN.)
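The four product/sum sign combinations can be summarized with ordinary Python arithmetic. Note that a real FMA applies a single rounding to the whole expression, which this non-fused sketch does not model:

```python
# The four fused multiply-add forms, modeled with ordinary (non-fused)
# arithmetic; the sign conventions match the RISC-V definitions above.
def fmadd(a, b, c):
    return a * b + c       # FMADD.S:  (rs1 x rs2) + rs3

def fmsub(a, b, c):
    return a * b - c       # FMSUB.S:  (rs1 x rs2) - rs3

def fnmsub(a, b, c):
    return -(a * b) + c    # FNMSUB.S: -(rs1 x rs2) + rs3

def fnmadd(a, b, c):
    return -(a * b) - c    # FNMADD.S: -(rs1 x rs2) - rs3

print(fmadd(2.0, 3.0, 1.0))   # 7.0
print(fnmadd(2.0, 3.0, 1.0))  # -7.0
```

The negated-product (rather than negated-sum) convention is visible with signed zeros: fnmsub(0.0, 1.0, 0.0) yields -0.0 + 0.0 = 0.0, whereas negating the sum would yield -0.0.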
"timestamp": "2023-09-18 14:50:21", "segment": "segment46", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2011.3.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(11.6).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra2(11.6).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "11.7 Single-Precision Floating-Point Conversion and Move Instructions ", "content": "floatingpointtointeger integertofloatingpoint conversion instructions encoded opfp major opcode space fcvtws fcvtls converts floatingpoint number floating point register rs1 signed 32bit 64bit integer respectively integer register rd fcvtsw fcvtsl converts 32bit 64bit signed integer respectively integer register rs1 floatingpoint number floatingpoint register rd fcvtwus fcvtlus fcvtswu fcvtslu variants convert unsigned integer values xlen 32 fcvtw u s signextends 32bit result destination register width fcvtl u s fcvtsl u rv64only instructions rounded result representable destination format clipped nearest value invalid flag set table 114 gives range valid inputs fcvtints behavior invalid inputs floatingpoint integer integer floatingpoint conversion instructions round according rm field floatingpoint register initialized floatingpoint positive zero using fcvtsw rd x0 never set exception flags floatingpoint floatingpoint signinjection instructions fsgnjs fsgnjns fsgnjxs produce result takes bits except sign bit rs1 fsgnj result sign bit rs2 sign bit fsgnjn result sign bit opposite rs2 sign bit fsgnjx sign bit xor sign bits rs1 rs2 signinjection instructions set floatingpoint exception flags canonicalize nans note fsgnjs rx ry ry moves ry rx assembler pseudoinstruction fmvs rx ry fsgnjns rx ry ry moves negation ry rx assembler pseudoinstruction fnegs rx ry fsgnjxs rx ry ry moves absolute value ry rx assembler pseudoinstruction fabss rx ry signinjection instructions provide 
floatingpoint mv abs neg well supporting operations including ieee copysign operation sign manipulation tran scendental math function libraries although mv abs neg need single register operand whereas fsgnj instructions need two unlikely microarchitectures would add optimizations benefit reduced number register reads relatively infrequent instructions even case microarchitecture simply detect source registers fsgnj instructions read single copy instructions provided move bit patterns floatingpoint integer registers fmvxw moves singleprecision value floatingpoint register rs1 represented ieee 754 2008 encoding lower 32 bits integer register rd bits modified transfer particular payloads noncanonical nans preserved rv64 higher 32 bits destination register filled copies floatingpoint number sign bit fmvwx moves singleprecision value encoded ieee 7542008 standard encoding lower 32 bits integer register rs1 floatingpoint register rd bits modified transfer particular payloads noncanonical nans preserved fmvwx fmvxw instructions previously called fmvsx fmvxs use w consistent semantics instruction moves 32 bits without interpreting became clearer defining nanboxing avoid disturbing existing code w versions supported tools base floatingpoint isa defined allow implementations employ internal recoding floatingpoint format registers simplify handling subnormal values possibly reduce functional unit latency end base isa avoids representing integer values floatingpoint registers defining conversion comparison operations read write integer register file directly also removes many common cases explicit moves integer floatingpoint registers required reducing instruction count critical paths common mixedformat code sequences", "url": "RV32ISPEC.pdf#segment47", "timestamp": "2023-09-18 14:50:21", "segment": "segment47", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2011.4.jpg?raw=true", 
"https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(11.7).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra3(11.7).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(12.7).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "11.8 Single-Precision Floating-Point Compare Instructions ", "content": "floatingpoint compare instructions feqs flts fles perform specified comparison be tween floatingpoint registers rs1 rs2 rs1 rs2 rs1 rs2 writing 1 integer register rd condition holds 0 otherwise flts fles perform ieee 7542008 standard refers signaling comparisons set invalid operation exception flag either input nan feqs performs quiet comparison sets invalid operation exception flag either input signaling nan three instructions result 0 either operand nan f extension provides comparison whereas base isa provides branch comparison synthesized viceversa performance implication inconsistency nevertheless unfortunate incongruity isa", "url": "RV32ISPEC.pdf#segment48", "timestamp": "2023-09-18 14:50:21", "segment": "segment48", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(11.8).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "11.9 Single-Precision Floating-Point Classify Instruction ", "content": "fclasss instruction examines value floatingpoint register rs1 writes integer register rd 10bit mask indicates class floatingpoint number format mask described table 115 corresponding bit rd set property true clear otherwise bits rd cleared note exactly one bit rd set fclasss set floatingpoint exception flags", "url": "RV32ISPEC.pdf#segment49", "timestamp": "2023-09-18 14:50:21", "segment": "segment49", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(11.9).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2011.5.jpg?raw=true" ], 
"Book": "riscv-spec-20191213" }, { "section": "Chapter 12 ", "content": "standard extension doubleprecision floatingpoint version 22 chapter describes standard doubleprecision floatingpoint instructionset extension named adds doubleprecision floatingpoint computational instructions compliant ieee 7542008 arithmetic standard extension depends base singleprecision instruction subset f", "url": "RV32ISPEC.pdf#segment50", "timestamp": "2023-09-18 14:50:21", "segment": "segment50", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "12.1 D Register State ", "content": "extension widens 32 floatingpoint registers f0f31 64 bits flen64 fig ure 111 f registers hold either 32bit 64bit floatingpoint values described section 122 flen 32 64 128 depending f q extensions supported four different floatingpoint precisions supported including h f q", "url": "RV32ISPEC.pdf#segment51", "timestamp": "2023-09-18 14:50:21", "segment": "segment51", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "12.2 NaN Boxing of Narrower Values ", "content": "multiple floatingpoint precisions supported valid values narrower nbit types n flen represented lower n bits flenbit nan value process termed nanboxing upper bits valid nanboxed value must 1s valid nanboxed nbit values therefore appear negative quiet nans qnans viewed wider mbit value n flen operation writes narrower result f register must write 1s uppermost flenn bits yield legal nanboxed value software might know current type data stored floatingpoint register able save restore register values hence result using wider operations transfer narrower values defined common case calleesaved registers standard convention also desirable features including varargs userlevel threading libraries virtual machine migration debugging floatingpoint nbit transfer operations move external values held ieee standard formats f registers comprise floatingpoint loads stores flnfsn floating point move instructions fmvnxfmvxn narrower nbit 
transfer n flen f registers create valid nanboxed value narrower nbit transfer floatingpoint registers transfer lower n bits register ignoring upper flenn bits apart transfer operations described previous paragraph floatingpoint opera tions narrower nbit operations n flen check input operands correctly nanboxed ie upper flenn bits 1 n leastsignificant bits input used input value otherwise input value treated nbit canonical nan earlier versions document define behavior feeding results narrower wider operands operation except require wider saves restores would preserve value narrower operand new definition removes implementationspecific behav ior still accommodating nonrecoded recoded implementations floatingpoint unit new definition also helps catch software errors propagating nans values used incorrectly nonrecoded implementations unpack pack operands ieee standard format input output every floatingpoint operation nanboxing cost nonrecoded implementation primarily checking upper bits narrower operation represent legal nanboxed value writing 1s upper bits result recoded implementations use convenient internal format represent floatingpoint values added exponent bit allow values held normalized cost recoded implementation primarily extra tagging needed track internal types sign bits done without adding new state bits recoding nans internally exponent field small modifications needed pipelines used transfer values recoded format datapath latency costs minimal recoding process handle shifting input subnormal values wide operands case extracting nanboxed value similar process normalization except skipping leading1 bits instead skipping leading0 bits allowing datapath muxing shared", "url": "RV32ISPEC.pdf#segment52", "timestamp": "2023-09-18 14:50:21", "segment": "segment52", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "12.3 Double-Precision Load and Store Instructions ", "content": "fld instruction loads doubleprecision floatingpoint value memory 
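The boxing and unboxing rules above can be modeled directly. An illustrative Python sketch (FLEN=64 holding a 32-bit single), not spec text:

```python
# NaN boxing: a valid single-precision value held in an FLEN=64 register
# must have all 1s in its upper 32 bits; any other upper-bit pattern makes
# a narrower operation treat the operand as the 32-bit canonical NaN.
CANONICAL_NAN_S = 0x7FC00000

def nanbox32(bits32: int) -> int:
    """Write a single-precision result into an FLEN=64 register."""
    return (0xFFFF_FFFF << 32) | (bits32 & 0xFFFF_FFFF)

def unbox32(bits64: int) -> int:
    """Fetch a single-precision operand; bad boxing reads as canonical NaN."""
    if bits64 >> 32 == 0xFFFF_FFFF:
        return bits64 & 0xFFFF_FFFF
    return CANONICAL_NAN_S

ONE_S = 0x3F800000  # 1.0f
print(hex(nanbox32(ONE_S)))                 # 0xffffffff3f800000
print(unbox32(nanbox32(ONE_S)) == ONE_S)    # True
print(unbox32(ONE_S) == CANONICAL_NAN_S)    # True: upper bits not all 1s
```

Viewed as a 64-bit double, 0xffffffff3f800000 has sign 1, an all-ones exponent, and a set quiet bit, so a valid NaN-boxed single does appear as a negative quiet NaN, consistent with the text above.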
12.3 Double-Precision Load and Store Instructions

The FLD instruction loads a double-precision floating-point value from memory into floating-point register rd. FSD stores a double-precision value from the floating-point registers to memory.

FLD and FSD are only guaranteed to execute atomically if the effective address is naturally aligned and XLEN=64. FLD and FSD do not modify the bits being transferred; in particular, the payloads of non-canonical NaNs are preserved.

12.4 Double-Precision Floating-Point Computational Instructions

The double-precision floating-point computational instructions are defined analogously to their single-precision counterparts, but operate on double-precision operands and produce double-precision results.

12.5 Double-Precision Floating-Point Conversion and Move Instructions

Floating-point-to-integer and integer-to-floating-point conversion instructions are encoded in the OP-FP major opcode space. FCVT.W.D or FCVT.L.D converts a double-precision floating-point number in floating-point register rs1 to a signed 32-bit or 64-bit integer, respectively, in integer register rd. FCVT.D.W or FCVT.D.L converts a 32-bit or 64-bit signed integer, respectively, in integer register rs1 into a double-precision floating-point number in floating-point register rd. FCVT.WU.D, FCVT.LU.D, FCVT.D.WU, and FCVT.D.LU variants convert to or from unsigned integer values. For RV64, FCVT.W[U].D sign-extends the 32-bit result. FCVT.L[U].D and FCVT.D.L[U] are RV64-only instructions. The range of valid inputs for FCVT.int.D and the behavior for invalid inputs are the same as for FCVT.int.S.

All floating-point-to-integer and integer-to-floating-point conversion instructions round according to the rm field. Note that FCVT.D.W[U] always produces an exact result and is unaffected by the rounding mode.

The double-precision to single-precision and single-precision to double-precision conversion instructions, FCVT.S.D and FCVT.D.S, are encoded in the OP-FP major opcode space, and both the source and destination are floating-point registers. The rs2 field encodes the datatype of the source, and the fmt field encodes the datatype of the destination. FCVT.S.D rounds according to the rm field; FCVT.D.S will never round.

Floating-point-to-floating-point sign-injection instructions, FSGNJ.D, FSGNJN.D, and FSGNJX.D, are defined analogously to the single-precision sign-injection instructions.

For XLEN=64 only, instructions are provided to move bit patterns between the floating-point and integer registers. FMV.X.D moves the double-precision value in floating-point register rs1 to a representation in IEEE 754-2008 standard encoding in integer register rd. FMV.D.X moves the double-precision value encoded in IEEE 754-2008 standard encoding from integer register rs1 to floating-point register rd. FMV.X.D and FMV.D.X do not modify the bits being transferred; in particular, the payloads of non-canonical NaNs are preserved.

Early versions of the RISC-V ISA had additional instructions to allow RV32 systems to transfer between the upper and lower portions of a 64-bit floating-point register and an integer register. However, these would have been the only instructions with partial register writes, and they would add complexity in implementations with recoded floating-point formats or register renaming, requiring a pipeline read-modify-write sequence. Scaling up to handling quad-precision for RV32 and RV64 would also require additional instructions if they were to follow this pattern. The ISA was defined to reduce the number of explicit int-float register moves, by having conversions and comparisons write their results to the appropriate register file, so we expect the benefit of these instructions would be lower than for other ISAs.

We note that for systems that implement a 64-bit floating-point unit, including fused multiply-add support and 64-bit floating-point loads and stores, the marginal hardware cost of moving from a 32-bit to a 64-bit integer datapath is low, and a software ABI supporting 32-bit wide address space and pointers can be used to avoid growth of static data and dynamic memory traffic.

12.6 Double-Precision Floating-Point Compare Instructions

The double-precision floating-point compare instructions are defined analogously to their single-precision counterparts, but operate on double-precision operands.

12.7 Double-Precision Floating-Point Classify Instruction

The double-precision floating-point classify instruction, FCLASS.D, is defined analogously to its single-precision counterpart, but operates on double-precision operands.

Chapter 13: "Q" Standard Extension for Quad-Precision Floating-Point, Version 2.2

This chapter describes the Q standard extension for 128-bit quad-precision binary floating-point instructions compliant with the IEEE 754-2008 arithmetic standard. The quad-precision binary floating-point instruction-set extension is named "Q" and depends on the double-precision floating-point extension D. The floating-point registers are now extended to hold either a single, double, or quad-precision floating-point value (FLEN=128). The NaN-boxing scheme described in Section 12.2 is extended recursively to allow a single-precision value to be NaN-boxed inside a double-precision value, which is itself NaN-boxed inside a quad-precision value.
"riscv-spec-20191213" }, { "section": "13.1 Quad-Precision Load and Store Instructions ", "content": "new 128bit variants loadfp storefp instructions added encoded new value funct3 width field flq fsq guaranteed execute atomically effective address naturally aligned xlen128 flq fsq modify bits transferred particular payloads noncanonical nans preserved", "url": "RV32ISPEC.pdf#segment59", "timestamp": "2023-09-18 14:50:22", "segment": "segment59", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(13.1).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "13.2 Quad-Precision Computational Instructions ", "content": "new supported format added format field instructions shown table 131 quadprecision floatingpoint c omputational nstructions efined alogously doubleprecision counterparts operate quadprecision operands produce quadprecision results", "url": "RV32ISPEC.pdf#segment60", "timestamp": "2023-09-18 14:50:22", "segment": "segment60", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2013.1.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(13.2).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "13.3 Quad-Precision Convert and Move Instructions ", "content": "new floatingpointtointeger integertofloatingpoint conversion instructions added instructions defined analogously doubleprecisiontointeger integertodouble precision conversion instructions fcvtwq fcvtlq converts quadprecision floating point number signed 32bit 64bit integer respectively fcvtqw fcvtql con verts 32bit 64bit signed integer respectively quadprecision floatingpoint number fcvtwuq fcvtluq fcvtqwu fcvtqlu variants convert unsigned integer values fcvtl u q fcvtql u rv64only instructions new floatingpointtofloatingpoint conversion instructions added instructions de fined analogously doubleprecision floatingpointtofloatingpoint conversion instructions fcvtsq 
or FCVT.Q.S converts a quad-precision floating-point number to a single-precision floating-point number, or vice-versa, respectively. FCVT.D.Q or FCVT.Q.D converts a quad-precision floating-point number to a double-precision floating-point number, or vice-versa, respectively. New floating-point to floating-point sign-injection instructions, FSGNJ.Q, FSGNJN.Q, and FSGNJX.Q, are defined analogously to the double-precision sign-injection instructions. FMV.X.Q and FMV.Q.X instructions are not provided in RV32 or RV64, so quad-precision bit patterns must be moved to the integer registers via memory. RV128 will support FMV.X.Q and FMV.Q.X in the Q extension.", "url": "RV32ISPEC.pdf#segment61", "timestamp": "2023-09-18 14:50:22", "segment": "segment61", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(13.3).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra2(13.3).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra3(13.3).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "13.4 Quad-Precision Floating-Point Compare Instructions ", "content": "The quad-precision floating-point compare instructions are defined analogously to their double-precision counterparts, but operate on quad-precision operands.", "url": "RV32ISPEC.pdf#segment62", "timestamp": "2023-09-18 14:50:22", "segment": "segment62", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(13.4).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "13.5 Quad-Precision Floating-Point Classify Instruction ", "content": "The quad-precision floating-point classify instruction, FCLASS.Q, is defined analogously to its double-precision counterpart, but operates on quad-precision operands.", "url": "RV32ISPEC.pdf#segment63", "timestamp": "2023-09-18 14:50:22", "segment": "segment63", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(13.5).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "Chapter 14 ", "content": "RVWMO Memory 
consistency model memory consistency model set rules specifying values returned loads memory riscv uses memory model called rvwmo riscv weak memory ordering designed provide flexibility architects build highperformance scalable designs simultaneously supporting tractable programming model rvwmo code running single hart appears execute order perspective memory instructions hart memory instructions another hart may observe memory instructions first hart executed different order there fore multithreaded code may require explicit synchronization guarantee ordering mem ory instructions different harts base riscv isa provides fence instruction purpose described section 27 atomics extension additionally defines load reservedstoreconditional atomic readmodifywrite instructions standard isa extension misaligned atomics zam chapter 22 standard isa extension total store ordering ztso chapter 23 augment rvwmo additional rules specific extensions appendices specification provide axiomatic operational formalizations memory consistency model well additional explanatory material chapter defines memory model regular main memory operations interaction memory model io memory instruction fetches fencei page table walks sfencevma yet formalized may formalized future revision specification rv128 base isa future isa extensions v vector transactional memory j jit extensions need incorporated future revision well memory consistency models supporting overlapping memory accesses different widths si multaneously remain active area academic research yet fully understood specifics memory accesses different sizes interact rvwmo specified best current abilities subject revision new issues uncovered 83", "url": "RV32ISPEC.pdf#segment64", "timestamp": "2023-09-18 14:50:22", "segment": "segment64", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "14.1 Definition of the RVWMO Memory Model ", "content": "rvwmo memory model defined terms global memory order total ordering memory 
operations produced by all harts. In general, a multithreaded program has many different possible executions, with each execution having its own corresponding global memory order. The global memory order is defined over the primitive load and store operations generated by memory instructions, and is subject to the constraints defined in the rest of this chapter. Any execution satisfying all of the memory model constraints is a legal execution, as far as the memory model is concerned. Memory Model Primitives. The program order over memory operations reflects the order in which the instructions that generate each load and store are logically laid out in that hart's dynamic instruction stream; i.e., the order in which a simple in-order processor would execute the instructions of that hart. Memory-accessing instructions give rise to memory operations. A memory operation can be either a load operation, a store operation, or both simultaneously. All memory operations are single-copy atomic: they can never be observed in a partially complete state. Among instructions in RV32GC and RV64GC, each aligned memory instruction gives rise to exactly one memory operation, with two exceptions. First, an unsuccessful SC instruction does not give rise to any memory operations. Second, FLD and FSD instructions may each give rise to multiple memory operations if XLEN is less than 64, as stated in Section 12.3 and clarified below. An aligned AMO gives rise to a single memory operation that is both a load operation and a store operation simultaneously. Instructions in the RV128 base instruction set and in future ISA extensions such as V (vector) and P (SIMD) may give rise to multiple memory operations; however, the memory model for these extensions has not yet been formalized. A misaligned load or store instruction may be decomposed into a set of component memory operations of any granularity. An FLD or FSD instruction for which XLEN is less than 64 may also be decomposed into a set of component memory operations of any granularity. The memory operations generated by such instructions are not ordered with respect to each other in program order, but they are ordered normally with respect to the memory operations generated by preceding and subsequent instructions in program order. The atomics extension does not require execution environments to support misaligned atomic instructions at all; however, if misaligned atomics are supported via the Zam extension, LRs, SCs, and AMOs may be decomposed subject to the constraints of the atomicity axiom for misaligned atomics, which is defined in Chapter 22. The decomposition of misaligned memory operations down to byte granularity facilitates emulation on implementations that do not natively support misaligned accesses. Such implementations might, for example, simply iterate over the bytes of a misaligned access one by one. An LR instruction and an SC instruction are said to be paired if the LR precedes the SC in program order and if there are no other LR or SC instructions in between; the corresponding memory operations are said to be paired as well, except in the case of a failed SC, where no store operation is generated. The complete list of conditions determining whether an SC must succeed, may succeed, or must fail is defined in Section 8.2. Load and store operations may also carry one or more ordering annotations from the following set: acquire-RCpc, acquire-RCsc, release-RCpc, and release-RCsc. An AMO or LR instruction with aq set has an acquire-RCsc annotation. An AMO or SC instruction with rl set has a release-RCsc annotation. An AMO, LR, or SC instruction with both aq and rl set has both acquire-RCsc and release-RCsc annotations. For convenience, we use the term acquire annotation to refer to an acquire-RCpc annotation or an acquire-RCsc annotation. Likewise, a release annotation refers to a release-RCpc annotation or a release-RCsc annotation. An RCpc annotation refers to an acquire-RCpc annotation or a release-RCpc annotation. An RCsc annotation refers to an acquire-RCsc annotation or a release-RCsc annotation. In the memory model literature, the term RCpc stands for release consistency with processor-consistent synchronization operations, and the term RCsc stands for release consistency with sequentially consistent synchronization operations [5]. While there are many different definitions for acquire and release annotations in the literature, in the context of RVWMO these terms are concisely and completely defined by preserved program order rules 5-7. RCpc annotations are currently only used when implicitly assigned to every memory access, per the standard extension Ztso (Chapter 23). Furthermore, although the ISA does not currently contain native load-acquire or store-release instructions, nor RCpc variants thereof, the RVWMO model is designed to be forwards-compatible with their potential addition to the ISA in a future extension. Syntactic Dependencies. The definition of the RVWMO memory model depends in part on the notion of a syntactic dependency, defined as follows. In the context of defining dependencies, a register refers either to an entire general-purpose register, some portion of a CSR, or an entire CSR. The granularity at which dependencies are tracked through CSRs is specific to each CSR and is defined in Section 14.2. Syntactic dependencies are defined in 
terms of instructions' source registers, instructions' destination registers, and the way instructions carry a dependency from their source registers to their destination registers. This section provides a general definition of these terms; however, Section 14.3 provides a complete listing of the specifics for each instruction. In general, a register r other than x0 is a source register for an instruction i if any of the following hold: in the opcode of i, rs1, rs2, or rs3 is set to r; i is a CSR instruction, and in the opcode of i, csr is set to r, unless i is CSRRW or CSRRWI and rd is set to x0; r is a CSR and an implicit source register of i, as defined in Section 14.3; or r is a CSR that aliases with another source register of i. Memory instructions also further specify which source registers are address source registers and which are data source registers. In general, a register r other than x0 is a destination register for an instruction i if any of the following hold: in the opcode of i, rd is set to r; i is a CSR instruction, and in the opcode of i, csr is set to r, unless i is CSRRS or CSRRC and rs1 is set to x0, or i is CSRRSI or CSRRCI and uimm[4:0] is set to zero; r is a CSR and an implicit destination register of i, as defined in Section 14.3; or r is a CSR that aliases with another destination register of i. Most non-memory instructions carry a dependency from each of their source registers to each of their destination registers; however, there are exceptions to this rule, as noted in Section 14.3. An instruction j has a syntactic dependency on instruction i via destination register s of i and source register r of j if either of the following hold: first, s is the same as r, and no instruction program-ordered between i and j has r as a destination register; or second, there is an instruction m program-ordered between i and j such that all of the following hold: (1) j has a syntactic dependency on m via destination register q and source register r; (2) m has a syntactic dependency on i via destination register s and source register p; and (3) m carries a dependency from p to q. Finally, in the definitions that follow, let a and b be two memory operations, and let i and j be the instructions that generate a and b, respectively. b has a syntactic address dependency on a if r is an address source register for j and j has a syntactic dependency on i via source register r. b has a syntactic data dependency on a if b is a store operation, r is a data source register for j, and j has a syntactic dependency on i via source register r. b has a syntactic control dependency on a if there is an instruction m program-ordered between i and j such that m is a branch or indirect jump and m has a syntactic dependency on i. Generally speaking, non-AMO load instructions do not have data source registers, and unconditional non-AMO store instructions do not have destination registers. However, a successful SC instruction is considered to have the register specified in rd as a destination 
register; hence, it is possible for an instruction to have a syntactic dependency on a successful SC instruction that precedes it in program order. Preserved Program Order. The global memory order for any given execution of a program respects some but not all of each hart's program order. The subset of program order that must be respected by the global memory order is known as preserved program order. The complete definition of preserved program order is as follows (note that AMOs are simultaneously both loads and stores): memory operation a precedes memory operation b in preserved program order (and hence also in the global memory order) if a precedes b in program order, a and b both access regular main memory (rather than I/O regions), and any of the following hold. Overlapping-Address Orderings: (1) b is a store, and a and b access overlapping memory addresses; (2) a and b are loads, x is a byte read by both a and b, there is no store to x between a and b in program order, and a and b return values for x written by different memory operations; (3) a is generated by an AMO or SC instruction, b is a load, and b returns a value written by a. Explicit Synchronization: (4) there is a FENCE instruction that orders a before b; (5) a has an acquire annotation; (6) b has a release annotation; (7) a and b both have RCsc annotations; (8) a is paired with b. Syntactic Dependencies: (9) b has a syntactic address dependency on a; (10) b has a syntactic data dependency on a; (11) b is a store, and b has a syntactic control dependency on a. Pipeline Dependencies: (12) b is a load, and there exists some store m between a and b in program order such that m has an address or data dependency on a, and b returns a value written by m; (13) b is a store, and there exists some instruction m between a and b in program order such that m has an address dependency on a. Memory Model Axioms. An execution of a RISC-V program obeys the RVWMO memory consistency model if and only if there exists a global memory order conforming to preserved program order and satisfying the load value axiom, the atomicity axiom, and the progress axiom. Load Value Axiom: each byte of each load i returns the value written to that byte by the store that is the latest in global memory order among the following stores: (1) stores that write that byte and that precede i in the global memory order; (2) stores that write that byte and that precede i in program order. Atomicity Axiom: if r and w are paired load and store operations generated by aligned LR and SC instructions in a hart h, and if s is a store to byte x such that r returns a value written by s, then s must precede w in the global memory order, and there can be no store from a hart other than h to byte x following s and preceding w in the global memory order. The atomicity axiom theoretically supports LR/SC pairs of different widths and to mismatched addresses, since implementations are permitted to allow SC 
operations to succeed in such cases. However, in practice we expect such patterns to be rare, and their use is discouraged. Progress Axiom: no memory operation may be preceded in the global memory order by an infinite sequence of other memory operations.", "url": "RV32ISPEC.pdf#segment65", "timestamp": "2023-09-18 14:50:23", "segment": "segment65", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "14.2 CSR Dependency Tracking Granularity", "content": "Note that read-only CSRs are not listed, because they do not participate in the definition of syntactic dependencies.", "url": "RV32ISPEC.pdf#segment65", "timestamp": "2023-09-18 14:50:23", "segment": "segment65", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2014.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "14.3 Source and Destination Register Listings ", "content": "This section provides a concrete listing of the source and destination registers for each instruction. These listings are used in the definition of syntactic dependencies in Section 14.1. The term accumulating CSR is used to describe a CSR that is both a source and a destination register, but which carries a dependency only from itself to itself. Instructions carry a dependency from each source register in the Source Registers column to each destination register in the Destination Registers column, from each source register in the Source Registers column to each CSR in the Accumulating CSRs column, and from each CSR in the Accumulating CSRs column to itself, except where annotated otherwise. Key: A marks an address source register; D marks a data source register; one annotation marks instructions that do not carry a dependency from any source register to any destination register, and another marks instructions that carry dependencies from their source registers to their destination registers only as specified.", "url": "RV32ISPEC.pdf#segment66", "timestamp": "2023-09-18 14:50:23", "segment": "segment66", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(14.3).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra2(14.3).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra3(14.3).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra4(14.3).jpg?raw=true", 
"https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra5(14.3).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "Chapter 15 ", "content": "l standard extension decimal floatingpoint version 00 chapter draft proposal ratified foundation chapter placeholder specification standard extension named l designed support decimal floatingpoint arithmetic defined ieee 7542008 standard", "url": "RV32ISPEC.pdf#segment67", "timestamp": "2023-09-18 14:50:23", "segment": "segment67", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "15.1 Decimal Floating-Point Registers ", "content": "existing floatingpoint registers used hold 64bit 128bit decimal floatingpoint values existing floatingpoint load store instructions used move values memory due large opcode space required fused multiplyadd instructions decimal floating point instruction extension require five 25bit major opcodes 30bit encoding space", "url": "RV32ISPEC.pdf#segment68", "timestamp": "2023-09-18 14:50:23", "segment": "segment68", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 16 ", "content": "c standard extension compressed instructions version 20 chapter describes current proposal riscv standard compressed instructionset extension named c reduces static dynamic code size adding short 16bit instruction encodings common operations c extension added base isas rv32 rv64 rv128 use generic term rvc cover typically 50 60 riscv instructions program replaced rvc instructions resulting 25 30 codesize reduction", "url": "RV32ISPEC.pdf#segment69", "timestamp": "2023-09-18 14:50:23", "segment": "segment69", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "16.1 Overview ", "content": "rvc uses simple compression scheme offers shorter 16bit versions common 32bit riscv instructions immediate address offset small one registers zero register x0 abi link register x1 abi stack pointer x2 destination register first source register identical registers 
used are the 8 most popular ones. The C extension is compatible with all other standard instruction extensions. It allows 16-bit instructions to be freely intermixed with 32-bit instructions, with the latter now able to start on any 16-bit boundary, i.e., IALIGN=16. With the addition of the C extension, no instructions can raise instruction-address-misaligned exceptions. Removing the 32-bit alignment constraint on the original 32-bit instructions allows significantly greater code density. The compressed instruction encodings are mostly common across RV32C, RV64C, and RV128C, but as shown in Table 16.4, a few opcodes are used for different purposes depending on the base ISA width. For example, the wider address-space RV64C and RV128C variants require additional opcodes to compress loads and stores of 64-bit integer values, while RV32C uses the same opcodes to compress loads and stores of single-precision floating-point values. Similarly, RV128C requires additional opcodes to capture loads and stores of 128-bit integer values, while the same opcodes are used for loads and stores of double-precision floating-point values in RV32C and RV64C. If the C extension is implemented, the appropriate compressed floating-point load and store instructions must be provided whenever the relevant standard floating-point extension (F and/or D) is also implemented. In addition, RV32C includes a compressed jump and link instruction to compress short-range subroutine calls, where the same opcode is used to compress ADDIW for RV64C and RV128C. Double-precision loads and stores are a significant fraction of static and dynamic instructions, hence the motivation to include them in the RV32C and RV64C encoding. Although single-precision loads and stores are not a significant source of static or dynamic compression for benchmarks compiled for the currently supported ABIs, many microcontrollers provide hardware single-precision floating-point units and have ABIs that support single-precision floating-point numbers. For those systems, single-precision loads and stores will be used at least as frequently as double-precision loads and stores in the measured benchmarks, hence the motivation to provide compressed support for them in RV32C. Short-range subroutine calls are likely in small binaries for microcontrollers, hence the motivation to include these in RV32C. Although reusing opcodes for different purposes on different base register widths adds some complexity to the documentation, the impact on implementation complexity is small even for designs that support multiple base 
ISA register widths. The compressed floating-point load and store variants use the same instruction format and register specifiers as the wider integer loads and stores. RVC was designed under the constraint that each RVC instruction expands into a single 32-bit instruction in either the base ISA (RV32I/E, RV64I, or RV128I) or the F and D standard extensions where present. Adopting this constraint has two main benefits: hardware designs can simply expand RVC instructions during decode, simplifying verification and minimizing modifications to existing microarchitectures; and compilers can be unaware of the RVC extension and leave code compression to the assembler and linker, although a compression-aware compiler will generally be able to produce better results. We felt the multiple complexity reductions of a simple one-one mapping between C and base IFD instructions far outweighed the potential gains in code size from a slightly denser encoding that added additional instructions only supported in the C extension, or that allowed encoding of multiple IFD instructions in one C instruction. It is important to note that the C extension is not designed to be a stand-alone ISA; it is meant to be used alongside a base ISA. Variable-length instruction sets have long been used to improve code density. For example, the IBM Stretch [4], developed in the late 1950s, had an ISA with 32-bit and 64-bit instructions, where some 32-bit instructions were compressed versions of the full 64-bit instructions. Stretch also employed the concept of limiting the set of registers addressable in the shorter instruction formats, with short branch instructions that could only refer to one of the index registers. The later IBM 360 architecture [3] supported a simple variable-length instruction encoding with 16-bit, 32-bit, or 48-bit instruction formats. In 1963, CDC introduced the Cray-designed CDC 6600 [18], a precursor to RISC architectures, which introduced a register-rich load-store architecture with instructions of two lengths, 15 bits and 30 bits. The later Cray-1 design used a very similar instruction format, with 16-bit and 32-bit instruction lengths. The initial RISC ISAs from the 1980s all picked performance over code size, which was reasonable for a workstation environment but not for embedded systems. Hence, both ARM and MIPS subsequently made versions of their ISAs that offered smaller code size by providing an alternative 16-bit wide instruction set instead of the standard 32-bit wide instructions. These compressed RISC ISAs reduced code size relative to their starting points by about 25% to 30%, yielding code significantly 
smaller than that of the 80x86. This result surprised some, as their intuition was that the variable-length CISC ISA should be smaller than RISC ISAs that offered only 16-bit and 32-bit formats. Since the original RISC ISAs did not leave sufficient opcode space free to include these unplanned compressed instructions, they were instead developed as complete new ISAs. This meant compilers needed different code generators for the separate compressed ISAs. The first compressed RISC ISA extensions (e.g., ARM Thumb and MIPS16) used only a fixed 16-bit instruction size, which gave good reductions in static code size but caused an increase in dynamic instruction count, which led to lower performance compared to the original fixed-width 32-bit instruction size. This led to the development of a second generation of compressed RISC ISA designs with mixed 16-bit and 32-bit instruction lengths (e.g., ARM Thumb2, microMIPS, PowerPC VLE), so that performance was similar to pure 32-bit instructions but with significant code size savings. Unfortunately, these different generations of compressed ISAs are incompatible with each other and with the original uncompressed ISA, leading to significant complexity in documentation, implementations, and software tools support. Of the commonly used 64-bit ISAs, only PowerPC and microMIPS currently support a compressed instruction format. It is surprising that the most popular 64-bit ISA for mobile platforms (ARM v8) does not include a compressed instruction format, given that static code size and dynamic instruction fetch bandwidth are important metrics. Although static code size is not a major concern in larger systems, instruction fetch bandwidth can be a major bottleneck in servers running commercial workloads, which often have a large instruction working set. Benefiting from 25 years of hindsight, RISC-V was designed to support compressed instructions from the outset, leaving enough opcode space for RVC to be added as a simple extension on top of the base ISA, along with many other extensions. The philosophy of RVC is to reduce code size for embedded applications and to improve performance and energy-efficiency for all applications due to fewer misses in the instruction cache. Waterman shows that RVC fetches 25% to 30% fewer instruction bits, which reduces instruction cache misses by 20% to 25%, or roughly the same performance impact as doubling the instruction cache size [22].", "url": "RV32ISPEC.pdf#segment70", "timestamp": "2023-09-18 14:50:23", "segment": "segment70", "image_urls": [], "Book": 
"riscv-spec-20191213" }, { "section": "16.2 Compressed Instruction Formats ", "content": "table 161 shows nine compressed instruction formats cr ci css use 32 rvi registers ciw cl cs ca cb limited 8 table 162 lists popular registers correspond registers x8 x15 note separate version load store instructions use stack pointer base address register since saving restoring stack prevalent use ci css formats allow access 32 data registers ciw supplies 8bit immediate addi4spn instruction riscv abi changed make frequently used registers map registers x8x15 simplifies decompression decoder contiguous naturally aligned set register numbers also compatible rv32e base isa 16 integer registers compressed registerbased floatingpoint loads stores also use cl cs formats respec tively eight registers mapping f8 f15 standard riscv calling convention maps frequently used floatingpoint registers registers f8 f15 allows register decompression decoding integer register numbers formats designed keep bits two register source specifiers place instructions destination register field move full 5bit destination register specifier present place 32bit riscv encoding immediates signextended signextension always bit 12 immediate fields scrambled base specification reduce number immediate muxes required immediate fields scrambled instruction formats instead sequential order many bits possible position every instruction thereby simplify ing implementations example immediate bits 1710 always sourced instruction bit positions five immediate bits 5 4 3 1 0 two source instruction bits four 9 7 6 2 three sources one 8 four sources many rvc instructions zerovalued immediates disallowed x0 valid 5bit register specifier restrictions free encoding space instructions requiring fewer operand bits", "url": "RV32ISPEC.pdf#segment71", "timestamp": "2023-09-18 14:50:23", "segment": "segment71", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2016.1.jpg?raw=true", 
"https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2016.2.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "16.3 Load and Store Instructions ", "content": "increase reach 16bit instructions datatransfer instructions use zeroextended immediates scaled size data bytes 4 words 8 double words 16 quad words rvc provides two variants loads stores one uses abi stack pointer x2 base address target data register reference one 8 base address registers one 8 data registers stackpointerbased loads stores instructions use ci format clwsp loads 32bit value memory register rd computes effective address adding zeroextended offset scaled 4 stack pointer x2 expands lw rd offset 72 x2 clwsp valid rdx0 code points rdx0 reserved cldsp rv64crv128conly instruction loads 64bit value memory register rd computes effective address adding zeroextended offset scaled 8 stack pointer x2 expands ld rd offset 83 x2 cldsp valid rdx0 code points rdx0 reserved clqsp rv128conly instruction loads 128bit value memory register rd computes effective address adding zeroextended offset scaled 16 stack pointer x2 expands lq rd offset 94 x2 clqsp valid rdx0 code points rdx0 reserved cflwsp rv32fconly instruction loads singleprecision floatingpoint value memory floatingpoint register rd computes effective address adding zeroextended offset scaled 4 stack pointer x2 expands flw rd offset 72 x2 cfldsp rv32dcrv64dconly instruction loads doubleprecision floatingpoint value memory floatingpoint register rd computes effective address adding zeroextended offset scaled 8 stack pointer x2 expands fld rd offset 83 x2 instructions use css format cswsp stores 32bit value register rs2 memory computes effective address adding zeroextended offset scaled 4 stack pointer x2 expands sw rs2 offset 72 x2 csdsp rv64crv128conly instruction stores 64bit value register rs2 memory computes effective address adding zeroextended offset scaled 8 stack pointer x2 expands sd rs2 offset 83 x2 csqsp 
RV128C-only instruction that stores a 128-bit value in register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 16, to the stack pointer, x2. It expands to sq rs2, offset[9:4](x2). C.FSWSP is an RV32FC-only instruction that stores a single-precision floating-point value in floating-point register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 4, to the stack pointer, x2. It expands to fsw rs2, offset[7:2](x2). C.FSDSP is an RV32DC/RV64DC-only instruction that stores a double-precision floating-point value in floating-point register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 8, to the stack pointer, x2. It expands to fsd rs2, offset[8:3](x2). Register save/restore code at function entry/exit represents a significant portion of static code size. The stack-pointer-based compressed loads and stores in RVC are effective at reducing the save/restore static code size by a factor of 2 while improving performance by reducing dynamic instruction bandwidth. A common mechanism used in other ISAs to further reduce save/restore code size is load-multiple and store-multiple instructions. We considered adopting these for RISC-V but noted the following drawbacks to these instructions: they complicate processor implementations; for virtual memory systems, some data accesses could be resident in physical memory and some could not, which requires a new restart mechanism for partially executed instructions; unlike the rest of the RVC instructions, there is no IFD equivalent to load-multiple and store-multiple; unlike the rest of the RVC instructions, the compiler would have to be aware of these instructions in order to generate them and to allocate registers in an order that maximizes the chances of them being saved and stored, since they would be saved and restored in sequential order; simple microarchitectural implementations will constrain how other instructions can be scheduled around the load-multiple and store-multiple instructions, leading to a potential performance loss; and the desire for sequential register allocation might conflict with the featured registers selected for the CIW, CL, CS, CA, and CB formats. Furthermore, much of the gain can be realized in software by replacing prologue and epilogue code with subroutine calls to common prologue and epilogue code, a technique described in Section 5.6 of [23]. While reasonable architects might come to different conclusions, we decided to omit load and store multiple and instead use the software-only 
approach of calling save/restore millicode routines to attain the greatest code size reduction. Register-Based Loads and Stores. The following instructions use the CL format. C.LW loads a 32-bit value from memory into register rd. It computes an effective address by adding the zero-extended offset, scaled by 4, to the base address in register rs1. It expands to lw rd, offset[6:2](rs1). C.LD is an RV64C/RV128C-only instruction that loads a 64-bit value from memory into register rd. It computes an effective address by adding the zero-extended offset, scaled by 8, to the base address in register rs1. It expands to ld rd, offset[7:3](rs1). C.LQ is an RV128C-only instruction that loads a 128-bit value from memory into register rd. It computes an effective address by adding the zero-extended offset, scaled by 16, to the base address in register rs1. It expands to lq rd, offset[8:4](rs1). C.FLW is an RV32FC-only instruction that loads a single-precision floating-point value from memory into floating-point register rd. It computes an effective address by adding the zero-extended offset, scaled by 4, to the base address in register rs1. It expands to flw rd, offset[6:2](rs1). C.FLD is an RV32DC/RV64DC-only instruction that loads a double-precision floating-point value from memory into floating-point register rd. It computes an effective address by adding the zero-extended offset, scaled by 8, to the base address in register rs1. It expands to fld rd, offset[7:3](rs1). The following instructions use the CS format. C.SW stores a 32-bit value in register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 4, to the base address in register rs1. It expands to sw rs2, offset[6:2](rs1). C.SD is an RV64C/RV128C-only instruction that stores a 64-bit value in register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 8, to the base address in register rs1. It expands to sd rs2, offset[7:3](rs1). C.SQ is an RV128C-only instruction that stores a 128-bit value in register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 16, to the base address in register rs1. It expands to sq rs2, offset[8:4](rs1). C.FSW is an RV32FC-only instruction that stores a single-precision floating-point value in floating-point register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 4, to the base address in register rs1. It expands to fsw rs2, offset[6:2](rs1). C.FSD is an RV32DC/RV64DC-only instruction that stores a double-precision floating-point value in floating-point 
register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 8, to the base address in register rs1. It expands to fsd rs2, offset[7:3](rs1).", "url": "RV32ISPEC.pdf#segment72", "timestamp": "2023-09-18 14:50:24", "segment": "segment72", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(16.3).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra2(16.3).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra3(16.3).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra4(16.3).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "16.4 Control Transfer Instructions ", "content": "RVC provides unconditional jump instructions and conditional branch instructions. As with base RVI instructions, the offsets of all RVC control transfer instructions are in multiples of 2 bytes. Unconditional Jumps. C.J performs an unconditional control transfer. The offset is sign-extended and added to the pc to form the jump target address. C.J can therefore target a plus or minus 2 KiB range. C.J expands to jal x0, offset[11:1]. C.JAL is an RV32C-only instruction that performs the same operation as C.J, but additionally writes the address of the instruction following the jump (pc+2) to the link register, x1. C.JAL expands to jal x1, offset[11:1]. The following instructions use the CR format. C.JR (jump register) performs an unconditional control transfer to the address in register rs1. C.JR expands to jalr x0, 0(rs1). C.JR is only valid when rs1 is not x0; the code point with rs1=x0 is reserved. C.JALR (jump and link register) performs the same operation as C.JR, but additionally writes the address of the instruction following the jump (pc+2) to the link register, x1. C.JALR expands to jalr x1, 0(rs1). C.JALR is only valid when rs1 is not x0; the code point with rs1=x0 corresponds to the C.EBREAK instruction. Strictly speaking, C.JALR does not expand exactly to a base RVI instruction, as the value added to the pc to form the link address is 2 rather than 4 as in the base ISA, but supporting both offsets of 2 and 4 bytes is only a very minor change to the base microarchitecture. Conditional Branches. These instructions use the CB format. C.BEQZ performs a conditional control transfer. The offset is sign-extended and added to the pc to form the branch target address; it can therefore target a plus or minus 256 B range. C.BEQZ takes the branch if the value in register rs1 is zero. It expands to beq rs1, x0, 
offset[8:1]. C.BNEZ is defined analogously, but it takes the branch if rs1 contains a nonzero value. It expands to bne rs1, x0, offset[8:1].", "url": "RV32ISPEC.pdf#segment73", "timestamp": "2023-09-18 14:50:24", "segment": "segment73", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(16.4).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra2(16.4).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "16.5 Integer Computational Instructions ", "content": "RVC provides several instructions for integer arithmetic and constant generation. Integer Constant-Generation Instructions. The two constant-generation instructions both use the CI instruction format and can target any integer register. C.LI loads the sign-extended 6-bit immediate, imm, into register rd. C.LI expands into addi rd, x0, imm[5:0]. C.LI is only valid when rd is not x0; the code points with rd=x0 encode HINTs. C.LUI loads the non-zero 6-bit immediate field into bits 17-12 of the destination register, clears the bottom 12 bits, and sign-extends bit 17 into all higher bits of the destination. C.LUI expands into lui rd, nzimm[17:12]. C.LUI is only valid when rd is not x0 or x2, and when the immediate is not equal to zero. The code points with nzimm=0 are reserved; the remaining code points with rd=x0 are HINTs; and the remaining code points with rd=x2 correspond to the C.ADDI16SP instruction. Integer Register-Immediate Operations. These integer register-immediate operations are encoded in the CI format and perform operations on an integer register and a 6-bit immediate. C.ADDI adds the non-zero sign-extended 6-bit immediate to the value in register rd, then writes the result to rd. C.ADDI expands into addi rd, rd, nzimm[5:0]. C.ADDI is only valid when rd is not x0 and nzimm is not 0. The code points with rd=x0 encode the C.NOP instruction; the remaining code points with nzimm=0 encode HINTs. C.ADDIW is an RV64C/RV128C-only instruction that performs the same computation but produces a 32-bit result, then sign-extends the result to 64 bits. C.ADDIW expands into addiw rd, rd, imm[5:0]. The immediate can be zero for C.ADDIW, where this corresponds to sext.w rd. C.ADDIW is only valid when rd is not x0; the code points with rd=x0 are reserved. C.ADDI16SP shares the opcode with C.LUI, but has a destination field of x2. C.ADDI16SP adds the non-zero sign-extended 6-bit immediate to the value in the stack pointer (sp=x2), where the immediate is scaled to represent multiples of 16 in the range (-512, 496). C.ADDI16SP is used to adjust the stack pointer in procedure prologues 
It expands to addi x2, x2, nzimm[9:4]. C.ADDI16SP is only valid when nzimm≠0; the code point with nzimm=0 is reserved. In the standard RISC-V calling convention, the stack pointer sp is always 16-byte aligned. C.ADDI4SPN is a CIW-format instruction that adds a zero-extended nonzero immediate, scaled by 4, to the stack pointer, x2, and writes the result to rd. This instruction is used to generate pointers to stack-allocated variables. It expands to addi rd, x2, nzuimm[9:2]. C.ADDI4SPN is only valid when nzuimm≠0; the code points with nzuimm=0 are reserved. C.SLLI is a CI-format instruction that performs a logical left shift of the value in register rd, then writes the result to rd. The shift amount is encoded in the shamt field. For RV128C, a shift amount of zero is used to encode a shift of 64. C.SLLI expands to slli rd, rd, shamt[5:0], except for RV128C with shamt=0, which expands to slli rd, rd, 64. For RV32C, shamt[5] must be zero; the code points with shamt[5]=1 are reserved for custom extensions. For RV32C and RV64C, the shift amount must be nonzero; the code points with shamt=0 are HINTs. For all base ISAs, the code points with rd=x0 are HINTs, except those with shamt[5]=1 in RV32C. C.SRLI is a CB-format instruction that performs a logical right shift of the value in register rd, then writes the result to rd. The shift amount is encoded in the shamt field. For RV128C, a shift amount of zero is used to encode a shift of 64; furthermore, the shift amount is sign-extended for RV128C, so the legal shift amounts are 1-31, 64, and 96-127. C.SRLI expands to srli rd, rd, shamt[5:0], except for RV128C with shamt=0, which expands to srli rd, rd, 64. For RV32C, shamt[5] must be zero; the code points with shamt[5]=1 are reserved for custom extensions. For RV32C and RV64C, the shift amount must be nonzero; the code points with shamt=0 are HINTs. C.SRAI is defined analogously to C.SRLI, but instead performs an arithmetic right shift. C.SRAI expands to srai rd, rd, shamt[5:0]. Left shifts are usually more frequent than right shifts, as left shifts are frequently used to scale address values. Right shifts have therefore been granted less encoding space, and are placed in an encoding quadrant where all other immediates are sign-extended. For RV128, the decision was made to have the 6-bit shift-amount immediate also be sign-extended. Apart from reducing decode complexity, we believe right-shift amounts of 96-127 will be more useful than 64-95, to allow extraction of tags located in the high portions of 128-bit address pointers. We note that RV128C will not be frozen at the same point as RV32C and RV64C, to allow evaluation of typical usage of 128-bit address-space codes. C.ANDI is a CB-format instruction that computes the bitwise AND of the value in register rd and the sign-extended 6-bit
immediate, then writes the result to rd. C.ANDI expands to andi rd, rd, imm[5:0]. Integer Register-Register Operations: These instructions use the CR format. C.MV copies the value in register rs2 into register rd. C.MV expands to add rd, x0, rs2. C.MV is only valid when rs2≠x0; the code points with rs2=x0 correspond to the C.JR instruction, while the code points with rs2≠x0 and rd=x0 are HINTs. C.MV expands to a different instruction than the canonical MV pseudoinstruction, which instead uses addi. Implementations that handle MV specially, e.g. using register-renaming hardware, may find it more convenient to expand C.MV to MV instead of ADD, at slight additional hardware cost. C.ADD adds the values in registers rd and rs2 and writes the result to register rd. C.ADD expands to add rd, rd, rs2. C.ADD is only valid when rs2≠x0; the code points with rs2=x0 correspond to the C.JALR and C.EBREAK instructions, while the code points with rs2≠x0 and rd=x0 are HINTs. These instructions use the CA format. C.AND computes the bitwise AND of the values in registers rd and rs2, then writes the result to register rd. C.AND expands to and rd, rd, rs2. C.OR computes the bitwise OR of the values in registers rd and rs2, then writes the result to register rd. C.OR expands to or rd, rd, rs2. C.XOR computes the bitwise XOR of the values in registers rd and rs2, then writes the result to register rd. C.XOR expands to xor rd, rd, rs2. C.SUB subtracts the value in register rs2 from the value in register rd, then writes the result to register rd. C.SUB expands to sub rd, rd, rs2. C.ADDW is an RV64C/RV128C-only instruction that adds the values in registers rd and rs2, then sign-extends the lower 32 bits of the sum before writing the result to register rd. C.ADDW expands to addw rd, rd, rs2. C.SUBW is an RV64C/RV128C-only instruction that subtracts the value in register rs2 from the value in register rd, then sign-extends the lower 32 bits of the difference before writing the result to register rd. C.SUBW expands to subw rd, rd, rs2. This group of six instructions do not provide large savings individually, but do not occupy much encoding space and are straightforward to implement, and as a group provide a worthwhile improvement in static and dynamic compression. Defined Illegal Instruction: A 16-bit instruction with all bits zero is permanently reserved as an illegal instruction. We reserve all-zero instructions to be illegal instructions to help trap attempts to execute zeroed or nonexistent portions of the memory space. The all-zero value should not be redefined in any nonstandard extension. Similarly, we reserve instructions with all bits set to 1 (corresponding to very long instructions in the RISC-V variable-length encoding scheme) as illegal, to capture another common value seen in nonexistent memory regions.
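The overlapping CR-format code points described above (C.MV and C.JR under one funct4 value; C.ADD, C.JALR, and C.EBREAK under the next) can be illustrated with a toy decoder. This is a sketch written for this note, not part of the specification; it assumes the quadrant-2 funct4 values 1000 and 1001 and returns the expanded base-ISA form as text:

```python
def decode_cr(halfword):
    """Decode a 16-bit quadrant-2 CR-format code point into its RVI expansion."""
    assert halfword & 0x3 == 0b10              # quadrant 2 (bits 1:0 = 10)
    funct4 = (halfword >> 12) & 0xF
    rd_rs1 = (halfword >> 7) & 0x1F
    rs2 = (halfword >> 2) & 0x1F
    if funct4 == 0b1000:                       # C.JR / C.MV share this value
        if rs2 == 0:
            return f"jalr x0, 0(x{rd_rs1})"           # C.JR (rs1 must be nonzero)
        return f"add x{rd_rs1}, x0, x{rs2}"           # C.MV
    if funct4 == 0b1001:                       # C.JALR / C.ADD / C.EBREAK
        if rs2 == 0 and rd_rs1 == 0:
            return "ebreak"                           # C.EBREAK
        if rs2 == 0:
            return f"jalr x1, 0(x{rd_rs1})"           # C.JALR
        return f"add x{rd_rs1}, x{rd_rs1}, x{rs2}"    # C.ADD
    raise ValueError("not a CR-format code point")
```

For instance, the halfword 0x9002 decodes to ebreak, matching the observation that C.EBREAK occupies the C.ADD opcode with rd and rs2 both zero.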
NOP Instruction: C.NOP is a CI-format instruction that does not change any user-visible state, except for advancing the pc and incrementing any applicable performance counters. C.NOP expands to nop. C.NOP is only valid when imm=0; the code points with imm≠0 encode HINTs. Breakpoint Instruction: Debuggers can use the C.EBREAK instruction, which expands to ebreak, to cause control to be transferred back to the debugging environment. C.EBREAK shares the opcode with the C.ADD instruction, but with rd and rs2 both zero, and thus can also use the CR format.", "url": "RV32ISPEC.pdf#segment74", "timestamp": "2023-09-18 14:50:25", "segment": "segment74", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra1(16.5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra2(16.5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra3(16.5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra4(16.5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra5(16.5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra6(16.5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra7(16.5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra8(16.5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra91(16.5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra10(16.5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/extra11(16.5).jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "16.6 Usage of C Instructions in LR/SC Sequences ", "content": "On implementations that support the C extension, compressed forms of the I instructions permitted inside constrained LR/SC sequences, as described in Section 8.3, are also permitted inside constrained LR/SC sequences. The implication is that any implementation claiming to support both the A and C extensions must ensure that LR/SC
sequences containing valid C instructions will eventually complete.", "url": "RV32ISPEC.pdf#segment75", "timestamp": "2023-09-18 14:50:25", "segment": "segment75", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "16.7 HINT Instructions ", "content": "A portion of the RVC encoding space is reserved for microarchitectural HINTs. Like the HINTs in the RV32I base ISA (see Section 2.9), these instructions do not modify any architectural state, except for advancing the pc and any applicable performance counters. HINTs are executed as no-ops on implementations that ignore them. RVC HINTs are encoded as computational instructions that do not modify the architectural state, either because rd=x0 (e.g. C.ADD x0, t0), or because rd is overwritten with a copy of itself (e.g. C.ADDI t0, 0). This HINT encoding has been chosen so that simple implementations can ignore HINTs altogether, and instead execute a HINT as a regular computational instruction that happens not to mutate the architectural state. RVC HINTs do not necessarily expand to their RVI HINT counterparts; for example, C.ADD x0, t0 might encode a different HINT than ADD x0, x0, t0. The primary reason not to require an RVC HINT to expand to an RVI HINT is that HINTs are unlikely to be compressible in the same manner as the underlying computational instruction. Also, decoupling the RVC and RVI HINT mappings allows the scarce RVC HINT space to be allocated to the most popular HINTs, and in particular to HINTs that are amenable to macro-op fusion. Table 16.3 lists all RVC HINT code points. For RV32C, 78% of the HINT space is reserved for standard HINTs, though none are presently defined. The remainder of the HINT space is reserved for custom HINTs; no standard HINTs will ever be defined in this subspace.", "url": "RV32ISPEC.pdf#segment76", "timestamp": "2023-09-18 14:50:25", "segment": "segment76", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2016.3.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "16.8 RVC Instruction Set Listings ", "content": "Table 16.4 shows a map of the major opcodes for RVC. Each row of the table corresponds to one quadrant of the encoding space; the last quadrant, which has the two least-significant bits set, corresponds to instructions wider than 16 bits, including those in the base ISAs. Several instructions are only valid for certain operands; when invalid, they are marked either RES, to indicate that the opcode is reserved for future standard extensions; NSE, to indicate that the opcode is reserved for custom extensions; or HINT, to
indicate that the opcode is reserved for microarchitectural hints (see Section 16.7).", "url": "RV32ISPEC.pdf#segment77", "timestamp": "2023-09-18 14:50:25", "segment": "segment77", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2016.4.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2016.5.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2016.6.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2016.7.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "Chapter 17 ", "content": "B Standard Extension for Bit Manipulation, Version 0.0. This chapter is a placeholder for a future standard extension to provide bit manipulation instructions, including instructions to insert, extract, and test bit fields, as well as rotations, funnel shifts, and bit and byte permutations. Although bit manipulation instructions are very effective in some application domains, particularly when dealing with externally packed data structures, we excluded them from the base ISA, as they are not useful in all domains and can add additional complexity to the instruction formats needed to supply all the operands. We anticipate the B extension will be a brownfield encoding within the base 30-bit instruction space.", "url": "RV32ISPEC.pdf#segment78", "timestamp": "2023-09-18 14:50:25", "segment": "segment78", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 18 ", "content": "J Standard Extension for Dynamically Translated Languages, Version 0.0. This chapter is a placeholder for a future standard extension to support dynamically translated languages. Many popular languages are usually implemented via dynamic translation, including Java and JavaScript. These languages can benefit from additional ISA support for dynamic checks and garbage collection.", "url": "RV32ISPEC.pdf#segment79", "timestamp": "2023-09-18 14:50:25", "segment": "segment79", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 19 ", "content": "T Standard Extension for Transactional Memory, Version 0.0. This chapter is a placeholder for a future standard extension to provide transactional memory
operations. Despite much research over the last twenty years, and initial commercial implementations, there is still much debate on the best way to support atomic operations involving multiple addresses. Our current thoughts are to include a small limited-capacity transactional memory buffer, along the lines of the original transactional memory proposals.", "url": "RV32ISPEC.pdf#segment80", "timestamp": "2023-09-18 14:50:25", "segment": "segment80", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 20 ", "content": "P Standard Extension for Packed-SIMD Instructions, Version 0.2. Discussions at the 5th RISC-V workshop indicated a desire to drop this packed-SIMD proposal for the floating-point registers in favor of standardizing on the V extension for large floating-point SIMD operations. However, there was interest in packed-SIMD fixed-point operations for use in the integer registers of small RISC-V implementations. A task group is working to define the new P extension.", "url": "RV32ISPEC.pdf#segment81", "timestamp": "2023-09-18 14:50:25", "segment": "segment81", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 21 ", "content": "V Standard Extension for Vector Operations, Version 0.7. The current working group draft is hosted at https://github.com/riscv/riscv-v-spec. The base vector extension is intended to provide general support for data-parallel execution within the 32-bit instruction encoding space, with later vector extensions supporting richer functionality for certain domains.", "url": "RV32ISPEC.pdf#segment82", "timestamp": "2023-09-18 14:50:25", "segment": "segment82", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 22 ", "content": "Zam Standard Extension for Misaligned Atomics, v0.1. This chapter defines the Zam extension, which extends the A extension by standardizing support for misaligned atomic memory operations (AMOs). On platforms implementing Zam, misaligned AMOs need only execute atomically with respect to other accesses (including non-atomic loads and stores) to the same address and of the same size. More precisely, execution environments implementing Zam are subject to the following axiom. Atomicity Axiom for misaligned atomics: if r and w are paired misaligned load and store instructions from a hart h with the same address and size, then there can be no store
instruction from a hart other than h with that address and size whose store operation lies between the memory operations generated by r and w in the global memory order. Furthermore, there can be no load instruction l from a hart other than h with that address and size whose load operation lies between the two memory operations generated by r and w in the global memory order. This restricted form of atomicity is intended to balance the needs of applications which require support for misaligned atomics with the ability of the implementation to actually provide the necessary degree of atomicity. Aligned instructions under Zam continue to behave as they normally do under RVWMO. The intention is that Zam can be implemented in one of two ways: 1) on hardware that natively supports atomic misaligned accesses to the address and size in question (e.g., for misaligned accesses within a single cache line), by simply following the same rules that would be applied for aligned AMOs; or 2) on hardware that does not natively support misaligned accesses to the address and size in question, by trapping on such instructions (including loads) with that address and size and executing them, via any number of memory operations, inside a mutex for the given memory address and access size. AMOs may be emulated by splitting them into separate load and store operations, but all preserved program order rules (e.g., incoming and outgoing syntactic dependencies) must behave as if the AMO were still a single memory operation.", "url": "RV32ISPEC.pdf#segment83", "timestamp": "2023-09-18 14:50:25", "segment": "segment83", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 23 ", "content": "Ztso Standard Extension for Total Store Ordering, v0.1. This chapter defines the Ztso extension for the RISC-V Total Store Ordering (RVTSO) memory consistency model. RVTSO is defined as a delta from RVWMO, which is defined in Chapter 14.1. The Ztso extension is meant to facilitate the porting of code originally written for the x86 or SPARC architectures, both of which use TSO by default. It also supports implementations which inherently provide RVTSO behavior and want to expose that fact to software. RVTSO makes the following adjustments to RVWMO: all load operations behave as if they have an acquire-RCpc annotation, all store operations behave as if they have a release-RCpc annotation, and all AMOs behave as if they have both acquire-RCsc and release-RCsc annotations. These rules render all PPO rules except 4-7 redundant; they also make redundant any non-I/O fences that do not have both PW and SR set; and finally, they also imply that no memory operation will be reordered past an AMO in either direction.
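One way to picture the TSO behavior being defined here is a FIFO store buffer per hart: stores drain to shared memory in program order, while the hart's own loads may read its buffered stores before any other hart can observe them. The toy model below is an illustration written for this note, not the formal memory model:

```python
class TsoHart:
    """Toy TSO hart: a FIFO store buffer in front of shared memory."""

    def __init__(self, memory):
        self.memory = memory      # shared dict: address -> value
        self.buffer = []          # pending (addr, value) stores, program order

    def store(self, addr, value):
        self.buffer.append((addr, value))

    def load(self, addr):
        # Forward from the youngest matching buffered store, if any.
        for a, v in reversed(self.buffer):
            if a == addr:
                return v
        return self.memory.get(addr, 0)

    def drain_one(self):
        # Stores become globally visible strictly in program (FIFO) order.
        if self.buffer:
            a, v = self.buffer.pop(0)
            self.memory[a] = v
```

A hart that stores and then immediately loads the same address sees its own value before drain_one makes it globally visible, which is exactly the local forwarding that the load value axiom permits.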
In the context of RVTSO, as is the case for RVWMO, the storage ordering annotations are concisely and completely defined by PPO rules 5-7. In both memory models, it is the load value axiom that allows a hart to forward a value from its store buffer to a subsequent (in program order) load; that is to say, stores can be forwarded locally before they are visible to other harts. In spite of the fact that Ztso adds no new instructions to the ISA, code written assuming RVTSO will not run correctly on implementations not supporting Ztso. Binaries compiled to run only under Ztso should indicate as such via a flag in the binary, so that platforms which do not implement Ztso can simply refuse to run them.", "url": "RV32ISPEC.pdf#segment84", "timestamp": "2023-09-18 14:50:25", "segment": "segment84", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 24 ", "content": "RV32/64G Instruction Set Listings. One goal of the RISC-V project is that it be used as a stable software development target. For this purpose, we define a combination of a base ISA (RV32I or RV64I) plus selected standard extensions (IMAFD, Zicsr, Zifencei) as a general-purpose ISA, and we use the abbreviation G for the IMAFDZicsr_Zifencei combination of instruction-set extensions. This chapter presents opcode maps and instruction-set listings for RV32G and RV64G. Table 24.1 shows a map of the major opcodes for RVG. Major opcodes with 3 or more lower bits set are reserved for instruction lengths greater than 32 bits. Opcodes marked as reserved should be avoided for custom instruction-set extensions, as they might be used by future standard extensions. Major opcodes marked as custom-0 and custom-1 will be avoided by future standard extensions, and are recommended for use by custom instruction-set extensions within the base 32-bit instruction format. The opcodes marked custom-2/rv128 and custom-3/rv128 are reserved for future use by RV128, but will otherwise be avoided for standard extensions and so can also be used for custom instruction-set extensions in RV32 and RV64. We believe RV32G and RV64G provide simple but complete instruction sets for a broad range of general-purpose computing. The optional compressed instruction set described in Chapter 16 can be added (forming RV32GC and RV64GC) to improve performance, code size, and energy efficiency, though with some additional hardware complexity. As we move beyond IMAFDC into further instruction-set extensions, the added instructions tend to be more domain-specific and provide benefits to only a restricted class of applications, e.g., for multimedia or security. Unlike most commercial ISAs, the RISC-V ISA design
clearly separates the base ISA and broadly applicable standard extensions from these more specialized additions. Chapter 26 has an extensive discussion of ways to add extensions to the RISC-V ISA.", "url": "RV32ISPEC.pdf#segment85", "timestamp": "2023-09-18 14:50:25", "segment": "segment85", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2024.1.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2024.2(1).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2024.2(2).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2024.2(3).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2024.2(4).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2024.2(5).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2024.2(6).jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2024.3.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "Chapter 25 ", "content": "RISC-V Assembly Programmer's Handbook. This chapter is a placeholder for an assembly programmer's manual. Table 25.1 lists the assembler mnemonics for the x and f registers, and their role in the first standard calling convention. In the future, there may be other or different calling conventions. Note that the registers x1, x2, and x5 have special meanings encoded in the standard ISA and/or the compressed extension. Tables 25.2 and 25.3 contain a listing of standard RISC-V pseudoinstructions.", "url": "RV32ISPEC.pdf#segment86", "timestamp": "2023-09-18 14:50:25", "segment": "segment86", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2025.1.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2025.2.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2025.3.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "Chapter 26 ", "content": "Extending RISC-V. In addition to supporting
standard general-purpose software development, another goal of RISC-V is to provide a basis for more specialized instruction-set extensions or more customized accelerators. The instruction encoding spaces and optional variable-length instruction encoding are designed to make it easier to leverage software development effort for the standard ISA toolchain when building more customized processors. For example, the intent is to continue to provide full software support for implementations that only use the standard I base, perhaps together with many non-standard instruction-set extensions. This chapter describes various ways in which the base RISC-V ISA can be extended, together with the scheme for managing instruction-set extensions developed by independent groups. This volume only deals with the unprivileged ISA, although the same approach and terminology is used for supervisor-level extensions described in the second volume.", "url": "RV32ISPEC.pdf#segment87", "timestamp": "2023-09-18 14:50:25", "segment": "segment87", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2026.1.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2026.2.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "26.1 Extension Terminology ", "content": "This section defines some standard terminology for describing RISC-V extensions. Standard versus Non-Standard Extension: Any RISC-V processor implementation must support a base integer ISA (RV32I or RV64I). In addition, an implementation may support one or more extensions. We divide extensions into two broad categories: standard versus non-standard. A standard extension is one that is generally useful and that is designed to not conflict with any other standard extension. Currently, M, A, F, D, Q, L, C, B, T, P, and V, described in other chapters of this manual, are either complete or planned standard extensions. A non-standard extension may be highly specialized, or may conflict with other standard or non-standard extensions. We anticipate a wide variety of non-standard extensions will be developed over time, with some eventually being promoted to standard extensions. Instruction Encoding Spaces and Prefixes: An instruction encoding space is some number of instruction bits within which a base ISA or ISA extension is encoded. RISC-V supports varying instruction lengths, but even within a single instruction length, there are various sizes of
encoding space available. For example, the base ISAs are defined within a 30-bit encoding space (bits 31-2 of the 32-bit instruction), while the atomic extension A fits within a 25-bit encoding space (bits 31-7). We use the term prefix to refer to the bits to the right of an instruction encoding space (since instruction fetch in RISC-V is little-endian, the bits to the right are stored at earlier memory addresses, and hence form a prefix in instruction-fetch order). The prefix for the standard base ISA encoding is the two-bit pattern 11 held in bits 1-0 of a 32-bit word, while the prefix for the standard atomic extension A is the seven-bit pattern 0101111 held in bits 6-0 of a 32-bit word, representing the AMO major opcode. A quirk of the encoding format is that the 3-bit funct3 field used to encode a minor opcode is not contiguous with the major opcode bits in the 32-bit instruction format, but is considered part of the prefix for 22-bit instruction spaces. Although an instruction encoding space could be of any size, adopting a smaller set of common sizes simplifies packing independently developed extensions into a single global encoding. Table 26.1 gives the suggested sizes for RISC-V. Greenfield versus Brownfield Extensions: We use the term greenfield extension to describe an extension that begins populating a new instruction encoding space, and hence can only cause encoding conflicts at the prefix level. We use the term brownfield extension to describe an extension that fits around existing encodings in a previously defined instruction space. A brownfield extension is necessarily tied to a particular greenfield parent encoding, and there may be multiple brownfield extensions to the same greenfield parent encoding. For example, the base ISAs are greenfield encodings of a 30-bit instruction space, while the FDQ floating-point extensions are all brownfield extensions adding to the parent base ISA 30-bit encoding space. Note that we consider the standard A extension to have a greenfield encoding, as it defines a new previously empty 25-bit encoding space in the leftmost bits of the full 32-bit base instruction encoding, even though its standard prefix locates it within the 30-bit encoding space of the base ISA. Changing only its single 7-bit prefix could move the A extension to a different 30-bit encoding space while only worrying about conflicts at the prefix level, not within the encoding space itself. Table 26.2 shows the bases and standard extensions placed in a simple two-dimensional taxonomy. One axis is whether the extension is
greenfield or brownfield, while the other axis is whether the extension adds architectural state. For greenfield extensions, the size of the instruction encoding space is given in parentheses; for brownfield extensions, the name of the extension (greenfield or brownfield) it builds upon is given in parentheses. Additional user-level architectural state usually implies changes to the supervisor-level system or possibly to the standard calling convention. Note that RV64I is not considered an extension of RV32I, but a different complete base encoding. Standard-Compatible Global Encodings: A complete or global encoding of an ISA for an actual RISC-V implementation must allocate a unique non-conflicting prefix for every included instruction encoding space. The bases and every standard extension have each had a standard prefix allocated, to ensure they can all coexist in a global encoding. A standard-compatible global encoding is one where the base and every included standard extension have their standard prefixes. A standard-compatible global encoding can include non-standard extensions that do not conflict with the included standard extensions. A standard-compatible global encoding can also use standard prefixes for non-standard extensions if the associated standard extensions are not included in the global encoding. In other words, a standard extension must use its standard prefix if included in a standard-compatible global encoding, but otherwise its prefix is free to be reallocated. These constraints allow a common toolchain to target the standard subset of any RISC-V standard-compatible global encoding. Guaranteed Non-Standard Encoding Space: To support development of proprietary custom extensions, portions of the encoding space are guaranteed to never be used by standard extensions.", "url": "RV32ISPEC.pdf#segment88", "timestamp": "2023-09-18 14:50:26", "segment": "segment88", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "26.2 RISC-V Extension Design Philosophy ", "content": "We intend to support a large number of independently developed extensions by encouraging extension developers to operate within instruction encoding spaces, and by providing tools to pack these into a standard-compatible global encoding by allocating unique prefixes. Some extensions are more naturally implemented as brownfield augmentations of existing extensions, and will share whatever prefix is allocated to their parent
greenfield extension. The standard extension prefixes avoid spurious incompatibilities in the encoding of core functionality, while allowing custom packing of more esoteric extensions. This capability of repacking RISC-V extensions into different standard-compatible global encodings can be used in a number of ways. One use-case is developing highly specialized custom accelerators, designed to run kernels from important application domains. These might want to drop all but the base integer ISA and add in only the extensions that are required for the task in hand. The base ISAs have been designed to place minimal requirements on a hardware implementation, and have been encoded to use only a small fraction of a 32-bit instruction encoding space. Another use-case is to build a research prototype for a new type of instruction-set extension. In this case, the researchers might not want to expend the effort to implement a variable-length instruction-fetch unit, and so would like to prototype their extension using a simple 32-bit fixed-width instruction encoding. However, this new extension might be too large to coexist with the standard extensions in the 32-bit space. If the research experiments do not need all of the standard extensions, a standard-compatible global encoding might drop the unused standard extensions and reuse their prefixes to place the proposed extension in a non-standard location, to simplify engineering of the research prototype. Standard tools will still be able to target the base and any standard extensions that are present, to reduce development time. Once the instruction-set extension has been evaluated and refined, it could then be made available for packing into a larger variable-length encoding space to avoid conflicts with all standard extensions. The following sections describe increasingly sophisticated strategies for developing implementations with new instruction-set extensions. These are mostly intended for use in highly customized, educational, or experimental architectures rather than for the main line of RISC-V ISA development.", "url": "RV32ISPEC.pdf#segment89", "timestamp": "2023-09-18 14:50:26", "segment": "segment89", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "26.3 Extensions within fixed-width 32-bit instruction format ", "content": "In this section, we discuss adding extensions to implementations that only support the base fixed-width 32-bit instruction format. We anticipate the simplest fixed-width 32-bit encoding will be popular for many restricted accelerators
and research prototypes. Available 30-bit instruction encoding spaces: In the standard encoding, three of the available 30-bit instruction encoding spaces (with 2-bit prefixes 00, 01, and 10) are used to enable the optional compressed instruction extension. However, if the compressed instruction-set extension is not required, then these three further 30-bit encoding spaces become available, which quadruples the available encoding space within the 32-bit format. Available 25-bit instruction encoding spaces: A 25-bit instruction encoding space corresponds to a major opcode in the base and standard extension encodings. There are four major opcodes expressly reserved for custom extensions (Table 24.1), each of which represents a 25-bit encoding space. Two of these are reserved for eventual use in the RV128 base encoding (OP-IMM-64 and OP-64), but can be used for standard or non-standard extensions for RV32 and RV64. The two major opcodes reserved for RV64 (OP-IMM-32 and OP-32) can also be used for standard or non-standard extensions to RV32 only. If an implementation does not require floating-point, then the seven major opcodes reserved for standard floating-point extensions (LOAD-FP, STORE-FP, MADD, MSUB, NMSUB, NMADD, OP-FP) can be reused for non-standard extensions. Similarly, the AMO major opcode can be reused if the standard atomic extensions are not required. If an implementation does not require instructions longer than 32 bits, then an additional four major opcodes are available (those marked in gray in Table 24.1). The base RV32I encoding uses only 11 major opcodes plus 3 reserved opcodes, leaving up to 18 available for extensions. The base RV64I encoding uses only 13 major opcodes plus 3 reserved opcodes, leaving up to 16 available for extensions. Available 22-bit instruction encoding spaces: A 22-bit encoding space corresponds to a funct3 minor opcode space in the base and standard extension encodings. Several major opcodes have a funct3 minor opcode that is not completely occupied, leaving several 22-bit encoding spaces available. Usually, the major opcode selects the format used to encode operands in the remaining bits of the instruction, and ideally an extension should follow the operand format of the major opcode to simplify hardware decoding. Smaller spaces are available at finer granularity within certain major opcodes where the minor opcodes are not entirely filled.", "url": "RV32ISPEC.pdf#segment90", "timestamp": "2023-09-18 14:50:26", "segment": "segment90", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section":
"26.4 Adding aligned 64-bit instruction extensions ", "content": "simplest approach provide space extensions large base 32bit fixed width instruction format add naturally aligned 64bit instructions implementation must still support 32bit base instruction format require 64bit instructions aligned 64bit boundaries simplify instruction fetch 32bit nop instruction used alignment padding necessary simplify use standard tools 64bit instructions encoded described fig ure 11 however implementation might choose nonstandard instructionlength encoding 64bit instructions retaining standard encoding 32bit instructions example compressed instructions required 64bit instruction could encoded using one zero bits first two bits instruction anticipate processor generators produce instructionfetch units capable automatically handling combination supported variablelength instruction encodings", "url": "RV32ISPEC.pdf#segment91", "timestamp": "2023-09-18 14:50:26", "segment": "segment91", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "26.5 Supporting VLIW encodings ", "content": "although riscv designed base pure vliw machine vliw encodings added extensions using several alternative approaches cases base 32bit encoding supported allow use standard software tools fixedsize instruction group simplest approach define single large naturally aligned instruction format eg 128 bits within vliw operations encoded conventional vliw approach would tend waste instruction memory hold nops riscvcompatible implementation would also support base 32bit instructions confining vliw code size expansion vliw accelerated functions encodedlength groups another approach use standard length encoding figure 11 encode parallel in struction groups allowing nops compressed vliw instruction example 64bit instruction could hold two 28bit operations 96bit instruction could hold three 28bit operations alternatively 48bit instruction could hold one 42bit operation 96bit instruction could hold two 
42-bit operations. This approach has the advantage of retaining the base ISA encoding for instructions holding a single operation, but has the disadvantage of requiring a new 28-bit or 42-bit encoding for operations within the VLIW instructions, and of misaligned instruction fetch for larger groups. One simplification is to not allow VLIW instructions to straddle certain microarchitecturally significant boundaries (e.g., cache lines or virtual memory pages). Fixed-size instruction bundles: Another approach, similar to Itanium, is to use a larger naturally aligned fixed instruction bundle size (e.g., 128 bits) across which parallel operation groups are encoded. This simplifies instruction fetch, but shifts the complexity to the group execution engine. To remain RISC-V compatible, the base 32-bit instruction would still have to be supported. End-of-group bits in prefix: None of the above approaches retains the RISC-V encoding for the individual operations within a VLIW instruction. Yet another approach is to repurpose the two prefix bits in the fixed-width 32-bit encoding. One prefix bit can be used to signal end-of-group if set, while the second bit could indicate execution under a predicate if clear. Standard RISC-V 32-bit instructions generated by tools unaware of the VLIW extension would have both prefix bits set (11) and thus have the correct semantics, with each instruction at the end of a group and not predicated. The main disadvantage of this approach is that the base ISA lacks the complex predication support usually required in an aggressive VLIW system, and it is difficult to add the space to specify more predicate registers in the standard 30-bit encoding space.", "url": "RV32ISPEC.pdf#segment92", "timestamp": "2023-09-18 14:50:26", "segment": "segment92", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 27 ", "content": "ISA Extension Naming Conventions. This chapter describes the RISC-V ISA extension naming scheme, which is used to concisely describe the set of instructions present in a hardware implementation, or the set of instructions used by an application binary interface (ABI). The RISC-V ISA is designed to support a wide variety of implementations with various experimental instruction-set extensions, and we have found that an organized naming scheme simplifies software tools and documentation.", "url": "RV32ISPEC.pdf#segment93", "timestamp": "2023-09-18 14:50:26", "segment": "segment93", "image_urls": [], "Book":
"riscv-spec-20191213" }, { "section": "27.1 Case Sensitivity ", "content": "isa naming strings case insensitive", "url": "RV32ISPEC.pdf#segment94", "timestamp": "2023-09-18 14:50:26", "segment": "segment94", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "27.2 Base Integer ISA ", "content": "riscv isa strings begin either rv32i rv32e rv64i rv128i indicating supported address space size bits base integer isa", "url": "RV32ISPEC.pdf#segment95", "timestamp": "2023-09-18 14:50:26", "segment": "segment95", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "27.3 Instruction-Set Extension Names ", "content": "standard isa extensions given name consisting single letter example first four standard extensions integer bases integer multiplication division atomic memory instructions f singleprecision floatingpoint instructions doubleprecision floatingpoint instructions riscv instructionset variant succinctly described concatenating base integer prefix names included extensions eg rv64imafd also defined abbreviation g represent imafdzicsr zifencei base ex tensions intended represent standard generalpurpose isa standard extensions riscv isa given reserved letters eg q quadprecision floatingpoint c 16bit compressed instruction format isa extensions depend presence extensions eg depends f f depends zicsr dependences may implicit isa name example rv32if equivalent rv32ifzicsr rv32id equivalent rv32ifd rv32ifdzicsr", "url": "RV32ISPEC.pdf#segment96", "timestamp": "2023-09-18 14:50:26", "segment": "segment96", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "27.4 Version Numbers ", "content": "recognizing instruction sets may expand alter time encode extension version numbers following extension name version numbers divided major minor ver sion numbers separated p minor version 0 p0 omitted version string changes major version numbers imply loss backwards compatibility whereas changes minor version number must backwardscompatible example 
For example, the original 64-bit standard ISA defined in release 1.0 of this manual can be written in full as RV64I1p0M1p0A1p0F1p0D1p0, or more concisely as RV64I1M1A1F1D1. We introduced the version numbering scheme with the second release; hence, we define the default version of a standard extension to be the version present at that time, e.g., RV32I is equivalent to RV32I2.", "url": "RV32ISPEC.pdf#segment97", "timestamp": "2023-09-18 14:50:26", "segment": "segment97", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "27.5 Underscores ", "content": "Underscores may be used to separate ISA extensions to improve readability and to provide disambiguation, e.g., RV32I2_M2_A2. The P extension for packed SIMD can be confused with the decimal point in a version number, so it must be preceded by an underscore if it follows a number. For example, 'rv32i2p2' means version 2.2 of RV32I, whereas 'rv32i2_p2' means version 2.0 of RV32I together with version 2.0 of the P extension.", "url": "RV32ISPEC.pdf#segment98", "timestamp": "2023-09-18 14:50:26", "segment": "segment98", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "27.6 Additional Standard Extension Names ", "content": "Standard extensions can also be named using a single 'Z' followed by an alphabetical name and an optional version number. For example, 'Zifencei' names the instruction-fetch fence extension described in Chapter 3, and 'Zifencei2' or 'Zifencei2p0' names version 2.0 of that extension. The first letter following the 'Z' conventionally indicates the most closely related alphabetical extension category (IMAFDQLCBJTPVN). For the 'Zam' extension for misaligned atomics, for example, the letter 'a' indicates that the extension is related to the 'A' standard extension. If multiple 'Z' extensions are named, they are ordered first by category and then alphabetically within a category, for example, Zicsr before Zifencei before Zam. Extensions with the 'Z' prefix must be separated from other multi-letter extensions by an underscore, e.g., 'RV32IMACZicsr_Zifencei'.", "url": "RV32ISPEC.pdf#segment99", "timestamp": "2023-09-18 14:50:26", "segment": "segment99", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "27.7 Supervisor-level Instruction-Set Extensions ", "content": "Standard supervisor-level instruction-set extensions are defined in Volume II, and are named using 'S' as a prefix, followed by an alphabetical name and an optional version number. Supervisor-level
extensions must be separated from other multi-letter extensions by an underscore. Standard supervisor-level extensions are listed after the standard unprivileged extensions. If multiple supervisor-level extensions are listed, they are ordered alphabetically.", "url": "RV32ISPEC.pdf#segment100", "timestamp": "2023-09-18 14:50:26", "segment": "segment100", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "27.8 Hypervisor-level Instruction-Set Extensions ", "content": "Standard hypervisor-level instruction-set extensions are named like supervisor-level extensions, but begin with the letter 'H' instead of the letter 'S'. Standard hypervisor-level extensions are listed after the standard lesser-privileged extensions. If multiple hypervisor-level extensions are listed, they are ordered alphabetically.", "url": "RV32ISPEC.pdf#segment101", "timestamp": "2023-09-18 14:50:26", "segment": "segment101", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "27.9 Machine-level Instruction-Set Extensions ", "content": "Standard machine-level instruction-set extensions are prefixed with the three letters 'Zxm'. Standard machine-level extensions are listed after the standard lesser-privileged extensions. If multiple machine-level extensions are listed, they are ordered alphabetically.", "url": "RV32ISPEC.pdf#segment102", "timestamp": "2023-09-18 14:50:26", "segment": "segment102", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "27.10 Non-Standard Extension Names ", "content": "Non-standard extensions are named using a single 'X' followed by an alphabetical name and an optional version number. For example, 'Xhwacha' names the Hwacha vector-fetch ISA extension, and 'Xhwacha2' or 'Xhwacha2p0' names version 2.0 of that extension. Non-standard extensions must be listed after all standard extensions, and must be separated from other multi-letter extensions by an underscore. For example, an ISA with the non-standard extensions Argle and Bargle may be named RV64IZifencei_Xargle_Xbargle. If multiple non-standard extensions are listed, they are ordered alphabetically.", "url": "RV32ISPEC.pdf#segment103", "timestamp": "2023-09-18 14:50:26", "segment": "segment103", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "27.11 Subset Naming Convention ", "content":
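The naming rules of this chapter can be illustrated with a small ISA-string splitter. This is a sketch only: the function `split_isa`, its regular expressions, and its return format are hypothetical illustrations, not part of the specification, and the grammar is simplified (for example, it does not validate extension ordering or dependences).

```python
import re

# Hypothetical helper illustrating this chapter's naming rules:
# case-insensitive strings (27.1), a base integer ISA prefix (27.2),
# optional <major>p<minor> version suffixes (27.4), underscore
# separators (27.5), and multi-letter Z*/S*/H*/X* extensions
# (27.6-27.10). Not a normative grammar.

BASES = ("rv32i", "rv32e", "rv64i", "rv128i")
VERSION = r"(?:\d+(?:p\d+)?)?"

def split_isa(name):
    s = name.lower()                                  # 27.1: case-insensitive
    base = next((b for b in BASES if s.startswith(b)), None)
    if base is None:
        raise ValueError("ISA string must begin with a base integer ISA")
    rest = s[len(base):]
    m = re.match(r"\d+(?:p\d+)?", rest)               # version of the base
    base_version = m.group(0) if m else ""
    rest = rest[len(base_version):]
    extensions = []
    for chunk in rest.split("_"):                     # 27.5: underscores
        # Multi-letter extensions start with z, s, h, or x; single-letter
        # standard extensions may be concatenated without separators.
        for ext in re.finditer(r"(?:[zshx][a-z]+|[a-z])" + VERSION, chunk):
            extensions.append(ext.group(0))
    return base, base_version, extensions

print(split_isa("RV64IMAFDZicsr_Zifencei"))
# -> ('rv64i', '', ['m', 'a', 'f', 'd', 'zicsr', 'zifencei'])
```

Note how the underscore disambiguation of Section 27.5 falls out of the split: 'rv32i2p2' parses as version 2.2 of the base, while 'rv32i2_p2' parses as version 2.0 of the base plus the P extension.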
"table 271 summarizes standardized extension names", "url": "RV32ISPEC.pdf#segment104", "timestamp": "2023-09-18 14:50:26", "segment": "segment104", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2027.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "Chapter 28 ", "content": "history acknowledgments", "url": "RV32ISPEC.pdf#segment105", "timestamp": "2023-09-18 14:50:26", "segment": "segment105", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "28.1 \u201cWhy Develop a new ISA?\u201d Rationale from Berkeley Group ", "content": "developed riscv support needs research education group particularly interested actual hardware implementations research ideas completed eleven different silicon fabrications riscv since first edition specification providing real implementations students explore classes riscv processor rtl designs used multiple undergraduate graduate classes berkeley current re search especially interested move towards specialized heterogeneous accelerators driven power constraints imposed end conventional transistor scaling wanted highly flexible extensible base isa around build research effort question repeatedly asked develop new isa biggest obvious benefit using existing commercial isa large widely supported software ecosystem development tools ported applications leveraged research teaching benefits include existence large amounts documentation tutorial examples however experience using commercial instruction sets research teaching benefits smaller practice outweigh disadvantages commercial isas proprietary except sparc v8 open ieee standard 2 owners commercial isas carefully guard intellectual property welcome freely available competitive implementations much less issue academic research teaching using software simulators major concern groups wishing share actual rtl implementations also major concern entities want trust sources commercial isa implementations prohibited creating 
clean-room implementations of their own. We cannot guarantee that all RISC-V implementations will be free of third-party patent infringements, but we can guarantee that we will not attempt to sue a RISC-V implementor.

Commercial ISAs are only popular in certain market domains. The most obvious examples at the time of writing are that the ARM architecture is not well supported in the server space, and the Intel x86 architecture (or, for that matter, almost every other architecture) is not well supported in the mobile space, though both Intel and ARM are attempting to enter each other's market segments. Another example is ARC and Tensilica, which provide extensible cores but are focused on the embedded space. This market segmentation dilutes the benefit of supporting a particular commercial ISA, as in practice the software ecosystem only exists for certain domains and has to be built for others.

Commercial ISAs come and go. Previous research infrastructures have been built around commercial ISAs that are no longer popular (SPARC, MIPS) or even no longer in production (Alpha). These lose the benefit of an active software ecosystem, and the lingering intellectual property issues around the ISA and supporting tools interfere with the ability of interested third parties to continue supporting the ISA. An open ISA might also lose popularity, but any interested party can continue using and developing the ecosystem.

The most popular commercial ISAs are complex. The dominant commercial ISAs (x86 and ARM) are both very complex to implement in hardware to the level of supporting common software stacks and operating systems. Worse, nearly all of this complexity is due to bad, or at least outdated, ISA design decisions rather than features that truly improve efficiency.

Commercial ISAs alone are not enough to bring up applications. Even if we expend the effort to implement a commercial ISA, this is not enough to run existing applications for most ISAs. Most applications need a complete ABI (application binary interface) to run, not just the user-level ISA. Most ABIs rely on libraries, which in turn rely on operating system support. To run an existing operating system requires implementing the supervisor-level ISA and the device interfaces expected by the OS, which are usually much less well-specified and considerably more complex to implement than the user-level ISA.

Popular commercial ISAs were not designed for extensibility. The dominant commercial ISAs were not particularly designed for extensibility, and as a consequence have added considerable instruction encoding complexity as their instruction sets have grown. Companies such as Tensilica (acquired by Cadence) and ARC (acquired by Synopsys) have built ISAs and toolchains around
extensibility, but have focused on embedded applications rather than general-purpose computing systems.

A modified commercial ISA is a new ISA. One of our main goals is to support architecture research, including major ISA extensions. Even small extensions diminish the benefit of using a standard ISA, as compilers must be modified and applications rebuilt from source code to use the extension. Larger extensions that introduce new architectural state also require modifications to the operating system. Ultimately, the modified commercial ISA becomes a new ISA, but carries along all the legacy baggage of the base ISA.

Our position is that the ISA is perhaps the most important interface in a computing system, and there is no reason that such an important interface should be proprietary. The dominant commercial ISAs are based on instruction-set concepts that were already well known over 30 years ago. Software developers should be able to target an open standard hardware target, and commercial processor designers should compete on implementation quality.

We are far from the first to contemplate an open ISA design suitable for hardware implementation. We also considered other existing open ISA designs, of which the closest to our goals was the OpenRISC architecture [12]. We decided against adopting the OpenRISC ISA for several technical reasons: OpenRISC has condition codes and branch delay slots, which complicate higher-performance implementations; OpenRISC uses a fixed 32-bit encoding and 16-bit immediates, which precludes a denser instruction encoding and limits space for later expansion of the ISA; OpenRISC does not support the 2008 revision of the IEEE 754 floating-point standard; and the OpenRISC 64-bit design had not been completed when we began.

By starting from a clean slate, we could design an ISA that met all of our goals, though, of course, it took far more effort than we had planned at the outset. We have invested considerable effort in building up the RISC-V ISA infrastructure, including documentation, compiler toolchains, operating system ports, reference ISA simulators, FPGA implementations, efficient ASIC implementations, architecture test suites, and teaching materials. Since the last edition of this manual, there has been considerable uptake of the RISC-V ISA in both academia and industry, and we have created the non-profit RISC-V Foundation to protect and promote the standard. The RISC-V Foundation website at https://riscv.org contains the latest information on Foundation membership and the various open-source projects using RISC-V.", "url": "RV32ISPEC.pdf#segment106", "timestamp":
"2023-09-18 14:50:27", "segment": "segment106", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "28.2 History from Revision 1.0 of ISA manual ", "content": "riscv isa instructionset manual builds upon several earlier projects several aspects supervisorlevel machine overall format manual date back t0 torrent0 vector microprocessor project uc berkeley icsi begun 1992 t0 vector processor based mipsii isa krste asanovic main architect rtl designer brian kingsbury bertrand irrisou principal vlsi implementors david johnson icsi major contributor t0 isa design particularly supervisor mode manual text john hauser also provided considerable feedback t0 isa design scale softwarecontrolled architecture low energy project mit begun 2000 built upon t0 project infrastructure refined supervisorlevel interface moved away mips scalar isa dropping branch delay slot ronny krashinsky christopher batten principal architects scale vectorthread processor mit mark hampton ported gccbased compiler infrastructure tools scale lightly edited version t0 mips scalar processor specification mips6371 used teaching new version mit 6371 introduction vlsi systems class fall 2002 semester chris terman krste asanovic lecturers chris terman contributed lab material class ta 6371 class evolved trial 6884 complex digital design class mit taught arvind krste asanovic spring 2005 became regular spring class 6375 reduced version scale mipsbased scalar isa named smips used 68846375 christopher batten ta early offerings classes developed considerable amount documentation lab material based around smips isa smips lab material adapted enhanced ta yunsup lee uc berkeley fall 2009 cs250 vlsi systems design class taught john wawrzynek krste asanovic john lazzaro maven malleable array vectorthread engines project secondgeneration vector thread architecture design led christopher batten exchange scholar uc berkeley starting summer 2007 hidetaka aoki visiting industrial fellow hitachi gave 
considerable feedback on the early Maven ISA and microarchitecture design. The Maven infrastructure was based on the Scale infrastructure, but the Maven ISA moved further away from the MIPS ISA variant defined in Scale, with a unified floating-point and integer register file. Maven was designed to support experimentation with alternative data-parallel accelerators. Yunsup Lee was the main implementor of the various Maven vector units, while Rimas Avizienis was the main implementor of the various Maven scalar units. Yunsup Lee and Christopher Batten ported GCC to work with the new Maven ISA. Christopher Celio provided the initial definition of a traditional vector instruction set ('Flood') variant of Maven.

Based on experience with all these previous projects, the RISC-V ISA definition was begun in Summer 2010, with Andrew Waterman, Yunsup Lee, Krste Asanovic, and David Patterson as the principal designers. An initial version of the RISC-V 32-bit instruction subset was used in the UC Berkeley Fall 2010 CS250 VLSI Systems Design class, with Yunsup Lee as TA. RISC-V is a clean break from the earlier MIPS-inspired designs. John Hauser contributed to the floating-point ISA definition, including the sign-injection instructions and a register encoding scheme that permits internal recoding of floating-point values.", "url": "RV32ISPEC.pdf#segment107", "timestamp": "2023-09-18 14:50:27", "segment": "segment107", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "28.3 History from Revision 2.0 of ISA manual ", "content": "Multiple implementations of RISC-V processors have been completed, including several silicon fabrications, as shown in Figure 28.1.

The first RISC-V processors to be fabricated were written in Verilog and manufactured in a pre-production 28nm FDSOI technology from ST as the Raven-1 testchip in 2011. Two cores were developed by Yunsup Lee and Andrew Waterman, advised by Krste Asanovic, and fabricated together: 1) an RV64 scalar core with error-detecting flip-flops, and 2) an RV64 core with an attached 64-bit floating-point vector unit. The first microarchitecture was informally known as 'TrainWreck', due to the short time available to complete the design with immature design libraries. Subsequently, a clean microarchitecture for an in-order decoupled RV64 core was developed by Andrew Waterman, Rimas Avizienis, and Yunsup Lee, advised by Krste Asanovic, and, continuing the railway theme, was codenamed 'Rocket'
after George Stephenson's successful steam locomotive design. Rocket was written in Chisel, a new hardware design language developed at UC Berkeley. The IEEE floating-point units used in Rocket were developed by John Hauser, Andrew Waterman, and Brian Richards. Rocket has since been refined and developed further, and has been fabricated two more times in 28nm FDSOI (Raven-2 and Raven-3) and five times in IBM 45nm SOI technology (EOS14, EOS16, EOS18, EOS20, EOS22) for a photonics project. Work is ongoing to make the Rocket design available as a parameterized RISC-V processor generator.

EOS14-EOS22 chips include early versions of Hwacha, a 64-bit IEEE floating-point vector unit, developed by Yunsup Lee, Andrew Waterman, Huy Vo, Albert Ou, Quan Nguyen, and Stephen Twigg, advised by Krste Asanovic. EOS16-EOS22 chips include dual cores with a cache-coherence protocol developed by Henry Cook and Andrew Waterman, advised by Krste Asanovic. EOS14 silicon has successfully run at 1.25 GHz. EOS16 silicon suffered from a bug in the IBM pad libraries. EOS18 and EOS20 have successfully run at 1.35 GHz.

Contributors to the Raven testchips include Yunsup Lee, Andrew Waterman, Rimas Avizienis, Brian Zimmer, Jaehwa Kwak, Ruzica Jevtic, Milovan Blagojevic, Alberto Puggelli, Steven Bailey, Ben Keller, Pi-Feng Chiu, Brian Richards, Borivoje Nikolic, and Krste Asanovic. Contributors to the EOS testchips include Yunsup Lee, Rimas Avizienis, Andrew Waterman, Henry Cook, Huy Vo, Daiwei Li, Chen Sun, Albert Ou, Quan Nguyen, Stephen Twigg, Vladimir Stojanovic, and Krste Asanovic.

Andrew Waterman and Yunsup Lee developed the C++ ISA simulator 'Spike', used as a golden model in development and named after the golden spike used to celebrate the completion of the US transcontinental railway. Spike has been made available as a BSD open-source project. Andrew Waterman completed a Master's thesis with a preliminary design of the RISC-V compressed instruction set [22].

Various FPGA implementations of RISC-V have been completed, primarily as part of integrated demos for the Par Lab project research retreats. The largest FPGA design has 3 cache-coherent RV64IMA processors running a research operating system. Contributors to the FPGA implementations include Andrew Waterman, Yunsup Lee, Rimas Avizienis, and Krste Asanovic.

RISC-V processors have been used in several classes at UC Berkeley. Rocket was used in the Fall 2011 offering of
CS250 as the basis for class projects, with Brian Zimmer as TA. For the undergraduate CS152 class in Spring 2012, Christopher Celio used Chisel to write a suite of educational RV32 processors, named 'Sodor' after the island on which 'Thomas the Tank Engine' and friends live. The suite includes a microcoded core, an unpipelined core, and 2-, 3-, and 5-stage pipelined cores, and is publicly available under a BSD license. The suite was subsequently updated and used again in CS152 in Spring 2013, with Yunsup Lee as TA, and in Spring 2014, with Eric Love as TA. Christopher Celio also developed an out-of-order RV64 design known as BOOM (Berkeley Out-of-Order Machine), with accompanying pipeline visualizations, that was used in the CS152 classes. The CS152 classes also used cache-coherent versions of the Rocket core developed by Andrew Waterman and Henry Cook.

Over the summer of 2013, the RoCC (Rocket Custom Coprocessor) interface was defined to simplify adding custom accelerators to the Rocket core. Rocket and the RoCC interface were used extensively in the Fall 2013 CS250 VLSI class taught by Jonathan Bachrach, where several student accelerator projects were built to the RoCC interface. The Hwacha vector unit has been rewritten as a RoCC coprocessor.

Two Berkeley undergraduates, Quan Nguyen and Albert Ou, successfully ported Linux to run on RISC-V in Spring 2013. Colin Schmidt successfully completed an LLVM backend for RISC-V 2.0 in January 2014. Darius Rad at Bluespec contributed soft-float ABI support to the GCC port in March 2014. John Hauser contributed the definition of the floating-point classification instructions. We are aware of several other RISC-V core implementations, including one in Verilog by Tommy Thorn and one in Bluespec by Rishiyur Nikhil.

Acknowledgments: Thanks to Christopher F. Batten, Preston Briggs, Christopher Celio, David Chisnall, Stefan Freudenberger, John Hauser, Ben Keller, Rishiyur Nikhil, Michael Taylor, Tommy Thorn, and Robert Watson for comments on the draft ISA version 2.0 specification.", "url": "RV32ISPEC.pdf#segment108", "timestamp": "2023-09-18 14:50:27", "segment": "segment108", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%2028.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "28.4 History from Revision 2.1 ", "content": "Uptake of the RISC-V ISA has been rapid since the introduction of the frozen version 2.0 in May 2014, with too much activity
to record in a short history section such as this. Perhaps the most important single event was the formation of the non-profit RISC-V Foundation in August 2015. The Foundation will now take over stewardship of the official RISC-V ISA standard, and the official website riscv.org is the best place to obtain news and updates on the RISC-V standard.

Acknowledgments: Thanks to Scott Beamer, Allen J. Baum, Christopher Celio, David Chisnall, Paul Clayton, Palmer Dabbelt, Jan Gray, Michael Hamburg, and John Hauser for comments on the version 2.0 specification.", "url": "RV32ISPEC.pdf#segment109", "timestamp": "2023-09-18 14:50:27", "segment": "segment109", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "28.5 History from Revision 2.2 ", "content": "Acknowledgments: Thanks to Jacob Bachmeyer, Alex Bradbury, David Horner, Stefan O'Rear, and Joseph Myers for comments on the version 2.1 specification.", "url": "RV32ISPEC.pdf#segment110", "timestamp": "2023-09-18 14:50:27", "segment": "segment110", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "28.6 History for Revision 2.3 ", "content": "Uptake of RISC-V continues at a breakneck pace. John Hauser and Andrew Waterman contributed a hypervisor ISA extension based upon a proposal by Paolo Bonzini. Daniel Lustig, Arvind, Krste Asanovic, Shaked Flur, Paul Loewenstein, Yatin Manerkar, Luc Maranget, Margaret Martonosi, Vijayanand Nagarajan, Rishiyur Nikhil, Jonas Oberhauser, Christopher Pulte, Jose Renau, Peter Sewell, Susmit Sarkar, Caroline Trippel, Muralidaran Vijayaraghavan, Andrew Waterman, Derek Williams, Andrew Wright, and Sizhuo Zhang contributed the memory consistency model.", "url": "RV32ISPEC.pdf#segment111", "timestamp": "2023-09-18 14:50:28", "segment": "segment111", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "28.7 Funding ", "content": "Development of the RISC-V architecture and implementations has been partially funded by the following sponsors. Par Lab: research supported by Microsoft (Award #024263) and Intel (Award #024894) funding, and by matching funding from U.C. Discovery (Award #DIG07-10227); additional support came from Par Lab affiliates Nokia, NVIDIA, Oracle, and Samsung. Project Isis: DoE Award DE-SC0003624. ASPIRE Lab: DARPA PERFECT program,
Award #HR0011-12-2-0016; DARPA POEM program, Award #HR0011-11-C-0100; and the Center for Future Architectures Research (C-FAR), a STARnet center funded by the Semiconductor Research Corporation. Additional support came from the ASPIRE industrial sponsor, Intel, and the ASPIRE affiliates Google, Hewlett Packard Enterprise, Huawei, Nokia, NVIDIA, Oracle, and Samsung. The content of this paper does not necessarily reflect the position or the policy of the US government, and no official endorsement should be inferred.", "url": "RV32ISPEC.pdf#segment112", "timestamp": "2023-09-18 14:50:28", "segment": "segment112", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Appendix A ", "content": "This section provides more explanation of RVWMO (Chapter 14), using more informal language and concrete examples. These are intended to clarify the meaning and intent of the axioms and preserved program order rules. This appendix should be treated as commentary; all normative material is provided in Chapter 14 and in the rest of the main body of the ISA specification. All currently known discrepancies are listed in Section A.7; any other discrepancies are unintentional.", "url": "RV32ISPEC.pdf#segment113", "timestamp": "2023-09-18 14:50:28", "segment": "segment113", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "A.1 Why RVWMO?
", "content": "memory consistency models fall along loose spectrum weak strong weak memory models allow hardware implementation flexibility deliver arguably better performance performance per watt power scalability hardware verification overheads strong models expense complex programming model strong models provide simpler programming models cost imposing restrictions kinds nonspeculative hardware optimizations performed pipeline memory system turn imposing cost terms power area overhead verification burden riscv chosen rvwmo memory model variant release consistency places two extremes memory model spectrum rvwmo memory model enables architects build simple implementations aggressive implementations implementations embedded deeply inside much larger system subject complex memory system interactions number possibilities simultaneously strong enough support programming language memory models high performance facilitate porting code architectures hardware implementations may choose implement ztso extension provides stricter rvtso ordering semantics default code written rvwmo automatically inherently compatible rvtso code written assuming rvtso guaranteed run correctly rvwmo implementations fact rvwmo implementations simply refuse run rvtsoonly binaries implementation must therefore choose whether prioritize compatibility rvtso code eg facilitate porting x86 whether instead prioritize compatibility risc v cores implementing rvwmo fences andor memory ordering annotations code written rvwmo may become re dundant rvtso cost default rvwmo imposes ztso implementations incremental overhead fetching fences eg fence r rw fence rw w become noops implementation however fences must remain present code compatibility nonztso implementations desired", "url": "RV32ISPEC.pdf#segment114", "timestamp": "2023-09-18 14:50:28", "segment": "segment114", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "A.2 Litmus Tests ", "content": "explanations chapter make use litmus tests 
small programs designed to test or highlight one particular aspect of a memory model. Figure A.1 shows an example of a litmus test with two harts. As a convention for this figure and for all figures that follow in this chapter, we assume that s0-s2 are pre-set to the same value in all harts, and that s0 holds the address labeled x, s1 holds y, and s2 holds z, where x, y, and z are disjoint memory locations aligned to 8-byte boundaries. Each figure shows the litmus test code on the left and a visualization of one particular valid or invalid execution on the right.

Litmus tests are used to understand the implications of the memory model in specific concrete situations. For example, in the litmus test of Figure A.1, the final value of a0 in the first hart can be either 2, 4, or 5, depending on the dynamic interleaving of the instruction stream from each hart at runtime. However, in this example, the final value of a0 in hart 0 will never be 1 or 3: intuitively, the value 1 will no longer be visible at the time the load executes, and the value 3 will not yet be visible at the time the load executes. We analyze this test and many others below.

The diagram shown to the right of each litmus test shows a visual representation of the particular execution candidate being considered. These diagrams use a notation that is common in the memory model literature for constraining the set of possible global memory orders that could produce the execution in question. It is also the basis for the herd models presented in Appendix B.2. The notation is explained in Table A.1. Of the listed relations, rf edges between harts, co edges, fr edges, and ppo edges directly constrain the global memory order (as do fence, addr, data, and ctrl edges, but only indirectly, via the ppo edges they induce). Intra-hart rf edges are informative but do not constrain the global memory order.

For example, in Figure A.1, a0=1 could occur only if one of the following were true:
(b) appears before (a) in the global memory order (and hence also in the coherence order co). However, this violates RVWMO PPO rule 1. The co edge from (b) to (a) highlights this contradiction.
(a) appears before (b) in the global memory order (and hence also in the coherence order co). However, in this case the load value axiom would be violated, because (a) is not the latest matching store prior to (c) in program order. The fr edge from (c) to (b) highlights this contradiction.
Since neither of these scenarios satisfies the RVWMO axioms, the outcome a0=1 is forbidden.

Beyond what is described in this appendix, a suite of seven thousand litmus tests is available at https://github.com/litmus-tests/litmus-tests-riscv. The litmus tests repository also provides instructions on how to run the litmus tests on RISC-V hardware and how to compare the results with the operational and axiomatic models. In the future, we expect to adapt the memory model litmus tests
for use as part of the RISC-V compliance test suite as well.", "url": "RV32ISPEC.pdf#segment115", "timestamp": "2023-09-18 14:50:28", "segment": "segment115", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.1.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%20A.1.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "A.3 Explaining the RVWMO Rules ", "content": "In this section, we provide explanation and examples for all of the RVWMO rules and axioms.", "url": "RV32ISPEC.pdf#segment116", "timestamp": "2023-09-18 14:50:28", "segment": "segment116", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "A.3.1 Preserved Program Order and Global Memory Order ", "content": "Preserved program order represents the subset of program order that must be respected within the global memory order. Conceptually, events from the same hart that are ordered by preserved program order must appear in that order from the perspective of other harts and/or observers. Events from the same hart that are not ordered by preserved program order, on the other hand, may appear reordered from the perspective of other harts and/or observers.

Informally, the global memory order represents the order in which loads and stores perform. The formal memory model literature has moved away from specifications built around the concept of performing, but the idea is still useful for building up informal intuition. A load is said to have performed when its return value is determined. A store is said to have performed not when it has executed inside the pipeline, but rather only when its value has been propagated to globally visible memory. In this sense, the global memory order also represents the contribution of the coherence protocol and/or the rest of the memory system to interleave the (possibly reordered) memory accesses being issued by each hart into a single total order agreed upon by all harts.

The order in which loads perform, however, does not always directly correspond to the relative age of the values those loads return. In particular, a load b may perform before another load a to the same address (i.e., b may execute before a, and b may appear before a in the global memory order), but a may nevertheless return an older value than b. This discrepancy captures (among other things) the reordering effects of buffering placed between the core and memory. For example, b may have returned a value from a store in the store buffer, while a may have ignored that younger store and read an older value from memory instead. To account for this, at the time each
load performs, the value it returns is determined by the load value axiom, not just strictly by determining the most recent store to the same address in the global memory order, as described below.", "url": "RV32ISPEC.pdf#segment117", "timestamp": "2023-09-18 14:50:28", "segment": "segment117", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "A.3.2 Load Value Axiom ", "content": "Load value axiom: Each byte of each load returns the value written to that byte by the store that is latest in the global memory order among the following stores: 1) stores that write that byte and that precede the load in the global memory order; 2) stores that write that byte and that precede the load in program order.

Preserved program order is not required to respect the ordering of a store followed by a load to an overlapping address. This complexity arises due to the ubiquity of store buffers in nearly all implementations. Informally, the load may perform (return a value) by forwarding from the store while the store is still in the store buffer, and hence before the store itself performs (writes back to globally visible memory). Any other hart will therefore observe the load as performing before the store.

Consider the litmus test of Figure A.2. When running this program on an implementation with store buffers, it is possible to arrive at the final outcome a0=1, a1=0, a2=1, a3=0 as follows:
(a) executes and enters the first hart's private store buffer
(b) executes and forwards its return value 1 from (a) in the store buffer
(c) executes, since all previous loads (i.e., (b)) have completed
(d) executes and reads the value 0 from memory
(e) executes and enters the second hart's private store buffer
(f) executes and forwards its return value 1 from (e) in the store buffer
(g) executes, since all previous loads (i.e., (f)) have completed
(h) executes and reads the value 0 from memory
(a) drains from the first hart's store buffer to memory
(e) drains from the second hart's store buffer to memory
Therefore, the memory model must be able to account for this behavior.

To put it another way, suppose the definition of preserved program order did include the following hypothetical rule: memory access a precedes memory access b in preserved program order (and hence also in the global memory order) if a precedes b in program order, a and b access the same memory location, a is a write, and b is a read. Call this 'Rule X'. Then we would get the following:
(a) precedes (b): by Rule X
(b) precedes (d): by rule 4
(d) precedes (e): by the load value axiom. Otherwise, if (e) preceded (d), then (d) would be required to return the value 1. (That is a perfectly legal execution; it is just not the one in question here.)
(e) precedes (f): by Rule X
(f) precedes (h): by rule 4
(h) precedes (a): by the load value axiom, as above.
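The store-buffer interleaving stepped through above can be reproduced in a toy executable model. Everything below (the Hart class, the register and address names) is an illustrative sketch, not part of the specification, and the fences (c) and (g) are represented only by the sequential order of the calls.

```python
# Toy model of the Figure A.2 store-buffer execution (hypothetical
# helper names; an illustrative sketch, not normative semantics).

memory = {"x": 0, "y": 0}

class Hart:
    def __init__(self):
        self.store_buffer = []  # (address, value) pairs, oldest first
        self.regs = {}

    def store(self, addr, value):
        # The store enters the private store buffer; it is not yet
        # visible to the other hart.
        self.store_buffer.append((addr, value))

    def load(self, reg, addr):
        # Forward from the youngest matching buffered store, else read
        # globally visible memory.
        for a, v in reversed(self.store_buffer):
            if a == addr:
                self.regs[reg] = v
                return
        self.regs[reg] = memory[addr]

    def drain(self):
        # Buffered stores drain to globally visible memory, in order.
        for a, v in self.store_buffer:
            memory[a] = v
        self.store_buffer.clear()

h0, h1 = Hart(), Hart()
h0.store("x", 1)    # (a) enters hart 0's store buffer
h0.load("a0", "x")  # (b) forwards 1 from the store buffer
h0.load("a1", "y")  # (d) reads 0 from memory
h1.store("y", 1)    # (e) enters hart 1's store buffer
h1.load("a2", "y")  # (f) forwards 1 from the store buffer
h1.load("a3", "x")  # (h) reads 0 from memory
h0.drain()          # (a) drains to memory
h1.drain()          # (e) drains to memory

print(h0.regs, h1.regs)  # {'a0': 1, 'a1': 0} {'a2': 1, 'a3': 0}
```

Adding the hypothetical Rule X would amount to requiring (a) to drain before (b) executes, which is exactly the store-buffer forwarding that real implementations rely on.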
The global memory order must be a total order and cannot be cyclic, because a cycle would imply that every event in the cycle happens before itself, which is impossible. Therefore, the execution proposed above would be forbidden, and hence the addition of Rule X would forbid implementations with store buffer forwarding, which would clearly be undesirable.

Nevertheless, even if (b) precedes (a) and/or (f) precedes (e) in the global memory order, the only sensible possibility in this example is for (b) to return the value written by (a), and likewise for (f) and (e). This combination of circumstances is what leads to the second option in the definition of the load value axiom. Even though (b) precedes (a) in the global memory order, (a) will still be visible to (b) by virtue of sitting in the store buffer at the time (b) executes. Therefore, even if (b) precedes (a) in the global memory order, (b) should return the value written by (a), because (a) precedes (b) in program order. Likewise for (e) and (f).

Another test that highlights the behavior of store buffers is shown in Figure A.3. In this example, (d) is ordered before (e) because of the control dependency, and (f) is ordered before (g) because of the address dependency. However, (e) is not necessarily ordered before (f), even though (f) returns the value written by (e). This could correspond to the following sequence of events:
(e) executes speculatively and enters the second hart's private store buffer (but does not yet drain to memory)
(f) executes speculatively and forwards its return value 1 from (e) in the store buffer
(g) executes speculatively and reads the value 0 from memory
(a) executes, enters the first hart's private store buffer, and drains to memory
(b) executes and retires
(c) executes, enters the first hart's private store buffer, and drains to memory
(d) executes and reads the value 1 from memory
(e), (f), and (g) commit, since the speculation turned out to be correct
(e) drains from the store buffer to memory", "url": "RV32ISPEC.pdf#segment118", "timestamp": "2023-09-18 14:50:28", "segment": "segment118", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.2.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.3.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "A.3.3 Atomicity Axiom ", "content": "Atomicity axiom (for aligned atomics): If r and w are paired load and store operations generated by aligned LR and SC instructions in a hart h, s is a store to byte x, and r returns a value written by s, then s must precede w in the global memory order, and there can be no store from a hart other than h to byte x following s and preceding w in the global memory order.
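As a loose illustration of a reservation being held versus being lost, here is a toy model of LR/SC semantics. The helper functions and the per-hart reservation table are hypothetical simplifications, not the normative semantics; real implementations track reservations in hardware, often at cache-line granularity, as discussed below.

```python
# Toy model of LR/SC reservation semantics (illustrative sketch only;
# function names and the reservation table are hypothetical).

memory = {"x": 0}
reservation = {}          # hart id -> reserved address, if any

def lr(hart, addr):
    # Load-reserved: acquire a reservation and return the old value.
    reservation[hart] = addr
    return memory[addr]

def sc(hart, addr, value):
    # Store-conditional: succeeds only if the reservation is still held.
    if reservation.get(hart) == addr:
        memory[addr] = value
        reservation.pop(hart)
        return 0              # success code
    return 1                  # failure code

def store(hart, addr, value):
    memory[addr] = value
    # A store from another hart to the reserved address kills the
    # reservation, so the paired SC will fail.
    for h, a in list(reservation.items()):
        if h != hart and a == addr:
            reservation.pop(h)

# Hart 0 attempts an atomic increment of x; hart 1 intervenes.
old = lr(0, "x")                 # hart 0 reserves x, reads 0
store(1, "x", 5)                 # hart 1 stores to x, clearing the reservation
assert sc(0, "x", old + 1) == 1  # hart 0's SC fails: x was modified
assert memory["x"] == 5

# Without interference, the retried LR/SC sequence succeeds.
old = lr(0, "x")
assert sc(0, "x", old + 1) == 0
assert memory["x"] == 6
```

Note that in this sketch, as in the axiom, a store from the reserving hart itself does not clear the reservation.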
The RISC-V architecture decouples the notion of atomicity from the notion of ordering. Unlike architectures such as TSO, RISC-V atomics under RVWMO do not impose any ordering requirements by default; ordering semantics are only guaranteed by the PPO rules that otherwise apply.

RISC-V contains two types of atomics: AMOs and LR/SC pairs. These conceptually behave differently, in the following way. LR/SC behave as if the old value is brought up to the core, modified, and written back to memory, all while a reservation is held on that memory location. AMOs, on the other hand, conceptually behave as if they are performed directly at the memory. AMOs are therefore inherently atomic, while LR/SC pairs are atomic in the slightly different sense that the memory location in question will not be modified by another hart while the original hart holds the reservation.

The atomicity axiom forbids stores from other harts from being interleaved in global memory order between an LR and the SC paired with that LR. The atomicity axiom does not forbid loads from being interleaved between the paired operations in program order or in the global memory order, nor does it forbid stores from the same hart or stores to non-overlapping locations from appearing between the paired operations in either program order or the global memory order. For example, the SC instructions in Figure A.4 may (but are not guaranteed to) succeed. None of those successes would violate the atomicity axiom, because the intervening non-conditional stores are from the same hart as the paired load-reserved and store-conditional instructions. This way, a memory system that tracks memory accesses at cache line granularity (and which will therefore see the four snippets of Figure A.4 as identical) will not be forced to fail a store-conditional instruction that happens to (falsely) share another portion of the cache line with the memory location being held by the reservation.

The atomicity axiom also technically supports cases in which the LR and SC touch different addresses and/or use different access sizes; however, such use cases are expected to be rare in practice. Likewise, scenarios in which stores from the same hart between an LR/SC pair actually overlap the memory location(s) referenced by the LR or SC are expected to be rare compared to scenarios where the intervening store may simply fall onto the same cache line.", "url": "RV32ISPEC.pdf#segment119", "timestamp": "2023-09-18 14:50:29", "segment": "segment119", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.4.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "A.3.4 Progress Axiom ", "content": "Progress axiom: No memory operation may be preceded in the global memory order by an
an infinite sequence of other memory operations. The progress axiom ensures a minimal forward progress guarantee: it ensures that stores from one hart are eventually made visible to all other harts in the system within a finite amount of time, and that loads from other harts are eventually able to read those values (or successors thereof). Without this rule, it would be legal, for example, for a spinlock to spin infinitely on a value, even with a store from another hart waiting to unlock the spinlock. The progress axiom is not intended to impose any notion of fairness, latency, or quality of service onto the harts in a RISC-V implementation; any stronger notions of fairness are up to the rest of the ISA and/or the platform and/or the device to define and implement. The forward progress axiom will in almost all cases be naturally satisfied by a standard cache coherence protocol. Implementations with non-coherent caches may have to provide some other mechanism to ensure the eventual visibility of all stores (or successors thereof) to all harts.", "url": "RV32ISPEC.pdf#segment120", "timestamp": "2023-09-18 14:50:29", "segment": "segment120", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "A.3.5 Overlapping-Address Orderings (Rules 1\u20133) ", "content": "Rule 1: b is a store, and a and b access overlapping memory addresses. Rule 2: a and b are loads, x is a byte read by both a and b, there is no store to x between a and b in program order, and a and b return values for x written by different memory operations. Rule 3: a is generated by an AMO or SC instruction, b is a load, and b returns a value written by a. Same-address orderings where the latter operation is a store are straightforward: a load or store can never be reordered with a later store to an overlapping memory location. From a microarchitecture perspective, generally speaking, it is difficult or impossible to undo a speculatively reordered store if the speculation turns out to be invalid, so such behavior is simply disallowed by the model. Same-address orderings from a store to a later load, on the other hand, do not need to be enforced; as discussed in Section A.3.2, this reflects the observable behavior of implementations that forward values from buffered stores to later loads. Same-address load-load ordering requirements are far more subtle. The basic requirement is that a younger load must return a value no older than the value returned by an older load to the same address, which is often known as 'CoRR' (Coherence for Read-Read pairs), or as part of a broader 'coherence' or 'sequential consistency per location' requirement. Some architectures in the past have relaxed same-address load-load ordering, but in hindsight this is generally considered to
complicate the programming model too much. As such, RVWMO requires CoRR ordering to be enforced, but only in a manner consistent with the global memory order to which an execution corresponds. Consider the litmus test of Figure A.5, which is one particular instance of the more general 'fri-rfi' pattern. The term 'fri-rfi' refers to the sequence (d), (e), (f): (d) 'from-reads' (i.e., reads from an earlier write than) (e), which is in the same hart, and (f) reads from (e), which is in the same hart. From a microarchitectural perspective, the outcome a0=1, a1=2, a2=0 is legal (as are various other less subtle outcomes). Intuitively, the following sequence of events would produce the outcome in question: (d) stalls, for whatever reason (perhaps it is stalled waiting for some preceding instruction); (e) executes and enters the store buffer (but does not yet drain to memory); (f) executes and forwards from (e) in the store buffer; (g) and (h) execute; (a) executes and drains to memory; (b) executes; (c) executes and drains to memory; (d) unstalls and executes; (e) drains from the store buffer to memory. This corresponds to a global memory order of (f), (a), (c), (d), (e). Note that even though (f) performs before (d), the value returned by (f) is newer than the value returned by (d); therefore, this execution is legal and does not violate the CoRR requirements. Likewise, if two back-to-back loads return the values written by the same store, they may also appear out of order in the global memory order without violating CoRR. Note that this is not the same as saying that the two loads return the same value, since two different stores may write the same value. Consider the litmus test of Figure A.6: the outcome a0=1, a1=v, a2=v, a3=0 (where v is some value written by another hart) can be observed by allowing (g) and (h) to be reordered. This might be done speculatively, and the speculation can be justified by the microarchitecture (e.g., by snooping for cache invalidations and finding none) because replaying (h) after (g) would return the value written by the same store anyway. Hence, assuming a1 equals a2, (g) and (h) can be legally reordered in the global memory order; the corresponding execution would have a global memory order of (h), (k), (a), (c), (g). In any execution of the test in Figure A.6 where a1 does not equal a2, however, CoRR does in fact require (g) to appear before (h) in the global memory order. Allowing (h) to appear before (g) in the global memory order would in that case result in a violation of CoRR, because (h) would return an older value than that returned by (g); therefore, PPO rule 2 forbids this CoRR violation from occurring. As such, PPO rule 2 strikes a careful balance between enforcing CoRR in all cases while simultaneously being weak enough to permit 'RSW' and 'fri-rfi' patterns that commonly appear in real microarchitectures. There is one more overlapping-address rule: PPO rule 3 simply states that a value cannot be returned from an AMO or SC to a subsequent load until the AMO or SC has (in the case of an SC,
successfully) performed globally. This follows somewhat naturally from the conceptual view that AMOs and SC instructions are meant to be performed atomically in memory. Notably, however, PPO rule 3 states that hardware may not even non-speculatively forward the value being stored by an AMOSWAP to a subsequent load, even though the AMOSWAP store value is not actually semantically dependent on the previous value in memory, as is the case for the other AMOs. The same holds true even for forwarding from SC store values that are not semantically dependent on the value returned by the paired LR. The three PPO rules above also apply when the memory accesses in question only overlap partially. This can occur, for example, when accesses of different sizes are used to access the same object. Note also that the base addresses of two overlapping memory operations need not necessarily be the same for the two memory accesses to overlap. When misaligned memory accesses are being used, the overlapping-address PPO rules apply to each of the component memory accesses independently.", "url": "RV32ISPEC.pdf#segment121", "timestamp": "2023-09-18 14:50:29", "segment": "segment121", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.5.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.6.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "A.3.6 Fences (Rule 4) ", "content": "Rule 4: there is a FENCE instruction that orders a before b. By default, the FENCE instruction ensures that all memory accesses from instructions preceding the fence in program order (the 'predecessor set') appear earlier in the global memory order than memory accesses from instructions appearing after the fence in program order (the 'successor set'). However, fences can optionally further restrict the predecessor set and/or the successor set to a smaller set of memory accesses in order to provide some speedup. Specifically, fences have PR, PW, SR, and SW bits which restrict the predecessor and/or successor sets: the predecessor set includes loads (resp. stores) if and only if PR (resp. PW) is set, and similarly, the successor set includes loads (resp. stores) if and only if SR (resp. SW) is set. The FENCE encoding currently has nine non-trivial combinations of the four bits PR, PW, SR, and SW, plus one extra encoding, FENCE.TSO, which facilitates mapping of 'acquire+release' or RVTSO semantics. The remaining seven combinations have empty predecessor and/or successor sets and hence are no-ops. Of the ten non-trivial options, only
six are commonly used in practice: FENCE RW,RW; FENCE.TSO; FENCE RW,W; FENCE R,RW; FENCE R,R; FENCE W,W. FENCE instructions using any other combination of PR, PW, SR, and SW are reserved. We strongly recommend that programmers stick to these six combinations, as other combinations may have unknown or unexpected interactions with the memory model. Finally, we note that since RISC-V uses a multi-copy atomic memory model, programmers can reason about the fence bits in a thread-local manner; there is no complex notion of 'fence cumulativity' as found in memory models that are not multi-copy atomic.", "url": "RV32ISPEC.pdf#segment122", "timestamp": "2023-09-18 14:50:29", "segment": "segment122", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "A.3.7 Explicit Synchronization (Rules 5\u20138) ", "content": "Rule 5: a has an acquire annotation. Rule 6: b has a release annotation. Rule 7: a and b both have RCsc annotations. Rule 8: a is paired with b. An acquire operation, as would be used at the start of a critical section, requires all memory operations following the acquire in program order to also follow the acquire in the global memory order. This ensures, for example, that all loads and stores inside the critical section are up to date with respect to the synchronization variable being used to protect it. Acquire ordering can be enforced in one of two ways: with an acquire annotation, which enforces ordering with respect to just the synchronization variable itself, or with a FENCE R,RW, which enforces ordering with respect to all previous loads. Consider Figure A.7. Because this example uses aq, the loads and stores in the critical section are guaranteed to appear in the global memory order after the AMOSWAP used to acquire the lock. However, assuming a0, a1, and a2 point to different memory locations, the loads and stores in the critical section may or may not appear after the 'arbitrary unrelated load' at the beginning of the example in the global memory order. Now consider the alternative in Figure A.8. In this case, even though the AMOSWAP does not enforce ordering with an aq bit, the fence nevertheless enforces that the acquire AMOSWAP appears earlier in the global memory order than all loads and stores in the critical section. Note, however, that in this case the fence also enforces additional orderings: it also requires that the 'arbitrary unrelated load' at the start of the program appears earlier in the global memory order than the loads and stores of the critical section. (This particular fence does not, however, enforce any ordering with respect to the 'arbitrary unrelated store' at the start of the snippet.) In this way, the fence-enforced orderings are slightly coarser than the orderings enforced by aq. Release
orderings work exactly the same as acquire orderings, just in the opposite direction. Release semantics require all loads and stores preceding the release operation in program order to also precede the release operation in the global memory order. This ensures, for example, that memory accesses in a critical section appear before the lock-releasing store in the global memory order. Just as with acquire semantics, release semantics can be enforced using release annotations or with a FENCE RW,W operation. Using the same examples, the ordering between the loads and stores in the critical section and the 'arbitrary unrelated store' at the end of the code snippet is enforced only by the FENCE RW,W in Figure A.8, not by the rl in Figure A.7. With RCpc annotations alone, store-release-to-load-acquire ordering is not enforced. This facilitates the porting of code written under the TSO and/or RCpc memory models. To enforce store-release-to-load-acquire ordering, the code must use store-release-RCsc and load-acquire-RCsc operations so that PPO rule 7 applies. RCpc alone is sufficient for many use cases in C/C++ but is insufficient for many other use cases in C/C++, Java, and Linux, to name just a few examples; see Section A.5 for details. PPO rule 8 indicates that an SC must appear after its paired LR in the global memory order. This will follow naturally from the common use of LR/SC to perform an atomic read-modify-write operation due to the inherent data dependency. However, PPO rule 8 also applies even when the value being stored does not syntactically depend on the value returned by the paired LR. Lastly, we note that just as with fences, programmers need not worry about 'cumulativity' when analyzing ordering annotations.", "url": "RV32ISPEC.pdf#segment123", "timestamp": "2023-09-18 14:50:29", "segment": "segment123", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.7.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.8.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "A.3.8 Syntactic Dependencies (Rules 9\u201311) ", "content": "Rule 9: b has a syntactic address dependency on a. Rule 10: b has a syntactic data dependency on a. Rule 11: b is a store, and b has a syntactic control dependency on a. Dependencies from a load to a later memory operation in the same hart are respected by the RVWMO memory model. The Alpha memory model was notable for choosing not to enforce the ordering of such dependencies, but most modern hardware and software memory models consider allowing dependent instructions to be
reordered too confusing and counterintuitive. Furthermore, modern code sometimes intentionally uses such dependencies as a particularly lightweight ordering enforcement mechanism. The terms in Section 14.1 work as follows. Most instructions are said to carry dependencies from their source register(s) to their destination register(s): whenever the value written into a destination register is a function of a source register, the instruction carries a dependency from that source register to the destination register. However, there are notable exceptions. In the case of memory instructions, the value written into the destination register ultimately comes from the memory system rather than from the source register directly, and so this breaks the chain of dependencies carried from the source register. In the case of unconditional jumps, the value written into the destination register comes from the current PC, which is never considered a source register by the memory model; therefore, a JALR (the only jump with a source register) does not carry a dependency from rs1 to rd. The notion of accumulating into a destination register rather than writing into it reflects the behavior of CSRs such as fflags. In particular, an accumulation into a register does not clobber any previous writes or accumulations into the same register; for example, in Figure A.9, (c) has a syntactic dependency on both (a) and (b). Like many other modern memory models, the RVWMO memory model uses syntactic rather than semantic dependencies. In other words, the definition depends on the identities of the registers being accessed by different instructions, not on the actual contents of those registers. This means that an address, control, or data dependency must be enforced even if the calculation could seemingly be optimized away. This choice ensures that RVWMO remains compatible with code that uses a false syntactic dependency as a lightweight ordering mechanism. For example, there is a syntactic address dependency from the memory operation generated by the first instruction to the memory operation generated by the last instruction in Figure A.10, even though a1 XOR a1 is zero and hence has no effect on the address accessed by the second load. The benefit of using dependencies as a lightweight synchronization mechanism is that the ordering enforcement requirement is limited only to the specific two instructions in question; other non-dependent instructions may be freely reordered by aggressive implementations. One alternative would be to use a load-acquire, but this would enforce ordering for the first load with respect to all subsequent instructions. Another would be to use a FENCE R,R, but this would include all previous and all
subsequent loads, making this option more expensive. Control dependencies behave differently from address and data dependencies in the sense that a control dependency always extends to all instructions following the original target in program order. Consider Figure A.11: the instruction at 'next' will always execute, but the memory operation generated by that last instruction nevertheless still has a control dependency on the memory operation generated by the first instruction. Likewise, consider Figure A.12: even though both branch outcomes have the same target, there is still a control dependency from the memory operation generated by the first instruction in the snippet to the memory operation generated by the last instruction. This definition of control dependency is subtly stronger than what might be seen in other contexts (e.g., C++), but it conforms with standard definitions of control dependencies in the literature. Notably, PPO rules 9-11 are also intentionally designed to respect dependencies that originate from the output of a successful store-conditional instruction. Typically, an SC instruction will be followed by a conditional branch checking whether the outcome was successful; this implies that there will be a control dependency from the store operation generated by the SC instruction to any memory operations following the branch. PPO rule 11 in turn implies that any subsequent store operations will appear later in the global memory order than the store operation generated by the SC. However, since control dependencies are only defined over memory operations, and since an unsuccessful SC does not generate a memory operation, no ordering is enforced between an unsuccessful SC and its dependent instructions. Moreover, since an SC is defined to carry dependencies from its source registers to rd only when the SC is successful, an unsuccessful SC has no effect on the global memory order. In addition, the choice to respect dependencies originating at store-conditional instructions ensures that certain out-of-thin-air-like behaviors are prevented. Consider Figure A.13. Suppose a hypothetical implementation could occasionally make an early guarantee that a store-conditional operation will succeed. In that case, (c) could return 0 to a2 early (before actually executing), allowing the sequence (d), (e), (f), (a), and (b) to execute, and then (c) might execute (successfully) at that point, which would imply that (c) writes its success value 0 to s1. Fortunately, this situation and others like it are prevented by the fact that RVWMO respects dependencies originating at the stores generated by successful SC instructions.
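The LR/SC idiom discussed above, an atomic read-modify-write whose conditional branch on the SC outcome forms the control dependency, can be sketched in portable C. This is an illustrative sketch, not text from the specification: `fetch_add_lrsc_style` is a hypothetical helper name, and the note about lowering is toolchain-dependent (on RISC-V, compilers commonly lower a weak compare-exchange loop to an lr.w/sc.w/bnez sequence).

```c
#include <stdatomic.h>

/* Sketch of the common LR/SC use: an atomic read-modify-write
 * implemented as a retry loop.  The branch on the exchange outcome
 * (the "SC success check") is what creates the control dependency
 * on the conditional store described in the text above. */
static int fetch_add_lrsc_style(_Atomic int *p, int delta) {
    int old = atomic_load_explicit(p, memory_order_relaxed);  /* the "LR" */
    /* Retry until the conditional store (the "SC") succeeds; on each
     * failure, 'old' is refreshed with the currently observed value. */
    while (!atomic_compare_exchange_weak_explicit(
               p, &old, old + delta,
               memory_order_relaxed, memory_order_relaxed)) {
        /* spin and retry */
    }
    return old;  /* the value observed by the load half of the pair */
}
```

Relaxed orderings are used here deliberately: as the text notes, RISC-V atomics impose no ordering by default, and any acquire/release semantics would come from aq/rl annotations or fences.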
We also note that syntactic dependencies between instructions only have force when they take the form of a syntactic address, control, and/or data dependency. For example, a syntactic dependency between two 'F' instructions via one of the accumulating CSRs of Section 14.3 does not imply that the two F instructions must be executed in order. Such a dependency would only serve to ultimately set up a later dependency from both F instructions to a later CSR instruction accessing the CSR flag in question.", "url": "RV32ISPEC.pdf#segment124", "timestamp": "2023-09-18 14:50:30", "segment": "segment124", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.9.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.10.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.11.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.12.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.13.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "A.3.9 Pipeline Dependencies (Rules 12\u201313) ", "content": "Rule 12: b is a load, and there exists some store m between a and b in program order such that m has an address or data dependency on a, and b returns a value written by m. Rule 13: b is a store, and there exists some instruction m between a and b in program order such that m has an address dependency on a. PPO rules 12 and 13 reflect behaviors of almost all real processor pipeline implementations. Rule 12 states that a load cannot forward from a store until the address and data for that store are known. Consider Figure A.14: (f) cannot be executed until the data for (e) has been resolved, because (f) must return the value written by (e) (or by something even later in the global memory order), and the old value must not be clobbered by the writeback of (e) before (d) has had a chance to perform. Therefore, (f) will never perform before (d) has performed. If there were another store to the same address between (e) and (f), as in Figure A.15, then (f) would no longer be dependent on the data of (e) being resolved, and hence the dependency of (f) on (d), which produces the data for (e), would be broken. Rule 13 makes a similar observation to the previous rule: a store cannot be performed at memory until all previous loads that might access the same address have themselves been performed. Such a load must appear to execute before the store, but it cannot do so if the store would overwrite the value in memory before the load has had a chance to read the old value. Likewise, a store generally cannot be performed until it is known that preceding instructions will not cause an exception due to failed address resolution. Consider Figure A.16: (f) cannot be
executed until the address for (e) is resolved, because it may turn out that the addresses match (i.e., that a1 equals s0). Therefore, (f) cannot be sent to memory before (d) has executed and confirmed whether the addresses do indeed overlap.", "url": "RV32ISPEC.pdf#segment125", "timestamp": "2023-09-18 14:50:30", "segment": "segment125", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.15.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.16.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "A.4 Beyond Main Memory ", "content": "RVWMO does not currently attempt to formally describe how FENCE.I, SFENCE.VMA, I/O fences, and PMAs behave. All of these behaviors will be described by future formalizations. In the meantime, the behavior of FENCE.I is described in Section 2.7, the behavior of SFENCE.VMA is described in the RISC-V Instruction Set Privileged Architecture Manual, and the behavior of I/O fences and the effects of PMAs are described below.", "url": "RV32ISPEC.pdf#segment126", "timestamp": "2023-09-18 14:50:30", "segment": "segment126", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "A.4.1 Coherence and Cacheability ", "content": "The RISC-V Privileged ISA defines Physical Memory Attributes (PMAs) which specify, among other things, whether portions of the address space are coherent and/or cacheable. See the RISC-V Privileged ISA Specification for the complete details. Here, we simply discuss how the various details in each PMA relate to the memory model. Main memory vs. I/O, and I/O memory ordering PMAs: the memory model as defined applies to main memory regions; I/O ordering is discussed below. Supported access types and atomicity PMAs: the memory model is simply applied on top of whatever primitives each region supports. Cacheability PMAs: the cacheability PMAs in general do not affect the memory model. Non-cacheable regions may have more restrictive behavior than cacheable regions, but the set of allowed behaviors does not change regardless. However, some platform-specific and/or device-specific cacheability settings may differ. Coherence PMAs: the memory consistency model for memory regions marked as non-coherent in PMAs is currently platform-specific and/or device-specific: the load value axiom, the atomicity axiom, and the progress axiom all may be violated with non-coherent memory. Note, however, that coherent memory does not require a hardware
cache coherence protocol. The RISC-V Privileged ISA Specification suggests that hardware-incoherent regions of main memory are discouraged, but the memory model is compatible with hardware coherence, software coherence, implicit coherence due to read-only memory, implicit coherence due to only one agent having access, or otherwise. Idempotency PMAs: idempotency PMAs are used to specify memory regions for which loads and/or stores may have side effects, and this in turn may be used by the microarchitecture to determine, e.g., whether prefetches are legal. This distinction does not affect the memory model.", "url": "RV32ISPEC.pdf#segment127", "timestamp": "2023-09-18 14:50:30", "segment": "segment127", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "A.4.2 I/O Ordering ", "content": "For I/O, the load value axiom and the atomicity axiom in general do not apply: reads and writes might have device-specific side effects and may return values other than the value written by the most recent store to the same address. Nevertheless, the following preserved program order rules still generally apply to accesses to I/O memory: memory access a precedes memory access b in the global memory order if a precedes b in program order and one or more of the following holds: 1. a precedes b in preserved program order as defined in Chapter 14, with the exception that acquire and release ordering annotations apply only from one memory operation to another memory operation or from one I/O operation to another I/O operation, but not from a memory operation to an I/O operation or vice versa; 2. a and b are accesses to overlapping addresses in an I/O region; 3. a and b are accesses to the same strongly-ordered I/O region; 4. a and b are accesses to I/O regions, and the channel associated with the I/O region accessed by either a or b is channel 1; 5. a and b are accesses to I/O regions associated with the same channel (except for channel 0). Note that the FENCE instruction distinguishes between main memory operations and I/O operations in its predecessor and successor sets. To enforce ordering between I/O operations and main memory operations, code must use a FENCE with PI, PO, SI, and/or SO, plus PR, PW, SR, and/or SW. For example, to enforce ordering between a write to main memory and an I/O write to a device register, a FENCE W,O or stronger is needed. When a fence is in fact used, implementations must assume that the device may attempt to access memory immediately after receiving the MMIO signal, and subsequent memory accesses from that device to memory must observe the effects of all accesses ordered prior to that MMIO operation. In other words, in Figure A.17,
suppose that 0(a0) is in main memory and 0(a1) is the address of a device register in I/O memory. If the device accesses 0(a0) upon receiving the MMIO write, then that load must conceptually appear after the first store to 0(a0) according to the rules of the RVWMO memory model. In some implementations, the only way to ensure this will be to require that the first store does in fact complete before the MMIO write is issued. Other implementations may find more aggressive ways to enforce the ordering, and others still may not need to do anything different at all for I/O and main memory accesses. Nevertheless, the RVWMO memory model does not distinguish between these options; it simply provides an implementation-agnostic mechanism to specify the orderings that must be enforced. Many architectures include separate notions of 'ordering' and 'completion' fences, especially as they relate to I/O (as opposed to regular main memory): ordering fences simply ensure that memory operations stay in order, while completion fences ensure that predecessor accesses have all completed before any successors are made visible. RISC-V does not explicitly distinguish between ordering and completion fences; instead, this distinction is simply inferred from the different uses of the FENCE bits. For implementations that conform to the RISC-V Unix Platform Specification, I/O devices and DMA operations are required to access memory coherently and via strongly-ordered I/O channels. Therefore, accesses to regular main memory regions that are concurrently accessed by external devices can also use the standard synchronization mechanisms. Implementations that do not conform to the Unix Platform Specification, and/or in which devices do not access memory coherently, will need to use mechanisms (which are currently platform-specific and/or device-specific) to enforce coherency. I/O regions in the address space should be considered non-cacheable regions in the PMAs for those regions; such regions can be considered coherent by the PMA if they are not cached by any agent. The ordering guarantees in this section may not apply beyond a platform-specific boundary between the RISC-V cores and the device. In particular, I/O accesses sent across an external bus (e.g., PCIe) may be reordered before they reach their ultimate destination; ordering must be enforced in such situations according to the platform-specific rules of those external devices and buses.", "url": "RV32ISPEC.pdf#segment128", "timestamp": "2023-09-18 14:50:31", "segment": "segment128", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.17.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, {
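The FENCE W,O pattern described above (order main-memory stores before an MMIO doorbell write) can be sketched as follows. This is a host-runnable illustration, not real MMIO: `DOORBELL`, `dma_buf`, and `post_to_device` are hypothetical stand-ins, and the fence is modeled as a plain compiler barrier so the sketch compiles anywhere; on RISC-V hardware it would be `asm volatile("fence w,o" ::: "memory")` against an actual device register mapping.

```c
#include <stdint.h>

static volatile uint32_t DOORBELL;   /* stand-in for a device register (I/O) */
static uint32_t dma_buf[4];          /* main-memory buffer the device reads */

/* Modeled as a compiler barrier for this sketch; a real RISC-V build
 * would emit "fence w,o" here. */
#define FENCE_W_O() __asm__ volatile("" ::: "memory")

static void post_to_device(uint32_t tag) {
    for (int i = 0; i < 4; i++)
        dma_buf[i] = tag + (uint32_t)i;  /* main-memory stores (predecessor set: W) */
    FENCE_W_O();                         /* order buffer writes before the I/O write */
    DOORBELL = tag;                      /* I/O store (successor set: O): device may
                                          * now read dma_buf and must see the stores */
}
```

The design point mirrors the text: the model only requires the ordering to be enforced somehow; whether an implementation stalls the MMIO write until the stores complete is not architecturally visible.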
"section": "A.5 Code Porting and Mapping Guidelines ", "content": "table a2 provides mapping tso memory operations onto riscv memory instructions normal x86 loads stores inherently acquirercpc releasercpc operations tso enforces loadload loadstore storestore ordering default therefore rvwmo tso loads must mapped onto load followed fence r rw tso stores must mapped onto fence rw w followed store tso atomic readmodifywrites x86 instructions using lock prefix fullyordered implemented either via amo aq rl set via lr aq set arithmetic operation question sc aq rl set conditional branch checking success condition latter case rl annotation lr turns nonobvious reasons redundant omitted alternatives table a2 also possible tso store mapped onto amoswap rl set however since rvwmo ppo rule 3 forbids forwarding values amos subsequent loads use amoswap stores may negatively affect p erformance l oad c mapped using lr aq set lr instructions unpaired fact preclude use lr loads however mapping may also negatively affect performance puts pressure reservation mechanism originally intended table a3 provides mapping power memory operations onto riscv memory instructions power isync maps riscv fencei followed fence r r latter fence needed isync used define controlcontrol fence dependency present rvwmo table a4 provides mapping arm memory operations onto riscv memory instructions since riscv currently plain load store opcodes aq rl annotations arm loadacquire storerelease operations mapped using fences instead furthermore order enforce storereleasetoloadacquire ordering must fence rw rw storerelease loadacquire table a4 enforces always placing fence front acquire operation arm loadexclusive storeexclusive instructions likewise map onto riscv lr sc equivalents instead placing fence rw rw front lr aq set simply also set rl instead arm isb maps riscv fencei followed fence r r similarly isync maps power table a5 provides mapping linux memory ordering macros onto riscv memory instructions 
The Linux fences dma_rmb() and dma_wmb() map onto FENCE R,R and FENCE W,W, respectively, since the RISC-V Unix platform requires coherent DMA, but they would be mapped onto FENCE RI,RI and FENCE WO,WO, respectively, on a platform with non-coherent DMA. Platforms with non-coherent DMA may also require a mechanism by which cache lines can be flushed and/or invalidated; such mechanisms will be device-specific and/or standardized in a future extension to the ISA. The Linux mappings for release operations may seem stronger than necessary, but these mappings are needed to cover some cases in which Linux requires stronger orderings than the more intuitive mappings would provide. In particular, as of the time this text is being written, Linux is actively debating whether to require load-load, load-store, and store-store orderings between accesses in one critical section and accesses in a subsequent critical section in the same hart protected by the same synchronization object. Not all combinations of FENCE RW,W / FENCE R,RW mappings with aq/rl mappings combine to provide such orderings. There are a few ways around this problem, including: 1. Always use FENCE RW,W / FENCE R,RW, and never use aq/rl. This suffices but is undesirable, as it defeats the purpose of the aq/rl modifiers. 2. Always use aq/rl, and never use FENCE RW,W / FENCE R,RW. This does not currently work due to the lack of load and store opcodes with aq and rl modifiers. 3. Strengthen the mappings of release operations such that they would enforce sufficient orderings in the presence of either type of acquire mapping. This is the currently-recommended solution, and the one shown in Table A.5. For example, the critical section ordering rule currently being debated by the Linux community would require (a) to be ordered before (e) in Figure A.18. If that is indeed required, then it would be insufficient for (b) to map as FENCE RW,W. That said, these mappings are subject to change as the Linux Kernel Memory Model evolves. Table A.6 provides a mapping of C11/C++11 atomic operations onto RISC-V memory instructions. If load and store opcodes with aq and rl modifiers are introduced, then the mappings in Table A.7 will suffice. Note, however, that the two mappings only interoperate correctly if atomic_<op>(memory_order_seq_cst) is mapped using an LR that has both aq and rl set. Any AMO can be emulated by an LR/SC pair, but care must be taken to ensure that any PPO orderings that originate from the LR are also made to originate from the SC, and that any PPO orderings that terminate at the SC are also made to terminate at the LR. For example, the LR must also be made to respect any data dependencies that the AMO has, given that load operations do not otherwise have any notion of a data dependency. Likewise, the effect of a FENCE R,R elsewhere in the same
hart must also be made to apply to the SC as well, even though the SC would not otherwise respect that fence. The emulator may achieve this effect by simply mapping AMOs onto lr.aq; <op>; sc.aqrl, matching the mapping used for fully-ordered atomics elsewhere.", "url": "RV32ISPEC.pdf#segment129", "timestamp": "2023-09-18 14:50:31", "segment": "segment129", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%20A.2.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%20A.3.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%20A.4.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%20A.5.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.18.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%20A.6.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/Table%20A.7.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "A.6 Implementation Guidelines ", "content": "The RVWMO and RVTSO memory models by no means preclude microarchitectures from employing sophisticated speculation techniques or other forms of optimization in order to deliver higher performance. The models also do not impose any requirement to use any one particular cache hierarchy, nor even to use a cache coherence protocol at all. Instead, these models only specify the behaviors that can be exposed to software. Microarchitectures are free to use any pipeline design, any coherent or non-coherent cache hierarchy, any on-chip interconnect, etc., as long as the design admits only executions that satisfy the memory model rules. That said, to help people understand the actual implementations of the memory model, in this section we provide some guidelines on how architects and programmers should interpret the models' rules. Both RVWMO and RVTSO are multi-copy atomic (or 'other-multi-copy-atomic'): any store value that is visible to a hart other than the one that originally issued it must also be conceptually visible to all other harts in the system. In other words, harts may forward from their own previous stores before those stores have become globally visible to all harts, but no early inter-hart forwarding is permitted. Multi-copy atomicity may be enforced in a number of ways. It might hold inherently due to the physical
design of the caches and store buffers, it may be enforced via a single-writer/multiple-reader cache coherence protocol, or it might hold due to some other mechanism. Although multi-copy atomicity does impose some restrictions on the microarchitecture, it is one of the key properties keeping the memory model from becoming extremely complicated. For example, a hart may not legally forward a value from a neighbor hart's private store buffer, unless of course it is done in such a way that no new illegal behaviors become architecturally visible. Nor may a cache coherence protocol forward a value from one hart to another until the coherence protocol has invalidated all older copies from other caches. Of course, microarchitectures may (and high-performance implementations likely will) violate these rules under the covers through speculation or other optimizations, as long as any non-compliant behaviors are not exposed to the programmer. As a rough guideline for interpreting the PPO rules in RVWMO, we expect the following from the software perspective: programmers will use PPO rules 1 and 4-8 regularly and actively; expert programmers will use PPO rules 9-11 to speed up critical paths of important data structures; even expert programmers will rarely if ever use PPO rules 2-3 and 12-13 directly. These are included to facilitate common microarchitectural optimizations (rule 2) and the operational formal modeling approach (rules 3 and 12-13) described in Section B.3. They also facilitate the process of porting code from other architectures that have similar rules. We also expect the following from the hardware perspective: PPO rules 1 and 3-6 reflect well-understood rules that should pose few surprises to architects; PPO rule 2 reflects a natural and common hardware optimization, but one that is subtle and hence worth double checking carefully; PPO rule 7 may not be immediately obvious to architects, but it is a standard memory model requirement; the load value axiom, the atomicity axiom, and PPO rules 8-13 reflect rules that most hardware implementations will enforce naturally, unless they contain extreme optimizations. Of course, implementations should make sure to double check these rules nevertheless. Hardware must also ensure that syntactic dependencies are not optimized away. Architectures are free to implement any of the memory model rules as conservatively as they choose. For example, a hardware implementation may choose to do any or all of the following: interpret all fences as if they were FENCE RW,RW (or FENCE IORW,IORW, if I/O is involved), regardless of the bits actually set; implement all fences with PW and SR as if they were FENCE RW,RW (or FENCE IORW,
IORW, if I/O is involved), since PW and SR are the most expensive of the four possible main memory ordering components anyway; emulate aq and rl as described in Section A.5; enforce all same-address load-load ordering, even in the presence of patterns such as 'fri-rfi' and 'RSW'; forbid any forwarding of a value from a store in the store buffer to a subsequent AMO or LR to the same address; forbid any forwarding of a value from an AMO or SC in the store buffer to a subsequent load to the same address; implement TSO on all memory accesses, and ignore any main memory fences that do not include PW and SR ordering (e.g., as Ztso implementations will do); implement all atomics to be RCsc or even fully-ordered, regardless of annotation. Architectures that implement RVTSO can safely do the following: ignore all fences that do not have both PW and SR (unless the fence also orders I/O); ignore all PPO rules except rules 4 through 7, since the rest are redundant with other PPO rules under RVTSO assumptions. Other general notes: silent stores (i.e., stores which write the same value that already exists at a memory location) behave like any other store from a memory model point of view. Likewise, AMOs which do not actually change the value in memory (e.g., an AMOMAX for which the value in rs2 is smaller than the value currently in memory) are still semantically considered store operations. Microarchitectures that attempt to implement silent stores must take care to ensure that the memory model is still obeyed, particularly in cases such as RSW (Section A.3.5) which tend to be incompatible with silent stores. Writes may be merged (i.e., two consecutive writes to the same address may be merged) or subsumed (i.e., the earlier of two back-to-back writes to the same address may be elided) as long as the resulting behavior does not otherwise violate the memory model semantics. The question of write subsumption can be understood from the following example: as written, if the load (d) reads the value 1, then (a) must precede (f) in the global memory order, because (a) precedes (c) in the global memory order by rule 2, (c) precedes (d) in the global memory order by the load value axiom, (d) precedes (e) in the global memory order by rule 7, and (e) precedes (f) in the global memory order by rule 1. In other words, the final value of the memory location whose address is in s0 must be 2 (the value written by store (f)), not 3 (the value written by the other store). A very aggressive microarchitecture might erroneously decide to discard (e), as (f) supersedes it, and this may in turn lead the microarchitecture to break the now-eliminated dependency between (d) and (f). This would violate the memory model rules, and hence it is forbidden. Write subsumption may nevertheless be legal in certain other cases, for example when there is no data dependency between (d) and (e).", "url": "RV32ISPEC.pdf#segment130", "timestamp":
"2023-09-18 14:50:32", "segment": "segment130", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.19.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "A.6.1 Possible Future Extensions ", "content": "expect following possible future extensions would compatible rvwmo memory model v vector isa extensions transactional memory subset isa extension j jit extension native encodings load store opcodes aq rl set fences limited certain addresses cache writebackflushinvalidateetc instructions", "url": "RV32ISPEC.pdf#segment131", "timestamp": "2023-09-18 14:50:32", "segment": "segment131", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20A.20-22.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "A.7.1 Mixed-size RSW ", "content": "known discrepancy operational axiomatic specifications within family mixedsize rsw variants shown figures a20a22 address may choose add something like following new ppo rule memory operation precedes memory operation b preserved program order hence also global memory order precedes b program order b access regular main memory rather io regions load b store load b byte x read store writes x precedes b ppo words herd syntax may choose add poloc rsw ppo w ppo many implementations already enforce ordering naturally even though rule official recommend implementers enforce nevertheless order ensure forwards compatibility possible future addition rule rvwmo", "url": "RV32ISPEC.pdf#segment132", "timestamp": "2023-09-18 14:50:32", "segment": "segment132", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Appendix B ", "content": "facilitate formal analysis rvwmo chapter presents set formalizations using different tools modeling approaches discrepancies unintended expectation models describe exactly sets legal behaviors appendix treated commentary normative material provided chapter 14 rest main body isa specification currently known 
discrepancies are listed in Section A.7; any other discrepancies are unintentional.", "url": "RV32ISPEC.pdf#segment133", "timestamp": "2023-09-18 14:50:32", "segment": "segment133", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "B.1 Formal Axiomatic Specification in Alloy ", "content": "We present a formal specification of the RVWMO memory model in Alloy (http://alloy.mit.edu). This model is available online at https://github.com/daniellustig/riscv-memory-model. The online material also contains litmus tests and examples of how Alloy can be used.", "url": "RV32ISPEC.pdf#segment134", "timestamp": "2023-09-18 14:50:32", "segment": "segment134", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20B.1.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20B.2.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20B.3.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20B.4.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20B.5.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "B.2 Formal Axiomatic Specification in Herd ", "content": "The tool herd takes a memory model and a litmus test as input and simulates the execution of the test on top of the memory model. Memory models are written in the domain-specific language cat. This section provides two cat memory models of RVWMO. The first model, in Figure B.7, follows the global memory order definition of RVWMO in Chapter 14 as much as is possible in a cat model. The second model, in Figure B.8, is an equivalent, more efficient, partial-order-based RVWMO model. The simulator herd is part of the diy tool suite; see http://diy.inria.fr for the software and documentation. The models are available online at http://diy.inria.fr/cats7/riscv.", "url": "RV32ISPEC.pdf#segment135", "timestamp": "2023-09-18 14:50:32", "segment": "segment135", "image_urls": [ "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20B.6.jpg?raw=true", "https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20B.7.jpg?raw=true",
"https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/fig%20B.8.jpg?raw=true" ], "Book": "riscv-spec-20191213" }, { "section": "B.3 An Operational Memory Model ", "content": "alternative presentation rvwmo memory model operational style aims admit exactly extensional behavior axiomatic presentation given program admitting execution axiomatic presentation allows axiomatic presentation defined predicate complete candidate executions contrast operational presentation abstract microarchitectural flavor expressed state machine states abstract representation hardware machine states explicit outoforder speculative execution abstracting implementationspecific microarchitectural details register renaming store buffers cache hierarchies cache proto cols etc provide useful intuition also construct executions incrementally making possible interactively randomly explore behavior larger examples axiomatic model requires complete candidate executions axioms checked operational presentation covers mixedsize execution potentially overlapping memory accesses different poweroftwo byte sizes misaligned accesses broken singlebyte accesses operational model together fragment riscv isa semantics rv64i integrated rmem exploration tool https githubcomremsprojectrmem rmem explore litmus tests see a2 small elf binaries exhaustively pseudorandomly interactively rmem isa semantics expressed explicitly sail see https githubcom remsprojectsail sail language https githubcomremsprojectsailriscv riscv isa model concurrency semantics expressed lem see https githubcomremsprojectlem lem language rmem commandline interface webinterface webinterface runs entirely client side provided online together library litmus tests http wwwclcam acukpes20rmem commandline interface faster webinterface specially exhaustive mode informal introduction model states transitions description formal model starts next subsection terminology contrast axiomatic presentation every memory operation either load 
or a store. Hence, AMOs give rise to two distinct memory operations, a load and a store. When used in conjunction with an instruction, the terms load and store refer to instructions that give rise to such memory operations; as such, both include AMO instructions. The term acquire refers to an instruction (or its memory operation) with an acquire-RCpc or acquire-RCsc annotation, and the term release refers to an instruction (or its memory operation) with a release-RCpc or release-RCsc annotation.

Model states: a model state consists of a shared memory and a tuple of hart states. The shared memory state records all the memory store operations that have propagated so far, in the order in which they propagated (this can be made more efficient, but for simplicity of the presentation we keep it this way). Each hart state consists principally of a tree of instruction instances, some of which are finished and some of which are not. Non-finished instruction instances are subject to restart, e.g. if they depend on an out-of-order or speculative load that turns out to be unsound. Conditional branch and indirect jump instructions may have multiple successors in the instruction tree; when such an instruction is finished, any untaken alternative paths are discarded. Each instruction instance in the instruction tree has a state that includes an execution state of the intra-instruction semantics (the ISA pseudocode for this instruction); the model uses a formalization of the intra-instruction semantics in Sail. One can think of the execution state of an instruction as a representation of the pseudocode control state, the pseudocode call stack, and the local variable values. An instruction instance state also includes information about the instance's memory and register footprints, its register reads and writes, its memory operations, whether it is finished, etc.

Model transitions: the model defines, for any model state, the set of allowed transitions, each of which is a single atomic step to a new abstract machine state. Execution of a single instruction will typically involve many transitions, and they may be interleaved in operational-model execution with transitions arising from other instructions. Each transition arises from a single instruction instance; it will change the state of that instance, and it may depend on or change the rest of its hart state and the shared memory state, but it does not depend on other hart states, and it will not change them. The transitions are introduced below and defined in Section B.3.5, each with a precondition and a construction of the post-transition model state.

Transitions for all instructions: Fetch instruction: this transition represents a fetch and decode of a new instruction instance, as a program-order successor of a previously fetched instruction instance (or at the initial fetch
address). The model assumes the instruction memory is fixed; it does not describe the behavior of self-modifying code. In particular, the Fetch instruction transition does not generate memory load operations, and the shared memory is not involved in the transition; instead, the model depends on an external oracle that provides the opcode when given a memory location. Register write: a write of a register value. Register read: a read of a register value, from the most recent program-order-predecessor instruction instance that writes to that register. Pseudocode internal step: this covers pseudocode internal computation (arithmetic, function calls, etc.). Finish instruction: at this point the instruction pseudocode is done, the instruction cannot be restarted, memory accesses cannot be discarded, and all memory effects have taken place. For conditional branch and indirect jump instructions, any program-order successors that were fetched from an address other than the one written to the pc register are discarded, together with the subtree of instruction instances below them.

Transitions specific to load instructions: Initiate memory load operations: at this point the memory footprint of the load instruction is provisionally known (it could still change if earlier instructions are restarted), and its individual memory load operations can start being satisfied. Satisfy memory load operation by forwarding from unpropagated stores: this partially or entirely satisfies a single memory load operation by forwarding from program-order-previous memory store operations. Satisfy memory load operation from memory: this entirely satisfies the outstanding slices of a single memory load operation from memory. Complete load operations: at this point all the memory load operations of the instruction have been entirely satisfied, and the instruction pseudocode can continue executing. A load instruction can be subject to restart until the Finish instruction transition; in some cases the model might treat a load instruction as non-restartable even before it is finished (e.g. see Propagate store operation).

Transitions specific to store instructions: Initiate memory store operation footprints: at this point the memory footprint of the store is provisionally known. Instantiate memory store operation values: at this point the memory store operations have their values, and program-order-successor memory load operations can be satisfied by forwarding from them. Commit store instruction: at this point the store operations are guaranteed to happen (the instruction can no longer be restarted or discarded), and they can start being propagated
to memory. Propagate store operation: this propagates a single memory store operation to memory. Complete store operations: at this point all the memory store operations of the instruction have been propagated to memory, and the instruction pseudocode can continue executing.

Transitions specific to sc instructions: Early sc fail: this causes the sc to fail, either as a spontaneous fail or because it is not paired with a program-order-previous lr. Paired sc: this transition indicates that the sc is paired with an lr and might succeed. Commit and propagate store operation of an sc: an atomic execution of the transitions Commit store instruction and Propagate store operation; it is enabled only if the stores from which the lr read have not been overwritten. Late sc fail: this causes the sc to fail, either as a spontaneous fail or because the stores from which the lr read have been overwritten.

Transitions specific to AMO instructions: Satisfy, commit and propagate operations of an AMO: an atomic execution of all the transitions needed to satisfy the load operation, do the required arithmetic, and propagate the store operation.

Transitions specific to fence instructions: Commit fence.

Most of these transitions can always be taken eagerly, as soon as their precondition is satisfied, without excluding other behavior; and although Fetch instruction is not so marked, it can also be taken eagerly, as long as it is not taken infinitely many times. An instance of a non-AMO load instruction, after being fetched, will typically experience the following transitions, in this order: 1. Register read; 2. Initiate memory load operations; 3. Satisfy memory load operation by forwarding from unpropagated stores and/or Satisfy memory load operation from memory (as many as needed to satisfy all the load operations of the instance); 4. Complete load operations; 5. Register write; 6. Finish instruction. Before, between, and after the transitions above, any number of Pseudocode internal step transitions may appear. In addition, a Fetch instruction transition, for fetching the instruction in the next program location, will be available until it is taken. This concludes the informal description of the operational model; the following sections describe the formal operational model.", "url": "RV32ISPEC.pdf#segment136", "timestamp": "2023-09-18 14:50:32", "segment": "segment136", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "B.3.1 Intra-instruction Pseudocode Execution ", "content": "The intra-instruction semantics of each instruction instance is expressed as a state machine, essentially running the instruction pseudocode. Given a
pseudocode execution state, it computes the next state. Most states identify a pending memory or register operation, requested by the pseudocode, which the memory model has to do. The states are (this is a tagged union; tags in small-caps): Load_mem(kind, address, size, load_continuation) (a memory load operation); Early_sc_fail(res_continuation) (allow the sc to fail early); Store_ea(kind, address, size, next_state) (a memory store effective address); Store_memv(mem_value, store_continuation) (a memory store value); Fence(kind, next_state) (a fence); Read_reg(reg_name, read_continuation) (a register read); Write_reg(reg_name, reg_value, next_state) (a register write); Internal(next_state) (a pseudocode internal step); Done (the end of the pseudocode). Here mem_value and reg_value are lists of bytes; address is an integer of XLEN bits; for load/store, kind identifies whether it is lr/sc, acquire-RCpc/release-RCpc, acquire-RCsc/release-RCsc, or acquire-release-RCsc; for fences, kind identifies whether it is a normal or tso fence, and (for normal fences) the predecessor- and successor-ordering bits; reg_name identifies a register and a slice thereof (start and end bit indices); and the continuations describe how the instruction instance will continue for each value that might be provided by the surrounding memory model (the load_continuation and read_continuation take the value loaded from memory or read from the previous register write, the store_continuation takes false for an sc that failed and true in all other cases, and the res_continuation takes false if the sc fails and true otherwise).

For example, given the load instruction lw x1,0(x2), the execution will typically go as follows. The initial execution state, computed from the pseudocode of the given opcode, will be Read_reg(x2, read_continuation). Feeding the most recently written value of register x2 (say 0x4000) to the read_continuation returns Load_mem(plain_load, 0x4000, 4, load_continuation). Feeding the 4-byte value loaded from memory location 0x4000 (say 0x42) to the load_continuation returns Write_reg(x1, 0x42, Done). Many Internal(next_state) states may appear before and between these states.

Notice that writing to memory is split into two steps, Store_ea and Store_memv: the first makes the memory footprint of the store provisionally known, and the second adds the value to be stored. We ensure that the pseudocode pairs them (a Store_ea followed by a Store_memv), though there may be steps between them. It is observable that Store_ea can occur before the value to be stored is
determined. For example, for the litmus test LB+fence.r.rw+data-po to be allowed by the operational model (as it is by RVWMO), the first store in hart 1 has to take the Store_ea step before its value is determined, so that the second store can see that they have non-overlapping memory footprints, allowing the second store to be committed out of order without violating coherence. The pseudocode of each instruction performs at most one store or one load, except for AMOs, which perform exactly one load and one store; those memory accesses are split apart into the architecturally atomic units by the hart semantics (see Initiate memory load operations and Initiate memory store operation footprints below).

Informally, each bit of a register read should be satisfied from a register write by the most recent (in program order) instruction instance that can write that bit (or from the hart's initial register state if there is no such write). Hence, it is essential to know the register write footprint of each instruction instance, which we calculate when the instruction instance is created (see the Fetch instruction action below). We ensure in the pseudocode that each instruction has at most one register write to each register bit, and also that it does not try to read a register value that it just wrote. Data-flow dependencies (address and data) in the model emerge from the fact that each register read has to wait for the appropriate register write to be executed, as described above.", "url": "RV32ISPEC.pdf#segment137", "timestamp": "2023-09-18 14:50:33", "segment": "segment137", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "B.3.2 Instruction Instance State ", "content": "Each instruction instance has a state comprising: program_loc, the memory address from which the instruction was fetched; instruction_kind, identifying whether this is a load, store, AMO, fence, branch/jump, or simple instruction (this also includes a kind similar to the one described for the pseudocode execution states); src_regs, the set of source reg_names (including system registers), as statically determined from the pseudocode of the instruction; dst_regs, the destination reg_names (including system registers), as statically determined from the pseudocode of the instruction; pseudocode_state (or sometimes just state, for short), one of (this is a tagged union; tags in small-caps): Plain(isa_state) (ready to make a pseudocode transition), Pending_mem_loads(load_continuation) (requesting memory load operations), or Pending_mem_stores(store_continuation) (requesting memory store operations); reg_reads, the register reads the instance has performed, including, for each one, the register write slices it read from; reg
_writes, the register writes the instance has performed; mem_loads, a set of memory load operations, each with, for its as-yet-unsatisfied slices, the byte indices not yet satisfied, and, for its satisfied slices, the store slices (each consisting of a memory store operation and a subset of its byte indices) that satisfied them; mem_stores, a set of memory store operations, each with a flag that indicates whether it has been propagated (passed to the shared memory) or not; and information recording whether the instance is committed, finished, etc. Each memory load operation includes a memory footprint (address and size); each memory store operation includes a memory footprint and, when available, a value. A load instruction instance with a non-empty mem_loads for which all the load operations are satisfied (i.e. with no unsatisfied load slices) is said to be entirely satisfied.

Informally, an instruction instance is said to have fully determined data if the load (and sc) instructions feeding its source registers are finished. Similarly, it is said to have a fully determined memory footprint if the load (and sc) instructions feeding its memory-operation address registers are finished. Formally, we first define the notion of a fully determined register write: a register write w from reg_writes of an instruction instance is said to be fully determined if one of the following conditions holds: 1. the instance is finished; or 2. the value written by w is not affected by a memory operation the instance made (i.e. a value loaded from memory or the result of an sc), and every register read the instance made that affects w is a fully determined register read (i.e. the register writes it reads from are fully determined, or it reads from the initial register state). An instruction instance is said to have fully determined data if, for every register read r from reg_reads, the register writes that r reads from are fully determined. An instruction instance is said to have a fully determined memory footprint if, for every register read r from reg_reads that feeds a memory-operation address, the register writes that r reads from are fully determined. The rmem tool records, for every register write, the set of register writes from other instructions that have been read by the instruction at the point of performing the write. By carefully arranging the pseudocode of the instructions covered by the tool, we were able to make this set be exactly the set of register writes on which the write depends.", "url": "RV32ISPEC.pdf#segment138", "timestamp": "2023-09-18 14:50:33", "segment": "segment138", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "B.3.3 Hart State ", "content": "The model state of a single hart comprises: hart_id, a unique identifier of the hart;
initial_register_state, the initial register value for each register; initial_fetch_address, the initial instruction fetch address; and instruction_tree, a tree of the instruction instances that have been fetched (and not discarded), in program order.", "url": "RV32ISPEC.pdf#segment139", "timestamp": "2023-09-18 14:50:33", "segment": "segment139", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "B.3.4 Shared Memory State ", "content": "The model state of the shared memory comprises a list of memory store operations, in the order in which they propagated to the shared memory. When a store operation is propagated to the shared memory, it is simply added to the end of the list. When a load operation is satisfied from memory, for each byte of the load operation the most recent corresponding store slice is returned. For most purposes it is simpler to think of the shared memory as an array, i.e. a map from memory locations to memory store operation slices, where each memory location is mapped to a one-byte slice of the most recent memory store operation to that location. However, this abstraction is not detailed enough to properly handle the sc instruction. The RVWMO atomicity axiom allows store operations from the same hart as the sc to intervene between the store operation of the sc and the store operations from which the paired lr read. To allow such store operations to intervene, and to forbid others, the array abstraction must be extended to record more information. Here we use a list, as it is very simple; more efficient and scalable implementations would probably use something better.", "url": "RV32ISPEC.pdf#segment140", "timestamp": "2023-09-18 14:50:33", "segment": "segment140", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "B.3.5 Transitions ", "content": "Each of the following paragraphs describes a single kind of system transition. The description starts with a condition over the current system state; the transition can be taken in the current state only if the condition is satisfied. The condition is followed by an action that is applied to that state when the transition is taken, in order to generate the new system state.

Fetch instruction: a possible program-order successor of instruction instance i can be fetched from address loc if: 1. it has not already been fetched, i.e. none of the immediate successors of i in the hart's instruction_tree are from loc; and 2. if i's pseudocode has already written an address to pc, then loc must be that address; otherwise loc is: for a conditional branch, the successor address or the branch target address; for a direct jump-and-link instruction (jal), the target address; for an indirect jump instruction (jalr), any address; and for any other
instruction, i.program_loc+4. Action: construct a freshly initialized instruction instance for the instruction in the program memory at loc, with state Plain(isa_state) computed from the instruction pseudocode, including the static information available from the pseudocode (instruction_kind, src_regs, and dst_regs), and add it to the hart's instruction_tree as a successor of i. The possible next fetch addresses (loc) are available immediately after fetching i, and the model does not need to wait for the pseudocode to write to pc; this allows out-of-order execution, and speculation past conditional branches and jumps. For most instructions, these addresses are easily obtained from the instruction pseudocode; the only exception is the indirect jump instruction (jalr), where the address depends on the value held in a register. In principle, the mathematical model should allow speculation to arbitrary addresses here. The exhaustive search in the rmem tool handles this by running the exhaustive search multiple times with a growing set of possible next fetch addresses for each indirect jump. The initial search uses empty sets; hence there is no fetch after an indirect jump instruction until the pseudocode of such an instruction writes to pc, and then we use that value for fetching the next instruction. Before starting the next iteration of the exhaustive search, we collect, for each indirect jump (grouped by code location), the set of values it wrote to pc in all executions of the previous search iteration, and use that as the set of possible next fetch addresses of the instruction. The process terminates when no new fetch addresses are detected.

Initiate memory load operations: an instruction instance in state Plain(Load_mem(kind, address, size, load_continuation)) can always initiate the corresponding memory load operations. Action: 1. construct the appropriate memory load operations mlos: if address is aligned to size, then mlos is a single memory load operation of size bytes from address; otherwise, mlos is a set of size memory load operations, each of one byte, from the addresses address through address+size-1; 2. set mem_loads to mlos; and 3. update the state to Pending_mem_loads(load_continuation). In Section 14.1 it is said that misaligned memory accesses may be decomposed at any granularity; here we decompose them to one-byte accesses, as this granularity subsumes all others.

Satisfy memory load operation by forwarding from unpropagated stores: for a non-AMO load instruction instance in state Pending_mem_loads(load_continuation), and a memory load operation mlo in its mem_loads that has unsatisfied slices, the memory load operation can be partially
or entirely satisfied by forwarding from unpropagated memory store operations of store instruction instances that are program-order-before it, if: 1. all program-order-previous fence instructions with sr and pw set are finished; 2. for every program-order-previous fence instruction f with sr and pr set, but pw not set, if f is not finished, then all load instructions that are program-order-before f are entirely satisfied; 3. for every program-order-previous fence.tso instruction f that is not finished, all load instructions that are program-order-before f are entirely satisfied; 4. if the load is a load-acquire-RCsc, all program-order-previous store-releases-RCsc are finished; 5. if the load is a load-acquire-release, all program-order-previous instructions are finished; 6. all non-finished program-order-previous load-acquire instructions are entirely satisfied; and 7. all program-order-previous store-acquire-release instructions are finished.

Let msoss be the set of unpropagated memory store operation slices, from non-sc store instruction instances that are program-order-before the load and that have already calculated the value to be stored, that overlap with the unsatisfied slices of mlo and are not superseded by intervening store operations or store operations that were read from by an intervening load. The last condition requires, for each memory store operation slice msos in msoss from instruction i: that there is no store instruction program-order-between i and the load with a memory store operation overlapping msos; and that there is no load instruction program-order-between i and the load that was satisfied from an overlapping memory store operation slice from a different hart. Action: 1. update the load's mem_loads to indicate that mlo was satisfied by msoss; and 2. restart any speculative instructions that have violated coherence as a result, i.e., for every non-finished program-order-successor instruction i and every memory load operation of i that was satisfied from msoss', if there exists a memory store operation slice msos' in msoss' that overlaps with a memory store operation slice in msoss and is not from the same memory store operation, and msos' is not from a program-order-successor of the load, restart i and its restart-dependents. Where the restart-dependents of instruction j are: the program-order-successors of j that have a data-flow dependency on a register write of j; the program-order-successors of j that have a memory load operation that reads from a memory store operation of j (by forwarding); if j is a load-acquire, all the program-order-successors of j; if j is a load, for every fence f with sr and pr set, but pw not set, that is a program-order-successor of j, all load instructions that are program-order-successors of f; if j is a load, for every fence.tso f that is a
program-order-successor of j, all load instructions that are program-order-successors of f; and, recursively, all the restart-dependents of all the instruction instances above. Forwarding memory store operations to a memory load might satisfy only some slices of the load, leaving other slices unsatisfied. A program-order-previous store operation that was not available when taking the transition above might make msoss provisionally unsound, violating coherence, when it becomes available. That store will prevent the load from being finished (see Finish instruction) and will cause it to restart when the store operation is propagated (see Propagate store operation). A consequence of the transition condition above is that store-release-RCsc memory store operations cannot be forwarded to load-acquire-RCsc instructions: msoss includes only memory store operations that have not yet propagated, and the condition requires all program-order-previous store-releases-RCsc to be finished (and hence propagated) when the load is a load-acquire-RCsc.

Satisfy memory load operation from memory: for an instruction instance of a non-AMO load instruction, or of an AMO instruction in the context of the Satisfy, commit and propagate operations of an AMO transition, any memory load operation mlo in its mem_loads that has unsatisfied slices can be satisfied from memory if all the conditions of Satisfy memory load operation by forwarding from unpropagated stores are satisfied. Action: let msoss be the memory store operation slices from memory covering the unsatisfied slices of mlo, and apply the action of Satisfy memory load operation by forwarding from unpropagated stores. Note that Satisfy memory load operation by forwarding from unpropagated stores might leave some slices of the memory load operation unsatisfied; those will have to be satisfied by taking the transition again, or by taking Satisfy memory load operation from memory. Satisfy memory load operation from memory, on the other hand, will always satisfy all the unsatisfied slices of the memory load operation.

Complete load operations: a load instruction instance in state Pending_mem_loads(load_continuation) can be completed (not to be confused with finished) if all the memory load operations in its mem_loads are entirely satisfied (i.e. there are no unsatisfied slices). Action: update the state to Plain(load_continuation(mem_value)), where mem_value is assembled from all the memory store operation slices that satisfied its mem_loads.

Early sc fail: an sc instruction instance in state Plain(Early_sc_fail(res_continuation)) can always be made to fail. Action: update the state to Plain(res_
continuation(false)). Paired sc: an sc instruction instance in state Plain(Early_sc_fail(res_continuation)) can continue its potentially successful execution if it is paired with an lr. Action: update the state to Plain(res_continuation(true)).

Initiate memory store operation footprints: an instruction instance in state Plain(Store_ea(kind, address, size, next_state)) can always announce its pending memory store operation footprint. Action: 1. construct the appropriate memory store operations msos, without the store value: if address is aligned to size, then msos is a single memory store operation of size bytes to address; otherwise, msos is a set of size memory store operations, each of one-byte size, to the addresses address through address+size-1; 2. set mem_stores to msos; and 3. update the state to Plain(next_state). Note that after taking this transition, the memory store operations do not yet have their values. The importance of splitting this transition from the one below is that it allows program-order-successor store instructions to observe the memory footprint of this instruction and, if they do not overlap, propagate out of order as early as possible (i.e. as soon as the data register value becomes available).

Instantiate memory store operation values: an instruction instance in state Plain(Store_memv(mem_value, store_continuation)) can always instantiate the values of the memory store operations in its mem_stores. Action: 1. split mem_value between the memory store operations in mem_stores; and 2. update the state to Pending_mem_stores(store_continuation).

Commit store instruction: an uncommitted instruction instance of a non-sc store instruction, or of an sc instruction in the context of the Commit and propagate store operation of an sc transition, in state Pending_mem_stores(store_continuation), can be committed (not to be confused with propagated) if: 1. it has fully determined data; 2. all program-order-previous conditional branch and indirect jump instructions are finished; 3. all program-order-previous fence instructions with sw set are finished; 4. all program-order-previous fence.tso instructions are finished; 5. all program-order-previous load-acquire instructions are finished; 6. all program-order-previous store-acquire-release instructions are finished; 7. if it is a store-release, all program-order-previous instructions are finished; 8. all program-order-previous memory access instructions have a fully determined memory footprint; 9. all program-order-previous store instructions, except for sc instructions that failed, have
initiated, and so have non-empty mem_stores; and 10. all program-order-previous load instructions have initiated, and so have non-empty mem_loads. Action: record that the instruction is committed. Notice that if condition 8 is satisfied, conditions 9 and 10 are also satisfied, or will be after taking some eager transitions; hence, requiring them does not strengthen the model. By requiring them, we guarantee that previous memory access instructions have taken enough transitions to make their memory operations visible for the condition check of Propagate store operation, which is the next transition the instruction will take, making that condition simpler.

Propagate store operation: for a committed instruction instance in state Pending_mem_stores(store_continuation), an unpropagated memory store operation mso in its mem_stores can be propagated if: 1. all memory store operations of program-order-previous store instructions that overlap with mso have already propagated; 2. all memory load operations of program-order-previous load instructions that overlap with mso have already been satisfied, and those load instructions are non-restartable (see below); and 3. all memory load operations that were satisfied by forwarding mso are entirely satisfied. Here, a non-finished instruction instance j is non-restartable if: 1. there does not exist a store instruction with an unpropagated memory store operation such that applying the action of the Propagate store operation transition to it would result in the restart of j; and 2. there does not exist a non-finished load instruction with a memory load operation mlo such that applying the action of the Satisfy memory load operation by forwarding from unpropagated stores or Satisfy memory load operation from memory transition (even if mlo is already satisfied) would result in the restart of j. Action: 1. update the shared memory state with mso; 2. update mem_stores to indicate that mso was propagated; and 3. restart any speculative instructions that have violated coherence as a result, i.e., for every non-finished instruction i program-order-after the store, and every memory load operation of i that was satisfied from msoss, if there exists a memory store operation slice msos in msoss that overlaps with mso and is not from mso, and msos is not from a program-order-successor of the store, restart i and its restart-dependents (see Satisfy memory load operation by forwarding from unpropagated stores).

Commit and propagate store operation of an sc: an uncommitted sc instruction instance from hart h, in state Pending_mem_stores(store_continuation), whose paired lr was satisfied from the store slices msoss, can be committed and propagated at the same time if: 1. the lr instruction paired with
it is finished; 2. every memory store operation that was forwarded to the paired lr has been propagated; 3. all the conditions of Commit store instruction are satisfied; 4. all the conditions of Propagate store operation are satisfied (notice that an sc instruction has only one memory store operation); and 5. for every store slice msos from msoss, msos has not been overwritten, in the shared memory, by a store from a hart other than h, at any point since msos was propagated to memory. Action: 1. apply the actions of Commit store instruction; and 2. apply the action of Propagate store operation.

Late sc fail: an sc instruction instance in state Pending_mem_stores(store_continuation) that has not propagated its memory store operation can always be made to fail. Action: 1. clear its mem_stores; and 2. update the state to Plain(store_continuation(false)). For efficiency, the rmem tool allows this transition only when it is not possible to take the Commit and propagate store operation of an sc transition; this does not affect the set of allowed final states, but when exploring interactively, to make an sc fail one has to use the Early sc fail transition instead of waiting for this transition.

Complete store operations: a store instruction instance in state Pending_mem_stores(store_continuation), for which all the memory store operations in its mem_stores have been propagated, can always be completed (not to be confused with finished). Action: update the state to Plain(store_continuation(true)).

Satisfy, commit and propagate operations of an AMO: an AMO instruction instance in state Pending_mem_loads(load_continuation) can perform its memory access if it is possible to perform the following sequence of transitions with no intervening transitions: 1. Satisfy memory load operation from memory; 2. Complete load operations; 3. Pseudocode internal step (zero or more times); 4. Instantiate memory store operation values; 5. Commit store instruction; 6. Propagate store operation; 7. Complete store operations; and, in addition, the condition of Finish instruction (with the exception of the requirement that the state be Plain(Done)) holds after those transitions. Action: perform the sequence of transitions above (this does not include Finish instruction), one after the other, with no intervening transitions. Notice that program-order-previous stores cannot be forwarded to the load of an AMO: the sequence of transitions above does not include the forwarding transition, and even if it did, the sequence would fail when trying to take the Propagate store operation transition, as that transition requires all program-order-previous store operations to overlapping memory footprints to be propagated, while forwarding requires the store
operation to be unpropagated. In addition, the store of an AMO cannot be forwarded to a program-order-successor load: before taking the transitions above, the store operation of the AMO does not have its value and therefore cannot be forwarded; after taking the transitions, the store operation is propagated and therefore cannot be forwarded.

Commit fence: a fence instruction instance in state Plain(Fence(kind, next_state)) can be committed if: 1. it is a normal fence with pr set and all program-order-previous load instructions are finished; 2. it is a normal fence with pw set and all program-order-previous store instructions are finished; and 3. it is a fence.tso and all program-order-previous load and store instructions are finished. Action: 1. record that the instruction is committed; and 2. update the state to Plain(next_state).

Register read: an instruction instance in state Plain(Read_reg(reg_name, read_cont)) can do a register read of reg_name if every instruction instance it needs to read from has already performed the expected reg_name register write. Let read_sources include, for each bit of reg_name, the write to that bit by the most recent (in program order) instruction instance that can write to that bit, if any; if there is no such instruction, the source is the initial register value from initial_register_state. Let reg_value be the value assembled from the read_sources. Action: 1. add reg_name to reg_reads, with read_sources and reg_value; and 2. update the state to Plain(read_cont(reg_value)).

Register write: an instruction instance in state Plain(Write_reg(reg_name, reg_value, next_state)) can always do a reg_name register write. Action: 1. add reg_name to reg_writes, with deps and reg_value; and 2. update the state to Plain(next_state). Here deps is a pair of the set of all read_sources from reg_reads and a flag that is true iff the instance is a load instruction that has already been entirely satisfied.

Pseudocode internal step: an instruction instance in state Plain(Internal(next_state)) can always do that pseudocode internal step. Action: update the state to Plain(next_state).

Finish instruction: a non-finished instruction instance in state Plain(Done) can be finished if: 1. if it is a load instruction: (a) all program-order-previous load-acquire instructions are finished; (b) all program-order-previous fence instructions with sr set are finished; (c) for every program-order-previous fence.tso instruction f, all load instructions that are program-order-before f are finished; and (d) it is guaranteed that the values read by its memory load operations will not cause coherence violations, i.e., for any program-order-previous instruction instance i, let cfp be the combined footprint of the propagated memory store
operations of store instructions program-order-between i and this instruction, and of fixed memory store operations that were forwarded to load instructions program-order-between i and this instruction, including this instruction itself; and let cfp' be the complement of cfp in the memory footprint of i. If cfp' is not empty, then: (i) i has a fully determined memory footprint; (ii) i has no unpropagated memory store operations that overlap with cfp'; and (iii) if i is a load with a memory footprint that overlaps with cfp', all the memory load operations of i that overlap with cfp' are satisfied and i is non-restartable (see the Propagate store operation transition for how it is determined whether an instruction is non-restartable). Here a memory store operation is called fixed if its store instruction has fully determined data. 2. It has fully determined data; and 3. if it is a fence, all program-order-previous conditional branch and indirect jump instructions are finished. Action: 1. if it is a conditional branch or indirect jump instruction, discard any untaken paths of execution, i.e. remove all instruction instances that are not reachable by the taken branch/jump from the instruction_tree; and 2. record the instruction as finished, i.e. set finished to true.", "url": "RV32ISPEC.pdf#segment141", "timestamp": "2023-09-18 14:50:34", "segment": "segment141", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "B.3.6 Limitations ", "content": "The model covers user-level RV64I and RV64A. In particular, it does not support the misaligned atomics extension Zam or the total store ordering extension Ztso. It should be trivial to adapt the model to RV32I/A and to the G, Q, and C extensions, but we have never tried this; it would involve mostly writing Sail code for the instructions, with minimal or no changes to the concurrency model. The model covers only normal memory accesses; it does not handle I/O accesses. The model does not cover TLB-related effects. The model assumes the instruction memory is fixed; in particular, the Fetch instruction transition does not generate memory load operations, and the shared memory is not involved in the transition. Instead, the model depends on an external oracle that provides an opcode when given a memory location. The model does not cover exceptions, traps, and interrupts.", "url": "RV32ISPEC.pdf#segment142", "timestamp": "2023-09-18 14:50:35", "segment": "segment142", "image_urls": [], "Book": "riscv-spec-20191213" }, { "section": "Chapter 1 ", "content": "Operating system interfaces. The job of an operating system is to share a computer among multiple programs and to provide a more useful set of services than the hardware alone
supports. The operating system manages and abstracts the low-level hardware, so that, for example, a word processor need not concern itself with which type of disk hardware is being used. It also shares the hardware among multiple programs so that they run (or appear to run) at the same time. Finally, operating systems provide controlled ways for programs to interact, so that they can share data or work together.

An operating system provides services to user programs through an interface. Designing a good interface turns out to be difficult. On the one hand, we would like the interface to be simple and narrow, because that makes it easier to get the implementation right. On the other hand, we may be tempted to offer many sophisticated features to applications. The trick in resolving this tension is to design interfaces that rely on a few mechanisms that can be combined to provide much generality.

This book uses a single operating system as a concrete example to illustrate operating system concepts. That operating system, xv6, provides the basic interfaces introduced by Ken Thompson and Dennis Ritchie's Unix operating system [14], as well as mimicking Unix's internal design. Unix provides a narrow interface whose mechanisms combine well, offering a surprising degree of generality. This interface has been so successful that modern operating systems (BSD, Linux, Mac OS X, Solaris, and even, to a lesser extent, Microsoft Windows) have Unix-like interfaces. Understanding xv6 is a good start toward understanding any of these systems and many others.

As Figure 1.1 shows, xv6 takes the traditional form of a kernel, a special program that provides services to running programs. Each running program, called a process, has memory containing instructions, data, and a stack. The instructions implement the program's computation. The data are the variables on which the computation acts. The stack organizes the program's procedure calls. A given computer typically has many processes but only a single kernel.

When a process needs to invoke a kernel service, it invokes a system call, one of the calls in the operating system's interface. The system call enters the kernel; the kernel performs the service and returns. Thus a process alternates between executing in user space and kernel space.

The kernel uses the hardware protection mechanisms provided by the CPU¹ to ensure that each process executing in user

¹This text generally refers to the hardware element that executes a computation with the term CPU, an acronym for central processing unit. Other documentation (e.g., the RISC-V specification) also uses the words processor, core, and hart instead of CPU.
space can access only its own memory. The kernel executes with the hardware privileges required to implement these protections; user programs execute without those privileges. When a user program invokes a system call, the hardware raises the privilege level and starts executing a pre-arranged function in the kernel.

The collection of system calls that a kernel provides is the interface that user programs see. The xv6 kernel provides a subset of the services and system calls that Unix kernels traditionally offer. Figure 1.2 lists all of xv6's system calls.

The rest of this chapter outlines xv6's services (processes, memory, file descriptors, pipes, and a file system) and illustrates them with code snippets and discussions of how the shell, Unix's command-line user interface, uses them. The shell's use of system calls illustrates how carefully they have been designed.

The shell is an ordinary program that reads commands from the user and executes them. The fact that the shell is a user program, and not part of the kernel, illustrates the power of the system call interface: there is nothing special about the shell. It also means that the shell is easy to replace; as a result, modern Unix systems have a variety of shells to choose from, each with its own user interface and scripting features. The xv6 shell is a simple implementation of the essence of the Unix Bourne shell. Its implementation can be found at (user/sh.c:1).
  } else if(pid == 0){
    printf("child: exiting\n");
    exit(0);
  } else {
    printf("fork error\n");
  }

The exit system call causes the calling process to stop executing and to release resources such as memory and open files. Exit takes an integer status argument, conventionally 0 to indicate success and 1 to indicate failure. The wait system call returns the PID of an exited (or killed) child of the current process and copies the exit status of the child to the address passed to wait; if none of the caller's children has exited, wait waits for one to do so. If the caller has no children, wait immediately returns -1. If the parent doesn't care about the exit status of a child, it can pass a 0 address to wait.

In the example, the output lines

  parent: child=1234
  child: exiting

might come out in either order, depending on whether the parent or child gets to its printf call first. After the child exits, the parent's wait returns, causing the parent to print

  parent: child 1234 is done

Although the child has the same memory contents as the parent initially, the parent and child are executing with different memory and different registers: changing a variable in one does not affect the other. For example, when the return value of wait is stored into pid in the parent process, it doesn't change the variable pid in the child. The value of pid in the child will still be zero.

The exec system call replaces the calling process's memory with a new memory image loaded from a file stored in the file system. The file must have a particular format, which specifies which part of the file holds instructions, which part is data, at which instruction to start, etc.; xv6 uses the ELF format, which Chapter 3 discusses in more detail. When exec succeeds, it does not return to the calling program; instead, the instructions loaded from the file start executing at the entry point declared in the ELF header. Exec takes two arguments: the name of the file containing the executable and an array of string arguments. For example:

  char *argv[3];

  argv[0] = "echo";
  argv[1] = "hello";
  argv[2] = 0;
  exec("/bin/echo", argv);
  printf("exec error\n");

This fragment replaces the calling program with an instance of the program /bin/echo running with the argument list echo hello. Most programs ignore the first element of the argument array, which is conventionally the name of the program.

The xv6 shell uses the above calls to run programs on behalf of users. The main structure of the shell is simple; see main (user/sh.c:145). The main loop reads a line of input from the user with getcmd. Then it calls fork, which creates a copy of the shell process. The parent calls wait, while the child runs the command. For example, if the user had typed "echo hello" to the shell, runcmd would have been called with "echo hello" as the argument. runcmd (user/sh.c:58) runs the actual command. For "echo hello", it would call exec (user/sh.c:78). If exec succeeds then the child will
execute instructions from echo instead of runcmd. At some point echo will call exit, which will cause the parent to return from wait in main (user/sh.c:145).

You might wonder why fork and exec are not combined in a single call; we will see later that the shell exploits the separation in its implementation of I/O redirection. To avoid the wastefulness of creating a duplicate process and then immediately replacing it (with exec), operating kernels optimize the implementation of fork for this use case by using virtual memory techniques such as copy-on-write (see Section 4.6).

Xv6 allocates most user-space memory implicitly: fork allocates the memory required for the child's copy of the parent's memory, and exec allocates enough memory to hold the executable file. A process that needs more memory at run-time (perhaps for malloc) can call sbrk(n) to grow its data memory by n bytes; sbrk returns the location of the new memory.
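The shell's fork/exec/wait pattern can be sketched in portable POSIX C (this runs outside xv6; execv and waitpid are the POSIX counterparts of xv6's exec and wait, and the helper name run is ours):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

// Run a command in a child process and wait for it, as the shell's
// main loop does: fork, exec in the child, wait in the parent.
// Returns the child's exit status, or -1 on error.
int run(char *argv[]) {
  pid_t pid = fork();
  if (pid < 0)
    return -1;                 // fork failed
  if (pid == 0) {              // child: replace its memory with the program
    execv(argv[0], argv);
    perror("exec");            // reached only if exec fails
    exit(127);
  }
  int status;                  // parent: wait for the child to finish
  if (waitpid(pid, &status, 0) < 0)
    return -1;
  return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

For example, `char *argv[] = {"/bin/echo", "hello", 0}; run(argv);` prints "hello" in a child process while the caller's own memory is untouched.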
and then advances the offset by the number of bytes read: a subsequent read will return the bytes following the ones returned by the first read. When there are no more bytes to read, a read returns zero to indicate the end of the file.

The call write(fd, buf, n) writes n bytes from buf to the file descriptor fd and returns the number of bytes written. Fewer than n bytes are written only when an error occurs. Like read, write writes data at the current file offset and then advances that offset by the number of bytes written: each write picks up where the previous one left off.

The following program fragment (which forms the essence of the program cat) copies data from its standard input to its standard output. If an error occurs, it writes a message to the standard error.

  char buf[512];
  int n;

  for(;;){
    n = read(0, buf, sizeof buf);
    if(n == 0)
      break;
    if(n < 0){
      fprintf(2, "read error\n");
      exit(1);
    }
    if(write(1, buf, n) != n){
      fprintf(2, "write error\n");
      exit(1);
    }
  }

The important thing to note in the code fragment is that cat doesn't know whether it is reading from a file, console, or a pipe. Similarly cat doesn't know whether it is printing to a console, a file, or whatever. The use of file descriptors and the convention that file descriptor 0 is input and file descriptor 1 is output allows a simple implementation of cat.

The close system call releases a file descriptor, making it free for reuse by a future open, pipe, or dup system call (see below). A newly allocated file descriptor is always the lowest-numbered unused descriptor of the current process.

File descriptors and fork interact to make I/O redirection easy to implement. Fork copies the parent's file descriptor table along with its memory, so that the child starts with exactly the same open files as the parent. The system call exec replaces the calling process's memory but preserves its file table. This behavior allows the shell to implement I/O redirection by forking, re-opening chosen file descriptors in the child, and then calling exec to run the new program. Here is a simplified version of the code a shell runs for the command cat < input.txt:

  char *argv[2];

  argv[0] = "cat";
  argv[1] = 0;
  if(fork() == 0) {
    close(0);
    open("input.txt", O_RDONLY);
    exec("cat", argv);
  }

After the child closes file descriptor 0, open is guaranteed to use that file descriptor for the newly opened input.txt: 0 will be the smallest available file descriptor. Cat then executes with file descriptor 0 (standard input) referring to input.txt. The parent process's file descriptors are not changed by this sequence, since it modifies only the child's descriptors.

The code for I/O redirection in the xv6 shell works in exactly this way (user/sh.c:82). Recall that at this point in the code the shell has already forked the child shell and that runcmd will call exec to load the new program.

The second argument to open consists of a set of flags,
expressed as bits, that control what open does. The possible values are defined in the file control (fcntl) header (kernel/fcntl.h:1-5): O_RDONLY, O_WRONLY, O_RDWR, O_CREATE, and O_TRUNC, which instruct open to open the file for reading, or for writing, or for both reading and writing, to create the file if it doesn't exist, and to truncate the file to zero length.

Now it should be clear why it is helpful that fork and exec are separate calls: between the two, the shell has a chance to redirect the child's I/O without disturbing the I/O setup of the main shell. One could instead imagine a hypothetical combined fork/exec system call, but the options for doing I/O redirection with such a call seem awkward: the shell could modify its own I/O setup before calling fork/exec (and then undo those modifications); or fork/exec could take instructions for I/O redirection as arguments; or (least attractively) every program like cat could be taught to do its own I/O redirection.

Although fork copies the file descriptor table, each underlying file offset is shared between parent and child. Consider this example:

  if(fork() == 0) {
    write(1, "hello ", 6);
    exit(0);
  } else {
    wait(0);
    write(1, "world\n", 6);
  }

At the end of this fragment, the file attached to file descriptor 1 will contain the data hello world. The write in the parent (which, thanks to wait, runs only after the child is done) picks up where the child's write left off. This behavior helps produce sequential output from sequences of shell commands, like (echo hello; echo world) > output.txt.

The dup system call duplicates an existing file descriptor, returning a new one that refers to the same underlying I/O object. Both file descriptors share an offset, just as the file descriptors duplicated by fork do. This is another way to write hello world into a file:

  fd = dup(1);
  write(1, "hello ", 6);
  write(fd, "world\n", 6);

Two file descriptors share an offset if they were derived from the same original file descriptor by a sequence of fork and dup calls. Otherwise file descriptors do not share offsets, even if they resulted from open calls for the same file. Dup allows shells to implement commands like this: ls existing-file non-existing-file > tmp1 2>&1. The 2>&1 tells the shell to give the command a file descriptor 2 that is a duplicate of descriptor 1. Both the name of the existing file and the error message for the non-existing file will show up in the file tmp1. The xv6 shell doesn't support I/O redirection for the error file descriptor, but now you know how to implement it.

File descriptors are a powerful abstraction, because they hide the details of what they are connected to: a process writing to file descriptor 1 may be writing to a file, to a device like the console, or to a pipe.
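The shared-offset behavior of dup can be demonstrated with a short POSIX C sketch (again outside xv6: POSIX open takes a mode argument and provides lseek, which xv6 does not; the function name and path are ours):

```c
#include <fcntl.h>
#include <unistd.h>

// Show that a descriptor and its dup share one file offset: a write
// through the duplicate continues where a write through the original
// left off. Returns the resulting file size (12 if offsets are shared).
int shared_offset_demo(const char *path) {
  int fd = open(path, O_CREAT | O_TRUNC | O_RDWR, 0600);
  if (fd < 0)
    return -1;
  int fd2 = dup(fd);            // fd2 refers to the same open file as fd
  write(fd, "hello ", 6);
  write(fd2, "world\n", 6);     // appends, rather than overwriting "hello "
  int size = (int)lseek(fd, 0, SEEK_END);
  close(fd);
  close(fd2);
  unlink(path);                 // clean up the scratch file
  return size;
}
```

If the two descriptors did not share an offset, the second write would overwrite the first and the file would hold only 6 bytes.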
is a small kernel buffer exposed to processes as a pair of file descriptors, one for reading and one for writing. Writing data to one end of the pipe makes that data available for reading from the other end of the pipe. Pipes provide a way for processes to communicate.

The following example code runs the program wc with standard input connected to the read end of a pipe:

  int p[2];
  char *argv[2];

  argv[0] = "wc";
  argv[1] = 0;

  pipe(p);
  if(fork() == 0) {
    close(0);
    dup(p[0]);
    close(p[0]);
    close(p[1]);
    exec("/bin/wc", argv);
  } else {
    close(p[0]);
    write(p[1], "hello world\n", 12);
    close(p[1]);
  }

The program calls pipe, which creates a new pipe and records the read and write file descriptors in the array p. After fork, both parent and child have file descriptors referring to the pipe. The child calls close and dup to make file descriptor zero refer to the read end of the pipe, closes the file descriptors in p, and calls exec to run wc. When wc reads from its standard input, it reads from the pipe. The parent closes the read side of the pipe, writes to the pipe, and then closes the write side.

If no data is available, a read on a pipe waits for either data to be written or for all file descriptors referring to the write end to be closed; in the latter case, read will return 0, just as if the end of a data file had been reached. The fact that read blocks until it is impossible for new data to arrive is one reason that it's important for the child to close the write end of the pipe before executing wc above: if one of wc's file descriptors referred to the write end of the pipe, wc would never see end-of-file.

The xv6 shell implements pipelines in a manner similar to the above code. The child process creates a pipe to connect the left end of the pipeline with the right end, calls runcmd for each end, and waits for both to finish. The right end of a pipeline may itself contain a pipe, so the shell may create a tree of processes: the leaves are commands, and the interior nodes are processes that wait until the left and right children complete. In principle, one could have the interior nodes run the left end of a pipeline, but doing so correctly would complicate the implementation. This incorrect behavior could be fixed by calling exit in runcmd for interior processes, but the fix complicates the code: runcmd would need to know whether it is an interior process. Complications also arise from not forking in runcmd(p->right). For example, with just that modification, sleep 10 | echo hi would immediately print "hi" instead of printing it after 10 seconds, because echo runs immediately and exits, not waiting for sleep to finish. Since the goal of sh.c is to be as simple as possible, it does not try to avoid creating interior processes.

Pipes may seem no more powerful than temporary files: the pipeline echo hello world | wc could be implemented without pipes as echo hello world > /tmp/xyz; wc < /tmp/xyz. Pipes have at least four advantages over temporary files in this situation. First, pipes automatically clean themselves up; with the file redirection, a shell would have to be careful to remove /tmp/xyz when done. Second, pipes can pass arbitrarily long streams of data, while file redirection requires enough free space on disk to store all the data. Third, pipes allow for parallel execution of pipeline stages, while the file approach requires the first program to finish before the second starts. Fourth, if you are implementing inter-process communication, pipes' blocking reads and writes are more efficient than the non-blocking semantics of files.
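A minimal pipe round trip can be written in POSIX C (outside xv6; the function name pipe_demo is ours, and POSIX _exit replaces xv6's exit):

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

// Send a short message through a pipe from a child to the parent.
// Returns the number of bytes that arrived (6 on success), -1 if the
// pipe could not be created, or -2 if the bytes were corrupted.
int pipe_demo(void) {
  int p[2];
  char buf[16];
  if (pipe(p) < 0)
    return -1;
  if (fork() == 0) {
    close(p[0]);                          // child keeps only the write end
    write(p[1], "hello\n", 6);
    close(p[1]);                          // reader sees EOF after the data
    _exit(0);
  }
  close(p[1]);                            // parent keeps only the read end
  int n = (int)read(p[0], buf, sizeof buf);  // blocks until data or EOF
  close(p[0]);
  wait(0);
  if (n == 6 && memcmp(buf, "hello\n", 6) != 0)
    return -2;
  return n;
}
```

Note that each side closes the pipe end it does not use; as the text explains, a reader only sees end-of-file once every descriptor for the write end has been closed.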
calls to create new files and directories: mkdir creates a new directory, open with the O_CREATE flag creates a new data file, and mknod creates a new device file. This example illustrates all three:

  mkdir("/dir");
  fd = open("/dir/file", O_CREATE|O_WRONLY);
  close(fd);
  mknod("/console", 1, 1);

Mknod creates a special file that refers to a device. Associated with a device file are the major and minor device numbers (the two arguments to mknod), which uniquely identify a kernel device. When a process later opens a device file, the kernel diverts read and write system calls to the kernel device implementation instead of passing them to the file system.

A file's name is distinct from the file itself; the same underlying file, called an inode, can have multiple names, called links. Each link consists of an entry in a directory; the entry contains a file name and a reference to an inode. An inode holds metadata about a file, including its type (file or directory or device), its length, the location of the file's content on disk, and the number of links to the file.

The fstat system call retrieves information from the inode that a file descriptor refers to. It fills in a struct stat, defined in stat.h (kernel/stat.h):

  #define T_DIR     1   // Directory
  #define T_FILE    2   // File
  #define T_DEVICE  3   // Device

  struct stat {
    int dev;     // File system's disk device
    uint ino;    // Inode number
    short type;  // Type of file
    short nlink; // Number of links to file
    uint64 size; // Size of file in bytes
  };

The link system call creates another file system name referring to the same inode as an existing file. This fragment creates a new file named both a and b:

  open("a", O_CREATE|O_WRONLY);
  link("a", "b");

Reading from or writing to a is the same as reading from or writing to b. Each inode is identified by a unique inode number. After the code sequence above, it is possible to determine that a and b refer to the same underlying contents by inspecting the result of fstat: both will return the same inode number (ino), and the nlink count will be set to 2.

The unlink system call removes a name from the file system. The file's inode and the disk space holding its content are only freed when the file's link count is zero and no file descriptors refer to it. Thus adding unlink("a") to the last code sequence leaves the inode and file content accessible as b. Furthermore,

  fd = open("/tmp/xyz", O_CREATE|O_RDWR);
  unlink("/tmp/xyz");

is an idiomatic way to create a temporary inode with no name, which will be cleaned up when the process closes fd or exits.

Unix provides file utilities callable from the shell as user-level programs, for example mkdir, ln, and rm. This design allows anyone to extend the command-line interface by adding new user-level programs. In hindsight this plan seems obvious, but other systems designed at the time of Unix often built such commands into the shell (and built the shell into the kernel).

One exception is cd, which is built into the
shell (user/sh.c:160). cd must change the current working directory of the shell itself. If cd were run as a regular command, then the shell would fork a child process, the child process would run cd, and cd would change the child's working directory. The parent's (i.e., the shell's) working directory would not change.
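The unnamed-temporary idiom above works on POSIX systems too, and can be checked directly (a sketch outside xv6: the flag spelling O_CREAT, the mode argument, lseek, and the function name are POSIX/ours):

```c
#include <fcntl.h>
#include <unistd.h>

// Create a file, unlink its name immediately, and keep using it
// through the open descriptor. Returns the number of bytes read
// back (7 on success), or -1 on error.
int temp_inode_demo(void) {
  char buf[8];
  int fd = open("/tmp/temp_inode_demo", O_CREAT | O_TRUNC | O_RDWR, 0600);
  if (fd < 0)
    return -1;
  unlink("/tmp/temp_inode_demo");  // name gone; inode lives while fd is open
  write(fd, "scratch", 7);
  lseek(fd, 0, SEEK_SET);          // rewind to read the data back
  int n = (int)read(fd, buf, 7);
  close(fd);                       // last reference: inode and blocks freed
  return n;
}
```

After the unlink, no other process can open the file by name, yet the data remains fully usable through fd; the kernel frees the inode only when the descriptor is closed.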
how xv6 implements its Unix-like interface, but the ideas and concepts apply to more than just Unix. Any operating system must multiplex processes onto the underlying hardware, isolate processes from each other, and provide mechanisms for controlled inter-process communication. After studying xv6, you should be able to look at other, more complex operating systems and see the concepts underlying xv6 in those systems as well.
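One possible shape for the ping-pong exercise in Section 1.6 is the following POSIX C sketch (not a full solution and not xv6 library code; the function name and the choice of the byte 'x' are ours). A byte travels parent to child over one pipe and back over a second:

```c
#include <sys/wait.h>
#include <unistd.h>

// Bounce one byte parent -> child -> parent over two pipes.
// Returns the byte that completed the round trip, or -1 on error.
int pingpong_once(void) {
  int ptoc[2], ctop[2];      // parent-to-child and child-to-parent pipes
  char b = 'x';
  if (pipe(ptoc) < 0 || pipe(ctop) < 0)
    return -1;
  if (fork() == 0) {
    read(ptoc[0], &b, 1);    // child: receive the ping...
    write(ctop[1], &b, 1);   // ...and pong it back
    _exit(0);
  }
  write(ptoc[1], &b, 1);     // parent: send the ping
  b = 0;
  read(ctop[0], &b, 1);      // wait for the pong
  wait(0);
  return b;
}
```

Measuring exchanges per second, as the exercise asks, would mean wrapping the send/receive pair in a timed loop; that part is left out here.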
programming on some architecture, and will introduce RISC-V-specific ideas as they come up. A useful reference for RISC-V is "The RISC-V Reader: An Open Architecture Atlas" [12]. The user-level ISA [2] and the privileged architecture [1] are the official specifications.

The CPU in a complete computer is surrounded by support hardware, much of it in the form of I/O interfaces. Xv6 is written for the support hardware simulated by qemu's "-machine virt" option. This includes RAM, a ROM containing boot code, a serial connection to the user's keyboard/screen, and a disk for storage.

1. By "multi-core" this text means multiple CPUs that share memory but execute in parallel, each with its own set of registers. This text sometimes uses the term multiprocessor as a synonym for multi-core, though multiprocessor can also refer more specifically to a computer with several distinct processor chips.
than direct use of the disk.

Similarly, Unix transparently switches hardware CPUs among processes, saving and restoring register state as necessary, so that applications don't have to be aware of time sharing. This transparency allows the operating system to share CPUs even if some applications are in infinite loops.

As another example, Unix processes use exec to build up their memory image, instead of directly interacting with physical memory. This allows the operating system to decide where to place a process in memory; if memory is tight, the operating system might even store some of a process's data on disk. Exec also provides users with the convenience of a file system to store executable program images.

Many forms of interaction among Unix processes occur via file descriptors. Not only do file descriptors abstract away many details (e.g., where data in a pipe or file is stored), they are also defined in a way that simplifies interaction. For example, if one application in a pipeline fails, the kernel generates an end-of-file signal for the next process in the pipeline.

The system-call interface in Figure 1.2 is carefully designed to provide both programmer convenience and the possibility of strong isolation. The Unix interface isn't the only way to abstract resources, but it has proven to be a very good one.
interrupts, reading and writing the register that holds the address of a page table, etc. If an application in user mode attempts to execute a privileged instruction, then the CPU doesn't execute the instruction, but switches to supervisor mode so that supervisor-mode code can terminate the application, because it did something it shouldn't be doing. Figure 1.1 in Chapter 1 illustrates this organization. An application can execute only user-mode instructions (e.g., adding numbers) and is said to be running in user space, while the software in supervisor mode can also execute privileged instructions and is said to be running in kernel space. The software running in kernel space (or in supervisor mode) is called the kernel.

An application that wants to invoke a kernel function (e.g., the read system call in xv6) must transition to the kernel. CPUs provide a special instruction that switches the CPU from user mode to supervisor mode and enters the kernel at an entry point specified by the kernel; RISC-V provides the ecall instruction for this purpose. Once the CPU has switched to supervisor mode, the kernel can validate the arguments of the system call, decide whether the application is allowed to perform the requested operation, and then deny it or execute it. It is important that the kernel control the entry point for transitions to supervisor mode; if the application could decide the kernel entry point, a malicious application could, for example, enter the kernel at a point where the validation of arguments is skipped.
because an error in supervisor mode will often cause the kernel to fail. If the kernel fails, the computer stops working, and thus all applications fail too. The computer must reboot to start again.

To reduce the risk of mistakes in the kernel, OS designers can minimize the amount of operating system code that runs in supervisor mode, and execute the bulk of the operating system in user mode. This kernel organization is called a microkernel.

Figure 2.1 illustrates this microkernel design. In the figure, the file system runs as a user-level process. OS services running as processes are called servers. To allow applications to interact with the file server, the kernel provides an inter-process communication mechanism to send messages from one user-mode process to another. For example, if an application like the shell wants to read or write a file, it sends a message to the file server and waits for a response.

In a microkernel, the kernel interface consists of a few low-level functions for starting applications, sending messages, accessing device hardware, etc. This organization allows the kernel to be relatively simple, as most of the operating system resides in user-level servers.

Xv6 is implemented as a monolithic kernel, like most Unix operating systems. Thus, the xv6 kernel interface corresponds to the operating system interface, and the kernel implements the complete operating system. Since xv6 doesn't provide many services, its kernel is smaller than some microkernels, but conceptually xv6 is monolithic.
subvert the kernel's isolation mechanisms. The kernel must implement the process abstraction with care, because a buggy or malicious application may trick the kernel or hardware into doing something bad (e.g., circumventing isolation). The mechanisms used by the kernel to implement processes include the user/supervisor mode flag, address spaces, and time-slicing of threads.

To help enforce isolation, the process abstraction provides the illusion to a program that it has its own private machine. A process provides a program with what appears to be a private memory system, or address space, which other processes cannot read or write. A process also provides the program with what appears to be its own CPU to execute the program's instructions.

Xv6 uses page tables (which are implemented by hardware) to give each process its own address space. The RISC-V page table translates (or "maps") a virtual address (the address that a RISC-V instruction manipulates) to a physical address (an address that the CPU chip sends to main memory).

Xv6 maintains a separate page table for each process that defines that process's address space. As illustrated in Figure 2.3, an address space includes the process's user memory starting at virtual address zero. Instructions come first, followed by global variables, then the stack, and finally a "heap" area (for malloc) that the process can expand as needed. There are a number of factors that limit the maximum size of a process's address space: pointers on the RISC-V are 64 bits wide; the hardware uses only the low 39 bits when looking up virtual addresses in page tables; and xv6 uses only 38 of those 39 bits. Thus, the maximum address is 2^38 - 1 = 0x3fffffffff, which is MAXVA (kernel/riscv.h:348). At the top of the address space xv6 reserves a page for a trampoline and a page mapping the process's trapframe, used to switch to the kernel, as we will explain in Chapter 4.

The xv6 kernel maintains many pieces of state for each process, which it gathers into a struct proc (kernel/proc.h:86). A process's most important pieces of kernel state are its page table, its kernel stack, and its run state. We'll use the notation p->xxx to refer to elements of the proc structure; for example, p->pagetable is a pointer to the process's page table.

Each process has a thread of execution (or thread for short) that executes the process's instructions. A thread can be suspended and later resumed. To switch transparently between processes, the kernel suspends the currently running thread and resumes another process's thread. Much of the state of a thread (local variables, function call return addresses) is stored on the thread's stacks. Each process has two stacks: a user stack and a kernel stack (p->kstack). When the process is executing user instructions, only its user stack is in use, and its kernel
stack is empty. When the process enters the kernel (for a system call or interrupt), the kernel code executes on the process's kernel stack; while a process is in the kernel, its user stack still contains saved data, but isn't actively used. A process's thread alternates between actively using its user stack and its kernel stack. The kernel stack is separate (and protected from user code) so that the kernel can execute even if a process has wrecked its user stack.

A process can make a system call by executing the RISC-V ecall instruction. This instruction raises the hardware privilege level and changes the program counter to a kernel-defined entry point. The code at the entry point switches to a kernel stack and executes the kernel instructions that implement the system call. When the system call completes, the kernel switches back to the user stack and returns to user space by calling the sret instruction, which lowers the hardware privilege level and resumes executing user instructions just after the system call instruction. A process's thread can "block" in the kernel to wait for I/O, and resume where it left off when the I/O has finished.

p->state indicates whether the process is allocated, ready to run, running, waiting for I/O, or exiting.

p->pagetable holds the process's page table, in the format that the RISC-V hardware expects. Xv6 causes the paging hardware to use a process's p->pagetable when executing that process in user space. A process's page table also serves as the record of the addresses of the physical pages allocated to store the process's memory.
The reason it places the kernel at 0x80000000 rather than at 0x0 is that the address range 0x0:0x80000000 contains I/O devices.

The instructions at _entry set up a stack so that xv6 can run C code. Xv6 declares space for an initial stack, stack0, in the file start.c (kernel/start.c:11). The code at _entry loads the stack pointer register sp with the address stack0+4096, the top of the stack, because the stack on RISC-V grows down. Now that the kernel has a stack, _entry calls into C code at start (kernel/start.c:21).

The function start performs some configuration that is only allowed in machine mode, and then switches to supervisor mode. To enter supervisor mode, RISC-V provides the instruction mret. This instruction is most often used to return from a previous call from supervisor mode to machine mode. start isn't returning from such a call, but sets things up as if there had been one: it sets the previous privilege mode to supervisor in the register mstatus, it sets the return address to main by writing main's address into the register mepc, it disables virtual address translation in supervisor mode by writing 0 into the page-table register satp, and it delegates all interrupts and exceptions to supervisor mode.

Before jumping into supervisor mode, start performs one more task: it programs the clock chip to generate timer interrupts. With this housekeeping out of the way, start "returns" to supervisor mode by calling mret. This causes the program counter to change to main (kernel/main.c:11).

After main (kernel/main.c:11) initializes several devices and subsystems, it creates the first process by calling userinit (kernel/proc.c:212). The first process executes a small program written in RISC-V assembly, initcode.S (user/initcode.S:1), which re-enters the kernel by invoking the exec system call, as we saw in Chapter 1. Exec replaces the memory and registers of the current process with a new program, in this case /init. Once the kernel has completed exec, it returns to user space in the /init process. Init (user/init.c:15) creates a new console device file if needed and then opens it as file descriptors 0, 1, and 2. Then it starts a shell on the console. The system is up.
Most operating systems have adopted the process concept, and most processes look similar to xv6's. Modern operating systems, however, support several threads within a process, to allow a single process to exploit multiple CPUs. Supporting multiple threads in a process involves quite a bit of machinery that xv6 doesn't have, including potential interface changes (e.g., Linux's clone, a variant of fork), to control which aspects of a process threads share.
address are used; the top 25 bits are not used. In this Sv39 configuration, a RISC-V page table is logically an array of 2^27 (134,217,728) page table entries (PTEs). Each PTE contains a 44-bit physical page number (PPN) and some flags. The paging hardware translates a virtual address by using the top 27 bits of the 39 bits to index into the page table to find a PTE, and making a 56-bit physical address whose top 44 bits come from the PPN in the PTE and whose bottom 12 bits are copied from the original virtual address. Figure 3.1 shows this process with a logically simplified view of the page table as a simple array of PTEs (see Figure 3.2 for a fuller story). A page table gives the operating system control over virtual-to-physical address translations at the granularity of aligned chunks of 4096 (2^12) bytes. Such a chunk is called a page.

In Sv39 RISC-V, the top 25 bits of a virtual address are not used for translation; in the future, RISC-V may use those bits to define more levels of translation. The physical address also has room for growth: there is room in the PTE format for the physical page number to grow by another 10 bits.

Figure 3.1: RISC-V virtual and physical addresses, with a simplified logical page table.

As Figure 3.2 shows, the actual translation happens in three steps. A page table is stored in physical memory as a three-level tree. The root of the tree is a 4096-byte page-table page that contains 512 PTEs, which contain the physical addresses of page-table pages in the next level of the tree. Each of those pages contains 512 PTEs for the final level in the tree. The paging hardware uses the top 9 bits of the 27 bits to select a PTE in the root page-table page, the middle 9 bits to select a PTE in a page-table page in the next level of the tree, and the bottom 9 bits to select the final PTE.

If any of the three PTEs required to translate an address is not present, the paging hardware raises a page-fault exception, leaving it up to the kernel to handle the exception (see Chapter 4). The three-level structure allows a page table to omit entire page-table pages in the common case in which large ranges of virtual addresses have no mappings.

Each PTE contains flag bits that tell the paging hardware how the associated virtual address is allowed to be used. PTE_V indicates whether the PTE is present: if it is not set, a reference to the page causes an exception (i.e., is not allowed). PTE_R controls whether instructions are allowed to read the page. PTE_W controls whether instructions are allowed to write the page. PTE_X controls whether the CPU may interpret the content of the page as instructions and execute them. PTE_U controls whether instructions in user mode are allowed to access the page; if PTE_U is not set, the PTE can be used only in supervisor mode. Figure 3.2
shows how it all works. The flags and all other page hardware-related structures are defined in kernel/riscv.h.

To tell the hardware to use a page table, the kernel must write the physical address of the root page-table page into the satp register. Each CPU has its own satp. A CPU will translate all addresses generated by subsequent instructions using the page table pointed to by its own satp. Each CPU has its own satp so that different CPUs can run different processes, each with a private address space described by its own page table.

A few notes about terms. Physical memory refers to storage cells in DRAM. A byte of physical memory has an address, called a physical address. Instructions use only virtual addresses, which the paging hardware translates to physical addresses, and then sends to the DRAM hardware to read or write storage. Unlike physical memory and virtual addresses, virtual memory isn't a physical object, but refers to the collection of abstractions and mechanisms the kernel provides to manage physical memory and virtual addresses.
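The Sv39 address-splitting and flag checks described above can be sketched in standalone C. This is only an illustration of the layout, not xv6's code: the PTE_* bit values match kernel/riscv.h, but px, make_pte, translate, and user_can_write are hypothetical helpers written for this example.

```c
#include <stdint.h>

#define PGSHIFT 12              /* 4096-byte pages: the low 12 bits are the offset */
#define PTE_V (1L << 0)         /* valid */
#define PTE_R (1L << 1)         /* readable */
#define PTE_W (1L << 2)         /* writable */
#define PTE_X (1L << 3)         /* executable */
#define PTE_U (1L << 4)         /* accessible from user mode */

typedef uint64_t pte_t;

/* 9-bit index into the page-table page at the given level (2 = root, 0 = leaf). */
static uint64_t px(int level, uint64_t va) {
    return (va >> (PGSHIFT + 9 * level)) & 0x1FF;
}

/* Build a PTE: the 44-bit PPN sits at bits 10..53, the flags at bits 0..9. */
static pte_t make_pte(uint64_t ppn, int flags) { return (ppn << 10) | flags; }

/* Compose the 56-bit physical address: PPN from the leaf PTE, offset from the VA. */
static uint64_t translate(pte_t leaf, uint64_t va) {
    return ((leaf >> 10) << PGSHIFT) | (va & 0xFFF);
}

/* Would a user-mode store to this page be allowed? All three bits must be set. */
static int user_can_write(pte_t pte) {
    return (pte & (PTE_V | PTE_W | PTE_U)) == (PTE_V | PTE_W | PTE_U);
}
```

For example, for virtual address 0x80000ABC, px(2, va) selects root entry 2, and translating through a leaf PTE whose PPN is 0x80000 yields physical address 0x80000ABC — the bottom 12 bits pass through unchanged.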
xv6 interacts with devices.

The kernel gets at RAM and memory-mapped device registers using "direct" mapping; that is, mapping the resources at virtual addresses that are equal to the physical address. For example, the kernel itself is located at KERNBASE=0x80000000 in both the virtual address space and in physical memory. Direct mapping simplifies kernel code that reads or writes physical memory. For example, when fork allocates user memory for the child process, the allocator returns the physical address of that memory; fork uses that address directly as a virtual address when it is copying the parent's user memory to the child.

There are a couple of kernel virtual addresses that aren't direct-mapped.

The trampoline page. It is mapped at the top of the virtual address space; user page tables have this same mapping. Chapter 4 discusses the role of the trampoline page, but we see here an interesting use case of page tables: a physical page (holding the trampoline code) is mapped twice in the virtual address space of the kernel, once at the top of the virtual address space and once with a direct mapping.

The kernel stack pages. Each process has its own kernel stack, which is mapped high so that below it xv6 can leave an unmapped guard page. The guard page's PTE is invalid (i.e., PTE_V is not set), so that if the kernel overflows a kernel stack, it will likely cause an exception and the kernel will panic. Without a guard page an overflowing stack would overwrite other kernel memory, resulting in incorrect operation. A panic crash is preferable.

While the kernel uses its stacks via the high-memory mappings, they are also accessible to the kernel through a direct-mapped address. An alternate design might have just the direct mapping, and use the stacks at the direct-mapped address. In this arrangement, however, providing guard pages would involve unmapping virtual addresses that would otherwise refer to physical memory, which would then be hard to use.

The kernel maps the pages for the trampoline and the kernel text with the permissions PTE_R and PTE_X. The kernel reads and executes instructions from these pages. The kernel maps the other pages with the permissions PTE_R and PTE_W, so that it can read and write the memory in those pages. The mappings for the guard pages are invalid.
pointer to a RISC-V root page-table page; a pagetable_t may be either the kernel page table, or one of the per-process page tables. The central functions are walk, which finds the PTE for a virtual address, and mappages, which installs PTEs for new mappings. Functions starting with kvm manipulate the kernel page table; functions starting with uvm manipulate a user page table; other functions are used for both. copyout and copyin copy data to and from user virtual addresses provided as system call arguments; they are in vm.c because they need to explicitly translate those addresses in order to find the corresponding physical memory.

Early in the boot sequence, main calls kvminit (kernel/vm.c:22) to create the kernel's page table. This call occurs before xv6 has enabled paging on the RISC-V, so addresses refer directly to physical memory. kvminit first allocates a page of physical memory to hold the root page-table page. Then it calls kvmmap to install the translations that the kernel needs. The translations include the kernel's instructions and data, physical memory up to PHYSTOP, and memory ranges which are actually devices.

kvmmap (kernel/vm.c:118) calls mappages (kernel/vm.c:149), which installs mappings into a page table for a range of virtual addresses to a corresponding range of physical addresses. It does this separately for each virtual address in the range, at page intervals. For each virtual address to be mapped, mappages calls walk to find the address of the PTE for that address. It then initializes the PTE to hold the relevant physical page number, the desired permissions (PTE_W, PTE_X, and/or PTE_R), and PTE_V to mark the PTE as valid (kernel/vm.c:161).

walk (kernel/vm.c:72) mimics the RISC-V paging hardware as it looks up the PTE for a virtual address (see Figure 3.2). walk descends the 3-level page table 9 bits at a time. It uses each level's 9 bits of virtual address to find the PTE of either the next-level page table or the final page (kernel/vm.c:78). If the PTE isn't valid, then the required page hasn't yet been allocated; if the alloc argument is set, walk allocates a new page-table page and puts its physical address in the PTE. It returns the address of the PTE in the lowest layer in the tree (kernel/vm.c:88).

The above code depends on physical memory being direct-mapped into the kernel virtual address space. For example, as walk descends levels of the page table, it pulls the (physical) address of the next-level-down page table from a PTE (kernel/vm.c:80), and then uses that address as a virtual address to fetch the PTE at the next level down (kernel/vm.c:78).

main calls kvminithart (kernel/vm.c:53) to install the kernel page table. It writes the physical address of the root page-table page into the register satp. After this the CPU will translate addresses using the kernel
page table. Since the kernel uses an identity mapping, the now virtual address of the next instruction will map to the right physical memory address.

procinit (kernel/proc.c:26), which is called from main, allocates a kernel stack for each process. It maps each stack at the virtual address generated by KSTACK, which leaves room for the invalid stack-guard pages. kvmmap adds the mapping PTEs to the kernel page table, and the call to kvminithart reloads the kernel page table into satp so that the hardware knows about the new PTEs.

Each RISC-V CPU caches page table entries in a Translation Look-aside Buffer (TLB), and when xv6 changes a page table, it must tell the CPU to invalidate corresponding cached TLB entries. If it didn't, then at some point later the TLB might use an old cached mapping, pointing to a physical page that in the meantime has been allocated to another process, and as a result, a process might be able to scribble on some other process's memory. The RISC-V has an instruction sfence.vma that flushes the current CPU's TLB. xv6 executes sfence.vma in kvminithart after reloading the satp register, and in the trampoline code that switches to a user page table before returning to user space (kernel/trampoline.S:79).
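walk's level-by-level descent can be mimicked in ordinary user-level C by simulating page-table pages with calloc'd arrays of 512 slots. This is a sketch under the assumption that plain pointers stand in for direct-mapped physical addresses; it is not xv6's walk, which packs a PPN and flag bits into each 64-bit PTE.

```c
#include <stdint.h>
#include <stdlib.h>

#define NPTE 512                 /* 4096-byte page-table page / 8-byte PTE */

typedef struct ptpage { struct ptpage *next[NPTE]; } ptpage;

/* Extract the 9-bit index for the given level (2 = root, 0 = leaf). */
static unsigned px(int level, uint64_t va) {
    return (va >> (12 + 9 * level)) & 0x1FF;
}

/* Descend the 3-level tree; allocate missing interior pages if alloc != 0.
 * Returns the slot in the lowest-level page, or NULL if a level is missing. */
static ptpage **walk(ptpage *root, uint64_t va, int alloc) {
    ptpage *pt = root;
    for (int level = 2; level > 0; level--) {
        ptpage **slot = &pt->next[px(level, va)];
        if (*slot == 0) {
            if (!alloc || (*slot = calloc(1, sizeof(ptpage))) == 0)
                return 0;
        }
        pt = *slot;
    }
    return &pt->next[px(0, va)];
}
```

Calling walk twice with the same virtual address returns the same leaf slot, and a lookup with alloc=0 on an unmapped region fails — the same two behaviors the kernel version exhibits.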
the lock protects the fields in the struct. For now, ignore the lock and the calls to acquire and release; Chapter 6 will examine locking in detail.

The function main calls kinit to initialize the allocator (kernel/kalloc.c:27). kinit initializes the free list to hold every page between the end of the kernel and PHYSTOP. xv6 ought to determine how much physical memory is available by parsing configuration information provided by the hardware. Instead xv6 assumes that the machine has 128 megabytes of RAM. kinit calls freerange to add memory to the free list via per-page calls to kfree. A PTE can only refer to a physical address that is aligned on a 4096-byte boundary (is a multiple of 4096), so freerange uses PGROUNDUP to ensure that it frees only aligned physical addresses. The allocator starts with no memory; these calls to kfree give it some to manage.

The allocator sometimes treats addresses as integers in order to perform arithmetic on them (e.g., traversing all pages in freerange), and sometimes uses addresses as pointers to read and write memory (e.g., manipulating the run structure stored in each page); this dual use of addresses is the main reason that the allocator code is full of C type casts. The other reason is that freeing and allocation inherently change the type of the memory.

The function kfree (kernel/kalloc.c:47) begins by setting every byte in the memory being freed to the value 1. This will cause code that uses memory after freeing it (uses "dangling references") to read garbage instead of the old valid contents; hopefully that will cause such code to break faster. Then kfree prepends the page to the free list: it casts pa to a pointer to struct run, records the old start of the free list in r->next, and sets the free list equal to r. kalloc removes and returns the first element in the free list.
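The free-list scheme can be tried out at user level. The sketch below manages a static array of pages rather than real physical memory; struct run and the push/pop logic follow the description above, but these kinit/kfree/kalloc are simplified stand-ins for the kernel versions (no lock, no PGROUNDUP, a tiny fixed "RAM").

```c
#include <stddef.h>
#include <string.h>

#define PGSIZE 4096
#define NPAGES 8

static char mem[NPAGES][PGSIZE];        /* stand-in for the RAM between end and PHYSTOP */
struct run { struct run *next; };
static struct run *freelist;

/* Fill the page with junk so dangling references read garbage, then push it. */
static void kfree(void *pa) {
    memset(pa, 1, PGSIZE);
    struct run *r = (struct run *)pa;   /* reuse the freed page to hold the link */
    r->next = freelist;
    freelist = r;
}

/* Pop and return the first free page, or NULL if none remain. */
static void *kalloc(void) {
    struct run *r = freelist;
    if (r)
        freelist = r->next;
    return r;
}

/* Give the allocator its memory, one kfree call per page, as freerange does. */
static void kinit(void) {
    for (int i = 0; i < NPAGES; i++)
        kfree(mem[i]);
}
```

Note how the struct run lives inside the free page itself, so the free list costs no extra storage; only the first few bytes of a freed page (overwritten by the next pointer) differ from the 1-filled junk pattern.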
different processes' page tables translate user addresses to different pages of physical memory, so that each process has private user memory. Second, each process sees its memory as having contiguous virtual addresses starting at zero, while the process's physical memory can be non-contiguous. Third, the kernel maps a page with trampoline code at the top of the user address space; thus a single page of physical memory shows up in all address spaces.

Figure 3.4 shows the layout of the user memory of an executing process in xv6 in more detail. The stack is a single page, and is shown with the initial contents as created by exec. Strings containing the command-line arguments, as well as an array of pointers to them, are at the very top of the stack. Just under that are values that allow a program to start at main as if the function main(argc, argv) had just been called.

To detect a user stack overflowing the allocated stack memory, xv6 places an invalid guard page right below the stack. If the user stack overflows and the process tries to use an address below the stack, the hardware will generate a page-fault exception because the mapping is not valid. A real-world operating system might instead automatically allocate more memory for the user stack when it overflows.
part of the address space from a file stored in the file system. exec (kernel/exec.c:13) opens the named binary path using namei (kernel/exec.c:26), which is explained in Chapter 8. Then, it reads the ELF header. xv6 applications are described in the widely-used ELF format, defined in kernel/elf.h. An ELF binary consists of an ELF header, struct elfhdr (kernel/elf.h:6), followed by a sequence of program section headers, struct proghdr (kernel/elf.h:25). Each proghdr describes a section of the application that must be loaded into memory; xv6 programs have only one program section header, but other systems might have separate sections for instructions and data.

The first step is a quick check that the file probably contains an ELF binary. An ELF binary starts with the four-byte "magic number" 0x7F, 'E', 'L', 'F', or ELF_MAGIC (kernel/elf.h:3). If the ELF header has the right magic number, exec assumes that the binary is well-formed.

exec allocates a new page table with no user mappings with proc_pagetable (kernel/exec.c:38), allocates memory for each ELF segment with uvmalloc (kernel/exec.c:52), and loads each segment into memory with loadseg (kernel/exec.c:10). loadseg uses walkaddr to find the physical address of the allocated memory at which to write each page of the ELF segment, and readi to read from the file.

In the program section header of /init, the first user program created with exec, filesz may be less than memsz, indicating that the gap between them should be filled with zeroes (for C global variables) rather than read from the file. For /init, filesz is 2112 bytes and memsz is 2136 bytes, and thus uvmalloc allocates enough physical memory to hold 2136 bytes, but reads only 2112 bytes from the file /init.

Now exec allocates and initializes the user stack. It allocates just one stack page. exec copies the argument strings to the top of the stack one at a time, recording the pointers to them in ustack. It places a null pointer at the end of what will be the argv list passed to main. The first three entries in ustack are the fake return program counter, argc, and the argv pointer.

exec places an inaccessible page just below the stack page, so that programs that try to use more than one page will fault. This inaccessible page also allows exec to deal with arguments that are too large; in that situation, the copyout (kernel/vm.c:355) function that exec uses to copy arguments to the stack will notice that the destination page is not accessible, and will return -1.

During the preparation of the new memory image, if exec detects an error like an invalid program segment, it jumps to the label bad, frees the new image, and returns -1. exec must wait to free the old image until it is sure that the system call will succeed: if the old image is gone, the system call cannot return -1 to it. The only error cases in exec happen during the creation of the image. Once the image is complete,
exec can commit to the new page table (kernel/exec.c:113) and free the old one (kernel/exec.c:117).

exec loads bytes from the ELF file into memory at addresses specified by the ELF file. Users or processes can place whatever addresses they want into an ELF file. Thus exec is risky, because the addresses in the ELF file may refer to the kernel, accidentally or on purpose. The consequences for an unwary kernel could range from a crash to a malicious subversion of the kernel's isolation mechanisms (i.e., a security exploit). xv6 performs a number of checks to avoid these risks. For example, if(ph.vaddr + ph.memsz < ph.vaddr) checks for whether the sum overflows a 64-bit integer. The danger is that a user could construct an ELF binary with a ph.vaddr that points to a user-chosen address, and ph.memsz large enough that the sum overflows to 0x1000, which will look like a valid value. In an older version of xv6 in which the user address space also contained the kernel (but not readable/writable in user mode), the user could copy data from the ELF binary into the kernel. In the RISC-V version of xv6 this cannot happen, because the kernel has its own separate page table; loadseg loads into the process's page table, not into the kernel's page table.

It is easy for a kernel developer to omit a crucial check, and real-world kernels have a long history of missing checks whose absence can be exploited by user programs to obtain kernel privileges. It is likely that xv6 doesn't do a complete job of validating user-level data supplied to the kernel, which a malicious user program might be able to exploit to circumvent xv6's isolation.
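Two of exec's sanity checks can be reproduced in isolation: the magic-number test and the overflow check quoted above. ELF_MAGIC below is the value from kernel/elf.h; looks_like_elf and segment_range_ok are illustrative helpers, not xv6 functions.

```c
#include <stdint.h>

#define ELF_MAGIC 0x464C457FU   /* "\x7fELF" read as a little-endian uint32 */

/* Return 1 if the first word of the header plausibly begins an ELF binary. */
static int looks_like_elf(uint32_t magic) {
    return magic == ELF_MAGIC;
}

/* Return 1 if [vaddr, vaddr+memsz) is sane, 0 if the unsigned sum wraps.
 * Comparing the sum against vaddr itself is what catches the overflow. */
static int segment_range_ok(uint64_t vaddr, uint64_t memsz) {
    return vaddr + memsz >= vaddr;
}
```

A hostile header with vaddr = 0xFFFFFFFFFFFFF000 and memsz = 0x2000 sums to 0x1000, which would slip past a naive upper-bound comparison; segment_range_ok rejects it because the wrapped sum is smaller than vaddr.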
physical addresses, but xv6 doesn't use that feature.

On machines with lots of memory it might make sense to use RISC-V's support for "super pages." Small pages make sense when physical memory is small, to allow allocation and page-out to disk with fine granularity. For example, if a program uses only 8 kilobytes of memory, giving it a whole 4-megabyte super-page of physical memory is wasteful. Larger pages make sense on machines with lots of RAM, and may reduce overhead for page-table manipulation.

The xv6 kernel's lack of a malloc-like allocator that can provide memory for small objects prevents the kernel from using sophisticated data structures that would require dynamic allocation. Memory allocation is a perennial hot topic, the basic problems being efficient use of limited memory and preparing for unknown future requests [7]. Today people care more about speed than space efficiency. In addition, a more elaborate kernel would likely allocate many different sizes of small blocks, rather than (as in xv6) just 4096-byte blocks; a real kernel allocator would need to handle small allocations as well as large ones.
set aside ordinary execution of instructions and force a transfer of control to special code that handles the event. One situation is a system call, when a user program executes the ecall instruction to ask the kernel for something. Another situation is an exception: an instruction (user or kernel) does something illegal, such as divide by zero or use an invalid virtual address. The third situation is a device interrupt, when a device signals that it needs attention, for example when the disk hardware finishes a read or write request.

This book uses trap as a generic term for these situations. Typically whatever code was executing at the time of the trap will later need to resume, and shouldn't need to be aware that anything special happened. That is, we often want traps to be transparent; this is particularly important for interrupts, which the interrupted code typically doesn't expect. The usual sequence is that a trap forces a transfer of control into the kernel; the kernel saves registers and other state so that execution can be resumed; the kernel executes appropriate handler code (e.g., a system call implementation or device driver); the kernel restores the saved state and returns from the trap; and the original code resumes where it left off.

The xv6 kernel handles all traps. This is natural for system calls. It makes sense for interrupts since isolation demands that user processes not directly use devices, and because only the kernel has the state needed for device handling. It also makes sense for exceptions since xv6 responds to all exceptions from user space by killing the offending program.

xv6 trap handling proceeds in four stages: hardware actions taken by the RISC-V CPU, an assembly "vector" that prepares the way for kernel C code, a C trap handler that decides what to do with the trap, and the system call or device-driver service routine. While commonality among the three trap types suggests that a kernel could handle all traps with a single code path, it turns out to be convenient to have separate assembly vectors and C trap handlers for three distinct cases: traps from user space, traps from kernel space, and timer interrupts.
handler here; the RISC-V jumps here to handle a trap.

sepc: When a trap occurs, RISC-V saves the program counter here (since the pc is then overwritten with stvec). The sret (return from trap) instruction copies sepc to the pc. The kernel can write sepc to control where sret goes.

scause: The RISC-V puts a number here that describes the reason for the trap.

sscratch: The kernel places a value here that comes in handy at the very start of a trap handler.

sstatus: The SIE bit in sstatus controls whether device interrupts are enabled. If the kernel clears SIE, the RISC-V will defer device interrupts until the kernel sets SIE. The SPP bit indicates whether a trap came from user mode or supervisor mode, and controls to what mode sret returns.

The above registers relate to traps handled in supervisor mode, and they cannot be read or written in user mode. There is an equivalent set of control registers for traps handled in machine mode; xv6 uses them only for the special case of timer interrupts. Each CPU on a multi-core chip has its own set of these registers, and more than one CPU may be handling a trap at any given time.

When it needs to force a trap, the RISC-V hardware does the following for all trap types (other than timer interrupts):

1. If the trap is a device interrupt, and the sstatus SIE bit is clear, don't do any of the following.
2. Disable interrupts by clearing SIE.
3. Copy the pc to sepc.
4. Save the current mode (user or supervisor) in the SPP bit in sstatus.
5. Set scause to reflect the trap's cause.
6. Set the mode to supervisor.
7. Copy stvec to the pc.
8. Start executing at the new pc.

Note that the CPU doesn't switch to the kernel page table, doesn't switch to a stack in the kernel, and doesn't save any registers other than the pc. Kernel software must perform these tasks. One reason that the CPU does minimal work during a trap is to provide flexibility to software; for example, some operating systems don't require a page table switch in some situations, which can increase performance.

You might wonder whether the CPU hardware's trap handling sequence could be further simplified. For example, suppose that the CPU didn't switch program counters. Then a trap could switch to supervisor mode while still running user instructions. Those user instructions could break user/kernel isolation, for example by modifying the satp register to point to a page table that allowed accessing all of physical memory. It is thus important that the CPU switch to a kernel-specified instruction address, namely stvec.
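The eight hardware steps above can be mirrored in a small user-level model. This is a sketch of the register effects only, not real CSR manipulation: struct hart and take_trap are hypothetical names, and the fields model just the bits the text describes.

```c
#include <stdint.h>

enum mode { USER = 0, SUPERVISOR = 1 };

struct hart {
    uint64_t pc, sepc, stvec, scause;
    int mode;        /* current privilege mode */
    int sie, spp;    /* sstatus.SIE and sstatus.SPP bits */
};

/* Model the RISC-V trap-entry sequence for supervisor-handled traps.
 * Returns 0 if a device interrupt was deferred because SIE was clear. */
static int take_trap(struct hart *h, uint64_t cause, int is_device_intr) {
    if (is_device_intr && !h->sie)   /* step 1: deferred interrupt */
        return 0;
    h->sie = 0;                      /* step 2: disable interrupts */
    h->sepc = h->pc;                 /* step 3: save the pc */
    h->spp = h->mode;                /* step 4: save the previous mode */
    h->scause = cause;               /* step 5: record the cause */
    h->mode = SUPERVISOR;            /* step 6: enter supervisor mode */
    h->pc = h->stvec;                /* steps 7-8: jump to the handler */
    return 1;
}
```

Running the model makes the text's point concrete: after a trap, the pc is at stvec and interrupts are off, but nothing about stacks, page tables, or general-purpose registers has changed — that cleanup is left to the kernel.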
illegal, or a device interrupts. The high-level path of a trap from user space is uservec (kernel/trampoline.S:16), then usertrap (kernel/trap.c:37); and when returning, usertrapret (kernel/trap.c:90) and then userret (kernel/trampoline.S:88).

Traps from user code are more challenging than from the kernel, since satp points to a user page table that doesn't map the kernel, and the stack pointer may contain an invalid or even malicious value.

Because the RISC-V hardware doesn't switch page tables during a trap, the user page table must include a mapping for uservec, the trap vector instructions that stvec points to. uservec must switch satp to point to the kernel page table; in order to continue executing instructions after the switch, uservec must be mapped at the same address in the kernel page table as in the user page table.

xv6 satisfies these constraints with a trampoline page that contains uservec. xv6 maps the trampoline page at the same virtual address in the kernel page table and in every user page table. This virtual address is TRAMPOLINE (as we saw in Figure 2.3 and in Figure 3.3). The trampoline contents are set in trampoline.S, and (when executing user code) stvec is set to uservec (kernel/trampoline.S:16).

When uservec starts, all 32 registers contain values owned by the interrupted code. But uservec needs to be able to modify some registers in order to set satp and generate addresses at which to save the registers. RISC-V provides a helping hand in the form of the sscratch register. The csrrw instruction at the start of uservec swaps the contents of a0 and sscratch. Now the user code's a0 is saved; uservec has one register (a0) to play with; and a0 contains the value the kernel previously placed in sscratch.

uservec's next task is to save the user registers. Before entering user space, the kernel previously set sscratch to point to a per-process trapframe that (among other things) has space to save all the user registers (kernel/proc.h:44). Because satp still refers to the user page table, uservec needs the trapframe to be mapped in the user address space. When creating each process, xv6 allocates a page for the process's trapframe, and arranges for it always to be mapped at user virtual address TRAPFRAME, which is just below TRAMPOLINE. The process's p->trapframe also points to the trapframe, though at its physical address so the kernel can use it through the kernel page table.

Thus after swapping a0 and sscratch, a0 holds a pointer to the current process's trapframe. uservec now saves all user registers there, including the user's a0, read from sscratch.

The trapframe contains pointers to the current process's kernel stack, the current CPU's hartid, the address of usertrap, and the address of the kernel page table. uservec retrieves these values, switches satp to the
kernel page table, and calls usertrap.

The job of usertrap is to determine the cause of the trap, process it, and return (kernel/trap.c:37). As mentioned above, it first changes stvec so that a trap while in the kernel will be handled by kernelvec. It saves the sepc (the saved user program counter), again because there might be a process switch in usertrap that could cause sepc to be overwritten. If the trap is a system call, syscall handles it; if a device interrupt, devintr; otherwise it's an exception, and the kernel kills the faulting process. The system call path adds four to the saved user pc because RISC-V, in the case of a system call, leaves the program pointer pointing to the ecall instruction. On the way out, usertrap checks if the process has been killed or should yield the CPU (if this trap is a timer interrupt).

The first step in returning to user space is the call to usertrapret (kernel/trap.c:90). This function sets up the RISC-V control registers to prepare for a future trap from user space. This involves changing stvec to refer to uservec, preparing the trapframe fields that uservec relies on, and setting sepc to the previously saved user program counter. At the end, usertrapret calls userret on the trampoline page that is mapped in both user and kernel page tables; the reason is that assembly code in userret will switch page tables.

usertrapret's call to userret passes a pointer to the process's user page table in a0 and TRAPFRAME in a1 (kernel/trampoline.S:88). userret switches satp to the process's user page table. Recall that the user page table maps both the trampoline page and TRAPFRAME, but nothing else from the kernel. The fact that the trampoline page is mapped at the same virtual address in user and kernel page tables is what allows uservec to keep executing after changing satp. userret copies the trapframe's saved user a0 to sscratch in preparation for a later swap with TRAPFRAME. From this point on, the only data userret can use is the register contents and the content of the trapframe. Next userret restores saved user registers from the trapframe, does a final swap of a0 and sscratch to restore the user a0 and save TRAPFRAME for the next trap, and uses sret to return to user space.
", "content": "user code places arguments exec registers a0 a1 puts system call number a7 system call numbers match entries syscalls array table function pointers kernelsyscallc108 ecall instruction traps kernel executes uservec usertrap syscall saw syscall kernelsyscallc133 retrieves system call number saved a7 trapframe uses index syscalls rst system call a7 contains sysexec ker nelsyscallh8 resulting call system call implementation function sysexec system call implementation function returns syscall records return value p trapframe a0 cause original userspace call exec return value since c calling convention riscv places return values a0 system calls conventionally return negative numbers indicate errors zero positive numbers success system call number invalid syscall prints error returns 1", "url": "RV32ISPEC.pdf#segment30", "timestamp": "2023-10-07 23:04:58", "segment": "segment30", "image_urls": [], "Book": "riscv-rev1" }, { "section": "4.4 Code: System call arguments ", "content": "system call implementations kernel need nd arguments passed user code user code calls system call wrapper functions arguments initially riscv c calling convention places registers kernel trap code saves user registers current process trap frame kernel code nd functions argint argaddr argfd retrieve n th system call argument trap frame integer pointer le descriptor call argraw retrieve appropriate saved user register kernelsyscallc35 system calls pass pointers arguments kernel must use pointers read write user memory exec system call example passes kernel array pointers referring string arguments user space pointers pose two challenges first user pro gram may buggy malicious may pass kernel invalid pointer pointer intended trick kernel accessing kernel memory instead user memory second xv6 kernel page table mappings user page table mappings kernel use ordinary instructions load store usersupplied addresses kernel implements functions safely transfer data usersupplied addresses 
fetchstr is an example (kernel/syscall.c:25). The file system calls such as exec use fetchstr to retrieve string file-name arguments from user space. fetchstr calls copyinstr to do the hard work.

copyinstr (kernel/vm.c:406) copies up to max bytes to dst from virtual address srcva in the user page table pagetable. It uses walkaddr (which calls walk) to walk the page table in software to determine the physical address pa0 for srcva. Since the kernel maps all of physical RAM at kernel virtual addresses that are equal to the RAM's physical addresses, copyinstr can directly copy string bytes from pa0 to dst. walkaddr (kernel/vm.c:95) checks that the user-supplied virtual address is part of the process's user address space, so programs cannot trick the kernel into reading other memory. A similar function, copyout, copies data from the kernel to a user-supplied address.
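The careful, bounded copy that copyinstr performs can be illustrated without page tables. The sketch below copies at most max bytes and reports failure if no NUL terminator turns up within that budget; the per-page validity check that walkaddr does is abstracted away here, and copyinstr_sketch is a hypothetical name.

```c
#include <stddef.h>

/* Copy a NUL-terminated string from src to dst, examining at most max bytes.
 * Returns 0 on success, -1 if the terminator wasn't found within max. */
static int copyinstr_sketch(char *dst, const char *src, size_t max) {
    for (size_t i = 0; i < max; i++) {
        dst[i] = src[i];
        if (src[i] == '\0')
            return 0;
    }
    return -1;   /* string too long for the budget: reject it */
}
```

The essential property is that the kernel never trusts the user string to be terminated: the loop is bounded by max, so a hostile, unterminated argument cannot make the kernel read or write past its buffer.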
return to whatever code was interrupted by the trap. Because a yield may have disturbed the saved sepc and the saved previous mode in sstatus, kerneltrap saves them when it starts. It now restores those control registers and returns to kernelvec (kernel/kernelvec.S:48). kernelvec pops the saved registers from the stack and executes sret, which copies sepc to pc and resumes the interrupted kernel code. It is worth thinking through how the trap return happens if kerneltrap called yield due to a timer interrupt. xv6 sets a CPU's stvec to kernelvec when that CPU enters the kernel from user space; you can see this in usertrap (kernel/trap.c:29). There is a window of time when the kernel is executing but stvec is still set to uservec, and it is crucial that device interrupts be disabled during that window. Luckily the RISC-V always disables interrupts when it starts to take a trap, and xv6 does not enable them again until after it sets stvec.", "url": "RV32ISPEC.pdf#segment32", "timestamp": "2023-10-07 23:04:58", "segment": "segment32", "image_urls": [], "Book": "riscv-rev1" }, { "section": "4.6 Page-fault exceptions ", "content": "xv6's response to exceptions is quite boring: if an exception happens in user space, the kernel kills the faulting process; if an exception happens in the kernel, the kernel panics. Real operating systems often respond in much more interesting ways. As an example, many kernels use page faults to implement copy-on-write (COW) fork. To explain copy-on-write fork, consider xv6's fork, described in Chapter 3. fork causes the child to have the same memory content as the parent, by calling uvmcopy (kernel/vm.c:309) to allocate physical memory for the child and copy the parent's memory into it. It would be more efficient if the child and parent could share the parent's physical memory. A straightforward implementation of this would not work, however, since it would cause the parent and child to disrupt each other's execution with their writes to the shared stack and heap. Parent and child can safely share physical memory using copy-on-write fork, driven by page faults. When a CPU cannot translate a virtual address to a physical address, the CPU generates a page-fault exception. RISC-V has three different kinds of page fault: load page faults, when a load instruction cannot translate its virtual address; store page faults, when a store instruction cannot translate its virtual address; and instruction page faults, when the address of an instruction does not translate. The value in the scause register indicates the type of the page fault, and the stval register contains the address that could not be translated. The basic plan in COW fork is for the parent and child to initially share all physical pages, but to map them
read-only. Thus, when the child or parent executes a store instruction, the RISC-V CPU raises a page-fault exception. In response to this exception, the kernel makes a copy of the page that contains the faulted address. It maps one copy read/write in the child's address space and the other copy read/write in the parent's address space. After updating the page tables, the kernel resumes the faulting process at the instruction that caused the fault. Because the kernel has updated the relevant PTE to allow writes, the faulting instruction will now execute without a fault. This COW plan works well for fork, because often the child calls exec immediately after the fork, replacing its address space with a new address space. In that common case, the child will experience only a few page faults, and the kernel can avoid making a complete copy. Furthermore, COW fork is transparent: no modifications to applications are necessary for them to benefit. The combination of page tables and page faults opens up a wide range of interesting possibilities beyond COW fork. Another widely-used feature is called lazy allocation, which has two parts. First, when an application calls sbrk, the kernel grows the address space, but marks the new addresses as not valid in the page table. Second, on a page fault on one of those new addresses, the kernel allocates physical memory and maps it into the page table. Since applications often ask for more memory than they need, lazy allocation is a win: the kernel allocates memory only when the application actually uses it. Like COW fork, the kernel can implement this feature transparently to applications. Yet another widely-used feature that exploits page faults is paging from disk. If applications need more memory than the available physical RAM, the kernel can evict some pages: write them to a storage device such as a disk, and mark their PTEs as not valid. If an application reads or writes an evicted page, the CPU will experience a page fault. The kernel can then inspect the faulting address. If the address belongs to a page that is on disk, the kernel allocates a page of physical memory, reads the page from the disk into that memory, updates the PTE to be valid and refer to that memory, and resumes the application. To make room for the page, the kernel may have to evict another page. This feature requires no changes to applications, and works well if applications have locality of reference, i.e., they use only a subset of their memory at any given time. Other features that combine paging and page-fault exceptions include automatically extending stacks and memory-mapped files.", "url": "RV32ISPEC.pdf#segment33", "timestamp": "2023-10-07 23:04:58", "segment": "segment33", "image_urls": [], "Book": "riscv-rev1" }, { "section": "4.7 Real world ", 
"content": "need special trampoline pages could eliminated kernel memory mapped ev ery process user page table appropriate pte permission ags would also eliminate need page table switch trapping user space kernel turn would allow system call implementations kernel take advantage current process user memory mapped allowing kernel code directly dereference user pointers many operat ing systems used ideas increase efciency xv6 avoids order reduce chances security bugs kernel due inadvertent use user pointers reduce complexity would required ensure user kernel virtual addresses overlap", "url": "RV32ISPEC.pdf#segment34", "timestamp": "2023-10-07 23:04:58", "segment": "segment34", "image_urls": [], "Book": "riscv-rev1" }, { "section": "4.8 Exercises ", "content": "1 functions copyin copyinstr walk user page table software set kernel page table kernel user program mapped copyin copyinstr use memcpy copy system call arguments kernel space relying hardware page table walk 2 implement lazy memory allocation 3 implement cow fork", "url": "RV32ISPEC.pdf#segment35", "timestamp": "2023-10-07 23:04:58", "segment": "segment35", "image_urls": [], "Book": "riscv-rev1" }, { "section": "Chapter 5 ", "content": "interrupts device drivers driver code operating system manages particular device congures device hardware tells device perform operations handles resulting interrupts interacts processes may waiting io device driver code tricky driver executes concurrently device manages addition driver must understand device hardware interface complex poorly documented devices need attention operating system usually congured generate interrupts one type trap kernel trap handling code recognizes device raised interrupt calls driver interrupt handler xv6 dispatch happens devintr kerneltrapc177 many device drivers execute code two contexts top half runs process kernel thread bottom half executes interrupt time top half called via system calls read write want device perform io code may ask 
hardware to start an operation (e.g., ask the disk to read a block); then the code waits for the operation to complete. Eventually the device completes the operation and raises an interrupt. The driver's interrupt handler, acting as the bottom half, figures out what operation has completed, wakes up a waiting process if appropriate, and tells the hardware to start work on any waiting next operation.", "url": "RV32ISPEC.pdf#segment36", "timestamp": "2023-10-07 23:04:58", "segment": "segment36", "image_urls": [], "Book": "riscv-rev1" }, { "section": "5.1 Code: Console input ", "content": "The console driver (console.c) is a simple illustration of driver structure. The console driver accepts characters typed by a human, via the UART serial-port hardware attached to the RISC-V. The console driver accumulates a line of input at a time, processing special input characters such as backspace and control-u. User processes, such as the shell, use the read system call to fetch lines of input from the console. When you type input to xv6 in QEMU, your keystrokes are delivered to xv6 by way of QEMU's simulated UART hardware. The UART hardware that the driver talks to is a 16550 chip [11] emulated by QEMU. On a real computer, a 16550 would manage an RS232 serial link connecting to a terminal or other computer. When running QEMU, it is connected to your keyboard and display. The UART hardware appears to software as a set of memory-mapped control registers. That is, there are some physical addresses that the RISC-V hardware connects to the UART device, so that loads and stores interact with the device hardware rather than RAM. The memory-mapped addresses for the UART start at 0x10000000, or UART0 (kernel/memlayout.h:21). There are a handful of UART control registers, each the width of a byte; their offsets from UART0 are defined in kernel/uart.c:22. For example, the LSR register contains bits that indicate whether input characters are waiting to be read by the software. These characters (if any) are available for reading from the RHR register. Each time one is read, the UART hardware deletes it from an internal FIFO of waiting characters, and clears a ready bit in LSR when the FIFO is empty. The UART transmit hardware is largely independent of the receive hardware; if software writes a byte to the THR, the UART transmits that byte. xv6's main calls consoleinit (kernel/console.c:184) to initialize the UART hardware. This code configures the UART to generate a receive interrupt when the UART receives each byte of input, and a transmit complete interrupt each time the UART finishes sending a byte of output (kernel/uart.c:53). The xv6 shell reads from the console by way of a file descriptor opened by init.c (user/init.c:19). Calls to the read
system call make their way into the kernel to consoleread (kernel/console.c:82). consoleread waits for input to arrive (via interrupts) and be buffered in cons.buf, copies the input to user space, and (after a whole line has arrived) returns to the user process. If the user has not typed a full line yet, any reading processes will wait in the sleep call (kernel/console.c:98); Chapter 7 explains the details of sleep. When the user types a character, the UART hardware asks the RISC-V to raise an interrupt, which activates xv6's trap handler. The trap handler calls devintr (kernel/trap.c:177), which looks at the RISC-V scause register to discover that the interrupt is from an external device. Then it asks a hardware unit called the PLIC [1] to tell it which device interrupted (kernel/trap.c:186). If it was the UART, devintr calls uartintr. uartintr (kernel/uart.c:180) reads any waiting input characters from the UART hardware and hands them to consoleintr (kernel/console.c:138); it does not wait for characters, since future input will raise a new interrupt. The job of consoleintr is to accumulate input characters in cons.buf until a whole line arrives. consoleintr treats backspace characters specially. When a newline arrives, consoleintr wakes up a waiting consoleread, if there is one. Once woken, consoleread will observe a full line in cons.buf, copy it to user space, and return (via the system call machinery) to user space.", "url": "RV32ISPEC.pdf#segment37", "timestamp": "2023-10-07 23:04:58", "segment": "segment37", "image_urls": [], "Book": "riscv-rev1" }, { "section": "5.2 Code: Console output ", "content": "A write system call on a file descriptor connected to the console eventually arrives at uartputc (kernel/uart.c:87). The device driver maintains an output buffer (uart_tx_buf) so that writing processes do not have to wait for the UART to finish sending; instead, uartputc appends each character to the buffer, calls uartstart to start the device transmitting (if it is not already), and returns. The only situation in which uartputc waits is if the buffer is already full. Each time the UART finishes sending a byte, it generates an interrupt. uartintr calls uartstart, which checks that the device really has finished sending, and hands the device the next buffered output character. Thus if a process writes multiple bytes to the console, typically the first byte will be sent by uartputc's call to uartstart, and the remaining buffered bytes will be sent by uartstart calls from uartintr as transmit complete interrupts arrive. A general pattern to note is the decoupling of device activity from process activity via buffering and interrupts. The console driver can process
input even when no process is waiting to read it; a subsequent read will see the input. Similarly, processes can send output without having to wait for the device. This decoupling can increase performance by allowing processes to execute concurrently with device I/O, and is particularly important when the device is slow (as with the UART) or needs immediate attention (as with echoing typed characters). This idea is sometimes called I/O concurrency.", "url": "RV32ISPEC.pdf#segment38", "timestamp": "2023-10-07 23:04:59", "segment": "segment38", "image_urls": [], "Book": "riscv-rev1" }, { "section": "5.3 Concurrency in drivers ", "content": "You may have noticed calls to acquire in consoleread and in consoleintr. These calls acquire a lock, which protects the console driver's data structures from concurrent access. There are three concurrency dangers here: two processes on different CPUs might call consoleread at the same time; the hardware might ask a CPU to deliver a console (really UART) interrupt while that CPU is already executing inside consoleread; and the hardware might deliver a console interrupt on a different CPU while consoleread is executing. Chapter 6 explores how locks help in these scenarios. Another way in which concurrency requires care in drivers is that one process may be waiting for input from a device, but the interrupt signaling the arrival of the input may arrive when a different process (or no process at all) is running. Thus interrupt handlers are not allowed to think about the process or code that they have interrupted. For example, an interrupt handler cannot safely call copyout with the current process's page table. Interrupt handlers typically do relatively little work (e.g., just copy the input data to a buffer), and wake up top-half code to do the rest.", "url": "RV32ISPEC.pdf#segment39", "timestamp": "2023-10-07 23:04:59", "segment": "segment39", "image_urls": [], "Book": "riscv-rev1" }, { "section": "5.4 Timer interrupts ", "content": "xv6 uses timer interrupts to maintain its clock and to enable it to switch among compute-bound processes; the yield calls in usertrap and kerneltrap cause this switching. Timer interrupts come from clock hardware attached to each RISC-V CPU. xv6 programs this clock hardware to interrupt each CPU periodically. RISC-V requires that timer interrupts be taken in machine mode, not supervisor mode. RISC-V machine mode executes without paging and with a separate set of control registers, so it is not practical to run ordinary xv6 kernel code in machine mode. As a result, xv6 handles timer
interrupts completely separately from the trap mechanism laid out above. Code executed in machine mode in start.c, before main, sets up to receive timer interrupts (kernel/start.c:57). Part of the job is to program the CLINT hardware (core-local interruptor) to generate an interrupt after a certain delay. Another part is to set up a scratch area, analogous to the trapframe, to help the timer interrupt handler save registers and the address of the CLINT registers. Finally, start sets mtvec to timervec and enables timer interrupts. A timer interrupt can occur at any point when user or kernel code is executing; there is no way for the kernel to disable timer interrupts during critical operations. Thus the timer interrupt handler must do its job in a way guaranteed not to disturb interrupted kernel code. The basic strategy is for the handler to ask the RISC-V to raise a software interrupt and immediately return. The RISC-V delivers software interrupts to the kernel with the ordinary trap mechanism, and allows the kernel to disable them. The code to handle the software interrupt generated by a timer interrupt can be seen in devintr (kernel/trap.c:204). The machine-mode timer interrupt vector is timervec (kernel/kernelvec.S:93). It saves a few registers in the scratch area prepared by start, tells the CLINT when to generate the next timer interrupt, asks the RISC-V to raise a software interrupt, restores registers, and returns. There is no C code in the timer interrupt handler.", "url": "RV32ISPEC.pdf#segment40", "timestamp": "2023-10-07 23:04:59", "segment": "segment40", "image_urls": [], "Book": "riscv-rev1" }, { "section": "5.5 Real world ", "content": "xv6 allows device and timer interrupts while executing in the kernel, as well as while executing user programs. Timer interrupts force a thread switch (a call to yield) from the timer interrupt handler, even when executing in the kernel. The ability to time-slice the CPU fairly among kernel threads is useful if kernel threads sometimes spend a lot of time computing without returning to user space. However, the need for kernel code to be mindful that it might be suspended (due to a timer interrupt) and later resume on a different CPU is a source of complexity in xv6. The kernel could be made somewhat simpler if device and timer interrupts only occurred while executing user code. Supporting all the devices on a typical computer in its full glory is much work, because there are many devices, the devices have many features, and the protocol between device and driver can be complex and poorly documented. In many operating systems, the drivers account for more code than the core kernel.
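The buffered, interrupt-driven output path described in Section 5.2 (uartputc appends to uart_tx_buf and kicks the transmitter; the interrupt handler drains the rest) can be sketched as a simple ring buffer. This is a minimal single-threaded sketch under assumed names (tx_buf, tx_put, tx_get); xv6's real driver additionally holds a lock and uses sleep/wakeup when the buffer fills.

```c
#include <assert.h>

#define TXSIZE 32

// Hypothetical ring buffer, analogous in role to uart_tx_buf.
static char tx_buf[TXSIZE];
static unsigned long tx_w;  // next slot to write (producer, "top half")
static unsigned long tx_r;  // next slot to read (consumer, "interrupt")

// Top half: append a character if there is room; returns 0 when the
// buffer is full (the real driver would sleep rather than drop bytes).
int tx_put(char c) {
  if (tx_w - tx_r == TXSIZE)
    return 0;                       // buffer full
  tx_buf[tx_w++ % TXSIZE] = c;
  return 1;
}

// Bottom half: called when the device can accept another byte;
// returns -1 when nothing is buffered.
int tx_get(void) {
  if (tx_r == tx_w)
    return -1;                      // nothing to send
  return tx_buf[tx_r++ % TXSIZE];
}
```

The point of the structure is the decoupling the text emphasizes: the producer never waits for the device unless the buffer is full, and the consumer drains at whatever rate the hardware allows.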
The UART driver retrieves data a byte at a time by reading the UART control registers; this pattern is called programmed I/O, since software is driving the data movement. Programmed I/O is simple, but too slow to be used at high data rates. Devices that need to move lots of data at high speed typically use direct memory access (DMA). DMA device hardware directly writes incoming data to RAM, and reads outgoing data from RAM. Modern disk and network devices use DMA. A driver for a DMA device would prepare data in RAM, and then use a single write to a control register to tell the device to process the prepared data. Interrupts make sense when a device needs attention at unpredictable times, and not too often. But interrupts have high CPU overhead. Thus high-speed devices, such as network and disk controllers, use tricks that reduce the need for interrupts. One trick is to raise a single interrupt for a whole batch of incoming or outgoing requests. Another trick is for the driver to disable interrupts entirely, and to check the device periodically to see if it needs attention. This technique is called polling. Polling makes sense if the device performs operations very quickly, but it wastes CPU time if the device is mostly idle. Some drivers dynamically switch between polling and interrupts depending on the current device load. The UART driver copies incoming data first to a buffer in the kernel, and then to user space. This makes sense at low data rates, but such a double copy can significantly reduce performance for devices that generate or consume data very quickly. Some operating systems are able to directly move data between user-space buffers and device hardware, often with DMA.", "url": "RV32ISPEC.pdf#segment41", "timestamp": "2023-10-07 23:04:59", "segment": "segment41", "image_urls": [], "Book": "riscv-rev1" }, { "section": "5.6 Exercises ", "content": "1. Modify uart.c to not use interrupts at all. You may need to modify console.c as well. 2. Add a driver for an Ethernet card.", "url": "RV32ISPEC.pdf#segment42", "timestamp": "2023-10-07 23:04:59", "segment": "segment42", "image_urls": [], "Book": "riscv-rev1" }, { "section": "Chapter 6 ", "content": "Locking. Most kernels, including xv6, interleave the execution of multiple activities. One source of interleaving is multiprocessor hardware: computers with multiple CPUs executing independently. These multiple CPUs share physical RAM, and xv6 exploits the sharing to maintain data structures that
CPUs read and write. This sharing raises the possibility of one CPU reading a data structure while another CPU is mid-way through updating it, or even of multiple CPUs updating the same data simultaneously; without careful design such parallel access is likely to yield incorrect results or a broken data structure. Even on a uniprocessor, the kernel may switch the CPU among a number of threads, causing their execution to be interleaved. Finally, a device interrupt handler that modifies the same data as some interruptible code could damage the data if the interrupt occurs at just the wrong time. The word concurrency refers to situations in which multiple instruction streams are interleaved, due to multiprocessor parallelism, thread switching, or interrupts. Kernels are full of concurrently-accessed data. For example, two CPUs could simultaneously call kalloc, thereby concurrently popping from the head of the free list. Kernel designers like to allow for lots of concurrency, since it can yield increased performance through parallelism, and increased responsiveness. However, as a result kernel designers must spend a lot of effort convincing themselves of correctness despite such concurrency. There are many ways to arrive at correct code, some easier to reason about than others. Strategies aimed at correctness under concurrency, and abstractions that support them, are called concurrency control techniques. xv6 uses a number of concurrency control techniques, depending on the situation; many more are possible. This chapter focuses on a widely used technique: the lock. A lock provides mutual exclusion, ensuring that only one CPU at a time can hold the lock. If the programmer associates a lock with each shared data item, and the code always holds the associated lock when using an item, then the item will be used by only one CPU at a time. In this situation, we say that the lock protects the data item. Although locks are an easy-to-understand concurrency control mechanism, the downside of locks is that they can kill performance, because they serialize concurrent operations. The rest of this chapter explains why xv6 needs locks, how xv6 implements them, and how it uses them.", "url": "RV32ISPEC.pdf#segment43", "timestamp": "2023-10-07 23:04:59", "segment": "segment43", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-rev1/fig%206.1.jpg?raw=true"], "Book": "riscv-rev1" }, { "section": "6.1 Race conditions ", "content": "As an example of why we need locks, consider two processes calling wait on two different CPUs. wait frees the child's memory. Thus on each CPU, the
kernel will call kfree to free the children's pages. The kernel allocator maintains a linked list: kalloc() (kernel/kalloc.c:69) pops a page of memory from a list of free pages, and kfree() (kernel/kalloc.c:47) pushes a page onto the free list. For best performance, we might hope that the kfrees of the two parent processes would execute in parallel without either having to wait for the other, but this would not be correct given xv6's kfree implementation. Figure 6.1 illustrates the setting in more detail: the linked list is in memory that is shared by the two CPUs, which manipulate the linked list using load and store instructions. (In reality, the processors have caches, but conceptually multiprocessor systems behave as if there were a single, shared memory.) If there were no concurrent requests, you might implement a list push operation as follows:

 1 struct element {
 2   int data;
 3   struct element *next;
 4 };
 5
 6 struct element *list = 0;
 7
 8 void
 9 push(int data)
10 {
11   struct element *l;
12
13   l = malloc(sizeof *l);
14   l->data = data;
15   l->next = list;
16   list = l;
17 }

This implementation is correct if executed in isolation. However, the code is not correct if more than one copy executes concurrently. If two CPUs execute push at the same time, both might execute line 15 (as shown in Fig 6.1) before either executes line 16, which results in an incorrect outcome, as illustrated by Figure 6.2. There would then be two list elements with next set to the former value of list. When the two assignments to list happen at line 16, the second one will overwrite the first; the element involved in the first assignment will be lost. The lost update at line 16 is an example of a race condition. A race condition is a situation in which a memory location is accessed concurrently, and at least one access is a write. A race is often a sign of a bug, either a lost update (if the accesses are writes) or a read of an incompletely-updated data structure. The outcome of a race depends on the exact timing of the two CPUs involved and how their memory operations are ordered by the memory system, which can make race-induced errors difficult to reproduce and debug. For example, adding print statements while debugging push might change the timing of the execution enough to make the race disappear. The usual way to avoid races is to use a lock. Locks ensure mutual exclusion, so that only one CPU at a time can execute the sensitive lines of push; this makes the scenario above impossible. The correctly locked version of the code adds a few lines (highlighted in yellow):

 6 struct element *list = 0;
 7 struct lock listlock;
 8
 9 void
10 push(int data)
11 {
12   struct element *l;
13   l = malloc(sizeof *l);
14   l->data = data;
15
16   acquire(&listlock);
17   l->next = list;
18   list = l;
19   release(&listlock);
20 }

The sequence of instructions between acquire and release is often called a critical section, and the lock is typically said to be protecting list. When we say that a lock protects data, we really mean that the lock protects some collection of invariants that apply to the data. Invariants are properties of data structures that are maintained across operations. Typically, an operation's correct behavior depends on the invariants being true when the operation begins. The operation may temporarily violate the invariants but must reestablish them before finishing. For example, in the linked list case, the invariant is that list points at the first element in the list and that each element's next field points at the next element. The implementation of push violates this invariant temporarily: at line 17, l points to the next list element, but list does not point at l yet (reestablished at line 18). The race condition we examined above happened because a second CPU executed code that depended on the list invariants while they were (temporarily) violated. Proper use of a lock ensures that only one CPU at a time can operate on the data structure in the critical section, so that no CPU will execute a data structure operation when the data structure's invariants do not hold. You can think of a lock as serializing concurrent critical sections so that they run one at a time, and thus preserve the invariants (assuming the critical sections are correct in isolation). You can also think of critical sections guarded by the same lock as being atomic with respect to each other, so that each sees only the complete set of changes from earlier critical sections, and never sees partially-completed updates. Although correct use of locks can make incorrect code correct, locks limit performance. For example, if two processes call kfree concurrently, the locks will serialize the two calls, and we obtain no benefit from running them on different CPUs. We say that multiple processes conflict if they want the same lock at the same time, or that the lock experiences contention. A major challenge in kernel design is avoidance of lock contention. xv6 does little of that, but sophisticated kernels organize data structures and algorithms specifically to avoid lock contention. In the list example, a kernel may maintain a separate free list per CPU and only touch another CPU's free list if the current CPU's list is empty and it must steal memory from another CPU. Other use cases may require more complicated designs. Finally, the placement of locks is also important for performance. For example, it would be correct to move acquire earlier in push: it is fine to move the call to acquire up to before line 13. This may reduce performance, however, because then the calls to malloc are also serialized. The section on using locks below provides guidelines for where to insert
acquire and release invocations.", "url": "RV32ISPEC.pdf#segment44", "timestamp": "2023-10-07 23:05:00", "segment": "segment44", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-rev1/fig%206.2.jpg?raw=true"], "Book": "riscv-rev1" }, { "section": "6.2 Code: Locks ", "content": "xv6 has two types of locks: spinlocks and sleep-locks. We will start with spinlocks. xv6 represents a spinlock as a struct spinlock (kernel/spinlock.h:2). The important field in the structure is locked, a word that is zero when the lock is available and non-zero when it is held. Logically, xv6 should acquire a lock by executing code like

21 void
22 acquire(struct spinlock *lk) // does not work!
23 {
24   for(;;) {
25     if(lk->locked == 0) {
26       lk->locked = 1;
27       break;
28     }
29   }
30 }

Unfortunately, this implementation does not guarantee mutual exclusion on a multiprocessor. It could happen that two CPUs simultaneously reach line 25, see that lk->locked is zero, and then both grab the lock by executing line 26. At that point, two different CPUs hold the lock, which violates the mutual exclusion property. What we need is a way to make lines 25 and 26 execute as an atomic (i.e., indivisible) step. Because locks are widely used, multi-core processors usually provide instructions that implement an atomic version of lines 25 and 26. On the RISC-V this instruction is amoswap r, a. amoswap reads the value at the memory address a, writes the contents of register r to that address, and puts the value it read into r; that is, it swaps the contents of the register and the memory address. It performs this sequence atomically, using special hardware to prevent any other CPU from using the memory address between the read and the write. xv6's acquire (kernel/spinlock.c:22) uses the portable C library call __sync_lock_test_and_set, which boils down to the amoswap instruction; the return value is the old (swapped) contents of lk->locked. The acquire function wraps the swap in a loop, retrying (spinning) until it has acquired the lock. Each iteration swaps one into lk->locked and checks the previous value: if the previous value is zero, then we have acquired the lock, and the swap will have set lk->locked to one; if the previous value is one, then some other CPU holds the lock, and the fact that we atomically swapped one into lk->locked did not change its value. Once the lock is acquired, acquire records, for debugging, the CPU that acquired it. The lk->cpu field is protected by the lock and must only be changed while holding the lock. The function release (kernel/spinlock.c:47) is the opposite of acquire: it clears the lk->cpu field and then releases the lock. Conceptually, the release just requires assigning zero to lk->locked. The C standard allows compilers to
implement an assignment with multiple store instructions, so a C assignment might be non-atomic with respect to concurrent code. Instead, release uses the C library function __sync_lock_release, which performs an atomic assignment. This function also boils down to a RISC-V amoswap instruction.", "url": "RV32ISPEC.pdf#segment45", "timestamp": "2023-10-07 23:05:00", "segment": "segment45", "image_urls": [], "Book": "riscv-rev1" }, { "section": "6.3 Code: Using locks ", "content": "xv6 uses locks in many places to avoid race conditions. As described above, kalloc (kernel/kalloc.c:69) and kfree (kernel/kalloc.c:47) form a good example. Try Exercises 1 and 2 to see what happens if those functions omit their locks. You will likely find it difficult to trigger incorrect behavior, suggesting that it is hard to reliably test whether code is free of locking errors and races; it is not unlikely that xv6 has some races. A hard part about using locks is deciding how many locks to use and which data and invariants each lock should protect. There are a few basic principles. First, any time a variable can be written by one CPU at the same time that another CPU can read or write it, a lock should be used to keep the two operations from overlapping. Second, remember that locks protect invariants: if an invariant involves multiple memory locations, typically all of them need to be protected by a single lock to ensure that the invariant is maintained. The rules above say when locks are necessary but say nothing about when locks are unnecessary, and it is important for efficiency not to lock too much, because locks reduce parallelism. If parallelism is not important, one could arrange to have only a single thread and not worry about locks. A simple kernel can do this on a multiprocessor by having a single lock that must be acquired on entering the kernel and released on exiting the kernel (though blocking system calls such as pipe reads or wait would pose a problem). Many uniprocessor operating systems have been converted to run on multiprocessors using this approach, sometimes called a big kernel lock, but the approach sacrifices parallelism: only one CPU can execute in the kernel at a time. If the kernel does any heavy computation, it would be more efficient to use a larger set of more fine-grained locks, so that the kernel could execute on multiple CPUs simultaneously. As an example of coarse-grained locking, xv6's kalloc.c allocator has a single free list protected by a single lock. If multiple processes on different CPUs try to allocate pages at the same time, each will have to wait for its turn by spinning in acquire. Spinning reduces performance, since it is not useful work. Contention for the lock could waste a significant fraction of CPU time; perhaps performance could be
improved by changing the allocator design so that it has multiple free lists, each with its own lock, to allow truly parallel allocation. As an example of fine-grained locking, xv6 has a separate lock for each file, so that processes that manipulate different files can often proceed without waiting for each other's locks. The file locking scheme could be made even more fine-grained if one wanted to allow processes to simultaneously write different areas of the same file. Ultimately lock granularity decisions need to be driven by performance measurements as well as complexity considerations. As subsequent chapters explain each part of xv6, they will mention examples of xv6's use of locks to deal with concurrency. As a preview, Figure 6.3 lists all of the locks in xv6.", "url": "RV32ISPEC.pdf#segment46", "timestamp": "2023-10-07 23:05:00", "segment": "segment46", "image_urls": [], "Book": "riscv-rev1" }, { "section": "6.4 Deadlock and lock ordering ", "content": "If a code path through the kernel must hold several locks at the same time, it is important that all code paths acquire those locks in the same order. If they do not, there is a risk of deadlock. Let's say two code paths in xv6 need locks A and B, but code path 1 acquires them in the order A then B, and the other path acquires them in the order B then A. Suppose thread T1 executes code path 1 and acquires lock A, and thread T2 executes code path 2 and acquires lock B. Next T1 will try to acquire lock B, and T2 will try to acquire lock A. Both acquires will block indefinitely, because in both cases the other thread holds the needed lock, and will not release it until its acquire returns. To avoid such deadlocks, all code paths must acquire locks in the same order. The need for a global lock acquisition order means that locks are effectively part of each function's specification: callers must invoke functions in a way that causes locks to be acquired in the agreed-on order. xv6 has many lock-order chains of length two involving per-process locks (the lock in struct proc) due to the way that sleep works (see Chapter 7). For example, consoleintr (kernel/console.c:138) is the interrupt routine which handles typed characters. When a newline arrives, any process that is waiting for console input should be woken up. To do this, consoleintr holds cons.lock while calling wakeup, which acquires the waiting process's lock in order to wake it up. In consequence, the global deadlock-avoiding lock order includes the rule that cons.lock must be acquired before any process lock. The file-system code contains xv6's longest lock chains. For example, creating a file requires simultaneously holding a lock on the directory, a lock on the new file's inode, a lock on a disk block buffer, the disk driver's vdisk_lock, and the
calling process's p->lock. To avoid deadlock, file-system code always acquires locks in the order mentioned in the previous sentence. Honoring a global deadlock-avoiding order can be surprisingly difficult. Sometimes the lock order conflicts with logical program structure; e.g., perhaps code module M1 calls module M2, but the lock order requires that a lock in M2 be acquired before a lock in M1. Sometimes the identities of the locks are not known in advance, perhaps because one lock must be held in order to discover the identity of the lock to be acquired next. This kind of situation arises in the file system as it looks up successive components in a path name, and in the code for wait and exit as they search the table of processes looking for child processes. Finally, the danger of deadlock is often a constraint on how fine-grained one can make a locking scheme, since more locks often means more opportunity for deadlock. The need to avoid deadlock is often a major factor in kernel implementation.", "url": "RV32ISPEC.pdf#segment47", "timestamp": "2023-10-07 23:05:01", "segment": "segment47", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-rev1/fig%206.3.jpg?raw=true"], "Book": "riscv-rev1" }, { "section": "6.5 Locks and interrupt handlers ", "content": "Some xv6 spinlocks protect data that is used by both threads and interrupt handlers. For example, the clockintr timer interrupt handler might increment ticks (kernel/trap.c:163) at about the same time that a kernel thread reads ticks in sys_sleep (kernel/sysproc.c:64). The lock tickslock serializes the two accesses. The interaction of spinlocks and interrupts raises a potential danger. Suppose sys_sleep holds tickslock, and its CPU is interrupted by a timer interrupt. clockintr would try to acquire tickslock, see that it was held, and wait for it to be released. In this situation, tickslock will never be released: only sys_sleep can release it, but sys_sleep will not continue running until clockintr returns. So the CPU will deadlock, and any code that needs either lock will also freeze. To avoid this situation, if a spinlock is used by an interrupt handler, a CPU must never hold that lock with interrupts enabled. xv6 is more conservative: when a CPU acquires any lock, xv6 always disables interrupts on that CPU. Interrupts may still occur on other CPUs, so an interrupt's acquire can wait for a thread to release a spinlock; just not a thread on the same CPU. xv6 re-enables interrupts when a CPU holds no spinlocks; it must do a little bookkeeping to cope with nested critical sections. acquire calls push_off (kernel/spinlock.c:89) and release
calls pop_off (kernel/spinlock.c:100) to track the nesting level of locks on the current CPU. When that count reaches zero, pop_off restores the interrupt enable state that existed at the start of the outermost critical section. The intr_off and intr_on functions execute RISC-V instructions to disable and enable interrupts, respectively. It is important that acquire call push_off strictly before setting lk->locked (kernel/spinlock.c:28). If the two were reversed, there would be a brief window when the lock was held with interrupts enabled, and an unfortunately timed interrupt would deadlock the system. Similarly, it is important that release call pop_off only after releasing the lock (kernel/spinlock.c:66).", "url": "RV32ISPEC.pdf#segment48", "timestamp": "2023-10-07 23:05:01", "segment": "segment48", "image_urls": [], "Book": "riscv-rev1" }, { "section": "6.6 Instruction and memory ordering ", "content": "It is natural to think of programs executing in the order in which the source code statements appear. Many compilers and CPUs, however, execute code out of order to achieve higher performance. If an instruction takes many cycles to complete, a CPU may issue the instruction early so that it can overlap with other instructions and avoid CPU stalls. For example, a CPU may notice that in a serial sequence of instructions A and B are not dependent on each other. The CPU may start instruction B first, either because its inputs are ready before A's inputs, or in order to overlap the execution of A and B. A compiler may perform a similar re-ordering by emitting instructions for one statement before the instructions for a statement that precedes it in the source. Compilers and CPUs follow rules when they re-order to ensure that they do not change the results of correctly-written serial code. However, the rules do allow re-orderings that change the results of concurrent code, and can easily lead to incorrect behavior on multiprocessors [2, 3]. The CPU's ordering rules are called the memory model. For example, in this code for push, it would be a disaster if the compiler or CPU moved the store corresponding to line 4 to a point after the release on line 6:

1 l = malloc(sizeof *l);
2 l->data = data;
3 acquire(&listlock);
4 l->next = list;
5 list = l;
6 release(&listlock);

If such a re-ordering occurred, there would be a window during which another CPU could acquire the lock and observe the updated list, but see an uninitialized list->next. To tell the hardware and compiler not to perform such re-orderings, xv6 uses __sync_synchronize() in both acquire (kernel/spinlock.c:22) and release (kernel/spinlock.c:47). __sync_synchronize() is a memory barrier: it tells the compiler and CPU not to reorder loads or stores across the barrier.
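The mechanism described in Sections 6.2 and 6.6, an atomic swap in a spin loop with memory barriers on both sides of the critical section, can be sketched with the same GCC builtins the text names (__sync_lock_test_and_set, __sync_lock_release, __sync_synchronize). This is a minimal user-space sketch, not xv6's actual code: the real acquire and release also disable interrupts via push_off/pop_off and record the owning CPU, and the worker/run_two_threads names here are invented for the demonstration.

```c
#include <assert.h>
#include <pthread.h>

struct spinlock { volatile unsigned int locked; };

void spin_acquire(struct spinlock *lk) {
  // Atomically swap 1 into lk->locked; the old value tells us whether
  // the lock was free. Spin until the old value is zero.
  while (__sync_lock_test_and_set(&lk->locked, 1) != 0)
    ;
  // Barrier: critical-section loads/stores must not move above this.
  __sync_synchronize();
}

void spin_release(struct spinlock *lk) {
  // Barrier: critical-section loads/stores must not move below this.
  __sync_synchronize();
  // Atomic assignment of zero, rather than a plain C store.
  __sync_lock_release(&lk->locked);
}

static struct spinlock lock;
static long counter;

static void *worker(void *arg) {
  (void)arg;
  for (int i = 0; i < 100000; i++) {
    spin_acquire(&lock);
    counter++;                      // protected by the lock
    spin_release(&lock);
  }
  return 0;
}

// Run two contending threads; with the lock, no increments are lost.
long run_two_threads(void) {
  pthread_t t1, t2;
  counter = 0;
  pthread_create(&t1, 0, worker, 0);
  pthread_create(&t2, 0, worker, 0);
  pthread_join(t1, 0);
  pthread_join(t2, 0);
  return counter;
}
```

Removing the spin_acquire/spin_release pair turns counter++ into exactly the kind of race Section 6.1 describes: the final count would then usually come up short.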
The barriers in xv6's acquire and release force order in almost all cases where it matters, since xv6 uses locks around accesses to shared data. Chapter 9 discusses a few exceptions.", "url": "RV32ISPEC.pdf#segment49", "timestamp": "2023-10-07 23:05:01", "segment": "segment49", "image_urls": [], "Book": "riscv-rev1" }, { "section": "6.7 Sleep locks ", "content": "Sometimes xv6 needs to hold a lock for a long time. For example, the file system (Chapter 8) keeps a file locked while reading and writing its content on the disk, and these disk operations can take tens of milliseconds. Holding a spinlock that long would lead to waste if another process wanted to acquire it, since the acquiring process would waste CPU for a long time while spinning. Another drawback of spinlocks is that a process cannot yield the CPU while retaining a spinlock; we would like to do this so that other processes can use the CPU while the process with the lock waits for the disk. Yielding while holding a spinlock is illegal because it might lead to deadlock if a second thread then tried to acquire the spinlock: since acquire does not yield the CPU, the second thread's spinning might prevent the first thread from running and releasing the lock. Yielding while holding a lock would also violate the requirement that interrupts must be off while a spinlock is held. Thus we would like a type of lock that yields the CPU while waiting to acquire, and that allows yields (and interrupts) while the lock is held. xv6 provides such locks in the form of sleep-locks. acquiresleep (kernel/sleeplock.c:22) yields the CPU while waiting, using techniques that will be explained in Chapter 7. At a high level, a sleep-lock has a locked field that is protected by a spinlock, and acquiresleep's call to sleep atomically yields the CPU and releases the spinlock. The result is that other threads can execute while acquiresleep waits. Because sleep-locks leave interrupts enabled, they cannot be used in interrupt handlers. Because acquiresleep may yield the CPU, sleep-locks cannot be used inside spinlock critical sections (though spinlocks can be used inside sleep-lock critical sections). Spinlocks are best suited to short critical sections, since waiting for them wastes CPU time; sleep-locks work well for lengthy operations.", "url": "RV32ISPEC.pdf#segment50", "timestamp": "2023-10-07 23:05:01", "segment": "segment50", "image_urls": [], "Book": "riscv-rev1" }, { "section": "6.8 Real world ", "content": "Programming with locks remains challenging despite years of research into concurrency primitives and parallelism. It is often best to conceal locks within higher-level constructs
like synchronized queues, although xv6 does not do this. If you program with locks, it is wise to use a tool that attempts to identify race conditions, because it is easy to miss an invariant that requires a lock. Most operating systems support POSIX threads (pthreads), which allow a user process to have several threads running concurrently on different CPUs. Pthreads has support for user-level locks, barriers, etc. Supporting pthreads requires support from the operating system. For example, if one pthread blocks in a system call, another pthread of the same process should be able to run on that CPU. As another example, if a pthread changes its process's address space (e.g., maps or unmaps memory), the kernel must arrange that other CPUs that run threads of the same process update their hardware page tables to reflect the change in the address space. It is possible to implement locks without atomic instructions [8], but it is expensive, and most operating systems use atomic instructions. Locks can be expensive if many CPUs try to acquire the same lock at the same time. If one CPU has a lock cached in its local cache, and another CPU must acquire the lock, then the atomic instruction to update the cache line that holds the lock must move the line from the one CPU's cache to the other CPU's cache, and perhaps invalidate any other copies of the cache line. Fetching a cache line from another CPU's cache can be orders of magnitude more expensive than fetching a line from the local cache. To avoid the expenses associated with locks, many operating systems use lock-free data structures and algorithms [5, 10]. For example, it is possible to implement a linked list like the one at the beginning of the chapter that requires no locks during list searches, and one atomic instruction to insert an item into the list. Lock-free programming is more complicated than programming with locks, however; for example, one must worry about instruction and memory reordering. Programming with locks is already hard, so xv6 avoids the additional complexity of lock-free programming.", "url": "RV32ISPEC.pdf#segment51", "timestamp": "2023-10-07 23:05:01", "segment": "segment51", "image_urls": [], "Book": "riscv-rev1" }, { "section": "6.9 Exercises ", "content": "1. Comment out the calls to acquire and release in kalloc (kernel/kalloc.c:69). This seems like it should cause problems for kernel code that calls kalloc; what symptoms do you expect to see? When you run xv6, do you see these symptoms? How about when running usertests? If you do not see a problem, why not? See if you can provoke a problem by inserting dummy loops into the critical section of kalloc. 2. Suppose that you instead commented out the locking in kfree (after restoring locking in kalloc). What might now go wrong? Is the lack of locks in kfree
less harmful than in kalloc? 3. If two CPUs call kalloc at the same time, one will have to wait for the other, which is bad for performance. Modify kalloc.c to have more parallelism, so that simultaneous calls to kalloc from different CPUs can proceed without waiting for each other. 4. Write a parallel program using POSIX threads, which is supported on most operating systems. For example, implement a parallel hash table and measure whether the number of puts/gets scales with increasing numbers of cores. 5. Implement a subset of pthreads in xv6. That is, implement a user-level thread library so that a user process can have more than one thread, and arrange that these threads can run in parallel on different CPUs. Come up with a design that correctly handles a thread making a blocking system call and changing its shared address space.", "url": "RV32ISPEC.pdf#segment52", "timestamp": "2023-10-07 23:05:01", "segment": "segment52", "image_urls": [], "Book": "riscv-rev1" }, { "section": "Chapter 7 ", "content": "Scheduling. Any operating system is likely to run with more processes than the computer has CPUs, so a plan is needed to time-share the CPUs among the processes. Ideally the sharing would be transparent to user processes. A common approach is to provide each process with the illusion that it has its own virtual CPU by multiplexing the processes onto the hardware CPUs. This chapter explains how xv6 achieves this multiplexing.", "url": "RV32ISPEC.pdf#segment53", "timestamp": "2023-10-07 23:05:01", "segment": "segment53", "image_urls": [], "Book": "riscv-rev1" }, { "section": "7.1 Multiplexing ", "content": "xv6 multiplexes by switching each CPU from one process to another in two situations. First, xv6's sleep and wakeup mechanism switches when a process waits for device or pipe I/O to complete, or waits for a child to exit, or waits in the sleep system call. Second, xv6 periodically forces a switch to cope with processes that compute for long periods without sleeping. This multiplexing creates the illusion that each process has its own CPU, just as xv6 uses the memory allocator and hardware page tables to create the illusion that each process has its own memory. Implementing multiplexing poses a few challenges. First, how to switch from one process to another? Although the idea of context switching is simple, the implementation is some of the most opaque code in xv6. Second, how to force switches in a way that is transparent to user processes? xv6 uses the standard technique of driving context switches with timer interrupts. Third, many CPUs may be switching among processes concurrently, and a locking plan is necessary to avoid races. Fourth, a
process's memory and other resources must be freed when the process exits, but it cannot do all of this itself because (for example) it cannot free its own kernel stack while still using it. Fifth, each core of a multi-core machine must remember which process it is executing so that system calls affect the correct process's kernel state. Finally, sleep and wakeup allow a process to give up the CPU and sleep waiting for an event, and allow another process to wake the first process up; care is needed to avoid races that result in the loss of wakeup notifications. xv6 tries to solve these problems as simply as possible, but nevertheless the resulting code is tricky.", "url": "RV32ISPEC.pdf#segment54", "timestamp": "2023-10-07 23:05:01", "segment": "segment54", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-rev1/fig%207.1.jpg?raw=true"], "Book": "riscv-rev1" }, { "section": "7.2 Code: Context switching ", "content": "Figure 7.1 outlines the steps involved in switching from one user process to another: a user-kernel transition (system call or interrupt) to the old process's kernel thread, a context switch to the current CPU's scheduler thread, a context switch to a new process's kernel thread, and a trap return to the user-level process. The xv6 scheduler has a dedicated thread (saved registers and stack) per CPU because it is not safe for the scheduler to execute on the old process's kernel stack: some other core might wake the process up and run it, and it would be a disaster to use the same stack on two different cores. In this section we examine the mechanics of switching between a kernel thread and a scheduler thread. Switching from one thread to another involves saving the old thread's CPU registers and restoring the previously-saved registers of the new thread; the fact that the stack pointer and program counter are saved and restored means that the CPU will switch stacks and switch what code it is executing. The function swtch performs the saves and restores for a kernel thread switch. swtch does not directly know about threads; it just saves and restores register sets, called contexts. When it is time for a process to give up the CPU, the process's kernel thread calls swtch to save its own context and return to the scheduler context. Each context is contained in a struct context (kernel/proc.h:2), itself contained in a process's struct proc or a CPU's struct cpu. swtch takes two arguments: struct context *old and struct context *new. It saves the current registers in old, loads registers from new, and returns. Let us follow a process through swtch into the scheduler. We saw in Chapter 4 that one possibility at the end of an interrupt is that usertrap calls yield.
yield in turn calls sched, which calls swtch to save the current context in p->context and switch to the scheduler context previously saved in cpu->scheduler (kernel/proc.c:509). swtch (kernel/swtch.S:3) saves only callee-saved registers; caller-saved registers are saved on the stack (if needed) by the calling C code. swtch knows the offset of each register's field in struct context. It does not save the program counter. Instead, swtch saves the ra register, which holds the return address from which swtch was called. Now swtch restores registers from the new context, which holds register values saved by a previous swtch. When swtch returns, it returns to the instructions pointed to by the restored ra register, that is, the instruction from which the new thread previously called swtch. In addition, it returns on the new thread's stack. In our example, sched called swtch to switch to cpu->scheduler, the per-CPU scheduler context. That context had been saved by the scheduler's call to swtch (kernel/proc.c:475). When the swtch we have been tracing returns, it returns not to sched but to scheduler, and its stack pointer points at the current CPU's scheduler stack.", "url": "RV32ISPEC.pdf#segment55", "timestamp": "2023-10-07 23:05:02", "segment": "segment55", "image_urls": [], "Book": "riscv-rev1" }, { "section": "7.3 Code: Scheduling ", "content": "The last section looked at the low-level details of swtch; now let us take swtch as given and examine switching from one process's kernel thread through the scheduler to another process. The scheduler exists in the form of a special thread per CPU, each running the scheduler function. This function is in charge of choosing which process to run next. A process that wants to give up the CPU must acquire its own process lock p->lock, release any other locks it is holding, update its own state (p->state), and then call sched. yield (kernel/proc.c:515) follows this convention, as do sleep and exit, which we will examine later. sched double-checks those conditions (kernel/proc.c:499-504) and then an implication of those conditions: since a lock is held, interrupts should be disabled. Finally, sched calls swtch to save the current context in p->context and switch to the scheduler context in cpu->scheduler. swtch returns on the scheduler's stack as though scheduler's own swtch had returned. The scheduler continues its loop, finds a process to run, switches to it, and the cycle repeats. We just saw that xv6 holds p->lock across calls to swtch: the caller of swtch must already hold the lock, and control of the lock passes to the switched-to code. This convention is unusual with locks; usually the thread that acquires a lock is also responsible for releasing it, which makes it easier to reason about correctness. For context
switching it is necessary to break the convention, because p->lock protects invariants on the process's state and context fields that are not true while executing in swtch. One example of a problem that could arise if p->lock were not held during swtch: a different CPU might decide to run the process after yield had set its state to RUNNABLE, but before swtch caused it to stop using its own kernel stack. The result would be two CPUs running on the same stack, which cannot be right. A kernel thread always gives up its CPU in sched and always switches to the same location in the scheduler, which (almost) always switches to some kernel thread that previously called sched. Thus, if one were to print out the line numbers where xv6 switches threads, one would observe the following simple pattern: (kernel/proc.c:475), (kernel/proc.c:509), (kernel/proc.c:475), (kernel/proc.c:509), and so on. Procedures in which this kind of stylized switching between two threads happens are sometimes referred to as coroutines; in this example, sched and scheduler are co-routines of each other. There is one case when the scheduler's call to swtch does not end up in sched. When a new process is first scheduled, it begins at forkret (kernel/proc.c:527). forkret exists to release the p->lock; otherwise, the new process could start at usertrapret. scheduler (kernel/proc.c:457) runs a simple loop: find a process to run, run it until it yields, repeat. The scheduler loops over the process table looking for a runnable process, one that has p->state == RUNNABLE. Once it finds a process, it sets the per-CPU current process variable c->proc, marks the process as RUNNING, and then calls swtch to start running it (kernel/proc.c:470-475). One way to think about the structure of the scheduling code is that it enforces a set of invariants about each process, and holds p->lock whenever those invariants are not true. One invariant is that if a process is RUNNING, a timer interrupt's yield must be able to safely switch away from the process; this means that the CPU registers must hold the process's register values (i.e., swtch has not moved them to a context), and that c->proc must refer to the process. Another invariant is that if a process is RUNNABLE, it must be safe for an idle CPU's scheduler to run it; this means that p->context must hold the process's registers (i.e., they are not actually in the real registers), that no CPU is executing on the process's kernel stack, and that no CPU's c->proc refers to the process. Observe that these properties are often not true while p->lock is held. Maintaining the above invariants is the reason why xv6 often acquires p->lock in one thread and releases it in another, for example acquiring in yield and releasing in scheduler. Once yield has started to modify a running process's state to make it RUNNABLE, the lock must remain held until the invariants are restored: the earliest correct release point is after scheduler (running on its own stack) clears c->proc. Similarly, once the
scheduler starts to convert a RUNNABLE process to RUNNING, the lock cannot be released until the kernel thread is completely running (after the swtch, for example in yield). p->lock protects other things as well: the interplay between exit and wait, the machinery to avoid lost wakeups (see Section 7.5), and avoidance of races between a process exiting and other processes reading or writing its state (e.g., the exit system call looking at p->pid and setting p->killed (kernel/proc.c:611)). It might be worth thinking about whether the different functions of p->lock could be split up, for clarity and perhaps for performance.", "url": "RV32ISPEC.pdf#segment56", "timestamp": "2023-10-07 23:05:02", "segment": "segment56", "image_urls": [], "Book": "riscv-rev1" }, { "section": "7.4 Code: mycpu and myproc ", "content": "xv6 often needs a pointer to the current process's proc structure. On a uniprocessor one could have a global variable pointing to the current proc. This does not work on a multi-core machine, since each core executes a different process. The way to solve this problem is to exploit the fact that each core has its own set of registers; we can use one of those registers to help find per-core information. xv6 maintains a struct cpu for each CPU (kernel/proc.h:22), which records the process currently running on that CPU (if any), saved registers for the CPU's scheduler thread, and the count of nested spinlocks needed to manage interrupt disabling. The function mycpu (kernel/proc.c:60) returns a pointer to the current CPU's struct cpu. RISC-V numbers its CPUs, giving each a hartid. xv6 ensures that each CPU's hartid is stored in that CPU's tp register while in the kernel. This allows mycpu to use tp to index an array of cpu structures to find the right one. Ensuring that a CPU's tp always holds the CPU's hartid is a little involved. mstart sets the tp register early in the CPU's boot sequence, while still in machine mode (kernel/start.c:46). usertrapret saves tp in the trampoline page, because the user process might modify tp. Finally, uservec restores that saved tp when entering the kernel from user space (kernel/trampoline.S:70). The compiler guarantees never to use the tp register. It would be more convenient if RISC-V allowed xv6 to read the current hartid directly, but that is allowed only in machine mode, not in supervisor mode. The return values of cpuid and mycpu are fragile: if the timer were to interrupt and cause the thread to yield and then move to a different CPU, a previously returned value would no longer be correct. To avoid this problem, xv6 requires that callers disable interrupts, and only enable them after they finish using the returned struct cpu. The function myproc (kernel/proc.c:68) returns the struct
proc pointer for the process that is running on the current CPU. myproc disables interrupts, invokes mycpu, fetches the current process pointer (c->proc) out of the struct cpu, and then enables interrupts. The return value of myproc is safe to use even if interrupts are enabled: if a timer interrupt moves the calling process to a different CPU, its struct proc pointer will stay the same.", "url": "RV32ISPEC.pdf#segment57", "timestamp": "2023-10-07 23:05:02", "segment": "segment57", "image_urls": [], "Book": "riscv-rev1" }, { "section": "7.5 Sleep and wakeup ", "content": "Scheduling and locks help conceal the existence of one process from another, but so far we have no abstractions that help processes intentionally interact. Many mechanisms have been invented to solve this problem. xv6 uses one called sleep and wakeup, which allow one process to sleep waiting for an event and another process to wake it up once the event has happened. Sleep and wakeup are often called sequence coordination or conditional synchronization mechanisms. To illustrate, let us consider a synchronization mechanism called a semaphore [4] that coordinates producers and consumers. A semaphore maintains a count and provides two operations. The V operation (for the producer) increments the count. The P operation (for the consumer) waits until the count is nonzero, then decrements it and returns. If there were only one producer thread and one consumer thread, and they executed on different CPUs, and the compiler did not optimize too aggressively, this implementation would be correct:
100 struct semaphore {
101   struct spinlock lock;
102   int count;
103 };
104
105 void
106 V(struct semaphore *s)
107 {
108   acquire(&s->lock);
109   s->count += 1;
110   release(&s->lock);
111 }
112
113 void
114 P(struct semaphore *s)
115 {
116   while(s->count == 0)
117     ;
118   acquire(&s->lock);
119   s->count -= 1;
120   release(&s->lock);
121 }
This implementation is expensive. If the producer acts rarely, the consumer will spend most of its time spinning in the while loop hoping for a nonzero count. The consumer's CPU could find more productive work than busy waiting by repeatedly polling s->count. Avoiding busy waiting requires a way for the consumer to yield the CPU and resume only after V increments the count. Here is a step in that direction, though as we will see it is not enough. Let us imagine a pair of calls, sleep and wakeup, that work as follows. sleep(chan) sleeps on the arbitrary value chan, called the wait channel. sleep puts the calling process to sleep, releasing the CPU for other work. wakeup(chan) wakes all processes sleeping on chan (if any), causing their sleep calls to return. If no processes are
waiting on chan, wakeup does nothing. We can change the semaphore implementation to use sleep and wakeup (changes highlighted in yellow):
200 void
201 V(struct semaphore *s)
202 {
203   acquire(&s->lock);
204   s->count += 1;
205   wakeup(s);
206   release(&s->lock);
207 }
208
209 void
210 P(struct semaphore *s)
211 {
212   while(s->count == 0)
213     sleep(s);
214   acquire(&s->lock);
215   s->count -= 1;
216   release(&s->lock);
217 }
P now gives up the CPU instead of spinning, which is nice. However, it turns out not to be straightforward to design sleep and wakeup with this interface without suffering from what is known as the lost wake-up problem. Suppose that P finds that s->count == 0 on line 212. While P is between lines 212 and 213, V runs on another CPU: it changes s->count to be nonzero and calls wakeup, which finds no processes sleeping and thus does nothing. Now P continues executing at line 213: it calls sleep and goes to sleep. This causes a problem: P is asleep waiting for a V call that has already happened. Unless we get lucky and the producer calls V again, the consumer will wait forever even though the count is nonzero. The root of this problem is that the invariant that P only sleeps when s->count == 0 is violated by V running at just the wrong moment. An incorrect way of protecting the invariant would be to move the lock acquisition (highlighted in yellow below) in P so that its check of the count and its call to sleep are atomic:
300 void
301 V(struct semaphore *s)
302 {
303   acquire(&s->lock);
304   s->count += 1;
305   wakeup(s);
306   release(&s->lock);
307 }
308
309 void
310 P(struct semaphore *s)
311 {
312   acquire(&s->lock);
313   while(s->count == 0)
314     sleep(s);
315   s->count -= 1;
316   release(&s->lock);
317 }
One might hope that this version of P would avoid the lost wakeup because the lock prevents V from executing between lines 313 and 314. It does that, but it also deadlocks: P holds the lock while it sleeps, so V will block forever waiting for the lock. We will fix the preceding scheme by changing sleep's interface: the caller must pass the condition lock to sleep so it can release the lock after the calling process is marked as asleep and waiting on the sleep channel. The lock will force a concurrent V to wait until P has finished putting itself to sleep, so that the wakeup will find the sleeping consumer and wake it up. Once the consumer is awake again, sleep reacquires the lock before returning. Our new correct sleep/wakeup scheme is usable as follows (change highlighted in yellow):
400 void
401 V(struct semaphore *s)
402 {
403   acquire(&s->lock);
404   s->count += 1;
405   wakeup(s);
406   release(&s->lock);
407 }
408
409 void
410 P(struct semaphore *s)
411 {
412   acquire(&s->lock);
413   while(s->count == 0)
414     sleep(s, &s->lock);
415   s->count -= 1;
416   release(&s->lock);
417 }
The fact that P holds s->lock prevents V from trying to wake it up between P's check of s->
count and its call to sleep. Note, however, that we need sleep to atomically release s->lock and put the consuming process to sleep.", "url": "RV32ISPEC.pdf#segment58", "timestamp": "2023-10-07 23:05:03", "segment": "segment58", "image_urls": [], "Book": "riscv-rev1" }, { "section": "7.6 Code: Sleep and wakeup ", "content": "Let us look at the implementation of sleep (kernel/proc.c:548) and wakeup (kernel/proc.c:582). The basic idea is to have sleep mark the current process as SLEEPING and then call sched to release the CPU; wakeup looks for a process sleeping on the given wait channel and marks it as RUNNABLE. Callers of sleep and wakeup can use any mutually convenient number as the channel. xv6 often uses the address of a kernel data structure involved in the waiting. sleep acquires p->lock (kernel/proc.c:559). Now the process going to sleep holds both p->lock and lk. Holding lk was necessary in the caller (in the example, P): it ensured that no other process (in the example, one running V) could start a call to wakeup(chan). Now that sleep holds p->lock, it is safe to release lk: some other process may start a call to wakeup(chan), but wakeup will wait to acquire p->lock, and thus will wait until sleep has finished putting the process to sleep, keeping the wakeup from missing the sleep. There is a minor complication: if lk is the same lock as p->lock, then sleep would deadlock with itself if it tried to acquire p->lock. But if the process calling sleep already holds p->lock, it does not need to do anything more in order to avoid missing a concurrent wakeup. Such a case arises when wait (kernel/proc.c:582) calls sleep with p->lock. Now that sleep holds p->lock and no other lock, it can put the process to sleep by recording the sleep channel, changing the process state to SLEEPING, and calling sched (kernel/proc.c:564-567). In a moment it will be clear why it is critical that p->lock is not released (by the scheduler) until after the process is marked SLEEPING. At some point, a process will acquire the condition lock, set the condition that the sleeper is waiting for, and call wakeup(chan). It is important that wakeup is called while holding the condition lock (strictly speaking, it is sufficient if wakeup merely follows the acquire; that is, one could call wakeup after the release). wakeup loops over the process table (kernel/proc.c:582). It acquires the p->lock of each process it inspects, both because it may manipulate that process's state and because p->lock ensures that sleep and wakeup do not miss each other. When wakeup finds a process in state SLEEPING with a matching chan, it changes that process's state to RUNNABLE. The next time the scheduler runs, it will see that the process is ready to be run. Why do the locking rules for sleep and wakeup ensure that a sleeping process will not miss a wakeup? The sleeping process holds either the condition lock or its own p->lock, or both, from a point before it
checks the condition to a point after it is marked SLEEPING. The process calling wakeup holds both of those locks in wakeup's loop. Thus the waker either makes the condition true before the consuming thread checks the condition, or the waker's wakeup examines the sleeping thread strictly after it has been marked SLEEPING. Then wakeup will see the sleeping process and wake it up (unless something else wakes it up first). It is sometimes the case that multiple processes are sleeping on the same channel; for example, more than one process reading from a pipe. A single call to wakeup will wake them all up. One of them will run first and acquire the lock that sleep was called with, and (in the case of pipes) read whatever data is waiting in the pipe. The other processes will find that, despite being woken up, there is no data to be read. From their point of view the wakeup was spurious, and they must sleep again. For this reason sleep is always called inside a loop that checks the condition. No harm is done if two uses of sleep/wakeup accidentally choose the same channel: they will see spurious wakeups, but looping as described above will tolerate this problem. Much of the charm of sleep/wakeup is that it is both lightweight (no need to create special data structures to act as sleep channels) and it provides a layer of indirection (callers need not know which specific process they are interacting with).", "url": "RV32ISPEC.pdf#segment59", "timestamp": "2023-10-07 23:05:03", "segment": "segment59", "image_urls": [], "Book": "riscv-rev1" }, { "section": "7.7 Code: Pipes ", "content": "A more complex example that uses sleep and wakeup to synchronize producers and consumers is xv6's implementation of pipes. We saw the interface for pipes in Chapter 1: bytes written to one end of a pipe are copied to an in-kernel buffer and then can be read from the other end of the pipe. Future chapters will examine the file descriptor support surrounding pipes, but let us look now at the implementations of pipewrite and piperead. Each pipe is represented by a struct pipe, which contains a lock and a data buffer. The fields nread and nwrite count the total number of bytes read from and written to the buffer. The buffer wraps around: the next byte written after buf[PIPESIZE-1] is buf[0]. The counts do not wrap. This convention lets the implementation distinguish a full buffer (nwrite == nread+PIPESIZE) from an empty buffer (nwrite == nread), but it means that indexing into the buffer must use buf[nread % PIPESIZE] instead of just buf[nread] (and similarly for nwrite). Let us suppose that calls to piperead and pipewrite happen simultaneously on two different CPUs. pipewrite (kernel/pipe.c:77) begins by acquiring the pipe's lock, which protects the counts, the data, and their associated invariants. piperead (kernel/pipe.c:103) then tries to acquire the lock too, but cannot; it spins in acquire (kernel/spinlock.c:22) waiting for the lock.
While piperead waits, pipewrite loops over the bytes being written (addr[0..n-1]), adding each to the pipe in turn (kernel/pipe.c:95). During this loop, it could happen that the buffer fills (kernel/pipe.c:85). In this case, pipewrite calls wakeup to alert any sleeping readers to the fact that there is data waiting in the buffer, and then sleeps on &pi->nwrite to wait for a reader to take some bytes out of the buffer. sleep releases pi->lock as part of putting pipewrite's process to sleep. Now that pi->lock is available, piperead manages to acquire it and enters its critical section: it finds that pi->nread != pi->nwrite (kernel/pipe.c:110) (pipewrite went to sleep because pi->nwrite == pi->nread+PIPESIZE (kernel/pipe.c:85)), so it falls through to the for loop, copies data out of the pipe (kernel/pipe.c:117), and increments nread by the number of bytes copied. That many bytes are now available for writing, so piperead calls wakeup (kernel/pipe.c:124) to wake any sleeping writers before it returns. wakeup finds a process sleeping on &pi->nwrite, the process that was running pipewrite but stopped when the buffer filled, and marks that process as RUNNABLE. The pipe code uses separate sleep channels for reader and writer (pi->nread and pi->nwrite); this might make the system more efficient in the unlikely event that there are lots of readers and writers waiting for the same pipe. The pipe code sleeps inside a loop checking the sleep condition; if there are multiple readers or writers, all but the first process to wake up will see the condition is still false and sleep again.", "url": "RV32ISPEC.pdf#segment60", "timestamp": "2023-10-07 23:05:03", "segment": "segment60", "image_urls": [], "Book": "riscv-rev1" }, { "section": "7.8 Code: Wait, exit, and kill ", "content": "sleep and wakeup can be used for many kinds of waiting. An interesting example, introduced in Chapter 1, is the interaction between a child's exit and its parent's wait. At the time of the child's death, the parent may already be sleeping in wait, or may be doing something else; in the latter case, a subsequent call to wait must observe the child's death, perhaps long after the child calls exit. The way that xv6 records the child's demise until wait observes it is for exit to put the caller into the ZOMBIE state, where it stays until the parent's wait notices it, changes the child's state to UNUSED, copies the child's exit status, and returns the child's process ID to the parent. If the parent exits before the child, the parent gives the child to the init process, which perpetually calls wait; thus every child has a parent to clean up after it. The main implementation challenge is the possibility of races and deadlock between parent and child's wait and exit, as well as exit and exit. wait uses the calling process's p->lock as the condition lock to avoid lost wakeups, and it acquires that lock at the start (kernel/proc.c:398). Then it scans the
process table. If it finds a child in ZOMBIE state, it frees that child's resources and its proc structure, copies the child's exit status to the address supplied to wait (if it is not 0), and returns the child's process ID. If wait finds children but none that have exited, it calls sleep to wait for one of them to exit (kernel/proc.c:445), then scans again. Here, the condition lock being released in sleep is the waiting process's p->lock, the special case mentioned above. Note that wait often holds two locks; that it acquires its own lock before trying to acquire any child's lock; and that thus all of xv6 must obey the same locking order (parent, then child) in order to avoid deadlock. wait looks at every process's np->parent to find its children. It uses np->parent without holding np->lock, which is a violation of the usual rule that shared variables must be protected by locks. But it is possible that np is an ancestor of the current process, in which case acquiring np->lock could cause a deadlock, since that would violate the order mentioned above. Examining np->parent without a lock seems safe in this case: a process's parent field is only changed by its parent, so if np->parent == p is true, the value cannot change unless the current process changes it. exit (kernel/proc.c:333) records the exit status, frees some resources, gives any children to the init process, wakes up the parent in case it is in wait, marks the caller as a zombie, and permanently yields the CPU. The final sequence is a little tricky. The exiting process must hold its parent's lock while it sets its state to ZOMBIE and wakes the parent up, since the parent's lock is the condition lock that guards against lost wakeups in wait. The child must also hold its own p->lock, since otherwise the parent might see it in state ZOMBIE and free it while it is still running. The lock acquisition order is important to avoid deadlock: since wait acquires the parent's lock before the child's lock, exit must use the same order. exit calls a specialized wakeup function, wakeup1, that wakes up only the parent, and only if it is sleeping in wait (kernel/proc.c:598). It may look incorrect for the child to wake up the parent before setting its state to ZOMBIE, but that is safe: although wakeup1 may cause the parent to run, the loop in wait cannot examine the child until the child's p->lock is released by the scheduler, so wait cannot look at the exiting process until well after exit has set its state to ZOMBIE (kernel/proc.c:386). While exit allows a process to terminate itself, kill (kernel/proc.c:611) lets one process request that another terminate. It would be too complex for kill to directly destroy the victim process, since the victim might be executing on another CPU, perhaps in the middle of a sensitive sequence of updates to kernel data structures. Thus kill does very little: it just sets the victim's p->killed and, if it is sleeping, wakes it up. Eventually the victim will enter or leave the
kernel, at which point code in usertrap will call exit if p->killed is set. If the victim is running in user space, it will soon enter the kernel by making a system call or because the timer (or some other device) interrupts. If the victim process is in sleep, kill's call to wakeup will cause the victim to return from sleep. This is potentially dangerous because the condition being waited for may not be true. However, xv6 calls to sleep are always wrapped in a while loop that re-tests the condition after sleep returns. Some calls to sleep also test p->killed in the loop, and abandon the current activity if it is set. This is only done when such abandonment would be correct. For example, the pipe read and write code returns if the killed flag is set; eventually the code will return back to trap, which will again check the flag and exit. Some xv6 sleep loops do not check p->killed because the code is in the middle of a multi-step system call that should be atomic. The virtio driver (kernel/virtio_disk.c:242) is an example: it does not check p->killed because a disk operation may be one of a set of writes that are all needed in order for the file system to be left in a correct state. A process that is killed while waiting for disk I/O will not exit until it completes the current system call and usertrap sees the killed flag.", "url": "RV32ISPEC.pdf#segment61", "timestamp": "2023-10-07 23:05:04", "segment": "segment61", "image_urls": [], "Book": "riscv-rev1" }, { "section": "7.9 Real world ", "content": "The xv6 scheduler implements a simple scheduling policy, which runs each process in turn. This policy is called round robin. Real operating systems implement more sophisticated policies that, for example, allow processes to have priorities. The idea is that a runnable high-priority process will be preferred by the scheduler over a runnable low-priority process. These policies can become complex quickly because there are often competing goals: for example, the operating system might also want to guarantee fairness and high throughput. In addition, complex policies may lead to unintended interactions such as priority inversion and convoys. Priority inversion can happen when a low-priority and a high-priority process share a lock, which when acquired by the low-priority process can prevent the high-priority process from making progress. A long convoy of waiting processes can form when many high-priority processes are waiting for a low-priority process that acquires a shared lock; once a convoy has formed it can persist for a long time. To avoid these kinds of problems, additional mechanisms are necessary in sophisticated schedulers. sleep and wakeup are a simple and effective synchronization method, but there are many others. The first challenge in all of them is to avoid the lost-wakeups problem we saw at the beginning of the chapter. The original Unix kernel's sleep
simply disabled interrupts, which sufficed because Unix ran on a single-CPU system. Because xv6 runs on multiprocessors, it adds an explicit lock to sleep. FreeBSD's msleep takes the same approach. Plan 9's sleep uses a callback function that runs with the scheduling lock held just before going to sleep; the function serves as a last-minute check of the sleep condition, to avoid lost wakeups. The Linux kernel's sleep uses an explicit process queue, called a wait queue, instead of a wait channel; the queue has its own internal lock. Scanning the entire process list in wakeup for processes with a matching chan is inefficient. A better solution is to replace the chan in both sleep and wakeup with a data structure that holds a list of processes sleeping on that structure, such as Linux's wait queue. Plan 9's sleep and wakeup call that structure a rendezvous point or Rendez. Many thread libraries refer to the same structure as a condition variable; in that context, the operations sleep and wakeup are called wait and signal. All of these mechanisms share the same flavor: the sleep condition is protected by some kind of lock dropped atomically during sleep. The implementation of wakeup wakes up all processes that are waiting on a particular channel, and it might be the case that many processes are waiting for that channel. The operating system will schedule all these processes and they will race to check the sleep condition. Processes that behave in this way are sometimes called a thundering herd, and it is best avoided. Most condition variables have two primitives for wakeup: signal, which wakes up one process, and broadcast, which wakes up all waiting processes. Semaphores are often used for synchronization. The count typically corresponds to something like the number of bytes available in a pipe buffer or the number of zombie children that a process has. Using an explicit count as part of the abstraction avoids the lost-wakeup problem: there is an explicit count of the number of wakeups that have occurred. The count also avoids the spurious-wakeup and thundering-herd problems. Terminating processes and cleaning them up introduces much complexity in xv6. In most operating systems it is even more complex, because, for example, the victim process may be deep inside the kernel sleeping, and unwinding its stack requires much careful programming. Many operating systems unwind the stack using explicit mechanisms for exception handling, such as longjmp. Furthermore, there are other events that can cause a sleeping process to be woken up, even though the event it is waiting for has not happened yet. For example, when a Unix process is sleeping, another process may send a signal to it. In this case, the process will return from the interrupted system call with the value -1 and with the error code set to EINTR. The
application can check for these values and decide what to do. xv6 does not support signals and this complexity does not arise. xv6's support for kill is not entirely satisfactory: there are sleep loops which probably should check for p->killed. A related problem is that, even for sleep loops that check p->killed, there is a race between sleep and kill; the latter may set p->killed and try to wake up the victim just after the victim's loop checks p->killed but before it calls sleep. If this problem occurs, the victim will not notice the p->killed until the condition it is waiting for occurs. This may be quite a bit later (e.g., when the virtio driver returns a disk block that the victim is waiting for) or never (e.g., if the victim is waiting for input from the console, but the user does not type any input). A real operating system would find free proc structures with an explicit free list in constant time instead of the linear-time search in allocproc; xv6 uses a linear scan for simplicity.", "url": "RV32ISPEC.pdf#segment62", "timestamp": "2023-10-07 23:05:04", "segment": "segment62", "image_urls": [], "Book": "riscv-rev1" }, { "section": "7.10 Exercises ", "content": "1. sleep has to check lk != &p->lock to avoid a deadlock (kernel/proc.c:558-561). Suppose the special case were eliminated, so that sleep always released lk and then acquired p->lock. How would that break sleep? 2. Most process cleanup could be done by either exit or wait. It turns out that exit must be the one to close the open files. Why? The answer involves pipes. 3. Implement semaphores in xv6 without using sleep and wakeup (but it is OK to use spin locks). Replace the uses of sleep and wakeup in xv6 with semaphores. Judge the result. 4. Fix the race mentioned above between kill and sleep, so that a kill that occurs after the victim's sleep loop checks p->killed but before it calls sleep results in the victim abandoning the current system call. 5. Design a plan so that every sleep loop checks p->killed, so that, for example, a process that is in the virtio driver can return quickly from the while loop if it is killed by another process. 6. Modify xv6 to use only one context switch when switching from one process's kernel thread to another, rather than switching through the scheduler thread. The yielding thread will need to select the next thread itself and call swtch. The challenges will be to prevent multiple cores from executing the same thread accidentally; to get the locking right; and to avoid deadlocks. 7. Modify xv6's scheduler to use the RISC-V WFI (wait for interrupt) instruction when no processes are runnable. Try to ensure that, any time there are runnable processes waiting to run, no cores are pausing in WFI. 8. The lock p->lock protects many invariants, and when looking at a particular piece of xv6 code that is protected by
p->lock, it can be difficult to figure out which invariant is being enforced. Design a plan that is cleaner by splitting p->lock into several locks.", "url": "RV32ISPEC.pdf#segment63", "timestamp": "2023-10-07 23:05:04", "segment": "segment63", "image_urls": [], "Book": "riscv-rev1" }, { "section": "Chapter 8 ", "content": "File system. The purpose of a file system is to organize and store data. File systems typically support sharing of data among users and applications, as well as persistence, so that data is still available after a reboot. The xv6 file system provides Unix-like files, directories, and pathnames (see Chapter 1), and stores its data on a virtio disk for persistence (see Chapter 4). The file system addresses several challenges. The file system needs on-disk data structures to represent the tree of named directories and files, to record the identities of the blocks that hold each file's content, and to record which areas of the disk are free. The file system must support crash recovery: if a crash (e.g., power failure) occurs, the file system must still work correctly after a restart. The risk is that a crash might interrupt a sequence of updates and leave inconsistent on-disk data structures (e.g., a block that is both used in a file and marked free). Different processes may operate on the file system at the same time, so the file-system code must coordinate to maintain invariants. Accessing a disk is orders of magnitude slower than accessing memory, so the file system must maintain an in-memory cache of popular blocks. The rest of this chapter explains how xv6 addresses these challenges.", "url": "RV32ISPEC.pdf#segment64", "timestamp": "2023-10-07 23:05:04", "segment": "segment64", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.1 Overview ", "content": "The xv6 file system implementation is organized in seven layers, shown in Figure 8.1. The disk layer reads and writes blocks on a virtio hard drive. The buffer cache layer caches disk blocks and synchronizes access to them, making sure that only one kernel process at a time can modify the data stored in any particular block. The logging layer allows higher layers to wrap updates to several blocks in a transaction, and ensures that the blocks are updated atomically in the face of crashes (i.e., all of them are updated or none). The inode layer provides individual files, each represented as an inode with a unique i-number and some blocks holding the file's data. The directory layer implements each directory as a special kind of inode whose content is a sequence of directory entries, each of which contains a file's name and
i-number. The pathname layer provides hierarchical path names like /usr/rtm/xv6/fs.c, and resolves them with recursive lookup. The file descriptor layer abstracts many Unix resources (e.g., pipes, devices, files, etc.) using the file system interface, simplifying the lives of application programmers. The file system must have a plan for where it stores inodes and content blocks on the disk. To do so, xv6 divides the disk into several sections, as Figure 8.2 shows. The file system does not use block 0 (it holds the boot sector). Block 1 is called the superblock; it contains metadata about the file system (the file system size in blocks, the number of data blocks, the number of inodes, and the number of blocks in the log). Blocks starting at 2 hold the log. After the log are the inodes, with multiple inodes per block. After those come bitmap blocks tracking which data blocks are in use. The remaining blocks are data blocks; each is either marked free in the bitmap block, or holds content for a file or directory. The superblock is filled in by a separate program, called mkfs, which builds an initial file system. The rest of this chapter discusses each layer, starting with the buffer cache. Look out for situations where well-chosen abstractions at lower layers ease the design of higher ones.", "url": "RV32ISPEC.pdf#segment65", "timestamp": "2023-10-07 23:05:04", "segment": "segment65", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-rev1/fig%208.1.jpg?raw=true"], "Book": "riscv-rev1" }, { "section": "8.2 Buffer cache layer ", "content": "The buffer cache has two jobs: (1) synchronize access to disk blocks to ensure that only one copy of a block is in memory and that only one kernel thread at a time uses that copy; (2) cache popular blocks so that they do not need to be re-read from the slow disk. The code is in bio.c. The main interface exported by the buffer cache consists of bread and bwrite; the former obtains a buf containing a copy of a block which can be read or modified in memory, and the latter writes a modified buffer to the appropriate block on the disk. A kernel thread must release a buffer by calling brelse when it is done with it. The buffer cache uses a per-buffer sleep-lock to ensure that only one thread at a time uses each buffer (and thus each disk block); bread returns a locked buffer, and brelse releases the lock. Let us return to the buffer cache. The buffer cache has a fixed number of buffers to hold disk blocks, which means that if the file system asks for a block that is not already in the cache, the buffer cache must recycle a buffer currently holding some other block. The buffer cache recycles the least recently used buffer for the new block. The assumption is that the least
recently used buffer is the one least likely to be used again soon.", "url": "RV32ISPEC.pdf#segment66", "timestamp": "2023-10-07 23:05:04", "segment": "segment66", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-rev1/fig%208.2.jpg?raw=true"], "Book": "riscv-rev1" }, { "section": "8.3 Code: Buffer cache ", "content": "The buffer cache is a doubly-linked list of buffers. The function binit, called by main (kernel/main.c:27), initializes the list with the NBUF buffers in the static array buf (kernel/bio.c:43-52). All other access to the buffer cache refers to the linked list via bcache.head, not the buf array. A buffer has two state fields associated with it. The field valid indicates that the buffer contains a copy of the block. The field disk indicates that the buffer content has been handed to the disk, which may change the buffer (e.g., write data from the disk into data). bread (kernel/bio.c:93) calls bget to get a buffer for the given sector (kernel/bio.c:97). If the buffer needs to be read from disk, bread calls virtio_disk_rw to do that before returning the buffer. bget (kernel/bio.c:59) scans the buffer list for a buffer with the given device and sector numbers (kernel/bio.c:65-73). If there is such a buffer, bget acquires the sleep-lock for the buffer and returns the locked buffer. If there is no cached buffer for the given sector, bget must make one, possibly reusing a buffer that held a different sector. It scans the buffer list a second time, looking for a buffer that is not in use (b->refcnt == 0); any such buffer can be used. bget edits the buffer metadata to record the new device and sector number and acquires its sleep-lock. Note that the assignment b->valid = 0 ensures that bread will read the block data from disk rather than incorrectly using the buffer's previous contents. It is important that there is at most one cached buffer per disk sector, to ensure that readers see writes, and because the file system uses locks on buffers for synchronization. bget ensures this invariant by holding bcache.lock continuously from the first loop's check of whether the block is cached through the second loop's declaration that the block is now cached (by setting dev, blockno, and refcnt). This causes the check for a block's presence and (if not present) the designation of a buffer to hold the block to be atomic. It is safe for bget to acquire the buffer's sleep-lock outside of the bcache.lock critical section, since the non-zero b->refcnt prevents the buffer from being re-used for a different disk block. The sleep-lock protects reads and writes of the block's buffered content, while bcache.lock protects information about which blocks are cached. If all the buffers are busy, then too many processes are simultaneously executing file system 
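The cache-or-recycle logic of bget described above can be sketched as follows. This is a minimal, single-threaded model, not the actual xv6 code: the locks, the linked list, and the panic on exhaustion are omitted, and a flat array stands in for bcache.

```c
// Hypothetical, simplified sketch of xv6's bget(): return a cached buffer
// for (dev, blockno), or recycle an unused one. Locking omitted.
#include <assert.h>

#define NBUF 4

struct buf {
    int dev, blockno;
    int valid;    // does the buffer hold a copy of the block's data?
    int refcnt;   // how many callers currently hold this buffer?
};

static struct buf cache[NBUF];

struct buf *bget(int dev, int blockno) {
    // First pass: is the block already cached?
    for (int i = 0; i < NBUF; i++) {
        struct buf *b = &cache[i];
        if (b->refcnt > 0 && b->dev == dev && b->blockno == blockno) {
            b->refcnt++;
            return b;
        }
    }
    // Second pass: recycle any buffer no one is using.
    for (int i = 0; i < NBUF; i++) {
        struct buf *b = &cache[i];
        if (b->refcnt == 0) {
            b->dev = dev;
            b->blockno = blockno;
            b->valid = 0;   // force bread() to re-read from disk
            b->refcnt = 1;
            return b;
        }
    }
    return 0;               // real xv6 panics here: "bget: no buffers"
}
```

In the real code, both passes run under bcache.lock so that the "is it cached?" check and the "it is now cached" claim are atomic.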
calls; bget panics. A more graceful response might be to sleep until a buffer became free, though there would then be a possibility of deadlock. Once bread has read the disk (if needed) and returned the buffer to its caller, the caller has exclusive use of the buffer and can read or write the data bytes. If the caller does modify the buffer, it must call bwrite to write the changed data to disk before releasing the buffer. bwrite (kernel/bio.c:107) calls virtio_disk_rw to talk to the disk hardware. When the caller is done with a buffer, it must call brelse to release it. (The name brelse, a shortening of b-release, is cryptic but worth learning: it originated in Unix and is used in BSD, Linux, and Solaris too.) brelse (kernel/bio.c:117) releases the sleep-lock and moves the buffer to the front of the linked list (kernel/bio.c:128-133). Moving the buffer causes the list to be ordered by how recently the buffers were used (meaning released): the first buffer in the list is the most recently used, and the last is the least recently used. The two loops in bget take advantage of this: the scan for an existing buffer must process the entire list in the worst case, but checking the most recently used buffers first (starting at bcache.head and following next pointers) reduces scan time when there is good locality of reference; the scan to pick a buffer to reuse picks the least recently used buffer by scanning backward (following prev pointers).", "url": "RV32ISPEC.pdf#segment67", "timestamp": "2023-10-07 23:05:05", "segment": "segment67", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.4 Logging layer ", "content": "One of the most interesting problems in file-system design is crash recovery. The problem arises because many file-system operations involve multiple writes to the disk, and a crash after a subset of the writes may leave the on-disk file system in an inconsistent state. For example, suppose a crash occurs during file truncation (setting the length of a file to zero and freeing its content blocks). Depending on the order of the disk writes, the crash may either leave an inode with a reference to a content block that is marked free, or it may leave an allocated but unreferenced content block. The latter is relatively benign, but an inode that refers to a freed block is likely to cause serious problems after a reboot. After reboot, the kernel might allocate that block to another file, and now we have two different files pointing unintentionally to the same block. If xv6 supported multiple users, this situation could be a security problem, since the old file's owner would be able to read and write blocks in the new file, owned by a different user. xv6 solves the problem of crashes during file-system operations with a simple form of logging. An xv6 system 
call does not directly write the on-disk file-system data structures. Instead, it places a description of all the disk writes it wishes to make in a log on the disk. Once the system call has logged all of its writes, it writes a special commit record to the disk indicating that the log contains a complete operation. At that point the system call copies the writes to the on-disk file-system data structures. After those writes have completed, the system call erases the log on disk. If the system should crash and reboot, the file-system code recovers from the crash as follows, before running any processes: if the log is marked as containing a complete operation, then the recovery code copies the writes to where they belong in the on-disk file system. If the log is not marked as containing a complete operation, the recovery code ignores the log. The recovery code finishes by erasing the log. Why does xv6's log solve the problem of crashes during file-system operations? If the crash occurs before the operation commits, then the log on disk will not be marked as complete, the recovery code will ignore it, and the state of the disk will be as if the operation had not even started. If the crash occurs after the operation commits, then recovery will replay all of the operation's writes, perhaps repeating them if the operation had started to write them to the on-disk data structures. In either case, the log makes operations atomic with respect to crashes: after recovery, either all of the operation's writes appear on the disk, or none of them appear.", "url": "RV32ISPEC.pdf#segment68", "timestamp": "2023-10-07 23:05:05", "segment": "segment68", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.5 Log design ", "content": "The log resides at a known fixed location, specified in the superblock. It consists of a header block followed by a sequence of updated block copies (logged blocks). The header block contains an array of sector numbers, one for each of the logged blocks, and the count of log blocks. The count in the header block on disk is either zero, indicating that there is no transaction in the log, or non-zero, indicating that the log contains a complete committed transaction with the indicated number of logged blocks. xv6 writes the header block when a transaction commits, but not before, and sets the count to zero after copying the logged blocks to the file system. Thus a crash midway through a transaction will result in a count of zero in the log's header block; a crash after a commit will result in a non-zero count. Each system call's code indicates the start and end of the sequence of writes that must be atomic with respect to crashes. To allow concurrent execution of file-system operations by different processes, the logging system can accumulate the writes of multiple system calls into one transaction. Thus a single commit may involve the writes of multiple 
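The commit protocol described above (log the data, then write the header as the commit point, then install) can be modeled in a few lines. This is a toy in-memory simulation under assumed names, not xv6's log.c: one int stands in for each disk block, and a crash at any point is simulated simply by which functions have run.

```c
// Toy model of the xv6 log commit protocol. "Persistent" state: the disk
// array, the log header (count + destination addresses), and the log slots.
#include <string.h>

#define LOGSIZE 3
#define NDISK  16

static int disk[NDISK];          // home locations, one int per "block"
static int hdr_count;            // log header count: 0 = nothing committed
static int hdr_dst[LOGSIZE];     // header: home address of each logged block
static int log_data[LOGSIZE];    // logged block copies

// Stages 1-2 of commit: copy data into the log, then write the header.
// Writing hdr_count is the commit point.
void commit_tx(int n, const int *dst, const int *data) {
    memcpy(log_data, data, n * sizeof(int));     // like write_log()
    memcpy(hdr_dst, dst, n * sizeof(int));
    hdr_count = n;                               // like write_head(): commit
}

// Stages 3-4, also run at recovery: install logged blocks, erase the log.
// If a crash happened before commit, hdr_count is 0 and this is a no-op.
void install_tx(void) {
    for (int i = 0; i < hdr_count; i++)
        disk[hdr_dst[i]] = log_data[i];          // like install_trans()
    hdr_count = 0;                               // erase the log
}
```

Running install_tx twice is harmless, which mirrors why replaying a committed transaction after a crash is safe: installs are idempotent.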
complete system calls. To avoid splitting a system call across transactions, the logging system only commits when no file-system system calls are underway. The idea of committing several transactions together is known as group commit. Group commit reduces the number of disk operations because it amortizes the fixed cost of a commit over multiple operations. It also hands the disk system more concurrent writes at the same time, perhaps allowing the disk to write them all during a single disk rotation. xv6's virtio driver doesn't support this kind of batching, but xv6's file-system design allows for it. xv6 dedicates a fixed amount of space on the disk to hold the log. The total number of blocks written by the system calls in a transaction must fit in that space. This has two consequences. No single system call can be allowed to write more distinct blocks than there is space in the log. This is not a problem for most system calls, but two of them can potentially write many blocks: write and unlink. A large file write may write many data blocks and many bitmap blocks as well as an inode block; unlinking a large file might write many bitmap blocks and an inode. xv6's write system call breaks up large writes into multiple smaller writes that fit in the log, and unlink doesn't cause problems because in practice the xv6 file system uses only one bitmap block. The other consequence of limited log space is that the logging system cannot allow a system call to start unless it is certain that the system call's writes will fit in the space remaining in the log.", "url": "RV32ISPEC.pdf#segment69", "timestamp": "2023-10-07 23:05:05", "segment": "segment69", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.6 Code: logging ", "content": "A typical use of the log in a system call looks like this: begin_op(); ... bp = bread(...); bp->data[...] = ...; log_write(bp); ... end_op(); begin_op (kernel/log.c:126) waits until the logging system is not currently committing, and until there is enough unreserved log space to hold the writes from this call. log.outstanding counts the number of system calls that have reserved log space; the total reserved space is log.outstanding times MAXOPBLOCKS. Incrementing log.outstanding both reserves space and prevents a commit from occurring during this system call. The code conservatively assumes that each system call might write up to MAXOPBLOCKS distinct blocks. log_write (kernel/log.c:214) acts as a proxy for bwrite. It records the block's sector number in memory, reserving it a slot in the log on disk, and pins the buffer in the block cache to prevent the block cache from evicting it. The block must stay in the cache until committed: until then, the cached copy is the only record of the modification; it cannot be written to its 
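The space-reservation accounting behind begin_op/end_op reduces to a small amount of arithmetic. The sketch below is a simplification under assumed behavior: instead of sleeping and waking as xv6 does, begin_op just reports whether the caller would have to wait, and end_op reports whether it would trigger a commit.

```c
// Sketch of log-space reservation: each in-flight system call conservatively
// reserves MAXOPBLOCKS log slots; commit happens only when the last call ends.
#define MAXOPBLOCKS 10
#define LOGSIZE     30

static int outstanding;   // system calls currently inside a transaction

// Returns 1 if the call may proceed, 0 if it would have to sleep.
int begin_op(void) {
    if ((outstanding + 1) * MAXOPBLOCKS > LOGSIZE)
        return 0;         // not enough unreserved log space
    outstanding++;
    return 1;
}

// Returns 1 if this end_op() is the last one out and so triggers a commit.
int end_op(void) {
    outstanding--;
    return outstanding == 0;
}
```

With LOGSIZE = 30 and MAXOPBLOCKS = 10, at most three system calls can be inside transactions at once; the fourth must wait, and the commit is deferred until all three have finished, which is exactly the group-commit behavior described above.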
place on disk until after commit; and other reads in the same transaction must see the modifications. log_write notices when a block is written multiple times during a single transaction, and allocates that block the same slot in the log. This optimization is often called absorption. It is common that, for example, the disk block containing the inodes of several files is written several times within a transaction. By absorbing several disk writes into one, the file system can save log space and achieve better performance, because only one copy of the disk block must be written to disk. end_op (kernel/log.c:146) first decrements the count of outstanding system calls. If the count is now zero, it commits the current transaction by calling commit(). There are four stages in this process. write_log() (kernel/log.c:178) copies each block modified in the transaction from the buffer cache to its slot in the log on disk. write_head() (kernel/log.c:102) writes the header block to disk: this is the commit point, and a crash after this write will result in recovery replaying the transaction's writes from the log. install_trans (kernel/log.c:69) reads each block from the log and writes it to the proper place in the file system. Finally, end_op writes the log header with a count of zero; this has to happen before the next transaction starts writing logged blocks, so that a crash doesn't result in recovery using one transaction's header with the subsequent transaction's logged blocks. recover_from_log (kernel/log.c:116) is called from initlog (kernel/log.c:55), which is called from fsinit (kernel/fs.c:42) during boot before the first user process runs (kernel/proc.c:539). It reads the log header, and mimics the actions of end_op if the header indicates that the log contains a committed transaction. An example use of the log occurs in filewrite (kernel/file.c:135). The transaction looks like this: begin_op(); ilock(f->ip); r = writei(f->ip, ...); iunlock(f->ip); end_op(); This code is wrapped in a loop that breaks up large writes into individual transactions of just a few sectors at a time, to avoid overflowing the log. The call to writei writes many blocks as part of this transaction: the file's inode, one or more bitmap blocks, and some data blocks.", "url": "RV32ISPEC.pdf#segment70", "timestamp": "2023-10-07 23:05:05", "segment": "segment70", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.7 Code: Block allocator ", "content": "File and directory content is stored in disk blocks, which must be allocated from a free pool. xv6's block allocator maintains a free bitmap on disk, with one bit per block. A zero bit indicates that the corresponding block is free; a one bit indicates that it is in use. The program mkfs sets the bits corresponding to the boot sector, superblock, log blocks, inode blocks, and 
bitmap blocks. The block allocator provides two functions: balloc allocates a new disk block, and bfree frees a block. The loop in balloc (kernel/fs.c:71) considers every block, starting at block 0 up to sb.size, the number of blocks in the file system, looking for a block whose bitmap bit is zero, indicating that it is free. If balloc finds such a block, it updates the bitmap and returns the block. For efficiency, the loop is split into two pieces: the outer loop reads each block of bitmap bits, and the inner loop checks all BPB bits in a single bitmap block. The race that might occur if two processes try to allocate a block at the same time is prevented by the fact that the buffer cache only lets one process use any one bitmap block at a time. bfree (kernel/fs.c:90) finds the right bitmap block and clears the right bit. Again the exclusive use implied by bread and brelse avoids the need for explicit locking. As with much of the code described in the remainder of this chapter, balloc and bfree must be called inside a transaction.", "url": "RV32ISPEC.pdf#segment71", "timestamp": "2023-10-07 23:05:05", "segment": "segment71", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.8 Inode layer ", "content": "The term inode can have one of two related meanings. It might refer to the on-disk data structure containing a file's size and list of data block numbers. Or it might refer to an in-memory inode, which contains a copy of the on-disk inode as well as extra information needed within the kernel. The on-disk inodes are packed into a contiguous area of disk called the inode blocks. Every inode is the same size, so it is easy, given a number n, to find the nth inode on the disk. In fact, this number n, called the inode number or i-number, is how inodes are identified in the implementation. The on-disk inode is defined by a struct dinode (kernel/fs.h:32). The type field distinguishes between files, directories, and special files (devices); a type of zero indicates that an on-disk inode is free. The nlink field counts the number of directory entries that refer to this inode, in order to recognize when the on-disk inode and its data blocks should be freed. The size field records the number of bytes of content in the file. The addrs array records the block numbers of the disk blocks holding the file's content. The kernel keeps the set of active inodes in memory; struct inode (kernel/file.h:17) is the in-memory copy of a struct dinode on disk. The kernel stores an inode in memory only if there are C pointers referring to that inode. The ref field counts the number of C pointers referring to the in-memory inode, and the kernel discards the inode from memory if the reference count drops to zero. The iget and iput functions acquire and release pointers to an inode, 
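The bit-per-block bookkeeping that balloc and bfree perform can be sketched with a plain in-memory bitmap. This is a simplified model under assumed names, without the buffer cache, the transaction, or the outer loop over bitmap blocks.

```c
// Simplified free-bitmap allocator in the spirit of balloc()/bfree():
// one bit per disk block, 0 = free, 1 = in use.
#define NBLOCKS 64

static unsigned char bitmap[NBLOCKS / 8];

// Find the first zero bit, set it, and return its block number.
int balloc(void) {
    for (int b = 0; b < NBLOCKS; b++) {
        int m = 1 << (b % 8);
        if ((bitmap[b / 8] & m) == 0) {
            bitmap[b / 8] |= m;   // mark the block in use
            return b;
        }
    }
    return -1;                    // real xv6 panics: "balloc: out of blocks"
}

void bfree(int b) {
    int m = 1 << (b % 8);
    bitmap[b / 8] &= ~m;          // clear the bit: block is free again
}
```

In xv6 the bitmap lives in disk blocks, so each update goes through bread/log_write/brelse, and the buffer cache's one-user-per-block rule is what makes the read-modify-write of a bitmap byte race-free.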
modifying the reference count. Pointers to an inode can come from file descriptors, current working directories, and transient kernel code such as exec. There are four lock or lock-like mechanisms in xv6's inode code. icache.lock protects the invariant that an inode is present in the cache at most once, and the invariant that a cached inode's ref field counts the number of in-memory pointers to the cached inode. Each in-memory inode has a lock field containing a sleep-lock, which ensures exclusive access to the inode's fields (such as file length) as well as to the inode's file or directory content blocks. An inode's ref, if greater than zero, causes the system to maintain the inode in the cache and not re-use the cache entry for a different inode. Finally, each inode contains an nlink field (on disk, and copied in memory if cached) that counts the number of directory entries that refer to the file; xv6 won't free an inode if its link count is greater than zero. A struct inode pointer returned by iget() is guaranteed to be valid until the corresponding call to iput(): the inode won't be deleted, and the memory referred to by the pointer won't be re-used for a different inode. iget() provides non-exclusive access to an inode, so that there can be many pointers to the same inode. Many parts of the file-system code depend on this behavior of iget(), both to hold long-term references to inodes (as open files and current directories) and to prevent races while avoiding deadlock in code that manipulates multiple inodes (such as pathname lookup). The struct inode that iget returns may not have any useful content. In order to ensure it holds a copy of the on-disk inode, code must call ilock, which locks the inode (so that no other process can ilock it) and reads the inode from the disk, if it has not already been read. iunlock releases the lock on the inode. Separating acquisition of inode pointers from locking helps avoid deadlock in some situations, for example during directory lookup: multiple processes can hold a C pointer to an inode returned by iget, but only one process can lock the inode at a time. The inode cache only caches inodes to which kernel code or data structures hold C pointers. Its main job is really synchronizing access by multiple processes; caching is secondary. If an inode is used frequently, the buffer cache will probably keep it in memory even if the inode cache doesn't. The inode cache is write-through, which means that code that modifies a cached inode must immediately write it to disk with iupdate.", "url": "RV32ISPEC.pdf#segment72", "timestamp": "2023-10-07 23:05:06", "segment": "segment72", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.9 Code: Inodes ", "content": "To allocate a new inode (for example, when creating a file), xv6 calls ialloc (kernel/fs.c:196). ialloc is similar to balloc: it loops over the 
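The reference-counting behavior of iget/iput described above can be sketched in miniature. This is a simplified single-threaded model under assumed names: icache.lock, the sleep-lock, and the on-disk read (ilock) are omitted, leaving only the find-or-claim scan and the ref accounting.

```c
// Sketch of iget()/iput(): return an in-memory inode for (dev, inum),
// bumping its reference count, or claim the first empty cache slot.
#define NINODE 8

struct inode {
    int dev, inum;
    int ref;     // how many C pointers refer to this cache slot
};

static struct inode icache[NINODE];

struct inode *iget(int dev, int inum) {
    struct inode *empty = 0;
    for (int i = 0; i < NINODE; i++) {
        struct inode *ip = &icache[i];
        if (ip->ref > 0 && ip->dev == dev && ip->inum == inum) {
            ip->ref++;
            return ip;           // already cached: new reference
        }
        if (empty == 0 && ip->ref == 0)
            empty = ip;          // remember the first free slot
    }
    if (empty == 0)
        return 0;                // real xv6 panics: "iget: no inodes"
    empty->dev = dev;
    empty->inum = inum;
    empty->ref = 1;
    return empty;
}

void iput(struct inode *ip) {
    ip->ref--;                   // slot is reusable once ref reaches 0
}
```

The real iput also checks, when the last reference goes away, whether nlink is zero, and in that case truncates and frees the on-disk inode, as Section 8.9 discusses.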
inode structures on the disk, one block at a time, looking for one that is marked free. When it finds one, it claims it by writing the new type to the disk and then returns an entry from the inode cache with the tail call to iget (kernel/fs.c:210). The correct operation of ialloc depends on the fact that only one process at a time can be holding a reference to bp: ialloc can be sure that some other process does not simultaneously see that the inode is available and try to claim it. iget (kernel/fs.c:243) looks through the inode cache for an active entry (ip->ref > 0) with the desired device and inode number. If it finds one, it returns a new reference to that inode (kernel/fs.c:252-256). As iget scans, it records the position of the first empty slot (kernel/fs.c:257-258), which it uses if it needs to allocate a cache entry. Code must lock the inode using ilock before reading or writing its metadata or content. ilock (kernel/fs.c:289) uses a sleep-lock for this purpose. Once ilock has exclusive access to the inode, it reads the inode from disk (more likely, the buffer cache) if needed. The function iunlock (kernel/fs.c:317) releases the sleep-lock, which may cause any sleeping processes to be woken up. iput (kernel/fs.c:333) releases a C pointer to an inode by decrementing the reference count (kernel/fs.c:356). If this is the last reference, the inode's slot in the inode cache is now free and can be re-used for a different inode. If iput sees that there are no C pointer references to an inode and that the inode has no links to it (occurs in no directory), then the inode and its data blocks must be freed. iput calls itrunc to truncate the file to zero bytes, freeing the data blocks; sets the inode type to 0 (unallocated); and writes the inode to disk (kernel/fs.c:338). The locking protocol in iput in the case in which it frees the inode deserves a closer look. One danger is that a concurrent thread might be waiting in ilock to use this inode (e.g., to read a file or list a directory), and won't be prepared to find that the inode is no longer allocated. This can't happen, because there is no way for a system call to get a pointer to a cached inode if it has no links to it and ip->ref is one: that one reference is owned by the thread calling iput. It's true that iput checks that the reference count is one outside of its icache.lock critical section, but at that point the link count is known to be zero, so no thread will try to acquire a new reference. The other main danger is that a concurrent call to ialloc might choose the same inode that iput is freeing. This can happen only after the iupdate writes the disk so that the inode has type zero. This race is benign: the allocating thread will politely wait to acquire the inode's sleep-lock before reading or writing the inode, at which point iput is done with it. iput() can write to the disk. This means that any system call that uses the file system may write to the disk, because the system call may hold the last reference to the file. Even calls like read() that appear to be read-only may end up calling iput(). This, in turn, means that even read-only system calls 
must be wrapped in transactions if they use the file system. There is a challenging interaction between iput() and crashes. iput() doesn't truncate a file immediately when its link count drops to zero, because some process might still hold a reference to the inode in memory: a process might still be reading and writing to the file, because it successfully opened it. But if a crash happens before the last process closes the file descriptor, the file will be marked allocated on disk yet no directory entry will point to it. File systems handle this case in one of two ways. The simple solution is that on recovery, after reboot, the file system scans the whole file system for files that are marked allocated but have no directory entry pointing to them, and frees any such files. The second solution doesn't require scanning: the file system records on disk (e.g., in the super block) the i-number of any file whose link count drops to zero but whose reference count isn't zero; when the file system removes the file once its reference count reaches 0, it updates the on-disk list by removing that inode from it. On recovery, the file system frees any file in the list. xv6 implements neither solution, which means that inodes may be marked allocated on disk even though they are no longer in use. This means that over time xv6 runs the risk of running out of disk space.", "url": "RV32ISPEC.pdf#segment73", "timestamp": "2023-10-07 23:05:06", "segment": "segment73", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-rev1/fig%208.3.jpg?raw=true"], "Book": "riscv-rev1" }, { "section": "8.10 Code: Inode content ", "content": "The on-disk inode structure, struct dinode, contains a size and an array of block numbers (see Figure 8.3). The inode data is found in the blocks listed in the dinode's addrs array. The first NDIRECT blocks of data are listed in the first NDIRECT entries in the array; these blocks are called direct blocks. The next NINDIRECT blocks of data are listed not in the inode but in a data block called the indirect block; the last entry in the addrs array gives the address of the indirect block. Thus the first 12 kB (NDIRECT x BSIZE) bytes of a file can be loaded from blocks listed in the inode, while the next 256 kB (NINDIRECT x BSIZE) bytes can only be loaded after consulting the indirect block. This is a good on-disk representation but a complex one for clients. The function bmap manages the representation so that higher-level routines, such as readi and writei, which we will see shortly, do not need to manage this complexity. bmap returns the disk block number of the bn'th data block for the inode ip; if ip does not have such a block yet, bmap allocates one. The function bmap (kernel/fs.c:378) begins by picking off the easy 
case: the first NDIRECT blocks are listed in the inode itself (kernel/fs.c:383-387). The next NINDIRECT blocks are listed in the indirect block at ip->addrs[NDIRECT]. bmap reads the indirect block (kernel/fs.c:394) and then reads a block number from the right position within the block (kernel/fs.c:395). If the block number exceeds NDIRECT+NINDIRECT, bmap panics; writei contains the check that prevents this from happening (kernel/fs.c:490). bmap allocates blocks as needed: an ip->addrs[] or indirect entry of zero indicates that no block is allocated, and as bmap encounters zeros it replaces them with the numbers of fresh blocks, allocated on demand (kernel/fs.c:384-385) (kernel/fs.c:392-393). itrunc frees a file's blocks, resetting the inode's size to zero. itrunc (kernel/fs.c:410) starts by freeing the direct blocks (kernel/fs.c:416-421), then the ones listed in the indirect block (kernel/fs.c:426-429), and finally the indirect block itself (kernel/fs.c:431-432). bmap makes it easy for readi and writei to get at an inode's data. readi (kernel/fs.c:456) starts by making sure that the offset and count are not beyond the end of the file. Reads that start beyond the end of the file return an error (kernel/fs.c:461-462), while reads that start at or cross the end of the file return fewer bytes than requested (kernel/fs.c:463-464). The main loop processes each block of the file, copying data from the buffer into dst (kernel/fs.c:466-474). writei (kernel/fs.c:483) is identical to readi, with three exceptions: writes that start at or cross the end of the file grow the file, up to the maximum file size (kernel/fs.c:490-491); the loop copies data into the buffers instead of out (kernel/fs.c:36); and if the write has extended the file, writei must update its size (kernel/fs.c:504-511). Both readi and writei begin by checking for ip->type == T_DEV. This case handles special devices whose data does not live in the file system; we will return to this case in the file descriptor layer. The function stati (kernel/fs.c:442) copies inode metadata into the stat structure, which is exposed to user programs via the stat system call.", "url": "RV32ISPEC.pdf#segment74", "timestamp": "2023-10-07 23:05:06", "segment": "segment74", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.11 Code: directory layer ", "content": "A directory is implemented internally much like a file. Its inode has type T_DIR and its data is a sequence of directory entries. Each entry is a struct dirent (kernel/fs.h:56), which contains a name and an inode number. The name is at most DIRSIZ (14) characters; if shorter, it is terminated by a NUL (0) byte. Directory entries with inode number zero are free. The function dirlookup (kernel/fs.c:527) searches a directory for an entry with the given name. If it finds one, it returns a pointer to the 
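The direct/indirect arithmetic that bmap performs can be isolated into a few lines. The function below is a hypothetical helper (not in xv6) that only classifies a file block number against the xv6 layout constants; the on-demand allocation and disk reads are omitted.

```c
// Block-mapping arithmetic behind bmap(): the first NDIRECT file blocks are
// addressed directly from the inode's addrs array, the next NINDIRECT
// through a single indirect block. Constants follow xv6's fs.h.
#define NDIRECT   12
#define BSIZE     1024
#define NINDIRECT (BSIZE / 4)    // 4-byte block numbers per indirect block

// Returns 0 for a direct block, 1 for an indirect one, -1 if out of range,
// storing the index within the chosen table in *idx.
int classify(int bn, int *idx) {
    if (bn < NDIRECT) { *idx = bn; return 0; }     // inode's addrs[bn]
    bn -= NDIRECT;
    if (bn < NINDIRECT) { *idx = bn; return 1; }   // indirect block[bn]
    return -1;   // writei() checks the size limit before bmap() can see this
}
```

With BSIZE = 1024 this gives the 12 kB direct region and 256 kB indirect region mentioned above, so the maximum file size is (12 + 256) blocks.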
corresponding inode, unlocked, and sets *poff to the byte offset of the entry within the directory, in case the caller wishes to edit it. If dirlookup finds an entry with the right name, it updates *poff and returns an unlocked inode obtained via iget. dirlookup is the reason that iget returns unlocked inodes. The caller has locked dp, so if the lookup was for \".\", an alias for the current directory, attempting to lock the inode before returning would try to re-lock dp and deadlock. (There are more complicated deadlock scenarios involving multiple processes and \"..\", an alias for the parent directory; \".\" is not the only problem.) The caller can unlock dp and then lock ip, ensuring that it only holds one lock at a time. The function dirlink (kernel/fs.c:554) writes a new directory entry with the given name and inode number into the directory dp. If the name already exists, dirlink returns an error (kernel/fs.c:560-564). The main loop reads directory entries looking for an unallocated entry. When it finds one, it stops the loop early (kernel/fs.c:538-539), with off set to the offset of the available entry; otherwise, the loop ends with off set to dp->size. Either way, dirlink then adds a new entry to the directory by writing at offset off (kernel/fs.c:574-577).", "url": "RV32ISPEC.pdf#segment75", "timestamp": "2023-10-07 23:05:06", "segment": "segment75", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.12 Code: Path names ", "content": "Path name lookup involves a succession of calls to dirlookup, one for each path component. namei (kernel/fs.c:661) evaluates path and returns the corresponding inode. The function nameiparent is a variant: it stops before the last element, returning the inode of the parent directory and copying the final element into name. Both call the generalized function namex to do the real work. namex (kernel/fs.c:626) starts by deciding where the path evaluation begins. If the path begins with a slash, evaluation begins at the root; otherwise, at the current directory (kernel/fs.c:630-633). Then it uses skipelem to consider each element of the path in turn (kernel/fs.c:635). Each iteration of the loop must look up name in the current inode ip. The iteration begins by locking ip and checking that it is a directory; if not, the lookup fails (kernel/fs.c:636-640). (Locking ip is necessary not because ip->type can change underfoot -- it can't -- but because until ilock runs, ip->type is not guaranteed to have been loaded from disk.) If the call is nameiparent and this is the last path element, the loop stops early, per the definition of nameiparent; the final path element has already been copied into name, so namex need only return the unlocked ip (kernel/fs.c:641-645). Finally, the loop looks for the path element using dirlookup and prepares for the next iteration by setting 
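The linear scan over fixed-size entries that dirlookup performs can be sketched over an in-memory array. This is a simplified model under assumed names: in xv6 the entries are read from the directory inode's data blocks with readi, and the returned inode comes from iget.

```c
// Simplified dirlookup(): scan fixed-size entries; inum == 0 marks a free
// entry. struct dirent mirrors xv6's fs.h.
#include <string.h>

#define DIRSIZ 14

struct dirent {
    unsigned short inum;    // 0 means this entry is free
    char name[DIRSIZ];      // NUL-terminated only if shorter than DIRSIZ
};

// Returns the inode number for `name`, recording the entry's byte offset
// in *poff (if poff is non-null), or 0 if not found.
int dirlookup(struct dirent *dir, int nents, const char *name, int *poff) {
    for (int i = 0; i < nents; i++) {
        if (dir[i].inum == 0)
            continue;                            // skip free entries
        if (strncmp(dir[i].name, name, DIRSIZ) == 0) {
            if (poff)
                *poff = i * (int)sizeof(struct dirent);
            return dir[i].inum;
        }
    }
    return 0;
}
```

Note the strncmp with DIRSIZ: because a maximum-length name has no terminating NUL, comparisons must be bounded rather than using strcmp.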
ip = next (kernel/fs.c:646-651). When the loop runs out of path elements, it returns ip. The procedure namex may take a long time to complete: it could involve several disk operations to read inodes and directory blocks for the directories traversed in the pathname (if they are not in the buffer cache). xv6 is carefully designed so that if an invocation of namex by one kernel thread is blocked on a disk I/O, another kernel thread looking up a different pathname can proceed concurrently. namex locks each directory in the path separately so that lookups in different directories can proceed in parallel. This concurrency introduces some challenges. For example, while one kernel thread is looking up a pathname, another kernel thread may be changing the directory tree by unlinking a directory. A potential risk is that a lookup may be searching a directory that has been deleted by another kernel thread and whose blocks have been re-used for another directory or file. xv6 avoids such races. For example, when executing dirlookup in namex, the lookup thread holds the lock on the directory, and dirlookup returns an inode that was obtained using iget. iget increases the reference count of the inode. Only after receiving the inode from dirlookup does namex release the lock on the directory. Now another thread may unlink the inode from the directory, but xv6 will not delete the inode yet, because the reference count of the inode is still larger than zero. Another risk is deadlock. For example, next points to the same inode as ip when looking up \".\". Locking next before releasing the lock on ip would result in deadlock. To avoid this deadlock, namex unlocks the directory before obtaining a lock on next. Here again we see why the separation between iget and ilock is important.", "url": "RV32ISPEC.pdf#segment76", "timestamp": "2023-10-07 23:05:07", "segment": "segment76", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.13 File descriptor layer ", "content": "A cool aspect of the Unix interface is that most resources in Unix are represented as files, including devices such as the console, pipes, and of course, real files. The file descriptor layer is the layer that achieves this uniformity. xv6 gives each process its own table of open files, or file descriptors, as we saw in Chapter 1. Each open file is represented by a struct file (kernel/file.h:1), which is a wrapper around either an inode or a pipe, plus an I/O offset. Each call to open creates a new open file (a new struct file): if multiple processes open the same file independently, the different instances will have different I/O offsets. On the other hand, a single open file (the same struct file) can appear multiple times in one process's file table and also in the file tables of multiple processes. This would happen 
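The component-at-a-time parsing that namex relies on is done by skipelem, whose behavior is simple enough to sketch directly. This version follows the description in the text (peel one element off the front, copy it into name, return the remainder); it is a reconstruction, not a copy of kernel/fs.c.

```c
// A sketch of skipelem(): "a/bb/c" -> copies "a" into name, returns "bb/c".
// Returns NULL (0) when there are no more components.
#include <string.h>

#define DIRSIZ 14

const char *skipelem(const char *path, char *name) {
    while (*path == '/')
        path++;                         // skip leading slashes
    if (*path == 0)
        return 0;                       // no component left
    const char *s = path;
    while (*path != '/' && *path != 0)
        path++;                         // find end of this component
    int len = (int)(path - s);
    if (len >= DIRSIZ) {
        memcpy(name, s, DIRSIZ);        // xv6 truncates over-long names
    } else {
        memcpy(name, s, len);
        name[len] = 0;
    }
    while (*path == '/')
        path++;                         // skip trailing slashes
    return path;
}
```

namex's main loop is then essentially: while ((path = skipelem(path, name)) != 0) { lock ip, dirlookup(ip, name), unlock ip, ip = next; }.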
if one process used open to open the file and then created aliases using dup, or shared it with a child using fork. A reference count tracks the number of references to a particular open file. A file can be open for reading or writing or both; the readable and writable fields track this. All the open files in the system are kept in a global file table, the ftable. The file table has functions to allocate a file (filealloc), create a duplicate reference (filedup), release a reference (fileclose), and read and write data (fileread and filewrite). The first three follow the now-familiar form: filealloc (kernel/file.c:30) scans the file table for an unreferenced file (f->ref == 0) and returns a new reference; filedup (kernel/file.c:48) increments the reference count; and fileclose (kernel/file.c:60) decrements it. When a file's reference count reaches zero, fileclose releases the underlying pipe or inode, according to its type. The functions filestat, fileread, and filewrite implement the stat, read, and write operations on files. filestat (kernel/file.c:88) is only allowed on inodes and calls stati. fileread and filewrite check that the operation is allowed by the open mode and then pass the call through to either the pipe or inode implementation. If the file represents an inode, fileread and filewrite use the I/O offset as the offset for the operation and then advance it (kernel/file.c:122-123) (kernel/file.c:153-154). Pipes have no concept of offset. Recall that the inode functions require the caller to handle locking (kernel/file.c:94-96) (kernel/file.c:121-124) (kernel/file.c:163-166). The inode locking has the convenient side effect that the read and write offsets are updated atomically, so that multiple processes writing to the same file simultaneously cannot overwrite each other's data, though their writes may end up interlaced.", "url": "RV32ISPEC.pdf#segment77", "timestamp": "2023-10-07 23:05:07", "segment": "segment77", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.14 Code: System calls ", "content": "With the functions that the lower layers provide, the implementation of most system calls is trivial (see kernel/sysfile.c). There are a few calls that deserve a closer look. The functions sys_link and sys_unlink edit directories, creating or removing references to inodes. They are another good example of the power of using transactions. sys_link (kernel/sysfile.c:120) begins by fetching its arguments, two strings old and new (kernel/sysfile.c:125). Assuming old exists and is not a directory (kernel/sysfile.c:129-132), sys_link increments its ip->nlink count. Then sys_link calls nameiparent to find the parent directory and final path element of new (kernel/sysfile.c:145) and creates a new directory entry pointing at old's 
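The per-open-file offset described above is what distinguishes two independent opens from two dup'd descriptors: the offset travels with the struct file, not with the inode. A minimal sketch under assumed names (an in-memory byte string stands in for the inode's content, and locking is omitted):

```c
// Sketch of the file-descriptor layer's offset handling: each struct file
// carries its own offset, so two independent opens of the same data read
// independently, while dup()'d descriptors would share one struct file.
#include <string.h>

struct file {
    int ref;            // reference count (filedup/fileclose would edit this)
    int off;            // byte offset, advanced by each read or write
    const char *data;   // stand-in for the inode's content
    int size;
};

int fileread(struct file *f, char *dst, int n) {
    if (f->off >= f->size)
        return 0;                       // at end of file
    if (n > f->size - f->off)
        n = f->size - f->off;           // short read at end of file
    memcpy(dst, f->data + f->off, n);
    f->off += n;                        // offset moves with the struct file
    return n;
}
```

In xv6 the offset update happens while the inode's sleep-lock is held, which is why concurrent writers advance the offset atomically and never overwrite each other, even if their data ends up interlaced.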
inode (kernel/sysfile.c:148). The new parent directory must exist and be on the same device as the existing inode: inode numbers only have a unique meaning on a single disk. If an error like this occurs, sys_link must go back and decrement ip->nlink. Transactions simplify the implementation: the operation requires updating multiple disk blocks, but we don't have to worry about the order in which we do them -- they either all succeed or none do. For example, without transactions, updating ip->nlink before creating a link would put the file system temporarily in an unsafe state, and a crash in between could result in havoc. With transactions we don't have to worry about this. sys_link creates a new name for an existing inode. The function create (kernel/sysfile.c:242) creates a new name for a new inode. It is a generalization of the three file-creation system calls: open with the O_CREATE flag makes a new ordinary file, mkdir makes a new directory, and mkdev makes a new device file. Like sys_link, create starts by calling nameiparent to get the inode of the parent directory. It then calls dirlookup to check whether the name already exists (kernel/sysfile.c:252). If the name does exist, create's behavior depends on which system call it is being used for: open has different semantics from mkdir and mkdev. If create is being used on behalf of open (type == T_FILE) and the name that exists is itself a regular file, then open treats that as a success, so create does too (kernel/sysfile.c:256). Otherwise, it is an error (kernel/sysfile.c:257-258). If the name does not already exist, create now allocates a new inode with ialloc (kernel/sysfile.c:261). If the new inode is a directory, create initializes it with . and .. entries. Finally, now that the data is initialized properly, create can link it into the parent directory (kernel/sysfile.c:274). create, like sys_link, holds two inode locks simultaneously: ip and dp. There is no possibility of deadlock because the inode ip is freshly allocated: no other process in the system will hold ip's lock and then try to lock dp. Using create, it is easy to implement sys_open, sys_mkdir, and sys_mknod. sys_open (kernel/sysfile.c:287) is the most complex, and creating a new file is only a small part of what it can do. If open is passed the O_CREATE flag, it calls create (kernel/sysfile.c:301); otherwise, it calls namei (kernel/sysfile.c:307). create returns a locked inode, but namei does not, so sys_open must lock the inode itself. This provides a convenient place to check that directories are only opened for reading, not writing. Assuming the inode was obtained one way or the other, sys_open allocates a file and a file descriptor (kernel/sysfile.c:325) and then fills in the file (kernel/sysfile.c:337-342). Note that no other process can access the partially initialized file, since it is only in the current process's table.", "url": "RV32ISPEC.pdf#segment78", "timestamp": "2023-10-07 23:05:07", "segment": "segment78", 
"image_urls": [], "Book": "riscv-rev1" }, { "section": "Chapter 7 examined the implementation of pipes before we even had a \ufb01le system. The function ", "content": "syspipe connects implementation le system providing way create pipe pair argument pointer space two integers record two new le descriptors allocates pipe installs le descriptors", "url": "RV32ISPEC.pdf#segment79", "timestamp": "2023-10-07 23:05:07", "segment": "segment79", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.15 Real world ", "content": "buffer cache realworld operating system signicantly complex xv6 serves two purposes caching synchronizing access disk xv6 buffer cache like v6 uses simple least recently used lru eviction policy many complex policies implemented good workloads good others efcient lru cache would eliminate linked list instead using hash table lookups heap lru evictions modern buffer caches typically integrated virtual memory system support memorymapped les xv6 logging system inefcient commit occur concurrently lesystem sys tem calls system logs entire blocks even bytes block changed performs synchronous log writes block time likely require entire disk rotation time real logging systems address problems logging way provide crash recovery early le systems used scavenger reboot example unix fsck program examine every le directory block inode free lists looking resolving inconsistencies scavenging take hours large le systems situations possible resolve inconsistencies way causes original system calls atomic recovery log much faster causes system calls atomic face crashes xv6 uses basic ondisk layout inodes directories early unix scheme remarkably persistent years bsd ufsffs linux ext2ext3 use essen tially data structures inefcient part le system layout directory requires linear scan disk blocks lookup reasonable directories disk blocks expensive directories holding many les microsoft windows ntfs mac os x hfs solaris zfs name implement direc tory ondisk balanced tree 
blocks. This is complicated but guarantees logarithmic-time directory lookups. xv6 is naive about disk failures: if a disk operation fails, xv6 panics. Whether this is reasonable depends on the hardware: if an operating system sits atop special hardware that uses redundancy to mask disk failures, perhaps the operating system sees failures so infrequently that panicking is okay. On the other hand, operating systems using plain disks should expect failures and handle them more gracefully, so that the loss of a block in one file doesn't affect the use of the rest of the file system. xv6 requires that the file system fit on one disk device and not change in size. As large databases and multimedia files drive storage requirements ever higher, operating systems are developing ways to eliminate the \"one disk per file system\" bottleneck. The basic approach is to combine many disks into a single logical disk. Hardware solutions such as RAID are still the most popular, but the current trend is moving toward implementing as much of this logic in software as possible. These software implementations typically allow rich functionality like growing or shrinking the logical device by adding or removing disks on the fly. Of course, a storage layer that can grow or shrink on the fly requires a file system that can do the same: the fixed-size array of inode blocks used by xv6 would not work well in such environments. Separating disk management from the file system may be the cleanest design, but the complex interface between the two has led some systems, like Sun's ZFS, to combine them. xv6's file system lacks many other features of modern file systems; for example, it lacks support for snapshots and incremental backup. Modern Unix systems allow many kinds of resources to be accessed with the same system calls as on-disk storage: named pipes, network connections, remotely-accessed network file systems, and monitoring and control interfaces such as /proc. Instead of xv6's if statements in fileread and filewrite, these systems typically give each open file a table of function pointers, one per operation, and call the function pointer to invoke that inode's implementation of the call. Network file systems and user-level file systems provide functions that turn those calls into network RPCs and wait for the response before returning.", "url": "RV32ISPEC.pdf#segment80", "timestamp": "2023-10-07 23:05:07", "segment": "segment80", "image_urls": [], "Book": "riscv-rev1" }, { "section": "8.16 Exercises ", "content": "1. Why does balloc panic? Can xv6 recover? 2. Why does ialloc panic? Can xv6 recover? 3. Why doesn't filealloc panic when it runs out of files? Why is this more common and therefore worth 
handling? 4. Suppose the file corresponding to ip gets unlinked by another process between sys_link's calls to iunlock(ip) and dirlink. Will the link be created correctly? Why or why not? 5. create makes four function calls (one to ialloc and three to dirlink) that it requires to succeed. If any doesn't, create calls panic. Why is this acceptable? Why can't any of those four calls fail? 6. sys_chdir calls iunlock(ip) before iput(cp->cwd), which might try to lock cp->cwd, yet postponing iunlock(ip) until after the iput would not cause deadlocks. Why not? 7. Implement the lseek system call. Supporting lseek will also require that you modify filewrite to fill holes in the file with zero if lseek sets off beyond f->ip->size. 8. Add O_TRUNC and O_APPEND to open, so that > and >> operators work in the shell. 9. Modify the file system to support symbolic links. 10. Modify the file system to support named pipes. 11. Modify the file and VM system to support memory-mapped files.", "url": "RV32ISPEC.pdf#segment81", "timestamp": "2023-10-07 23:05:07", "segment": "segment81", "image_urls": [], "Book": "riscv-rev1" }, { "section": "Chapter 9 ", "content": "Concurrency revisited. Simultaneously obtaining good parallel performance, correctness despite concurrency, and understandable code is a big challenge in kernel design. Straightforward use of locks is the best path to correctness, but it is not always possible. This chapter highlights examples in which xv6 is forced to use locks in an involved way, and examples in which xv6 uses lock-like techniques that are not locks.", "url": "RV32ISPEC.pdf#segment82", "timestamp": "2023-10-07 23:05:07", "segment": "segment82", "image_urls": [], "Book": "riscv-rev1" }, { "section": "9.1 Locking patterns ", "content": "Cached items are often a challenge to lock. For example, the file system's block cache (kernel/bio.c:26) stores copies of up to NBUF disk blocks. It's vital that a given disk block have at most one copy in the cache; otherwise, different processes might make conflicting changes to different copies of what ought to be the same block. Each cached block is stored in a struct buf (kernel/buf.h:1). A struct buf has a lock field which helps ensure that only one process uses a given disk block at a time. However, that lock is not enough: what if a block is not present in the cache at all, and two processes want to use it at the same time? There is no struct buf yet (since the block isn't cached), and thus there is nothing to lock. xv6 deals with this situation by associating an additional lock (bcache.lock) with the set of identities of cached blocks. Code that needs to check if a block is cached (e.g., bget (kernel/bio.c:59)), or change the set of cached blocks, must hold 
bcache.lock; once that code has found the block and struct buf it needs, it can release bcache.lock and lock just the specific block. This is a common pattern: one lock for the set of items, plus one lock per item.

Ordinarily the same function that acquires a lock will release it. But a more precise way to view things is that a lock is acquired at the start of a sequence that must appear atomic, and released when that sequence ends. If the sequence starts and ends in different functions, or different threads, or different CPUs, then the lock acquire and release must do the same. The function of the lock is to force other uses to wait, not to pin a piece of data to a particular agent. One example is the acquire in yield (kernel/proc.c:515), which is released in the scheduler thread rather than in the acquiring process. Another example is the acquiresleep in ilock (kernel/fs.c:289); this code often sleeps while reading the disk, and it may wake up on a different CPU, which means the lock may be acquired and released on different CPUs.

Freeing an object that is protected by a lock embedded in the object is a delicate business, since owning the lock is not enough to guarantee that freeing would be correct. The problem case arises when some other thread is waiting in acquire to use the object; freeing the object implicitly frees the embedded lock, which will cause the waiting thread to malfunction. One solution is to track how many references to the object exist, so that it is only freed when the last reference disappears. See pipeclose (kernel/pipe.c:59) for an example; pi->readopen and pi->writeopen track whether the pipe has file descriptors referring to it.", "url": "RV32ISPEC.pdf#segment83", "timestamp": "2023-10-07 23:05:08", "segment": "segment83", "image_urls": [], "Book": "riscv-rev1" }, { "section": "9.2 Lock-like patterns ", "content": "In many places xv6 uses a reference count or a flag as a kind of soft lock to indicate that an object is allocated and should not be freed or re-used. A process's p->state acts in this way, as do the reference counts in the file, inode, and buf structures. While in each case a lock protects the flag or reference count, it is the latter that prevents the object from being prematurely freed.

The file system uses struct inode reference counts as a kind of shared lock that can be held by multiple processes, in order to avoid deadlocks that would occur if the code used ordinary locks. For example, the loop in namex (kernel/fs.c:626) locks the directory named by each pathname component in turn. However, namex must release each lock at the end of the loop, since if it held multiple locks it could deadlock with itself if the pathname included a dot (e.g., a/./b). It might also deadlock with a concurrent lookup involving the same directory. As Chapter 8 explains, the solution is for the loop to carry the directory
inode over to the next iteration with its reference count incremented, but not locked.

Some data items are protected by different mechanisms at different times, and may at times be protected from concurrent access implicitly by the structure of the xv6 code rather than by explicit locks. For example, when a physical page is free, it is protected by kmem.lock (kernel/kalloc.c:24). If the page is then allocated as a pipe (kernel/pipe.c:23), it is protected by a different lock, the embedded pi->lock. If the page is re-allocated for a new process's user memory, it is not protected by a lock at all. Instead, the fact that the allocator will not give that page to any other process protects it from concurrent access. The ownership of a new process's memory is complex: first the parent allocates and manipulates it in fork, then the child uses it, and, after the child exits, the parent again owns the memory and passes it to kfree. There are two lessons here: a data object may be protected from concurrency in different ways at different points in its lifetime, and the protection may take the form of implicit structure rather than explicit locks.

A final lock-like example is the need to disable interrupts around calls to mycpu() (kernel/proc.c:68). Disabling interrupts causes the calling code to be atomic with respect to timer interrupts that could force a context switch and thus move the process to a different CPU.", "url": "RV32ISPEC.pdf#segment84", "timestamp": "2023-10-07 23:05:08", "segment": "segment84", "image_urls": [], "Book": "riscv-rev1" }, { "section": "9.3 No locks at all ", "content": "There are a few places where xv6 shares mutable data with no locks at all. One is in the implementation of spin-locks themselves, although one could view the RISC-V atomic instructions as relying on locks implemented in hardware. Another is the started variable in main.c (kernel/main.c:7), used to prevent other CPUs from running until CPU zero has finished initializing xv6; the volatile ensures that the compiler actually generates load and store instructions. A third is some uses of p->parent in proc.c (kernel/proc.c:398) (kernel/proc.c:306), where proper locking could deadlock, but it seems clear that no other process could be simultaneously modifying p->parent. A fourth example is p->killed, which is set while holding p->lock (kernel/proc.c:611), but checked without holding the lock (kernel/trap.c:56).

Xv6 contains cases in which one CPU or thread writes some data and another CPU or thread reads it, but there is no specific lock dedicated to protecting that data. For example, in fork, the parent writes the child's user memory pages and the child (a different thread, perhaps on a different CPU) reads those pages; no lock explicitly protects the pages. This is not strictly
a locking problem, since the child does not start executing until after the parent has finished writing. It is a potential memory ordering problem (see Chapter 6), since without a memory barrier there is no reason to expect one CPU to see another CPU's writes. However, since the parent releases locks, and the child acquires locks as it starts, the memory barriers in acquire and release ensure that the child's CPU sees the parent's writes.", "url": "RV32ISPEC.pdf#segment85", "timestamp": "2023-10-07 23:05:08", "segment": "segment85", "image_urls": [], "Book": "riscv-rev1" }, { "section": "9.4 Parallelism ", "content": "Locking is primarily about suppressing parallelism in the interests of correctness. Because performance is also important, kernel designers often have to think about how to use locks in a way that achieves both correctness and good parallelism. While xv6 is not systematically designed for high performance, it is still worth considering which xv6 operations can execute in parallel, and which might conflict on locks.

Pipes in xv6 are an example of fairly good parallelism. Each pipe has its own lock, so that different processes can read and write different pipes in parallel on different CPUs. For a given pipe, however, the writer and reader must wait for each other to release the lock; they cannot read/write the same pipe at the same time. It is also the case that a read from an empty pipe (or a write to a full pipe) must block, but this is not due to the locking scheme.

Context switching is a more complex example. Two kernel threads, each executing on its own CPU, can call yield, sched, and swtch at the same time, and the calls will execute in parallel. The threads each hold a lock, but they are different locks, so they do not have to wait for each other. Once in the scheduler, however, the two CPUs may conflict on locks while searching the table of processes for one that is runnable. That is, xv6 is likely to get a performance benefit from multiple CPUs during context switch, but perhaps not as much as it could.

Another example is concurrent calls to fork from different processes on different CPUs. The calls may have to wait for each other on pid_lock and kmem.lock, and on the per-process locks needed to search the process table for an unused process. On the other hand, the two forking processes can copy user memory pages and format page-table pages fully in parallel.

The locking scheme in each of the above examples sacrifices parallel performance in certain cases. In each case it is possible to obtain more parallelism using a more elaborate design. Whether it is worthwhile depends on details: how often the relevant operations are invoked, how long the code spends with a contended lock held, how many CPUs might be running conflicting operations at the same time, and whether other parts of the code are more restrictive bottlenecks. It can be difficult to guess whether
a given locking scheme might cause performance problems, or whether a new design would be significantly better; measurement on realistic workloads is often required.", "url": "RV32ISPEC.pdf#segment86", "timestamp": "2023-10-07 23:05:08", "segment": "segment86", "image_urls": [], "Book": "riscv-rev1" }, { "section": "9.5 Exercises ", "content": "1. Modify xv6's pipe implementation to allow a read and a write to the same pipe to proceed in parallel on different cores. 2. Modify xv6's scheduler to reduce lock contention when different cores are looking for runnable processes at the same time. 3. Eliminate some of the serialization in xv6's fork.", "url": "RV32ISPEC.pdf#segment87", "timestamp": "2023-10-07 23:05:08", "segment": "segment87", "image_urls": [], "Book": "riscv-rev1" }, { "section": "Chapter 10 ", "content": "Summary. This text has introduced the main ideas in operating systems by studying one operating system, xv6, line by line. Some code lines embody the essence of the main ideas (e.g., context switching, the user/kernel boundary, locks, etc.) and each such line is important; other code lines provide illustrations of how to implement a particular operating-system idea and could easily be done in different ways (e.g., a better algorithm for scheduling, better on-disk data structures to represent files, better logging to allow concurrent transactions, etc.). All the ideas were illustrated in the context of one particular, very successful system-call interface, the Unix interface, but those ideas carry over to the design of other operating systems.", "url": "RV32ISPEC.pdf#segment88", "timestamp": "2023-10-07 23:05:08", "segment": "segment88", "image_urls": [], "Book": "riscv-rev1" }, { "section": "Chapter 1 ", "content": "Introduction. RISC-V (pronounced 'risk-five') is a new instruction-set architecture (ISA) that was originally designed to support computer architecture research and education, but which we now hope will also become a standard free and open architecture for industry implementations. Our goals in defining RISC-V include:
- A completely open ISA that is freely available to academia and industry.
- A real ISA suitable for direct native hardware implementation, not just simulation or binary translation.
- An ISA that avoids 'over-architecting' for a particular microarchitecture style (e.g., microcoded, in-order, decoupled, out-of-order) or implementation technology (e.g., full-custom, ASIC, FPGA), but which
allows efficient implementation in any of these.
- An ISA separated into a small base integer ISA, usable by itself as a base for customized accelerators or for educational purposes, and optional standard extensions to support general-purpose software development.
- Support for the revised 2008 IEEE-754 floating-point standard [14].
- An ISA supporting extensive user-level ISA extensions and specialized variants.
- Both 32-bit and 64-bit address space variants for applications, operating system kernels, and hardware implementations.
- An ISA with support for highly-parallel multicore or manycore implementations, including heterogeneous multiprocessors.
- Optional variable-length instructions to both expand available instruction encoding space and to support an optional dense instruction encoding for improved performance, static code size, and energy efficiency.
- A fully virtualizable ISA to ease hypervisor development.
- An ISA that simplifies experiments with new supervisor-level and hypervisor-level ISA designs.

Commentary on our design decisions is formatted as in this paragraph, and can be skipped if the reader is only interested in the specification itself.

The name RISC-V was chosen to represent the fifth major RISC ISA design from UC Berkeley (RISC-I [23], RISC-II [15], SOAR [32], and SPUR [18] were the first four). We also pun on the use of the Roman numeral 'V' to signify 'variations' and 'vectors', as support for a range of architecture research, including various data-parallel accelerators, is an explicit goal of the ISA design.

We developed RISC-V to support our own needs in research and education, where our group is particularly interested in actual hardware implementations of research ideas (we have completed eleven different silicon fabrications of RISC-V since the first edition of this specification), and in providing real implementations for students to explore in classes (RISC-V processor RTL designs have been used in multiple undergraduate and graduate classes at Berkeley). In our current research, we are especially interested in the move towards specialized and heterogeneous accelerators, driven by the power constraints imposed by the end of conventional transistor scaling. We wanted a highly flexible and extensible base ISA around which to build our research effort.

A question we have been repeatedly asked is 'Why develop a new ISA?' The biggest obvious benefit of using an existing commercial ISA is the large and widely supported software ecosystem, both development tools and ported applications, which can be leveraged in research and teaching. Other benefits include the existence of
large amounts of documentation and tutorial examples. However, our experience of using commercial instruction sets for research and teaching is that these benefits are smaller in practice and do not outweigh the disadvantages:

Commercial ISAs are proprietary. Except for SPARC V8, which is an open IEEE standard [2], the owners of commercial ISAs carefully guard their intellectual property and do not welcome freely available competitive implementations. This is much less of an issue for academic research and teaching using only software simulators, but is a major concern for groups wishing to share actual RTL implementations. It is also a major concern for entities who do not want to trust the few sources of commercial ISA implementations but who are prohibited from creating their own clean-room implementations. We cannot guarantee that all RISC-V implementations will be free of third-party patent infringements, but we can guarantee that we will not attempt to sue a RISC-V implementor.

Commercial ISAs are only popular in certain market domains. The most obvious examples at the time of writing are that the ARM architecture is not well supported in the server space, and the Intel x86 architecture (or, for that matter, almost every other architecture) is not well supported in the mobile space, though both Intel and ARM are attempting to enter each other's market segments. Another example is ARC and Tensilica, which provide extensible cores but are focused on the embedded space. This market segmentation dilutes the benefit of supporting a particular commercial ISA, as in practice the software ecosystem only exists for certain domains and has to be built for others.

Commercial ISAs come and go. Previous research infrastructures have been built around commercial ISAs that are no longer popular (SPARC, MIPS) or even no longer in production (Alpha). These lose the benefit of an active software ecosystem, and the lingering intellectual property issues around the ISA and supporting tools interfere with the ability of interested third parties to continue supporting the ISA. An open ISA might also lose popularity, but any interested party can continue using and developing the ecosystem.

Popular commercial ISAs are complex. The dominant commercial ISAs (x86 and ARM) are both very complex to implement in hardware to the level of supporting common software stacks and operating systems. Worse, nearly all the complexity is due to bad, or at least outdated, ISA design decisions rather than features that truly improve efficiency.

Commercial ISAs alone are not enough to bring up applications. Even if we expend the effort to implement a commercial ISA, this is not enough to run existing applications for that ISA. Most applications need a complete ABI
(application binary interface) to run, not just the user-level ISA. Most ABIs rely on libraries, which in turn rely on operating system support. To run an existing operating system requires implementing the supervisor-level ISA and the device interfaces expected by the OS, which are usually much less well-specified and considerably more complex to implement than the user-level ISA.

Popular commercial ISAs were not designed for extensibility. The dominant commercial ISAs were not particularly designed for extensibility and, as a consequence, have added considerable instruction encoding complexity as their instruction sets have grown. Companies such as Tensilica (acquired by Cadence) and ARC (acquired by Synopsys) have built ISAs and toolchains around extensibility, but have focused on embedded applications rather than general-purpose computing systems.

A modified commercial ISA is a new ISA. One of our main goals is to support architecture research, including major ISA extensions. Even small extensions diminish the benefit of using a standard ISA, as compilers have to be modified and applications rebuilt from source code to use the extension. Larger extensions that introduce new architectural state also require modifications to the operating system. Ultimately, a modified commercial ISA becomes a new ISA, but carries along all the legacy baggage of the base ISA.

Our position is that the ISA is perhaps the most important interface in a computing system, and there is no reason that such an important interface should be proprietary. The dominant commercial ISAs are based on instruction set concepts that were already well known over 30 years ago. Software developers should be able to target an open standard hardware target, and commercial processor designers should compete on implementation quality.

We are far from the first to contemplate an open ISA design suitable for hardware implementation. We also considered other existing open ISA designs, of which the closest to our goals was the OpenRISC architecture [22]. We decided against adopting the OpenRISC ISA for several technical reasons:
- OpenRISC has condition codes and branch delay slots, which complicate higher-performance implementations.
- OpenRISC uses a fixed 32-bit encoding and 16-bit immediates, which precludes a denser instruction encoding and limits space for later expansion of the ISA.
- OpenRISC does not support the 2008 revision of the IEEE 754 floating-point standard.
- The OpenRISC 64-bit design had not been completed when we began.

By starting from a clean slate, we could design an ISA that met all of our goals, though, of course, it took far more effort than we had planned at the outset. We have now invested considerable effort in
building the RISC-V ISA infrastructure, including documentation, compiler tool chains, operating system ports, reference ISA simulators, FPGA implementations, efficient ASIC implementations, architecture test suites, and teaching materials. Since the last edition of this manual, there has been considerable uptake of the RISC-V ISA in both academia and industry, and we have created the non-profit RISC-V Foundation to protect and promote the standard. The RISC-V Foundation website at http://riscv.org contains the latest information on Foundation membership and various open-source projects using RISC-V.

The RISC-V manual is structured in two volumes. This volume covers the user-level ISA design, including optional ISA extensions. The second volume provides the design of the privileged architecture. In the user-level manual, we aim to remove any dependence on particular microarchitectural features or on privileged architecture details, both for clarity and to allow maximum flexibility for alternative implementations.", "url": "RV32ISPEC.pdf#segment0", "timestamp": "2023-10-08 02:32:08", "segment": "segment0", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "1.1 RISC-V ISA Overview ", "content": "The RISC-V ISA is defined as a base integer ISA, which must be present in any implementation, plus optional extensions to the base ISA. The base integer ISA is very similar to that of the early RISC processors, except with no branch delay slots and with support for optional variable-length instruction encodings. The base is carefully restricted to a minimal set of instructions sufficient to provide a reasonable target for compilers, assemblers, linkers, and operating systems (with additional supervisor-level operations), and so provides a convenient ISA and software toolchain 'skeleton' around which more customized processor ISAs can be built.

Each base integer instruction set is characterized by the width of the integer registers and the corresponding size of the user address space. There are two primary base integer variants, RV32I and RV64I, described in Chapters 2 and 4, which provide 32-bit or 64-bit user-level address spaces respectively. Hardware implementations and operating systems might provide only one of RV32I or RV64I for user programs. Chapter 3 describes the RV32E subset variant of the RV32I base instruction set, which has been added to support small microcontrollers. Chapter 5 describes a future RV128I variant of the base integer instruction set supporting a flat 128-bit
user address space. The base integer instruction sets use a two's-complement representation for signed integer values.

Although 64-bit address spaces are a requirement for larger systems, we believe 32-bit address spaces will remain adequate for many embedded and client devices for decades to come, and will be desirable to lower memory traffic and energy consumption. In addition, 32-bit address spaces are sufficient for educational purposes. A larger flat 128-bit address space might eventually be required, so we ensured this could be accommodated within the RISC-V ISA framework.

The base integer ISA may be subset by a hardware implementation, but opcode traps and software emulation by a privileged layer must then be used to implement functionality not provided by hardware. Subsets of the base integer ISA might be useful for pedagogical purposes, but the base has been defined such that there should be little incentive to subset a real hardware implementation beyond omitting support for misaligned memory accesses and treating all system instructions as a single trap.

RISC-V has been designed to support extensive customization and specialization. The base integer ISA can be extended with one or more optional instruction-set extensions, but the base integer instructions cannot be redefined. We divide each RISC-V instruction-set extension into standard and non-standard extensions. Standard extensions should be generally useful and should not conflict with other standard extensions. Non-standard extensions may be highly specialized, or may conflict with other standard or non-standard extensions. Instruction-set extensions may provide slightly different functionality depending on the width of the base integer instruction set. Chapter 21 describes various ways of extending the RISC-V ISA. We have also developed a naming convention for RISC-V base instructions and instruction-set extensions, described in detail in Chapter 22.

To support more general software development, a set of standard extensions is defined to provide integer multiply/divide, atomic operations, and single- and double-precision floating-point arithmetic. The base integer ISA is named 'I' (prefixed by RV32 or RV64 depending on integer register width), and contains integer computational instructions, integer loads, integer stores, and control-flow instructions; it is mandatory for all RISC-V implementations. The standard integer multiplication and division extension is named 'M', and adds instructions to multiply and divide values held in the integer registers. The standard
atomic-instruction extension, denoted 'A', adds instructions that atomically read, modify, and write memory for inter-processor synchronization. The standard single-precision floating-point extension, denoted 'F', adds floating-point registers, single-precision computational instructions, and single-precision loads and stores. The standard double-precision floating-point extension, denoted 'D', expands the floating-point registers and adds double-precision computational instructions, loads, and stores. An integer base plus these four standard extensions ('IMAFD') is given the abbreviation 'G' and provides a general-purpose scalar instruction set. RV32G and RV64G are currently the default targets of our compiler toolchains. Later chapters describe these and other planned standard RISC-V extensions.

Beyond the base integer ISA and the standard extensions, it is rare that a new instruction will provide a significant benefit for all applications, although it may be very beneficial for a certain domain. As energy efficiency concerns are forcing greater specialization, we believe it is important to simplify the required portion of an ISA specification. Whereas other architectures usually treat their ISA as a single entity, which changes to a new version as instructions are added over time, RISC-V will endeavor to keep the base and each standard extension constant over time, and instead layer new instructions as further optional extensions. For example, the base integer ISAs will continue as fully supported standalone ISAs, regardless of any subsequent extensions. With the 2.0 release of the user ISA specification, we intend the RV32IMAFD and RV64IMAFD base and standard extensions (aka RV32G and RV64G) to remain constant for future development.", "url": "RV32ISPEC.pdf#segment1", "timestamp": "2023-10-08 02:32:09", "segment": "segment1", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "1.2 Instruction Length Encoding ", "content": "The base RISC-V ISA has fixed-length 32-bit instructions that must be naturally aligned on 32-bit boundaries. However, the standard RISC-V encoding scheme is designed to support ISA extensions with variable-length instructions, where each instruction can be any number of 16-bit instruction parcels in length and parcels are naturally aligned on 16-bit boundaries. The standard compressed ISA extension described in Chapter 12 reduces code size by providing compressed 16-bit instructions, and relaxes the alignment constraints to allow all instructions (16
bit and 32 bit) to be aligned on any 16-bit boundary, to improve code density.

Figure 1.1 illustrates the standard RISC-V instruction-length encoding convention. All the 32-bit instructions in the base ISA have their lowest two bits set to 11. The optional compressed 16-bit instruction-set extensions have their lowest two bits equal to 00, 01, or 10. Standard instruction-set extensions encoded with more than 32 bits have additional low-order bits set to 1, with the conventions for 48-bit and 64-bit lengths shown in Figure 1.1. Instruction lengths between 80 bits and 176 bits are encoded using a 3-bit field in bits [14:12], giving the number of 16-bit words in addition to the first 5 16-bit words. The encoding with bits [14:12] set to 111 is reserved for future longer instruction encodings.

Given the code size and energy savings of a compressed format, we wanted to build in support for a compressed format in the ISA encoding scheme rather than adding it as an afterthought, but to allow simpler implementations we did not want to make the compressed format mandatory. We also wanted to optionally allow longer instructions to support experimentation with larger instruction-set extensions. Although our encoding convention required a tighter encoding of the core RISC-V ISA, this has several beneficial effects.

An implementation of the standard G ISA need only hold the most-significant 30 bits in instruction caches (a 6.25% saving). On instruction cache refills, any instructions encountered with either low bit clear should be recoded into illegal 30-bit instructions before storing in the cache, to preserve illegal-instruction exception behavior.

Perhaps more importantly, by condensing our base ISA into a subset of the 32-bit instruction word, we leave more space available for custom extensions. In particular, the base RV32I ISA uses less than 1/8 of the encoding space in the 32-bit instruction word. As described in Chapter 21, an implementation that does not require support for the standard compressed instruction extension can map 3 additional 30-bit instruction spaces into the 32-bit fixed-width format, while preserving support for standard 32-bit instruction-set extensions. If the implementation also does not need instructions longer than 32 bits, it can recover four more major opcodes.

We consider it a feature that any length of instruction containing all zero bits is not legal, as this quickly traps erroneous jumps into zeroed memory regions. Similarly, we also reserve the instruction encoding containing all ones to be an illegal instruction, to catch the other common pattern observed with unprogrammed non-volatile memory
devices, disconnected memory buses, or broken memory devices.

The base RISC-V ISA has a little-endian memory system, but non-standard variants can provide a big-endian or bi-endian memory system. Instructions are stored in memory with each 16-bit parcel stored in a memory halfword according to the implementation's natural endianness. Parcels forming one instruction are stored at increasing halfword addresses, with the lowest-addressed parcel holding the lowest-numbered bits in the instruction specification, i.e., instructions are always stored in a little-endian sequence of parcels regardless of the memory system endianness. The code sequence in Figure 1.2 will store a 32-bit instruction to memory correctly regardless of memory system endianness.

We chose little-endian byte ordering for the RISC-V memory system because little-endian systems are currently dominant commercially (all x86 systems; iOS, Android, and Windows for ARM). A minor point is that we have also found little-endian memory systems to be more natural for hardware designers. However, certain application areas, such as IP networking, operate on big-endian data structures, and so we leave open the possibility of non-standard big-endian or bi-endian systems.

We have to fix the order in which instruction parcels are stored in memory, independent of memory system endianness, to ensure that the length-encoding bits always appear first. This operates correctly on both big- and little-endian memory systems, and avoids misaligned accesses when used with variable-length instruction-set extensions. Halfword address order allows the length of a variable-length instruction to be quickly determined by the instruction fetch unit by examining the first bits of the first 16-bit instruction parcel. Once we had decided to fix the little-endian memory system and instruction parcel ordering, this naturally led to placing the length-encoding bits in the LSB positions of the instruction format, to avoid breaking up opcode fields.", "url": "RV32ISPEC.pdf#segment2", "timestamp": "2023-10-08 02:32:09", "segment": "segment2", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%201.1.png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%201.2.png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "1.3 Exceptions, Traps, and Interrupts ", "content": "We use the term
exception to refer to an unusual condition occurring at run time associated with an instruction in the current RISC-V thread. We use the term trap to refer to the synchronous transfer of control to a trap handler caused by an exceptional condition occurring within a RISC-V thread. Trap handlers usually execute in a more privileged environment. We use the term interrupt to refer to an external event that occurs asynchronously to the current RISC-V thread. When an interrupt that must be serviced occurs, some instruction is selected to receive the interrupt exception and subsequently experiences a trap.

The instruction descriptions in the following chapters describe conditions that raise an exception during execution. Whether and how these are converted into traps is dependent on the execution environment, though the expectation is that most environments will take a precise trap when an exception is signaled (except for floating-point exceptions, which, in the standard floating-point extensions, do not cause traps).

Our use of 'exception' and 'trap' matches that in the IEEE-754 floating-point standard.", "url": "RV32ISPEC.pdf#segment3", "timestamp": "2023-10-08 02:32:09", "segment": "segment3", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 2 ", "content": "RV32I base integer instruction set, version 2.0. This chapter describes version 2.0 of the RV32I base integer instruction set. Much of the commentary also applies to the RV64I variant. RV32I was designed to be sufficient to form a compiler target and to support modern operating system environments. The ISA was also designed to reduce the hardware required in a minimal implementation. RV32I contains 47 unique instructions, though a simple implementation might cover the eight SCALL/SBREAK/CSRR* instructions with a single SYSTEM hardware instruction that always traps, and might be able to implement the FENCE and FENCE.I instructions as NOPs, reducing hardware instruction count to 38 total. RV32I can emulate almost any other ISA extension except the A extension, which requires additional hardware support for atomicity.", "url": "RV32ISPEC.pdf#segment4", "timestamp": "2023-10-08 02:32:09", "segment": "segment4", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "2.1 Programmers\u2019 Model for Base Integer Subset ", "content": "Figure 2.1 shows the user-visible state for the base integer subset. There are 31 general-purpose registers x1-x31, which hold integer values.
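As an aside, the user-visible state just described, 31 general-purpose registers plus a program counter, can be modeled in a few lines of C. This is an illustrative sketch only, not part of the specification; the struct and function names are invented for the example, and it anticipates the x0-reads-as-zero convention of the base ISA:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of RV32I user-visible integer state (XLEN = 32):
 * registers x0..x31 plus the program counter.  x0 is hardwired to 0,
 * so reads of it return 0 and writes to it are discarded. */
typedef struct {
    uint32_t x[32];   /* x[0] exists only as padding; never read directly */
    uint32_t pc;      /* address of the current instruction */
} rv32i_state;

static uint32_t reg_read(const rv32i_state *s, unsigned r) {
    return (r == 0) ? 0 : s->x[r];
}

static void reg_write(rv32i_state *s, unsigned r, uint32_t v) {
    if (r != 0)       /* writes to x0 have no architectural effect */
        s->x[r] = v;
}
```

ISA simulators typically centralize the x0 special case in accessor functions like these, so the per-instruction code needs no zero-register checks.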
Register x0 is hardwired to the constant 0. There is no hardwired subroutine return address link register; instead, the standard software calling convention uses register x1 to hold the return address on a call. For RV32, the x registers are 32 bits wide, and for RV64, they are 64 bits wide. This document uses the term XLEN to refer to the current width of an x register in bits (either 32 or 64). There is one additional user-visible register: the program counter pc holds the address of the current instruction.

The number of available architectural registers can have large impacts on code size, performance, and energy consumption. Although 16 registers would arguably be sufficient for an integer ISA running compiled code, it is impossible to encode a complete ISA with 16 registers in 16-bit instructions using a 3-address format. Although a 2-address format would be possible, it would increase instruction count and lower efficiency. We wanted to avoid intermediate instruction sizes, such as Xtensa's 24-bit instructions, to simplify base hardware implementations, and, once a 32-bit instruction size was adopted, it was straightforward to support 32 integer registers. A larger number of integer registers also helps performance on high-performance code, where there can be extensive use of loop unrolling, software pipelining, and cache tiling. For these reasons, we chose a conventional size of 32 integer registers for the base ISA. Dynamic register usage tends to be dominated by a few frequently accessed registers, and register-file implementations can be optimized to reduce access energy for the frequently accessed registers [31]. The optional compressed 16-bit instruction format mostly only accesses 8 registers and hence can provide a dense instruction encoding, while additional instruction-set extensions could support a much larger register space (either flat or hierarchical) if desired. For resource-constrained embedded applications, we have defined the RV32E subset, which has only 16 registers (Chapter 3).", "url": "RV32ISPEC.pdf#segment5", "timestamp": "2023-10-08 02:32:10", "segment": "segment5", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%202.1.png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "2.2 Base Instruction Formats ", "content": "In the base ISA, there are four core instruction formats (R/I/S/U), as shown in Figure 2.2. All are a fixed 32 bits in length and must be aligned on a four-byte boundary in
memory. An instruction-address-misaligned exception is generated on a taken branch or unconditional jump if the target address is not four-byte aligned. No instruction-fetch-misaligned exception is generated for a conditional branch that is not taken. The alignment constraint for base ISA instructions is relaxed to a two-byte boundary when instruction extensions with 16-bit lengths, or other odd multiples of 16-bit lengths, are added to the RISC-V ISA.

The RISC-V ISA keeps the source (rs1 and rs2) and destination (rd) registers at the same position in all formats to simplify decoding. Except for the 5-bit immediates used in CSR instructions (Section 2.8), immediates are always sign-extended, and are generally packed towards the leftmost available bits in the instruction, having been allocated to reduce hardware complexity. In particular, the sign bit for all immediates is always in bit 31 of the instruction, to speed sign-extension circuitry.

Decoding register specifiers is usually on the critical path in implementations, so the instruction format was chosen to keep all register specifiers at the same position in all formats, at the expense of having to move immediate bits across formats (a property shared with RISC-IV, aka SPUR [18]).

In practice, most immediates are either small or require all XLEN bits. We chose an asymmetric immediate split (12 bits in regular instructions, plus a special load-upper-immediate instruction with 20 bits) to increase the opcode space available for regular instructions. Immediates are sign-extended because we did not observe a benefit to using zero-extension for some immediates, as in the MIPS ISA, and we wanted to keep the ISA as simple as possible.", "url": "RV32ISPEC.pdf#segment6", "timestamp": "2023-10-08 02:32:10", "segment": "segment6", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%202.2.png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "2.3 Immediate Encoding Variants ", "content": "There are a further two variants of the instruction formats (B/J) based on the handling of immediates, as shown in Figure 2.3. The only difference between the S and B formats is that the 12-bit immediate field is used to encode branch offsets in multiples of 2 in the B format. Instead of shifting all bits in the instruction-encoded immediate left by one in hardware, as is conventionally done, the middle bits (imm[10:1]) and the sign bit stay in fixed positions, while the lowest bit in S format (inst[7]) encodes a high-order bit in B format. Similarly, the only difference between the U and J formats is that the 20-bit immediate is shifted left by 12 bits to form U immediates and by 1
bit to form J immediates. The location of instruction bits in the U and J format immediates is chosen to maximize overlap with the other formats and with each other.

Figure 2.4 shows the immediates produced by each of the base instruction formats, labeled to show which instruction bit (inst[y]) produces each bit of the immediate value.

Sign-extension is one of the most critical operations on immediates (particularly in RV64I), and in RISC-V the sign bit for all immediates is always held in bit 31 of the instruction, to allow sign-extension to proceed in parallel with instruction decoding.

Although more complex implementations might have separate adders for branch and jump calculations, and so would not benefit from keeping the location of immediate bits constant across types of instruction, we wanted to reduce the hardware cost of the simplest implementations. By rotating bits in the instruction encoding of B and J immediates, instead of using dynamic hardware muxes to multiply the immediate by 2, we reduce instruction signal fanout and immediate mux costs by around a factor of 2. The scrambled immediate encoding will add negligible time to static or ahead-of-time compilation. For dynamic generation of instructions there is some small additional overhead, but the most common short forward branches have straightforward immediate encodings.", "url": "RV32ISPEC.pdf#segment7", "timestamp": "2023-10-08 02:32:10", "segment": "segment7", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%202.3.png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%202.4.png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "2.4 Integer Computational Instructions ", "content": "Integer computational instructions operate on XLEN bits of values held in the integer register file. They are either encoded as register-immediate operations using the I-type format or as register-register operations using the R-type format. The destination is register rd for both register-immediate and register-register instructions. No integer computational instructions cause arithmetic exceptions.

We did not include special instruction-set support for overflow checks on integer arithmetic operations in the base instruction set, as many overflow checks can be cheaply implemented using RISC-V branches. Overflow checking for unsigned addition
requires only a single additional branch instruction after the addition: add t0, t1, t2; bltu t0, t1, overflow.

For signed addition, if one operand's sign is known, overflow checking requires only a single branch after the addition: addi t0, t1, +imm; blt t0, t1, overflow. This covers the common case of addition with an immediate operand.

For general signed addition, three additional instructions after the addition are required, leveraging the observation that the sum should be less than one of the operands if and only if the other operand is negative. In RV64, checks of 32-bit signed additions can be optimized further by comparing the results of ADD and ADDW on the operands.

Integer register-immediate instructions. ADDI adds the sign-extended 12-bit immediate to register rs1. Arithmetic overflow is ignored and the result is simply the low XLEN bits of the result. ADDI rd, rs1, 0 is used to implement the MV rd, rs1 assembler pseudo-instruction.

SLTI (set less than immediate) places the value 1 in register rd if register rs1 is less than the sign-extended immediate when both are treated as signed numbers, else 0 is written to rd. SLTIU is similar, but compares the values as unsigned numbers (i.e., the immediate is first sign-extended to XLEN bits then treated as an unsigned number). Note, SLTIU rd, rs1, 1 sets rd to 1 if rs1 equals zero, otherwise it sets rd to 0 (assembler pseudo-op SEQZ rd, rs).

ANDI, ORI, and XORI are logical operations that perform bitwise AND, OR, and XOR on register rs1 and the sign-extended 12-bit immediate, and place the result in rd. Note, XORI rd, rs1, -1 performs a bitwise logical inversion of register rs1 (assembler pseudo-instruction NOT rd, rs).

Shifts by a constant are encoded as a specialization of the I-type format. The operand to be shifted is in rs1, and the shift amount is encoded in the lower 5 bits of the I-immediate field. The right shift type is encoded in a high bit of the I-immediate. SLLI is a logical left shift (zeros are shifted into the lower bits); SRLI is a logical right shift (zeros are shifted into the upper bits); and SRAI is an arithmetic right shift (the original sign bit is copied into the vacated upper bits).

LUI (load upper immediate) is used to build 32-bit constants and uses the U-type format. LUI places the U-immediate value in the top 20 bits of the destination register rd, filling in the lowest 12 bits with zeros.

AUIPC (add upper immediate to pc) is used to build pc-relative addresses and uses the U-type format. AUIPC forms a 32-bit offset from the 20-bit U-immediate, filling in the lowest 12 bits with zeros, adds this offset to the pc, then places the result in register rd. The AUIPC instruction supports two-instruction sequences to access arbitrary
The combination of an AUIPC and the 12-bit immediate in a JALR can transfer control to any 32-bit pc-relative address, while an AUIPC plus the 12-bit immediate offset in regular load or store instructions can access any 32-bit pc-relative data address. The current pc can be obtained by setting the U-immediate to 0. Although a JAL +4 instruction could also be used to obtain the pc, it might cause pipeline breaks in simpler microarchitectures or pollute the BTB structures in more complex microarchitectures. Integer Register-Register Operations: RV32I defines several arithmetic R-type operations. All operations read the rs1 and rs2 registers as source operands and write the result into register rd. The funct7 and funct3 fields select the type of operation. ADD and SUB perform addition and subtraction respectively. Overflows are ignored and the low XLEN bits of results are written to the destination. SLT and SLTU perform signed and unsigned compares respectively, writing 1 to rd if rs1 < rs2, 0 otherwise. Note, SLTU rd, x0, rs2 sets rd to 1 if rs2 is not equal to zero, otherwise sets rd to zero (assembler pseudo-op SNEZ rd, rs). AND, OR, and XOR perform bitwise logical operations. SLL, SRL, and SRA perform logical left, logical right, and arithmetic right shifts on the value in register rs1 by the shift amount held in the lower 5 bits of register rs2. NOP Instruction: The NOP instruction does not change any user-visible state, except for advancing the pc. NOP is encoded as ADDI x0, x0, 0. NOPs can be used to align code segments to microarchitecturally significant address boundaries, or to leave space for inline code modifications. Although there are many possible ways to encode a NOP, we define a canonical NOP encoding to allow microarchitectural optimizations as well as more readable disassembly output.", "url": "RV32ISPEC.pdf#segment8", "timestamp": "2023-10-08 02:32:10", "segment": "segment8", "image_urls": 
["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(2.4).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202(2.4).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%203(2.4).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%204(2.4).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%205(2.4).png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "2.5 Control Transfer Instructions ", "content": "RV32I provides two types of control transfer instructions: unconditional jumps and conditional branches. Control transfer instructions in RV32I do not have architecturally visible delay slots. Unconditional Jumps: The jump and link (JAL) instruction uses the J-type format, where the J-immediate encodes a signed offset in multiples of 2 bytes. The offset is sign-extended and added to the pc to form the jump target address. Jumps can therefore target a range of plus or minus 1 MiB. JAL stores the address of the instruction following the jump (pc+4) into register rd. The standard software calling convention uses x1 as the return address register and x5 as an alternate link register. The alternate link register supports calling millicode routines (e.g., those to save and restore registers in compressed code) while preserving the regular return address register. The register x5 was chosen as the alternate link register as it maps to a temporary in the standard calling convention, and has an encoding that is only one bit different than the regular link register. Plain unconditional jumps (assembler pseudo-op J) are encoded as a JAL with rd=x0. The indirect jump instruction JALR (jump and link register) uses the I-type encoding. The target address is obtained by adding the 12-bit signed I-immediate to register rs1, then setting the least-significant bit of the result to zero. The address of the instruction following the jump (pc+4) is written to register rd. Register x0 can be used as the destination if the result is not required. The unconditional jump instructions all use pc-relative addressing to help support position-independent code. The JALR instruction was defined to enable a two-instruction sequence to jump anywhere in a 32-bit absolute address range: a LUI instruction can first load rs1 with the upper 20 bits of a target address, then JALR can add in the lower bits. Similarly, AUIPC then JALR can jump anywhere in a 32-bit pc-relative address range.
Note that the JALR instruction does not treat the 12-bit immediate as multiples of 2 bytes, unlike the conditional branch instructions. This avoids one more immediate format in hardware. In practice, most uses of JALR will have either a zero immediate or be paired with a LUI or AUIPC, so the slight reduction in range is not significant. The JALR instruction ignores the lowest bit of the calculated target address. This both simplifies the hardware slightly and allows the low bit of function pointers to be used to store auxiliary information, although there is potentially a slight loss of error checking in this case. In practice, jumps to an incorrect instruction address will usually quickly raise an exception. When used with a base of rs1=x0, JALR can be used to implement a single-instruction subroutine call to the lowest 2 KiB or highest 2 KiB address region from anywhere in the address space, which could be used to implement fast calls to a small runtime library. The JAL and JALR instructions will generate a misaligned instruction fetch exception if the target address is not aligned to a four-byte boundary. Instruction fetch misaligned exceptions are not possible on machines that support extensions with 16-bit aligned instructions, such as the compressed instruction-set extension, C. Return-address prediction stacks are a common feature of high-performance instruction-fetch units, but require accurate detection of instructions used for procedure calls and returns to be effective. For RISC-V, hints as to the instructions' usage are encoded implicitly via the register numbers used. A JAL instruction should push the return address onto a return-address stack (RAS) only when rd is x1 or x5. JALR instructions should push/pop a RAS as shown in Table 2.1. Some other ISAs have added explicit hint bits to their indirect-jump instructions to guide return-address stack manipulation. We use implicit hinting tied to the register numbers of the calling convention to reduce the encoding space used for these hints. When two different link registers (x1 and x5) are given as rs1 and rd, then the RAS is both pushed and popped to support coroutines. If rs1 and rd are the same link register (either x1 or x5), the RAS is only pushed, to enable macro-op fusion of the sequences lui ra, imm20; jalr ra, ra, imm12 and auipc ra, imm20; jalr ra, ra, imm12. Conditional Branches: All branch instructions use the B-type instruction format. The 12-bit B-immediate encodes signed offsets in multiples of 2 bytes. The offset is sign-extended and added to the current pc to give the target address. The conditional branch range is plus or minus 4 KiB.
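The return-address-stack hint rules above, which are tied purely to which registers a JALR names, can be paraphrased as a small decision function (our paraphrase of the Table 2.1 behavior, not spec code):

```python
LINK = {1, 5}  # x1 and x5 are the link registers

def jalr_ras_hint(rd, rs1):
    # Both operands are link registers: pop-then-push for different
    # registers (coroutines), push-only when they are the same register
    # (so lui/auipc + jalr sequences can be macro-op fused).
    if rd in LINK and rs1 in LINK:
        return "push" if rd == rs1 else "pop-then-push"
    if rd in LINK:
        return "push"   # ordinary call
    if rs1 in LINK:
        return "pop"    # ordinary return
    return "none"
```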
Branch instructions compare two registers. BEQ and BNE take the branch if registers rs1 and rs2 are equal or unequal respectively. BLT and BLTU take the branch if rs1 is less than rs2, using signed and unsigned comparison respectively. BGE and BGEU take the branch if rs1 is greater than or equal to rs2, using signed and unsigned comparison respectively. Note that BGT, BGTU, BLE, and BLEU can be synthesized by reversing the operands to BLT, BLTU, BGE, and BGEU respectively. Signed array bounds may be checked with a single BLTU instruction, since any negative index will compare greater than any nonnegative bound. Software should be optimized such that the sequential code path is the most common path, with less-frequently taken code paths placed out of line. Software should also assume that backward branches will be predicted taken and forward branches as not taken, at least the first time they are encountered. Dynamic predictors should quickly learn any predictable branch behavior. Unlike some other architectures, the RISC-V jump (JAL with rd=x0) instruction should always be used for unconditional branches instead of a conditional branch instruction with an always-true condition. RISC-V jumps are also pc-relative and support a much wider offset range than branches, and will not pressure conditional branch prediction tables. The conditional branches were designed to include arithmetic comparison operations between two registers (as also done in PA-RISC and Xtensa), rather than use condition codes (x86, ARM, SPARC, PowerPC), or to only compare one register against zero (Alpha, MIPS), or two registers only for equality (MIPS). This design was motivated by the observation that a combined compare-and-branch instruction fits into a regular pipeline, avoids additional condition code state or the use of a temporary register, and reduces static code size and dynamic instruction fetch traffic. Another point is that comparisons against zero require non-trivial circuit delay (especially after the move to static logic in advanced processes) and so are almost as expensive as arithmetic magnitude compares. Another advantage of a fused compare-and-branch instruction is that branches are observed earlier in the front-end instruction stream, and so can be predicted earlier. There is perhaps an advantage to a design with condition codes in the case where multiple branches can be taken based on the same condition codes, but we believe this case to be relatively rare. We considered but did not include static branch hints in the instruction encoding. These can reduce the pressure on dynamic predictors, but require more instruction encoding space and software profiling for best results, and can result in poor performance if production runs do not match profiling runs.
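The single-BLTU signed bounds check described above works because a negative index, reinterpreted as an unsigned 32-bit value, becomes a huge number that fails the unsigned comparison (illustrative model, function name our own):

```python
def index_in_bounds(index, bound):
    # "bltu index, bound": reinterpret the signed index as a 32-bit
    # unsigned value; any negative index becomes >= 2**31 and so
    # compares greater than any nonnegative bound.
    return (index & 0xFFFFFFFF) < (bound & 0xFFFFFFFF)
```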
We considered but did not include conditional moves or predicated instructions, which can effectively replace unpredictable short forward branches. Conditional moves are the simpler of the two, but are difficult to use with conditional code that might cause exceptions (memory accesses and floating-point operations). Predication adds additional flag state to a system, additional instructions to set and clear flags, and additional encoding overhead on every instruction. Both conditional move and predicated instructions add complexity to out-of-order microarchitectures, adding an implicit third source operand due to the need to copy the original value of the destination architectural register into the renamed destination physical register if the predicate is false. Also, static compile-time decisions to use predication instead of branches can result in lower performance on inputs not included in the compiler training set, especially given that unpredictable branches are rare, and becoming rarer as branch prediction techniques improve. We note that various microarchitectural techniques exist to dynamically convert unpredictable short forward branches into internally predicated code to avoid the cost of flushing pipelines on a branch mispredict [13, 17, 16], and these have been implemented in commercial processors [27]. The simplest techniques just reduce the penalty of recovering from a mispredicted short forward branch by only flushing instructions in the branch shadow instead of the entire fetch pipeline, or by fetching instructions from both sides using wide instruction fetch or idle instruction fetch slots. More complex techniques for out-of-order cores add internal predicates on instructions in the branch shadow, with the internal predicate value written by the branch instruction, allowing the branch and following instructions to be executed speculatively and out-of-order with respect to other code [27].", "url": "RV32ISPEC.pdf#segment9", "timestamp": "2023-10-08 02:32:11", "segment": "segment9", "image_urls": 
["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(2.5).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202(2.5).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%202.1.png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%203(2.5).png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "2.6 Load and Store Instructions ", "content": "RV32I is a load-store architecture, where only load and store instructions access memory and arithmetic instructions operate only on CPU registers. RV32I provides a 32-bit user address space that is byte-addressed and little-endian. The execution environment will define what portions of the address space are legal to access. Loads with a destination of x0 must still raise any exceptions and cause any other side effects, even though the load value is discarded. Load and store instructions transfer a value between the registers and memory. Loads are encoded in the I-type format and stores are S-type. The effective byte address is obtained by adding register rs1 to the sign-extended 12-bit offset. Loads copy a value from memory to register rd; stores copy the value in register rs2 to memory. The LW instruction loads a 32-bit value from memory into rd. LH loads a 16-bit value from memory, then sign-extends it to 32 bits before storing it in rd. LHU loads a 16-bit value from memory but then zero-extends it to 32 bits before storing it in rd. LB and LBU are defined analogously for 8-bit values. The SW, SH, and SB instructions store 32-bit, 16-bit, and 8-bit values from the low bits of register rs2 to memory. For best performance, the effective address for all loads and stores should be naturally aligned for each data type (i.e., on a four-byte boundary for 32-bit accesses, and a two-byte boundary for 16-bit accesses). The base ISA supports misaligned accesses, but these might run extremely slowly depending on the implementation. Furthermore, naturally aligned loads and stores are guaranteed to execute atomically, whereas misaligned loads and stores might not, and hence require additional synchronization to ensure atomicity. Misaligned accesses are occasionally required when porting legacy code, and they are essential for good performance with many applications when using any form of packed-SIMD extension. The rationale for supporting misaligned accesses via the regular load and store instructions is to simplify the addition of misaligned hardware support.
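The difference between the sign-extending (LH) and zero-extending (LHU) halfword loads described above can be sketched against a little-endian byte sequence (illustrative model; treating memory as a plain Python byte string is our assumption):

```python
def lhu(mem, addr):
    # Little-endian 16-bit load, zero-extended.
    return mem[addr] | (mem[addr + 1] << 8)

def lh(mem, addr):
    # Little-endian 16-bit load, sign-extended
    # (returned here as a signed Python int).
    v = lhu(mem, addr)
    return v - 0x10000 if v & 0x8000 else v
```

For the bytes `FE FF`, LHU yields 0xFFFE while LH yields -2: the same bits, interpreted with and without sign extension.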
One option would have been to disallow misaligned accesses in the base ISA and then provide some separate ISA support for misaligned accesses, either special instructions to help software handle misaligned accesses or a new hardware addressing mode for misaligned accesses. Special instructions are difficult to use, complicate the ISA, and often add new processor state (e.g., the SPARC VIS align address offset register) or complicate access to existing processor state (e.g., MIPS LWL/LWR partial register writes). In addition, for loop-oriented packed-SIMD code, the extra overhead when operands are misaligned motivates software to provide multiple forms of a loop depending on operand alignment, which complicates code generation and adds to loop startup overhead. New misaligned hardware addressing modes take considerable space in the instruction encoding or require very simplified addressing modes (e.g., register indirect only). We do not mandate atomicity for misaligned accesses, so simple implementations can just use a machine trap and software handler to handle misaligned accesses. If hardware misaligned support is provided, software can exploit this by simply using regular load and store instructions, and hardware can then automatically optimize accesses depending on whether runtime addresses are aligned.", "url": "RV32ISPEC.pdf#segment10", "timestamp": "2023-10-08 02:32:11", "segment": "segment10", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(2.6).png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "2.7 Memory Model ", "content": "This section is out of date, as the RISC-V memory model is currently under revision to ensure it can efficiently support current programming language memory models. The revised base memory model will contain further ordering constraints, including at least that loads to the same address from the same hart cannot be reordered, and that syntactic data dependencies between instructions are respected. The base RISC-V ISA supports multiple concurrent threads of execution within a single user address space. Each RISC-V hardware thread, or hart, has its own user register state and program counter, and executes an independent sequential instruction stream. The execution environment will define how RISC-V harts are created and managed.
RISC-V harts can communicate and synchronize with other harts either via calls to the execution environment, which are documented separately in the specification for each execution environment, or directly via the shared memory system. RISC-V harts can also interact with I/O devices, and indirectly with other harts, via loads and stores to portions of the address space assigned to I/O. We use the term hart to unambiguously and concisely describe a hardware thread as opposed to software-managed thread contexts. In the base RISC-V ISA, each RISC-V hart observes its own memory operations as if they executed sequentially in program order. RISC-V has a relaxed memory model between harts, requiring an explicit FENCE instruction to guarantee ordering between memory operations from different RISC-V harts. Chapter 7 describes the optional atomic memory instruction extensions that provide additional synchronization operations. The FENCE instruction is used to order device I/O and memory accesses as viewed by other RISC-V harts and external devices or coprocessors. Any combination of device input (I), device output (O), memory reads (R), and memory writes (W) may be ordered with respect to any combination of the same. Informally, no other RISC-V hart or external device can observe any operation in the successor set following a FENCE before any operation in the predecessor set preceding the FENCE. The execution environment will define what I/O operations are possible, and in particular, which load and store instructions might be treated and ordered as device input and device output operations respectively rather than memory reads and writes. For example, memory-mapped I/O devices will typically be accessed with uncached loads and stores that are ordered using the I and O bits rather than the R and W bits. Instruction-set extensions might also describe new coprocessor I/O instructions that will also be ordered using the I and O bits. The unused fields in the FENCE instruction, imm[11:8], rs1, and rd, are reserved for finer-grain fences in future extensions. For forward compatibility, base implementations shall ignore these fields, and standard software shall zero these fields. We chose a relaxed memory model to allow high performance from simple machine implementations; however, a completely relaxed memory model is too weak to support programming language memory models, which is why the base memory model is being revised. A relaxed memory model is also most compatible with likely future coprocessor or accelerator extensions. We separate out I/O ordering from memory R/W ordering to avoid unnecessary serialization within a device-driver hart and also to support alternative non-memory paths to control added coprocessors or I/O devices.
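The predecessor/successor-set semantics of FENCE just described can be captured in a toy model (our simplification: each memory operation is represented only by its type letter, one of I, O, R, W):

```python
def fence_blocks_reorder(earlier_op, later_op, pred, succ):
    # A FENCE with predecessor set pred and successor set succ forbids an
    # operation whose type is in pred (program-order before the fence) from
    # being observed after an operation whose type is in succ (after the
    # fence). Operation types outside the two sets are unconstrained.
    return earlier_op in pred and later_op in succ
```

A full `fence iorw, iorw` constrains everything; a narrower `fence w, r` only keeps earlier writes ahead of later reads.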
Simple implementations may additionally ignore the predecessor and successor fields and always execute a conservative fence on all operations. The FENCE.I instruction is used to synchronize the instruction and data streams. RISC-V does not guarantee that stores to instruction memory will be made visible to instruction fetches on the same RISC-V hart until a FENCE.I instruction is executed. A FENCE.I instruction only ensures that a subsequent instruction fetch on a RISC-V hart will see any previous data stores already visible to the same RISC-V hart. FENCE.I does not ensure that other RISC-V harts' instruction fetches will observe the local hart's stores in a multiprocessor system. To make a store to instruction memory visible to all RISC-V harts, the writing hart has to execute a data FENCE before requesting that all remote RISC-V harts execute a FENCE.I. The unused fields in the FENCE.I instruction, imm[11:0], rs1, and rd, are reserved for finer-grain fences in future extensions. For forward compatibility, base implementations shall ignore these fields, and standard software shall zero these fields. The FENCE.I instruction was designed to support a wide variety of implementations. A simple implementation can flush the local instruction cache and the instruction pipeline when the FENCE.I is executed. A more complex implementation might snoop the instruction (data) cache on every data (instruction) cache miss, or use an inclusive unified private L2 cache to invalidate lines from the primary instruction cache when they are being written by a local store instruction. If instruction and data caches are kept coherent in this way, then only the pipeline needs to be flushed at a FENCE.I. We considered but did not include a store-instruction-word instruction (as in MAJC [30]). JIT compilers may generate a large trace of instructions before a single FENCE.I, and amortize any instruction cache snooping/invalidation overhead by writing translated instructions to memory regions that are known not to reside in the I-cache.", "url": "RV32ISPEC.pdf#segment11", "timestamp": "2023-10-08 02:32:12", "segment": "segment11", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(2.7).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202(2.7).png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "2.8 Control and Status Register Instructions ", 
"content": "SYSTEM instructions are used to access system functionality that might require privileged access, and are encoded using the I-type instruction format. They can be divided into two main classes: those that atomically read-modify-write control and status registers (CSRs), and all other potentially privileged instructions. The CSR instructions are described in this section, and the two user-level SYSTEM instructions are described in the following section. The SYSTEM instructions are defined to allow simpler implementations to always trap to a single software trap handler. More sophisticated implementations might execute more of each SYSTEM instruction in hardware. CSR Instructions: we define the full set of CSR instructions here, although in the standard user-level base ISA, only a handful of read-only counter CSRs are accessible. The CSRRW (atomic read/write CSR) instruction atomically swaps values in the CSRs and integer registers. CSRRW reads the old value of the CSR, zero-extends the value to XLEN bits, then writes it to integer register rd. The initial value in rs1 is written to the CSR. If rd=x0, then the instruction shall not read the CSR and shall not cause any of the side effects that might occur on a CSR read. The CSRRS (atomic read and set bits in CSR) instruction reads the value of the CSR, zero-extends the value to XLEN bits, and writes it to integer register rd. The initial value in integer register rs1 is treated as a bit mask that specifies bit positions to be set in the CSR. Any bit that is high in rs1 will cause the corresponding bit to be set in the CSR, if that CSR bit is writable. Other bits in the CSR are unaffected (though CSRs might have side effects when written). The CSRRC (atomic read and clear bits in CSR) instruction reads the value of the CSR, zero-extends the value to XLEN bits, and writes it to integer register rd. The initial value in integer register rs1 is treated as a bit mask that specifies bit positions to be cleared in the CSR. Any bit that is high in rs1 will cause the corresponding bit to be cleared in the CSR, if that CSR bit is writable. Other bits in the CSR are unaffected. For both CSRRS and CSRRC, if rs1=x0, then the instruction will not write to the CSR at all, and so shall not cause any of the side effects that might otherwise occur on a CSR write, such as raising illegal instruction exceptions on accesses to read-only CSRs. Note that if rs1 specifies a register other than x0 that holds a zero value, the instruction will still attempt to write the unmodified value back to the CSR and will cause any attendant side effects. The CSRRWI, CSRRSI, and CSRRCI variants are similar to CSRRW, CSRRS, and CSRRC respectively, except they update the CSR using an XLEN-bit value obtained by zero-extending a 5-bit unsigned immediate (uimm[4:0]) field encoded in the rs1 field, instead of a value from an integer register.
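The read-modify-write behavior of CSRRS and CSRRC described above can be modeled as pure value transforms (an illustrative sketch that ignores write side effects and read-only bits; names are ours):

```python
def csrrs(csr, rs1):
    # Atomic read-and-set-bits: returns (value read into rd, new CSR value).
    # Every bit that is high in rs1 is set in the CSR.
    return csr, csr | rs1

def csrrc(csr, rs1):
    # Atomic read-and-clear-bits: returns (value read into rd, new CSR value).
    # Every bit that is high in rs1 is cleared in the CSR.
    return csr, csr & ~rs1
```

With `rs1 = 0` both functions leave the CSR unchanged, matching the rs1=x0 case in the text (where the hardware additionally skips the write entirely).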
For CSRRSI and CSRRCI, if the uimm[4:0] field is zero, then these instructions will not write to the CSR, and shall not cause any of the side effects that might otherwise occur on a CSR write. For CSRRWI, if rd=x0, then the instruction shall not read the CSR and shall not cause any of the side effects that might occur on a CSR read. Some CSRs, such as the instructions-retired counter instret, may be modified as side effects of instruction execution. In these cases, if a CSR access instruction reads a CSR, it reads the value prior to the execution of the instruction. If a CSR access instruction writes a CSR, the update occurs as a result of the execution of the instruction. In particular, a value written to instret by one instruction will be the value read by the following instruction (i.e., the increment of instret caused by the first instruction retiring happens before the write of the new value). The assembler pseudoinstruction to read a CSR, CSRR rd, csr, is encoded as CSRRS rd, csr, x0. The assembler pseudoinstruction to write a CSR, CSRW csr, rs1, is encoded as CSRRW x0, csr, rs1, while CSRWI csr, uimm is encoded as CSRRWI x0, csr, uimm. Further assembler pseudoinstructions are defined to set and clear bits in a CSR when the old value is not required: CSRS/CSRC csr, rs1 and CSRSI/CSRCI csr, uimm. Timers and Counters: RV32I provides a number of 64-bit read-only user-level counters, which are mapped into the 12-bit CSR address space and accessed in 32-bit pieces using CSRRS instructions. The RDCYCLE pseudoinstruction reads the low XLEN bits of the cycle CSR, which holds a count of the number of clock cycles executed by the processor core on which the hart is running, from an arbitrary start time in the past. RDCYCLEH is an RV32I-only instruction that reads bits 63-32 of the same cycle counter. The underlying 64-bit counter should never overflow in practice. The rate at which the cycle counter advances will depend on the implementation and operating environment. The execution environment should provide a means to determine the current rate (cycles/second) at which the cycle counter is incrementing. The RDTIME pseudoinstruction reads the low XLEN bits of the time CSR, which counts wall-clock real time that has passed from an arbitrary start time in the past. RDTIMEH is an RV32I-only instruction that reads bits 63-32 of the same real-time counter. The underlying 64-bit counter should never overflow in practice. The execution environment should provide a means of determining the period of the real-time counter (seconds/tick), and the period must be constant. The real-time clocks of all harts in a single user application should be synchronized to within one tick of the real-time clock. The environment should provide a means to determine the accuracy of the clock. The RDINSTRET pseudoinstruction reads the low XLEN bits of the instret CSR, which counts the number of instructions retired by this hart from some arbitrary start point in the past.
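Because the 64-bit counters described above are read as two 32-bit halves on RV32 (e.g., RDCYCLE then RDCYCLEH), a carry can propagate into the upper half between the two reads. The standard defense is a read-retry loop; a sketch with hypothetical read callbacks standing in for the CSR reads:

```python
def read_counter64(read_hi, read_lo):
    # Equivalent in spirit to the RV32 sequence:
    #   again: rdcycleh x3 ; rdcycle x2 ; rdcycleh x4 ; bne x3, x4, again
    # Retry until the upper half is the same before and after reading the
    # lower half, so the two halves are known to belong together.
    while True:
        hi = read_hi()
        lo = read_lo()
        if read_hi() == hi:
            return (hi << 32) | lo
```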
RDINSTRETH is an RV32I-only instruction that reads bits 63-32 of the same instruction counter. The underlying 64-bit counter should never overflow in practice. The code sequence in Figure 2.5 will read a valid 64-bit cycle counter value into x3:x2, even if the counter overflows its lower half between reading its upper and lower halves. We mandate that these basic counters be provided in all implementations, as they are essential for basic performance analysis, adaptive and dynamic optimization, and to allow an application to work with real-time streams. Additional counters should be provided to help diagnose performance problems, and these should be made accessible from user-level application code with low overhead. We required the counters to be 64 bits wide, even on RV32, as otherwise it is difficult for software to determine if values have overflowed. For a low-end implementation, the upper 32 bits of each counter can be implemented using software counters incremented by a trap handler triggered by overflow of the lower 32 bits. The sample code shows how the full 64-bit width value can be safely read using individual 32-bit instructions. In some applications, it is important to be able to read multiple counters at the same instant in time. When run under a multitasking environment, a user thread can suffer a context switch while attempting to read the counters. One solution is for the user thread to read the real-time counter before and after reading the other counters to determine if a context switch occurred in the middle of the sequence, in which case the reads can be retried. We considered adding output latches to allow a user thread to snapshot the counter values atomically, but this would increase the size of the user context, especially for implementations with a richer set of counters.", "url": "RV32ISPEC.pdf#segment12", "timestamp": "2023-10-08 02:32:13", "segment": "segment12", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(2.8).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202(2.8).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%202.5.png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "2.9 Environment Call and Breakpoints ", "content": "The ECALL instruction is used to make a request to the supporting execution environment, which is usually an operating system.
The ABI for the system will define how parameters for the environment request are passed, but usually these will be in defined locations in the integer register file. The EBREAK instruction is used by debuggers to cause control to be transferred back to a debugging environment. ECALL and EBREAK were previously named SCALL and SBREAK. The instructions have the same functionality and encoding, but were renamed to reflect that they can be used more generally than to call a supervisor-level operating system or debugger.", "url": "RV32ISPEC.pdf#segment13", "timestamp": "2023-10-08 02:32:13", "segment": "segment13", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(2.9).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 3 ", "content": "RV32E Base Integer Instruction Set, Version 1.9. This chapter describes the RV32E base integer instruction set, a reduced version of RV32I designed for embedded systems. The main changes are to reduce the number of integer registers to 16 and to remove the counters that are mandatory in RV32I. This chapter only outlines the differences between RV32E and RV32I, and should be read after Chapter 2. RV32E was designed to provide an even smaller base core for embedded microcontrollers. Although we had mentioned the possibility in version 2.0 of this document, we initially resisted defining this subset. However, given the demand for the smallest possible 32-bit microcontroller, and in the interests of preempting fragmentation in this space, we have defined RV32E as a fourth standard base ISA in addition to RV32I, RV64I, and RV128I. The E variant is only standardized for the 32-bit address space width.", "url": "RV32ISPEC.pdf#segment14", "timestamp": "2023-10-08 02:32:13", "segment": "segment14", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "3.1 RV32E Programmers\u2019 Model ", "content": "RV32E reduces the integer register count to 16 general-purpose registers (x0-x15), where x0 is a dedicated zero register. We have found that in small RV32I core designs, the upper 16 registers consume around one quarter of the total area of the core excluding memories, so their removal saves around 25% core area with a corresponding core power reduction. This change requires a different calling convention and ABI. In particular, RV32E is only used with a soft-float calling convention; systems with hardware floating-point must use the RV32I base.", "url": "RV32ISPEC.pdf#segment15", "timestamp": "2023-10-08 
02:32:13", "segment": "segment15", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "3.2 RV32E Instruction Set ", "content": "RV32E uses the same instruction-set encoding as RV32I, except that any instruction that uses register specifiers x16-x31 results in an illegal instruction exception being raised. Future standard extensions will not make use of the instruction bits freed up by the reduced register-specifier fields, so these bits are available for non-standard extensions. As a simplification, the counter instructions (RDCYCLE[H], RDTIME[H], RDINSTRET[H]) are no longer mandatory. These mandatory counters require additional registers and logic, and can be replaced with application-specific facilities.", "url": "RV32ISPEC.pdf#segment16", "timestamp": "2023-10-08 02:32:13", "segment": "segment16", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "3.3 RV32E Extensions ", "content": "RV32E can be extended with the C user-level standard extension. We do not intend to support hardware floating-point with the RV32E subset, as the savings from the reduced register count become negligible in the context of a hardware floating-point unit, and we wish to reduce the proliferation of ABIs. For the privileged architecture, an RV32E system can include user mode as well as machine mode, with the physical memory protection scheme described in Volume II. We do not intend to support full Unix-style operating systems with the RV32E subset, as the savings from the reduced register count become negligible in the context of an OS-capable core, and we wish to avoid OS fragmentation.", "url": "RV32ISPEC.pdf#segment17", "timestamp": "2023-10-08 02:32:13", "segment": "segment17", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 4 ", "content": "RV64I Base Integer Instruction Set, Version 2.0. This chapter describes the RV64I base integer instruction set, which builds upon the RV32I variant described in Chapter 2. This chapter presents only the differences with RV32I, so should be read in conjunction with the earlier chapter.", "url": "RV32ISPEC.pdf#segment18", "timestamp": "2023-10-08 02:32:13", "segment": "segment18", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "4.1 Register State ", "content": "RV64I widens the integer registers and supported user address space to 64 bits (XLEN=64 in Figure 2.1).", "url": "RV32ISPEC.pdf#segment19", "timestamp": "2023-10-08 02:32:13", 
"segment": "segment19", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "4.2 Integer Computational Instructions ", "content": "Additional instruction variants are provided to manipulate 32-bit values in RV64I, indicated by a 'W' suffix to the opcode. These '*W' instructions ignore the upper 32 bits of their inputs and always produce 32-bit signed values, i.e., bits XLEN-1 through 31 are equal; they cause an illegal instruction exception in RV32I. The compiler and calling convention maintain an invariant that all 32-bit values are held in a sign-extended format in 64-bit registers. Even 32-bit unsigned integers extend bit 31 into bits 63 through 32. Consequently, conversion between unsigned and signed 32-bit integers is a no-op, as is conversion from a signed 32-bit integer to a signed 64-bit integer. Existing 64-bit-wide SLTU and unsigned branch compares still operate correctly on unsigned 32-bit integers under this invariant. Similarly, existing 64-bit-wide logical operations on 32-bit sign-extended integers preserve the sign-extension property. A few new instructions (ADD[I]W/SUBW/SxxW) are required for addition and shifts to ensure reasonable performance for 32-bit values. Integer Register-Immediate Instructions: ADDIW is an RV64I-only instruction that adds the sign-extended 12-bit immediate to register rs1 and produces the proper sign-extension of a 32-bit result in rd. Overflows are ignored, and the result is the low 32 bits of the result sign-extended to 64 bits. Note, ADDIW rd, rs1, 0 writes the sign-extension of the lower 32 bits of register rs1 into register rd (assembler pseudo-op SEXT.W). [Encoding table: I-type shift format, with fields funct6 (bits 31:26), shamt (bits 25:20; bit 25 must be zero for the '*W' forms), rs1 (19:15), funct3 (14:12), rd (11:7), opcode (6:0); rows for SLLI/SRLI/SRAI under OP-IMM and SLLIW/SRLIW/SRAIW under OP-IMM-32.] Shifts by a constant are encoded as a specialization of the I-type format using the same instruction opcode as RV32I. The operand to be shifted is in rs1, and the shift amount is encoded in the lower 6 bits of the I-immediate field for RV64I. The right shift type is encoded in bit 30. SLLI is a logical left shift (zeros are shifted into the lower bits); SRLI is a logical right shift (zeros are shifted into the upper bits); and SRAI is an arithmetic right shift (the original sign bit is copied into the vacated upper bits).
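The '*W' result convention (a 32-bit result sign-extended into the full 64-bit register) can be sketched for ADDW (illustrative model, not spec code):

```python
def addw(rs1, rs2):
    # RV64 ADDW model: wrapping 32-bit add of the low halves of the inputs,
    # then sign-extension of the 32-bit result to 64 bits.
    r = (rs1 + rs2) & 0xFFFFFFFF
    if r & 0x80000000:
        r |= 0xFFFFFFFF00000000
    return r
```

For example, adding 1 to 0x7FFFFFFF wraps the 32-bit result to 0x80000000, which is then sign-extended, so the destination register holds 0xFFFFFFFF80000000, preserving the invariant described above.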
For RV32I, SLLI, SRLI, and SRAI generate an illegal instruction exception if imm[5] is not zero. SLLIW, SRLIW, and SRAIW are RV64I-only instructions that are analogously defined but operate on 32-bit values and produce signed 32-bit results; they generate an illegal instruction exception if imm[5] is not zero. [Encoding table: U-type format, with fields imm[31:12] (bits 31:12), rd (11:7), opcode (6:0); rows for LUI and AUIPC.] LUI (load upper immediate) uses the same opcode as RV32I. LUI places the 20-bit U-immediate into bits 31-12 of register rd and places zero in the lowest 12 bits; the 32-bit result is sign-extended to 64 bits. AUIPC (add upper immediate to pc) uses the same opcode as RV32I and is used to build pc-relative addresses using the U-type format. AUIPC appends 12 low-order zero bits to the 20-bit U-immediate, sign-extends the result to 64 bits, adds it to the pc, and places the result in register rd. Integer Register-Register Operations: ADDW and SUBW are RV64I-only instructions that are defined analogously to ADD and SUB but operate on 32-bit values and produce signed 32-bit results. Overflows are ignored, and the low 32 bits of the result are sign-extended to 64 bits and written to the destination register. SLL, SRL, and SRA perform logical left, logical right, and arithmetic right shifts on the value in register rs1 by the shift amount held in register rs2. In RV64I, only the low 6 bits of rs2 are considered for the shift amount. SLLW, SRLW, and SRAW are RV64I-only instructions that are analogously defined but operate on 32-bit values and produce signed 32-bit results; the shift amount is given by rs2[4:0].", "url": "RV32ISPEC.pdf#segment20", "timestamp": "2023-10-08 02:32:13", "segment": "segment20", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(4.2).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202(4.2).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%203(4.2).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%204(4.2).png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "4.3 Load and Store Instructions ", "content": "RV64I extends the address space to 64 bits. The execution environment will define what portions of the address space are legal to access.
The LD instruction loads a 64-bit value from memory into register rd for RV64I. The LW instruction loads a 32-bit value from memory and sign-extends it to 64 bits before storing it in register rd for RV64I. The RV64I LWU instruction, on the other hand, zero-extends the 32-bit value from memory. LH and LHU are defined analogously for 16-bit values, as are LB and LBU for 8-bit values. The SD, SW, SH, and SB instructions store 64-bit, 32-bit, 16-bit, and 8-bit values from the low bits of register rs2 to memory respectively.", "url": "RV32ISPEC.pdf#segment21", "timestamp": "2023-10-08 02:32:13", "segment": "segment21", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(4.3).png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "4.4 System Instructions ", "content": "In RV64I, the CSR instructions manipulate 64-bit CSRs. In particular, the RDCYCLE, RDTIME, and RDINSTRET pseudoinstructions read the full 64 bits of the cycle, time, and instret counters. Hence, the RDCYCLEH, RDTIMEH, and RDINSTRETH instructions are not necessary, and are illegal, in RV64I.", "url": "RV32ISPEC.pdf#segment22", "timestamp": "2023-10-08 02:32:13", "segment": "segment22", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 5 ", "content": "RV128I Base Integer Instruction Set, Version 1.7. 'There is only one mistake that can be made in computer design that is difficult to recover from: not having enough address bits for memory addressing and memory management.' (Bell and Strecker, ISCA-3, 1976). This chapter describes RV128I, a variant of the RISC-V ISA supporting a flat 128-bit address space. The variant is a straightforward extrapolation of the existing RV32I and RV64I designs. The primary reason to extend integer register width is to support larger address spaces. It is not clear when a flat address space larger than 64 bits will be required. At the time of writing, the fastest supercomputer in the world as measured by the Top500 benchmark had over 1 PB of DRAM, and would require over 50 bits of address space if all the DRAM resided in a single address space. Some warehouse-scale computers already contain even larger quantities of DRAM, and new dense solid-state non-volatile memories and fast interconnect technologies might drive a demand for even larger memory spaces. Exascale systems research is targeting 100 PB memory systems, which occupy 57 bits of address space. At historic rates of growth, it is possible that greater than 64 bits of address space might be required before 2030.
bits address space might required 2030 history suggests whenever becomes clear 64 bits address space needed architects repeat intensive debates alternatives extending address space including segmentation 96bit address spaces software workarounds nally at 128 bit address spaces adopted simplest best solution frozen rv128 spec time might need evolve design based actual usage 128bit address spaces rv128i builds upon rv64i way rv64i builds upon rv32i integer registers extended 128 bits ie xlen128 integer computational instructions unchanged dened operate xlen bits rv64i w integer instructions operate 32bit values low bits register retained new set integer instructions operate 64bit values held low bits 128bit integer registers added instructions consume two major opcodes opimm64 op64 standard 32bit encoding shifts immediate sllisrlisrai encoded using low 7 bits iimmediate variable shifts sllsrlsra use low 7 bits shift amount source register ldu load double unsigned instruction added using existing load major opcode along new lq sq instructions load store quadword values sq added store major opcode lq added miscmem major opcode oatingpoint instruction set unchanged although 128bit q oatingpoint extension support fmvxq fmvqx instructions together additional fcvt instructions 128bit integer format", "url": "RV32ISPEC.pdf#segment23", "timestamp": "2023-10-08 02:32:14", "segment": "segment23", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 6 ", "content": "standard extension integer multiplication division version 20 chapter describes standard integer multiplication division instruction extension named contains instructions multiply divide values held two integer registers separate integer multiply divide base simplify lowend implementations applications integer multiply divide operations either infrequent better handled attached accelerators", "url": "RV32ISPEC.pdf#segment24", "timestamp": "2023-10-08 02:32:14", "segment": "segment24", "image_urls": 
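The RV64I load-extension rules from Section 4.3 above (LW sign-extends a 32-bit value to 64 bits, LWU zero-extends it, and LH/LHU and LB/LBU behave analogously for 16-bit and 8-bit values) can be sketched in a few lines. The helper `load_extend` is hypothetical, introduced only to illustrate the rule:

```python
# Hypothetical helper modeling the RV64I load-extension rules of Sec. 4.3.
def load_extend(raw_bits: int, width: int, signed: bool) -> int:
    """Return the 64-bit register image of a loaded `width`-bit value."""
    mask = (1 << width) - 1
    value = raw_bits & mask
    if signed and (value >> (width - 1)) & 1:   # sign bit set: extend with 1s
        value |= ((1 << 64) - 1) ^ mask
    return value

# LW of 0xFFFFFFFF yields all-ones (-1 sign-extended), while LWU keeps the
# upper 32 bits of rd clear.
```

The same function with `width=16` or `width=8` models LH/LHU and LB/LBU.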
{ "section": "6.1 Multiplication Operations ", "content": "MUL performs an XLEN-bit by XLEN-bit multiplication and places the lower XLEN bits of the result in the destination register. MULH, MULHU, and MULHSU perform the same multiplication but return the upper XLEN bits of the full 2*XLEN-bit product, for signed-by-signed, unsigned-by-unsigned, and signed-by-unsigned multiplication, respectively. If both the high and low bits of the same product are required, then the recommended code sequence is MULH[[S]U] rdh, rs1, rs2 followed by MUL rdl, rs1, rs2, where the source register specifiers must be in the same order and rdh cannot be the same as rs1 or rs2. Microarchitectures can then fuse these into a single multiply operation instead of performing two separate multiplies. MULW is only valid for RV64, and multiplies the lower 32 bits of the source registers, placing the sign-extension of the lower 32 bits of the result into the destination register. MUL can be used to obtain the upper 32 bits of the 64-bit product, but signed arguments must be proper 32-bit signed values, whereas unsigned arguments must have their upper 32 bits clear.", "url": "RV32ISPEC.pdf#segment25", "timestamp": "2023-10-08 02:32:14", "segment": "segment25", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(6.1).png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "6.2 Division Operations ", "content": "DIV and DIVU perform signed and unsigned integer division of XLEN bits by XLEN bits. REM and REMU provide the remainder of the corresponding division operation. If both the quotient and remainder are required from the same division, the recommended code sequence is DIV[U] rdq, rs1, rs2 followed by REM[U] rdr, rs1, rs2, where rdq cannot be the same as rs1 or rs2. Microarchitectures can then fuse these into a single divide operation instead of performing two separate divides. The semantics for division by zero and division overflow are summarized in Table 6.1. The quotient of division by zero has all bits set, i.e., 2^XLEN - 1 for unsigned division and -1 for signed division. The remainder of division by zero equals the dividend. Signed division overflow occurs only when the most-negative integer, -2^(XLEN-1), is divided by -1. The quotient of a signed division with overflow is equal to the dividend, and the remainder is zero. Unsigned division overflow cannot occur. We considered raising exceptions on integer divide by zero, with these exceptions causing a trap in most execution environments. However, this would be the only arithmetic trap in the standard ISA (floating-point exceptions set flags and write default values, but do not cause traps) and would require language implementors to interact with the execution environment's trap handlers for this case. Further, where language standards mandate that a divide-by-zero exception must cause an immediate control flow change, only a single branch instruction needs to be added to each divide operation, and this branch instruction can be inserted after the divide and should normally be very predictably not taken, adding little runtime overhead. The value of all bits set is returned for both unsigned and signed divide by zero to simplify the divider circuitry. The value of all 1s is both the natural value to return for unsigned divide, representing the largest unsigned number, and also the natural result for a simple unsigned divider implementation. Signed division is often implemented using an unsigned division circuit, and specifying the same overflow result simplifies the hardware. DIVW and DIVUW instructions are only valid for RV64, and divide the lower 32 bits of rs1 by the lower 32 bits of rs2, treating them as signed and unsigned integers respectively, placing the 32-bit quotient in rd, sign-extended to 64 bits. REMW and REMUW instructions are only valid for RV64 and provide the corresponding signed and unsigned remainder operations, respectively. Both REMW and REMUW always sign-extend the 32-bit result to 64 bits, including on a divide by zero.", "url": "RV32ISPEC.pdf#segment26", "timestamp": "2023-10-08 02:32:14", "segment": "segment26", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(6.2).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%206.1.png?raw=true"], "Book": "riscv-spec-v2.2" },
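The Table 6.1 edge cases (never-trapping divide by zero and signed overflow) can be modeled directly. This is a sketch assuming XLEN=32; the function names are illustrative, not part of any toolchain:

```python
# Sketch of RV32 DIV/REM edge-case semantics per Table 6.1 (XLEN = 32
# assumed). Division never traps: div-by-zero returns all bits set as the
# quotient and the dividend as the remainder; the one signed-overflow case
# (-2^31 / -1) returns the dividend as quotient and 0 as remainder.
XLEN = 32
MIN_SIGNED = -(1 << (XLEN - 1))          # -2^31

def _trunc_div(a: int, b: int) -> int:
    """Division rounding toward zero, as RISC-V DIV requires."""
    q = abs(a) // abs(b)
    return q if (a < 0) == (b < 0) else -q

def div(a: int, b: int) -> int:
    if b == 0:
        return -1                         # all bits set
    if a == MIN_SIGNED and b == -1:
        return a                          # overflow: quotient = dividend
    return _trunc_div(a, b)

def rem(a: int, b: int) -> int:
    if b == 0:
        return a                          # remainder = dividend
    if a == MIN_SIGNED and b == -1:
        return 0                          # overflow: remainder = 0
    return a - b * _trunc_div(a, b)
```

Note the truncating (round-toward-zero) division: Python's `//` floors instead, so `-7 // 2` is `-4` while DIV yields `-3` with remainder `-1`.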
{ "section": "Chapter 7 ", "content": "'A' Standard Extension for Atomic Instructions, Version 2.0. This section is somewhat out of date, as the RISC-V memory model is currently under revision to ensure it can efficiently support current programming language memory models. The revised base memory model will contain further ordering constraints, including at least that loads to the same address from the same hart cannot be reordered, and that syntactic data dependencies between instructions are respected. The standard atomic instruction extension is denoted by instruction subset name 'A', and contains instructions that atomically read-modify-write memory to support synchronization between multiple RISC-V harts running in the same memory space. The two forms of atomic instruction provided are load-reserved/store-conditional instructions and atomic fetch-and-op memory instructions. Both types of atomic instruction support various memory consistency orderings, including unordered, acquire, release, and sequentially consistent semantics. These instructions allow RISC-V to support the RCsc memory consistency model [10]. After much debate, the language community and architecture community appear to have finally settled on release consistency as the standard memory consistency model, and so the RISC-V atomic support is built around this model.", "url": "RV32ISPEC.pdf#segment27", "timestamp": "2023-10-08 02:32:14", "segment": "segment27", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "7.1 Specifying Ordering of Atomic Instructions ", "content": "The base RISC-V ISA has a relaxed memory model, with the FENCE instruction used to impose additional ordering constraints. The address space is divided by the execution environment into memory and I/O domains, and the FENCE instruction provides options to order accesses to one or both of these two address domains. To provide more efficient support for release consistency [10], each atomic instruction has two bits, aq and rl, used to specify additional memory ordering constraints as viewed by other RISC-V harts. The bits order accesses to one of the two address domains, memory or I/O, depending on which address domain the atomic instruction is accessing. No ordering constraint is implied to accesses to the other domain, and a FENCE instruction should be used to order across both domains. If both bits are clear, no additional ordering constraints are imposed on the atomic memory operation. If only the aq bit is set, the atomic memory operation is treated as an acquire access, i.e., no following memory operations on this RISC-V hart can be observed to take place before the acquire memory operation. If only the rl bit is set, the atomic memory operation is treated as a release access, i.e., the release memory operation cannot be observed to take place before any earlier memory operations on this RISC-V hart. If both the aq and rl bits are set, the atomic memory operation is sequentially consistent and cannot be observed to happen before any earlier memory operations or after any later memory operations in the same RISC-V hart, and can only be observed by any other hart in the same global order of all sequentially consistent atomic memory operations to the same address domain. Theoretically, the definition of the aq and rl bits allows for implementations without global store atomicity. When both aq and rl bits are set, however, we require full sequential consistency for the atomic operation, which implies global store atomicity in addition to both acquire and release semantics. In practice, hardware systems are usually implemented with global store atomicity, embodied in local processor ordering rules together with single-writer cache coherence protocols.", "url": "RV32ISPEC.pdf#segment28", "timestamp": "2023-10-08 02:32:14", "segment": "segment28", "image_urls": [], "Book": "riscv-spec-v2.2" },
{ "section": "7.2 Load-Reserved/Store-Conditional Instructions ", "content": "Instruction format (field widths 5/1/1/5/5/3/5/7 bits): funct5 in bits 31-27, aq in bit 26, rl in bit 25, rs2 in bits 24-20, rs1 in bits 19-15, funct3 (width) in bits 14-12, rd in bits 11-7, and the AMO major opcode in bits 6-0. LR encodes its ordering bits, rs2=0, the address in rs1, the width in funct3, and the destination in rd; SC encodes its ordering bits, the source in rs2, the address in rs1, the width, and the destination in rd. Complex atomic memory operations on a single memory word are performed with the load-reserved (LR) and store-conditional (SC) instructions. LR loads a word from the address in rs1, places the sign-extended value in rd, and registers a reservation on the memory address. SC writes a word in rs2 to the address in rs1, provided a valid reservation still exists on that address. SC writes zero to rd on success or a nonzero code on failure. Both compare-and-swap (CAS) and LR/SC can be used to build lock-free data structures. After extensive discussion, we opted for LR/SC for several reasons: 1) CAS suffers from the ABA problem, which LR/SC avoids because it monitors all accesses to the address rather than only checking for changes in the data value; 2) CAS would also require a new integer instruction format to support three source operands (address, compare value, swap value) as well as a different memory system message format, which would complicate microarchitectures; 3) furthermore, to avoid the ABA problem, some other systems provide a double-wide CAS (DW-CAS) to allow a counter to be tested and incremented along with a data word, which requires reading five registers and writing two in one instruction, and also a new larger memory system message type, further complicating implementations; 4) LR/SC provides a more efficient implementation of many primitives, as it only requires one load as opposed to two with CAS (one load before the CAS instruction to obtain a value for speculative computation, then a second load as part of the CAS instruction to check that the value is unchanged before updating). The main disadvantage of LR/SC over CAS is livelock, which we avoid with an architected guarantee of eventual forward progress as described below. Another concern was whether the influence of the current x86 architecture, with its DW-CAS, would complicate porting of synchronization libraries and other software that assumes DW-CAS is a basic machine primitive; a possible mitigating factor is the recent addition of transactional memory instructions to x86, which might cause a move away from DW-CAS. The failure code with value 1 is reserved to encode an unspecified failure; other failure codes are reserved at this time, so portable software should only assume the failure code will be nonzero. LR and SC operate on naturally-aligned 64-bit (RV64 only) or 32-bit words in memory, and misaligned addresses will generate misaligned address exceptions. We reserve a failure code of 1 to mean unspecified so that simple implementations may return this value using the existing mux required for the SLT/SLTU instructions; more specific failure codes might be defined in future versions or extensions to the ISA. In the standard A extension, certain constrained LR/SC sequences are guaranteed to succeed eventually. The static code for the LR/SC sequence plus the code to retry the sequence in case of failure must comprise at most 16 integer instructions placed sequentially in memory. For the sequence to be guaranteed to eventually succeed, the dynamic code executed between the LR and SC instructions can only contain other instructions from the base 'I' subset, excluding loads, stores, backward jumps or taken backward branches, FENCE, FENCE.I, and SYSTEM instructions. The code to retry a failing LR/SC sequence can contain backward jumps and/or branches to repeat the LR/SC sequence, but otherwise has the same constraints. The SC must be to the same address as the latest LR executed. LR/SC sequences that do not meet these constraints might complete on some attempts on some implementations, but there is no guarantee of eventual success. One advantage of CAS is that it guarantees that some hart eventually makes progress, whereas an LR/SC atomic sequence could livelock indefinitely on some systems. To avoid this concern, we added an architectural guarantee of eventual forward progress for constrained LR/SC atomic sequences. The restrictions on LR/SC sequence contents allow an implementation to capture a cache line on the LR and complete the LR/SC sequence by holding off remote cache interventions for a bounded short time. Interrupts and TLB misses might cause the reservation to be lost, but eventually the atomic sequence will complete. We restricted the length of LR/SC sequences to fit within 64 contiguous instruction bytes in the base ISA to avoid undue restrictions on instruction cache and TLB size and associativity. Similarly, we disallowed other loads and stores within the sequences to avoid restrictions on data cache associativity. The restrictions on branches and jumps limit the time that can be spent in the sequence. Floating-point operations and integer multiply/divide were disallowed to simplify the operating system's emulation of these instructions on implementations lacking appropriate hardware support. An implementation can reserve an arbitrary subset of the memory space on each LR, and multiple LR reservations might be active simultaneously for a single hart. An SC can only succeed if no accesses from other harts to the address can be observed to have occurred between the SC and the last LR in this hart to reserve the address. Note this LR might have had a different address argument, but reserved the SC's address as part of the memory subset. Following this model, in systems with memory translation, an SC is allowed to succeed if the earlier LR reserved the same location using an alias with a different virtual address, but is also allowed to fail if the virtual address is different. The SC must fail if there is an observable memory access from another hart to the address, or an intervening context switch on this hart, or if in the meantime the hart executed a privileged exception-return instruction. This specification explicitly allows more powerful implementations with wider guarantees, provided they do not void the atomicity guarantees for the constrained sequences. LR/SC can be used to construct lock-free data structures. An example using LR/SC to implement a compare-and-swap function is shown in Figure 7.1. If inlined, the compare-and-swap functionality need only take three instructions. An SC instruction can never be observed by another RISC-V hart as occurring before the immediately preceding LR. Due to the atomic nature of the LR/SC sequence, no memory operations from any hart can be observed to have occurred between the LR and a successful SC. The LR/SC sequence can be given acquire semantics by setting the aq bit on the SC instruction, and can be given release semantics by setting the rl bit on the LR instruction. Setting the aq and rl bits on the LR instruction, and setting the aq bit on the SC instruction, makes the LR/SC sequence sequentially consistent with respect to other sequentially consistent atomic operations. If neither bit is set on either the LR or SC, the LR/SC sequence can be observed to occur before or after surrounding memory operations from other RISC-V harts, which can be appropriate when the LR/SC sequence is used to implement a parallel reduction operation. A more general multi-word atomic primitive would be desirable, but there is still considerable debate about what form this should take, and guaranteeing forward progress adds complexity to the system. Current thoughts include a small limited-capacity
transactional memory buffer along the lines of the original transactional memory proposals, as an optional standard extension.", "url": "RV32ISPEC.pdf#segment29", "timestamp": "2023-10-08 02:32:15", "segment": "segment29", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(7.2).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%207.1.png?raw=true"], "Book": "riscv-spec-v2.2" },
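The shape of the Figure 7.1 compare-and-swap built from LR/SC can be sketched with a toy, single-hart model. The `Memory` class and its reservation flag are hypothetical stand-ins for one word of shared memory; a real reservation can also be lost to remote accesses, interrupts, or context switches, which this sketch ignores:

```python
# Toy single-hart model of the LR/SC pair of Sec. 7.2 and the CAS retry
# loop of Figure 7.1. `Memory` is a hypothetical stand-in, not spec API.
class Memory:
    def __init__(self, value: int):
        self.value = value
        self.reserved = False

    def lr(self) -> int:            # load-reserved: read and set reservation
        self.reserved = True
        return self.value

    def sc(self, new: int) -> int:  # store-conditional: 0 = success, nonzero = failure
        if not self.reserved:
            return 1                # portable code assumes only that this is nonzero
        self.value = new
        self.reserved = False
        return 0

def compare_and_swap(mem: Memory, expected: int, desired: int) -> bool:
    """LR, compare, SC, branch back on SC failure (the Figure 7.1 idiom)."""
    while True:
        if mem.lr() != expected:
            return False            # compare failed; no store performed
        if mem.sc(desired) == 0:
            return True             # SC succeeded atomically
```

In real hardware the retry branch is what the 16-instruction constrained-sequence rule bounds, so that eventual success can be guaranteed.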
{ "section": "7.3 Atomic Memory Operations ", "content": "The atomic memory operation (AMO) instructions perform read-modify-write operations for multiprocessor synchronization and are encoded with an R-type instruction format. These AMO instructions atomically load a data value from the address in rs1, place the value into register rd, apply a binary operator to the loaded value and the original value in rs2, then store the result back to the address in rs1. AMOs can either operate on 64-bit (RV64 only) or 32-bit words in memory. For RV64, 32-bit AMOs always sign-extend the value placed in rd. The address held in rs1 must be naturally aligned to the size of the operand (i.e., eight-byte aligned for 64-bit words and four-byte aligned for 32-bit words). If the address is not naturally aligned, a misaligned address exception will be generated. The operations supported are swap, integer add, logical AND, logical OR, logical XOR, and signed and unsigned integer maximum and minimum. Without ordering constraints, these AMOs can be used to implement parallel reduction operations, where typically the return value would be discarded by writing to x0. We provided fetch-and-op style atomic primitives as they scale to highly parallel systems better than LR/SC or CAS. A simple microarchitecture can implement AMOs using the LR/SC primitives. More complex implementations might also implement AMOs at memory controllers, and can optimize away fetching the original value when the destination register is x0. The set of AMOs was chosen to support the C11/C++11 atomic memory operations efficiently, and also to support parallel reductions in memory. Another use of AMOs is to provide atomic updates to memory-mapped device registers (e.g., setting, clearing, or toggling bits) in the I/O space. To help implement multiprocessor synchronization, the AMOs optionally provide release consistency semantics. If the aq bit is set, then no later memory operations in this RISC-V hart can be observed to take place before the AMO. Conversely, if the rl bit is set, then other RISC-V harts will not observe the AMO before memory accesses preceding the AMO in this RISC-V hart. The AMOs were designed to implement the C11 and C++11 memory models efficiently. Although the FENCE R,RW instruction suffices to implement the acquire operation and FENCE RW,W suffices to implement release, both imply additional unnecessary ordering as compared to AMOs with the corresponding aq or rl bit set. An example code sequence for a critical section guarded by a test-and-set spinlock is shown in Figure 7.2. Note the first AMO is marked aq to order the lock acquisition before the critical section, and the second AMO is marked rl to order the critical section before the lock relinquishment. We recommend the use of the AMO Swap idiom shown above for both lock acquire and release to simplify the implementation of speculative lock elision [25]. At the risk of complicating implementations of atomic operations, microarchitectures can elide the store within the acquire swap if the lock value matches the swap value, to avoid dirtying a cache line held in a shared or exclusive clean state; the effect is similar to a test-and-test-and-set lock but with shorter code paths. The instructions in the 'A' extension can also be used to provide sequentially consistent loads and stores. A sequentially consistent load can be implemented as an LR with both aq and rl set. A sequentially consistent store can be implemented as an AMOSWAP that writes the old value to x0 and has both aq and rl set.", "url": "RV32ISPEC.pdf#segment30", "timestamp": "2023-10-08 02:32:15", "segment": "segment30", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(7.3).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%207.2.png?raw=true"], "Book": "riscv-spec-v2.2" },
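The Figure 7.2 test-and-set spinlock idiom can be sketched with a modeled AMOSWAP. The `amoswap`/`acquire`/`release` names are illustrative only; in the real sequence the acquiring swap carries the aq bit and the releasing swap carries the rl bit, ordering which this single-threaded sketch cannot show:

```python
# Sketch of the test-and-set spinlock of Figure 7.2 using a modeled
# amoswap.w. `mem` is a hypothetical dict standing in for memory.
def amoswap(mem: dict, addr: str, new: int) -> int:
    """Atomically store `new` at `addr` and return the old value."""
    old = mem[addr]
    mem[addr] = new
    return old

def acquire(mem: dict, lock: str = "lock") -> None:
    # Spin until the swapped-out old value was 0 (lock was free);
    # the real instruction is amoswap.w.aq.
    while amoswap(mem, lock, 1) != 0:
        pass

def release(mem: dict, lock: str = "lock") -> None:
    # Swap 0 back in; the old value is discarded (written to x0);
    # the real instruction is amoswap.w.rl.
    amoswap(mem, lock, 0)
```

Using the same swap idiom for both acquire and release is what the commentary above recommends to ease speculative lock elision.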
{ "section": "Chapter 8 ", "content": "'F' Standard Extension for Single-Precision Floating-Point, Version 2.0. This chapter describes the standard instruction-set extension for single-precision floating-point, which is named 'F' and adds single-precision floating-point computational instructions compliant with the IEEE 754-2008 arithmetic standard [14].", "url": "RV32ISPEC.pdf#segment31", "timestamp": "2023-10-08 02:32:15", "segment": "segment31", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "8.1 F Register State ", "content": "The F extension adds 32 floating-point registers, f0-f31, each 32 bits wide, and a floating-point control and status register fcsr, which contains the operating mode and exception status of the floating-point unit. This additional state is shown in Figure 8.1. We use the term FLEN to describe the width of the floating-point registers in the RISC-V ISA, and FLEN=32 for the F single-precision floating-point extension. Most floating-point instructions operate on values in the floating-point register file. Floating-point load and store instructions transfer floating-point values between registers and memory, while instructions to transfer values to and from the integer register file are also provided. We considered a unified register file for both integer and floating-point values, as this simplifies software register allocation and calling conventions, and reduces total user state. However, a split organization increases the total number of registers accessible with a given instruction width, simplifies provision of enough regfile ports for wide superscalar issue, supports decoupled floating-point-unit architectures, and simplifies use of internal floating-point encoding techniques. Compiler support and calling conventions for split register file architectures are well understood, and using dirty bits on floating-point register file state can reduce context-switch overhead.", "url": "RV32ISPEC.pdf#segment32", "timestamp": "2023-10-08 02:32:15", "segment": "segment32", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%208.1.png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/figure%208.2.png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "8.2 Floating-Point Control and Status Register ", "content": "The floating-point control and status register, fcsr, is a RISC-V control and status register (CSR). It is a 32-bit read/write register that selects the dynamic rounding mode for floating-point arithmetic operations and holds the accrued exception flags, as shown in Figure 8.2. The fcsr register can be read and written with the FRCSR and FSCSR instructions, which are assembler pseudo-ops built on the underlying CSR access instructions. FRCSR reads fcsr by copying it into integer register rd. FSCSR swaps the value in fcsr by copying the original value into integer register rd, and then writing a new value obtained from integer register rs1 into fcsr. The fields within fcsr can also be accessed individually through different CSR addresses, and separate assembler pseudo-ops are defined for these accesses. The FRRM instruction reads the Rounding Mode field frm and copies it into the least-significant three bits of integer register rd, with zero in all other bits. FSRM swaps the value in frm by copying the original value into integer register rd, and then writing a new value obtained from the three least-significant bits of integer register rs1 into frm. FRFLAGS and FSFLAGS are defined analogously for the Accrued Exception Flags field, fflags. Additional pseudoinstructions FSRMI and FSFLAGSI swap values using an immediate value instead of register rs1. Bits 31-8 of the fcsr are reserved for other standard extensions, including the 'L' standard extension for decimal floating-point. If these extensions are not present, implementations shall ignore writes to these bits and supply a zero value when read. Standard software should preserve the contents of these bits. Floating-point operations use either a static rounding mode encoded in the instruction, or a dynamic rounding mode held in frm. Rounding modes are encoded as shown in Table 8.1. A value of 111 in the instruction's rm field selects the dynamic rounding mode held in frm. If frm is set to an invalid value (101-111), any subsequent attempt to execute a floating-point operation with a dynamic rounding mode will cause an illegal instruction trap. Some instructions have an rm field but are nevertheless unaffected by the rounding mode; they should have their rm field set to RNE (000). The C99 language standard effectively mandates the provision of a dynamic rounding mode register. The accrued exception flags indicate the exception conditions that have arisen on any floating-point arithmetic instruction since the field was last reset by software, as shown in Table 8.2. As allowed by the standard, we do not support traps on floating-point exceptions in the base ISA, but instead require explicit checks of the flags in software. We considered adding branches controlled directly by the contents of the floating-point accrued exception flags, but ultimately chose to omit these instructions to keep the ISA simple.", "url": "RV32ISPEC.pdf#segment33", "timestamp": "2023-10-08 02:32:16", "segment": "segment33", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%208.1.png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%208.2.png?raw=true"], "Book": "riscv-spec-v2.2" },
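The fcsr layout of Figure 8.2, with fflags in bits 4:0 and frm in bits 7:5 (bits 31:8 reserved), can be sketched with a few bit-twiddling helpers. The helper names are illustrative; they mirror what FRRM and FRFLAGS extract:

```python
# Sketch of the fcsr field layout of Figure 8.2:
#   bits 4:0  = fflags (accrued exception flags: NV DZ OF UF NX)
#   bits 7:5  = frm    (dynamic rounding mode)
#   bits 31:8 = reserved (read as zero)
def fcsr_pack(frm: int, fflags: int) -> int:
    return ((frm & 0x7) << 5) | (fflags & 0x1F)

def fcsr_frm(fcsr: int) -> int:      # what FRRM reads into rd[2:0]
    return (fcsr >> 5) & 0x7

def fcsr_fflags(fcsr: int) -> int:   # what FRFLAGS reads into rd[4:0]
    return fcsr & 0x1F
```

For example, frm=001 (round toward zero in Table 8.1's encoding) together with the NV flag set packs to 0b0011'0000.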
{ "section": "8.3 NaN Generation and Propagation ", "content": "Except when otherwise stated, if the result of a floating-point operation is NaN, it is the canonical NaN. The canonical NaN has a positive sign and all significand bits clear except the MSB, a.k.a. the quiet bit. For single-precision floating-point, this corresponds to the pattern 0x7fc00000. For FMIN and FMAX, if at least one input is a signaling NaN, or if both inputs are quiet NaNs, the result is the canonical NaN. If one operand is a quiet NaN and the other is not a NaN, the result is the non-NaN operand. The sign-injection instructions (FSGNJ, FSGNJN, FSGNJX) do not canonicalize NaNs; they manipulate the underlying bit patterns directly. We considered propagating NaN payloads, as recommended by the standard, but this decision would have increased hardware cost. Moreover, since this feature is optional in the standard, it cannot be used in portable code. Implementors are free to provide a NaN payload propagation scheme as a nonstandard extension enabled by a nonstandard operating mode; however, the canonical NaN scheme described above must always be supported in the default mode. We require implementations to return the standard-mandated default values in the case of exceptional conditions, without any intervention on the part of user-level software (unlike the Alpha ISA floating-point trap barriers). We believe full hardware handling of exceptional cases will become more common, and so wish to avoid complicating the user-level ISA to optimize other approaches. Implementations can always trap to machine-mode software handlers to provide exceptional default values.", "url": "RV32ISPEC.pdf#segment34", "timestamp": "2023-10-08 02:32:16", "segment": "segment34", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "8.4 Subnormal Arithmetic ", "content": "Operations on subnormal numbers are handled in accordance with the IEEE 754-2008 standard. In the parlance of the IEEE standard, tininess is detected after rounding. Detecting tininess after rounding results in fewer spurious underflow signals.", "url": "RV32ISPEC.pdf#segment35", "timestamp": "2023-10-08 02:32:16", "segment": "segment35", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "8.5 Single-Precision Load and Store Instructions ", "content": "Floating-point loads and stores use the same base+offset addressing mode as the integer base ISA, with a base address in register rs1 and a 12-bit signed byte offset. The FLW instruction loads a single-precision floating-point value from memory into floating-point register rd. FSW stores a single-precision value from floating-point register rs2 to memory. FLW and FSW are only guaranteed to execute atomically if the effective address is naturally aligned.", "url": "RV32ISPEC.pdf#segment36", "timestamp": "2023-10-08 02:32:16", "segment": "segment36", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(8.5).png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "8.6 Single-Precision Floating-Point Computational Instructions ", "content": "Floating-point arithmetic instructions with one or two source operands use the R-type format with the OP-FP major opcode. FADD.S, FSUB.S, FMUL.S, and FDIV.S perform single-precision floating-point addition, subtraction, multiplication, and division, respectively, between rs1 and rs2, writing the result to rd. FMIN.S and FMAX.S write, respectively, the smaller or larger of rs1 and rs2 to rd. FSQRT.S computes the square root of rs1 and writes the result to rd. The 2-bit floating-point format field fmt is encoded as shown in Table 8.3. It is set to S (00) for all instructions in the F extension. All floating-point operations that perform rounding can select the rounding mode using the rm field with the encoding shown in Table 8.1. Floating-point fused multiply-add instructions require a new standard instruction format. R4-type instructions specify three source registers (rs1, rs2, and rs3) and a destination register (rd). This format is only used by the floating-point fused multiply-add instructions. The fused multiply-add instructions multiply the values in rs1 and rs2, optionally negate the product, then add or subtract the value in rs3, writing the final result to rd. FMADD.S computes (rs1 x rs2) + rs3; FMSUB.S computes (rs1 x rs2) - rs3; FNMSUB.S computes -(rs1 x rs2) + rs3; FNMADD.S computes -(rs1 x rs2) - rs3. The fused multiply-add instructions must raise the invalid operation exception when the multiplicands are infinity and zero, even when the addend is a quiet NaN. The IEEE 754-2008 standard permits, but does not require, raising the invalid exception for the operation infinity x 0 + qNaN.", "url": "RV32ISPEC.pdf#segment37", "timestamp": "2023-10-08 02:32:16", "segment": "segment37", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%208.3.png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(8.6).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202(8.6).png?raw=true"], "Book": "riscv-spec-v2.2" },
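The sign conventions of the four fused multiply-add variants are easy to mix up, so here is a sketch of them. Note the caveat: Python floats are IEEE doubles and each operation rounds separately, so this only illustrates the sign conventions, not the single-rounding fused behavior:

```python
# Sign conventions of the R4-type fused multiply-add family (Sec. 8.6).
# Python evaluates a*b and the add as two separately rounded double ops,
# unlike the hardware's fused single rounding; only the signs are modeled.
def fmadd(a, b, c):   return (a * b) + c      # FMADD.S
def fmsub(a, b, c):   return (a * b) - c      # FMSUB.S
def fnmsub(a, b, c):  return -(a * b) + c     # FNMSUB.S
def fnmadd(a, b, c):  return -(a * b) - c     # FNMADD.S
```

With a=2, b=3, c=1 the four variants produce 7, 5, -5, and -7, which makes the naming easier to remember: the N negates the product, the ADD/SUB names the treatment of rs3.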
{ "section": "8.7 Single-Precision Floating-Point Conversion and Move Instructions ", "content": "Floating-point-to-integer and integer-to-floating-point conversion instructions are encoded in the OP-FP major opcode space. FCVT.W.S or FCVT.L.S converts a floating-point number in floating-point register rs1 to a signed 32-bit or 64-bit integer, respectively, in integer register rd. FCVT.S.W or FCVT.S.L converts a 32-bit or 64-bit signed integer, respectively, in integer register rs1 into a floating-point number in floating-point register rd. FCVT.WU.S, FCVT.LU.S, FCVT.S.WU, and FCVT.S.LU variants convert to or from unsigned integer values. FCVT.L[U].S and FCVT.S.L[U] are illegal in RV32. If the rounded result is not representable in the destination format, it is clipped to the nearest value and the invalid flag is set. Table 8.4 gives the range of valid inputs for FCVT.int.S and the behavior for invalid inputs. All floating-point to integer and integer to floating-point conversion instructions round according to the rm field. A floating-point register can be initialized to floating-point positive zero using FCVT.S.W rd, x0, which will never raise any exceptions. Floating-point to floating-point sign-injection instructions, FSGNJ.S, FSGNJN.S, and FSGNJX.S, produce a result that takes all bits except the sign bit from rs1. For FSGNJ, the result's sign bit is rs2's sign bit; for FSGNJN, the result's sign bit is the opposite of rs2's sign bit; and for FSGNJX, the sign bit is the XOR of the sign bits of rs1 and rs2. Sign-injection instructions do not set floating-point exception flags. Note, FSGNJ.S rx, ry, ry moves ry to rx (assembler pseudo-op FMV.S rx, ry); FSGNJN.S rx, ry, ry moves the negation of ry to rx (assembler pseudo-op FNEG.S rx, ry); and FSGNJX.S rx, ry, ry moves the absolute value of ry to rx (assembler pseudo-op FABS.S rx, ry). The sign-injection instructions provide floating-point MV, ABS, and NEG, as well as supporting a few other operations, including the IEEE copySign operation and sign manipulation in transcendental math function libraries. Although MV, ABS, and NEG only need a single register operand, whereas the FSGNJ instructions need two, it is unlikely that microarchitectures would add optimizations to benefit from the reduced number of register reads for these relatively infrequent instructions. Even in this case, a microarchitecture can simply detect when both source registers are the same for FSGNJ instructions and only read a single copy. Instructions are provided to move bit patterns between the floating-point and integer registers. FMV.X.W moves the single-precision value in floating-point register rs1, represented in IEEE 754-2008 encoding, to the lower 32 bits of integer register rd. For RV64, the higher 32 bits of the destination register are filled with copies of the floating-point number's sign bit. FMV.W.X moves the single-precision value encoded in IEEE 754-2008 standard encoding from the lower 32 bits of integer register rs1 to the floating-point register rd. The bits are not modified in the transfer, and in particular, the payloads of non-canonical NaNs are preserved. The FMV.W.X and FMV.X.W instructions were previously called FMV.S.X and FMV.X.S. The use of W is more consistent with their semantics as instructions that move 32 bits without interpreting them; this became clearer after defining NaN-boxing. To avoid disturbing existing code, both the W and S versions will be supported by tools. The base floating-point ISA was defined so as to allow implementations to employ an internal recoding of the floating-point format in registers to simplify handling of subnormal values and possibly to reduce functional unit latency. To this end, the base ISA avoids representing integer values in the floating-point registers, by defining conversion and comparison operations that read and write the integer register file directly. This also removes many of the common cases where explicit moves between integer and floating-point registers are required, reducing instruction count and critical paths for common mixed-format code sequences.", "url": "RV32ISPEC.pdf#segment38", "timestamp": "2023-10-08 02:32:17", "segment": "segment38", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%208.4.png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(8.7).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202(8.7).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%203(8.7).png?raw=true"], "Book": "riscv-spec-v2.2" },
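The sign-injection identities from Section 8.7 (FSGNJ.S rx, ry, ry as FMV, FSGNJN.S rx, ry, ry as FNEG, FSGNJX.S rx, ry, ry as FABS) can be sketched on raw 32-bit IEEE 754 encodings. The function names are illustrative only; operands are bit patterns, not Python floats, so no exception flags are involved, matching the spec's statement that these instructions never set flags:

```python
# Sketch of the Sec. 8.7 sign-injection operations on raw single-precision
# bit patterns (32-bit IEEE 754 encodings held as Python ints).
SIGN = 0x80000000

def fsgnj(a: int, b: int) -> int:    # magnitude of a, sign of b
    return (a & ~SIGN & 0xFFFFFFFF) | (b & SIGN)

def fsgnjn(a: int, b: int) -> int:   # magnitude of a, opposite of b's sign
    return (a & ~SIGN & 0xFFFFFFFF) | ((b ^ SIGN) & SIGN)

def fsgnjx(a: int, b: int) -> int:   # magnitude of a, XOR of the two signs
    return a ^ (b & SIGN)

MINUS_ONE = 0xBF800000               # -1.0f
PLUS_ONE = 0x3F800000                # +1.0f
```

Passing the same register for both operands yields the three pseudo-ops: fsgnj(x, x) is the identity (FMV), fsgnjn(x, x) flips the sign (FNEG), and fsgnjx(x, x) clears it (FABS).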
{ "section": "8.8 Single-Precision Floating-Point Compare Instructions ", "content": "Floating-point compare instructions perform the specified comparison (equal, less than, or less than or equal) between floating-point registers rs1 and rs2 and record the Boolean result in integer register rd. FLT.S and FLE.S perform what the IEEE 754-2008 standard refers to as signaling comparisons: that is, an invalid operation exception is raised if either input is NaN. FEQ.S performs a quiet comparison: only signaling NaN inputs cause an invalid operation exception. For all three instructions, the result is 0 if either operand is NaN.", "url": "RV32ISPEC.pdf#segment39", "timestamp": "2023-10-08 02:32:17", "segment": "segment39", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(8.8).png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "8.9 Single-Precision Floating-Point Classify Instruction ", "content": "The FCLASS.S instruction examines the value in floating-point register rs1 and writes to integer register rd a 10-bit mask that indicates the class of the floating-point number. The format of the mask is described in Table 8.5. The corresponding bit in rd will be set if the property is true and clear otherwise. All other bits in rd are cleared. Note that exactly one bit in rd will be set.", "url": "RV32ISPEC.pdf#segment40", "timestamp": "2023-10-08 02:32:17", "segment": "segment40", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201(8.9).png?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%208.5.png?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 9 ", "content": "'D' Standard Extension for Double-Precision Floating-Point, Version 2.0. This chapter describes the standard double-precision floating-point instruction-set extension, which is named 'D' and adds double-precision floating-point computational instructions compliant with the IEEE 754-2008 arithmetic standard. The D extension depends on the base single-precision instruction subset F.", "url": "RV32ISPEC.pdf#segment41", "timestamp": "2023-10-08 02:32:17", "segment": "segment41", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "9.1 D Register State ", "content": "The D extension widens the 32 floating-point registers, f0-f31, to 64 bits (FLEN=64 in Figure 8.1). The f registers can now hold either 32-bit or 64-bit floating-point values, as described below in Section 9.2. FLEN can be 32, 64, or 128 depending on which of the F, D, and Q extensions are supported, so up to four different floating-point precisions can be supported, including H, F, D, and Q. Half-precision H scalar values are only supported if the V vector extension is supported.", "url": "RV32ISPEC.pdf#segment42", "timestamp": "2023-10-08 02:32:17", "segment": "segment42", "image_urls": [], "Book": "riscv-spec-v2.2" },
transfers the lower n bits of the register, ignoring the upper FLEN-n bits. Floating-point compute and sign-injection operations calculate results based on the FLEN-bit values held in the f registers. A narrow n-bit operation, where n < FLEN, checks that input operands are correctly NaN-boxed, i.e., all upper FLEN-n bits are 1. If so, the n least-significant bits of the input are used as the input value, otherwise the input value is treated as an n-bit canonical NaN. An n-bit floating-point result is written to the n least-significant bits of the destination f register, with all 1s written to the uppermost FLEN-n bits to yield a legal NaN-boxed value. Conversions from integer to floating-point (e.g., FCVT.S.X) NaN-box any results narrower than FLEN to fill the FLEN-bit destination register. Conversions from narrower n-bit floating-point values to integer (e.g., FCVT.X.S) check for legal NaN-boxing and treat the input as the n-bit canonical NaN if it is not a legal n-bit value. Earlier versions of this document did not define the behavior of feeding the results of narrower or wider operands into an operation, except to require that wider saves and restores would preserve the value of a narrower operand. The new definition removes this implementation-specific behavior, while still accommodating both non-recoded and recoded implementations of the floating-point unit. The new definition also helps catch software errors by propagating NaNs if values are used incorrectly. Non-recoded implementations unpack and pack the operands to IEEE standard format on the input and output of every floating-point operation. The NaN-boxing cost to a non-recoded implementation is primarily in checking whether the upper bits of a narrower operation represent a legal NaN-boxed value, and in writing all 1s to the upper bits of a result. Recoded implementations use a more convenient internal format to represent floating-point values, with an added exponent bit to allow all values to be held normalized. The cost to the recoded implementation is primarily the extra tagging needed to track the internal types and sign bits, but this can be done without adding new state bits by recoding NaNs internally in the exponent field. Small modifications are needed to the pipelines used to transfer values in and out of the recoded format, but the datapath and latency costs are minimal. The recoding process has to handle shifting of input subnormal values for wide operands in any case, and extracting a NaN-boxed value is a similar process to normalization, except for skipping over leading-1 bits instead of skipping over leading-0 bits, allowing the datapath muxing to be shared.", "url": 
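The NaN-boxing and unboxing rules above can be illustrated with a minimal Python sketch. It assumes FLEN=64 (an F+D implementation); the function names `nan_box` and `unbox` are illustrative, not part of the specification:

```python
import struct

FLEN = 64  # assumed register width for an F+D implementation
CANONICAL_NAN_S = 0x7FC00000  # single-precision canonical NaN bit pattern

def nan_box(bits32: int) -> int:
    """NaN-box a 32-bit value: all upper FLEN-32 bits are set to 1."""
    return (0xFFFFFFFF << 32) | (bits32 & 0xFFFFFFFF)

def unbox(bits64: int) -> int:
    """Recover a single-precision operand from an FLEN-bit register.
    If the upper bits are not all 1s, the value is not a legal
    NaN-boxed single and is treated as the canonical NaN."""
    if bits64 >> 32 == 0xFFFFFFFF:
        return bits64 & 0xFFFFFFFF
    return CANONICAL_NAN_S

one_s = struct.unpack('<I', struct.pack('<f', 1.0))[0]  # 0x3F800000
assert unbox(nan_box(one_s)) == one_s
# A raw 32-bit pattern with zero upper bits is not a legal boxed single:
assert unbox(one_s) == CANONICAL_NAN_S
```

Note how any boxed value has its sign and exponent bits (the top 12 bits) all set, which is why boxed narrower values read as negative quiet NaNs at the wider width.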
"RV32ISPEC.pdf#segment43", "timestamp": "2023-10-08 02:32:18", "segment": "segment43", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "9.3 Double-Precision Load and Store Instructions ", "content": "The FLD instruction loads a double-precision floating-point value from memory into floating-point register rd. FSD stores a double-precision value from the floating-point registers to memory. The double-precision value may be a NaN-boxed single-precision value. FLD and FSD are only guaranteed to execute atomically if the effective address is naturally aligned and XLEN=64.", "url": "RV32ISPEC.pdf#segment44", "timestamp": "2023-10-08 02:32:18", "segment": "segment44", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(9.3).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "9.4 Double-Precision Floating-Point Computational Instructions ", "content": "The double-precision floating-point computational instructions are defined analogously to their single-precision counterparts, but operate on double-precision operands and produce double-precision results.", "url": "RV32ISPEC.pdf#segment45", "timestamp": "2023-10-08 02:32:18", "segment": "segment45", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(9.4).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "9.5 Double-Precision Floating-Point Conversion and Move Instructions ", "content": "Floating-point-to-integer and integer-to-floating-point conversion instructions are encoded in the OP-FP major opcode space. FCVT.W.D or FCVT.L.D converts a double-precision floating-point number in floating-point register rs1 to a signed 32-bit or 64-bit integer, respectively, in integer register rd. FCVT.D.W or FCVT.D.L converts a 32-bit or 64-bit signed integer, respectively, in integer register rs1 into a double-precision floating-point number in floating-point register rd. FCVT.WU.D, FCVT.LU.D, FCVT.D.WU, and FCVT.D.LU variants convert to or from unsigned integer values. FCVT.L[U].D and FCVT.D.L[U] are illegal in RV32. The range of valid inputs for FCVT.int.D and the behavior for invalid inputs are the same as for FCVT.int.S. All floating-point to integer and integer to floating-point conversion instructions round
according to the rm field. Note FCVT.D.W[U] always produces an exact result and is unaffected by rounding mode. The double-precision to single-precision and single-precision to double-precision conversion instructions, FCVT.S.D and FCVT.D.S, are encoded in the OP-FP major opcode space, and both the source and destination are floating-point registers. The rs2 field encodes the datatype of the source, and the fmt field encodes the datatype of the destination. FCVT.S.D rounds according to the rm field; FCVT.D.S will never round. The floating-point to floating-point sign-injection instructions, FSGNJ.D, FSGNJN.D, and FSGNJX.D, are defined analogously to the single-precision sign-injection instructions. For RV64 only, instructions are provided to move bit patterns between the floating-point and integer registers. FMV.X.D moves the double-precision value in floating-point register rs1 to a representation in IEEE 754-2008 standard encoding in integer register rd. FMV.D.X moves the double-precision value encoded in IEEE 754-2008 standard encoding from integer register rs1 to floating-point register rd.", "url": "RV32ISPEC.pdf#segment46", "timestamp": "2023-10-08 02:32:18", "segment": "segment46", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(9.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202%20(9.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%203%20(9.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%204%20(9.5).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "9.6 Double-Precision Floating-Point Compare Instructions ", "content": "The double-precision floating-point compare instructions are defined analogously to their single-precision counterparts, but operate on double-precision operands.", "url": "RV32ISPEC.pdf#segment47", "timestamp": "2023-10-08 02:32:18", "segment": "segment47", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(9.6).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "9.7 Double-Precision Floating-Point Classify Instruction ", "content": 
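The result of the quiet and signaling comparisons (defined analogously to the single-precision FEQ/FLT/FLE in Section 8.8) can be sketched in a few lines of Python. Python cannot model the invalid-operation flag, so only the 0/1 results are shown; the function names are illustrative:

```python
import math

def feq(a: float, b: float) -> int:
    # Quiet comparison: NaN operands simply yield 0
    # (FEQ would raise Invalid Operation only on a signaling NaN input).
    return int(a == b)

def flt(a: float, b: float) -> int:
    # FLT/FLE are signaling comparisons in the ISA: any NaN input raises
    # Invalid Operation. The Boolean result is still 0 for NaN operands.
    return int(a < b)

assert feq(math.nan, 1.0) == 0
assert flt(math.nan, 1.0) == 0
assert feq(2.0, 2.0) == 1
```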
"The double-precision floating-point classify instruction, FCLASS.D, is defined analogously to its single-precision counterpart, but operates on double-precision operands.", "url": "RV32ISPEC.pdf#segment48", "timestamp": "2023-10-08 02:32:18", "segment": "segment48", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(9.7).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 10 ", "content": "Q standard extension for quad-precision floating-point, Version 2.0. This chapter describes the Q standard extension for 128-bit binary floating-point instructions compliant with the IEEE 754-2008 arithmetic standard. The 128-bit or quad-precision binary floating-point instruction subset is named Q, and requires RV64IFD. The floating-point registers are now extended to hold either a single, double, or quad-precision floating-point value (FLEN=128). The NaN-boxing scheme described in Section 9.2 is now extended recursively to allow a single-precision value to be NaN-boxed inside a double-precision value, which is itself NaN-boxed inside a quad-precision value.", "url": "RV32ISPEC.pdf#segment49", "timestamp": "2023-10-08 02:32:18", "segment": "segment49", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "10.1 Quad-Precision Load and Store Instructions ", "content": "New 128-bit variants of the LOAD-FP and STORE-FP instructions are added, encoded with a new value for the funct3 width field. FLQ and FSQ are only guaranteed to execute atomically if the effective address is naturally aligned and XLEN=128.", "url": "RV32ISPEC.pdf#segment50", "timestamp": "2023-10-08 02:32:18", "segment": "segment50", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(10.1).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "10.2 Quad-Precision Computational Instructions ", "content": "A new supported format is added to the format field of most instructions, as shown in Table 10.1. The quad-precision floating-point computational instructions are defined analogously to their double-precision counterparts, but operate on quad-precision operands and produce quad-precision results.", "url": "RV32ISPEC.pdf#segment51", "timestamp": "2023-10-08 02:32:18", "segment": 
"segment51", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2010.1.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(10.2).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "10.3 Quad-Precision Convert and Move Instructions ", "content": "New floating-point to floating-point conversion instructions FCVT.S.Q, FCVT.Q.S, FCVT.D.Q, and FCVT.Q.D are added. The floating-point to floating-point sign-injection instructions, FSGNJ.Q, FSGNJN.Q, and FSGNJX.Q, are defined analogously to the double-precision sign-injection instructions. There are no FMV.X.Q or FMV.Q.X instructions, so quad-precision bit patterns must be moved to the integer registers via memory. RV128 will support FMV.X.Q and FMV.Q.X in the Q extension.", "url": "RV32ISPEC.pdf#segment52", "timestamp": "2023-10-08 02:32:18", "segment": "segment52", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(10.3).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202%20(10.3).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%203%20(10.3).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "10.4 Quad-Precision Floating-Point Compare Instructions ", "content": "The quad-precision floating-point compare instructions perform the specified comparison (equal, less than, or less than or equal) between floating-point registers rs1 and rs2 and record the Boolean result in integer register rd.", "url": "RV32ISPEC.pdf#segment53", "timestamp": "2023-10-08 02:32:18", "segment": "segment53", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(10.4).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "10.5 Quad-Precision Floating-Point Classify Instruction ", "content": "The quad-precision floating-point classify instruction, FCLASS.Q, is defined analogously to its double-precision counterpart, but operates on quad-precision operands.", "url": "RV32ISPEC.pdf#segment54", "timestamp": "2023-10-08
02:32:18", "segment": "segment54", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(10.5).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 11 ", "content": "L standard extension for decimal floating-point, Version 0.0. This chapter is a placeholder for the specification of a standard extension, named L, designed to support decimal floating-point arithmetic as defined in the IEEE 754-2008 standard.", "url": "RV32ISPEC.pdf#segment55", "timestamp": "2023-10-08 02:32:18", "segment": "segment55", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "11.1 Decimal Floating-Point Registers ", "content": "The existing floating-point registers can be used to hold 64-bit and 128-bit decimal floating-point values, and the existing floating-point load and store instructions can be used to move values to and from memory. Due to the large opcode space required by the fused multiply-add instructions, a decimal floating-point instruction extension will require five 25-bit major opcodes in the 30-bit encoding space.", "url": "RV32ISPEC.pdf#segment56", "timestamp": "2023-10-08 02:32:18", "segment": "segment56", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 12 ", "content": "C standard extension for compressed instructions, Version 2.0. This chapter describes the current draft proposal for the RISC-V standard compressed instruction-set extension, named C, which reduces static and dynamic code size by adding short 16-bit instruction encodings for common operations. The C extension can be added to any of the base ISAs (RV32, RV64, RV128), and we use the generic term RVC to cover any of these. Typically, 50%-60% of the RISC-V instructions in a program can be replaced with RVC instructions, resulting in a 25%-30% code-size reduction.", "url": "RV32ISPEC.pdf#segment57", "timestamp": "2023-10-08 02:32:18", "segment": "segment57", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "12.1 Overview ", "content": "RVC uses a simple compression scheme that offers shorter 16-bit versions of common 32-bit RISC-V instructions when: the immediate or address offset is small; or one of the registers is the zero register (x0), the ABI link register (x1), or the ABI stack pointer (x2); or the destination register and the first source register are identical; or the registers
used are the 8 most popular ones. The C extension is compatible with all other standard instruction extensions. The C extension allows 16-bit instructions to be freely intermixed with 32-bit instructions, with the latter now able to start on any 16-bit boundary. With the addition of the C extension, JAL and JALR instructions will no longer raise an instruction-misaligned exception. Removing the 32-bit alignment constraint on the original 32-bit instructions allows significantly greater code density. The compressed instruction encodings are mostly common across RV32C, RV64C, and RV128C, but as shown in Table 12.3, a few opcodes are used for different purposes depending on the base ISA width. For example, the wider address-space RV64C and RV128C variants require additional opcodes to compress loads and stores of 64-bit integer values, while RV32C uses the same opcodes to compress loads and stores of single-precision floating-point values. Similarly, RV128C requires additional opcodes to capture loads and stores of 128-bit integer values, while the same opcodes are used for loads and stores of double-precision floating-point values in RV32C and RV64C. If the C extension is implemented, the appropriate compressed floating-point load and store instructions must be provided whenever the relevant standard floating-point extension (F and/or D) is also implemented. In addition, RV32C includes a compressed jump and link instruction to compress short-range subroutine calls, where the same opcode is used to compress ADDIW for RV64C and RV128C. Double-precision loads and stores are a significant fraction of static and dynamic instructions, hence the motivation to include them in the RV32C and RV64C encoding. Although single-precision loads and stores are not a significant source of static or dynamic compression for benchmarks compiled for the currently supported ABIs, many microcontrollers only provide hardware single-precision floating-point units and have an ABI that only supports single-precision floating-point numbers. For these systems, single-precision loads and stores will be used at least as frequently as double-precision loads and stores are in the measured benchmarks, hence the motivation to provide compressed support for these in RV32C. Short-range subroutine calls are more likely in the small binaries of microcontrollers, hence the motivation to include these in RV32C. Although reusing opcodes for different purposes for different base ISA register widths adds some complexity to the documentation, the impact on implementation complexity is small even for designs that support multiple base ISA register widths.
The compressed floating-point load and store variants use the same instruction format with the same register specifiers as the wider integer loads and stores. RVC was designed under the constraint that each RVC instruction expands into a single 32-bit instruction in either the base ISA (RV32I/RV32E, RV64I, or RV128I) or the F and D standard extensions where present. Adopting this constraint has two main benefits: hardware designs can simply expand RVC instructions during decode, simplifying verification and minimizing modifications to existing microarchitectures; and compilers can be unaware of the RVC extension and leave code compression to the assembler and linker, although a compression-aware compiler will generally be able to produce better results. We felt the multiple complexity reductions of a simple one-one mapping between C and base IFD instructions far outweighed the potential gains of a slightly denser encoding that added additional instructions only supported in the C extension, or that allowed encoding of multiple IFD instructions in one C instruction. It is important to note that the C extension is not designed to be a standalone ISA, and is meant to be used alongside a base ISA. Variable-length instruction sets have long been used to improve code density. For example, the IBM Stretch [6], developed in the late 1950s, had an ISA with 32-bit and 64-bit instructions, where some 32-bit instructions were compressed versions of the full 64-bit instructions. Stretch also employed the concept of limiting the set of registers that were addressable in the shorter instruction formats, with short branch instructions that could only refer to one of the index registers. The later IBM 360 architecture [3] supported a simple variable-length instruction encoding with 16-bit, 32-bit, or 48-bit instruction formats. In 1963, CDC introduced the Cray-designed CDC 6600 [28], a precursor to RISC architectures, that introduced a register-rich load-store architecture with instructions of two lengths, 15-bits and 30-bits. The later Cray-1 design used a very similar instruction format, with 16-bit and 32-bit instruction lengths. The initial RISC ISAs from the 1980s all picked performance over code size, which was reasonable for a workstation environment, but not for embedded systems. Hence, both ARM and MIPS subsequently made versions of their ISAs that offered smaller code size by providing an alternative 16-bit wide instruction set instead of the standard 32-bit wide instructions. The compressed RISC ISAs reduced code size relative to their starting points by about 25-30%, yielding code that was significantly smaller than 80x86. This result surprised some, whose
intuition was that the variable-length CISC ISA should be smaller than RISC ISAs that offered only 16-bit and 32-bit formats. Since the original RISC ISAs did not leave sufficient opcode space free to include these unplanned compressed instructions, they were instead developed as complete new ISAs. This meant compilers needed different code generators for the separate compressed ISAs. The first compressed RISC ISA extensions (e.g., ARM Thumb and MIPS16) used only a fixed 16-bit instruction size, which gave good reductions in static code size but caused an increase in dynamic instruction count, which led to lower performance compared to the original fixed-width 32-bit instruction size. This led to the development of a second generation of compressed RISC ISA designs with mixed 16-bit and 32-bit instruction lengths (e.g., ARM Thumb2, microMIPS, PowerPC VLE), so that performance was similar to pure 32-bit instructions but with significant code size savings. Unfortunately, these different generations of compressed ISAs are incompatible with each other and with the original uncompressed ISA, leading to significant complexity in documentation, implementations, and software tools support. Of the commonly used 64-bit ISAs, only PowerPC and microMIPS currently support a compressed instruction format. It is surprising that the most popular 64-bit ISA for mobile platforms (ARM v8) does not include a compressed instruction format, given that static code size and dynamic instruction fetch bandwidth are important metrics. Although static code size is not a major concern in larger systems, instruction fetch bandwidth can be a major bottleneck in servers running commercial workloads, which often have a large instruction working set. Benefiting from 25 years of hindsight, RISC-V was designed to support compressed instructions from the outset, leaving enough opcode space for RVC to be added as a simple extension on top of the base ISA (along with many other extensions). The philosophy of RVC is to reduce code size for embedded applications and to improve performance and energy-efficiency for all applications due to fewer misses in the instruction cache. Waterman shows that RVC fetches 25-30% fewer instruction bits, which reduces instruction cache misses by 20-25%, or roughly the same performance impact as doubling the instruction cache size [33].", "url": "RV32ISPEC.pdf#segment58", "timestamp": "2023-10-08 02:32:19", "segment": "segment58", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "12.2 Compressed Instruction Formats 
", "content": "Table 12.1 shows the eight compressed instruction formats. CR, CI, and CSS can use any of the 32 RVI registers, but CIW, CL, CS, and CB are limited to just 8 of them. Table 12.2 lists these popular registers, which correspond to registers x8 to x15. Note that there is a separate version of load and store instructions that use the stack pointer as the base address register; since saving to and restoring from the stack are so prevalent, these use the CI and CSS formats to allow access to all 32 data registers. CIW supplies an 8-bit immediate for the ADDI4SPN instruction. The RISC-V ABI was changed to make the frequently used registers map to registers x8-x15. This simplifies the decompression decoder by having a contiguous naturally aligned set of register numbers, and is also compatible with the RV32E subset of the base specification, which has only 16 integer registers. Compressed register-based floating-point loads and stores also use the CL and CS formats respectively, with the eight registers mapping to f8 to f15. The standard RISC-V calling convention maps the most frequently used floating-point registers to registers f8 to f15, which allows the same register decompression decoding as for the integer register numbers. The formats were designed to keep the bits for the two register source specifiers in the same place in all instructions, while the destination register field can move. When the full 5-bit destination register specifier is present, it is in the same place as in the 32-bit RISC-V encoding. Where immediates are sign-extended, the sign-extension is always from bit 12. Immediate fields have been scrambled, as in the base specification, to reduce the number of immediate muxes required. The immediate fields are scrambled in the instruction formats instead of in sequential order so that as many bits as possible are in the same position in every instruction, thereby simplifying implementations. For example, immediate bits 17-10 are always sourced from the same instruction bit positions; five immediate bits (5, 4, 3, 1, 0) have two source instruction bits each; four immediate bits (9, 7, 6, 2) have three sources; and one immediate bit (8) has four sources. For many RVC instructions, zero-valued immediates are disallowed and x0 is not a valid 5-bit register specifier. These restrictions free up encoding space for other instructions requiring fewer operand bits.", "url": "RV32ISPEC.pdf#segment59", "timestamp": "2023-10-08 02:32:19", "segment": "segment59", "image_urls": 
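The contiguous x8-x15 mapping is what makes register decompression trivial: the 3-bit specifier just gets 8 added to it. A minimal Python sketch (the function name is illustrative):

```python
def decompress_reg(rprime: int) -> int:
    """Map a 3-bit compressed register specifier (Table 12.2) to the
    full register number: specifiers 0-7 name x8-x15 (or f8-f15).
    Because the set is contiguous and naturally aligned, the mapping
    is a single constant add, i.e. just prepending the bits 01."""
    assert 0 <= rprime <= 7
    return 8 + rprime

assert decompress_reg(0) == 8    # x8
assert decompress_reg(7) == 15   # x15
```

In hardware, this is not even an adder: the 3-bit field is concatenated below a constant `01` prefix to form the 5-bit register number.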
["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2012.1.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2012.2.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "12.3 Load and Store Instructions ", "content": "To increase the reach of 16-bit instructions, data-transfer instructions use zero-extended immediates that are scaled by the size of the data in bytes: x4 for words, x8 for double words, and x16 for quad words. RVC provides two variants of loads and stores. One uses the ABI stack pointer, x2, as the base address and can target any data register. The other can reference one of 8 base address registers and one of 8 data registers. Stack-pointer-based loads and stores: these instructions use the CI format. C.LWSP loads a 32-bit value from memory into register rd. It computes its effective address by adding the zero-extended offset, scaled by 4, to the stack pointer, x2. It expands to lw rd, offset[7:2](x2). C.LDSP is an RV64C/RV128C-only instruction that loads a 64-bit value from memory into register rd. It computes its effective address by adding the zero-extended offset, scaled by 8, to the stack pointer, x2. It expands to ld rd, offset[8:3](x2). C.LQSP is an RV128C-only instruction that loads a 128-bit value from memory into register rd. It computes its effective address by adding the zero-extended offset, scaled by 16, to the stack pointer, x2. It expands to lq rd, offset[9:4](x2). C.FLWSP is an RV32FC-only instruction that loads a single-precision floating-point value from memory into floating-point register rd. It computes its effective address by adding the zero-extended offset, scaled by 4, to the stack pointer, x2. It expands to flw rd, offset[7:2](x2). C.FLDSP is an RV32DC/RV64DC-only instruction that loads a double-precision floating-point value from memory into floating-point register rd. It computes its effective address by adding the zero-extended offset, scaled by 8, to the stack pointer, x2. It expands to fld rd, offset[8:3](x2). These instructions use the CSS format. C.SWSP stores a 32-bit value in register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 4, to the stack pointer, x2. It expands to sw rs2, offset[7:2](x2). C.SDSP is an RV64C/RV128C-only instruction that stores a 64-bit value in register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 8, to the stack pointer, x2. It expands to sd rs2, offset[8:3](x2). C.SQSP is an RV128C-only instruction that stores a 128-bit value
in register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 16, to the stack pointer, x2. It expands to sq rs2, offset[9:4](x2). C.FSWSP is an RV32FC-only instruction that stores a single-precision floating-point value in floating-point register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 4, to the stack pointer, x2. It expands to fsw rs2, offset[7:2](x2). C.FSDSP is an RV32DC/RV64DC-only instruction that stores a double-precision floating-point value in floating-point register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 8, to the stack pointer, x2. It expands to fsd rs2, offset[8:3](x2). Register save/restore code at function entry/exit represents a significant portion of static code size. The stack-pointer-based compressed loads and stores in RVC are effective at reducing the save/restore static code size by a factor of 2 while improving performance by reducing dynamic instruction bandwidth. A common mechanism used in other ISAs to reduce save/restore code size is load-multiple and store-multiple instructions. We considered adopting these for RISC-V but noted the following drawbacks to these instructions: they complicate processor implementations; for virtual memory systems, some data accesses could be resident in physical memory and some could not, which requires a new restart mechanism for partially executed instructions; unlike the rest of the RVC instructions, there is no IFD equivalent to load multiple and store multiple; unlike the rest of the RVC instructions, the compiler would have to be aware of these instructions to both generate them and to allocate registers in an order that maximizes the chances of them being saved and stored, since they would be saved and restored in sequential order; simple microarchitectural implementations will constrain how other instructions can be scheduled around the load and store multiple instructions, leading to a potential performance loss; and the desire for sequential register allocation might conflict with the featured registers selected for the CIW, CL, CS, and CB formats. Furthermore, much of the gains can be realized in software by replacing prologue and epilogue code with subroutine calls to common prologue and epilogue code, a technique described in Section 5.6 of [34]. While reasonable architects might come to different conclusions, we decided to omit load and store multiple and instead use the software-only approach of calling save/restore millicode routines to attain the greatest code size
reduction. Register-based loads and stores: these instructions use the CL format. C.LW loads a 32-bit value from memory into register rd. It computes an effective address by adding the zero-extended offset, scaled by 4, to the base address in register rs1. It expands to lw rd, offset[6:2](rs1). C.LD is an RV64C/RV128C-only instruction that loads a 64-bit value from memory into register rd. It computes an effective address by adding the zero-extended offset, scaled by 8, to the base address in register rs1. It expands to ld rd, offset[7:3](rs1). C.LQ is an RV128C-only instruction that loads a 128-bit value from memory into register rd. It computes an effective address by adding the zero-extended offset, scaled by 16, to the base address in register rs1. It expands to lq rd, offset[8:4](rs1). C.FLW is an RV32FC-only instruction that loads a single-precision floating-point value from memory into floating-point register rd. It computes an effective address by adding the zero-extended offset, scaled by 4, to the base address in register rs1. It expands to flw rd, offset[6:2](rs1). C.FLD is an RV32DC/RV64DC-only instruction that loads a double-precision floating-point value from memory into floating-point register rd. It computes an effective address by adding the zero-extended offset, scaled by 8, to the base address in register rs1. It expands to fld rd, offset[7:3](rs1). These instructions use the CS format. C.SW stores a 32-bit value in register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 4, to the base address in register rs1. It expands to sw rs2, offset[6:2](rs1). C.SD is an RV64C/RV128C-only instruction that stores a 64-bit value in register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 8, to the base address in register rs1. It expands to sd rs2, offset[7:3](rs1). C.SQ is an RV128C-only instruction that stores a 128-bit value in register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 16, to the base address in register rs1. It expands to sq rs2, offset[8:4](rs1). C.FSW is an RV32FC-only instruction that stores a single-precision floating-point value in floating-point register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 4, to the base address in register rs1. It expands to fsw rs2, offset[6:2](rs1). C.FSD is an RV32DC/RV64DC-only instruction that stores a double-precision floating-point value in floating-point register rs2 to memory. It computes an effective address by adding the zero-extended offset, scaled by 8, to the base address in register rs1. It expands to fsd rs2, offset[7:3]
(rs1).", "url": "RV32ISPEC.pdf#segment60", "timestamp": "2023-10-08 02:32:20", "segment": "segment60", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(12.3).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202%20(12.3).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%203%20(12.3).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%204%20(12.3).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "12.4 Control Transfer Instructions ", "content": "RVC provides unconditional jump instructions and conditional branch instructions. As with the base RVI instructions, the offsets of all RVC control transfer instructions are in multiples of 2 bytes. These instructions use the CJ format. C.J performs an unconditional control transfer. The offset is sign-extended and added to the pc to form the jump target address. C.J can therefore target a +-2 KiB range. C.J expands to jal x0, offset[11:1]. C.JAL is an RV32C-only instruction that performs the same operation as C.J, but additionally writes the address of the instruction following the jump (pc+2) to the link register, x1. C.JAL expands to jal x1, offset[11:1]. These instructions use the CR format. C.JR (jump register) performs an unconditional control transfer to the address in register rs1. C.JR expands to jalr x0, rs1, 0. C.JALR (jump and link register) performs the same operation as C.JR, but additionally writes the address of the instruction following the jump (pc+2) to the link register, x1. C.JALR expands to jalr x1, rs1, 0. Strictly speaking, C.JALR does not expand exactly to a base RVI instruction, as the value added to the pc to form the link address is 2 rather than 4 as in the base ISA, but supporting both offsets of 2 and 4 bytes is only a very minor change to the base microarchitecture. These instructions use the CB format. C.BEQZ performs a conditional control transfer. The offset is sign-extended and added to the pc to form the branch target address. It can therefore target a +-256 B range. C.BEQZ takes the branch if the value in register rs1 is zero. It expands to beq rs1, x0, offset[8:1]. C.BNEZ is defined analogously, but it takes the branch if rs1 contains a nonzero value. It expands to bne rs1, x0, offset[8:1].", "url": "RV32ISPEC.pdf#segment61", 
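The target computation for C.J can be sketched in Python: the 11-bit encoded field names offset[11:1], so the byte offset is the field shifted left by one and sign-extended from bit 11, giving the +-2 KiB reach. The helper names here are illustrative:

```python
def sext(value: int, bits: int) -> int:
    """Sign-extend a `bits`-wide two's-complement value to a Python int."""
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)

def cj_target(pc: int, offset_field: int) -> int:
    """C.J jump target: the 11-bit field encodes offset[11:1]; the
    low offset bit is implicitly zero (2-byte multiples), and the
    12-bit byte offset is sign-extended and added to the pc."""
    return pc + sext(offset_field << 1, 12)

assert cj_target(0x1000, 0x001) == 0x1002   # forward by 2 bytes
assert cj_target(0x1000, 0x7FF) == 0x0FFE   # all-ones field is -2
```

The same pattern applies to C.BEQZ/C.BNEZ with an 8-bit offset[8:1] field sign-extended from bit 8, giving the +-256 B reach.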
"timestamp": "2023-10-08 02:32:20", "segment": "segment61", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(12.4).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202%20(12.4).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%203%20(12.4).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "12.5 Integer Computational Instructions ", "content": "RVC provides several instructions for integer arithmetic and constant generation. Integer constant-generation instructions: the two constant-generation instructions both use the CI instruction format and can target any integer register. C.LI loads the sign-extended 6-bit immediate, imm, into register rd. C.LI is only valid when rd is not x0. C.LI expands into addi rd, x0, imm[5:0]. C.LUI loads the non-zero 6-bit immediate field into bits 17-12 of the destination register, clears the bottom 12 bits, and sign-extends bit 17 into all higher bits of the destination. C.LUI is only valid when rd is not x0 or x2, and when the immediate is not equal to zero. C.LUI expands into lui rd, nzuimm[17:12]. Integer register-immediate operations: these operations are encoded in the CI format and perform operations on a non-x0 integer register and a 6-bit immediate. C.ADDI adds the non-zero sign-extended 6-bit immediate to the value in register rd then writes the result to rd. C.ADDI expands into addi rd, rd, nzimm[5:0]. C.ADDIW is an RV64C/RV128C-only instruction that performs the same computation but produces a 32-bit result, then sign-extends the result to 64 bits. C.ADDIW expands into addiw rd, rd, imm[5:0]. The immediate can be zero for C.ADDIW, where this corresponds to sext.w rd. C.ADDI16SP shares the opcode with C.LUI, but has a destination field of x2. C.ADDI16SP adds the non-zero sign-extended 6-bit immediate to the value in the stack pointer (sp=x2), where the immediate is scaled to represent multiples of 16 in the range (-512,496). C.ADDI16SP is used to adjust the stack pointer in procedure prologues and epilogues. It expands into addi x2, x2, nzimm[9:4]. In the standard RISC-V calling convention, the stack pointer sp is always 16-byte aligned. C.ADDI4SPN is a CIW-format, RV32C/RV64C-only instruction that adds a zero-extended non-zero immediate, scaled by 4, to the stack pointer, x2, and writes the result to rd. This instruction is used to generate pointers to
stack-allocated variables, and expands to addi rd, x2, nzuimm[9:2]. C.SLLI is a CI-format instruction that performs a logical left shift of the value in register rd then writes the result to rd. The shift amount is encoded in the shamt field, where shamt[5] must be zero for RV32C. For RV32C and RV64C, the shift amount must be non-zero. For RV128C, a shift amount of zero is used to encode a shift of 64. C.SLLI expands into slli rd, rd, shamt[5:0], except for RV128C with shamt=0, which expands to slli rd, rd, 64. C.SRLI is a CB-format instruction that performs a logical right shift of the value in register rd then writes the result to rd. The shift amount is encoded in the shamt field, where shamt[5] must be zero for RV32C. For RV32C and RV64C, the shift amount must be non-zero. For RV128C, a shift amount of zero is used to encode a shift of 64. Furthermore, the shift amount is sign-extended for RV128C, and so the legal shift amounts are 1-31, 64, and 96-127. C.SRLI expands into srli rd, rd, shamt[5:0], except for RV128C with shamt=0, which expands to srli rd, rd, 64. C.SRAI is defined analogously to C.SRLI, but instead performs an arithmetic right shift. C.SRAI expands to srai rd, rd, shamt[5:0]. Left shifts are usually more frequent than right shifts, as left shifts are frequently used to scale address values. Right shifts have therefore been granted less encoding space and are placed in an encoding quadrant where other immediates are sign-extended. For RV128, the decision was made to have the 6-bit shift-amount immediate also be sign-extended. Apart from reducing decode complexity, we believe right-shift amounts of 96-127 will be more useful than 64-95, to allow extraction of tags located in the high portions of 128-bit address pointers. We note that RV128C is not frozen at this point as RV32C and RV64C are, to allow evaluation of typical usage of 128-bit address-space codes. C.ANDI is a CB-format instruction that computes the bitwise AND of the value in register rd and the sign-extended 6-bit immediate, then writes the result to rd. C.ANDI expands to andi rd, rd, imm[5:0]. Integer register-register operations: these instructions use the CR format. C.MV copies the value in register rs2 into register rd. C.MV expands into add rd, x0, rs2. C.ADD adds the values in registers rd and rs2 and writes the result to register rd. C.ADD expands into add rd, rd, rs2. These instructions use the CS format. C.AND computes the bitwise AND of the values in registers rd and rs2, then writes the result to register rd. C.AND expands into and rd, rd, rs2. C.OR computes the bitwise OR of the values in registers rd and rs2, then writes the result to register rd. C.OR expands into or rd, rd, rs2. C.XOR computes the bitwise XOR of the values in registers rd and rs2, then writes the result to register rd. C.XOR
expands into xor rd, rd, rs2. C.SUB subtracts the value in register rs2 from the value in register rd, then writes the result to register rd. C.SUB expands into sub rd, rd, rs2. C.ADDW is an RV64C/RV128C-only instruction that adds the values in registers rd and rs2, then sign-extends the lower 32 bits of the sum before writing the result to register rd. C.ADDW expands into addw rd, rd, rs2. C.SUBW is an RV64C/RV128C-only instruction that subtracts the value in register rs2 from the value in register rd, then sign-extends the lower 32 bits of the difference before writing the result to register rd. C.SUBW expands into subw rd, rd, rs2. This group of six instructions do not provide large savings individually, but do not occupy much encoding space and are straightforward to implement, and as a group they provide a worthwhile improvement in static and dynamic compression. Defined illegal instruction: a 16-bit instruction with all bits zero is permanently reserved as an illegal instruction. We reserve the all-zero instruction to be an illegal instruction, as it helps trap attempts to execute zeroed or nonexistent portions of the memory space. The all-zero value should not be redefined in any non-standard extension. Similarly, we reserve instructions with all bits set to 1 (corresponding to very long instructions in the RISC-V variable-length encoding scheme) as illegal, to capture another common value seen in nonexistent memory regions. NOP instruction: C.NOP is a CI-format instruction that does not change any user-visible state, except for advancing the pc. C.NOP is encoded as C.ADDI x0, 0 and expands to addi x0, x0, 0. Breakpoint instruction: debuggers can use the C.EBREAK instruction, which expands to ebreak, to cause control to be transferred back to the debugging environment. C.EBREAK shares the opcode with the C.ADD instruction, but with rd and rs2 both zero, and thus can also use the CR format.", "url": "RV32ISPEC.pdf#segment62", "timestamp": "2023-10-08 02:32:21", "segment": "segment62", "image_urls": 
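The reserved all-zero encoding and the one-one expansion constraint can be sketched together in Python. The function names are illustrative, and the expansion is shown as assembly text rather than a 32-bit encoding:

```python
def is_defined_illegal(halfword: int) -> bool:
    """A 16-bit instruction with all bits zero is permanently reserved
    as an illegal instruction, helping trap jumps into zeroed or
    nonexistent memory."""
    assert 0 <= halfword <= 0xFFFF
    return halfword == 0

def expand_caddi(rd: int, nzimm: int) -> str:
    """C.ADDI rd, nzimm expands one-one into 'addi rd, rd, nzimm'.
    Both the register (non-x0) and the immediate (non-zero) are
    constrained; a zero immediate instead encodes a HINT."""
    assert rd != 0 and nzimm != 0
    return f"addi x{rd}, x{rd}, {nzimm}"

assert is_defined_illegal(0x0000)
assert expand_caddi(10, -4) == "addi x10, x10, -4"
```

This mirrors the design point made above: because every RVC instruction maps to exactly one base instruction, a decoder can be a pure expansion table with a couple of reserved-encoding checks in front.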
["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%201%20(12.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%202%20(12.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%203%20(12.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%204%20(12.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%205%20(12.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%206%20(12.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%207%20(12.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%208%20(12.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%209%20(12.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%2010%20(12.5).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/extra%2011%20(12.5).jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "12.6 Usage of C Instructions in LR/SC Sequences ", "content": "implementations support c extension compressed forms instructions permitted inside lrsc sequences used retaining guarantee eventual success described section 72 implication implementation claims support c extensions must ensure lrsc sequences containing valid c instructions eventually complete", "url": "RV32ISPEC.pdf#segment63", "timestamp": "2023-10-08 02:32:21", "segment": "segment63", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "12.7 RVC Instruction Set Listings ", "content": "table 123 shows map major opcodes rvc opcodes lower two bits set correspond instructions wider 16 
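The one-to-one expansions from compressed instructions to 32-bit base instructions described in the listing above can be sketched as a small decoder table. This is an illustrative model only, not the spec's bit-level decoding: the `expand` helper and its string output are our own naming, register/immediate field extraction from the 16-bit encodings is omitted, and the rd'/rs2' restricted-register distinction is not modeled.

```python
# Sketch: expansion of compressed (RVC) ALU and shift instructions into
# their 32-bit base-ISA equivalents, following the rules in the text.

def expand(mnemonic, rd, rs2=None, shamt=None, xlen=32):
    """Return the 32-bit assembly string an RVC instruction expands to."""
    if mnemonic == "c.mv":
        return f"add {rd}, x0, {rs2}"            # C.MV -> add rd, x0, rs2
    if mnemonic == "c.add":
        return f"add {rd}, {rd}, {rs2}"          # C.ADD -> add rd, rd, rs2
    if mnemonic in ("c.and", "c.or", "c.xor", "c.sub", "c.addw", "c.subw"):
        op = mnemonic[2:]                        # and/or/xor/sub/addw/subw
        return f"{op} {rd}, {rd}, {rs2}"         # destructive two-operand form
    if mnemonic in ("c.slli", "c.srli", "c.srai"):
        op = mnemonic[2:]
        # RV32C/RV64C: shamt must be non-zero; RV128C: shamt=0 encodes 64.
        if shamt == 0:
            if xlen != 128:
                raise ValueError("shamt=0 is reserved on RV32C/RV64C")
            shamt = 64
        return f"{op} {rd}, {rd}, {shamt}"
    raise ValueError(f"unknown compressed instruction {mnemonic}")
```

For example, `expand("c.srli", "a0", shamt=0, xlen=128)` yields `srli a0, a0, 64`, matching the RV128C special case.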
bits, including those in the wider base ISAs. Several instructions are only valid for certain operands; when invalid, they are marked either RES to indicate that the opcode is reserved for future standard extensions; NSE to indicate that the opcode is reserved for non-standard extensions; or HINT to indicate that the opcode is reserved for future standard microarchitectural hints. Instructions marked HINT must execute as no-ops on implementations for which the HINT has no effect. HINT instructions are designed to support the future addition of microarchitectural hints that might affect performance but cannot affect architectural state. The HINT encodings have been chosen so that simple implementations can ignore the HINT encoding and execute the HINT as a regular instruction that happens to change no architectural state. For example, C.ADD is a HINT if the destination register is x0, where the five-bit rs2 field encodes details of the HINT; however, a simple implementation can simply execute the HINT as an add to register x0, which has no effect.", "url": "RV32ISPEC.pdf#segment64", "timestamp": "2023-10-08 02:32:21", "segment": "segment64", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2012.3.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2012.4.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2012.5.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2012.6.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 13 ", "content": "B Standard Extension for Bit Manipulation, Version 0.0. This chapter is a placeholder for a future standard extension to provide bit manipulation instructions, including instructions to insert, extract, and test bit fields, as well as rotations, funnel shifts, and bit and byte permutations. Although bit manipulation instructions are very effective in some application domains, particularly when dealing with externally packed data structures, we excluded them from the base ISAs, as they are not useful in all domains and can add additional complexity to the instruction formats to supply all the needed operands. We anticipate the B extension will be a brownfield encoding within the base 30-bit instruction space.", "url": "RV32ISPEC.pdf#segment65", "timestamp": "2023-10-08 02:32:21", "segment": "segment65",
"image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 14 ", "content": "j standard extension dynamically translated languages version 00 chapter placeholder future standard extension support dynamically translated languages many popular languages usually implemented via dynamic translation including java javascript languages benet additional isa support dynamic checks garbage collection", "url": "RV32ISPEC.pdf#segment66", "timestamp": "2023-10-08 02:32:21", "segment": "segment66", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 15 ", "content": "standard extension transactional memory version 00 chapter placeholder future standard extension provide transactional memory operations despite much research last twenty years initial commercial implementations still much debate best way support atomic operations involving multiple addresses current thoughts include small limitedcapacity transactional memory buer along lines original transactional memory proposals", "url": "RV32ISPEC.pdf#segment67", "timestamp": "2023-10-08 02:32:21", "segment": "segment67", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 16 ", "content": "p standard extension packedsimd instructions version 01 discussions 5th riscv workshop indicated desire drop packedsimd proposal oatingpoint registers favor standardizing v extension large oatingpoint simd operations however interest packedsimd xedpoint operations use integer registers small riscv implementations chapter outline standard packedsimd extension riscv reserved instruction subset name p future standard set packedsimd extensions many extensions build upon packedsimd extension taking advantage wide data registers datapaths separate integer unit packedsimd extensions rst introduced lincoln labs tx2 9 become pop ular way provide higher throughput dataparallel codes earlier commercial microproces sor implementations include intel i860 hp parisc max 19 sparc vis 29 mips mdmx 12 powerpc 
AltiVec [8], and Intel x86 MMX/SSE [24, 26]. More recent designs include Intel x86 AVX [20] and ARM Neon [11]. We describe a standard framework for adding packed-SIMD instructions here, but are not actively working on such a design. It is our opinion that packed-SIMD designs only represent a reasonable design point when reusing existing wide datapath resources; when significant additional resources are to be devoted to data-parallel execution, designs based on traditional vector architectures are a better choice and should use the V extension. The RISC-V packed-SIMD extension reuses the floating-point registers f0-f31. These registers can be defined to have widths of FLEN=32 to FLEN=1024. The standard floating-point instruction subsets require registers of width 32 bits (F), 64 bits (D), or 128 bits (Q). It is natural to use the floating-point registers for packed-SIMD values rather than the integer registers (as was done in the PA-RISC and Alpha packed-SIMD extensions), as this frees the integer registers for control and address values, simplifies reuse of the scalar floating-point units for SIMD floating-point execution, and leads naturally to a decoupled integer/floating-point hardware design. The floating-point load and store instruction encodings also have space to handle wider packed-SIMD registers. However, reusing the floating-point registers for packed-SIMD values does make it more difficult to use a recoded internal format for floating-point values. The existing floating-point load and store instructions can be used to load and store various-sized words from memory to the f registers. The base ISA supports 32-bit and 64-bit loads and stores, but the LOAD-FP and STORE-FP instruction encodings allow 8 different widths to be encoded, as shown in Table 16.1, and these can also be used for packed-SIMD operations. It might be desirable to support non-naturally-aligned loads and stores in hardware. Packed-SIMD computational instructions operate on packed values in the f registers. A value can have an 8-bit, 16-bit, 32-bit, 64-bit, or 128-bit width, and both integer and floating-point representations can be supported. For example, a 64-bit packed-SIMD extension can treat each register as 1 64-bit, 2 32-bit, 4 16-bit, or 8 8-bit packed values. Simple packed-SIMD extensions might fit in unused 32-bit instruction opcodes, whereas more extensive packed-SIMD extensions will likely require a dedicated 30-bit instruction space.", "url": "RV32ISPEC.pdf#segment68", "timestamp": "2023-10-08 02:32:22", "segment": "segment68", "image_urls":
["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2016.1.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 17 ", "content": "v standard extension vector operations version 02 chapter presents proposal riscv vector instruction set extension vector ex tension supports congurable vector unit tradeo number architectural vector registers supported element widths available maximum vector length vector extension designed allow binary code work eciently across variety hardware implemen tations varying physical vector storage capacity datapath parallelism vector extension based style vector register architecture introduced seymour cray 1970s opposed earlier packed simd approach introduced lincoln labs tx2 1957 adopted commercial instruction sets vector instruction set contains many features developed earlier research projects including berkeley t0 viram vector microprocessors mit scale vectorthread processor berkeley maven hwacha projects", "url": "RV32ISPEC.pdf#segment69", "timestamp": "2023-10-08 02:32:22", "segment": "segment69", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "17.1 Vector Unit State ", "content": "additional vector unit architectural state consists 32 vector data registers v0v31 8 vector predicate registers vp0vp7 xlenbit warl vector length csr vl addition current conguration vector unit held set vector conguration csrs vcmaxw vctype vcnpred described implementation determines available maximum vector length mvl current conguration held vcmaxw vcnpred registers also 3bit xedpoint rounding mode csr vxrm singlebit xedpoint saturation status csr vxsat", "url": "RV32ISPEC.pdf#segment70", "timestamp": "2023-10-08 02:32:22", "segment": "segment70", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "17.2 Element Datatypes and Width ", "content": "datatypes operations supported v extension depend upon base scalar isa supported extensions may include 8bit 16bit 32bit 64bit 128bit 
integer xed pointdatatypes x8 x16 x32 x64 andx128respectively and16bit32bit64bit and128bit oatingpoint types f16 f32 f64 f128 respectively v extension added must support vector data element types implied supported scalar types dened table 172 largest element width supported elen max xlen flen compiler support vectorization greatly simplied hardwaresupported data types supported scalar vector instructions adding vector extension machine oatingpoint support adds support ieee standard halfprecision 16bit oatingpoint data type includes set scalar halfprecision instructions described section scalar halfprecision instructions follow template oatingpoint precisions using hitherto unused fmt eld encoding 10 support scalar halfprecision oatingpoint types part vector extension main benets halfprecision obtained using vector instructions amortize peroperation control overhead supporting separate scalar halfprecision oatingpoint extension also reduces number standard instructionset variants", "url": "RV32ISPEC.pdf#segment71", "timestamp": "2023-10-08 02:32:22", "segment": "segment71", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2017.1.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2017.2.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "17.3 Vector Con\ufb01guration Registers (vcmaxw, vctype, vcp) ", "content": "vector unit must congured use architectural vector data register v0v31 congured maximum number bits allowed element vector data register disabled free physical vector storage architectural vector data registers number available vector predicate registers also set independently available mvl depends conguration setting mvl must always value conguration parameters given implementation implementations must provide mvl least four elements supported conguration settings vector data register current maximumwidth held separate fourbit eld vcmaxw csrs 
encoded as shown in Table 17.3. Several earlier vector machines had the ability to configure physical vector register storage as either a larger number of short vectors or a smaller number of long vectors, in particular the Fujitsu VP series [21]. In addition, each vector data register has an associated dynamic type field, held in a four-bit field in the vctype CSRs and encoded as shown in Table 17.4. The dynamic type field of a vector data register is constrained to only hold types of equal or lesser width than the value in the corresponding vcmaxw field for that register. Changes to vctype do not alter MVL, which depends only on each vector data register's maximum element width, not its current element data type. The dynamic types support vector function calls, where the caller does not know the types needed by the callee, as described below. To reduce configuration time, writes to a vcmaxw field also write the corresponding vctype field. When a vcmaxw field is written, the value is taken as a type encoding from Table 17.4; the width information shown in Table 17.3 is recorded in the vcmaxw field, whereas the full type information is recorded in the corresponding vctype field. Attempting to write a vcmaxw field with a width larger than that supported by the implementation raises an illegal instruction exception. Implementations are allowed to record a vcmaxw value larger than the value requested; in particular, an implementation may choose to hardwire the vcmaxw fields to the largest supported width. Attempting to write an unsupported type, or a type that requires more than the current vcmaxw width, to a vctype field raises an exception. Any write to a field in the vcmaxw register reconfigures the vector unit and causes all vector data registers to be zeroed, all vector predicate registers to be set, and the vector length register vl to be set to the maximum supported vector length. A write to a vctype field zeros only the associated vector data register, leaving the rest of the vector unit state undisturbed. Attempting to write a type needing more bits than the corresponding vcmaxw value to a vctype field raises an illegal instruction exception. Vector registers are zeroed on reconfiguration to prevent security holes and to avoid exposing differences in how different implementations manage physical vector register storage. In-order implementations would probably use a flag bit per register to mux in 0 instead of garbage values until each register's source is first overwritten. On in-order machines, partial writes, due to predication or vector lengths less than MVL, complicate this zeroing; these cases can be handled by adopting hardware read-modify-write, adding a zero bit per element, or trapping to a machine-mode trap handler on the first write access to a partially written register after configuration. Out-of-order machines can simply point the initial rename table at a physical zero register. For RV128, vcmaxw is a single CSR holding 32 4-bit width fields, where bits 4n+3 to 4n hold the maximum width of vector data register n. For RV64, the vcmaxw2 CSR provides access to the upper 64 bits of vcmaxw. For RV32, the vcmaxw1 CSR provides access to bits 63-32 of vcmaxw, and the vcmaxw3 CSR provides access to bits 127-96. The vcnpred CSR contains a single 4-bit WLRL field giving the number of enabled architectural predicate registers, from 0 to 8. Any write to vcnpred zeros the vector data registers, sets all bits in the visible vector predicate registers, and sets the vector length register vl to the maximum supported vector length. Attempting to write a value larger than 8 to vcnpred raises an illegal instruction exception.", "url": "RV32ISPEC.pdf#segment72", "timestamp": "2023-10-08 02:32:22", "segment": "segment72", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2017.3.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2017.4.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2017.5.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "17.4 Vector Length ", "content": "The active vector length is held in the XLEN-bit WARL vector length CSR vl, which can only hold values between 0 and MVL inclusive. Writes to the maximum-configuration registers vcmaxw and vcnpred cause vl to be initialized to MVL, while writes to vctype do not affect vl. The active vector length is usually written using a setvl instruction, which is encoded as a CSRRW instruction with the vl CSR number and a source register holding the requested application vector length (AVL), an unsigned XLEN-bit integer. The setvl instruction calculates the value to assign to vl according to Table 17.5. The rules for setting the vl register help keep vector pipelines full over the last two iterations of a stripmined loop; similar rules were previously used in Cray-designed machines [7]. The result of the calculation is also returned as the result of the setvl instruction. Note that, unlike a regular CSRRW instruction, the value written to the integer register rd is not the original CSR value but the modified value.
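The setvl-driven stripmining pattern described here can be sketched as follows. This is a simplified illustration: it uses the rule vl = min(AVL, MVL), whereas the actual Table 17.5 rules also balance the lengths of the last two loop iterations, and MVL here is an assumed constant rather than a configured hardware parameter.

```python
# Sketch of a stripmined vector loop driven by a setvl-style instruction.
# MVL (maximum vector length) is implementation-defined; we assume 8.

MVL = 8

def setvl(avl, mvl=MVL):
    """Active vector length for a requested application vector length.
    Simplified min rule; the proposal's Table 17.5 is more refined."""
    return min(avl, mvl)

def vector_add(dst, a, b):
    """Stripmine: process a and b in chunks of at most MVL elements."""
    i, n = 0, len(a)
    while n > 0:
        vl = setvl(n)            # elements handled this iteration
        for j in range(vl):      # stands in for one vector instruction
            dst[i + j] = a[i + j] + b[i + j]
        i += vl                  # advance past the completed strip
        n -= vl                  # remaining application vector length
    return dst
```

Note that when the remaining length reaches zero the loop exits, consistent with the rule that vector instructions perform no operations when vl=0.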
The idea of an implementation-defined vector length dates back to at least the IBM 3090 Vector Facility [5], which used a special load vector count and update (VLVCU) instruction to control stripmine loops. The setvl instruction included here is based on the simpler setvlr instruction introduced by Asanovic [4]. The setvl instruction is typically used at the start of every iteration of a stripmined loop to set the number of vector elements to operate on in the following loop iteration. The current MVL can be obtained by performing a setvl with a source argument that has all bits set (the largest unsigned integer). No operations are performed by a vector instruction when vl=0.", "url": "RV32ISPEC.pdf#segment73", "timestamp": "2023-10-08 02:32:23", "segment": "segment73", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "17.5 Rapid Configuration Instructions ", "content": "It can take several instructions to set vcmaxw, vctype, and vcnpred to a given configuration. To accelerate configuring the vector unit, specialized vcfg instructions are added, encoded as writes to CSRs, whose encoded immediate values set multiple fields of the vcmaxw, vctype, and vcnpred configuration registers at once. The vcfgd instruction is encoded as a CSRRW that takes a register value, encoded as shown in Figure 17.2, and returns the corresponding MVL to the destination register. The corresponding vcfgdi instruction is encoded as a CSRRWI that takes a 5-bit immediate value to set the configuration, and returns MVL to the destination register. One of the primary uses of vcfgdi is to configure the vector unit for single-byte element vectors for use in memcpy and memset routines; a single instruction can configure the vector unit for this operation. The vcfgd instruction also clears the vcnpred register, so no predicate registers are allocated. The vcfgd value specifies how many vector registers of each datatype are to be allocated, divided into 5-bit fields, one per supported datatype. A value of 0 in a field indicates that no registers of that type are allocated, while a non-zero value indicates the highest-numbered vector register holding that type. Each 5-bit field of the vcfgd value must contain either zero, indicating no vector registers allocated to that type, or a vector register number greater than the fields in lower bit positions, indicating the highest vector register containing the associated type. This encoding can compactly represent any arbitrary allocation of vector registers to data types, except that at least two vector registers (v0 and v1) must be allocated to the narrowest required
type example allocation shown figure 173 separate vcfgp vcfgpi instructions provided using csrrw csrrwi encodings respectively write source value vcnpred register return new mvl writes also clear vector data registers set bits allocated predicate registers set vlmvl vcfgp vcfgpi instruction used vcfgd complete reconguration vector unit zero argument given vcgfd vector unit uncongured enabled registers value 0 returned mvl conguration registers vcmaxw vcnpred accessed state either directly via vcfgd vcfgdi vcfgp vcfgpi instructions vector instructions raise illegal instruction exception quickly change individual types vector register vector data register n dedi cated csr address access vctype eld named vctypevn vcfgt vcfgti instructions assembler pseudoinstructions regular csrrw csrrwi instructions update type elds return original value vcfgti instruction typically used change desired type recording previous type one instruction vcfgt instruction used revert back saved type", "url": "RV32ISPEC.pdf#segment74", "timestamp": "2023-10-08 02:32:23", "segment": "segment74", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/fig%2017.1.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/fig%2017.2.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/fig%2017.3.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 18 ", "content": "n standard extension userlevel interrupts version 11 placeholder complete writeup n extension form basis discussion chapter presents proposal adding riscv userlevel interrupt exception handling n extension present outer execution environment delegated designated interrupts exceptions userlevel hardware transfer control directly userlevel trap handler without invoking outer execution environment userlevel interrupts primarily intended support secure embedded systems m mode umode present also 
supported systems running unixlike operating systems support userlevel trap handling used unix environment userlevel interrupts would likely replace conven tional signal handling could used building block extensions generate userlevel events garbage collection barriers integer overow oatingpoint traps", "url": "RV32ISPEC.pdf#segment75", "timestamp": "2023-10-08 02:32:23", "segment": "segment75", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "18.1 Additional CSRs ", "content": "uservisible csrs added support n extension listed table 181", "url": "RV32ISPEC.pdf#segment76", "timestamp": "2023-10-08 02:32:23", "segment": "segment76", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2018.1.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "18.2 User Status Register (ustatus) ", "content": "ustatus register xlenbit readwrite register formatted shown figure 181 ustatus register keeps track controls hart current operating state user interruptenable bit uie disables userlevel interrupts clear value uie copied upie userlevel trap taken value uie set zero provide atomicity userlevel trap handler upp bit hold previous privilege mode user mode uret instructions used return traps umode uret copies upie uie sets upie upie set upieuie stack popped enable interrupts help catch coding errors", "url": "RV32ISPEC.pdf#segment77", "timestamp": "2023-10-08 02:32:23", "segment": "segment77", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/fig%2018.1.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "18.3 Other CSRs ", "content": "remaining csrs function analogous way trap handling registers dened mmode smode complete writeup follow", "url": "RV32ISPEC.pdf#segment78", "timestamp": "2023-10-08 02:32:23", "segment": "segment78", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "18.4 N Extension Instructions ", "content": "uret instruction added 
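The uie/upie interrupt-enable stacking described for ustatus can be modeled directly. This is our own illustrative model, not spec pseudocode; the class and method names are invented, and the upp privilege bit is omitted since U-mode traps return to U-mode.

```python
# Illustrative model of the ustatus uie/upie bits across a user-level
# trap and uret, following the N-extension description in the text.

class UStatus:
    def __init__(self):
        self.uie = 1   # user interrupt-enable bit
        self.upie = 1  # previous interrupt-enable bit

    def take_trap(self):
        # On a user-level trap: uie is copied into upie, then uie is
        # cleared, providing atomicity for the user-level trap handler.
        self.upie = self.uie
        self.uie = 0

    def uret(self):
        # On uret: upie is copied back into uie, then upie is set to 1.
        # Setting upie=1 after the stack is "popped" helps catch the
        # coding error of returning through a stale ustatus.
        self.uie = self.upie
        self.upie = 1
```

A trap taken with interrupts enabled thus leaves the handler running with uie=0, and uret restores the pre-trap enable state.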
perform analogous function mret sret", "url": "RV32ISPEC.pdf#segment79", "timestamp": "2023-10-08 02:32:23", "segment": "segment79", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "18.5 Reducing Context-Swap Overhead ", "content": "userlevel interrupthandling registers add considerable state userlevel context yet usually rarely active normal use particular uepc ucause utval valid execution trap handler ns eld added mstatus sstatus following format fs xs elds reduce contextswitch overhead values live execution uret place uepc ucause utval back initial state", "url": "RV32ISPEC.pdf#segment80", "timestamp": "2023-10-08 02:32:23", "segment": "segment80", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 19 ", "content": "rv3264g instruction set listings one goal riscv project used stable software development target purpose dene combination base isa rv32i rv64i plus selected standard extensions imafd generalpurpose isa use abbreviation g imafd combination instructionset extensions chapter presents opcode maps instructionset listings rv32g rv64g table 191 shows map major opcodes rvg major opcodes 3 lower bits set reserved instruction lengths greater 32 bits opcodes marked reserved avoided custom instruction set extensions might used future standard extensions major opcodes marked custom0 custom1 avoided future standard extensions recommended use custom instructionset extensions within base 32bit instruction format opcodes marked custom2rv128 custom3rv128 reserved future use rv128 otherwise avoided standard extensions also used custom instructionset extensions rv32 rv64 believe rv32g rv64g provide simple complete instruction sets broad range generalpurpose computing optional compressed instruction set described chapter 12 added forming rv32gc rv64gc improve performance code size energy eciency though additional hardware complexity move beyond imafdc instruction set extensions added instructions tend domainspecic provide benets restricted 
class of applications, e.g., multimedia or security. Unlike most commercial ISAs, the RISC-V ISA design clearly separates the base ISA and broadly applicable standard extensions from these more specialized additions. Chapter 21 has a more extensive discussion of ways to add extensions to the RISC-V ISA.", "url": "RV32ISPEC.pdf#segment81", "timestamp": "2023-10-08 02:32:23", "segment": "segment81", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2019.1.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2019.2%20(1).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2019.2%20(2).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2019.2%20(3).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2019.2%20(4).jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2019.3.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 20 ", "content": "RISC-V Assembly Programmer's Handbook. This chapter is a placeholder for an assembly programmer's manual. Table 20.1 lists the assembler mnemonics for the x and f registers and their role in the standard calling convention.", "url": "RV32ISPEC.pdf#segment82", "timestamp": "2023-10-08 02:32:23", "segment": "segment82", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2020.1.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2020.2.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2020.3.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 21 ", "content": "Extending RISC-V. In addition to supporting standard general-purpose software development, another goal of RISC-V is to provide a basis for more specialized instruction-set extensions and customized accelerators. The instruction encoding spaces
and optional variable-length instruction encoding are designed to make it easier to leverage the software development effort for the standard ISA toolchain when building customized processors. For example, the intent is to continue to provide full software support for implementations that only use the standard base, perhaps together with many non-standard instruction-set extensions. This chapter describes various ways in which the base RISC-V ISA can be extended, together with the scheme for managing instruction-set extensions developed by independent groups. This volume only deals with the user-level ISA, although the same approach and terminology is used for supervisor-level extensions described in the second volume.", "url": "RV32ISPEC.pdf#segment83", "timestamp": "2023-10-08 02:32:23", "segment": "segment83", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "21.1 Extension Terminology ", "content": "This section defines the standard terminology used to describe RISC-V extensions. Standard versus non-standard extension. Any RISC-V processor implementation must support a base integer ISA (RV32I or RV64I). In addition, an implementation may support one or more extensions. We divide extensions into two broad categories: standard versus non-standard. A standard extension is one that is generally useful and that is designed to not conflict with any other standard extension. Currently, MAFDQLCBTPV, described in other chapters of this manual, are either complete or planned standard extensions. A non-standard extension may be highly specialized, or may conflict with other standard or non-standard extensions. We anticipate a wide variety of non-standard extensions will be developed over time, with some eventually being promoted to standard extensions. Instruction encoding spaces and prefixes. An instruction encoding space is some number of instruction bits within which a base ISA or ISA extension is encoded. RISC-V supports varying instruction lengths, but even within a single instruction length, there are various sizes of encoding space available. For example, each base ISA is defined within a 30-bit encoding space (bits 31-2 of the 32-bit instruction), while the atomic extension fits within a 25-bit encoding space (bits 31-7). We use the term prefix to refer to the bits to the right of an instruction encoding space (since RISC-V is little-endian, the bits to the right are stored at earlier memory addresses, hence they form a prefix in instruction-fetch order). The prefix for the standard base ISA encoding is the two-bit 11
eld held bits 10 32bit word prex standard atomic extension sevenbit 0101111 eld held bits 60 32bit word representing amo major opcode quirk encoding format 3bit funct3 eld used encode minor opcode contiguous major opcode bits 32bit instruction format considered part prex 22bit instruction spaces although instruction encoding space could size adopting smaller set common sizes simplies packing independently developed extensions single global encoding table 211 gives suggested sizes riscv greeneld versus browneld extensions use term greeneld extension describe extension begins populating new in struction encoding space hence cause encoding conicts prex level use term browneld extension describe extension ts around existing encodings previously dened instruction space browneld extension necessarily tied particular greeneld parent encoding may multiple browneld extensions greeneld parent encoding example base isas greeneld encodings 30bit instruction space fdq oatingpoint extensions browneld extensions adding parent base isa 30bit encoding space note consider standard extension greeneld encoding denes new previously empty 25bit encoding space leftmost bits full 32bit base instruction encoding even though standard prex locates within 30bit encoding space base isa changing single 7bit prex could move extension dierent 30bit encoding space worrying conicts prex level within encoding space s table 212 shows bases standard extensions placed simple twodimensional taxonomy one axis whether extension greeneld browneld axis whether extension adds architectural state greeneld extensions size instruction encoding space given parentheses browneld extensions name extension greeneld browneld builds upon given parentheses additional userlevel architectural state usually implies changes supervisorlevel system possibly standard calling convention note rv64i considered extension rv32i dierent complete base encoding standardcompatible global encodings complete global encoding isa actual 
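The prefix arithmetic used to name encoding spaces in this section can be checked mechanically: a 2-bit prefix leaves a 30-bit space, and a 7-bit major-opcode prefix leaves a 25-bit space. A minimal sketch, with helper names of our own choosing:

```python
# Sketch: extracting the low-order "prefix" bits of a 32-bit RISC-V
# instruction word, as used to name encoding spaces in this chapter.

def length_prefix(word):
    """Bits 1-0; the value 0b11 selects 32-bit (and longer) encodings."""
    return word & 0b11

def major_opcode(word):
    """Bits 6-0: the 7-bit prefix naming a 25-bit encoding space."""
    return word & 0b111_1111

AMO_OPCODE = 0b010_1111   # 7-bit prefix of the standard atomic extension

# Encoding-space sizes follow directly from the prefix lengths:
assert 32 - 2 == 30       # base ISA space behind the two-bit 11 prefix
assert 32 - 7 == 25       # space behind a major opcode such as AMO
```

For instance, the 32-bit NOP encoding 0x00000013 has `length_prefix` 0b11 and major opcode 0b0010011 (OP-IMM).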
riscv implementation must allocate unique nonconicting prex every included instruction encoding space bases every standard extension standard prex allocated ensure coexist global encoding standardcompatible global encoding one base every included standard extension standard prexes standardcompatible global encoding include nonstandard extensions conict included standard extensions standardcompatible global encoding also use standard prexes nonstandard extensions associated standard extensions included global encoding words standard extension must use standard prex included standardcompatible global encoding otherwise prex free reallocated constraints allow common toolchain target standard subset riscv standardcompatible global encoding guaranteed nonstandard encoding space support development proprietary custom extensions portions encoding space guaranteed never used standard extensions", "url": "RV32ISPEC.pdf#segment84", "timestamp": "2023-10-08 02:32:24", "segment": "segment84", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2021.1.jpg?raw=true","https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2021.2.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "21.2 RISC-V Extension Design Philosophy ", "content": "intend support large number independently developed extensions encouraging ex tension developers operate within instruction encoding spaces providing tools pack standardcompatible global encoding allocating unique prexes extensions naturally implemented browneld augmentations existing extensions share whatever prex allocated parent greeneld extension standard extension prexes avoid spurious incompatibilities encoding core functionality allowing custom packing esoteric extensions capability repacking riscv extensions dierent standardcompatible global encodings used number ways one usecase developing highly specialized custom accelerators designed run kernels 
important application domains might want drop base integer isa add extensions required task hand base isa designed place minimal requirements hardware implementation encoded use small fraction 32bit instruction encoding space another usecase build research prototype new type instructionset extension researchers might want expend eort implement variablelength instructionfetch unit would like prototype extension using simple 32bit xedwidth instruction encoding however new extension might large coexist standard extensions 32bit space research experiments need standard extensions standard compatible global encoding might drop unused standard extensions reuse prexes place proposed extension nonstandard location simplify engineering research prototype standard tools still able target base standard extensions present reduce development time instructionset extension evaluated rened could made available packing larger variablelength encoding space avoid conicts standard extensions following sections describe increasingly sophisticated strategies developing implementations new instructionset extensions mostly intended use highly customized edu cational experimental architectures rather main line riscv isa development", "url": "RV32ISPEC.pdf#segment85", "timestamp": "2023-10-08 02:32:24", "segment": "segment85", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "21.3 Extensions within \ufb01xed-width 32-bit instruction format ", "content": "section discuss adding extensions implementations support base xed width 32bit instruction format anticipate simplest xedwidth 32bit encoding popular many restricted accel erators research prototypes available 30bit instruction encoding spaces standard encoding three available 30bit instruction encoding spaces 2bit prexes 00 01 10 used enable optional compressed instruction extension however compressed instructionset extension required three 30bit encoding spaces become available quadruples available encoding space within 32bit 
format. Available 25-bit instruction encoding spaces: A 25-bit instruction encoding space corresponds to a major opcode in the base and standard extension encodings. Four major opcodes are expressly reserved for custom extensions (Table 19.1), each of which represents a 25-bit encoding space. Two of these are reserved for eventual use in the RV128 base encoding (OP-IMM-64 and OP-64), but can be used for standard or non-standard extensions in RV32 and RV64. The two major opcodes reserved for RV64 (OP-IMM-32 and OP-32) can also be used for standard and non-standard extensions in RV32. If an implementation does not require floating-point, then the seven major opcodes reserved for the standard floating-point extensions (LOAD-FP, STORE-FP, MADD, MSUB, NMSUB, NMADD, OP-FP) can be reused for non-standard extensions. Similarly, the AMO major opcode can be reused if the standard atomic extensions are not required. If an implementation does not require instructions longer than 32 bits, then an additional four major opcodes are available, marked in gray in Table 19.1. The base RV32I encoding uses 11 major opcodes plus 3 reserved opcodes, leaving up to 18 available for extensions. The base RV64I encoding uses 13 major opcodes plus 3 reserved opcodes, leaving up to 16 available for extensions. Available 22-bit instruction encoding spaces: A 22-bit encoding space corresponds to a funct3 minor opcode space in the base and standard extension encodings. In several major opcodes, the funct3 minor opcode field is not completely occupied, leaving several 22-bit encoding spaces available. Usually the major opcode selects the format used to encode operands in the remaining bits of the instruction, and ideally an extension should follow the operand format of the major opcode to simplify hardware decoding. Smaller spaces are available in certain major opcodes where the minor opcodes are not entirely filled.", "url": "RV32ISPEC.pdf#segment86", "timestamp": "2023-10-08 02:32:24", "segment": "segment86", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "21.4 Adding aligned 64-bit instruction extensions ", "content": "The simplest approach to provide space for extensions that are too large for the base 32-bit fixed-width instruction format is to add naturally aligned 64-bit instructions. The implementation must still support the 32-bit base instruction format, but can require that 64-bit instructions be aligned on 64-bit boundaries to simplify instruction fetch, with a 32-bit NOP instruction used
as alignment padding where necessary. To simplify the use of standard tools, the 64-bit instructions should be encoded as described in Figure 1.1. However, an implementation might choose a non-standard instruction-length encoding for 64-bit instructions, while retaining the standard encoding for 32-bit instructions. For example, if compressed instructions are not required, then a 64-bit instruction could be encoded using one or more zero bits in the first two bits of the instruction. We anticipate processor generators that produce instruction-fetch units capable of automatically handling any combination of supported variable-length instruction encodings.", "url": "RV32ISPEC.pdf#segment87", "timestamp": "2023-10-08 02:32:24", "segment": "segment87", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "21.5 Supporting VLIW encodings ", "content": "Although RISC-V was not designed as a base for a pure VLIW machine, VLIW encodings can be added as extensions using several alternative approaches. In all cases, the base 32-bit encoding has to be supported to allow use of any standard software tools. Fixed-size instruction group: The simplest approach is to define a single large naturally aligned instruction format (e.g., 128 bits) within which VLIW operations are encoded. In a conventional VLIW, this approach would tend to waste instruction memory to hold NOPs, but a RISC-V-compatible implementation would also support the base 32-bit instructions, confining the VLIW code size expansion to VLIW-accelerated functions. Encoded-length groups: Another approach is to use the standard length encoding from Figure 1.1 to encode parallel instruction groups, allowing NOPs to be compressed out of the VLIW instruction. For example, a 64-bit instruction could hold two 28-bit operations, while a 96-bit instruction could hold three 28-bit operations. Alternatively, a 48-bit instruction could hold one 42-bit operation, while a 96-bit instruction could hold two 42-bit operations. This approach has the advantage of retaining the base ISA encoding for instructions holding a single operation, but the disadvantage of requiring a new 28-bit or 42-bit encoding for the operations within the VLIW instructions, and misaligned instruction fetch for larger groups. One simplification is to not allow VLIW instructions to straddle certain microarchitecturally significant boundaries (e.g., cache lines or virtual memory pages). Fixed-size
instruction bundles: Another approach, similar to Itanium, is to use a larger naturally aligned fixed instruction bundle size (e.g., 128 bits) across which parallel operation groups are encoded. This simplifies instruction fetch, but shifts the complexity to the group execution engine. To remain RISC-V compatible, the base 32-bit instruction would still have to be supported. End-of-group bits in prefix: None of the above approaches retains the RISC-V encoding for the individual operations within a VLIW instruction. Yet another approach is to repurpose the two prefix bits in the fixed-width 32-bit encoding: one prefix bit can be used to signal end-of-group if set, while the second bit could indicate execution under a predicate if clear. Standard RISC-V 32-bit instructions generated by tools unaware of the VLIW extension would have both prefix bits set (11), and thus would have the correct semantics of ending a group and not being predicated. The main disadvantage of this approach is that the base ISA lacks the complex predication support usually required in an aggressive VLIW system, and it is difficult to add space to specify more predicate registers in the standard 30-bit encoding space.", "url": "RV32ISPEC.pdf#segment88", "timestamp": "2023-10-08 02:32:25", "segment": "segment88", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 22 ", "content": "ISA Subset Naming Conventions. This chapter describes the RISC-V ISA subset naming scheme, which is used to concisely describe the set of instructions present in a hardware implementation, or the set of instructions used by an application binary interface (ABI). The RISC-V ISA is designed to support a wide variety of implementations with various experimental instruction-set extensions, and we have found that an organized naming scheme simplifies software tools and documentation.", "url": "RV32ISPEC.pdf#segment89", "timestamp": "2023-10-08 02:32:25", "segment": "segment89", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "22.1 Case Sensitivity ", "content": "ISA naming strings are case insensitive.", "url": "RV32ISPEC.pdf#segment90", "timestamp": "2023-10-08 02:32:25", "segment": "segment90", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "22.2 Base Integer ISA ", "content": "RISC-V ISA strings begin with either RV32I, RV32E, RV64I, or RV128I, indicating the supported address space size in bits and the supported base
integer ISA.", "url": "RV32ISPEC.pdf#segment91", "timestamp": "2023-10-08 02:32:25", "segment": "segment91", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "22.3 Instruction Extensions Names ", "content": "Standard ISA extensions are given a name consisting of a single letter. For example, the first four standard extensions to the integer bases are: M for integer multiplication and division, A for atomic memory instructions, F for single-precision floating-point instructions, and D for double-precision floating-point instructions. Any RISC-V instruction-set variant can be succinctly described by concatenating the base integer prefix with the names of the included extensions, e.g., RV64IMAFD. We have also defined an abbreviation G to represent the IMAFD base and extensions, as this is intended to represent our standard general-purpose ISA. Standard extensions to the RISC-V ISA are given other reserved letters, e.g., Q for quad-precision floating-point, or C for the 16-bit compressed instruction format.", "url": "RV32ISPEC.pdf#segment92", "timestamp": "2023-10-08 02:32:25", "segment": "segment92", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "22.4 Version Numbers ", "content": "Recognizing that instruction sets may expand or alter over time, we encode subset version numbers following the subset name. Version numbers are divided into major and minor version numbers, separated by a p. If the minor version is 0, then p0 can be omitted from the version string. Changes in major version numbers imply a loss of backwards compatibility, whereas changes in only the minor version number must be backwards-compatible. For example, the original 64-bit standard ISA defined in release 1.0 of this manual can be written in full as RV64I1p0M1p0A1p0F1p0D1p0, more concisely as RV64I1M1A1F1D1, or even more concisely as RV64G1. The G ISA subset can be written as RV64I2p0M2p0A2p0F2p0D2p0, or more concisely as RV64G2. We introduced the version numbering scheme with the second release, which we also intend to become a permanent standard, and hence we define the default version of a standard subset to be that present at the time of this document, e.g., RV32G is equivalent to RV32I2M2A2F2D2.", "url": "RV32ISPEC.pdf#segment93", "timestamp": "2023-10-08 02:32:25", "segment": "segment93", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "22.5 Non-Standard Extension Names ", "content": "Non-standard subsets are named using a single X
followed by a name beginning with a letter and an optional version number. For example, XHwacha names the Hwacha vector-fetch ISA extension, and XHwacha2 or XHwacha2p0 names version 2.0 of it. Non-standard extensions must be separated from other multi-letter extensions by a single underscore. For example, an ISA with the non-standard extensions Argle and Bargle may be named RV64GXargle_Xbargle.", "url": "RV32ISPEC.pdf#segment94", "timestamp": "2023-10-08 02:32:25", "segment": "segment94", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "22.6 Supervisor-level Instruction Subsets ", "content": "Standard supervisor instruction subsets are defined in Volume II. They are named using S as a prefix, followed by a supervisor subset name beginning with a letter and an optional version number. Supervisor extensions must be separated from other multi-letter extensions by a single underscore.", "url": "RV32ISPEC.pdf#segment95", "timestamp": "2023-10-08 02:32:25", "segment": "segment95", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "22.7 Supervisor-level Extensions ", "content": "Non-standard extensions to the supervisor-level ISA are defined using the SX prefix.", "url": "RV32ISPEC.pdf#segment96", "timestamp": "2023-10-08 02:32:25", "segment": "segment96", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "22.8 Subset Naming Convention ", "content": "Table 22.1 summarizes the standardized subset names.", "url": "RV32ISPEC.pdf#segment97", "timestamp": "2023-10-08 02:32:25", "segment": "segment97", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2022.1.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 23 ", "content": "History and Acknowledgments.", "url": "RV32ISPEC.pdf#segment98", "timestamp": "2023-10-08 02:32:25", "segment": "segment98", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "23.1 History from Revision 1.0 of ISA manual ", "content": "The RISC-V ISA and instruction-set manual build upon several earlier projects. Several aspects of the supervisor-level machine and the overall format of the manual date back to the T0 (Torrent-0) vector microprocessor project at UC Berkeley and ICSI, begun in 1992.
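As an aside, the subset-naming rules of Chapter 22 (a base string, single-letter standard extensions, optional major/minor versions separated by p, and underscore-separated multi-letter X extensions) can be sketched as a small parser. This is purely illustrative: the function name and regular expressions below are my own choices, not anything defined by the specification.

```python
import re

# Illustrative sketch of the Chapter 22 naming rules; the function name and
# regular expressions are assumptions for this example, not part of the spec.
def parse_isa_string(isa):
    """Return (base, extensions): base like 'rv64i2p0', and a list of
    (name, major, minor) tuples, with None where a version is omitted."""
    s = isa.lower()  # Section 22.1: ISA naming strings are case-insensitive.
    m = re.match(r'rv(?:32|64|128)[ieg](?:\d+(?:p\d+)?)?', s)
    if m is None:
        raise ValueError('not a RISC-V ISA string: %r' % isa)
    base, rest = m.group(0), s[m.end():]
    # Single-letter standard extensions (22.3) with optional NpM versions
    # (22.4), and multi-letter X extensions separated by underscores (22.5).
    exts = [(name,
             int(major) if major else None,
             int(minor) if minor else None)
            for name, major, _, minor in
            re.findall(r'(x[a-z]+|[a-z])(\d+)?(p(\d+))?_?', rest)]
    return base, exts

# Example: the Section 22.5 string with two non-standard extensions.
print(parse_isa_string('rv64gXargle_Xbargle'))
# ('rv64g', [('xargle', None, None), ('xbargle', None, None)])
```

Note that p0 minor versions are parsed explicitly, so RV64I2p0M2p0 and RV64I2M2 yield the same extension names even though their version tuples differ, matching the Section 22.4 abbreviation rule.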
The T0 vector processor was based on the MIPS-II ISA, with Krste Asanovic as main architect and RTL designer, and Brian Kingsbury and Bertrand Irrisou as principal VLSI implementors. David Johnson at ICSI was a major contributor to the T0 ISA design, particularly the supervisor mode, and to the manual text. John Hauser also provided considerable feedback on the T0 ISA design. The SCALE (Software-Controlled Architecture for Low Energy) project at MIT, begun in 2000, built upon the T0 project infrastructure, refined the supervisor-level interface, and moved away from the MIPS scalar ISA by dropping the branch delay slot. Ronny Krashinsky and Christopher Batten were the principal architects of the Scale Vector-Thread processor at MIT, while Mark Hampton ported the GCC-based compiler infrastructure and tools for Scale. A lightly edited version of the T0 MIPS scalar processor specification (MIPS-6371) was used in teaching a new version of the MIT 6.371 Introduction to VLSI Systems class in the Fall 2002 semester, with Chris Terman and Krste Asanovic as lecturers. Chris Terman contributed most of the lab material for the class (there was no TA). The 6.371 class evolved into the trial 6.884 Complex Digital Design class at MIT, taught by Arvind and Krste Asanovic in Spring 2005, which then became the regular Spring class 6.375. A reduced version of the Scale MIPS-based scalar ISA, named SMIPS, was used in 6.884/6.375. Christopher Batten was the TA for the early offerings of these classes and developed a considerable amount of documentation and lab material based around the SMIPS ISA. This SMIPS lab material was adapted and enhanced by TA Yunsup Lee for the UC Berkeley Fall 2009 CS250 VLSI Systems Design class, taught by John Wawrzynek, Krste Asanovic, and John Lazzaro. The Maven (Malleable Array of Vector-thread ENgines) project was a second-generation vector-thread architecture. Its design was led by Christopher Batten when he was an Exchange Scholar at UC Berkeley, starting in Summer 2007. Hidetaka Aoki, a visiting industrial fellow from Hitachi, gave considerable feedback on the early Maven ISA and microarchitecture design. The Maven infrastructure was based on the Scale infrastructure, but the Maven ISA moved away from the MIPS ISA variant defined in Scale, with a unified floating-point and integer register file. Maven was designed to support experimentation with alternative data-parallel accelerators. Yunsup Lee was the main implementor of the various Maven vector units, while Rimas Avizienis was the main implementor of the various Maven scalar units. Yunsup
Lee and Christopher Batten ported GCC to work with the new Maven ISA. Christopher Celio provided an initial definition of a traditional vector instruction set (Flood) variant of Maven. Based on experience with these previous projects, the RISC-V ISA definition was begun in Summer 2010. An initial version of the RISC-V 32-bit instruction subset was used in the UC Berkeley Fall 2010 CS250 VLSI Systems Design class, with Yunsup Lee as TA. RISC-V is a clean break from these earlier MIPS-inspired designs. John Hauser contributed to the floating-point ISA definition, including the sign-injection instructions and a register encoding scheme that permits internal recoding of floating-point values.", "url": "RV32ISPEC.pdf#segment99", "timestamp": "2023-10-08 02:32:25", "segment": "segment99", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "23.2 History from Revision 2.0 of ISA manual ", "content": "Multiple implementations of RISC-V processors have been completed, including several silicon fabrications, as shown in Figure 23.1. The first RISC-V processors to be fabricated were written in Verilog and manufactured in a pre-production 28 nm FDSOI technology from ST as the Raven-1 testchip in 2011. Two cores were developed by Yunsup Lee and Andrew Waterman, advised by Krste Asanovic, and fabricated together: 1) an RV64 scalar core with error-detecting flip-flops, and 2) an RV64 core with an attached 64-bit floating-point vector unit. The first microarchitecture was informally known as TrainWreck, due to the short time available to complete the design with immature design libraries. Subsequently, a clean microarchitecture for an in-order decoupled RV64 core was developed by Andrew Waterman, Rimas Avizienis, and Yunsup Lee, advised by Krste Asanovic, and, continuing the railway theme, was codenamed Rocket after George Stephenson's successful steam locomotive design. Rocket was written in Chisel, a new hardware design language developed at UC Berkeley. The IEEE floating-point units used in Rocket were developed by John Hauser, Andrew Waterman, and Brian Richards. Rocket has since been refined and developed further, and has been fabricated two more times in 28 nm FDSOI (Raven-2, Raven-3), and five times in IBM 45 nm SOI technology (EOS14, EOS16, EOS18, EOS20, EOS22) for a photonics project. Work is ongoing to make the Rocket design available as a parameterized RISC-V processor generator. The EOS14-EOS22 chips include early versions of Hwacha, a 64-bit IEEE floating-point
vector unit, developed by Yunsup Lee, Andrew Waterman, Huy Vo, Albert Ou, Quan Nguyen, and Stephen Twigg, advised by Krste Asanovic. The EOS16-EOS22 chips include dual cores with a cache-coherence protocol developed by Henry Cook and Andrew Waterman, advised by Krste Asanovic. EOS14 silicon has successfully run at 1.25 GHz. EOS16 silicon suffered from a bug in the IBM pad libraries. EOS18 and EOS20 have successfully run at 1.35 GHz. Contributors to the Raven testchips include Yunsup Lee, Andrew Waterman, Rimas Avizienis, Brian Zimmer, Jaehwa Kwak, Ruzica Jevtic, Milovan Blagojevic, Alberto Puggelli, Steven Bailey, Ben Keller, Pi-Feng Chiu, Brian Richards, Borivoje Nikolic, and Krste Asanovic. Contributors to the EOS testchips include Yunsup Lee, Rimas Avizienis, Andrew Waterman, Henry Cook, Huy Vo, Daiwei Li, Chen Sun, Albert Ou, Quan Nguyen, Stephen Twigg, Vladimir Stojanovic, and Krste Asanovic. Andrew Waterman and Yunsup Lee developed the C++ ISA simulator Spike, used as a golden model in development and named after the golden spike used to celebrate completion of the US transcontinental railway. Spike has been made available as a BSD open-source project. Andrew Waterman completed a Master's thesis with a preliminary design of the RISC-V compressed instruction set [33]. Various FPGA implementations of RISC-V have been completed, primarily as part of integrated demos for the Par Lab project research retreats. The largest FPGA design has 3 cache-coherent RV64IMA processors running a research operating system. Contributors to the FPGA implementations include Andrew Waterman, Yunsup Lee, Rimas Avizienis, and Krste Asanovic. RISC-V processors have been used in several classes at UC Berkeley. Rocket was used in the Fall 2011 offering of CS250 as a basis for class projects, with Brian Zimmer as TA. For the undergraduate CS152 class in Spring 2012, Christopher Celio used Chisel to write a suite of educational RV32 processors, named Sodor after the island on which Thomas the Tank Engine and friends live. The suite includes a microcoded core, an unpipelined core, and 2-, 3-, and 5-stage pipelined cores, and is publicly available under a BSD license. The suite was subsequently updated and used again in CS152 in Spring 2013, with Yunsup Lee as TA, and in Spring 2014, with Eric Love as TA. Christopher Celio also developed an out-of-order RV64 design known as BOOM (Berkeley Out-of-Order Machine), with accompanying pipeline visualizations, that was used in
the CS152 classes. The CS152 classes also used cache-coherent versions of the Rocket core developed by Andrew Waterman and Henry Cook. Over the summer of 2013, the RoCC (Rocket Custom Coprocessor) interface was defined to simplify adding custom accelerators to the Rocket core. Rocket and the RoCC interface were used extensively in the Fall 2013 CS250 VLSI class taught by Jonathan Bachrach, with several student accelerator projects built to the RoCC interface. The Hwacha vector unit has been rewritten as a RoCC coprocessor. Two Berkeley undergraduates, Quan Nguyen and Albert Ou, successfully ported Linux to run on RISC-V in Spring 2013. Colin Schmidt successfully completed an LLVM backend for RISC-V 2.0 in January 2014. Darius Rad at Bluespec contributed soft-float ABI support to the GCC port in March 2014. John Hauser contributed the definition of the floating-point classification instructions. We are aware of several other RISC-V core implementations, including one in Verilog by Tommy Thorn and one in Bluespec by Rishiyur Nikhil. Acknowledgments: Thanks to Christopher F. Batten, Preston Briggs, Christopher Celio, David Chisnall, Stefan Freudenberger, John Hauser, Ben Keller, Rishiyur Nikhil, Michael Taylor, Tommy Thorn, and Robert Watson for comments on the draft ISA version 2.0 specification.", "url": "RV32ISPEC.pdf#segment100", "timestamp": "2023-10-08 02:32:26", "segment": "segment100", "image_urls": ["https://github.com/merledu/rv-spidercrab/blob/Data_Analyzer/images/riscv-spec-v2.2/table%2023.1.jpg?raw=true"], "Book": "riscv-spec-v2.2" }, { "section": "23.3 History for Revision 2.1 ", "content": "Uptake of the RISC-V ISA has been rapid since the introduction of the frozen version 2.0 in May 2014, with too much activity to record in a short history section such as this. Perhaps the most important single event was the formation of the non-profit RISC-V Foundation in August 2015. The Foundation will now take over stewardship of the official RISC-V ISA standard, and the official website riscv.org is the best place to obtain news and updates on the RISC-V standard. Acknowledgments: Thanks to Scott Beamer, Allen J. Baum, Christopher Celio, David Chisnall, Paul Clayton, Palmer Dabbelt, Jan Gray, Michael Hamburg, and John Hauser for comments on the version 2.0 specification.", "url": "RV32ISPEC.pdf#segment101", "timestamp": "2023-10-08 02:32:26", "segment": "segment101", "image_urls": [], "Book":
"riscv-spec-v2.2" }, { "section": "23.4 History for Revision 2.2 ", "content": "acknowledgments thanks jacob bachmeyer alex bradbury david horner stefan rear joseph myers comments version 21 specication", "url": "RV32ISPEC.pdf#segment102", "timestamp": "2023-10-08 02:32:26", "segment": "segment102", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "23.5 Funding ", "content": "development riscv architecture implementations partially funded following sponsors par lab research supported microsoft award 024263 intel award 024894 funding matching funding uc discovery award dig0710227 additional support came par lab aliates nokia nvidia oracle samsung project isis doe award desc0003624 aspire lab darpa perfect program award hr00111220016 darpa poem program award hr001111c0100 center future architectures research cfar starnet center funded semiconductor research corporation additional support aspire industrial sponsor intel aspire aliates google hewlett packard enterprise huawei nokia nvidia oracle samsung content paper necessarily reect position policy us government ocial endorsement inferred", "url": "RV32ISPEC.pdf#segment103", "timestamp": "2023-10-08 02:32:26", "segment": "segment103", "image_urls": [], "Book": "riscv-spec-v2.2" }, { "section": "Chapter 1", "content": "Why RISC-V?", "url": "RV32ISPEC.pdf#segment0", "timestamp": "2023-10-13 16:15:23", "segment": "segment0", "image_urls": [], "Book": "rvbook" }, { "section": "1.1 Introduction ", "content": "goal riscv risc ve become universal instruction set architecture isa suit sizes processors tiniest embedded controller fastest highperformance computer work well wide variety popular software stacks programming languages itshould accommodate implementation technologies field programmable gate arrays fpgas application specicintegrated circuits asics fullcustom chips andevenfuturedevicetechnologies itshouldbeefcientforallmicroarchitecture styles microcoded orhardwired control in order decoupled 
or out-of-order pipelines; single or superscalar instruction issue; and so on. It should support extensive specialization, so it can act as a base for customized accelerators, whose importance rises as Moore's Law fades. It should be stable, in that the base ISA should not change and, more importantly, the ISA cannot be discontinued, as has happened in the past to proprietary ISAs such as the AMD 29000, the Digital Alpha, the Digital VAX, the Hewlett-Packard PA-RISC, the Intel i860, the Intel i960, the Motorola 88000, and the Zilog Z8000. RISC-V is unusual not only because it is a recent ISA (it was born in a decade when the alternatives date from the 1970s or 1980s), but also because it is an open ISA. Unlike practically all prior architectures, its future is free from the fate or the whims of any single corporation, which has doomed many ISAs in the past. It belongs instead to an open, non-profit foundation. The goal of the RISC-V Foundation is to maintain the stability of RISC-V, to evolve it slowly and carefully, and solely for technical reasons, and to try to make it as popular for hardware as Linux is for operating systems. As a sign of its vitality, Figure 1.1 lists the largest corporate members of the RISC-V Foundation.", "url": "RV32ISPEC.pdf#segment1", "timestamp": "2023-10-13 16:15:23", "segment": "segment1", "image_urls": [], "Book": "rvbook" }, { "section": "1.2 Modular vs.
Incremental ISAs ", "content": "intel betting future highend microprocessor still years away counter zilog intel developed stopgap processor called 8086 intended shortlived successors things turned highend processor ended late market come slow 8086 architecture lived onit evolved 32bit processor eventually 64bit one names kept changing 80186 80286 i386 i486 pentium buttheunderlyinginstructionsetremainedintact conventional approach computer architecture incremental isas new proces sors must implement new isa extensions also extensions past purpose maintain backwards binarycompatibility binary versions decadesold programs still run correctly latest processor requirement combined marketing appeal announcing new instructions new generation proces sors led isas grow substantially size age example figure 12 shows growth number instructions dominant isa today 80x86 dates back 1978 yet added three instructions per month long lifetime convention means every implementation x8632 name use 32bit address version x86 must implement mistakes past extensions even longer make sense example figure 13 describes ascii adjust addition aaa instruction x86 long outlived usefulness analogy suppose restaurant serves xedprice meal starts small dinner hamburger milkshake time adds fries ice cream sundae followed salad pie wine vegetarian pasta steak beer ad innitum becomes gigantic banquet may make little sense total diners nd whatever ever eaten past meal restaurant bad news diners must pay rising cost expanding banquet dinner beyond recent open riscv unusual since unlike almost prior isas modular core base isa called rv32i runs full software stack rv32i frozen never change gives compiler writers operating system developers assembly language programmers stable target modularity comes optional stan dard extensions hardware include depending needs application modularity enables small low energy implementations riscv critical embedded applications informing riscv compiler extensions included 
generate the best code for that hardware. The convention is to append the extension letters to the name to indicate which are included: for example, RV32IMFD adds the multiply (RV32M), single-precision floating-point (RV32F), and double-precision floating-point (RV32D) extensions to the mandatory base instructions (RV32I). Returning to our analogy, RISC-V offers a menu instead of a buffet. The chef need only cook what the customers want, not a feast for every meal, and the customers pay only for what they order. RISC-V has no need to add instructions simply for marketing sizzle. The RISC-V Foundation decides when to add a new option to the menu, only for solid technical reasons and after extended open discussion by committees of hardware and software experts. Even when new choices appear on the menu, they remain optional, and never become a new requirement for all future implementations, as they do for incremental ISAs.", "url": "RV32ISPEC.pdf#segment2", "timestamp": "2023-10-13 16:15:23", "segment": "segment2", "image_urls": [], "Book": "rvbook" }, { "section": "1.3 ISA Design 101 ", "content": "Before introducing the RISC-V ISA, it is helpful to understand the underlying principles and the tradeoffs a computer architect must make when designing an ISA. Below is a list of seven measures, along with icons that we put in the page margins to highlight instances where RISC-V addresses them in the following chapters (the back cover of the print book has a legend of the icons): cost (a US dollar coin icon), simplicity (a wheel), performance (a speedometer), isolation of architecture from implementation (detached halves of a circle), room for growth (an accordion), program size (opposing arrows compressing a line), and ease of programming/compiling/linking (a child's blocks spelling easy as ABC). To illustrate what we mean, this section shows choices in older ISAs that look unwise in retrospect, where RISC-V often made much better decisions. Cost: Processors are implemented in integrated circuits, commonly called chips or dies (they are called dies because they start life as one piece of a single round wafer, which is diced into many individual pieces). Figure 1.4 shows a wafer of RISC-V processors. Cost is very sensitive to the area of the die: cost = f(die area²). Obviously, a smaller die means more dies per wafer and a lower cost per die, since the cost of a processed wafer is the same. Less obviously, a smaller die also has a higher yield, the fraction of manufactured dies that work; the reason is that silicon manufacturing results in small flaws scattered across the wafer, so a smaller die has a lower fraction that is flawed. Hence, an architect wants to keep the ISA simple, to shrink the size of the processors that implement it. As we shall see in the following chapters, the RISC-V
ISA is much simpler than the ARM-32 ISA. As a concrete example of the impact of simplicity, let us compare the RISC-V Rocket processor to the ARM-32 Cortex-A5 processor, both in the same TSMC40GPLUS technology and using same-sized caches of 16 KiB: the RISC-V die is 0.27 mm² versus 0.53 mm² for the ARM-32, around half the area. The ARM-32 Cortex-A5 die therefore costs approximately 4x (2²) as much as the RISC-V Rocket die. Even a 10% smaller die reduces cost by a factor of 1.2 (1.1²). Simplicity: Given the cost sensitivity to complexity, architects want a simple ISA to reduce die area. Simplicity also reduces chip design time and verification time, which are much of the cost of development; these chip costs must be added to the cost of the chip as an overhead dependent on the number of chips shipped. Simplicity further reduces the cost of documentation and the difficulty of getting customers to understand how to use the ISA. A glaring example of ISA complexity is the ARM-32 instruction ldmiaeq sp, r4-r7, pc, which stands for Load Multiple Increment-Address Equal. It performs 5 data loads and writes 6 registers, but only executes if the EQ condition code is set. Moreover, as it writes a result to the PC, it is also performing a conditional branch. It is quite a handful. Ironically, simple instructions are much more likely to be used than complex ones. For example, x86-32 includes an enter instruction, intended to be the first instruction executed when entering a procedure to create a stack frame (see Chapter 3). Compilers instead use two simple x86-32 instructions: push ebp (push the frame pointer onto the stack) and mov ebp, esp (copy the stack pointer to the frame pointer). Performance: Except for tiny chips for embedded applications, architects are typically concerned about performance as well as cost. Performance can be factored into three terms, instructions per program, average clock cycles per instruction, and time per clock cycle: time/program = instructions/program x clock cycles/instruction x time/clock cycle. Even if a simple ISA might execute more instructions per program than a complex ISA, it can make up for it with a faster clock cycle or fewer average clock cycles per instruction (CPI). For example, on 100,000 iterations of the CoreMark benchmark (Gal-On and Levy 2012), comparing performance with the ARM-32 Cortex-A9: the ARM processor can execute fewer instructions than RISC-V in this case, but as we shall see, simple instructions are also the popular instructions of an ISA, so simplicity wins on the other metrics. Over the program, the RISC-V processor gains nearly 10% on each of the three factors, which results in a performance advantage of almost 30%. The simpler ISA also results in a smaller chip, so its cost-performance
is excellent. Isolation of Architecture from Implementation: The original distinction between architecture and implementation goes back to the 1960s: the architecture is what the machine-language programmer needs to know to write a correct program, but not what determines the performance of the program. The temptation for an architect is to include instructions in an ISA that help the performance or cost of one implementation at a particular time, but burden different or future implementations. The MIPS-32 ISA has a regrettable example: delayed branches. Conditional branches cause problems for pipelined execution, because the processor wants the next instruction to execute to be already in the pipeline before it can decide whether it wants the next sequential one (branch not taken) or the one at the branch target address (branch taken). In the first MIPS microprocessor, with a 5-stage pipeline, this indecision could have caused a one-clock-cycle stall in the pipeline. MIPS-32 solved the problem by redefining a branch to occur at the instruction after the next one. Thus, the following instruction is always executed, and the job of the programmer or compiler writer was to put something useful in this delay slot. Alas, this solution did not help later MIPS-32 processors, which have many more pipeline stages and hence many more instructions fetched before the branch outcome is computed, but it has made life harder for MIPS-32 programmers, compiler writers, and processor designers ever since, as incremental ISAs demand backwards compatibility (see Section 1.2). In addition, it makes MIPS-32 code much harder to understand (see Figure 2.10 on page 29). Architects can not only put in features that help one implementation at one point in time, but also put in features that hinder other implementations. One example is the ARM-32 ISA's Load Multiple instruction, mentioned on the previous page. Such instructions improve performance for single-instruction-issue pipelined designs, but hurt multiple-instruction-issue pipelines. The reason is straightforward: the instruction precludes scheduling the individual loads of a Load Multiple in parallel with other instructions, reducing instruction throughput for such processors. Room for Growth: With the ending of Moore's Law, the path forward for major improvements in cost-performance is to add custom instructions for specific domains, such as deep learning, augmented reality, combinatorial optimization, or graphics. This means it is important today for an ISA to reserve opcode space for future enhancements. In the 1970s and 1980s, when Moore's Law was in full force, there was little thought of saving opcode space for future accelerators. Architects instead
valued larger address and immediate fields, which reduce the number of instructions executed per program, the first factor of the performance equation on the prior page. An example of the impact of a paucity of opcode space is that the architects of ARM-32 later tried to reduce code size by adding 16-bit-length instructions to the formerly uniform 32-bit-length ISA, but there was simply no room left. Thus, their solution was to create a new ISA, at first with only 16-bit instructions (Thumb), and later a new ISA with both 16-bit and 32-bit instructions (Thumb-2), using a mode bit to switch between the ARM ISAs. To change modes, the programmer or compiler branches to a byte address with a 1 in the least-significant bit, which works because that bit must be 0 for both 16-bit and 32-bit instructions. (The ARM-32 instruction ldmiaeq mentioned above is even more complicated, since when it branches it can also change instruction-set modes between ARM-32 and Thumb/Thumb-2. And the average number of clock cycles per instruction can even be less than 1 for the A9 and BOOM (Celio et al. 2015); such so-called superscalar processors execute more than one instruction per clock cycle.) Program Size: A smaller program means a smaller area of the chip is needed for program memory, which is a significant cost in embedded devices. Indeed, this issue inspired the ARM architects to retroactively add the shorter instructions of the Thumb and Thumb-2 ISAs. Smaller programs also lead to fewer misses in instruction caches, which saves power, since off-chip DRAM accesses use much more energy than on-chip SRAM accesses, and improves performance as well. Hence, small code size is one of the goals of ISA architects. The x86-32 ISA has instructions as short as 1 byte and as long as 15 bytes. One would expect that the byte-variable-length instructions of the x86 should certainly lead to smaller programs than ISAs limited to 32-bit-length instructions, like ARM-32 and RISC-V; logically, 8-bit-variable-length instructions should also be smaller than ISAs that offer only 16-bit and 32-bit instructions, like Thumb-2 and RISC-V using the RV32C extension (see Chapter 7). Figure 1.5 shows that ARM-32 and RISC-V code, with all instructions 32 bits long, is indeed 6% and 9% larger than x86-32 code. Surprisingly, however, x86-32 code is 26% larger than the compressed versions, RV32C and Thumb-2, which offer both 16-bit and 32-bit instructions. A new ISA using 8-bit variable-length instructions would likely lead to even smaller code than RV32C or Thumb-2, but the architects of the first x86 in the 1970s had different concerns. Moreover, given the requirement for backwards binary-compatibility of an incremental ISA (Section 1.2), the hundreds of newer x86-32
instructions are longer than one might expect, since they bear the burden of one- or two-byte prefixes to squeeze into the limited free opcode space of the original x86. Ease of Programming/Compiling/Linking: Since data in a register is much faster to access than data in memory, it is critical for compilers to do a good job of register allocation, a task that is much easier when there are many registers rather than few. In this light, ARM-32 has 16 registers and x86-32 has only 8, while most modern ISAs, including RISC-V, are relatively generous with 32 integer registers. More registers surely make life easier for compilers and assembly language programmers. Another issue for compilers and assembly language programmers is figuring out the speed of a code sequence. As we shall see, RISC-V instructions typically take one clock cycle per instruction (ignoring cache misses), while ARM-32 and x86-32 instructions can take many clock cycles, even when everything fits in the cache. What is more, unlike ARM-32 and RISC-V, x86-32 arithmetic instructions can have operands in memory instead of requiring all operands to be in registers. Complex instructions with operands in memory make it difficult for processor designers to deliver performance predictability. It is also useful for an ISA to support position-independent code (PIC), as it supports dynamic linking (see Section 3.5), since shared-library code can reside at different addresses in different programs. PC-relative branches and data addressing are a boon to PIC; nearly all ISAs provide PC-relative branches, but x86-32 and MIPS-32 omit PC-relative data addressing. Elaboration (ARM-32, MIPS-32, and x86-32): Elaborations are optional sections for readers who want to delve more deeply into a topic; those not interested need not read them to understand the rest of the book. This example concerns the ISA names, as the official ones differ from ours. The 32-bit-address ARM ISA has many versions, the first from 1986, but the latest is called ARMv7 (2005); ARM-32 generally refers to the ARMv7 ISA. MIPS also has many 32-bit versions; we are referring to the original, which was simply called MIPS, and since MIPS32 is a different, later ISA, we call the original MIPS-32. Intel's first 16-bit-address architecture was the 8086 (1978); the 80386 expanded the ISA to 32-bit addresses in 1985, and our x86-32 notation generally refers to IA-32, the 32-bit-address version of the x86 ISA. Given the myriad variants of these ISAs, we find our non-standard terminology the least confusing.", "url": "RV32ISPEC.pdf#segment3", "timestamp": "2023-10-13 16:15:24", "segment": "segment3", "image_urls": [], "Book": "rvbook" }, { "section": "1.4 An Overview of this Book ",
"content": "book assumes seen instruction sets riscv look related introductory architecture book based riscv patterson hennessy 2017 chapter 2 introduces rv32i frozen base integer instructions heart riscv chapter 3 explains remaining riscv assembly language beyond intro duced chapter 2 including calling conventions clever tricks linking as sembly language includes proper riscv instructions plus useful instructions outside riscv pseudoinstructions clever variations real in structions make easier write assembly language programs without complicate isa next three chapters explain standard riscv extensions added rv32i collectively call rv32g g general chapter 4 multiply divide rv32m chapter 5 floating point rv32f rv32d chapter 6 atomic rv32a riscv reference card pages 3 4 handy summary riscv instruc tions book rv32g rv64g rv3264v green card chapter 7 describes optional compressed extension rv32c excellent example elegance riscv restricting 16bit instructions short versions existing 32bit rv32g instructions almost free assembler pick instruction size allowing assembly language programmer compiler oblivious rv32c hardware decoder translate 16bit rv32c instructions 32bit rv32g instructions needs 400 gates percent even simplest implementation riscv chapter 8 introduces rv32v vector extension vector instructions another ex ample isa elegance compared numerous bruteforce single instruction multipledata simd instructions arm32 mips32 x8632 indeed hundreds instructions added x8632 figure 12 simd hundreds coming rv32v even simpler vector isas associates data type length vec tor registers instead embedding opcodes rv32v may compelling reason switching conventional simdbased isa riscv chapter 9 shows 64bit address version riscv rv64g chapter explains riscv architects needed widen registers add word doubleword long versions rv32g instructions extend address 32 64 bits chapter 10 explains system instructions showing riscv handles paging machine user supervisor privilege modes last 
chapter gives quick description remaining extensions currently consideration riscv foundation next comes largest section book appendix instruction set summary alphabetical order denes full riscv isa extensions mentioned pseudoinstructions 50 pages testimony simplicity riscv end book index", "url": "RV32ISPEC.pdf#segment4", "timestamp": "2023-10-13 16:15:24", "segment": "segment4", "image_urls": [], "Book": "rvbook" }, { "section": "1.5 Concluding Remarks ", "content": "easy see formallogical methods exist certain instruction sets abstract adequate control cause execution sequence operations really decisive considerations present point view selecting in struction set practical nature simplicity equipment demanded instruction set clarity application actually important problems together speed handling problems von neumann et al 1947 1947 riscv recent cleanslate minimalist open isa informed mistakes past isas goal riscv architects effective computing devices smallest fastest following von neumann 70yearold advice isa emphasizes simplicity keep costs low plenty registers transparent instruction speed help compilers assembly language programmers map actually important problems appropriate quick code one indication complexity size documentation figure 16 shows size instruction set manuals riscv arm32 x8632 measured pages words read manuals fulltime job8 hours day 5 days weekit would take half month make single pass arm32 manual full month x8632 level intricacy perhaps single person fully understands arm32 x8632 using commonsense metric riscv 1 12 complexity arm32 1 10 1", "url": "RV32ISPEC.pdf#segment5", "timestamp": "2023-10-13 16:15:24", "segment": "segment5", "image_urls": [], "Book": "rvbook" }, { "section": "1.6 To Learn More ", "content": "arm ltd arm architecture reference manual armv7a armv7r edition 2014 url http infocenterarmcomhelptopiccomarmdocddi0406c a baumann hardware new software proceedings 16th workshop hot topics operating systems pages 132137 acm 2017 
c celio d patterson k asanovic berkeley outoforder machine boom industrycompetitive synthesizable parameterized riscv processor tech rep ucbeecs2015167 eecs department university california berkeley 2015 s galon m levy exploring coremark benchmark maximizing simplicity efcacy embedded microprocessor benchmark consortium 2012 intel corporation intel 64 ia32 architectures software developer manual volume 2 instruction set reference september 2016 s p morse intel 8086 chip future microprocessor design computer 50 4 89 2017 d a patterson j l hennessy computer organization design riscv edition hardware software interface morgan kaufmann 2017 s rodgers r uhlig x86 approaching 40 still going strong 2017 j l von neumann a w burks h h goldstine preliminary discussion logical design electronic computing instrument report us army ordnance depart ment 1947 a waterman design riscv instruction set architecture phd thesis eecs depart ment university california berkeley jan 2016 url http www2eecsberkeley edupubstechrpts2016eecs20161html a waterman k asanovic editors riscv instruction set manual volume ii privileged architecture version 110 may 2017a url https riscvorg specificationsprivilegedisa a waterman k asanovic editors riscv instruction set manual volume userlevel isa version 22 may 2017b url https riscvorgspecifications notes", "url": "RV32ISPEC.pdf#segment6", "timestamp": "2023-10-13 16:15:24", "segment": "segment6", "image_urls": [], "Book": "rvbook" }, { "section": "Chapter 2", "content": "RV32I: RISC-V Base Integer ISA", "url": "RV32ISPEC.pdf#segment7", "timestamp": "2023-10-13 16:15:23", "segment": "segment7", "image_urls": [], "Book": "rvbook" }, { "section": "2.1 Introduction ", "content": "figure 21 onepage graphical representation rv32i base instruction set see full rv32i instruction set concatenating underlined letters left right diagram set notation using lists possible variations instruction using either underlined letters underscore character means letter 
variation example set less immediate unsigned represents four rv32i instructions slt slti sltu sltiu goal diagrams first figure following chapters give quick insightful overview instructions chapter", "url": "RV32ISPEC.pdf#segment8", "timestamp": "2023-10-13 16:15:24", "segment": "segment8", "image_urls": [], "Book": "rvbook" }, { "section": "2.2 RV32I Instruction formats ", "content": "figure 22 shows six base instruction formats rtype registerregister operations itype short immediates loads stype stores btype conditional branches utype long immediates jtype unconditional jumps figure 23 lists opcodes rv32i instructions figure 21 using formats figure 22 even instruction formats demonstrate several examples simpler riscv isa improves costperformance first six formats instructions 32 bits long simplifies instruction decoding arm32 particularly x8632 numerous formats make decoding expensive lowend implementations performance challenge medium highend processor designs second riscv instructions offer three register operands rather one field shared source destination x8632 operation naturally three distinct operands isa provides twooperand instruction compiler assembly language programmer must use extra move instruction preserve destination operand third riscv specifiers registers read written always same location all instructions means register accesses begin before decoding instruction many isas reuse field source some instructions destination others eg arm32 mips32 forces addition extra hardware placed potentially timecritical path select proper field fourth immediate fields formats always sign extended sign bit always most significant bit instruction decision means sign extension immediate which may also be critical timing path can proceed before decoding instruction elaboration b jtype formats mentioned immediate field rotated 1 bit branch instructions s format variation relabel b format immediate field jump instructions rotated 12 bits u format variation relabeled j format hence really four basic formats
conservatively count riscv six formats help programmers bit pattern all zeros illegal rv32i instruction thus erroneous jumps zeroed memory regions immediately trap aid debugging similarly bit pattern all ones illegal instruction trap common errors unprogrammed nonvolatile memory devices disconnected memory buses broken memory chips leave ample room isa extensions base rv32i isa uses less 1/8th encoding space 32bit instruction word architects also carefully picked rv32i opcodes instructions common datapath operations share many opcode bit values possible simplifies control logic finally shall see branch jump addresses b j formats must shifted left 1 bit multiply addresses 2 thereby giving branches jumps greater range riscv rotates the bits in the immediate operands from a natural placement to reduce the instruction signal fanout and immediate multiplexing cost by almost a factor of two simplifies datapath logic lowend implementations nonstandardex different end section following chapters description on how riscv differs other isas the contrast is often what riscv is missing architects demonstrate good taste by the features they omit as well as by what they include arm32 12bit immediate field simply constant input function produces constant 8 bits zeroextended full width rotated right value 4 remaining bits multiplied 2 hope encoding useful constants 12 bits would reduce number executed instructions arm32 also dedicates precious four bits in most instruction formats to conditional execution despite infrequently used conditional execution adds to the complexity of outoforder processors elaboration outoforder processors highspeed pipelined processors execute instructions opportunistically instead lockstep program order critical feature processors register renaming maps register names program onto larger number internal physical registers problem conditional execution registers instructions must allocated internal physical registers whether condition holds yet internal physical register availability critical performance resource outoforder processors
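The sign-extension rule described in §2.2 above (the sign bit of every immediate is always the most significant bit of the instruction word) can be sketched in a few lines of Python. This is an illustrative decoder for the I-type format only, not code from the book; the sample encodings assume standard RV32I `addi` instructions.

```python
def decode_itype_imm(instr: int) -> int:
    """Extract the 12-bit I-type immediate from a 32-bit RISC-V
    instruction word and sign-extend it. imm[11:0] occupies
    instr[31:20], so the immediate's sign bit is instruction bit 31."""
    imm = (instr >> 20) & 0xFFF   # pull imm[11:0] out of instr[31:20]
    if imm & 0x800:               # bit 11 set -> negative two's-complement value
        imm -= 0x1000             # sign-extend into a Python int
    return imm

# addi x1, x0, -1 encodes imm = 0xFFF in instr[31:20]
print(decode_itype_imm(0xFFF00093))   # -1
print(decode_itype_imm(0x7FF00093))   # 2047
```

Because the sign bit sits in a fixed position, hardware can start the sign extension before the rest of the instruction is decoded, which is exactly the timing-path argument made above.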
", "url": "RV32ISPEC.pdf#segment9", "timestamp": "2023-10-13 16:15:25", "segment": "segment9", "image_urls": [], "Book": "rvbook" }, { "section": "2.3 RV32I Registers ", "content": "figure 24 lists rv32i registers names determined riscv application binary interface abi use abi names code examples make easier read joy assembly language programmers compiler writers rv32i 31 registers plus x0 always value 0 arm32 merely 16 registers x8632 8 different dedicating register zero surprisingly large factor simplifying riscv isa figure 33 page 36 chapter 3 gives many examples operations native instructions arm32 x8632 zero register synthesized rv32i instructions simply using zero register operand pc one arm32 16 registers means instruction changes register may also side effect branch instruction pc register complicates hardware branch prediction whose accuracy vital good pipelined performance since every instruction might be a branch instead of 1020 of instructions executed in programs typical isas also means one less generalpurpose register", "url": "RV32ISPEC.pdf#segment10", "timestamp": "2023-10-13 16:15:25", "segment": "segment10", "image_urls": [], "Book": "rvbook" }, { "section": "2.4 RV32I Integer Computation ", "content": "appendix gives details riscv instructions including formats opcodes in this section and similar sections of the following chapters we give an isa overview that is sufficient for knowledgeable assembly language programmers well highlight features demonstrate seven isa metrics chapter 1 simple arithmetic instructions add sub logical instructions and or xor shift instructions sll srl sra figure 21 would expect isa read two 32bit values registers write 32bit result destination register rv32i also offers immediate versions instructions unlike arm32 immediates always signextended negative needed no need immediate version sub programs generate boolean value result comparison accommodate cases rv32i offers set less instruction sets destination register 1 first
operand less second 0 otherwise one would expect signed version slt unsigned version sltu signed unsigned integers well immediate versions slti sltiu shall see rv32i branches check relationships two registers conditional expressions involve rela tionships many pairs registers compiler assembly language programmer could use slt logical instructions xor resolve elaborate conditional expressions two remaining integer computation instructions figure 21 help assembly linking load upper immediate lui loads 20bit constant signicant 20 bits register followed standard immediate instruction create 32bit constant two 32bit rv32i instructions add upper immediate pc auipc supports two instruction sequences access arbitrary offsets pc controlow transfers data accesses combination auipc 12bit immediate jalr see transfer control 32bit pcrelative address auipc plus 12bit immediate offset regular load store instructions access 32bit pcrelative data address different first byte halfword integer computation operations operations always full register width memory accesses take orders magnitude energy arithmetic operations narrow data accesses save signicant energy narrow operations arm32 unusual feature option shift one operands arithmeticlogic operations complicates datapath rarely needed hohl hinds 2016 rv32i separate shift instructions rv32i include multiply divide comprise optional rv32m exten sion see chapter 4 unlike arm32 x8632 full riscv software stack run without shrink embedded chips hardware issue mips32 assembler may replace multiply sequence shifts adds try improve perfor mance may confuse programmer seeing instructions executed found assembly language program rv32i also omits rotate instructions detection integer arithmetic overow calculated rv32i instructions see section 26 elaboration bit twiddling instructions rotate consideration riscv foundation part optional in struction extension called rv32b see chapter 11 elaboration xor enables magic trick exchange two values 
without using intermediate register code exchanges values x1 x2 leave proof reader hint exclusive or is commutative a ⊕ b = b ⊕ a associative (a ⊕ b) ⊕ c = a ⊕ (b ⊕ c) its own inverse a ⊕ a = 0 has an identity a ⊕ 0 = a however fascinating riscv ample register set usually lets compilers find scratch register rarely uses xorswap", "url": "RV32ISPEC.pdf#segment11", "timestamp": "2023-10-13 16:15:25", "segment": "segment11", "image_urls": [], "Book": "rvbook" }, { "section": "2.5 RV32I Loads and Stores ", "content": "well providing loads stores 32bit words lw sw figure 21 shows rv32i loads signed unsigned bytes halfwords lb lbu lh lhu stores bytes halfwords sb sh signed bytes halfwords signextended 32 bits written destination registers widening narrow data allows subsequent integer computation instructions operate correctly 32 bits even natural data types narrower unsigned bytes halfwords useful text unsigned integers zero extended 32 bits written destination register addressing mode loads stores adding signextended 12bit immediate register called displacement addressing mode x8632 irvine 2014 different rv32i omitted sophisticated addressing modes arm32 x8632 alas arm32 addressing modes not available all data types rv32i addressing not discriminate data type riscv imitate x86 addressing modes example setting immediate field 0 effect registerindirect addressing mode unlike x8632 riscv no special stack instructions using one 31 registers stack pointer see figure 24 standard addressing mode gets benefits push pop instructions without added isa complexity unlike mips32 riscv rejected delayed load similar spirit delayed branches mips32 redefined load data unavailable two instructions later would show fivestage pipeline whatever benefit evaporated longer pipelines came later arm32 mips32 require data aligned naturally datasized boundaries memory riscv misaligned accesses sometimes required porting legacy code one option disallow misaligned accesses base isa provide separate instructions support misaligned accesses load word left load word right mips32
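The exclusive-or swap elaboration in §2.4 above leaves the proof to the reader. A quick check in Python (not from the book; `x1` and `x2` mirror the register names used in the text):

```python
def xor_swap(x1: int, x2: int) -> tuple[int, int]:
    # the three-XOR exchange: no scratch register (or temporary) needed
    x1 ^= x2   # x1 = a ^ b
    x2 ^= x1   # x2 = b ^ (a ^ b) = a   (self-inverse + identity)
    x1 ^= x2   # x1 = (a ^ b) ^ a = b
    return x1, x2

assert xor_swap(0x1234, 0xBEEF) == (0xBEEF, 0x1234)
assert xor_swap(7, 7) == (7, 7)   # swapping equal values is a no-op
print("swapped ok")
```

Each step follows from the properties listed in the hint: XOR is commutative, associative, its own inverse (a ⊕ a = 0), and has 0 as identity (a ⊕ 0 = a).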
option would complicate register access however since lwl lwr require writing pieces registers instead simply full registers requiring instead regular loads stores support misaligned accesses simplified overall design elaboration endianness riscv chose littleendian byte ordering dominant commercially x8632 systems apple ios google android os microsoft windows arm little endian since endian order matters only accessing identical data both words bytes endianness affects few programmers", "url": "RV32ISPEC.pdf#segment12", "timestamp": "2023-10-13 16:15:25", "segment": "segment12", "image_urls": [], "Book": "rvbook" }, { "section": "2.6 RV32I Conditional Branch ", "content": "rv32i compare two registers branch result equal beq not equal bne greater equal bge less blt latter two cases signed comparisons rv32i also offers unsigned versions bgeu bltu two remaining relationships greater less equal checked simply reversing operands since x > y means y < x and x ≤ y implies y ≥ x since riscv instructions must multiple two bytes long see chapter 7 learn optional 2byte instructions the branch addressing mode multiplies 12bit immediate 2 signextends adds pc pcrelative addressing helps position independent code thereby reduces work linker loader chapter 3 different noted riscv excluded infamous delayed branch mips32 oracle sparc others also avoided condition codes arm32 x8632 conditional branches add extra state implicitly set instructions needlessly complicate dependence calculation outoforder execution finally omitted loop instructions x8632 loop loope loopz loopne loopnz elaboration multiword addition without condition codes done follows rv32i using sltu calculate carryout elaboration reading pc current pc obtained setting uimmediate field auipc 0 x8632 read pc need call function pushes pc stack callee reads pushed pc stack finally returns pc popping stack reading current pc took 1 store 2 loads 2 taken jumps elaboration software checking overflow programs ignore integer arithmetic overflow riscv relies software overflow
checking unsigned addition requires single additional branch instruction addition
  add  t0, t1, t2
  bltu t0, t1, overflow
signed addition one operand sign known overflow checking requires single branch addition
  addi t0, t1, imm
  blt  t0, t1, overflow
covers common case addition immediate operand general signed addition three additional instructions addition required observing sum less one operands only if other operand negative", "url": "RV32ISPEC.pdf#segment13", "timestamp": "2023-10-13 16:15:25", "segment": "segment13", "image_urls": [], "Book": "rvbook" }, { "section": "2.7 RV32I Unconditional Jump ", "content": "single jump link instruction jal figure 21 serves dual functions support procedure calls saves address next instruction pc4 destination register normally return address register ra see figure 24 support unconditional jumps use zero register x0 instead ra destination register x0 never changed like branches jal multiplies 20bit branch address 2 sign extends adds result pc form jump address register version jump link jalr similarly multipurpose call procedure dynamically calculated address simply perform procedure return selecting ra source register zero register x0 destination register switch case statements calculate jump address also use jalr zero register destination register different rv32i shunned intricate procedure call instructions enter leave instructions x8632 register windows found intel itanium oracle sparc cadence tensilica register windows accelerated function call by many more registers 32 new function would get new set window 32 registers call pass arguments windows overlapped meaning adjacent windows", "url": "RV32ISPEC.pdf#segment14", "timestamp": "2023-10-13 16:15:25", "segment": "segment14", "image_urls": [], "Book": "rvbook" }, { "section": "2.8 RV32I Miscellaneous ", "content": "control status register instructions csrrc csrrs csrrw csrrci csrrsi csrrwi figure 21 provide easy access registers help measure program performance 64bit counters read 32 bits time measure wall
clock time clock cycles executed number instructions retired registers two ecall instruction makes requests supporting execution environment system calls debuggers use ebreak instruction transfer control debugging environment fence instruction sequences device io memory accesses viewed threads external devices coprocessors fencei instruction synchronizes instruction data streams riscv does not guarantee stores instruction memory visible instruction fetches processor until fencei instruction executes chapter 10 covers riscv system instructions different riscv uses memory mapped io instead ins insb insw outs outsb outsw instructions x8632 supports strings using byte loads stores instead 16 special string instructions x8632 rep movs cmps scas lods", "url": "RV32ISPEC.pdf#segment15", "timestamp": "2023-10-13 16:15:26", "segment": "segment15", "image_urls": [], "Book": "rvbook" }, { "section": "2.9 Comparing RV32I, ARM-32, MIPS-32, and x86-32 using Insertion Sort ", "content": "introduced riscv base instruction set commented upon choices compared arm32 mips32 x8632 headtohead comparison figure 25 shows insertion sort c benchmark figure 26 table summarizes number instructions number bytes insertion sort isas figures 28 211 show compiled code rv32i arm32 mips32 x8632 despite the emphasis on simplicity the riscv version uses the same or fewer instructions and the code sizes of the architectures are quite close in this example the compare and execute branches of riscv save as many instructions as do the fancier address modes and push pop instructions arm32 x8632 figures 29 211", "url": "RV32ISPEC.pdf#segment16", "timestamp": "2023-10-13 16:15:26", "segment": "segment16", "image_urls": [], "Book": "rvbook" }, { "section": "2.10 Concluding Remarks ", "content": "figure 27 uses seven metrics isa design chapter 1 organize lessons past isas listed in it the previous sections and shows the positive outcomes for rv32i we are not implying that riscv is the first isa to have those outcomes indeed rv32i inherits the following risci greatgreatgrandparent
patterson 2017 32bit byteaddressable address space all instructions 32 bits long 31 registers all 32 bits wide register 0 hardwired zero all operations between registers none registertomemory loadstore word plus signed unsigned loadstore byte halfword immediate option all arithmetic logical shift instructions immediates always signextend one data addressing mode register immediate pcrelative branching no multiply divide instructions an instruction load wide immediate upper part register so 32bit constant takes two instructions riscv benefits starting onequarter onethird century later allowed architects follow santayana advice borrow good ideas not repeat mistakes past including risci in current riscv isa moreover riscv foundation grow isa slowly via optional extensions prevent rampant incrementalism plagued successful isas past lindy effect lin 2017 observes the future life expectancy a technology or idea is proportional to its age stood test of time longer survived past longer likely will survive future that hypothesis holds risc architecture may a good idea long time elaboration rv32i unique early microprocessors separate chips floatingpoint arithmetic instructions optional moore law soon brought everything chip modularity faded isas subsetting full isa simpler processors trapping software emulate goes back decades ibm 360 model 44 digital equipment microvax examples rv32i different full software stack needs base instructions rv32i processor need not trap repeatedly omitted instructions rv32g probably closest isa riscv respect tensilica xtensa aimed embedded applications 80instruction base isa intended extended users custom instructions accelerate applications rv32i simpler base isa 64bit address version offers extensions target supercomputers well microcontrollers", "url": "RV32ISPEC.pdf#segment17", "timestamp": "2023-10-13 16:15:26", "segment": "segment17", "image_urls": [], "Book": "rvbook" }, { "section": "2.11 To Learn More ", "content": "lindy effect 2017 url https enwikipediaorgwikilindyeffect t chen d a patterson
riscv genealogy technical report ucbeecs2016 6 eecs department university california berkeley jan 2016 url http www2 eecsberkeleyedupubstechrpts2016eecs20166html w hohl c hinds arm assembly language fundamentals techniques crc press 2016 k r irvine assembly language x86 processors prentice hall 2014 d patterson close riscv risci 2017 a waterman k asanovic editors riscv instruction set manual volume userlevel isa version 22 may 2017 url https riscvorgspecifications", "url": "RV32ISPEC.pdf#segment18", "timestamp": "2023-10-13 16:15:26", "segment": "segment18", "image_urls": [], "Book": "rvbook" }, { "section": "Chapter 3", "content": "RISC-V Assembly Language", "url": "RV32ISPEC.pdf#segment19", "timestamp": "2023-10-13 16:15:26", "segment": "segment19", "image_urls": [], "Book": "rvbook" }, { "section": "3.1 Introduction ", "content": "figure 31 shows four classic steps translation starting c program ending machinelanguage program ready run computer chapter covers last three steps begin role assembler plays riscv calling convention", "url": "RV32ISPEC.pdf#segment20", "timestamp": "2023-10-13 16:15:26", "segment": "segment20", "image_urls": [], "Book": "rvbook" }, { "section": "3.2 Calling convention ", "content": "six general stages calling function patterson hennessy 2017
1 place arguments where function can access
2 jump function using rv32i jal
3 acquire local storage resources function needs saving registers required
4 perform desired task function
5 place function result value where calling program can access restore registers release local storage resources
6 return control point origin using ret since function called several points program
obtain good performance try keep variables registers rather memory on other hand avoid going memory frequently save restore registers riscv fortunately enough registers offer best both worlds keep operands registers yet reduce need save restore insight some registers guaranteed not preserved across function call called temporary registers others guaranteed preserved called saved registers functions
avoid calling other functions called leaf functions leaf function few arguments local variables keep everything registers without spilling memory conditions not hold memory program must save register values memory surprising fraction function calls fall happy case registers within function call must considered either class saved registers preserved across function call class temporary registers function change register containing return value like temporary registers no reason preserve registers pass arguments functions also like temporaries caller cannot rely remaining registers unchanged across function call registers used return address stack pointer figure 32 lists riscv application binary interface abi names registers convention whether preserved across function calls given abi conventions see standard rv32i code function entry exit function prologue looks like
entry_label:
  addi sp, sp, -framesize  # allocate space stack frame adjusting stack pointer sp register
  sw   ra, framesize-4(sp) # save return address ra register
  # save other registers stack needed
body function more function arguments variables than fit registers prologue allocates space stack function frame called task function complete epilogue undoes stack frame returns point origin
  # restore registers stack needed
  lw   ra, framesize-4(sp) # restore return address register
  addi sp, sp, framesize   # deallocate space stack frame
  ret                      # return calling point
see example follows abi shortly first need explain remaining assembly tasks beyond turning abi register names register numbers elaboration saved temporary registers not contiguous support rv32e embedded version riscv 16 registers see chapter 11 simply uses register numbers x0 x15 saved temporary registers range rest last 16 registers rv32e smaller compiler support yet since match rv32i", "url": "RV32ISPEC.pdf#segment21", "timestamp": "2023-10-13 16:15:26", "segment": "segment21", "image_urls": [], "Book": "rvbook" }, { "section": "3.3 Assembly ", "content": "input step unix file suffix s foo.s msdos asm job assembler step
figure 31 simply produce object code instructions processor understands extend include operations useful assembly language programmer compiler writer category based clever configurations regular instructions called pseudoinstructions figures 33 34 list riscv pseudoinstructions first figure relying register x0 always zero second list example ret mentioned actually pseudoinstruction assembler replaces jalr x0 x1 0 see figure 33 majority riscv pseudoinstructions depend x0 see setting aside one 32 registers hardwired zero greatly simplifies riscv instruction set providing many popular operations such as jump return branch equal zero as pseudoinstructions figure 35 shows classic hello world program c compiler produces assembly language output figure 36 using calling convention figure 32 pseudoinstructions figures 33 34 traditionally consider running operating commands start period assembler directives commands assembler rather code translated tell assembler where place code data specify text data constants use program forth figure 39 shows assembler directives riscv figure 36 directives
.text            # enter text section
.align 2         # align following code 2^2 bytes
.globl main      # declare global symbol main
.section .rodata # enter readonly data section
.balign 4        # align data section 4 bytes
string1: .string hello n  # create nullterminated string
string2: .string world    # create nullterminated string
assembler produces object file figure 37 using executable linkable format elf standard format tis committee 1995
output aout le msdos inputs les sufx obj lib output exe le figure 310 shows addresses regions memory allocated code data typical riscv program linker must adjust program data addresses instructions object les match addresses gure less work linker input les position independent code pic pic means branches instructions references data inside le correct wherever code placed mentioned chapter 2 pcrelative branch rv32i makes pic much easier fulll addition instructions object le contains symbol table includes labels program must given addresses part linking process list includes labels data well code figure 36 two data labels set string1 string2 two code labels assigned main printf since hard specify 32bit address within single 32bit instruction linker must adjust two instruc tions per label rv32i code figure 36 shows lui addi data addresses auipc jalr code addresses figure 38 shows nal linked aout version object le figure 37 riscv compilers support several abis depending whether f extensions present rv32 abis named ilp32 ilp32f ilp32d ilp32 means c language data types int long pointer 32 bits optional sufx indicates oatingpoint arguments passed ilp32 oatingpoint arguments passed integer registers ilp32f singleprecision oatingpoint arguments passed oatingpoint registers ilp32d doubleprecision oatingpoint arguments also passed oating point registers naturally pass oatingpoint argument oatingpoint register need cor responding oatingpoint isa extension f see chapter 5 compile code rv32i gcc ag marchrv32i must use ilp32 abi gcc ag mabiilp32 hand oatingpoint instructions mean calling convention required use example rv32ifd compatible three abis ilp32 ilp32f ilp32d linker checks program abi matches libraries although com piler supports many combinations isa extensions abis sets libraries might installed hence common pitfall attempting link program without compatible libraries installed linker produce helpful diagnostic message case simply attempt link incompatible library 
inform incompatibility pitfall generally occurs compiling one computer different computer cross compiling elaboration linker relaxation jump link instruction 20bit pcrelative address field single instruction jump far compiler produces two instructions external function quite often one instruction necessary since optimization saves time space linkers make passes code replace two instructions one whenever pass might shrink distance call function fits single instruction linker keeps optimizing code changes process called linker relaxation name referring relaxation techniques solving systems equations addition procedure calls riscv linker relaxes data addressing use global pointer datum lies within 2 kib gp removing lui auipc similarly relaxes threadlocal storage addressing datum lies within 2 kib tp", "url": "RV32ISPEC.pdf#segment23", "timestamp": "2023-10-13 16:15:27", "segment": "segment23", "image_urls": [], "Book": "rvbook" }, { "section": "3.5 Static vs. Dynamic Linking ", "content": "prior section describes static linking potential library code linked loaded together execution such libraries be relatively large so linking a popular library multiple programs wastes memory moreover the libraries are bound when linked even when they are updated later to fix bugs forcing the statically linked code to use old buggy version avoid problems systems today rely dynamic linking desired external function loaded linked program first called never called never loaded linked every call first one uses fast link dynamic overhead paid time program starts links current version library functions needs get newest version furthermore multiple programs use dynamically linked library code library need appear memory code compiler generates resembles static linking instead jumping real function jumps short threeinstruction stub function stub function loads address real function table memory jumps however first call table lacks address real function instead contains address dynamiclinking routine invoked dynamic linker uses
symbol table to find the real function, copies it into memory, and updates the table to point to the real function. Each subsequent call pays just the three-instruction overhead of the stub function.", "url": "RV32ISPEC.pdf#segment24", "timestamp": "2023-10-13 16:15:27", "segment": "segment24", "image_urls": [], "Book": "rvbook" }, { "section": "3.6 Loader ", "content": "For a program like the one in Figure 3.8, the executable file is kept in computer storage until someone runs it. The loader's job is to load it into memory and jump to its starting address. The loader today is the operating system; stated alternatively, loading a.out is one of the many tasks of the operating system. Loading is a little trickier for dynamically linked programs. Instead of simply starting the program, the operating system starts the dynamic linker, which in turn starts the desired program and handles first-time external calls by copying functions into memory and editing the program to call the correct function from that point on.", "url": "RV32ISPEC.pdf#segment25", "timestamp": "2023-10-13 16:15:27", "segment": "segment25", "image_urls": [], "Book": "rvbook" }, { "section": "3.7 Concluding Remarks ", "content": "The assembler enhances the simple RISC-V ISA with 60 pseudoinstructions that make RISC-V code easier to read and write without increasing hardware costs; simply dedicating one RISC-V register to zero enables many of these helpful operations. The load upper immediate (lui) and add upper immediate to PC (auipc) instructions make it easier for the compiler and linker to adjust the addresses of external data and functions, and PC-relative branching makes it easier for the linker to produce position-independent code. Plenty of registers enables a calling convention that makes function call and return faster by reducing the number of register spills and restores. RISC-V offers a tasteful collection of simple but impactful mechanisms that reduce cost, improve performance, and make programs easier to write.", "url": "RV32ISPEC.pdf#segment26", "timestamp": "2023-10-13 16:15:27", "segment": "segment26", "image_urls": [], "Book": "rvbook" }, { "section": "3.8 To Learn More ", "content": "D. A. Patterson and J. L. Hennessy. Computer Organization and Design RISC-V Edition: The Hardware Software Interface. Morgan Kaufmann, 2017. TIS Committee. Tool Interface Standard (TIS) Executable and Linking Format (ELF) Specification, Version 1.2. TIS
Committee, 1995. A. Waterman and K. Asanovic, editors. The RISC-V Instruction Set Manual, Volume I: User-Level ISA, Version 2.2, May 2017. URL https://riscv.org/specifications.", "url": "RV32ISPEC.pdf#segment27", "timestamp": "2023-10-13 16:15:27", "segment": "segment27", "image_urls": [], "Book": "rvbook" }, { "section": "Chapter 4", "content": "RV32M: Multiply and Divide", "url": "RV32ISPEC.pdf#segment28", "timestamp": "2023-10-13 16:15:26", "segment": "segment28", "image_urls": [], "Book": "rvbook" }, { "section": "4.1 Introduction ", "content": "RV32M adds integer multiply and divide instructions to RV32I. Figure 4.1 is a graphical representation of the RV32M extension instruction set, and Figure 4.2 lists their opcodes. Divide is straightforward. Recall that quotient = dividend / divisor and remainder = dividend mod divisor; alternatively, dividend = quotient * divisor + remainder, so remainder = dividend - quotient * divisor. RV32M has divide instructions for signed and unsigned integers: divide (div) and divide unsigned (divu), which place the quotient in the destination register. Less frequently, programmers want the remainder instead of the quotient, so RV32M offers remainder (rem) and remainder unsigned (remu), which write the remainder instead. Multiply is simply product = multiplicand * multiplier, but it is more complicated than divide in that the size of the product is the sum of the sizes of the multiplier and multiplicand: multiplying two 32-bit numbers yields a 64-bit product. To produce a properly signed or unsigned 64-bit product, RISC-V has four multiply instructions. To get the integer 32-bit product (the lower 32 bits of the full product), use mul. To get the upper 32 bits of the 64-bit product, use mulh if both operands are signed, mulhu if both operands are unsigned, or mulhsu if one is signed and one unsigned. Since it would complicate the hardware for one instruction to write a 64-bit product into two 32-bit registers, RV32M requires two multiply instructions to produce a full 64-bit product. In many microprocessors, integer division is a relatively slow operation. As mentioned earlier, right shifts can replace unsigned division by powers of 2. It turns out that division by other constants can be optimized by multiplying by the approximate reciprocal and then applying a correction to the upper half of the product. For example, Figure 4.3 shows the code for unsigned division by 3.
Different from RV32M: ARM-32 has separate long multiply instructions, and its divide instructions did not become mandatory until 2005, almost 20 years after the first ARM processor. MIPS-32 uses the special registers Hi and Lo as the sole destination registers of multiply and divide instructions. This design reduced the complexity of early MIPS implementations, but it takes an extra move instruction to use the result of a multiply or divide, potentially reducing performance. The Hi and Lo registers also increase architectural state, making it slightly slower to switch tasks. Elaboration: mulh and mulhu can check for overflow of multiplication. There is no overflow using mul for unsigned multiplication if the result of mulhu is zero. Similarly, there is no overflow using mul for signed multiplication if all the bits of the result of mulh match the sign bit of the result of mul (i.e., mulh's result equals 0 if mul's is positive or 0xffffffff if negative). Elaboration: it is also easy to check for divide by zero: just add a beqz test on the divisor before the divide. As RV32I does not trap on divide by zero, programs that want that behavior are the ones that easily check for zero in software. Of course, divides by constants never need such checks. Elaboration: mulhsu is useful for multiword signed multiplication. It generates the upper half of a product where the multiplier is signed and the multiplicand unsigned, a substep of multiword signed multiplication: the most-significant word of the multiplier contains the sign bit, while the less-significant words of the multiplicand are unsigned. This instruction improves the performance of multiword multiplication.", "url": "RV32ISPEC.pdf#segment29", "timestamp": "2023-10-13 16:15:27", "segment": "segment29", "image_urls": [], "Book": "rvbook" }, { "section": "4.2 Concluding Remarks ", "content": "To offer the smallest possible RISC-V processor for embedded applications, multiply and divide are not part of RV32I; they form the first optional standard extension to RISC-V. Nevertheless, many RISC-V processors include RV32M.", "url": "RV32ISPEC.pdf#segment30", "timestamp": "2023-10-13 16:15:27", "segment": "segment30", "image_urls": [], "Book": "rvbook" }, { "section": "4.3 To Learn More ", "content": "T. Granlund and P. L. Montgomery. Division by invariant integers using multiplication. ACM SIGPLAN Notices, volume 29, pages 61-72. ACM, 1994. A. Waterman and K. Asanovic, editors. The RISC-V Instruction Set Manual, Volume I: User-Level ISA, Version 2.2, May 2017. URL https://riscv.org/specifications.", "url": "RV32ISPEC.pdf#segment31",
"timestamp": "2023-10-13 16:15:27", "segment": "segment31", "image_urls": [], "Book": "rvbook" }, { "section": "Chapter 5", "content": "RV32F and RV32D: Single- and Double-Precision Floating Point", "url": "RV32ISPEC.pdf#segment32", "timestamp": "2023-10-13 16:15:26", "segment": "segment32", "image_urls": [], "Book": "rvbook" }, { "section": "5.1 Introduction ", "content": "although rv32f rv32d separate optional instruction set extensions often included together given single doubleprecision 32 64bit versions nearly oatingpoint instructions brevity present one chapter figure 51 graphical representation rv32f rv32d extension instruction sets figure 52 lists opcodes rv32f figure 53 lists opcodes rv32d like virtually modern isas riscv obeys ieee 7542008 oatingpoint standard ieee standards committee 2008", "url": "RV32ISPEC.pdf#segment33", "timestamp": "2023-10-13 16:15:27", "segment": "segment33", "image_urls": [], "Book": "rvbook" }, { "section": "5.2 Floating-Point Registers ", "content": "rv32f rv32d use 32 separate f registers instead x registers main reason two sets registers processors improve performance doubling register capacity bandwidth two sets registers without increasing space register specier cramped riscv instruction format major impact instruction set new instructions load store f registers transfer data x f registers figure 54 lists rv32d rv32f registers names determined riscv abi processor rv32f rv32d singleprecision data uses lower 32 bits f registers unlike x0 rv32i register f0 hardwired 0 alterable register like 31 f registers ieee 7542008 standard provides several ways round oatingpoint arithmetic helpful determine error bounds writing numerical libraries accurate common round nearest even rne rounding mode set oatingpoint control status register fcsr figure 55 shows fcsr lists rounding options also holds accrued exception ags standard requires different arm32 mips32 32 singleprecision oatingpoint registers 16 doubleprecision registers map two 
single-precision registers as its left and right 32-bit halves. The x86-32 floating-point arithmetic registers are instead used as a stack, whose entries are 80 bits wide to improve accuracy: loads convert 32-bit and 64-bit operands to 80 bits, and stores convert back. A subsequent version of x86-32 added 8 traditional 64-bit floating-point registers and associated instructions. Unlike RV32F/D, MIPS-32, ARM-32, and x86-32 overlooked instructions to move data directly between floating-point and integer registers; their solution is to store the floating-point register to memory and then load that memory into an integer register, and vice versa. Elaboration: RV32F/D allows the rounding mode to be set per instruction, called static rounding. It helps performance when the rounding mode needs to change for just one instruction; the default is to use the dynamic rounding mode in fcsr. Static rounding is specified as an optional last argument; for example, fadd.s ft0, ft1, ft2, rtz rounds toward zero irrespective of fcsr. The caption of Figure 5.5 lists the names of the rounding modes.", "url": "RV32ISPEC.pdf#segment34", "timestamp": "2023-10-13 16:15:27", "segment": "segment34", "image_urls": [], "Book": "rvbook" }, { "section": "5.3 Floating-Point Loads, Stores, and Arithmetic ", "content": "RISC-V has two floating-point load instructions, flw and fld, and two store instructions, fsw and fsd; RV32F and RV32D use the same addressing mode and instruction format as lw and sw. In addition to the standard arithmetic operations fadd.s, fadd.d, fsub.s, fsub.d, fmul.s, fmul.d, fdiv.s, and fdiv.d, RV32F and RV32D include square root: fsqrt.s and fsqrt.d. There are also minimum and maximum instructions (fmin.s, fmin.d, fmax.s, fmax.d) that write the smaller or larger of a pair of source operands without using a branch instruction. Many floating-point algorithms, such as matrix multiply, perform a multiply immediately followed by an addition or subtraction. Hence, RISC-V offers instructions that multiply two operands and either add (fmadd.s, fmadd.d) or subtract (fmsub.s, fmsub.d) a third operand before writing the result. There are also versions that negate the product before adding or subtracting the third operand: fnmadd.s, fnmadd.d, fnmsub.s, fnmsub.d. These fused multiply-add instructions are more accurate as well as faster than separate multiply and add instructions, as they round once for the multiply-add rather than twice. The instructions need a new format to specify 4 registers, called R4; Figures 5.2 and 5.3 show the R4 format as a variation of the R format.
Instead of floating-point branch instructions, RV32F and RV32D supply comparison instructions that set an integer register to 1 or 0 based on a comparison of two floating-point registers: feq.s, feq.d, flt.s, flt.d, fle.s, fle.d. These allow an integer branch instruction to jump based on a floating-point condition. For example, this code branches to Exit if f1 < f2: flt x5, f1, f2 (x5 = 1 if f1 < f2, otherwise x5 = 0), then bne x5, x0, Exit (if x5 is nonzero, jump to Exit).", "url": "RV32ISPEC.pdf#segment35", "timestamp": "2023-10-13 16:15:27", "segment": "segment35", "image_urls": [], "Book": "rvbook" }, { "section": "5.4 Floating-Point Converts and Moves ", "content": "RV32F and RV32D have instructions that perform all the useful combinations of conversions between 32-bit signed integers, 32-bit unsigned integers, 32-bit floating point, and 64-bit floating point. Figure 5.6 displays the 10 instructions by the source data type converted from and the destination data type converted to. RV32F also offers instructions to move data between the x and f registers: fmv.x.w and, in the other direction, fmv.w.x.", "url": "RV32ISPEC.pdf#segment36", "timestamp": "2023-10-13 16:15:27", "segment": "segment36", "image_urls": [], "Book": "rvbook" }, { "section": "5.5 Miscellaneous Floating-Point Instructions ", "content": "RV32F and RV32D offer some unusual instructions that help math libraries as well as provide useful pseudoinstructions. The IEEE 754 floating-point standard requires a way to copy and manipulate signs and to classify floating-point data, which inspired these instructions. First are the sign-injection instructions, which copy everything from the first source register except the sign bit, whose value depends on the instruction: 1. Float sign inject (fsgnj.s, fsgnj.d): the result's sign bit is rs2's sign bit. 2. Float sign inject negative (fsgnjn.s, fsgnjn.d): the result's sign bit is the opposite of rs2's sign bit. 3. Float sign inject exclusive-or (fsgnjx.s, fsgnjx.d): the sign bit is the XOR of the sign bits of rs1 and rs2. As well as helping with sign manipulation in math libraries, the sign-injection instructions provide three popular floating-point pseudoinstructions (see Figure 3.4 on page 37): 1. Copy a floating-point register: fmv.s rd, rs is really fsgnj.s rd, rs, rs; fmv.d rd, rs is really fsgnj.d rd, rs, rs. 2. Negate: fneg.s rd, rs maps to fsgnjn.s rd, rs, rs; fneg.d rd, rs maps to fsgnjn.d rd, rs, rs. 3. Absolute value:
since 0 XOR 0 = 0 and 1 XOR 1 = 0, fabs.s rd, rs becomes fsgnjx.s rd, rs, rs and fabs.d rd, rs becomes fsgnjx.d rd, rs, rs. The second unusual floating-point instruction is classify: fclass.s and fclass.d. The classify instructions are also a great aid to math libraries. They test the source operand to see which of 10 floating-point properties apply (see the table) and write a mask into the lower 10 bits of the destination integer register with the answer: exactly one of the ten bits is set to 1 and the rest are set to 0.", "url": "RV32ISPEC.pdf#segment37", "timestamp": "2023-10-13 16:15:28", "segment": "segment37", "image_urls": [], "Book": "rvbook" }, { "section": "5.6 Comparing RV32FD, ARM-32, MIPS-32, and x86-32 using DAXPY ", "content": "We'll now do a head-to-head comparison using DAXPY as our floating-point benchmark. Figure 5.7 calculates Y = aX + Y in double precision, where X and Y are vectors and a is a scalar. Figure 5.8 summarizes the number of instructions and the number of bytes for the DAXPY programs of the four ISAs, whose code appears in Figures 5.9 to 5.12. As was the case for insertion sort in Chapter 2, despite its emphasis on simplicity, the RISC-V version has the fewest instructions, and the code sizes of the architectures are quite close. For this example, the combined compare-and-branch instructions of RISC-V save about as many instructions as the fancier addressing modes and the push and pop instructions of ARM-32 and x86-32.", "url": "RV32ISPEC.pdf#segment38", "timestamp": "2023-10-13 16:15:28", "segment": "segment38", "image_urls": [], "Book": "rvbook" }, { "section": "5.7 Concluding Remarks ", "content": "Less is more, wrote Robert Browning in 1855; the minimalist school of building architecture adopted the poem's axiom in the 1980s. The IEEE 754-2008 floating-point standard [IEEE Standards Committee 2008] defines the floating-point data types, the accuracy of computation, and the required operations. Its success greatly reduces the difficulty of porting floating-point programs, but it also means floating-point ISAs are probably more uniform and equivalent than those of the other chapters. Elaboration: 16-bit, 128-bit, and decimal floating-point arithmetic. The revised IEEE floating-point standard, IEEE 754-2008, describes several new formats beyond single and double precision, which it calls binary32 and binary64. The least surprising addition is quadruple precision, named binary128, for which RISC-V has a tentative extension planned, called RV32Q (see Chapter 11). The standard also provided two sizes only for binary data interchange, indicating that programmers might store
numbers in memory at those sizes but with no expectation of computing in them: half precision (binary16) and octuple precision (binary256). Despite the standard's intent, GPUs today compute in half precision as well as keep such numbers in memory. The plan is for RISC-V to include half-precision vector instructions in RV32V (Chapter 8), with the proviso that processors supporting vector half precision also add half-precision scalar instructions. A surprising addition to the revised standard is decimal floating point, for which RISC-V has set aside RV32L (see Chapter 11). There are three self-explanatory decimal formats, called decimal32, decimal64, and decimal128.", "url": "RV32ISPEC.pdf#segment39", "timestamp": "2023-10-13 16:15:28", "segment": "segment39", "image_urls": [], "Book": "rvbook" }, { "section": "5.8 To Learn More ", "content": "IEEE Standards Committee. 754-2008 IEEE Standard for Floating-Point Arithmetic. IEEE Computer Society Std., 2008. A. Waterman and K. Asanovic, editors. The RISC-V Instruction Set Manual, Volume I: User-Level ISA, Version 2.2, May 2017. URL https://riscv.org/specifications.", "url": "RV32ISPEC.pdf#segment40", "timestamp": "2023-10-13 16:15:28", "segment": "segment40", "image_urls": [], "Book": "rvbook" }, { "section": "Chapter 6", "content": "RV32A: Atomic Instructions", "url": "RV32ISPEC.pdf#segment41", "timestamp": "2023-10-13 16:15:26", "segment": "segment41", "image_urls": [], "Book": "rvbook" }, { "section": "6.1 Introduction ", "content": "Our assumption is that you already understand ISA support for multiprocessing, so our job is just to explain the RV32A instructions. If you don't feel you have sufficient background, or need a reminder, study the article on synchronization in computer science on Wikipedia (https://en.wikipedia.org/wiki/Synchronization_(computer_science)) or read Section 2.11 and related material in the RISC-V edition architecture book [Patterson and Hennessy 2017]. RV32A has two types of atomic operations for synchronization: atomic memory operations (AMOs), and load-reserved/store-conditional. Figure 6.1 is a graphical representation of the RV32A extension instruction set; Figure 6.2 lists their opcodes and instruction formats. The AMO instructions atomically perform an operation on an operand in memory and set the destination register to the original memory value. Atomic means nothing can interrupt the read and write of memory, and no other processor can modify the memory
value between the memory read and the memory write of the AMO instruction. Load-reserved and store-conditional provide an atomic operation across two instructions. Load-reserved reads a word from memory, writes it to the destination register, and records a reservation on that word of memory. Store-conditional stores a word at the address in its source register, provided there exists a load reservation on that memory address; it writes zero to the destination register if the store succeeded, or a nonzero error code otherwise. The obvious question is why RV32A has two ways to perform atomic operations. The answer is two quite distinct use cases. Programming-language developers assume the underlying architecture can perform an atomic compare-and-swap operation: compare the value in a register to the value in memory addressed by another register and, if they are equal, swap the value in a third register with the one in memory. They can make this assumption because it is a universal synchronization primitive: any single-word synchronization operation can be synthesized from compare-and-swap [Herlihy 1991]. That is a powerful argument for adding the instruction to an ISA, but it requires three source registers in one instruction. Alas, going from two to three source operands would complicate the integer datapath, control, and instruction format. (The three source operands of the RV32F/D multiply-add instructions affect the floating-point datapath, not the integer datapath.) Fortunately, load-reserved and store-conditional, each with two source registers, can implement atomic compare-and-swap (see the top half of Figure 6.3), which is the rationale for these instructions. They also scale better to large multiprocessor systems than compare-and-swap, and load-reserved/store-conditional can be used to implement reduction operations efficiently as well. AMOs are useful too, for example for communicating with I/O devices that can perform a read and a write as a single atomic bus transaction; such atomicity can simplify device drivers and improve I/O performance. The bottom half of Figure 6.3 shows how to write a critical section using atomic swap. Elaboration: memory consistency models. RISC-V has a relaxed memory consistency model, so other threads may view memory accesses out of order. As Figure 6.2 shows, all RV32A instructions have an acquire bit (aq) and a release bit (rl). If the aq bit of an atomic operation is set, other threads are guaranteed to see the AMO in order with respect to subsequent memory accesses; if the rl bit is set, other threads see the atomic operation in order with respect to previous memory accesses. [Adve and Gharachorloo 1996] is an excellent tutorial on this topic.
Different from RISC-V: the original MIPS-32 had no mechanism for synchronization; architects added load-reserved and store-conditional instructions in a later MIPS ISA.", "url": "RV32ISPEC.pdf#segment42", "timestamp": "2023-10-13 16:15:28", "segment": "segment42", "image_urls": [], "Book": "rvbook" }, { "section": "6.2 Concluding Remarks ", "content": "RV32A is optional, and a RISC-V processor is simpler without it. However, as Einstein said, everything should be made as simple as possible, but not simpler. Many situations require RV32A.", "url": "RV32ISPEC.pdf#segment43", "timestamp": "2023-10-13 16:15:28", "segment": "segment43", "image_urls": [], "Book": "rvbook" }, { "section": "6.3 To Learn More ", "content": "S. V. Adve and K. Gharachorloo. Shared memory consistency models: a tutorial. Computer, 29(12):66-76, 1996. M. Herlihy. Wait-free synchronization. ACM Transactions on Programming Languages and Systems, 1991. D. A. Patterson and J. L. Hennessy. Computer Organization and Design RISC-V Edition: The Hardware Software Interface. Morgan Kaufmann, 2017. A. Waterman and K. Asanovic, editors. The RISC-V Instruction Set Manual, Volume I: User-Level ISA, Version 2.2, May 2017. URL https://riscv.org/specifications.", "url": "RV32ISPEC.pdf#segment44", "timestamp": "2023-10-13 16:15:28", "segment": "segment44", "image_urls": [], "Book": "rvbook" }, { "section": "Chapter 7", "content": "RV32C: Compressed Instructions", "url": "RV32ISPEC.pdf#segment45", "timestamp": "2023-10-13 16:15:26", "segment": "segment45", "image_urls": [], "Book": "rvbook" }, { "section": "7.1 Introduction ", "content": "Prior ISAs significantly expanded the number of instructions and instruction formats to shrink code size, adding short instructions with two operands instead of three and with small immediate fields. ARM and MIPS each invented whole new ISAs, twice, to shrink code: ARM Thumb and Thumb-2, plus MIPS16 and microMIPS. The new ISAs hampered the processor and compiler and increased the cognitive load on the assembly language programmer. RV32C takes a novel approach: every short instruction must map to a single standard 32-bit RISC-V instruction. Moreover, only the assembler and linker need be aware of the 16-bit instructions, since they are what replaces a wide instruction with its narrow cousin. The compiler writer and
assembly language programmer can remain blissfully oblivious to the RV32C instructions and their formats, except that their programs end up smaller. Figure 7.1 is a graphical representation of the RV32C extension instruction set. The RISC-V architects chose the instructions for the RVC extension to obtain good code compression across a range of programs, using three observations about what fits in 16 bits. First, ten popular registers (a0-a5, s0, s1, sp, and ra) are accessed far more than the rest. Second, many instructions overwrite one of their source operands. Third, immediate operands tend to be small, and some instructions favor certain immediates. Thus, many RV32C instructions can access only the popular registers, some instructions implicitly overwrite their source operand, almost all immediates are reduced in size, and loads and stores use unsigned offsets that are multiples of the operand size. Figures 7.3 and 7.4 list the RV32C code for insertion sort and DAXPY. To show the RV32C instructions and demonstrate the impact of compression explicitly (normally these instructions are invisible to the assembly language program), the comments show the equivalent 32-bit instructions of the RV32C instructions parenthetically. The appendix includes the 32-bit RISC-V instruction that corresponds to each 16-bit RV32C instruction. For example, at address 4 of insertion sort in Figure 7.3, the assembler replaced the 32-bit RV32I instruction addi a4, x0, 1 (i = 1) with a 16-bit RV32C instruction: the RV32C load-immediate instruction is narrower, so it need specify only one register and a small immediate: c.li. The machine code is four hexadecimal digits in Figure 7.3, showing that the c.li instruction is indeed 2 bytes long. As another example, at address 10 in Figure 7.3, the assembler replaced add a2, x0, a3 (a2 = pointer to A[j]) with the 16-bit RV32C instruction c.mv a2, a3, which expands to add a2, x0, a3; the RV32C move instruction is merely 16 bits long because it specifies just two registers. The processor designer can even ignore RV32C in the rest of the implementation: the trick that makes it inexpensive is a decoder that translates 16-bit instructions into their equivalent 32-bit versions to execute. Figures 7.6 to 7.8 list the RV32C instruction formats and opcodes. The decoder that translates to the equivalents is about 400 gates, while the tiniest 32-bit processor, without any RISC-V extensions, is about 8000 gates; the decoder nearly disappears inside a moderate processor whose caches are on the order of 100,000 gates.
Different from RV32C: byte and halfword load/store instructions would have a bigger influence on code size. The small code-size advantage of Thumb-2 over RV32C in Figure 1.5 on page 9 is due to the code-size savings of load multiple and store multiple for procedure entry and exit, which RV32C excludes to maintain its one-to-one mapping to RV32G instructions and omits to reduce implementation complexity for high-end processors. Since Thumb-2 is a separate ISA that an ARM-32 processor must switch into, the hardware must have two instruction decoders, one for ARM-32 and one for Thumb-2; because RV32GC is a single ISA, RISC-V processors need only a single decoder. Elaboration: why would architects ever skip RV32C? Instruction decode can be a bottleneck for superscalar processors, which try to fetch several instructions per clock cycle. (Another example is macro-fusion, whereby the instruction decoder combines RISC-V instructions to form more powerful instructions for execution; see Chapter 1.) The mix of 16-bit RV32C and 32-bit RV32I instructions can make such sophisticated decoding difficult to complete within the clock cycle of a high-performance implementation.", "url": "RV32ISPEC.pdf#segment46", "timestamp": "2023-10-13 16:15:28", "segment": "segment46", "image_urls": [], "Book": "rvbook" }, { "section": "7.2 Comparing RV32GC, Thumb-2, microMIPS, and x86-32 ", "content": "Figure 7.2 summarizes the sizes of insertion sort and DAXPY for the four ISAs. Of the 19 original RV32I instructions for insertion sort, 12 become RV32C instructions, so the code shrinks from 19 x 4 = 76 bytes to 12 x 2 + 7 x 4 = 52 bytes, a saving of 24/76, or about 32%. DAXPY shrinks from 11 x 4 = 44 bytes to 8 x 2 + 3 x 4 = 28 bytes, a saving of 16/44, or about 36%. The results for these two small examples are surprisingly in line with Figure 1.5 on page 9, which shows that RV32G code is 37% larger than RV32GC code for a much bigger set of much larger programs; to achieve this level of savings, about half the instructions of a program must be RV32C instructions. Elaboration: RV32C is not really a unique ISA, since RV32I instructions are indistinguishable within RV32IC, but Thumb-2 actually is a separate ISA from ARMv7: for example, compare and branch on zero is in Thumb-2 but not ARMv7, and vice versa for reverse subtract with carry. Nor is microMIPS32 simply a superset of MIPS32; for example, microMIPS multiplies branch displacements by two versus four for MIPS-32, while RISC-V always multiplies by two.", "url": "RV32ISPEC.pdf#segment47", "timestamp": "2023-10-13 16:15:28", "segment": "segment47", "image_urls": [], "Book": "rvbook" }, { "section": "7.3 Concluding
Remarks ", "content": "mathematician built one rst mechanical calculators led turing award laureate niklaus wirth name programming language rv32c gives riscv one smallest code sizes today almost think hardwareassisted pseudoinstructions however assembler hiding assembly language programmer compiler writer rather chapter 3 expanding real instruction set popular operations make riscv code easier use read approaches aid programmer productivity consider rv32c one riscv best examples simple powerful mechanism improves costperformance", "url": "RV32ISPEC.pdf#segment48", "timestamp": "2023-10-13 16:15:28", "segment": "segment48", "image_urls": [], "Book": "rvbook" }, { "section": "Chapter 8", "content": "RV32V: Vector", "url": "RV32ISPEC.pdf#segment49", "timestamp": "2023-10-13 16:15:26", "segment": "segment49", "image_urls": [], "Book": "rvbook" }, { "section": "8.1 Introduction ", "content": "chapter focuses datalevel parallelism plenty data desired application compute concurrently arrays popular example fundamental scientic applications multimedia programs use arrays well former uses single doubleprecision oatpoint data latter often uses 8 16bit integer data best known architecture datalevel parallelism single instruction multiple data simd simd rst became popular partitioning 64bit registers many 8 16 32 bit pieces computing parallel opcode supplied data width operation data transfers simply loads stores single wide simd register rst step partitioning existing 64bit registers tempting straightfor ward make simd faster architects subsequently widen registers compute partitions concurrently simd isas belong incremental school de sign andtheopcodespeciesthedatawidth expandingthesimdregistersalsoexpandsthe simd instruction set subsequent step doubling width simd registers number simd instructions leads isas path escalating complexity borne processor designers compiler writers assembly language programmers older opinion elegant alternative exploit datalevel parallelism 
vector architecture. This chapter provides the rationale for using vectors instead of SIMD in RISC-V. Vector computers gather objects from main memory into long, sequential vector registers. Pipelined execution units compute very efficiently on these vector registers, and vector architectures then scatter the results back from the vector registers to main memory. The size of the vector registers is determined by the implementation rather than baked into the opcode, as in SIMD. As we shall see, separating the vector length and the maximum operations per clock cycle from the instruction encoding is the crux of vector architecture: the vector microarchitect can flexibly design data-parallel hardware without affecting the programmer, and the programmer can take advantage of longer vectors without rewriting code. In addition, vector architectures have many fewer instructions than SIMD architectures. Moreover, vector architectures have well-established compiler technology, unlike SIMD. Vector architectures are rarer than SIMD architectures, so fewer readers know vector ISAs; thus, this chapter has more of a tutorial flavor than the earlier ones. If you want to dig deeper into vector architectures, read Chapter 4 and Appendix G of [Hennessy and Patterson 2011]. RV32V also has novel features that simplify the ISA, which require explanation even if you are already familiar with vector architectures.", "url": "RV32ISPEC.pdf#segment50", "timestamp": "2023-10-13 16:15:29", "segment": "segment50", "image_urls": [], "Book": "rvbook" }, { "section": "8.2 Vector Computation Instructions ", "content": "Figure 8.1 is a graphical representation of the RV32V extension instruction set. Because the RV32V encoding was not finalized for this edition, we do not include the usual instruction-layout diagram. Virtually every integer and floating-point computation instruction of the earlier chapters has a vector version in Figure 8.1, inheriting operations from RV32I, RV32M, RV32F, RV32D, and RV32A. There are several types of vector instruction, depending on whether the source operands are both vectors (vv suffix) or a vector source operand and a scalar source operand (vs suffix). The scalar suffix s means an x or f register operand alongside a vector register v. For example, the DAXPY program of Figure 5.7 (page 55 in Chapter 5) calculates Y = aX + Y, where X and Y are vectors and a is a scalar, using vector-scalar operations; the rs1 field specifies the scalar register to access. Asymmetric operations like subtraction and division offer a third variation of vector
instructions, in which the first operand is a scalar and the second a vector (sv suffix), for operations like a - X; such a version is superfluous for symmetric operations like addition and multiplication, so those instructions have no sv version. The fused multiply-add instructions, with their three operands, have the largest combination of vector and scalar options: vvv, vvs, vsv, and vss. Readers may notice that Figure 8.1 ignores the data type and width of the vector operations; the next section explains why.", "url": "RV32ISPEC.pdf#segment51", "timestamp": "2023-10-13 16:15:29", "segment": "segment51", "image_urls": [], "Book": "rvbook" }, { "section": "8.3 Vector Registers and Dynamic Typing ", "content": "RV32V adds 32 vector registers whose names start with v, but the number of elements per vector register varies. It depends on the width of the operations and the amount of memory the processor designer dedicates to vector registers. For example, a processor that allocated 4096 bytes for vector registers has enough for 32 vector registers of 16 64-bit elements, 32 32-bit elements, 64 16-bit elements, or 128 8-bit elements. To keep the number of elements flexible in the vector ISA, the vector processor calculates the maximum vector length (mvl), so programs run properly on processors with differing amounts of memory for vector registers. The vector length register vl sets the number of elements for a particular vector operation, which helps programs whose arrays are not dimensioned in multiples of mvl. We demonstrate mvl and vl, along with the eight predicate registers vpi, in detail in the following sections. RV32V takes the novel approach of associating the data type and width with the vector registers rather than with the instruction opcodes: the program tags the vector registers with a data type and width before executing vector computation instructions. Dynamic register typing slashes the number of vector instructions, which is important because there would otherwise often be six integer and three floating-point versions of each vector instruction in Figure 8.1, as we shall see in Section 8.9 when we confront the numerous SIMD instructions. A dynamically typed vector architecture reduces the cognitive load on the assembly language programmer and the difficulty for the compiler's code generator. Another advantage of dynamic typing is that programs can disable unused vector registers, a feature that allocates all the vector memory to the enabled vector registers.
For example, suppose only two vector registers are enabled, with type 64-bit floats, on a processor with 1024 bytes of vector register memory. The processor would halve the memory between them, giving each vector register 512 bytes, or 512/8 = 64 elements, and would therefore set mvl to 64. Thus, mvl is a dynamic value set by the processor directly, not changed by software. The source and destination registers determine the type and size of the operation and result, so conversions are implicit with dynamic typing. For example, the processor can multiply a vector of double-precision floating-point numbers by a single-precision scalar without instructions to first convert the operands to the same precision; a bonus benefit is that this reduces both the total number of vector instructions and the number of instructions executed. The vsetdcfg instruction sets the vector register types. Figure 8.2 shows the vector register types available in RV32V, plus the types in RV64V (see Chapter 9). RV32V requires that vector floating-point operations have their scalar versions present as well; thus, you must have at least RV32F with RV32V to use the f32 type, and RV32FD to use the f64 type. RV32V introduces a 16-bit floating-point format, type f16; an implementation that supports RV32V and RV32F must support the f16 as well as the f32 format. Elaboration: RV32V can switch contexts quickly. One reason vector architectures are less popular than SIMD architectures is the concern that adding large vector registers would stretch the time to save and restore program state on an interrupt, called a context switch. Dynamic register typing helps here: because the programmer must tell the processor which vector registers are in use, the processor need save and restore only those registers on a context switch. The RV32V convention is to disable the vector registers when vector instructions are not being used, which means the processor pays no extra context-switch time for the vector registers unless an interrupt occurs while vector instructions are executing. Earlier vector architectures paid the worst-case context-switch cost of saving and restoring all vector registers whenever an interrupt occurred.", "url": "RV32ISPEC.pdf#segment52", "timestamp": "2023-10-13 16:15:29", "segment": "segment52", "image_urls": [], "Book": "rvbook" }, { "section": "8.4 Vector Loads and Stores ", "content": "The easiest case for vector loads and stores is dealing with single-dimension arrays stored sequentially in memory. A vector load fills a vector register with data from sequential addresses in memory, starting with the address supplied to the vld instruction. The data type associated with the vector register determines the size
of the data elements, and the vector length register vl sets the number of elements to load. Vector store (vst) is the inverse operation of vld. For example, if a0 contains 1024 and the type of v0 is x32, then vld v0, a0 generates the addresses 1024, 1028, 1032, 1036, and so on, until reaching the limit set by vl. For multidimensional arrays, not all accesses are sequential. If an array is stored in row-major order, sequential accesses down a column of a two-dimensional array want data elements separated by the size of a row. Vector architectures support such accesses with strided data transfers: vlds and vsts. One could get the effect of vld and vst by setting the stride of vlds and vsts to the size of the element, but vld and vst guarantee the accesses are sequential, which makes it easier to deliver high memory bandwidth; providing vld and vst also reduces the code size and instructions executed in the common case of unit stride. The strided instructions specify two source registers, one giving the starting address and the other the stride in bytes. For example, assume the starting address in a0 is 1024 and the size of a row in a1 is 64 bytes; then vlds v0, a0, a1 sends this sequence of addresses to memory: 1024, 1088 (1024 + 1 x 64), 1152 (1024 + 2 x 64), 1216 (1024 + 3 x 64), and so on. The vector length register vl tells it when to stop, and the returning data are written to sequential elements of the destination vector register. Thus far, we've assumed the program works with dense arrays. To support sparse arrays, vector architectures offer indexed data transfers: vldx and vstx. Only one source register of these instructions refers to a vector register; the other is a scalar register. The scalar register holds the starting address of the sparse array, and each element of the vector register contains the index in bytes of a nonzero element of the sparse array. Suppose the starting address in a0 is 1024 and vector register v1 holds the byte indices of the first 4 elements: 16, 48, 80, and 160. Then vldx v0, a0, v1 sends this sequence of addresses to memory: 1040 (1024 + 16), 1072 (1024 + 48), 1104 (1024 + 80), 1184 (1024 + 160). As with the other loads, the returning data are written to sequential elements of the destination vector register. Beyond sparse arrays, the motivation for indexed loads and stores, many algorithms access data indirectly via table indices.", "url": "RV32ISPEC.pdf#segment53", "timestamp": "2023-10-13 16:15:29", "segment": "segment53", "image_urls": [], "Book": "rvbook" }, { "section": "8.5 Parallelism During Vector Execution ", "content": "
element at a time. Element operations are independent by definition, so a processor could in theory compute all of them simultaneously. The widest data in RV32G is 64 bits, and today's vector processors typically execute two, four, or eight 64-bit elements per clock cycle. The hardware handles the fringe cases where the vector length is not a multiple of the number of elements executed per clock cycle. As with SIMD, the number of simultaneous smaller data operations grows with the ratio of the widths of the narrow data to the wide data: thus a vector processor that computes 4 64-bit operations per clock cycle would normally launch 8 32-bit, 16 16-bit, or 32 8-bit operations per clock cycle. In SIMD, the ISA architect determines the maximum number of data-parallel operations per clock cycle, since it equals the number of elements per register; in contrast, for RV32V the processor designer picks that number, without any change to the ISA or the compiler. Every doubling of the SIMD register width doubles the number of SIMD instructions and requires changes to SIMD compilers. This hidden flexibility means an identical RV32V program runs without change on the simplest or the most aggressive vector processor.", "url": "RV32ISPEC.pdf#segment54", "timestamp": "2023-10-13 16:15:29", "segment": "segment54", "image_urls": [], "Book": "rvbook" }, { "section": "8.6 Conditional Execution of Vector Operations ", "content": "Vector computations often include if statements. Rather than rely on conditional branches, vector architectures include a mask that suppresses operations on some elements of a vector operation. The predicate instructions in Figure 8.1 perform conditional tests on two vectors, or on a vector and a scalar, and write to each element of a vector mask a 1 if the condition holds and a 0 otherwise. The vector mask must have as many bits as there are elements in the vector registers. When a subsequent vector instruction uses the mask, a 1 bit means the element is changed by the vector operation, and a 0 means the element is left unchanged. RV32V provides 8 vector predicate registers (vpi) that act as vector masks. The instructions vpand, vpandn, vpor, vpxor, and vpnot perform logical operations that combine masks, allowing efficient processing of nested conditional statements. RV32V instructions specify either vp0 or vp1 as the mask that controls a vector operation, so to perform a normal operation on all elements, one of those two predicate registers must be set to all ones. To swap one of the six other predicate registers quickly with vp0 or vp1,
RV32V has the vpswap instruction. Predicate registers can also be enabled and disabled dynamically; disabling clears the predicate registers quickly. As an example, suppose the even-numbered elements of vector register v3 hold negative integers and the odd-numbered elements hold positive integers. Then the code vplt.vs vp0, v3, x0 (set mask bits where elements of v3 are less than 0) followed by add.vv vp0, v0, v1, v2 (change elements of v0 to v1 + v2 where the mask is true) would set the even bits of vp0 to 1 and the odd bits to 0, and would then replace the even elements of v0 with the sum of the corresponding elements of v1 and v2; the odd elements of v0 would be unchanged.", "url": "RV32ISPEC.pdf#segment55", "timestamp": "2023-10-13 16:15:29", "segment": "segment55", "image_urls": [], "Book": "rvbook" }, { "section": "8.7 Miscellaneous Vector Instructions ", "content": "In addition to vsetdcfg, the instruction that configures the data types of the vector registers mentioned above, setvl sets the vector length register vl and a destination register to the smaller of its source operand and the maximum vector length mvl. The reason for picking the minimum is to let loops decide whether the vector code can run at the maximum vector length mvl or must run at a smaller value to cover the remaining elements and thus handle the tail; setvl is executed in every loop iteration. RV32V also has three instructions that manipulate elements within a vector register. Vector select (vselect) produces a new result vector by gathering elements from one source data vector at the element locations specified by a second source index vector: with vindices holding values from 0 to mvl−1 to select elements of vsrc, the form is vselect vdest, vsrc, vindices. Thus, if the first four elements of v2 contain 8, 0, 4, 2, then vselect v0, v1, v2 will replace the zeroth element of v0 with the eighth element of v1, the first element of v0 with the zeroth element of v1, the second element of v0 with the fourth element of v1, and the third element of v0 with the second element of v1. Vector merge (vmerge) resembles vector select, but uses a vector predicate register to choose which of two sources to use. It produces a new result vector by gathering elements from one of two source registers depending on the predicate register: the new element comes from vsrc1 if the predicate vector register element is 0, and from vsrc2 if it is 1. In vmerge vp0, vdest, vsrc1, vsrc2, each vp0 bit determines whether the new element of vdest comes from vsrc1 (bit is 0) or vsrc2 (bit is 1). Thus, if the first four elements of vp0 contain 1, 0, 0, 1, the first four elements of v1 contain 1, 2, 3, 4, and the first four
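The predicated example above — vplt.vs setting vp0 where elements of v3 are below zero, then a masked add.vv — can be mimicked element by element in Python. This is a behavioral sketch, and `vplt_vs` and `add_vv_masked` are hypothetical names, not RV32V mnemonics:

```python
def vplt_vs(vsrc, scalar):
    """Models vplt.vs: mask bit is 1 where vsrc[i] < scalar, else 0."""
    return [1 if e < scalar else 0 for e in vsrc]

def add_vv_masked(mask, vdest, vsrc1, vsrc2):
    """Masked add.vv: where the mask bit is 1, write vsrc1+vsrc2;
    where it is 0, the destination element is left unchanged."""
    return [a + b if m else d
            for m, d, a, b in zip(mask, vdest, vsrc1, vsrc2)]
```

With negative even-indexed elements in v3, the mask comes out 1, 0, 1, 0 and only those positions of the destination are overwritten, exactly as the text describes.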
elements v2 contain 10 20 30 40 vmerge vp0 v0 v1 v2 make rst four elements v0 10 2 3 40 vector extract instruction takes elements starting middle one vector places beginning second vector register start scalar reg holding element starting number vsrc vextract vdest vsrc start example vector length vl 64 a0 contains 32 vextract v0 v1 a0 copy last 32 elements v1 rst 32 elements v0 vextract instruction assists reductions following recursivehalving approach binary associative operator example sum elements vector register use vector extract copy last half vector rst half another vector register halve vector length next add two vector registers together repeat recursivehalving sum vector length equals 1 result zeroth element sum original elements vector register", "url": "RV32ISPEC.pdf#segment56", "timestamp": "2023-10-13 16:15:29", "segment": "segment56", "image_urls": [], "Book": "rvbook" }, { "section": "8.8 Vector Example: DAXPY in RV32V ", "content": "figure 83 shows rv32v assembly language daxpy figure 57 page 55 chap ter 5 explain step time rv32v daxpy starts enabling vector registers needed function re quires two vector registers hold portions x doubleprecision oatingpoint numbers 8 bytes wide rst instruction creates constant sec ond writes control status register congures vector registers vcfgd get two registers type f64 see figure 82 denition hardware allocates congured registers numerical order yielding v0 v1 let assume rv32v processor 1024 bytes memory dedicated vector reg isters hardware allocates memory evenly two vector registers hold doubleprecision oatingpoint numbers 8 bytes vector register 5128 64 elements processor sets maximum vector length mvl function 64 rst instruction loop sets vector length following vector instructions instruction setvl writes smaller mvl n vl t0 insight number iterations loop larger n fastest code crunch data 64 values time set vl mvl n smaller mvl read write beyond end x compute last n elements nal iteration loop setvl also 
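The vmerge and vextract behavior just described, including the recursive-halving reduction that vextract enables, can be sketched in Python (an illustrative model with invented function names; real vextract takes the starting element number from a scalar register):

```python
def vmerge(vp, vsrc1, vsrc2):
    """Predicate bit 0 selects vsrc1, bit 1 selects vsrc2."""
    return [b if m else a for m, a, b in zip(vp, vsrc1, vsrc2)]

def vextract(vsrc, start, vl):
    """Copy elements start..vl-1 of vsrc to the front of a new vector."""
    return vsrc[start:vl]

def recursive_halving_sum(v):
    """Sum a vector by repeatedly adding its upper half into its lower half.
    Assumes the length is a power of two, as the scheme requires."""
    vl = len(v)
    while vl > 1:
        half = vl // 2
        tail = vextract(v, half, vl)                # copy the last half down
        v = [v[i] + tail[i] for i in range(half)]   # halve vl, then vector add
        vl = half
    return v[0]                                     # zeroth element holds the sum
```

The vmerge assertion below matches the worked example (vp0 = 1,0,0,1 merging 1,2,3,4 with 10,20,30,40 yields 10,2,3,40), and the reduction sums a 64-element vector in six halving steps.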
writes t0 help later loop bookkeeping location10 instruction vld address c vector load address x scalar register a1 transfers vl elements x memory v0 following shift instruction slli multiplies vector length width data bytes 8 later use incrementing pointers x y instruction address 14 vld loads vl elements memory v1 next instruction add increments pointer x instruction address 1c jackpot vfmadd multiplies vl elements x v0 scalar f0 adds product vl elements v1 stores vl sums back v1 left store results memory loop overhead instruction address 20 sub decrements n a0 vl record number operations completed iteration loop following instruction vst stores vl results memory instruction address 28 add increments pointer following instruction repeats loop n a0 zero n zero nal instruction ret returns calling site power vector architecture iteration 10instruction loop launches 3 64 192 memory accesses 2 64 128 oatingpoint multiplies additions assuming n least 64 averages 19 memory accesses 13 operations per instruction shall see next section ratios simd order magnitude worse", "url": "RV32ISPEC.pdf#segment57", "timestamp": "2023-10-13 16:15:30", "segment": "segment57", "image_urls": [], "Book": "rvbook" }, { "section": "8.9 Comparing RV32V, MIPS-32 MSA SIMD, and x86-32 AVX SIMD ", "content": "see contrast simd vector executes daxpy tilt head see simd restricted vector architecture short vector registerseight8bit elements but vector length register strided indexed data transfers unlike rv32v vector length register msa requires extra bookkeep ing instructions check problem values n n odd extra code compute asingle oatingpoint multiply add since msa must operate onpairs ofoperands that code isfound locations3cto4cinfigure85intheunlikelybutpossiblecasewhen n zero branch location 10 skip main computation loop branch around loop instruction location 18 splatid puts copies halves simd register w2 add scalar data simd need replicate wide simd register inside loop ldd instruction 
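Stripped of the vector machinery, the kernel the loop above implements is DAXPY, y[i] = a·x[i] + y[i], processed vl elements at a time. A scalar Python sketch of the strip-mined structure (assuming MVL = 64 as in the example; the inner loop stands in for one vld/vfmadd/vst group):

```python
MVL = 64  # maximum vector length assumed in the chapter's example

def daxpy(a, x, y, mvl=MVL):
    """y[i] = a*x[i] + y[i], strip-mined into chunks of at most mvl elements."""
    n, i = len(x), 0
    while n > 0:
        vl = min(mvl, n)             # setvl: full strip, or the tail
        for j in range(i, i + vl):   # one vector iteration's worth of work
            y[j] = a * x[j] + y[j]   # the vfmadd step
        i += vl
        n -= vl                      # sub: record vl operations completed
    return y
```

One real RV32V iteration does this much work in roughly ten instructions, which is where the high memory-accesses-per-instruction and operations-per-instruction ratios quoted in the text come from.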
location 1c loads two elements simd register w0 increments pointer y load two elements x simd register w1 following instruction location 28 increments pointer x payoff multiplyadd instruction location 2c next delayed branch end loop tests see pointer incremented beyond last even element y loop repeats simd store delay slot address 34 writes result two elements y mips simd figure 85 page 83 shows mips simd architecture msa version daxpy msa simd instruction operate two oatingpoint numbers since msa registers 128 bits wide main loop terminates code checks see n odd performs last multiplyadd using scalar instructions chapter 5 nal instruction returns calling site 7instruction loop heart mips msa daxpy code 6 double precision memory accesses 4 oatingpoint multiplies additions average 1 memory access 05 operations per instruction x86 simd intel gone many generations simd extensions see code figure 86 page 84 sse expansion 128bit simd led xmm registers instructions use expansion 256bit simd part avx created ymm registers instructions rst group instructions addresses 0 25 load variables memory make four copies 256bit ymm registers tests ensure n least 4 entering main loop uses two sse one avx instructions caption figure 86 explains detail main loop heart daxpy computation avx instruction vmovapd address 27 loads 4 elements x ymm0 avx instruction vfmadd213pd address 2c multiplies 4 copies ymm2 times 4 elements x ymm0 adds 4 elements memory address ecxedx 8 puts 4 sums ymm0 following avx instruc tion address 32 vmovapd stores 4 results y next three instructions increment counters repeat loop needed case mips msa fringe code addresses 3e 57 deals cases n multiple 4 relies three sse instructions 6 instructions main loop x8632 avx2 daxpy code 12 double precision memory accesses 8 oatingpoint multiplies additions average 2 memory accesses 1 operation per instruction elaboration illiac iv rst show difculty compiling simd 64 parallel 64bit oatingpoint units fpus illiac iv planned 
1 million logic gates moore published law architect originally predicted 1000 million oatingpoint operations per second mflops actual performance 15 mflops best costs escalated 8m estimated 1966 31m 1972 despite construction 64 planned 256 fpus project started 1965 took 1976 run rst real application year cray1 unveiled perhaps infamous supercomputer made top 10 list engineering disasters falk 1976", "url": "RV32ISPEC.pdf#segment58", "timestamp": "2023-10-13 16:15:30", "segment": "segment58", "image_urls": [], "Book": "rvbook" }, { "section": "8.10 Concluding Remarks ", "content": "code vectorizable best architecture vector jim smith keynote speech international symposium computer architecture 1994 figure 84 summarizes number instructions number bytes daxpy pro grams rv32ifdv mips32 msa x8632 avx2 simd computation code dwarfed bookkeeping code twothirds threefourths code mips32 msa x8632 avx2 simd overhead either prepare data main simd loop handle fringe elements n multiple number oatingpoint numbers simd register rv32v code figure 83 need bookkeeping code halves number instructions unlike simd vector length register makes vector instruc tions work value n might think rv32v would problem n 0 rv32v vector instructions leave everything unchanged vl 0 however signicant difference simd vector processing static code size simd instructions execute 10 20 times instructions rv32v simd loop 2 4 elements instead 64 vector case extra instruction fetches instruction decodes means higher energy perform task comparing results figure 84 scalar versions daxpy figure 58 page 29 chapter 5 see simd roughly doubles size code instructions bytes main loop size reduction dynamic number instructions executed factor 2 4 depending width simd registers however rv32v vector code size increases factor 12 main loop 14x dynamic instruction count factor 43 smaller dynamic instruction count large difference view second signicant disparity simd vector lacking vector length register explodes number 
instructions well bookkeeping code isas like mips32 x8632 follow incrementalist doctrine must duplicate old simd instructions dened narrower simd registers every time double simd width surely hundreds mips 32 x8632 instructions created many generations simd isas hundreds future cognitive load assembly language programmer brute force approach isa evolution must overwhelming one remember vfmadd213pd means use comparison rv32v code unaffected size memory vector registers rv32v unchanged vector memory size expands even re compile since processor supplies value maximum vector length mvl code figure 83 untouched whether processor raises vector memory 1024 bytes say 4096 bytes drops 256 bytes unlike simd isa dictates required hardwareand changing isa means changing compilerthe rv32v isa allows processor designers choose resources data parallelism application without affecting programmer com piler one argue simd violates isa design principle chapter 1 isolating architecture implementation think high contrast costenergyperformance complexity ease program ming modular vector approach rv32v incrementalist simd architec tures arm32 mips32 x8632 might persuasive argument riscv", "url": "RV32ISPEC.pdf#segment59", "timestamp": "2023-10-13 16:15:30", "segment": "segment59", "image_urls": [], "Book": "rvbook" }, { "section": "8.11 To Learn More ", "content": "h falk went wrong v reaching gigaop fate famed illiac iv shaped research brilliance realworld disasters ieee spectrum 13 10 6570 1976 j l hennessy d a patterson computer architecture quantitative approach else vier 2011 a waterman k asanovic editors riscv instruction set manual volume userlevelisa version22may2017urlhttps riscvorgspecifications 9", "url": "RV32ISPEC.pdf#segment60", "timestamp": "2023-10-13 16:15:30", "segment": "segment60", "image_urls": [], "Book": "rvbook" }, { "section": "Chapter 9", "content": "RV64: 64-bit Address Instructions", "url": "RV32ISPEC.pdf#segment61", "timestamp": "2023-10-13 16:15:26", 
"segment": "segment61", "image_urls": [], "Book": "rvbook" }, { "section": "9.1 Introduction ", "content": "figures 91 94 shows graphical representations rv64g versions rv32g instructions gures illustrate small increase number instructions switch 64bit isa riscv isas typically add word doubleword long versions 32bit instructions expand registers including pc 64 bits thus sub rv64i subtracts two 64bit numbers rather two 32bit numbers rv32i rv64 close actually different isa rv32 adds instructions base instructions slightly different things example insertion sort rv64i figure 98 quite near code rv32i figure 28 page 27 chapter 2 number instructions number bytes changes load store word instructions become load store doublewords address increment goes 4 words 4 bytes 8 doublewords 8 bytes figure 95 lists opcodes rv64gc instructions figures 91 94 despite rv64i 64bit addresses default data size 64 bits 32bit words valid data types programs hence rv64i needs support words rv32i needs support bytes halfwords specically since registers 64 bits wide rv64i adds word versions addition subtraction addw addiw subw truncate results 32 bits write signextended result destination register rv64i also includes word versions shift instructions get 32bit shift result instead 64bit shift result sllw slliw srlw srliw sraw sraiw 64bit data transfers load store doubleword ld sd finally unsigned versions load byte load halfword rv32i rv64i must unsigned version load word lwu similar reasons rv64m needs add word versions multiply divide remain der mulw divw divuw remw remuw allow programmer synchronize words doublewords rv64a adds doubleword versions 11 instructions rv64f rv64d adds integer doublewords convert instructions calling longs prevent confusion double precision oatingpoint data fcvtls fcvtld fcvtlus fcvtlud fcvtsl fcvtslu fcvtdl fcvtdlu integer x registers 64 bits wide hold dou ble precision oatingpoint data rv64d adds two oatingpoint moves fmvxw fmvwx one exception superset 
relationship rv64 rv32 compressed instructions rv64c replaced rv32c instructions since instructions shrank code 64bit addresses rv64c drops compressed jump link cjal integer oatingpoint load store word instructions clw csw clwsp cswsp cflw cfsw cflwsp cfswsp place rv64c adds popular add subtract word instructions caddw caddiw csubw load store double word instructions cld csd cldsp csdsp elaboration rv64 abis lp64 lp64f lp64d lp64 means c language data types long pointer 64 bits int still 32 bits sufxes f indicate oatingpoint arguments passed rv32 see chapter 3 elaboration instruction diagram rv64v exactly matches rv32v due dynamic register typing change x64 x64u dynamic register types figure 82 page 75 available rv64v rv32v", "url": "RV32ISPEC.pdf#segment62", "timestamp": "2023-10-13 16:15:30", "segment": "segment62", "image_urls": [], "Book": "rvbook" }, { "section": "9.2 Comparison to Other 64-bit ISAs using Insertion Sort ", "content": "gordon bell said opening chapter one fatal architecture aw running address bits programs pushed limits 32bit address space architects began make 64bit address versions isas mashey 2009 earliest mips 1991 extended registers program counter 32 64 bits added new 64bit versions mips32 instructions mips64 assembly language instructions begin letter daddu dsll see figure 910 programmers mix mips32 mips64 instructions program mips64 dropped load delay slot mips32 pipeline stalls readafterwrite dependence decade later time successor x8632 architects increased addressing size took opportunity make improvements x8664 increased number integer registers 8 16 r8r15 increased number simd registers 8 16 xmm8xmm15 added pcrelative data addressing better support positionindependent code improvements smoothed rough edges x8632 see benets comparing x8632 version insertion sort figure 211 page 30 chapter 2 x8664 version figure 911 newer isa keeps variables registers rather several memory reduces instruction count 20 15 instructions code size 
actually larger one byte newer isa despite fewer instructions 46 versus 45 reason squeeze new opcodes enable registers x8664 added prex byte identify new instructions average instruction length increases x8664 x8632 arm faced address problem another decade later rather evolve old isa 64bit addresses x8664 used opportunity invent brand new isa given fresh start changed many awkward arm32 traits give modern isa increase number integer registers 15 31 remove pc set registers provide register hardwired zero instructions r31 unlike arm32 arm64 data addressing modes work data sizes types arm64 dropped load store multiple instructions arm32 arm64 omitted conditional execution option arm32 instructions still shares weaknesses arm32 condition codes branch source desti nation register elds move instruction format conditional move instructions complex addressing modes inconsistent performance counters 32bit length instructions arm64 switch thumb2 isa thumb2 works 32bit addresses x8632 processors locked itanium amd invented 64bit failed intel forced adopt amd64 isa 64bit address suc unlike riscv arm decided take maximalist approach isa design cer tainly better isa arm32 also bigger example 1000 instructions arm64 manual 3185 pages long arm 2015 moreover still growing three expansions arm64 since announcement years ago arm64 code insertion sort figure 99 looks closer rv64i code x8664 code arm32 code example 31 registers need save restore registers stack since pc longer one registers arm64 uses separate return instruction figure 96 table summarizes number instructions number bytes insertion sort isas figures 98 911 show compiled code rv64i arm64 mips64 x8664 parenthetical phrases comments four programs identify differences rv32i versions chapter 2 rv64i versions mips64 needs instructions primarily nop instructions unlled delayed branch slots rv64i needs fewer compareandbranch in structions delayed branch arm64 x8664 need two compare instructions unnecessary rv64i scaling 
addressing modes avoid address arithmetic in structions needed rv64i giving fewest instructions however rv64irv64c much smaller code size next section explains elaboration arm64 mips64 x8664 ofcial names ofcial names armv8 call arm64 mipsiv mips64 amd64 x8664 see sidebar previous page history x8664", "url": "RV32ISPEC.pdf#segment63", "timestamp": "2023-10-13 16:15:31", "segment": "segment63", "image_urls": [], "Book": "rvbook" }, { "section": "9.3 Program size ", "content": "figure 97 compares average relative code sizes rv64 arm64 x8664 compare gure figure 15 page 9 chapter 1 first rv32gc code almost identical size rv64gc 1 smaller closeness also true rv32i rv64i arm64 code 8 smaller arm32 code 64bit address version thumb 2 instructions remain 32bits long hence arm64 code 25 larger arm thumb2 code code x8664 7 larger x8632 code due adding prex opcodes x8664 instructions accommodate new operations expanded set registers rv64gc wins arm64 code 23 bigger rv64gc x8664 code 34 bigger rv64gc difference large enough either improve performance due lower instruction cache miss rates reduce cost allowing smaller instruction cache still provides satisfactory miss rates", "url": "RV32ISPEC.pdf#segment64", "timestamp": "2023-10-13 16:15:31", "segment": "segment64", "image_urls": [], "Book": "rvbook" }, { "section": "9.4 Concluding Remarks ", "content": "one problems pioneer always make mistakes never never want pioneer always best come second look mistakes pioneers made seymour cray architect rst supercomputer 1976 runningoutofaddressbitsistheachillesheelofcomputerarchitecturemanyanarchitecturehasdiedfromawound therearm32andthumb2remain32bitarchitectures help big programs isas like mips64 x8664 survived transi tion x8664 paragon isa design future mips64 unclear time writing arm64 new large isa time tell successful riscv beneted designing 32bit 64bit architectures together whereas older isas architect sequentially unsurprisingly transition 32bit 64bit easiest riscv programmers 
compiler writers rv64i isa virtually rv32i instructions indeed list rv32gcv rv64gcv two pages reference card important simultaneous design meant 64bit architecture squeezed cramped 32bit opcode space rv64i plenty room optional instruction extensions particularly rv64c makes leader code size see 64bit architecture evidence riscv sound design admittedly easier achieve start 20 years later borrow pioneers good ideas well learn mistakes elaboration rv128 rv128 began inside joke riscv architects simply show 128bit address isa possible however warehouse scale computers may soon 264 bytes semiconductor storage dram flash memory programmers might want access memory address also proposals use 128bit address improve security woodruff et al 2014 riscv manual specify full 128bit isa called rv128g waterman asanovic 2017 additional instructions basically needed go rv32 rv64 figures 91 94 illustrate registers also grow 128 bits new rv128 instructions specify either 128bit versions instructions using q name quadword 64bit versions others using name doubleword", "url": "RV32ISPEC.pdf#segment65", "timestamp": "2023-10-13 16:15:31", "segment": "segment65", "image_urls": [], "Book": "rvbook" }, { "section": "9.5 To Learn More ", "content": "i arm armv8a architecture reference manual 2015 m kerner n padgett history modern 64bit computing technical report cs de partment university washington feb 2007 url http coursescswashington educoursescsep59006auprojectshistory64bitpdf j mashey long road 64 bits communications acm 52 1 4553 2009 a waterman design riscv instruction set architecture phd thesis eecs depart ment university california berkeley jan 2016 url http www2eecsberkeley edupubstechrpts2016eecs20161html a waterman k asanovic editors riscv instruction set manual volume userlevel isa version 22 may 2017 url https riscvorgspecifications j woodruff r n watson d chisnall s w moore j anderson b davis b laurie p g neumann r norton m roe cheri capability model revisiting risc age risk 
computer architecture isca 2014 acmieee 41st international symposiumon pages457468ieee2014", "url": "RV32ISPEC.pdf#segment66", "timestamp": "2023-10-13 16:15:31", "segment": "segment66", "image_urls": [], "Book": "rvbook" }, { "section": "Chapter 10", "content": "RV32/64 Privileged Architecture", "url": "RV32ISPEC.pdf#segment67", "timestamp": "2023-10-13 16:15:26", "segment": "segment67", "image_urls": [], "Book": "rvbook" }, { "section": "10.1 Introduction ", "content": "book far focused riscv support generalpurpose computation instructions introduced available user mode application code usually runs chapter introduces two new privilege modes machine mode runs trusted code supervisor mode provides support operating systems like linux freebsd windows new modes privileged user mode hence title chapter moreprivileged modes generally access features lessprivileged modes add additional functionality available lessprivileged modes ability handle interrupts perform io processors typically spend execution time leastprivileged mode interrupts exceptions transfer control moreprivileged modes embeddedsystem runtimes operating systems use features new modes respond external events like arrival network packets support multitasking protection tasks abstract virtualize hardware features given breadth topics thorough programmer guide would entire additional book instead chapter aims hit high notes riscv features programmers disinterested embedded system runtimes operating systems either skip skim chapter figure 101 graphical representation riscv privileged instructions fig ure 102 lists instructions opcodes see privileged architecture adds instructions instead several new control status registers csrs expose additional functionality chapter describes rv32 rv64 privileged architectures together con cepts differ size integer register keep descriptions concise introduce term xlen refer width integer register bits xlen 32 rv32 64 rv64", "url": "RV32ISPEC.pdf#segment68", "timestamp": 
"2023-10-13 16:15:31", "segment": "segment68", "image_urls": [], "Book": "rvbook" }, { "section": "10.2 Machine Mode for Simple Embedded Systems ", "content": "Machine mode (abbreviated M-mode) is the most privileged mode in which a RISC-V hart (hardware thread) can execute. Harts running in M-mode have full access to memory, I/O, and the low-level system features necessary to boot and configure the system; as such, it is the only privilege mode that all standard RISC-V processors must implement. Indeed, simple RISC-V microcontrollers support only M-mode. Such systems are the focus of this section. The most important feature of machine mode is the ability to intercept and handle exceptions, which are unusual runtime events. RISC-V classifies exceptions into two categories. Synchronous exceptions arise as a result of instruction execution, such as accessing an invalid memory address or executing an instruction with an invalid opcode. Interrupts are external events asynchronous to the instruction stream, like a mouse button click. Exceptions in RISC-V are precise: all instructions prior to the exception completely execute, and none of the subsequent instructions appear to have begun execution. Figure 10.3 lists the standard exception causes. Five kinds of synchronous exceptions can happen during M-mode execution: access-fault exceptions arise when a physical memory address does not support the access type — for example, attempting to store to a ROM; breakpoint exceptions arise from executing an ebreak instruction, or when an address or datum matches a debug trigger; environment-call exceptions arise from executing an ecall instruction; illegal-instruction exceptions result from decoding an invalid opcode; and misaligned-address exceptions occur when an effective address is not divisible by the access size — for example, an amoadd.w to address 0x12. Recalling the claim in Chapter 2 that misaligned loads and stores are permitted, you might be wondering why misaligned load and store address exceptions are listed in Figure 10.3. There are two reasons. First, the atomic memory operations of Chapter 6 do require naturally aligned addresses. Second, implementors can choose to omit hardware support for misaligned regular loads and stores, a feature that is difficult to implement and infrequently used. Processors without the hardware can rely instead upon an exception handler to trap and emulate misaligned loads and stores in software, using a sequence of smaller, aligned loads and stores;
application code none wiser misaligned memory accesses work expected albeit slowly hardware remains simple alternatively performant processors implement misaligned loads stores hardware imple mentation exibility owes riscv decision permit misaligned loads stores using regular load store opcodes following chapter 1 guideline isolate architecture implementation three standard sources interrupts software timer external software in terrupts triggered storing memorymapped register generally used one hart interrupt another hart mechanism architectures refer interprocessor interrupt timer interrupts raised hart time comparator memorymapped reg ister named mtimecmp exceeds realtime counter mtime external interrupts raised platformlevel interrupt controller external devices attached different hardware platforms different memory maps demand divergent features interrupt controllers mechanisms raising clearing interrupts differ platform platform constant across riscv systems exceptions handled interrupts masked topic next section", "url": "RV32ISPEC.pdf#segment69", "timestamp": "2023-10-13 16:15:31", "segment": "segment69", "image_urls": [], "Book": "rvbook" }, { "section": "10.3 Machine-Mode Exception Handling ", "content": "eight control status registers csrs integral machinemode exception handling mtvec machine trap vector holds address processor jumps excep tion occurs mepc machine exception pc points instruction exception occurred mcause machine exception cause indicates exception occurred mie machine interrupt enable lists interrupts processor take must ignore mip machine interrupt pending lists interrupts currently pending mtval machine trap value holds additional trap information faulting address address exceptions instruction illegal instruction exceptions zero exceptions mscratch machine scratch holds one word data temporary storage mstatus machine status holds global interrupt enable along plethora state figure 104 shows executing mmode interrupts taken global 
interrupt-enable bit mstatus.MIE is set. Furthermore, the corresponding interrupt-enable bit in the mie CSR must be set; the bit positions in mie correspond to the interrupt codes in Figure 10.3 — for example, mie[7] corresponds to the M-mode timer interrupt. The mip CSR has the same layout and indicates which interrupts are currently pending. Putting the three CSRs together, a machine timer interrupt is taken when mstatus.MIE = 1, mie[7] = 1, and mip[7] = 1. When a hart takes an exception, the hardware atomically undergoes several state transitions: the PC of the exceptional instruction is preserved in mepc, and the PC is set to mtvec (for synchronous exceptions, mepc points to the instruction that caused the exception; for interrupts, it points to where execution should resume after the interrupt is handled); mcause is set to the exception cause, encoded as in Figure 10.3, and mtval is set to the faulting address or some other exception-specific word of information; interrupts are disabled by setting mstatus.MIE = 0, with the previous value of MIE preserved in mstatus.MPIE; and the pre-exception privilege mode is preserved in the mstatus.MPP field, with the privilege mode changed to M. Figure 10.5 shows the encoding of the MPP field; if the processor implements only M-mode, this last step is effectively skipped. To avoid overwriting the contents of the integer registers, the prologue of an interrupt handler usually begins by swapping an integer register (say, a0) with the mscratch CSR. Usually, software will have arranged for mscratch to contain a pointer to additional in-memory scratch space, which the handler uses to save as many integer registers as its body will use. After the body executes, the epilogue of the interrupt handler restores the registers it saved to memory, then swaps a0 with mscratch again, restoring both registers to their pre-exception values. Finally, the handler returns with the mret instruction, which is unique to M-mode. mret sets the PC to mepc, restores the previous interrupt-enable setting by copying the mstatus.MPIE field to MIE, and sets the privilege mode to the value in the mstatus.MPP field — essentially reversing the state transitions described in the preceding paragraph. Figure 10.6 shows RISC-V assembly code for a basic timer interrupt handler following this pattern. It simply increments the time comparator and returns to the previous task, whereas a realistic timer interrupt handler might invoke a scheduler to switch tasks. It is not preemptible, since it keeps interrupts disabled throughout the handler. Those caveats aside, it is a complete example of a RISC-V interrupt handler on a single page. It is sometimes desirable to take a higher-priority interrupt while processing a lower-priority exception.
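The interrupt-take condition stated above — mstatus.MIE = 1, mie[7] = 1, and mip[7] = 1 for a machine timer interrupt — is easy to express directly. This Python predicate is only a sketch of that logic; the bit position 7 is the machine timer interrupt code given in the text:

```python
MTIP = 7  # machine timer interrupt: bit 7 of both mie and mip

def timer_interrupt_taken(mstatus_mie, mie, mip):
    """True iff the global enable, the per-interrupt enable,
    and the pending bit are all set for the machine timer interrupt."""
    return bool(mstatus_mie
                and (mie >> MTIP) & 1
                and (mip >> MTIP) & 1)
```

Any one of the three bits being clear suppresses the interrupt, which is exactly why a handler that never re-enables mstatus.MIE cannot be preempted.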
alas one copy mepc mcause mtval mstatus csrs taking second interrupt would destroy old values registers causing data loss without additional help software preemptible interrupt handler save registers inmemory stack enabling interrupts prior exiting disable interrupts restore registers stack addition mret instruction introduced mmode provides one instruction wfi wait interrupt wfi informs processor useful work enter lowerpower mode enabled interrupt becomes pend ing ie mie mip 0 riscv processors implement instruction variety ways including stopping clock interrupt becomes pending simply execute nop hence wfi typically used inside loop elaboration wfi works whether interrupts globally enabled wfi executed interrupts globally enabled mstatusmie1 en abled interrupt becomes pending processor jumps exception handler hand wfi executed interrupts globally disabled enabled inter rupt becomes pending processor continues executing code following wfi code typically examines mip csr decide next strategy reduce interrupt latency compared jumping exception handler need save restore integer registers", "url": "RV32ISPEC.pdf#segment70", "timestamp": "2023-10-13 16:15:32", "segment": "segment70", "image_urls": [], "Book": "rvbook" }, { "section": "10.4 User Mode and Process Isolation in Embedded Systems ", "content": "although machine mode sufcient simple embedded systems suitable entire codebase trusted since mmode unfettered access hardware platform often practical trust application code known advance vast prove correct riscv provides mechanisms protect system untrusted code protect untrusted processes untrusted code must forbidden executing privileged instructions like mret accessing privileged csrs like mstatus would allow program take control system restriction accomplished easily enough additional privilege mode user mode umode denies access features generating illegal instruction excep tion attempting use mmode instruction csr otherwise umode mmode behave similarly mmode 
M-mode software can enter U-mode by setting mstatus.MPP to U (which Figure 10.5 shows is encoded as 0), then executing the mret instruction. If an exception occurs while in U-mode, control is returned to M-mode. Untrusted code must also be restricted from accessing memory that does not belong to it. Processors that implement U-mode have a feature called physical memory protection (PMP), which allows M-mode to specify the memory addresses that U-mode may access. PMP consists of several address registers (usually eight to sixteen) and corresponding configuration registers that grant or deny read, write, and execute permissions. When a processor in U-mode attempts to fetch an instruction or execute a load or store, its address is compared against all of the PMP address registers. If the address is greater than or equal to PMP address i-1 but less than PMP address i, then PMP i's configuration register decides whether the access may proceed; otherwise, an access exception is raised. Figure 10.7 shows the layout of the PMP address and configuration CSRs. The address registers are named pmpaddr0 through pmpaddrN-1, where N is the number of PMPs implemented. The address registers are shifted right by two bits because PMPs have four-byte granularity. The configuration registers are densely packed into CSRs to accelerate context switching. Figure 10.8 shows that a PMP configuration consists of R, W, and X bits, which, if set, permit loads, stores, and fetches, respectively. The mode field, if 0, disables this PMP; if 1, it enables it. The PMP configuration also supports other modes and a locked feature, described in Waterman and Asanovic (2017).", "url": "RV32ISPEC.pdf#segment71", "timestamp": "2023-10-13 16:15:32", "segment": "segment71", "image_urls": [], "Book": "rvbook" }, { "section": "10.5 Supervisor Mode for Modern Operating Systems ", "content": "The PMP scheme described in the previous section is attractive for embedded systems because it provides memory protection at relatively low cost. But it has several drawbacks that limit its use in general-purpose computing. Since PMP supports only a fixed number of memory regions, it doesn't scale to complex applications; and since these regions must be contiguous in physical memory, the system may suffer from memory fragmentation. Finally, PMP cannot efficiently support paging to secondary storage. More sophisticated RISC-V processors handle these problems the same way as nearly all general-purpose architectures: with page-based virtual memory. This feature forms the core of supervisor mode (S-mode), an optional privilege mode.
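The range-matching rule above (address at or above PMP address i-1 and below PMP address i, with entry i's configuration deciding the outcome) can be sketched as a simple checker. This is a simplified model under stated assumptions: regions are given as a sorted list of byte-address tops, and each configuration is a plain set of permitted access types rather than the packed CSR encoding.

```python
# A simplified sketch of the PMP check described above. pmpaddr is a sorted
# list of region top addresses; pmpcfg[i] is the set of access types ('r',
# 'w', 'x') permitted in region i. This ignores granularity, the mode
# encoding, and locking -- it only illustrates the range match.

def pmp_check(addr, access, pmpaddr, pmpcfg):
    lo = 0
    for i, top in enumerate(pmpaddr):
        if lo <= addr < top:            # matches entry i
            if access in pmpcfg[i]:
                return True             # access may proceed
            raise PermissionError("PMP access fault at 0x%x" % addr)
        lo = top
    raise PermissionError("no matching PMP entry for 0x%x" % addr)
```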
S-mode is designed to support modern Unix-like operating systems, such as Linux, FreeBSD, and Windows. S-mode is more privileged than U-mode but less privileged than M-mode. Like U-mode, S-mode software cannot use M-mode CSRs and instructions, and it is subject to the PMP restrictions. This section covers S-mode's interrupts and exceptions; the next section details S-mode's virtual-memory system. By default, all exceptions, regardless of the privilege mode in which they occur, transfer control to the M-mode exception handler. But the exceptions in a Unix system should invoke the operating system, which runs in S-mode. The M-mode exception handler could reroute these exceptions to S-mode, but the extra code would slow the handling of most exceptions. So RISC-V provides an exception delegation mechanism, whereby interrupts and synchronous exceptions can be delegated to S-mode, selectively bypassing the M-mode software altogether. The mideleg (machine interrupt delegation) CSR controls which interrupts are delegated to S-mode. As with mip and mie, each bit of mideleg corresponds to an interrupt code number in Figure 10.3. For example, mideleg[5] corresponds to the S-mode timer interrupt; if it is set, S-mode timer interrupts will transfer control to the S-mode exception handler, rather than the M-mode exception handler. (M-mode software remains in charge of the timers, sending timer and interprocessor interrupts on S-mode's behalf.) Any interrupt delegated to S-mode can be masked by S-mode software using the sie (supervisor interrupt enable) and sip (supervisor interrupt pending) CSRs. These S-mode CSRs are subsets of the mie and mip CSRs, with the same layout as their M-mode counterparts. Bits corresponding to interrupts that have been delegated via mideleg are readable and writable; bits corresponding to interrupts that have not been delegated are always zero. M-mode can also delegate synchronous exceptions to S-mode, using the medeleg (machine exception delegation) CSR. The mechanism is analogous to interrupt delegation, but the bits of medeleg correspond instead to the synchronous exception codes in Figure 10.3. For example, setting medeleg[15] will delegate store page faults to S-mode. Note that exceptions never transfer control to a less-privileged mode, no matter the delegation settings. An exception that occurs in M-mode is always handled in M-mode. An exception that occurs in S-mode might be handled in either M-mode or S-mode, depending on the delegation configuration, but never in U-mode. S-mode has several exception-handling CSRs -- sepc, stvec, scause, sscratch, stval, and sstatus -- which perform the same functions as their M-mode counterparts described in Section 10.2. Figure 10.9 shows the layout of the sstatus register.
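The delegation routing rules above, including the rule that exceptions never transfer control to a less-privileged mode, can be captured in a few lines. The function and constant names are illustrative, not architectural.

```python
# A sketch of the delegation routing described above: a trap taken in mode
# cur_priv is handled in S-mode only if the corresponding delegation bit is
# set and the hart was not already in M-mode. deleg is mideleg for
# interrupts or medeleg for synchronous exceptions.

M_MODE, S_MODE, U_MODE = 3, 1, 0

def handling_mode(cur_priv, code, deleg):
    if cur_priv == M_MODE:
        return M_MODE           # M-mode traps are always handled in M-mode
    if (deleg >> code) & 1:
        return S_MODE           # delegated: bypass M-mode software altogether
    return M_MODE               # default: transfer control to M-mode
```

For instance, with mideleg[5] set, an S-mode timer interrupt taken in U-mode or S-mode is routed to the S-mode handler, but the same interrupt taken in M-mode is still handled in M-mode.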
The supervisor exception return instruction, sret, behaves like mret, but it acts on the S-mode exception-handling CSRs instead of the M-mode ones. The act of taking an exception is also similar to M-mode. When a hart takes an exception that has been delegated to S-mode, the hardware atomically undergoes several similar state transitions, using the S-mode CSRs instead of the M-mode ones. The pc of the exceptional instruction is preserved in sepc, and the pc is set to stvec. scause is set to the exception cause, encoded as in Figure 10.3, and stval is set to the faulting address or other exception-specific word of information. Interrupts are disabled by setting SIE=0 in the sstatus CSR, and the previous value of SIE is preserved in SPIE. The pre-exception privilege mode is preserved in sstatus's SPP field, and the privilege mode is changed to S.", "url": "RV32ISPEC.pdf#segment72", "timestamp": "2023-10-13 16:15:32", "segment": "segment72", "image_urls": [], "Book": "rvbook" }, { "section": "10.6 Page-Based Virtual Memory ", "content": "S-mode provides a conventional virtual memory system, which divides memory into fixed-size pages for the purposes of address translation and memory protection. When paging is enabled, most addresses (including load and store effective addresses and the pc) are virtual addresses, which must be translated to physical addresses in order to access physical memory. Virtual addresses are translated to physical addresses by traversing a high-radix tree known as a page table. A leaf node in the page table indicates whether the virtual address maps to a physical page and, if so, which privilege modes and access types have permission to access the page. Accessing a page that is unmapped, or for which the access has insufficient permissions, results in a page-fault exception. RISC-V's paging schemes are named SvX, where X is the size of the virtual address in bits. RV32's paging scheme, Sv32, supports a 4 GiB virtual-address space, divided into 2^10 megapages, each 4 MiB in size. Each megapage is further subdivided into 2^10 base pages -- the fundamental unit of paging -- each 4 KiB. Hence, Sv32's page table is a two-level tree of radix 2^10. Each entry in the page table is four bytes, so a page table is itself 4 KiB. It's no coincidence that a page table is exactly the size of a page: this design simplifies operating-system memory allocation. Figure 10.10 shows the layout of an Sv32 page-table entry (PTE), with the following fields, explained from right to left. The V bit indicates whether the rest of the PTE is valid; if V=1,
the PTE is valid, and if V=0, any virtual-address translation that traverses this PTE results in a page fault. The R, W, and X bits indicate whether the page has read, write, and execute permissions, respectively. If all three bits are 0, the PTE is a pointer to the next level of the page table; otherwise, it is a leaf of the tree. The U bit indicates whether the page is a user page: if U=0, U-mode cannot access the page but S-mode can; if U=1, U-mode can access the page but S-mode cannot. The G bit indicates that this mapping exists in all virtual-address spaces, information the hardware can use to improve address-translation performance; it is typically only used for pages that belong to the operating system. The A bit indicates whether the page has been accessed since the last time the A bit was cleared. The D bit indicates whether the page has been dirtied (i.e., written) since the last time the D bit was cleared. The RSW field is reserved for operating-system use; the hardware ignores it. The PPN field holds the physical page number, which is part of a physical address. If this PTE is a leaf, the PPN is part of the translated physical address; otherwise, the PPN gives the address of the next level of the page table. (Figure 10.10 divides the PPN into two subfields to simplify the description of the address-translation algorithm.) RV64 supports multiple paging schemes, but we describe only the most popular one, Sv39. Sv39 uses the same 4 KiB base page as Sv32. The page-table entries double in size, to eight bytes, so they can hold bigger physical addresses. To maintain the invariant that a page table is exactly the size of a page, the radix of the tree correspondingly falls to 2^9, and the tree becomes three levels deep. Sv39's 512 GiB address space is divided into 2^9 gigapages, each 1 GiB. Each gigapage is further subdivided into 2^9 megapages, which in Sv39 are slightly smaller than in Sv32: 2 MiB each. Each megapage is in turn subdivided into 2^9 4 KiB base pages. Figure 10.11 shows the layout of an Sv39 PTE. It is identical to an Sv32 PTE, except the PPN field is widened to 44 bits to support 56-bit physical addresses, a 2^26 GiB physical-address space. Elaboration: unused address bits. Since Sv39 virtual addresses are narrower than RV64 integer registers, you might wonder what becomes of the remaining 25 bits. Sv39 mandates that address bits 63-39 be copies of bit 38. Thus, the valid virtual addresses are 0000000000000000hex through 0000003FFFFFFFFFhex, and FFFFFFC000000000hex through FFFFFFFFFFFFFFFFhex. The gap between the two ranges is, of course, 2^25 times bigger than the size of the two ranges combined. Why seemingly waste 99.999997% of the values a 64-bit register can represent, rather than make better use of those extra 25 bits? The answer is that, as programs grow to require more than a 512 GiB virtual-address space, architects will want to
increase the address space without breaking backwards compatibility. Had programs been allowed to store extra data in the upper 25 bits, it would have been impossible to later reclaim those bits to hold bigger addresses. Allowing data storage in unused address bits is a grievous error, and one that has recurred many times in computing history. The S-mode CSR satp (supervisor address translation and protection) controls the paging system. As Figure 10.12 shows, satp has three fields. The MODE field enables paging and selects the page-table depth; Figure 10.13 shows its encoding. The ASID (address-space identifier) field is optional and can be used to reduce the cost of context switches. Finally, the PPN field holds the physical address of the root page table, divided by the 4 KiB page size. Typically, M-mode software writes zero to satp before entering S-mode for the first time, thereby disabling paging; S-mode software then writes satp after setting up the page tables. When paging is enabled by the satp register, S-mode and U-mode virtual addresses are translated to physical addresses by traversing the page table, starting at the root. Figure 10.14 depicts the process: (1) satp.PPN gives the base address of the first-level page table, and VA[31:22] gives the first-level index, so the processor reads the PTE located at address satp.PPN x 4096 + VA[31:22] x 4. (2) That PTE contains the base address of the second-level page table, and VA[21:12] gives the second-level index, so the processor reads the leaf PTE located at PTE.PPN x 4096 + VA[21:12] x 4. (3) The leaf PTE's PPN field and the page offset (the twelve least-significant bits of the original virtual address) form the final result: the physical address is LeafPTE.PPN x 4096 + VA[11:0]. The processor then performs the physical memory access. The translation process is almost the same for Sv39 as for Sv32, but with larger PTEs and one more level of indirection. Figure 10.19 at the end of this chapter gives a complete description of the page-table traversal algorithm, detailing the exceptional conditions and the special case of superpage translations. That is almost all there is to the RISC-V paging system, save one wrinkle. If every instruction fetch, load, and store resulted in several page-table accesses, paging would reduce performance substantially. So all modern processors reduce this overhead with an address-translation cache, often called a TLB (translation lookaside buffer). To reduce the cost of this cache, processors do not automatically keep it coherent with the page table: if the operating system modifies the page table, the cache becomes stale. S-mode adds one instruction to solve this problem.
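The three-step Sv32 walk above can be sketched directly from the address arithmetic in the text. This is a toy model under stated assumptions: physical memory is a dict from byte address to 32-bit PTE, only the simple two-level case is handled (no superpages, no permission or A/D checks), and the helper names are illustrative.

```python
# A sketch of the three-step Sv32 translation described above, over a toy
# physical memory modeled as a dict from address to 32-bit PTE. Superpages,
# permission checks, and A/D updates are omitted for brevity.

PAGE = 4096

def pte_valid(pte):
    return pte & 1 == 1            # V bit (bit 0)

def pte_ppn(pte):
    return pte >> 10               # PPN occupies bits 31:10 in an Sv32 PTE

def sv32_translate(va, satp_ppn, mem):
    vpn1 = (va >> 22) & 0x3FF      # VA[31:22], first-level index
    vpn0 = (va >> 12) & 0x3FF      # VA[21:12], second-level index
    # Step 1: read the first-level PTE at satp.PPN*4096 + VA[31:22]*4
    pte1 = mem[satp_ppn * PAGE + vpn1 * 4]
    if not pte_valid(pte1):
        raise Exception("page fault")
    # Step 2: read the leaf PTE at PTE.PPN*4096 + VA[21:12]*4
    pte0 = mem[pte_ppn(pte1) * PAGE + vpn0 * 4]
    if not pte_valid(pte0):
        raise Exception("page fault")
    # Step 3: leaf PPN plus the 12-bit page offset forms the physical address
    return pte_ppn(pte0) * PAGE + (va & 0xFFF)
```

Building a two-level table by hand and translating an address reproduces the arithmetic of Figure 10.14: the root PTE selects the second-level table, whose leaf PTE supplies the physical page number.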
The sfence.vma instruction informs the processor that the software may have modified the page tables, so the processor can flush its translation caches accordingly. It takes two optional arguments to narrow the scope of the cache flush: rs1 indicates a virtual address whose translation was modified in the page table, and rs2 gives the address-space identifier of the process whose page table was modified. If either is x0, the entire translation cache is flushed. Elaboration: address-translation cache coherence on multiprocessors. sfence.vma only affects the address-translation hardware of the hart that executed the instruction. When a hart changes a page table that another hart is using, the first hart must use an interprocessor interrupt to inform the second hart that it should execute an sfence.vma instruction. This procedure is often referred to as a TLB shootdown.", "url": "RV32ISPEC.pdf#segment73", "timestamp": "2023-10-13 16:15:32", "segment": "segment73", "image_urls": [], "Book": "rvbook" }, { "section": "10.7 Concluding Remarks ", "content": "Brooks, the Turing Award laureate and architect of the IBM System/360 family of computers, demonstrated the importance of differentiating architecture from implementation: descendants of his 1964 architecture are still selling today. The modularity of the RISC-V privileged architectures caters to the needs of a variety of systems. A minimalist machine mode supports bare-metal embedded applications at low cost. The additions of user mode and physical memory protection together enable multitasking in more sophisticated embedded systems. Finally, supervisor mode and page-based virtual memory provide the flexibility needed to host modern operating systems.", "url": "RV32ISPEC.pdf#segment74", "timestamp": "2023-10-13 16:15:33", "segment": "segment74", "image_urls": [], "Book": "rvbook" }, { "section": "10.8 To Learn More ", "content": "A. Waterman and K. Asanovic, editors. The RISC-V Instruction Set Manual, Volume II: Privileged Architecture, Version 1.10, May 2017. URL: https://riscv.org/specifications/privileged-isa.", "url": "RV32ISPEC.pdf#segment75", "timestamp": "2023-10-13 16:15:33", "segment": "segment75", "image_urls": [], "Book": "rvbook" }, { "section": "Chapter 11", "content": "Future RISC-V Optional Extensions", "url": "RV32ISPEC.pdf#segment76", "timestamp": "2023-10-13 16:15:26", "segment": 
"segment76", "image_urls": [], "Book": "rvbook" }, { "section": "11.1 'B' Standard Extension for Bit Manipulation ", "content": "b extension offers bit manipulation including insert extract test bit elds rotations funnel shifts bit byte permutations count leading trailing zeros count bits set", "url": "RV32ISPEC.pdf#segment77", "timestamp": "2023-10-13 16:15:33", "segment": "segment77", "image_urls": [], "Book": "rvbook" }, { "section": "11.2 'E' Standard Extension for Embedded ", "content": "reduce cost lowend cores 16 fewer registers rv32e saved temporary registers split registers 015 1631 figure 32", "url": "RV32ISPEC.pdf#segment78", "timestamp": "2023-10-13 16:15:33", "segment": "segment78", "image_urls": [], "Book": "rvbook" }, { "section": "11.3 'H' Privileged Architecture Extension for Hypervisor Support ", "content": "h extension privileged architecture adds new hypervisor mode second level pagebased address translation improve efciency running multiple operating systems machine", "url": "RV32ISPEC.pdf#segment79", "timestamp": "2023-10-13 16:15:33", "segment": "segment79", "image_urls": [], "Book": "rvbook" }, { "section": "11.4 'J' Standard Extension for Dynamically Translated Languages ", "content": "many popular languages usually implemented via dynamic translation including java javascript languages benet additional isa support dynamic checks garbage collection j stands justintime compiler", "url": "RV32ISPEC.pdf#segment80", "timestamp": "2023-10-13 16:15:33", "segment": "segment80", "image_urls": [], "Book": "rvbook" }, { "section": "11.5 'L' Standard Extension for Decimal Floating-Point ", "content": "l extension intended support decimal oatingpoint arithmetic dened ieee 7542008 standard problem binary numbers represent common decimal fractions 01 motivation rv32l compu tation radix identical radix input output", "url": "RV32ISPEC.pdf#segment81", "timestamp": "2023-10-13 16:15:33", "segment": "segment81", "image_urls": [], "Book": "rvbook" }, { 
"section": "11.6 'N' Standard Extension for User-Level Interrupts ", "content": "n extension allows interrupts exceptions occur userlevel programs transfer control directly userlevel trap handler without invoking outer execution environment userlevel interrupts mainly intended support secure embedded systems m mode umode present chapter 10 however also support userlevel trap handling systems running unixlike operating systems used unix environment conventional signal handling would likely remain userlevel interrupts could used building block future extensions generate userlevel events garbage collection barriers integer overow oatingpoint traps", "url": "RV32ISPEC.pdf#segment82", "timestamp": "2023-10-13 16:15:33", "segment": "segment82", "image_urls": [], "Book": "rvbook" }, { "section": "11.7 'P' Standard Extension for Packed-SIMD Instructions ", "content": "p extension subdivides existing architectural registers provide dataparallel compu tation smaller data types packedsimd designs represent reasonable design point reusing existing wide datapath resources however signicant additional resources devoted dataparallel execution chapter 8 shows designs vector architectures better choice architects use rvv extension", "url": "RV32ISPEC.pdf#segment83", "timestamp": "2023-10-13 16:15:33", "segment": "segment83", "image_urls": [], "Book": "rvbook" }, { "section": "11.8 'Q' Standard Extension for Quad-Precision Floating-Point ", "content": "q extension adds 128bit quadprecision binary oatingpoint instructions compliant ieee 7542008 arithmetic standard oatingpoint registers extended hold either single double quadprecision oatingpoint value quadprecision binary oatingpoint extension requires rv64ifd", "url": "RV32ISPEC.pdf#segment84", "timestamp": "2023-10-13 16:15:33", "segment": "segment84", "image_urls": [], "Book": "rvbook" }, { "section": "11.9 Concluding Remarks ", "content": "open standardslike committee approach expanding riscv hopefully mean feedback debate occur 
before instructions are finalized, rather than afterwards, when it is too late to change anything. In the ideal case, members will implement a proposal before it is ratified; FPGAs make that much easier. Proposing instruction extensions via RISC-V Foundation committees also involves a fair amount of work, which should help keep the rate of change slow, unlike what happened with the x86-32 (see Figure 1.2 on page 3 of Chapter 1). And don't forget: everything in this chapter is optional. Despite the many extensions being adopted, our hope is that RISC-V will evolve with technological demands while maintaining its reputation as a simple and efficient ISA. If it succeeds, RISC-V will be a significant break from the incremental ISAs of the past.", "url": "RV32ISPEC.pdf#segment85", "timestamp": "2023-10-13 16:15:33", "segment": "segment85", "image_urls": [], "Book": "rvbook" }, { "section": "APPENDIX A RISC-V Instruction Listings ", "content": "This appendix lists the instructions of RV32/64I and the extensions covered in this book -- RVM, RVA, RVF, RVD, RVC, and RVV -- as well as the pseudoinstructions. Each entry has the instruction name and operands, a register-transfer-level definition of the instruction, its instruction format type, an English description, and, for those with compressed versions, a figure showing the actual layout of the opcodes. We think this is everything you need to understand the instructions from these compact summaries. However, if you want even more detail, refer to the official RISC-V specifications (Waterman and Asanovic 2017). To help readers find a desired instruction, this appendix borrows the header convention of dictionaries: the header at the left of an even page contains the first instruction at the top of that page, and the header at the right of an odd page contains the last instruction at the bottom of that page. For example, if the header of the next even page shows amoadd.w as the first instruction on that page, and the header of the following odd page shows amominu.d as the last instruction on that page, then between these two pages you would find the 10 instructions amoadd.w, amoand.d, amoand.w, amomax.d, amomax.w, amomaxu.d, amomaxu.w, amomin.d, amomin.w, and amominu.d.", "url": "RV32ISPEC.pdf#segment86", "timestamp": "2023-10-13 16:15:33", "segment": "segment86", "image_urls": [], "Book": "rvbook" }, { "section": "CHAPTER 1 ", "content": "Introduction to Computer Systems. The technological advances witnessed in the computer industry are the result of a long chain of immense and successful efforts made by two major forces: academia, represented by university research centers, and industry, represented by computer companies. However, it is fair to say that the current 
technological advances in the computer industry owe their inception to university research centers. In order to appreciate the current technological advances in the computer industry, one has to trace back the history of computers and their development. The objective of such a historical review is to understand the factors affecting computing as we know it today, and hopefully to forecast the future of computation. The great majority of computers in daily use are known as general-purpose machines. These are machines built with no specific application in mind, but rather capable of performing the computation needed by a diversity of applications. These machines are to be distinguished from those built to serve (tailored to) specific applications; the latter are known as special-purpose machines. Computer systems have conventionally been defined through their interfaces at a number of layered abstraction levels, each providing functional support to its predecessor. Included among the levels are the application programs, the high-level languages, and the set of machine instructions. Based on the interface between different levels of a system, a number of computer architectures can be defined. The interface between the application programs and a high-level language is referred to as a language architecture. The instruction set architecture defines the interface between the basic machine instruction set and the runtime and I/O control. A different definition of computer architecture is built on four basic viewpoints: structure, organization, implementation, and performance. In this definition, the structure defines the interconnection of the various hardware components, the organization defines the dynamic interplay and management of the various components, the implementation defines the detailed design of hardware components, and the performance specifies the behavior of the computer system. Historical Background. In this section, we would like to provide a historical background on the evolution of cornerstone ideas in the computing industry. We should emphasize at the outset that the effort to build computers has not originated in one single place. There is every reason for us to believe that attempts to build the first computer existed in different geographically distributed places. We also firmly believe that building a computer requires teamwork. Therefore, when some people attribute a machine to the name of a single researcher, what they actually mean is that such a researcher may have led the team that introduced the machine. We therefore see it as appropriate to mention the machine and the place it was first introduced without linking that to a specific name. We believe 
that such an approach is fair and would eliminate any controversy about researchers' names. It is probably fair to say that the first program-controlled (mechanical) computer ever built was the Z1 (1938). It was followed in 1939 by the Z2, the first operational program-controlled computer with fixed-point arithmetic. However, the first recorded university-based attempt to build a computer originated on the Iowa State University campus in the early 1940s. Researchers on that campus were able to build a small-scale special-purpose electronic computer. However, that computer was never completely operational. Around the same time, a complete design of a fully functional programmable special-purpose machine, the Z3, was reported in Germany in 1941. It appears that a lack of funding prevented that design from being implemented. History recorded that, while these two attempts were in progress, researchers from different parts of the world had opportunities to gain firsthand experience through visits to the laboratories and institutes carrying out the work. It is assumed that those firsthand visits and the interchange of ideas enabled the visitors to embark on similar projects in their own laboratories back home. As far as general-purpose machines are concerned, the University of Pennsylvania is recorded to have hosted the building of the Electronic Numerical Integrator and Calculator (ENIAC) in 1944. It was the first operational general-purpose machine built using vacuum tubes. The machine was primarily built to help compute artillery firing tables during World War II. It was programmable through the manual setting of switches and plugging of cables. The machine was slow by today's standards and had a limited amount of storage and primitive programmability. An improved version of the ENIAC was proposed on the same campus. The improved version, called the Electronic Discrete Variable Automatic Computer (EDVAC), was an attempt to improve the way programs are entered and to explore the concept of stored programs. It was not until 1952 that the EDVAC project was completed. Inspired by the ideas implemented in the ENIAC, researchers at the Institute for Advanced Study (IAS) at Princeton built, in 1946, the IAS machine, which was about 10 times faster than the ENIAC. In 1946, while the EDVAC project was in progress, a similar project was initiated at Cambridge University. That project was to build a stored-program computer, known as the Electronic Delay Storage Automatic Calculator (EDSAC). In 1949, the EDSAC became the world's first full-scale, stored-program, fully operational computer. A spin-off of the EDSAC resulted in a series of machines introduced at Harvard. The series consisted of Mark II, III, and IV. The latter two machines 
introduced the concept of separate memories for instructions and data. The term Harvard Architecture was given to such machines to indicate the use of the two separate memories. It should be noted that the term Harvard Architecture is used today to describe machines with separate caches for instructions and data. The first general-purpose commercial computer, the UNIVersal Automatic Computer (UNIVAC I), was on the market by the middle of 1951. It represented an improvement over the BINAC, which was built in 1949. IBM announced its first computer, the IBM 701, in 1952. The early 1950s witnessed a slowdown in the computer industry. In 1964, IBM announced a line of products under the name IBM 360 series. The series included a number of models that varied in price and performance. This led Digital Equipment Corporation (DEC) to introduce the first minicomputer, the PDP-8, which was considered a remarkably low-cost machine. Intel introduced the first microprocessor, the Intel 4004, in 1971. The world witnessed the birth of the first personal computer (PC) in 1977, when the Apple computer series was first introduced. In 1977, the world also witnessed the introduction of the VAX-11/780 by DEC. Intel followed suit by introducing the first of its most popular microprocessors, the 80x86 series. Personal computers, introduced in 1977 by Altair, Processor Technology, North Star, Tandy, Commodore, Apple, and many others, enhanced the productivity of end-users in numerous departments. Personal computers from Compaq, Apple, IBM, Dell, and many others soon became pervasive, and changed the face of computing. In parallel with small-scale machines, supercomputers were coming into play. The first such supercomputer, the CDC 6600, was introduced in 1961 by Control Data Corporation. Cray Research Corporation introduced the best cost/performance supercomputer, the Cray-1, in 1976. The 1980s and 1990s witnessed the introduction of many commercial parallel computers with multiple processors. They can generally be classified into two main categories: (1) shared memory and (2) distributed memory systems. The number of processors in a single machine ranged from several in a shared memory computer to hundreds of thousands in a massively parallel system. Examples of parallel computers during this era include Sequent Symmetry, Intel iPSC, nCUBE, Intel Paragon, Thinking Machines (CM-2, CM-5), MasPar (MP), Fujitsu (VPP500), and others. One of the clear trends in computing is the substitution of centralized servers by networks of computers. These networks connect inexpensive, powerful desktop machines to form unequaled 
computing power. Local area networks (LANs) of powerful personal computers and workstations began to replace mainframes and minis by 1990. These individual desktop computers were soon to be connected into larger complexes of computing by wide area networks (WANs). The pervasiveness of the Internet created interest in network computing and, more recently, in grid computing. Grids are geographically distributed platforms of computation that should provide dependable, consistent, pervasive, and inexpensive access to high-end computational facilities. Table 1.1, modified from a table proposed by Lawrence Tesler (1995), shows the major characteristics of the different computing paradigms associated with each decade of computing, starting in 1960. Architectural Development and Styles. Computer architects have always been striving to increase the performance of their architectures. This striving has taken a number of forms. Among these is the philosophy that, by doing more in a single instruction, one can use a smaller number of instructions to perform the same job. The immediate consequence of this is the need for fewer memory read/write operations and an eventual speedup of operations. It was also argued that increasing the complexity of instructions and the number of addressing modes has the theoretical advantage of reducing the semantic gap between the instructions in a high-level language and those in the low-level (machine) language. A single machine instruction to convert several binary coded decimal (BCD) numbers to binary is an example of a complex instruction intended for that purpose. Huge numbers of addressing modes were considered (more than 20 in the VAX machine), adding to the complexity of instructions. Machines following this philosophy have been referred to as complex instruction set computers (CISCs). Examples of CISC machines include the Intel Pentium, the Motorola MC68000, and the IBM/Macintosh PowerPC. It should be noted that, as more capabilities were added to their processors, manufacturers realized that it was increasingly difficult to support higher clock rates, because of the complexity of the computations required within a single clock period. A number of studies from the mid-1970s and early 1980s also identified that in typical programs more than 80% of the instructions executed are those using assignment statements, conditional branching, and procedure calls. It was also surprising to find that simple assignment statements constitute almost 50% of those operations. These findings caused a different philosophy to emerge, one that promotes the optimization of architectures by speeding up those operations that are most 
frequently used, while reducing instruction complexity and the number of addressing modes. Machines following this philosophy have been referred to as reduced instruction set computers (RISCs). Examples of RISCs include the Sun SPARC and MIPS machines. The two philosophies in architecture design have led to an unresolved controversy as to which architecture style is best. It should, however, be mentioned that the studies indicated that RISC architectures would indeed lead to faster execution of programs. The majority of contemporary microprocessor chips seem to follow the RISC paradigm. In this book we present the salient features and examples of both CISC and RISC machines. Technological Development. Computer technology has shown an unprecedented rate of improvement. This includes the development of processors and memories. Indeed, it is the advances in technology that have fueled the computer industry. The integration of numbers of transistors (a transistor is a controlled on/off switch) into a single chip has increased from a few hundred to millions. This impressive increase has been made possible by the advances in the fabrication technology of transistors. The scale of integration has grown from small-scale (SSI) to medium-scale (MSI) to large-scale (LSI), to very large-scale integration (VLSI), and currently to wafer-scale integration (WSI). Table 1.2 shows the typical numbers of devices per chip in each of the technologies mentioned. The continuous decrease in the minimum device feature size has led to a continuous increase in the number of devices per chip, which in turn has led to a number of developments. Among these is the increase in the number of devices in RAM memories, which in turn helps designers to trade memory size for speed. The improvement in feature size also provides golden opportunities for introducing improved design styles.", "url": "RV32ISPEC.pdf#segment0", "timestamp": "2023-10-16 19:09:59", "segment": "segment0", "image_urls": [], "Book": "COMPUTER-ARCHITECTURE-COM-314" }, { "section": "CHAPTER 2 ", "content": "Instruction Set Architecture and Design. In this chapter, we consider the basic principles involved in instruction set architecture and design. Our discussion starts with a consideration of memory locations and addresses. We present an abstract model of the main memory in which it is considered as a sequence of cells, each capable of storing n bits. We then address the issue of storing and retrieving information into and from the memory. The information stored in and/or retrieved from the memory needs to be addressed.
A discussion of a number of different ways to address memory locations, that is, addressing modes, is the next topic discussed in the chapter. A program consists of a number of instructions that have to be accessed in a certain order. That motivates us to explain the issue of instruction execution and sequencing in some detail. We then show the application of the presented addressing modes and instruction characteristics in writing sample segments of code for performing a number of simple programming tasks. A unique characteristic of computer memory is that it is organized in a hierarchy. In such a hierarchy, larger and slower memories are used to supplement smaller and faster ones. A typical memory hierarchy starts with a small, expensive, and relatively fast module, called the cache. The cache is followed in the hierarchy by a larger, less expensive, and relatively slow main memory part. Cache and main memory are built using semiconductor material. They are followed in the hierarchy by larger, less expensive, and far slower magnetic memories that consist of the (hard) disk and the tape. Our concentration in this chapter is on the main memory from the programmer's point of view, with a particular focus on the way information is stored in and retrieved out of the memory. Memory Locations and Operations. The (main) memory can be modeled as an array of millions of adjacent cells, each capable of storing a binary digit (bit) having a value of 1 or 0. These cells are organized in the form of groups of a fixed number, say n, of cells that can be dealt with as an atomic entity. An entity consisting of 8 bits is called a byte. In many systems, the entity consisting of n bits that can be stored in and retrieved from the memory using one basic memory operation is called a word (the smallest addressable entity). The typical size of a word ranges from 16 to 64 bits. It is, however, customary to express the size of the memory in terms of bytes. For example, the size of a typical memory of a personal computer is 256 Mbytes, that is, 256 x 2^20 = 2^28 bytes. In order to be able to move a word in and out of the memory, a distinct address has to be assigned to each word. This address is used to determine the location in the memory in which a given word is to be stored (a memory write operation). Similarly, the address is used to determine the memory location from which a word is to be retrieved (a memory read operation). The number of bits, l, needed to distinctly address M words in a memory is given by l = log2 M. For example, if the size of the memory is 64 M words (read 64 mega words), then the number of bits in the address is log2(64 x 2^20) = log2(2^26) = 26 bits. Alternatively, if the number of bits in the address is l, then the maximum memory size, in terms of the number of words that can be addressed using these l bits, is M = 2^l.
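The relation l = log2 M between address width and addressable words, worked in the example above, can be checked directly. The function names here are illustrative.

```python
# A quick check of the relation l = log2(M) between the number of address
# bits and the number of addressable words, as in the 64 M-word example above.
import math

def address_bits(num_words):
    # number of bits needed to distinctly address num_words words
    return int(math.log2(num_words))

def max_words(l):
    # maximum memory size addressable with an l-bit address
    return 2 ** l
```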
Figure 2.1 illustrates the concepts of memory words and word addresses as explained above. As mentioned, there are two basic memory operations: the memory write and the memory read. During a memory write operation, a word is stored into a memory location whose address is specified. During a memory read operation, a word is read from a memory location whose address is specified. Typically, memory read and memory write operations are performed by the central processing unit (CPU). Three basic steps are needed in order for the CPU to perform a write operation into a specified memory location: (1) The word to be stored into the memory location is first loaded by the CPU into a specified register called the memory data register (MDR). (2) The address of the location into which the word is to be stored is loaded by the CPU into a specified register called the memory address register (MAR). (3) A signal called write is issued by the CPU, indicating that the word stored in the MDR is to be stored in the memory location whose address is loaded in the MAR. Figure 2.2 illustrates the operation of writing the word given by 7E (hex) into the memory location whose address is 2005. Part (a) of the figure shows the status of the registers and memory locations involved in the write operation before the execution of the operation; part (b) shows the status after the execution of the operation. It is worth mentioning that the MDR and the MAR are used exclusively by the CPU and are not accessible to the programmer. Similar to the write operation, three basic steps are needed in order to perform a memory read operation: (1) The address of the location from which the word is to be read is loaded into the MAR. (2) A signal called read is issued by the CPU, indicating that the word whose address is in the MAR is to be read into the MDR. (3) After some time, corresponding to the memory delay in reading the specified word, the required word is loaded by the memory into the MDR, ready for use by the CPU. Addressing Modes. Information involved in any operation performed by the CPU needs to be addressed. In computer terminology, such information is called the operand. Therefore, any instruction issued by the processor must carry at least two types of information: the operation to be performed, encoded in what is called the op-code field, and the address information of the operand on which the operation is to be performed, encoded in what is called the address field. Instructions can be classified based on the number of operands as three-address, two-address, one-and-half-address, one-address, and zero-address. We explain these classes together with simple examples in the following paragraphs. It should be noted that, in presenting these examples, we use the convention operation, source, destination to express any instruction.
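The three-step write and read sequences above can be sketched with the MDR and MAR modeled as fields of a toy CPU. This class is purely illustrative; as the text notes, the real MDR and MAR are not programmer-visible.

```python
# A sketch of the three-step write and read sequences described above,
# with the MDR and MAR modeled as fields of a hypothetical CPU class and
# memory as a dict from address to word.

class CPU:
    def __init__(self, memory):
        self.mem = memory   # dict: address -> word
        self.mdr = 0        # memory data register
        self.mar = 0        # memory address register

    def write(self, word, addr):
        self.mdr = word                  # step 1: word into the MDR
        self.mar = addr                  # step 2: address into the MAR
        self.mem[self.mar] = self.mdr    # step 3: the 'write' signal takes effect

    def read(self, addr):
        self.mar = addr                  # step 1: address into the MAR
        self.mdr = self.mem[self.mar]    # steps 2-3: 'read' signal; word into MDR
        return self.mdr
```

Replaying the Figure 2.2 example, writing 7E (hex) to address 2005 and reading it back leaves 7E in both the memory location and the MDR.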
In that convention, operation represents the operation to be performed, for example, add, subtract, write, or read. The source field represents the source operand, which can be a constant, a value stored in a register, or a value stored in the memory. The destination field represents the place where the result of the operation is to be stored, for example, a register or a memory location. A three-address instruction takes the form operation add-1, add-2, add-3, where each of add-1, add-2, and add-3 refers to a register or to a memory location. Consider, for example, the instruction ADD R1, R2, R3. This instruction indicates that the operation to be performed is addition. It also indicates that the values to be added are those stored in registers R1 and R2, and that the result should be stored in register R3. An example of a three-address instruction that refers to memory locations may take the form ADD A, B, C. This instruction adds the contents of memory location A to the contents of memory location B and stores the result in memory location C. A two-address instruction takes the form operation add-1, add-2, where each of add-1 and add-2 refers to a register or to a memory location. Consider, for example, the instruction ADD R1, R2. This instruction adds the contents of register R1 to the contents of register R2 and stores the result in register R2. The original contents of register R2 are lost due to this operation, while the original contents of register R1 remain intact. This instruction is equivalent to a three-address instruction of the form ADD R1, R2, R2. A similar instruction that uses memory locations instead of registers can take the form ADD A, B. In this case, the contents of memory location A are added to the contents of memory location B, and the result overrides the original contents of memory location B. The operation performed by the three-address instruction ADD A, B, C can be performed by the two two-address instructions MOVE B, C and ADD A, C: the first instruction moves the contents of location B into location C, and the second adds the contents of location A to those of location C (the contents of location B) and stores the result in location C. A one-address instruction takes the form ADD R1. In this case, the instruction implicitly refers to a register, called the accumulator Racc, such that the contents of the accumulator are added to the contents of register R1 and the result is stored back into the accumulator. If a memory location is used instead of a register, then an instruction of the form ADD B is used. In this case, the instruction adds the content of the accumulator Racc to the content of memory location B and stores the result back into the accumulator. The instruction ADD R1 is equivalent to the three-address instruction ADD R1, Racc, Racc.
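The equivalence argued above, that ADD A, B, C can be performed as MOVE B, C followed by ADD A, C, can be verified with a small model. The mnemonics are modeled as functions over a dict-based memory; the names are illustrative.

```python
# A sketch verifying the equivalence described above: the three-address
# ADD A, B, C versus the two-address sequence MOVE B, C then ADD A, C.
# Memory is a plain dict from location name to value.

def move(mem, src, dst):
    mem[dst] = mem[src]               # MOVE src, dst

def add2(mem, src, dst):
    mem[dst] = mem[dst] + mem[src]    # two-address ADD: dst <- dst + src

def add3(mem, a, b, c):
    mem[c] = mem[a] + mem[b]          # three-address ADD: c <- a + b
```

Running both versions from the same initial memory leaves the same value in location C, as the text argues.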
The instruction add R1 is thus equivalent to the three-address instruction add R1, Racc, Racc; to the two-address instruction add R1, Racc; or to a pair of one-address instructions.

Between the two-address and the one-address instructions lies the one-and-a-half-address instruction. Consider, for example, the instruction add B, R1. In this case, the instruction adds the contents of register R1 to the contents of memory location B and stores the result in register R1. Owing to the fact that the instruction uses two types of addressing, a register and a memory location, it is called a one-and-a-half-address instruction; register addressing needs a smaller number of bits than memory addressing.

It is interesting to indicate that zero-address instructions also exist: these are instructions that use a stack for their operation. A stack is a data organization mechanism in which the last data item stored is the first data item retrieved. Two specific operations can be performed on a stack: the push and the pop operations. Figure 2.4 illustrates these two operations. As can be seen, a specific register, called the stack pointer (SP), is used to indicate the stack location that can be addressed. In the stack push operation, the SP value is used to indicate the location, called the top of the stack, in which the value (5A in the figure) is stored (location 1023 in this case); after storing (pushing) the value, the SP is incremented to indicate location 1024. In the stack pop operation, the SP is first decremented (to become 1021 in the figure), and the value stored at that location (DD in this case) is retrieved (popped) and stored in the register shown.

Different operations can be performed using the stack structure. Consider, for example, the instruction add (SP), (SP). This instruction adds the contents of the stack location pointed to by the SP to those pointed to by SP - 1 and stores the result on the stack at the location pointed to by the current value of the SP. Figure 2.5 illustrates this addition operation, and Table 2.1 summarizes the instruction classification discussed above.

The different ways in which operands can be addressed are called the addressing modes. Addressing modes differ in the way the address information of operands is specified. The simplest addressing mode is to include the operand itself in the instruction, so that no address information is needed; this is called immediate addressing. A more involved addressing mode is to compute the address of the operand by adding a constant value to the contents of a register; this is called indexed addressing. Between these two addressing modes exist a number of others, including absolute (direct) addressing and indirect addressing. A number of the different addressing modes are explained below.

Immediate Mode. According to this addressing mode, the value of the operand is immediately available in the instruction itself.
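The push and pop operations described above can be sketched as follows (an ascending stack whose SP is incremented on push, matching Figure 2.4; the addresses and values are illustrative):

```python
# Toy stack: SP is incremented after a push and decremented before a pop,
# following the text's description of Figure 2.4.
memory = [0] * 2048
sp = 1023                  # SP indicates the current top of the stack

def push(value):
    global sp
    memory[sp] = value     # store at the location SP points to ...
    sp += 1                # ... then increment SP

def pop():
    global sp
    sp -= 1                # decrement SP first ...
    return memory[sp]      # ... then retrieve the stored value

push(0x5A)                 # after the push, sp == 1024
top = pop()                # retrieves 0x5A; sp is back to 1023
print(hex(top), sp)        # prints 0x5a 1023
```

A zero-address add such as add (SP), (SP) would simply pop two values, add them, and push the sum using the same two primitives.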
Consider, for example, the case of loading the decimal value 1000 into a register Ri. This operation can be performed using an instruction such as load #1000, Ri. In this instruction, the operation to be performed is load; the source operand is immediately given as the value 1000; and the destination is the register Ri. It should be noted that in order to indicate that the value 1000 mentioned in the instruction is the operand itself and not its address (immediate mode), it is customary to prefix the operand with a special character (shown here as #). As can be seen, the immediate addressing mode is simple; however, the use of immediate addressing leads to poor programming practice, because a change in the value of an operand requires a change in every instruction that uses the immediate value of that operand. A more flexible addressing mode is explained below.

Direct (Absolute) Mode. According to this addressing mode, the address of the memory location that holds the operand is included in the instruction. Consider, for example, the case of loading into register Ri the value of the operand stored in memory location 1000. This operation can be performed using the instruction load 1000, Ri. In this instruction, the source operand is the value stored in the memory location whose address is 1000, and the destination is the register Ri. Note that the value 1000 is not prefixed with any special character, indicating that it is the direct (absolute) address of the source operand. Figure 2.6 shows an illustration of the direct addressing mode: in this example, the content of the memory location whose address is 1000 is 2345 at the time the instruction load 1000, Ri is executed, so the result of executing the instruction is to load the value 2345 into register Ri. The direct (absolute) addressing mode provides more flexibility compared to the immediate mode; however, it requires the explicit inclusion of the operand's address in the instruction. A more flexible addressing mechanism is provided through the use of the indirect addressing mode, explained below.

Indirect Mode. In the indirect mode, what is included in the instruction is not the address of the operand, but rather the name of a register or memory location that holds the (effective) address of the operand. In order to indicate the use of indirection in the instruction, it is customary to enclose the name of the register or memory location in parentheses. Consider, for example, the instruction load (1000), Ri. In this instruction, the memory location 1000 is enclosed in parentheses, thus indicating indirection: the meaning of the instruction is to load into register Ri the contents of the memory location whose address is stored at memory address 1000. Indirection can be made through either a register or a memory location; we therefore identify two types of indirect addressing.
In register indirect addressing, a register is used to hold the address of the operand, while in memory indirect addressing a memory location is used to hold the address of the operand. The two types are illustrated in Figure 2.7.

Indexed Mode. In this addressing mode, the address of the operand is obtained by adding a constant to the content of a register, called the index register. Consider, for example, the instruction load X(Rind), Ri. This instruction loads into register Ri the contents of the memory location whose address is the sum of the contents of the register Rind and the value X. Index addressing is indicated in the instruction by including the name of the index register in parentheses and using the symbol X to indicate the constant to be added. Figure 2.8 illustrates indexed addressing. As can be seen, indexing requires an additional level of complexity over register indirect addressing.

The addressing modes presented so far represent the most commonly used modes in most processors, and they provide the programmer with sufficient means to handle most general programming tasks. However, a number of other addressing modes are used in a number of processors to facilitate the execution of specific programming tasks; these involve an additional level of complexity compared to the modes presented above. Among these addressing modes, the relative, autoincrement, and autodecrement modes represent the most well-known ones, and they are explained below.

Relative Mode. Recall that in indexed addressing, an index register Rind is used. Relative addressing is the same as indexed addressing except that the program counter (PC) replaces the index register. For example, the instruction load X(PC), Ri loads into register Ri the contents of the memory location whose address is the sum of the contents of the program counter (PC) and the value X. Figure 2.9 illustrates the relative addressing mode.

Autoincrement Mode. This addressing mode is similar to the register indirect addressing mode in the sense that the effective address of the operand is the content of a register, call it the autoincrement register, that is included in the instruction. However, with autoincrement, the content of the autoincrement register is automatically incremented after accessing the operand. As before, indirection is indicated by including the autoincrement register in parentheses; the automatic increment of the register's content after accessing the operand is indicated by appending a special symbol to the parentheses (commonly written (Rauto)+). Consider, for example, the instruction load (Rauto)+, Ri. This instruction loads into register Ri the operand whose address is the content of register Rauto; after loading the operand, the content of register Rauto is incremented, pointing, for example, to the next item in a list of items.
Figure 2.10 illustrates the autoincrement addressing mode.

Autodecrement Mode. Similar to the autoincrement mode, the autodecrement mode uses a register to hold the address of the operand. However, in this case the content of the autodecrement register is first decremented, and the new content is used as the effective address of the operand. In order to reflect the fact that the content of the autodecrement register is decremented before accessing the operand, a special symbol is placed before the parentheses indicating indirection (commonly written -(Rauto)). Consider, for example, the instruction load -(Rauto), Ri. This instruction decrements the content of register Rauto and then uses the new content as the effective address of the operand to be loaded into register Ri. Figure 2.11 illustrates the autodecrement addressing mode.

The seven addressing modes presented above are summarized in Table 2.2. In each case, the table shows the name of the addressing mode, its definition, and a generic example illustrating its use. In presenting the different addressing modes, we have used the load instruction for illustration; it should be understood, however, that these modes apply to other types of instructions in a given machine as well. In the following section we elaborate on the different types of instructions that typically constitute the instruction set of a given machine.

Instruction Types. The types of instructions forming the instruction set of a machine are an indication of the power of the underlying architecture. Machine instructions can in general be classified as in the following subsections.

Data Movement Instructions. Data movement instructions are used to move data among the different units of the machine, most notably among the registers of the CPU. A simple register-to-register movement of data can be made with the instruction move Ri, Rj. This instruction moves the content of register Ri to register Rj; its effect is to override the contents of the destination register Rj without changing the contents of the source register Ri. Data movement instructions also include those used to move data between registers and memory; these are usually referred to as the load and store instructions, respectively. Examples of these two instructions are load 25838, Rj and store Ri, 1024. The first instruction loads into destination register Rj the content of the memory location whose address is 25838; the content of the memory location is unchanged by executing the load instruction. The store instruction stores the content of source register Ri into memory location 1024; the content of the source register is unchanged by executing the store instruction.
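The addressing modes summarized in Table 2.2 can be captured by a small effective-address calculator. This is a sketch, not the book's notation: the mode names used as strings, the register names, and the particular register/memory contents are all assumptions for illustration:

```python
# Sketch: compute the operand (or follow the effective address) for several
# of the addressing modes of Table 2.2. Contents are illustrative.
memory = {1000: 2345, 2345: 99, 1500: 7}
regs = {"Rind": 1400, "PC": 900, "Rauto": 1500}

def operand(mode, value=None, reg=None, x=0):
    if mode == "immediate":      # operand is in the instruction itself
        return value
    if mode == "direct":         # instruction holds the operand's address
        return memory[value]
    if mode == "indirect":       # instruction holds the address of the address
        return memory[memory[value]]
    if mode == "indexed":        # EA = X + contents of the index register
        return memory[x + regs[reg]]
    if mode == "relative":       # EA = X + contents of the PC
        return memory[x + regs["PC"]]
    if mode == "autoincrement":  # EA = (Rauto); Rauto incremented afterwards
        ea = regs[reg]
        regs[reg] += 1
        return memory[ea]
    raise ValueError(mode)

print(operand("immediate", value=1000))        # 1000 (the value itself)
print(operand("direct", value=1000))           # 2345 (= memory[1000])
print(operand("indirect", value=1000))         # 99   (= memory[memory[1000]])
print(operand("indexed", reg="Rind", x=100))   # 7    (= memory[100 + 1400])
print(operand("relative", x=100))              # 2345 (= memory[100 + 900])
print(operand("autoincrement", reg="Rauto"))   # 7; Rauto becomes 1501
```

The direct/indirect prints reproduce the Figure 2.6 scenario: location 1000 holds 2345, so direct addressing yields 2345 while indirect addressing follows it one level further.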
Table 2.3 shows some of the common data transfer operations and their meanings.

Arithmetic and Logical Instructions. Arithmetic and logical instructions are used to perform arithmetic and logical manipulation of register and memory contents. Examples of arithmetic instructions include the add and subtract instructions add R1, R2, R0 and subtract R1, R2, R0. The first instruction adds the contents of source registers R1 and R2 and stores the result in destination register R0; the second subtracts the contents of the source registers R1 and R2 and stores the result in destination register R0. The contents of the source registers are unchanged by the add and subtract instructions. In addition to add and subtract, some machines have multiply and divide instructions. These two instructions are expensive to implement and could be substituted by repeated addition or repeated subtraction; therefore it is notable that most modern architectures nevertheless include multiply and divide instructions in their instruction sets. Table 2.4 shows some of the common arithmetic operations and their meanings. Logical instructions are used to perform logical operations such as shift, compare, and rotate; as their names indicate, these instructions perform, respectively, shift, compare, and rotate operations on register or memory contents. Table 2.5 presents a number of logical operations.

Sequencing Instructions. Control (sequencing) instructions are used to change the sequence in which instructions are executed. They take the form of conditional branching (conditional jump), unconditional branching (jump), or call instructions. A common characteristic among these instructions is that their execution changes the program counter (PC) value. The change made to the PC value can be unconditional, as, for example, in the unconditional branching (jump) instructions: in this case, the earlier value of the PC is lost, and execution of the program starts at the new value specified by the instruction. Consider, for example, the instruction jump NewAddress. Execution of this instruction causes the PC to be loaded with the memory address represented by NewAddress, whereby the instruction stored at that new address is executed. On the other hand, the change made to the PC by a branching instruction can be conditional, based on the value of a specific flag. Examples of such flags include the negative (N), zero (Z), overflow (V), and carry (C) flags. These flags represent individual bits of a specific register, called the condition code (CC) register, and their values are set based on the results of executing different instructions. The meanings of these flags are shown in Table 2.6.
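The way the N, Z, V, and C flags of Table 2.6 are set can be illustrated for an addition. This sketch assumes an 8-bit datapath and standard flag semantics (sign bit for N, carry out of the top bit for C, signed overflow for V); the function name and 8-bit width are assumptions of the example:

```python
# Sketch: setting condition-code flags after an 8-bit add.
def add_with_flags(a, b, bits=8):
    mask = (1 << bits) - 1
    full = (a & mask) + (b & mask)
    result = full & mask
    n = bool((result >> (bits - 1)) & 1)   # N: sign bit of the result is set
    z = result == 0                        # Z: the result is zero
    c = full > mask                        # C: carry out of the top bit
    # V: both operands agree in sign but the result's sign differs
    v = bool((((a ^ result) & (b ^ result)) >> (bits - 1)) & 1)
    return result, {"N": n, "Z": z, "C": c, "V": v}

# 0x7F + 0x01: signed overflow (127 + 1), negative result, no carry
print(add_with_flags(0x7F, 0x01))
# 0xFF + 0x01: carry out and a zero result
print(add_with_flags(0xFF, 0x01))
```

A conditional branch such as "branch if greater than zero" would then test a combination of these bits (here, neither N nor Z set).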
Consider, for example, the following group of instructions, in which the fourth instruction is a conditional branch instruction. It indicates that if the result of decrementing the contents of register R1 is greater than zero (the Z flag is not set), then the next instruction to be executed is the one labeled Loop. It should be noted that conditional branch instructions can be used to execute program loops, as shown.

Call instructions are used to cause execution of a program to transfer to a subroutine. A call instruction has the same effect as a jump in terms of loading the PC with a new value from which the next instruction is to be executed. However, with a call instruction, the incremented value of the PC (pointing to the next instruction in sequence) is pushed onto the stack. Execution of a return instruction in the subroutine loads the PC with the value popped from the stack; this has the effect of resuming program execution at the point where the branch to the subroutine occurred. Figure 2.12 shows a program segment that uses a call instruction. The segment sums up a number of values, N, and stores the result in memory location SUM. The values to be added are stored in N consecutive memory locations starting at NUM. The subroutine, called ADDITION, is used to perform the actual addition of the values, while the main program stores the result in SUM.

Input/Output Instructions. Input/output (I/O) instructions are used to transfer data between the computer and peripheral devices. The two basic I/O instructions used are the input and output instructions. The input instruction is used to transfer data from an input device to the processor; examples of input devices include the keyboard and the mouse. Input devices are interfaced with the computer through dedicated input ports, and some computers use dedicated addresses to address these ports. Suppose that the input port through which a keyboard is connected to a computer carries the unique address 1000. Execution of the instruction input 1000 will then cause the data stored in a specific register in the interface between the keyboard and the computer, call it the input data register, to be moved into a specific register in the computer, called the accumulator. Similarly, execution of the instruction output 2000 causes the data stored in the accumulator to be moved to the data output register of the output device whose address is 2000. Alternatively, a computer can address these ports in the usual way of addressing memory locations. In this case, the computer can input data from an input device by executing an instruction such as move Rin, R0, which moves the content of the input data register Rin to the general-purpose register R0.
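The call/return mechanism described above (push the incremented PC on call, pop it on return) can be sketched as follows; the concrete addresses are illustrative:

```python
# Sketch of call/return: the incremented PC is saved on the stack by `call`
# and restored by `ret`, resuming execution after the call site.
stack = []
pc = 100                  # address of the call instruction

def call(target):
    global pc
    stack.append(pc + 1)  # save the return address (next instruction)
    pc = target           # jump to the subroutine

def ret():
    global pc
    pc = stack.pop()      # resume at the instruction after the call

call(500)                 # pc is now 500 (the subroutine entry, say ADDITION)
# ... subroutine body executes here ...
ret()                     # pc is now 101, the instruction after the call
print(pc)                 # prints 101
```

Because the return address lives on the stack, calls can nest: a subroutine may itself call another, and returns unwind in last-in, first-out order.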
Similarly, the instruction move R0, Rin moves the contents of register R0 to register Rin, that is, it performs an output operation (Table 2.7 lists transfer-of-control operations). This latter scheme is called memory-mapped input/output. Among the advantages of memory-mapped I/O are the ability to execute the full set of memory-oriented instructions on the registers of I/O devices and the elimination of the need for dedicated I/O instructions; its main disadvantage is the need to dedicate part of the memory address space to I/O devices.", "url": "RV32ISPEC.pdf#segment1", "timestamp": "2023-10-16 19:10:02", "segment": "segment1", "image_urls": [], "Book": "COMPUTER-ARCHITECTURE-COM-314" }, { "section": "CHAPTER 3 ", "content": "Processing Unit Design. In this chapter we focus our attention on the main component of any computer system, the central processing unit (CPU). The primary function of the CPU is to execute a set of instructions stored in the computer's memory. A simple CPU consists of a set of registers, an arithmetic logic unit (ALU), and a control unit (CU). In what follows, the reader is introduced to the organization and main operations of the CPU.

CPU Basics. A typical CPU has three major components: (1) the register set, (2) the arithmetic logic unit (ALU), and (3) the control unit (CU). The register set differs from one computer architecture to another; it is usually a combination of general-purpose and special-purpose registers. General-purpose registers can be used for any purpose, hence the name. Special-purpose registers have specific functions within the CPU: for example, the program counter (PC) is a special-purpose register used to hold the address of the instruction to be executed next, and the instruction register (IR) is used to hold the instruction that is currently being executed. The ALU provides the circuitry needed to perform the arithmetic, logical, and shift operations demanded by the instruction set; in Chapter 4 we covered a number of arithmetic operations and the circuits used to support their computation. The control unit is the entity responsible for fetching the instruction to be executed from main memory and decoding and then executing it. Figure 5.1 shows the main components of the CPU and its interactions with the memory system and the input/output devices. The CPU fetches instructions from memory, reads and writes data from and to memory, and transfers data from and to input/output devices.

A typical and simple execution cycle can be summarized as follows.
The next instruction to be executed, whose address is obtained from the PC, is fetched from memory and stored in the IR. The instruction is decoded; operands are fetched from memory and stored in CPU registers, if needed; the instruction is executed; and results are transferred from CPU registers to memory, if needed. The execution cycle is repeated as long as there are more instructions to execute. A check for pending interrupts is usually included in the cycle; examples of interrupts include an I/O device request, arithmetic overflow, and a page fault. When an interrupt request is encountered, a transfer to an interrupt-handling routine takes place. Interrupt-handling routines are programs invoked to collect the state of the currently executing program, correct the cause of the interrupt, and restore the state of the program.

The actions of the CPU during an execution cycle are defined by micro-orders issued by the control unit; these micro-orders are individual control signals sent over dedicated control lines. For example, let us assume that we want to execute an instruction that moves the contents of register X to register Y, and let us also assume that both registers are connected to the data bus D. The control unit will issue a control signal that tells register X to place its contents on data bus D; after some delay, another control signal will be sent that tells register Y to read from data bus D. The activation of these control signals is determined using either hardwired control or microprogramming, as discussed later.

Register Set. Registers are essentially extremely fast memory locations within the CPU that are used to create and store the results of CPU operations and calculations. Different computers have different register sets, which differ in the number of registers, the register types, and the length of each register; they also differ in the usage of each register. General-purpose registers can be used for multiple purposes and assigned a variety of functions by the programmer, whereas special-purpose registers are restricted to specific functions. In some cases, registers are used to hold data or operand addresses. The length of a data register must be long enough to hold values of the supported data types; some machines allow two contiguous registers to hold double-length values. Address registers may be dedicated to a particular addressing mode or may be used for general addressing; an address register must be long enough to hold the largest address. The number of registers in a particular architecture affects the instruction set design: a small number of registers may result in an increase in memory references. Another type of register is used to hold processor status bits, or flags.
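The execution cycle just described (fetch via the PC into the IR, decode, execute, repeat) can be sketched as a miniature interpreter. The two-word instruction format and the opcode names LOAD/ADD/HALT are assumptions of this sketch, not the book's instruction set:

```python
# Miniature fetch-decode-execute loop over a toy two-word format:
# [opcode, operand]. Opcodes are illustrative.
memory = [
    "LOAD", 5,    # acc <- 5
    "ADD", 7,     # acc <- acc + 7
    "HALT", 0,
]
pc, ir, acc = 0, None, 0

while True:
    ir = (memory[pc], memory[pc + 1])  # fetch: instruction at PC goes to IR
    pc += 2                            # PC now points to the next instruction
    opcode, operand = ir               # decode
    if opcode == "LOAD":               # execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        break
    # (a real cycle would check for pending interrupts at this point)

print(acc)  # prints 12
```

Note that the PC is updated at fetch time, before execution; this is what makes the saved "PC + instruction length" value the correct return address for calls and interrupts.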
Status bits are set by the CPU as the result of the execution of an operation and can be tested at a later time as part of another operation.

Memory Access Registers. Two registers are essential in memory write and read operations: the memory data register (MDR) and the memory address register (MAR). The MDR and MAR are used exclusively by the CPU and are not directly accessible to programmers. In order to perform a write operation into a specified memory location, the MDR and MAR are used as follows: the word to be stored in the memory location is first loaded into the MDR, and the address of the location is loaded into the MAR.

Instruction Fetching Registers. Two main registers are involved in fetching an instruction for execution: the program counter (PC) and the instruction register (IR). The PC contains the address of the next instruction to be fetched, and the fetched instruction is loaded into the IR for execution. After a successful instruction fetch, the PC is updated to point to the next instruction to be executed; in the case of a branch operation, the PC is updated to point to the branch target instruction after the branch is resolved, that is, after the target address is known.

Condition Registers. Condition registers, or flags, are used to maintain status information. Some architectures contain a special program status word (PSW) register: the PSW contains bits that are set by the CPU to indicate the current status of an executing program. These indicators are typically for arithmetic operations, interrupts, memory protection information, and processor status.

Special-Purpose Address Registers. Index Register: as covered earlier, in index addressing the address of the operand is obtained by adding a constant to the content of a register, called the index register, which holds an address displacement. Index addressing is indicated in the instruction by including the name of the index register in parentheses and using the symbol X to indicate the constant to be added. Segment Pointers: to support segmentation, the address issued by the processor consists of a segment number (base) and a displacement (offset) within the segment; a segment register holds the base address of the segment. Stack Pointer: recall that a stack is a data organization mechanism in which the last data item stored is the first data item retrieved, and that two specific operations, push and pop, are performed on the stack. A specific register, called the stack pointer (SP), is used to indicate the stack location that can be addressed. In the push operation, the SP value is used to indicate the location called the top of the stack; after storing (pushing) a value, the SP is incremented. In some architectures, e.g., the x86, the SP is instead decremented, since the stack grows toward lower memory addresses.
Datapath. The CPU can be divided into a data section and a control section. The data section, also called the datapath, contains the registers and the ALU; the datapath is capable of performing certain operations on data items. The control section is basically the control unit, which issues control signals to the datapath. Internal to the CPU, data move from one register to another and between the ALU and registers. These internal data movements are performed via local buses, which may carry data, instructions, and addresses. Externally, data move from the registers to memory and I/O devices, often by means of a system bus. Internal data movement among registers and between the ALU and registers may be carried out using different organizations, including one-bus, two-bus, and three-bus organizations. Dedicated datapaths may also be used between components that transfer data between themselves more frequently. For example, the contents of the PC are transferred to the MAR to fetch a new instruction at the beginning of each instruction cycle; hence, a dedicated datapath from the PC to the MAR can be useful in speeding up this part of instruction execution.

One-Bus Organization. Using one bus, the CPU registers and the ALU use a single bus to move outgoing and incoming data. Since the bus can handle only a single data movement within one clock cycle, two-operand operations will need two cycles to fetch the operands for the ALU, and additional registers may be needed to buffer data for the ALU. This bus organization is the simplest and least expensive, but it limits the amount of data transfer that can be done in the same clock cycle, which will slow down overall performance. Figure 5.3 shows a one-bus datapath consisting of a set of general-purpose registers, a memory address register (MAR), a memory data register (MDR), an instruction register (IR), a program counter (PC), and an ALU.

Two-Bus Organization. Using two buses is a faster solution than the one-bus organization. In this case, the general-purpose registers are connected to both buses, and data can be transferred from two different registers to the input points of the ALU at the same time; a two-operand operation can therefore fetch both operands in the same clock cycle. An additional buffer register may be needed to hold the output of the ALU while the two buses are busy carrying the two operands. Figure 5.4a shows a two-bus organization. In some cases, one of the buses may be dedicated to moving data into registers (in-bus), while the other is dedicated to transferring data out of the registers (out-bus). In this case, an additional buffer register may be used at one of the ALU inputs to hold one of the operands, and the ALU output can be connected directly to the in-bus so that the result can be transferred to one of the registers.
Figure 5.4b shows the two-bus organization with an in-bus and an out-bus.

Three-Bus Organization. In a three-bus organization, two of the buses may be used as source buses while the third is used as a destination bus. The source buses move data out of the registers (out-bus), and the destination bus may move data into a register (in-bus). The two out-buses are connected to the ALU input points, and the output of the ALU is connected directly to the in-bus. As can be expected, the more buses we have, the more data we can move within a single clock cycle; however, increasing the number of buses will also increase the complexity of the hardware. Figure 5.5 shows an example of a three-bus datapath.

CPU Instruction Cycle. The sequence of operations performed by the CPU during its execution of instructions is presented in Figure 5.6. As long as there are instructions to execute, the next instruction is fetched from main memory, and the instruction is executed based on the operation specified in the opcode field of the instruction. At the completion of the instruction execution, a test is made to determine whether an interrupt has occurred; the interrupt-handling routine needs to be invoked in case of an interrupt.

The basic actions of fetching an instruction, executing an instruction, and handling an interrupt (Figure 5.6) are defined by sequences of micro-operations: for each action, a group of control signals must be enabled in a prescribed sequence to trigger the execution of the corresponding micro-operations. In this section we show the micro-operations that implement instruction fetch, the execution of simple arithmetic instructions, and interrupt handling.

Fetch Instructions. The sequence of events in fetching an instruction can be summarized as follows: the contents of the PC are loaded into the MAR; the value in the PC is incremented (this operation can be done in parallel with the memory access); as a result of the memory read operation, the instruction is loaded into the MDR; and the contents of the MDR are loaded into the IR. Considering the one-bus datapath organization shown in Figure 5.3, the fetch operation is accomplished in three steps, t0 through t2, as shown in the accompanying table; note that multiple operations listed in the same step are accomplished in parallel. A corresponding table can be constructed for the three-bus datapath shown in Figure 5.5.

The following tables show the steps needed to execute the simple arithmetic operation add R1, R2, R0, which adds the contents of source registers R1 and R2 and stores the result in destination register R0. The addition is executed as follows: the register numbers R0, R1, and R2 are extracted from the IR; the contents of R1 and R2 are passed to the ALU for addition; and the output of the ALU is transferred to R0.
Using the one-bus datapath shown in Figure 5.3, the addition takes three steps, t0 to t2, as shown in the accompanying table. Using the two-bus datapath shown in Figure 5.4a, the addition takes two steps, t0 and t1, and the same holds for the two-bus datapath with in-bus and out-bus shown in Figure 5.4b. Using the three-bus datapath shown in Figure 5.5, the addition takes a single step.

Next, consider the instruction add X, R0, which adds the contents of memory location X to register R0 and stores the result in R0. The addition is executed as follows: the address X is extracted from the IR and loaded into the MAR; as a result of a memory read operation, the contents of location X are loaded into the MDR; and the contents of the MDR are added to the contents of R0. Using the one-bus datapath shown in Figure 5.3, this addition takes five steps, t0 to t4. Using the two-bus datapath shown in Figure 5.4a, the addition takes four steps, t0 to t3, as it does with the two-bus datapath with in-bus and out-bus shown in Figure 5.4b. Using the three-bus datapath shown in Figure 5.5, the addition takes three steps, t0 to t2.

Interrupt Handling. After the execution of an instruction, a test is performed to check for pending interrupts. If an interrupt request is waiting, the following steps take place: the contents of the PC (to be saved) are loaded into the MDR; the MAR is loaded with the address at which the PC contents are to be saved; the PC is loaded with the address of the first instruction of the interrupt-handling routine; and the contents of the MDR (the old value of the PC) are stored in memory. The accompanying table shows this sequence of events.

Control Unit. The control unit is the main component that directs the system's operations by sending control signals to the datapath. These signals control the flow of data within the CPU and between the CPU and external units such as memory and I/O. Control buses generally carry signals between the control unit and other computer components, and the components operate in a clock-driven manner: the system clock produces a continuous sequence of pulses of specified duration and frequency, yielding the sequence of timing steps t0, t1, t2, ... used to execute a given instruction. The opcode field of the fetched instruction is decoded to provide the control signal generator with information about the instruction to be executed; this information, together with the timing step, is used as input to a logic circuit module that generates the control signals. The signal generator can be specified simply as a set of Boolean equations giving each output in terms of the inputs. Figure 5.7 shows a block diagram describing the timing used in
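The one-bus add sequence discussed above can be replayed in register-transfer form. The buffer registers Y (first ALU operand) and Z (ALU output) are an assumption typical of one-bus datapaths rather than names taken from the book's tables:

```python
# Sketch: replaying the one-bus micro-operation sequence for add R1, R2, R0.
# Y buffers the first ALU operand; Z buffers the ALU output (assumed names).
regs = {"R0": 0, "R1": 4, "R2": 9, "Y": 0, "Z": 0}

steps = [
    ("t0", "R1 -> Y",     lambda r: r.update(Y=r["R1"])),
    ("t1", "R2 + Y -> Z", lambda r: r.update(Z=r["R2"] + r["Y"])),
    ("t2", "Z -> R0",     lambda r: r.update(R0=r["Z"])),
]

for t, rtn, action in steps:
    action(regs)
    print(t, rtn, "| R0 =", regs["R0"])
# After t2, R0 holds 13. A three-bus datapath collapses the same work into
# a single step: t0: R1 + R2 -> R0, since both operands travel at once.
```

Each tuple corresponds to one clock step, which is exactly why the bus count determines the step count: one bus allows one transfer per step, so the two operand fetches and the write-back cannot overlap.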
generating the control signals. There are mainly two different types of control units: microprogrammed and hardwired. In microprogrammed control, the control signals associated with operations are stored as control words in special memory units that are inaccessible to the programmer; a control word, or microinstruction, specifies one or more micro-operations. A sequence of microinstructions is called a microprogram, and it is stored in a ROM or RAM called the control memory (CM). In hardwired control, fixed logic circuits that correspond directly to the Boolean expressions are used to generate the control signals. Clearly, hardwired control is faster than microprogrammed control; however, hardwired control can be expensive and complicated for complex systems, and is therefore economical mainly for small control units. It should also be noted that microprogrammed control can adapt more easily to changes in the system design: we can easily add new instructions without changing the hardware, whereas hardwired control may require a redesign of the entire system in the case of any change.

Hardwired Control. In hardwired control, a direct implementation is accomplished using logic circuits. For each control line, one must find the Boolean expression for that line in terms of the inputs to the control signal generator shown in Figure 5.7. Let us explain the implementation using a simple example. Assume that the instruction set of a machine has the three instructions Inst-x, Inst-y, and Inst-z, and that A, B, C, D, E, F, G, and H are the control lines. The accompanying table shows the control lines that should be activated for the three instructions at each of the three steps t0, t1, and t2. Figure 5.10 shows the logic circuits for a number of these control lines, derived from their Boolean expressions; the rest of the control lines can be obtained in a similar way. Figure 5.11 shows the state diagram of the execution cycle of the instructions.

Microprogrammed Control Unit. The idea of microprogrammed control units was introduced by M. V. Wilkes in the early 1950s. Microprogramming was motivated by the desire to reduce the complexities involved with hardwired control, as studied above. As mentioned, an instruction is implemented using a set of micro-operations, and associated with each micro-operation is a set of control lines that must be activated to carry out the corresponding micro-operation. The idea of microprogrammed control is to store the control signals associated with the implementation of a certain instruction as a microprogram in a special memory called the control memory (CM). A microprogram consists of a sequence of microinstructions.
A microinstruction is a vector of bits, in which each bit represents a control signal, a condition code, or part of the address of the next microinstruction. Microinstructions are fetched from the CM in the same way program instructions are fetched from main memory (Figure 5.12). When an instruction is fetched from memory, the opcode field of the instruction determines which microprogram is to be executed; in other words, the opcode is mapped to a microinstruction address in the control memory. The microinstruction processor uses that address to fetch the first microinstruction of the microprogram. After fetching each microinstruction, the appropriate control lines are enabled: every control line that corresponds to a 1 bit is turned on, and every control line that corresponds to a 0 bit is left off. After completing the execution of one microinstruction, a new microinstruction is fetched and executed. If the condition code bits indicate that a branch must be taken, the next microinstruction is specified in the address bits of the current microinstruction; otherwise, the next microinstruction in sequence is fetched and executed.

The length of a microinstruction is determined based on the number of micro-operations that can be specified in a microinstruction, the way the control bits are interpreted, and the way the address of the next microinstruction is obtained. A microinstruction may specify a number of micro-operations to be activated simultaneously, and the length of the microinstruction increases as the number of parallel micro-operations per microinstruction increases. Furthermore, if each control bit of a microinstruction corresponds to exactly one control line, the length of the microinstruction can become large. The length of a microinstruction can be reduced if control lines are coded into specific fields of the microinstruction; decoders are then needed to map each field to the individual control lines.
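The bit-per-line enabling just described, and the field-encoding alternative, can be sketched as follows; the control-line names A through H are carried over from the hardwired example above, while the function names are assumptions of this sketch:

```python
# Sketch: enabling control lines from a horizontal microinstruction (one bit
# per line) versus decoding a 3-bit field of a vertical microinstruction.
LINES = ["A", "B", "C", "D", "E", "F", "G", "H"]

def horizontal_enable(microinstruction):
    # Every control line whose bit is 1 is turned on; any subset may fire.
    return [line for line, bit in zip(LINES, microinstruction) if bit]

def vertical_enable(field):
    # A k-bit field selects exactly ONE of 2^k control lines via a decoder.
    return [LINES[field]]

print(horizontal_enable([1, 0, 1, 1, 0, 0, 0, 0]))  # ['A', 'C', 'D']
print(vertical_enable(0b010))                        # ['C']
```

The contrast makes the tradeoff concrete: the horizontal form needs 8 bits but can activate A, C, and D together, while the vertical form needs only 3 bits but can name just one of the eight lines per field.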
Clearly, using decoders reduces the number of control lines that can be activated simultaneously; there is thus a tradeoff between the length of the microinstructions and the amount of parallelism. It is important to reduce the length of microinstructions in order to reduce the cost and access time of the control memory, but it may also be desirable for micro-operations to be performed in parallel and for more control lines to be activated simultaneously.

Horizontal Versus Vertical Microinstructions. Microinstructions can be classified as horizontal or vertical. The individual bits in horizontal microinstructions correspond to individual control lines: horizontal microinstructions are long and allow maximum parallelism, since each bit controls a single control line. In vertical microinstructions, control lines are coded into specific fields within the microinstruction, and decoders are needed to map each field to the individual control lines; a field of k bits can specify one of 2^k possible combinations of control lines. For example, a 3-bit field in a microinstruction could be used to specify any one of eight possible lines. Owing to this encoding, vertical microinstructions are much shorter than horizontal ones; however, the control lines encoded in the same field cannot be activated simultaneously, so vertical microinstructions allow only limited parallelism. It should be noted that no decoding is needed in the case of horizontal microinstructions, while decoding is necessary in the vertical case.

Example 3. Consider the three-bus datapath shown in Figure 5.5, which has, in addition to the PC, IR, MAR, and MDR, 16 general-purpose registers numbered R0 through R15. Also assume that the ALU supports eight functions, including add, subtract, multiply, divide, shift left, and shift right. Consider the add operation add R1, R2, R0, which adds the contents of source registers R1 and R2 and stores the result in destination register R0. In this example we study the format of the microinstruction under a horizontal organization, that is, using horizontal microinstructions with a control bit for each control line. The format of the microinstruction must contain control bits for the following: the ALU operations; the register whose output is placed on out-bus-1 (source 1); the register whose output is placed on out-bus-2 (source 2); and the register whose input is taken from the in-bus (destination). The accompanying table shows the number of bits needed for the ALU, source 1, source 2, and destination fields.", "url": "RV32ISPEC.pdf#segment2", "timestamp": "2023-10-16 19:10:04", "segment": "segment2", "image_urls": [], "Book": "COMPUTER-ARCHITECTURE-COM-314" },
{ "section": "CHAPTER 4 Memory System Design ", "content": "Memory Hierarchy. A typical memory hierarchy starts with a small, expensive, and relatively fast unit, called the cache, followed by a larger, less expensive, and relatively slow main memory unit. Cache and main memory are built using solid-state semiconductor material, typically CMOS transistors; it is customary to call this fast level of the memory hierarchy the primary (solid-state) memory. The primary memory is followed by larger, less expensive, and far slower magnetic memories, which consist typically of a hard disk and a tape: it is customary to call the disk the secondary memory, while the tape is conventionally called the tertiary memory. The objective behind designing a memory hierarchy is to have a memory system that performs as if it consisted entirely of the fastest unit, while its cost is dominated by the cost of the slowest unit.

A memory hierarchy can be characterized by a number of parameters; among these parameters are the access type, capacity, cycle time, latency, bandwidth, and cost. The term access refers to the action that physically takes place during a read or write operation. The capacity of a memory level is usually measured in bytes. The cycle time is defined as the time elapsed from the start of a read operation to the start of a subsequent read. The latency is defined as the time interval between the request for information and the access to the first bit of that information. The bandwidth provides a measure of the number of bits per second that can be accessed. The cost of a memory level is usually specified as dollars per megabyte. Figure 6.1 depicts a typical memory hierarchy, and Table 6.1 provides typical values of the memory hierarchy parameters.

The term random access refers to the fact that any access to any memory location takes the same fixed amount of time, regardless of the actual memory location and/or the sequence of accesses that has taken place. For example, if a write operation to memory location 100 takes 15 ns, and if this operation is followed by a read operation from memory location 3000, then the latter operation will also take 15 ns. This is to be compared with sequential access, in which, if access to location 100 takes 500 ns and a consecutive access to location 101 takes 505 ns, then it is expected that an access to location 300 may take 1500 ns, since the memory has to cycle through locations 100 to 300, with each location requiring 5 ns.

The effectiveness of a memory hierarchy depends on the principle of moving information into the fast memory infrequently and accessing it many times before replacing it with new information. This principle is possible due to a phenomenon called locality of reference.
Within a given period of time, programs tend to reference a relatively confined area of memory repeatedly. There exist two forms of locality: spatial and temporal locality. Spatial locality refers to the phenomenon that when a given address has been referenced, it is most likely that addresses near it will be referenced within a short period of time; consider, for example, consecutive instructions in a straight-line program. Temporal locality, on the other hand, refers to the phenomenon that once a particular memory item has been referenced, it is most likely that it will be referenced again soon; consider, for example, an instruction in a program loop.

The sequence of events that takes place when the processor makes a request for an item is as follows. First, the item is sought in the first memory level of the memory hierarchy. The probability of finding the requested item in the first level is called the hit ratio, h1; the probability of not finding (missing) the requested item in the first level of the memory hierarchy is called the miss ratio, (1 - h1). When the requested item causes a miss, it is sought in the next subsequent memory level. The probability of finding the requested item in the second memory level, the hit ratio of the second level, is h2; the miss ratio of the second memory level is (1 - h2). This process is repeated until the item is found; upon finding the requested item, it is brought and sent to the processor. For a memory hierarchy that consists of three levels, the average memory access time can be expressed in terms of these hit ratios, where the average access time of a memory level is defined as the time required to access one word in that level, and t1, t2, and t3 represent, respectively, the access times of the three levels.

Cache Memory. Cache memory owes its introduction to Wilkes back in 1965. At that time, Wilkes distinguished between two types of main memory: the conventional memory and the slave memory. In Wilkes' terminology, the slave memory was a second level of unconventional high-speed memory, which nowadays corresponds to what is called cache memory (the term cache means a safe place for hiding or storing things). The idea behind using a cache as the first level of the memory hierarchy is to keep the information expected to be used more frequently by the CPU in the cache, a small, high-speed memory that is near the CPU. The end result is that, at any given time, some active portion of the main memory is duplicated in the cache. Therefore, when the processor makes a request for a memory reference, the request is first sought in the cache: if the request corresponds to an element that is currently residing in the cache, we call that a cache hit; if the request corresponds to an element that is not currently in the cache, we call that a cache miss.
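The three-level average access time alluded to above is not reproduced in this extract; under the usual reconstruction (an assumption of this sketch), an item missing from a level is always found further down, giving t_av = h1*t1 + (1 - h1)*(h2*t2 + (1 - h2)*t3):

```python
# Sketch (assumed formula): average access time for a three-level hierarchy,
# taking the bottom level to always contain the requested item.
def t_av(h1, h2, t1, t2, t3):
    return h1 * t1 + (1 - h1) * (h2 * t2 + (1 - h2) * t3)

# e.g. 95% cache hits at 2 ns, 99% main-memory hits at 50 ns, 10 ms disk:
print(t_av(0.95, 0.99, 2, 50, 10_000_000), "ns")
```

The boundary cases are a quick sanity check: with h1 = 1 the result collapses to t1, and with h1 = 0, h2 = 1 it collapses to t2, matching the definitions of the hit ratios.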
in the cache, and the cache miss ratio, (1 - hc), is defined as the probability of not finding the requested element in the cache. In the case that the requested element is not found in the cache, it has to be brought from a subsequent memory level in the memory hierarchy. Assuming that the element exists in the next memory level, that is, the main memory, it has to be brought and placed in the cache. In expectation that the next requested element will be residing in the neighboring locality of the currently requested element (spatial locality), upon a cache miss what is actually brought from the main memory is a block of elements that contains the requested element. The advantage of transferring a block from the main memory to the cache will be most visible if it is possible to transfer such a block using one main memory access time. Such a possibility can be achieved by increasing the rate at which information is transferred between the main memory and the cache. One possible technique used to increase this bandwidth is memory interleaving. To achieve best results, we can assume that the block brought from the main memory to the cache, upon a cache miss, consists of elements stored in different memory modules, whereby consecutive memory addresses are stored in successive memory modules. Figure 6.2 illustrates a simple case of a main memory consisting of eight memory modules. It is assumed in this case that a block consists of 8 bytes.

Having introduced the basic idea leading to the use of a cache memory, we would like to assess the impact of temporal and spatial locality on the performance of the memory hierarchy. In order to make such an assessment, we limit our deliberation to the simple case of a hierarchy consisting of two levels, that is, the cache and the main memory. We assume that the main memory access time is tm and the cache access time is tc. We measure the impact of locality in terms of the average access time of the two-level hierarchy, defined as the average time required to access an element (a word) requested by the processor.

Impact of Temporal Locality

In this case, we assume that instructions in program loops, which are executed many times, for example, n times, once loaded into the cache, are used more than once before being replaced by new instructions. The average access time, tav, is given by

tav = (tm + n tc) / n = tc + tm / n

In deriving this expression, it was assumed that the requested memory element created a cache miss, thus leading to the transfer of a main memory block in time tm, following which n accesses were made to the requested element, each taking time tc. The expression reveals that as the number of repeated accesses, n, increases, the average access time decreases, a desirable feature of the memory hierarchy.

Impact of Spatial Locality

In this case, it is assumed that the size of the block transferred from the main memory to the cache, upon a cache
miss, consists of m elements. We also assume that, due to spatial locality, all m elements are requested, one at a time, by the processor. Based on these assumptions, the average access time, tav, is given by

tav = (tm + m tc) / m = tc + tm / m

In deriving this expression, it was assumed that the requested memory element created a cache miss, thus leading to the transfer of a main memory block, consisting of m elements, in time tm. Following that, m accesses, one for each of the elements constituting the block, were made, each taking time tc. The expression reveals that as the number of elements in a block, m, increases, the average access time decreases, a desirable feature of the memory hierarchy.

Cache Memory Organization

There are three main organization techniques used for cache memory. The three techniques are discussed below. These techniques differ in two main aspects: (1) the criterion used to place an incoming block from the main memory into the cache, and (2) the criterion used to replace a cache block by an incoming block when the cache is full.

Direct Mapping

This is the simplest among the three techniques. Its simplicity stems from the fact that it places an incoming main memory block into a specific fixed cache block location. The placement is done based on a fixed relation between the number of the incoming block, i, the number of the cache block, j, and the number of cache blocks, N:

j = i mod N
characters per second.

Striking a character on the keyboard causes that character, in the form of its ASCII code, to be sent to the computer. The amount of time that passes before the next character is sent depends on the skill of the user and sometimes even on his/her speed of thinking: often the user knows what he/she wants to input, but sometimes he/she needs to think before touching the next button on the keyboard. Therefore, input from a keyboard is slow and of a bursty nature, and it is a waste of time for the computer to spend valuable cycles waiting for input from such slow input devices. A mechanism is therefore needed whereby a device can interrupt the processor, asking for attention, whenever it is ready. This is called interrupt-driven communication between the computer and I/O devices (see Section 8.3).

Consider also the case of a disk. A typical disk is capable of transferring data at rates exceeding several million bytes per second. It would be a waste of time to transfer data byte by byte, or even word by word. Therefore, it is always the case that data are transferred in the form of blocks, or even entire programs. It is also necessary to provide a mechanism that allows a disk to transfer a huge volume of data without the intervention of the CPU, allowing the CPU to perform other useful operations while the data is transferred between the disk and the memory.

Basic Concepts

Figure 8.1 shows a simple arrangement connecting the processor and the memory in a given computer system to an input device, for example, a keyboard, and an output device, for example, a graphic display. A single bus consisting of the required address, data, and control lines is used to connect the system components. In Figure 8.1, we are concerned with the way the processor and the I/O devices exchange data. As indicated in the introduction, there exists a big difference between the rate at which the processor can process information and the rates of input and output devices. One simple way to accommodate this speed difference is to let the input device, for example, a keyboard, deposit the character struck by the user in a register (the input register), which indicates the availability of that character to the processor. When the input character has been taken by the processor, this is indicated to the input device so that it can proceed to input the next character, and so on. Similarly, when the processor has a character to output to the display, it deposits it in a specific register dedicated to communication with the graphic display (the output register). When the character has been taken by the graphic display, this is indicated to the processor so that it can proceed to output the next character. This simple way of communication between the processor and I/O devices, called an I/O protocol, requires the availability of input and output registers. In a typical computer system, there are a number of
input registers, each belonging to a specific input device, and also a number of output registers, each belonging to a specific output device. In addition, a mechanism according to which the processor can address those input and output registers must be adopted. More than one arrangement exists to satisfy the abovementioned requirements; two particular methods are explained below.

In the first arrangement, I/O devices are assigned particular addresses, isolated from the address space assigned to the memory. The execution of an input instruction at an input device address causes the character stored in the input register of that device to be transferred to a specific register in the CPU. Similarly, the execution of an output instruction at an output device address causes the character stored in a specific register in the CPU to be transferred to the output register of that output device. This arrangement, called shared I/O, is shown schematically in Figure 8.2. In this case, the address and data lines from the CPU are shared between the memory and the I/O devices; a separate control line is used since input and output instructions need to be distinguished. In a typical computer system, there exists more than one input and more than one output device; therefore, there is a need for address decoder circuitry for device identification. There is also a need for status registers for each input and output device. The status of an input device, that is, whether it is ready to send data to the processor, is stored in the status register of that device. Similarly, the status of an output device, that is, whether it is ready to receive data from the processor, is stored in the status register of that device. Input and output registers, status registers, and address decoder circuitry represent the main components of an I/O interface (module).

The main advantage of the shared I/O arrangement is the separation between the memory address space and that of the I/O devices. Its main disadvantage is the need for special input and output instructions in the processor instruction set. The shared I/O arrangement is mostly adopted by Intel.

The second possible I/O arrangement is to deal with input and output registers as if they were regular memory locations. In this case, a read operation from the address corresponding to the input register of an input device, for example, Read Device 6, is equivalent to performing an input operation from the input register of Device 6. Similarly, a write operation to the address corresponding to the output register of an output device, for example, Write Device 9, is equivalent to performing an output operation into the output register of Device 9. This arrangement is called memory-mapped I/O and is shown in Figure 8.3. The main advantage of memory-mapped I/O is
the use of the read and write instructions of the processor to perform the input and output operations, respectively; it eliminates the need for introducing special I/O instructions. Its main disadvantage is the need to reserve a certain part of the memory address space for addressing I/O devices, that is, a reduction of the available memory address space. The memory-mapped I/O arrangement is mostly adopted by Motorola.

Interrupt-Driven I/O

It is often necessary to interrupt the normal flow of a program, for example, to react to abnormal events such as power failure. An interrupt can also be used to acknowledge the completion of a particular course of action, such as a printer indicating to the computer that it has completed printing the character(s) in its input register and is ready to receive other characters. An interrupt can also be used in time-sharing systems to allocate CPU time among different programs. The instruction sets of modern CPUs often include instruction(s) that mimic the actions of hardware interrupts.

When the CPU is interrupted, it is required to discontinue its current activity, attend to the interrupting condition (serve the interrupt), and then resume its activity from wherever it stopped. Discontinuing the processor's current activity requires finishing the execution of the current instruction, saving the processor status (mostly in the form of pushing register values onto a stack), and transferring control (jumping) to what is called the interrupt service routine (ISR). The service offered to an interrupt depends on its source. For example, if the interrupt is due to power failure, the action taken is to save the values of all processor registers and pointers so that resumption of correct operation can be guaranteed upon power return. In the case of an I/O interrupt, serving the interrupt means performing the required data transfer. Upon finishing serving an interrupt, the processor restores the original status by popping the relevant values from the stack. Once the processor returns to its normal state, it can enable other sources of interrupt.

One important point that was overlooked in the above scenario is the issue of serving multiple interrupts, for example, the occurrence of yet another interrupt while the processor is currently serving an interrupt. The response to the new interrupt depends upon its priority with respect to that of the interrupt currently being served. If the newly arrived interrupt has priority less than or equal to that of the currently served one, then it waits until the processor finishes serving the current interrupt. If, on the other hand, the newly arrived interrupt has priority higher than that of the currently served interrupt, for example, a
power failure interrupt occurring while an I/O interrupt is being served, then the processor pushes its status onto the stack and serves the higher priority interrupt. Correct handling of multiple interrupts, in terms of storing and restoring the correct processor status, is guaranteed by the way push and pop operations are performed. For example, while serving the first interrupt, Status 1 is pushed onto the stack; upon receiving the second interrupt, Status 2 is pushed onto the stack. Upon finishing serving the second interrupt, Status 2 is popped from the stack, and upon finishing serving the first interrupt, Status 1 is popped from the stack.

It is possible for an interrupting device to identify itself to the processor by sending a code following the interrupt request. The code sent by a given I/O device can represent its I/O address or the memory address location of the start of the ISR for that device. This scheme is called vectored interrupt.

Interrupt Hardware

In the discussion above, we have assumed that the processor recognizes the occurrence of an interrupt before proceeding to serve it. Computers are provided with interrupt hardware capability in the form of specialized interrupt lines to the processor; these lines are used to send interrupt signals to the processor. In the case of I/O, there exists more than one I/O device, and the processor should be provided with a mechanism that enables it to handle simultaneous interrupt requests and to recognize the interrupting device. Two basic schemes can be implemented to achieve this task. The first is called daisy chain bus arbitration (DCBA) and the second is called independent source bus arbitration (ISBA).

Interrupts in Operating Systems

When an interrupt occurs, the operating system gains control. The operating system saves the state of the interrupted process, analyzes the interrupt, and passes control to the appropriate routine to handle it. There are several types of interrupts, including I/O interrupts. An I/O interrupt notifies the operating system that an I/O device has completed or suspended its operation and needs some service from the CPU. To process an interrupt, the context of the current process must be saved and the interrupt handling routine must be invoked; this process is called context switching. A process context has two parts: processor context and memory context. The processor context is the state of the CPU's registers, including the program counter (PC), program status words (PSWs), and other registers. The memory context is the state of the program's memory, including the program and its data. The interrupt handler is the routine that processes each different type of interrupt. The operating system must provide programs with save
areas for their contexts, and it must also provide an organized way of allocating and deallocating memory for the interrupted process. When the interrupt handling routine finishes processing the interrupt, the CPU is dispatched either to the interrupted process or to the highest priority ready process; this depends on whether the interrupted process is preemptive or nonpreemptive. If the process is nonpreemptive, it gets the CPU again: first its context is restored, then control is returned to it.

Figure 8.7 shows the layers of software involved in I/O operations. First, the program issues an I/O request via an I/O call. The request is passed down to the I/O device. When the device completes the I/O, an interrupt is sent and the interrupt handler is invoked. Eventually, control is relinquished back to the process that initiated the I/O.

Direct Memory Access (DMA)

The main idea of direct memory access (DMA) is to enable peripheral devices to cut out the "middle man" role of the CPU in data transfer. It allows peripheral devices to transfer data directly from and to memory without the intervention of the CPU. Having peripheral devices access memory directly allows the CPU to do other work, which leads to improved performance, especially in the case of large transfers.

The DMA controller is a piece of hardware that controls one or more peripheral devices. It allows devices to transfer data to or from the system's memory without the help of the processor. In a typical DMA transfer, some event notifies the DMA controller that data needs to be transferred to or from memory. Both the DMA controller and the CPU use the memory bus, and only one of them can use the memory at a time. The DMA controller sends a request to the CPU asking permission to use the bus. The CPU returns an acknowledgment to the DMA controller, granting it bus access. The DMA controller can then take control of the bus to independently conduct the memory transfer. When the transfer is complete, the DMA controller relinquishes control of the bus back to the CPU. Processors that support DMA provide one or more input signals that the bus requester can assert to gain control of the bus, and one or more output signals that the CPU asserts to indicate that it has relinquished the bus. Figure 8.10 shows how a DMA controller shares the CPU's memory bus.

Direct memory access controllers require initialization by the CPU. Typical setup parameters include the address of the source area, the address of the destination area, the length of the block, and whether the DMA controller should generate a processor interrupt once the block transfer is complete. A DMA controller has an address register, a word count register, and a control register. The address register contains an address that specifies the memory location of the data to be
transferred. It is typically possible to have the DMA controller automatically increment the address register after each word transfer, so that the next transfer is from the next memory location. The word count register holds the number of words to be transferred; the word count is decremented by one after each word transfer. The control register specifies the transfer mode.

Direct memory access data transfer can be performed in burst mode or in single-cycle mode. In burst mode, the DMA controller keeps control of the bus until all the data has been transferred between memory and the peripheral device. This mode of transfer is needed for fast devices, where the data transfer cannot be stopped until the entire transfer is done. In single-cycle mode (cycle stealing), the DMA controller relinquishes the bus after the transfer of each data word. This minimizes the amount of time that the DMA controller keeps the CPU from controlling the bus, but it requires that the bus request/acknowledge sequence be performed for every single transfer. This overhead can result in a degradation of performance. Single-cycle mode is preferred if the system cannot tolerate more than a few cycles of added interrupt latency, or if the peripheral devices can buffer very large amounts of data, causing the DMA controller to tie up the bus for an excessive amount of time.

The following steps summarize the DMA operations: (1) the DMA controller initiates the data transfer; (2) data is moved, increasing the address in memory and reducing the count of words to be moved; (3) when the word count reaches zero, the DMA controller informs the CPU of the termination by means of an interrupt; (4) the CPU regains access to the memory bus.

A DMA controller may have multiple channels. Each channel has associated with it an address register and a count register. To initiate a data transfer, the device driver sets up the DMA channel's address and count registers together with the direction of the data transfer, read or write. While the transfer is taking place, the CPU is free to do other things. When the transfer is complete, the CPU is interrupted. Direct memory access channels cannot be shared between device drivers, so a device driver must be able to determine which DMA channel to use. Some devices have a fixed DMA channel, while others are more flexible, and the device driver can simply pick a free DMA channel to use. Linux tracks the usage of the DMA channels using a vector of dma_chan data structures, one per DMA channel. The dma_chan data structure contains just two fields: a pointer to a string describing the owner of the DMA channel and a flag indicating whether the DMA channel is allocated or not.

Buses

A bus, in computer terminology, represents a physical connection used to carry a signal from one point to another. The signal carried by a bus may represent an
address, data, a control signal, or power. Typically, a bus consists of a number of connections running together; each connection is called a bus line. A bus line is normally identified by a number, and related groups of bus lines are usually identified by a name. For example, the group of bus lines 1 to 16 in a given computer system may be used to carry the addresses of memory locations and would therefore be identified as address lines. Depending on the signal carried, there exist at least four types of buses: address, data, control, and power buses. Address buses carry addresses, data buses carry data, control buses carry control signals, and power buses carry the power-supply/ground voltages. The size (number of lines) of the address, data, and control buses varies from one system to another.

Consider, for example, the bus connecting the CPU and the memory in a given system, called the CPU bus. If the size of the memory in this system is 512M words and the word size is 32 bits, then the size of the address bus is log2(512 x 2^20) = 29 lines and the size of the data bus is 32 lines. At least one control line (R/W) should exist in this system. In addition to carrying control signals, a control bus can carry timing signals. These are signals used to determine the exact timing for data transfer to and from the bus, that is, they determine when a given computer system component, such as the processor, the memory, or the I/O devices, can place data on the bus and when it can receive data from the bus. A bus is synchronous if data transfer over the bus is controlled by a bus clock; the clock acts as the timing reference for all bus signals. A bus is asynchronous if data transfer over the bus is based on the availability of the data and not on a clock signal; data is transferred over an asynchronous bus using a technique called handshaking. The operations of synchronous and asynchronous buses are explained below.

To understand the difference between synchronous and asynchronous operation, let us consider the case of a master, such as a CPU or a DMA controller, as the source of data to be transferred to a slave, such as an I/O device. The following is the sequence of events involving the master and the slave: (1) the master sends a request to use the bus; (2) the master's request is granted and the bus is allocated to the master; (3) the master places the address/data on the bus; (4) the slave is selected; (5) the master signals the data transfer; (6) the slave takes the data; (7) the master frees the bus.

Synchronous Buses

In synchronous buses, the steps of data transfer take place at fixed clock cycles. Everything is synchronized to the bus clock, and clock signals are made available to both the master and the slave. The bus clock is a square wave signal; a cycle starts at one rising edge of the clock and ends at the next rising edge, which is the beginning of the next cycle. A transfer may take multiple bus cycles depending on the speed
parameters of the bus and of the two ends of the transfer. One scenario would be the following: on the first clock cycle, the master puts an address on the address bus, puts data on the data bus, and asserts the appropriate control lines; the slave recognizes its address on the address bus in the first cycle and reads the new value from the bus in the second cycle. Synchronous buses are simple and easily implemented. However, when devices with varying speeds are connected to a synchronous bus, the slowest device determines the speed of the bus. Also, the length of a synchronous bus can be limited to avoid clock-skewing problems.

Asynchronous Buses

There are no fixed clock cycles in asynchronous buses; handshaking is used instead. Figure 8.11 shows the handshaking protocol. The master asserts the data-ready line (point 1 in the figure) and holds it until it sees a data-accept signal. When the slave sees the data-ready signal, it asserts the data-accept line (point 2 in the figure). The rising of the data-accept line triggers the falling of the data-ready line and the removal of the data from the bus. The falling of the data-ready line (point 3 in the figure) triggers the falling of the data-accept line (point 4 in the figure). This handshaking, which is called fully interlocked, is repeated until the data is completely transferred. Asynchronous buses are appropriate for devices of different speeds.

Input/Output Interfaces

An interface is a data path between two separate devices in a computer system. Interfaces (buses) can be classified, based on the number of bits transmitted at a given time, into serial versus parallel ports. Through a serial port, 1 bit of data is transferred at a time; mice and modems are usually connected to serial ports. A parallel port allows more than 1 bit of data to be processed at once; printers are the most common peripheral devices connected to parallel ports. Table 8.4 shows a summary of the variety of buses and interfaces used in personal computers.

Summary

One of the major features of a computer system is its ability to exchange data with other devices and to allow the user to interact with the system. This chapter focused on the I/O system and the way the processor and I/O devices exchange data in a computer system. The chapter described three ways of organizing I/O: programmed I/O, interrupt-driven I/O, and DMA. In programmed I/O, the CPU handles the transfers, which take place between its registers and the devices. In interrupt-driven I/O, the CPU handles data transfers, with an I/O module running concurrently. In DMA, data is transferred between memory and I/O devices without the intervention of the CPU. We also studied two methods of synchronization: polling and interrupts. In polling, the processor polls the device while waiting for the I/O to complete.
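The DMA register model summarized above (an auto-incrementing address register, a word count decremented per word, and an interrupt when the count reaches zero) can be sketched as a small simulation. The memory model and return value here are illustrative assumptions, not the behavior of any specific controller:

```python
def dma_transfer(memory, src, dst, count):
    """Simulate a DMA block transfer in single-cycle (cycle-stealing) style.

    Each loop iteration models one stolen bus cycle: move one word,
    auto-increment both address registers, decrement the word count.
    When the word count reaches zero, a real controller would raise an
    interrupt to the CPU; here we just return a marker string.
    """
    addr_src, addr_dst, word_count = src, dst, count  # controller registers
    while word_count > 0:
        memory[addr_dst] = memory[addr_src]  # one word per bus cycle
        addr_src += 1                        # address registers auto-increment
        addr_dst += 1
        word_count -= 1                      # word count counts down to zero
    return "interrupt"                       # completion signaled to the CPU

mem = list(range(16))               # toy memory of 16 words
status = dma_transfer(mem, src=0, dst=8, count=4)
print(mem[8:12], status)            # [0, 1, 2, 3] interrupt
```

In burst mode the same loop would run without releasing the bus between iterations; in cycle stealing, each iteration is preceded by a bus request/acknowledge exchange with the CPU, which is the overhead the text refers to.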
Clearly, processor cycles are wasted in this method. Using interrupts, the processor is free to switch to other tasks, and the I/O devices assert interrupts when the I/O is complete; interrupts, however, incur a delay penalty. Two examples of interrupt handling were covered: the 80x86 family and the ARM. The chapter also covered buses and interfaces, and the wide variety of interfaces and buses used in personal computers was summarized.
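As a worked check of the CPU-bus sizing arithmetic from this chapter (a 512M-word memory with 32-bit words needs a 29-line address bus and a 32-line data bus):

```python
import math

words = 512 * 2**20                     # 512M addressable words
address_lines = int(math.log2(words))   # log2(512 * 2^20) = log2(2^29) = 29
data_lines = 32                         # one bus line per bit of a 32-bit word

print(address_lines, data_lines)        # 29 32
```

The same calculation generalizes: a memory of 2^k addressable units needs k address lines, regardless of the word width carried on the data lines.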
subtasks of a given instruction, in this case fetching (F), decoding (D), execution (E), and writing the results (W), using pipelining versus sequential processing. It is clear from the figure that the total time required to process three instructions (I1, I2, I3) is only six time units if four-stage pipelining is used, as compared to 12 time units if sequential processing is used. A possible saving of up to 50% in the execution time of these three instructions is thus obtained.

In order to formulate performance measures for the goodness of a pipeline in processing a series of tasks, a space-time chart, called a Gantt chart, is used. The chart shows the succession of the subtasks in the pipe with respect to time. Figure 9.2 shows such a Gantt chart. In this chart, the vertical axis represents the subunits (four in this case) and the horizontal axis represents time, measured in terms of the time unit required for each unit to perform its task. In developing the Gantt chart, we assume that the time taken by each subunit to perform its task is the same; we call this the unit time. As can be seen from the figure, 13 time units are needed to finish executing 10 instructions (I1 to I10). This is to be compared to 40 time units if sequential processing is used (ten instructions, each requiring four time units).

The following analysis provides three performance measures for the goodness of a pipeline: the speedup S(n), the throughput U(n), and the efficiency E(n). It should be noted that in this analysis we assume that each unit takes one unit time to perform its task.

Speedup S(n). Consider the execution of m tasks (instructions) using an n-stage (unit) pipeline. As can be seen, n + m - 1 time units are required to execute the m tasks, compared to m x n time units sequentially. Therefore,

S(n) = (m x n) / (n + m - 1)

The throughput is the number of tasks completed per unit time, U(n) = m / (n + m - 1), and the efficiency is the speedup per stage, E(n) = S(n) / n = m / (n + m - 1).

The simple analysis made in Section 9.1 ignores an important aspect that can affect the performance of a pipeline, that is, pipeline stall. A pipeline operation is said to have been stalled if one unit (stage) requires more time to perform its function, thus forcing other stages to become idle. Consider, for example, the case of an instruction fetch that incurs a cache miss, and assume that a cache miss requires three extra time units. Figure 9.3 illustrates the effect of having instruction I2 incur a cache miss (assuming the execution of ten instructions I1 to I10). The figure shows that due to the extra time units needed for instruction I2 to be fetched, the pipeline stalls, that is, the fetching of instruction I3 and subsequent instructions is delayed. Such situations create what is known as a pipeline bubble (leading to pipeline hazards). The creation of a pipeline bubble leads to wasted unit times, thus leading to an overall increase in the number of time units needed to finish executing a given number of instructions. The number of time units needed to
execute the 10 instructions shown in Figure 9.3 is now 16 time units, compared to 13 time units if no cache miss occurs. Pipeline hazards can take place for a number of reasons, among which are instruction dependency and data dependency. These, and the methods used to prevent fetching a wrong instruction or a wrong operand, are explained below.

Use of NOP (No Operation). This method can be used to prevent the fetching of the wrong instruction, in the case of instruction dependency, or the fetching of the wrong operand, in the case of data dependency. Recall Example 1, in which the execution of a sequence of ten instructions I1 to I10 on a pipeline consisting of four pipeline stages, IF (instruction fetch), ID (instruction decode), IE (instruction execute), and IS (instruction store), was considered. In order to show the execution of these instructions in the pipeline, it was assumed that when a branch instruction is fetched, the pipeline stalls until the result of executing the branch instruction is stored. This assumption was needed in order to prevent fetching the wrong instruction after fetching the branch instruction. In real-life situations, a mechanism is needed to guarantee fetching the appropriate instruction at the appropriate time; insertion of NOP instructions helps carry out this task. A NOP is an instruction that has no effect on the status of the processor.
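The timing figures quoted above (6 versus 12 time units for three instructions, 13 time units for ten instructions on a four-stage pipe, and 16 time units with a three-cycle fetch stall) all follow from the n + m - 1 formula. A quick check, with the stall modeled simply as extra time units added to the total:

```python
def pipeline_time(n_stages, m_tasks, stall_cycles=0):
    # An n-stage pipeline finishes m tasks in n + m - 1 time units;
    # each stall bubble adds its extra cycles to the total.
    return n_stages + m_tasks - 1 + stall_cycles

def speedup(n_stages, m_tasks):
    # S(n) = sequential time / pipelined time = (m * n) / (n + m - 1)
    return (m_tasks * n_stages) / pipeline_time(n_stages, m_tasks)

print(pipeline_time(4, 3))                   # 6  (vs. 4 * 3 = 12 sequentially)
print(pipeline_time(4, 10))                  # 13 (vs. 40 sequentially)
print(pipeline_time(4, 10, stall_cycles=3))  # 16 with a 3-cycle cache miss
print(speedup(4, 10))                        # 40/13, about 3.08
```

Note that as m grows large, the speedup S(n) approaches n, which is why the efficiency E(n) = S(n)/n approaches 1 for long instruction streams.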
response of computer architects was to add layers of complexity to newer architectures, including an increase in the number and complexity of instructions together with an increase in the number of addressing modes. The architectures resulting from the adoption of this added complexity became known as complex instruction set computers (CISCs). However, it soon became apparent that a complex instruction set has a number of disadvantages, including a complex instruction decoding scheme, an increased size of the control unit, and increased logic delays. These drawbacks prompted a team of computer architects to adopt the principle that less is actually more, and a number of studies were conducted to investigate the impact of complexity on performance. These are discussed below.

RISC Design Principles

A computer with a minimum number of instructions has the disadvantage that a large number of instructions must be executed in realizing even a simple function, resulting in a speed disadvantage. On the other hand, a computer with an inflated number of instructions has the disadvantage of complex decoding and hence also a speed disadvantage. It is natural to believe that a computer with a carefully selected, reduced set of instructions should strike a balance between the two design alternatives. The question then becomes: what constitutes a carefully selected, reduced set of instructions? In order to arrive at an answer, it was necessary to conduct in-depth studies of a number of aspects of computation. These aspects include: (a) which operations are most frequently performed in the execution of typical benchmark programs, (b) which operations are most time consuming, and (c) which types of operands are most frequently used.

A number of early studies were conducted in order to find the typical breakdown of the operations performed in executing benchmark programs. The estimated distribution of operations is shown in Table 10.1. A careful look at the estimated percentages reveals that assignment statements, conditional branches, and procedure calls constitute about 90% of the total operations performed; all other operations, however complex they may be, make up the remaining 10%. In addition to these findings, the studies conducted at the time on the performance characteristics of the operations revealed that, among all operations, procedure calls/returns are the most time-consuming. As regards the types of operands used in typical computations, it was noticed that the majority of references (no less than 60%) are made to simple scalar variables, and that no less than 80% of the scalars are local variables of procedures. These observations about typical program behavior led to the following conclusions: (1) simple movement of
data, as represented by assignment statements, rather than complex operations, is substantial and should be optimized; (2) conditional branches are predominant, and therefore careful attention should be paid to the sequencing of instructions, which is particularly true when it is known that pipelining is indispensable; (3) procedure calls/returns are the most time-consuming operations, and therefore a mechanism should be devised to make the communication of parameters among the calling and called procedures cause the least number of instructions to execute; (4) a prime candidate for optimization is the mechanism for storing and accessing local scalar variables.

These conclusions led to the argument that, instead of bringing the instruction set architecture closer to HLLs, it is more appropriate to optimize the performance of the most time-consuming features of typical HLL programs. This obviously calls for making the architecture simpler rather than more complex; remember that complex operations such as long division represent a small portion, less than 2%, of the operations performed in a typical computation. One may then ask: how can this be achieved? The answer is by (a) keeping the most frequently accessed operands in CPU registers and (b) minimizing register-to-memory operations. These two principles can be achieved using the following mechanisms: (1) use a large number of registers to optimize operand referencing and reduce the processor-memory traffic; (2) optimize the design of instruction pipelines; and (3) use a simplified instruction set, leaving out complex and unnecessary instructions, for minimum compiler code generation.

Two approaches were identified to implement these three mechanisms. The software approach relies on the compiler to maximize register usage by allocating registers to those variables that are used most in a given time period; this philosophy was adopted by the Stanford MIPS machine. The hardware approach uses ample CPU registers, so that more variables can be held in registers for longer periods of time; this philosophy was adopted by the Berkeley RISC machine. The hardware approach necessitates the use of a new register organization, called overlapped register windows.

RISCs Versus CISCs

The choice of RISC versus CISC depends on the factors that must be considered by a computer designer, including size, complexity, and speed. A RISC architecture has to execute more instructions to perform the same function performed by a CISC architecture. To compensate for this drawback, RISC architectures use the chip area saved by not using complex instruction decoders in
providing a larger number of CPU registers, additional execution units, and instruction caches. The use of these resources leads to a reduction in the traffic between the processor and the memory. On the other hand, a CISC architecture, being richer in complex instructions, requires a smaller number of instructions than its RISC counterpart; however, a CISC architecture requires a complex decoding scheme and is hence subject to logic delays. It is therefore reasonable to view the RISC and CISC paradigms as differing primarily in the strategy used to trade off different design factors.

There is little reason to believe that an idea that improves performance for a RISC architecture would fail to do the same for a CISC architecture, and vice versa. For example, one of the key issues in RISC development is the use of an optimizing compiler to reduce the complexity of the hardware and to optimize the use of CPU registers. These same ideas are applicable to CISC compilers, and increasing the number of CPU registers could do much to improve the performance of a CISC machine. This could be the reason behind finding no pure commercially available RISC or CISC machine. It is not unusual to see a RISC machine with complex floating-point instructions (see, for example, the details of the SPARC architecture in the next section). It is equally expected to see CISC machines making use of the register windows RISC idea. In fact, there are studies indicating that a CISC machine such as the Motorola 680xx with a register window can achieve a 2 to 4 times decrease in memory traffic, a factor that was achieved by the Berkeley RISC architecture due to its use of register windows. It should, however, be noted that most processor developers (except for Intel and its associates) have opted for RISC processors, and most computer system manufacturers, such as Sun Microsystems, use RISC processors in their products. However, for compatibility with the PC-based market, some companies still produce CISC-based products.

Tables 10.3 and 10.4 show a limited comparison between an example RISC machine and an example CISC machine in terms of their performance characteristics. A more elaborate comparison among a number of commercially available RISC and CISC machines is shown in Table 10.5. It is worth mentioning at this point that the following set of common characteristics among RISC machines can be observed: (1) a limited number of instructions, for example, 128 or fewer; (2) a limited set of simple addressing modes, at a minimum two, indexed and PC-relative; (3) all operations are performed on registers, with no memory operations except two, load and store; (4) pipelined instruction execution; (5) a large number of general-purpose registers;
(6) use of advanced compiler technology to optimize register usage; (7) one instruction per clock cycle; and (8) a hardwired control unit design rather than microprogramming.

(3) Branch and call instructions, for example, JMPX COND, RX, which performs PC <- RX if COND (a condition) is satisfied; and (4) special instructions, for example, GETPSW RD, which performs RD <- PSW. Arithmetic and logical instructions have the three-operand form destination <- source1 op source2 (Fig. 10.2). Load and store instructions may use either of the indicated formats, where DST is the register to be loaded or stored and the low-order 19 bits of the instruction are used to determine the effective address. These instructions load and store 8-, 16-, 32-, and 64-bit quantities into and from the 32-bit registers.

Two methods are provided for calling procedures. The CALL instruction uses a 30-bit PC-relative offset (Fig. 10.3); the JMP instruction uses the instruction formats used for arithmetic and logical operations, which allows the return address to be put in any register. The RISC uses a three-address instruction format, with the availability of two- and one-address instructions, and two addressing modes: the indexed mode and the PC-relative mode. The indexed mode can be used to synthesize three other modes: base/absolute (direct), register indirect, and indexed (linear byte array). The RISC uses a static two-stage pipeline: fetch and execute.

The floating-point unit (FPU) contains thirty-two 32-bit registers, which can hold 32 single-precision (32-bit) floating-point operands, 16 double-precision (64-bit) operands, or eight extended-precision (128-bit) operands. The FPU can execute 20 floating-point instructions in single, double, and extended precision, using the first instruction format used for arithmetic. In addition to instructions for loading and storing the FPU's registers, the CPU can also test the FPU's registers and branch conditionally on the results. The RISC employs a conventional MMU supporting a single paged 32-bit address space. The RISC has a four-bus organization, as shown in Figure 10.4.
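The overlapped register window mechanism mentioned in this chapter can be illustrated with a toy model. The window size, overlap size, and method names below are illustrative assumptions, not the actual Berkeley RISC register layout:

```python
class RegisterWindows:
    """Toy model of overlapped register windows.

    Each procedure sees `window` registers. The last `overlap` registers
    of the caller's window are physically the first `overlap` registers
    of the callee's window, so parameters and return values are passed
    through registers without any memory traffic.
    """
    def __init__(self, total=32, window=8, overlap=2):
        self.regs = [0] * total   # the physical register file
        self.base = 0             # current window pointer
        self.window = window
        self.overlap = overlap

    def write(self, r, value):    # write register r of the current window
        self.regs[self.base + r] = value

    def read(self, r):            # read register r of the current window
        return self.regs[self.base + r]

    def call(self):               # advance the window; overlap stays shared
        self.base += self.window - self.overlap

    def ret(self):                # retreat the window on procedure return
        self.base -= self.window - self.overlap

rw = RegisterWindows()
rw.write(6, 42)    # caller places an argument in its overlap region (r6)
rw.call()
print(rw.read(0))  # callee sees the same physical register as its r0: 42
rw.ret()
```

A real implementation has a fixed number of windows arranged circularly, with an overflow trap that spills the oldest window to memory when call depth exceeds the hardware; the model above ignores that boundary case.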
combinational circuit design; (2) flip-flops, registers, and other ICs useful in sequential circuit design; and (3) memory ICs. The details provided are for TTL technology. No attempt has been made to provide details of the most up-to-date ICs; with the rapid changes in IC technology, it is hard to provide the latest details in a book, so refer to the current manuals of IC vendors for the latest information. Some of the ICs detailed here may no longer be available; usually an alternative form of these ICs (a later version in another technology) can be found from the same vendor or another source. Nevertheless, the details given here are representative of the characteristics a designer would seek.", "url": "RV32ISPEC.pdf#segment0", "timestamp": "2023-10-17 20:15:09", "segment": "segment0", "image_urls": [], "Book": "computerorganization" }, { "section": "A.1 GATES, DECODERS, AND OTHER ICs USEFUL IN COMBINATIONAL CIRCUIT DESIGN ", "content": "Figure A.1 shows the details of several gate ICs. One IC is a dual two-wide, two-input AND-OR-invert circuit. Figure A.2 shows a BCD-to-decimal decoder. Details of a 4-bit adder are shown in Figure A.3. Figure A.4 shows a data selector/multiplexer, and Figure A.5 shows a 4-line to 16-line decoder/demultiplexer. The 74155 (Figure A.6) is a dual 1-of-4 decoder/demultiplexer with common address inputs and separate enable inputs. When a decoder section is enabled, it accepts the binary-weighted address on inputs A0 and A1 and provides four mutually exclusive active-low outputs (0-3). When the enable requirements of a decoder are not met, the outputs of that decoder are high. The IC can be used as a 1-of-8 decoder/demultiplexer by tying Ea to Eb and relabeling this common connection as address A2, the common enable being formed by connecting the remaining Eb and Ea.", "url": "RV32ISPEC.pdf#segment1", "timestamp": "2023-10-17 20:15:10", "segment": "segment1", "image_urls": [], "Book": "computerorganization" }, { "section": "A.2 FLIP-FLOPS, REGISTERS, AND OTHER ICs USEFUL IN SEQUENTIAL CIRCUIT DESIGN ", "content": "Figure A.7 shows the details of a flip-flop IC. A 4-bit latch IC is shown in Figure A.8, and Figure A.9 shows an IC with dual JK flip-flops. Both decoder sections of the 74155 have a 2-input enable gate. The enable gate of decoder a requires one active-high input and one active-low input (Ea), so this decoder can accept either true or complemented data in demultiplexing applications, using the Ea and Ea-bar inputs, respectively. The enable gate of decoder b requires two active-low
inputs (Ea and Eb). The 7490 (Figure A.10) is a 4-bit ripple-type decade counter. The device consists of four master-slave flip-flops internally connected to provide a divide-by-two section and a divide-by-five section. Each section has a separate clock input that initiates state changes of the counter on the high-to-low clock transition. State changes of the Q outputs do not occur simultaneously because of internal ripple delays; therefore, decoded output signals are subject to decoding spikes and should not be used as clocks or strobes. A gated asynchronous master reset (MR1, MR2) is provided, which overrides the clocks and resets (clears) all flip-flops. Also provided is a gated asynchronous master set (MS1, MS2), which overrides the clocks and the MR inputs, setting the outputs to nine (HLLH). Since the output of the divide-by-two section is not internally connected to the succeeding stages, the device may be operated in various counting modes. As a BCD (8421) counter, the CP1 input must be externally connected to the Q0 output; the CP0 input receives the incoming count, producing a BCD count sequence. As a symmetrical biquinary divide-by-ten counter, the Q3 output must be connected externally to the CP0 input; the input count is applied to the CP1 input, and a divide-by-ten square wave is obtained at output Q0. To operate as a divide-by-two counter and a divide-by-five counter, no external interconnections are required: the first flip-flop is used as a binary element for the divide-by-two function (CP0 as input, Q0 as output), and the CP1 input is used to obtain the divide-by-five operation at the Q3 output.
Two shift register ICs of various capabilities are shown in Figures A.11 and A.12. The 7495 (Figure A.12) is a 4-bit shift register with serial and parallel synchronous operating modes. It has a serial data (DS) input, four parallel data (D0-D3) inputs, and four parallel outputs (Q0-Q3). The serial or parallel mode of operation is controlled by a mode select input and two clock inputs, CP1 and CP2. The serial (shift-right) and parallel data transfers occur synchronously with the high-to-low transition of the selected clock input. When the mode select input is high, CP2 is enabled, and a high-to-low transition on enabled CP2 loads the parallel data from the D0-D3 inputs into the register. When the mode select input is low, CP1 is enabled, and a high-to-low transition on enabled CP1 shifts the data from the serial input DS into Q0 and transfers the data from Q0 to Q1, Q1 to Q2, and Q2 to Q3, respectively (shift right). A shift-left operation is accomplished by externally connecting Q3 to D2, Q2 to D1, and Q1 to D0 and operating the 7495 in the parallel mode (mode select high). For normal operation, the mode select input should change
states only while both clock inputs are low. However, changing from high to low while CP2 is low, or changing from low to high while CP1 is low, will not cause changes on the register outputs. Figure A.13 shows the details of a synchronous decade counter.", "url": "RV32ISPEC.pdf#segment2", "timestamp": "2023-10-17 20:15:10", "segment": "segment2", "image_urls": [], "Book": "computerorganization" }, { "section": "A.3 MEMORY ICs", "content": "", "url": "RV32ISPEC.pdf#segment3", "timestamp": "2023-10-17 20:15:10", "segment": "segment3", "image_urls": [], "Book": "computerorganization" }, { "section": "A.3.1 Signetics 74S189 ", "content": "This is a 64-bit RAM (Figure A.14) organized as 16 words of 4 bits each. There are four address lines (A0-A3), four data input lines (I1-I4), and four data output lines (D1-D4). Note that the data output lines are active-low; the outputs are therefore the complement of the data in the selected word. When the low-active chip enable (CE) is high, the data outputs assume the high-impedance state. When the write enable is low, the data on the input lines are written into the addressed location; when it is high, the data are read from the addressed location. The operation of the IC is summarized by the truth table, and the timing characteristics of the IC are also shown in Figure A.14. In a read operation, the data appear at the output tAA ns after the address is stable on the address inputs; tCE indicates the time required for the output data to become stable after CE is activated, and tCD is the chip disable time. In a write operation, the data input lines and the address lines must be stabilized before CE is activated; after a minimum data setup time tWSC, the write enable must be active for at least tWP ns for a successful write operation.", "url": "RV32ISPEC.pdf#segment4", "timestamp": "2023-10-17 20:15:10", "segment": "segment4", "image_urls": [], "Book": "computerorganization" }, { "section": "A.3.2 Intel 2114 ", "content": "This is a 4096-bit static RAM organized as 1024 x 4 (Figure A.15). Internally, the memory cells are organized as a 64-by-64 matrix. There are 10 address lines (A0-A9); address bits A3-A8 select one of the 64 rows, and the 4-bit portion of the selected row is selected by address bits A0, A1, A2, and A9. When the active-low chip select (CS) and the write enable are both low, the IC is put into the write mode; otherwise the IC is in the read mode. When CS is high, the outputs assume the high-impedance state; the common data I/O lines are thus controlled by CS. The device uses HMOS II high-performance MOS technology and is directly TTL compatible in all
respects (inputs and outputs). It operates from a single 5 V power supply. The device is available in five versions; the maximum access time ranges from 100 to 250 ns depending on the version, and the maximum current consumption ranges from 40 to 70 mA.", "url": "RV32ISPEC.pdf#segment5", "timestamp": "2023-10-17 20:15:10", "segment": "segment5", "image_urls": [], "Book": "computerorganization" }, { "section": "A.3.3 Texas Instruments TMS4116 ", "content": "This is a 16K x 1 dynamic NMOS RAM (Figure A.16), with a data input (D), a data output (Q), and a read/write control (W) input. To decode 16K, 14 address lines are required; the IC provides seven address lines (A0-A6), and the 14 address bits are multiplexed onto these seven lines using the row address select (RAS) and column address select (CAS) inputs. Although this multiplexing decreases the speed of RAM operation, it minimizes the number of pins on the IC. Three power supplies (+12 V, +5 V, and -5 V) are required for the operation of the IC. As illustrated, the memory cells are organized as a 128 x 128 array. The 7 low-order address bits select a row; in a read, the data in the selected row are transferred to the 128 sense/refresh amplifiers, and the 7 high-order address bits select one of the 128 sense amplifiers and connect it to the data output line. At the same time, the data in the sense amplifiers are refreshed (i.e., the capacitors are charged) by being rewritten into the proper row of the memory array. In a write, the data in the sense amplifiers are changed to the new data values before the rewrite operation. Thus, in each read or write cycle, a row of memory is refreshed. Timing diagrams for the read, refresh, and write cycles are shown in (b) and (c). When RAS and CAS are high, Q is in the high-impedance state. To begin a cycle, the 7 low-order address bits are first placed on the address lines and RAS is driven low; the 4116 latches the row address. The high-order 7 bits of the address are then placed on the address lines and CAS is driven low, thus selecting the required bit. In a write cycle, there must be valid data at D when CAS goes low, and W must go low with CAS. In a read cycle, Q becomes valid an access time after the start of CAS and remains valid until RAS and CAS go high. The data in the memory must be refreshed every 2 ms. Refreshing is done by reading a row of data into the sense amplifiers and rewriting it; only RAS is required to perform a refresh cycle. A 7-bit counter can be used to refresh all the rows of the memory: the counter counts through all rows every 2 ms, and the 7-bit output of the counter becomes the row address for each refresh cycle. Two modes of refresh are possible: burst and periodic. In the burst mode, all 128 rows are refreshed every 2 ms; since each refresh cycle takes 450 ns, the first 128 x 450 =
57,600 ns = 57.6 microseconds of each 2 ms period is consumed by refresh, and the remaining 1942.4 microseconds are available for reads and writes. In the periodic mode, there is one refresh cycle every 2/128 ms = 15.625 microseconds; the first 450 ns of each 15.625-microsecond interval is taken up by the refresh. Several dynamic memory controllers are available off the shelf. These controllers generate the appropriate signals to refresh the dynamic memory module in a system; one such controller is described next.", "url": "RV32ISPEC.pdf#segment6", "timestamp": "2023-10-17 20:15:10", "segment": "segment6", "image_urls": [], "Book": "computerorganization" }, { "section": "A.3.4 INTEL 8202A Dynamic RAM Controller ", "content": "This device (Figure A.17) provides all the signals needed to control a 64K dynamic RAM of the TMS4116 type. It performs the address multiplexing and generates the RAS and CAS signals, and it contains the refresh counter and timer. The device has several features that make it useful in building microcomputer systems. In what follows we concentrate on the basic features needed to control a dynamic RAM; the reader should refer to the manufacturer's manuals for details of the other functions. The pins of the device are shown in (b). The outputs OUT0-OUT6 function as either the 14-bit address inputs (AL0-AL6, AH0-AH6) or the refresh counter outputs; these outputs of the 8202A are directly connected to the address inputs of the 4116. Also connected to the memory are W, the chip select input (PCS), and the clock (CLK) input. In addition to CAS, four RAS signals are generated by the device; multiple RAS signals are useful in selecting a bank of memory when larger memory systems are built using dynamic memory ICs. The signals RD, WR, XACK, and SACK are compatible with the control signals produced by microprocessors such as the Intel 8088.", "url": "RV32ISPEC.pdf#segment7", "timestamp": "2023-10-17 20:15:10", "segment": "segment7", "image_urls": [], "Book": "computerorganization" }, { "section": "APPENDIX B ", "content": "Stack Implementation. The last-in-first-out (LIFO) stack is a versatile structure useful in a variety of operations in a computer system. It is used for address and data manipulation, for return address storage and parameter passing during subroutine call and return, and for arithmetic operations in an ALU. It is a set of storage locations or registers organized in a LIFO manner. The coin box of Figure B.1 is a popular example of a LIFO stack: coins are inserted and retrieved at the same end (the top) of the box. Pushing a coin moves the stack of coins down one level, with the new coin occupying the top level (TL). Popping the coin box
retrieves the coin at the top level, and the second-level (SL) coin becomes the new top level. A pop on a LIFO stack works in just this way: a push implies that all levels move down one, with the TL receiving the new data; a pop implies that the datum at the TL is removed, the SL becomes the TL, and the lower levels move up one. Two popular implementations of the stack are (1) the RAM-based implementation and (2) the shift-register-based implementation. In the RAM-based implementation, a special register called the stack pointer (SP) is used to hold the address of the top level of the stack, and the stack is built in a reserved area of memory. A push operation then corresponds to updating the SP and writing the new datum into the location it points to; in this implementation, the stack grows toward higher-address memory locations as items are pushed, and the data do not actually move between levels during push and pop operations. Figure B.2 portrays the shift-register-based implementation of an n-level stack, where each stack level can hold an m-bit datum. Data are pushed onto the stack using a shift-right signal and popped off the stack using a shift-left signal; there is thus an actual movement of data between levels in this implementation. Shift-register-based implementations are faster than RAM-based stacks, since no memory access is needed; RAM-based implementations are more popular, however, since no additional hardware beyond the SP is needed. To implement the stack, instructions such as push and pop on registers and memory locations must be added to the instruction set when a stack is included in the design.", "url": "RV32ISPEC.pdf#segment8", "timestamp": "2023-10-17 20:15:10", "segment": "segment8", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 1 ", "content": "Introduction. Recent advances in microelectronic technology have made computers an integral part of our society. Each step of our everyday lives is influenced by computer technology: we awake to a digital alarm clock beaming preselected music at just the right time, drive to work in a digital-processor-controlled automobile, work in an extensively automated office, shop for computer-coded grocery items, and return to rest in a computer-regulated heating and cooling environment at home. Just as it is not necessary to understand the detailed operating principles of a jet plane or an automobile in order to use and enjoy the benefits of these technical marvels, computer systems technology has also reached a level of sophistication wherein the average user need not be familiar with the intricate technical details of their operation in order to use them efficiently. Computer scientists, engineers, and application developers, however, require a fair understanding of the operating principles, capabilities, and limitations of digital computers
in order to develop complex yet efficient and user-friendly systems. This book is designed to give an understanding of the operating principles of digital computers. This chapter begins by describing the organization of a general-purpose digital computer system and briefly traces the evolution of computers.", "url": "RV32ISPEC.pdf#segment9", "timestamp": "2023-10-17 20:15:10", "segment": "segment9", "image_urls": [], "Book": "computerorganization" }, { "section": "1.1 COMPUTER SYSTEM ORGANIZATION ", "content": "The primary function of a digital computer is to process the data input to it in order to produce results that can be better used in a specific application environment. As an example, consider a digital computer used to control the traffic light at an intersection. The input data are the number of cars passing through the intersection during a specified time period; the processing consists of the computation of the red-yellow-green time periods as a function of the number of cars; and the output is the variation of the red-yellow-green time intervals based on the results of the processing. In this system, the data input device is a sensor that can detect the passing of a car at the intersection, and the traffic lights are the output devices. The electronic device that keeps track of the number of cars and computes the red-yellow-green time periods is the processor. These physical devices constitute the hardware components of the system. The processing hardware is programmed to compute the red-yellow-green time periods according to some rule. This rule is the algorithm used to solve the particular problem; the algorithm (a logical sequence of steps to solve the problem) is translated into a program (a set of instructions) for the processor to follow in solving the problem. Programs are written in a language understandable by the processing hardware, and the collection of such programs constitutes the software component of the computer system.", "url": "RV32ISPEC.pdf#segment10", "timestamp": "2023-10-17 20:15:11", "segment": "segment10", "image_urls": [], "Book": "computerorganization" }, { "section": "1.1.1 Hardware ", "content": "The traffic-light controller is a simple, special-purpose computer system requiring only some of the physical hardware components that constitute a general-purpose computer system (see Figure 1.1). The four major hardware blocks of a general-purpose computer system are its memory unit (MU), arithmetic and logic unit (ALU), input/output unit (IOU), and control unit (CU). Input/output (I/O) devices input and output data into and out of the memory unit.
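The traffic-light computation described above can be sketched in a few lines of code. The timing rule, the constants, and the function name below are invented for illustration; the text says only that the red-yellow-green periods are computed as some function of the number of cars counted by the sensor.

```python
# Hypothetical sketch of the traffic-light controller "algorithm" of
# Section 1.1: sensor input (cars counted in an observation period) is
# processed into red/yellow/green intervals for one signal cycle.

def light_intervals(cars_counted, cycle_s=60, yellow_s=5):
    """Return (red_s, yellow_s, green_s) for one signal cycle.

    The green share grows with traffic volume, clamped to a fixed range;
    yellow is constant and red takes whatever remains of the cycle.
    """
    green_s = min(40, max(10, cars_counted))  # 10 s floor, 40 s ceiling
    red_s = cycle_s - yellow_s - green_s
    return red_s, yellow_s, green_s

# Processing step: sensor count in, output intervals out.
print(light_intervals(3))   # light traffic: short green
print(light_intervals(90))  # heavy traffic: green capped at the ceiling
```

The rule here (a clamped linear mapping) is only one possible algorithm; the point is the hardware/software split: the sensor and lights are hardware, while the mapping is the program the processor follows.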
In some systems, I/O devices send and receive data to and from the ALU rather than the MU. Programs reside in the memory unit. The ALU processes the data taken from the memory unit and stores the processed data back into the memory unit. The control unit coordinates the activities of the other three units: it retrieves instructions from programs resident in the MU, decodes these instructions, and directs the ALU to perform the corresponding processing steps; it also oversees I/O operations. Several representative I/O devices are shown in Figure 1.1. A keyboard and a mouse are the most common input devices nowadays, while a video display and a printer are the most common output devices. Scanners are used to input data from hard-copy sources. Magnetic tapes and disks are used as I/O devices; these devices are also used as memory devices to increase the capacity of the MU. The console is a special-purpose I/O device that permits the system operator to interact with the computer system; in modern-day computer systems, the console is typically a dedicated terminal.", "url": "RV32ISPEC.pdf#segment11", "timestamp": "2023-10-17 20:15:11", "segment": "segment11", "image_urls": [], "Book": "computerorganization" }, { "section": "1.1.2 Software ", "content": "The hardware components of a computer system are electronic devices in which the basic unit of information is either a 0 or a 1, corresponding to two states of an electronic signal. For instance, in one popular hardware technology a 0 is represented by 0 V and a 1 is represented by 5 V. Programs and data must therefore be expressed using an alphabet consisting of 0 and 1. Programs written using only these binary digits are machine language programs. At this level of programming, operations such as add and subtract are each represented by a unique pattern of 0s and 1s, and the computer hardware is designed to interpret such sequences. Programming at this level is tedious, since the programmer has to work with sequences of 0s and 1s and needs a detailed knowledge of the computer structure. The tedium of machine language programming is partially alleviated by using symbols such as ADD and SUB, rather than patterns of 0s and 1s, for these operations. Programming at this symbolic level is called assembly language programming. An assembly language programmer is also required to have a detailed knowledge of the machine structure, because the operations permitted in the assembly language are primitive, and the instruction format and capabilities depend on the hardware organization of the machine. An assembler program is used to translate assembly language programs into machine language. The use of
high-level programming languages such as FORTRAN, COBOL, C, and Java reduces the requirement of an intimate knowledge of the machine organization. A compiler program is needed to translate a high-level language program into machine language; a separate compiler is needed for each high-level language used in programming a computer system. Note that the assembler and the compiler are also programs, written in one of these languages; they translate assembly and high-level language programs, respectively, into machine language. Figure 1.2 shows the sequence of operations that occurs once a program is developed. A program written in either assembly language or a high-level language is called a source program. An assembly language source program is translated by the assembler into a machine language program; this machine language program is the object code. A compiler converts a high-level language source program into object code. The object code ordinarily resides on an intermediate device such as a magnetic disk or tape. A loader program loads the object code from the intermediate device into the memory unit. The data required by the program are either available in memory or supplied by an input device during the execution of the program. The effect of program execution is the production of processed data, or results.", "url": "RV32ISPEC.pdf#segment12", "timestamp": "2023-10-17 20:15:11", "segment": "segment12", "image_urls": [], "Book": "computerorganization" }, { "section": "1.1.3 System ", "content": "Operations such as selecting the appropriate compiler, translating the source code into object code, loading the object code into the memory unit, starting and stopping the execution, and accounting for computer system usage are automatically done by the system. A set of supervisory programs that permit such automatic operation is usually provided by the computer system manufacturer. This set, called the operating system, receives the information it needs through a set of command language statements from the user and manages the overall operation of the computer system. The operating system and other utility programs used by the system may reside in a memory block that is typically read-only; special devices are needed to write programs into read-only memory. Such programs and commonly used data are termed firmware. Figure 1.3 is a simple rendering of the complete hardware-software environment of a general-purpose computer system.", "url": "RV32ISPEC.pdf#segment13", "timestamp": "2023-10-17 20:15:11", "segment": "segment13",
"image_urls": [], "Book": "computerorganization" }, { "section": "1.2 COMPUTER EVOLUTION ", "content": "man always search mechanical aids computation development abacus around 3000 bc introduced positional notation number systems seventeenthcentury france pascal leibnitz developed mechanical calculators later developed desk calculators 1801 jacquard used punched cards instruct looms weaving various patterns cloth 1822 charles babbage englishman developed difference engine mechanical device carried sequence computations specied settings levers gears cams data entered manually computations progressed around 1820 babbage proposed analytical engine would use set punched cards program input another set cards data input third set cards output results mechanical technology sufciently advanced analytical engine never built nevertheless analytical engine designed probably rst computer modern sense word several unitrecord machines process data punched cards developed united states 1880 herman hollerith census applications 1944 mark rst automated computer announced electromechanical device used punched cards input output data paper tape program storage desire faster computations mark could provide resulted development eniac rst electronic computer built vacuum tubes relays team led americans eckert mauchly eniac employed storedprogram concept sequence instructions ie program stored memory use machine processing data eniac control board programs wired rewiring control board necessary computation sequence john von neumann member eckertmauchly team developed edvac rst storedprogram computer time wilkes developed edsac rst operational storedprogram machine also introduced concept primary secondary memory hierarchy von neumann credited developing storedprogram concept beginning 1945 rst draft edvac structure edvac established organization stored program computer von neumann machine contains 1 input device data instructions entered 2 storage unit results entered instructions data 
fetched 3 arithmetic unit process data 4 control unit fetch interpret execute instructions storage 5 output device deliver results user contemporary computers von neumann machines although various alternative architectures evolved", "url": "RV32ISPEC.pdf#segment14", "timestamp": "2023-10-17 20:15:11", "segment": "segment14", "image_urls": [], "Book": "computerorganization" }, { "section": "1.2. 1 Von Neuma nn Model ", "content": "figur e 1 4 show von neu mann mode l typical uniproce ssor compute r syst em consi sting memor unit alu cont rol unit io unit memor unit sing leport devi ce consist ing mem ory address register r memory buf fer register mbr also calle mem ory data regis ter mdr th e memory cells arranged form sever al memor words whe word unit data read writte n read write opera tions memor utilize memor port th e alu performs arith meti c logic operations data items acc umulator acc andor mb r typical ly acc ret ains results operati ons control unit consi sts progr counter pc contain addre ss instru ction fetched instruc tion register ir instru ctions fetched memor execu tion two regist ers include structure th ese used hold data address valu es com putation simplic ity io subsys tem shown input outpu alu subsyste m practice o may also occur directly memory o devi ces witho ut utilizi ng processor registers components system interconnected multiplebus structure data addresses ow control unit manages ow use appropriate control signals figur e 15 show mor e general ized compute r system str ucture represe ntative modern day architectures processor subsystem ie central processing unitcpu consists alu control unit various processor registers processor memory io subsystems interconnected system bus consists data address control status lines practical systems may differ singlebus architecture figure 15 sense may congured around multiple buses instance may memory bus connects processor memory subsystem io bus interface io devices processor forming twobus 
structure possible congure system several io buses wherein bus may interface one type io device processor since multiple bus structures allow simultaneous operations buses higher throughput possible compared singlebus architectures however multiple buses system complexity increases thus speedcost tradeoff required decide system structure important note following characteristics von neumann model make inefcient 1 programs data stored single sequential memory create memory access bottleneck 2 explicit distinction data instruction representations memory distinction brought cpu execu tion programs 3 highlevel language programming environments utilize several data structures single multidimensional arrays linked lists etc memory one dimensional requires data structures linearized repre sentation 4 data representation retain information type data instance nothing distinguish set bits representing oatingpoint data representing character string distinction brought program logic characteristics von neumann model overly general requires excessive mapping compilers generate code executable hardware programs written highlevel languages problem terme seman tic gap spite deci encies vo n neu mann model mos practical structure digital compute rs sever al efcient com pilers developed year n arrowed sem antic gap exte nt almost invisible high level language program ming environment note von neu mann model figure 14 provides one path addresse second path data instructions cpu memory early variation model harvard architecture shown figure 16 architecture provides independent paths data addresses data instruction addresses instructions allows cpu access instruction data simultaneously name harvard architecture due howard aiken work marki markiv computers harvard university machines separate storage data instructions current harvard architectures use separate storage data instructions separate paths buffers access data instructions simultaneously early enhancements von neumann model 
mainly concentrated on increasing the speed of the basic hardware structure. As hardware technology progressed, efforts were made to incorporate as many high-level language features as possible into hardware and firmware, in an effort to reduce the semantic gap. Note that hardware enhancements alone may not be sufficient to attain the desired performance; the architecture of the overall computing environment, starting from algorithm development through the execution of programs, needs to be analyzed to arrive at the appropriate hardware, software, and firmware structures. Where possible, these structures should exploit the parallelism in the algorithms. Thus, performance enhancement is one reason for parallel processing; other reasons, such as reliability, fault tolerance, expandability, modular development, etc., also dictate parallel processing structures. We introduce parallel processing concepts later in this book.", "url": "RV32ISPEC.pdf#segment15", "timestamp": "2023-10-17 20:15:11", "segment": "segment15", "image_urls": [], "Book": "computerorganization" }, { "section": "1.2.2 Generations of Computer Technology ", "content": "Commercial computer system development has followed the development of hardware technology and is usually divided into four generations: (1) first generation (1945-1955), vacuum tube technology; (2) second generation (1955-1965), transistor technology; (3) third generation (1965-1980), integrated circuit (IC) technology; and (4) fourth generation (1980 to date), very-large-scale integrated (VLSI) circuit technology. We will not elaborate on the architectural details of the various machines developed during these generations, except for the following brief evolutionary account. First-generation machines such as the UNIVAC 1 and the IBM 701 were built with vacuum tubes. They were slow and bulky and accommodated a limited number of I/O devices; magnetic tape was the predominant I/O medium, and data access time was measured in milliseconds. Second-generation machines such as the IBM 1401 and 7090, RCA 501, CDC 6600, Burroughs 5500, and DEC PDP-1 used random-access core memories and transistor technology, with multifunctional units and multiple processing units; data access time was measured in microseconds. Assemblers and high-level languages were developed during this period. Integrated-circuit technology, used in third-generation machines such as the IBM 360, UNIVAC 1108, ILLIAC-IV, and CDC STAR-100, contributed to nanosecond data access
and processing times. Multiprogramming and array and pipeline processing concepts came into being, and computer systems came to be viewed as general-purpose data processors. The introduction in 1965 of the DEC PDP-8 brought in the minicomputer. Minicomputers were regarded as dedicated-application machines with limited processing capability compared with large-scale machines. Since then, several new minicomputers have been introduced, and the distinction between mini and large-scale machines has become blurred due to advances in hardware and software technology. The development of microprocessors in the early 1970s made a significant contribution: a third class of computer systems, the microcomputers. Microprocessors are essentially computers on an integrated-circuit (IC) chip that can be used as components to build anything from a dedicated controller to a full processing system. Advances in IC technology, leading to the current VLSI era, have made microprocessors more powerful than the minicomputers of the 1970s. VLSI-based systems are called fourth-generation systems, since their performance is much higher than that of third-generation systems. Modern computer system architecture exploits advances in hardware and software technologies to the fullest extent, and the advances in IC technology that make hardware much less expensive have produced an architectural trend of interconnecting several processors to form a high-throughput system. Some claim that we are witnessing the development of fifth-generation systems, although there is no accepted definition of a fifth-generation computer. Fifth-generation development efforts in the United States involve building supercomputers of high computational capability and large memory capacity with flexible multiple-processor architectures employing extensive parallelism. Japanese fifth-generation activities were aimed toward building artificial-intelligence-based machines with high numeric and symbolic processing capabilities, large memories, and user-friendly natural interfaces. Others attribute the fifth generation to biology-inspired architectures such as neural networks, DNA computers, and optical computer systems. The current generation of computer systems exploits the parallelism in algorithms and computations to provide high performance. The simplest example of a parallel architecture is the Harvard architecture, which utilizes two buses operating simultaneously; parallel processing architectures utilize a multiplicity of processors and memories operating concurrently.
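The bus-level parallelism just mentioned (a Harvard machine's two buses operating simultaneously versus a single shared von Neumann path) can be illustrated with a toy cycle-count model. The one-cycle-per-bus-transaction timing and the perfect fetch/data overlap are simplifying assumptions of this sketch, not figures from the text.

```python
# Toy model: each instruction needs one instruction fetch and one data
# access. On a single shared bus these serialize; with separate
# instruction and data buses, the fetch of instruction i+1 can overlap
# the data access of instruction i.

def von_neumann_cycles(n_instructions):
    # 1 fetch cycle + 1 data cycle per instruction, strictly in sequence.
    return 2 * n_instructions

def harvard_cycles(n_instructions):
    # After the first fetch, every cycle completes one instruction,
    # its fetch having overlapped the previous instruction's data access.
    return n_instructions + 1 if n_instructions else 0

for n in (1, 10, 1000):
    print(n, von_neumann_cycles(n), harvard_cycles(n))
```

Under these assumptions the two concurrent buses approach a twofold throughput gain for long instruction streams, which is the sense in which the Harvard organization is the "simplest example" of parallelism.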
We describe various parallel and pipelined architecture structures later in this book.", "url": "RV32ISPEC.pdf#segment16", "timestamp": "2023-10-17 20:15:12", "segment": "segment16", "image_urls": [], "Book": "computerorganization" }, { "section": "1.2.3 Moore\u2019s Law ", "content": "The progress in hardware technology resulting in the current VLSI era has given us the capability to fabricate millions of transistors on an IC chip and has allowed us to build chips with powerful processing structures and large memory systems. Early supercomputers such as the CDC 6600 had 128 KB of memory and could perform 10 million instructions per second; today's supercomputers contain thousands of processors and terabytes of memory and perform a trillion operations per second. All this has been made possible by the miniaturization of transistors. How far can this miniaturization continue? In 1965, Gordon Moore, a founder of Intel Corporation, stated that the density of transistors on an IC would double every year. This so-called Moore's law was later modified to state that the density of chips doubles every 18 months. The law has held true for over 40 years, and many believe it will continue to hold for decades. Arthur Rock proposed a corollary to Moore's law, which states that the cost of the capital equipment to manufacture ICs will double every four years. Indeed, the cost of building a new IC fabrication facility has escalated from about $10,000 in the 1960s to $10 million in the 1990s to $3 billion today. Thus the cost might become prohibitive even though the technology allows the building of denser chips, and newer technologies and processing paradigms are continually being invented to achieve a cost-technology compromise.", "url": "RV32ISPEC.pdf#segment17", "timestamp": "2023-10-17 20:15:12", "segment": "segment17", "image_urls": [], "Book": "computerorganization" }, { "section": "1.3 ORGANIZATION VERSUS DESIGN VERSUS ARCHITECTURE ", "content": "Computer organization addresses issues such as the types and capacity of memory, the control signals, the register structure, the instruction set, etc., and answers the question: how does the computer work? It is basically the programmer's point of view of the machine. Architecture is the art or science of building, a method or style of building. Thus, the computer architect develops the performance specifications for the various components of the computer system and defines the interconnections between them. The computer designer, on the other hand, refines these component specifications and
implements them using hardware, software, and firmware elements. The architect's capabilities are greatly enhanced if he or she is also exposed to the design aspects of the computer system. A computer system can be described at the following levels of detail: (1) Processor-memory-switch (PMS) level, at which the architect views the system simply as a description of the system components and their interconnections; the components are specified at the block-diagram level. (2) Instruction set level, at which the function of each instruction is described; the emphasis of this description level is on the behavior of the system rather than the hardware structure of the system. (3) Register transfer level, at which the hardware structure becomes more visible than at the previous levels; the hardware elements at this level are registers that retain the data being processed until the current phase of processing is complete. (4) Logic gate level, at which the hardware elements are logic gates and flip-flops; the behavior is now less visible, while the hardware structure predominates. (5) Circuit level, at which the hardware elements are resistors, transistors, capacitors, and diodes. (6) Mask level, at which the silicon structures and their layout that implement the system as an IC are shown. As one moves from the first level of description toward the last, it is evident that the behavior of the machine is transformed into a hardware-software structure. The computer architect concentrates on the first two levels described earlier, whereas the computer designer takes the system design through the remaining levels.", "url": "RV32ISPEC.pdf#segment18", "timestamp": "2023-10-17 20:15:12", "segment": "segment18", "image_urls": [], "Book": "computerorganization" }, { "section": "1.4 PERFORMANCE EVALUATION ", "content": "Several measures of performance are used in the evaluation of computer systems. The most common ones are: million instructions per second (MIPS), million operations per second (MOPS), million floating-point operations per second (MFLOPS, or megaflops), billion floating-point operations per second (GFLOPS, or gigaflops), and million logical inferences per second (MLIPS). Machines capable of a trillion floating-point operations per second (teraflops) are now available. Table 1.1 lists the common prefixes used in these measures. The power-of-10 prefixes are typically used for power, frequency, voltage, and computer performance measurements, while the power-of-2 prefixes are typically used for memory, file, and register sizes. The measure used depends on the type of operations one
is interested in for the particular application for which the machine is being evaluated, and these measures are based on a mix of operations representative of their occurrence in the application. The performance rating could be either a peak rate (i.e., the MIPS rating the CPU cannot exceed) or a more realistic average or sustained rate. In addition, a comparative rating that compares the average rate of the machine with those of other well-known machines is also used. In addition to performance, other factors considered in evaluating architectures are the generality (how wide is the range of applications suited to the architecture), ease of use, and expandability (scalability). One feature receiving considerable attention now is the openness of the architecture: an architecture is said to be open if its designers publish the architecture details so that others can easily integrate standard hardware and software systems with it. A further guiding factor in the selection of an architecture is its cost. Several analytical techniques are used in estimating performance. These techniques are approximations, and as the complexity of the system increases they become unwieldy; the practical method of estimating performance in such cases is the use of benchmarks.", "url": "RV32ISPEC.pdf#segment19", "timestamp": "2023-10-17 20:15:12", "segment": "segment19", "image_urls": [], "Book": "computerorganization" }, { "section": "1.4.1 Benchmarks ", "content": "Benchmarks are standardized batteries of programs run on a machine to estimate its performance. The results of running a benchmark on a given machine can then be compared with those on a known or standard machine, using criteria such as CPU and memory utilization, throughput, device utilization, etc. Benchmarks are useful in evaluating hardware as well as software, and single-processor as well as multiprocessor systems; they are also useful in comparing the performance of a system before and after certain changes are made. The ability of a high-level language, on a given host computer architecture, to execute efficiently those features of the programming language frequently used in actual programs is also often measured by benchmarks. Benchmarks are considered representative of classes of applications envisioned for the architecture. Several benchmark suites are in use today, and more are being developed. It is important to note that benchmarks provide only a broad performance guideline; it is the responsibility of the user to select the benchmark that comes closest to his or her application and to further evaluate the machine based on the scenarios expected of the application on the machine being evaluated. Chapter 15
provides further details on benchmarks and performance evaluation.", "url": "RV32ISPEC.pdf#segment20", "timestamp": "2023-10-17 20:15:12", "segment": "segment20", "image_urls": [], "Book": "computerorganization" }, { "section": "1.5 SUMMARY ", "content": "This chapter introduced the basic terminology associated with modern computer systems. The five common levels of architecture abstraction were introduced, along with a trace of the evolution of computer generations. Performance issues and measures were briefly introduced. Subsequent chapters of this book expand on the concepts and issues highlighted in this chapter. References: Burks, A. W., Goldstine, H. H., and von Neumann, J., Preliminary Discussion of the Logical Design of an Electrical Computing Instrument, U.S. Army Ordnance Department Report, 1946. Goldstine, H. H., The Computer from Pascal to von Neumann, Princeton, NJ: Princeton University Press, 1972. Grace, R., The Benchmark Book, Upper Saddle River, NJ: Prentice-Hall, 1996. Hill, M. D., Jouppi, N. P., and Sohi, G. S., Readings in Computer Architecture, San Francisco, CA: Morgan Kaufmann, 1999. Price, W. J., A Benchmarking Tutorial, IEEE Microcomputer, Oct. 1989, pp. 28-43. Schaller, R., Moore's Law: Past, Present and Future, IEEE Spectrum, June 1997, pp. 52-59. Shiva, S. G., Advanced Computer Architectures, Boca Raton, FL: CRC/Taylor & Francis, 2005. Problems", "url": "RV32ISPEC.pdf#segment21", "timestamp": "2023-10-17 20:15:12", "segment": "segment21", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 2 ", "content": "Number Systems and Codes. As mentioned earlier, the elements of discrete data representation correspond to discrete voltage levels or current magnitudes in digital system hardware. If a digital system were required to manipulate only numeric data, for instance, it would be best to use 10 voltage levels, each level corresponding to a decimal digit; however, the noise introduced with multiple levels makes such a representation impractical. Therefore, digital systems typically use a two-level representation, with one voltage level representing 0 and the other representing 1. To represent the 10 decimal digits using this binary (two-valued) alphabet of 0 and 1, a unique pattern of 0s and 1s is assigned to each digit. In an electronic calculator, for example, each keystroke must produce the pattern of 0s and 1s corresponding to the digit or operation represented by that key.
Since data elements and operations are represented in binary form in practical digital systems, a good understanding of the binary number system and binary data representation is basic to the analysis and design of digital system hardware. This chapter discusses the binary number system in detail. In addition, it discusses two other widely used systems, octal and hexadecimal. These two number systems are useful in representing binary information in a compact form for the human user of the digital system, who works with the data manipulated by the system either to verify it or to communicate it to another user. The compactness provided by these systems is helpful, as we will see in this chapter, and data conversion from one number system to another can be performed in a straightforward manner. The data processed by a digital system may be made up of decimal digits, alphabetic characters, special characters, etc. The digital system uses a unique pattern of 0s and 1s to represent each of these digits and characters in binary form. A collection of such binary patterns is called a binary code. Various binary codes have been devised by digital system designers over the years; the most popular codes are discussed in this chapter.", "url": "RV32ISPEC.pdf#segment22", "timestamp": "2023-10-17 20:15:13", "segment": "segment22", "image_urls": [], "Book": "computerorganization" }, { "section": "2.1 NUMBER SYSTEMS ", "content": "Let us first review the decimal number system, the system most familiar to us. It has 10 symbols (0 through 9), called digits, along with a set of relations defining the operations of addition, subtraction, multiplication, and division. The total number of digits in a number system is called the radix (or base) of the system, and the digits of the system range in value from 0 to r-1, where r is the radix. In the decimal system r = 10, and the digits range in value from 0 to 10-1 = 9. In the so-called positional notation of a number, the radix point separates the integer portion of the number from its fraction portion; if there is no fraction portion, the radix point is not explicitly shown. In positional notation, each position of the representation has a weight associated with it. The weight of a position is equivalent to the radix raised to a power. The power starts at 0 for the position immediately to the left of the radix point, increases by 1 as we move each position toward the left, and decreases by 1 as we move each position toward the right. A typical number in the decimal system is shown in the following example, with n integer digits and m fraction digits. Consider an integer with n digits. There is a finite range of values that can be represented by this integer. The smallest value in this range is 0, which corresponds
to each digit of the n-digit integer being equal to 0. When each digit corresponds to the value (r-1), the highest digit in the number system, the n-digit number attains the highest value in its range; this value is equal to r^n - 1. Table 2.1 lists the first few numbers in the various systems. We discuss the binary, octal, and hexadecimal systems next.", "url": "RV32ISPEC.pdf#segment23", "timestamp": "2023-10-17 20:15:13", "segment": "segment23", "image_urls": [], "Book": "computerorganization" }, { "section": "2.1.1 Binary System ", "content": "In this system the radix is 2, and the two allowed digits are 0 and 1. A binary digit is abbreviated as bit. A typical binary number is shown in positional notation in the following example. As can be seen from the polynomial expansion shown there, positions containing a 0 do not contribute to the sum; hence, to convert a binary number to decimal, we simply accumulate the weights corresponding to the nonzero bits of the number. Since each bit can take either of the two values 0 and 1, with 2 bits we can derive 2 x 2 = 4 combinations: 00, 01, 10, and 11; the decimal values of these combinations viewed as binary numbers are 0, 1, 2, and 3, respectively. Similarly, with 3 bits we can derive 2^3 = 8 combinations, ranging in value from 000 (0 decimal) to 111 (7 decimal). In general, with n bits it is possible to generate 2^n combinations of 0s and 1s, and these combinations, when viewed as binary numbers, range in value from 0 to 2^n - 1. Table 2.2 shows some binary numbers for various values of n. The 2^n combinations possible with n bits can be obtained by starting with n 0s and counting in binary until the number with n 1s is reached. A mechanical method of generating these combinations is described herein. The first combination is all n 0s and the last is all n 1s (see Table 2.2). The value of the least significant bit (LSB), in bit position 0, alternates between 0 and 1 every row as we move from row to row. Similarly, the value of bit position 1 alternates every two rows (i.e., two 0s followed by two 1s). In general, the value of bit position i alternates every 2^i rows, starting with 2^i 0s. This observation can be utilized in generating the 2^n combinations.", "url": "RV32ISPEC.pdf#segment24", "timestamp": "2023-10-17 20:15:13", "segment": "segment24", "image_urls": [], "Book": "computerorganization" }, { "section": "2.1.2 Octal System ", "content": "In this system r = 8, and the allowed digits are 0, 1, 2, 3, 4, 5, 6, and 7. A typical number is shown in positional notation in Example 2.3.", "url":
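The polynomial-expansion rule described above (accumulate digit x radix^position) can be sketched in Python; the helper name `to_decimal` is mine, not the book's:

```python
def to_decimal(digits: str, radix: int) -> float:
    """Evaluate a positional-notation number (optionally with a radix point)
    as the polynomial sum of digit * radix**position."""
    if "." in digits:
        int_part, frac_part = digits.split(".")
    else:
        int_part, frac_part = digits, ""
    value = 0.0
    # Integer digits: weights radix**0, radix**1, ... moving leftward.
    for power, d in enumerate(reversed(int_part)):
        value += int(d, 16) * radix ** power
    # Fraction digits: weights radix**-1, radix**-2, ... moving rightward.
    for power, d in enumerate(frac_part, start=1):
        value += int(d, 16) * radix ** -power
    return value
```

For example, `to_decimal("1101", 2)` gives 13 and `to_decimal("101.1", 2)` gives 5.5, matching the weight-accumulation procedure for nonzero bits. (`int(d, 16)` is used only to map a single digit 0-9, A-F to its value.)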
"RV32ISPEC.pdf#segment25", "timestamp": "2023-10-17 20:15:13", "segment": "segment25", "image_urls": [], "Book": "computerorganization" }, { "section": "2.1.3 Hexadecimal System ", "content": "In this system r = 16, and the allowed digits are 0 through 9 and A, B, C, D, E, and F. The digits A through F correspond to the decimal values 10 through 15, respectively. A typical number is shown in the following example.", "url": "RV32ISPEC.pdf#segment26", "timestamp": "2023-10-17 20:15:13", "segment": "segment26", "image_urls": [], "Book": "computerorganization" }, { "section": "2.2 CONVERSION ", "content": "To convert numbers in a nondecimal system to decimal, we simply expand the given number as a polynomial and evaluate the polynomial using decimal arithmetic, as shown in Examples 2.1 through 2.4. When a decimal number is converted to any other system, the integer and fraction portions of the number are handled separately: the radix divide technique is used to convert the integer portion, and the radix multiply technique is used for the fraction portion.", "url": "RV32ISPEC.pdf#segment27", "timestamp": "2023-10-17 20:15:13", "segment": "segment27", "image_urls": [], "Book": "computerorganization" }, { "section": "2.2.1 Radix Divide Technique ", "content": "1. Divide the given integer successively by the required radix, noting the remainder at each step; the quotient at each step becomes the new dividend for the subsequent division. Stop the division process when the quotient becomes zero. 2. Collect the remainders from each step (last to first) and place them left to right to form the required number. The following examples illustrate the procedure. In Example 2.5, 245 is first divided by 2, generating a quotient of 122 and a remainder of 1; next, 122 is divided by 2, generating a quotient of 61 and a remainder of 0. The division process is continued until the quotient is 0, with the remainder noted at each step. The remainder bits, taken last to first and placed left to right, form the number in base 2. To verify the validity of the radix divide technique, consider the polynomial representation of the 4-bit integer a3a2a1a0, which can be rewritten as 2(2(2*a3 + a2) + a1) + a0. From this form it can be seen that the bits of the binary number correspond to the remainders of the successive divide-by-two operations. The following examples show the application of the technique to other number systems.", "url": "RV32ISPEC.pdf#segment28", "timestamp": "2023-10-17 20:15:13", "segment": "segment28", "image_urls": [],
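The two radix-divide steps can be sketched directly; the function name `radix_divide` is my own label for the book's procedure:

```python
DIGITS = "0123456789ABCDEF"

def radix_divide(n: int, radix: int) -> str:
    """Convert a non-negative decimal integer to the given radix by
    successive division, collecting remainders last-to-first."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:                  # step 1: stop when the quotient becomes zero
        n, r = divmod(n, radix)   # quotient becomes the new dividend
        remainders.append(DIGITS[r])
    # step 2: remainders collected last-to-first, placed left to right
    return "".join(reversed(remainders))
```

Running Example 2.5 with this sketch, `radix_divide(245, 2)` yields `"11110101"`, and the same integer converts to `"365"` in octal and `"F5"` in hexadecimal.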
"Book": "computerorganization" }, { "section": "2.2.2 Radix Multiply Technique ", "content": "As we move one position to the right of the radix point, the weight corresponding to each bit of a binary fraction is halved. The radix multiply technique uses this fact: multiplying the given decimal fraction by 2 brings the bit with weight one-half into the integer position of the product, so each fraction bit is obtained in turn. The technique consists of the following steps: 1. Successively multiply the given fraction by the required base, noting the integer portion of the product at each step; use the fractional part of the product as the multiplicand in subsequent steps. Stop when the fraction either reaches 0 or recurs. 2. Collect the integer digits from each step, first to last, and arrange them left to right. The radix multiplication process may not converge to 0, in which case it is not possible to represent the decimal fraction in binary exactly; the accuracy then depends on the number of bits used to represent the fraction. Examples follow. The radix divide and radix multiply algorithms are applicable to the conversion of numbers from any base to any other base. When a number in base p is converted to base q, the number in base p is divided or multiplied by q expressed in base p arithmetic. Because of our familiarity with decimal arithmetic, these methods are convenient when p = 10. In general, it is easier to convert a base p number to base q (p != 10, q != 10) by first converting the number from base p to decimal and then converting that decimal number to base q, i.e., N(base p) -> N(base 10) -> N(base q), as shown in the following example.", "url": "RV32ISPEC.pdf#segment29", "timestamp": "2023-10-17 20:15:13", "segment": "segment29", "image_urls": [], "Book": "computerorganization" }, { "section": "2.2.3 Base 2k Conversion ", "content": "Each of the eight octal digits can be represented by a 3-bit binary number; similarly, each of the 16 hexadecimal digits can be represented by a 4-bit binary number. In general, each digit of a base p number system, where p is an integral power k of 2, can be represented by a k-bit binary number. When converting a base p number to base q, where p and q are both integral powers of 2, the base p number is first converted to binary, which in turn is converted to base q by inspection. This conversion procedure is called base 2^k conversion. It is thus possible to represent binary numbers in a compact form using the octal and hexadecimal systems, and conversion between these systems is also straightforward. Since it is easier to work with fewer digits than with a large number of bits, digital system users prefer to work with the octal and hexadecimal systems when understanding or verifying the results produced by the system and when communicating data to other users or to the machine.", "url":
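The radix multiply steps for fractions can be sketched the same way; `radix_multiply` and the `max_digits` cutoff (used when the process recurs and never reaches 0) are my own:

```python
def radix_multiply(fraction: float, radix: int, max_digits: int = 24) -> str:
    """Convert a decimal fraction (0 <= fraction < 1) to the given radix by
    successive multiplication, collecting integer parts first-to-last."""
    digits = []
    while fraction != 0 and len(digits) < max_digits:
        fraction *= radix
        integer_part = int(fraction)           # the next fraction digit
        digits.append("0123456789ABCDEF"[integer_part])
        fraction -= integer_part               # keep only the fractional part
    return "0." + ("".join(digits) or "0")
```

For example, 0.625 converts exactly (`"0.101"` in binary: 0.625*2=1.25 gives 1, 0.25*2=0.5 gives 0, 0.5*2=1.0 gives 1 and stops), whereas 0.1 recurs and is cut off at `max_digits` bits, illustrating the non-convergence case mentioned above.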
"RV32ISPEC.pdf#segment30", "timestamp": "2023-10-17 20:15:13", "segment": "segment30", "image_urls": [], "Book": "computerorganization" }, { "section": "2.3 ARITHMETIC ", "content": "Arithmetic in the other number systems follows the same general rules as in decimal. Binary arithmetic is simpler than decimal arithmetic, since only the two digits 0 and 1 are involved. Arithmetic in the octal and hexadecimal systems requires practice because of our general unfamiliarity with those systems. This section describes binary arithmetic in detail, followed by a brief discussion of octal and hexadecimal arithmetic. For simplicity, integers are used in the examples in this section; nonetheless, the procedures are valid for fractions and for numbers with both integer and fraction portions in the so-called fixed-point representation of binary numbers. In fixed-point digital systems, the radix point is assumed to be either at the right end or at the left end of the field in which the number is represented; in the first case the number is an integer, and in the second case it is a fraction. Fixed-point representation is the most common type of representation. In scientific computing applications, where a large range of numbers must be represented, floating-point representation is used. Floating-point representation of numbers is discussed in Section 2.6.", "url": "RV32ISPEC.pdf#segment31", "timestamp": "2023-10-17 20:15:13", "segment": "segment31", "image_urls": [], "Book": "computerorganization" }, { "section": "2.3.1 Binary Arithmetic ", "content": "Table 2.3 illustrates the rules of binary addition, subtraction, and multiplication.", "url": "RV32ISPEC.pdf#segment32", "timestamp": "2023-10-17 20:15:13", "segment": "segment32", "image_urls": [], "Book": "computerorganization" }, { "section": "2.3.1.1 Addition ", "content": "From Table 2.3(a), note that 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 10; thus the addition of two 1s results in a sum of 0 and a carry of 1. When two binary numbers are added, the carry from each position is included in the addition of the bits in the next most significant position, as in decimal arithmetic. Example 2.16 illustrates this. The bits in the LSB position (i.e., position 0) are first added, resulting in a sum bit of 1 and a carry of 0. The carry is included in the addition of the bits in position 1; the 3 bits in position 1 are added in two steps (0 + 1 = 1, then 1 + 1 = 10), resulting in a sum bit of 0 and a carry bit of 1 into the next most significant position (position 2). This process is continued through the most significant bit (MSB). In general, the addition of two n-bit numbers results
in a number that is (n+1) bits long. If the number representation is confined to n bits, the operands of the addition should be kept small enough that the sum does not exceed n bits.", "url": "RV32ISPEC.pdf#segment33", "timestamp": "2023-10-17 20:15:13", "segment": "segment33", "image_urls": [], "Book": "computerorganization" }, { "section": "2.3.1.2 Subtraction ", "content": "From Table 2.3(b) we see that 0 - 0 = 0, 1 - 0 = 1, 1 - 1 = 0, and 0 - 1 = 1 with a borrow of 1; that is, subtracting a 1 from a 0 results in a 1 with a borrow from the next most significant position. As in decimal arithmetic, subtraction of two binary numbers is performed stage by stage, starting from the LSB and moving toward the MSB. Some examples follow. In one example, bit position 1 requires a borrow from bit position 2; because of this borrow, the minuend bit in position 2 becomes 0, and the subtraction continues toward the MSB. In another, bit 2 requires a borrow from bit 3, but that borrow propagates: bits 4 and 5 of the minuend are zeros, so the borrow comes from bit 6, and in the process the intermediate minuend bits 4 and 5 attain the value 1 (compare this with decimal subtraction); the subtraction then continues to the MSB.", "url": "RV32ISPEC.pdf#segment34", "timestamp": "2023-10-17 20:15:14", "segment": "segment34", "image_urls": [], "Book": "computerorganization" }, { "section": "2.3.1.3 Multiplication ", "content": "Binary multiplication is similar to decimal multiplication. From Table 2.3(c) we see that 0 x 0 = 0, 0 x 1 = 0, 1 x 0 = 0, and 1 x 1 = 1. An example follows. In general, the product of two n-bit numbers is 2n bits long. In Example 2.19 there are two nonzero bits in the multiplier: one in position 2 (corresponding to 2^2) and one in position 3 (corresponding to 2^3). These 2 bits yield partial products whose values are simply the multiplicand shifted left 2 and 3 bits, respectively; the 0 bits of the multiplier contribute partial products of 0 value. Thus the following shift-and-add algorithm can be adopted to multiply two n-bit numbers A and B, where B = b(n-1) b(n-2) ... b1 b0: 1. Start with a 2n-bit product with a value of 0. 2. For each bi != 0 (0 <= i <= n-1), shift A i positions to the left and add it to the product. This procedure reduces multiplication to the repeated shifting and addition of the multiplicand.", "url": "RV32ISPEC.pdf#segment35", "timestamp": "2023-10-17 20:15:14", "segment": "segment35", "image_urls": [], "Book": "computerorganization" }, { "section": "2.3.1.4 Division ", "content": "The longhand (trial-and-error) procedure of decimal
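The two-step shift-and-add algorithm above can be sketched as follows; the function name and the default width `n = 8` are illustrative choices, not the book's:

```python
def shift_and_add_multiply(a: int, b: int, n: int = 8) -> int:
    """Multiply two n-bit unsigned integers by the shift-and-add algorithm:
    for every nonzero multiplier bit b_i, add the multiplicand shifted
    left i positions into a 2n-bit product."""
    product = 0                      # step 1: 2n-bit product starts at 0
    for i in range(n):               # step 2: scan multiplier bits b_0..b_(n-1)
        if (b >> i) & 1:             # bit b_i is nonzero
            product += a << i        # add the multiplicand shifted i left
    return product & ((1 << (2 * n)) - 1)   # product fits in 2n bits
```

For instance, `shift_and_add_multiply(13, 12)` adds `13 << 2` and `13 << 3` (the two nonzero multiplier bits) to get 156, mirroring Example 2.19's partial products.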
division can also be used in binary, as shown in Example 2.20. In this procedure, the divisor is compared with the dividend at each step: if the divisor is greater than the dividend, the corresponding quotient bit is 0; otherwise, the quotient bit is 1 and the divisor is subtracted from the dividend. This compare-and-subtract process is continued through the LSB of the dividend. The procedure can be formalized in the following steps: 1. Align the divisor with the most significant end of the dividend. Let the portion of the dividend from its MSB to the bit aligned with the LSB of the divisor be denoted X. Assume there are n bits in the divisor and 2n bits in the dividend, and let i = 0. 2. Compare X with the divisor Y. If X >= Y, the quotient bit is 1; perform X - Y -> X. If X < Y, the quotient bit is 0. 3. Set i = i + 1. If i = n, stop; otherwise, shift the divisor 1 bit to the right and go to step 2. For purposes of illustrating the procedure, it is assumed that the division is of integers. If the divisor is greater than the dividend, the quotient is 0; if the divisor is 0, the procedure is stopped, since dividing by 0 results in an error. See the examples. The multiplication and division operations thus reduce to repeated shift, addition, and subtraction operations; hence, hardware that can perform shift, add, and subtract operations can be programmed to perform multiplication and division as well. Older digital systems used such measures to reduce hardware costs; advances in digital hardware technology have since made it possible to implement even complex operations in an economical manner.", "url": "RV32ISPEC.pdf#segment36", "timestamp": "2023-10-17 20:15:14", "segment": "segment36", "image_urls": [], "Book": "computerorganization" }, { "section": "2.3.1.5 Shifting ", "content": "In general, shifting a base r number left by one position (inserting a 0 into the vacant LSD position) is equivalent to multiplying the number by r, and shifting the number right by one position (inserting a 0 into the vacant MSD position) is generally equivalent to dividing the number by r. In the binary system, a left shift multiplies the number by 2 and a right shift divides the number by 2, as shown in Example 2.21. If the MSB of an n-bit number is not 0, shifting left would result in a number of larger magnitude than can be accommodated in n bits, and a 1 shifted out of the MSB position and discarded makes the result erroneous. If nonzero bits shifted out of the LSB position during a right shift are discarded, accuracy is lost. A later chapter discusses shifting in further detail.", "url": "RV32ISPEC.pdf#segment37", "timestamp": "2023-10-17 20:15:14", "segment": "segment37", "image_urls": [], "Book": "computerorganization" }, { "section": "2.3.
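The three-step compare-and-subtract division procedure can be sketched as follows; `binary_divide` and the width parameter `n` are my own names, and the loop walks the shifted divisor from the most significant alignment down to the LSB as in the steps above:

```python
def binary_divide(dividend: int, divisor: int, n: int = 8):
    """Integer division by compare-and-subtract: align the divisor with the
    most significant end of the dividend, emit a quotient bit at each
    alignment, and subtract whenever the shifted divisor fits."""
    if divisor == 0:
        raise ZeroDivisionError("dividing by 0 results in an error")
    quotient = 0
    for i in range(n, -1, -1):       # shift the divisor right one step at a time
        shifted = divisor << i
        quotient <<= 1
        if dividend >= shifted:      # compare: shifted divisor fits
            dividend -= shifted      # subtract
            quotient |= 1            # quotient bit is 1 (else it stays 0)
    return quotient, dividend        # remainder is what is left of the dividend
```

For example, `binary_divide(156, 12)` returns `(13, 0)`, recovering the factors from the multiplication example, and `binary_divide(245, 16)` returns `(15, 5)`.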
2 Octal Arithmetic ", "content": "Table 2.4 shows the octal addition and multiplication tables. The examples that follow illustrate the four arithmetic operations in octal and their similarity to decimal arithmetic. Table 2.4 can be used to look up the result at each stage of the arithmetic. In the alternate method used in the following examples, each stage of the operation is first performed in decimal and the result is converted to octal before proceeding to the next stage, as shown in the scratch pad; alternatively, use the multiplication table (Table 2.4).", "url": "RV32ISPEC.pdf#segment38", "timestamp": "2023-10-17 20:15:14", "segment": "segment38", "image_urls": [], "Book": "computerorganization" }, { "section": "2.4 ", "content": "To derive each quotient digit, use trial and error.", "url": "RV32ISPEC.pdf#segment39", "timestamp": "2023-10-17 20:15:14", "segment": "segment39", "image_urls": [], "Book": "computerorganization" }, { "section": "2.3. ", "content": "3 Hexadecimal Arithmetic. Tables", "url": "RV32ISPEC.pdf#segment40", "timestamp": "2023-10-17 20:15:14", "segment": "segment40", "image_urls": [], "Book": "computerorganization" }, { "section": "2.5 ", "content": "and 2.5 show the hexadecimal addition and multiplication tables; the following examples illustrate hexadecimal arithmetic. The examples shown so far have used only positive numbers. In practice, a digital system must represent both positive and negative numbers. To accommodate the sign of the number, an additional digit called the sign digit is included in the representation along with the magnitude digits; thus, to represent an n-digit number we would need (n+1) digits. Typically, the sign digit is the MSD. Two popular representation schemes are used: the sign-magnitude system and the complement system.", "url": "RV32ISPEC.pdf#segment41", "timestamp": "2023-10-17 20:15:14", "segment": "segment41", "image_urls": [], "Book": "computerorganization" }, { "section": "2.4 SIGN-MAGNITUDE SYSTEM ", "content": "In this representation, (n+1) digits are used to represent a number: the MSD is the sign digit and the remaining n digits are the magnitude digits. The value of the sign digit is 0 for positive numbers and (r-1) for negative numbers, where r is the radix of the number system. Sample representations follow in Example 2.31, in which five digits are assumed to be available to represent each number. The sign and magnitude portions of the number are separated for illustration
purposes only and are not so separated in the actual representation. The sign and magnitude portions are handled separately in arithmetic using sign-magnitude numbers: the magnitude of the result is computed and the appropriate sign is then attached to the result, as in decimal arithmetic. The sign-magnitude system is used in small digital systems, such as digital meters, in which the decimal mode of arithmetic is typically used, and in digital computers with a decimal (binary-coded decimal) arithmetic mode, described later in this chapter. Complement number representation is the prevalent representation mode in modern-day computer systems.", "url": "RV32ISPEC.pdf#segment42", "timestamp": "2023-10-17 20:15:14", "segment": "segment42", "image_urls": [], "Book": "computerorganization" }, { "section": "2.5 COMPLEMENT NUMBER SYSTEM ", "content": "Consider the subtraction of a number B from a number A; it is equivalent to adding A and the negative of B. The complement number system provides a convenient way of representing negative numbers (i.e., complements of positive numbers), thus reducing subtraction to addition. Since multiplication and division correspond, respectively, to repeated addition and subtraction, it is possible to perform the four basic arithmetic operations using only the hardware for addition when negative numbers are represented in complement form. Two popular complement number systems are the radix complement and the diminished radix complement. The radix complement system is commonly called either 2s complement or 10s complement, depending on whether the binary or the decimal number system is used. This section describes the 2s complement system; the 10s complement system displays the same characteristics and is discussed alongside it. Example 2.32: (a) the 2s complement of 01010 is 2^5 - 01010 = 100000 - 01010 = 10110 (n = 5, r = 2); (b) the 2s complement of 00010 is 2^5 - 00010 = 100000 - 00010 = 11110 (n = 5, r = 2); (c) the 10s complement of 4887 is 10^4 - 4887 = 5113 (n = 4, r = 10); (d) the 10s complement of 48.87 is 10^2 - 48.87 = 51.13 (n = 2, r = 10). These can be verified as in Example 2.32. There are two methods of obtaining the radix complement of a number. Method 1 (complement and add 1): for 01010 -> 10101 -> 10110: (a) complement each bit (i.e., change each 0 to 1 and each 1 to 0); (b) add 1 at the LSB to get the 2s complement. Method 2 (copy-complement): for 01010 -> 10110: (a) copy the bits from the LSB up to and including the first nonzero bit; (b) complement the remaining bits through the MSB to get the 2s complement. The diminished radix complement [N](r-1) of a number N in base r is defined as [N](r-1) = r^n - r^(-m) - N (2.4), where n and m are, respectively, the number of
digits in the integer and fraction portions of the number. Note that [N]r = [N](r-1) + r^(-m) (2.5); that is, the radix complement of a number is obtained by adding r^(-m) at the LSB of the diminished radix complement form of the number. The diminished radix complement is commonly called 1s complement or 9s complement, depending on whether the binary or the decimal number system is used, respectively. Example 2.33: (a) N = 1001, r = 2, n = 4, m = 0: [N]1 = (2^4 - 2^0) - 1001 = 1111 - 1001 = 0110; (b) N = 100.1, r = 2, n = 3, m = 1: [N]1 = (2^3 - 2^(-1)) - 100.1 = 111.1 - 100.1 = 011.0; (c) N = 486.7, r = 10, n = 3, m = 1: [N]9 = (10^3 - 10^(-1)) - 486.7 = 999.9 - 486.7 = 513.2. As can be seen from Example 2.33, the 1s complement of a number is obtained by subtracting each of its digits from the largest digit of the number system; in the binary system, this is equivalent to complementing (i.e., changing each 1 to 0 and each 0 to 1) each bit of the given number. Example 2.34: N = 1011.0110; the 1s complement of N is 1111.1111 - 1011.0110 =", "url": "RV32ISPEC.pdf#segment43", "timestamp": "2023-10-17 20:15:14", "segment": "segment43", "image_urls": [], "Book": "computerorganization" }, { "section": "0100.1001 ", "content": "It can also be obtained by complementing each bit of N. As in the sign-magnitude representation, a sign bit is included in the representation of numbers in the complement systems as well. Because the complements of a number correspond to its negative, positive numbers represented in the complement systems remain in the same form as in the sign-magnitude system; negative numbers are represented in complement form, as shown in the following example. Assume 5 bits are available for the representation, with the MSB as the sign bit. To obtain the complement of a number, start with the sign-magnitude form of the corresponding positive number and adopt the complementing procedures discussed in Example 2.35. The sign bit is separated from the magnitude bits for illustration purposes; this separation is not necessary in the complement systems, since the sign bit also participates in arithmetic, just as a magnitude bit does, as we will see in a later section. Table 2.6 shows the range of numbers represented by 5 bits in the sign-magnitude, 2s complement, and 1s complement systems. Note that the sign-magnitude and 1s complement systems have two representations of 0 (+0 and -0), whereas the 2s complement system has a unique representation of 0. Note also the use of the combination 10000 to represent the largest negative number in the 2s complement system. In general, the ranges of integers represented in an n-bit field (using 1 sign bit and n-1 magnitude bits) in the three
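Both complementing procedures above can be sketched over bit strings; the helper names are mine, and `twos_complement` implements Method 2 (copy-complement) rather than the add-1 method:

```python
def ones_complement(bits: str) -> str:
    """1s complement: subtract each digit from the largest digit (here 1),
    i.e., invert every bit."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits: str) -> str:
    """2s complement, Method 2 (copy-complement): copy from the LSB up to
    and including the first nonzero bit, then complement the rest."""
    i = bits.rfind("1")              # position of the first nonzero bit (from LSB)
    if i == -1:
        return bits                  # the complement of 0 is 0
    return ones_complement(bits[:i]) + bits[i:]
```

Checking against Example 2.32, `twos_complement("01010")` gives `"10110"` and `twos_complement("00010")` gives `"11110"`; `ones_complement("1001")` gives `"0110"` as in Example 2.33(a).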
systems are: (1) sign-magnitude: -(2^(n-1) - 1) to +(2^(n-1) - 1); (2) 1s complement: -(2^(n-1) - 1) to +(2^(n-1) - 1); (3) 2s complement: -2^(n-1) to +(2^(n-1) - 1). We now illustrate the arithmetic in these systems of number representation.", "url": "RV32ISPEC.pdf#segment44", "timestamp": "2023-10-17 20:15:14", "segment": "segment44", "image_urls": [], "Book": "computerorganization" }, { "section": "2.5.1 2s Complement Addition ", "content": "Example 2.36 illustrates the addition of numbers represented in 2s complement form. (a) Decimal 5 + 4: sign-magnitude 00101 and 00100; 2s complement 00101 + 00100 = 01001 = +9. The sign-magnitude and 2s complement representations are the same, since both numbers are positive. In 2s complement addition, the sign bit is treated just like a magnitude bit and participates in the addition process; in this example the sign and magnitude portions are separated for illustration purposes only. (b) Decimal 5 + (-4): sign-magnitude 00101 and 10100; 2s complement 00101 + 11100 = (1)00001 = +1. The carry from the sign position is ignored. When a negative number is represented in complement form and the two numbers are added, the sign bits are also included in the addition process and the carry from the sign bit position is ignored; the sign bit of 0 indicates that the result is positive. (c) Decimal 4 + (-5): 2s complement 00100 + 11011 = 11111 = -1. The result is negative and there is no carry. When no carry is generated from the MSB of the addition and the result is negative (since the sign bit is 1), the result is in complement form and must be complemented to obtain the sign-magnitude representation: sign-magnitude 10001 = -1. (d) Decimal (-5) + (-4): 2s complement 11011 + 11100 = (1)10111 = -9. Ignore the carry; the result is negative and in complement form. Subtraction as performed in decimal arithmetic in the sign-magnitude system requires that the number of smaller magnitude be subtracted from the one of larger magnitude and that the sign of the result be that of the larger number; no such comparison is needed in the complement system, as shown in Example 2.36. In summary, in 2s complement addition: a carry generated out of the MSB (sign bit) is ignored; if both operands have the same sign but the sign of the result differs from that of the operands, the result is too large for the magnitude field and hence an overflow occurs; if the signs of the operands are different, a carry out of the sign bit indicates a positive result, and if no carry is generated the result is negative and must be complemented to obtain sign-magnitude form; and the sign bit participates in the arithmetic.", "url": "RV32ISPEC.pdf#segment45", "timestamp": "2023-10-17 20:15:14", "segment": 
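The summary rules (ignore the carry out of the sign bit; detect overflow when both operands share a sign but the result does not) can be sketched over raw n-bit patterns; the function name and `n = 5` default follow Example 2.36's 5-bit field and are my own choices:

```python
def add_2s_complement(a: int, b: int, n: int = 5):
    """Add two n-bit 2s complement values given as raw bit patterns.
    Returns (n-bit result pattern, overflow flag)."""
    mask = (1 << n) - 1
    total = (a + b) & mask           # the carry out of the sign bit is ignored
    sign = 1 << (n - 1)
    # Overflow: operands share a sign, but the result's sign differs.
    overflow = (a & sign) == (b & sign) and (a & sign) != (total & sign)
    return total, overflow
```

Replaying Example 2.36(b), `add_2s_complement(0b00101, 0b11100)` (that is, 5 + (-4)) returns `(0b00001, False)`: the pattern for +1 with the carry discarded and no overflow.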
"segment45", "image_urls": [], "Book": "computerorganization" }, { "section": "2.5.2 1s Complement Addition ", "content": "Example 2.37 illustrates 1s complement addition. It is similar to addition in the 2s complement representation, except that any carry generated from the sign bit is added to the LSB of the result to complete the addition (an end-around carry).", "url": "RV32ISPEC.pdf#segment46", "timestamp": "2023-10-17 20:15:14", "segment": "segment46", "image_urls": [], "Book": "computerorganization" }, { "section": "2.5.3 Shifting Revisited ", "content": "As seen earlier, shifting a binary number left by 1 bit is equivalent to multiplying it by 2, and shifting it right by 1 bit is equivalent to dividing it by 2. Example 2.38 illustrates the effect of shifting an unsigned number. Consider a number N with six integer bits and four fraction bits. If we shift the number left by 1, the MSB would be lost, thereby resulting in overflow. Shifting N right by 1 bit loses the 1 in the LSB because of the shift, thereby resulting in a less accurate fraction; if there are enough bits to retain all the nonzero bits of the fraction after the shift operation, no loss of accuracy results. In practice, there are a finite number of bits in the number representation; hence, care must be taken to see that shifting does not result in either overflow or inaccuracy. Sign-magnitude shifting: when sign-magnitude numbers are shifted, only the magnitude bits are shifted and the sign bit is not included in the shift operations; the shifting follows the same procedure as in Example 2.38. 2s complement shifting: when 2s complement numbers are shifted right, the sign bit value is copied into the vacant MSB (magnitude bit) position left by the shift, and a 0 is inserted into the vacant LSB position during a left shift; Example 2.39 illustrates this. A change in the value of the sign bit during a left shift indicates an overflow (i.e., the result is too large). 1s complement shifting: when 1s complement numbers are shifted, a copy of the sign bit is inserted into the vacant LSB position during a left shift and into the MSB position of the magnitude bits during a right shift; the sign bit receives the MSB of the magnitude during the left shift.", "url": "RV32ISPEC.pdf#segment47", "timestamp": "2023-10-17 20:15:15", "segment": "segment47", "image_urls": [], "Book": "computerorganization" }, { "section": "2.5.4 Comparison of Complement Systems ", "content": "Table 2.7 summarizes the operations in the complement systems. The 2s complement system is used in almost
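The 2s complement shifting rules (copy the sign bit into the vacant MSB on a right shift; insert 0 at the LSB on a left shift, flagging overflow when the sign bit changes) can be sketched over bit strings; the string-based convention and function names are mine:

```python
def arithmetic_shift_right(bits: str) -> str:
    """2s complement right shift: the sign bit is copied into the
    vacant MSB position; the old LSB is discarded."""
    return bits[0] + bits[:-1]

def arithmetic_shift_left(bits: str):
    """2s complement left shift: a 0 fills the vacant LSB position.
    Returns (result, overflow); overflow means the sign bit changed."""
    shifted = bits[1:] + "0"
    return shifted, shifted[0] != bits[0]
```

For example, shifting the 5-bit pattern for -6 right gives -3 (`"11010"` becomes `"11101"`), shifting +6 left gives +12 without overflow, and shifting +12 left overflows because the sign bit changes.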
all digital computer systems; the 1s complement system was used in older computer systems. The advantage of the 1s complement system is that the 1s complement is obtained by simply inverting each bit of the original number from 1 to 0 and from 0 to 1, which can be done very easily by the logic components of a digital system. Conversion of a number into the 2s complement system requires an addition operation after the 1s complement of the number is obtained, unless the scheme used to implement it is the copy-complement algorithm described earlier in this chapter. The 2s complement is the most widely used system of representation. One other popular representation, the biased (excess-radix) representation, is used to represent floating-point numbers; this representation is described in Section 2.6.", "url": "RV32ISPEC.pdf#segment48", "timestamp": "2023-10-17 20:15:15", "segment": "segment48", "image_urls": [], "Book": "computerorganization" }, { "section": "2.6 FLOATING-POINT NUMBERS ", "content": "Fixed-point representation is convenient for representing numbers with bounded orders of magnitude. For instance, if a digital computer uses 32 bits to represent numbers, the range of integers that can be used in fixed-point format is limited by 2^31 - 1 (approximately 2 x 10^9). In scientific computing environments, where a wider range of numbers may be needed, the floating-point representation may be used. The general form of the floating-point representation of a number N is shown with a mantissa and an exponent. Although the last three forms shown are all valid floating-point representations, the first two forms are preferred, since these forms need not represent the integer portion of the mantissa; when the integer portion is 0, the first form requires the fewest digits to represent the mantissa, since non-significant zeros are eliminated. This form is called the normalized form of floating-point representation. Note from the example that the radix point floats within the mantissa: incrementing the exponent by 1 moves it left, and decrementing the exponent by 1 moves it right. Shifting the mantissa and correspondingly scaling the exponent are frequently done in the manipulation of floating-point numbers. Let us now concentrate on the representation of floating-point numbers. The radix is implied by the number system used and hence is not shown explicitly in the representation. The mantissa and the exponent can each be either positive or negative; hence, the floating-point representation consists of four components: E and F (the exponent and mantissa) and Se and Sf, the signs of the exponent and mantissa, respectively. F is represented in normalized form, and the true binary form is used in the representation of F rather than the complement forms. One bit is used
to represent Sf: 0 for positive and 1 for negative. Since the MSB of the normalized mantissa is always 1, the range of the mantissa is correspondingly bounded. In this floating-point representation, 0 is an exception: its representation contains all 0s. When two floating-point numbers are added, their exponents must be compared and equalized first. To simplify the comparison operation so that it does not involve the sign of the exponent, exponents are usually converted to positive numbers by adding a bias constant to the true exponent. The bias constant is usually the magnitude of the largest negative number that can be represented in the exponent field; thus, in a floating-point representation with q bits in the exponent field, if the 2s complement representation were used, the unbiased exponent En would be biased accordingly. As long as the mantissa is 0, theoretically the exponent could be anything, thereby making possible several representations of 0 in floating-point representation. As with numbers in fixed-point representation, 0 should be represented by a sequence of 0s; to retain this uniqueness of the 0 representation across fixed- and floating-point representations, the mantissa is set to all 0s and the exponent is set to the most negative exponent in biased form (i.e., all 0s).", "url": "RV32ISPEC.pdf#segment49", "timestamp": "2023-10-17 20:15:15", "segment": "segment49", "image_urls": [], "Book": "computerorganization" }, { "section": "2.6.1 IEEE Standard ", "content": "In 1985, the IEEE (Institute of Electrical and Electronics Engineers) published a standard for floating-point numbers. The standard, officially known as IEEE 754-1985, specifies how single-precision (32-bit) and double-precision (64-bit) floating-point numbers are to be represented, as well as how arithmetic should be carried out on them. Binary floating-point numbers are stored in sign-magnitude form, with the most significant bit as the sign bit, followed by the (biased) exponent and then the mantissa (the significand minus its most significant bit). Because exponents are signed, with both small and huge values, a 2s complement representation, the usual representation for signed values, would make comparison of exponent values harder. To solve this problem, the exponent is biased by 2^(e-1) - 1 before being stored, which makes its stored value unsigned and suitable for comparison. The most significant bit of the mantissa is determined by the value of the exponent: if the exponent is greater than 0 and less than 2^e - 1, the most significant (hidden) bit of the mantissa is 1, and the number is said to be normalized; if the exponent is 0, the most significant bit of the mantissa is 0, and the number is said to be denormalized. Three special cases arise: (1) if the exponent is 0 and the mantissa is 0, the number is +/-0, depending on the sign bit; (2) if the exponent is 2^e - 1 and the mantissa is 0, the number is +/-infinity, depending on the sign bit;
(3) if the exponent is 2^e - 1 and the mantissa is not 0, the quantity being represented is not a number (NaN). These cases are summarized as shown.", "url": "RV32ISPEC.pdf#segment50", "timestamp": "2023-10-17 20:15:15", "segment": "segment50", "image_urls": [], "Book": "computerorganization" }, { "section": "2.6.1.1 Single Precision ", "content": "A single-precision binary floating-point number is stored in a 32-bit word. The exponent is biased by 2^(8-1) - 1 = 127; in this case the 8-bit exponent field can represent values in the range -126 to +127. An exponent of -127, which would have a biased value of 0, is reserved to encode denormalized numbers and zero; an exponent of +128, which would have a biased value of 255, is reserved to encode infinity and NaN. For normalized numbers, the most common case, the exponent field contains the biased exponent value and the mantissa field is the fractional part of the significand; the number has the value (-1)^sign x 1.fraction x 2^(exponent - 127). The fraction is binary, i.e., the significand is a binary number consisting of 1, followed by the radix point, followed by the binary bits of the fraction; it therefore lies in the range 1 to 2. Notes: (1) Denormalized numbers are the same, except that e = -126 and the significand is 0.fraction rather than 1.fraction; at e = -127 the significand is shifted right one bit in order to include the leading bit, which is no longer always 1, and this shift is balanced by incrementing the exponent to -126 for the calculation. (2) -126 is the smallest exponent for a normalized number. (3) There are two zeroes: +0 and -0. (4) There are two infinities: +infinity and -infinity. (5) NaNs may have a sign and a significand, whose meaning is diagnostic; the first bit of the significand is often used to distinguish signaling NaNs from quiet NaNs. (6) NaNs and infinities have all 1s in the exp field. (7) The smallest nonzero positive and largest nonzero negative numbers are represented by the denormalized value with all 0s in the exp field and the binary value 1 in the fraction field. Example 2.41: Represent the decimal number -118.625 using the IEEE 754 system. (1) Since it is a negative number, the sign bit is 1. (2) The number without the sign, in binary, is 1110110.101. (3) Moving the radix point left, leaving only a 1 to its left: 1110110.101 = 1.110110101 x 2^6. This is the normalized floating-point number. The mantissa part to the right of the radix point is filled with 0s on the right to make up 23 bits: 11011010100000000000000. (4) The exponent is 6; with the bias of 127, the exponent field is 6 + 127 = 133, in binary 10000101. The representation is thus obtained.", "url": "RV32ISPEC.pdf#segment51", "timestamp": "2023-10-17 20:15:15", "segment": "segment51", "image_urls": [], "Book": "computerorganization" }, { "section": "2.6.1.2 Double Precision ", "content": "The double-precision format is essentially the same, except that the fields are
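Example 2.41 can be checked directly with Python's `struct` module, which exposes the IEEE 754 single-precision bit pattern of a value; the helper name `float_bits` is mine:

```python
import struct

def float_bits(x: float) -> str:
    """Return the 32-bit IEEE 754 single-precision pattern of x,
    split into sign | biased exponent | fraction fields."""
    (word,) = struct.unpack(">I", struct.pack(">f", x))
    s = f"{word:032b}"
    return f"{s[0]} {s[1:9]} {s[9:]}"

print(float_bits(-118.625))
# 1 10000101 11011010100000000000000
```

The output reproduces the example exactly: sign 1, biased exponent 10000101 (133 = 6 + 127), and the 23-bit fraction 11011010100000000000000.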
wider. NaNs and infinities are represented with an exponent field of all 1s (2047). For normalized numbers, the exponent bias is 1023 (e = exp - 1023); for denormalized numbers, the exponent used is -1022, the minimum exponent for a normalized number (whose bias would correspond to -1023). Normalized numbers have a leading 1 digit before the binary point; denormalized numbers do not. As earlier, infinity and zero are signed. Notes: (1) The smallest nonzero positive and largest nonzero negative numbers, represented by the denormalized value with all 0s in the exp field and the binary value 1 in the fraction field, are +/-2^(-1074), approximately +/-5 x 10^(-324). (2) The smallest nonzero positive and largest nonzero negative normalized numbers, represented by the value with binary 1 in the exp field and 0 in the fraction field, are +/-2^(-1022), approximately +/-2.2250738585072014 x 10^(-308). (3) The largest finite positive and smallest finite negative numbers, represented by the value with the largest non-reserved exp field (2046) and all 1s in the fraction field, are +/-(2^1024 - 2^971), approximately +/-1.7976931348623157 x 10^308.", "url": "RV32ISPEC.pdf#segment52", "timestamp": "2023-10-17 20:15:15", "segment": "segment52", "image_urls": [], "Book": "computerorganization" }, { "section": "2.6.1.3 Comparison ", "content": "The representation makes comparisons of certain subsets of numbers possible on a byte-by-byte basis, provided they share the same byte order and the same sign, and NaNs are excluded. For example, for two positive floating-point numbers a and b, the comparison a < b gives identical results to the comparison of the bit patterns of a and b as two signed (or unsigned) binary integers with the same byte order; in other words, two positive floating-point numbers known not to be NaNs can be compared with a signed (or unsigned) binary integer comparison using the same bits, provided the floating-point numbers use the same byte order.", "url": "RV32ISPEC.pdf#segment53", "timestamp": "2023-10-17 20:15:15", "segment": "segment53", "image_urls": [], "Book": "computerorganization" }, { "section": "2.6.1.4 Rounding ", "content": "The IEEE standard has four different rounding modes: (1) unbiased round to nearest, which rounds to the nearest value and, when the number falls midway, rounds to the nearest value with an even (zero) least significant bit; this mode is required and is the default; (2) toward zero; (3) toward positive infinity; (4) toward negative infinity. The standard behavior of computer hardware is to round the ideal (infinitely precise) result of an arithmetic operation to the nearest representable value and give that representation as the result. In practice, there are options: IEEE 754-compliant hardware allows one to set the rounding mode to one of the following: (1) round to
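The double-precision limits in the three notes above can be confirmed from Python, whose `float` is an IEEE 754 double; `sys.float_info` reports the format's extremes:

```python
import sys

# Note 3: largest finite double = (2 - 2**-52) * 2**1023 = 2**1024 - 2**971.
print(sys.float_info.max)        # 1.7976931348623157e+308
# Note 2: smallest positive normalized double = 2**-1022.
print(sys.float_info.min)        # 2.2250738585072014e-308
# Note 1: smallest positive denormalized double = 2**-1074.
print(2.0 ** -1074)              # 5e-324
```

The three printed values match the magnitudes given in the notes for the denormalized minimum, the normalized minimum, and the finite maximum.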
Of the settable modes: 1. Round to nearest is the default and by far the most common mode. 2. In round toward +∞, negative results round toward zero. 3. In round toward −∞, negative results round away from zero. 4. Round toward zero, sometimes called chop mode, is similar to the common behavior of float-to-integer conversions, which convert 3.9 to 3. In its default round-to-nearest mode, the IEEE 754 standard mandates the round-to-nearest-even behavior described earlier for the fundamental algebraic operations, including square root; library functions such as cosine and log are not mandated. This means that on IEEE-compliant hardware, the behavior is completely determined by the 32 or 64 bits of the operands. The mandated behavior for dealing with overflow and underflow is that the appropriate result is computed, taking the rounding mode into consideration, as though the exponent range were infinitely large; if the resulting exponent cannot be packed into its field correctly, the overflow or underflow action described earlier is taken. The arithmetical distance between two consecutive representable floating-point numbers is called an ulp (unit in the last place). For example, if two numbers are represented as 4567.0123 and 4567.0124 (hexadecimal), they are one ulp apart. One ulp is about 10^−7 of the represented value in single precision and about 10^−16 in double precision. The mandated behavior of IEEE-compliant hardware is that each result be within one-half ulp of the exact result.", "url": "RV32ISPEC.pdf#segment54", "timestamp": "2023-10-17 20:15:15", "segment": "segment54", "image_urls": [], "Book": "computerorganization" }, { "section": "2.6.1.5 Accuracy ", "content": "Floating-point numbers do not faithfully mimic real numbers, and floating-point operations do not faithfully mimic true arithmetic operations; many problems arise from this when writing mathematical software that uses floating point. First, although addition and multiplication are commutative (a + b = b + a and a × b = b × a), they are not associative: (a + b) + c does not always equal a + (b + c). Using 7-digit decimal arithmetic:", "url": "RV32ISPEC.pdf#segment55", "timestamp": "2023-10-17 20:15:16", "segment": "segment55", "image_urls": [], "Book": "computerorganization" }, { "section": "(1234.567 + 45.67844) + 0.0004: 1234.567 + 45.67844 = 1280.245; 1280.245 + 0.0004 = 1280.245 ", "content": "© 2007 Taylor & Francis Group, LLC", "url": "RV32ISPEC.pdf#segment56", "timestamp": "2023-10-17 20:15:16", "segment": "segment56", "image_urls": [], "Book": "computerorganization" }, { "section": "45.67844 + 0.0004 = 45.67884; 45.67884 + 1234.567 = 1280.246 ", "content": "The two orders of summation give different results. Multiplication also fails to distribute over addition: (a + b) × c does not always equal a × c + b × c:", "url": "RV32ISPEC.pdf#segment57", "timestamp": "2023-10-17 20:15:16", "segment": "segment57", "image_urls": [], "Book": "computerorganization" }, { "section": "1234.567 × 3.333333 = 4115.223; 1.234567 × 3.333333 = 4.115223; 4115.223 + 4.115223 = 4119.338 ", "content": "", "url": "RV32ISPEC.pdf#segment58", "timestamp": "2023-10-17 20:15:16", "segment": "segment58", "image_urls": [], "Book": "computerorganization" }, { "section": "1234.567 + 1.234567 = 1235.802; 1235.802 × 3.333333 = 4119.340 ", "content": "Aside from the rounding actions performed during arithmetic operations, inaccuracies can accumulate in unexpected ways. Consider the 24-bit single-precision representation of decimal 0.1 given previously: e = −4, significand 1.10011001100110011001101, which is exactly 0.100000001490116119384765625. Squaring this number gives exactly 0.010000000298023226097399174250313080847263336181640625; the nearest representable number to this is 0.0100000007078051567077636718750 (e = −7, significand 1.01000111101011100001011), whereas the representable number closest to 0.01 is 0.00999999977648258209228515625 (e = −7, significand 1.01000111101011100001010). Thus, in binary floating point, the expectation that 0.1 squared equals 0.01 is not met. Similarly, division by 10 does not always give the same results as multiplication by 0.1, even though division by 4 and multiplication by 0.25 do agree; in decimal arithmetic, division by 3 does not give the same results as multiplication by 0.33333, as long as only a finite number of digits is considered. In addition to this loss of significance and the inability to represent numbers such as π and 0.1 exactly, the slight inaccuracies cause the following phenomena: 1. Cancellation: subtraction of nearly equal operands may cause an extreme loss of accuracy; this is perhaps the most common serious accuracy problem. 2. Conversions to integer are unforgiving: converting (63.0/9.0) to integer yields 7, but converting (0.63/0.09) may yield 6, because conversions generally truncate rather than round. 3. The exponent range is limited, so results might overflow, yielding infinity. 4. Testing for safe division is problematical: checking that the divisor is not zero does not guarantee that a division will not overflow and yield infinity. 5. Testing for equality is problematical: two computational sequences that are
mathematically equal may well produce different floating-point values. Programmers often perform comparisons within a tolerance, but since a tolerance given as a decimal constant may itself not be accurately representable, this does not necessarily make the problem go away.", "url": "RV32ISPEC.pdf#segment59", "timestamp": "2023-10-17 20:15:16", "segment": "segment59", "image_urls": [], "Book": "computerorganization" }, { "section": "2.6.1.6 Exceptions ", "content": "In addition to the infinity value produced when overflow occurs, the special value NaN (not a number) is produced by operations such as taking the square root of a negative number. It is encoded with the reserved exponent (128 in single precision, 1024 in double precision) and a significand field that distinguishes it from infinity. The intention is that Inf and NaN values, in common circumstances, propagate from one operation to the next: an operation with a NaN operand produces a NaN result, so such values need not be attended to until a point the programmer chooses. In addition to the creation of these exceptional values, the following events may occur, though all are quite benign: 1. Overflow occurs as described previously, producing infinity. 2. Underflow occurs as described previously, producing a denorm. 3. Zero-divide occurs whenever the divisor is zero, producing an infinity of the appropriate sign (if the sign of the zero is meaningful); note that a small nonzero divisor can still cause overflow and produce infinity. 4. Operand error occurs whenever a NaN is created, which happens whenever an operand of an operation is a NaN or when no obvious result exists, as in taking the square root or logarithm of a negative number. 5. The inexact event occurs whenever rounding has changed a result from its true mathematical value; it occurs almost all the time and is usually ignored, but can be examined in exacting applications. Computer hardware is typically able to raise exceptions (traps) when these events occur; how they are presented to software is language- and system-dependent. Usually the exceptions are masked (disabled), though sometimes overflow, zero-divide, and operand error are enabled.", "url": "RV32ISPEC.pdf#segment60", "timestamp": "2023-10-17 20:15:16", "segment": "segment60", "image_urls": [], "Book": "computerorganization" }, { "section": "2.7 BINARY CODES ", "content": "So far we have seen various ways of representing numeric data in binary form. A digital system requires all of its information in binary form; the external world, however, uses various symbols, such as alphabetic characters and special characters, to represent information.
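Several of the accuracy pitfalls and exceptional values above are directly observable in double precision. This is a sketch using Python floats, which are IEEE 754 doubles:

```python
import math

# 0.1 is not exactly representable, so squaring it does not give the double closest to 0.01.
assert 0.1 * 0.1 != 0.01

# Addition is commutative but not associative.
assert (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)

# Loss of significance: at 1e16 an ulp is 2, so adding 1 rounds back to 1e16.
assert (1e16 + 1.0) - 1e16 == 0.0

# Overflow produces infinity; invalid operations produce NaN, and NaN != NaN.
assert 1e308 * 10.0 == math.inf
nan = float("inf") - float("inf")
assert math.isnan(nan) and nan != nan

print("all floating-point pitfalls reproduced")
```

The NaN-propagation behavior means such checks can be deferred until the end of a computation, exactly as the text describes.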
To represent the various symbols in binary form, a unique pattern of 0s and 1s is assigned to each symbol; the pattern is the code word corresponding to that symbol. As seen earlier, it is possible to represent 2^n elements with a binary string containing n bits. If q symbols are to be represented in binary form, the minimum number of bits n required in a code word is given by 2^n ≥ q; a code word might contain more than n bits, however, to accommodate error detection and correction. Once the number of bits in a code word is set, the assignment of code words to symbols could be arbitrary, in which case a table associating each element with its code word is needed, or it might follow a general rule. For example, if code words are required to represent four symbols, say dog, cat, tiger, and camel, we can use the four combinations of 2 bits: 00, 01, 10, and 11; the assignment of the combinations to the four symbols is arbitrary. To represent the 26 letters of the alphabet, we would need a 5-bit code: with 5 bits it is possible to generate 32 combinations of 0s and 1s, and since 26 < 32, 26 of the combinations can be used to represent the alphabet. Similarly, a 4-bit code is needed to represent the 10 decimal digits, using 10 of the 16 possible combinations. We examine these possibilities later in this section. Codes designed to represent only numeric data (i.e., the decimal digits 0 through 9) are classified into two categories, weighted and nonweighted; alphanumeric codes represent both alphabetic and numeric data; and a third class of codes is designed for error-detection and correction purposes.", "url": "RV32ISPEC.pdf#segment61", "timestamp": "2023-10-17 20:15:16", "segment": "segment61", "image_urls": [], "Book": "computerorganization" }, { "section": "2.7.1 Weighted Codes ", "content": "As stated earlier, we need at least 4 bits to represent the 10 decimal digits. With 4 bits it is possible to represent 16 elements; since only 10 of the 16 possible combinations are used, numerous distinct codes are possible, and Table 2.8 shows some of the possibilities. Note that these are weighted codes: each bit position of the code has a weight associated with it, and the sum of the weights corresponding to the nonzero bits of a code word is the decimal digit it represents. Note that a word in the (8, 4, 2, 1) code of the table is simply the binary number whose decimal equivalent is the decimal digit it represents; this commonly used code is known as the binary-coded decimal (BCD) code. When a BCD-encoded number is used in arithmetic operations, each decimal digit is represented by 4 bits.
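The digit-by-digit BCD mapping just stated, together with the 2^n ≥ q rule for minimum code-word length, can be sketched as follows (the helper names to_bcd and min_bits are illustrative, not from the text):

```python
import math

def to_bcd(n: int) -> str:
    """Encode a nonnegative decimal integer digit by digit as BCD."""
    return " ".join(f"{int(d):04b}" for d in str(n))

def min_bits(q: int) -> int:
    """Smallest n with 2**n >= q distinct code words."""
    return math.ceil(math.log2(q))

print(to_bcd(567))                  # 0101 0110 0111
print(min_bits(26), min_bits(10))   # 5 4
```

The outputs match the text's counts: 5 bits for the 26-letter alphabet and 4 bits for the 10 decimal digits.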
For example, 567₁₀ is represented in BCD as 0101 0110 0111 (5 = 0101, 6 = 0110, 7 = 0111). In BCD arithmetic, each 4-bit unit is treated as a digit and the arithmetic is performed on a digit-by-digit basis, as shown in Example 2-42. When the sum pattern for two digits is greater than 9, the resulting 4-bit pattern is not a valid code word; in that case, a correction factor of 6 is added to the digit to derive a valid code word. If the sum in the least significant digit exceeds 9, the correction produces a carry into the next significant digit; if that digit in turn exceeds 9, correcting it yields the final correct sum. It is important to understand the difference between the binary and BCD representations of numbers: in BCD, each decimal digit is represented by its corresponding 4-bit code word, whereas in binary, the complete number is converted into a binary pattern with the appropriate number of bits. For example, the binary representation of 567₁₀ is 1000110111, whereas the BCD representation is 0101 0110 0111. In the (2, 4, 2, 1) code shown in Table 2.8, decimal 6 is represented by 1100; another valid representation of 6 in this code would be 0110. The combinations chosen in Table 2.8 make the code self-complementing. A code is said to be self-complementing if the code word of the 9's complement of N (i.e., 9 − N) can be obtained by complementing each bit of the code word of N; this property of a code makes taking complements easy to implement in digital hardware. A necessary condition for a code to be self-complementing is that the sum of its weights be 9; thus the (2, 4, 2, 1) and (6, 4, 2, −3) codes are self-complementing, but BCD is not.", "url": "RV32ISPEC.pdf#segment62", "timestamp": "2023-10-17 20:15:16", "segment": "segment62", "image_urls": [], "Book": "computerorganization" }, { "section": "2.7.2 Nonweighted Codes ", "content": "Table 2.9 shows two popular codes in which no weight is associated with the bits of a code word. Excess-3 is a 4-bit self-complementing code in which the code for each decimal digit is obtained by adding 3 to the corresponding BCD code word. The excess-3 code was used in older computer systems because, in addition to being self-complementing (thereby making subtraction easier), it enables simpler arithmetic hardware; the code is included here for completeness but is no longer commonly used. The Gray code is a 4-bit code whose 16 code words are selected such that only one bit position changes in moving from any code word to the subsequent code word. Codes with this property are called cyclic codes; because only 1 bit changes from code word to code word, an error is easier to detect. A classic application is a shaft-position indicator.
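The single-bit-change property of such cyclic codes is easy to verify. This sketch uses the binary-reflected Gray code, one standard construction (the identity n ^ (n >> 1) is a well-known formula, not from the text, and the text's Table 2.9 may use a different assignment):

```python
def gray(n: int) -> int:
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

codes = [gray(i) for i in range(16)]
# Consecutive code words (cyclically, including last -> first) differ in exactly one bit.
for a, b in zip(codes, codes[1:] + codes[:1]):
    assert bin(a ^ b).count("1") == 1
print([f"{c:04b}" for c in codes[:4]])  # ['0000', '0001', '0011', '0010']
```

The assertion also holds for the wrap-around pair (1000 back to 0000), which is what makes the code cyclic.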
Suppose the shaft's rotation is divided into 16 sectors, each indicated by a Gray code word. As the shaft rotates, the code words change; if at any time the change is of 2 bits in a code word compared with the previous one, an error has occurred. Several cyclic codes have been devised; the Gray code is the most commonly used.", "url": "RV32ISPEC.pdf#segment63", "timestamp": "2023-10-17 20:15:16", "segment": "segment63", "image_urls": [], "Book": "computerorganization" }, { "section": "2.7.3 Error-Detection Codes ", "content": "Errors occur in digital data transmission as a result of external noise introduced by the transmission medium. For example, suppose a digital system uses the BCD code for data representation and an error occurs in the LSB position of the data 0010, resulting in 0011. Since 0011 is also a valid code word, the receiving device assumes the data contain no error. To guard against such erroneous interpretations of data, several error-detection and correction schemes have been devised. As the names imply, an error-detection scheme simply detects that an error has occurred, whereas an error-correction scheme also corrects the errors. We describe here a simple error-detection scheme using parity checking; for more elaborate schemes, such as the cyclic redundancy check (CRC), checksums, and XMODEM protocols, refer to the books by Kohavi (1970) and Ercegovac and Lang (1985) listed in the reference section at the end of the chapter. In the simple parity-check error-detection scheme, an extra bit, known as the parity bit, is included in each code word. The parity bit is set to 1 or 0, depending on the number of 1s in the original code word, to make the total number of 1s even (in an even-parity scheme) or odd (in an odd-parity scheme). The sending device sets the parity bit, and the receiving device checks the incoming data for parity: in a system using an even-parity scheme, an error is detected whenever the receiver detects an odd number of 1s in the incoming data, and vice versa. A parity bit can be included in the code words of any of the codes described earlier. Table 2.10 shows two error-detection codes. The first code is an even-parity BCD, in which the fifth bit is a parity bit set for even parity. The second code is known as the 2-out-of-5 code: 5 bits are used to represent each decimal digit, and exactly two of the 5 bits are 1s, so of the 32 combinations possible using 5 bits, only 10 are utilized to form the code; it is also an even-parity code. If an error occurs during transmission, the even parity is lost and is detected by the receiver. In a simple parity scheme, if 2 bits are in error, the even parity is maintained and we are not able to detect that an error has occurred. When the occurrence of more than one error is anticipated, more elaborate parity-checking schemes using more than one parity bit must be used. In fact, it is possible to devise codes that not only detect errors but also correct them, by including enough parity bits. For example, in a block of words being transmitted, each word might include a parity bit, and the last word of the block might be a parity word in which each bit checks for errors in the corresponding bit position of all the words in the block (see Problem 2.20). This scheme is usually referred to as cross-parity checking, or vertical and horizontal redundancy checking; it both detects and corrects single errors. Hamming (1950) invented a single-error-detecting-and-correcting scheme using a distance-3 code, in which each code word differs from every other code word in at least 3 bit positions.", "url": "RV32ISPEC.pdf#segment64", "timestamp": "2023-10-17 20:15:17", "segment": "segment64", "image_urls": [], "Book": "computerorganization" }, { "section": "2.7.4 Alphanumeric Codes ", "content": "Alphabetic characters, numeric digits, and special characters are all used to represent the information processed by digital systems, so an alphanumeric code is needed. Two popular alphanumeric codes are the Extended BCD Interchange Code (EBCDIC) and the American Standard Code for Information Interchange (ASCII); Table 2.11 shows these codes. ASCII is the more commonly used; EBCDIC was used primarily in large IBM computer systems. EBCDIC is an 8-bit code, and ASCII is a 7-bit code commonly extended to 8 bits, so each can represent up to 256 elements. In general, not every component of a computer system need use the same code for data representation. For example, in the simple calculator system shown in Figure 2.1, the keyboard produces ASCII-coded characters corresponding to the keystrokes; the ASCII-coded numeric data are converted by the processor into BCD for processing, and the processed data are reconverted into ASCII for the printer. Such code conversion is common, particularly when devices from various vendors are integrated to form a system.", "url": "RV32ISPEC.pdf#segment65", "timestamp": "2023-10-17 20:15:17", "segment": "segment65", "image_urls": [], "Book": "computerorganization" }, { "section": "2.8 DATA STORAGE AND REGISTER TRANSFER ", "content": "Let us now examine the operation of a digital computer in more detail, to better understand the data representation and manipulation schemes.
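The even-parity scheme of Section 2.7.3 can be sketched as follows (the function names are illustrative, not from the text):

```python
def even_parity_bit(word: str) -> str:
    """Parity bit that makes the total number of 1s in word + bit even."""
    return str(word.count("1") % 2)

def check_even_parity(word_with_parity: str) -> bool:
    return word_with_parity.count("1") % 2 == 0

coded = "0010" + even_parity_bit("0010")     # BCD 2 plus its parity bit
assert check_even_parity(coded)
# A single-bit error breaks even parity and is detected...
assert not check_even_parity("0011" + coded[4])
# ...but a double-bit error preserves even parity and goes undetected.
assert check_even_parity("0111" + coded[4])
print(coded)  # 00101
```

The last assertion illustrates exactly the limitation noted in the text: a simple parity bit cannot detect an even number of bit errors.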
Binary information is stored in digital systems in devices called flip-flops, discussed in Chapter 4. We will call such storage devices storage cells; each cell can store 1 bit of data. The content (state) of a cell is changed from 1 to 0 or from 0 to 1 by signals on its inputs, whereas the content of a cell is determined by sensing its outputs. A collection of storage cells is called a register; an n-bit register can thus store n bits of data. The number of bits in the unit of data most often manipulated by a system determines the word size of the system: a 16-bit computer system manipulates 16-bit numbers most often, so its word size is 16 bits. Computer systems with 8-, 16-, 32-, and 64-bit words are common. An 8-bit unit of data is commonly called a byte, and a 4-bit unit a nibble. Once the word size of a machine is defined, half-word and double-word designations are also used, to designate data with half and twice the number of bits in a word, respectively. A digital system manipulates its data through a set of register-transfer operations. A register-transfer operation moves the contents of one register (the source) into another register (the destination); the source register's contents remain unchanged by the transfer, whereas the contents of the destination register are replaced by those of the source register. Let us examine the set of register-transfer operations needed to bring about the addition of two numbers in a digital computer. The memory unit of a digital computer is composed of several words, each of which can be viewed as a register; some memory words contain data and others contain instructions for manipulating the data, and each memory word has an address associated with it. In Figure 2.2 it is assumed that the word size is 16 bits and the memory has 30 words. A program is stored in memory locations 0 through 10, and two data words are at addresses 20 and 30. Word 0 contains the instruction ADD A, B, where the operands A and B are stored in locations 20 and 30, respectively. The instruction is coded in binary, with the 6 MSBs representing the ADD operation and the remaining 10 bits used for the addresses of the two operands (5 bits per operand address). The control unit fetches the instruction ADD A, B from memory word 0; the fetch operation is itself a register transfer. The instruction is analyzed by the control unit to decode the operation called for and the operand addresses required, and the control unit then sends a series of control signals to the processing unit to bring about the set of register transfers needed. Assume the processing unit has two operand registers, R1 and R2, and a results register, R3; an adder adds the contents of R1 and R2 and stores the result, along with the carry, in R3.
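A toy model of the datapath just described, with memory words acting as registers, operand registers R1 and R2, and result register R3, can sketch the transfers involved in executing ADD A, B (the operand values 567 and 53 are sample values, not from the figure):

```python
# Toy register-transfer model: memory words at addresses 20 and 30 hold the
# operands A and B; R1/R2 receive the operands and R3 receives the sum.
memory = {20: 567, 30: 53}            # operand A at 20, operand B at 30 (sample values)
regs = {"R1": 0, "R2": 0, "R3": 0}

regs["R1"] = memory[20]               # transfer operand A into R1
regs["R2"] = memory[30]               # transfer operand B into R2
regs["R3"] = regs["R1"] + regs["R2"]  # adder: R3 <- R1 + R2
memory[30] = regs["R3"]               # transfer the result back to memory word 30

print(memory[30])  # 620
```

Note that each source (memory word 20, then R3) is left unchanged by its transfer, exactly as a register-transfer operation requires.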
To execute the ADD A, B instruction, the control unit (1) transfers the contents of memory word 20 (operand A) into R1, (2) transfers the contents of memory word 30 (operand B) into R2, (3) commands the adder to add R1 and R2 and send the result to R3, and (4) transfers the contents of R3 into memory word 30 (operand B). This description is obviously much simplified compared with the actual operations that take place in a digital computer; nevertheless, it illustrates the data-flow capabilities needed in a digital system. As seen in this example, binary data stored in a register can be interpreted in various ways: the context in which the register's content is examined determines the meaning of the bit pattern stored in it. For example, if a register that is part of the control unit is accessed during the instruction-fetch phase, its contents are interpreted as one of the instructions; if, however, the contents of a register in the processing unit are accessed during the data-fetch phase of instruction execution, they are interpreted as data. Figure 2.3 shows a 16-bit register and three interpretations of its contents. Note that the contents of this register have no meaning if considered as BCD digits, since 1010 is not a valid BCD digit. The data-transfer and manipulative capabilities of a digital system are brought about by digital logic circuits; in later chapters, we will discuss the analysis and design of the logic circuits and memory subsystems that form the components of a digital system.", "url": "RV32ISPEC.pdf#segment66", "timestamp": "2023-10-17 20:15:17", "segment": "segment66", "image_urls": [], "Book": "computerorganization" }, { "section": "2.9 REPRESENTATION OF NUMBERS, ARRAYS, AND RECORDS ", "content": "Let us assume a computer system that uses 16-bit words as its unit of data; the machine most often works with data 16 bits long. Let us also assume that each byte of the 16-bit word is accessible as an independent unit, which implies that we can access the 2 bytes of a word independently (byte-access mode) or the whole word as one unit (word-access mode). Consider the representation of 53₁₀: since 53 is less than 2⁸ − 1 = 255, we can represent 53 in 8 bits, as shown, and the upper half of the word (byte 1) contains all zeros. Now consider the representation of 300₁₀, which requires the full 16-bit word. In the representation shown, the least significant part of the number occupies byte 0 and the more significant part occupies byte 1; this is called the little-endian representation. If we swap the contents of the two bytes, we get the big-endian representation. Both representations are used in practice. In accessing the number, it is important to remember which endian representation is used, in order to interpret the results of the access as the appropriate value of the number represented.", "url": "RV32ISPEC.pdf#segment67", "timestamp": "2023-10-17 20:15:17", "segment": "segment67", "image_urls": [], "Book": "computerorganization" }, { "section": "2.9.1 BCD Numbers ", "content": "In financial applications, it is common to represent data in BCD rather than binary, to retain accuracy in calculations. For instance, 53 can be represented by two BCD digits, each 4 bits long: 5 = 0101 and 3 = 0011. In a 16-bit word representation, we can pack four BCD digits into the word.", "url": "RV32ISPEC.pdf#segment68", "timestamp": "2023-10-17 20:15:17", "segment": "segment68", "image_urls": [], "Book": "computerorganization" }, { "section": "2.9.2 Floating-Point Numbers ", "content": "Using the IEEE standard, 53 is represented as shown.", "url": "RV32ISPEC.pdf#segment69", "timestamp": "2023-10-17 20:15:17", "segment": "segment69", "image_urls": [], "Book": "computerorganization" }, { "section": "2.9.3 Representation of Strings ", "content": "Strings are arrays of characters; each character is represented by a byte, in either ASCII or EBCDIC format. The representation of a string in a computer with a 32-bit machine architecture is shown. The end of the string is usually denoted by a special character called null, represented by 0. Alternatively, an integer value indicating the number of characters in the string can be made part of the string representation, instead of the null terminating character.", "url": "RV32ISPEC.pdf#segment70", "timestamp": "2023-10-17 20:15:17", "segment": "segment70", "image_urls": [], "Book": "computerorganization" }, { "section": "2.9.4 Arrays ", "content": "Consider the array of integers (10, 25, 4, 76, 8), where the number of elements is n = 5; on an 8-bit machine it can be represented as shown. Now consider a two-dimensional array with rows (4, 6) and (8, 9). There are two representations for such an array. In the row-wise representation, the elements of consecutive rows of the array form a one-dimensional array that is stored in consecutive locations of memory, as shown: for this matrix, the order is 4, 6, 8, 9. In the column-wise representation, the elements of consecutive columns of the array form the one-dimensional array stored in consecutive locations of memory: 4, 8, 6, 9.", "url": "RV32ISPEC.pdf#segment71", "timestamp": "2023-10-17 20:15:17", "segment": "segment71", "image_urls": [], "Book":
"computerorganization" }, { "section": "2.9.5 Records ", "content": "A record typically contains one or more fields of data; the fields may be of different types, and the number of bits needed to represent each field may vary. Consider, for instance, records on the students of a university, containing the fields ID, name, phone number, and GPA. The ID field could be treated as either numeric or alphanumeric, depending on the processing performed on it; the name field is a string and might be partitioned into subfields for first, last, and middle names; the phone number could be treated as numeric or alphanumeric; and the GPA is a floating-point number. A detailed characterization of the record, indicating the length and type of data of each field, needs to be maintained in order to access the data appropriately.", "url": "RV32ISPEC.pdf#segment72", "timestamp": "2023-10-17 20:15:17", "segment": "segment72", "image_urls": [], "Book": "computerorganization" }, { "section": "2.10 SUMMARY ", "content": "The topics covered in this chapter form the basis of the data representation schemes used in digital systems. We have presented the most common number systems, conversion procedures between them, and the basic arithmetic schemes for each number system. Popular representation schemes were examined, along with the various binary codes encountered in digital systems. We have also included a brief introduction to the operation of a digital computer, based on the register-transfer concept. Floating-point representation and its associated accuracy problems have been introduced.", "url": "RV32ISPEC.pdf#segment73", "timestamp": "2023-10-17 20:15:17", "segment": "segment73", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 3 ", "content": "Combinational Logic. The hardware components of a computer system are built of several logic circuits. A logic circuit is an interconnection of several primitive logic devices that performs a desired function; it has one or more inputs and one or more outputs. This chapter introduces some of the logic devices used in building one type of logic circuit, called a combinational circuit. The outputs of a combinational circuit are functions of its inputs only: the circuit's outputs at any time are a function of the inputs at that particular time, so the circuit has no memory. A circuit with memory is called a sequential circuit: its outputs at any time are a function not only of the inputs at that time but also of the state of the circuit at that time. The state of the circuit at any time is dependent on what has happened to the circuit prior to that time; hence, the state is also a function of the previous inputs and states. This chapter is an introduction to the analysis and design of combinational circuits; details of sequential circuit analysis and design are given in Chapter 4.", "url": "RV32ISPEC.pdf#segment74", "timestamp": "2023-10-17 20:15:17", "segment": "segment74", "image_urls": [], "Book": "computerorganization" }, { "section": "3.1 BASIC OPERATIONS AND TERMINOLOGY ", "content": "Example 3-1: Consider the addition of two bits: 0 plus 0 is 0; 0 plus 1 is 1; 1 plus 0 is 1; and 1 plus 1 is 10 (i.e., a sum of 0 and a carry of 1). The addition of two single-bit numbers thus produces a sum bit and a carry bit. The operations can be arranged in the following table, which separates the two resulting bits SUM and CARRY. The two operands A and B can each take the value 0 or 1. The first two columns show the four combinations of values possible for the two operands A and B, and the last two columns represent the sum of the two operands, represented by the SUM and CARRY bits. Note that CARRY is 1 only when A is 1 and B is 1, whereas SUM is 1 when one of the following two conditions is satisfied: A is 0 and B is 1, or A is 1 and B is 0. Let us say that A′ (pronounced 'not A') represents the opposite condition of A: if A is 0, then A′ is 1, and vice versa; similarly, B′ represents the opposite condition of B. Then we can say that SUM is 1 if either A′ is 1 and B is 1, or A is 1 and B′ is 1. In a shorthand notation, this is written SUM = A′ · B + A · B′. (3.1) The right-hand side of Equation 3.1 is a Boolean expression; A and B are Boolean variables, and the possible values for the Boolean variables, 0 and 1, could also be taken as false and true. An expression is formed by combining variables with operations. Since the value of SUM depends on the values of A and B, SUM is a function of A and B, as is CARRY. Example 3-2: Consider the following statement: subtract when an add instruction is given and the signs are different, or when a subtract instruction is given and the signs are alike. Let S represent the subtract action, A represent the condition 'an add instruction is given,' B represent the condition 'the signs are different,' and C represent the condition 'a subtract instruction is given.' The statement can then be expressed as S = A · B + C · B′. The '·' is usually removed from expressions when there is no ambiguity; thus the function can be written S = AB + CB′.", "url": "RV32ISPEC.pdf#segment75", "timestamp": "2023-10-17 20:15:18", "segment": "segment75", "image_urls": [], "Book": "computerorganization" }, { "section": "3.1.1 Evaluation of Expressions ", 
"content": "Knowing the values of the component variables of an expression, we can find the value of the expression. The hierarchy of operations is important in this evaluation: in the absence of parentheses, always perform NOT operations first, followed by AND, and lastly OR. If there are parentheses, the expressions within parentheses are evaluated first (observing the hierarchy of operations), and then the remaining expression is evaluated. The following examples illustrate expression evaluation; the sequential order in which the operations are performed is shown by the numbers below each operation.", "url": "RV32ISPEC.pdf#segment76", "timestamp": "2023-10-17 20:15:18", "segment": "segment76", "image_urls": [], "Book": "computerorganization" }, { "section": "3.1.2 Truth Tables ", "content": "Figure 3.1 shows the truth tables for the three primitive operations. A truth table indicates the value of a function for each possible combination of the values of its component variables; there is one column in the truth table corresponding to each variable and one column for the value of the function. Since each variable can take either of the two values 0 and 1, the number of combinations of values multiplies as the number of component variables increases: for instance, two variables have 2 × 2 = 4 combinations of values, and hence there are four rows in the corresponding truth table. In general, there are 2^n rows in the truth table for a function of n component variables. When the expression on the right-hand side of a function is complex, the truth table can be developed in several steps. The following example illustrates the development of a truth table. Example 3-6: Draw the truth table for Z = AB′ + A′C + A′B′C. There are three component variables (A, B, and C); hence, there are 2³ = 8 combinations of values of A, B, and C. The eight combinations are shown on the left-hand side of the truth table of Figure 3.2. The combinations are generated by changing the value of C from 0 to 1 and 1 to 0 as we move from row to row, changing the value of B every two (i.e., 2¹) rows, and changing the value of A every four (i.e., 2²) rows. The combinations are thus in numerically increasing order in the binary number system, from 000₂ = 0₁₀ through 111₂ = 7₁₀ (the subscripts denote the base of the number system). In general, for n component variables, the 2^n combinations range in numerical value from 0 to 2^n − 1. To evaluate Z in the example function: knowing the values of A, B, and C in each row of the truth table of Figure 3.2, the values for A′ and B′ are first generated; the values for AB′, A′C, and A′B′C are then generated by ANDing the values in the appropriate columns of each row; and finally, the value of Z is found by ORing the values in the last three columns of each row.
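The truth table of Example 3-6 can be generated mechanically; this sketch evaluates Z = AB′ + A′C + A′B′C for all eight rows in the same increasing binary order (NOT is computed as 1 − x, AND as multiplication, OR as max):

```python
from itertools import product

def Z(a: int, b: int, c: int) -> int:
    """Z = AB' + A'C + A'B'C from Example 3-6."""
    na, nb = 1 - a, 1 - b                      # NOT
    return max(a * nb, na * c, na * nb * c)    # OR of the three AND terms

for a, b, c in product((0, 1), repeat=3):      # rows 000 through 111
    print(a, b, c, Z(a, b, c))
```

The resulting Z column is 0, 1, 0, 1, 1, 1, 0, 0 for rows 000 through 111.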
Note that evaluating A′B′C corresponds to ANDing the A′ and B′ values, followed by ANDing that result with the value of C; similarly, where more than two values must be ORed, they are ORed two at a time. The columns corresponding to A′, B′, AB′, A′C, and A′B′C are usually not shown in the final truth table.", "url": "RV32ISPEC.pdf#segment77", "timestamp": "2023-10-17 20:15:18", "segment": "segment77", "image_urls": [], "Book": "computerorganization" }, { "section": "3.1.3 Functions and Their Representation ", "content": "The two constants of the logic alphabet are 0 and 1 (or false and true), and a variable (A, B, …, X, …) can take the value of either 1 or 0 at any time. Consider a three-variable function Q given by a truth table. From the table it can be seen that Q is 1 when A is 0, B is 0, and C is 1; that is, Q = 1 if A′ = 1 and B′ = 1 and C = 1, i.e., if A′B′C = 1. Similarly, corresponding to the other three 1s in the Q column of the table, Q = 1 if A′BC = 1, if AB′C′ = 1, or if AB′C = 1. This argument leads to the following representation: Q = A′B′C + A′BC + AB′C′ + AB′C, the normal sum-of-products (SOP) form. In general, to derive the SOP form from a truth table, use the following procedure: 1. Generate a product term corresponding to each row in which the value of the function is 1. 2. In each product term, include the individual variables uncomplemented if the value of the variable in that row is 1 and complemented if the value is 0. The product-of-sums (POS) form of the function can be derived from the truth table by a similar procedure: 1. Generate a sum term corresponding to each row in which the value of the function is 0. 2. In each sum term, include the individual variables complemented if the value of the variable in that row is 1 and uncomplemented if the value is 0. For the example function, Q = (A + B + C)(A + B′ + C)(A′ + B′ + C)(A′ + B′ + C′) is the POS form. As Example 3-13 shows, the SOP form is easy and natural to work with compared with the POS form; the POS form tends to be confusing to the beginner, since it does not correspond to the algebraic notation we are used to. Example 3-14 shows the derivation of the SOP and POS forms of representation for another three-variable function, P.", "url": "RV32ISPEC.pdf#segment78", "timestamp": "2023-10-17 20:15:18", "segment": "segment78", "image_urls": [], "Book": "computerorganization" }, { "section": "3.1.4 Canonical Forms ", "content": "The SOP and POS forms of functions derived from a truth table by the above procedures are canonical forms: in the canonical SOP form, each component variable appears, either complemented or uncomplemented, in each product term. Example 3-15: If Q is a function of the variables A, B, and C, then Q = A′B′C + AB′C′ + A′B′C′ is in canonical SOP form, whereas Q = A′B + AB′C + A′C′ is not, because the first and last product terms do not have all three variables present. The canonical POS form is defined similarly.
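The two-step SOP procedure above can be sketched directly: scan the truth-table rows where the function is 1 and emit one product term per row (the helper name sop_terms is illustrative, not from the text):

```python
from itertools import product

def sop_terms(f, names=("A", "B", "C")):
    """Canonical SOP: one product term per truth-table row where f is 1;
    a variable is uncomplemented for value 1 and complemented (') for value 0."""
    terms = []
    for row in product((0, 1), repeat=len(names)):
        if f(*row):
            terms.append("".join(n if v else n + "'" for n, v in zip(names, row)))
    return " + ".join(terms)

# The example function Q = A'B'C + A'BC + AB'C' + AB'C from the text:
def Q(a, b, c):
    return (1 - a) * (1 - b) * c or (1 - a) * b * c or a * (1 - b) * (1 - c) or a * (1 - b) * c

print(sop_terms(Q))  # A'B'C + A'BC + AB'C' + AB'C
```

Swapping the test for rows where f is 0 (and flipping the complementation rule) gives the dual POS procedure.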
A canonical product term is also called a minterm, and a canonical sum term is called a maxterm; hence, functions can be represented in either sum-of-minterms or product-of-maxterms format. Example 3-16: The minterm list form is a compact representation of the canonical SOP form, and the maxterm list form is a compact representation of the canonical POS form. Knowing one form, the other can be derived, as shown in the following example. Example 3-17: Given Q(A, B, C, D) = Σm(0, 1, 7, 8, 10, 11, 12, 15): Q is a four-variable function; hence, there are 2⁴ = 16 combinations of input values, whose decimal values range from 0 to 15. There are eight minterms, and hence 16 − 8 = 8 maxterms: Q(A, B, C, D) = ΠM(2, 3, 4, 5, 6, 9, 13, 14). Note also that the complement of Q is represented by Q′(A, B, C, D) = Σm(2, 3, 4, 5, 6, 9, 13, 14) = ΠM(0, 1, 7, 8, 10, 11, 12, 15). Note that for an n-variable function, the number of minterms plus the number of maxterms is 2^n.", "url": "RV32ISPEC.pdf#segment79", "timestamp": "2023-10-17 20:15:18", "segment": "segment79", "image_urls": [], "Book": "computerorganization" }, { "section": "3.2 BOOLEAN ALGEBRA (SWITCHING ALGEBRA) ", "content": "In 1854, George Boole introduced a symbolic notation to deal with symbolic statements that take on one of two values, true or false. This symbolic notation was adopted by Claude Shannon to analyze logic functions and has since come to be known as Boolean algebra, or switching algebra. The definitions, theorems, and postulates of this algebra are described below. Definition: A Boolean algebra is a closed algebraic system containing a set K of two or more elements and two binary operators, + and ·: for every x and y in the set K, x + y belongs to K and x · y belongs to K. In addition, the following postulates must be satisfied. Definition: Two expressions are said to be equivalent if one can be replaced by the other. Definition: The dual of an expression is obtained by replacing in the expression each + with ·, each · with +, each 1 with 0, and each 0 with 1. The principle of duality states that if an equation is valid in a Boolean algebra, its dual is also valid. Note that part (b) of each of the postulates is the dual of the corresponding part (a), and vice versa. Example 3-18 shows an equation and the dual obtained from it by these replacements. Theorems: The following theorems are useful in manipulating Boolean functions; they are traditionally used in converting Boolean functions from one form to another, deriving canonical forms, and minimizing (reducing the complexity of) Boolean functions. Each theorem can be proven by drawing the truth tables of its two sides and seeing that the left-hand-side values equal the right-hand-side values for every possible combination of component variable values; algebraic proofs of the theorems can be found in the references listed at the end of the chapter.", "url": "RV32ISPEC.pdf#segment80", "timestamp": "2023-10-17 20:15:18", "segment": "segment80", "image_urls": [], "Book": "computerorganization" }, { "section": "3.3 MINIMIZATION OF BOOLEAN FUNCTIONS ", "content": "The theorems and postulates of Boolean algebra can be used to simplify (minimize) Boolean functions; a minimized function yields a less complex circuit than the nonminimized function. In general, the complexity of a gate increases as the number of its inputs increases; hence, a reduction in the number of literals in a Boolean function reduces the complexity of the complete circuit. In designing integrated circuits (ICs), other considerations arise, such as the area taken by the circuit on the silicon wafer used to fabricate the IC and the regularity of the structure of the circuit from a fabrication point of view. For example, a programmable logic array (PLA) implementation (discussed in Chapter 4) of a circuit yields a more regular structure than random logic (i.e., an implementation using gates). In a PLA implementation, minimizing the number of literals in the function may not yield a less complex PLA; however, if some product terms of the SOP form can be completely eliminated from the function, the PLA size can be reduced. Minimization using the theorems and postulates is tedious. Two popular minimization methods are (1) using Karnaugh maps (K-maps) and (2) the Quine–McCluskey procedure; the two methods are described in this section.", "url": "RV32ISPEC.pdf#segment81", "timestamp": "2023-10-18 20:15:18", "segment": "segment81", "image_urls": [], "Book": "computerorganization" }, { "section": "3.3.1 Venn Diagrams ", "content": "Truth tables and canonical forms were used earlier in this chapter to represent Boolean functions. Another method of representing a function uses Venn diagrams, in which the variables are represented by circles within a rectangle representing the universe. The universe corresponds to 1 (everything), and 0 corresponds to the null set (nothing). Figures 3.3 and 3.4 show typical logic operations using Venn diagrams. In these diagrams, an operand is identified with the area belonging to the particular variable; the OR operation is the union of the two areas (i.e., the area belonging to either of the corresponding two operands), and the AND operation is the intersection of the areas (i.e., the area common to both of the corresponding two operands). The unshaded area in each diagram is the one where the expression has the value 0. Note that all the combinations shown in truth tables can
also be shown on Venn diagrams; Figure 3.5 shows the combinations corresponding to two three-variable functions.", "url": "RV32ISPEC.pdf#segment82", "timestamp": "2023-10-17 20:15:18", "segment": "segment82", "image_urls": [], "Book": "computerorganization" }, { "section": "3.3.2 Karnaugh Maps ", "content": "Karnaugh maps (K-maps) are modified Venn diagrams. Consider the two-variable Venn diagram shown in Figure 3.6(a); the four combinations of the two variables are shown, and the four areas are identified with the four minterms in Figure 3.6(b). Figure 3.6(c) shows the Venn diagram rearranged so that the four areas are equal. Note also that the two right-hand blocks of this diagram correspond to m2 and m3, and hence to A, while the two bottom blocks correspond to m1 and m3, and hence to B. Figure 3.6(d) marks the areas A, A′, B, and B′ explicitly, and Figure 3.6(e) is the usual K-map for two variables, with the values of variable A distributed along the top and those of B along the side. Figure 3.7 shows a three-variable K-map: since there are 2³ = 8 combinations of three variables, we need eight blocks. The blocks are arranged such that the two right-hand columns correspond to A, the two middle columns correspond to B, and the bottom row corresponds to C. Each block corresponds to a minterm; for example, the block named m6 corresponds to the area where A = 1, B = 1, and C = 0, i.e., to ABC′ (110 is the minterm code for m6). The first two variables, A and B, are represented by the four combinations along the top, and the third variable, C, along the side, as in Figure 3.7(b). Note that the area where A = 1 consists of blocks 4, 5, 6, and 7, irrespective of the values of B and C; similarly, B = 1 in blocks 2, 3, 6, and 7, and C = 1 in blocks 1, 3, 5, and 7. With the variable values listed along the top and the side, it is easy to identify the minterm corresponding to each block: for example, the left-hand top-corner block corresponds to A = 0, B = 0, C = 0, i.e., ABC = 000, which is m0. A four-variable Karnaugh map is shown in Figure 3.8; the areas and minterms are identified similarly. Representation of functions on K-maps: Just as we represented functions on Venn diagrams by shading areas, on a K-map each block is given the value of the function f (0 or 1) corresponding to that block: each block corresponding to a minterm of f has the value 1, and the remaining blocks have the value 0, as shown in Examples 3-19 and 3-20. Example 3-19: For f = Σm(0, 1, 4), place a 1 in the block corresponding to each minterm on the four-variable Karnaugh map of Figure 3.8. Example 3-20: For f(A, B, C, D) = ΠM(1, 4, 9, 10, 14), place a 0 in the block corresponding to each maxterm.
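Plotting a minterm list, as in Example 3-19, can be sketched as a grid computation. This assumes a four-variable layout like the text's Figure 3.8, with the first two variables indexing Gray-ordered columns and the last two indexing Gray-ordered rows (the exact labeling of Figure 3.8 is an assumption):

```python
GRAY = (0b00, 0b01, 0b11, 0b10)    # Gray order: 00, 01, 11, 10

def kmap(minterms):
    """4x4 K-map grid of 1s/0s for a four-variable minterm set."""
    grid = []
    for lo in GRAY:                 # rows: Gray-ordered values of the low variable pair
        row = []
        for hi in GRAY:             # columns: Gray-ordered values of the high pair
            m = (hi << 2) | lo      # minterm number from the four variable values
            row.append(1 if m in minterms else 0)
        grid.append(row)
    return grid

for row in kmap({0, 1, 4}):         # Example 3-19: f = sum m(0, 1, 4)
    print(row)
```

With this ordering, physically adjacent blocks (including wrap-around) differ in exactly one variable, which is the logical-adjacency property the next section exploits for grouping.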
are not shown explicitly on the K-map; only the 1s are shown, and a blank block corresponds to a 0. Plotting the sum of products form: if the function is given in SOP form, the equivalent minterm list can be derived by the method described earlier in this chapter and the minterms plotted on the K-map. An alternative, faster method is to intersect the areas on the K-map corresponding to each product term, as illustrated in Example 3.21. Example 3.21: F(X, Y, Z) = XY' + Y'Z'. X corresponds to blocks 4, 5, 6, and 7 (the blocks where X = 1); Y' corresponds to blocks 0, 1, 4, and 5 (the blocks where Y = 0); hence XY' corresponds to the intersection of X and Y', blocks 4 and 5. Similarly, Y'Z' corresponds to blocks 0 and 4. Therefore, the K-map has 1s in the union of the two sets of blocks: 0, 4, and 5. Alternatively, XY' corresponds to the area where X = 1 and Y = 0 (the last column), and Y'Z' corresponds to the area where Y = 0 and Z = 0 (blocks 0 and 4); the union of the two corresponds to blocks 0, 4, and 5. Note also that on a three-variable K-map, a single-variable product term (two variables missing) is represented by the four 1s corresponding to the four minterms the term generates. In general, a product term with n missing variables is represented by 2^n 1s on the K-map, as the example that follows shows. Plotting the product of sums form: the procedure for plotting a POS expression is similar to that for the SOP form, except that 0s are used instead of 1s. Minimization: note that the combination of variable values represented by a block on a K-map differs from that of an adjacent block in only one variable; that variable is complemented in one block and true (uncomplemented) in the other. For example, consider blocks 2 and 3, corresponding to minterms m2 and m3, of the three-variable K-map: m2 corresponds to 010 (A'BC') and m3 corresponds to 011 (A'BC). The values of A and B remain the same, and only C differs between the adjacent blocks. This property, whereby two terms differ in only one variable, is called logical adjacency. On a K-map, physically adjacent blocks are also logically adjacent. In the three-variable K-map, block 2 is physically adjacent to blocks 0, 3, and 6; note that m2 is also logically adjacent to m0, m3, and m6. The adjacency property is used in the simplification of Boolean functions. Combining m8 and m12 (a combination shown by grouping the two 1s on the K-map) eliminates the variable B, since B changes value between the two blocks. Similarly, grouping m9 and m13 yields AC'D, and grouping m5 and m13 yields BC'D. The combined effect is equivalent to grouping the four 1s in the top-right corner of the K-map, as shown. Forming a group of two adjacent 1s eliminates one literal from the product term, and grouping four adjacent 1s eliminates two literals; in general, a group of 2^n adjacent 1s eliminates n literals. Hence, in simplifying a function it is advantageous to form the largest possible groups of 1s.
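The logical-adjacency rule just stated (two terms that differ in exactly one variable combine into a single term with that variable eliminated) can be sketched in Python; the string encoding with '-' marking an eliminated variable is my own convention, not the book's:

```python
def combine(t1, t2):
    """Combine two logically adjacent terms given as bit strings
    (e.g. m8 = '1000' and m12 = '1100').  If they differ in exactly
    one position, that variable is eliminated (marked '-');
    otherwise the terms are not adjacent and None is returned."""
    diffs = [i for i in range(len(t1)) if t1[i] != t2[i]]
    if len(diffs) != 1:
        return None
    i = diffs[0]
    return t1[:i] + "-" + t1[i + 1:]

print(combine("1000", "1100"))  # m8, m12 -> '1-00', i.e. AC'D'
print(combine("0101", "1101"))  # m5, m13 -> '-101', i.e. BC'D
print(combine("1000", "0011"))  # not adjacent -> None
```

Grouping 2^n adjacent 1s on a map corresponds to n successful applications of this rule.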
The number of 1s in a group must be a power of 2 (1, 2, 4, 8, etc.). Once the groups are formed, the product term corresponding to each group is derived by the following general rules: (1) eliminate from the product term containing all the variables of the function each variable that changes value within the group (move from block to block within the group and observe the change); (2) a variable whose value is 0 in all the blocks of the group appears complemented in the product term; (3) a variable whose value is 1 in all the blocks of the group appears uncomplemented in the product term. For the group of four 1s on the K-map above, the product term corresponding to the grouping is therefore AC'. To summarize the earlier observations, the following procedure simplifies a function: (1) form groups of adjacent 1s; (2) make each group as large as possible (the number of 1s in a group must be a power of 2); (3) cover each 1 on the K-map at least once; the same 1 can be included in several groups if necessary; (4) select the least number of groups that cover all the 1s on the map; (5) translate each group into a product term; (6) OR the product terms to obtain the minimized function. To recognize adjacencies, the right-hand edge of the three-variable map is considered to be the same as its left-hand edge, thus making block 0 adjacent to block 4 and block 1 adjacent to block 5. Similarly, the top and bottom edges of the four-variable map are brought together to form a cylinder, and the two ends of the cylinder are brought together to form a toroid (like a donut). The following examples illustrate groupings on K-maps and the corresponding simplified functions. Example 3.25: (1) the groupings marked are essential: m12 is covered only by AB, m0 and m8 only by B'D', and m5 only by BD. (2) Once these three groups are chosen, the only minterms left uncovered are m6 and m14, which can be covered by either BC or CD'. Hence there are two simplified forms, and either one is satisfactory, since each contains the same number of literals. Simplified functions in POS form: to obtain the simplified function in POS form, (1) plot the function F on a K-map; (2) derive the K-map of F' by changing each 1 to 0 and each 0 to 1; (3) simplify the K-map of F' to obtain F' in SOP form; (4) use DeMorgan's laws on F' to obtain F. Minimization using don't cares: in designing logic circuits, we may know that certain input combinations cannot occur, so the corresponding outputs of the circuit can be ignored; it is also possible that we do not care what the circuit output would be, even if certain input combinations did occur. Both conditions are termed don't cares. Example 3.31: consider a circuit that converts a BCD code input into the corresponding excess-3 code. The BCD-to-excess-3 decoder expects as inputs only the combinations corresponding to decimals 0
through 9; the other six input combinations (10 through 15) never occur, and hence the outputs corresponding to those six inputs are don't cares. Don't cares are indicated on the K-map and can be treated as either 1s or 0s, as necessary. It is not necessary to cover all the don't cares while grouping; a don't care that is not covered is treated as a 0. The following maps illustrate the use of don't cares in simplifying the output functions of the decoder. K-maps are useful for functions with up to four or five variables. Figure 3.9 shows a five-variable K-map; the two parts of the map are treated as two planes, one superimposed on the other, and blocks in the same position on the two planes are also logically adjacent. Since the number of blocks doubles with each additional variable, minimization using K-maps becomes complex for functions of more than five variables. The Quine-McCluskey procedure described in the next section is useful in such cases.", "url": "RV32ISPEC.pdf#segment83", "timestamp": "2023-10-17 20:15:19", "segment": "segment83", "image_urls": [], "Book": "computerorganization" }, { "section": "3.3.3 Quine\u2013McCluskey Procedure ", "content": "The Quine-McCluskey procedure also uses the logical adjacency property to reduce a Boolean function. Two minterms are logically adjacent if they differ in only one position, where one of the variables is in uncomplemented form in one minterm and complemented in the other; that variable can be eliminated by combining the two minterms. The Quine-McCluskey procedure compares each minterm with all the others and combines them if possible. The procedure uses the following steps: (1) classify the minterms (and don't cares) of the function into groups such that each term in a group contains the same number of 1s in its binary representation; (2) arrange the groups formed in step 1 in increasing order of the number of 1s, and let the number of groups be n; (3) compare each minterm in group i (i = 1 to n - 1) with each minterm in group i + 1; if the two terms are adjacent, form the combined term, in which the eliminated variable is represented by '-'; (4) repeat the matching operation of step 3 on the combined terms until no further combinations can be done; each combined term in the final list is called a prime implicant (PI), a product term that cannot be combined with others to yield a term with fewer literals; (5) construct a prime implicant chart with one column for each minterm (don't cares are not listed) and one row for each PI, where an x at a row-column intersection indicates that the prime implicant corresponding to the row covers the minterm corresponding to the column; (6) find the essential prime implicants, i.e., the prime implicants that cover at least one minterm not covered by any other PI; (7) from the remaining prime implicants, select the minimum number needed to
cover the minterms not covered by the essential PIs; (8) the set of prime implicants thus selected forms the minimum function. The procedure is illustrated by Example 3.32. The Quine-McCluskey procedure can be programmed on a computer and is efficient for functions with a large number of variables. Several other techniques to simplify Boolean functions have been devised; the interested reader is referred to the books listed in the references. The automation of Boolean function minimization was an active research area in the 1970s; advances in IC technology have contributed to a decline of interest in the minimization of Boolean functions, since minimizing the number of ICs and making the interconnections efficient are of more significance than saving gates in the present-day design environment.", "url": "RV32ISPEC.pdf#segment84", "timestamp": "2023-10-17 20:15:19", "segment": "segment84", "image_urls": [], "Book": "computerorganization" }, { "section": "3.4 PRIMITIVE HARDWARE BLOCKS ", "content": "A logic circuit is the physical implementation of a Boolean function. The primitive Boolean operations are implemented by electronic components known as gates, and the Boolean constants 0 and 1 are implemented as two unique voltage (or current) levels. A gate receives these logic values on its inputs and produces on its output a logic value that is a function of its inputs. A gate has one or more inputs and one output, and the operation of a gate can be described by a truth table. The truth tables and standard symbols used to represent the three primitive gates are shown in Figure 3.10. The NOT gate always has one input and one output; two inputs are shown for the other gates in Figure 3.10 for convenience. The maximum number of inputs allowed on a gate is limited by the electronic technology used to build the gate; the number of inputs of a gate is termed its fan-in. Assuming no restriction on the fan-in, a four-input gate is shown below in Figure 3.10. Figure 3.10 also shows three other popular gates; the utility of these gates is discussed later in this chapter.", "url": "RV32ISPEC.pdf#segment85", "timestamp": "2023-10-17 20:15:19", "segment": "segment85", "image_urls": [], "Book": "computerorganization" }, { "section": "3.5 FUNCTIONAL ANALYSIS OF COMBINATIONAL CIRCUITS ", "content": "The functional analysis of a combinational circuit is the process of determining the relations between its outputs and its inputs.
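Steps 1 through 6 of the Quine-McCluskey procedure in Section 3.3.3 can be sketched in Python. This is a minimal, unoptimized illustration; the example function F(A, B, C) = Σm(0, 1, 4, 5, 6) is my own, not Example 3.32:

```python
from itertools import combinations

def covers(term, m, nbits):
    """True if a term such as '1-0' covers minterm m."""
    b = format(m, "0%db" % nbits)
    return all(t == "-" or t == c for t, c in zip(term, b))

def prime_implicants(minterms, nbits):
    """Steps 1-4: repeatedly combine pairs of terms that differ in
    exactly one bit; terms that never combine are prime implicants."""
    terms = {format(m, "0%db" % nbits) for m in minterms}
    pis = set()
    while terms:
        used, nxt = set(), set()
        for t1, t2 in combinations(sorted(terms), 2):
            d = [i for i in range(nbits) if t1[i] != t2[i]]
            if len(d) == 1 and "-" not in (t1[d[0]], t2[d[0]]):
                nxt.add(t1[:d[0]] + "-" + t1[d[0] + 1:])
                used.update((t1, t2))
        pis |= terms - used   # terms that combined with nothing
        terms = nxt
    return pis

def essential_pis(minterms, nbits, pis):
    """Step 6: PIs that are the sole cover of some minterm."""
    ess = set()
    for m in minterms:
        covering = [p for p in pis if covers(p, m, nbits)]
        if len(covering) == 1:
            ess.add(covering[0])
    return ess

pis = prime_implicants([0, 1, 4, 5, 6], 3)
print(sorted(pis))                              # '-0-' (B') and '1-0' (AC')
print(sorted(essential_pis([0, 1, 4, 5, 6], 3, pis)))
```

Both prime implicants here turn out to be essential, giving the minimum function F = B' + AC'.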
These relations can be expressed either as a set of Boolean functions, one per output, or as a truth table for the circuit. Functional analysis is usually performed to verify the stated function of a combinational circuit; if the function of the circuit is not known, the analysis determines it. Two other types of analysis commonly performed on logic circuits are loading and timing analyses. We discuss functional analysis in this section and describe the other types of analysis in Section 3.9. Consider a combinational circuit with n input variables, as shown in Figure 3.11(a). Since there are n inputs, there are 2^n combinations of input values, and for each input combination there is a unique combination of output values. The truth table for the circuit thus has 2^n rows and one column per output, as shown in Figure 3.11(b), and one Boolean function is needed to describe each output of the circuit. We demonstrate the analysis procedure with the following example. Example 3.33: consider the circuit shown in Figure 3.12, with the three input variables X, Y, and Z and the two outputs P and Q. In order to derive P and Q as functions of X, Y, and Z, we trace the signals from the inputs to the outputs. To facilitate the tracing, we have labeled the outputs of the gates in the circuit with the arbitrary symbols R1, R2, R3, and R4. Note that R1 and R2 are functions of the input variables only, R3 and R4 are functions of R1, R2, and the input variables, P is a function of R2 and R3, and Q is a function of P and R4. Tracing through the circuit, we note the function realized at each gate output and transform the function Q into SOP form using the theorems and postulates of Boolean algebra, as shown later; the theorems and postulates used are identified along with their numbers, and P is derived similarly. Thus Figure", "url": "RV32ISPEC.pdf#segment86", "timestamp": "2023-10-17 20:15:19", "segment": "segment86", "image_urls": [], "Book": "computerorganization" }, { "section": "3.13 ", "content": "shows the derivation of the truth table for the circuit. The truth table can be drawn from the functions P and Q or by tracing through the circuit. Since there are three inputs, the truth table has eight rows, one for each of the eight combinations of input values, as shown in Figure", "url": "RV32ISPEC.pdf#segment87", "timestamp": "2023-10-17 20:15:20", "segment": "segment87", "image_urls": [], "Book": "computerorganization" }, { "section": "3.13b.
", "content": "For each combination of values imposed on the inputs of the circuit, the output values can be obtained by tracing through the circuit. Figure", "url": "RV32ISPEC.pdf#segment88", "timestamp": "2023-10-17 20:15:20", "segment": "segment88", "image_urls": [], "Book": "computerorganization" }, { "section": "3.13a ", "content": "shows the condition of the circuit corresponding to the input condition X = 0, Y = 0, Z = 0. Tracing through the circuit, we note the values of R1, R2, R3, and R4 and obtain P = 1 and Q = 1. We repeat the process for the other seven input combinations to derive the complete truth table, shown in Figure 3.13(b), in which columns corresponding to the intermediate outputs R1, R2, R3, and R4 are also shown for convenience; these columns are usually not retained in the final truth table. To summarize, the functional analysis procedure is as follows. To obtain the output functions from the logic diagram: (1) label the outputs of the gates in the circuit with arbitrary variable names; (2) express the outputs of the first-level gates (i.e., the gates whose inputs are the circuit input variables) as functions of their inputs; (3) express the outputs of the next level of gates as functions of their inputs (the circuit inputs and the outputs of the previous level of gates); (4) continue the process of step 3 until the circuit outputs are obtained; (5) substitute the function corresponding to each intermediate variable into the output functions, to eliminate the intermediate variables from the functions; (6) simplify the output functions if possible, using the methods described in Section 3.3. To obtain the truth table from the circuit diagram: (1) determine the number of inputs n of the circuit and list the binary numbers from 0 to 2^n - 1, forming the input portion of the truth table of 2^n rows; (2) label the outputs of the gates in the circuit with arbitrary symbols; (3) determine the outputs of the first-level gates for each input combination; each first-level output forms a column of the truth table; (4) using the combinations of input values and the values of the intermediate outputs already determined, derive the values of the outputs of the next level of gates; (5) continue the process of step 4 until the circuit outputs are reached.", "url": "RV32ISPEC.pdf#segment89", "timestamp": "2023-10-17 20:15:20", "segment": "segment89", "image_urls": [], "Book": "computerorganization" }, { "section": "3.6 SYNTHESIS OF COMBINATIONAL CIRCUITS ", "content": "Synthesis is the process of transforming the word statement of a function to be performed into a logic circuit. The word statement is first converted into a truth table.
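The truth-table procedure of Section 3.5 can be sketched for a small hypothetical two-output circuit; the gate structure below is my own example, not the circuit of Figure 3.12:

```python
from itertools import product

# Hypothetical circuit with inputs X, Y, Z and outputs P, Q.
# Intermediate gate outputs are labeled R1..R3, as in the procedure.
def trace(x, y, z):
    r1 = 1 - x        # first-level gate: inverter on X
    r2 = x & y        # first-level gate: AND of X, Y
    r3 = r1 | z       # next level: OR of R1 and input Z
    p = r2 | r3       # circuit output P
    q = 1 - p         # circuit output Q
    return p, q

# Enumerate all 2^n input rows and trace each one (steps 1-5):
for x, y, z in product((0, 1), repeat=3):
    p, q = trace(x, y, z)
    print(x, y, z, "|", p, q)
```

The intermediate columns r1 through r3 could be printed as well and then dropped from the final table, as the text suggests.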
This step requires identification of the input variables and of the output values corresponding to each combination of input values. Each output is then expressed as a Boolean function of the input variables, and the functions are transformed into logic circuits. In addition to synthesis, several other terms are used in the literature to denote this transformation: we say that we realize the function with a circuit, implement the function using a logic circuit, build a circuit for the function, or simply design a circuit. This book uses these terms interchangeably as needed. Four types of circuit implementations are popular: AND-OR and NAND-NAND implementations are generated directly from the SOP form of the function, while OR-AND and NOR-NOR implementations evolve directly from the POS form. We illustrate these implementations using the following example. Example 3.34: build a circuit to implement the function P shown in the following truth table.", "url": "RV32ISPEC.pdf#segment90", "timestamp": "2023-10-17 20:15:20", "segment": "segment90", "image_urls": [], "Book": "computerorganization" }, { "section": "3.6.1 AND\u2013OR Circuits ", "content": "From the truth table, P in SOP form is the sum of four product terms. We use an OR gate with four inputs to generate P (see Figure 3.14(a)). Each input of the OR gate is a product of three variables; hence we use four AND gates, one realizing each product term, with the outputs of the AND gates connected to the four inputs of the OR gate, as shown in Figure 3.14(b). Each complemented variable is generated using a NOT gate; Figure 3.14(c) shows the circuit needed to generate X', Y', and Z'. The final task in building the circuit is to connect the complemented and uncomplemented signals to the appropriate inputs of the AND gates. Often the NOT gates are not specifically shown in the circuit; it is assumed that the true and complemented values of the variables are available, and the logic circuit is usually shown as in Figure 3.14(b). This type of circuit, designed using the SOP form of the Boolean function as the starting point, is called a two-level AND-OR circuit: the first level consists of AND gates and the second level consists of an OR gate.", "url": "RV32ISPEC.pdf#segment91", "timestamp": "2023-10-17 20:15:20", "segment": "segment91", "image_urls": [], "Book": "computerorganization" }, { "section": "3.6.2 OR\u2013AND Circuits ", "content": "An OR-AND circuit is designed starting with the POS form of the function P, a product of sum terms. The design is carried out in three steps, the first two of which are shown in Figure 3.15(a); the third step, including the AND gate, is identical to that required in the AND-OR circuit design.
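A two-level AND-OR realization can be written out directly in code. The four product terms below are a hypothetical stand-in for the function P of Example 3.34, whose truth table is not reproduced here:

```python
# First level: one AND gate per product term (complemented inputs
# assumed available, as in Figure 3.14(b)).
def and_or(x, y, z):
    t1 = (1 - x) & (1 - y) & (1 - z)   # X'Y'Z'
    t2 = (1 - x) & y & z               # X'YZ
    t3 = x & (1 - y) & z               # XY'Z
    t4 = x & y & (1 - z)               # XYZ'
    return t1 | t2 | t3 | t4           # second level: 4-input OR gate

# The circuit's truth table:
for n in range(8):
    x, y, z = n >> 2 & 1, n >> 1 & 1, n & 1
    print(x, y, z, and_or(x, y, z))
```

Each `t` variable corresponds to one first-level AND gate; the final OR is the single second-level gate.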
In practice, the output functions are simplified before the logic diagrams are drawn; we have ignored the simplification problem in this example.", "url": "RV32ISPEC.pdf#segment92", "timestamp": "2023-10-17 20:15:20", "segment": "segment92", "image_urls": [], "Book": "computerorganization" }, { "section": "3.6.3 NAND\u2013NAND and NOR\u2013NOR Circuits ", "content": "The NAND and NOR operations shown in Figure 3.10 are universal operations: each of the primitive operations can be realized using only NAND operators or only NOR operators. Figure 3.16 shows the realization of the three primitive operations using only NAND gates; the theorems used in arriving at the simplified forms of the expressions are also identified in the figure. The NOR gate can be used in a similar way to realize the three primitive operations. The universal character of the NAND gate permits building logic circuits using only one type of gate, i.e., NAND. Example 3.35: Figure 3.17 illustrates the transformation of an AND-OR circuit into a circuit consisting only of NAND gates. Each AND gate and the OR gate of the circuit in Figure 3.17(a) is replaced by its equivalent of NAND gates (see Figure 3.16). In the resulting circuit of NAND gates there are redundant gates: in the circuit of Figure 3.17(c), gates 5 and 8 can be removed, since these gates simply complement the input signal B' twice and hence are not needed; similarly, gates 7 and 9 can be removed. The circuit of Figure 3.17(d) is the NAND-NAND circuit. The circuits of Figures 3.17(a) and (d) are equivalent, since both realize the function AB + A'C. The NAND-NAND implementation is thus derived from the AND-OR circuit by simply replacing each gate of the AND-OR circuit with a NAND gate having the same number of inputs. A NOR-NOR implementation is likewise obtained by starting with the OR-AND implementation and replacing each gate with a NOR gate. Any input literal feeding the second level directly must be inverted: consider the circuit of Figure 3.18(a), in which C is fed directly to the second level and hence must be inverted to derive the correct NAND-NAND implementation, shown in Figure 3.18(b). These implementations are feasible because gates are available commercially in the form of integrated circuits (ICs), in packages containing several gates of the same type. Using only one or two types of gates eliminates the need for several different types of ICs, and using only one type of IC usually allows a more efficient use of the ICs and reduces the IC package count.
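The NAND-NAND transformation of Example 3.35 can be checked exhaustively in code: the function AB + A'C is rebuilt from NAND gates alone and compared against the AND-OR form.

```python
def nand(a, b):
    return 1 - (a & b)

def f_and_or(a, b, c):
    return (a & b) | ((1 - a) & c)   # AB + A'C, two-level AND-OR

def f_nand_nand(a, b, c):
    a_not = nand(a, a)               # NOT realized from NAND
    t1 = nand(a, b)                  # first level: (AB)'
    t2 = nand(a_not, c)              # first level: (A'C)'
    return nand(t1, t2)              # second level; by DeMorgan = AB + A'C

# Verify equivalence over all 8 input combinations:
for n in range(8):
    a, b, c = n >> 2 & 1, n >> 1 & 1, n & 1
    assert f_and_or(a, b, c) == f_nand_nand(a, b, c)
print("equivalent")
```

The second-level NAND applies DeMorgan's law, turning the two complemented product terms back into their OR.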
NAND and NOR circuits are the primitive circuit configurations in the major IC technologies; AND and OR gates are realized by complementing the outputs of NAND and NOR gates, respectively. Thus NAND and NOR gates are less complex to fabricate and more cost-efficient than the corresponding AND and OR gates. To summarize, the combinational circuit design procedure is: (1) from the specification of the circuit function, derive the number of inputs and outputs required; (2) derive the truth table; (3) if an AND-OR or NAND-NAND form of the circuit is required, (a) derive the SOP form of the function for each output of the truth table, (b) simplify the output functions, and (c) draw the logic diagram with a first level of AND (or NAND) gates and a second level consisting of an OR (or NAND) gate with the appropriate number of inputs; (4) if an OR-AND or NOR-NOR form of the circuit is required, (a) derive the POS form of the function for each output of the truth table, (b) simplify the output functions if possible, and (c) draw the logic diagram with a first level of OR (or NOR) gates and a second level consisting of an AND (or NOR) gate with the appropriate number of inputs.", "url": "RV32ISPEC.pdf#segment93", "timestamp": "2023-10-17 20:15:20", "segment": "segment93", "image_urls": [], "Book": "computerorganization" }, { "section": "3.7 SOME POPULAR COMBINATIONAL CIRCUITS ", "content": "This section describes several commonly used combinational logic circuits. These are available as IC components from various manufacturers and can be used as components in designing logic systems.", "url": "RV32ISPEC.pdf#segment94", "timestamp": "2023-10-17 20:15:20", "segment": "segment94", "image_urls": [], "Book": "computerorganization" }, { "section": "3.7.1 Adders ", "content": "Addition is the most common arithmetic operation performed by processors. If the processor hardware is capable of the addition of two numbers, the other three primitive arithmetic operations can also be performed using the addition hardware: subtraction is performed by adding to the minuend the subtrahend expressed in either 2s or 1s complement form; multiplication is the repeated addition of the multiplicand to itself, multiplier number of times; and division is the repeated subtraction of the divisor from the dividend. Consider the addition of two numbers A and B, where bits A0 and B0 are the least significant bits (LSB) and A3 and B3 are the most significant bits (MSB). The addition is performed starting at the LSB position: adding A0 and B0 produces a sum bit S0 and a carry C0; the carry C0 is used in the addition of the next significant bits A1 and B1, producing S1 and C1; the addition process is carried through to the MSB position.
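The bit-by-bit carry chain just described can be sketched as a ripple-carry adder in Python. The full-adder equations used here (S = A xor B xor Cin, Cout = AB + ACin + BCin) are the standard ones, stated explicitly since the text leaves the simplified equations to the figures:

```python
def full_adder(a, b, cin):
    """Add 3 bits, producing a sum bit and a carry bit."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length numbers given as bit lists, LSB first.
    The carry ripples from the LSB stage toward the MSB stage."""
    carry, sum_bits = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        sum_bits.append(s)
    return sum_bits, carry

# 4-bit example, LSB first: 6 (0110) + 7 (0111) = 13 (1101).
print(ripple_carry_add([0, 1, 1, 0], [1, 1, 1, 0]))  # -> ([1, 0, 1, 1], 0)
```

Because each stage waits for the previous stage's carry, the loop also mirrors why the adder's delay grows with the number of bits.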
A half adder is a device that adds 2 bits, producing a sum bit and a carry bit as outputs; a full adder adds 3 bits, producing a sum bit and a carry bit as outputs. To add two n-bit numbers we thus need one half adder and n - 1 full adders. Figure 3.19 shows the half-adder/full-adder arrangement to perform 4-bit addition. This is called a ripple-carry adder, since the carry ripples through the stages of the adder, starting at the LSB and moving toward the MSB. The time needed for the carry propagation is proportional to the number of bits, and since the sum does not attain its correct value until the carry appears at the MSB, the longer the carry propagation time, the slower the adder. Several schemes to increase the speed of the adder are discussed in Chapter 10. Figure 3.20 shows the block diagram representations and truth tables of full adders and half adders. From the truth tables we derive the SOP forms of the functions for the outputs of the adders; the simplified equation has 6 literals, compared with 12 literals in the original equation, and is realized with three two-input gates and one three-input gate.", "url": "RV32ISPEC.pdf#segment95", "timestamp": "2023-10-17 20:15:20", "segment": "segment95", "image_urls": [], "Book": "computerorganization" }, { "section": "3.7.2 Decoders ", "content": "A code word is a string of a certain number of bits, as described in Chapter 2; an n-bit binary string can take 2^n combinations of values. The n-to-2^n decoder shown in Figure 3.23 is a circuit that converts the n-bit input data into 2^n outputs such that, at any time, at most one output line, the one corresponding to the combination on the input lines, is 1 and all the other outputs are 0. The outputs are usually numbered 0 through 2^n - 1. For example, if the combination on the inputs of a 4-to-2^4 decoder is 1001, output 9 is 1 and all the other outputs are 0. It is usually not necessary to draw a truth table for a decoder, since there would be a single 1 in each output column of the truth table, and the product term corresponding to that 1 can easily be derived. Figure 3.24 shows the circuit diagram of a 3-to-8 decoder, with the three inputs designated A, B, and C (C is the LSB) and the outputs numbered 0 through 7.", "url": "RV32ISPEC.pdf#segment96", "timestamp": "2023-10-17 20:15:20", "segment": "segment96", "image_urls": [], "Book": "computerorganization" }, { "section": "3.7.
3 Code Converters ", "content": "A code converter translates an input code word into an output bit pattern corresponding to a new code word. A decoder is a code converter that changes an n-bit code word into a 2^n-bit code word. The design of a code converter for BCD to excess-3 code conversion was provided earlier in this chapter.", "url": "RV32ISPEC.pdf#segment97", "timestamp": "2023-10-17 20:15:20", "segment": "segment97", "image_urls": [], "Book": "computerorganization" }, { "section": "3.7.4 Encoders ", "content": "An encoder generates an n-bit code word as a function of the combination of values on its inputs, of which there can be a maximum of 2^n (Figure 3.25). The design of an encoder is executed by first drawing a truth table that shows the n-bit output needed for each of the 2^n combinations of inputs, and then deriving the circuit diagram for each output bit. Example 3.36: a partial truth table for a 4-to-2-line encoder is shown in Figure 3.26(a). Although there are 16 combinations of the 4 inputs, only 4 are used, because the 2-bit output supports only 4 combinations. The output combinations identify which of the four input lines is 1 at a particular time. The output functions may be simplified by observing, for instance, that W = 0 and X = 0 are sufficient for D0 to be 1, no matter what the values of Y and Z are; similarly, W = 0 and Y = 0 are sufficient for D1 to be 1. Such observations, although not always straightforward, help in simplifying the functions and thus reduce the amount of hardware needed. Alternatively, the truth table of Figure 3.26(a) can be completed by including the remaining 12 input combinations and entering don't cares for the outputs corresponding to those inputs; D0 and D1 can then be derived from the truth table and simplified. The output functions, as seen from the truth table, are D0 = W'X' and D1 = W'Y' (3.7). Figure 3.26(b) shows the circuit diagram of the 4-to-2 encoder.", "url": "RV32ISPEC.pdf#segment98", "timestamp": "2023-10-17 20:15:21", "segment": "segment98", "image_urls": [], "Book": "computerorganization" }, { "section": "3.7.5 Multiplexers ", "content": "A multiplexer is a switch that connects one of several inputs to the output. A set of n control inputs is needed to select one of 2^n inputs to be connected to the output (Figure 3.27). Example 3.37: the operation of a multiplexer with four inputs D0 through D3, and hence two control signals C0 and C1, is described by the following table. Although there are six inputs in all, a complete truth table of 2^6 rows is not required for designing the circuit, since the output simply assumes the value of one of the four inputs, depending on the control signals C1 and C0.
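The encoder equations D0 = W'X' and D1 = W'Y' (3.7) from Example 3.36 can be checked in code, under the one-hot assumption of the partial truth table (exactly one input line is 1 at a time):

```python
def encoder(w, x, y, z):
    """4-to-2 encoder; inputs w, x, y, z are lines 0..3."""
    d0 = (1 - w) & (1 - x)   # D0 = W'X'
    d1 = (1 - w) & (1 - y)   # D1 = W'Y'
    return d0, d1

# With one line active at a time, (D0, D1) is the binary number of
# the active line, D0 being the more significant bit here:
for line, bits in enumerate([(1, 0, 0, 0), (0, 1, 0, 0),
                             (0, 0, 1, 0), (0, 0, 0, 1)]):
    print(line, encoder(*bits))
```

For input combinations outside the one-hot set, the equations produce don't-care outputs, which is exactly how they were simplified.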
The output Z of the multiplexer is D0 when C1 = 0 and C0 = 0, D1 when C1 = 0 and C0 = 1, D2 when C1 = 1 and C0 = 0, and D3 when C1 = 1 and C0 = 1; that is, Z = D0C1'C0' + D1C1'C0 + D2C1C0' + D3C1C0 (3.8). A circuit realizing the multiplexer is shown in Figure 3.28. The inputs D0, D1, D2, and D3 and the output of this multiplexer circuit are single lines; if an application requires data lines of more than 1 bit to be multiplexed, the circuit is duplicated for each bit of data.", "url": "RV32ISPEC.pdf#segment99", "timestamp": "2023-10-17 20:15:21", "segment": "segment99", "image_urls": [], "Book": "computerorganization" }, { "section": "3.7.6 Demultiplexers ", "content": "A demultiplexer has one input and several outputs; it is a switch that connects the input to one of the outputs, based on the combination of values on a set of control (select) inputs. With n control signals there can be a maximum of 2^n outputs (Figure 3.29). Example 3.38: for the operation of a demultiplexer with four outputs, and hence two control signals, it is not necessary to draw a truth table of eight rows for the circuit, although there are three inputs, since the input is directed to one of the four outputs based on the four combinations of values on the control signals C1 and C0. In a typical application, a multiplexer connects one of several input devices to the computer, as selected by the device number input to the system, and a demultiplexer is used to switch the output of the computer to one of several output devices.", "url": "RV32ISPEC.pdf#segment100", "timestamp": "2023-10-17 20:15:21", "segment": "segment100", "image_urls": [], "Book": "computerorganization" }, { "section": "3.8 INTEGRATED CIRCUITS ", "content": "So far in this chapter we have concentrated on the functional aspects of gates and logic circuits in terms of manipulating binary signals. A gate is an electronic circuit made up of transistors, diodes, resistors, and capacitors; these components are interconnected to realize a particular function. This section expands our understanding of gates and circuits to the electronic level of detail. In the current state of digital hardware technology, the logic designer combines ICs that perform specific functions to realize functional logic units. An IC is a small slice of silicon semiconductor crystal, called a chip, on which the discrete electronic components mentioned above are chemically fabricated and interconnected to form gates and circuits. The circuits are accessible through the pins attached to the chip; each pin carries either an input signal or an output signal of the circuit fabricated on the IC. The chip is mounted in either a metallic or a plastic package.
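Equation (3.8) for the four-input multiplexer of Section 3.7.5 can be written out directly in code:

```python
def mux4(d0, d1, d2, d3, c1, c0):
    """Z = D0.C1'C0' + D1.C1'C0 + D2.C1C0' + D3.C1C0  (Eq. 3.8)."""
    return ((d0 & (1 - c1) & (1 - c0)) |
            (d1 & (1 - c1) & c0) |
            (d2 & c1 & (1 - c0)) |
            (d3 & c1 & c0))

# The control signals select which data input reaches the output:
data = (0, 1, 1, 0)
for c1, c0 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(c1, c0, mux4(*data, c1, c0))   # selects D0, D1, D2, D3 in turn
```

For data lines wider than 1 bit, this function would be replicated per bit, just as the circuit is duplicated.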
Various types of packages, such as the dual-in-line package (DIP) and the flat package, are used; the DIP is the most widely used package, and its number of pins varies from 8 to 64. Each IC is given a numeric designation, printed on the package, and the IC manufacturer's catalog provides the functional and electronic details of the IC. An IC contains one or more gates of the same type, and the logic designer combines ICs to perform specific functions in order to realize functional logic units. The electronic-level details of gates are usually not needed to build efficient logic circuits; but as the complexity of a logic circuit increases, its electronic characteristics become important in solving the timing and loading problems of the circuit. Figure 3.30 shows the details of an IC from the popular TTL (transistor-transistor logic) family of ICs. It has the numeric designation 7400 and contains four two-input NAND gates. Of its 14 pins, pin 7 is the ground and pin 14 is the supply voltage used to power the IC; three pins are used by each gate, two for the inputs and one for the output. On all ICs, a notch on the package is used as a reference for the pin numbers; the pins are numbered counterclockwise, starting at the notch. Digital and linear are two common classifications of ICs: digital ICs operate on binary signals, whereas linear ICs operate on continuous signals. In this book we deal with digital ICs. Advances in IC technology have made it possible to fabricate a large number of gates on a single chip. According to the number of gates it contains, an IC is classified as a small-, medium-, large-, or very-large-scale circuit. An IC containing on the order of 10 gates is called a small-scale integrated (SSI) circuit; a medium-scale integrated (MSI) circuit has a complexity of around 100 gates and typically implements an entire function, such as an adder or a decoder, on a chip; an IC of complexity greater than 100 gates is a large-scale integrated (LSI) circuit; and a VLSI circuit contains many thousands of gates. There are two broad categories of IC technology, one based on bipolar transistors (i.e., pnp and npn junctions of semiconductors) and the other based on the unipolar metal-oxide semiconductor field-effect transistor (MOSFET). Within each technology, several logic families of ICs are available: TTL and emitter-coupled logic (ECL) are the popular bipolar logic families, while p-channel MOS (PMOS), n-channel MOS (NMOS), and complementary MOS (CMOS) are the popular MOS logic families. A new family of ICs based on gallium arsenide has been introduced recently; this technology has the potential of providing ICs faster than those of silicon technology. In the following discussion,
we examine the functional-level details of some ICs, details adequate to build circuits using those ICs. Various performance characteristics of ICs are introduced without electronic-level justification. In addition to details of the type provided in Figure 3.30, an IC manufacturer's catalog contains information on voltage ranges, logic levels, fan-out, propagation delays, and so forth for each IC. In this section we introduce the major symbols and notation used in IC catalogs to describe the common characteristics useful in selecting and using ICs.", "url": "RV32ISPEC.pdf#segment101", "timestamp": "2023-10-17 20:15:21", "segment": "segment101", "image_urls": [], "Book": "computerorganization" }, { "section": "3.8.1 Positive and Negative Logic ", "content": "As mentioned earlier, two distinct voltage levels are used to designate the logic values 1 and 0. In practice, these voltage levels are ranges of voltages rather than fixed values. Figure 3.31 shows the voltage levels used in TTL technology: the high level corresponds to the range 2.4 to 5 V, and the low level corresponds to the range 0 to 0.4 V. The two levels are designated H and L. In general, any two voltage levels can be selected, and the assignment of 1 and 0 to these levels is arbitrary. In the so-called positive logic system, the higher of the two voltages denotes logic-1 and the lower value denotes logic-0; in the negative logic system, the designations are the opposite. The following table shows the two possible logic value assignments in terms of H and L for positive-logic TTL and negative-logic ECL, where for ECL H is -0.7 to -0.95 V and L is -1.9 to -1.5 V. It is the assignment of 1 and 0 to the relative magnitudes of the voltages, rather than the polarity of the voltages, that determines the type of logic. With dual assignments, a logic gate that implements an operation in the positive logic system implements the dual operation in the negative logic system. IC manufacturers describe the function of gates in terms of H and L rather than logic-1 and logic-0. For example, consider the TTL 7408 IC, which contains four two-input AND gates (Figure 3.32(a)). The function of this IC is described by the manufacturer in terms of H and L, as shown in the voltage table. Using positive logic, the voltage table of Figure 3.32(b) is converted into the truth table of Figure 3.32(c), representing the positive-logic AND gate shown in Figure 3.32(d). Assuming negative logic, the table of Figure 3.32(b) is converted into the truth table of Figure 3.32(e), the truth table of the negative-logic OR gate shown in Figure 3.32(f); the half-arrows on the inputs and output designate negative logic values.
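The duality of Figures 3.32(d) and (f) can be verified exhaustively in code: the same H/L voltage table behaves as AND under positive logic and as OR under negative logic.

```python
def gate_7408(a, b):
    """One gate of the 7408, in the manufacturer's H/L terms:
    the output voltage is H only when both input voltages are H."""
    return "H" if a == "H" and b == "H" else "L"

def interpret(level, positive):
    """Positive logic: H = 1, L = 0.  Negative logic: H = 0, L = 1."""
    return (level == "H") == positive

for va in "HL":
    for vb in "HL":
        vo = gate_7408(va, vb)
        pos = [interpret(v, True) for v in (va, vb, vo)]
        neg = [interpret(v, False) for v in (va, vb, vo)]
        assert pos[2] == (pos[0] and pos[1])   # positive logic: AND
        assert neg[2] == (neg[0] or neg[1])    # negative logic: OR
print("AND in positive logic is OR in negative logic")
```

Only the interpretation of the voltage levels changes; the physical gate behavior, `gate_7408`, stays fixed.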
Note that the gates of Figures 3.32(d) and (f) correspond to the same physical gate, which functions as an AND gate in positive logic and as an OR gate in negative logic. Similarly, the other pairs of dual operations (OR/AND, NAND/NOR, and NOR/NAND) can be shown to be valid. We assume the positive logic system throughout this book and do not use negative-logic symbolism; in practice, a designer may combine the two logic notations in a circuit (mixed logic), as long as the signal polarities are interpreted consistently.", "url": "RV32ISPEC.pdf#segment102", "timestamp": "2023-10-17 20:15:21", "segment": "segment102", "image_urls": [], "Book": "computerorganization" }, { "section": "3.8.2 Signal Inversion ", "content": "We have used a bubble in the NAND and NOT gate symbols to denote signal inversion (complementation). The bubble notation can be extended to logic diagrams, as the examples in Figure 3.33 show. The gate symbol of Figure 3.33(a) implies that when the input X is asserted (e.g., is H), the output is low (L); the input is said to be active-high and the output active-low. An alternative symbol for the gate, with an active-low input, is shown in Figure 3.33(b), and a gate with active-low inputs is shown in Figure 3.33(c); the output of this gate is high when its inputs are low (note the INVERT-AND gate). A typical IC with four inputs and one output is shown in Figure 3.33(d): one input and the output E are active-low, and the inputs B and C are active-high. An L on the active-low input would appear as H internal to the IC, and an H corresponding to E internal to the IC would appear as L external to the IC. An active-low signal is thus active when it carries the low logic value, whereas an active-high signal is active when it carries the high logic value, and a bubble on a logic diagram indicates an active-low input or output. As an example, the BCD-to-decimal decoder IC 7442, shown in Figure 3.34, has active-low outputs; the only output active at any time is the one corresponding to the decimal value of the input BCD number: that output of the IC is L, and all the other outputs are H. When two ICs are used in a circuit, the active-low and active-high designations of the input and output signals must be observed for proper operation of the circuit, even though ICs of the same logic family are generally compatible. It is important to distinguish between the negative logic designation (half-arrow) and the active-low designation (bubble) on a logic diagram. The designations have a similar effect, as shown by the gate symbols of Figure 3.35, which can replace one another. In fact, as shown in Figure 3.35(c), a half-arrow following a bubble cancels the effect of the bubble on that signal; hence, if the half-arrow and the bubble are removed, the inputs and outputs of the gate have different polarities. For example, if the half-arrow and bubble are removed at the
output of the negative-logic gate of Figure 3.35(c), its inputs represent negative-logic polarities while its output represents positive logic.", "url": "RV32ISPEC.pdf#segment103", "timestamp": "2023-10-17 20:15:21", "segment": "segment103", "image_urls": [], "Book": "computerorganization" }, { "section": "3.8.3 Other Characteristics ", "content": "Other important characteristics to be noted in designing with ICs are the active-low and active-high designations of signals and the voltage polarities and low and high voltage values, especially when ICs of different logic families are used in a circuit. ICs of the same family are usually compatible with respect to these characteristics, and special ICs to interface circuits built of different IC technologies are also available. Designers usually select a logic family on the basis of the following characteristics: (1) speed, (2) power dissipation, (3) fan-out, (4) availability, (5) noise immunity (noise margin), (6) temperature range, and (7) cost. Power dissipation is proportional to the current drawn from the power supply, and the current is inversely proportional to the equivalent resistance of the circuit, which depends on the values of the individual resistors in the circuit, the load resistance, and the operating point of the transistors in the circuit. To reduce power dissipation, the resistance can be increased; however, increasing the resistance increases the rise time of the output signal, and the longer the rise time, the longer it takes for the circuit output to switch from one level to the other, and hence the slower the circuit. Thus a compromise between speed (i.e., switching time) and power dissipation is necessary; the availability of various versions of TTL, for instance, is the result of such compromises. A measure used to evaluate the performance of ICs is the speed-power product: the smaller this product, the better the performance. For standard TTL, with a power dissipation of 10 mW and a propagation delay of 10 ns, the speed-power product is 100 pJ. The noise margin of a logic family is the amount of deviation from the H and L ranges of a signal that can be tolerated by the gates of the logic family; the circuit function is not affected as long as the noise margins are obeyed. Figure 3.36 shows the noise margins for TTL: the output voltage level of a gate stays either below VOLmax or above VOHmin, while a gate input connected to that output treats any voltage above VIHmin = 2.0 V as high and any voltage below VILmax = 0.8 V as low. Thus TTL provides a guaranteed noise margin of 0.4 V, with a required supply voltage VCC of 5 V.
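The TTL figures above combine into simple worked arithmetic; the output limits VOHmin = 2.4 V and VOLmax = 0.4 V are the standard TTL values shown in Figure 3.36, restated here since the text gives them only in the figure:

```python
# Standard TTL levels (volts) and ratings from the discussion above.
V_OH_MIN, V_OL_MAX = 2.4, 0.4      # guaranteed output levels
V_IH_MIN, V_IL_MAX = 2.0, 0.8      # levels an input will accept

nm_high = V_OH_MIN - V_IH_MIN      # margin for a high signal
nm_low = V_IL_MAX - V_OL_MAX       # margin for a low signal
print(nm_high, nm_low)             # about 0.4 V in each state

power = 10e-3                      # 10 mW power dissipation
delay = 10e-9                      # 10 ns propagation delay
print(power * delay)               # speed-power product: 100 pJ
```

A family with a smaller speed-power product achieves a better compromise between switching time and dissipation.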
Depending on the IC technology, if the load on a gate is increased (i.e., if the number of inputs connected to its output is increased), the output voltage may enter the forbidden region, thereby contributing to improper operation; care must be taken to ensure that the output levels are maintained, by obeying the fan-out constraint of the gate or by using gates with special outputs that have higher fan-out capabilities. The fan-out of a gate is the maximum number of inputs (of other gates) that can be connected to its output without degrading the operation of the gate. Fan-out is a function of the current sourcing and sinking capabilities of the gate: when the gate provides the driving current to the gate inputs connected to its output, it is a current source, whereas it is a current sink when the current flows from the gates connected to its output into its own output. It is customary to assume that the driving and driven gates are of the same IC technology, and the fan-out is determined according to the current sourcing and sinking capabilities of the gates of that technology. In some technologies, when the output of the gate is high, it acts as a current source for the inputs it is driving (see Figure 3.37); as the current increases, the output voltage decreases and may enter the forbidden region, so the driving capability is limited by the voltage drop at the output. When its output is low, the driving gate acts as a current sink for the inputs, and the maximum current the output transistor can sink is limited by the heat-dissipation limit of the transistor. The fan-out is thus the minimum of the two driving capabilities. A standard TTL gate has a fan-out of 10, and a standard TTL buffer can drive up to 30 standard TTL gates. Various versions of TTL are available; depending on the version, the fan-out ranges from 10 to 20. The typical fan-out of ECL gates is 25, and that of CMOS is 50. The popularity of an IC family helps the availability and cost of its ICs, since cost comes down when ICs are produced in large quantities; popular ICs are therefore generally available. Availability is also a measure of the number of types of ICs in a family, and the availability of a large number of ICs in a family makes the design more flexible. The temperature range of operation is an important consideration, especially in environments such as military applications, automobiles, and so forth, where temperature variations are severe. Commercial ICs operate in the temperature range of 0°C to 70°C, while ICs for military applications operate in the range of -55°C to 125°C. The cost of an IC depends on the quantity produced, and popular off-the-shelf ICs have become inexpensive. As long as off-the-shelf ICs are used, the cost of the other components of a circuit (e.g., circuit boards, connectors, and interconnections) is currently higher than that of the ICs. If circuits are required in large quantities, custom-designed and custom-fabricated ICs can be used; small quantities do not justify the cost of custom design and fabrication. Several
types programmable ics allow semicustom design ics special applications table 31 summarizes characteristics popular ic technologies", "url": "RV32ISPEC.pdf#segment104", "timestamp": "2023-10-17 20:15:22", "segment": "segment104", "image_urls": [], "Book": "computerorganization" }, { "section": "3.8.4 Special Outputs ", "content": "several ics special characteristics available useful building logic circuits briey examine special ics section functional level detail gate logic circuit said loaded required drive inputs fanout either loaded gate replaced gate functionality higher fanout available ic family buffer connected output ics designated buffers drivers provide higher fanout regular ic logic family example following ttl 7400 series ics designated buffers 7406 hex inverter bufferdriver 7407 hex bufferdriver noninverting 7433 quad twoinput buffer 7437 quad twoinput nand buffer buffers drive approximately 30 standard ttl loads compared fanout 10 nonbuffer ics general outputs two gates connected without damaging gates gates two special types outputs available certain conditions allow outputs connected realize function output signals use gates thus results reduced complexity circui t need gates illus trated example 339 example 339 need design circuit connects one four inputs b c input z four control inputs c1 c2 c3 c4 determine whether b c connected z respectively also known one inputs connected output given time one control inputs active time function circuit thus represented z p c1 q c2 r c3 c4 figure 338a shows andor circuit implementation function gates figure 338a outputs connected form function fourinput gate eliminated circuit furthermore number inputs increases along corre sponding increase control inputs circuit expanded simply connect ing additional gate outputs common output connection way connecting outputs realize function known wiredor connection fact circuit generalized form bus transfers selected source signal selected destination bus shown figure 338b 
simply a common path shared by all source-to-destination data transfers, with W as an additional destination in the circuit. Only one source and one destination can be activated at a given time. For example, to transfer data from R to Z, control signals C3 and C5 are activated simultaneously; similarly, Q is connected to W when C2 and C6 are activated. Of the sources on the wired-OR bus, only one can be active at a given time; however, more than one destination can be activated simultaneously when the source signal must be transferred to several destinations. To transfer P to both Z and W, control signals C1, C5, and C6 are activated simultaneously. Buses are commonly used in digital systems where a large number of sources and destinations must be interconnected; using gates whose outputs can be connected to form the wired-OR reduces the complexity of the bus interconnection.
Two types of gates with special outputs are available for use in this mode: (1) gates with open-collector (free-collector) outputs, and (2) gates with tri-state outputs. Figure 3.39 shows two circuits in which the outputs of open-collector gates are tied together. When the outputs of TTL open-collector NAND gates are tied together, as shown in Figure 3.39a, a second-level AND gate is in effect realized, as illustrated by the dotted gate symbol. Note that this wired-AND capability results in the realization of an AND-OR-INVERT circuit (i.e., AND gates at the first level followed by an OR-INVERT at the second level) with only one level of gates. Similarly, open-collector ECL gates can be used at the first level to realize the OR function when their outputs are tied together, as shown in Figure 3.39b. This wired-OR capability results in the realization of an OR-AND-INVERT circuit (i.e., OR gates at the first level and a second level consisting of one AND-INVERT, or NAND, function) with only one level of gates.
There is a limit to the number of outputs (typically 10 for TTL) that can be tied together. When this limit would be exceeded, ICs with tri-state outputs are used in place of the open-collector ICs. In addition to providing the two logic levels 0 and 1, the outputs of these ICs can be made to stay in a high-impedance state; an enable input signal is used for this purpose. The output of the IC is either logic 1 or 0 when the IC is enabled (i.e., when the enable signal is active); when it is not enabled, the output is in the high-impedance state, which is equivalent in effect to the IC not being in the circuit. Note that the high-impedance state is not one of the logic levels, but rather a state in which the gate is not electrically connected to the rest of the circuit. The outputs of tri-state ICs can also be tied together to form a wired-OR, as long as only one IC is enabled at a time. Note that in this case the wired-OR is not like the wired-OR formed using open-collector gates, since only one output is active at a time. Figure", "url": "RV32ISPEC.pdf#segment105", "timestamp":
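The one-active-driver rule for a tri-state bus described above can be sketched in Python. This is my own illustrative model, not from the book: the function name and the use of the string "Z" for the high-impedance state are assumptions for the sketch.

```python
# Sketch (not from the book): resolving a wired tri-state bus.
# Each source drives the bus only when its enable is active; at most
# one enable may be active at a time, otherwise there is contention.

def bus_value(sources, enables):
    """Return the bus value given driver outputs and their enables.

    sources -- logic values (0 or 1) from each tri-state driver
    enables -- enable signals, at most one of which may be 1
    Returns the driven value, or "Z" (high impedance) if no driver is
    enabled; raises ValueError on bus contention.
    """
    driving = [s for s, e in zip(sources, enables) if e]
    if len(driving) > 1:
        raise ValueError("bus contention: more than one driver enabled")
    return driving[0] if driving else "Z"

# Route source R (the third driver) to the bus, as in the P/Q/R/S example.
p, q, r = 1, 0, 1
print(bus_value([p, q, r], [0, 0, 1]))  # -> 1
print(bus_value([p, q, r], [0, 0, 0]))  # -> Z (no driver enabled)
```

With all enables inactive the bus floats at "Z", which mirrors the high-impedance state of a disabled tri-state output.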
"2023-10-17 20:15:22", "segment": "segment105", "image_urls": [], "Book": "computerorganization" }, { "section": "3.40 ", "content": "shows an example bus circuit", "url": "RV32ISPEC.pdf#segment106", "timestamp": "2023-10-17 20:15:22", "segment": "segment106", "image_urls": [], "Book": "computerorganization" }, { "section": "3.39 ", "content": "using tri-state gates. The control inputs form the enable inputs of the tri-state gates; the tri-state outputs allow the outputs to be connected together, as with open-collector ICs.", "url": "RV32ISPEC.pdf#segment107", "timestamp": "2023-10-17 20:15:22", "segment": "segment107", "image_urls": [], "Book": "computerorganization" }, { "section": "3.41 ", "content": "shows a tri-state IC, the TTL 74241. This IC contains eight tri-state buffers: four are enabled by the signal on pin 1 and four by the signal on pin 19. Pin 1 is an active-low enable, and pin 19 is an active-high enable. Representative operating values are shown in Figure", "url": "RV32ISPEC.pdf#segment108", "timestamp": "2023-10-17 20:15:22", "segment": "segment108", "image_urls": [], "Book": "computerorganization" }, { "section": "3.41b ", "content": "ICs with several other special features are available. Some ICs provide both true and complement outputs (e.g., the ECL 10107). Some have a strobe input signal that needs to be active in order for the gate output to be active (e.g., the TTL 7425); the strobe is thus an enable input. Figure", "url": "RV32ISPEC.pdf#segment109", "timestamp": "2023-10-17 20:15:22", "segment": "segment109", "image_urls": [], "Book": "computerorganization" }, { "section": "3.42 ", "content": "illustrates these ICs. For further details, consult the IC manufacturer manuals listed at the end of the chapter; they provide a complete listing of ICs and their characteristics. Refer to the Appendix for details of some popular ICs.", "url": "RV32ISPEC.pdf#segment110", "timestamp": "2023-10-17 20:15:22", "segment": "segment110", "image_urls": [], "Book": "computerorganization" }, { "section": "3.8.5 ", "content": "Designing with ICs. NAND and NOR implementations are extensively used in designing with ICs, since the NAND and NOR functions are basic to IC fabrication. Using a single type of gate in an implementation is preferable, because several identical gates are available on one chip.
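The single-gate-type point above can be illustrated with a small sketch of my own (not from the book): XOR realized with four two-input NANDs, the kind of construction a NAND-only implementation relies on.

```python
# Sketch (mine, not from the book): any Boolean function can be built
# from NAND gates alone, which is one reason single-gate-type designs
# are attractive for IC fabrication. XOR from four two-input NANDs:
#   t = NAND(a, b);  a XOR b = NAND(NAND(a, t), NAND(b, t))

def nand(a, b):
    """Two-input NAND of logic values 0/1."""
    return 1 - (a & b)

def xor_from_nands(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# Exhaustively check against Python's XOR over all four input pairs.
for a in (0, 1):
    for b in (0, 1):
        assert xor_from_nands(a, b) == a ^ b
print("XOR truth table verified")
```

A single 7400 (quad two-input NAND) package would supply exactly the four gates this construction needs.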
Logic designers usually choose an available IC (a decoder, an adder, etc.) to implement functions rather than implementing them at the gate level as discussed earlier in the chapter. Several nonconventional design approaches can be taken when designing with ICs. For example, since decoders are available as MSI components, the outputs of a decoder corresponding to the input combinations for which the circuit provides an output of 1 can be ORed to realize the function. Example 3.40: an implementation of a full adder using a 3-to-8 decoder, whose outputs are assumed to be high-active, is shown in Figure 3.43. The full adder can also be implemented with a decoder whose outputs are low-active (the TTL 74166, for example).", "url": "RV32ISPEC.pdf#segment111", "timestamp": "2023-10-17 20:15:22", "segment": "segment111", "image_urls": [], "Book": "computerorganization" }, { "section": "3.41 ", "content": "As another example, a BCD to excess-3 code converter realized using a 4-bit parallel binary adder IC, the TTL 7483, is shown in Figure", "url": "RV32ISPEC.pdf#segment112", "timestamp": "2023-10-17 20:15:22", "segment": "segment112", "image_urls": [], "Book": "computerorganization" }, { "section": "3.44. ", "content": "The BCD code word is connected to one of the two 4-bit inputs of the adder, and the constant 3 (i.e., 0011) is connected to the other input; the output is then the BCD input plus 3, which is the excess-3 code. In practice, logic designers first select available MSI and LSI components to implement as much of the circuit as possible, and then use SSI components as required to complete the design.", "url": "RV32ISPEC.pdf#segment113", "timestamp": "2023-10-17 20:15:22", "segment": "segment113", "image_urls": [], "Book": "computerorganization" }, { "section": "3.9 ", "content": "Loading and Timing. Two main problems to resolve in designing with ICs are loading and timing. A loading problem occurs in the event that the output of one gate cannot drive the subsequent gates connected to it; in practice there is a limit to the number of gate inputs that can be connected to the output of a gate, a limit called the fanout of the gate. If the fanout limit is exceeded, signals degrade and hence the circuit may not perform properly. A loading problem is solved by providing a buffer at the output of the loaded gate, either by using a separate inverting or noninverting buffer or by replacing the loaded gate with one of higher fanout. The number of inputs of a gate is referred to as its fanin. The timing problem is in general not critical in a
simple combinational circuit; however, a timing analysis is usually necessary for a complex circuit, and timing diagrams are useful in such analysis. Figure", "url": "RV32ISPEC.pdf#segment114", "timestamp": "2023-10-17 20:15:22", "segment": "segment114", "image_urls": [], "Book": "computerorganization" }, { "section": "3.45 ", "content": "shows the timing characteristics of a gate. The x-axis indicates time; the logic values 1 and 0 are shown as magnitudes of voltage on the y-axis. Figure", "url": "RV32ISPEC.pdf#segment115", "timestamp": "2023-10-17 20:15:22", "segment": "segment115", "image_urls": [], "Book": "computerorganization" }, { "section": "3.46 ", "content": "shows the timing diagram of a simple combinational circuit. At t0, the three inputs A, B, C are all 0, and hence Z1 = Z2 = Z = 0. At t1, B changes to 1; assuming the gates have no delays (ideal gates), Z1 changes to 1 at t1 and hence Z also changes to 1. At t2, C changes to 1, resulting in the changes in Z1, Z2, and Z shown. At t3, A changes to 1, pulling A′ to 0; Z1 goes to 0, Z2 goes to 1, and Z remains at 1. The timing diagram can be expanded to indicate all combinations of inputs, giving a graphical way of representing the truth table.
We can also analyze the effect of gate delays using a timing diagram. Figure 3.47 shows an analysis of the circuit with the gate delays (Δ1, Δ2, Δ3, Δ4) taken into account. Assume the circuit starts at t0 with all inputs at 0. When A and B change to 1 at t1, the change in B results in a change in Z1 one gate delay after t1, and that change in Z1 in turn causes Z to change a further delay later. When A later changes, A′ falls only after the inverter delay; Z1 then falls and Z2 rises after their own gate delays. Because of these unequal delays there is a time period during which both Z1 and Z2 are 0, contributing a glitch in Z: Z momentarily goes to 0 and then rises back to 1. Such a momentary transition might cause problems in a complex circuit. These glitches, called hazards, result from unequal delays in the signal paths of the circuit; they can be prevented by adding additional circuitry. This analysis indicates the utility of timing diagrams.
The hazard in the earlier example is referred to as a static 1-hazard, since the output momentarily goes to 0 when it should remain at 1. This hazard can occur when the circuit is realized in sum-of-products (SOP) form. If the circuit is realized in product-of-sums (POS) form, a static 0-hazard may occur, wherein the circuit momentarily gives an output of 1 when it should have remained at 0. A dynamic hazard causes the output to change three or more times when it should change only once, from 1 to 0 or from 0 to 1. Figure", "url": "RV32ISPEC.pdf#segment116",
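The static 1-hazard mechanism described above can be demonstrated with a small discrete-time simulation. This is my own sketch, not the book's circuit: I use the classic hazard example F = A·B + A′·C with B = C = 1 and unit gate delays, where the inverted path is one delay slower than the direct path.

```python
# Sketch (assumed circuit, in the spirit of the text): a static 1-hazard
# in F = A*B + A'*C with B = C = 1. Every gate has a unit delay, so each
# gate output at step t reflects its inputs at step t-1. When A falls
# 1 -> 0, the A' path lags by one delay, both AND gates are momentarily
# 0, and F glitches low even though it should stay at 1.

def simulate(a_waveform, steps=8):
    b = c = 1
    not_a = and1 = and2 = f = 0   # all gate outputs start at 0
    trace = []
    for t in range(steps):
        a = a_waveform(t)
        # simultaneous update: right-hand side uses the previous step's values
        not_a, and1, and2, f = 1 - a, a & b, not_a & c, and1 | and2
        trace.append(f)
    return trace

# A is 1 for the first four steps, then falls to 0.
trace = simulate(lambda t: 1 if t < 4 else 0)
print(trace)  # -> [0, 1, 1, 1, 1, 0, 1, 1]  (the 0 at step 5 is the glitch)
```

Adding the redundant product term B·C (the consensus term) keeps one AND gate at 1 throughout the transition and removes the hazard, which is the "additional circuitry" remedy the text mentions.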
"timestamp": "2023-10-17 20:15:23", "segment": "segment116", "image_urls": [], "Book": "computerorganization" }, { "section": "3.48 ", "content": "demonstrates the various types of hazards. Hazards can be eliminated by including additional gates in the circuit. In general, the removal of all static 1-hazards in a circuit implemented in SOP form also removes the static 0-hazards and dynamic hazards. A detailed discussion of hazards is beyond the scope of this book; refer to Shiva (1998) for details on hazards.", "url": "RV32ISPEC.pdf#segment117", "timestamp": "2023-10-17 20:15:23", "segment": "segment117", "image_urls": [], "Book": "computerorganization" }, { "section": "3.10 SUMMARY ", "content": "This chapter provides an introduction to the analysis and design of combinational logic circuits. Logic minimization procedures were discussed. Although the discussion of IC technology was brief, the details given on designing with ICs are sufficient to understand the information in an IC vendor catalog and to start building simple circuits. A complete understanding of the timing and loading problems presented here is mandatory for understanding the rest of the material in the book. Logic circuits can also be implemented using programmable logic components: the programmable logic array (PLA), programmable array logic (PAL), the gate array (GA), and the field-programmable gate array (FPGA). Chapter 4 provides details of programmable logic design.", "url": "RV32ISPEC.pdf#segment118", "timestamp": "2023-10-17 20:15:23", "segment": "segment118", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 4 ", "content": "Synchronous Sequential Circuits. The digital circuits examined so far possess no memory: the output of a combinational circuit at any time is a function only of its inputs at that time. In practice, digital systems contain memory elements in addition to the combinational logic portion, thus making them sequential circuits. The output of a sequential circuit at any time is a function of its external inputs and its internal state at that time. The state of the circuit is defined by the contents of the memory elements in the circuit and is a function of the previous states and inputs of the circuit. Figure 4.1 shows the block diagram of a sequential circuit, with its inputs, n outputs, and p internal memory elements. The outputs of the p memory elements combined constitute the state of the circuit at any time t (i.e., the present state). The combinational logic determines the output of the circuit at that time and provides the next-state information to the memory elements, based on the external inputs and the present state. Based on this next-state information, the contents of the memory elements change to the next state — the state at time t + dt, where dt is a time increment sufficient for the memory elements to make the transition; we denote t + dt as t + 1.
In this chapter we consider two types of sequential circuits: synchronous and asynchronous. The behavior of a synchronous circuit depends on the signal values at discrete points in time; the behavior of an asynchronous circuit depends on the order in which the input signals change, and those changes can occur at any time rather than at discrete time instants. The discrete time instants in a synchronous circuit are determined by a controlling signal, usually called the clock. A clock signal makes 0-to-1 and 1-to-0 transitions at regular intervals. Figure 4.2 shows two clock signals, one the complement of the other, along with the various terms used to describe a clock. A pair of 0-to-1 and 1-to-0 transitions constitutes a pulse; a pulse consists of a rising edge and a falling edge. The time between the transitions (edges) of a pulse is the pulse width, and the period of the clock is the time between two corresponding edges of the clock. The clock frequency is the reciprocal of the period. Although the clock in Figure 4.2 is shown with regular period intervals, the widths of any two pulses need not be equal.
Synchronous sequential circuits use flip-flops as memory elements. A flip-flop is an electronic device that can store either a 0 or a 1. A flip-flop stays in one of its two logic states, and a change in the inputs of the flip-flop is needed to bring about a change in its state. A flip-flop typically has two outputs: one corresponds to the normal state Q and the other to the complement state Q′. We examine four popular flip-flops in this chapter.
Asynchronous circuits use time-delay elements (delay lines) as memory elements. A delay line, shown in Figure 4.3a, introduces a propagation delay Δt into its input signal. As shown in Figure 4.3b, the output signal is the same as the input signal, except delayed by Δt; for instance, a 0-to-1 transition of the input at t1 occurs at the output Δt later. Thus delay lines can be used as memory elements in which the present-state information at time t forms the input that determines the next state, achieved at t + Δt. In practice, the propagation delays introduced by the logic gates of the combinational circuit may be sufficient to produce the needed delay, thereby removing the need for a physical time-delay element. In such cases, the model of Figure", "url": "RV32ISPEC.pdf#segment119",
"timestamp": "2023-10-17 20:15:23", "segment": "segment119", "image_urls": [], "Book": "computerorganization" }, { "section": "4.1 ", "content": "reduces to a combinational circuit with feedback (i.e., a circuit whose outputs are fed back as its inputs). Thus an asynchronous circuit may be treated as a combinational circuit with feedback. Because of the feedback, a change occurring in the outputs as a result of an input change may in turn contribute to further changes in the inputs, and this cycle of changes may continue and make the circuit unstable if the circuit is not properly designed. In general, asynchronous circuits are difficult to analyze and design. When properly designed, however, they tend to be faster than synchronous circuits.
A synchronous sequential circuit is generally controlled by pulses from a master clock: the flip-flops in the circuit make a transition to a new state only when a clock pulse is present at their inputs. In the absence of a single master clock, the operation of the circuit becomes unreliable, since two clock pulses arriving from different sources at the inputs of the flip-flops are not guaranteed to arrive at the same time, because of unequal path delays. This phenomenon is called clock skew. Clock skew can be avoided by analyzing the delay in each path from the clock source and inserting additional gates in the paths with shorter delays, to make the delays in all paths equal. This chapter describes analysis and design procedures for synchronous sequential circuits; refer to the books listed in the references section of this chapter for details of asynchronous circuits.", "url": "RV32ISPEC.pdf#segment120", "timestamp": "2023-10-17 20:15:23", "segment": "segment120", "image_urls": [], "Book": "computerorganization" }, { "section": "4.1 FLIP-FLOPS ", "content": "As mentioned earlier, a flip-flop is a device that can store either a 0 or a 1. When the flip-flop contains a 1, it is said to be set (i.e., Q = 1, Q′ = 0); when it contains a 0, it is reset (i.e., Q = 0, Q′ = 1). We introduce the logic properties of four popular types of flip-flops in this section.", "url": "RV32ISPEC.pdf#segment121", "timestamp": "2023-10-17 20:15:23", "segment": "segment121", "image_urls": [], "Book": "computerorganization" }, { "section": "4.1.1 Set\u2013Reset Flip-Flop ", "content": "The set-reset (SR) flip-flop has two inputs: S for setting and R for resetting the flip-flop. An ideal SR flip-flop can be built using the cross-coupled NOR circuit shown in Figure 4.4a. The operation of this circuit is illustrated in Figure 4.4b. When the inputs S = 1 and R = 0 are applied at time t, Q′ assumes the value 0 one gate delay later; since Q′ = 0 and R = 0, Q assumes the value 1 another gate delay later. Thus, within two gate delay times, the circuit settles into the set state. If we denote the two gate delay time as Δt, the state at time t + Δt (i.e., t + 1) is designated Q(t+1) = 1, and it does not change even when S changes back to 0, as shown in the second row of Figure 4.4b. An analysis of the circuit indicates that the Q and Q′ values do not change until R is changed to 1, at which point the output values change to Q = 0 and Q′ = 1; changing R back to 0 does not alter the output values. If S = 1 and R = 1 are applied, both outputs assume the value 0, regardless of the previous state of the circuit. This condition is not desirable, since proper flip-flop operation requires that one output always be the complement of the other. Further, if the input condition then changes from S = 1, R = 1 to S = 0, R = 0, the state of the circuit depends on the order in which the inputs change: if S changes to 0 faster than R, the circuit attains the reset state; otherwise it attains the set state.
Thus the cross-coupled NOR gate circuit forms an SR flip-flop. The input condition S = 1, R = 0 sets the flip-flop; the condition S = 0, R = 1 resets it; and S = 0, R = 0 constitutes a no-change condition. The input condition S = 1, R = 1 is not permitted to occur at the inputs. The transitions of the flip-flop from its present state Q(t) to the next state Q(t+1) for the various input combinations are summarized in Figure 4.4c. This table is called a state table; it has four columns, one corresponding to each input combination, and two rows, one corresponding to each state of the flip-flop.
Recall that the outputs of the cross-coupled circuit of Figure 4.4 do not change instantaneously once an input condition changes; the change occurs after a delay equivalent to at least two gate delays. This is an asynchronous circuit, since the outputs change as the inputs change. The circuit is also called an SR latch; as the name implies, the device can be used to latch (i.e., store) data for later use. In the circuit of Figure 4.5, the data (1 or 0) on the input line is latched into the flip-flop.
A clock input can be added to the circuit of Figure 4.4 to construct a clocked SR flip-flop, as shown in Figure 4.6a. As long as the clock stays at 0 (i.e., in the absence of a clock pulse), the outputs of the two input gates, S1 and R1, are 0, and hence the state of the flip-flop cannot change. The S and R values are impressed on the flip-flop inputs S1 and R1 only during a clock pulse; the clock thus controls the transitions, making the circuit synchronous. The graphic symbol for the clocked SR flip-flop is shown in Figure 4.6b. Given the present state and the S and R input conditions, the next state of the flip-flop is determined as shown in the characteristic table of Figure", "url": "RV32ISPEC.pdf#segment122", "timestamp": "2023-10-17 20:15:23", "segment": "segment122",
"image_urls": [], "Book": "computerorganization" }, { "section": "4.6c. ", "content": "This table is obtained by rearranging the state table of Figure", "url": "RV32ISPEC.pdf#segment123", "timestamp": "2023-10-17 20:15:23", "segment": "segment123", "image_urls": [], "Book": "computerorganization" }, { "section": "4.4c ", "content": "so that the next state can be determined easily when the present state and the input condition are known. Figure", "url": "RV32ISPEC.pdf#segment124", "timestamp": "2023-10-17 20:15:23", "segment": "segment124", "image_urls": [], "Book": "computerorganization" }, { "section": "4.7 ", "content": "shows an SR flip-flop formed by cross-coupling two NAND gates, along with its truth table. As seen from the truth table, this circuit requires a 0 input to change state, unlike the circuit of Figure", "url": "RV32ISPEC.pdf#segment125", "timestamp": "2023-10-17 20:15:23", "segment": "segment125", "image_urls": [], "Book": "computerorganization" }, { "section": "4.4, ", "content": "which requires a 1. As mentioned earlier, it takes at least two gate delay times for a state transition to occur after a change in the input condition of the flip-flop. Thus the pulse width of the clock controlling the flip-flop must at least equal this delay, and the inputs should not change until the transition is complete. If the pulse width is much longer than the delay, the state transition resulting from the first input condition change can be overridden by subsequent changes in the inputs during the same clock pulse, if it is necessary to recognize each change in input conditions. However, the pulse width must be short enough; the pulse width and the clock frequency thus must be adjusted to accommodate the flip-flop circuit transition time and the rate of input change. The timing characteristics and requirements of flip-flops are discussed later in this chapter. Figure", "url": "RV32ISPEC.pdf#segment126", "timestamp": "2023-10-17 20:15:24", "segment": "segment126", "image_urls": [], "Book": "computerorganization" }, { "section": "4.8 ", "content": "shows the graphic symbol of an SR flip-flop with both clocked S and R inputs and asynchronous preset and clear inputs. No clock is required to activate the flip-flop through the asynchronous inputs. These asynchronous (direct) inputs are not used during the regular operation of the flip-flop; they are generally used to initialize the flip-flop to either the set or the reset state. For instance, when the circuit power is turned on, the states of the flip-flops cannot be determined; the direct inputs are used to initialize the state, either manually through a master-clear switch or by a power-up circuit that pulses the direct inputs of all the flip-flops in the circuit. We now examine three other commonly used flip-flops. The preset, clear, and clocked configurations discussed above apply to these flip-flops as well. In the remaining sections of this chapter, when no time is shown associated with a reference signal, the current time t is assumed.", "url": "RV32ISPEC.pdf#segment127", "timestamp": "2023-10-17 20:15:24", "segment": "segment127", "image_urls": [], "Book": "computerorganization" }, { "section": "4.1.2 D Flip-Flop ", "content": "Figure 4.9 shows a delay (data) flip-flop and its state table. This flip-flop assumes the state of its input: Q(t+1) = 1 if D(t) = 1, and Q(t+1) = 0 if D(t) = 0. The function of the flip-flop is to introduce a unit delay (Δt) into the signal at its input D; hence it is known as a delay flip-flop. It is also called a data flip-flop, since it stores the data on its input line. The D flip-flop is a modified SR flip-flop, obtained by connecting D to the S input and D′ to the R input, as shown in Figure 4.9c. The clocked D flip-flop is also called a gated D-latch; the clock signal gates the data into the latch. The next state of the flip-flop is the data input at time t, regardless of its present state, as illustrated in the characteristic table shown in Figure 4.9d.", "url": "RV32ISPEC.pdf#segment128", "timestamp": "2023-10-17 20:15:24", "segment": "segment128", "image_urls": [], "Book": "computerorganization" }, { "section": "4.1.3 JK Flip-Flop ", "content": "The JK flip-flop is a modified SR flip-flop in which the J = 1, K = 1 input combination is allowed to occur; when this combination occurs, the flip-flop complements its state. The J input corresponds to the S input, and the K input corresponds to the R input, of an SR flip-flop. Figure 4.10 shows the graphic symbol, state table, and characteristic table of a JK flip-flop. For a realization of a JK flip-flop using an SR flip-flop, see Problem 4.1, which includes a hint on how to convert one type of flip-flop into another.", "url": "RV32ISPEC.pdf#segment129", "timestamp": "2023-10-17 20:15:24", "segment": "segment129", "image_urls": [], "Book": "computerorganization" }, { "section": "4.1.4 T Flip-Flop ", "content": "Figure 4.11 shows the graphic symbol, state table, and characteristic table of a toggle (T) flip-flop. This flip-flop complements its state when T = 1 and remains in the same state when T = 0. A T flip-flop can be realized by connecting together the J and K inputs of a JK flip-flop, as shown in Figure 4.11d.", "timestamp":
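The next-state behavior of the four flip-flop types just introduced can be summarized in a small sketch of my own (not from the book); the function names are assumptions, and the equations are the standard characteristic equations implied by the state tables described above.

```python
# Sketch (standard characteristic equations, stated here for reference):
# next-state functions of the SR, D, JK, and T flip-flops.

def sr_next(q, s, r):
    """SR: Q(t+1) = S + R'Q; S = R = 1 is not permitted."""
    assert not (s and r), "S = R = 1 is not permitted for an SR flip-flop"
    return s | ((1 - r) & q)

def d_next(q, d):
    """D: Q(t+1) = D, regardless of the present state."""
    return d

def jk_next(q, j, k):
    """JK: Q(t+1) = JQ' + K'Q; J = K = 1 complements the state."""
    return (j & (1 - q)) | ((1 - k) & q)

def t_next(q, t):
    """T: Q(t+1) = T xor Q; toggles when T = 1."""
    return t ^ q

# J = K = 1 complements the state, and a T flip-flop is a JK
# flip-flop with J = K = T, as the text describes.
assert jk_next(0, 1, 1) == 1 and jk_next(1, 1, 1) == 0
assert all(t_next(q, t) == jk_next(q, t, t) for q in (0, 1) for t in (0, 1))
print("flip-flop characteristic tables verified")
```

These four functions reproduce the characteristic tables row by row, which is exactly the information used in the analysis examples later in the chapter.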
"2023-10-17 20:15:24", "segment": "segment130", "image_urls": [], "Book": "computerorganization" }, { "section": "4.1.5 Characteristic and Excitation Tables ", "content": "The characteristic table of a flip-flop is useful in the analysis of sequential circuits, since it provides the next-state information as a function of the present state and the inputs. The characteristic tables of the four flip-flops are given in Figure 4.12 for ready reference. The excitation tables (input tables) shown in Figure 4.13 for each flip-flop are useful in designing sequential circuits, since they describe the excitation (input) conditions required to bring about the state transition of the flip-flop from Q(t) to Q(t+1). These tables are derived from the state tables of the corresponding flip-flops. Consider the state table of the SR flip-flop shown in Figure", "url": "RV32ISPEC.pdf#segment131", "timestamp": "2023-10-17 20:15:24", "segment": "segment131", "image_urls": [], "Book": "computerorganization" }, { "section": "4.4. ", "content": "For the transition of the flip-flop from state 0 to state 0, shown in the first row of the state table, the input can be either SR = 00 or SR = 01; that is, the SR flip-flop makes the transition from 0 to 0 as long as S = 0 and R is either 1 or 0. This excitation requirement is shown as SR = 0d in the first row of the excitation table. The transition from 0 to 1 requires the input SR = 10; the transition from 1 to 0 requires SR = 01; and 1 to 1 requires SR = d0. Thus the excitation table accounts for all four possible transitions. The excitation tables of the other three flip-flops are similarly derived.", "url": "RV32ISPEC.pdf#segment132", "timestamp": "2023-10-17 20:15:24", "segment": "segment132", "image_urls": [], "Book": "computerorganization" }, { "section": "4.2 ", "content": "Timing Characteristics of Flip-Flops. Consider the cross-coupled circuit forming an SR flip-flop. Figure 4.14 shows its timing diagram, assuming the flip-flop is in state 0 to begin with. At t1, the S input changes from 0 to 1; in response, Q changes to 1 at t2, a delay Δt after t1, where Δt is the time required for the circuit to settle into the new state. At t3, S goes to 0, with no change in Q. At t4, R changes to 1, and hence Q changes to 0 a Δt later, at t5. At t6, R changes back to 0, with no effect on Q. Note that the S and R inputs must remain at their new data values for at least the time Δt for the flip-flop to recognize the changed input condition (i.e., to make the state transition); this time is called the hold time.
Now consider the timing diagram of the clocked SR flip-flop shown in Figure 4.15, with a clock pulse width W = t8 − t1. S changes to 1 at t2; in response, Q changes to 1 at t3, a Δt later. Since the clock pulse is still 1 when R changes to 1 at t5, Q changes to 0 at t6. Had the pulse width been W1 = t4 − t1, the R change would not have been recognized. Thus, in the case of a clocked SR flip-flop, the clock pulse width must be at least equal to Δt for the flip-flop to change state in response to a change in its inputs. If the pulse width is greater than Δt and the S and R values change during the clock pulse, the flip-flop circuit keeps changing states as a result of the input changes and registers the last input change before the end of the pulse. The clock pulse width is therefore a critical parameter for the proper operation of the flip-flop.
Now consider the timing diagram of Figure 4.16 for a flip-flop whose circuit contains a feedback path from its outputs to its input (as in a T or JK flip-flop). The flip-flop changes to 1 at t1 and then changes back to its original state 0 at t2, a Δt later. If the input stays at 1 longer (i.e., beyond t2), the output is again fed back to the input and the flip-flop changes state once more. To avoid such oscillation, W must always be less than Δt. In order to avoid the problems resulting from the clock pulse width, flip-flops in practice are designed either as master-slave flip-flops or as edge-triggered flip-flops, which are described next.", "url": "RV32ISPEC.pdf#segment133", "timestamp": "2023-10-17 20:15:24", "segment": "segment133", "image_urls": [], "Book": "computerorganization" }, { "section": "4.2.1 Master-Slave Flip-Flops ", "content": "In the master-slave configuration shown in Figure 4.17a, two flip-flops are used. The clock controls the separation of the circuit inputs from, and their connection to, the inputs of the master; the inverted clock controls the separation and connection of the slave inputs with respect to the master outputs. In practice, the clock signal takes a certain amount of time to make its transitions from 0 to 1 and from 1 to 0, shown as tr and tf, respectively, in the timing diagram of Figure 4.17b. As the clock changes from 0 to 1, at point A the slave stage is disconnected from the master stage; at point B the master is connected to the circuit inputs and can change state based on those inputs. At point C, as the clock makes its transition from 1 to 0, the master stage is isolated from the inputs, and the slave inputs are connected to the outputs of the master stage. The slave flip-flop then changes state based on its inputs while the slave stage is isolated from further input changes through the master stage. Thus the master-slave configuration results in one state change per clock period, thereby avoiding the race conditions resulting from the clock pulse width. Note that the inputs of the master stage can change after the clock pulse, while the slave stage is changing state, without affecting the operation of the master-slave flip-flop, since such changes are recognized by the master only at the next clock
pulse. Master-slave flip-flops are especially useful when the input of a flip-flop is a function of its own output. Now consider the timing diagram of Figure 4.17c for a master-slave flip-flop. S and R are initially 0, so the flip-flop should not change state during the clock pulse. However, a glitch on the S line while the clock is high sets the master stage, which in turn is transferred to the slave stage, resulting in an erroneous state. This is called the ones-catching problem. It can be avoided by ensuring that input changes are complete and the inputs are stable well before the leading edge of the clock; this timing requirement is known as the setup time, tsetup, and is related to the clock pulse width W. Meeting it requires either a narrow clock pulse width, which is difficult to guarantee, or a large setup time, which reduces the operating speed of the flip-flop. Edge-triggered flip-flops are preferred over master-slave flip-flops because of the ones-catching problem associated with the latter.", "url": "RV32ISPEC.pdf#segment134", "timestamp": "2023-10-17 20:15:24", "segment": "segment134", "image_urls": [], "Book": "computerorganization" }, { "section": "4.2.2 Edge-Triggered Flip-Flops ", "content": "Edge-triggered flip-flops are designed to change state based on the input conditions at either the rising or the falling edge of the clock. The rising edge of the clock triggers a positive edge-triggered flip-flop (as shown in Figure 4.15), and the falling edge of the clock triggers a negative edge-triggered flip-flop. Changes in input values that occur away from the triggering edge do not bring about a state transition; these flip-flops wait for the next triggering edge.
Figure 4.18a shows a common trailing edge-triggered flip-flop circuit built with three cross-coupled flip-flops. Flip-flops 1 and 2 serve to set the inputs of the third flip-flop to the appropriate values based on the clock and data inputs. Consider the clock input transitions shown in Figure 4.18b, with flip-flop 3 reset initially (i.e., Q = 0). When the clock goes to 1 at t0, W goes to 0 one gate delay later; since Z remains 0, flip-flop 3 does not change state. While the clock pulse is 1, X follows the data input (i.e., X = D′). At t1 the data input changes; when the clock changes to 0 at t2, Z changes to 1 a delay later, at t3, while W remains 0. Consequently, flip-flop 3 changes state to 1 a delay later. Thus the state change is brought about by the trailing edge of the clock. While the clock is 0, a change at the data input does not change either Z or W, as shown at t4 and t5, where Z = 1 and W = 0. At t6 the clock changes to 1, and Z goes to 0 at t7. The data input then changes to 0; since the clock is 1, X changes accordingly a delay later, and this change results in W changing to 1 at the trailing edge of the clock at t8. Since Z = 0 and W = 1, flip-flop 3 changes to 0. As seen from the timing diagram shown in Figure 4.18b, at the trailing edge of each clock pulse either W or Z becomes 1. When Z = 1, it is blocked by gate 1; when W = 1, it is blocked by gates 2 and 4. This blocking requires one gate delay from the trailing edge of the clock, and no change after the blocking occurs can affect the state; thus the hold time is one gate delay. Note that the total time required for the flip-flop transition is three gate delays from the trailing edge: one gate delay for W or Z to change and two gate delays for Q and Q′ to change. Thus, adding a tsetup of two gate delays to the transition time of three gate delays, the minimum clock period is five gate delays. If the output of the flip-flop is fed directly back to its input, or if there is additional circuitry in the feedback path, as is usually the case in sequential circuits, the minimum clock period increases correspondingly. A leading edge-triggered flip-flop can be designed using cross-coupled NAND circuits along the lines of the circuit shown in Figure 4.18.", "url": "RV32ISPEC.pdf#segment135", "timestamp": "2023-10-17 20:15:24", "segment": "segment135", "image_urls": [], "Book": "computerorganization" }, { "section": "4.3 ", "content": "Flip-Flop ICs. The Appendix provides details of several flip-flop ICs. The TTL 7474 is a dual positive edge-triggered D flip-flop IC. The triangle at the clock input in the graphic symbol indicates positive edge triggering; negative edge triggering is indicated by a triangle along with a bubble at the input, as in the case of the 74LS73. SD and RD are active-low asynchronous set and reset inputs, respectively, that operate independently of the clock. The data input is transferred to the Q output on the positive clock edge. The input must be stable for one setup time (20 ns) prior to the positive edge of the clock, and the positive transition time of the clock (i.e., from 0.8 to 2.0 V) should be equal to or less than the clock-to-output delay time for reliable operation of the flip-flop.
The 7473 and 74LS73 are dual master-slave JK flip-flop ICs. The 7473 is positive pulse triggered; note the absence of a triangle at the clock input in its graphic symbol. The JK information is loaded into the master while the clock is high and is transferred to the slave on the high-to-low clock transition. For conventional operation of this flip-flop, the JK inputs must be stable while the clock is high. The flip-flop also has direct set and reset inputs. The 74LS73 is a negative edge-triggered flip-flop; its JK inputs must be stable for one setup time (20 ns) prior to the high-to-low transition of the clock. This flip-flop has an active-low direct reset input.
The 7475 consists of four bistable latches, with each 2-bit latch controlled by an active-high enable
When the enable input E is high, data at the D input enters the latch and appears at the Q output; the Q output follows the data input as long as the enable stays high. The latched output remains stable as long as the enable input stays low. The data input must be stable for at least one setup time prior to the high-to-low transition of the enable, at which point the data is latched.

4.4 ANALYSIS OF SYNCHRONOUS SEQUENTIAL CIRCUITS

The analysis of a synchronous sequential circuit is the process of determining the functional relation that exists between its outputs, its inputs, and its internal states. The contents of the flip-flops in the circuit, taken together, determine its internal state; thus, if the circuit contains n flip-flops, it can be in one of 2^n states. Knowing the present state of the circuit and the input values at time t, we should be able to derive its next state (i.e., the state at time t+1) and the output produced by the circuit at time t.

A sequential circuit can be described completely by a state table, similar to the ones shown for flip-flops in Figures 4.4 through 4.11. For a circuit with n flip-flops, the state table has 2^n rows; for a circuit with m inputs, it has 2^m columns. At the intersection of each row and column, the next-state and output information is recorded. A state diagram is a graphical representation of the state table: each state is represented by a circle, and each state transition is represented by an arrow between circles. The input combination that brings about the transition, along with the corresponding output, is shown on the arrow. Analyzing a sequential circuit thus corresponds to generating the state table and the state diagram of the circuit, which can then be used to determine the output sequence generated by the circuit for a given input sequence, provided the initial state is known. It is important to note that, for proper operation, a sequential circuit must be in its initial state before inputs are applied; usually, power-up circuits are used to initialize the circuit to the appropriate state when power is turned on. The following examples illustrate the analysis procedure.

Example 4.1 Consider the sequential circuit shown in Figure 4.19(a). It has one circuit input x, one output z, and one D flip-flop. To analyze the operation of the circuit, we trace through it for various input values and flip-flop states and derive the corresponding output and next-state values. Since the circuit has one flip-flop, it has two states, corresponding to Q = 0 and Q = 1. The present state is designated y. The circuit output z is a function of the state y and the circuit input x at time t, and the next state of the circuit at time t+1 is determined by the value at the D input at time t, since the memory element is a D flip-flop. Assume y = 0 and x = 0. Tracing through the circuit, we see that z = 0 and D = 0, and hence Y(t+1) = 0. This state transition and output are shown in the top left blocks of the next-state and output tables of Figure 4.19(b). Similarly, when x = 0 and y = 1, z = 1 and D = 1, making Y(t+1) = 1, as shown in the bottom left blocks of the tables. The other two entries of each table are similarly derived by tracing through the circuit. The two tables are merged, entry by entry, into the single table shown in Figure 4.19(c), the so-called transition table of the circuit: in each block, corresponding to a present-state/input combination, the next-state and output information is entered.

Notes: 1. The arrows shown at the rising edges of the clock emphasize that the circuit uses a flip-flop triggered at that edge. 2. The clock edge at t3 triggers the change of Q from 1 to 0 (since x = 1); the state change is delayed by dt and occurs by the corresponding falling edge of the clock. For simplicity we have assumed that dt equals the clock width, and hence the transition of Q is shown at t4; in practice the clock width is slightly larger than dt. 3. We also ignore the data setup and hold times in the timing diagrams of this chapter for simplicity, and simply use the values of the flip-flop inputs at the triggering clock edge to determine the transition of the flip-flop.

In each block of the transition table, the next state and the output are separated by a slash mark. From the table of Figure 4.19(c) we see that when the state of the circuit is 0, it produces an output of 0 and stays in state 0 as long as the input values are 0s. The first 1 input sends the circuit to state 1 with an output of 1. Once in state 1, the circuit remains in that state regardless of the input, and the output is the complement of the input x. The state diagram of Figure 4.19(d) illustrates the operation of the circuit graphically. Each circle represents a state; each arrow represents a state transition, with the input value that causes the transition and the corresponding output of the circuit at that time shown on the arrow, separated by a slash. We will generalize the state diagram and state table representations in the next example.

Since the flip-flop is positive edge-triggered, each state transition takes place at the rising edge of the clock; this fact, however, is not explicitly shown in the tables. A timing diagram can be used to illustrate the timing characteristics of the operation of the circuit. A four-clock-pulse period is shown in Figure 4.19(e), during which the input x is assumed to make the transitions shown. Since the flip-flop is positive edge-triggered, each state change occurs as a result of a rising edge and takes a certain amount of time after the edge occurs; it is assumed that the new state is attained by the falling edge of the clock. Assuming the initial state is 0: at t1, y = 0 and z = 0, thus not affecting Y; at t2, x changes to 1 and hence z changes to 1 (neglecting gate delays); at t3, the positive edge of the clock starts the state transition, making Q reach 1 by t4, and z goes to 0 since Q goes to 1 at t4. x changes to 0 at t5, thereby bringing z back to 1 without changing D; hence Y remains 1 through t8. D and the corresponding transitions of z are also shown. The last part of the timing diagram shows z ANDed with the clock, to illustrate the fact that the output z is valid only during the clock pulse, just as x is assumed valid at the clock pulses. The timing diagram represents the input sequence x = 0101 and the corresponding output sequence z = 0110. Note that the output sequence is the 2s complement of the input sequence, with the LSB occurring first and the MSB last.

The circuit tracing procedure discussed above can be adopted for the analysis of simple circuits; as a circuit becomes complex, however, tracing becomes cumbersome. We now illustrate a systematic procedure for the derivation of the state table, and hence the state diagram. From an analysis of the combinational circuit of Figure 4.19(a), the flip-flop input equation (excitation equation) is D = x + y, and the circuit output equation is z = xy' + x'y. These equations express the inputs of the flip-flop and the circuit output as functions of the circuit input and the state of the circuit at time t. Figure 4.20(a) shows D in truth table form: a table whose rows correspond to the present states of the circuit and whose columns correspond to the combinations of values on the circuit input x. Knowing the value of D for each block of the table (i.e., each present-state/input pair), we can determine the corresponding next state of the flip-flop using the flip-flop excitation table. In the case of a D flip-flop the next state equals D, and hence the next-state table shown in Figure 4.20(b) is identical to Figure 4.20(a). The truth table for the output z is represented in the table of Figure 4.20(c). The tables of Figures 4.20(b) and 4.20(c) are merged to form Figure 4.20(d), which is usually called the transition table, since it shows the state and output transitions in binary form. In general, each state of a circuit is designated by an alphabetic character, and the transition table is converted into a state table by the assignment of a character to each state. The state table obtained by assigning A = 0 and B = 1 is shown in Figure 4.20(e). Example 4.2 illustrates the analysis procedure for a more complex circuit.

Example 4.2 Consider the sequential circuit shown in Figure 4.21(a). There are two clocked flip-flops, one input line x, and one output line z. The Q outputs of the flip-flops, Y1 and Y2, constitute the present state of the circuit at time t.
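The analysis of Example 4.1 can be checked with a short simulation (a sketch; the function names are illustrative, and the equations D = x + y and z = xy' + x'y are the ones derived above):

```python
# Sketch of the Example 4.1 circuit: one D flip-flop with state y,
# excitation equation D = x + y, output equation z = x XOR y.

def step(y, x):
    """One clock cycle: return (output z at time t, next state Y(t+1))."""
    z = x ^ y            # z = x*y' + x'*y
    y_next = x | y       # D = x + y; for a D flip-flop, Y(t+1) = D
    return z, y_next

def run(bits, y=0):
    """Apply an input bit sequence (LSB first) from initial state y."""
    out = []
    for x in bits:
        z, y = step(y, x)
        out.append(z)
    return out

# Input x = 0101 (LSB first) gives z = 0110, the serial 2s complement:
print(run([0, 1, 0, 1]))   # -> [0, 1, 1, 0]
```

Running the input sequence of the timing diagram reproduces the output sequence noted above.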
The signal values at J and K determine the next state Y1(t+1) of the J-K flip-flop, and the value at D determines the next state Y2(t+1) of the D flip-flop. Since both flip-flops are triggered by the same clock, their transitions occur simultaneously. In practice, only one type of flip-flop is used in a circuit: since an IC generally contains more than one flip-flop, using a single type of flip-flop reduces the component count and hence the cost of the circuit. Different types of flip-flops are used in the examples of this chapter for illustration purposes.

Analyzing the combinational portion of the circuit, we derive the flip-flop input (excitation) equations

J = x·y2,  K = x + y2',  D = y1'·y2 + x'·y2'

and the circuit output equation

z = x·y1·y2'.

Since there are two flip-flops, there are four states; hence the state table shown in Figure 4.21(b) has four rows, identified by the state vectors y1y2 = 00, 01, 10, and 11. The input x can be 0 or 1, and hence the state table has two columns. The next-state transitions of the J-K flip-flop are derived in Figure 4.21(c). Note that the tables shown in Figure 4.21(c) for J and K are simply their truth tables, rearranged to reflect the combinations of input values along the columns and the combinations of present-state values along the rows. The tables for J and K are merged, entry by entry, to derive a composite J-K table, which makes it easier to derive the state transitions of the J-K flip-flop. Although both y1 and y2 values are shown in the table, only the y1 value is required to determine Y1(t+1) once the J and K values are known. For example, in the boxed entry at the top left of the table, J = 0 and K = 1; hence, from the characteristic table of the J-K flip-flop (Figure 4.13), the flip-flop resets and Y1(t+1) = 0. Similarly, for the boxed entry in the second row, J = 1 and K = 1; hence the flip-flop complements its state, and since the y1 value corresponding to that entry is 0, Y1(t+1) = 1. This process is repeated six more times to complete the Y1(t+1) table. From an analysis of the flip-flop transitions shown in Figure 4.21(d), Y2(t+1) is derived from the corresponding transitions. The transition tables of the individual flip-flops are merged, column by column, to form the transition table of the entire circuit, shown in Figure 4.21(e). Instead of denoting the states by the primary state vectors, letter designations can be used for the states; with the states so named, the next-state table shown in Figure 4.21(f) is derived. The output table of Figure 4.21(g) is derived from the circuit output equation shown above. The output and next-state tables are merged to form the state table of the circuit, shown in Figure 4.21(h); the state table thoroughly depicts the behavior of the sequential circuit. The state diagram of the circuit, derived from the state table, is shown in Figure 4.21(i). Assuming a starting (initial) state and an input sequence, the corresponding next-state and output sequences are shown in Figure 4.21(j). Note that the output sequence indicates an output of 1 when the circuit receives the input sequence 0101; the circuit is thus a 0101 sequence detector. Note also that once the sequence is detected, the circuit goes back to its starting state, and another complete 0101 sequence is required for the circuit to produce an output of 1 again. The state diagram of this example can be rearranged to make the circuit detect overlapping sequences, so that the circuit produces an output whenever 01 occurs directly after the detection of a 0101 sequence. For example:

x = 000101010100101
z = 000001010100001

The state diagram shown in Figure 4.22 accomplishes this, starting from the initial state.

Once the state table and state diagram of a sequential circuit are known, they permit a functional analysis, whereby the behavior of the circuit can be determined for any input sequence. A timing analysis is required when a more detailed analysis of circuit parameters is needed; Figure 4.23 shows the timing diagram for the first five clock pulses of Example 4.2.

The heuristic analysis above can be formalized into the following step-by-step procedure for the analysis of synchronous sequential circuits:

1. Analyze the combinational part of the circuit to derive the excitation equations for each flip-flop and the circuit output equations.
2. Note the number of flip-flops p, and determine the number of states, 2^p. Express each flip-flop input equation as a function of the circuit inputs and the present state, and derive the transition table for each flip-flop using the characteristic table of that flip-flop.
3. Derive the next-state table for each flip-flop, and merge them into one, thus forming the transition table for the entire circuit.
4. Assign names to the state vectors in the transition table to derive the next-state table.
5. Using the output equations, draw a truth table for each output of the circuit. Rearrange these tables in state table form, one per output, and merge the output tables, column by column, to form the circuit output table.
6. Merge the next-state and output tables into one to form the state table for the entire circuit.
7. Draw the state diagram.

It is not always necessary to follow this analysis procedure; some circuits yield to direct analysis, as shown in Example 4.3.

Example 4.3 Figure 4.24(a) shows a sequential circuit made up of two flip-flops. Recall that each flip-flop complements its state when its input is 1. Hence, when the input x is held at 1, FF0 complements on each clock pulse, and FF1 complements whenever Q0 = 1, i.e., on every second clock pulse. The state diagram is shown in Figure 4.24(b). Note that the output of the circuit is its state. As can be seen from Figure 4.24(b), the circuit is a modulo-4 counter.
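The counter of Example 4.3 can be sketched directly (a minimal model, assuming, as described above, that FF0 toggles on every pulse while x = 1 and that FF1 toggles whenever Q0 was 1 at the triggering edge):

```python
# Mod-4 counter of Example 4.3 (sketch): two toggling flip-flops.
# FF0 complements on every clock pulse while x = 1; FF1 complements
# whenever Q0 = 1 at the triggering clock edge.

def clock_pulse(q1, q0, x=1):
    """One clock pulse; returns the next (Q1, Q0)."""
    q1_next = q1 ^ (x & q0)   # FF1 toggles when Q0 was 1
    q0_next = q0 ^ x          # FF0 toggles on every pulse (x held at 1)
    return q1_next, q0_next

q1 = q0 = 0
counts = []
for _ in range(5):
    q1, q0 = clock_pulse(q1, q0)
    counts.append(2 * q1 + q0)
print(counts)   # -> [1, 2, 3, 0, 1]
```

Starting from state 00, the state (which is also the output) cycles through 0, 1, 2, 3 and wraps around, confirming the modulo-4 count.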
Refer to Example 4.5 for another modulo-4 counter design. What would the count sequence be if FF0 and FF1 of Figure 4.24(a) were falling-edge triggered?

4.5 DESIGN OF SYNCHRONOUS SEQUENTIAL CIRCUITS

The design of a sequential circuit is the process of deriving a logic diagram from a specification of the required behavior of the circuit. The required behavior is often expressed in words. The first step of the design is to derive an exact specification of the required behavior in terms of either a state diagram or a state table. This is probably the most difficult step in the design, since no definite rules can be established for deriving the state diagram or state table; the designer's intuition and experience guide the derivation. Once the word description has been converted into a state diagram or state table, the remaining steps become mechanical. We examine the classical design procedure through the examples of this section. It is not always necessary to follow the classical procedure; some designs lend themselves to more direct, intuitive design methods.

The classical design procedure consists of the following steps:

1. Deriving the state diagram (and state table) for the circuit from the problem statement.
2. Deriving the number of flip-flops p needed for the design from the number of states n in the state diagram, by the formula 2^(p-1) < n <= 2^p.
3. Deciding on the types of flip-flops to be used (often this simply depends on the types of flip-flops available for the particular design).
4. Assigning a unique p-bit pattern (state vector) to each state.
5. Deriving the state transition table.
6. Separating the state transition table into p tables, one for each flip-flop.
7. Deriving an input table for each flip-flop input, using the excitation tables of Figure 4.13.
8. Deriving input equations for each flip-flop input, and the circuit output equations.
9. Drawing the circuit diagram.

The design procedure is illustrated by the following examples.

Example 4.4 We will design a sequential circuit that detects the input sequence 1011, where sequences may overlap. Since 1011 is the sequence to be detected, the circuit gives an output of 1 when the input completes the sequence 1011. Because overlap is allowed, the last 1 of a 1011 sequence could be the first bit of the next 1011 sequence, and hence a further input of 011 is enough to produce an output of 1: the input sequence 1011011 consists of two overlapping sequences.

Figure 4.25(a) shows the state diagram. The sequence starts with a 1. Assuming a starting state A, the circuit stays in A as long as the input is 0, producing an output of 0 and waiting for an input of 1 to occur. The first 1 input takes the circuit to a new state B. As long as the inputs continue to be 1, the circuit stays in B, waiting for a 0 to occur to continue the sequence; when a 0 arrives, it moves to a new state C. If a 0 is received in C, the sequence of inputs ends in 100, which cannot possibly lead to 1011, and hence the circuit returns to state A. If a 1 is received in C, the circuit moves to a new state D, continuing the sequence. A further 1 input completes the 1011 sequence: the circuit gives an output of 1 and goes to B, in preparation for the 011 of a new sequence. A 0 input in B creates the possibility of an overlap; hence the circuit returns to C, to detect the 11 subsequence required to complete the sequence.

Drawing the state diagram is purely a process of trial and error. In general, we start with an initial state; from each state we move either to a new state or to one of the already-reached states, depending on the input values. The state diagram is complete when all input combinations have been tested and accounted for at every state. Note that the number of states in the diagram is not predetermined, and various diagrams are typically possible for a given problem statement. The amount of hardware needed to synthesize the circuit increases with the number of states; it is thus desirable to reduce the number of states where possible.

The state table for this example is shown in Figure 4.25(b). Since there are four states, we need two flip-flops, and the four 2-bit patterns are arbitrarily assigned to the states. The transition table of Figure 4.25(c) and the output table of Figure 4.25(d) are then drawn. From the output table we see that z = x·y1·y2. We will use an S-R flip-flop for each flip-flop. (It is common practice to use only one kind of flip-flop in a circuit; different kinds are used in this chapter for illustration purposes.) The transitions of flip-flop 1 are extracted from Figure 4.25(c) and shown in the first table, for Y1(t+1), of Figure 4.25(e).
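The state diagram of Example 4.4 can be written down directly as a table-driven machine (a sketch of the diagram above; each entry is (next state, output z), and the D-on-0 transition back to C follows from "seen 10" being the useful suffix of 1010):

```python
# Overlapping 1011 detector of Example 4.4: states A (start), B (seen 1),
# C (seen 10), D (seen 101); Mealy outputs on the arrows.

FSM = {
    ("A", 0): ("A", 0), ("A", 1): ("B", 0),
    ("B", 0): ("C", 0), ("B", 1): ("B", 0),
    ("C", 0): ("A", 0), ("C", 1): ("D", 0),
    ("D", 0): ("C", 0), ("D", 1): ("B", 1),   # 1011 completed, overlap kept
}

def detect(bits, state="A"):
    out = []
    for x in bits:
        state, z = FSM[(state, x)]
        out.append(z)
    return out

# 1011011 contains two overlapping 1011 sequences:
print(detect([1, 0, 1, 1, 0, 1, 1]))   # -> [0, 0, 0, 1, 0, 0, 1]
```

The run on 1011011 produces outputs of 1 at the fourth and seventh bits, matching the two overlapping detections described in the problem statement.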
These transitions are converted into S and R excitations using the excitation table for the S-R flip-flop (Figure 4.14). For example, a 0-to-0 transition of the flip-flop requires S = 0 and R = d (don't-care), while a 1-to-0 transition requires S = 0 and R = 1. The S and R excitations, as functions of x, y1, and y2, are then separated into individual tables, and the excitation equations are derived from each; the tables and equations are shown in Figure 4.25(e). The input equations for the second flip-flop are similarly derived, as shown in Figure 4.25(f), and the circuit diagram is shown in Figure 4.25(g).

The complexity of a sequential circuit can be reduced by simplifying its input and output equations. In addition, a judicious allocation of state vectors to states also reduces circuit complexity.

Example 4.5 Modulo-4 up-down counter. A modulo-4 counter has four states: 0, 1, 2, and 3. An input x to the counter controls the direction of the count, with x = 0 and x = 1 selecting the two directions. The state of the circuit (i.e., the count) is also the circuit output. The state diagram is shown in Figure 4.26(a). The derivation of the state diagram for a counter is straightforward, since the number of states and the transitions are completely defined by the problem statement. Note that only input values are shown on the arcs; the output of the circuit is its state. The circuit needs two flip-flops, and the assignment of 2-bit vectors to states is also defined by the problem: 00, 01, 10, and 11 correspond to counts 0, 1, 2, and 3, respectively. The state table and transition table are shown in Figures 4.26(b) and 4.26(c), respectively. The control signal x selects counting in one direction when x = 0 and in the other when x = 1. We use J-K flip-flops; the flip-flop input equations are derived in Figure 4.26(d), and Figure 4.26(e) shows the circuit.

4.6 REGISTERS

A register is a storage device capable of holding binary data; it is a collection of flip-flops. An n-bit register is built of n flip-flops. Figure 4.27 shows a 4-bit register built of four flip-flops. The four input lines IN1, IN2, IN3, and IN4 are connected to the inputs of the corresponding flip-flops. When a clock pulse occurs, the data on input lines IN1 through IN4 enter the register; the clock thus loads the register. The loading is in parallel, since all four bits enter the register simultaneously. The Q outputs of the flip-flops are connected to the output lines OUT1 through OUT4; hence the four bits of data (i.e., the contents of the register) are available simultaneously, in parallel, on the output lines. This is therefore a parallel-input (parallel-load), parallel-output register. At a clock pulse, the 4-bit data on input lines IN1 through IN4 enter the register and remain in the register until the next clock pulse.

The clock controls the loading of the register: as shown in Figure 4.28, LOAD must be 1 for data to enter the register. The CLEAR signal shown in Figure 4.28 loads zeros into the register, i.e., clears it. Clearing a register is a common operation, and it is normally done through the asynchronous clear (reset) inputs provided on the flip-flops; since asynchronous inputs are used, the clearing operation is done independent of the clock. The CLEAR signal shown in Figure 4.29 clears the register asynchronously. In this scheme, the clear input must be set to 1 for clearing
the register, and then brought back to 0 to deactivate the reset and allow resumption of the normal operation of the circuit.

In the circuit of Figure 4.29, data on the input lines enter the register at each rising edge of the clock; therefore, we need to make sure that the data input lines are always valid. Figure 4.30 shows a 4-bit register built of J-K flip-flops in which data enter the register at the rising edge of the clock only when LOAD = 1. When LOAD = 0, J = K = 0 and the contents of the register remain unchanged. Note that we have used two gates to gate the data into each flip-flop in Figure 4.30. If we try to eliminate one of the gates by gating the data line rather than the J and K lines, we introduce unequal delays into the J and K lines, due to the extra gate in the K line. As long as the clock edge appears after the inputs have settled, this unequal delay does not present a problem, and the circuit operates properly. If the J-K flip-flops of Figure 4.30 are replaced with D flip-flops, a change is needed to retain the contents of the register unaltered when LOAD = 0. In this case, the choice of the type of flip-flop used in a circuit depends on the mode of implementation of the synchronous circuit: in designing ICs it is often efficient to use J-K flip-flops with gated inputs, while in implementing a circuit with MSI parts it is common to use D flip-flops and gate the clocks.

A common operation on the data in a register is to shift it either right or left. Figure 4.31 shows a 4-bit shift register built of D flip-flops, in which the Q output of each flip-flop is connected to the D input of the flip-flop to its right. At each clock pulse, the content of D1 moves to D2, the content of D2 moves to D3, and the content of D3 moves to D4, simultaneously; this is hence a right-shift register. The output of the shift register at any time is the content of D4. If the serial input is set to 1, a 1 is entered into D1 at the next shift pulse; similarly, a 0 is loaded by setting the serial input to 0. An n-bit shift register can thus be loaded serially in n clock pulses, and the contents of the register can be output serially, using the serial output line, in n clock pulses. Note that in loading an n-bit right-shift register serially, the least significant bit must be entered first, followed by the more significant bits. If the output line of a shift register is connected to its input line, the contents of the register circulate at each shift pulse.

Figure 4.32 shows a shift register with serial input, serial output, parallel output, circulate left and right, and shift left and right capabilities. The D input of each flip-flop receives data from the flip-flop to its right or to its left, depending on whether the DIRECTION signal is 0 or 1, respectively. Since the right and left signals are complements, the register shifts in only one direction at a time. The register performs a shift or a circulate based on the value of the MODE signal. In shift mode, data on the left input enter the register when DIRECTION = 1, and data on the right input enter the register when DIRECTION = 0. The contents of the register are available in parallel on O1 O2 O3 O4, and serially on O4. A 3-bit shift register using S-R flip-flops, with right-shift and parallel and serial input capabilities, is shown in Figure 4.33. Refer to the Appendix for details of shift register ICs.

The following examples illustrate the utility of shift registers in sequential circuit design.

Example 4.6 1011 sequence detector using a shift register. It is possible to design sequential circuits without following the classical design procedure discussed earlier in this chapter. We illustrate this by designing the 1011 sequence detector of Example 4.4 using a 4-bit right-shift register. The circuit is shown in Figure 4.34(a). z1 = 1 when the shift register contains the sequence 1011 (i.e., 1101 read left to right, since the input enters the shift register at the left). z1 is gated with the clock to produce z, and the same clock is used as the shift control for the shift register; the shift register is activated at the rising edge of the clock. The operation of the circuit is illustrated by the timing diagram of Figure 4.34(b). At t1, x = 1, and hence a 1 enters the shift register; we assume the shift register contents have settled by t2. Similarly, a 0 enters the shift register at t3, and 1s enter at t5 and t7, resulting in the sequence 1011 being the content of the shift register; hence z1 goes to 1 at t8. Since a 0 enters the shift register at t9, z1 returns to 0 at t10. Thus z = 1 during t9-t10.
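The shift-register approach of Example 4.6 can be sketched as follows (a model, not the gate-level circuit; a `deque` stands in for the 4-bit register, and the input enters at the left, so a serially received 1011 appears as 1101 read left to right, as noted above):

```python
# Example 4.6 sketch: 1011 detection by inspecting the contents of a
# 4-bit right-shift register; the register is cleared to begin with.
from collections import deque

def detector(bits):
    reg = deque([0, 0, 0, 0], maxlen=4)    # 4-bit register, cleared
    out = []
    for x in bits:
        reg.appendleft(x)                  # right shift; input enters at left
        out.append(1 if list(reg) == [1, 1, 0, 1] else 0)
    return out

print(detector([1, 0, 1, 1, 0, 1, 1]))     # -> [0, 0, 0, 1, 0, 0, 1]
```

The same overlapping detections as in Example 4.4 fall out for free, at the cost of four state bits instead of two, with almost no combinational logic.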
Note that z will be 1 again during t15-t16, since the entry of 011 completes the required sequence (the shift register is assumed cleared to begin with). Also note that this circuit requires four flip-flops, compared with the two flip-flops needed by the circuit of Figure 4.25(g); its combinational portion, however, is less complex.

Example 4.7 Serial adder. The ripple adder circuit described in Chapter 3 uses n-1 full adders and one half adder to generate the sum of two n-bit numbers. The addition is done in parallel, although the carry must propagate from the LSB position to the MSB position; this carry propagation delay determines the speed of the adder. If a slower addition can be tolerated by the system, a serial adder can be utilized. A serial adder uses only one full adder and two shift registers. The bits to be added are brought to the full adder inputs, the sum output of the full adder is shifted into one of the operand registers, and the carry output is stored in a flip-flop to be used in the addition of the next more significant bits. An n-bit addition is thus performed in n cycles, i.e., n clock pulse times, through the full adder.

Figure 4.35 shows a serial adder for 6-bit operands stored in shift registers A and B. The addition follows the stage-by-stage addition process done on paper, from LSB to MSB. The carry flip-flop is reset at the beginning of the addition, since the carry into the LSB position is 0. The full adder adds the LSBs of A and B with Cin and generates the sum S and carry Cout. At the first shift pulse, Cout enters the carry flip-flop and the sum enters the MSB of A as the A and B registers are shifted right simultaneously; the circuit is then ready for the addition of the next stage. Six pulses are needed to complete the addition, at the end of which A holds the least significant n bits of the sum of A and B, with the (n+1)th bit in the carry flip-flop. The operands A and B are lost at the end of the addition process. If the LSB output of B is connected to the MSB input of B, B becomes a circulating shift register whose contents are unaltered by the addition, since the bit pattern in B after the sixth shift pulse is the same as it was before the addition began. If the value of A is also required to be preserved, A should be converted into a circulating shift register, and the sum output of the full adder must then be fed to a third shift register. The circuit enclosed in dotted lines in Figure 4.35 is a sequential circuit with one flip-flop (hence two states), two input lines A and B, and one output line SUM; Cin is the present-state vector and Cout is the next-state vector.

Example 4.8 Serial 2s complementer. A serial 2s complementer follows the copy-complement algorithm for 2s complementing the contents of a register (see Chapter 2). Recall that the algorithm examines the bits of the register starting from the LSB: all consecutive zero bits, as well as the first nonzero bit, are copied, and the remaining bits, up to and including the MSB, are complemented; this converts the number into its 2s complement. An example follows:

1 0 1 1 0 1 0 1 0 0 0   11-bit number
(complement)   (copy)
0 1 0 0 1 0 1 1 0 0 0   2s complement

There are two distinct operations in the algorithm: copy and complement. The transition from copying mode to complementing mode is brought about by the first nonzero bit. The serial complementer circuit must therefore be a sequential circuit, since its mode of operation at any time depends on whether a nonzero bit has occurred. It has two states and hence one flip-flop. The circuit has one input line b, on which the number to be complemented enters, starting with the LSB, and one output line, carrying either b (copy) or b' (complement). The circuit starts in the copy state and changes to the complement state when the first nonzero bit enters through the input line. At the beginning of each 2s complement operation, the circuit must be set to the copy state. Figure 4.36 shows the design of the circuit: the state diagram (Figure 4.36(a)) and state table (Figure 4.36(b)) are derived first, followed by the output equation (Figure 4.36(c)) and the input equations for the S-R flip-flop (Figure 4.36(e)); the circuit is shown in Figure 4.36(f). The circuit is set to the copy state by resetting the flip-flop before the complementation begins.

4.7 REGISTER TRANSFER LOGIC

The manipulation of data in a digital system involves the movement of data between registers. This data movement can be accomplished by either serial or parallel transfer. The transfer of n-bit data from one register to another takes n shift pulses when done in the serial mode, one bit at a time, but only one pulse in the parallel mode. A data path transferring 1 bit between the registers is sufficient for the serial mode of operation, since the path is repeatedly used for transferring the n bits one at a time; in a parallel transfer scheme, n data paths are needed. Thus the serial transfer scheme is less expensive in terms of hardware, but slower, than the parallel scheme.

Figure 4.37 shows a parallel transfer scheme between the 4-bit registers A and B; X is a control signal. The data transfer occurs at the rising edge of the clock pulse when X = 1. When X = 0, the J and K inputs of the flip-flops of register B are 0, and hence the contents of register B remain unchanged even when a rising edge of the clock pulse occurs. In a synchronous digital circuit, control signals such as X are also synchronized with the clock.
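The serial adder of Example 4.7 can be sketched in a few lines (a minimal model, not the shift-register hardware: LSB-first bit lists stand in for registers A and B, and a variable stands in for the carry flip-flop):

```python
# Example 4.7 sketch: 6-bit serial addition with one full adder and a
# carry flip-flop, one bit position per clock pulse.

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def serial_add(A, B):
    """A, B: 6-bit lists, LSB first. Returns (sum bits LSB first, carry)."""
    carry = 0                       # carry flip-flop reset before addition
    result = []
    for i in range(6):              # six shift pulses
        s, carry = full_adder(A[i], B[i], carry)
        result.append(s)            # sum bit shifted into the result register
    return result, carry

# 13 + 25 = 38; LSB-first: 13 = [1,0,1,1,0,0], 25 = [1,0,0,1,1,0]
total, cout = serial_add([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
print(total, cout)   # -> [0, 1, 1, 0, 0, 1] 0   (i.e., 38, no overflow)
```

As in the hardware, the carry saved from each stage feeds the next, so the whole addition costs n cycles through a single full adder.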
The timing requirements for the proper operation of the parallel transfer circuit are shown in Figure 4.37(b): the control signal X goes to 1 at t1, the register transfer occurs at t2, and X can be brought back to 0 after t2. We represent the transfer scheme by the diagram of Figure 4.37(c). Each register is represented by a rectangle, along with its clock input and the required register inputs and outputs; the 4 shown next to a line indicates the number of bits transferred, and hence the number of parallel lines needed, on the path controlled by X. This is a common convention used to represent multiple-bit signals in circuit diagrams.

Figure 4.38 shows a serial transfer scheme between A and B, two 4-bit shift registers that shift right in response to the shift clock. As seen from the timing diagram of Figure 4.38(b), we need four clock pulses to complete the transfer, and the control signal X must stay at 1 for all four clock pulses.

The data processing done in the processing unit of a computer is accomplished by one or more register transfer operations. Often, the data in one register must be transferred to several other registers, or a register must receive inputs from one or more other registers. Figure 4.39 shows two schemes for transferring the contents of either register A or register B into register C. When one control signal is active, the contents of A move into C; when the other is active, the contents of B move into C. Only one control signal may be active at a time; this is accomplished by using the true and complement of the same control signal to select one of the two transfer paths. Note that it takes at least two gate delay times after the activation of the control signal for the data from either A or B to reach the inputs of C; the control signal must stay active through the occurrence of the rising edge of the clock pulse that gates the data into C. Figure 4.39(b) shows the use of a 4-line 2-to-1 multiplexer to accomplish the register transfer required in Figure 4.39(a).
4.8 REGISTER TRANSFER SCHEMES

When data must be transferred between several registers to complete a processing sequence in a digital computer, one of two transfer schemes is generally used: (1) point-to-point or (2) bus. In the point-to-point scheme, there is one transfer path for each pair of registers involved in a data transfer; in the bus scheme, one common path is time-shared by all the register transfers.

4.8.1 Point-to-Point Transfer

The hardware required for point-to-point transfer between three 3-bit registers A, B, and C is shown in Figure 4.40. Only the paths A to C and B to C are shown; control signals are used to bring about the data transfers. This scheme allows more than one transfer to be made at a time, as long as parallel, independent data paths are available: for example, the two control signals shown can be enabled at the same time. The disadvantage of the scheme is that the amount of hardware required for the transfers increases rapidly as additional registers are included, since each new register must be connected to all the other registers through new data paths. This growth makes the scheme expensive; hence, the point-to-point scheme is used only when fast, parallel operation is desired.

4.8.2 Bus Transfer

Figure 4.41(a) shows a bus scheme for transferring data between three 3-bit registers. The bus is a common data path (a highway); each register either feeds data into the bus (i.e., the contents of the register are put on the bus) or takes data from the bus (i.e., the register is loaded from the bus). At any time, only one register may be putting data on the bus. The scheme requires that the bits in each position of the registers be ORed and connected to the corresponding bit line of the bus. Figure 4.41(b) shows the typical timing of a transfer from A to C: the A-to-bus and bus-to-C control signals are brought to 1 simultaneously for the transfer to take place. Several registers can receive data from the bus simultaneously, but only one register can put data on the bus at a time. Thus, the bus transfer scheme is slower than the point-to-point scheme, but its hardware requirements are considerably less. Furthermore, additional registers can be added to the bus structure by adding only two paths per register: one from the bus to the register, and one from the register to the bus. For these reasons, bus transfer is the most commonly used data transfer scheme.

In practice, a large number of registers are connected to a bus. This requires the use of gates with many inputs to form the bus interconnection. Two special types of outputs available on certain gates permit an easier realization of this function: the open-collector output and the tristate output. Figure 4.42(a) shows 1 bit of a bus built using open-collector gates, in which the outputs of these special gates are tied together to provide the OR function. A more commonly used device is the tristate gate, shown in Figure 4.42(b): when the gate is enabled (ENABLE = 1), the output is a function of the input; when it is disabled (ENABLE = 0), the output is electrically nonexistent. The scheme shown in Figure 4.42(a) can also be realized using tristate buffers. Figure 4.43 shows a complete bus structure for transfers between four 4-bit registers A, B, C, and D. The source register is connected to the bus by enabling the appropriate tristate drivers, selected by the outputs of the source-control 2-to-4 decoder; the destination register is selected by the outputs of the destination-control 2-to-4 decoder. Note that a 4-line 4-to-1 multiplexer could also have been used to form the connections between the registers and the bus.
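The bus structure of Figure 4.43 can be modeled in a few lines (a behavioral sketch only; the register names and helper are illustrative, and the single `bus` variable plays the role of the shared path driven by exactly one source):

```python
# Sketch of a shared-bus transfer: one source drives the bus per step,
# while one or more destinations load from it at the clock.

regs = {"A": [1, 0, 1, 1], "B": [0] * 4, "C": [0] * 4, "D": [0] * 4}

def bus_transfer(regs, source, dests):
    """Drive the bus from `source` and load every register in `dests`."""
    bus = regs[source][:]      # only one register may drive the bus
    for d in dests:            # several registers may receive simultaneously
        regs[d] = bus[:]

bus_transfer(regs, "A", ["C", "D"])   # A -> C and A -> D in one bus cycle
```

Two different sources would need two bus cycles, whereas the point-to-point scheme of Figure 4.40 could perform such transfers concurrently; that is exactly the speed/hardware trade-off described above.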
4.9 REGISTER TRANSFER LANGUAGES

Since the register transfer is the basic operation in a digital computer, several register transfer notations have evolved over the past decades. These notations are complete enough to describe a digital computer at the register transfer level and have come to be known as register transfer languages. Since they are used to describe the hardware structure and behavior of digital systems, they are more generally known as hardware description languages (HDLs). Two widely used HDLs are VHDL (VHSIC, i.e., Very High Speed IC, Hardware Description Language) and Verilog; the references listed at the end of the chapter provide details of these languages. For the purposes of this book, a relatively simple HDL is used. Tables 4.1 and 4.2 show the basic operators and constructs of this HDL. The general format of a register transfer is

destination <- source,

with a control signal associated with the transfer. A conditional register transfer has the format

control: IF condition THEN transfer1 ELSE transfer2.

Figure 4.44 illustrates the features of the HDL. For instance, the transfers of Figure 4.39 are described by the statement

control: C <- A ELSE C <- B.

4.10 DESIGNING SEQUENTIAL CIRCUITS WITH INTEGRATED CIRCUITS

The Appendix shows small- and medium-scale ICs from the transistor-transistor logic (TTL) family; the reader is referred to IC manufacturers' catalogs for details of these ICs. A sequential circuit can be designed by following the classical design procedure described in this chapter; in the final step of the design, the circuit components (flip-flops, registers, etc.) are selected by referring to manufacturers' catalogs. It is often possible to design sequential circuits without following the classical design procedure; the serial adder design (see Example 4.7) is one example. When the number of states in a practical circuit becomes large, the classical design procedure becomes impractical. The circuit functions are then usually partitioned, and each partition is separately designed. Ad hoc design methods, based on familiarity with available ICs, may be used in designing each partition and the complete circuit. Example 4.9 illustrates such a design process using ICs.

Example 4.9 Parallel-to-serial data converter. The object is to design a parallel-to-serial data converter that accepts 4-bit data in parallel and produces as output a serial bit stream of that data. The input consists of a sign bit A0 and three magnitude bits A1 A2 A3. The serial device expects to receive the sign bit A0 first, followed by the three magnitude bits in the order A3, A2, A1, as shown in Figure 4.45(a). Note that the output bit pattern can be obtained by circulating the input data right three times, shifting right 1 bit at a time. To perform this, a 4-bit shift register that can be loaded in parallel and shifted right is required; the TTL 7495 is one such circuit. From the 7495 circuit diagram it can be deduced that the MODE input must be 1 for the parallel-load operation and 0 for the serial-input (right-shift) mode, and that the D output must be connected to the serial input line for circulating the data. Figure 4.45(b) shows the details of the circuit operation. The complete operation needs eight steps, designated 0 through 7, with two idle steps (8 and 9) as shown. Since a decade counter, the 7490, is available to count 0 through 9, it is used in the circuit shown in Figure 4.45(c). The 4-bit output of the 7490 decade counter is decoded using a 7442 BCD-to-decimal decoder. Since the outputs of the 7442 are low-active, output 0 has the value 0 during time step 0 and the value 1 at all other times; hence it is used as the mode control signal for the 7495. The clock inputs CP1 and CP2 of the 7495 must be tied together so that the circuit receives the clock in both modes. Output 3 of the 7442 is used to alert the serial device for data acceptance starting at the next clock edge, and output 8 indicates the idle state.

This example illustrates a simple design using ICs. In practice, the timing problems are more severe: the triggering of flip-flops, data setup times, and clock skews (i.e., the arrival of the clock on parallel lines at slightly different times, due to differences in path delays) of the timing elements must be considered in detail.

4.11 PROGRAMMABLE LOGIC

The logic implementations discussed so far have required the interconnection of selected SSI, MSI, and LSI components on printed circuit boards (PCBs). In current hardware technology, the cost of the PCB, connectors, and wiring is about four times that of the ICs in the circuit; yet this implementation mode is cost effective for circuits built in small quantities. The progress in IC technology leading to the current VLSI era has made it possible to fabricate a complex digital
system chip implementation cost effective large quantities circuit system would needed since ic fabrication costly process manage costs exploiting capabilities technology three implementation approaches currently employed based quantities circuits needed custom semicustom programmable logic custom implementations cir cuits needed large quantities semicustom medium quantities program mable logic mode used small quantities section rst provide brief description ic fabrication process examine relative merits approaches provide details programmable logic design figur e 446 show ste ps typical ic man ufacturing proce ss th e process starts thin 10 mil thick 1 mil 11000 inch slice ptype semiconductor material 25 inches diameter called wafer hundreds identical circuits fabricated wafer using multistep process wafer cut individual dies die corresponding ic fabrication process consists following steps 1 wafer surface preparation 2 epitaxial growth 3 diffusion 4 metallization 5 probe test 6 scribe break 7 packaging 8 final test circuit designer rst develops circuit diagram fabrication circuit must transistor diode resistor level detail today however circuit designer need deal level detail since computeraided design cad tools translate gatelevel design circuit level circuit designed usually simulated verify functional timing loading characteristics characteristics acceptable circuit brought placement routing stage placement process placing circuit components appropriate positions interconnections routed properly automatic placement routing tools available complex circuit step time consuming one even cad tools end step circuit layout reects structure silicon wafer made highresistivity singlecrystal ptype silicon material wafer rst cleaned sides polished one side nished mirror smooth epitaxial diffusion process adding minute amounts n ptype impurities dopants achieve desired low resistivity epitaxial layer forms collector transistors layer covered layer silicon dioxide formed 
exposing the wafer to an oxygen atmosphere at around 1000°C. A critical part of the fabrication process is the preparation of the masks that transfer the circuit layout to silicon. Mask plates are first drawn according to the circuit layout and then reduced by photographic techniques to the size of the final chip. The masks are placed on the wafer, one mask position per die, over a photoresist coating, and the wafer is exposed to ultraviolet light; the unexposed surface is etched away chemically, leaving the desired pattern on the wafer surface. This pattern is then subjected to diffusion. Although the steps vary depending on the process technology used to manufacture the IC, diffusion is classified into isolation, base, and emitter diffusion stages, and in these stages the corresponding terminals of the transistors are fabricated. Each diffusion stage corresponds to one of the masks and an etch operation. The wafer contains several identical dies, and the circuit components are formed on each die. The wafer is then subjected to photo-etching to open windows that provide connections to the components. The interconnections are made (i.e., metallization) by vacuum deposition of a thin film of aluminum over the entire wafer, followed by another mask-and-etch operation to remove the unnecessary interconnections. Some dies on the wafer may be defective as a result of imperfections in the wafer or in the fabrication process, so selected dies are tested to mark the failing ones; the percentage of good dies obtained is called the yield of the fabrication process. The wafer is then scribed with a diamond-tipped tool along the boundaries of the dice to separate the individual chips. Each die is tested, mounted on a leader, provided with pins, and packaged. The circuit layout and the mask preparation are the most time-consuming and error-prone stages of the fabrication process and hence contribute most to the design cost of the circuit. For a complex circuit, the layout requires thousands of labor hours even with the use of CAD tools.", "url": "RV32ISPEC.pdf#segment163", "timestamp": "2023-10-17 20:15:28", "segment": "segment163", "image_urls": [], "Book": "computerorganization" }, { "section": "4.11.1 Circuit Implementation Modes and Devices ", "content": "In the custom-design mode of circuit implementation, all the steps of the IC fabrication process are unique to the application at hand. Although this mode offers the smallest chip size and the highest speed, its design costs cannot be justified for low-volume applications: typically, the annual sales volume of an IC must be around 5-10 times the nonrecurring engineering (NRE) costs of its design, which puts the custom-design mode
beyond the reach of most applications. In the semicustom-design mode, the initial steps of the fabrication process remain standard across applications; only the last step, metallization, is unique to each application and is accomplished using fixed arrays of gates predesigned on the chip real estate. The gates are interconnected to form an application-specific IC (ASIC). The NRE cost of such ICs is an order of magnitude smaller than in the custom-design mode, thus making them cost effective for low-volume applications. Because standard gate patterns are used, simpler design rules can be employed; however, these ICs do not use the chip area as efficiently, and their speeds are lower than those of custom-designed ICs. The ICs used in the semicustom-design mode are mask programmable: to make the ICs application specific, the user supplies the interconnection pattern (i.e., the program), and the IC manufacturer in turn prepares the masks and programs the IC. Obviously, once programmed, the function of these ICs cannot be altered. In the programmable-design mode, ICs known as programmable logic devices (PLDs) are used. In several types of PLDs, a pattern of gates and interconnection paths is prefabricated on the chip, and the ICs are programmed by the user with special programming equipment called PLD programmers. Field-programmable PLDs allow erasing the program and reprogramming. In the early development stages of a digital system, field-programmable devices are typically used to allow flexibility for design alterations; once the design is tested and the circuit performance is deemed acceptable, mask-programmable ICs can be used to implement the system. PLDs are available off the shelf and involve no fabrication expenses: designers simply program the IC to create an ASIC. PLDs typically replace several components of a typical SSI/MSI-based design, thereby reducing cost through a reduced number of external interconnections, PCBs, and connectors. Four popular PLDs in use today are (1) read-only memory (ROM), (2) programmable logic array (PLA), (3) programmable array logic (PAL), and (4) gate arrays (GA). The first three types are based on a two-level AND-OR circuit structure, while the last uses a more general gate structure to implement circuits. These devices are available in both field- and mask-programmable versions. We use the circuit of Figure 4.47 to illustrate the difference between the ROM, PLA, and PAL structures. Example 4.10: Consider the two-level implementation of the functions f1 = ab + a'b' and f2 = ab' + a'b shown in Figure 4.47. The first (AND) level is implemented by an array in which each column corresponds to an input variable or its complement and each row
corresponds to a minterm of the input variables. The second (OR) level is implemented by an array in which each column corresponds to a combination of selected minterms generated by the AND array. For the purposes of this example, assume that each row-column intersection of the arrays can be programmed: the electronic devices at the intersections are used either to connect or to disconnect the row line and the column line, and a connected intersection realizes either the AND or the OR operation, depending on the array in which the intersection is located. A dot at an intersection indicates that the two lines are connected. In a ROM, the AND array is fully fabricated so that every minterm of the input variables is available; hence only the OR array is programmable to realize the circuit. In a PAL, the OR array is completely fabricated and only the AND array is programmable. In a PLA, both arrays are programmable. We defer the description of ROMs to Chapters 5 and 9 and provide brief descriptions of the other three PLDs in the following sections.", "url": "RV32ISPEC.pdf#segment164", "timestamp": "2023-10-17 20:15:29", "segment": "segment164", "image_urls": [], "Book": "computerorganization" }, { "section": "4.11.2 Programmable Logic Arrays ", "content": "PLAs are LSI devices with several inputs and outputs, an AND array, and an OR array; the sum-of-products (SOP) forms of the functions to be implemented are used in designing with PLAs. Connecting the appropriate inputs and their complements in the AND array yields the product terms, and the product terms are combined in the OR array to yield the outputs. These operations are realized using wired logic rather than discrete gates. Since only the required sum of product terms is generated, a PLA implementation is economical. Example 4.11 illustrates the structure and operation of PLAs. Example 4.11: It is required to implement the following functions using a PLA: f1(a, b, c) = Σm(2, 3, 7) and f2(a, b, c) = Σm(1, 3, 7). The K-maps of Figure 4.48(a) minimize the functions. Note that the minimization procedure should reduce the number of product terms rather than the number of literals, in order to simplify the PLA; three product terms are needed to implement this two-output function, since bc is common to both outputs. The modified truth table is shown in Figure 4.48(b). The first column lists the product terms. The second column lists the circuit inputs for each product term, coded as 0, 1, or -: a 0 indicates that the variable appears complemented, a 1 indicates that it appears uncomplemented, and a - indicates that the variable does not appear in the product term; this column thus provides the programming information for the AND array.
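The two-output PLA of Example 4.11 can be verified in software. The sketch below is purely a checking aid (the dictionary encoding and names are our own assumptions): it encodes the three product terms a'b, bc, and a'c in the 0/1/- notation just described, wires the OR array with bc shared, and checks the outputs against the minterm lists.

```python
from itertools import product

# Product terms coded as in the modified truth table:
# 0 = complemented, 1 = uncomplemented, None = variable absent ('-').
and_array = {"a'b": (0, 1, None), "bc": (None, 1, 1), "a'c": (0, None, 1)}
or_array = {"f1": ["a'b", "bc"], "f2": ["a'c", "bc"]}   # bc is shared

def pla(a, b, c):
    """Evaluate the AND array, then OR the selected product terms."""
    term = {name: all(v == x for v, x in zip(code, (a, b, c)) if v is not None)
            for name, code in and_array.items()}
    return {f: any(term[t] for t in terms) for f, terms in or_array.items()}

# Check against f1 = Sum m(2,3,7) and f2 = Sum m(1,3,7).
for a, b, c in product((0, 1), repeat=3):
    m = a * 4 + b * 2 + c
    out = pla(a, b, c)
    assert out["f1"] == (m in (2, 3, 7)) and out["f2"] == (m in (1, 3, 7))
```

The exhaustive check over all eight input combinations confirms that three product terms suffice because bc serves both outputs, which is exactly the economy the text attributes to PLAs.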
For the AND array, the PLA has six input lines: a, a', b, b', c, c'. Each input to an AND gate of the array of Figure 4.48(c) has a fusible link; thus a 0 implies that the link corresponding to the complemented variable is retained, a 1 implies that the link corresponding to the true variable is retained, and a - implies blowing both links. This retaining or blowing of links is the programming of the AND array. The last column of Figure 4.48(b) shows the circuit outputs: a 1 in a column of a row indicates that the product term corresponding to that row appears in the output corresponding to that column, while a - indicates no connection. This column is thus used in programming the OR array shown in Figure 4.48(c). In Figure 4.48(c), the gates shown dotted indicate wired logic. Each AND gate has six inputs receiving the three circuit inputs and their complements, with the required connections (indicated by the second column of Figure 4.48(b)) made; one AND gate is required per product term. Similarly, the OR gates are shown with three inputs, one per product term, and the product terms required by each circuit output (indicated by the third column of Figure 4.48(b)) are connected. The actual PLA structure is shown in Figure 4.48(d). Each dot in the figure corresponds to a connection made by a switching device such as a diode or a transistor, and the absence of a dot at a row-column intersection indicates no connection; the arrow points to the connection detail at the bottom of the figure. If the diode is considered in its positive direction, one can verify that the structure implements the required logic. Two types of PLAs are available: mask- and field-programmable. The PLA program provided by the designer is used by the IC manufacturer to fabricate a mask-programmed PLA, which can never be altered once programmed. Special devices are used to program a field-programmable PLA (FPLA); in this type of PLA, switching devices are fabricated at each row-column intersection, and the connections are established by blowing fusible links. PLAs with 12-20 inputs, 20-50 product terms, and 6-12 outputs are available off the shelf. In addition, programmable logic sequencers (PLSs), which are PLAs with storage capabilities useful in sequential circuit implementations, are also available. In recent PLAs, the AND-OR structure has been augmented with additional logic capabilities. Several CAD tools are available to aid in designing with PLAs; these tools generate PLA programs, optimize the PLA layout for custom fabrication, and simulate PLA designs.", "url": "RV32ISPEC.pdf#segment165", "timestamp": "2023-10-17 20:15:29", "segment": "segment165", "image_urls": [], "Book": "computerorganization" }, { "section": "4.11.3 
Programmable Array Logic ", "content": "pal array programmable array xed pal thus easier program less expensive pla however pal less exible pla terms circuit design since orarray conguration xed figure 449 shows pal14h4 early pal ic monolithic memories incor porated invented pal devices mmi founded 1969 eventually acquired advanced micro devices amd 1987 though amd later spun programmable logic division vantis acquired lattice semiconductor ic 14 inputs pin numbers 1 9 11 12 13 18 19 4 outputs pin numbers 14 17 input buffered buffer produces true complemented values input device layout consi sts 63 row 32 colu mn array colu mn corr espond input com plemen t since 14 inputs 28 32 colu mns utilize d row correspo nds produc term outpu ic realized ori ng four produc terms produc terms realized b wired logic gates show n inputs f gates thus symbolic repr esentatio ns nly 16 64 row used devi ce figur e 450 shows symb ology used pals figur e 450a product term abc reali zed retai ning fusibl e links rowcolu mn intersectio ns array show n x absence x inters ection indicat es blow n li nk hence connec tion realiza tion functi ab0 a0 b show n figur e 450b unprog ramm ed pals fuses intact fu ses blow n duri ng progr amming pal reali ze requi red functi shorthan nota tion indicate fuses alon g row intact shown figur e 450c th simply impli es particula r row uti lized sample imp lementati usin g conven tion shown figure 450d realizi ng multipleo utput circui ts using pla common product terms shared among various outpu ts pal since arr ay progr ammed sharing possibl e outpu function must min imized separat ely befor e imple mentati since number input gates output pal xed altern ate ways realizi ng circuit may need explored number produc terms minim ized function excee ds numb er inputs available example 412 shows circuit implementation using pal example 412 implement following function using pal f1 b c m 015710111315 f2 b c m 024510111315 f3 b c m 02371011121314 figur e 451a show 
K-maps and the corresponding simplified functions, and Figure 4.51(b) shows the implementation using the PAL14H4. The implementation of f1 and f2 presents no problem, since the number of product terms in each of these functions is less than or equal to four. Since f3 has five product terms, one of the PAL outputs (z) is used to realize the first four product terms; z is then fed back as a PAL input and combined with the remaining product term of f3 to realize f3. Obviously, this implementation of a simple circuit using the PAL14H4 is not economical, since it does not use most of the available inputs and product terms; nevertheless, the example illustrates the design procedure. Several PAL ICs are available, with 10-20 inputs and 1-10 outputs, in various configurations of 2-16 inputs per output OR gate. Typically, PALs are programmed with PROM programmers that use a PAL personality card. During programming, one half of the PAL outputs at a time are selected for programming while the inputs and the remaining outputs are used for addressing, the outputs being switched between the programmed and unprogrammed locations. One of the early PAL design aids was PALASM, a software package from Monolithic Memories Inc. PALASM accepts a PAL design specification, verifies the design against an optional function table, and generates the fuse plot required to program the PAL. Altera CPLDs: This section is extracted from the Altera MAX 7000 series data sheets (http://www.altera.com/literature/ds/m7000.pdf). Altera has developed three families of chips that fit within the CPLD category: MAX 5000, MAX 7000, and MAX 9000. The MAX 7000 series represents the most widely used technology and offers state-of-the-art logic capacity and speed performance. MAX 5000 represents an older technology that offers a cost-effective solution, and MAX 9000 is similar to MAX 7000 except that it offers higher logic capacity. The general architecture of the Altera MAX 7000 series is depicted in Figure 4.52. It comprises an array of blocks called logic array blocks (LABs) and interconnect wires called the programmable interconnect array (PIA). The PIA is capable of connecting any LAB input or output to any other LAB; the inputs and outputs of the chip also connect directly to the PIA and to the LABs. Each LAB is a complex SPLD-like structure, so the entire chip can be considered an array of SPLDs. MAX 7000 devices are available based on both EPROM and EEPROM technology; until recently, even with EEPROM, a MAX 7000 chip could be programmed only out of circuit with a special-purpose programming unit. However, in 1996 Altera released the
7000S series, which is reprogrammable in-circuit. The structure of a LAB is shown in Figure", "url": "RV32ISPEC.pdf#segment166", "timestamp": "2023-10-17 20:15:29", "segment": "segment166", "image_urls": [], "Book": "computerorganization" }, { "section": "4.53. ", "content": "Each LAB consists of two sets of eight macrocells, as shown in Figure", "url": "RV32ISPEC.pdf#segment167", "timestamp": "2023-10-17 20:15:29", "segment": "segment167", "image_urls": [], "Book": "computerorganization" }, { "section": "4.54), ", "content": "Each macrocell comprises a set of programmable product terms (a part of the AND plane) that feeds an OR gate and a flip-flop. The flip-flops can be configured as D type, JK, SR, or transparent. The number of inputs to the OR gate in a macrocell is variable: the OR gate is fed by 5 product terms within the macrocell, and in addition up to 15 extra product terms are available from the other macrocells in the same LAB. This product-term flexibility makes the MAX 7000 series LAB efficient in terms of chip area, because typical logic functions need no more than five product terms, while the architecture still supports the wider functions when they are needed. It is interesting to note that variable-sized OR gates of this sort are not available in basic SPLDs. Besides Altera, several other companies produce devices categorized as CPLDs; for example, AMD manufactures the MACH family, Lattice has the pLSI series, and Xilinx produces a CPLD series called XC7000 and has announced a new family called XC9500.", "url": "RV32ISPEC.pdf#segment168", "timestamp": "2023-10-17 20:15:29", "segment": "segment168", "image_urls": [], "Book": "computerorganization" }, { "section": "4.11.4 Gate Arrays ", "content": "Gate arrays (GAs) are LSI devices consisting of an array of gates fabricated on the chip, along with wire-routing channels that facilitate the interconnection of the gates. GAs originated in the late 1970s as replacements for SSI/MSI-based circuits built on PCBs. Figure 4.55 shows the structure of a typical GA. The shaded rectangular area of the chip is the array of gates with channels (wire-routing paths) between them, and on the periphery of the chip are the I/O pads. Using this device, the designer specifies the interconnections within a rectangular area (i.e., a cell) to form a function equivalent to an SSI or MSI function; the intercell interconnections are then generated using PCB routing software aids. The disadvantage of this structure is increased propagation delay as a result of the long path lengths, together with the increased
chip area needed to fabricate a given circuit compared with a custom-designed chip for the same circuit or system. In order to overcome the slow speeds caused by the long path lengths and the decreased density resulting from the large areas dedicated to routing, devices evolved that allowed interconnection over the whole GA area rather than in dedicated channels. Figure 4.56 shows one such GA, the Signetics 8A1260. The device uses integrated Schottky logic (ISL) NAND gates arranged in 2 arrays of 26 rows and 22 columns, 52 Schottky buffers for driving multiload enable signals, and 60 LSTTL I/O buffers that can be programmed as inputs, bidirectional paths, totem-pole, tristate, or open-collector outputs. Using a combination of appropriately configured NAND gates, any function can be realized; in fact, the SSI and MSI functions in TTL manuals can be copied, and unnecessary functionalities (such as the multiple chip enables provided in TTL ICs) can be eliminated in the copying to make the circuits more efficient. In designing with GAs, the designer (user) interacts with the IC manufacturer (supplier) extensively through CAD tools (see Figure 4.57). The user generates the logic circuit description and verifies the design by logic and timing simulation. A set of tests to uncover faults is also generated by the user and provided to the supplier along with the design specifications (e.g., wire lists and schematics). The supplier performs automatic placement and routing, mask generation, and IC prototype fabrication. If the performance of the prototype is accepted by the user, the production run of the ICs is initiated. It is no longer necessary to design at the gate level of detail when designing with GAs, since GA manufacturers provide a set of standard cells (functions) as part of their CAD environment. The standard cells can be utilized in configuring the GA much as SSI/MSI components are used in designing a system. The standard cells in the library correspond to gates of various types and configurations as well as MSI-type cells such as multiplexers, counters, adders, and arithmetic logic units. As seen from this discussion, GAs offer much more design flexibility than the other three PLDs; their disadvantage is the turnaround time of several weeks from design to prototype, although GA manufacturers have started offering 1-2 day turnarounds. The disadvantage of the other three PLDs is their design inflexibility, since only two-level AND-OR realizations are possible. Several devices combine the flexibility of the GA with the programmability offered by the other devices. These devices are essentially PLDs with enhanced architectures, some
departing completely from the AND-OR structure and some enhancing the AND-OR structure with additional programmable logic blocks (macrocells); collectively, they are called field-programmable gate arrays (FPGAs). There are two basic categories of FPGAs on the market today: SRAM-based and antifuse-based FPGAs. Xilinx and Altera are the leading manufacturers, in terms of number of users, of SRAM-based FPGAs, while Actel, Quicklogic, Cypress, and Xilinx offer antifuse-based FPGAs. We provide here a brief description of one FPGA, the Xilinx SRAM-based FPGA; this section is extracted from the Xilinx SRAM-based FPGA data sheet (http://www.xilinx.com/appnotes/fpga_nsrec98.pdf). The basic structure of Xilinx FPGAs is array-based: each chip comprises a two-dimensional array of logic blocks interconnected via horizontal and vertical routing channels. Xilinx introduced the first FPGA family, the XC2000 series, in 1985 and now offers three more generations: XC3000, XC4000, and XC5000. Xilinx has recently introduced an FPGA family based on antifuses, the XC8100. The Xilinx 4000 family devices range in capacity from 2,000 to more than 15,000 equivalent gates. The XC4000 features a configurable logic block (CLB) based on lookup tables (LUTs). A LUT is a small 1-bit-wide memory array in which the address lines of the memory are the inputs of the logic block and the 1-bit memory output is the LUT output. A LUT with k inputs thus corresponds to a 2^k x 1-bit memory, and it can realize any logic function of its k inputs by programming the truth table of the function directly into the memory. The XC4000 CLB contains three separate LUTs, in the configuration shown in Figure 4.58: two 4-input LUTs are fed by the CLB inputs, and the third LUT is used in combination with the other two. This arrangement allows the CLB to implement a wide range of logic functions: functions of up to nine inputs, two separate functions of four inputs, and other possibilities. The CLB also contains two flip-flops. Toward the goal of providing high-density devices that support the integration of entire systems, the XC4000 chips have system-oriented features. For instance, each CLB contains circuitry that allows it to perform arithmetic efficiently (i.e., a circuit that implements a fast-carry operation for adder-like circuits). Also, the LUTs in a CLB can be configured as read/write RAM cells. A new version of the family, the 4000E, has the additional feature that this RAM can be configured as a dual-port RAM with a single write port and two read ports; in the 4000E, the RAM blocks are synchronous RAMs.
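The LUT idea above is compact enough to express directly in code. In this sketch (the class name and the MSB-first bit ordering are our own assumptions), a k-input LUT is modeled as a 2^k x 1-bit memory addressed by the logic inputs, and "programming" it means storing the truth table; here it is programmed as the carry-out of a full adder, one of the arithmetic functions the text mentions.

```python
class LUT:
    """A k-input lookup table: a 2**k x 1-bit memory whose address lines
    are the logic inputs. Programming the LUT = storing the truth table."""
    def __init__(self, k, truth_table):
        assert len(truth_table) == 2 ** k
        self.mem = list(truth_table)

    def __call__(self, *inputs):
        addr = 0
        for bit in inputs:           # the inputs form the memory address
            addr = (addr << 1) | bit
        return self.mem[addr]

# Program a 3-input LUT with the truth table of a full adder's carry-out
# (inputs a, b, cin in that order; table indexed by the minterm number).
carry = LUT(3, [0, 0, 0, 1, 0, 1, 1, 1])
assert carry(1, 0, 1) == 1 and carry(0, 1, 0) == 0
```

Because the memory stores the entire truth table, any function of the k inputs is realizable, which is why LUT-based CLBs are so flexible.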
The XC4000 chips also include wide AND planes around the periphery of the logic block array to facilitate implementing circuit blocks such as wide decoders. The XC4000 interconnect is arranged in horizontal and vertical channels. Each channel contains a number of short wire segments that span a single CLB, longer segments that span two CLBs, and long segments that span the entire length or width of the chip. Programmable switches are available to connect the inputs and outputs of the CLBs to the wire segments and to connect one wire segment to another. A small section of routing channel representative of an XC4000 device is shown in Figure", "url": "RV32ISPEC.pdf#segment169", "timestamp": "2023-10-17 20:15:30", "segment": "segment169", "image_urls": [], "Book": "computerorganization" }, { "section": "4.59. ", "content": "The figure shows only the wire segments in a horizontal channel; it does not show the vertical routing channels, the CLB inputs and outputs, or the routing switches. Signals must pass through switches to reach from one CLB to another, and the total number of switches traversed depends on the particular set of wire segments used. Thus, the speed performance of an implemented circuit depends in part on the wire segments allocated to the individual signals by the CAD tools. As noted earlier, circuits designed using PLDs are in general slower than custom-designed ICs and do not use the silicon area of the chip as efficiently; hence they tend to be less dense than their custom-designed counterparts. Nevertheless, such designs are more cost-effective for low-volume applications. As IC technology progresses, newer PLDs are continually introduced by IC manufacturers, and the architectures and characteristics of these devices also change at a rapid rate. With this rate of introduction of new devices, any description of the devices becomes obsolete by the time a book is published; it is imperative that the designer consult the manufacturers' manuals for the most up-to-date information. The periodicals listed in the references section can also be consulted for new IC announcements.", "url": "RV32ISPEC.pdf#segment170", "timestamp": "2023-10-17 20:15:30", "segment": "segment170", "image_urls": [], "Book": "computerorganization" }, { "section": "4.12 ", "content": "Summary. The analysis and design of synchronous sequential circuits were described in this chapter. We have given only an overview of the subject; the reader is referred to the logic design texts listed at the end of the chapter for further details on these
topics and on asynchronous circuit analysis and design. IC manufacturer catalogs are important sources of information for logic designers, although the detailed electrical characteristics given in such catalogs are not required for the purposes of this book. The register-transfer logic concepts described in this chapter are used extensively in Chapters 5 and 6 in the logical design of a simple computer. Chapter 13, on embedded systems, expands on the programmable logic design concepts introduced in this chapter. References: Altera MAX 7000 Series. http://www.altera.com/literature/ds/m7000.pdf. Blakeslee, T. R. Digital Design with Standard MSI and LSI. New York, NY: John Wiley, 1975. Composite Cell Logic Data Sheets. Sunnyvale, CA: Signetics, 1981. Computer. New York, NY: IEEE Computer Society (monthly). Davis, G. R. \"ISL Gate Arrays Operate at Low Power Schottky TTL Speeds.\" Computer Design 20 (August 1981): 183-186. FAST TTL Logic Series Data Handbook. Sunnyvale, CA: Philips Semiconductors, 1992. Greenfield, J. D. Practical Design Using ICs. New York, NY: John Wiley, 1983. Kohavi, Z. Switching and Finite Automata Theory. New York, NY: McGraw-Hill, 1978. McCluskey, E. J. Introduction to the Theory of Switching Circuits. New York, NY: McGraw-Hill, 1965. Nashelsky, L. Introduction to Digital Technology. New York, NY: John Wiley, 1983. National Advanced Bipolar Logic Databook. Santa Clara, CA: National Semiconductor Corporation, 1995. Perry, D. L. VHDL. New York, NY: McGraw-Hill, 1991. PAL Device Data Book. Sunnyvale, CA: AMD/MMI, 1988. Programmable Logic Data Manual. Sunnyvale, CA: Signetics, 1986. Programmable Logic Data Book. San Jose, CA: Cypress, 1996. Programmable Logic Devices Data Handbook. Sunnyvale, CA: Philips Semiconductors, 1992. Shiva, S. G. Introduction to Logic Design. New York, NY: Marcel Dekker, 1998. Technical Staff, Monolithic Memories. Designing with Programmable Array Logic. New York, NY: McGraw-Hill, 1981. Thomas, D. E., and P. R. Moorby. The Verilog Hardware Description Language. Boston, MA: Kluwer, 1998. TTL Data Manual. Sunnyvale, CA: Signetics, 1987. Xilinx SRAM-Based FPGAs. http://www.xilinx.com/appnotes/fpga_nsrec98.pdf.", "url": "RV32ISPEC.pdf#segment171", "timestamp": "2023-10-17 20:15:30", "segment": "segment171", "image_urls": [], "Book": 
"computerorganization" }, { "section": "CHAPTER 5 ", "content": "simple computer organization programming purpos e chapter introdu ce ter minology basic functi ons simple complete com puter mainl program mer user point view call simple hypot hetical compu ter asc simple computer although asc appea rs primitive com parison com mercial ly availa ble machin e organ ization reec ts basi c stru cture mos com plex mode rn compute r instruction set limit ed complete enough write powerf ul program s assem bly language programmi ng understand ing assembly pro cess must system designer outline tradeoff involved selecting arch itectura l featur es machine chapter subsequ ent chapter book however deal trad eoffs th e deta iled hardwar e esign asc provided chapter 6 chapters 7 throu gh 15 examin e selected arch itectura l attribut es com mercial ly availa ble compute r system", "url": "RV32ISPEC.pdf#segment172", "timestamp": "2023-10-17 20:15:30", "segment": "segment172", "image_urls": [], "Book": "computerorganization" }, { "section": "5.1 A SIMPLE COMPU TER ", "content": "figure 51 show hardware com ponents asc assum e asc 16bit machin e hence unit data manipulat ed transferred various registers machin e 16 bits long call regist er storage device capable holding certain number bits bit correspo nds ipop data bits written loaded register remai n regist er new data loaded long power data regi ster read transferr ed registers read operation change content register figure 52 show model rando maccess memor r used main memor asc type memor system describ ed chapter 9 ram addressable location memory accessed random manner process reading writing location ram consumes equal amount time matter location physical ly memor y two type ram availa ble read write memory rwm readonl memor rom mos com mon type f main memory rwm whose mode l show n figure 52 rw memor register memor location addre ss associa ted data input wri tten int outpu read memor location acce ssing location using address 
The memory address register (MAR) stores that address. With an n-bit MAR, 2^n locations can be addressed, numbered 0 through 2^n - 1. The transfer of data into and out of the memory is usually in terms of a set of bits known as a memory word; the memory has 2^n words of m bits each and thus is a 2^n x m-bit memory. In the common notation used to describe RAMs in general, an n x u memory contains n words of u units each, where the unit is either a bit, a byte (8 bits), or a word of a certain number of bits. The memory buffer register (MBR) is used to store the data to be written into or read from a memory word. To read a memory word, its address is provided in the MAR and the read signal is set to 1; a copy of the contents of the addressed memory word is then brought by the memory logic into the MBR, so the content of the memory word is not altered by a read operation. To write a word into the memory, the data to be written are placed in the MBR by external logic, the address of the location into which the data are to be written is placed in the MAR, and the write signal is set to 1; the memory logic then transfers the MBR content into the addressed memory location, so the content of that memory word is altered by a write operation. A memory word is defined as the most often accessed unit of data. Typical word sizes used in the memory organizations of commercially available machines are 6, 16, 32, 36, and 64 bits. In addition to addressing a memory word, it is possible to address a portion of it (e.g., a half-word or quarter-word) or a multiple of it (e.g., a double word or quad word), depending on the memory organization. In a byte-addressable memory, for example, an address is associated with each byte (8 bits per byte), and a memory word consists of one or more bytes. The literature routinely uses the acronym RAM to mean RWM; we follow this popular practice and use RWM only where the context requires us to be specific. We have included the MAR and the MBR as components of the memory system in our model. In practice, these registers may not be located in the memory subsystem, and other registers in the system may serve their functions. A ROM is also a RAM, except that data are only read from it; data are usually written into a ROM either by the memory manufacturer or by the user in an offline mode, using special devices that can write (burn) the data pattern into the ROM. A ROM is also used as main memory and contains data and programs that are not usually altered in real time during system operation; Chapter 9 provides a description of ROM memory systems. With a 16-bit address, ASC can address 2^16 (i.e., 2^6 x 2^10 = 64 x 1024 = 64K, where K = 2^10 = 1024) memory words. We assume for ASC a memory of 64K 16-bit words. A 16-bit MAR is thus required, and the MBR is also 16 bits long.
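The read and write sequences just described can be summarized in a short sketch. The Python model below mirrors the register set of Figure 5.2 (the class and method names are our own assumptions): the MAR selects the word, the MBR buffers the data, a read is nondestructive, and a write transfers the MBR into the addressed word.

```python
class Memory:
    """Word-addressable RWM: an n-bit MAR selects one of 2**n words;
    the MBR holds the word read out or the word to be written."""
    def __init__(self, n_addr_bits=16, word_bits=16):
        self.words = [0] * (2 ** n_addr_bits)
        self.mask = (1 << word_bits) - 1
        self.mar = 0
        self.mbr = 0

    def read(self):            # READ = 1: addressed word -> MBR (word unchanged)
        self.mbr = self.words[self.mar]

    def write(self):           # WRITE = 1: MBR -> addressed word (word altered)
        self.words[self.mar] = self.mbr & self.mask

mem = Memory()
mem.mar, mem.mbr = 0x0010, 0x1234
mem.write()
mem.mbr = 0
mem.read()
assert mem.mbr == 0x1234       # the read is nondestructive
```

Note that external logic only ever touches the MAR and MBR; the word array itself is reached solely through the read and write signals, exactly as in the model.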
The MAR stores the address of the memory location to be accessed; the MBR receives the data from the memory word during a memory-read operation and retains the data to be written into the memory word during a memory-write operation. These two registers are not normally accessible to the programmer. ASC is a stored-program machine: programs are stored in the memory, and during the execution of a program, each instruction of the stored program is first fetched from the memory into the control unit and the operations called for by the instruction are then performed (i.e., the instruction is executed). Two special registers are used in performing these fetch-execute operations: the program counter (PC) and the instruction register (IR). The PC contains the address of the instruction to be fetched from the memory and is usually incremented by the control unit to point to the next instruction address at the end of an instruction fetch. The instruction is fetched into the IR, and the circuitry connected to the IR decodes the instruction and generates the appropriate control signals to perform the operations called for by the instruction. The PC and the IR are each 16 bits long in ASC. There is a 16-bit accumulator register (ACC) used in all arithmetic and logic operations; as its name implies, it accumulates the result of arithmetic and logic operations. There are three index registers (INDEX 1, 2, and 3) in ASC, used in the manipulation of addresses; we discuss the function of these registers later in this chapter. There is a 5-bit processor status register (PSR) whose bits represent the carry (C), negative (N), zero (Z), overflow (V), and interrupt enable (I) conditions. If, in the operation of the arithmetic-logic unit, the result produces a carry out of the most significant bit of the accumulator, the carry bit is set; the negative and zero bits likewise indicate the status of the accumulator after each operation involving the accumulator. The interrupt-enable flag indicates that the processor can accept an interrupt; interrupts are discussed in Chapter 7. The overflow bit is provided to complete the PSR illustration and is not used in this chapter. A console is needed to permit operator interaction with the machine. The ASC console permits the operator to examine and change the contents of memory locations and to initialize the program counter; power on/off and start/stop controls are also on the console. The console has a set of 16 switches through which a 16-bit data word can be entered into the ASC memory, and 16 monitor lights (the display) show 16-bit data from either a specified memory location or a register. To execute a program, the operator first loads the programs and data into the memory, sets the PC contents to the address of the first instruction of the program, and starts the machine. The concept of a console is probably old-fashioned, since modern machines are designed to use one of the I/O
devices (a terminal) as the system console. A console is, however, necessary during the debugging phase of the computer design process, and we have included one here to simplify the discussion of program loading and execution concepts. During the execution of a program, any additional data input and output are done through an input/output device. For simplicity, we assume one input device that can transfer a 16-bit data word into the ACC and one output device that can transfer the 16-bit content of the ACC to an output medium; these could well be the keyboard and the display of a terminal, respectively. Note that the data in the ACC are not altered by an output operation, while an input operation replaces the original ACC content with the new data.", "url": "RV32ISPEC.pdf#segment173", "timestamp": "2023-10-17 20:15:31", "segment": "segment173", "image_urls": [], "Book": "computerorganization" }, { "section": "5.1.1 Data Format ", "content": "The ASC memory is an array of 64K 16-bit words. A 16-bit word can be either an instruction or a 16-bit unit of data; the exact interpretation depends on the context in which the machine accesses a particular memory word. The programmer should be aware of this, at least at the assembly- and machine-language programming levels, and should keep the data and program segments of the memory separate, making certain that a data word is not accessed during a phase in which the processor is accessing instructions, and vice versa. Only fixed-point (integer) arithmetic is allowed in ASC. Figure 5.3(a) shows the data format: the most significant bit is the sign bit, followed by 15 magnitude bits. Since ASC uses 2s complement representation, the sign and magnitude bits are treated alike in computations. Note also that a four-digit hexadecimal notation is used to represent a 16-bit data word; we use this notation to denote a 16-bit quantity irrespective of whether it is data or an instruction.", "url": "RV32ISPEC.pdf#segment174", "timestamp": "2023-10-17 20:15:31", "segment": "segment174", "image_urls": [], "Book": "computerorganization" }, { "section": "5.1.
2 Instr uction For mat ", "content": "instru ction asc program occupies 16bit word instru ction wor four elds shown figur e 53b bits 15 11 instruc tion word used opera tion code opco de opcode uniqu e bit patter n encodes primitive opera tion compute r perform thus asc total 25 32 instru ctions use instruction set 16 instructions simplic ity book opcode 16 instru ctions occupy bits 15 12 bit 11 set 0 instru ction set expand ed beyond current set 16 opcodes new instructions would 1 bit 11 bit 10 instruction word indirect ag bit set 1 indire ct addre ssing used othe rwise set 0 bits 9 8 instru ction word select one three inde x registers whe n indexed addressi ng cal led instru ction manipulat es index register bits 7 0 used repr esent memor address instru ctions ref er memory instru ction ref er memory indirect inde x memor addre ss elds used opcode eld repr esents complete instruction 8 bits address representation asc directly address 28 256 memory locations means program data must always rst 256 locations memory indexed indirect addressing modes used extend addressing range 64k thus asc direct indirect indexed addressing modes indirect indexed addressing mode elds used addressing mode interpreted either indexedindirect preindexed indirect indirectindexed postindexed indirect assume asc allows indexedindirect mode describe addressing modes description instruction set follows", "url": "RV32ISPEC.pdf#segment175", "timestamp": "2023-10-17 20:15:31", "segment": "segment175", "image_urls": [], "Book": "computerorganization" }, { "section": "5.1.3 Instruction Set ", "content": "table 51 lists com plete instruction set asc colum n 2 show signicant four bits opcode hexadecimal form fth bit 0 shown opcode also identied symbolic name mnemonic shown column 1 shr instruction shifts contents acc 1 bit right signicant bit acc remains unchanged contents last signicant bit position lost oneaddress instructions instructions use 16 bit instruction word following mem symbolic address 
In the following descriptions, mem denotes the symbolic address of an arbitrary memory location; an absolute address is the physical address of a memory location, expressed as a numeric quantity. A symbolic address is mapped to an absolute address when the assembly language program is translated into machine language. Each description assumes the direct addressing mode, in which mem is itself the effective address (the address of the operand); in general, the 8-bit address is modified by the indirect and index operations to generate the effective address of the memory operand. The descriptions of the one-address instructions follow. LDA mem (ACC <- M[mem], load accumulator): LDA loads the ACC with the contents of the memory location mem. The contents of mem are not changed, but the contents of the ACC before the execution of the instruction are replaced by the contents of mem. STA mem (M[mem] <- ACC, store accumulator): STA stores the contents of the ACC into the specified memory location; the ACC contents are not altered. ADD mem (ACC <- ACC + M[mem], add): ADD adds the contents of the memory location mem to the contents of the ACC; the memory contents are not altered. BRU mem (PC <- mem, branch unconditional): BRU transfers program control to the address mem; the next instruction executed is the one at mem. BIP mem (if ACC > 0 then PC <- mem, branch if accumulator positive): the BIP instruction tests the N and Z bits of the PSR, and if both are 0, program execution resumes at the address mem; otherwise, execution continues with the next instruction in sequence. Since the PC must contain the address of the instruction to be executed next, the branching operation corresponds to transferring mem into the PC. BIN mem (if ACC < 0 then PC <- mem, branch if accumulator negative): the BIN instruction tests the N bit of the PSR; if it is 1, program execution resumes at the address mem; otherwise, execution continues with the next instruction in sequence. LDX mem, index (INDEX <- M[mem], load index register): LDX loads the index register specified by index with the contents of the memory location mem. The assembly language instruction format is LDX mem, index, where index = 1, 2, or 3. STX mem, index (M[mem] <- INDEX, store index register): STX stores a copy of the contents of the index register specified by the index flag into the memory location specified by the address; the index register contents remain unchanged. TIX mem, index (INDEX <- INDEX + 1; if INDEX = 0 then PC <- mem, test index and increment): TIX increments the index register content by 1 and then tests it; if the index register content is 0, program execution resumes at the address mem; otherwise, execution continues with the next sequential instruction. TDX mem, index (INDEX <- INDEX - 1; if INDEX != 0 then PC <- mem, test index
index 6 0 pc mem increment tdx decrements index register content 1 next tests index register content equal 0 program execution resumes address specied otherwise execution continues next sequential instruction important note ldx stx tdx tix instructions refer index register one operands indexed mode addressing thus possible instructions since index eld used index register reference direct indirect modes addressing used example lda z 3 adds contents index register 3 z compute effective address ea contents memory location ea loaded acc index register altered ldx z 3 loads index register 3 contents memory location z input output inst ructions since asc one input one outpu devi ce address inde x indire ct elds instruction word used thus thes e also zeroadd ress instru ctions rwd acc input data read word rwd instru ction reads 16bit word input device acc content acc rwd thus lost wwd put acc write word wwd instru ction writes 16bi wor acc onto outpu device acc content remai n u naltered", "url": "RV32ISPEC.pdf#segment176", "timestamp": "2023-10-17 20:15:31", "segment": "segment176", "image_urls": [], "Book": "computerorganization" }, { "section": "5.1.4 Addressin g Modes ", "content": "address ing modes allow ed machine inue nced progr amming languages corr espond ing data struct ures machin e uses asc instruction format allow common addressing mode s various othe r mode used machin es com mercially available described cha pter 8 asc addressi ng modes described reference load accumulator lda instruction z assumed symbolic address memory location 10 fo r mode asse mbly language format shown rst followed instru ction format encode binary ie machin e language effec tive address cal culation effect instru ction als illustrate d note effective address addre ss memory wor whe operand located direct addres sing instructi format lda z 00010 0 00 00001010 effective address z r ha effect acc z effect instru ction illus trated figur e 54a use hexade cimal notation represe nt data 
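The register-transfer descriptions above can be mimicked by a toy interpreter. The tuple encoding, state layout, and function name below are illustrative assumptions, not ASC machine code:

```python
def run(program, memory, acc=0, index=None):
    """Interpret a few ASC-like one-address instructions.

    Each instruction is a (mnemonic, operand, ...) tuple; the semantics
    follow the register-transfer descriptions in the text.
    """
    index = index if index is not None else {1: 0, 2: 0, 3: 0}
    pc = 0
    while True:
        op, *args = program[pc]
        pc += 1
        if op == "LDA":                      # ACC <- (MEM)
            acc = memory[args[0]]
        elif op == "STA":                    # MEM <- (ACC)
            memory[args[0]] = acc
        elif op == "ADD":                    # ACC <- (ACC) + (MEM)
            acc += memory[args[0]]
        elif op == "TDX":                    # (index) <- (index) - 1; branch if != 0
            mem, reg = args
            index[reg] -= 1
            if index[reg] != 0:
                pc = mem
        elif op == "HLT":
            return acc

memory = {8: 25, 9: 17, 10: 0}
run([("LDA", 8), ("ADD", 9), ("STA", 10), ("HLT",)], memory)
print(memory[10])  # 42
```

The example program loads the word at location 8, adds the word at location 9, and stores the sum at location 10, mirroring the add-and-store examples used later in the chapter.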
Note that the contents of registers and memory are shown in hexadecimal.

Indexed addressing. Instruction format: LDA Z, 2, encoded 00010 0 10 00001010. Effective address = Z + (index register 2). Effect: ACC <- (Z + (index register 2)). The number in the operand field after the comma denotes the index register used. Assuming index register 2 contains 3, Figure 5.4(b) illustrates the effect of the instruction; the numbers in circles show the sequence of operations. The contents of index register 2 are added to Z to derive the effective address Z + 3, and the contents of location Z + 3 are loaded into the accumulator, as shown in Figure 5.4(b). Note that the address field of the instruction refers to Z, and the contents of the index register specify an offset from Z. The contents of the index register can be varied using the LDX, TIX, and TDX instructions, so that consecutive memory locations can be accessed by dynamically changing the contents of the index register. Since the index registers are 16 bits wide, the effective address can be 16 bits long, thereby extending the memory addressing range to 64K from the range of 256 locations possible with the 8 address bits alone. A common use of the indexed addressing mode is in referencing the elements of an array: the address field of the instruction points to the first element, and subsequent elements are referenced by incrementing the index register.

Indirect addressing. Instruction format: LDA *Z, encoded 00010 1 00 00001010. Effective address = (Z). Effect: ACC <- ((Z)). The asterisk next to the mnemonic denotes the indirect addressing mode. In this mode, the address field points to a location where the address of the operand can be found (see Figure 5.4(c)). Since a memory word is 16 bits long, the indirect addressing mode can also be used to extend the addressing range to 64K. By simply changing the contents of location Z in this illustration, we can refer to various memory addresses using the same instruction. This feature is useful, for example, in creating a multiple-target jump instruction: the contents of Z can be dynamically changed to refer to the appropriate jump address. A more common use of indirect addressing is in referencing data elements through pointers. A pointer contains the address of the data to be accessed; the data access takes place by indirect addressing using the pointer as the operand. If the data are moved to other locations, it is sufficient to change the pointer value accordingly in order to access the data at the new location.

When both the indirect and index flags are used, two modes of effective address computation are possible, depending on whether the indirecting or the indexing is performed first, as illustrated below.

Indexed-indirect addressing (preindexed-indirect). Instruction format: LDA *Z, 2, encoded 00010 1 10 00001010. Effective address = (Z + (index register 2)). Effect: ACC <- ((Z + (index register 2))). Indexing is done first, followed by the indirection, to compute the effective address, whose contents are loaded into the accumulator, as shown in Figure 5.4(d).

Indirect-indexed addressing (postindexed-indirect). Instruction format: LDA *Z, 2, encoded 00010 1 10 00001010. Effective address = (Z) + (index register 2). Effect: ACC <- ((Z) + (index register 2)). The indirection is performed first, followed by the indexing, to compute the effective address, whose contents are loaded into the accumulator, as shown in Figure 5.4(e).

Note that the instruction formats for the two modes are identical. For ASC to distinguish between the two modes, the indirect flag would have to be expanded to 2 bits if both modes were allowed. Instead, we assume that ASC always performs preindexed-indirect addressing; postindexed-indirect addressing is not supported.

These addressing modes are applicable to all single-address instructions, with the exception of the index-reference instructions (LDX, STX, TIX, and TDX), in which indexing is not permitted. Consider an array of pointers located in consecutive locations of memory. The preindexed-indirect addressing mode is useful in accessing data elements through such an array, since we first index to a particular pointer in the array and then indirect through that pointer to access the data element. On the other hand, the postindexed-indirect mode is useful in accessing data through a pointer to an array, since we access the first element of the array by indirecting through the pointer and access subsequent elements by indexing that pointer value.", "url": "RV32ISPEC.pdf#segment177", "timestamp": "2023-10-17 20:15:32", "segment": "segment177", "image_urls": [], "Book": "computerorganization" }, { "section": "5.1.5 Other Addressing Modes ", "content": "Chapter 8 describes several other addressing modes employed in practice. For example, it is sometimes convenient in programming to include data as part of the instruction; the immediate addressing mode is used in such cases. Immediate addressing implies that the data are part of the instruction itself. This mode is not allowed in ASC, but the instruction set could be extended to include instructions such as load immediate (LDI), add immediate (ADI), etc. In these instructions, the opcode field would contain a 5-bit opcode, and the remaining 11 bits would contain the data. For instance, LDI 10 would imply loading 10 into the ACC, and ADI 20 would imply adding 20 to the ACC.
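The four effective-address computations can be summarized in one helper; this is a sketch under my own naming, with postindexing included only for contrast with the mode ASC actually supports:

```python
def effective_address(memory, address, index_regs,
                      indexed=0, indirect=False, preindexed=True):
    """Compute an ASC-style effective address (Section 5.1.4).

    indexed: index register number (1-3), or 0 for no indexing;
    indirect: the indirect flag; preindexed selects indexed-indirect
    (the only mode ASC supports) over postindexed-indirect.
    """
    ea = address
    if indexed and preindexed:
        ea += index_regs[indexed]   # indexing first ...
    if indirect:
        ea = memory[ea]             # ... then indirection
    if indexed and not preindexed:
        ea += index_regs[indexed]   # postindexing, shown for contrast
    return ea

memory = {10: 64, 13: 70}           # Z = 10, as in the text
xr = {1: 0, 2: 3, 3: 0}             # index register 2 contains 3
print(effective_address(memory, 10, xr))                            # 10 (direct)
print(effective_address(memory, 10, xr, indexed=2))                 # 13 (indexed)
print(effective_address(memory, 10, xr, indirect=True))             # 64 (indirect)
print(effective_address(memory, 10, xr, indexed=2, indirect=True))  # 70 (preindexed-indirect)
```

Flipping `preindexed` to False on the last call would instead add the index register after the indirection, yielding the postindexed-indirect address 64 + 3 = 67.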
Since ASC does not permit this addressing mode, the ASC assembler is designed to accept a so-called literal addressing mode that simulates immediate addressing on ASC; refer to the following section for details.", "url": "RV32ISPEC.pdf#segment178", "timestamp": "2023-10-17 20:15:32", "segment": "segment178", "image_urls": [], "Book": "computerorganization" }, { "section": "5.1.6 Addressing Limitations ", "content": "As discussed earlier, the ASC instruction format restricts the direct-addressing range to the first 256 locations of memory. Thus, if the program and data fit into locations 0 through 255, no programming difficulties are encountered. When that is not possible, one of the following programming alternatives can be used:

1. The program resides in the first 256 locations, and the data reside in higher-addressed locations of memory. In this case, all instruction addresses can be represented in the 8-bit address field. Since data references require an address longer than 8 bits, all data references are handled using the indexed or indirect addressing modes. For example, the data at location 300 can be loaded into the ACC by either of the following instructions: (a) LDA 0, 2, assuming index register 2 contains 300; (b) LDA *0, assuming location 0 of memory contains 300.

2. The data reside in the first 256 locations, and the program resides beyond location 255. In this case, data-reference instructions (LDA, STA, etc.) can use the direct, indirect, and/or indexed modes, but memory-reference instructions such as BRU, BIP, and BIN must use the indexed and/or indirect modes.

3. The program and data both reside beyond location 255. All memory-reference instructions must then be indirect and/or indexed. Recall that index-reference instructions can use only the direct and indirect modes of addressing.", "url": "RV32ISPEC.pdf#segment179", "timestamp": "2023-10-17 20:15:32", "segment": "segment179", "image_urls": [], "Book": "computerorganization" }, { "section": "5.1.7 Machine Language Programming ", "content": "It is possible to write a program for ASC using absolute (actual physical) memory addresses and opcodes, since the instruction set and the instruction and data formats are known. Such programs, called machine language programs, need no further translation for the hardware to interpret them, since they are already in binary form; programming at this level is tedious, however. A machine language program for ASC to add two numbers and store the sum in a third location is shown below:

0001 0000 0000 1000
0011 0000 0000 1001
0001 0000 0000 1000

Decode the instructions to determine what the program does. Modern-day computers are seldom programmed at this level; a program must be at this level, however, before its execution can begin. Translators such as assemblers and compilers are used for converting programs written in assembly and high-level languages into machine language. We discuss a hypothetical assembler for the ASC assembly language in the next section.", "url": "RV32ISPEC.pdf#segment180", "timestamp": "2023-10-17 20:15:32", "segment": "segment180", "image_urls": [], "Book": "computerorganization" }, { "section": "5.2 ASC ASSEMBLER ", "content": "An assembler translates an ASC assembly language program into a machine language program. Several such programs are available; we provide details of the language accepted by our assembler and outline the assembly process in this section. An assembly language program consists of a sequence of statements (instructions) coded in mnemonics and symbolic addresses. Each statement consists of four fields (label, operation mnemonic, operand, and comments), as shown in Figure 5.5. The label is a symbolic name denoting the memory location where the instruction is located. It is not necessary to provide a label for each statement; only statements referenced from elsewhere in the program need labels. When provided, a label is a set of alphabetic and numeric characters, the first of which must be alphabetic. The mnemonic field contains an instruction mnemonic; a * following the mnemonic denotes indirect addressing. The operand field consists of symbolic addresses, absolute addresses, and index register designations; typical operands are shown in Figure 5.6. The comments field is optional and consists of comments by the programmer; it does not affect the instruction in any way and is ignored by the assembler. A * as the first character of the label field designates the complete statement as a comment.

Each instruction in an assembly language program can be classified as either an executable instruction or an assembler directive (pseudo-instruction). Each of the 16 instructions in the ASC instruction set is an executable instruction; the assembler generates a machine language instruction corresponding to each such instruction in the program. A pseudo-instruction is a directive to the assembler, used to control the assembly process, reserve memory locations, and establish constants required by the program. Pseudo-instructions, when assembled, do not generate machine language instructions and as such are not executable.
Care must therefore be taken by the assembly language programmer to keep assembler directives out of the execution sequence of the program. A description of the ASC pseudo-instructions follows.

ORG address (origin). The ORG directive provides the assembler with the memory address at which the next instruction is to be located. ORG is usually the first instruction of the program; its operand field provides the starting address, i.e., the address at which the first instruction of the program is located. If the first instruction of the program is not an ORG, the assembler defaults to a starting address of 0. There can be more than one ORG directive in a program.

END address (physical end). END indicates the physical end of the program and is the last statement of a program. Its operand field normally contains the label of the first executable statement of the program.

EQU (equate). EQU provides a means of giving multiple symbolic names to memory locations, as shown in the following example.

Example 5.1:
A EQU B (A is another name for B; B must already be defined)
C EQU B+5 (C is the name of location B+5)
D EQU 10 (D is the name of absolute address 10)

BSS (block storage starting). BSS is used to reserve blocks of storage locations for intermediate or final results.

Example 5.2: Z BSS 5 reserves five locations, the first of which is named Z (the five locations are Z, Z+1, Z+2, Z+3, and Z+4). The operand field always designates the number of locations to be reserved; the contents of the reserved locations are not defined.

BSC (block storage constants). BSC provides a means of storing constants in memory locations, in addition to reserving those locations. The operand field consists of one or more operands; each operand requires one memory word.

Example 5.3: Z BSC 5 reserves one location named Z, containing 5. P BSC 5, 6, 7 reserves three locations: P containing 5, P+1 containing 6, and P+2 containing 7.

Literal addressing mode. It is convenient for the programmer to be able to define constants as part of an instruction; this feature also makes the assembly language program more readable. The literal addressing mode enables this: a literal constant is preceded by =. For example, LDA =2 implies loading the constant 2 (decimal) into the accumulator, and ADD =H10 implies adding hexadecimal 10 to the accumulator. The ASC assembler recognizes literals, reserves an available memory location for each constant, and substitutes the address of that memory location into the address field of the instruction. We now provide some examples of assembly language programs.

Example 5.4: Figure 5.7 shows a program to add three numbers located at A, B, and C and save the result at D. The program is origined to location 10. Note how the HLT statement separates the program logic from the data block. A, B, and C are defined using BSC statements, and one location is reserved for D using BSS.

Example 5.5: Figure 5.8 shows a program to accumulate five numbers stored starting at location X in memory and store the result at Z. Index register 1 is first set to 4 to point to the last number of the block (i.e., X + 4), and the ACC is set to 0. TDX is used to access the numbers one at a time, from the last to the first, and to terminate the loop after all the numbers are accumulated.

Example 5.6: Division can be treated as repeated subtraction. The divisor is subtracted from the dividend until a zero or negative result is obtained; the quotient is equal to the maximum number of times the subtraction can be performed without yielding a negative result. Figure 5.9 shows a division routine.

The generation of object code from an assembly language program is described in the following section.", "url": "RV32ISPEC.pdf#segment181", "timestamp": "2023-10-17 20:15:32", "segment": "segment181", "image_urls": [], "Book": "computerorganization" }, { "section": "5.2.1 Assembly Process ", "content": "The major functions of an assembler program are (1) to generate an address for each symbolic name in the program and (2) to generate the binary equivalent of each assembly instruction. The assembly process is usually carried out in two scans over the source program; each of these scans is called a pass, and the assembler is called a two-pass assembler. The first pass is used for allocating a memory location to each symbolic name used in the program; in the second pass, references to these symbolic names are resolved. If the restriction is made that a symbol must be defined before it is referenced, one pass suffices. The details of an ASC two-pass assembler are given in Figures", "url": "RV32ISPEC.pdf#segment182", "timestamp": "2023-10-17 20:15:32", "segment": "segment182", "image_urls": [], "Book": "computerorganization" }, { "section": "5.10 ", "content": "", "url": "RV32ISPEC.pdf#segment183", "timestamp": "2023-10-17 20:15:32", "segment": "segment183", "image_urls": [], "Book": "computerorganization" }, { "section": "5.11. 
", "content": "The assembler uses a counter known as the location counter (LC) to keep track of the memory locations used. If the first instruction of the program is an ORG, the operand field of the ORG defines the initial value of the LC; otherwise, the LC is set to 0. The LC is incremented appropriately during the assembly process; the content of the LC at any time is the address of the next available memory location. The assembler performs the following tasks during the first pass: (1) enters labels into the symbol table, along with the LC value as the address of each label; (2) validates mnemonics; (3) interprets pseudo-instructions completely; and (4) manages the location counter. The assembler uses an opcode table to extract opcode information. The opcode table stores each mnemonic, the corresponding opcode, and any other attributes of the instruction useful in the assembly process. The symbol table created by the assembler consists of two entries for each symbol: the symbolic name and the address at which the symbol is located. We will illustrate the assembly process with an example.", "url": "RV32ISPEC.pdf#segment184", "timestamp": "2023-10-17 20:15:33", "segment": "segment184", "image_urls": [], "Book": "computerorganization" }, { "section": "5.7. ", "content": "Example", "url": "RV32ISPEC.pdf#segment185", "timestamp": "2023-10-17 20:15:33", "segment": "segment185", "image_urls": [], "Book": "computerorganization" }, { "section": "5.7 ", "content": "Consider the program shown in Figure", "url": "RV32ISPEC.pdf#segment186", "timestamp": "2023-10-17 20:15:33", "segment": "segment186", "image_urls": [], "Book": "computerorganization" }, { "section": "5.12a. ", "content": "The symbol table is initially empty, and the location counter starts with the default value of 0. The first instruction is an ORG; its operand field is evaluated, and the value 0 is entered into the LC. The label field of the next instruction contains BEGIN; BEGIN is entered into the symbol table and assigned the address 0. The mnemonic field contains LDX, a valid mnemonic; since this instruction takes one memory word, the LC is incremented by 1. The operand field of the LDX instruction is not evaluated during the first pass. This process of scanning the label and mnemonic fields, entering labels into the symbol table, validating mnemonics, and incrementing the LC continues until the END instruction is reached. Pseudo-instructions are completely evaluated during this pass. The location counter values are shown in Figure", "url": "RV32ISPEC.pdf#segment187", "timestamp": "2023-10-17 20:15:33", "segment": "segment187", "image_urls": [], "Book": "computerorganization" }, { "section": "5.12a ", "content": "along with the symbol table entries at the end of pass 1 (Figure", "url": "RV32ISPEC.pdf#segment188", "timestamp": "2023-10-17 20:15:33", "segment": "segment188", "image_urls": [], "Book": "computerorganization" }, { "section": "5.12b. ", "content": "At the end of the first pass, the location counter will have advanced to E (hexadecimal), since the BSS 4 takes four locations. During the second pass, machine instructions are generated using the source program and the symbol table. Operand fields are evaluated during this pass, and the fields of the instruction format are filled in appropriately. For the first instruction, starting at LC = 0, the label field is ignored, and the opcode 11000 is substituted for the mnemonic LDX. Since this is a one-address instruction, the operand field is evaluated next. There is no * after the mnemonic; hence the indirect flag is set to 0. The absolute address of C is obtained from the symbol table and entered into the address field of the instruction, and the index flag is set to 01. This process continues until the END instruction is reached. The object code is shown in Figure 5.9(c) in both binary and hexadecimal formats, and the symbol table is shown in Figure", "url": "RV32ISPEC.pdf#segment189", "timestamp": "2023-10-17 20:15:33", "segment": "segment189", "image_urls": [], "Book": "computerorganization" }, { "section": "5.9d ", "content": "with one more entry corresponding to the literal 0: the instruction LDX =0, 2 is assembled as an LDX of index register 2 from a location containing 0. The contents of the word reserved in response to the BSS are not defined, and the unused bits of the HLT instruction words are assumed to be 0s.", "url": "RV32ISPEC.pdf#segment190", "timestamp": "2023-10-17 20:15:33", "segment": "segment190", "image_urls": [], "Book": "computerorganization" }, { "section": "5.3 ", "content": "Program loading. The object code must be loaded into machine memory before it can be executed. The ASC console can be used to load a program and data into memory. Loading through the console is tedious and time consuming, however, especially when programs are large. In such cases, a small program that reads object code statements from an input device and stores them in the appropriate memory locations is first written, assembled, and loaded into machine memory using the console; this loader program is then used to load object code or data into memory. Figure 5.13 shows a loader program for ASC. Instead of loading this program each time through the console, it can be stored in a ROM that forms part of the 64K memory space of ASC. Loading is then initiated by setting the PC to the beginning address of the loader (using the console) and starting the machine. Note that the loader occupies locations 224 through 255; hence, care must be taken to make sure other programs do not overwrite this space. Note also that the assembler itself must be in binary object code form when it is loaded into ASC memory if it is to be used to assemble other programs.
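The two passes described above can be condensed into a sketch. The LDA opcode 00010 appears in the text; the statement encoding, the BRU and HLT opcode values, and all names here are illustrative assumptions:

```python
def assemble(statements, opcodes):
    """Toy two-pass assembly over (label, mnemonic, operand) triples,
    each occupying one word. Pass 1 builds the symbol table; pass 2
    resolves operands and emits 16-bit words (no indirect/index bits)."""
    symbols, lc = {}, 0
    for label, _, _ in statements:            # pass 1: addresses for labels
        if label:
            symbols[label] = lc
        lc += 1
    words = []
    for _, mnemonic, operand in statements:   # pass 2: encode
        address = symbols.get(operand, operand) or 0
        words.append((opcodes[mnemonic] << 11) | (address & 0xFF))
    return words, symbols

OPCODES = {"LDA": 0b00010, "BRU": 0b01100, "HLT": 0b00000}  # BRU/HLT values assumed
prog = [("LOOP", "LDA", "X"),
        (None,  "BRU", "LOOP"),
        ("X",   "HLT", None)]
words, symbols = assemble(prog, OPCODES)
print(symbols)                  # {'LOOP': 0, 'X': 2}
print([hex(w) for w in words])  # ['0x1002', '0x6000', '0x0']
```

Note how the operand X of the first statement can only be resolved in pass 2, after pass 1 has assigned it the address 2; this is exactly why a single pass does not suffice for forward references.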
This means that the assembler must be translated from its source language into ASC binary code, either manually or by implementing the assembler on some other machine, and then loaded into ASC either by using the loader or by retaining it in the ROM portion of memory for repeated use.", "url": "RV32ISPEC.pdf#segment191", "timestamp": "2023-10-17 20:15:33", "segment": "segment191", "image_urls": [], "Book": "computerorganization" }, { "section": "5.4 SUBROUTINES ", "content": "A subroutine (also called a function, method, procedure, or subprogram) is a portion of code within a larger program that performs a specific task and is relatively independent of the remaining code. The syntax of many programming languages includes support for creating self-contained subroutines. A subroutine consists of instructions for performing some task, chunked together and given a name. Chunking allows us to deal with a potentially complicated task as a single concept: instead of worrying about the many, many steps the computer might have to go through to perform the task, we need only remember the name of the subroutine, and whenever we want the program to perform the task, we call the subroutine. Subroutines are a major tool for handling complexity: they reduce redundancy in a program, enable reuse of code across multiple programs, allow us to decompose complex problems into simpler pieces, and improve the readability of a program. The typical components of a subroutine are a body (the code executed when the subroutine is called), parameters passed to the subroutine from the point at which it is called, and one or more values returned to the point of call. Some programming languages, such as Pascal and Fortran, distinguish between functions, which return values, and subroutines (procedures), which do not; other languages, such as C and Lisp, make no such distinction and treat the terms as synonymous. The name method is commonly used in connection with object-oriented programming, specifically for subroutines that are part of objects.

Calling a subroutine means jumping to its first instruction using a jump instruction. When execution of the subroutine ends, we jump back to the point in the program where the subroutine was called, and the program picks up where it left off; this is known as returning from the subroutine. For a subroutine to be reusable in any meaningful sense, it must be possible to call it from many different places in the program. In that case, how does the computer know at what point in the program to resume when the subroutine ends? The answer is that the return point must be recorded somewhere before the subroutine is called. The address in memory to which the computer is supposed to return when the subroutine ends is called the return address. Before jumping to the start of the subroutine, the program must store the return address in a place where the subroutine can find it; when the subroutine has finished performing its assigned task, it ends by jumping back to the return address.

Let us expand the ASC instruction set to include two instructions that handle subroutines: jump to subroutine (JSR) and return from subroutine (RET). The following program illustrates the subroutine call and return operations:

ORG 0
BEGIN LDA X
JSR SUB1
WWD
HLT
X BSC 23
SUB1 STA Z
ADD Z
RET
END BEGIN

The main program calls subroutine SUB1 through the JSR. Program control transfers to SUB1, the two instructions of the subroutine are executed, and the RET follows. The effect of executing the RET is to return control to the return address in the main program; the main program then executes the WWD, followed by the HLT. To enable this operation, the JSR instruction stores the return address (i.e., the address of the instruction following the JSR) somewhere before jumping to the subroutine. When the RET is executed, the return address is retrieved, and the program transfers to the instruction following the call.

If the return address is stored in a dedicated register or a dedicated memory location and the subroutine in turn calls another subroutine (i.e., the calls are nested), the return address corresponding to the first call is lost, and the program cannot return properly. Typically, therefore, the return address is pushed onto a stack at each call and popped from the stack at each return (see Appendix B for details of stack implementation); this allows subroutine calls to be nested.

The return address is not the only item of information the program sends a subroutine. If the task of the subroutine is to multiply a number by seven, the main program must tell the subroutine which number to multiply; this information is said to be a parameter of the subroutine. Similarly, the subroutine must get its answer, the result of multiplying the parameter value by seven, back to the main program; the answer is called the return value of the subroutine. If the program puts the parameter value in the accumulator before calling the subroutine, the subroutine knows where to look for it when it is entered; before jumping back to the main program, the subroutine puts the return value in the accumulator, where the main program knows to look for it. Passing parameter values and return values back and forth in registers is a simple and efficient method of communication between a subroutine and the rest of the program; in ASC, the ACC and the three index registers can be used for parameter passing. Another common way of passing parameters is for the main program to push them onto a stack prior to calling the subroutine; the subroutine retrieves the parameters from the stack, operates on them, and returns its results to the main program back on the stack.
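The stack discipline for nested JSR/RET calls can be modeled directly; this is a sketch with an invented instruction layout, not the ASC encoding:

```python
class Machine:
    """Minimal model of subroutine linkage with a return-address stack."""
    def __init__(self, program):
        self.program = program          # address -> instruction tuple
        self.pc, self.stack, self.trace = 0, [], []

    def run(self):
        while True:
            op, *args = self.program[self.pc]
            if op == "JSR":             # push return address, jump to subroutine
                self.stack.append(self.pc + 1)
                self.pc = args[0]
            elif op == "RET":           # pop return address, jump back
                self.pc = self.stack.pop()
            elif op == "HLT":
                return self.trace
            else:                       # record any other operation and continue
                self.trace.append(op)
                self.pc += 1

# main at 0 calls SUB1 at 4; SUB1 makes a nested call to SUB2 at 7
prog = {0: ("JSR", 4), 1: ("WWD",), 2: ("HLT",),
        4: ("STA",),   5: ("JSR", 7), 6: ("RET",),
        7: ("ADD",),   8: ("RET",)}
print(Machine(prog).run())  # ['STA', 'ADD', 'WWD']
```

Because the return addresses 1 and 6 are stacked in call order and popped in reverse, the nested call returns correctly; a single dedicated return-address register would have been overwritten by the inner JSR.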
", "url": "RV32ISPEC.pdf#segment192", "timestamp": "2023-10-17 20:15:33", "segment": "segment192", "image_urls": [], "Book": "computerorganization" }, { "section": "5.5 MACROS ", "content": "A macro is a group of instructions codified under a name so that they can be used as many times as necessary in a program. Unlike a subroutine, a macro is not called at run time; rather, each macro call in the program is replaced by the group of instructions that constitutes the macro body. Thus a call to the macro results in the substitution of the macro body. We can also pass parameters to a macro: when the macro call is expanded, each parameter is substituted by the name or value specified at the time of the call. An example macro definition is shown below:

ADD2 MACRO
BIP POS
ADD =1
POS ADD =1
ENDM

MACRO is the assembler directive indicating a macro definition, ADD2 is the macro name, and ENDM signifies the end of the macro definition. The instructions in the macro body add 1 to the accumulator if it is positive and otherwise add 2 to the accumulator. To call the macro, we simply use ADD2 as an instruction in the program; the assembler replaces ADD2 with the three instructions corresponding to the macro body every time ADD2 is used in the program. Usually this substitution (macro expansion) is done prior to the first pass of the assembler.

A macro to accumulate two values A and B and store the result at C can be defined with A, B, and C as the parameters of the macro:

ABC MACRO A, B, C
LDA A
ADD B
STA C
ENDM

A call to this macro would be, say, ABC X, Y, Z; the call results in the replacement of A, B, and C by X, Y, and Z during macro expansion. Note that after the macro is expanded, A, B, and C are not visible in the program; X, Y, and Z must be defined in the program.

Macros thus allow the programmer to define repetitive blocks of code once and use a macro call to expand them wherever needed. Because the macro body is converted to inline code at each call, a macro does not incur the control overhead required by subroutines to save and retrieve the return address; this is an advantage over subroutines. On the other hand, since a subroutine body is not expanded in the calling program even with multiple calls, subroutines save instruction memory at the cost of run-time overhead, whereas macros consume more instruction memory but eliminate the subroutine call/return overhead.

One of the facilities the use of macros offers is the creation of libraries: groups of macros included in a program from a different file. The creation of a library is simple: first create a file of macro definitions and save it as a text file, say MACROS1; to call those macros when necessary, use an instruction such as INCLUDE MACROS1 at the beginning of the program.", "url": 
"RV32ISPEC.pdf#segment193", "timestamp": "2023-10-17 20:15:34", "segment": "segment193", "image_urls": [], "Book": "computerorganization" }, { "section": "5.6 LINKERS AND LOADERS ", "content": "In practice, an executable program is composed of a main program and several subroutines (modules). The modules either come from a predefined library or are developed by the programmer for the particular application. Assemblers and compilers allow independent translation of these program modules into their corresponding machine code. A linker is a program that takes one or more modules generated by assemblers and compilers and assembles them into a single executable program; the linking process can also assemble object files and static libraries into a new library or an executable program. The program modules contain machine code and information for the linker. This information comes mainly in the form of two types of symbol definitions: (1) defined or exported symbols, i.e., functions or variables that are present in the module represented by the object file and are available for use by other modules; and (2) undefined or imported symbols, i.e., functions or variables that are called or referenced by the object file but are not internally defined. The linker's job is to resolve references to undefined symbols by finding out which other module defines each symbol in question and replacing placeholders for the symbol with its address. The linker also takes care of arranging the modules in the program's address space; this may involve relocating code that assumes a specific base address to another base. Since an assembler or compiler seldom knows where a module will reside, it often assumes a fixed base location. Relocating machine code may involve retargeting absolute jumps, loads, and stores. For instance, in the case of ASC programs, the ORG directive defines the beginning address of a program (module). If all the modules are assembled at their own origins, the linker has to make sure they do not overlap in memory when put together into a single executable program. Even when the linking is done, there is no guarantee that the executable will reside at the specified origin when it is loaded into machine memory, since other programs may be residing in that memory space at the time of loading; thus the program may have to be relocated again. Linkers and loaders perform several related but conceptually separate actions:

Program loading. Copying a program from secondary storage into main memory so it is ready to run. In some cases loading just involves copying the data from disk to memory; in others it involves allocating storage, setting protection bits, or arranging for virtual memory to map virtual addresses to disk pages.

Relocation. As mentioned earlier, relocation is the process of assigning load addresses to the various parts of the program, adjusting the code and data in the program to reflect the assigned addresses. In many systems, relocation happens more than once. It is quite common for a linker to create a program from multiple subprograms, producing one linked output program that starts at zero, with the various subprograms relocated to locations within the big program. When the program is loaded, the system picks the actual load address, and the linked program is relocated as a whole to that address.

Symbol resolution. When a program is built from multiple subprograms, references from one subprogram to another are made using symbols. The main program might use a square root routine called sqrt, and the math library defines sqrt. The linker resolves the symbol by noting the location assigned to sqrt in the library and patching the caller's object code so that the call instruction refers to that location.

Although there is considerable overlap between linking and loading, it is reasonable to define a program that does program loading as a loader and one that does symbol resolution as a linker; either can also do relocation, and there have been all-in-one linking loaders that perform all three functions.", "url": "RV32ISPEC.pdf#segment194", "timestamp": "2023-10-17 20:15:34", "segment": "segment194", "image_urls": [], "Book": "computerorganization" }, { "section": "5.6.1 Dynamic Linking ", "content": "Modern operating system environments allow dynamic linking, i.e., the postponing of the resolution of some undefined symbols until the program is run. That means the executable still contains undefined symbols, plus a list of the modules or libraries that provide the definitions for them; loading the program loads these modules/libraries as well and performs the final linking. This approach offers two advantages: (1) often-used libraries (e.g., the standard system libraries) need to be stored in only one location, not duplicated in every single binary; and (2) if an error in a library function is corrected by replacing the library, all programs using it dynamically benefit from the correction immediately, whereas programs that included the function by static linking would have to be re-linked first. With dynamic linking, the data of a library is not copied into a new executable or library at compile time; it remains in a separate file on disk. The linker does only a minimal amount of work at compile time: it records the names or index numbers of the libraries the executable needs. The majority of the work of linking is done at load time, when the application is loaded, or at run time, during the program's execution.
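The relocation and symbol-resolution actions just defined can be sketched for word-addressed modules; the module format used here is my own invention:

```python
def link(modules, base=0):
    """Toy linker: lay the modules out end to end starting at `base`,
    collect each module's exported symbols into a global symbol table,
    then patch every imported-symbol placeholder with the final address.

    A module is {"code": [words], "exports": {name: offset},
                 "refs": {code_offset: imported_name}}.
    """
    table, addr = {}, base
    for mod in modules:                     # pass 1: layout + export table
        for sym, off in mod["exports"].items():
            table[sym] = addr + off
        addr += len(mod["code"])
    image = []
    for mod in modules:                     # pass 2: patch references
        code = list(mod["code"])
        for off, sym in mod["refs"].items():
            code[off] = table[sym]          # placeholder -> resolved address
        image.extend(code)
    return image, table

main = {"code": [0, 0], "exports": {"main": 0}, "refs": {0: "sqrt"}}
mathlib = {"code": [123], "exports": {"sqrt": 0}, "refs": {}}
image, table = link([main, mathlib])
print(table)  # {'main': 0, 'sqrt': 2}
print(image)  # [2, 0, 123]
```

Changing `base` relocates every assigned address by the same amount, which mirrors the whole-program relocation a loader performs after picking the actual load address.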
At the appropriate time, the loader finds the relevant libraries on disk and adds the relevant data from the libraries to the program's memory space. Some operating systems link in a library at load time, before the program starts executing; others may be able to wait until after the program has started executing, linking in a library only when it is actually referenced (i.e., at run time). The latter is often called delay loading. In either case, such a library is called a dynamically linked library. Dynamic linking was originally developed for the Multics operating system, starting in 1964; it was also a feature of MTS (the Michigan Terminal System), built in the late 1960s. In Microsoft Windows, dynamically linked libraries are called dynamic-link libraries (DLLs).

One wrinkle that the loader must handle is that the location in memory of the actual library data cannot be known until the executable and all dynamically linked libraries have been loaded into memory, as the memory locations used depend on which specific dynamic libraries have been loaded. It is not possible to store the absolute location of the data in the executable, or even in the library, since conflicts between different libraries would result: if two of them specified the same or overlapping addresses, it would be impossible to use both in the same program. This might change with the increased adoption of 64-bit architectures, which offer enough virtual memory addresses to give every library ever written a unique address range.

It would theoretically be possible to examine the program at load time and replace all references to data in the libraries with pointers to the appropriate memory locations once all the libraries have been loaded, but this method would consume unacceptable amounts of either time or memory. Instead, most dynamic library systems link a symbol table with blank addresses into the program at compile time. All references to code or data in the library pass through this table, the import directory; at load time, the table is filled in with the locations of the library code and data by the loader/linker. This process is still slow enough to significantly affect the speed of programs that call other programs at a very high rate, such as certain shell scripts. The library itself contains a table of all the methods within it, known as entry points; calls into the library jump through this table, looking up the location of the code in memory and then calling it. This introduces overhead in calling into the library, but the delay is usually small enough to be negligible.

Dynamic linkers/loaders vary widely in functionality. Some depend on explicit paths to the libraries being stored in the executable; any change to the library naming or to the layout of the file system will cause these systems to fail. More commonly, only the name of the library (and not the path) is stored in the executable, with the operating system supplying a method to find the library on disk based on some algorithm. Unix-like systems have a search path specifying the file-system directories in which to look for dynamic libraries. On some systems, the default path is specified in a configuration file; in others, it is hard-coded into the dynamic loader. Some executable file formats can specify additional directories in which to search for the libraries of a particular program; this can usually be overridden with an environment variable, although that is disabled for setuid and setgid programs so that a user cannot force such a program to run arbitrary code. Developers of libraries are encouraged to place their dynamic libraries in the locations on the default search path; on the downside, this can make installation of new libraries problematic, and these known locations quickly become home to an increasing number of library files, making management more complex.

Microsoft Windows will check the registry to determine the proper place to find an ActiveX DLL, but for other DLLs it will check the directory the program was loaded from; the current working directory (older versions of Windows only); any directories set by calling the SetDllDirectory function; the System32, System, and Windows directories; and finally the directories specified by the PATH environment variable.

One of the biggest disadvantages of dynamic linking is that the executables depend on separately stored libraries in order to function properly. If a library is deleted, moved, or renamed, or if an incompatible version of a DLL is copied to a place that is earlier in the search order, the executable can malfunction or even fail to load; damaging vital library files used by almost every executable in the system will usually render the system completely unusable.

Dynamic loading, a subset of dynamic linking, involves a dynamically linked library loading and unloading at run time on request. The request to load such a library may be made implicitly at compile time or explicitly by the application at run time. Implicit requests are made at compile time by the linker adding library references (which may include file paths or simply file names) to an object file. Explicit requests are made by applications using a run-time linker API (application program interface). Most operating systems that support dynamically linked libraries also support dynamically loading such libraries via a run-time linker API. For instance, Microsoft Windows uses the API functions LoadLibrary, LoadLibraryEx, FreeLibrary, and GetProcAddress with Microsoft dynamic-link libraries; POSIX-based systems, including most UNIX and UNIX-like systems, use dlopen, dlclose, and dlsym.
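On a POSIX system, the dlopen/dlsym sequence can be exercised from Python through ctypes, which wraps the run-time linker API; the lookup below assumes a standard C math library is installed and findable by name:

```python
import ctypes
import ctypes.util

# Locate and dynamically load the C math library at run time, then
# resolve the symbol "cos": the ctypes analogue of dlopen()/dlsym().
path = ctypes.util.find_library("m")        # search for the library by name
libm = ctypes.CDLL(path)                    # dlopen
libm.cos.restype = ctypes.c_double          # declare the C signature
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))                        # 1.0
```

Note that the library is located by name at run time, not by a path stored in the program, which is exactly the search-path behavior described above.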
dlclose, and dlsym.", "url": "RV32ISPEC.pdf#segment195", "timestamp": "2023-10-17 20:15:34", "segment": "segment195", "image_urls": [], "Book": "computerorganization" }, { "section": "5.7 SUMMARY ", "content": "This chapter provided a programmer's introduction to the organization of ASC, along with the details of the ASC assembler and assembly-language programming. The assembly process was described, and a brief introduction to program loaders and linkers was given. The various components of ASC were assumed to exist, and a justification was given for why each component is needed. The architectural tradeoffs used in selecting the features of a machine are discussed in the subsequent chapters of this book. Chapter 6 provides the detailed hardware design of ASC. Further details on these topics can be found in the references.", "url": "RV32ISPEC.pdf#segment196", "timestamp": "2023-10-17 20:15:34", "segment": "segment196", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 6 ", "content": "A Simple Computer: Hardware Design. The organization of a simple computer (ASC) was provided in Chapter 5, from a programmer's view of the machine. To illustrate the complete hardware design of ASC, this chapter assumes that the design follows the sequence of eight steps listed below: 1. selection of the instruction set; 2. word-size selection; 3. selection of instruction and data formats; 4. register set and memory design; 5. data and instruction flow-path design; 6. arithmetic and logic unit design; 7. I/O mechanism design; 8. generation of control signals and design of the control unit. In practice, the design of a digital computer is an iterative process, and a decision made early in the design process may be altered to suit a parameter in a later step. For example, the instruction and data formats may be changed to accommodate a better data and instruction flow-path design. A computer architect selects an architecture for the machine based on cost and performance tradeoffs, and a computer designer implements the architecture using the hardware and software components available. The complete process of development of a computer system can thus also be viewed as consisting of two phases, design and implementation: the architect derives the architecture in the design phase, and each subsystem of the architecture can then be implemented in several ways depending on the available technology and requirements. We do not distinguish between the two phases in this chapter, and we restrict the chapter to the design of the hardware components of ASC. We use 
memory elements (registers and flip-flops) and logic gates as the components in the design. Architectural issues of concern at each stage of the design are described in subsequent chapters, with reference to the architectures of commercially available machines. We first describe the program execution process in order to depict the utilization of the registers of ASC.", "url": "RV32ISPEC.pdf#segment197", "timestamp": "2023-10-17 20:15:34", "segment": "segment197", "image_urls": [], "Book": "computerorganization" }, { "section": "6.1 PROGRAM EXECUTION ", "content": "Once the object code is loaded into memory, it can be executed by initializing the program counter to the starting address and activating the start switch on the console. Instructions are then fetched from memory and executed in sequence until a HLT instruction is reached or an error condition occurs. The execution of each instruction consists of two phases: 1. instruction fetch and 2. instruction execute.", "url": "RV32ISPEC.pdf#segment198", "timestamp": "2023-10-17 20:15:35", "segment": "segment198", "image_urls": [], "Book": "computerorganization" }, { "section": "6.1.1 Instruction Fetch ", "content": "During instruction fetch, an instruction word is transferred from memory into the instruction register (IR). To accomplish this, the contents of the program counter (PC) are first transferred into the MAR, and a memory read operation is performed to transfer the instruction into the MBR; the instruction is then transferred into the IR. While the memory read is in progress, the control unit uses its internal logic to add 1 to the contents of the program counter, so that the program counter points to the memory word following the one from which the current instruction was fetched. This sequence of operations constitutes the fetch phase, which is the same for all instructions.", "url": "RV32ISPEC.pdf#segment199", "timestamp": "2023-10-17 20:15:35", "segment": "segment199", "image_urls": [], "Book": "computerorganization" }, { "section": "6.1. 
2 Instruction Execution ", "content": "Once the instruction is in the instruction register, its opcode is decoded, and the sequence of operations needed to retrieve the operands from memory (if any are needed) and to perform the processing called for by the opcode is brought about. This execution phase is unique to each instruction in the instruction set. For example, for the LDA instruction the effective address is first calculated; then the contents of the memory word at the effective address are read and transferred into the ACC. At the end of the execute phase, the machine returns to the fetch phase. ASC uses an additional phase to compute the effective address when the instruction uses the indirect addressing mode. Since this computation involves reading an address from memory, this phase is termed the defer phase. The fetch, defer, and execute phases together form the so-called instruction cycle. Note that an instruction cycle need not contain all three phases: the fetch and execute phases are required for all instructions, while the defer phase is required only when indirect addressing is called for. Example 6.1: Figure 6.1 gives a sample program and shows the contents of the ASC registers during the execution of the program.", "url": "RV32ISPEC.pdf#segment200", "timestamp": "2023-10-17 20:15:35", "segment": "segment200", "image_urls": [], "Book": "computerorganization" }, { "section": "6.2 DATA, INSTRUCTION, AND ADDRESS FLOW ", "content": "A detailed analysis of the flow of data, instructions, and addresses during the instruction fetch, defer, and execute phases of each instruction in the instruction set is required to determine the signal flow paths needed between the registers and memory. The following analysis of ASC is based on the execution process shown in Figure 6.1. [Figure 6.1 traces register-transfer sequences such as MAR <- PC, READ, IR <- MBR for a sample program containing an ADD instruction with indexing, showing sample MBR, ACC, and index-register contents such as H0003 and H0004 at each step.]", "url": "RV32ISPEC.pdf#segment201", "timestamp": "2023-10-17 20:15:35", "segment": "segment201", "image_urls": [], "Book": "computerorganization" }, { "section": "6.2.1 Fetch Phase ", "content": "Assuming the PC is loaded with the address of the first instruction of the program, the following set of register transfers and manipulations is needed during the fetch phase of each instruction. 6.2.2 Address Calculations: For one-address instructions, the following address computation capabilities are needed; the effective address is assumed to be in the MAR at the end of the address calculation, at which point the execution of the instruction begins. Using the concatenation operator, the transfer MAR <- IR(7:0) can be refined as MAR <- 00000000 C IR(7:0), which means that the most significant 8 bits of the MAR receive 0s. This concatenation of 0s with the address field of the IR, as given earlier, is assumed whenever the content of the IR address field is transferred to a 16-bit destination or the IR address field is added to an index register. Memory operations are also designated: READ for a memory read, MBR <- M[MAR], and WRITE for a memory write, M[MAR] <- MBR. Note that when the content of the index register is 0, MAR <- IR(7:0) + INDEX serves for direct address computation as well; this fact is used later in the chapter. Assuming that all arithmetic is performed by one arithmetic unit, the instruction and address flow paths required for the fetch cycle and address computation in ASC are shown in Figure", "url": "RV32ISPEC.pdf#segment202", "timestamp": "2023-10-17 20:15:35", "segment": "segment202", "image_urls": [], "Book": "computerorganization" }, { "section": "6.2 ", "content": "For a list of the mnemonic codes, see Table", "url": "RV32ISPEC.pdf#segment203", "timestamp": "2023-10-17 20:15:35", "segment": "segment203", "image_urls": [], "Book": "computerorganization" }, { "section": "6.2. ", "content": "3 Execution Phase: The detailed data flow during the execution of each instruction is determined by an analysis such as the one shown in Table", "url": "RV32ISPEC.pdf#segment204", "timestamp": "2023-10-17 20:15:35", "segment": "segment204", "image_urls": [], "Book": "computerorganization" }, { "section": "6.3 ", "content": "Bus Structure: The data and address transfers shown in Figure", "url": "RV32ISPEC.pdf#segment205", "timestamp": "2023-10-17 20:15:35", "segment": "segment205", "image_urls": [], "Book": "computerorganization" }, { "section": "6.2 ", "content": "can be brought about either by a point-to-point interconnection of the registers or by a bus structure connecting all the registers of ASC. Bus structures are more commonly used, since point-to-point interconnection becomes complex as the number of registers to be connected increases. The common bus structures are the single bus and the multibus. In a single-bus structure, all data and address flow is through one bus. A multibus structure typically consists of several buses, each dedicated to certain transfers; for example, one bus could be a data bus and the other an address bus. Multibus structures provide the advantage of tailoring each bus to the 
set of transfers to which it is dedicated, which permits parallel operations; single-bus structures have the advantage of uniformity of design. The characteristics to be considered in evaluating bus structures are the amount of hardware required and the data transfer rates possible. Figure", "url": "RV32ISPEC.pdf#segment206", "timestamp": "2023-10-17 20:15:35", "segment": "segment206", "image_urls": [], "Book": "computerorganization" }, { "section": "6.3 ", "content": "shows the bus structure of ASC. This bus structure is realized by recognizing that two operands are required for arithmetic operations using the arithmetic unit of ASC: the operand buses (BUS1 and BUS2) feed the arithmetic unit, and the output of the arithmetic unit is on another bus (BUS3). For transfers that do not involve an arithmetic operation, two direct paths are assumed through the arithmetic unit; these paths provide either a transfer from BUS1 to BUS3 or a transfer from BUS2 to BUS3. The contents of each bus for typical operations are listed with each operation. The bus structure shown in Figure", "url": "RV32ISPEC.pdf#segment207", "timestamp": "2023-10-17 20:15:35", "segment": "segment207", "image_urls": [], "Book": "computerorganization" }, { "section": "6.3 ", "content": "is similar to the one described in Section", "url": "RV32ISPEC.pdf#segment208", "timestamp": "2023-10-17 20:15:35", "segment": "segment208", "image_urls": [], "Book": "computerorganization" }, { "section": "4.8. ", "content": "Data enter a register at the rising edge of the clock. The clock is required along with the control signals that select the source and destination registers for each bus transfer; these signals are generated by the control unit. Figure", "url": "RV32ISPEC.pdf#segment209", "timestamp": "2023-10-17 20:15:35", "segment": "segment209", "image_urls": [], "Book": "computerorganization" }, { "section": "6.4a ", "content": "shows the detailed bus transfer circuit for the add operation, and the timing diagram is shown in Figure", "url": "RV32ISPEC.pdf#segment210", "timestamp": "2023-10-17 20:15:35", "segment": "segment210", "image_urls": [], "Book": "computerorganization" }, { "section": "6.4b. 
", "content": "control sign als acc bus1 mb r bus2 select acc mbr sourc e registers bus1 bus2 respect ively th e cont rol signal add enabl es add operation alu sign al bus 3 acc selects acc destination bus3 thes e control sign als active time stay active long enough com plete bus transfer data enters acc rising edge clock pulse cont rol signals become inactive time required tra nsfer data un source estination alu regi ster transfe r time th e clock frequenc mus slowes register transfer accomplis hed cloc k period asc slowest transf er one invol ves add opera tion thus regist er transfer time dictates speed transfer bus hence processing speed cycle time processor note als content acc mbr alter ed unti l sum enters acc since alu com binational circui t accomm odate feedback operation acc must congur ed usin g maste r slave ipops similarly accommodate increment pc operation pc must also congured using masterslave ipops input output transfers performed two separate 16bit paths data input lines dil data output lines dol connected accumulator io scheme selected simplicity alternatively dil dol could connected one three buses sing lebus structure possibl e asc structure either one operands two opera nd operations result mus stored buff er register befor e transmitt ed destination register thus additional transf ers sing lebus struct ure operations take longer comple te thereby mak ing structure slower multibus struct ure transfer data instructions addresse bus structure cont rolled set control signals gener ated control unit machin e detailed design cont rol unit illustrate later chapt er", "url": "RV32ISPEC.pdf#segment211", "timestamp": "2023-10-17 20:15:35", "segment": "segment211", "image_urls": [], "Book": "computerorganization" }, { "section": "6.4 ARITH METIC AND LOGIC UNIT ", "content": "alu asc hardwar e performs ari thmetic logical operations th e instruction set implies alu asc must perform addition tw numbers compute 2s comple ment numb er shift content accumul 
ator either right left 1 bit addit ionally asc alu mus directly transfer either inputs outpu suppor data transf er operations ir mb r r ir assume cont rol unit machin e p rovides appro priate control signals enable alu perform one operatio ns since bus1 bus2 inputs bus3 outpu alu followi ng operations must b e perform ed alu add bus3 bus1 bus2 comp bus3 bus10 shr bus3 bus115 bus1 15 1 shl bus3 bus14 0 0 tra1 bus3 bus1 tra2 bus3 bus2 add comp shr shl tra1 tra2 control signals generated control unit bit positions buses numbered 15 0 left right control signals activates particular operation alu one control sign als may active time figur e 65 shows typical b alu connections bits bus1 bus2 bus3 function description alu follows corresponding control signals add addition circuitry consists fteen full adders one halfadder least signicant bit 0 sum output adder gated gate add control signal carry output adder input carryin adder next signicant bit halfadder bit 0 carryin carryout bit 15 carry ag bit psr accumulator contents bus1 added contents mbr bus2 result stored accumulator comp complement circuitry consists 16 gates one bit bus1 thus circuitry produces 1s complement number output gate gated gate comp control signal operand complement contents accumulator result stored accumulator tca 2s complement accumulator command accomplished taking 1s complement operand rst adding 1 result shr shifting bit pattern right bit bus1 routed next least signicant bit bus3 transfer gated shr control signal least signicant bit bus1 bus10 lost shifting process signicant bit bus315 next least signicant bit bus314 bus3 thus leftmost bit output sign lled shl shifting bit pattern left bit bus1 routed next signicant bit bus3 shl control signal used gate transfer signicant bit bus1 bus115 lost shifting process least signicant bit bus3 bus30 zero lled tra1 operation transfers bit bus1 corresponding bit bus3 bit gated tra1 signal tra2 operation transfers bit bus2 corresponding bit bus3 bit gated 
TRA2 control signal. The carry, negative, zero, and overflow bits of the PSR are set or reset by the ALU based on the contents of the ACC at the end of each operation involving the ACC. Figure 6.6 shows the circuits needed. The carry bit is simply the COUT of bit position 15. BUS3(15) forms the negative bit. The zero bit is set when all the bits of BUS3 are zero. Overflow occurs when the sum exceeds 2^15 - 1 in an addition; it is detected by observing the sign bits of the operands and the result: overflow occurs if the sign bits of both operands are 1 and the sign bit of the result is 0, or vice versa. Also, in SHL, if the sign bit changes, an overflow results; this is detected by comparing BUS1(14) with BUS1(15). Note that the PSR bits are updated simultaneously with the updating of the ACC with the contents resulting from each operation. Figure 6.6 also shows the circuit to derive the accumulator-positive condition (i.e., ACC neither negative nor zero) required during the execution of the BIP instruction. The interrupt bit of the PSR is set or reset by the interrupt logic; interrupts are discussed in Chapter 7.", "url": "RV32ISPEC.pdf#segment212", "timestamp": "2023-10-17 20:15:36", "segment": "segment212", "image_urls": [], "Book": "computerorganization" }, { "section": "6.5 INPUT/OUTPUT ", "content": "ASC is assumed to have one input device and one output device. These functions could well be performed by a terminal, with the keyboard as the input and the display or printer as the output. We assume that the input and output devices transfer 16-bit data words to and from the ACC, respectively. With this as the base, we design the simplest I/O scheme, called programmed I/O. The ALU and the control unit together form the central processing unit (CPU), also referred to as the processor. In the programmed I/O scheme, during the execution of an RWD instruction the CPU commands the input device to send a data word and waits; the input device gathers the data from the input medium into its data buffer and, when ready, informs the CPU that the data are ready; the CPU then gates the data from the DILs into the ACC. On a WWD, the CPU gates the ACC contents onto the DOLs and commands the output device to accept the data, then waits until the data are gated into the data buffer of the output device and the output device informs the CPU of the data acceptance; the CPU then proceeds to execute the next instruction in sequence. The sequence of operations described above is known as the data communication protocol (or handshake) between the CPU and the peripheral device. A DATA flip-flop in the control unit is used to facilitate the I/O handshake. The RWD and WWD protocols are described in detail below. 1. RWD: (a) The CPU resets the DATA flip-flop. (b) The CPU sends a 1 on the input control line, thus commanding the input device to send data. (c) The CPU waits for the 
DATA flip-flop to be set by the input device. (d) The input device gathers the data into its data buffer, gates it onto the DILs, and sets the DATA flip-flop. (e) The CPU gates the DILs into the ACC, resets the input control line, and resumes instruction execution. 2. WWD: (a) The CPU resets the DATA flip-flop. (b) The CPU gates the ACC onto the DOLs and sends a 1 on the output control line, thus commanding the output device to accept the data. (c) The CPU waits for the DATA flip-flop to be set by the output device. (d) The output device, when ready, gates the DOLs into its data buffer and sets the DATA flip-flop. (e) The CPU resets the output control line, removes the data from the DOLs, and resumes instruction execution. As seen from the preceding protocol, the CPU is in control of the complete I/O process and waits for the input or output to occur. Since I/O devices are much slower than the CPU, the CPU idles while waiting for the data; the programmed I/O scheme is therefore the slowest of the I/O schemes used in practice. It is, however, simple in design and generally has low overhead in terms of the I/O protocol needed, especially when small amounts of data are transferred. Also note that this scheme is adequate only in an environment where the CPU cannot be allocated to any other task while the data transfer is taking place. Chapter 7 provides details of other popular I/O schemes and generalizes the ASC I/O scheme described in this chapter.", "url": "RV32ISPEC.pdf#segment213", "timestamp": "2023-10-17 20:15:36", "segment": "segment213", "image_urls": [], "Book": "computerorganization" }, { "section": "6.6 CONTROL UNIT ", "content": "The control unit is the most complex block of computer hardware from a designer's point of view. Its function is to generate the control signals needed by the other blocks of the machine in a predetermined sequence, to bring about the sequence of actions called for by each instruction. Figure 6.7 shows a block diagram of the ASC control unit and lists its external and internal inputs and its outputs (i.e., the control signals produced). The inputs to the control unit are: 1. the opcode, indirect bit, and index flag from the IR; 2. the contents of the PSR; 3. index register bits (15:0), to test for a zero or nonzero index register in the TIX and TDX instructions. In addition to these inputs, the control signals generated by the control unit are functions of the contents of the following: 1. the DATA flip-flop, used to facilitate the handshake between the CPU and the I/O devices; 2. the RUN flip-flop, set by the start switch on the console (see Section 6.7), which indicates the run state of the machine; the RUN flip-flop must be set for the control signals to activate any microoperation, and it is reset by the HLT instruction; 3. the state register, a 2-bit register used to distinguish the three phases (states) of the instruction cycle. 
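The three-phase behaviour just described can be viewed as a small finite-state machine. The following sketch uses the state-register encodings given in the text (F = 00, D = 01, E = 10); the function and parameter names are illustrative only, not from the book:

```python
# Sketch of the ASC control unit viewed as a three-state sequential
# circuit (fetch F, defer D, execute E).  The 2-bit encodings follow the
# text; all identifiers here are hypothetical, for illustration only.
F, D, E = '00', '01', '10'

def next_state(state, indirect=False, done_in_fetch=False, io_wait=False):
    # Transition taken at minor cycle CP4 of each machine cycle.
    if state == F:
        if done_in_fetch:             # e.g. SHR, SHL, HLT complete during fetch
            return F
        return D if indirect else E   # IR(10) = 1 selects the defer phase
    if state == D:
        return E                      # defer always falls through to execute
    if state == E:
        return E if io_wait else F    # RWD/WWD loop until the DATA flip-flop is set
    raise ValueError('invalid state encoding')
```

For example, next_state(F, indirect=True) returns D, matching the case IR(10) = 1.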
The control unit can thus be viewed as a three-state sequential circuit.", "url": "RV32ISPEC.pdf#segment214", "timestamp": "2023-10-17 20:15:36", "segment": "segment214", "image_urls": [], "Book": "computerorganization" }, { "section": "6.6.1 Types of Control Units ", "content": "As mentioned earlier, the function of the control unit is to generate the control signals in the appropriate sequence to bring about the instruction cycle corresponding to each instruction in the program. In ASC, an instruction cycle consists of up to three phases, and each phase of the instruction cycle is composed of a sequence of microoperations. A microoperation is one of the following: 1. a simple register transfer operation: the transfer of the contents of one register into another register; 2. a complex register transfer involving the ALU: the transfer of the complement of the contents of a register, the sum of the contents of two registers, etc., into a destination register; 3. a memory read or write operation. Thus, each machine instruction is composed of a sequence of microoperations, i.e., a register transfer sequence; we use the terms register transfer and microoperation interchangeably. There are two popular implementation methods for a control unit (CU): 1. In a hardwired control unit (HCU), the outputs (i.e., the control signals) of the CU are generated by logic circuitry built out of gates and flip-flops. 2. In a microprogrammed control unit (MCU), the sequence of microoperations corresponding to each machine instruction is stored in a read-only memory called the control ROM (CROM). Such a sequence of microoperations is called a microprogram. A microprogram consists of microinstructions, and each microinstruction corresponds to one or more microoperations; depending on the CROM storage format, the control signals are generated by decoding the microinstructions. The MCU scheme is more flexible than the HCU scheme, because the meaning of an instruction can be changed by changing the microinstruction sequence corresponding to that instruction, and the instruction set can be extended simply by including a new ROM containing the corresponding microoperation sequences; the hardware changes to the control unit are thus minimal. In an HCU implementation, a change to the instruction set requires a substantial change to the hardwired logic. HCUs, however, are generally faster than MCUs and are used where the control unit must be fast. Most recent machines have microprogrammed control units. Among such machines, the degree to which the 
microprogram can be changed by the user varies from machine to machine. Some machines do not allow the user to change the microprogram; some allow partial changes and additions (e.g., machines with a writable control store for part of the instruction set); others allow the user to microprogram the complete instruction set suitable for the application. The latter type of machine is called a soft machine. We design an HCU for ASC in this section; an MCU design is provided in Section 6.8.", "url": "RV32ISPEC.pdf#segment215", "timestamp": "2023-10-17 20:15:36", "segment": "segment215", "image_urls": [], "Book": "computerorganization" }, { "section": "6.6.2 Hardwired Control Unit for ASC ", "content": "An HCU can be either synchronous or asynchronous. In a synchronous CU, each operation is controlled by a clock, and the control-unit state can easily be determined by knowing the state of the clock. In an asynchronous CU, the completion of one operation triggers the next, and hence no clock exists. Because of the nature of the design, an asynchronous CU is complex, but if designed properly it can be made faster than a synchronous CU. In a synchronous CU, the clock frequency must be such that the time between two clock pulses is sufficient to allow the completion of the slowest microoperation; this characteristic makes a synchronous CU relatively slow. We design a synchronous CU for ASC.", "url": "RV32ISPEC.pdf#segment216", "timestamp": "2023-10-17 20:15:36", "segment": "segment216", "image_urls": [], "Book": "computerorganization" }, { "section": "6.6. 
3 Memory versus Processor Speed ", "content": "The memory hardware is usually slower than the CPU hardware, although the speed gap is narrowing with advances in hardware technology. Memory organizations that help reduce this speed gap are described in Chapter 9. We assume a semiconductor RAM for ASC, with an access time equal to two register transfer times. Thus, for a memory read, the address is gated into the MAR along with the READ control signal, and the data are available in the MBR at the end of the next register transfer time. Similarly, for a memory write, the data and address are provided in the MBR and MAR, respectively, along with the WRITE control signal, and the memory completes writing the data at the end of the second register transfer time. These characteristics are shown in Figure 6.8. Note that the contents of the MAR must not be altered until the read or write operation is completed.", "url": "RV32ISPEC.pdf#segment217", "timestamp": "2023-10-17 20:15:36", "segment": "segment217", "image_urls": [], "Book": "computerorganization" }, { "section": "6.6.4 Machine Cycles ", "content": "In a synchronous CU, the time between two clock pulses (the register transfer time) is determined by the slowest register transfer operation; in the case of ASC, the slowest register transfer is the one that involves the adder of the ALU. The register transfer time of the processor is known as the processor cycle time, or minor cycle. A major cycle of the processor consists of several minor cycles; major cycles with either a fixed or a variable number of minor cycles are used in practical machines, and an instruction cycle typically consumes one or more major cycles. To determine how long a major cycle needs to be, we examine the fetch, address calculation, and execute phases in detail. The microoperations required during the fetch phase are allocated minor cycles as follows: the read-memory signal is issued at T1, and the fetched instruction is available in the MBR at the end of T2, since a memory read operation requires two minor cycles. The MAR must not be altered during T2, but the bus structure can be used during this time to perform other operations; thus the increment of the PC is performed in this time slot, and no extra time slot is required for it in the fetch phase. At the end of T3, the instruction is available in the IR, and the opcode is then decoded by two 4-to-16 decoders with active-high outputs connected to the IR, as shown in Figure 6.9. To make the HCU design specific to the 16 instructions in the instruction set, only one of the two decoders is utilized in this design; the bottom decoder is for an extended instruction set (i.e., IR(11) = 1). If the instruction set is extended, a change is needed in only 
one part of the HCU. If the instruction in the IR is a zero-address instruction, the machine can proceed to the execute cycle; otherwise, the effective address must be computed. To facilitate the address computation, a 2-to-4 decoder is connected to the index flag bits of the IR, as shown in Figure 6.9. The outputs of this decoder are used to connect the referenced index register to BUS2. Note that if the index flag is 00, the corresponding index register referencing is absent: none of the three index registers is connected to BUS2, and hence the bus lines are all zero. Figure", "url": "RV32ISPEC.pdf#segment218", "timestamp": "2023-10-17 20:15:37", "segment": "segment218", "image_urls": [], "Book": "computerorganization" }, { "section": "6.10 ", "content": "shows the circuits needed to direct the data on BUS3 into the selected index register, along with the generation of the ZERO-DEX signal (0 indicates that the selected index register contains zero), required during the TIX and TDX instructions. In the following, INDEX refers to the index register selected by the circuit of Figure", "url": "RV32ISPEC.pdf#segment219", "timestamp": "2023-10-17 20:15:37", "segment": "segment219", "image_urls": [], "Book": "computerorganization" }, { "section": "6.10. ", "content": "Thus, indexing can be performed during T4: T4: MAR <- 00000000 C IR(7:0) + INDEX, where C is the concatenation operator. Indexing at T4 is not needed for zero-address instructions; but since the assembler inserts 0s into the unused fields of zero-address instructions, the indexing operation at T4 does not affect the execution of those instructions in any way. Also note that indexing is not allowed in the LDX, STX, TIX, and TDX instructions; the microoperations at T4 thus need to be altered for these instructions. From the opcode assignment, it is seen that indexing is not required for instructions with opcodes in the range 10000 through 11110; hence the microoperations at T4 use the MSB of the opcode, IR(15), to inhibit indexing for the index-reference instructions: T4: IF IR(15) = 0 THEN MAR <- IR(7:0) + INDEX ELSE MAR <- IR(7:0). The fetch phase thus consists of four minor cycles and accomplishes the direct or indexed address computation, as called for; the effective address is in the MAR at the end of the fetch phase. The first three minor cycles of the fetch phase are the same for all 16 instructions; the fourth minor cycle differs, since zero-address instructions need no address computation. If an indirect address is called for, the machine enters the defer phase, where a memory read is initiated and the result of the read operation is transferred into the MAR to form the effective address. We can thus 
redefine the functions of the three phases of the ASC instruction cycle as follows: FETCH includes the direct and indexed address calculation; DEFER is entered only if indirect addressing is called for; EXECUTE is unique to each instruction. Since the address, indexed if needed, is in the MAR at the end of the fetch phase, the defer phase can use the following time allocation: T1: READ memory; T2: WAIT; T3: MAR <- MBR; T4: no operation. Although only three minor cycles are needed for the defer operations, the defer phase is made four minor cycles long for simplicity, since the fetch phase is four minor cycles long. This assumption results in some inefficiency; if T4 of the defer phase were used to perform microoperations needed in the execute phase of the instruction, the control unit would be more efficient, but its complexity would increase. The execute phase differs for each instruction. Assuming a major cycle of four minor cycles, the LDA instruction requires the following time allocation: T1: READ memory; T2: WAIT; T3: ACC <- MBR; T4: no operation. In ASC, if the execution of an instruction cannot be completed in one major cycle, additional major cycles must be allocated to the instruction. This simplifies the design of the HCU but results in some inefficiency, since not all the minor cycles of an additional major cycle may be utilized by the instruction. The alternative would be to make the major cycles variable in length, which would complicate the design. We design the HCU as a synchronous circuit with three states corresponding to the three phases of the instruction cycle, analyze the microoperations needed for each ASC instruction, and allocate whatever number of machine (major) cycles each requires, maintaining the machine cycles at a constant length of four minor cycles for simplicity. For zero-address instructions, no address calculation is required, so we use the four minor cycles of the fetch state to perform the execution-phase microoperations where possible. An ASC instruction cycle thus consists of: one machine cycle for zero-address instructions when enough time is left in the fetch machine cycle to complete the execution of the instruction; two machine cycles for the remaining zero-address instructions and for single-address instructions without indirect addressing; three cycles for single-address instructions with indirect addressing; and multiple machine cycles, depending on the I/O wait time, for I/O instructions. Table 6.2 lists the complete set of microoperations for the LDA instruction, along with the control signals needed to activate those microoperations. Each set of control signals that activates a microoperation must be generated simultaneously by the control unit. 
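The cycle counts listed above can be tallied in a short sketch (a hedged illustration: the classification labels and the function itself are ours, not the book's, and the I/O wait is left as a parameter since it depends on device speed):

```python
# Machine-cycle count per ASC instruction class, following the breakdown
# in the text (each machine cycle is four minor cycles).  The class
# names below are invented for this sketch.
def machine_cycles(kind, indirect=False, io_wait=0):
    if kind == 'zero_fetch':     # e.g. SHR, SHL, HLT: completed in the fetch cycle
        return 1
    if kind == 'zero':           # e.g. TCA: needs a separate execute cycle
        return 2
    if kind == 'single':         # e.g. LDA: fetch + execute, plus defer if indirect
        return 3 if indirect else 2
    if kind == 'io':             # RWD, WWD: the execute state repeats while waiting
        return 2 + io_wait
    raise ValueError('unknown instruction class')
```

So an indirectly addressed LDA costs three machine cycles (fetch, defer, execute), while a HLT costs one.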
Note that microoperations and control signals that are simultaneous are separated by a comma; a period indicates the end of such a set of operations and signals. Conditional microoperations and signals are represented using the notation IF condition THEN operation ELSE operation, where the ELSE clause is optional and gives the alternate set of operations required when the condition is not true. Transitions among the fetch (F), defer (D), and execute (E) states are allowed only at minor cycle 4 (CP4) of a machine cycle; to retain the simplicity of the design, a complete machine cycle is used even when a particular set of operations and state transitions could occur earlier in the machine cycle, since otherwise the state transition circuit would be more complex. The state transitions are represented by the operations STATE <- E, STATE <- D, and STATE <- F; these are equivalent to transferring the codes corresponding to the states into the state register. We use the following coding: code 00 for state F, 01 for D, and 10 for E; 11 is not used. Note that at CP2 of the fetch cycle, a constant register containing 1 is needed to facilitate the increment-PC operation; this constant register is connected to BUS2 by the signal 1->BUS2. At CP4 of the fetch cycle, indexing is controlled by IR(15), and indirect address computation is controlled by IR(10): the state transition is to either defer (D) or execute (E), depending on whether IR(10) is 1 or 0, respectively. We now analyze the remaining instructions to derive the complete set of control signals required for ASC.", "url": "RV32ISPEC.pdf#segment220", "timestamp": "2023-10-17 20:15:37", "segment": "segment220", "image_urls": [], "Book": "computerorganization" }, { "section": "6.6.5 One-Address Instructions ", "content": "The fetch and defer states are identical to the ones shown in Table 6.2 for all one-address instructions. Table 6.3 lists the execute-phase microprograms for these instructions.", "url": "RV32ISPEC.pdf#segment221", "timestamp": "2023-10-17 20:15:37", "segment": "segment221", "image_urls": [], "Book": "computerorganization" }, { "section": "6.6. 
6 Zero-Address Instructions ", "content": "The microoperations during the first three minor cycles of the fetch cycle are similar to those of LDA for all zero-address instructions. Also, since these instructions need no address computation, some execution-cycle operations can be performed during the fourth minor cycle, and an execution cycle is needed only when these are not sufficient. The microoperations for the zero-address instructions are listed in Table 6.4. TCA is performed in two steps and hence needs an execution cycle, while SHR, SHL, and HLT are completed in the fetch cycle.", "url": "RV32ISPEC.pdf#segment222", "timestamp": "2023-10-17 20:15:37", "segment": "segment222", "image_urls": [], "Book": "computerorganization" }, { "section": "6.6.7 Input/Output Instructions ", "content": "During RWD and WWD, the processor waits at the end of the execute cycle, checking to see whether the incoming data are ready (DATA = 1) or the outgoing data have been accepted (DATA = 1). The transition to the fetch state occurs only when these conditions are satisfied; the processor wait-loops in the execute state until the conditions are met. Hence, the number of cycles needed by these instructions depends on the speeds of the I/O devices. Table 6.5 lists the control signals implied by the microoperations of Tables 6.3 and 6.4. The logic diagrams to implement the HCU can be derived from the control-signal information of Tables 6.2 through 6.5. Figure 6.11 shows the implementation of the four-phase clock used to generate CP1, CP2, CP3, and CP4; a 4-bit shift register is used in this implementation. The master clock oscillator starts emitting clock pulses as soon as the power to the machine is turned on. When the start button on the console (discussed in the next section) is pushed, the RUN flip-flop and the MSB of the shift register are set to 1. Each master clock pulse is then used to circulate this 1 from the MSB of the shift register through the bits to its right, generating the four clock pulses in sequence; the generation of these pulses continues as long as the RUN flip-flop is set. The HLT instruction resets the RUN flip-flop, thus stopping the four-phase clock; the master clear button on the console clears all the flip-flops. A 2-to-4 decoder is used to generate the F, D, and E signals corresponding to the fetch (00), defer (01), and execute (10) states. Figure", "url": "RV32ISPEC.pdf#segment223", "timestamp": "2023-10-17 20:15:37", "segment": "segment223", "image_urls": [], "Book": "computerorganization" }, { "section": "6.12 ", "content": "shows the state-change circuitry and its derivation, assuming flip-flops as the memory elements. The CP4 pulse is used to clock the 
state register. The state register, along with the transition circuits, is shown in Figure", "url": "RV32ISPEC.pdf#segment224", "timestamp": "2023-10-17 20:15:37", "segment": "segment224", "image_urls": [], "Book": "computerorganization" }, { "section": "6.12. ", "content": "Figure", "url": "RV32ISPEC.pdf#segment225", "timestamp": "2023-10-17 20:15:37", "segment": "segment225", "image_urls": [], "Book": "computerorganization" }, { "section": "6.13 ", "content": "shows the circuits needed to implement the first three minor cycles of fetch; the fourth minor cycle of fetch is implemented by the circuit of Figure", "url": "RV32ISPEC.pdf#segment226", "timestamp": "2023-10-17 20:15:37", "segment": "segment226", "image_urls": [], "Book": "computerorganization" }, { "section": "6.14. ", "content": "The RUN flip-flop is reset if the IR contains the HLT instruction. Indexing is performed if IR(15) = 0 and the instruction is not TCA or HLT; since resetting the RUN flip-flop overrides any other operation, it is sufficient to implement the remaining condition (IR(15) = 0 and not TCA). The indexing condition is handled by a gate. For the four index-register-reference instructions, the address portion of the IR is transferred into the MAR without indexing; the control signals corresponding to these four instructions are similarly implemented. Figure", "url": "RV32ISPEC.pdf#segment227", "timestamp": "2023-10-17 20:15:38", "segment": "segment227", "image_urls": [], "Book": "computerorganization" }, { "section": "6.15 ", "content": "shows the control circuits for the defer cycle, and Figure", "url": "RV32ISPEC.pdf#segment228", "timestamp": "2023-10-17 20:15:38", "segment": "segment228", "image_urls": [], "Book": "computerorganization" }, { "section": "6.16 ", "content": "those for the execute cycle. The state transition signals of Figure 6.12 are repeated in these circuit diagrams. No logic minimization was attempted in deriving these circuits.", "url": "RV32ISPEC.pdf#segment229", "timestamp": "2023-10-17 20:15:38", "segment": "segment229", "image_urls": [], "Book": "computerorganization" }, { "section": "6.7 CONSOLE ", "content": "We have included the design of the console for completeness; this section can be skipped without loss of continuity. We do not consider the 
the console operation, however, in the MCU design of the next section. Figure 6.17 shows the ASC console. The console (control panel) enables the operator to control the machine. It is used for loading programs and data into memory, for observing the contents of registers and memory locations, and for starting and stopping the machine. The 16 lights (monitors) on the console display the contents of the selected register or memory location. A switch bank consisting of 16 two-position switches is used for loading a 16-bit pattern into either the PC or the selected memory location. There are two load switches, LOAD PC and LOAD MEM. When LOAD PC is pushed, the bit pattern set on the switch bank is transferred into PC; when LOAD MEM is pushed, the contents of the switch bank are transferred into the memory location addressed by PC. A set of display switches enables the display of the contents of ACC, PC, IR, the index registers, PSR, or the memory location addressed by PC. The display and load switches are pushbutton switches mechanically ganged together so that only one of the switches is operative at any time; the switch previously pushed pops out when a new switch is pushed. The MASTER CLEAR switch clears all the ASC registers, and the START switch sets the RUN flip-flop, which in turn starts the four-phase clock. There is also a power on/off switch on the console. Each console switch invokes a sequence of microoperations. Typical operations possible using the console, along with the sequences of microoperations they invoke, are as follows. Loading PC: set the switch bank with the required 16-bit pattern; LOAD PC: PC ← switch bank. Loading memory: load PC with the memory address and set the switch bank with the data to be loaded into that memory location; LOAD MEM: MAR ← PC, MBR ← switch bank, WRITE. Display registers: the register contents are displayed on the monitor lights by pushing the corresponding display switch; MONITOR ← selected register. Display memory: set PC with the memory address; DISPLAY MEM: MAR ← PC, READ, PC ← PC + 1, MONITOR ← MBR. In loading and displaying memory, the PC value is incremented by 1 at the end of the load or display; this enables easier loading and monitoring of consecutive memory locations. Executing a program: the program and data are first loaded into memory, PC is set to the address of the first executable instruction, and execution is started by pushing the START switch. Since the content of a register or memory location to be displayed is placed on BUS3, a 16-bit monitor register is connected to BUS3, and the lights on the console display the contents of this register. The switch bank is connected to BUS2. Figure 6.18 shows the control circuitry needed for DISPLAY MEMORY.
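The console microoperation sequences listed above can be sketched as a small register-transfer simulation. This is a minimal illustration, assuming a dict-based machine state and a 16-word memory for brevity; the transfer sequences themselves (LOAD PC, LOAD MEM, DISPLAY MEM, with PC incremented at the end of a load or display) follow the text.

```python
# Sketch of the ASC console microoperation sequences described above.
# The dict-based state and 16-word memory are illustrative assumptions.

def load_pc(state, switch_bank):
    state["PC"] = switch_bank                  # PC <- switch bank

def load_mem(state, switch_bank):
    state["MAR"] = state["PC"]                 # MAR <- PC
    state["MBR"] = switch_bank                 # MBR <- switch bank
    state["MEM"][state["MAR"]] = state["MBR"]  # WRITE
    state["PC"] += 1                           # PC incremented at end of load

def display_mem(state):
    state["MAR"] = state["PC"]                 # MAR <- PC
    state["MBR"] = state["MEM"][state["MAR"]]  # READ
    state["PC"] += 1                           # PC <- PC + 1
    state["MONITOR"] = state["MBR"]            # monitor lights show MBR

state = {"PC": 0, "MAR": 0, "MBR": 0, "MONITOR": 0, "MEM": [0] * 16}
load_pc(state, 5)        # point PC at location 5
load_mem(state, 0x1234)  # store the switch-bank pattern there (PC becomes 6)
load_pc(state, 5)        # point back at the stored word
display_mem(state)
print(hex(state["MONITOR"]))
```

The automatic PC increment is what makes loading or monitoring a run of consecutive locations convenient: the operator only sets the switch bank and pushes the same switch repeatedly.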
The console is active when the RUN flip-flop is reset. When one of the load or display switches is depressed, the CONSOLE ACTIVE flip-flop is set for a time period equal to three clock pulses, since each of the console functions can be completed in three clock-pulse times. The RUN flip-flop is also set during this period, thus enabling the clock; the clock circuitry is active just long enough to give three pulses and is then deactivated by resetting the RUN flip-flop. The START/STOP switch complements the RUN flip-flop each time it is depressed, thus starting or stopping the clock. Note that, except for the START/STOP switch, the console is inactive while the machine is running. Circuits to generate the control signals corresponding to each load and display switch must be included to complete the design. The console designed here is simple compared with the consoles of machines available commercially.", "url": "RV32ISPEC.pdf#segment230", "timestamp": "2023-10-17 20:15:38", "segment": "segment230", "image_urls": [], "Book": "computerorganization" }, { "section": "6.8 MICROPROGRAMMED CONTROL UNIT ", "content": "a hardwired control unit requires an extensive redesign of the hardware if the instruction set is expanded or the function of an instruction is changed. In practice, a flexible control unit is desired to enable tailoring the instruction set to the application environment. A microprogrammed control unit (MCU) offers such flexibility. In an MCU, microprograms corresponding to each instruction in the instruction set are stored in a ROM called the control ROM (CROM). The microcontrol unit of the MCU executes the appropriate microprogram based on the instruction in IR. The execution of a microinstruction is equivalent to the generation of the control signals that bring about the microoperation. The microcontrol unit is usually hardwired and is comparatively simple to design, since its only function is to execute the microprograms in the CROM. Figure", "url": "RV32ISPEC.pdf#segment231", "timestamp": "2023-10-17 20:15:38", "segment": "segment231", "image_urls": [], "Book": "computerorganization" }, { "section": "6.19 ", "content": "shows the block diagram of an MCU. The microprograms corresponding to the fetch, defer, and execute cycles of each instruction are stored in the CROM. The beginning address of the fetch sequence is loaded into the µMAR when the power is turned on. The CROM transfers the first microinstruction of fetch into the µMBR. The microcontrol unit decodes this microinstruction and generates the control signals required to bring about that microoperation. The µMAR is normally incremented by 1 at each clock pulse to execute the next microinstruction in sequence. This sequential execution is altered at the end of the fetch microprogram, since the execution of the microprogram corresponding to the execute cycle of the instruction now residing in IR must be started. Hence, the µMAR must be set to the CROM address where the appropriate execute microprogram begins. At the end of the execution of the execute microprogram, control is transferred back to the fetch sequence. The function of the MCU is thus to set the µMAR to the proper value, either the current value incremented by 1 or a jump address, depending on the opcode and the status signals (ZERO-INDEX, N, Z, etc.). For illustration purposes, a typical microinstruction might be PC ← PC + 1. When this microinstruction is brought into the µMBR, the decoder circuits generate the following control signals: PC to BUS1, 1 to BUS2, ADD, BUS3 to PC; the µMAR is then incremented by 1. We describe the design of an MCU for ASC in the following section.", "url": "RV32ISPEC.pdf#segment232", "timestamp": "2023-10-17 20:15:38", "segment": "segment232", "image_urls": [], "Book": "computerorganization" }, { "section": "6.8.1 MCU for ASC ", "content": "Table", "url": "RV32ISPEC.pdf#segment233", "timestamp": "2023-10-17 20:15:38", "segment": "segment233", "image_urls": [], "Book": "computerorganization" }, { "section": "6.6 ", "content": "shows the complete microprogram for ASC. The microprogram resembles a high-level language program. It consists of executable microinstructions, which produce control signals, and control microinstructions, which change the sequence of execution of the microprogram based on conditions. The first 32 microinstructions are merely jumps to the microinstruction sequences corresponding to the 32 possible instructions of ASC. Location 0 contains an infinite loop corresponding to the execution of the HLT instruction; the START switch on the console takes ASC out of this loop. When the START switch is depressed, the microprogram begins execution at location 32, where the fetch sequence begins. As soon as an instruction is fetched into IR, the microprogram jumps to one of the first 32 locations based on the opcode in IR, which in turn jumps to the appropriate execution sequence. The indexing and indirect-address computations are part of the execution sequence of the instructions that use those addressing modes. As in the HCU, the index register is selected by the IR decoder circuit; an index flag of 00 means that no index register is selected. The microinstruction sequence corresponding to
the indirect-address computation (i.e., locations 38-40) is repeated for each instruction that needs it, for simplicity. Alternatively, this sequence could be written as a subprogram that is called from each instruction sequence as needed. At the end of each instruction's execution, the microprogram returns to the fetch sequence. To represent the microprogram in the ROM, the microprogram must be converted into binary, a process similar to an assembler producing object code. There are two types of instructions in the microprogram, as mentioned earlier. To distinguish between the two types, we will use a microinstruction format with a 1-bit micro-opcode. A micro-opcode of 0 indicates type 0 instructions, which produce control signals, while a micro-opcode of 1 indicates type 1 instructions, which are jumps that control the microprogram flow. The formats of the two types of microinstructions are shown later. If each bit of a type 0 instruction were used to represent a control signal, then whenever a microinstruction is brought into the µMBR, each nonzero bit would produce the corresponding control signal. This organization would need no decoding of the microinstruction; its disadvantage is that the microinstruction word becomes long, thereby requiring a large CROM. One way to reduce the microinstruction word length is to encode the signals into several fields in the microinstruction, wherein each field represents control signals that are not required to be generated simultaneously; each field is then decoded to generate the control signals. An obvious signal-encoding scheme for ASC is shown in Figure 6.20. The control signals are partitioned based on the buses they are associated with (BUS1, BUS2, and BUS3) and the ALU control signals, since these four facilities need to be controlled simultaneously. The listing shows that these fields require three bits each to represent their signals. Figure 6.20(b) classifies the remaining control signals into four fields of 2 bits each. Thus, an ASC type 0 instruction would be 21 bits long. An analysis of the microprogram in Table 6.6 shows eight different branching conditions. These conditions are shown in Figure 6.21; a 3-bit field is required to represent them, and the coding of this 3-bit field is also shown in Figure 6.21. Since there is a total of 119 microinstructions, a 7-bit address is needed to completely address the CROM words. Thus, type 1 microinstructions can be represented in 11 bits. To retain uniformity, we use the 21-bit format for type 1 instructions also; both formats are shown in Figure 6.21(b). The microprogram of Table 6.6 must then be encoded using the instruction formats shown in Figures 6.20 and 6.21; Figure 6.22 shows some examples of this encoding. The hardware structure needed to execute the microprogram is shown in Figure 6.23. The µMAR is first set to point to the beginning of the fetch sequence (i.e., address 32). At each clock pulse, the CROM transfers the microinstruction at the address pointed to by the µMAR into the µMBR. If the first bit of the µMBR is 0, the control decoders are activated to generate the control signals, and the µMAR is incremented by 1. When the STOP switch is activated, the RUN flip-flop is reset. Similarly, corresponding to the LOAD PC switch, the following microprogram is required: 120 LOAD PC: PC ← switch bank; 121: GO TO 0. With the activation of the LOAD PC switch, the address 120 is gated into the µMAR and the clock is started; at the end of LOAD PC, the machine goes to the halt state. The detailed design of the console circuits for the MCU is left as an exercise. The time required to execute an instruction is a function of the number of microinstructions in the corresponding sequence. The MCU returns to fetch a new instruction as soon as an instruction sequence is executed, whereas the hardwired HCU waited until CP4 for each state change. The overall operation, however, is slower than that of the HCU, since each microinstruction must be retrieved from the CROM; the time required between two clock pulses is thus equal to the slowest register transfer time plus the CROM access time. The function of an instruction can easily be altered by changing the microprogram. This requires a new CROM, but no other hardware changes are needed. This attribute of the MCU is used in the emulation of computers, whereby an available machine (the host) is made to execute another (target) machine's instructions by changing the control store of the host. In practice, the CROM size is a design factor. The length of the microprogram must be minimized to reduce the CROM size, thereby reducing the cost of the MCU; this requires that each microinstruction contain as many microoperations as possible, which increases the CROM word size and in turn the cost of the MCU. A compromise is thus needed. In the ASC MCU, each microinstruction is encoded, and the number of encoded fields per instruction is kept low to reduce the CROM width. If a microoperation spans more than one CROM word, facilities are needed to buffer the CROM words corresponding to the microoperation so that the required control signals can be generated simultaneously. In a horizontally microprogrammed machine, each bit of the CROM word corresponds to a control signal, with no encoding of bits; hence, the execution of the microprogram is fast, since no decoding is necessary, but the CROM words are wide. In a vertically microprogrammed machine, the bits of the CROM word are encoded to represent the control signals. This results in shorter words, but the execution of microprograms becomes slow owing to the decoding requirements. We conclude this section with the following procedure for designing an MCU, once the bus structure of the machine is established: 1. Arrange the microinstruction sequences into one complete program (the microprogram). 2. Determine the microinstruction format. A generalized format has condition, control-signal, and branch-address fields; this format might result in less efficient usage of the microword, since the branch-address field is not used in microinstructions that generate control signals and simply increment the µMAR by 1 to point to the next microinstruction. 3. Count the total number of control signals to be generated. If this number is large, thus making the microinstruction word length large, partition the control signals into distinct groups such that only one of the control signals in a group is generated at a given time; control signals that need to be generated simultaneously are placed in separate groups. Assign a field in the control-signal portion of the microinstruction format to each group and encode it. 4. Based on the encoding of step 3, determine the µMAR size and the total number of microinstructions needed. 5. Load the microprogram into the CROM. 6. Design the circuit to initialize and update the contents of the µMAR.", "url": "RV32ISPEC.pdf#segment234", "timestamp": "2023-10-17 20:15:39", "segment": "segment234", "image_urls": [], "Book": "computerorganization" }, { "section": "6.9 SUMMARY ", "content": "the detailed design of ASC provided here illustrates the sequence of steps in the design of a digital computer. In practice, the chronological order shown is not strictly followed; several iterations between the steps are needed before the design cycle is complete and a design is obtained. Several architectural and performance issues arise at each step of the design process. We have ignored such issues in this chapter and address them in subsequent chapters of the book. This chapter also introduced the design of microprogrammed control units; almost all modern-day computers use MCUs, for the flexibility needed to update processors quickly to meet market needs.", "url": "RV32ISPEC.pdf#segment235", "timestamp": "2023-10-17 20:15:39", "segment": "segment235", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 7 ", "content": "Input/Output. In the design of the simple computer ASC, we assumed one input device and one output device transferring data
into and out of the accumulator using the programmed I/O mode. An actual computer system, however, consists of several input/output devices (peripherals). Although the programmed I/O mode can be used in such an environment, it is slow and may not be suitable, especially when the machine is used as a real-time processor responding to irregular changes in the external environment. Consider, for example, a processor used to monitor the condition of a patient in a hospital. Although the majority of the patient data-gathering operations can be performed in the programmed I/O mode, alarming conditions such as abnormal blood pressure or temperature occur irregularly, and the detection of such events requires that the processor be interrupted by the event from its regular activity. We discuss the general concept of interrupt processing and interrupt-driven I/O in this chapter. The transfer of information between the processor and a peripheral consists of the following steps: 1. selection of the device and checking its readiness; 2. transfer initiation, when the device is ready; 3. information transfer; 4. conclusion. These steps can be controlled by the processor or the peripheral, contingent upon where the transfer control is located. Three modes of I/O are possible: 1. programmed I/O; 2. interrupt mode I/O; 3. direct memory access (DMA). We discuss these modes in detail in this chapter; following a discussion of a general I/O model, pertinent details of the I/O structures of popular computer systems are provided as examples.", "url": "RV32ISPEC.pdf#segment236", "timestamp": "2023-10-17 20:15:39", "segment": "segment236", "image_urls": [], "Book": "computerorganization" }, { "section": "7.1 GENERAL I/O MODEL ", "content": "the I/O structure of ASC has one input device and one output device, as shown in Figure 7.1. ASC communicates with its peripherals through the data-input lines (DIL) and data-output lines (DOL) and the two control lines INPUT and OUTPUT; the DATA flip-flop in the control unit is used to coordinate the I/O activities. Figure 7.2 shows a generalization of the ASC I/O structure to include multiple I/O devices. To address multiple devices, a device number (address) is needed. The device address can be represented by the 8-bit operand field of the RWD and WWD instructions; in fact, since the index and indirect fields of these instructions are not used, device addresses can be as large as 11 bits. In Figure 7.2 it is assumed that 4 bits are used to represent the device address; thus, it is possible to connect 16 input and 16 output devices to ASC. A 4-to-16 decoder attached to the device address bits is activated during the RWD and WWD instruction cycles to select 1 of the 16 input or output devices, respectively. When the INPUT control line is active, the selected input device sends its data to the processor; when the OUTPUT control line is active, the selected output device receives data from the processor. The DIL and DOL of Chapter 6 are replaced here by a bidirectional data bus. Figure 7.3 shows the generalized I/O structure used in practice. The device address is carried on the address bus. Each device decodes this address, and the device whose address matches that on the address bus participates in either the input or the output operation. The data bus is bidirectional. The control bus carries the control signals (INPUT, OUTPUT, etc.); in addition, several status signals (DEVICE BUSY, ERROR, etc.) originating in the device interface also form part of the control bus. In the structure shown in Figure 7.3, the memory is interfaced to the central processing unit (CPU) through the memory bus, consisting of address, data, and control/status lines, and the peripherals communicate with the CPU over the I/O bus. Thus, the memory address space is separate from the I/O address space, and the system is said to use the isolated I/O mode. This mode of I/O requires a set of instructions dedicated to I/O operations. In some systems, the memory and the I/O devices are connected to the CPU by a common bus, as shown in Figure 7.4. In this structure, device addresses are part of the memory address space; hence, the load and store instructions used with memory operands can be used as read and write instructions with respect to those addresses configured as I/O devices. This mode of I/O is known as memory-mapped I/O. The advantage of memory-mapped I/O is that no separate I/O instructions are needed; its disadvantages are that part of the memory address space is used up by I/O and that it is difficult to distinguish between memory- and I/O-oriented operations in a program. Although the earlier description implies two buses in a system with the isolated I/O structure, there need not be two buses in practice; it is possible to multiplex memory and I/O addresses on the same bus, using control lines to distinguish between the two operations. Figure 7.5 shows the functional details of a device interface. A device interface is unique to a particular device, since each device is unique with respect to its data representation and read/write operational characteristics. The device interface consists of a controller that receives commands (i.e., control signals) from the CPU and reports the status of the device to the CPU. If the device is, for example, a tape reader, typical control signals are: advance tape, rewind, and the like; typical
status signals are: device busy, data ready, device operational, etc. The device selection (address decoding) is also shown as part of the controller. If it is an input device, a transducer converts the data represented on the I/O medium (tape, disk, etc.) into the binary format and stores them in the buffer of the device. In the case of an output device, the CPU sends data into the buffer, and the transducer converts these binary data into a format suitable for output onto the external medium (e.g., the 0/1 bit write format onto a magnetic tape or disk, ASCII patterns for a printer, etc.). Figure 7.6 shows the functional details of the interface between the CPU and a magnetic tape unit. The interface is much simplified, but it illustrates the major functions. If the device address on the address bus corresponds to that of the tape unit, the SELECT DEVICE signal becomes active when the CPU outputs the ADDRESS VALID control signal (a little after the decoder outputs have settled), which is used to clock the DEVICE READY flip-flop. The DEVICE READY flip-flop is thus cleared, since the output of the decoder is zero, and the 0 output of the DEVICE READY flip-flop becomes the DEVICE BUSY status signal. The tape is then advanced to the next character position. When that position is attained, the tape mechanism generates a TAPE POSITION signal that sets the DEVICE READY flip-flop through its asynchronous set input, which in turn generates the DATA READY signal to the CPU. During a data read operation, the INPUT signal is active; the device reads the data from the tape into its buffer and gates them onto the data bus, and in response to DATA READY the CPU gates the data bus into the accumulator. During a data write, the CPU sends the data over the data bus into the device buffer, sets the OUTPUT control signal, sets the address bus with the device address, and outputs the ADDRESS VALID signal. The operation of the DEVICE READY flip-flop is similar to that in the data read operation; when the data are written onto the tape, the DATA READY signal becomes active. DATA READY also serves as the DATA ACCEPTED signal to the CPU, in response to which the CPU removes the data and the address from the respective buses. We describe the CPU-I/O device handshake further in the next section.", "url": "RV32ISPEC.pdf#segment237", "timestamp": "2023-10-17 20:15:40", "segment": "segment237", "image_urls": [], "Book": "computerorganization" }, { "section": "7.2 I/O FUNCTION ", "content": "the major functions of a device interface are: 1. timing, 2. control, 3. data conversion, and 4. error detection and correction. The timing and control aspects correspond to the manipulation of the control and status signals to bring about the data transfer. In addition, the operating-speed difference between the CPU and the device must be compensated for by the interface. In general, data conversion from one code to another is needed, since the device medium on which the data are represented may use a different code to represent the data. Errors occurring during transmission must be detected and, if possible, corrected by the interface. A discussion of these I/O functions follows.", "url": "RV32ISPEC.pdf#segment238", "timestamp": "2023-10-17 20:15:40", "segment": "segment238", "image_urls": [], "Book": "computerorganization" }, { "section": "7.2.1 Timing ", "content": "so far in this chapter we have assumed that the CPU controls the bus (i.e., it is the bus master). In general, when several devices are connected to a bus, it is possible for a device other than the CPU to become the bus master. Thus, of the two devices involved in a data transfer on the bus, one is the bus master and the other the slave. The sequence of operations needed for a device to become the bus master is described later in this chapter. The data transfer on the bus between two devices is either synchronous or asynchronous. Figure 7.7 shows the timing diagram for a typical synchronous bus transfer. A clock serves as the timing reference for all the signals; either the rising or the falling edge can be used. During the read operation shown in Figure 7.7(a), the bus master activates the READ signal and places the address of the slave device (the device it is trying to read the data from) on the bus. All the devices on the bus decode the address, and the device whose address matches that on the address bus responds by placing the data on the bus several clock cycles later. The slave may also provide status information (no error, error, etc.) to the master. The number of clock cycles (wait cycles) required for the response depends on the relative speeds of the master and slave devices. Since the waiting period is known to the master, it gates the data from the data bus at the appropriate time. In general, to provide a flexible device interface, the slave device is required to provide a control signal, acknowledge (abbreviated ACK) or DATA READY, to indicate the validity of the data on the bus. Upon sensing ACK, the master gates the data into its internal registers. Figure 7.7(b) shows the timing for a synchronous write operation. The bus master activates the WRITE control signal and places the address of the slave device on the address lines of the bus, for decoding by all the devices; the master also places the data on the data bus. After a wait period, the slave gates the data into its buffer and places ACK (DATA ACCEPTED) on the bus, in response to which the master removes the data and the WRITE control signal from the bus.
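The read handshake just described can be sketched as a few lines of simulation. This is a minimal illustration under stated assumptions, not the text's design: the device addresses, the stored data values, and the wait-cycle count are all made up for the example, and the class and function names are hypothetical.

```python
# Minimal sketch of the synchronous bus read handshake described above:
# every device decodes the address, the matching slave supplies data with
# an ACK, and the master gates the data only after sensing ACK.
# Addresses, data values, and wait-cycle counts are illustrative assumptions.

class Slave:
    def __init__(self, address, memory, wait_cycles=2):
        self.address = address          # address this device responds to
        self.memory = memory            # data the device can supply
        self.wait_cycles = wait_cycles  # response delay in clock cycles

    def read(self, address):
        """Respond only if the address on the bus matches ours."""
        if address != self.address:
            return None
        # After wait_cycles clock cycles, the slave places the data on
        # the bus and raises ACK to indicate the data are valid.
        return {"data": self.memory, "ack": True, "cycles": self.wait_cycles}

def master_read(slaves, address):
    """All devices decode the address; the master gates in the data of
    whichever slave responds with ACK."""
    for slave in slaves:
        response = slave.read(address)
        if response is not None and response["ack"]:
            return response["data"]
    return None  # no device matched: nothing is driven onto the bus

slaves = [Slave(address=0x3, memory=0xBEEF), Slave(address=0x7, memory=0x1234)]
print(hex(master_read(slaves, 0x7)))  # data from the matching slave
print(master_read(slaves, 0xF))       # unmatched address: no response
```

The ACK field is the point of the sketch: rather than relying on a fixed wait period known to the master, the slave itself signals when its data are valid, which is what makes the interface flexible across devices of different speeds.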
Note that the operations described above are synchronized with a clock. In the asynchronous transfer mode, the same sequence of operations is used, except that there is no clock. Figure 7.8 shows the timing diagrams for an asynchronous transfer between source and destination devices. In Figure 7.8(a), the source initiates the transfer by placing the data on the bus and setting DATA READY. The destination acknowledges; in response, the DATA READY signal is removed, and then the ACK is removed. Note that the data can be removed from the bus only after the ACK is received. In Figure 7.8(b), the destination device initiates (i.e., requests) the transfer; in response, the source puts the data on the bus, and the acknowledge sequence is the same as that of Figure 7.8(a). Peripheral devices usually operate in an asynchronous mode with respect to the CPU, since they are not usually controlled by the clock that controls the CPU. The sequence of events required to bring about a data transfer between two devices is called a protocol, or handshake.", "url": "RV32ISPEC.pdf#segment239", "timestamp": "2023-10-17 20:15:40", "segment": "segment239", "image_urls": [], "Book": "computerorganization" }, { "section": "7.2.2 Control ", "content": "in the data-transfer handshake, some events are brought about by the bus master and some by the slave device. The data transfer is completely controlled by the CPU in the programmed I/O mode. A typical protocol for this mode of I/O is shown in Table 7.1, where we combine the protocols for the input and output operations; the device is in either the input or the output mode at a given time, and the sequence repeats for each transfer. The speed difference between the CPU and the device renders this mode of I/O inefficient. An alternative is to distribute part of the control activities to the device controller. The CPU sends a command to the device controller to input or output data and continues its processing activity. The controller collects the data from (or sends the data to) the device and interrupts the CPU; the CPU disconnects from the device when the data transfer is complete (i.e., the CPU services the interrupt and returns to the mainline processing that was interrupted). A typical sequence of events in interrupt-mode I/O is shown in Table 7.2. This protocol assumes that the CPU always initiates the data transfer. In practice, a peripheral device may first interrupt the CPU, with the type of transfer (input or output) determined by the CPU-peripheral handshake; a data input thus need not be initiated by the CPU. Interrupt-mode I/O reduces the CPU wait time on slow devices but requires a more complex device controller than the programmed I/O mode, since each transfer causes an interrupt. Popular interrupt structures are discussed in the next section. In addition to the protocol issues above, control and timing are influenced by the characteristics of the link (i.e., the data lines) connecting the device and the CPU. The data link can be simplex (unidirectional), half duplex (either direction, but only one way at a time), or full duplex (both directions simultaneously), and either serial or parallel. In serial transmission, a constant clock rate must be maintained throughout the data transmission to avoid synchronization problems. In parallel transmission, care must be taken to avoid the skewing of data (i.e., data arriving at different times on the bus lines, owing to the different electrical characteristics of the individual bit lines). Data transfer between the CPU and peripheral devices located in its vicinity is usually performed in the parallel mode, while remote devices communicate with the CPU in the serial mode. We assume the parallel mode of transfer in the following sections; the serial transfer mode is described in Section 7.7.", "url": "RV32ISPEC.pdf#segment240", "timestamp": "2023-10-17 20:15:40", "segment": "segment240", "image_urls": [], "Book": "computerorganization" }, { "section": "7.2.3 Data Conversion ", "content": "the data representation on an I/O medium is unique to each medium. For example, a magnetic tape uses either the ASCII or the EBCDIC code to represent data, while internally the CPU might use a binary or BCD (decimal) representation. In addition, the interface link might be organized serial-by-bit, serial-by-character (quasi-parallel), serial-by-word, or fully parallel. Thus, two levels of data conversion are accomplished by the interface: conversion between the peripheral and link formats, and between the link and CPU formats.", "url": "RV32ISPEC.pdf#segment241", "timestamp": "2023-10-17 20:15:40", "segment": "segment241", "image_urls": [], "Book": "computerorganization" }, { "section": "7.2.4 Error Detection and Correction ", "content": "errors may occur whenever data are transmitted between two devices. One or more extra bits, known as parity bits, are used as part of the data representation to facilitate error detection and correction. The parity bits, if not already present in the data, are included in the data stream by the interface, and the transmission is checked at the destination. Depending on the number of parity bits, various levels of error detection and correction are possible. Error detection and correction are particularly important in I/O, since peripheral devices exposed to the external environment are error prone. The errors may be due to mechanical wear, temperature and humidity variations, mismounted storage media, circuit drift, incorrect data-transfer sequences (protocols), and the like. Figure 7.9(a) shows a parity bit included in the data stream: for the 8 data bits there is one parity bit P. If odd parity is used, P is set so that the total number of 1s is odd; if even parity is used, the total number of 1s is made even. The 9-bit data word is transmitted; the receiver of the data computes the parity bit from the data, and if it does not match the transmitted parity bit, an error in transmission is detected and the data can be retransmitted. Figure 7.9(b) shows the parity scheme for a magnetic tape. An extra track (track 8) is added to the tape format; this parity track stores a parity bit for each character represented on the other eight tracks of the tape. At the end of the record, a longitudinal parity character, consisting of a parity bit for each track, is added. More elaborate error detection and correction schemes are often used; the references listed at the end of the chapter provide details of such schemes.", "url": "RV32ISPEC.pdf#segment242", "timestamp": "2023-10-17 20:15:40", "segment": "segment242", "image_urls": [], "Book": "computerorganization" }, { "section": "7.3 INTERRUPTS ", "content": "there are a number of conditions under which the processor may be interrupted from its normal processing activity. Among these conditions are: 1. power failure, detected by a sensor; 2. arithmetic conditions such as overflow and underflow; 3. illegal data or an illegal instruction code; 4. errors in data transmission and storage; 5. software-generated interrupts (intended by the user); 6. normal completion of an asynchronous transfer. Under these conditions, the processor must discontinue its processing activity, attend to the interrupting condition, and, if possible, resume the processing activity from where it was when the interrupt occurred. In order for the processor to be able to resume normal processing after servicing the interrupt, it is essential to save at least the address of the instruction to be executed next upon entering the interrupt service mode; in addition, the contents of the accumulator and other registers must be saved. Typically, when an interrupt is received, the processor completes the current instruction and jumps to the interrupt service routine. An interrupt service routine is a program preloaded into the machine's memory that performs the following functions: 1. disables further interrupts temporarily; 2. saves the processor status (registers); 3. enables interrupts; 4. determines the cause of
the interrupt; 5. services the interrupt; 6. disables interrupts; 7. restores the processor status; 8. enables interrupts; and 9. returns from the interrupt. The processor disables interrupts just long enough to save the status, since a proper return from the interrupt service routine is not possible unless the status is completely saved. The processor status usually comprises the contents of all the registers, including the program counter and the program status word. Servicing the interrupt simply means taking care of the interrupting condition: in the case of an I/O interrupt, it corresponds to the data transfer; in the case of a power failure, it is the saving of all registers and status for the normal resumption of processing when the power comes back; for an arithmetic condition, it may be checking the previous operation or simply setting a flag to indicate the arithmetic error. Once the interrupt service is complete, the processor status is restored: all the registers are loaded with the values saved in step 2, with interrupts disabled during the restore period. This completes the interrupt service, and the processor returns to the normal processing mode.", "url": "RV32ISPEC.pdf#segment243", "timestamp": "2023-10-17 20:15:40", "segment": "segment243", "image_urls": [], "Book": "computerorganization" }, { "section": "7.3.1 Interrupt Mechanism for ASC ", "content": "an interrupt may occur at any time, but the processor recognizes interrupts only at the end of the execution of the current instruction. If an interrupt needs to be recognized earlier (say, at the end of fetch, before the execution), a more complex design is needed, because the processor status must then be rolled back to the end of the execution of the previous instruction. We assume the simpler case for ASC: only the fetch microsequence must be altered to recognize an interrupt. Let us assume an interrupt input INT into the control unit and the following interrupt scheme for ASC. We assume only one interrupt at a time; no other interrupts will occur until the current interrupt is completely serviced (we remove this restriction later). We reserve memory locations 0 through 5 for saving the registers (PC, ACC, the index registers, and PSR) upon entering the interrupt service routine, which is located in memory locations 6 onward (see Figure 7.10). We also assume that the following new instructions are available: 1. SPS, store PSR in a memory location; 2. LPS, load PSR from a memory location; 3. enable interrupt; 4. disable interrupt. Recall that ASC used only 16 of the 32 possible opcodes; 4 of the remaining 16 opcodes can be used for these new instructions. The fetch sequence then looks like the following: T1: IF INT = 1 THEN MAR ← 0 ELSE MAR ← PC, READ. T2: IF INT = 1 THEN MBR ← PC, WRITE ELSE PC ← PC + 1. T3: IF INT = 1 THEN PC ← 6 ELSE IR ← MBR. T4: IF INT = 1 THEN STATE ← F ELSE continue as before. When INT is high, one machine cycle is thus used for entering the interrupt service routine; that is, PC is stored in location 0 and PC is set to 6. The first part of the service routine (Figure 7.10) stores the other registers; the devices are then polled (as discussed later in this section) to find the interrupting device, the interrupt is serviced, and the registers are restored before returning from the interrupt. This interrupt-handling scheme requires that the INT line be 1 during the fetch cycle. Although the INT line can go to 1 at any time, it must stay at 1 until the end of the next fetch cycle in order to be recognized by the CPU; further, it must go to 0 before the end of T4, since otherwise another fetch cycle in the interrupt mode would be invoked. This timing requirement on the INT line is simplified by including an interrupt flip-flop (INTF) in the control unit and gating the INT line into INTF at T1, as shown in Figure 7.11. The fetch sequence to accommodate this change is shown below: T1: IF INTF = 1 THEN MAR ← 0 ELSE MAR ← PC, READ. T2: IF INTF = 1 THEN MBR ← PC, WRITE ELSE PC ← PC + 1. T3: IF INTF = 1 THEN PC ← 7 ELSE IR ← MBR. T4: IF INTF = 1 THEN STATE ← F, DISABLE interrupts, RESET INTF, ACKNOWLEDGE ELSE continue as before. Note that this interrupt sequence disables interrupts at T4, thereby removing the need for the disable-interrupt instruction at location 6 of Figure 7.10; the interrupt service routine execution then starts at location 7. An enable-interrupt instruction is still required; this instruction sets the interrupt-enable flip-flop to allow further interrupts. Note also that an interrupt-acknowledge signal is generated by the CPU at T4 to indicate to the external devices that the interrupt has been recognized. In this scheme, the interrupt line must be held high until the next fetch cycle (i.e., for one instruction cycle, in the worst case) for the interrupt to be recognized. Once an interrupt is recognized, no further interrupts are allowed unless interrupts are enabled again.", "url": "RV32ISPEC.pdf#segment244", "timestamp": "2023-10-17 20:15:41", "segment": "segment244", "image_urls": [], "Book": "computerorganization" }, { "section": "7.3.2 ", "content": "Multiple Interrupts. In the previous interrupt scheme, if another interrupt occurs while the first interrupt is being serviced, it is completely ignored by the processor. In practice, other interrupts do occur during an interrupt service and should be serviced based on their priorities relative to the priority of the interrupt being serviced. The interrupt service routine of Figure", "url": "RV32ISPEC.pdf#segment245", "timestamp": "2023-10-17 20:15:41", "segment": "segment245", "image_urls": [], "Book": "computerorganization" }, { "section": "7.10 ", "content": "can be altered to accommodate this by inserting an enable-interrupt instruction at location 12, right after the saving of the registers, and a disable-interrupt instruction before the block of instructions that restore the registers. Although this change recognizes an interrupt that occurs during an interrupt service, note that if such an interrupt occurs, memory locations 0-5 are overwritten, thus corrupting the processor status information required for a normal return from the first interrupt. If the processor status is saved on a stack, rather than in the dedicated memory locations used in the previous scheme, multiple interrupts can be serviced: when the first interrupt occurs, status 1 is pushed onto the stack; on the second interrupt, status 2 is pushed onto the top of the stack. When the second interrupt service is complete, status 2 is popped from the stack, leaving status 1 on the stack intact for the return from the first interrupt. A stack thus allows the nesting of interrupts.", "url": "RV32ISPEC.pdf#segment246", "timestamp": "2023-10-17 20:15:41", "segment": "segment246", "image_urls": [], "Book": "computerorganization" }, { "section": "7.3.
3 Polling ", "content": "once an interrupt is recognized, the CPU must invoke the appropriate interrupt service routine for the interrupting condition. In the interrupt-mode I/O scheme, the CPU polls the devices to identify the interrupting device. Polling can be implemented in either software or hardware. In the software implementation, a polling routine addresses each I/O device in turn and reads its status; if the status indicates an interrupting condition, the service routine corresponding to that device is executed. Polling thus incorporates a priority among the devices, since the highest-priority device is addressed first, followed by the lower-priority devices. Figure 7.12 shows a hardware polling scheme. A binary counter initially contains the address of the first device and is incremented by the clock pulse while the CPU is in the polling mode. When the count reaches the address of the interrupting device, the interrupt request (IRQ) flip-flop is set, thereby preventing the clock from incrementing the binary counter further; the address of the interrupting device is then in the binary counter. In practice, a priority encoder is used to connect the device interrupt lines to the CPU, thereby requiring only one clock pulse to detect the interrupting device. Examples of this scheme are given in Section 7.10.", "url": "RV32ISPEC.pdf#segment247", "timestamp": "2023-10-17 20:15:41", "segment": "segment247", "image_urls": [], "Book": "computerorganization" }, { "section": "7.3.4 Vectored Interrupts ", "content": "an alternative to polling as a means of recognizing the interrupting device is to use a vectored interrupt structure like the one shown in Figure 7.13. The CPU generates an acknowledge (ACK) signal in response to an interrupt. If a device is interrupting (i.e., its interrupt flip-flop is set), the ACK flip-flop in the device interface is set by the ACK signal, and the device sends a vector onto the data bus. The CPU reads this vector to identify the interrupting device. The vector is either the device address or the address of the memory location where the interrupt service routine for the device begins. Once the device is identified, the CPU sends a second ACK addressed to that device, at which time the device resets its ACK and interrupt flip-flops; the circuitry for the second ACK is not shown in Figure 7.13. In the structure of Figure 7.13, if several devices interrupt simultaneously, all the interrupting devices send their vectors onto the data bus in response to the first ACK; thus, the CPU cannot distinguish between the devices. To prevent this and isolate the vector of a single interrupting device, the I/O devices are connected in a daisy-chain structure: the highest-priority device receives the ACK first, followed by the lower-priority devices, and the first interrupting device in the chain inhibits the propagation of the ACK signal. Figure 7.14 shows a typical daisy-chain interface. Since the daisy chain is in the ACK path, this structure is called a backward daisy chain. In this scheme, the interrupt signals from all the I/O devices are ORed together and input to the CPU; thus, any device can interrupt the CPU at any time. The CPU must process each interrupt in order to determine the priority of the interrupting device compared with that of the interrupt it is processing at the time; only then does the CPU generate the ACK. To eliminate this overhead of comparing priorities while the CPU is processing an interrupt, a forward daisy chain can be used, in which a higher-priority device prevents the interrupt signals from lower-priority devices from reaching the CPU. Figure", "url": "RV32ISPEC.pdf#segment248", "timestamp": "2023-10-17 20:15:41", "segment": "segment248", "image_urls": [], "Book": "computerorganization" }, { "section": "7.15 ", "content": "shows a forward daisy-chain device interface. It is assumed that each interface has an INTERRUPT STATUS flip-flop in addition to the usual INTERRUPT and ACK flip-flops. The INTERRUPT STATUS flip-flop is set when the interrupt service for the device begins and is reset at the end of the interrupt service. Therefore, as long as either the INTERRUPT flip-flop of a device is set (i.e., as long as the device is interrupting) or its INTERRUPT STATUS flip-flop is set (i.e., the device is being serviced), the interrupt signals from the lower-priority devices are prevented from reaching the CPU.", "url": "RV32ISPEC.pdf#segment249", "timestamp": "2023-10-17 20:15:41", "segment": "segment249", "image_urls": [], "Book": "computerorganization" }, { "section": "7.3.5 Types of Interrupt Structures ", "content": "in the interrupt structure shown in Figure", "url": "RV32ISPEC.pdf#segment250", "timestamp": "2023-10-17 20:15:41", "segment": "segment250", "image_urls": [], "Book": "computerorganization" }, { "section": "7.13, ", "content": "the interrupt lines of all the devices are ORed together; hence, any device can interrupt the CPU. This is called a single-priority structure, all the devices being of equal importance as far as interrupting the CPU is concerned. Single-priority structures can adopt either polling or vectoring to identify the interrupting device. The single-priority polled structure is the least complex interrupt structure; since the polling is usually done by software, it is also the slowest. In the single-priority vectored structure, the CPU sends an ACK signal in response to an interrupt, and the interrupting device returns a vector that is used by the CPU to execute the appropriate service routine. This structure operates faster than the polled structure but requires a more complex device controller. The forward daisy-chain structure of Figure 7.15 is a multipriority structure, since the ability to interrupt the CPU depends on the priority of the device. The highest-priority device can always interrupt the CPU, and a higher-priority device can interrupt the CPU while the CPU is servicing an interrupt from a lower-priority device, but interrupts from lower-priority devices are prohibited from reaching the CPU while it is servicing a higher-priority interrupt. Once an interrupt is recognized by the CPU, the recognition of the interrupting device is done either by polling or by using vectors. The multipriority vectored structure is the fastest and the most complex of the interrupt structures. Table 7.3 summarizes the characteristics of these interrupt structures. In actual systems, more than one interrupt input line is provided by the CPU hardware, making a multilevel structure possible; within each level, the devices can be daisy-chained, and each level is assigned a priority. Priorities may also be dynamically changed (i.e., changed during system operation). Figure 7.16 shows a masking scheme to accomplish such dynamic priority operations. A mask register is set by the CPU to represent the levels that are permitted to interrupt: levels 2 and 3 are masked, while levels 1 and 4 are enabled; the INT signal is generated only by the enabled levels, and the interrupt levels that are masked out cannot interrupt the CPU. Note that in this scheme each device can be connected to more than one level (D1 is connected to levels 1 and 2). Each level in Figure 7.16 receives an ACK signal from the CPU; the ACK circuitry is not shown", "url": 
"RV32ISPEC.pdf#segment251", "timestamp": "2023-10-17 20:15:42", "segment": "segment251", "image_urls": [], "Book": "computerorganization" }, { "section": "7.4 DIRECT MEMORY ACCESS ", "content": "programmed interrupt mode io structures transfer data device cpu register accumulator asc amount data transferred large schemes would overload cpu data normally required memory especially voluminous complex computations performed dma scheme enables device controller transfer data directly main memory majority datatransfer control operations performed device controller cpu initiates transfer commanding dma device transfer data continues processing activities dma device performs data transfer interrupts cpu completed figure 717 shows dma transfer structure dma device either dma controller dma channel limitedcapability processor word count register wcr address register data buffer start transfer cpu initializes address register ar dma channel memory address data must transferred wcr number units data words bytes transferred note data bus connected two registers usually registers addressed cpu output devices using address bus initial values transferred via data bus dma controller decrement wcr increment ar word transferred assuming input transfer dma controller starts input device acquires data buffer register word transferred memory location addressed ar mar ar mbr data buffer write memory transfers done using address data buses scheme cpu dma device controller try access memory mar mbr since memory simultaneously accessed dma cpu priority scheme used prohibit cpu accessing memory dma operation memory cycle assigned dma device transfer data cpu prevented accessing memory called cycle stealing since dma device steals memory cycle cpu required access memory transfer complete cpu access memory dma controller decre ments wcr increments ar preparation next transfer wcr reaches 0 transfercomplete interrupt sent cpu dma controller figure 718 shows sequence events dma transfer dma 
devices always higher priority cpu memory access data available device buffer may lost transferred immediately hence io device connected via dma fast enough steal several consecutive memory cycles thus holding back cpu accessing memory several cycles cpu access memory dma transfer cycle dma cont rollers either dedi cated one devi ce shar ed among several i devi ces figure 719 shows bus struct ure enables shar ing bu structure calle com patibl e o bus struct ure becau se o device congured perform program med dma transfers dma channe ls shar ed o devices o devi ce may transf erring data time thr ough either program med dma path s com puter system use multi plebus structure som e o devi ces connecte dma bus whi le othe rs commun icate cpu programmed io mode refer section 710 com plete descrip tion io structure modern processor motorola 68000", "url": "RV32ISPEC.pdf#segment252", "timestamp": "2023-10-17 20:15:42", "segment": "segment252", "image_urls": [], "Book": "computerorganization" }, { "section": "7.5 BUS ARCHITECTURE ", "content": "importance bus architecture great processor systems operate without frequent data transfers along buses system described earlier chapter system bus responsible interfacing processor memory disk systems well io devices addition bus system provides system clock handles arbitration attached devices allowing conguration advanced io transfer mecha nisms several standard bus architectures evolved years generalize bus control arbitration concepts provide relevant details three standard bus architectures", "url": "RV32ISPEC.pdf#segment253", "timestamp": "2023-10-17 20:15:42", "segment": "segment253", "image_urls": [], "Book": "computerorganization" }, { "section": "7.5.1 Bus Control (Arbitration) ", "content": "two devices capable becoming bus masters share bus bus controller needed resolve contention bus typically cpu io device structure cpu handles bus control function modern com puter system structures common see one processor connected 
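The AR/WCR bookkeeping of the DMA scheme of Section 7.4 can be sketched in a few lines. This is a minimal illustrative model, not an implementation from the text: the class `DMAController` and its method names are invented for the sketch.

```python
class DMAController:
    """Toy model of the DMA register sequence described in Section 7.4."""

    def __init__(self, memory):
        self.memory = memory      # shared main memory, modelled as a list of words
        self.ar = 0               # address register (AR)
        self.wcr = 0              # word count register (WCR)
        self.interrupt = False    # transfer-complete interrupt flag

    def start(self, start_addr, count):
        """CPU initializes AR and WCR over the data bus, then resumes its work."""
        self.ar, self.wcr, self.interrupt = start_addr, count, False

    def steal_cycle(self, word):
        """One stolen memory cycle: move one word from the device buffer."""
        if self.wcr == 0:
            return
        self.memory[self.ar] = word   # MBR -> memory[MAR]: write memory
        self.ar += 1                  # point at the next location
        self.wcr -= 1                 # one fewer word to transfer
        if self.wcr == 0:
            self.interrupt = True     # signal the CPU: transfer complete

mem = [0] * 8
dma = DMAController(mem)
dma.start(start_addr=2, count=3)
for w in [10, 11, 12]:                # words arriving from the input device
    dma.steal_cycle(w)
```

When the WCR reaches 0, the model raises its transfer-complete flag, mirroring the interrupt the DMA controller sends to the CPU.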
The DMA structure is one example. In such structures, one of the processors may handle the bus control function, or a separate bus controller may be included in the system. Figure 7.20 shows three common bus arbitration techniques. In each, a bus-busy signal indicates that the bus is already assigned to one of the devices as master. The bus controller assigns the bus to a new requesting device only if the bus is not busy and the priority of the requesting device is higher than that of the current bus master. Requesting devices use bus request and bus grant control lines for this purpose; these signals are analogous to the interrupt and ACK signals, respectively. In Figure 7.20(a) the devices are daisy-chained along the bus grant path, the highest-priority device being device 1 and the lowest-priority device being device n. All devices send their bus requests to the controller. If the bus is not busy, the controller sends a grant along the daisy chain; the highest-priority device among those requesting sets bus-busy, stops the bus grant signal from propagating further down the daisy chain, and becomes the bus master. In Figure 7.20(b), in response to bus requests from one or more devices, the controller polls the devices in a predesignated priority order and selects the highest-priority device among those requesting. Only one bus grant line is shown: the selected device is activated as the bus master (i.e., it accepts the bus grant) while the other devices ignore the grant. Alternatively, there could be independent bus grant lines, as in Figure 7.20(c). In Figure 7.20(c) there are independent bus request and bus grant lines; the controller resolves priorities among the requesting devices and sends the grant to the highest-priority device among them.", "url": "RV32ISPEC.pdf#segment254", "timestamp": "2023-10-17 20:15:42", "segment": "segment254", "image_urls": [], "Book": "computerorganization" }, { "section": "7.5.2 Bus Standards ", "content": "This section provides details of three bus standards: Multibus, Multibus II, and the VMEbus. The description is focused on the data and control signals, arbitration mechanisms, and interrupt generation and handling; the references listed at the end of the chapter provide details on electrical and timing issues. In examining and comparing bus architectures, one must look at several attributes: the transfer mechanism, interrupt servicing, and bus arbitration. The VMEbus and Multibus II are excellent buses that demonstrate these capabilities and responsibilities and implement a variety of different mechanisms. Interrupt servicing could take place via direct or vectored techniques, bus transfers may be synchronous or asynchronous, and bus masters may implement serial (daisy-chained) or parallel methods to compete for control of the bus, using such techniques as asynchronous arbitration and DMA. Modern buses must be able to reduce the bottleneck the processor faces in accessing memory and I/O devices. In general, processor performance advances more quickly than bus performance, necessitating ever faster bus standards. As faster standardized buses become prevalent, this gap may eventually disappear, or at least be reduced to the level where zero-wait-state execution becomes a reality. Multibus I: Although it is an older standard (IEEE Standard 796), Multibus has proved to be a well-designed architecture that has provided viable systems for many years. Systems utilizing the bus have existed since 1976, and implementations of it are still used in modern systems. The lengthy existence of the Multibus architecture is a direct result of its design goals: simplicity of the processor system, flexibility, ease of upgrading, and suitability for harsh environments. A Multibus system comprises one or more boards connected via a passive backplane. The architecture is processor agnostic, inasmuch as systems utilizing the bus have been designed around processors ranging from the Intel 8080 through 80386 families to the Z80 and the Motorola 68030. Multibus supports both single and multiprocessor architectures. IEEE Standard 796 provides for a wide variation in implementations; this section attempts to describe the bus in terms of its overall capability, noting the relevant allowed variations. The Multibus control signals are active low and terminated with pull-up resistors. Two clock signals are present: the bus clock (BCLK) runs at 10 MHz and is used to synchronize bus contention operations, and a constant clock (CCLK), also at 10 MHz, is routed to all masters and slaves for their use. The bus master uses command signals to initiate a read or write in memory or I/O space. In a write operation, the active command signal denotes that the address carried on the address lines is valid; in a read operation, the transition of the command signal from active to inactive indicates that the master has received the requested data from the slave. Slaves raise the transfer acknowledge (XACK) signal to indicate to the master that the requested operation has completed. The initialize signal (INIT) is generated to reset the system to its initial state. The lock signal (LOCK) may be used to lock the bus, as explained later in the discussion of multiprocessing features. Multibus supports 24 address lines (ADR0-ADR23); memory sizes greater than 16 MB,
the maximum allowed by the 24 address lines, could be handled with a memory management unit. Both 8- and 16-bit processors are supported; in the case of an 8-bit system, even and odd bytes are accessible via a swapping mechanism. For I/O access, either 8 or 16 address lines can be used, giving an I/O address space of 64K that is separate from the data space. There are 16 data lines (DAT0-DAT15), although in 8-bit systems only the first 8 are valid (DAT0 is the least significant bit). The data lines are shared by memory and I/O devices, and their use by both 8- and 16-bit systems is permitted; the byte high enable signal (BHEN) is used in 16-bit systems to signify the validity of the upper eight lines. Two inhibit lines (INH1, INH2) can be asserted by a slave to inhibit another slave's bus activity during a memory read or write. Eight interrupt lines (INT0-INT7) are available and can be configured to work with either a direct or a bus-vectored interrupt scheme. The interrupt acknowledge signal (INTA) is used in the bus-vectored mechanism by the master to freeze the interrupt status and request that the interrupting device place its vector address onto the bus data lines. Bus arbitration is handled via five bus exchange lines. The bus busy (BUSY) line is driven by the master currently in ownership of the bus to signify the state of the bus; it is a bidirectional signal driven by an open-collector gate and synchronized by BCLK. Either serial or parallel priority schemes can be used by the bus masters. The bus priority in (BPRN) signal indicates a request for bus control along the chain or the parallel circuitry; in the serial mechanism, the bus priority out (BPRO) signal is passed to the next bus master in the chain to propagate the request for bus control. In the parallel priority scheme, a bus master requesting access to the bus raises its bus request signal (BREQ), and parallel circuitry resolves the priorities and enables BPRN for the highest-priority master requesting control. An optional signal, common bus request (CBREQ), can be used by a master to signal a request for bus control; it allows a master of lower priority to request control of the bus. Multibus data transfer is asynchronous and supports DMA. Devices in the system may be either masters or slaves; slaves, such as memory, are unable to initiate a transfer. Multibus supports up to 16 bus masters, using either the serial or the parallel priority scheme. A data transfer takes place in the following steps: 1. The bus master places the memory or I/O address on the address lines. 2. The bus master generates the appropriate command signal. 3. The slave either accepts the data (write) or places the data on the data lines (read). 4. The slave sends the transfer acknowledge signal back to the master. 5. The bus master removes the signal from the command lines and clears the address and data lines. Since Multibus data transfers are asynchronous, it is possible that, due to an error, a transfer could extend indefinitely. To prevent this, a bus timeout can be implemented to terminate the cycle after a preset interval of at least 1 ms. During a memory transfer, a slave can assert the inhibit lines to inhibit the transfer of another slave. This operation is implemented for use in diagnostic applications and is not widely used in normal operation. When transferring between 8- and 16-bit devices, the byte high enable (BHEN) and least significant address (ADR0) signals are used to define whether the even or the odd byte is being transferred: in an even-byte transfer both signals are inactive, and in an odd-byte transfer BHEN is inactive and ADR0 is active. When transferring between two 16-bit devices, both signals are active. Multibus supports two methods of interrupts: direct (non-bus-vectored) and bus-vectored. Direct interrupts are handled by the master without the need for a device address to be placed on the address lines. Bus-vectored interrupts are handled by the master interrogating the interrupting slave to determine the vector address. When an IRQ occurs, the bus master interrupts its processor and generates an INT command that freezes the state of the interrupt logic so that the priority of the request can be analyzed. From this, the bus master determines the address of the highest-priority request and places that address on the bus address lines. Depending on the size of the address, either one or two further INTA commands must be generated: the second one causes the slave to transmit the low-order byte of the interrupt vector on the data lines, and if necessary (for 16-bit addresses) a third INTA causes the high-order byte to be placed on the data lines. The bus master uses this address to service the interrupt. Since Multibus can accommodate several bus masters, there must be a mechanism by which the masters negotiate for control of the bus. This can take place through either serial or parallel priority negotiation. In serial negotiation, a daisy-chain technique is used in which each master is connected in order of priority. When a bus master requests control of the bus, it blocks its BPRO signal, and hence the BPRN signal of the masters below it, locking out requests from lower-priority masters. In the parallel mechanism, a bus arbiter receives the BREQ signal from each master and drives each master's BPRN; the bus arbiter determines the priority of each request and performs the grant.
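The serial (daisy-chain) negotiation just described can be modeled in a few lines. This is a sketch under stated assumptions: `resolve_serial_priority` is an invented name, and the model treats BPRN/BPRO as simple booleans rather than active-low electrical signals.

```python
def resolve_serial_priority(requests):
    """Model of serial priority resolution: requests[i] is True if master i
    is requesting the bus, with master 0 the highest priority. Each master's
    BPRO feeds the next master's BPRN; a requesting master blocks its BPRO,
    locking out every lower-priority master behind it in the chain.
    Returns the index of the master that wins control, or None."""
    bprn = True              # the highest-priority master always sees BPRN active
    winner = None
    for i, breq in enumerate(requests):
        if bprn and breq and winner is None:
            winner = i       # this master takes control of the bus
        bprn = bprn and not breq   # BPRO passed to the next master in the chain
    return winner

# Masters 1 and 3 request simultaneously; master 1 wins, master 3 is locked out.
print(resolve_serial_priority([False, True, False, True]))   # prints 1
```

The same request vector fed to a parallel arbiter would give the same winner; the difference is that the parallel circuitry resolves all priorities at once instead of rippling the grant down the chain.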
Such bus arbiters are usually designed into the backplane. So that the I/O processor is not bottlenecked by the bus, Multibus allows the use of bus extensions to take high-throughput transfers such as DMA off the standard bus. In a standard Multibus system the maximum throughput is limited to 10 MB per second, while processors from later in the life of the Multibus architecture were capable of a much higher rate: a 20-MHz Intel 80386 chip can transfer 40 MB per second. A Multibus system may implement the iLBX and iSBX extensions. iLBX provides a fast memory-mapped interface on an expansion board in the form factor of the Multibus standard. A maximum of two masters share the iLBX bus, which limits the need for complex bus arbitration; when arbitration does take place, it is a modified form of the asynchronous data transfer. If iLBX slaves are available in the system, their byte-addressable memory resources are controlled directly over the iLBX bus lines. With 16-bit transfers, these improvements allow a maximum throughput of 19 MB per second. The iSBX expansion board implements I/O and DMA transfers, taking much of that function off the Multibus architecture; the implementation of the iSBX bus extension marked the beginning of the evolution toward Multibus II. Multibus II: Multibus, introduced in 1974, was fundamentally a CPU-memory bus that evolved into a multiple-master shared-memory bus capable of serving the real-time applications of its time. By the late 1980s users demanded a new standard bus, and Intel therefore set up a consortium of 18 industry leaders to define the next-generation bus, Multibus II. The consortium decided that no single bus could be used to satisfy all user needs, and it therefore defined a multiple-bus structure consisting of four subsystems: the iSBX bus for incremental I/O expansion, a local CPU-memory expansion bus, and two system buses, one serial and one parallel. Consider a local area network (LAN): the system is functionally partitioned, and each node of the LAN is independent of the others and optimized for its part of the overall problem solution. This gives the system architect the freedom to choose the hardware and software for each node that best fit its subtask; moreover, the nodes can be upgraded individually. Multibus II allows such a local network to be created within a single chassis. Dividing a multi-CPU application into a set of networked subsystems allows the optimization of each subsystem for its subtask. Each subsystem works independently, and complex resources may be spread over multiple boards that communicate via a local expansion bus. The double Eurocard format (IEEE 1101 standard) with dual 96-pin DIN connectors was chosen for the Multibus II standard, and a U-shaped front panel (licensed from Siemens of Germany) was chosen for its enhanced electromagnetic interference (EMI) and radio-frequency interference (RFI) shielding properties. The popularity of Multibus products encouraged the adoption of iSBX (IEEE 894) as the incremental I/O bus. The IEEE/ANSI 1296 specification does not define an exact local expansion bus; that bus varies depending on the performance required by the subsystem design. Intel initiated the iLBX II standard, optimized for the 12-MHz Intel 80286 processor, as a high-performance local expansion bus, and Futurebus can also be used as the CPU-memory bus. These buses are adequate for system-level requirements. The system space defined by the Multibus II specification consists of two parts: the interconnect space and the message space. The interconnect space is used for initialization, self-diagnostics, and configuration requirements. In order to maintain compatibility with existing buses, the traditional CPU-memory space is retained. Intel implemented the Multibus II parallel system bus in a single VLSI chip called the message-passing coprocessor (MPC). The chip consists of 70,000 transistors and contains almost all the logic needed to interface a processor to the system bus. The parallel system bus is defined as a 32-bit bus clocked at 10 MHz, thus allowing data transfers at 40 MB per second. The serial system bus (SSB) is defined as a 2-Mbit-per-second serial bus; when the SSB is implemented, its software interface must be identical to that of the parallel bus. A major part of the IEEE/ANSI 1296 Multibus II specification is the interconnect address space, which addresses board identification, initialization, configuration, and diagnostics requirements. The interconnect space is implemented as an ordered set of 8-bit registers on longword (32-bit) boundaries; in this way a little-endian microprocessor (the 8086 family) and a big-endian microprocessor (the 68000 family) access the information in an identical manner. Software can use the interconnect address space to get information about the environment in which it operates, such as the functionality of the board and the slot in which the board operates. Identification registers contain information such as the board type, manufacturer, and installed components in read-only registers; configuration registers are read-write locations set and changed by system software; and diagnostics registers are used for self-diagnosis of the board. The interconnect space is based on the idea that it is possible to locate boards within the backplane by their physical slot positions.
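The longword-aligned 8-bit register layout just described is what lets processors of either endianness address interconnect registers uniformly, and it is easy to sketch. The function name, register indices, and values below are hypothetical illustrations, not taken from the 1296 specification.

```python
LONGWORD = 4  # one 8-bit interconnect register per 32-bit longword

def interconnect_read(window, reg_index):
    """Read 8-bit interconnect register reg_index from a board's
    interconnect window (modelled here as a flat bytearray)."""
    return window[reg_index * LONGWORD]

# Hypothetical board window: registers 0-1 hold a vendor ID (read-only
# identification registers), register 2 a read-write configuration value.
window = bytearray(3 * LONGWORD)
window[0 * LONGWORD] = 0x34   # vendor ID, low byte (example value)
window[1 * LONGWORD] = 0x12   # vendor ID, high byte (example value)
window[2 * LONGWORD] = 0x01   # configuration register (example value)

# Register n always lives at byte offset 4*n, so software on an 8086-family
# or a 68000-family CPU assembles the same vendor ID with byte reads.
vendor_id = interconnect_read(window, 0) | (interconnect_read(window, 1) << 8)
```

The stride of one register per longword wastes three bytes out of four, but it keeps register addressing identical regardless of how the host processor orders bytes within a longword.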
This principle, called geographical addressing, is used for system-wide initialization. The board firmware has a standardized 32-byte header format containing a 2-byte vendor ID, a 10-byte board name, and vendor-defined information. At boot time, system software scans the boards to locate resources and loads the appropriate drivers; this method eliminates the need for reconfiguration each time a new board is added to the system. Each board in the system performs initialization and testing using its firmware and passes the needed information to the operating system, which in turn generates a resource-location map used as the basis for message-passing addresses, thus achieving slot independence. In general, a board manufacturer also supplies function records that make the additional functionality of the board accessible through the interconnect space; types of function records for common functions such as memory configuration and serial I/O are defined by Multibus II. Each board is capable of self-testing and reports its status in the interconnect space. Self-testing is invoked at power-on initialization or explicitly from a console. When a hardware failure is detected, a yellow LED on the front panel illuminates, helping the operator easily identify and replace the board. The high performance of Multibus II is achieved by decoupling the activities of the CPU-memory local bus from the system bus. This approach gives two advantages: parallelism of operations is increased, and no one bus's bandwidth limits the transfer rate of another; the local bus and the system bus work independently and in parallel. This is achieved by using nine 32-byte first-in-first-out (FIFO) buffers integrated into the MPC: five are used for interrupts (one to send and four to receive) and four for data transfer (two to send and two to receive). The Multibus II specification introduces a hardware-recognized data type called the packet. Packets are moved on subsequent clock edges of the 10-MHz synchronous bus; thus a single packet can occupy the bus for no longer than 1 microsecond. An address is assigned to each board in the system and is used in the source and destination fields of the packet. Seven different packet types are defined in the standard, divided into two groups: unsolicited and solicited. Unsolicited packets are used for interrupts, and solicited packets are used for data transfers. The data fields are user defined and may be 0 to 28 bytes long, in 4-byte increments. Unsolicited packets always come as a surprise to the destination-bus MPC; these packets are similar to interrupts in shared-memory systems, with the additional feature that they carry up to 28 bytes of information. General interrupt packets are sent between two boards, while broadcast interrupts are sent to all boards in the system. Three types (buffer request, reject, and grant) are used to initiate large data transfers. Unlike unsolicited packets, solicited packets are not unpredictable to the destination MPC; these packets are used for large data transfers between boards, and up to 16 MB of data can be transferred with solicited packets. All operations on packets (creation, bus arbitration, and error checking and correction) are done by the MPC, transparently to the local processor. The Multibus II system bus utilizes a distributed arbitration scheme, each board containing its own arbitration circuitry, and the system bus is continually monitored by the MPCs. The scheme supports two arbitration algorithms: fairness and high priority. In fairness mode, when the bus is busy, an MPC makes its request and waits for the bus grant, but after it uses the bus it does not request the bus again until all requests that have arrived since its last usage have been served; when the bus is not busy, an MPC accesses the bus without performing an arbitration cycle. This mechanism prevents the bus from being monopolized by a single board. Since each MPC uses the bus for a maximum of 1 microsecond and arbitration is resolved in parallel, no clock cycles are wasted, and transfers can operate back to back. The high-priority mode is used for interrupts: the requesting MPC bus controller is guaranteed the next access to the bus. Because interrupt packets are usually sent in high-priority mode, interrupt packets have a maximum latency of 1 microsecond, although in the rare instance that n boards initiate interrupt packets within a 1-microsecond window, some packets may see a latency of n-1 microseconds. The parallel system bus is implemented on a single 96-pin connector, with the signals divided into five groups: central control, address/data, system control, arbitration, and power. The parallel system bus is synchronous, and great care has been taken to maintain a clean 10-MHz system clock. The IEEE/ANSI 1296 specification details precisely what happens upon each synchronous clock edge; there is no ambiguity, and the numerous state machines that track bus activity are defined so as to guarantee compatibility. The board in slot 0, called the central services module (CSM), generates the central control signals; it may be implemented on a CPU board, a dedicated board, or a backplane module. The CSM drives the reset (RST) signals, denoted active low, to initialize the system; combinations of DCLOW and PROT are used to distinguish
a cold start, a warm start, and power-failure recovery. Two system clocks, BCLK at 10 MHz and CCLK at 20 MHz, are generated. The IEEE/ANSI 1296 specification defines the parallel system bus as a full 32-bit bus (AD0-AD31) with parity control (PAR0-PAR3). Because it is defined as a multiplexed address/data bus, the system control lines are used to distinguish data from addresses. All transfers are checked for parity errors; in case of a parity failure, the MPC bus controller retries the operation, and if the error is not recovered within 16 tries, the MPC interrupts the host processor and asks for assistance. There are 10 system control lines (SC0-SC9) whose functions are also multiplexed. SC0 defines the current state of the bus cycle (request or reply), and the SC1-SC7 lines are interpreted accordingly; SC8 provides even parity over SC4-SC7, and SC9 provides even parity over SC0-SC3. Table 7.4, taken from the handbook by DiGiacomo (1990), summarizes the functions of the system control lines. A common bus request line (BREQ) is provided, and the specification defines a distributed arbitration scheme that grants the bus to the numerically highest requesting board, identified on lines ARB0-ARB5. As mentioned earlier, the scheme supports two arbitration modes, fairness and high priority. The parallel system bus is particularly easy to interface to I/O: a replier need implement only the single replying-agent state machine shown in Figure 7.21. In the following example the assumption is made that the requestor makes only valid requests and that the replying agent monitors the bus; state transitions occur on the falling clock edge. The replier remains in its wait-for-request state until the start of a request cycle is detected (SC0 low) with the request addressed to the replier (ADDR high). From this state the transition to a new state is controlled by the local ready signal REPRDY. If REPRDY is low, the replier waits until its device is ready; when ready, it waits for the requestor to be ready (SC3 low), performs the data transfer, and checks for a Multibus transfer (SC2 high), whereupon the state machine decides to accept or ignore the data in the remainder of the cycle. If additional data cannot be handled, the replier sends a continuation error and waits for the requestor to terminate the cycle. If additional data can be handled, the replier oscillates between the replier-wait state and the replier-handshake state; when the last packet (SC2 low) of a Multibus transfer is received, the replier returns to the wait state. With this simple standardized interface and its processor independence, Multibus II became popular in the market, and many vendors have produced Multibus II-compatible boards with many different functions. Later, the IEEE/ANSI 1296.2 standard was adopted, expanding Multibus II with live-insertion capabilities. VMEbus: The VMEbus is a standard backplane interface that simplifies the integration of data processing, data storage, and peripheral control devices in a tightly coupled hardware configuration. The VMEbus interfacing system is defined by the VMEbus specification. The system is designed to do the following: 1. Allow communication between devices without disturbing the internal activities of other devices interfaced to the VMEbus. 2. Specify the electrical and mechanical system characteristics required to design devices that will reliably and unambiguously communicate with other devices interfaced to the VMEbus. 3. Specify protocols that precisely define the interaction between devices. 4. Provide terminology and definitions that describe system protocol. 5. Allow a broad range of design latitude, so that the designer can optimize cost and/or performance without affecting system compatibility. 6. Provide a system in which performance is primarily device limited rather than system-interface limited. The VMEbus functional structure consists of backplane interface logic, four groups of signal lines called buses, and a collection of functional modules. The lowest layer, called the backplane access layer, is composed of the backplane interface logic and the utility bus and arbitration bus modules. The VMEbus data transfer layer is composed of the data transfer bus and priority interrupt bus modules. The data transfer bus allows bus masters to direct the transfer of binary data between themselves and slaves. The data transfer bus consists of 32 data lines, 32 address lines, 6 address modifier lines, and 5 control lines. There are nine basic types of data transfer bus cycles: read, write, unaligned read, unaligned write, block read, block write, read-modify-write, address-only, and interrupt acknowledge cycles. A slave detects data transfer bus cycles initiated by a master, and those cycles specify its participation in transfers of data to or from the master. The priority interrupt bus allows interrupter modules to request interrupts from interrupt-handler modules. The priority interrupt bus consists of seven IRQ lines, one interrupt acknowledge line, and an interrupt acknowledge daisy chain. An interrupter generates an IRQ by driving one of the seven IRQ lines. The interrupt handler detects the IRQs generated by the interrupters.
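A slave-side decode of the 6-bit address-modifier code carried on the data transfer bus might look like the following sketch. The AM values in the table are illustrative placeholders rather than a statement of the actual VMEbus encodings, and the function names are invented.

```python
# Map a 6-bit address-modifier (AM) code to the number of address lines a
# slave should treat as valid: short (16), standard (24), extended (32).
AM_WIDTH = {
    0x29: 16,  # short address      (placeholder AM code)
    0x39: 24,  # standard address   (placeholder AM code)
    0x09: 32,  # extended address   (placeholder AM code)
}

def valid_address_bits(am_code):
    """Return how many of the 32 address lines carry a valid address."""
    if am_code not in AM_WIDTH:
        raise ValueError('unsupported address modifier: %#04x' % am_code)
    return AM_WIDTH[am_code]

def effective_address(am_code, address_lines):
    """Mask a raw 32-bit address down to the bits the AM code declares valid."""
    bits = valid_address_bits(am_code)
    return address_lines & ((1 << bits) - 1)
```

A slave that implements only short addressing would simply refuse to respond to AM codes outside its supported set, which is what the lookup-or-raise pattern above models.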
It responds by asking for status/identification information. When the request is acknowledged by the interrupt handler, the interrupter provides 1, 2, or 4 bytes of status/identification, which allows the interrupt handler to service the interrupt. The interrupt acknowledge daisy-chain driver function activates the interrupt acknowledge daisy chain whenever an interrupt handler acknowledges an IRQ; the daisy chain ensures that only one interrupter responds with its status/identification when more than one has generated an IRQ. The VMEbus is designed to support multiprocessor systems in which several masters and interrupt handlers may need to use the data transfer bus at the same time. The arbitration bus consists of four bus request lines, four daisy-chained bus grant lines, and two lines called bus clear and bus busy. A requester resides on the same board as a master or interrupt handler and requests use of the data transfer bus whenever its master or interrupt handler needs it. An arbiter accepts bus requests from the requester modules and grants control of the data transfer bus to only one requester at a time. An arbiter's built-in timeout feature causes it to withdraw the bus grant if the requesting board does not start using the bus within a prescribed time; this ensures that the bus is not locked up as the result of a transient edge on a request line. Some arbiters drive the bus clear line when they detect a request for the bus from a requester whose priority is higher than that of the one currently using the bus; this ensures that the response time to urgent events is bounded. The utility bus includes signals that provide periodic timing and coordinate the power-up and power-down sequences of the VMEbus system. Three modules are defined on the utility bus: the system clock driver, the serial clock driver, and the power monitor. The utility bus consists of two clock lines, a system-reset line, a power-fail line, a system-fail line, and a serial data line. The system clock driver provides a fixed-frequency 16-MHz signal. The serial clock driver provides a periodic timing signal that synchronizes the operation of the VMSbus, the part of the VMEsystem architecture that provides an interprocessor serial communication path. The power monitor module monitors the status of the primary power source to the VMEbus system; whenever the power strays outside the limits required for reliable system operation, it uses the power-fail lines to broadcast a warning to all boards in the VMEbus system in time to effect a graceful shutdown. A system controller board resides in slot 1 of the VMEbus backplane and includes the one-of-a-kind functions defined by the VMEbus; these functions include the system clock driver, the arbiter, the interrupt acknowledge daisy-chain driver, and the bus timer. Two types of signaling protocols are used on the VMEbus: closed-loop protocols and open-loop protocols. Closed-loop protocols use interlocked bus signals, and open-loop protocols use broadcast bus signals. The address strobe and data strobes are interlocked signals; especially important interlocked signals are data acknowledge and bus error, which coordinate the transfer of addresses and data. There is no protocol for acknowledging a broadcast signal; instead, the broadcast is maintained long enough to ensure that all appropriate modules detect it, and broadcast signals may be activated at any time. The smallest addressable unit of storage on the VMEbus is the byte. Masters broadcast an address on the data transfer bus at the beginning of each cycle. Addresses may consist of 16, 24, or 32 bits: 16-bit addresses are called short addresses, 24-bit addresses are called standard addresses, and 32-bit addresses are called extended addresses. The master broadcasts a 6-bit address modifier code along with the address to tell slaves whether the address is short, standard, or extended. Four basic data-transfer capabilities are associated with the data transfer bus: D08(EO) (even and odd byte), D08 (odd byte), D16, and D32. Five basic types of data-transfer cycles are defined by the VMEbus specification: the read cycle, write cycle, block read cycle, block write cycle, and read-modify-write cycle. Two other types of cycles are defined: the address-only cycle and the interrupt acknowledge cycle. With the exception of the address-only cycle, which transfers no data, these cycles are used to transfer 8, 16, or 32 bits of data. A read or write cycle is used to read or write 1, 2, 3, or 4 bytes of data; the cycle begins when the master broadcasts an address and an address modifier code. Block transfer cycles are used to read or write a block of 1 to 256 bytes of data; the VMEbus specification limits the maximum length of block transfers to 256 bytes. Read-modify-write cycles are used to both read and write a slave location in an indivisible manner, without permitting another master to access that location in between. The VMEbus protocol allows a master to broadcast the address for the next cycle while the data transfer of the previous cycle is still in progress. The VMEbus provides two ways for the processors in a multiprocessing system to communicate: using the IRQ lines and using location monitors. A location monitor is a functional module intended
use multipl eproces sor system s monitors data transfer cycl es vmebus activate onboard sign al whenever access done locations assigned watch multiplep rocessor syst ems events globa l importanc e need broad cast proce ssors accom plished processor bo ard include location monit vme bus provides performanc e vers atility neede appeal wide range users rapid rise popu larity made popul ar 32bit bus vm ebus system accomm odate inev itable change easi ly withou making existing equipme nt obsolet e section 710 provi des som e details peripheral com ponent interface pci standar d th e unive rsal serial bu us b mor e rece nt standar int erface bus", "url": "RV32ISPEC.pdf#segment255", "timestamp": "2023-10-17 20:15:44", "segment": "segment255", "image_urls": [], "Book": "computerorganization" }, { "section": "7.6 CHA NNEL S ", "content": "channel mor e sophist icated o controlle r dma devi ce performs i transf ers dm mode limit edcapabi lity p rocessor perform operations dma syst em additi channe l congur ed interface several o devices memor dm control ler usually connec ted one devi ce channe l perfo rms exten sive err detect ion corr ection data formatt ing code conversi unlik e dma channel interru pt cpu error condition two type channe ls multiplexe r sel ector multipl exer channe l connec ted several low mediums peed devi ces card read ers paper tape readers etc th e channel scans devices turn collects data b uffer unit data may tagg ed channe l indicate devi ce cam e input mode data transferred memor tags used identify memor buffer areas reserved device multipl exer channe l handles opera tions needed transfers mul tiple devices initial ized cpu interrupt cpu whe n tra nsfer com plete two type multiplexer channe ls common 1 char acter multipl exers transfer one character usua lly one byte device 2 bloc k multipl exers transfer block data devi ce connec ted selector channe l int erfaces highspe ed devi ces mag netic tapes disks memory devi ces keep channe l busy high 
data-transfer rates. Although several devices can be connected to a selector channel, the channel stays with one device until the data transfer with that device is complete. Figure 7.22 shows a typical computer system structure with several channels. Each device is assigned to one channel; it is possible to connect a device to more than one channel through a multichannel switching interface. Channels are normally treated as part of the CPU in conventional computer architecture; that is, channels are CPU-resident I/O processors.

7.7 I/O PROCESSORS

I/O processors combine the features of channels and interrupt structures and perform the majority of the I/O operations and control, thus freeing the central processor for internal data processing and enhancing the throughput of the computer system. They are a step in the direction of distributing the I/O processing functions to the peripherals, making channels versatile, like full-fledged processors. Such I/O processors are also called peripheral or front-end processors (FEPs). With the advent of microprocessors and the availability of less expensive hardware, it has become possible to make front-end processors versatile enough while keeping their cost low. A large-scale computer system may use several minicomputers as FEPs, and a minicomputer might in turn use another mini- or microcomputer as its FEP. Since FEPs are programmable, they serve as flexible I/O devices for the CPU; they can also perform much of the processing on the data, where possible at the source, before the data are transferred to memory. In an FEP with a writable control store (i.e., a control ROM that is field-programmable), the microprogram can be changed to reflect the device interface needed. The coupling between the FEP and the central processor is either through a disk system or through a shared memory (Figure 7.23).
", "content": "disk coupled syst em fep stores data disk unit turn processed central processor output central processor stores data disk provides required control information fep enable data output system easier sharedmemory system implement even two processors fep cpu identical timing control aspects processordisk interface essentially independent sharedmemory system processor acts dma device respect shared memory hence complex handshake needed especially two processors identical system generally faster however since intermediate directaccess device used figure 723 shows fepcpu coupling schemes", "url": "RV32ISPEC.pdf#segment258", "timestamp": "2023-10-17 20:15:44", "segment": "segment258", "image_urls": [], "Book": "computerorganization" }, { "section": "7.7.1 IOP Organization ", "content": "figure 724 shows organization typical sharedmemory cpuiop interface shared main memory stores cpu iop programs contains cpuiop communication area communication area used passing information two processors form messages cpu places iooriented information communication area information consists device addresses memory buffer addresses data transfers types modes transfer address iop program etc information placed communication area cpu using set command words cw addition cw communication area contains space iop status initiating io transfer cpu rst checks status iop make sure iop available places cw communication area commands iop start start io signal iop gathers io parameters executes appropriate iop program transfers data devices data transfer involves memory dma mode transfer used acquiring memory bus using bus arbitration protocol transfer complete iop sends transfercomplete interrupt cpu io done line cpu typically three iooriented instructions handle iop test io start io stop io command word instruction set iop consists datatransfer instructions type read write n units device x memory buffer starting location z addition iop instruction set may contain address manipulation 
instructions and a limited set of IOP program-control instructions. Depending on the devices handled, there may also be device-specific instructions such as rewind tape, print line, seek disk address, and so on.

7.8 SERIAL I/O

We have assumed parallel buses for transferring data (a word or a byte at a time) between the CPU, memory, and I/O devices in the earlier sections of this chapter. Devices such as low-data-rate terminals use serial buses (i.e., a serial mode of data transfer) containing a single line for either direction of transfer. The transmission is usually in the form of an asynchronous stream of characters, with no fixed time interval between two adjacent characters. Figure 7.25 shows the format of an 8-bit character in asynchronous serial transmission. The transmission line is assumed to stay at 1 (the idle state) when no character is being transmitted. A transition from 1 to 0 indicates the beginning of a transmission (the start bit). The start bit is followed by the 8 data bits, and at least 2 stop bits terminate the character. Each bit is allotted a fixed time interval, and hence the receiver can decode the incoming pattern into the proper 8-bit character code. Figure 7.26 shows a serial transmission controller. The CPU transfers data to the output interface buffer; the interface controller generates the start bit, shifts the data out of the buffer 1 bit at a time onto the output lines, and terminates the character with 2 stop bits. On input, the start and stop bits are removed from the input stream and the data bits are shifted into the buffer, which is in turn transferred to the CPU. The electrical characteristics of the asynchronous serial interface have been standardized by the Electronic Industries Association and are called EIA RS-232-C. The data-transfer rate is measured in bits per second. For a 10-character-per-second transmission (i.e., 10 characters per second x 11 bits per character, assuming 2 stop bits per character), the time interval for each bit of the character is 9.09 ms.

Digital pulses at signal levels can be used for communication between data devices local to the CPU. As the distance between the communicating devices increases, digital pulses get distorted due to line capacitance, noise, and other phenomena, and beyond a certain distance they can no longer be recognized as 0s and 1s. Telephone lines are usually used for the transmission of data over long distances. For such transmission, a signal of an appropriate frequency is chosen as the carrier. The carrier
is modulated to produce two distinct signals corresponding to the binary values 0 and 1. Figure 7.27 shows a typical communication structure. The digital signals produced by the source computer are frequency-modulated by the modulator/demodulator (modem) into analog signals consisting of two frequencies. These analog signals are converted back into their digital counterparts by the modem at the destination. An ordinary voice-grade telephone line is used as the transmission medium. The two frequencies of 1170 and 2125 Hz are used by the Bell 108 modem; more sophisticated modulation schemes available today allow data rates of up to 20,000 bits per second.

Figure 7.28 shows the interface between a typical terminal and the CPU. The video display screen and the keyboard are controlled by the controller. If the controller is a simple interface performing basic data-transfer functions, the terminal is called a dumb terminal; if it is a microcomputer system capable of performing local processing, the terminal is an intelligent terminal. The terminal communicates with the CPU, usually in serial mode. The universal asynchronous receiver/transmitter (UART) provides the facilities for serial communication using the RS-232 interface. If the CPU is located at a remote site, a modem is used to interface the UART to the telephone lines, as shown in Figure 7.29; another modem at the CPU converts the analog telephone-line signals into the corresponding digital signals for the CPU. Since terminals are slow devices, a multiplexer is typically used to connect a set of terminals at a remote site to the CPU, as shown in Figure 7.30. Figure 7.31 shows the structure of a system with several remote terminal sites. The terminal multiplexer of Figure 7.30 at each site is replaced by a cluster controller. The cluster controller is capable of performing processing and control tasks (priority allocation, etc.). The cluster controllers are in turn interfaced to the CPU through a front-end processor. Communication between the terminals and the CPU in networks of the type shown in Figure 7.31 follows one of several standard protocols: binary synchronous control (BSC), synchronous data-link control (SDLC), or high-level data-link control (HDLC). These protocols provide the rules for initiating and terminating transmission, handling error conditions, and data framing (rules for identifying serial bit patterns as characters, character streams, and messages).
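The character framing and bit-timing arithmetic described above can be sketched in a few lines of Python. This is an illustrative model only (the helper names are ours, not part of any standard): one start bit, eight data bits sent least-significant bit first, and two stop bits make up the 11-bit character frame, and 10 characters per second therefore costs 110 bits per second.

```python
def frame_char(byte):
    """Frame one 8-bit character for asynchronous transmission:
    one start bit (0), eight data bits (LSB first), two stop bits (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1, 1]

def deframe(bits):
    """Recover the character from an 11-bit frame, discarding the
    start and stop bits; a receiver does this after sampling the line."""
    assert bits[0] == 0 and bits[9] == bits[10] == 1, "bad framing"
    return sum(b << i for i, b in enumerate(bits[1:9]))

# 10 characters/second x 11 bits/character = 110 bits/second,
# so each bit cell lasts 1/110 s, i.e. about 9.09 ms.
BIT_RATE = 10 * 11            # bits per second
BIT_TIME_MS = 1000 / BIT_RATE # duration of one bit cell, in ms
```

Since the idle line sits at 1, a real receiver detects the 1-to-0 transition of the start bit and then samples the line once per `BIT_TIME_MS` interval to recover the frame.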
"timestamp": "2023-10-17 20:15:46", "segment": "segment260", "image_urls": [], "Book": "computerorganization" }, { "section": "7.9 COMMON I=O DEVICES ", "content": "variety io devices used communicate computers broadly classied following categories 1 online devices terminals communicate processor interactive mode 2 offline devices printers communicate processor noninteractive mode 3 devices help realtime data acquisition transmission analogtodigital digitaltoanalog converters 4 storage devices tapes disks also classied io devices provide brief descriptions selected devices section area computer systems technology also changes rapidly newer versatil e devices announc ed daily basis uptoda te informat ion char acterist ics avai lable vendor lite rature magazi nes ones listed refere nces sectio n chapter", "url": "RV32ISPEC.pdf#segment261", "timestamp": "2023-10-17 20:15:46", "segment": "segment261", "image_urls": [], "Book": "computerorganization" }, { "section": "7.9.1 Termina ls ", "content": "typical terminal consist monitor keyboa rd mous e wide variety terminals avai lable mos common cat hode ray tube crt based monit ors rapidly replace at panel displays display use liquid crys tal disp lay lcd technol ogy displays either character mapped bit map ped charac ter mapped moni tors typicall treat disp lay matrix characters bytes bitmappe monit ors treat display array picture eleme nts pixel pixe l depicting 1 bit informat ion display technol ogy exper iencing rapid change wi th newer capab ilities added displays almost daily phisticated alphanum eric graph ic display common ly availab le uch screen capability input mechanism com mon allows select ion men u display ed scre en touching item menu variet keyboa rds avai lable typical keyboa rd used p ersonal compute rs consist 102 keys vario us ergon omic keyboa rds appea ring mouse allows pointi ng area screen moveme nt mous epad buttons used perform vari ous operations based informat ion select ed spot scre en addi 
tion thes e three com ponents typi cal terminal vari ous devices light pen joys tick micro phones direct audio input speakers audio output camera vide input common ly used o devices", "url": "RV32ISPEC.pdf#segment262", "timestamp": "2023-10-17 20:15:47", "segment": "segment262", "image_urls": [], "Book": "computerorganization" }, { "section": "7.9.2 Mouse ", "content": "mouse com mon input device today typicall h two buttons scroll wheel scroll whe el allows scro lling image scre en buttons allow select ion partic ular posi tion scre en mous e moved mouse pad left button sel ects cursor p osition rig ht button typicall provides men u f possibl e operations selected curs position earlier mouses used position ball u nderneath mous e track motion mech anically common see optical mous es apple comput er wireles mig hty mouse trackin g engine based las er technology delivers 20 times performance standard optical tracking giving accuracy responsiveness surfaces offers 3608 scrolling capability perfectly positioned roll smoothly one nger touch sensitive technology employed seamless top shell detects clicking forcesensing buttons either side mighty mouse let us squeeze mouse activate whole host customizable features instantly availa ble wired wireles vers ions ta ble 75 provides additional details", "url": "RV32ISPEC.pdf#segment263", "timestamp": "2023-10-17 20:15:47", "segment": "segment263", "image_urls": [], "Book": "computerorganization" }, { "section": "7.9. 
3 Print ers ", "content": "printers come long way nois da isyw el dotm atr ix p ri nte rs 19 8 0s da w e ca n pr int ny document crisp realistic colors harp text essentially f ont imagine w hether w e want pri nt p hotos family projects r oc uments go speciall designed p rinter job common type f p rinter foun h omes today inkjet printer print er works spraying ionized ink onto paper magnet ized plat es directin g ink desired hape inkjet printers capable producing highqualit text images black nd white color pproaching quality produced costly laser printers inkjet printers today capable printing photoquality images laser printers pr ov ide hi gh est quality text images operate using l aser beam produce e lectrically charged image drum n rolled reservoir toner toner picked e lectrically charged portions drum nd transferred paper combination heat pressure full color laser p rinters available tend much expensive black w hite versions require great deal printer memory produce highresolution images portable printer compact mobi le inkjet p ri nter t briefcase weigh li ttle r un n ba tte ry powe r infra d compati ble wireless printers available inkjet laserjet models allow us printing handheld device laptop computer digital c era w ireless shortrange radio technology allows thi happen called b luetooth allinone devices inkj et laser based combine functi ons pri nter scann er copi er fax one mac hine avai lable example hp color laserjet 1600 family basic color laser printer designed light printing needs 264 mhz processor 16 mb memor y table 76 provides speci cations", "url": "RV32ISPEC.pdf#segment264", "timestamp": "2023-10-17 20:15:47", "segment": "segment264", "image_urls": [], "Book": "computerorganization" }, { "section": "7.9.4 Scanners ", "content": "scanners used transfer content hard copy document machine memory image retained transmitted needed also processed reduced enlarge rotate etc image treated word processable document optical character recognition rst 
performed on the scanned image. Scanners with various capabilities and in various price ranges are available. The Microtek ScanMaker i900 flatbed scanner can handle legal-size originals and features a dual-scanning bed that produces high-quality film scans. It is connected to the processor via USB or FireWire. It employs Digital ICE photo print technology to correct dust and scratches, and ColorRescue color-balance corrections for film and reflective scans. In addition to the typical flatbed glass plate, the i900 has a glassless film-scanning bed underneath the glass scan surface; eliminating the glass in the light path, as in a standalone film scanner, lets the i900 capture all the tonal information in the film, from light shades to deep shadows. The scanner's optical resolution is 3200 x 6400 dots per inch (dpi).

7.9.5 A/D and D/A Converters

Figure 7.32 shows a digital processor controlling an analog device. Real-time process-control environments and the monitoring of laboratory instruments fall into this application mode. As shown in the figure, the analog signal produced by the device is converted into a digital bit pattern by the A/D converter. The processor outputs its data in digital form, which is converted into analog form by the D/A converter. D/A converters are resistor-ladder networks that convert the input n-bit digital information into the corresponding analog voltage level. A/D converters normally use a counter along with a D/A converter. The contents of the counter are incremented and converted into an analog signal that is compared with the incoming analog signal. When the counter value corresponds to the voltage equivalent of the input analog voltage, the counter is stopped from incrementing; the contents of the counter then correspond to the equivalent digital value of the input.

7.9.6 Tapes and Disks

Storage devices such as magnetic tapes (reel-to-reel, cassette, streaming cartridge) and disks (hard and floppy, magnetic and optical), described in Chapter 9, are also used as I/O devices. Table 7.7 lists typical data-transfer rates offered by some I/O devices, mainly to show the relative speeds of the devices. Since the speed capabilities of devices change rapidly, such tables become outdated even before they are published. The descriptions provided in this section are intentionally brief; the reader should refer to vendors' literature for current information on these and other devices.

7.10 EXAMPLES

Computer systems are generally configured around a single- or multiple-bus structure. Beyond this general baseline structure, each commercial system is unique. Most modern systems utilize one of the standard bus architectures, which allow easier interfacing of devices from disparate vendors. This section provides brief descriptions of three I/O structures:

1. The Motorola 68000, as a complete I/O structure example.
2. The Intel 21285, a versatile interconnect processor.
3. The Control Data 6600 I/O structure, for its historical interest.

These systems have been replaced by higher-performance, more versatile versions. Nevertheless, their structures depict the pertinent characteristics, and we simply refer the reader to the manufacturers' manuals for the details of the latest versions.

7.10.1 Motorola 68000

Although the MC68000 has been replaced by higher-performance counterparts, it is included here to provide a complete view of the I/O structure of a commercially available processor system. The MC68000 was a popular microprocessor with a 32-bit internal architecture, capable of interfacing with both 8- and 16-bit-oriented peripherals. Figure 7.33 shows the functional grouping of the signals available on the processor; a brief description of the signals follows. The MC68000 memory space is byte-addressed; the address bus lines A1-A23 always carry an even address corresponding to a 16-bit (2-byte) word.
The upper data strobe (UDS) and the lower data strobe (LDS), when active, refer to the upper and lower byte, respectively, of the memory word selected by A1-A23. D0-D15 constitute the 16-bit bidirectional data bus. The direction of data transfer is indicated by the read/write (R/W) control signal. The address strobe (AS) signal, when active, indicates that the address on the address bus, qualified by UDS and LDS, is valid. Data acknowledge (DTACK) is used to indicate that the data have been accepted (in a write operation) or that the data are ready on the data bus (in a read operation); the processor waits for DTACK during I/O operations, thereby accommodating variable-speed peripherals. The three bus arbitration signals are bus request (BR), bus grant (BG), and bus grant acknowledge (BGACK). The three interrupt lines IPL0-IPL2 allow an interrupting device to indicate one of the seven levels of interrupt possible, with level 1 the lowest priority, level 7 the highest, and level 0 meaning no interrupt. For the processor to communicate with 8-bit-oriented MC6800 peripherals, the enable (E), valid memory address (VMA), and valid peripheral address (VPA) control signals are provided; communication on these lines is synchronous. The function code (FC0-FC2) output signals indicate the type of bus activity currently taking place, such as an interrupt acknowledge or a supervisor or user program or data memory access. The processor also has three system control signals: bus error (BERR), RESET, and HALT.

Programmed I/O on the MC68000. The MC68000 uses memory-mapped I/O, since no dedicated I/O instructions are available. I/O devices can be interfaced using either the asynchronous I/O lines of the MC68000 or the MC6800-oriented synchronous I/O lines. We illustrate the interfacing of an input and an output port to the MC68000 operating in the programmed I/O mode on the asynchronous control lines, using the parallel interface/timer chip MC68230. The MC68230 (see Figure 7.34) is a peripheral interface/timer (PI/T) consisting of two independent sections: ports and timer. The port section has two 8-bit ports (PA0-PA7 and PB0-PB7), four handshake signals (H1-H4), two general I/O pins, and six dual-function pins. The dual-function pins individually work either as a third port (C) or as an alternate function related to port A, port B, or the timer. The pins H1-H4 are used in various modes to control data transfer to the ports, as general-purpose I/O pins, or as interrupt-generating inputs with corresponding vectors. The timer consists of a 24-bit counter and a prescaler. The timer I/O pins TIN, TOUT, and TIACK also serve as port C
pins. The system data bus is connected to pins D0-D7. Asynchronous transfer between the MC68000 and the PI/T is facilitated by data-transfer acknowledge (DTACK), the register selects (RS1-RS5), timer interrupt acknowledge (TIACK), read/write (R/W), and port interrupt acknowledge (PIACK). We restrict this example to the port section of the PI/T. Its 23 internal registers are addressed using RS1-RS5. Associated with each of the three ports are a data register, a control register, and a data-direction register (DDR). Register 0, the port general control register (PGCR), controls all the ports. Bits 6 and 7 of this register configure the ports in one of four possible modes, as shown in the following table. The other bits of the PGCR are used for the I/O handshaking operations. When the ports are configured in the unidirectional mode, the corresponding control registers define the submode of operation. For example, in the unidirectional 8-bit mode, bit 7 of control register 1 signifies bit-oriented I/O; in this submode, each bit of the corresponding port can be programmed independently. To configure a bit as output or input, the corresponding bit of the port's DDR is set to 1 or 0, respectively. The DDR settings are ignored in the bidirectional transfer mode.

The MC68000 interrupt system. The MC68000 interrupt system can be divided into two types: internal and external. The internal interrupts, called exceptions, correspond to conditions such as divide by zero and illegal instruction, and to user-defined interrupts through the use of TRAP instructions. The external interrupts correspond to the seven levels of interrupt brought in on the IPL0-IPL2 lines. As mentioned earlier, level 0 indicates no interrupt; level 7 is the highest-priority interrupt and is nonmaskable by the processor. The other levels are recognized by the processor only when the processor is in a lower-priority processing mode. A 4-byte field is reserved for each possible interrupt, external and internal. The field contains the beginning address of the interrupt service routine corresponding to that interrupt, and the beginning address of this field is the vector corresponding to the interrupt. When the processor is ready to service an interrupt, it pushes the program counter and the status register (SR) onto the stack, updates the priority mask bits in the SR, puts the priority level of the current interrupt on A1-A3, and sets FC0-FC2 to 111 to indicate an interrupt acknowledge (IACK). In response to the IACK, the external device either sends an 8-bit vector number (non-autovector) or, if so configured, requests that the processor generate the vector automatically (autovector). Once the vector v is
known, the processor jumps to the location pointed to by the memory address 4v. The last instruction of the service routine, return from exception (RTE), pops the program counter (PC) and the SR from the stack, thus returning the processor to its former state. Figure 7.37 shows the vector map. Memory addresses 00H (i.e., hexadecimal 00) through 2FH contain the vectors for conditions such as reset, bus error, trace, and divide by zero, as well as the seven autovectors. The operand of a TRAP instruction indicates 1 of the 15 possible trap conditions, thus generating 1 of the 15 trap vectors. Vector addresses 40H through FFH are reserved for the user non-autovectors. The spurious interrupt vector handles interrupts due to noisy conditions. In response to the IACK, if the external device asserts VPA, the processor generates one of the seven vectors 19H through 1FH automatically; thus no external hardware is needed to provide the interrupt vector. In the non-autovector mode, the interrupting device places a vector number (40H-FFH) on the data bus lines D0-D7 and asserts DTACK; the processor reads the vector and jumps to the appropriate service routine. Due to system noise, it is possible for the processor to be interrupted when no device has actually interrupted. In this case, on receipt of the IACK, an external timer activates BERR after a certain time; when the processor receives BERR in response to an IACK, it generates the spurious interrupt vector (18H). Figure 7.38 shows the hardware needed to interface two external devices to the MC68000. The priority encoder 74LS148 generates the 3-bit interrupt signal for the processor based on the relative priorities of the devices connected to it. When the processor recognizes the interrupt, it puts the priority level on A1-A3 and sets FC0-FC2 to 111, activating the 3-to-8 decoder 74LS138, which generates the appropriate IACK. If device 1 is interrupting, the VPA line is activated in response to IACK1, thus activating the autovector mode. For device 2, IACK2 is used to gate its vector onto the data lines D0-D7 and to generate DTACK.

DMA on the MC68000. To perform DMA, the external device requests the bus by activating BR. One clock period after receiving BR, the MC68000 enables the bus grant line BG and relinquishes (tristates) the bus after completing the current instruction cycle, as indicated by AS going high. The external device then enables BGACK, indicating that the bus is in use. The processor waits for BGACK to go high in order to use the bus again.
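The vector arithmetic in the interrupt discussion above is simple enough to sketch. The code below is our own illustration, not Motorola code; it assumes only the facts stated in the text: each vector v names a 4-byte table entry at memory address 4v, interrupt levels 1 through 7 autovector to 19H through 1FH, and 18H is the spurious interrupt vector.

```python
SPURIOUS_VECTOR = 0x18  # generated when BERR answers an IACK

def vector_address(v):
    """Each 4-byte vector entry lives at memory address 4*v."""
    return 4 * v

def autovector(level):
    """Interrupt levels 1..7 map onto the autovectors 19H..1FH."""
    if not 1 <= level <= 7:
        raise ValueError("interrupt level must be 1..7")
    return SPURIOUS_VECTOR + level

def service_routine_address(memory, v):
    """Fetch the 32-bit service-routine address stored big-endian
    (as on the 68000) at the vector's location."""
    a = vector_address(v)
    return int.from_bytes(memory[a:a + 4], "big")
```

For example, a level-1 autovectored interrupt uses vector 19H, so the processor fetches the service-routine address from memory location 4 x 19H = 64H.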
"image_urls": [], "Book": "computerorganization" }, { "section": "7.10.2 Intel 21285 ", "content": "section extracted int el corpor ation 21285 core lo gic sa 110 micropr ocessor datashe et septemb er 1998 figure 739 shows bloc k diagram system wi th int el 21285 io processor connec ted intel sa110 strong arm micro processor sa110 optimize embedded applications intelligent adapt er card swit ches routers printe rs scanners raid controlle rs process control applications settop boxes etc intel 21285 o proce ssor consist follow ing component synchr on ous dynam ic rando access memor sdram interf ace read only memor rom interface peripher al component interconnec pci int erface dma con trollers interrupt cont rollers programma ble timers x bus interf ace serial port bus arbiter joint test architecture group jta g interface du al address cycles ac support power management support sdram controller controls 14 arrays synchronous drams sdrams consisting 8 16 64 mb parts sdrams share command address bits separate block chip select bits shown figure 740 sdram operations performed 21285 refresh read write mode register set reads writes generated either sa110 pci bus masters including intelligent io i2o accesses dma channels sa110 stalled selected sdram bank addressed unstalled data latched sdram bus rst sdram data driven bus pci memory write sdram occurs pci address matche sdra base address regi ster congur ation space register csr base address register pci comma nd either memor write memor wr ite invalidat e pci memor read sdr occurs pci address matche sdra base address register csr base address register com mand either memor read memory read line memor read multipl e th ere four registers controlli ng arrays 0 thr ough 3 four registers den e four sdram arrays sta rt addre ss size addre ss multiplex ing software must ensur e arrays sdram mapped overl ap addre sses arr ays need size however start addre ss arr ay must naturally aligned size array th e arrays need form contiguo 
us address space dif ferent size arrays place largest array lowest addre ss next largest arr ay etc figur e", "url": "RV32ISPEC.pdf#segment270", "timestamp": "2023-10-17 20:15:50", "segment": "segment270", "image_urls": [], "Book": "computerorganization" }, { "section": "7.41 ", "content": "shows rom congur ation rom output enabl e write enabl e connec ted address bits 30 31 respective ly rom addre ss connec ted addre ss bits 242 th e rom always addre ssed sa 110 41000000 h throug h 41f fffffh rese rom also aliased every 16 mb throughout memor space blocking acce ss sdra m allows sa110 boot rom address 0 sa 110 write alias address range disa bled th e sa 110 stalled whi le rom read data wor may require one two four rom read depend ing rom width th e data collec ted packe data words driven onto 310 rom read comple tes 21285 unstalls sa110 th e sa 110 stalled rom writte n rom write data must placed proper byte lanes software running sa110 data aligned hardware 21285 one write done regardless rom width rom write completes 21285 unst alls sa110 pci memor write rom occurs whe n pci address matche expansion rom base address regist er bit 0 f expans ion rom base addre ss regist er 1 pci comma nd either memory write memory write invalidat e pci memor write addre ss data collecte inbound rstin rst out fifo wr itten rom later time 2128 5 target disconnec ts one data phase pci memor read rom occurs whe n pci addre ss matches expans ion rom base address register bit 0 expansion rom base address regist er 1 pci comma nd either memory read memor read line memory read multipl e timing rom accesses controlled values sa110 control register rom access time burst time tristat e time speci ed 21285 programma ble twoway dm channe l move bloc ks data sdra pci pci sdra m dma channe ls read parameters list descr iptors memory perfo rm data move ment stop list exhauste d dma operati ons sa1 10 sets escriptor sdram figure 742 show dma descrip tors local memor y descrip tor occupies four 
data words and must be naturally aligned. The channels read the descriptors from local memory into working registers. There are several registers per channel: a byte count register, a PCI address register, an SDRAM address register, a descriptor pointer register, a control register, and a DAC address register. The four data words of a descriptor provide the following information:

1. The number of bytes to be transferred and the direction of the transfer.
2. The PCI bus address of the transfer.
3. The SDRAM address of the transfer.
4. The address of the next descriptor in SDRAM, or the DAC address.

The SA-110 writes the address of the first descriptor into the DMA channel n descriptor pointer register, writes the DMA channel n control register with miscellaneous parameters, and sets the channel enable bit. If the channel initial descriptor bit (bit 4 of the control register) is clear, the channel reads the descriptor block into the channel control, channel PCI address, channel SDRAM address, and channel descriptor pointer registers. The channel transfers the data until the byte count is exhausted and then sets the channel transfer done bit (bit 2 of the DMA channel n control register). If the end-of-chain bit (bit 31 of the DMA channel n byte count register, i.e., bit 31 of the first word of the descriptor) is clear, the channel reads the next descriptor and transfers its data; if the bit is set, the channel sets the chain done bit (bit 7 of the DMA channel n control register) and stops.

The PCI arbiter receives requests from five potential bus masters: four external masters and the 21285 itself. The grant is made to the device with the highest priority. The main register of the bus arbiter is the X-Bus cycle/arbiter register (offset 148H); this register is used either to control the parallel-port X-Bus or the internal PCI arbiter. There are two levels of priority groups, with the low-priority groups together forming one entry in the high-priority group. Priority rotates evenly among the low-priority groups. Each device, including the 21285, can appear either in the low-priority group or in the high-priority group, according to the value of the corresponding priority bit in the arbiter control register: when the bit is 1, the master is in the high-priority group; when it is 0, it is in the low-priority group. The priorities are reevaluated every time FRAME# is asserted, at the start of each new transaction on the PCI bus, and the arbiter grants the bus to the higher-priority device on the next clock. In this style of arbitration, the master that initiated the last transaction becomes the lowest-priority device in its group. The PCI local bus is a standard data bus that connects directly to the microprocessor; it was developed by Intel Corporation.
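The four-word DMA descriptor layout described above can be sketched as a small structure. The sketch is illustrative, not the vendor's definition: the text specifies that word 0 carries the byte count with bit 31 as the end-of-chain flag, but the exact packing of the direction bits is omitted here, and the names are our own.

```python
from dataclasses import dataclass

END_OF_CHAIN = 1 << 31  # bit 31 of the descriptor's first word

@dataclass
class DmaDescriptor:
    byte_count: int       # word 0: bytes to move (direction bits omitted here)
    pci_address: int      # word 1: PCI bus address of the transfer
    sdram_address: int    # word 2: SDRAM address of the transfer
    next_descriptor: int  # word 3: next descriptor in SDRAM (or DAC address)
    end_of_chain: bool = False

    def words(self):
        """Render the naturally aligned four-word block a channel reads."""
        w0 = self.byte_count | (END_OF_CHAIN if self.end_of_chain else 0)
        return [w0, self.pci_address, self.sdram_address, self.next_descriptor]

def walk_chain(descriptors):
    """Follow a descriptor list the way a channel does: move each block's
    bytes, then stop when a descriptor's end-of-chain bit is set."""
    moved = 0
    for d in descriptors:
        moved += d.byte_count
        if d.end_of_chain:
            break
    return moved
```

Chaining descriptors this way lets software queue an arbitrary scatter/gather transfer and then simply enable the channel; the hardware follows the list without further CPU involvement.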
Modern PCs include a PCI bus in addition to the more general Industry Standard Architecture (ISA) expansion bus, and many analysts believe that PCI will eventually supplant ISA entirely. PCI is a 64-bit bus, though it is usually implemented as a 32-bit bus, and it can run at clock speeds of 33 or 66 MHz; at 32 bits and 33 MHz, it yields a throughput rate of 133 MB per second. PCI is not tied to any particular family of microprocessors; it acts like a tiny local-area network inside the computer, in which multiple devices share a communication channel managed by the chipset.

The PCI target transactions of the 21285 include the following:

1. Memory write to SDRAM.
2. Memory read, memory read line, and memory read multiple of SDRAM.
3. Type 0 configuration write.
4. Type 0 configuration read.
5. Write to a CSR.
6. Read of a CSR.
7. Write to an I2O address.
8. Read of an I2O address.
9. Memory read of ROM.

The PCI master transactions include the following:

1. DAC support for SA-110 and DMA transfers.
2. Memory write and memory write invalidate for SA-110 and DMA transfers, with the PCI command used for the writes selectable.
3. Memory read, memory read line, and memory read multiple for SA-110 and DMA transfers.
4. I/O write.
5. I/O read.
6. Configuration write.
7. Special cycle.
8. Interrupt acknowledge (IAC) read.
9. PCI request operation.
10. Master latency timer support.

The message unit provides a standardized message-passing mechanism between the host and the local processor. It provides a way for the host to read and write lists at PCI bus offsets 40H and 44H from the first base address. The function of the message-unit FIFOs is to hold message frame addresses (MFAs), which are offsets to the message frames. The I2O message units support four logical FIFOs. The I2O inbound FIFOs are used when the host sends messages to the local processor, which then operates on the messages: the SA-110 allocates memory space for the inbound free-list and post-list FIFOs and initializes the inbound pointers; it initializes the inbound free-list by writing valid MFA values into its entries and incrementing the number of entries in the inbound free-list count register. The I2O outbound FIFOs are initialized by the SA-110: the SA-110 allocates memory space for the outbound free-list and post-list FIFOs, and the host processor initializes the free-list FIFO by writing valid MFA entries (see Figures 7.43 and 7.44).

The 21285 contains four 24-bit timers. There are control and status
registers for each timer. The timers have two modes of operation: free-run and periodic. The timer interrupts can be individually enabled and disabled by setting the appropriate bits in the IRQ enable set and fast interrupt request (FIQ) enable set registers. Timer 4 can be used as a watchdog timer, which requires the watchdog enable bit to be set in the SA-110 control register. Figure 7.45 shows the block diagram of a typical timer. The control, status, and data registers of this capability are accessible from the SA-110 interface at offsets from 4200 0000H. Three registers govern the operation of each timer. In the periodic mode, the load register contains the value loaded into the countdown counter, which is decremented to zero; the load register is not used in the free-run mode. The control register allows the user to select the clock rate used to decrement the counter, the mode of operation, and whether the timer is enabled. The clear register resets the timer interrupt. The 21285 has one register per timer that might be considered a status register: its value is read to obtain the 24-bit value of the counter. In terms of the software interface, the user needs to determine (1) the clock speed used to decrement the counter, (2) whether the free-run or the periodic mode is to be used, and (3) whether to generate an IRQ or an FIQ interrupt. The chosen clock speed, the mode, and the enable bit must be written to the timer control register, and the interrupt type must be enabled by setting the appropriate bits in the IRQ enable set or FIQ enable set registers. In the free-run mode, the counter is loaded with the maximum 24-bit value and decremented; when it reaches zero, a timer interrupt is generated. In the periodic mode, the counter is loaded with the value in the load register and decremented; when it reaches zero, a timer interrupt is generated. Upon reaching zero, timers in free-run mode are reloaded with the maximum 24-bit value, while in periodic mode the timers are reloaded from the load register; in both cases the reloaded counters decrement to zero again, and the cycle repeats.

The 21285 UART supports bit rates from approximately 225 bps to approximately 200 Kbps. The UART contains separate transmit and receive FIFOs that each hold 16 data entries. The UART generates receive and transmit interrupts, which are enabled and disabled by setting the appropriate bits in the IRQ enable set and FIQ enable set registers. The control, status, and data registers of this capability are accessible from the SA-110 interface at offsets from 4200 0000H. Four registers govern the operation of the UART. The H_UBRLCR register
The M_UBRLCR and L_UBRLCR registers together contain the 12-bit baud rate divisor (BRD) value. The UARTCON register contains the UART enable bit, along with bits for using the SIR_EN (HP SIR) protocol as the Infrared Data Association (IrDA) encoding method. The UART contains a single data register, UARTDR, through which both the transmit and receive FIFOs are accessed. Whether data in the UART are accessed via the FIFOs or as a single data word is determined by the enable-FIFO bit in the H_UBRLCR control register.

The 21285 has two status registers. RXSTAT indicates framing, parity, and overrun errors associated with the received data. A read of RXSTAT must follow the read of UARTDR, since RXSTAT provides the status associated with the last piece of data read from UARTDR; the order cannot be reversed. UARTFLG contains the transmit and receive FIFO status and an indication of whether the transmitter is busy.

The UART receives a frame of data consisting of a start bit, data bits, stop bits, and parity. The data bits are stripped from the frame and put into the receive FIFO. When the FIFO is half full, a receive interrupt is generated; if the FIFO is full, an overrun error is generated. The framing data are examined, and if they are incorrect, a framing error is generated. Parity is checked, and the parity error bit is set accordingly. Data words are accessed by reading UARTDR, and the errors associated with them are read from RXSTAT. When the UART transmits data, a data word is taken from the transmit FIFO and the framing data are added: a start bit, stop bits, and parity. The frame of data is loaded into the shift register and clocked out. When the FIFO is half empty, a transmit interrupt is generated.

The 21285 is compliant with Institute of Electrical and Electronics Engineers (IEEE) standard 1149.1, the IEEE Standard Test Access Port and Boundary-Scan Architecture. It provides five pins that allow external hardware and software to control the test access port (TAP).

The 21285 provides the nIRQ and nFIQ interrupts to the SA-110 processor. These interrupts are the logical OR of the bits in the IRQ Status and FIQ Status registers, respectively, which collect the interrupt status of 27 interrupt sources; all interrupts have equal priority. The control, status, and data registers of this capability are accessible through the SA-110 interface. Figure 7.46 is a block diagram of a typical interrupt. The following control registers deal with interrupts: the IRQ Enable Set register allows individual interrupts to be enabled; the IRQ Enable Clear register disables individual interrupts; and the IRQ Soft register allows software to generate an interrupt.
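The frame format described above (start bit, data bits, parity, stop bits) is easy to make concrete. The helper below is an illustrative sketch of even-parity generation and frame serialization, not the 21285's internal logic:

```c
#include <stdint.h>

/* Even-parity bit for 8 data bits: chosen so the total number of 1s in
   data plus parity is even. */
uint8_t even_parity_bit(uint8_t data) {
    uint8_t p = 0;
    for (int i = 0; i < 8; i++)
        p ^= (data >> i) & 1u;
    return p;  /* 1 when the data byte has an odd number of 1s */
}

/* Serialize one frame into bits[]: start bit (0), 8 data bits sent LSB
   first, parity bit, one stop bit (1). Returns the bit count (11). */
int build_frame(uint8_t data, uint8_t bits[11]) {
    int n = 0;
    bits[n++] = 0;                      /* start bit */
    for (int i = 0; i < 8; i++)
        bits[n++] = (data >> i) & 1u;   /* data bits, LSB first */
    bits[n++] = even_parity_bit(data);  /* parity bit */
    bits[n++] = 1;                      /* stop bit */
    return n;
}
```

A receiver performing the framing and parity checks described above would verify the start and stop bit positions and recompute the parity over the stripped data bits.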
The following status registers deal with interrupts: the IRQ Enable register indicates which interrupts are in use in the system; the IRQ Raw Status register indicates which interrupts are active; and the IRQ Status register is the logical AND of the IRQ Enable register and the IRQ Raw Status register. When an interrupting device activates its interrupt, that information is loaded into the IRQ Raw Status register. These data are ANDed with the IRQ Enable register to set bits in the IRQ Status register, and the bits of the IRQ Status register are NORed together to generate the nIRQ interrupt to the SA-110. The SA-110 determines what caused the interrupt by reading the IRQ Status register, and the interrupt is cleared by resetting the appropriate bit through the IRQ Enable Clear register.

The 21285 provides 8-, 16-, and 32-bit parallel interfaces to the SA-110. The duration of, and the timing relationship between, the address, data, and control signals are set via control register settings, which gives the 21285 a great deal of flexibility in accommodating a wide variety of I/O devices. The control, status, and data registers of this capability are accessible through the SA-110 interface at offsets from 4200 0000h. Figure 7.47 shows a block diagram of the X-Bus interface. The 21285 provides the following control and status registers that deal with the X-Bus: the X-Bus Cycle register allows the user to set the length of a read/write cycle, apply a divisor to the clock used for read and write cycles, choose a chip select, and determine the X-Bus and PCI interrupt levels; the X-Bus I/O Strobe register allows the user to control when the XIOR and XIOW signals go low and how long they stay low within the programmed read/write cycle length.

7.10.3 Control Data 6600

The CDC 6600 was first announced in 1964. It was designed for two types of use: large-scale scientific processing and time-sharing of smaller problems. To accommodate large-scale scientific processing, a high-speed, floating-point, multifunctional CPU was used. Peripheral activity was separated from CPU activity by providing twelve I/O channels controlled by ten peripheral processors. This architecture, with multiple functional units, a separate I/O structure, and multiple peripheral processors operating in parallel, was adopted by the Cray series of supercomputers described in Chapter 10.
Figure 7.48 shows the CDC 6600 system structure. All ten peripheral processors have access to central memory. One processor acts as the control processor for the system, while the others perform I/O tasks. The peripheral processor memories are used for storing programs and for data buffering. The peripheral processors access central memory in a time-shared manner through the barrel mechanism. In each 100 ns cycle of the barrel, one peripheral processor is connected to the central memory. During its 100 ns time slot on the barrel, a peripheral processor provides its memory-accessing information; the actual memory access takes place during the remaining 900 ns of the barrel cycle, and the peripheral processor can start a new access during its next time slot. Thus the barrel mechanism handles the requests of the ten peripheral processors in an overlapped manner. The I/O channels are 12-bit bidirectional paths; one 12-bit word can be transferred to or from memory every 1000 ns on each channel.

7.11 SUMMARY

The various modes of data transfer between I/O devices and the CPU were discussed in this chapter. Advances in hardware technology and the dropping prices of hardware have made it possible to implement cost-effective and versatile I/O structures. Details of the I/O structures of representative commercially available machines were provided. Interfacing I/O devices to the CPU was generally considered a major task. However, recent efforts at standardizing I/O transfer protocols and the emergence of standard buses have reduced the tedium of this task; it is now possible to easily interface an I/O device made by one manufacturer to a CPU made by another, and compatible devices are available from several manufacturers. The trend of delegating I/O tasks to I/O processors has continued; in fact, modern computer structures typically contain one or more general-purpose processors dedicated to I/O tasks as the need arises. This chapter also provided details of representative I/O devices. Newer and more versatile devices are introduced daily, and a list of the speed and performance characteristics of such devices would quickly become obsolete in a book; refer to the magazines listed in the References section for up-to-date details.
Several techniques used in practical architectures to enhance the performance of the I/O system were described. Each system has unique architectural features, but the concepts introduced in this chapter form the baseline features that help in evaluating practical architectures. In practice, the system structure of a family of machines evolves over several years. Tracing a family of machines to gather its changing architectural characteristics, and to distinguish the influence of advances in hardware and software technologies on the system structure, would be an interesting and instructive project.

CHAPTER 8

Processor and Instruction Set Architectures

The main objective of Chapter 6 was to illustrate the design of a simple but complete computer (ASC). Simplicity was the main consideration, and the architectural alternatives possible at each stage of the design were presented. Chapter 7 extended the I/O subsystem of ASC in light of the I/O structures found in practical machines. This chapter describes selected architectural features of popular processors as enhancements to the ASC architecture. We concentrate here on the details of instruction sets, addressing modes, and register and processor structures. Architectural enhancements to the memory, the control unit, and the ALU are presented in subsequent chapters. For the sake of completeness, we also provide in this chapter a brief description of a family of microprocessors from Intel Corporation. Advanced instruction set architectures are discussed in Chapter 10. We first distinguish between the four popular types of computer systems available.

8.1 TYPES OF COMPUTER SYSTEMS

A modern-day computer system can be classified as a supercomputer, a large-scale machine (mainframe), a minicomputer, or a microcomputer. Combinations of these four major categories (mini-micro system, micro-mini system, etc.) have also been used. In practice, it is difficult to produce a set of characteristics that would definitively place a system in one of these categories today, since the original distinctions are becoming blurred by advances in hardware and software technologies.
A desktop microcomputer system today, for example, provides roughly the processing capability of a large-scale computer system of the 1990s. Nevertheless, Table 8.1 lists some characteristics of the four classes of computer systems; note the considerable amount of overlap in the characteristics. A supercomputer is defined as the most powerful computer system available to date. The physical size of the minimum configuration of a system is probably still a distinguishing feature: a supercomputer or large-scale system would require a wheelbarrow to move it from one place to another, a minicomputer can be carried without mechanical help, and a microcomputer fits easily on an 8 by 12 inch board, or even on a single IC that can be carried in one hand. With advances in hardware and software technologies, the architectural features found in supercomputers and large-scale machines eventually appear in mini- and microcomputer systems. Hence, the features discussed in the subsequent chapters of this book can be assumed to apply equally well to all classes of computer systems.

8.2 OPERAND (DATA) TYPES AND FORMATS

The selection of the processor word size is influenced by the types and magnitudes of the data operands expected in the application, and the processor instruction set is influenced by the types of operands allowed. Data representation differs from machine to machine. The data format depends on the word size, the code used (ASCII or EBCDIC), and the arithmetic (1s or 2s complement) mode employed by the machine. The most common operand (data) types are fixed-point (integer) binary, binary-coded decimal (BCD), floating-point (real) binary, and character strings. Chapter 2 provided the details of these data types and their representations. Typical representation formats of these types are summarized below.
8.2.1 Fixed Point

The fixed-point binary number representation consists of a sign bit (0 for positive, 1 for negative) and n-1 magnitude bits, for an n-bit machine. Negative numbers are represented in either 2s complement or 1s complement form, the 2s complement form being the more common. The binary point is assumed to be at the right end of the representation, since the number is an integer.

8.2.2 Decimal Data

Machines that allow a decimal (BCD) arithmetic mode use 4 bits (a nibble) per decimal digit and pack as many digits as possible into a machine word. In some machines, a separate set of instructions is used to operate on such data; in others, the arithmetic mode is changed from BCD to binary and vice versa by an instruction provided for that purpose, so that the same arithmetic instructions can be used to operate on either type of data. A BCD digit occupies 4 bits, and hence two BCD digits can be packed into a byte. For instance, the IBM 370 allows decimal numbers to be of variable length, from 1 to 16 digits; the length is specified or implied as part of the instruction. Two digits are packed into each byte, and zeros are padded at the most significant end if needed. A typical data format is shown in Figure 8.1. In the packed format shown in Figure 8.1(a), the least significant nibble represents the sign, and each magnitude digit is represented by a nibble. In the unpacked form shown in Figure 8.1(b), the IBM 370 uses 1 byte for each decimal digit: the upper nibble is the zone field and contains 1111, and the lower nibble holds the decimal digit. All arithmetic operations are on packed numbers. The unpacked format is useful for I/O, since the IBM 370 uses the EBCDIC code (1 byte per character).
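The two-digits-per-byte packing described above is a simple bit-manipulation exercise. The sketch below packs a binary integer into BCD nibbles; it illustrates the packing idea only and omits the IBM 370's sign nibble at the least significant end:

```c
#include <stdint.h>

/* Pack up to eight decimal digits, one 4-bit nibble per digit (two BCD
   digits per byte), into a 32-bit word. */
uint32_t pack_bcd(uint32_t value) {
    uint32_t bcd = 0;
    int shift = 0;
    while (value > 0 && shift < 32) {
        bcd |= (value % 10) << shift;  /* place the next decimal digit */
        value /= 10;
        shift += 4;
    }
    return bcd;
}
```

Reading the packed result in hexadecimal reproduces the decimal digits, which is exactly what makes BCD convenient for decimal display and business arithmetic.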
8.2.3 Character Strings

These are represented typically with 1 byte per character, using either the ASCII or the EBCDIC code. The maximum length of a character string is a design parameter of the particular machine.

8.2.4 Floating-Point Numbers

As discussed in Chapter 2, this representation consists of the sign of the number, an exponent field, and a fraction field. The IEEE standard representation is the most commonly used, in which the fraction is represented using either 24 bits (single precision) or 56 bits (double precision). Figure 8.1(c) shows the IBM 370 representation. Here the exponent is to the radix 16 and is expressed as an excess-64 number; that is, 64 is added to the true exponent, so that the stored exponent ranges from 0 to 127 rather than from -64 to 63 (an exponent of 2, for example, is stored as 64 + 2 = 66). Normalization is done with 4-bit shifts, because of the base-16 exponent, rather than the single-bit shifts used with the IEEE standard.

8.2.5 Endian

As mentioned earlier in this book, there are two ways of representing multiple-byte data elements in a byte-addressable architecture. In little-endian mode, the least significant byte of the data is stored in the low-addressed byte, followed by the remaining bytes in higher-addressed locations. Big-endian mode is the opposite. For example, the hexadecimal number 56789ABC requires 4 bytes of storage: little endian stores BC, 9A, 78, 56 at increasing addresses, while big endian stores 56, 78, 9A, BC.

It is important to keep the endian notation intact while accessing data. For instance, if we want to add two 4-byte numbers, the order of addition would be bytes 0, 1, 2, 3 in little endian and bytes 3, 2, 1, 0 in big endian; thus the little-endian representation is convenient for addition. On the other hand, since the most significant byte of the data, which holds the sign bit, is located in the first byte in the big-endian representation, big endian is more convenient for checking whether a number is positive or negative. The Intel series of processors use little endian, while the Motorola processor family uses big endian. Intel processors have instructions to reverse the order of the data bytes within a register.
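The two storage orders described above can be sketched directly. For the example value 0x56789ABC, the two helpers below produce the two byte sequences just listed:

```c
#include <stdint.h>

/* Store a 32-bit value into a 4-byte buffer in each byte order. */
void store_le32(uint8_t buf[4], uint32_t v) {
    for (int i = 0; i < 4; i++)
        buf[i] = (uint8_t)(v >> (8 * i));        /* LSB at low address */
}

void store_be32(uint8_t buf[4], uint32_t v) {
    for (int i = 0; i < 4; i++)
        buf[i] = (uint8_t)(v >> (8 * (3 - i)));  /* MSB at low address */
}
```

Writing the stores explicitly, rather than copying raw memory, keeps file formats portable regardless of which endian convention the host processor uses.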
In general, byte ordering is important when reading data from or writing data to a file. If an application writes its data files in the little-endian format, a big-endian machine must reverse the byte order appropriately when utilizing those data, and vice versa. Popular applications such as Adobe Photoshop, JPEG, and MacPaint use big endian, while others such as Windows BMP and GIF use little endian; applications such as Microsoft WAV and TIFF support both formats.

8.2.6 Register versus Memory Storage

ASC has an accumulator architecture, since in its arithmetic and logic instructions one of the operands is implied to be in the accumulator. This results in shorter instructions and also reduces the complexity of the instruction cycle. However, since the accumulator is the only temporary storage for operands within the processor, memory traffic becomes heavy, and accessing operands from memory is time consuming compared with accessing them from the accumulator. As hardware technology progressed into the VLSI era, it became cost-effective to include multiple registers in the processor. In addition, these registers were designated general purpose, allowing them to be used for any operation the processor needed, rather than being dedicated as an accumulator, an index register, and so on. Thus, general-purpose register (GPR) architectures allowed faster access to data stored within the CPU. Register addressing also requires many fewer bits than memory addressing, making instructions shorter. The availability of a large number of registers in processors allowed them to be arranged as a register stack (see a later section). Stack architectures allow accessing of data by instructions that contain only an opcode; the operands are implied to be in the top two levels of the stack. Some stack architectures use register-based stacks, others use memory-based stacks, and still others use a combination of the two. In general, it is best to retain as much data as possible in CPU registers, which allow easy and fast access, since accessing data from memory is slower.

8.3 REGISTERS

The registers in ASC were designed to serve special purposes: accumulator, index register, program counter, and so on. This assignment of functions limited the utility of the registers but made for a simpler design.
As hardware technology advanced, yielding lower-cost hardware, the number of registers in the CPU increased. The majority of the registers are now designated as GPRs and hence can be used as accumulators, index registers, or pointer registers (i.e., registers whose content is an address pointing to a memory location). The processor status register is variously referred to as the status register, the condition code register, or the program status word. Figure 8.2 shows the register structures of the MC6800 and the MC68000 series of processors. MC68000 series processors operate in either user or supervisory mode. The user-mode registers common to all members of the processor series are shown. In addition, each processor has several supervisor-mode registers; the type and number of supervisor-mode registers vary among the individual processors in the series. For example, the MC68000 has a system stack pointer and uses 5 bits of the most significant byte of the status register, while the MC68020 has seven additional registers.

Figure 8.3 shows the register structures of the Intel 8080, 8086, and 80386 processors. Note that the set of GPRs (A, B, C, D) is maintained throughout the series, except that their width has increased from 8 bits in the 8080 to 32 bits in the 80386. These registers also handle special functions in certain instructions. For instance, C is used as a counter for loop control and to maintain the number of shifts in shift operations, B is used as a base register, and SI (source) and DI (destination) are index registers used in string manipulation. The Intel 8086 series views memory as consisting of several segments, and there are four segment registers: the code segment (CS) register points to the beginning of the code (i.e., program) segment, the data segment (DS) register points to the beginning of the data segment, the stack segment (SS) register points to the beginning of the stack segment, and the extra segment (ES) register points to an extra data segment. The instruction pointer (IP) is the program counter; its value is a displacement with respect to the address pointed to by CS (we discuss this base-displacement addressing mode later in this chapter). Similarly, the stack pointer (SP) contains a displacement value that, together with the contents of SS, points to the top level of the stack. The base pointer (BP) is used to access stack frames below the top stack level (refer to Section 8.6 for details on the Intel Pentium series of processors). The DEC PDP-11 operates in three modes: user, supervisory, and kernel. All instructions are valid in kernel mode, but certain instructions, such as HALT, are not allowed in the other two modes.
There are two sets of six registers, R0 through R5: set 0 is for the user mode, and set 1 is for the other modes. There are three stack pointers, one per mode (R6), and one program counter. The registers can be used as operands (containing data), as pointers (containing an address), and in indexed modes. The processor status word format is shown in Figure 8.4. The DEC VAX-11 maintains the general register structure of the PDP-11; it contains sixteen 32-bit GPRs and a 32-bit processor status register.

The register structures of Figure 8.3 can be summarized as follows:

(a) 8080 (also see Figure 8.15): The registers B, C, D, E, H, and L are each 8 bits long; 8-bit operands utilize individual registers, while 16-bit operands utilize register pairs. The register pair H-L is usually used for address storage. There are also two 8-bit buffer registers to store ALU operands; a 5-bit status (flag) register with Z (zero), C (carry), S (sign), P (parity), and AC (auxiliary carry, used in decimal arithmetic) flags; a 16-bit program counter; a 16-bit stack pointer; and an 8-bit accumulator.

(b) 8086 (also see Figure 8.18): The 8-bit data registers AH, AL, BH, BL, CH, CL, DH, and DL can be paired into the 16-bit registers designated AX, BX, CX, and DX. There are 16-bit registers SP (stack pointer), BP (base pointer), IP (instruction pointer, i.e., PC), SI (source index), and DI (destination index); segment registers for code (CS), data (DS), stack (SS), and extra (ES) segments; and a status register containing the overflow, direction, interrupt, trap, sign, zero, auxiliary carry, parity, and carry flags.

(c) 80386: The 32-bit general data and address registers are EAX, EBX, ECX, EDX, ESI, EDI, EBP, and ESP. The instruction pointer EIP and the flag register EFLAGS are extended versions of the corresponding 8086 registers. The 16-bit segment registers are CS, SS, DS, ES, FS, and GS, the last four being used in data addressing. There are also four 32-bit clock function and timing registers for system console operations and a 32-bit floating-point accelerator control register.

The IBM 370 is a 32-bit machine. Its 16 GPRs, each 32 bits long, can be used as base registers, index registers, and operand registers. There are also four 64-bit floating-point registers and a program status word, shown in Figure 8.5.
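The 8086 segment-plus-displacement scheme described above can be made concrete with a short sketch. One detail is assumed here from general knowledge of the 8086, not from the text above: the 16-bit segment value is shifted left 4 bits before the 16-bit offset is added, producing a 20-bit physical address.

```c
#include <stdint.h>

/* 8086 real-mode address formation (assumed detail: segment << 4 plus
   offset, wrapped to the 20-bit physical address space). The offset
   would be IP for instruction fetches or SP for stack accesses. */
uint32_t phys_addr(uint16_t segment, uint16_t offset) {
    return (((uint32_t)segment << 4) + offset) & 0xFFFFFu;
}
```

Because the segment base can start on any 16-byte boundary, reloading a segment register is how a program (or an operating system loader) relocates its code and data.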
8.4 INSTRUCTION SET

The selection of the set of instructions for a machine is influenced heavily by the application for which the machine is intended. If the machine is for general-purpose data processing, the basic arithmetic and logic operations must be included in the instruction set. Some machines have separate instructions for binary and decimal (BCD) arithmetic, making them suitable for scientific and business-oriented processing, respectively; other processors operate in either binary or decimal mode, and the same instruction operates on either type of data according to the mode. Logical operations (such as exclusive-OR) and shift and circulate operations are needed to access data at the bit and byte levels and to implement arithmetic operations such as multiply and divide and floating-point arithmetic. Control instructions for branching (conditional and unconditional), halt, and subroutine call and return are also required. If I/O is performed using dedicated instructions, I/O instructions are needed; in the memory-mapped I/O mode, the I/O devices are treated as memory locations, and hence all operations using memory are also applicable to I/O.

In general, processors can be classified according to the type of operands used by their instructions. An operand can be located either in a register or in a memory location, and the typical architectures are:

1. Memory-to-memory: the operands are in memory, the operation is performed on them there, and the results are left in memory, without the operands needing to pass through registers.
2. Register-to-memory: at least one of the operands is in a register, and the rest are in memory.
3. Load/store: an operand is first loaded into a register, the operation is performed on registers, and the result, in a register, is stored into memory if needed.
8.4.1 Instruction Types

Instruction sets typically contain arithmetic, logic, shift/rotate, data movement, I/O, and program control instructions. In addition, there may be special-purpose instructions, depending on the application envisioned for the processor.

Arithmetic instructions. The ASC instruction set contained two arithmetic instructions, ADD and TCA. Typical instructions of this type are add, subtract, multiply, divide, negate, increment, and decrement. These apply to the various data types allowed in the architecture, access data from CPU registers and memory, and utilize the various addressing modes. The instructions affect the processor status register bits (zero, carry, overflow, etc.) based on the result of the operation.

Logic instructions. These instructions are similar to arithmetic instructions, except that they perform Boolean (logic) operations, such as exclusive-OR, compare, and test, and affect the processor status register bits accordingly. These instructions are used for bit manipulation (i.e., to set, clear, and complement bits).

Shift/rotate instructions. The ASC instruction set had two shift instructions, SHR and SHL; both are arithmetic shift instructions, since they conform to the rules of the 2s complement system. Typical instruction sets have, in addition, instructions that allow a logical shift, a rotate through the carry bit, and a rotate without the carry bit (see Figure 8.6).

Data movement instructions. The ASC instruction set contained four data movement instructions (LDA, STA, LDX, STX) to move data between memory and the accumulator and index registers. Typical instruction sets contain instructions for moving various types and sizes of data between memory locations and registers, and between registers. Some machines have block-move instructions that move a whole block of data from one memory buffer to another; such instructions are equivalent to a move instruction executed in a loop, and high-level instructions of this kind are called macroinstructions.

Input/output instructions. ASC provided two I/O instructions, RWD and WWD. In general, an input instruction transfers data from a device port to a register or memory, and an output instruction transfers data from a register or memory to a device port. The memory-mapped I/O scheme allows the use of all data movement, arithmetic, and logic instructions operating on memory operands as I/O instructions. Machines using the isolated I/O scheme have architecture-specific I/O instructions; block I/O instructions are common, to handle array- and string-type data.

Control instructions. The ASC program control instructions were BRU, BIP, BIN, HLT, TDX, and TIX. Control instructions alter the sequence of program execution: branches (conditional and unconditional), subroutine calls and returns, and halts. Some instructions do not fit well into these types. For instance, the no-operation (NOP) instruction allows the programmer to hold a place in a program and replace it with another instruction later. Although a NOP performs no useful function, it consumes time, so NOPs can also be used to adjust the execution time of a program to accommodate timing requirements. There may also be instructions specific to an application, such as string handling, image processing, and high-level language (HLL) support.
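The rotate-without-carry operation mentioned above (cf. Figure 8.6) is easy to express in software: the bits shifted out of one end re-enter at the other. A minimal 8-bit rotate-left sketch:

```c
#include <stdint.h>

/* Rotate an 8-bit value left by n positions without a carry bit: bits
   shifted out of the MSB re-enter at the LSB. */
uint8_t rol8(uint8_t x, unsigned n) {
    n &= 7;               /* rotating by a multiple of 8 is a no-op */
    if (n == 0)
        return x;
    return (uint8_t)((x << n) | (x >> (8 - n)));
}
```

A rotate through the carry bit would instead thread a ninth bit (the carry flag) into the circular path, which is why the two variants are listed as distinct instructions.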
8.4.2 Instruction Length

The length of an instruction is a function of the number of operands in the instruction. Typically, a certain number of bits are needed for the opcode; register references do not require a large number of bits, while memory references consume a major portion of the instruction. Hence, memory-reference and memory-to-memory instructions are longer than the other types of instructions. A variable-length instruction format can be used to conserve the amount of memory occupied by programs, although it increases the complexity of the control unit.

The number of memory addresses in an instruction dictates the speed of execution of the instruction, in addition to increasing the instruction length. One memory access is needed to fetch the instruction if the complete instruction fits in one memory word; if the instruction is longer than one word, multiple memory accesses are needed to fetch it, unless the memory architecture provides access to multiple words simultaneously, as described in Chapter 9. Index and indirect address computations are needed for each memory operand in the instruction, adding to the instruction processing time. During the instruction execution phase, a memory read or write access is needed for each memory operand. Since a memory access is slower than a register transfer, the instruction processing time is considerably reduced if the number of memory operands in an instruction is kept to a minimum.

Based on the number of addresses (operands) in an instruction, the following instruction organizations can be envisioned:

1. Three-address
2. Two-address
3. One-address
4. Zero-address

Let us compare these organizations using the typical instructions required to accomplish the four basic arithmetic operations. In this comparison, we assume that the instruction fetch requires one memory access in all cases and that all addresses are direct addresses (i.e., no indexing or indirecting is needed, and hence address computation requires no additional memory accesses); we therefore compare only the memory accesses needed during the execution phase of each instruction. In a three-address machine, where the operands are memory locations, each instruction requires three memory accesses during execution. In practice, the majority of operations are based on two operands, with the result occupying the position of one of the operands.
Thus, the instruction length and the address computation time are reduced by using a two-address format, although three memory accesses are still needed during execution; in a two-address machine, the first operand is lost, since the result occupies its position. If one of the operands can be retained in a register, the execution speed of the instruction is increased. Further, if the operand register is implied by the opcode, the second operand field is not required in the instruction, thus reducing the instruction length; in a one-address machine, ACC (the accumulator register) is implied in each instruction. Load and store instructions are then needed, but if the operands can be held in registers, the execution time is decreased. Furthermore, if the opcode implies both registers, the instructions can be of the zero-address type. In a zero-address machine, SL and TL (respectively, the second and top levels of a last-in, first-out stack) are the implied operands: the arithmetic operation is performed on the top two levels of the stack, so no explicit addresses are needed as part of the instruction. However, memory-referencing one-address instructions (load, store, or move) to move data between the stack and memory are required. Appendix B describes the two most popular implementations of stacks. If the zero-address machine uses a hardwired stack, the above-mentioned arithmetic instructions require no memory access during execution; if a RAM-based stack is used, each arithmetic instruction requires three memory accesses.

Assuming an n-bit address representation and (introducing a symbol the garbled original omitted) k bits for the opcode, the instruction lengths of the four organizations are:

1. Three-address: k + 3n bits
2. Two-address: k + 2n bits
3. One-address: k + n bits
4. Zero-address: k bits

Figure 8.7 shows programs to compute a function F of the operands A, B, and C using each of the above-mentioned instruction organizations; A, B, C, and F are memory locations, the contents of A, B, and C are assumed to be integer values, and the results are assumed to fit in one memory word. The program sizes are easily computed. Benchmark programs of this type, using the typical set of operations performed in the application environment, are useful in such comparisons. The number of memory accesses needed by each program is also shown in the figure; although these numbers provide a measure of relative execution times, the time required for non-memory-access operations should be added to them to complete the execution-time analysis.
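The zero-address organization above can be illustrated with a small simulator. This is a hypothetical sketch: the machine, its operation names, and the example expression F = (A + B) - C are invented for illustration. Arithmetic works only on the top level (TL) and second level (SL) of the stack; only push and pop reference memory.

```c
/* Minimal zero-address (stack) machine model. */
typedef struct {
    int s[8];
    int top;   /* number of occupied levels */
} zstack;

void push(zstack *m, int v) { m->s[m->top++] = v; }    /* load from memory */
int  pop(zstack *m)         { return m->s[--m->top]; } /* store to memory */

/* Arithmetic: pop TL and SL, push the result; no address fields needed. */
void op_add(zstack *m) { int tl = pop(m), sl = pop(m); push(m, sl + tl); }
void op_sub(zstack *m) { int tl = pop(m), sl = pop(m); push(m, sl - tl); }
```

With A = 5, B = 7, C = 2, the sequence PUSH A; PUSH B; ADD; PUSH C; SUB; POP F leaves F = 10, and only the pushes and the final pop touch memory, matching the access counts argued above for a hardwired stack.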
The results of such benchmark studies can be used in the selection of an instruction set and an instruction format, and in the comparison of processor architectures. For example, IBM 370 instructions are 2, 4, or 6 bytes long; the DEC PDP-11 employs an innovative addressing scheme to represent both single- and double-operand instructions in 2 bytes per instruction; and Intel 8080 instructions are 1, 2, or 3 bytes long. These instruction formats are discussed later in this section.

8.4.3 Opcode Selection

The assignment of opcodes to the instructions in an instruction set can significantly influence the efficiency of the decoding process during the execute phase of the instruction cycle. The ASC opcodes were arbitrarily assigned. In general, two opcode assignment methods are followed: (1) the reserved opcode method and (2) the class code method. In the reserved opcode method, each instruction has its own opcode; this method is suitable for instruction sets with a small number of instructions. In the class code method, the opcode consists of two parts: a class code part and an operation part. The class code identifies the type (class) of the instruction, and the remaining bits of the opcode (the operation part) identify a particular operation within that class. This method is suitable for larger instruction sets and for instruction sets with variable instruction lengths, since class codes provide a convenient means of distinguishing between the various classes of instructions.

In practice, it may not be possible to identify a class code pattern in the instruction set of a machine. If the instructions are of fixed length, with the opcode bits always completely decoded, there may be no advantage in assigning opcodes in class code form. For example, the Intel 8080 instruction set does not exhibit a class code form. In the IBM 370, the first 2 bits of the opcode distinguish between 2-, 4-, and 6-byte instructions. The Mostek 6502, an 8-bit microprocessor, uses class codes in the sense that part of the opcode distinguishes between the addressing modes allowed for the instruction; for example, the add-memory-to-accumulator-with-carry (ADC) instruction has a different opcode variation for each of its addressing modes. (We describe the paged addressing mode later in this chapter.)

8.4.4 Instruction Formats

The majority of machines use a fixed-field format within the instruction, varying the length of the instruction to accommodate a varying number of addresses. Some machines vary even the field format within the instruction; in such cases, a specific field of the instruction defines the instruction format. Figure 8.8 shows the instruction formats of the Motorola 6800 and 68000 series machines. The MC6800 is an 8-bit machine that can directly address 64 KB of memory; its instructions are 1, 2, or 3 bytes long. The members of the MC68000 series of processors have an internal 32-bit data and address architecture. The MC68000 provides a 16-bit data bus and a 24-bit address bus, while the top-of-the-line MC68020 provides 32-bit data and 32-bit address buses. The instruction format varies: register-mode instructions are one 16-bit word long, while memory-reference instructions are two to five words long.

Figure 8.9 shows the instruction formats of the DEC PDP-11 and VAX-11 series. The PDP-11 is a series of 16-bit processors. In the instruction format, R indicates one of the eight GPRs, and the direct/indirect flag indicates the mode in which the register is used: the registers can be used as operands, as autoincrement or autodecrement pointers, and in index modes (we describe these modes later in this chapter). The VAX-11 series of 32-bit processors has a generalized instruction format: an instruction consists of an opcode 1 or 2 bytes long, followed by 0 to 6 operand specifiers, and each operand specifier can be 1 to 8 bytes long. Thus a VAX-11 instruction can be 1 to 50 bytes long.

Figure 8.10 shows the instruction formats of the IBM 370, a 32-bit processor whose instructions are 2 to 6 bytes long. Here R1, R2, and R3 refer to one of the 16 GPRs; B1 and B2 are base register designations (also one of the GPRs); DATA indicates immediate data; D1 and D2 refer to 12-bit displacements; and L1 and L2 indicate the lengths of the data in bytes. The base-displacement addressing mode is described later in this chapter.

8.5 ADDRESSING MODES

The direct, indirect, and indexed addressing modes are the most common. Some machines allow pre-indexed indirect addressing, some allow post-indexed indirect addressing, and some allow both. These modes were described in Chapter 5.
Other popular addressing modes are described below. In practice, processors adopt whatever combination of the popular addressing modes suits the architecture.

8.5.1 Immediate Addressing

In this mode, the operand is part of the instruction: the address field is used to represent the operand itself rather than the address of the operand. From the programmer's point of view, this mode is equivalent to the literal addressing used in ASC. But literal addressing was converted into a direct address by the ASC assembler, whereas in practice, to accommodate the immediate addressing mode, the instruction contains a data field along with the opcode. This addressing mode makes operations with constant operands faster, since the operand can be accessed without an additional memory fetch.

8.5.2 Paged Addressing

When this mode of addressing is allowed, the memory is assumed to consist of several equal-sized blocks, or pages, and the memory address is treated as a page number together with a location address (offset) within the page. Figure 8.11 shows a paged memory containing 256 pages, each page 256 bytes long. The instruction format in this environment has two fields to represent a memory address: a page field (8 bits long in this case) and an offset field with just enough bits to address the locations within a page (8 bits in this case). An address modifier to indicate indexing, indirect addressing, and so on can also be included in the instruction format. If the pages are large enough, the majority of the memory references made by a program will be within the same page, and those locations can be addressed with just the offset field, thus saving bits in the address field of the instruction compared with the direct addressing scheme. If the referenced location is beyond the current page, the page number is also needed to access it. Note that the page number can be maintained in a register; the instruction then contains only the offset part of the address, along with a reference to the register containing the page number. The segment registers of the Intel 8086 essentially serve this purpose, as does the base-register addressing mode described next.
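The page-number/offset split described above is just a partition of the address bits. For the 256-page by 256-byte memory of Figure 8.11, a 16-bit address divides as follows:

```c
#include <stdint.h>

/* Split a 16-bit address into an 8-bit page number and an 8-bit offset
   within the page. */
uint8_t page_of(uint16_t addr)   { return (uint8_t)(addr >> 8);   }
uint8_t offset_of(uint16_t addr) { return (uint8_t)(addr & 0xFF); }
```

An instruction referencing a location in the current page needs to encode only the 8-bit offset; the full 16-bit address is required only when the reference crosses into another page.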
addressi ng mode usually z ero page memor used storage mos often used data pointer s page bit instru ction u sed specify whethe r address correspo nds zero page current page i e page instruction loca ted alternati vely opcode mi ght imply zero page operand mostek 6502 uses addre ssing schem e 16bit addre ss divided 8bi page addre ss 8bit offset wi thin page 256 bytes proce ssor uniqu e opcode zero page mode instru ctions instructions assembled 2 bytes implying higher byte address zero nonzero page instructions need 3 bytes represent opcode 16bit address paged memory schemes useful organizing virtual memory systems describ ed chapter 9", "url": "RV32ISPEC.pdf#segment292", "timestamp": "2023-10-17 20:16:00", "segment": "segment292", "image_urls": [], "Book": "computerorganization" }, { "section": "8.5.3 Base-Register Addressing ", "content": "machines one cpu registers used base register beginning address program loaded register rst step program execution address referenced program offset displace ment respect contents base register base register identication displacement represented instruction format thus conserving bits since set instructions load base register part program relocation programs automatic ibm 370 series machines use scheme gprs designated base register programmer segment mode addressing used intel 8086 series another example baseregister addressing", "url": "RV32ISPEC.pdf#segment293", "timestamp": "2023-10-17 20:16:00", "segment": "segment293", "image_urls": [], "Book": "computerorganization" }, { "section": "8.5.4 Relative Addressing ", "content": "mode addressing offset displacement provided part instruction offset added current value pc execution nd effective address offset either positive negative addressing usually used branch instructions since jumps usually within locations current address offset small number compared actual jump address thus reducing bits instruction dec pdp11 mostek 6502 use addressing scheme", "url": 
"RV32ISPEC.pdf#segment294", "timestamp": "2023-10-17 20:16:00", "segment": "segment294", "image_urls": [], "Book": "computerorganization" }, { "section": "8.5.5 Implied (Implicit) Addressing ", "content": "mode addressing opcode implies operand instance zeroaddress instructions opcode implies operands top two levels stack oneaddress instructions one operands implied register similar accumulator implementation addressing modes varies processor processor addressing modes practical machines listed m6502 m6502 allows direct preindexedindirect postindexedindirect addressing zeropage addressing employed reduce instruction length 1 byte addresses 2 bytes long typical absolute address instruction uses 3 bytes zeropage address instruction uses 2 bytes mc6800 addressing modes mc6800 similar m6502 except indirect addressing mode allowed mc68000 processor allows 14 fundamental addressing modes generalized six categories shown figure 812 intel 8080 majority memory references intel 8080 based contents register pair h l registers loaded incremented decremented program control instructions thus 1 byte long imply indirect addressing h l register pair instructions 2byte address part direct addressing also singlebyte instructions operating implied addressing mode intel 8086 segmentregisterbased addressing scheme series processors major architectural change 8080 series addressing scheme makes architecture exible especially development compilers operating systems dec pdp11 two instruction formats pdp11 shown earlier chapter provide versatile addressing capability mode bits allow contents referenced register 1 operand 2 pointer memory location incremented auto increment decremented auto decrement automatically accessing memory location 3 index index mode instructions two words long content second word address indexed referenced register indirect bit allows direct indirect addressing four register modes addition pdp11 allows pcrelative addressing offset provided part instruction added 
current pc value nd effective address operand examples shown figure 813 dec vax11 basic addressing philosophy vax11 follows pdp11 except addressing modes much generalized implied instruction format shown earlier chapter ibm 370 16 gprs used index register operand register base register 12bit displacement eld allows 4 kb displacement contents base register new base register needed reference exceeds 4 kb range immediate addressing allowed decimal arithmetic memorytomemory operations used 6byte instructions lengths operands also specied thus allowing variablelength operands indirect addressing allowed", "url": "RV32ISPEC.pdf#segment295", "timestamp": "2023-10-17 20:16:01", "segment": "segment295", "image_urls": [], "Book": "computerorganization" }, { "section": "8.6 INSTRUCTION SET ORTHOGONALITY ", "content": "instruction set orthogonality dened two characteristics independence consistency independent instruction set contain redundant instruc tions instruction performs unique function duplicate function another instruction also opcodeoperand relationship inde pendent consistent sense operand used opcode ideally operands equally well utilized opcodes addressing modes consistently used operands basically uni formity offered orthogonal instruction set makes task compiler devel opment easier instruction set complete maintaining high degree orthogonality", "url": "RV32ISPEC.pdf#segment296", "timestamp": "2023-10-17 20:16:01", "segment": "segment296", "image_urls": [], "Book": "computerorganization" }, { "section": "8.7 RISC VERSUS CISC ", "content": "advent vlsi provided capability fabricate complete processor ic chip ics control unit typically occupied large portion order 50 60 chip area thereby restricting number processor func tions could implemented hardware solution problem design processors simpler control units reduced instruction set computers risc enable use simple control unit since instruction sets tend small simplication control unit main aim risc designs rst 
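The effective-address computations surveyed in Section 8.5 (paged, base-register, and PC-relative addressing) can each be expressed in one line. The following is an illustrative sketch, not code from the book; the field widths and the specific addresses are hypothetical 16-bit examples:

```python
def paged_address(page: int, offset: int, page_bits: int = 8) -> int:
    """Paged addressing: the page number concatenated with the offset
    within the page (here, 256-byte pages as in Figure 8.11)."""
    return (page << page_bits) | offset

def base_register_address(base: int, displacement: int) -> int:
    """Base-register addressing: displacement relative to the base
    register loaded at program start (relocation is automatic)."""
    return base + displacement

def relative_address(pc: int, offset: int) -> int:
    """PC-relative addressing: a small signed offset added to the PC."""
    return pc + offset  # offset may be negative for backward branches

# Page 0x12, offset 0x34, with 256-byte pages:
assert paged_address(0x12, 0x34) == 0x1234
# Program loaded at 0x4000 referencing displacement 0x0123:
assert base_register_address(0x4000, 0x0123) == 0x4123
# Backward branch of 8 locations from PC = 0x0100:
assert relative_address(0x0100, -8) == 0x00F8
```

Note how the three modes trade instruction bits differently: paged and base-register addressing shorten the address field by factoring out a page number or base, while relative addressing exploits the locality of branch targets.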
These designs were initiated at IBM in 1975 and later, in 1980, at the University of California, Berkeley. The characteristics and the definition of RISC have changed considerably since those early designs, and currently there is no single accepted definition of RISC.

One of the motivations for the creation of RISC was criticism of certain problems inherent in the design of the established complex instruction set computers (CISCs). A CISC has a relatively large number of complicated, time-consuming instructions, and also a relatively large number of addressing modes and different instruction formats. These in turn result in the necessity for a complex control unit to decode and execute the instructions. Such complex instructions are not necessarily faster: a sequence of several RISC instructions could replace one and perform the same function. The complexity of the CISC control-unit hardware implies a longer design time and also increases the probability of a larger number of design errors; such errors are subsequently difficult, time consuming, and costly to locate and correct.

To provide some perspective, let us look at an example CISC system, the DEC VAX-11/780. It has 304 instructions, 16 addressing modes, and sixteen 32-bit registers. The VAX supports a considerable number of data types, among them six types of integers, four types of floating-point numbers, packed decimal strings, character strings, variable-length bit fields, and numeric strings. An instruction can be as short as 2 bytes or as long as 14 bytes, and may have up to six operand specifiers. Another example is the Motorola MC68020 32-bit microprocessor, also a CISC. It has 16 general-purpose CPU registers, recognizes seven data types, and implements 18 addressing modes; MC68020 instructions are from one word (16 bits) to 11 words in length. Many CISCs have a great variety of data and instruction formats and addressing modes; a direct consequence of this variety is a highly complex control unit. For instance, even in the less sophisticated MC68000, the control unit takes up about 60% of the chip area.

Naturally, a RISC-type system is expected to have fewer than 304 instructions, but there is no exact consensus on the number of instructions in a RISC system: the Berkeley RISC has 31, the Stanford MIPS 60, and the IBM 801 about 100. Although a RISC design is supposed to minimize the instruction count, that is not its definitive characteristic. The instruction environment is also simplified by reducing the number of addressing modes and the number of instruction formats, simplifying the design of the control unit. This is the basis of the RISC designs reported so far. The following list of RISC attributes can serve as an informal definition of RISC:

1. A relatively low number of instructions.
2. A low number of addressing modes.
3. A low number of instruction formats.
4. Single-cycle execution of instructions.
5. Memory access performed only by load/store instructions.
6. A CPU with a relatively large register set; ideally, all operations are done register-to-register, and memory-access operations are minimized.
7. A hardwired control unit (this may change as microprogramming technology develops).
8. An effort to support HLL operations inherently in the machine design, by a judicious choice of instructions optimized with respect to the large CPU register set and the compilers.

These attributes should be considered a flexible framework for the definition of a RISC machine rather than a list of design attributes common to all RISC systems. The boundaries between RISC and CISC are not rigidly fixed, but the attributes give at least an idea of what to expect of a RISC system.

RISC machines are used in many high-performance applications; embedded controllers are a good example. They are also used as building blocks for complex multiprocessing systems. It is precisely their speed and simplicity that make RISCs appropriate for such applications. A large instruction set presents a large selection of choices to the compiler for each HLL construct, which in turn makes it difficult to design the optimizing stage of a CISC compiler; furthermore, the results of the optimization may not always yield the most efficient and fastest machine-language code. CISC instruction sets contain a number of instructions that are particularly specialized to fit certain HLL constructs; however, a machine-language instruction that fits one HLL may be redundant for another, and would require excessive effort on the part of the designer of the machine for a relatively low cost-benefit factor.

Since a RISC has relatively few instructions, addressing modes, and instruction formats, a relatively small and simple (compared with a CISC) decode-and-execute hardware subsystem - the control unit - is required, and the chip area of the control unit is considerably reduced. For example, the control area of the RISC I constitutes 6% of the chip area, and that of the RISC II 10%, while for the Motorola MC68020 it is 68%; in general, the control area of CISCs takes up over 50% of the chip area. Therefore, on a RISC VLSI chip there is area available for extra features. As a result of the considerable reduction of the control area, the RISC designer can fit a large number of CPU registers on the chip, which in turn enhances throughput and HLL support.
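Attribute 5 above (memory access only through load/store instructions) is the most visible behavioral difference from a CISC. The following sketch, with hypothetical memory addresses and register names, contrasts a single memory-to-memory CISC add with the equivalent RISC load/store sequence:

```python
# Hypothetical machine state: memory and a small register file.
mem = {0x100: 7, 0x104: 35}
regs = {}

def cisc_add(dst: int, src: int) -> None:
    # One complex instruction: both operands and the result in memory.
    mem[dst] = mem[dst] + mem[src]

def risc_add(dst: int, src: int) -> None:
    # The same function as a sequence of simple instructions:
    regs["r1"] = mem[dst]                  # LOAD  r1, dst
    regs["r2"] = mem[src]                  # LOAD  r2, src
    regs["r1"] = regs["r1"] + regs["r2"]   # ADD   r1, r1, r2
    mem[dst] = regs["r1"]                  # STORE r1, dst

risc_add(0x100, 0x104)
assert mem[0x100] == 42  # same result, four simple instructions
```

The RISC version is longer (the code-size cost noted later in this section), but each step is a single-cycle, easily decoded instruction, and the two middle values now sit in registers where subsequent operations can reuse them without further memory accesses.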
Because the control unit of a RISC is simpler and occupies a smaller chip area, the RISC provides faster execution of instructions: the small instruction set and the small numbers of addressing modes and instruction formats imply a faster decoding process. The large number of CPU registers permits programming that reduces memory accesses; since CPU register-to-register operations are much faster than memory accesses, the overall speed is increased. Since most instructions are of the same length and execute in one cycle, we obtain streamlined instruction handling, which is particularly well suited to a pipelined implementation.

Since the total number of instructions in a RISC system is small, a compiler for an HLL, in attempting to realize a certain operation in machine language, usually has a single choice, as opposed to the possibility of several choices on a CISC. This makes that part of the compiler shorter and simpler for a RISC. The availability of a relatively large number of CPU registers in a RISC permits a more efficient code-optimization stage of the compiler, maximizing the number of faster register-to-register operations and minimizing the number of slower memory accesses. A RISC instruction set thus presents a reduced burden on the compiler writer, which tends to reduce the design time and cost of RISC compilers.

Since a RISC has a small number of instructions, a function performed by a single instruction on a CISC may need two or three instructions on a RISC. This causes RISC code to be longer and constitutes an extra burden on the machine- and assembly-language programmer; the longer RISC programs consequently require more memory locations for their storage.

It can be argued that RISC performance is due primarily to the extensive register sets and not to the RISC principle in general. From a VLSI standpoint, however, it is precisely the RISC principles, which permitted a significant reduction of the control area, that allowed RISC designers to fit so many CPU registers on the chip in the first place. It can also be argued that RISCs may not be efficient with respect to operating systems and other auxiliary operations. The efficiency of RISCs with respect to certain compilers and benchmarks has already been established and substantiated, and there is no reason to doubt that efficient operating systems and utility routines can be generated for a RISC. A number of RISC processors have been introduced, with various performance ratings, register and bus structures, and instruction-set characteristics; the reader is referred to the magazines listed in the references section for details.", "url": "RV32ISPEC.pdf#segment297", "timestamp": "2023-10-17 20:16:02", "segment": "segment297", "image_urls": [], "Book": "computerorganization" }, { "section": "8.8 EXAMPLE SYSTEMS ", "content": "Practical CPUs are organized around either a single-bus or a multiple-bus structure, as discussed earlier. The single-bus structure offers simplicity of hardware and uniform interconnections at the expense of speed. Multiple-bus structures allow simultaneous operations on the structure but require more complex hardware. The selection of a bus structure is thus a compromise between hardware complexity and speed. The majority of modern-day microprocessors are organized around a single-bus structure, since the implementation of a bus consumes a great deal of the silicon available on the IC. The majority of CPUs use parallel buses; when faster operating speeds are not required, serial buses are employed. This is usually the case when the processor IC is used to build calculators: a design goal in fabricating a calculator IC is to pack as many functions as possible into one IC, and by using serial buses the silicon area of the IC is conserved and used for implementing more complex functions.

Figure 8.14 shows a comparison of three-, two-, and single-bus schemes for ASC. Note that the add operation is performed in one cycle using the three-bus structure. In the two-bus structure, the contents of ACC are first brought to the buffer register at the ALU in one cycle, and the results are gated into ACC in the second cycle, thus requiring two cycles for the addition. In the single-bus structure, the operands are transferred to the ALU buffer registers (BUFFER1 and BUFFER2) one at a time, and the result is transferred to ACC in the third cycle.

We now provide brief descriptions of the basic architectural features of representative processors of the Intel family, from the 8080 and 8086 to the Pentium. Some of the advanced architectural features used in these machines have not yet been described, so this section should be revisited after learning those concepts in subsequent chapters; more recent processors of the Intel families are also described in a subsequent chapter of this book. This section is extracted from Intel hardware and software developer manuals.

The 4004, the first microprocessor designed by Intel (1969), was followed by the 8080, the 8085, and so on through the Pentium II. The architecture is based on the 8086, introduced in 1978. The 8086 had a 20-bit address and could reference 1 MB of memory, with 16-bit registers and a 16-bit external data bus; its sister processor, the 8088, had an 8-bit bus. Memory was divided into 64 KB segments, and four 16-bit segment registers were
used as pointers to the addresses of the active segments, in the so-called real mode.", "url": "RV32ISPEC.pdf#segment298", "timestamp": "2023-10-17 20:16:02", "segment": "segment298", "image_urls": [], "Book": "computerorganization" }, { "section": "8.8.1 Intel 8080 ", "content": "The 8080 uses an 8-bit internal data bus for register transfers (refer to Figure 8.15). The arithmetic unit operates on two 8-bit operands residing in the temporary register (TEMP REG) and the accumulator latch. The CPU provides an 8-bit bidirectional data bus, a 16-bit address bus, and a set of control signals used in building a complete microcomputer system. It is a single-bus architecture from the point of view of the ALU; the temporary register and accumulator latch internal to the ALU are used for storing the second operand in operations requiring two operands, and these internal buffers are transparent to programmers.

The 8080 has a synchronous, hardwired control unit operating in major cycles ('machine cycles' in Intel terminology), each consisting of three, four, or five minor cycles ('states' in Intel terminology). An instruction cycle consists of one to five machine cycles, depending on the type of instruction. The machine cycles are controlled by a two-phase, nonoverlapping clock; SYNC identifies the beginning of a machine cycle. There are ten types of machine cycles: fetch, memory read, memory write, stack read, stack write, input, output, interrupt acknowledge, halt acknowledge, and interrupt acknowledge while halted. Instructions of the 8080 contain 1 to 3 bytes, and each instruction requires one to five machine (memory) cycles for fetching and execution. The machine cycles are called M1, M2, M3, M4, and M5. Each machine cycle requires three to five states - T1, T2, T3, T4, and T5 - for its completion, and each state has the duration of one clock period (0.5 microseconds). There are three other states (WAIT, HOLD, and HALT) that last an indefinite number of clock periods, as controlled by external signals. Machine cycle M1 is always the operation-code fetch cycle and lasts four or five clock periods; machine cycles M2, M3, M4, and M5 normally last three clock periods each.

To understand the basic timing operation of the Intel 8080, refer to the instruction cycle shown in Figure 8.16. During T1, the content of the program counter is sent to the address bus, SYNC is true, and the data bus contains status information pertaining to the cycle that is currently being initiated. T1 is always followed by another state, T2, during which the condition of the READY, HOLD, and HALT ACKNOWLEDGE signals is tested. If READY is true, T3 is entered; otherwise, the CPU goes into the wait state TW and stays there as long as READY is false. READY thus allows the CPU speed to be synchronized to memories with various access times and to input devices; furthermore, by properly controlling the READY line, the user can single-step through a program. During T3, the data coming from memory are available on the data bus and are transferred into the instruction register (during M1). The instruction decoder and control section then generate the basic signals to control the internal data transfer, the timing, and the machine-cycle requirements of the instruction.

At the end of T4 (or, if the cycle is not complete, at the end of T5), the 8080 goes back to T1 and enters machine cycle M2, unless the instruction requires only one machine cycle for its execution, in which case a new M1 cycle is entered. The loop is repeated for as many cycles and states as may be required by the instruction. The instruction-state requirements range from a minimum of four states for non-memory-referencing instructions, such as register and accumulator arithmetic instructions, to 18 states for the most complex instructions, such as the instruction to exchange the contents of registers H and L with the contents of the two top locations of the stack. With a maximum clock frequency of 2 MHz, this means that instructions are executed in intervals of 2 to 9 microseconds. If a HALT instruction is executed, the processor enters a wait state and remains there until an interrupt is received. Figure 8.17 shows the micro-operation sequences of two instructions, one per column. For both instructions, the first three states (T1, T2, T3) form the fetch machine cycle M1.", "url": "RV32ISPEC.pdf#segment299", "timestamp": "2023-10-17 20:16:02", "segment": "segment299", "image_urls": [], "Book": "computerorganization" }, { "section": "8.8.
2 Intel 8086 ", "content": "The 8086 processor structure, shown in Figure 8.18, consists of two parts: the execution unit (EU) and the bus interface unit (BIU). The EU is configured around a 16-bit ALU and data bus. The BIU interfaces the processor with external memory and peripheral devices via a 20-bit address bus and a 16-bit data bus. The EU and BIU are connected via a 16-bit internal data bus and a 6-bit control bus (Q bus). The EU and BIU are two independent units that work concurrently, thus increasing instruction-processing speed. The BIU generates instruction addresses and transfers instructions from memory into an instruction buffer (instruction queue); the buffer can hold six instructions. The EU fetches instructions from the buffer and executes them. As long as instructions are fetched from sequential locations, the instruction queue remains filled and EU and BIU operations remain smooth. If the EU refers to an instruction address beyond the range of the instruction queue, as may happen on a jump instruction, the instruction queue needs to be refilled from the new address, and the EU waits until the instruction is brought into the queue. (Instruction buffers are described in Chapter 9.)

The Intel 80286 introduced protected mode, in which a segment register is used as a pointer into address tables. The address tables provided 24-bit addresses, so that 16 MB of physical memory could be addressed; they also provided support for virtual memory. The Intel 80386 included 32-bit registers to be used for operands and addressing; the lower 16 bits of the new registers were used for compatibility with software written for earlier versions of the processor. 32-bit addressing with a 32-bit address bus allowed each segment to be as large as 4 GB. 32-bit operand and addressing instructions, bit-manipulation instructions, and paging were introduced with the 80386, which has six parallel stages: bus interface, code prefetching, instruction decoding, execution, segment addressing, and memory paging. The 80486 processor added more parallel execution capability by expanding the 80386's instruction decode and execution units into five pipelined stages (Chapter 10 introduces pipelining); each stage could work on one instruction, executing up to one instruction per CPU clock. An 8 KB on-chip L1 cache was added to the 80486 to increase the proportion of instructions that could execute at the rate of one per clock cycle (Chapter 9 introduces cache memory concepts). The 80486 also included an on-chip floating-point unit (FPU) to increase performance.", "url": "RV32ISPEC.pdf#segment300", "timestamp": "2023-10-17 20:16:03", "segment": "segment300", "image_urls": [], "Book": "computerorganization" }, { "section": "8.8.3 ", "content": "Intel Pentium. The Intel Pentium processor, introduced in 1993, added a second execution pipeline to achieve higher performance, executing two instructions per clock cycle. The L1 cache was increased, with 8 KB devoted to instructions and another 8 KB to data, and a writeback as well as a writethrough mode was included. For efficiency in branch prediction, an on-chip branch table was added to increase pipeline performance in looping constructs.

The Intel Pentium Pro processor, introduced in 1995, included an additional pipeline so that the processor could execute three instructions per CPU clock. The Pentium Pro provides a microarchitecture with data-flow analysis, out-of-order execution, branch prediction, and speculative execution to enhance pipeline performance. A 256 KB L2 cache that supports four concurrent accesses is included in the Pentium Pro. It also has a 36-bit address bus, giving a maximum physical address space of 64 GB.

The Intel Pentium II processor is an enhancement of the Pentium Pro architecture. It combines the functions of the P6 microarchitecture with support for Intel MMX, a single-instruction, multiple-data (SIMD) processing technique, with eight 64-bit integer MMX registers for use in multimedia and communication processing. The Pentium II is available at clock speeds of 233 to 450 MHz. (Chapters 10 through 12 introduce the advanced concepts mentioned here.)

The Pentium II processor can directly address 2^32 bytes (4 GB) of memory through its address bus. Memory is organized into 8-bit bytes, and each byte is assigned a physical memory location called a physical address. To provide access to memory, the Pentium II provides three modes of addressing: flat, segmented, and real-addressing modes. The flat addressing mode provides a linear address space: program instructions, data, and the stack are present in sequential byte order in the 4 GB byte-ordered memory supported by flat addressing. The segmented memory model provides independent memory address spaces, typically used for instructions, data, and the stack. In such implementations, a program issues a logical address - a combination of a segment selector and an address offset - that identifies the particular byte of the segment of interest. Up to 16,383 segments of a maximum of 2^32 bytes (4 GB) each are addressable by the Pentium II, providing a total of approximately 2^46 bytes (64 terabytes, TB) of addressable memory. The real-addressing mode is provided to maintain the Intel 8086-based memory organization; this provides backward compatibility for programs developed for the earlier architectures. In this mode, memory is divided into segments of 64 KB maximum size, and the linear address space of real-address mode is 2^20 bytes.

The Pentium II has three separate processing modes: protected, real-address, and system management. In protected mode, the processor can use any of the preceding memory models: flat, segmented, or real. In real-address mode, the processor supports only the real-addressing mode. In system-management mode, the processor uses an address space in system-management RAM (SMRAM); the memory model used with SMRAM is similar to that of the real-addressing mode.

The Pentium II provides 16 registers for system processing and application programming. The first eight registers are GPRs, used for logical and arithmetic operands, address calculations, and memory pointers. These registers are shown in Figure 8.19. The EAX register is used as an accumulator for operands and the results of calculations; EBX as a pointer to data in the DS segment of memory; ECX for loop and string operations; and EDX as an I/O pointer. ESI points to data in the segment pointed to by the DS register and serves as a source pointer for string operations; EDI points to data in the destination segment pointed to by the ES register and serves as a destination pointer for string operations. ESP is used as a stack pointer within the SS segment, and the EBP register is used as a pointer to data on the stack, in the SS segment. The lower 16 bits of these registers correspond to those of the 8086 and 80286 architectures and can be referenced by programs written for those processors.

The six segment registers hold 16-bit segment selectors, which address segments. The CS register handles the code-segment selector; instructions are loaded from that segment of memory for execution. The CS register is used together with the EIP register, which contains the linear address of the next instruction to be executed. The DS, ES, FS, and GS registers are data-segment selectors that point to four separate data segments that an application program can access. The SS register is the stack-segment selector, for stack operations. The processor also contains a 32-bit EFLAGS register.
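The real-addressing mode described above keeps the classic 8086 segment:offset computation, in which a 16-bit segment value is shifted left four bits and added to a 16-bit offset to form a 20-bit linear address. A minimal sketch (the specific segment and offset values are hypothetical):

```python
def real_mode_linear(segment: int, offset: int) -> int:
    """8086-style real-address mode: linear = segment * 16 + offset,
    wrapping within the 2**20-byte address space."""
    return ((segment << 4) + offset) & 0xFFFFF

# Segment 0x1234 with offset 0x0010:
assert real_mode_linear(0x1234, 0x0010) == 0x12350
# The largest segment:offset pair wraps around within 2**20 bytes:
assert real_mode_linear(0xFFFF, 0xFFFF) == 0x0FFEF
```

Because the segment is shifted by only four bits, segments overlap every 16 bytes and many segment:offset pairs denote the same linear address; protected-mode segmentation replaces this arithmetic with a table lookup through the segment selector.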
The EFLAGS register contains the processor status and control system flags, which may be set by the programmer using special-purpose instructions. The instruction pointer (EIP) register contains, within the current code segment, the address of the next instruction to be executed. It is incremented from one instruction to the next, or altered to move ahead or backwards by a number of instructions when executing jumps, calls, interrupts, and return instructions. The EIP register cannot be accessed directly by programs; it is controlled implicitly by control-transfer instructions, interrupts, and exceptions. The EIP register controls program flow compatibly with earlier Intel microprocessors, regardless of prefetching operations.

There are many different data types that the Pentium II processor supports. The basic data types are 8-bit bytes, 16-bit words (2 bytes), doublewords (4 bytes, 32 bits), and quadwords (8 bytes, 64 bits) in length. Additional data types are provided for direct manipulation of signed and unsigned integers, BCD, pointers, bit fields, strings, the floating-point data types used by the FPU, and the special 64-bit MMX data types.

Any number of operands from zero up is allowed in the Intel architecture. The addressing modes fall into four categories: immediate, register, memory pointer, and I/O pointer. Immediate operands are allowed in all arithmetic instructions except division; they must be smaller than the maximum value of an unsigned doubleword, 2^32. In the register addressing mode, source and destination operands can be given in any of the 32-bit GPRs, their 16-bit subsets, the 8-bit GPRs, the EFLAGS register, or the system registers; the system registers are usually manipulated by implied means in system instructions. Source and destination operands in memory locations are given as a combination of a segment selector and an address offset within the segment. An application process can load a segment register with the proper segment and need not include that register as part of the instruction; the offset value, or effective address, of the memory location referenced may be a direct value or a combination of a base address, a displacement, an index, and a scale within the memory segment. The processor also supports an I/O address space that contains 65,536 8-bit I/O ports; ports may also be defined as 16 bits or 32 bits in the I/O address space. An I/O port is addressed either by an immediate operand value or through the DX register.

The instruction set of the Intel architectures is classified as CISC. The Pentium II has several types of powerful instructions available to the system and application programmer. The instructions are divided into three groups: integer instructions (which include MMX), floating-point instructions, and system instructions. Intel processors include instructions for integer arithmetic, program control, and logic functions. The subcategories of these types of instructions are: data transfer, binary arithmetic, decimal arithmetic, logic, shift and rotate, bit and byte, control transfer, string, flag control, segment register, and miscellaneous. The following paragraphs cover the most used instructions.

Move instructions allow the processor to move data among the registers and memory locations provided as operands of the instruction; conditional moves, based on the value of comparison status bits, are provided. Exchange, compare, push and pop stack operations, port transfers, and data-type conversions are also included in this class. The binary-arithmetic instruction class includes integer add, add with carry, subtract, subtract with borrow, signed and unsigned multiply, and signed and unsigned divide, as well as instructions to increment, decrement, negate, and compare integers. The decimal-arithmetic class of instructions deals with the manipulation of decimal and ASCII values, adjusting those values after add, subtract, multiply, and divide functions. Logical AND, OR, XOR (exclusive-OR), and NOT instructions are available for integer values. Shift-arithmetic left and right instructions, along with logical shift and rotate left and right (with and without carry), manipulate integer operands of single- and double-word length.

The Pentium allows testing and setting of bits in registers and operands. The bit and byte functions are as follows: bit test; bit test and set; bit test and reset; bit test and complement; bit scan forward; bit scan reverse; set byte if equal/set byte if zero; and set byte if not equal/set byte if not zero. Control instructions allow the programmer to define jump, loop, call, return, and interrupt behavior, check out-of-range conditions, and enter and leave high-level procedure calls. The jump functions include instructions for testing zero/not-zero, carry, parity, and less/greater conditions. Loop instructions use the ECX register to test conditions for loop constructs. String instructions allow the programmer to manipulate strings, providing move, compare, scan, load, store, repeat, and input and output of string operands, with separate instructions included for byte, word, and doubleword data types.

MMX instructions execute on packed-byte, packed-word, packed-doubleword, and quadword operands. The MMX instructions are divided into subgroups: data transfer, conversion, packed arithmetic, comparison, logical, shift and rotate, and state management. The data-transfer instructions MOVD and MOVQ provide movement of double and quad words. Figure 8.20 shows the mapping of the special MMX registers onto the mantissa fields of the FPU registers of the Pentium II; because the MMX registers correlate with the FPU registers, either MMX instructions or FPU instructions can manipulate their values. The conversion instructions deal mainly with packing and unpacking of word and doubleword operands. The MMX arithmetic instructions add, subtract, and multiply packed bytes, words, and doublewords.

Floating-point instructions are executed by the processor's FPU; these instructions operate on floating-point (real), extended-integer, and binary-coded decimal (BCD) operands. The floating-point instructions are divided into categories similar to those of the arithmetic instructions: data transfer, basic arithmetic, comparison, transcendental (trigonometric) functions, load constants, and FPU control. System instructions are used to control the processor and to support operating systems and executives. These instructions include loading and storing the global descriptor table (GDT) and local descriptor table (LDT), task and machine status, cache and translation-lookaside-buffer manipulation, bus lock, halt, and provision of performance information. The EFLAGS instructions allow the state of selected flags in the EFLAGS register to be read or modified. Figure 8.21 shows the EFLAGS register.

As for the hardware architecture, the Pentium II is manufactured using Intel's 0.25-micron manufacturing process and contains 7.5 million transistors. It runs at clock speeds of 233 to 450 MHz. It contains a 32 KB L1 cache, separated into a 16 KB instruction cache and a 16 KB data cache, and a 512 KB L2 cache that operates on a dedicated 64-bit cache bus. It supports memory cacheability for 4 GB of addressable memory space and uses a dual-independent-bus (DIB) architecture for increased bandwidth and performance; the system bus speed is increased from 66 to 100 MHz. It contains MMX media-enhancement technology for improved multimedia performance and uses a pipelined FPU that supports the IEEE Standard 754 32- and 64-bit formats as well as an 80-bit format. It is packaged in the new single-edge-contact (SEC) cartridge, allowing higher frequencies and more handling protection. The Pentium II has a three-way superscalar architecture.
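The packed arithmetic just described operates on independent lanes of a 64-bit register, with no carry propagating between lanes. The following is a pure-Python model of a PADDB-style packed byte add, a sketch of the lane behavior rather than Intel's implementation:

```python
def paddb(a: int, b: int) -> int:
    """Packed add of eight unsigned bytes held in 64-bit values.
    Each 8-bit lane is added independently (wrapping modulo 256);
    no carry crosses from one lane into the next."""
    result = 0
    for lane in range(8):
        shift = lane * 8
        lane_sum = (((a >> shift) & 0xFF) + ((b >> shift) & 0xFF)) & 0xFF
        result |= lane_sum << shift
    return result

# Lane 0: 0x01 + 0xFF wraps to 0x00 with no carry into lane 1;
# lane 1: 0x10 + 0x01 = 0x11.
assert paddb(0x0000000000001001, 0x00000000000001FF) == 0x0000000000001100
```

A scalar 64-bit add of the same two values would let the lane-0 carry ripple into lane 1; suppressing that ripple is exactly what makes the operation SIMD.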
arch itecture mea ning fetch decode execu te thr ee instructions per clock cycle progr execution pentium ii carrie b twelve stage ne grain pipe line consist ing fol lowing stages instru ction prefetch length decode instru ction decode rename resource allocati mop sched uling dispatc h execu tion write back retirem ent pipe line broke n three major sta ges twel sta ges either supportin g r cont ributing thes e thr ee sta ges th e thr ee stages fetch decode dispatch execute retire figur e", "url": "RV32ISPEC.pdf#segment301", "timestamp": "2023-10-17 20:16:04", "segment": "segment301", "image_urls": [], "Book": "computerorganization" }, { "section": "8.22 ", "content": "show stages interf ace instru ction pool connections l1 cache bus int erface fetch decode unit fet ches instru ctions ori ginal program order instruction cache decode instru ctions series micro operations op repr esent data ow instru ction also performs speculati prefetch also fetching next instru ction curr ent one instru ction cache figure", "url": "RV32ISPEC.pdf#segment302", "timestamp": "2023-10-17 20:16:04", "segment": "segment302", "image_urls": [], "Book": "computerorganization" }, { "section": "8.23 ", "content": "show fetch decode unit l1 instru ction cache loca l instru ction cache nex tip unit provi des l1 instruction cache inde x based inputs branch target buffer btb l1 instru ction cache fetches cache line corr espond ing inde x nextip next line prefetch passes decoder three parallel decode rs accept stream instru ction cache thr ee decoders two simple instruction decode rs one com plex instruction decode r decoder converts instruction triadic mops two logical sources one logical destination per mop instructions converted directly single mops instructions decoded onetofour mops complex instru ctions require microcode stor ed mi crocode instru ction sequen cer microcode set preprogr ammed seque nces f norm al ops decodi ng ops sent register alias table rat unit whi ch adds sta tus inf 
ormation mops enters int instruction pool instru ction pool im plemented array cont ent addre ssable memory called reorder buff er rob th rough use instruction pool dispatc hexecute unit perf orm instru ctions origina l program order mops execu ted based data depend enci es reso urce avai lability th e results stored back instru ction pool await retirem ent based progr o w th consid ered specu lative execu tion sinc e results may may used based ow progr one keep processor busy possibl e times proce ssor sched ule ve mops per clock cycle 3 mops typical one resource port figure 824 show dispatc hexecute unit th e disp atch unit selects mops instruction pool depend ing sta tus status indicates op operands dispatch unit checks see execution resource needed mop also available true reservation station removes mop sends resource executed results mop later returned pool mops avai lable execu ted mops execu ted rstin rst out fifo order th e core always looking ahead ther instructions could specu lativel executed typicall looking 2030 instru ctions front instru ction pointer th e retire unit resp onsible ensuring instru ctions comple ted original program order completed means temporary results dispatchexecute stage permanently committed memory combination retire unit instruction pool allows instructions started order always completed original program order figure 825 shows retire unit every clock cycle retire unit checks status mops instruction pool looking mops executed removed pool removed original target mops written based original instruction retire unit must notice mops complete must also reimpose original program order determining mops retired retire unit writes results cycle retirements retirement register le rrf retire unit capable retiring 3 mops per clock shown previously instruction pool removes constraint linear instruction sequencing traditional fetch execute phases biu responsible connecting three internal units fetchdecode dispatchexecute retire rest system 
The bus interface communicates directly with the L2 cache bus and the system bus. Figure 8.26 shows the BIU. The memory order buffer (MOB) allows loads to pass stores by acting like a reservation station and reorder buffer: it holds suspended loads and stores and redispatches them when the blocking condition (a dependency or resource conflict) disappears. Loads are encoded in a single μop, since they need only specify the memory address to be accessed, the width of the data retrieved, and the destination register. Stores need to provide a memory address, a data width, and the data to be written; they therefore require two μops, one to generate the address and one to generate the data, and these μops must later recombine for the store to complete. Stores are also never reordered among themselves: a store is dispatched only when both its address and its data are available and there are no older stores awaiting dispatch.

The combination of three processing techniques enables the processor to be more efficient by manipulating data rather than processing instructions sequentially. The three techniques are multiple branch prediction, dataflow analysis, and speculative execution. Multiple branch prediction uses algorithms to predict the flow of the program across several branches, so that while the processor is fetching instructions it is also looking at instructions further ahead in the program. Dataflow analysis, performed by the dispatch/execute unit, analyzes the data dependencies between instructions and schedules instructions to be executed in an optimal sequence, independent of the original program order. Speculative execution is the process of looking ahead of the program counter and executing instructions that are likely to be needed; the results are stored in special registers and used only if they are needed by the actual program path. These techniques enable the processor to stay busy at all times, thus increasing performance.

The bus structure of the Pentium II is referred to as the dual independent bus (DIB) architecture. DIB is used to aid processor bus bandwidth by having two independent buses, so that the processor can access data from either bus simultaneously and in parallel. The two buses are the L2 cache bus and the system bus. The cache bus refers to the interface between the processor and the L2 cache, which for the Pentium II is mounted on the same substrate as the core; the cache bus is 64 bits wide and runs at half the processor core frequency. The system bus refers to the interface between the processor, the system core logic, and other bus agents on the system bus; it does not connect to the cache bus. The system bus is also 64 bits wide, runs at 66 MHz, and is pipelined to allow simultaneous transactions. Figure 8.27 shows a block diagram of the Pentium II processor.

The Pentium II contains two integer units and two floating-point units (FPUs) that operate in parallel, sharing the same instruction decoder and sequencer with the system bus. The FPU supports real, integer, and BCD-integer data types and the floating-point processing algorithms defined in the IEEE 754 and 854 standards for floating-point arithmetic. The FPU uses eight 80-bit data registers for the storage of values. Figure 8.28 shows the relationship between the integer units and the FPUs.

The Pentium II uses I/O ports to transfer data. I/O ports are created by system hardware circuitry that decodes the control, data, and address pins on the processor; an I/O port can be an input port, an output port, or a bidirectional port. The Pentium II allows I/O ports to be accessed in two ways: through a separate I/O address space or through memory-mapped I/O. I/O addressing is handled through the processor's address lines, and the Pentium II uses a special memory/I/O transaction on the system bus to indicate whether the address lines are driven with a memory address or an I/O address. Accessing an I/O port through the I/O address space is handled through a set of I/O instructions and a special I/O protection mechanism, which guarantees that writes to an I/O port are completed before the next instruction in the instruction stream is executed. Accessing I/O ports through memory-mapped I/O is handled with the processor's general-purpose move and string instructions, with protection provided through segmentation or paging.

The Pentium II has two levels of cache, the L1 cache and the L2 cache, each able to cache data from the 4-GB addressable memory space. The L1 cache is 32 KB, divided into two 16-KB units that form the instruction cache and the data cache. The instruction cache is four-way set-associative and the data cache is two-way set-associative, each with a 32-byte cache line size. The L1 cache operates at the frequency of the processor and provides the fastest access to the most frequently used information. On an L1 cache miss, the L2 cache is searched for the data. The Pentium II supports between 256 KB and 1 MB of L2 cache, with 512 KB most common. The L2 cache is four-way set-associative with a 32-byte cache line size, and it uses a dedicated 64-bit bus to transfer data between the processor and the cache. Cache coherency is maintained through the MESI (modified, exclusive, shared, invalid) snooping protocol. The L2 cache supports four concurrent cache accesses as long as they are to different banks. Figure 8.29 shows the cache architecture of the Pentium II.

MMX technology is considered the most significant enhancement to the Intel architecture in the last 10 years. The technology provides the improved video compression/decompression, image manipulation, encryption, and I/O processing needed in multimedia applications. The improvements are achieved through the use of the following: the single-instruction, multiple-data (SIMD) technique, 57 new instructions, eight 64-bit-wide MMX registers, and four new data types. The SIMD technique allows parallel processing of multiple data elements with a single instruction. This is accomplished by performing an MMX instruction on one of the MMX packed data types; for example, an add instruction performed on two packed bytes would add eight different pairs of values with a single add instruction. The 57 new instructions cover the following areas: basic arithmetic, comparison operations, conversion instructions, logical operations, shift operations, data transfer instructions, and general-purpose instructions that fit easily into the parallel pipelines of a processor designed to support the MMX packed data types. MMX contains eight 64-bit registers (MM0-MM7) that are accessed directly by MMX instructions; these registers are used to perform calculations on the MMX data types. The registers have two data access modes, 64-bit and 32-bit: the 64-bit mode is used for transfers between MMX registers, and the 32-bit mode is used for transfers between integer registers and MMX registers. The four new data types included in MMX are packed byte, packed word, packed doubleword, and quadword (see Figure 8.30). Each data type is a grouping of signed or unsigned fixed-point integers (bytes, words, doublewords, or a quadword) into a single 64-bit quantity; these are stored in the 64-bit MMX registers when an MMX instruction executes on the values in a register.

The Pentium II is packaged in a single-edge-contact (SEC) cartridge, a metal and plastic cartridge that completely encloses the processor core and the L2 cache. To enable high-frequency operation, the core and L2 cache are surface-mounted directly to a substrate inside the cartridge, which connects to the motherboard via a single edge connector. The SEC allows the use of high-performance BSRAMs, which are widely available and cheaper, for the L2 cache. The SEC also provides better handling protection for the processor.

CHAPTER 9 MEMORY AND STORAGE

We have demonstrated the use of flip-flops for storing binary information. Several flip-flops put together form a register, and registers are used either to store data temporarily or to manipulate the data stored in them using the logic circuitry around them. The memory subsystem of a digital computer is functionally a set of such registers in which data and programs are stored. Instructions of the program stored in memory are retrieved by the control unit of the machine and decoded, and the appropriate operations are performed on data stored either in memory or in a set of registers in the processing unit. For optimum operation of the machine, programs and data must be accessible by the control and processing units as quickly as possible; the main memory (primary memory) allows such fast access. This fast-access requirement adds a considerable amount of hardware to the main memory and thus makes it expensive. Chapter 5 provided details of the RAM used as the main memory of ASC. To reduce memory cost, data and programs not immediately needed by the machine are normally stored in a less-expensive secondary memory subsystem and brought into the main memory as the processing unit needs them. The larger the main memory, the more information it can store, and hence the faster the processing, since most of the information required is then immediately available. But because main memory hardware is expensive, a speed-cost tradeoff is needed to decide the amounts of main and secondary storage needed.
This chapter provides models of the operation of the most commonly used types of memories and a brief description of memory devices and organizations, followed by a description of virtual and cache memory schemes. We provide models of the operation of the four most commonly used types of memories in the next section. Section 9.2 lists the parameters used in evaluating memory systems, and Section 9.3 describes the memory hierarchy in computer systems. Section 9.4 describes popular semiconductor memory devices and the design of a primary memory system using these devices in detail, followed by a brief description of popular secondary memory devices; representative memory ICs are briefly described in the appendix. Memory speed enhancement concepts are introduced in Section 9.5, and size enhancement is discussed in Section 9.6. Coding for data compression, integrity, and fault tolerance is covered in Section 9.7. The chapter ends with an example memory system in Section 9.8.

9.1 TYPES OF MEMORY

Depending on the mechanism used to store and retrieve data, a memory system can be classified as one of the following four types:
1. Random-access memory (RAM): (a) read/write memory (RWM); (b) read-only memory (ROM)
2. Content-addressable memory (CAM), or associative memory
3. Sequential-access memory (SAM)
4. Direct-access memory (DAM)
Primary memory is of the RAM type. CAMs are used in special applications in which rapid data search and retrieval are needed. SAM and DAM are used in secondary memory devices.

9.1.1 Random-Access Memory

As shown in Chapter 5, any addressable location in a RAM can be accessed in a random manner, and the process of reading from and writing into a location in a RAM consumes an equal amount of time no matter where the location is physically in the memory. Two types of RAM are available: read/write and read-only. Read/write memory (RWM) is the most common type of main memory; its model is shown in Figure 9.1. In an RWM, each memory register (memory location) has an address associated with it, and data are written into and read from a location by accessing the location using its address. The memory address register (MAR) of Figure 9.1 stores such an address: with n bits in the MAR, 2^n locations can be addressed, numbered 0 through 2^n - 1. Transfer of data into and out of memory is usually in terms of a set of bits known as a memory word. Each of the 2^n words in the memory of Figure 9.1 has m bits; this is thus a 2^n x m-bit memory. This is the common notation used to describe RAMs: in general, an N x m-unit memory contains N words of m units each, where a unit can be a bit, a byte (8 bits), or a word of a certain number of bits. The memory buffer register (MBR) is used to store the data to be written into or read from a memory word. To read a memory word, its address is provided in the MAR and the read signal is set to 1; a copy of the contents of the addressed memory word is brought by the memory logic into the MBR. The content of the memory word is thus not altered by a read operation. To write a word into the memory, the data to be written are placed in the MBR by external logic, the address of the location into which the data are to be written is placed in the MAR, and the write signal is set to 1; the memory logic transfers the MBR content into the addressed memory location. The content of the memory word is thus altered by a write operation.

A memory word is defined as the most often accessed unit of data. Typical word sizes used in the memory organizations of commercially available machines are 6, 16, 32, 36, and 64 bits. In addition to addressing a memory word, it is possible to address a portion of it (e.g., a half-word or quarter-word) or a multiple of it (e.g., a double word or quad word), depending on the memory organization. In a byte-addressable memory, for example, an address is associated with each byte (usually 8 bits per byte) in the memory, and a memory word consists of one or more bytes.

Read-only memory. The literature routinely uses the acronym RAM to mean RWM; we follow this popular practice and use RWM only when the context requires us to be specific. We include the MAR and MBR as components of the memory system model; in practice, these registers may not be located in the memory subsystem, and other registers in the system may serve their functions. A ROM is also a RAM, except that data can only be read from it. Data are usually written into a ROM either by the memory manufacturer or by the user in an offline mode (i.e., by special devices that can write, or burn, the data pattern into the ROM). The model of a ROM is shown in Figure 9.2. A ROM is also used as main memory; it contains data and programs that are not usually altered in real time during system operation. No MBR is shown in Figure 9.2. In general, we assume that the data output is available on the output lines as long as the memory enable signal is on, and that it is latched into an external buffer register when such a buffer is provided as part of the memory system in some technologies.

9.1.2 Content-Addressable Memory

In this type of memory, the concept of an address is not usually present. Rather, the memory logic searches for the locations containing a specific pattern; hence the descriptors "content-addressable" and "associative" are used. In a typical operation of this memory, the data to be searched for are first provided to the memory. The memory hardware searches for a match and either identifies the location or locations containing that data or returns a "no match" if none of the locations contains the data. A model of an associative memory is shown in Figure 9.3. The data to be searched for are first placed in the data register. The data need not occupy the complete data register: a mask register is used to identify the region of the data register of interest for the particular search, with the corresponding mask register bits set to 1. The word-select register bits are set to indicate which words are involved in the search. The memory hardware searches through those words, in the bit positions in which the mask register bits are set, for the data thus selected. If a match occurs with the content of the data register, the corresponding results register bit is set to 1. Depending on the application, all the words responding to the search may be involved in further processing, or a subset of the respondents may be selected; the multiple-match resolver (MMR) circuit implements this selection process. Note that the data-matching operation is performed in parallel, and hence extensive hardware is needed to implement this memory. Figure 9.3(b) shows the sequence of a search operation in which the MMR circuit is used to select the first respondent among the responding words for further processing.

Associative memories are useful when an identical operation must be performed on several pieces of data simultaneously, or when a particular data pattern must be searched for in parallel. For example, if each memory word is a record in a personnel file, the records corresponding to the set of female employees 25 years old can be searched for by setting the data and mask register bits appropriately; if sufficient logic is provided, all the records responding to the search can also be updated simultaneously. In practice, CAMs are built out of RAM components and have an addressing capability; in fact, the MMR returns the address of the responding word or words in response to a search. A major application of CAMs is in storing data on which rapid search and update operations must be performed. The virtual memory scheme described later in this chapter shows an application of CAMs. We return to a description of CAMs in Section 9.3.2, where the detailed design of a CAM system is given.
", "content": "3 sequen tialacces memor serialinput serialou tput shift register simplest mode l seque ntial mem ory right shift register figure", "url": "RV32ISPEC.pdf#segment313", "timestamp": "2023-10-17 20:16:06", "segment": "segment313", "image_urls": [], "Book": "computerorganization" }, { "section": "9.4a, ", "content": "data enter left input leave shift register right output becaus e nly input output avai lable device data mus wr itten read device seque nce stored regi ster every data item seque nce rst data item desired item mus accessed order retrieve require data thus sam particula r model corr espond rstin rst out fifo sam since data items retrieved order wer e entered organizatio n sam also called queue note addi tion rightshift shift register mechani sms input output data similar r mbr ram mode l neede build fifo memory figure 94b show model lastin rst out lifo sam shift register shift right left used data always enter thr ough left input leave register left outpu t write ata plac ed input register shifted right reading data output read register shifted left thereby moving item regist er lef present ing next data item ou tput note data acce ssed thi device lifo manner rganization sam also calle stack data input operati describ ed push ing data sta ck data retrieved popi ng stack figure 95 shows fifo lifo organiza tions 4bi data words usin g 8bit shift regist ers f sam devi ces thus store eigh 4bit wor ds figure 96 show generalize model seque ntialaccess storage system read write transduc ers read data write data onto ata stor age medium current position medium move next position thus data item must examin ed seque nce retri eve desired data mode l applica ble secon dary storage devices magnet ic tape", "url": "RV32ISPEC.pdf#segment314", "timestamp": "2023-10-17 20:16:06", "segment": "segment314", "image_urls": [], "Book": "computerorganization" }, { "section": "9.1.4 Direct-A ccess Memory ", "content": "figure 97 show mode l dam devi ce mag netic disk 
ata accessed two steps 1 transducers move particular position determined addressing mechanism cylinder track 2 data selected track accessed sequentially desired data found type memory used secondary storage also called semiram since positioning readwrite transducers selected cylinder random accessing data within selected track sequential", "url": "RV32ISPEC.pdf#segment315", "timestamp": "2023-10-17 20:16:06", "segment": "segment315", "image_urls": [], "Book": "computerorganization" }, { "section": "9.2 MEMORY SYSTEM PARAMETERS ", "content": "important characteristics memory system capacity dataaccess time datatransfer rate frequency memory accessed cycle time cost capacity storage system maximum number units bits bytes words data store capacity ram instance product number memory words word size 2k 3 4 memory example store 2k k 1024 210 words containing 4 bits total 2 3 1024 3 4 bits access time time taken memory module access data address provided module data appear mbr end time ram access time nonram function location data medium reference position readwrite transducers cycle time measure often memory accessed cycle time equal access time nondestructive readout memories data read without destroyed storage systems data destroyed read operation destructive readout rewrite operation necessary restore data cycle time devices dened time takes read restore data since new read operation performed rewrite completed datatransfer rate number bps data read memory rate product reciprocal access time number bits unit data data word read parameter signicance nonram systems rams cost product capacity price memory device per bit rams usually costly memory devices parameters interest fault tolerance radiation hardness weight data compression integrity depending application memory used", "url": "RV32ISPEC.pdf#segment316", "timestamp": "2023-10-17 20:16:06", "segment": "segment316", "image_urls": [], "Book": "computerorganization" }, { "section": "9.3 MEMORY HIERARCHY ", "content": 
"primary memory computer system always built ram devices thereby allowing processing unit access data instructions memory quickly possible necessary program data primary memory processing unit needs would call large primary memory programs data blocks large thereby increasing memory cost practice really necessary store complete program data primary memory long portion program data needed processing unit primary memory secondary memory built direct serialaccess devices used store programs data immediately needed processing unit since randomaccess devices expensive secondary memory devices costeffective memory system results primary memory capacity minimized organization introduces overhead memory oper ation since mechanisms bring required portion programs data primary memory needed devised mechanisms form called virtual memory scheme virtual memory scheme user assumes total memory capacity primary plus secondary available programming operating system manages moving portions segments pages program data primary memory even current technologies primary memory hardware slow com pared processing unit hardware reduce speed gap small faster memory usually introduced main memory processing unit memory block called cache memory usually 10100 times faster primary memory virtual memory mechanism similar primary secondary memories needed manage operations main memory cache set instructions data immediately needed processing unit brought primary memory cache retained parallel fetch operation possible cache unit lled main memory processing unit fetch cache thus narrowing memorytoprocessor speed gap note registers processing unit temporary storage devices fastest components computer system memory thus generalpurpose computer system memory hierarchy highest speed memory closest processing unit expensive leastexpensive slowest memory devices farthest processingunitfigure98showsthememoryhierarchy thus memory system hierarchy common modernday computer systems consists following levels 
1. CPU registers
2. Cache memory (a small, fast RAM block)
3. Primary (main) memory (the RAM from which the processor accesses programs and data via the cache memory)
4. Secondary (mass) memory, consisting of semirandom-access and SAM elements such as magnetic disks and tapes

The memory is fastest at level 1, and the speed decreases as we move toward the higher levels; the cost per bit is highest at level 1 and lowest at level 4. The major aim of hierarchical memory design is to enable a speed-cost tradeoff that provides a memory system of the desired capacity, with the highest speed possible, at the lowest cost. Consider a memory system with an n-level hierarchy. Let ci (i = 1, ..., n) be the cost per bit of the ith level of the hierarchy and si the capacity (i.e., the total number of bits) at the ith level. The average cost per bit ca of the memory system is given by

ca = (c1 s1 + c2 s2 + ... + cn sn) / (s1 + s2 + ... + sn)

Typically, we would like ca to be as close to cn as possible. Since ci > ci+1, minimizing the cost in this manner requires that si < si+1, as seen in the example systems described in earlier chapters. Additional levels can be added to the hierarchy. In recent computer systems in particular, it is common to see two levels of cache between the processor and the main memory, typically called level-1 and level-2 (or on-chip and off-chip) cache depending on the system structure. In addition, a disk cache could be present between the primary and secondary memories to enable faster disk access.

9.4 MEMORY DEVICES AND ORGANIZATIONS

The basic property a memory device must possess is two well-defined states that can be used for the storage of binary information. In addition, the ability to switch from one state to the other (i.e., to read and write a 0 or a 1) is required; the switching time must be small in order to make the memory system fast, and the cost per bit of storage should be as low as possible. The implementation of the address-decoding mechanism distinguishes a RAM from a non-RAM. A RAM needs fast address decoding, done electronically and thus involving no physical movement of the storage media. In a non-RAM, either the storage medium or the read/write mechanism (transducer) is usually moved until the appropriate address (data) is found; sharing the addressing mechanism in this way makes a non-RAM less expensive than a RAM, but the mechanical movement makes it slower than a RAM in terms of data-access times. In addition to the memory device characteristics, the decoding of the external address and the read/write circuitry affect the speed and cost of the storage system.

Semiconductor and magnetic technologies are the popular primary memory device technologies. Magnetic core memories, used extensively for primary memories into the 1970s, are now obsolete, since semiconductor memories have the advantages of lower cost and higher speed. One advantage of magnetic core memories was that they are nonvolatile: the data are retained in the memory even when the power is turned off. Semiconductor memories, on the other hand, are volatile; to circumvent the volatility of these memories, either a backup power source must be used to retain the memory contents when the power is turned off, or the memory contents must be dumped to a secondary memory and restored as needed. The most popular secondary storage devices are magnetic tapes and disks. Optical disks are becoming cost-effective with the introduction of compact disk ROMs (CD-ROMs), write-once read-many-times (WORM, or read-mostly) disks, and erasable disks. In each technology, memory devices can be organized into various configurations with varying cost and speed characteristics. We now examine representative devices and organizations of semiconductor memory technology.

9.4.1 Random-Access Memory Devices

Two types of semiconductor RAMs are available: static and dynamic. In a static RAM, the memory cell is built out of a flip-flop; the content of the memory cell (either 1 or 0) remains intact as long as power is on, hence the memory device is static. A dynamic memory cell, however, is built out of a capacitor, and the charge level of the capacitor determines the 1 or 0 state of the cell. Because the charge decays with time, these memory cells must be refreshed (i.e., recharged) every so often to retain the memory content. Dynamic memories require complex refresh circuits, and because of the refresh time needed, they are slower than static memories. But more dynamic memory cells than static memory cells can be fabricated in a given area of silicon; thus, when large memories are needed and speed is not a critical design parameter, dynamic memories are used, while static memories are used in speed-critical applications.

9.4.1.1 Static RAM

The major components of a RAM are the address-decoding circuit, the read/write circuit, and a set of memory devices organized into several words. A memory device that can store one bit, with the appropriate hardware support for decoding and read/write operations, is called a memory cell. Flip-flops are used in forming static RAM cells. Figure 9.9(a) shows a memory cell built out of a JK flip-flop. The flip-flop is clocked only when the enable signal is 1; only then can either input signal enter the flip-flop or the contents of the flip-flop be seen on the output, based on the value of the read/write signal. When enable is 0, the cell outputs 0 and also makes J = K = 0, leaving the contents of the flip-flop unchanged. When the read/write signal is 1, the cell is read (i.e., the output is Q); when it is 0, the cell is written (i.e., Q takes the input value). The symbol for this memory cell (MC) is shown in Figure 9.9(b).

A 4 x 3-bit RAM built out of these memory cells is shown in Figure 9.10. The 2-bit address in the MAR is decoded by a 2-to-4 decoder to select one of the four memory words. For the memory to be active, the memory enable line must be 1; otherwise, none of the words is selected (i.e., all the outputs of the decoder are 0). When the memory enable line is 1 and the R/W line is 1, the outputs of the MCs are enabled, and the MCs of the selected word provide the inputs to the set of OR gates whose outputs are connected to the output lines. The output lines thus receive the signals from the MCs of the enabled word, since the outputs of the other MCs in each bit position are 0. When the memory is enabled and the R/W line is 0, the selected word receives the input information.

When the number of words is large, as in a practical semiconductor RAM, the OR gates shown in Figure 9.10 become impractical. To eliminate these gates, MCs are fabricated with either open-collector or tristate outputs. If open-collector outputs are provided, the outputs of the MCs in each bit position are tied together to form a wired-OR, thus eliminating the OR gate; however, pull-up resistors are required, and the current dissipation of the gates limits the number of outputs that can be wire-ORed. In such limiting cases, MCs with tristate outputs are used, and the outputs of the MCs in each bit position are tied together to form the output line.

When the number of words in the memory is large, the linear decoding technique of Figure 9.10 also results in complex decoding circuitry. To reduce this complexity, coincident decoding schemes are used. Figure 9.11 shows such a scheme, in which the address is divided into two parts, X and Y. The low-order bits of the address (Y) select a column of the MC matrix, and the high-order bits of the address (X) select a row; the selected MC is at the intersection of the selected row and column. If the data word consists of more than one bit, several columns in the selected row are selected. In this coincident decoding technique, the enable input of each MC is obtained by ANDing the X and Y selection lines. Commercial memory ICs provide more than one enable signal input per chip; multiple enable inputs are useful in building large memory systems employing coincident memory-decoding schemes. The outputs of these ICs are also either open-collector or tristate, to enable easier interconnection.

9.4.1.2 Dynamic Memory

Several dynamic memory cell (DMC) configurations are in use. Figure 9.12 shows the most common DMC, built out of a single MOS transistor and a capacitor, with read/write control achieved using two more MOS transistors. Consider the n-channel MOS (NMOS) transistor Q1 of Figure 9.12(a). The transistor has three terminals: drain, source, and gate. When the voltage on the gate is positive and exceeds a certain threshold value, the transistor conducts, thus connecting the drain to the source; when the gate voltage is negative, the transistor is off, thus isolating the drain from the source. When write control is high, Q1 conducts and Din is transferred to the gate of Q2, across the capacitor. If Din is high, the capacitor is charged; otherwise, the capacitor is discharged through the gate-to-source resistance of Q2 while write is active. A discharged capacitor corresponds to storing a low in the MC, and a low thus stored is maintained indefinitely. A charged capacitor corresponds to storing a high, but the high value can be maintained only as long as the capacitor remains charged. To fabricate denser memories, the capacitor is made small, and hence its capacitance is quite small (on the order of a fraction of a picofarad). The capacitor thus discharges in several hundred milliseconds, requiring that the charge be restored (refreshed) approximately every two milliseconds. When read control goes high, Q3 conducts and the drain of Q2 is connected to Dout. Since the stored data value is impressed on the gate of Q2, the output at the drain of Q2 is the complement of the data stored in the cell; the output data are usually inverted by external circuitry. Note that the read operation is destructive, since the capacitor is discharged when the data are read; the data must therefore be refreshed after each read.

Figure 9.13 shows a 16 x 1-bit dynamic memory using the DMC of Figure 9.12. The DMCs are internally organized as a 4 x 4 matrix. The high-order two bits of the address select one of the rows, and the data in the selected row are transferred to the sense amplifiers. Since the capacitance of the output lines is much higher than that of the DMC capacitor, the output voltage is low; consequently, sense amplifiers are required to detect the data value in the presence of noise. The amplifiers are also used to refresh the memory. The low-order two bits of the address are used to select one of the four sense amplifiers for the 1-bit data output, and the data in the sense amplifiers are rewritten into the row of DMCs. To write 1-bit data, the selected row is first read, the data in the selected sense amplifier are changed to the new data value, and the rewrite operation is performed. The need to refresh results in a requirement for complex refresh circuitry and also reduces the speed of dynamic memory devices, but the small size of DMCs makes it possible to fabricate dense memories on a chip. Usually the refresh operation is made transparent by performing the refresh while the memory is not otherwise being used by the system, thereby gaining speed. When the memory capacity required is small, the refresh circuitry can be built into the memory chip; such memory ICs are called integrated dynamic RAMs (iRAMs). Dynamic memory controller ICs that handle the refresh and generate the control signals for dynamic memories are available and are used in building large dynamic memory systems.

9.4.1.3 Read-Only Memory

In a ROM (a RAM in which data are permanently stored), providing the n-bit address at the address input makes the data stored at the addressed location appear on the output lines; the ROM is basically a combinational logic device. Figure 9.14 shows a four-word ROM with 3 bits per word. The links at the junctions of the word lines and bit lines are either open or closed, depending on whether a 0 or a 1, respectively, is stored at that junction. When a word is selected by the address decoder output line (word line), the bit lines with a closed link at a junction with the selected word line contain 1, and the other bit lines contain 0. In Figure 9.14, the contents of locations 0 through 3 are 101, 010, 111, and 001, respectively.

Two types of ROMs are commercially available: mask-programmed ROMs and user-programmed ROMs. Mask-programmed ROMs are used when a large number of ROM units containing a particular program and/or data is required. The IC manufacturer is asked to burn the program and data into the ROM units; the program is given by the user.
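The lookup-table view of a ROM described above (address in, fixed word out, no write path) can be sketched in a few lines. The contents below are those given for the four-word, 3-bits-per-word ROM of Figure 9.14; the function name is ours, not the text's:

```python
# Four-word ROM, 3 bits per word, modeled as a fixed lookup table.
# Contents match Figure 9.14: locations 0-3 hold 101, 010, 111, 001.
# A tuple is immutable, mirroring the read-only nature of the device.
ROM = ("101", "010", "111", "001")

def rom_read(address):
    """Combinational read: the addressed word appears at the output."""
    return ROM[address]
```

Because the mapping from address to output is fixed at fabrication (or programming) time, a ROM of 2^n words can implement any n-input combinational function, which is what Example 9.1 below exploits.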
ic manufacturer prepares mask uses fabricate program data rom last step fabrication rom thus custom fabricated suit particular application since custom manufacturing ic expensive maskprogrammed roms costeffective unless application requires large number units thus spreading cost among units since contents roms unalterable change requires new fabrication userprogrammable rom programmable rom prom fabricated either 1s 0s stored special device called prom programmer used user burn required program sending proper current link contents type rom altered initial programming erasable proms eprom available ultraviolet light used restore content eprom initial value either 0s 1s reprogrammed using prom programmer electrically alterable roms ea roms anot kind rom uses special ly desig ned elect rical sign al alter cont ents roms used storin g program data expected change duri ng program execu tion ie real time also used impleme nting com plex boo lean functi ons code converte rs like exam ple rom based imple mentati fol lows exam ple 91 im plement bina rycoded decimal bcd t oexcess 3 decode r using rom figur e 915 show bcd toexce ss3 conver sion since ten input bc combina tions rom 16 words 2 3 10 24 mus used rst 10 words rom contain 10 excess 3 code words word 4bi ts long bcd input appea rs fouradd ress input lines rom th e content addresse wor output utput lines outpu requi red excess 3 code", "url": "RV32ISPEC.pdf#segment322", "timestamp": "2023-10-17 20:16:08", "segment": "segment322", "image_urls": [], "Book": "computerorganization" }, { "section": "9.4. 
2 Associative Memory ", "content": "A typical associative memory cell (AMC), built around a JK flip-flop, is shown in Figure 9.16. The response is 1 either when the data bit and the memory bit Q match or when the mask bit is 1; when the mask bit is 0, the corresponding bit position does not take part in the comparison, as the compare truth table shows. A block diagram of the simplified cell is also shown in the figure. In addition to the response circuitry, each AMC needs read and write enable circuits similar to those of the RAM cell shown in Figure 9.9. A four-word, 3-bits-per-word associative memory built out of these cells is shown in Figure 9.17. The data and mask registers are 3 bits long. The word-select register of Figure 9.3 is neglected here; hence all the memory words are selected for comparison with the data register. The response outputs of the cells in each word are ANDed together to form the word response signal; thus a word is a respondent if the response output of every cell in the word is 1. The MMR (multiple-match resolver) circuit shown selects the first respondent; the first respondent drives to 0 the response outputs of the other words following it. The input (write) and output (read) circuitry that is also needed is similar to that of the RWM system shown in Figure 9.10 and hence is not shown in Figure 9.17. Small AMs are available as IC chips; their capacity is of the order of 8 bits per word and eight words. Larger systems can be designed using RAM chips if the memories are to be used in both RAM and CAM modes, at the cost of increased logic complexity. CAM systems cost much more than RWMs of equal capacity", "url": "RV32ISPEC.pdf#segment323", "timestamp": "2023-10-17 20:16:08", "segment": "segment323", "image_urls": [], "Book": "computerorganization" }, { "section": "9.4.
3 Sequential-Access Memory Devices ", "content": "Magnetic tape is the most common SAM device. A magnetic tape is a Mylar tape coated with a magnetic material, similar to that used in home music systems. Data are recorded as magnetic patterns, and the tape moves past a read/write head to read or write the data. Figure 9.18 shows two popular tape formats. The reel-to-reel tape is used for storing large volumes of data, usually in large-scale and minicomputer systems; the cassette tape is used for small data volumes, usually in microcomputer systems. Data are recorded on tracks. A track of the magnetic tape runs along the length of the tape and occupies a width sufficient to store a bit. On a nine-track tape, for example, the width of the tape is divided into nine tracks, and a character of data is represented by 9 bits, 1 bit per track. One of the bits is usually a parity bit, which facilitates error detection and correction. Several characters grouped together form a record; records are separated by an interrecord gap of 3/4 inch and an end-of-record mark (a special character). A set of records forms a file; files are separated by an end-of-file mark and a gap of 3 inches. On cassette tapes, data are recorded in a serial mode on one track, as shown in Figure 9.18(b). Recording (writing) on magnetic devices is the process of creating magnetic flux patterns on the device; sensing the flux pattern as the medium moves past the read/write head constitutes reading the data. On reel-to-reel tapes, data are recorded along the tracks digitally; on a cassette tape, each bit is converted into an audio frequency and recorded, although digital cassette recording techniques are also becoming popular. Note that the information on a magnetic tape is nonvolatile. Magnetic tapes permit recording vast amounts of data at low cost. However, the access time is a function of the position of the data on the tape with respect to the read/write head and can be long, since these are sequential-access devices. Magnetic tapes thus form low-cost secondary memory devices, primarily used for storing data that are not used frequently and for system backup, archival storage, transporting data between sites, etc.", "url": "RV32ISPEC.pdf#segment324", "timestamp": "2023-10-17 20:16:08", "segment": "segment324", "image_urls": [], "Book": "computerorganization" }, { "section": "9.4.4 ", "content": "Direct-access storage devices (magnetic and optical disks) are the most popular direct (semirandom) access storage devices. Accessing data on these
devices requires two steps: a random (direct) movement of the read/write heads to the vicinity of the data, followed by a sequential access. These mass-memory devices are used as the secondary storage devices of computer systems for storing data and programs", "url": "RV32ISPEC.pdf#segment325", "timestamp": "2023-10-17 20:16:08", "segment": "segment325", "image_urls": [], "Book": "computerorganization" }, { "section": "9.4.4.1 ", "content": "Magnetic Disk. A magnetic disk (see Figure", "url": "RV32ISPEC.pdf#segment326", "timestamp": "2023-10-17 20:16:08", "segment": "segment326", "image_urls": [], "Book": "computerorganization" }, { "section": "9.7) ", "content": "is a flat circular surface coated with a magnetic material, much like a phonograph record. Several disks are mounted on a rotating spindle, with a read/write head for each surface. Each surface is divided into several concentric circles called tracks; track 0 is the outermost. After first positioning the read/write heads on the proper track, the data on the track are accessed sequentially. A track is normally divided into several sectors, each sector corresponding to a data word. The address of a data word on the disk thus corresponds to a track number and a sector number. The tracks under the read/write heads at any given time form a cylinder. The time taken by the read/write heads to position themselves on a cylinder is called the seek time. Once the read/write heads are positioned, the time taken for the data to appear under the heads is the rotational delay; the access time is the sum of the seek time and the rotational delay. The latency, a function of the disk rotation speed, is the average time taken for the desired sector to appear under the read/write head once the disk arm is positioned on the track. A disk directory maintained by the disk drive maps logical file information into a physical address consisting of the cylinder number, surface number, and sector number. At the beginning of each read or write operation the disk directory is read, so the track on which the directory is placed plays an important role in improving the access time. Earlier disk drives came with removable disks called disk packs; disk drives today come as sealed units (Winchester drives). Magnetic disks are available as either hard disks or floppy disks; the data storage and access formats are the same for both types of disks. Floppy disks, which have a flexible disk surface, were popular storage devices, especially in microcomputer systems, although their data-transfer rate is slower than that of hard-disk devices. Hard disks have remained the main
media for data storage and backup; floppy disks have almost become obsolete, yielding to flash memory devices (thumb drives)", "url": "RV32ISPEC.pdf#segment327", "timestamp": "2023-10-17 20:16:08", "segment": "segment327", "image_urls": [], "Book": "computerorganization" }, { "section": "9.4.4.2 RAID (Redundant Array of Independent Disks) ", "content": "Even the smallest computer system nowadays uses a disk drive for storing data and programs. A disk head crash at the end of a data-gathering session or business day results in expensive data regathering and lost productivity. One solution to this problem is to duplicate the data on several disks. This redundancy provides fault tolerance: if one of the disks fails, the data are still available on another. Moreover, the disk controller can access more than one of the disks simultaneously, thereby increasing the throughput of the system. Traditionally, large computer systems used single large expensive disk (SLED) drives. In the late 1980s, when small disk drives for microcomputer systems became available, RAID systems were formed utilizing an array of inexpensive disk units (the word inexpensive has since been replaced by independent). RAID provides the advantages of fault tolerance and high performance and hence is used in critical applications employing large file servers and transaction application servers, and in desktop systems for applications such as CAD (computer-aided design) and multimedia editing and playback, where high transfer rates are needed. A RAID system consists of a set of matched hard drives and a RAID controller; the array management software that provides the logical-to-physical mapping of the data is part of the RAID controller. Descriptions of several popular configurations (levels) of RAID are given below; the following material is extracted from http://www.acnc.com/raid.html. RAID Level 0 (disk striping without fault tolerance): In this mode the data are divided into blocks, and the blocks are placed on the drives of the array in an interleaved fashion. Figure 9.19 shows the representation of a data file in terms of its blocks K, L, N, etc. RAID level 0 requires at least two drives. If independent controllers are used, one drive can be writing or reading a block while the other is seeking the next block. This mode offers no redundancy and makes no use of parity; there is no overhead, and the array capacity is utilized fully. For example, with two 120-GB drives in the array, the total capacity of the system would be 240 GB. It is not a true RAID, being neither redundant nor fault-tolerant: if one of the drives fails, the contents of that
drive are inaccessible, making the data on the system unusable. RAID Level 1 (mirroring and duplexing): RAID level 1 requires at least two drives. As shown in Figure 9.20, each block of data is stored on at least two drives (mirroring), to provide fault tolerance: even if one drive fails, the system continues to operate, using the data on the mirrored drive. This mode offers no improvement in data-access speed, and the capacity of the disk system is half the sum of the individual drive capacities. It is used in applications such as accounting, payroll, and financial environments that require high availability. RAID Level 0+1 (high data-transfer performance): RAID level 0+1 requires a minimum of four drives. The combination of RAID 0 and RAID 1 is created by first creating two RAID-0 sets and then mirroring them, as shown in Figure 9.21. Although this mode offers high fault tolerance and high I/O rates, it is expensive due to its 100% overhead. It is used in applications such as imaging and general file servers, where high performance is needed. RAID Level 2 (Hamming code): Instead of writing data blocks of arbitrary size, RAID 2 writes data 1 bit per strip, as shown in Figure 9.22; thus, to accommodate one ASCII character, we need eight drives. Additional drives are used to hold a Hamming code, which can detect 2-bit errors and correct 1-bit errors. If one of the data drives fails, the Hamming code is used to reconstruct the data on the failed drive. Since 1 bit of data is written per drive along with the corresponding Hamming code, the drives have to be synchronized to retrieve the data properly. Also, the generation of the Hamming code is slow, making this mode of RAID operation unsuitable for applications requiring high-speed data throughput. RAID Level 3 (parallel transfer with parity): In this RAID level, shown in Figure 9.23, parity is used instead of mirroring to protect against disk failure. It requires a minimum of three drives to implement. One drive is designated the parity drive and contains the parity information computed from the other drives. A simple parity bit calculated by exclusive-ORing the corresponding bits of a stripe provides an error-detection capability; additional parity bits would be needed to provide error correction. This mode offers reduced cost, since fewer drives are needed to implement redundant storage. However, performance is degraded by the parity computation and by the bottlenecks caused by the single dedicated parity drive. RAID Level 4 (independent data disks with shared parity): RAID 4, shown in Figure 9.24, is RAID 0 with parity. Data are
written in blocks (strips) of equal size on each drive, creating a stripe across the drives, followed by a parity drive that contains the corresponding parity strip. The parity drive becomes a bottleneck, since all data read/write operations require access to it. In practice, few applications provide uniform data block sizes, and storing varying-size data blocks becomes impractical in this mode of RAID. RAID Level 5 (independent data disks with distributed parity blocks): RAID 5 may be the most popular and powerful RAID configuration. As shown in Figure 9.25, it provides striping of the data as well as striping of the parity information used for error recovery; each parity block is distributed among the drives of the array. It requires a minimum of three drives to implement. The bottlenecks induced by a dedicated parity drive are eliminated, and the cost is reduced as well. This mode offers high read data-transaction rates and medium write data-transaction rates, and a disk failure has only a medium impact on throughput. If a disk failure occurs, however, RAID 5 is difficult to rebuild compared with RAID level 1. Typical applications are file and application servers, database servers, and web, email, and news servers. RAID Level 6 (independent data disks with two independent distributed parity schemes): RAID 6, shown in Figure 9.26, is essentially an extension of RAID 5 that allows additional fault tolerance by using a second independent distributed parity scheme (dual parity). The data are striped at the block level across a set of drives, just as in RAID 5, and a second set of parity is calculated and written across all the drives. The two independent parity computations, using two different algorithms, are employed in order to provide protection against double disk failure. RAID 6 provides extremely high data fault tolerance and can sustain multiple simultaneous drive failures; it requires a minimum of four drives to implement. It is a perfect solution for mission-critical applications. RAID Level 10 (high reliability combined with high performance): RAID 10, shown in Figure 9.27, is implemented as a striped array whose segments are RAID 1 arrays. It requires a minimum of four drives to implement. It has the fault tolerance of RAID level 1, with the same overhead for fault tolerance as mirroring alone, and it provides high I/O rates by striping the RAID 1 segments. Under certain circumstances, a RAID 10 array can sustain multiple simultaneous drive failures. However, it is expensive, results in high overhead, and offers
limited scalability. It is suitable for applications such as database servers that would otherwise have gone with RAID 1 but need an additional performance boost. As can be expected, a large system might use more than one level of RAID, depending on its cost, speed, and fault-tolerance requirements", "url": "RV32ISPEC.pdf#segment328", "timestamp": "2023-10-17 20:16:09", "segment": "segment328", "image_urls": [], "Book": "computerorganization" }, { "section": "9.4.4.3 Optical Disks ", "content": "Four types of optical disks are available. CD-ROMs are similar to mask-programmed ROMs: the data are stored on the disk during the last stage of disk fabrication, and the stored data cannot be altered. CD-recordable (CD-R), or WORM (write once, read many), optical disks allow the writing of data onto portions of the disk; once written, the data cannot be altered. CD-rewritable (CD-RW), or erasable, optical disks are similar to magnetic disks in that they allow repeated erasing and storing of data. Digital video disks (DVDs) allow much higher density storage and offer higher speeds than CDs. The manufacturing of CD-ROMs is similar to that of phonograph records, except that digital data are recorded by burning pit/no-pit patterns with a laser beam into a plastic substrate on the disk; the substrate is then metallized and sealed. In reading the recorded data, 1s and 0s are distinguished by the differing reflectivity of the incident laser beam. Since the data once recorded cannot be altered, the application of CD-ROMs is in storing data that do not change. The advantages of CD-ROMs over magnetic disks are their low cost and high density, along with their nonerasability. Erasable optical disks use a coating of a magneto-optic material on the disk surface to record the data. In these disks a laser beam is used to heat the magneto-optic material in the presence of a bias field applied by a bias coil; the heated bit positions take on the magnetic polarization of the bias field, which is retained when the surface cools. When the bias field is reversed and the surface is heated, the data at the corresponding bit positions are erased. Thus, changing the data on the disk requires a two-step operation: the bits on the track are first erased, and then the new data are written onto the track. In reading, the polarization is read with a laser beam: the polarization of the laser beam striking the written bit positions is rotated by the magnetic field and is thus different from that of the rest of the media. WORM devices are similar to erasable disks, except that portions of the disk once written cannot be erased and rewritten. Optical disk technology offers densities of 50,000 bits and 20,000 tracks per inch, resulting in a capacity of 600 MB per 3.5-inch disk; the corresponding
numbers for magnetic disk technology are 150,000 bits and 2,000 tracks per inch, with 200 MB per 3.5-inch disk. Thus, optical storage offers a 3-to-1 advantage over magnetic storage in terms of capacity; it also offers a better per-unit storage cost. The storage densities of magnetic disks are also rapidly increasing, however, especially since the advent of vertical recording formats. The data-transfer rates of optical disks are much lower than those of magnetic disks, owing to the following factors: it takes two revolutions to alter the data on a track; the rotation speed needs to be half that of magnetic disks, to allow time for heating and changing the bit positions; and, since optical read/write heads are bulkier than their magnetic counterparts, seek times are higher. DVDs also come in recordable and nonrecordable forms. Digital versatile disks, as they are now called, are essentially quad-density CDs: they rotate three times faster than CDs and use twice the pit density. DVDs are configured in single- and double-sided and single- and double-layer versions and offer storage of up to 20 GB of data, music, and video. Progress in laser technology has resulted in the use of blue-violet lasers utilizing a 450-nm wavelength. There are two competing DVD formats on the market: the Blu-ray format, developed by a consortium of nine consumer electronics manufacturers (Sony, Pioneer, Samsung, etc.), offers 25-GB DVDs, and the HD-DVD format, developed by NEC and Toshiba, offers 15-GB DVDs. DVD formats on the horizon offer 30- to 35-GB DVDs. Since the storage density and the speed of access of disks improve rapidly, any listing of their characteristics soon becomes outdated; refer to the magazines listed in the reference section of this chapter for details", "url": "RV32ISPEC.pdf#segment329", "timestamp": "2023-10-17 20:16:09", "segment": "segment329", "image_urls": [], "Book": "computerorganization" }, { "section": "9.5 MEMORY SYSTEM DESIGN USING ICS ", "content": "Refer to the appendix for details of representative memory ICs. A memory system designer uses commercially available memory ICs to design memory systems of the required size and characteristics. The major steps in memory design are the following: 1. Based on the speed and cost parameters, determine the type of memory ICs (static or dynamic) to be used in the design. 2. Select an available IC of the type selected, based on the access-time requirements and physical parameters such as restrictions on the number of chips used and the power requirements. Generally it is better
to select the IC with the largest capacity, in order to reduce the number of ICs in the system. 3. Determine the number of ICs needed: N = (total memory capacity)/(chip capacity). 4. Arrange the N ICs in a P x Q matrix, where Q = (number of bits per word of the memory system)/(number of bits per word of the IC) and P = N/Q. 5. Design the decoding circuitry to select a unique word corresponding to each address. The other issue in memory control design is that the control unit of the computer system of which this memory is a part should produce the control signals (strobe the address into the MAR, enable read or write, gate the data into the MBR) at the appropriate times. The following example illustrates the design. Example 9.2: Design a 4K x 8 memory using Intel 2114 RAM chips. 1. Number of chips needed = (total memory capacity)/(chip capacity) = (4K x 8)/(1K x 4) = 8. 2. The memory system MAR has 12 bits, since 4K = 4 x 1024 = 2^12; the MBR has 8 bits. 3. Since the 2114s are organized with 4 bits per word, two chips are used to form each memory word of 8 bits. Thus the eight 2114s are arranged as four rows of two chips per row. 4. The 2114 has 10 address lines; the least significant 10 bits of the memory system MAR are connected to the 10 address lines of each 2114. A 2-to-4 decoder is used to decode the most significant 2 bits of the MAR and select one of the four rows of 2114 chips, through the CS signal of each 2114 chip in the row. 5. The I/O lines of the chips in each row are connected to the MBR. Note that the I/O lines are configured as tristate lines, so the I/O lines of all the 2114 chips can be tied together. The memory system is shown in Figure 9.28. Note that the number of bits in the memory word can be increased in multiples of 4 by simply including additional columns of chips; if the number of words needs to be extended beyond 4K, additional decoding circuitry is needed", "url": "RV32ISPEC.pdf#segment330", "timestamp": "2023-10-17 20:16:09", "segment": "segment330", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6 SPEED ENHANCEMENT ", "content": "Traditionally, memory cycle times have been much longer than processor cycle times. This speed gap between the memory and the processor means that the processor must wait for the memory to respond to an access request. With advances in hardware technology, faster semiconductor memories became available and replaced core memories as primary memory devices, but the processor-memory speed gap still exists, since processor hardware speeds have also increased. Several techniques are used to reduce this speed gap and optimize the cost of the memory system. The obvious method
of increasing the speed of the memory system is to use a higher-speed memory technology. Whatever the technology selected, access speeds can be further increased by the judicious use of address decoding and access techniques. Six such techniques are described in the following sections", "url": "RV32ISPEC.pdf#segment331", "timestamp": "2023-10-17 20:16:09", "segment": "segment331", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6.1 Banking ", "content": "Typically, the main memory is built out of several physical-memory modules. Each module, a memory bank of a certain capacity, consists of its own MAR and MBR; in semiconductor memories, a module corresponds to either a memory IC or a memory board consisting of several memory ICs, as described earlier in this chapter. Figure 9.29 shows the memory banking addressing scheme, in which consecutive addresses lie in the same bank. If each bank contains 2^n = N words and there are 2^m banks, the memory system MAR contains (m + n) bits. Figure 9.29(a) shows the functional model of a bank; the bank select signal (BS) is equivalent to the chip select (CS) of Figure 9.1(b). The most significant m bits of the MAR are decoded to select one of the banks, and the least significant n bits are used to select a word in the selected bank. Since subsequent addresses lie in the same bank, during sequential program execution all instruction accesses would be made to the same (program) bank; thus this scheme limits instruction fetch to one instruction per memory cycle. However, if data and programs are stored in different banks, the next instruction can be fetched from the program bank while the data required for the execution of the current instruction are fetched from the data bank, thereby increasing the speed of memory access. Another advantage of this scheme is that even if one bank fails, the other banks provide a continuous memory space, and the operation of the machine is unaffected except for the reduced memory capacity", "url": "RV32ISPEC.pdf#segment332", "timestamp": "2023-10-17 20:16:09", "segment": "segment332", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6.
2 Interleaving ", "content": "Interleaving memory banks is a technique to spread subsequent addresses over separate physical banks of the memory system (i.e., the low-order bits of the address are used to select the bank), as shown in Figure 9.30. The advantage of the interleaved memory organization is that the access request for the next word in memory can be initiated while the current word is being accessed, in an overlapping manner; this mode of access increases the overall speed of memory access. The disadvantage of this scheme is that if one of the memory banks fails, the complete memory system becomes inoperative. An alternative organization is to implement the memory as several subsystems, each subsystem consisting of several interleaved banks; Figure 9.31 shows this organization. There is no general guideline for selecting one addressing scheme over the others from among the three described earlier for a computer system. The basic aim of all these schemes is to spread subsequent memory references over several physical banks so that faster accessing is possible", "url": "RV32ISPEC.pdf#segment333", "timestamp": "2023-10-17 20:16:09", "segment": "segment333", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6.3 Multiport Memories ", "content": "Multiple-port (multiport) memories are available in which each port corresponds to a MAR and an MBR, and an independent access to the memory can be made through each port. The memory system resolves the conflicts between the ports on a priority basis. Multiport memories are useful in environments where more than one device accesses the memory: for example, in a single-processor system with a direct-memory access (DMA) I/O controller (see Chapter 7), or in a multiprocessor system with more than one CPU (see Chapter 12)", "url": "RV32ISPEC.pdf#segment334", "timestamp": "2023-10-17 20:16:09", "segment": "segment334", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6.4 Wider-Word Fetch ", "content": "Consider the interleaved memory of Figure 9.30. If as many MBRs as there are banks are employed, then when one block of banks is activated (i.e., the bank-select portion of the address is not used), all the words in the block can be fetched into the MBRs in one cycle. This is the concept of wider-word fetch. For instance, the IBM 370 fetches 64 bits (two words) in each memory access. This enhances the execution speed, since the second word fetched very likely contains the next instruction to be executed, thus saving the wait for a fetch of that word. If the second word does not contain the
required instruction (e.g., after a jump), a new fetch is required", "url": "RV32ISPEC.pdf#segment335", "timestamp": "2023-10-17 20:16:09", "segment": "segment335", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6.5 Instruction Buffer ", "content": "The MBR scheme just considered can be generalized into an instruction buffer by providing a first-in, first-out (FIFO) buffer (queue) between the CPU and the primary memory to enhance the instruction-fetch speed. Instructions from the primary memory are fed into the buffer at one end, and the CPU fetches instructions from the other end, as shown in Figure 9.32. As long as instruction execution is sequential, the two operations (filling the buffer, or prefetching, and fetching from the buffer by the CPU) can go on simultaneously. When a jump instruction (conditional or unconditional) is executed, the next instruction to be executed may or may not be in the buffer. If the required instruction is not in the buffer, the fetch operation must be directed to the primary memory, and the buffer is refilled from the new memory address. If the buffer is large enough that the complete range of a jump (loop) can be accommodated in it, the CPU can signal the buffer to freeze, thereby stopping the prefetch operation until the loop jump is satisfied, after which fetch operations continue normally. The buffer management requires hardware components to manage the queue (e.g., to check for queue-full and queue-empty conditions) and mechanisms to identify the address range in the buffer and to freeze and unfreeze the buffer. The Intel 8086 processor uses a 6-byte-long instruction buffer organized as a queue. The CDC 6600 uses an instruction buffer that stores eight 60-bit words, or 16-32 instructions, since instructions are either 15 or 30 bits long. Figure 9.33 shows this instruction buffer organization. Instructions from the main memory are brought into the buffer through the buffer register; from the lowest level of the buffer they are transferred into the instruction register for execution. The contents of the buffer move up one position as each new set of instructions (60 bits) enters the buffer through the buffer register. When a branch instruction is encountered and the address of the branch is within the range of the buffer, the next instruction is retrieved from the buffer; otherwise, instructions are fetched from the new memory address", "url": "RV32ISPEC.pdf#segment336", "timestamp": "2023-10-17 20:16:10", "segment": "segment336", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6.6 Cache Memory ", "content": "Analyses of the memory
reference characteristics of programs have shown that typical programs spend most of their execution time in a few main modules or routines and in tight loops. Therefore, the addresses generated by the processor during instruction and data accessing over short time periods tend to cluster around small regions of the main memory. This property, known as the program locality principle, is utilized in the design of the cache memory scheme. The cache memory is small (of the order of 1-2K words) and fast (of the order of 5 to 10 times the speed of the main memory); this memory module is inserted between the processor and the main memory and contains the information frequently needed by the processor. The processor accesses the cache rather than the main memory, thereby increasing the access speed. Figure", "url": "RV32ISPEC.pdf#segment337", "timestamp": "2023-10-17 20:16:10", "segment": "segment337", "image_urls": [], "Book": "computerorganization" }, { "section": "9.34 ", "content": "shows the cache memory organization. The primary memory and the cache are divided into blocks of 2^n = N words each. The block size depends on the reference characteristics implied by program locality and on the main memory organization. For instance, if the main memory is organized as an N-way interleaved memory, it is convenient to make the block N words long, since the N words can be retrieved from the main memory in one cycle. Assume a cache capacity of 2^b = B blocks. In the following discussion we refer to a block in the main memory as a main-memory frame (block) and to a block in the cache memory as a cache block (cache line). In addition to the data area, with a total capacity of B x N words, the cache also has a tag area consisting of B tags; each tag in the tag area identifies the address range of the N words of the corresponding block in the cache. The primary-memory address contains p bits. The least significant n bits are used to represent the N words in a frame, and the remaining (p - n) bits of the address form the tag of the block, whereby the tag is the beginning address of the block of N words. When the processor references a primary-memory address, the cache mechanism compares the tag portion of the address with each tag in the tag area. If there is a matching tag (i.e., a cache hit), the corresponding cache block is retrieved from the cache, and the least significant n bits are used to select the appropriate word of the block, which is transmitted to the processor. If there is no matching tag (i.e., a cache miss), first the frame corresponding to the referenced address is brought into the cache, and then the required word is transferred to the processor. Example 9.3: Figure 9.35 shows the cache mechanism for a system with
64 KB of primary memory, a 1-KB cache, and 8 bytes per block. In general, the cache mechanism consists of the following three functions: 1. The address translation function determines whether the referenced block is in the cache and handles the movement of blocks between the primary memory and the cache. 2. The address mapping function determines where the blocks are located in the cache. 3. The replacement algorithm determines which blocks in the cache are to be replaced when space is needed for new blocks on a cache miss. These functions are described next", "url": "RV32ISPEC.pdf#segment338", "timestamp": "2023-10-17 20:16:10", "segment": "segment338", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6.6.1 ", "content": "Address Translation. The address translation function of Figure", "url": "RV32ISPEC.pdf#segment339", "timestamp": "2023-10-17 20:16:10", "segment": "segment339", "image_urls": [], "Book": "computerorganization" }, { "section": "9.34 ", "content": "is simple to implement, but it enforces an N-word boundary for each frame; thus, even when the referenced address corresponds to the last word of a frame, all the words of the frame are transferred into the cache. A translation function in the general sense would allow an N-word frame starting at an arbitrary address to be transferred into the cache; the tag would then be p bits long, rather than (p - n) bits. In addition to this overhead, with the general scheme it is not always possible to retrieve the N words in one memory cycle, even if the primary memory is N-way interleaved. Because of these disadvantages of the general addressing scheme, the scheme shown in Figure", "url": "RV32ISPEC.pdf#segment340", "timestamp": "2023-10-17 20:16:10", "segment": "segment340", "image_urls": [], "Book": "computerorganization" }, { "section": "9.34 ", "content": "is the popular one", "url": "RV32ISPEC.pdf#segment341", "timestamp": "2023-10-17 20:16:10", "segment": "segment341", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6.
6.2 Address Mapping Function ", "content": "In the address mapping scheme of Figure 9.34, a main-memory frame can occupy any of the cache blocks; the scheme is therefore called a fully associative mapping scheme. The disadvantage of this scheme is that all B tags must be searched in order to determine a hit or a miss; if B is large, the search is time consuming. If the tag area is implemented using a RAM, on average B/2 RAM cycles are needed to complete the search. An alternative is to implement the tag area of the cache using an associative memory; the search of all B tags then requires only one associative-memory compare cycle. One way of increasing the tag search speed is to use the least significant b bits of the tag to indicate the cache block in which a frame can reside; the tags would then be (p - n - b) bits long, as shown. This mapping is called direct mapping, since each primary-memory address maps onto a unique cache block. The tag search is fast, since it is reduced to one comparison. However, this mechanism divides the 2^p addresses of the main memory into B = 2^b partitions; thus 2^(p-b) addresses map onto each cache block. Note that the addresses that map onto the same cache block are B x N words apart in the primary memory. If consecutive references are to addresses that map onto the same cache block, the cache mechanism becomes inefficient, since it requires a large number of block replacements. Example 9.4: Figure 9.36 shows the direct-mapping scheme for the memory system of Example 9.3; note that the data area of the cache contains 32 blocks of 8 bytes each. A compromise between the two types of mapping, called set-associative mapping, divides the B cache blocks into 2^(b-k) partitions, each containing 2^k blocks; the address partitioning for this k-way set-associative mapping is as shown. Note that the scheme is similar to direct mapping in that it divides the main-memory addresses into 2^(b-k) partitions; however, a frame can reside in any one of the 2^k corresponding cache blocks, known as a set, and the tag search is limited to the 2^k tags of a set. Figure 9.37 shows the details of the set-associative cache scheme. Also note the following relations: k = 0 implies direct mapping, and k = b implies fully associative mapping. Example 9.5: Figure 9.38 shows a four-way set-associative scheme for the memory system of Example 9.3. Note that the data area of the cache contains a total of 128 blocks (1 KB)", "url": "RV32ISPEC.pdf#segment342", "timestamp": "2023-10-17 20:16:10", "segment": "segment342", "image_urls": [], "Book": "computerorganization" }, 
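The tag/index/offset address split that distinguishes these mapping schemes can be sketched in a few lines. This is a hedged sketch, not code from the text: the field widths assume the Example 9.3 parameters (64 KB primary memory, so p = 16; 8 bytes per block, so n = 3) with a 128-block (1-KB) data area as in Example 9.5, giving b = 7 index bits and p - n - b = 6 tag bits for direct mapping; the `split_address` and `lookup` names are ours.

```python
# Sketch of direct-mapped address decomposition (assumed parameters:
# 64 KB memory -> 16 address bits, 8-byte blocks -> 3 offset bits,
# 128-block cache -> 7 index bits, leaving 6 tag bits).
OFFSET_BITS = 3
INDEX_BITS = 7

def split_address(addr: int):
    """Split a 16-bit address into (tag, cache-block index, word offset)."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

def lookup(tags, addr):
    """Direct-mapped hit test: one comparison against the indexed tag."""
    tag, index, _ = split_address(addr)
    return tags[index] == tag

tags = [None] * (1 << INDEX_BITS)   # tag area, one tag per cache block
tag, index, _ = split_address(0xABCD)
tags[index] = tag                    # frame brought into its unique block
print(lookup(tags, 0xABCD))          # prints True (hit on re-reference)
```

Note that two addresses B x N = 128 x 8 = 1024 bytes apart produce the same index but different tags, illustrating why consecutive references to such addresses force repeated block replacements in a direct-mapped cache.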
{ "section": "9.6.6.3 Replacement Algorithm ", "content": "On a cache miss, the frame corresponding to the referenced address is brought into the cache. The placement of the new block in the cache depends on the mapping scheme used by the cache mechanism. If direct mapping is used, there is only one possible cache block the frame can occupy; if that cache block is already occupied, the frame in it is replaced by the new frame. In fully associative mapping, the new frame may occupy any vacant block in the cache; if there are no vacant blocks, it is necessary to replace one of the blocks in the cache. In the case of set-associative mapping, if all the elements of the set onto which the new frame maps are occupied, one of the elements needs to be replaced by the new frame. The most popular replacement algorithm replaces the least recently used (LRU) block in the cache. By the program locality principle, the immediate references will be to those addresses that were referenced recently, so the LRU replacement policy works well. To identify the LRU block, a counter can be associated with each cache block; when a block is referenced, its counter is set to zero, and the counters of all the other blocks are incremented by 1. The LRU block is then the one whose counter has the highest value. These counters are usually called aging counters, since they indicate the age of the cache blocks. A FIFO replacement policy can also be used, in which the block that has been in the cache the longest is replaced. To determine the block to be replaced, each time a frame is brought into the cache its identification number is loaded into a queue; the output of the queue thus always contains the identification of the frame that entered the cache first. Although this mechanism is easy to implement, its disadvantage is that under certain conditions blocks are replaced too frequently. Other possible policies are (i) the least frequently used (LFU) policy, in which the block that has experienced the least number of references in a given time period is replaced, and (ii) the random policy, in which a block among the candidate blocks is randomly picked for replacement. Simulations have shown that the performance of the random policy, which is not based on the usage characteristics of the cache blocks, is only slightly inferior to that of the policies based on usage characteristics", "url": "RV32ISPEC.pdf#segment343", "timestamp": "2023-10-17 20:16:10", "segment": "segment343", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6.6.4 Write Operations ", "content": "In the description so far, we have assumed only read operations on the cache and the primary memory. The processor also accesses the cache to update (write) data; consequently, the data
in the primary memory must also be correspondingly updated. Two mechanisms are used to maintain this data consistency: write back and write through. In the write-back mechanism, when the processor writes something into a cache block, that cache block is tagged as a dirty block, using a 1-bit tag. When a dirty block is to be replaced by a new frame, the dirty block is first copied into the primary memory. In the write-through scheme, when the processor writes into a cache block, the corresponding frame in the primary memory is also written with the new data. Obviously, the write-through mechanism has a higher overhead of memory operations than write back, but it more easily guarantees data consistency, an important feature in computer systems in which more than one processor accesses the primary memory. Consider, for instance, a system with an I/O processor. Since the I/O processor accesses the memory in DMA mode without using the cache, while the central processor accesses the memory through the cache, it is necessary that the data values be consistent. In multiple-processor systems with a common memory bus, a write-once mechanism can be used: the first time a processor writes into a cache block, the corresponding frame in the primary memory is also written, thus updating the primary memory, and the other cache controllers invalidate the corresponding block in their caches; subsequent updates of the cache affect only the local cache blocks, which are copied to the main memory as dirty blocks when they are replaced. The write-through policy is the most common. Since write requests are typically of the order of 15-20% of the total memory requests, the overhead due to write through is not significant. The tag portion of the cache usually also contains a valid bit corresponding to each tag. The valid bits are reset to zero when the system is powered up, indicating the invalidity of the data in all the cache blocks; when a frame is brought into the cache, the corresponding valid bit is set to 1", "url": "RV32ISPEC.pdf#segment344", "timestamp": "2023-10-17 20:16:10", "segment": "segment344", "image_urls": [], "Book": "computerorganization" }, { "section": "9.6.6.5 Performance ", "content": "The average access time ta of a memory system with a cache is given by ta = h tc + (1 - h) tm (9.2), where tc and tm are the average access times of the cache and the primary memory, respectively, h is the hit ratio, and (1 - h) is the miss ratio. We would like ta to be as close to tc as possible; since tm > tc, this requires that h be as close to 1 as possible. From (9.2) it follows that ta / tc = h + (1 - h)(tm / tc) (9.3). Typically, tm is 5-10 times tc; thus we would need a hit ratio of 0.75 or better to achieve a reasonable speedup. Hit ratios of the order of 0.9 are not uncommon in contemporary computer systems. Progress in hardware
technology has enabled cost-effective configuration of large-capacity cache memories. In addition, as seen in the Intel Pentium architecture described earlier, the processor chip contains a first-level cache, while a second-level cache is configured externally to the processor chip. Some processor architectures contain separate data and instruction caches. Brief descriptions of representative cache structures are provided later in this chapter.", "url": "RV32ISPEC.pdf#segment345", "timestamp": "2023-10-17 20:16:10", "segment": "segment345", "image_urls": [], "Book": "computerorganization" }, { "section": "9.7 SIZE ENHANCEMENT ", "content": "The main memory of a machine is usually not large enough to serve as the sole storage medium for all programs and data. Secondary storage devices (disks) are used to increase the capacity. However, programs and data must be in main memory during program execution, so mechanisms to transfer them between main memory and secondary storage as needed for execution are required. Typically, the capacity of secondary storage is much higher than that of main memory, and the user assumes that the complete secondary storage space (i.e., the virtual address space) is available for programs and data, although the processor can access only the main-memory space (i.e., the real or physical address space); hence the name virtual storage. In the early days, programmers who developed programs too large to fit into the main-memory space would divide their programs into independent partitions known as overlays. The overlays were brought into main memory as needed for execution. Although this process appears simple, it increases program-development and execution complexity and is visible to the user. Virtual memory mechanisms handle overlays in an automatic manner, transparent to the user. Virtual memory mechanisms are configured in one of the following ways: 1. paging systems, 2. segmentation systems, 3. paged-segmentation systems. In paging systems, the real and virtual address spaces are divided into small, equal-sized partitions called pages. Segmentation systems use memory segments, which are unequal-sized blocks; a segment is typically equivalent to an overlay described earlier. In paged-segmentation systems, segments are divided into pages, with each segment usually containing a different number of pages. We describe paging systems here, since they are simpler to implement; for the other two systems, refer to the books on operating systems listed in the
references section of this chapter for details. Consider the memory system shown in Figure 9.39. The main memory is 2^(p-d) pages long (i.e., there are 2^(p-d) real pages); we denote a page slot in main memory as a frame. The secondary storage consists of 2^(q-d) pages (i.e., 2^(q-d) virtual pages). Real and virtual pages are 2^d words long; also, q > p. Since the user assumes that the complete virtual space is available for the program, a virtual address Av (i.e., an address produced by the program referencing the memory system) consists of q bits; however, a main-memory physical address Ap consists of p bits. When a page is brought from secondary storage into a main-memory frame, the page table is updated to reflect the location of the page in main memory. Thus, when the processor refers to memory with a q-bit address, that address is transformed into a p-bit primary address using the page-table information if the referenced virtual page is in main memory. If it is not, the disk directory (Table 2) is searched to locate the page on disk. Table 2 transforms Av into the physical address Avp on disk, in the form of a drive number, head number, track number, and displacement within the track. The page at Avp is then transferred into an available real-page location, and the processor accesses the data for its operation. The operation of virtual memory is similar to that of cache, and the following mechanisms are required: 1. address translation, which determines whether the referenced page is in main memory and keeps track of page movements between the memories; 2. a mapping function, which determines where pages are located in main memory (direct, fully associative, and set-associative mapping schemes can be used); 3. a page replacement policy, which decides which real pages are replaced (LRU and FIFO policies are commonly used). Although functionally similar, cache and virtual memory systems differ in several characteristics; these differences are described next.", "url": "RV32ISPEC.pdf#segment346", "timestamp": "2023-10-17 20:16:11", "segment": "segment346", "image_urls": [], "Book": "computerorganization" }, { "section": "9.7.1 Page Size ", "content": "The page size in virtual memory schemes is of the order of 512 bytes to 4 KB, compared with the 8-16 byte block sizes of cache mechanisms. The page size is determined by monitoring the address reference patterns of application programs, as exhibited by the program locality principle. In addition to such access parameters, the block sizes of secondary storage devices influence the page-size selection.", "url": "RV32ISPEC.pdf#segment347", "timestamp": "2023-10-17 20:16:11",
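A minimal sketch of the page-table translation just described; the dictionary-based page table and the 1K-word page size are illustrative assumptions, not details from the text:

```python
# q-bit virtual address -> p-bit physical address, per Figure 9.39.
# The page table maps virtual page numbers to main-memory frame numbers;
# a missing entry models a page fault.

D = 10  # displacement bits: 1K-word pages (an assumption for illustration)

def translate(page_table, virtual_addr):
    vpn = virtual_addr >> D                 # virtual page number
    disp = virtual_addr & ((1 << D) - 1)    # displacement within the page
    frame = page_table.get(vpn)
    if frame is None:
        raise LookupError("page fault: virtual page %d not resident" % vpn)
    return (frame << D) | disp              # p-bit physical address

page_table = {3: 5}                         # virtual page 3 lives in frame 5
assert translate(page_table, (3 << D) | 7) == (5 << D) | 7
```
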
"segment": "segment347", "image_urls": [], "Book": "computerorganization" }, { "section": "9.7.2 Speed ", "content": "cache miss occurs processor waits corresponding block main memory arrive cache minimize wait time cache mechanisms implemented totally hardware operate maximum speed possible miss virtual memory mechanism termed page fault page fault usually treated operating system call virtual memory mechanisms thus processor switched perform task memory mechanism brings corresponding page main memory processor switched back task since speed main criterion virtual memory mechanisms completely implemented using hard ware popularity microprocessors special hardware units called memory mana gement units mmu handl e virtual memory funct ions avai lable one device describ ed later chapt er", "url": "RV32ISPEC.pdf#segment348", "timestamp": "2023-10-17 20:16:11", "segment": "segment348", "image_urls": [], "Book": "computerorganization" }, { "section": "9.7. ", "content": "3 address tra nslation th e number entries tag area cache mechanism sma com pared entrie p age table virtual memory mec hanism sizes main memor secondary storage result maint enance page table minim izing page stable sear ch time im portant consi der ations descr ibe popular address translat ion schemes n ext figur e", "url": "RV32ISPEC.pdf#segment349", "timestamp": "2023-10-17 20:16:11", "segment": "segment349", "image_urls": [], "Book": "computerorganization" }, { "section": "9.40 ", "content": "shows pageta ble structure v irtual memory scheme figur e 9 39 th e page table cont ains 2 slots page eld virtual address forms inde x page table secon dary page moved main memor main memor frame numb er entered int correspo nding slot page table residenc e bit set 1 indicate mainme mory fra cont ains valid page residence bit bein g 0 indicates correspo nding main memor frame emp ty page table search require one access main memor whi ch page table usually maint ained scheme sinc e ther e 2p fra mes main memory 
and 2^(q-d) (with 2^(q-d) > 2^(p-d)) slots in the page table, some slots of the page table are always empty; since q is large, such a page table wastes main-memory space. Example 9.6: Consider a memory system with a 64K-word secondary memory, an 8K-word main memory, and 1K-word pages. Then d = 10 bits, q = 16 bits, and p = 13 bits; i.e., q - d = 6 and p - d = 3. There are eight frames in main memory, and the secondary storage contains 64 pages. The page table thus consists of 64 entries, and each entry is 4 bits long: one 3-bit frame number and 1 residence bit. Note that in order to retrieve data from main memory, two accesses are required: the first to access the page table and the second to retrieve the data. On a page fault, the time required to move the page into main memory must be added to the access time. Figure 9.41 shows a page-table structure containing 2^(p-d) slots. Each slot contains the virtual page number and the corresponding main-memory frame number of a virtual page located in main memory. In this structure, the page table is first searched to locate the page number of the virtual address reference; when a matching number is found, the corresponding frame number replaces the (q - d)-bit virtual page number. This process requires on average 2^(p-d)/2 searches through the page table. Example 9.7: For the memory system of Example 9.6, a page table with eight entries is needed in this scheme. Each entry consists of 10 bits: a residence bit, a 3-bit main-memory frame number, and a 6-bit secondary-storage page number. Figure 9.42 shows a scheme to enhance the speed of the page-table search, in which the most recently referenced entries of the page table are maintained in a fast memory, usually an associative memory, called the translation lookaside buffer (TLB). The search is initiated in the TLB and the page table simultaneously; if a match is found in the TLB, the page-table search is terminated, otherwise the page-table search continues. Consider a 64 KB main memory with a 1 KB page size (d = 10, p - d = 6). Since the main-memory address is 16 bits long, the page table contains 2^6 = 64 entries and is easily maintained in main memory. If the main-memory capacity is increased to 1 MB, the page table would contain 1K entries, a significant portion of main memory. Thus, as the main-memory size grows, the size of the page table poses a problem. In such cases the page table could be maintained in secondary storage; the page table is itself divided into several pages and brought into main memory one page at a time to perform the comparison. Figure 9.43 shows the paged-segmentation scheme of virtual memory management. In this scheme, the virtual address is divided into a segment number,
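The arithmetic of Example 9.6 can be reproduced directly; the helper name and parameter names are illustrative:

```python
from math import log2

def page_table_params(secondary_words, main_words, page_words):
    """Derive the paging parameters of the scheme above from memory sizes
    (all sizes must be powers of two)."""
    q = int(log2(secondary_words))             # virtual address bits
    p = int(log2(main_words))                  # physical address bits
    frames = main_words // page_words          # 2^(p - d)
    virtual_pages = secondary_words // page_words  # 2^(q - d)
    entry_bits = int(log2(frames)) + 1         # frame number + residence bit
    return q, p, frames, virtual_pages, entry_bits

# Example 9.6: 64K-word secondary memory, 8K-word main memory, 1K-word pages.
q, p, frames, vpages, entry_bits = page_table_params(64 * 1024, 8 * 1024, 1024)
assert (q, p) == (16, 13)
assert frames == 8 and vpages == 64
assert entry_bits == 4   # 3-bit frame number plus 1 residence bit
```
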
page number, and displacement. The segment number indexes the segment table, whose entries point to the base addresses of the corresponding page tables; thus there is one page table per segment. The page table is searched in the usual manner to arrive at the main-memory frame number.", "url": "RV32ISPEC.pdf#segment350", "timestamp": "2023-10-17 20:16:11", "segment": "segment350", "image_urls": [], "Book": "computerorganization" }, { "section": "9.8 ADDRESS EXTENSION ", "content": "In the description of virtual memory schemes above, we assumed that the capacity of secondary storage is much higher than that of main memory. We also assumed that memory references by the CPU are in terms of virtual addresses. In practice, the addressing range of the CPU is limited, and direct representation of the virtual address as part of the instruction may not be possible. Consider the instruction format of ASC, for instance. The direct addressing range is the first 256 words of memory; the range is extended to 64 Kwords by indexing and indirect addressing. Suppose the secondary storage is 256 Kwords long, thus requiring an 18-bit virtual address. Since ASC can represent only a 16-bit address, the remaining 2 address bits must be generated by some other mechanism. One possibility is to output the two most significant address bits to an external register (using the WWD command) and the lower 16 bits of the address on the address bus, thus enabling the secondary-storage mechanism to reconstruct the required 18-bit address, as shown in Figure 9.44. To make this scheme work, the assembler is modified to assemble programs in an 18-bit address space, where the lower 16 bits are treated as displacement addresses and the 2-bit page (bank) address is maintained constant with respect to a program segment. During execution, the CPU executes a WWD to output the page address to the external register as the first instruction of each program segment. The address-extension concept is illustrated in Figure", "url": "RV32ISPEC.pdf#segment351", "timestamp": "2023-10-17 20:16:11", "segment": "segment351", "image_urls": [], "Book": "computerorganization" }, { "section": "9.4 ", "content": "4; it is usually termed bank switching. In fact, the scheme can be generalized to extend the address, as shown in Figure", "url": "RV32ISPEC.pdf#segment352", "timestamp": "2023-10-17 20:16:11", "segment": "segment352", "image_urls": [], "Book": "computerorganization" }, { "section": "9.45, ", "content":
"signica nt bits addre ss used sel ect one 2m nbi registers address bus pbits wide virtual addre ss n p bits long descr iption assumed data bus u sed output bit addre ss port ion practice need addre ss bus timemulti plexed transf er two addre ss portion s dec pdp 11 23 system uses thi addre ss extensi schem e seen figure", "url": "RV32ISPEC.pdf#segment353", "timestamp": "2023-10-17 20:16:11", "segment": "segment353", "image_urls": [], "Book": "computerorganization" }, { "section": "9.46, ", "content": "proce ssors use either paged base displacem ent addressi ng mode accom modate addre ss exten sion naturally", "url": "RV32ISPEC.pdf#segment354", "timestamp": "2023-10-17 20:16:11", "segment": "segment354", "image_urls": [], "Book": "computerorganization" }, { "section": "9.9 ", "content": "example systems v ariety structure used implementa tion memor hierar chies modern compute r syst ems early syst ems com mon see cache mec hanism imple mented com pletely using hardwar e virtua l memor mec hanism software intensiv e advanc es hardwar e technol ogy mmu became com mon mmu man age cache leve l hierar chy othe rs manage cache virtua l levels common see mos memor man agement hardware implemen ted proce ssor chip section provide brief description three memory systems rst expand description intel pentium processor structure chapter 8 show memory managem ent details followed hardwar e stru cture etails motorol 680 20 processor nal exam ple describ es memor management aspects sun micros ystems system3", "url": "RV32ISPEC.pdf#segment355", "timestamp": "2023-10-17 20:16:12", "segment": "segment355", "image_urls": [], "Book": "computerorganization" }, { "section": "9.9.1 Memory Mana gement in Intel Pro cessors ", "content": "sect ion extracted intel architec ture softwa dev eloper manua ls listed reference sect ion chapt er operating system designed work processor uses proce ssor memory man agement facilitie acce ss memory th ese facilitie provide features segment ation pagin g 
allow memory to be managed efficiently and reliably. Programs usually do not address physical memory directly when they employ the processor's memory-management facilities. The processor's addressable memory space is called the linear address space. Segmentation provides a mechanism for dividing the linear address space into smaller protected address spaces called segments. In Intel processors, a segment can be used to hold the code, data, or stack of a program, or to hold system data structures. Each program has its own set of segments when more than one program (task) is running on the processor. The processor enforces the boundaries between these segments and ensures that one program does not interfere with the execution of another program by writing into the other program's segments. Segmentation also allows typing of segments, so that the operations that may be performed on a particular type of segment can be restricted. All of the segments within a system are contained in the processor's linear address space. The Intel architecture supports either direct physical addressing of memory or virtual memory through paging. When physical addressing is used, a linear address is treated as a physical address. When paging is used, all of the code, data, stack, and system segments are paged, with only the most recently accessed pages being held in physical memory. Any program (task) running on an Intel processor is given a set of resources for executing instructions and for storing code, data, and state information. They include an address space of up to 2^32 bytes, a set of general data registers, a set of segment registers (see Chapter 8 for details), and a set of status and control registers. Only the resources associated with memory management are described in this section. The Intel architecture uses three memory models: flat, segmented, and real-address mode. With the flat memory model, the linear address space appears as a single continuous address space to a program, as shown in Figure 9.47. The code, data, and procedure stacks are all contained in this address space. The linear address space for this model is byte-addressable, and its size is 4 GB. With the segmented memory model, the linear address space appears as a group of independent segments, as shown in Figure 9.48. The code, data, and stacks are stored in separate segments. When this model is used, a program issues a logical address to address a byte in a segment; a segment selector is used to identify the segment to be accessed, and an offset is used to identify a byte in the address space of the segment. The processor can address
up to 16,383 segments of different sizes and types, and each segment can be as large as 2^32 bytes. The real-address mode (Figure 9.49) uses a specific implementation of segmented memory, in which the linear address space for the program and the operating system consists of an array of segments of up to 64 KB maximum size each. The maximum size of the linear address space in real-address mode is 2^20 bytes. The processor can operate in three operating modes: protected, real-address, and system management. In protected mode, the processor can use any of the memory models; the memory model used depends on the design of the operating system, and different memory models can be used when multitasking is implemented. In real-address mode, the processor supports only the real-address mode memory model. When system management mode is used, the processor switches to a separate address space called the system management RAM (SMRAM); the memory model used to address bytes in this address space is similar to the real-address mode model. The Intel processor provides four memory-management registers: the global descriptor table register (GDTR), the local descriptor table register (LDTR), the interrupt descriptor table register (IDTR), and the task register (TR). These registers specify the locations of the data structures that control segmented memory management; Figure 9.50 shows the details. The GDTR is 48 bits long and is divided into two parts: a 16-bit table limit and a 32-bit linear base address. The 16-bit table limit specifies the number of bytes in the table, and the 32-bit base address specifies the linear address of byte 0 of the GDT. Two instructions are used to load (LGDT) and store (SGDT) the GDTR. The IDTR is also 48 bits long and is divided into the same two parts: a 16-bit table limit specifying the number of bytes in the table and a 32-bit linear base address specifying the linear address of byte 0 of the IDT. LIDT and SIDT are used to load and store this register. The LDTR and TR are system segment registers, each with an associated segment descriptor register. The system segment registers are 16 bits long and are used to select segments; the segment selector and limit of the TR are automatically loaded when a task switch occurs. Five control registers (CR0, CR1, CR2, CR3, and CR4) determine the operating mode of the processor and the characteristics of the currently executing tasks. The Intel Pentium architecture supports caches, TLBs, and write buffers for temporary on-chip storage of instructions and
data (see Figure", "url": "RV32ISPEC.pdf#segment356", "timestamp": "2023-10-17 20:16:12", "segment": "segment356", "image_urls": [], "Book": "computerorganization" }, { "section": "9.51). ", "content": "There are two levels of cache: level 1 (L1) and level 2 (L2). The L1 cache is closely coupled to the instruction-fetch unit of the processor, and the L2 cache is closely coupled to the L1 cache and the processor system bus. In the Intel 486 processor, there was one cache for both instructions and data; in the Pentium and subsequent processor series, there are two separate caches, one for instructions and one for data. In the Intel Pentium processor, the L2 cache is external to the processor package and is optional; the L2 cache is a unified cache for the storage of both instructions and data. A logical address contains two items: a segment selector and an offset. The segment selector is a 16-bit identifier for a segment that usually points to the segment descriptor that defines the segment. A segment selector contains three items: an index, a TI (table indicator) flag, and the requested privilege level (RPL). The index is used to select one of the 8192 (2^13) descriptors in the GDT or LDT; the TI flag is used to select which descriptor table to use; and the RPL is used to specify the privilege level of the selector. Figure 9.52 shows how these items are arranged in a segment selector. For the processor to access a segment, the segment must be loaded into the appropriate segment register. The processor provides six registers for holding up to six segment selectors. Each of these segment registers supports a specific kind of memory reference (code, stack, or data). The code-segment (CS), data-segment (DS), and stack-segment (SS) registers must be loaded with valid segment selectors before any kind of program execution can take place. However, the processor also provides three additional data-segment registers (ES, FS, and GS), which can be used to make additional data segments available to programs. Each segment register has two parts: a visible part and a hidden part. The processor loads the base address, segment limit, and access information into the hidden part, and the segment selector into the visible part, as shown in Figure 9.53. A segment descriptor, an entry in the GDT or LDT, provides the processor with the size and location of a segment, as well as access-control and status information. In Intel systems, compilers, linkers, loaders, or the operating system typically create the segment descriptors. Figure 9.54 shows the general descriptor
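The three selector fields just described (13-bit index, TI flag, 2-bit RPL) can be decoded as follows; the field positions follow the standard Intel layout, and the helper name is illustrative:

```python
def decode_selector(sel):
    """Decode a 16-bit segment selector into (index, TI, RPL):
    bits 0-1 = RPL, bit 2 = table indicator (0 = GDT, 1 = LDT),
    bits 3-15 = descriptor index (one of 8192)."""
    rpl = sel & 0x3
    ti = (sel >> 2) & 0x1
    index = sel >> 3
    return index, ti, rpl

# Descriptor 2 in the GDT, privilege level 0 -> selector 0x10.
assert decode_selector(0x10) == (2, 0, 0)
assert decode_selector((8191 << 3) | 0b111) == (8191, 1, 3)
```
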
format for the various types of Intel segment descriptors. A segment descriptor table is a data structure in the linear address space that contains an array of segment descriptors; it can contain up to 8192 (2^13) 8-byte descriptors. There are two kinds of descriptor tables: the global descriptor table (GDT) and the local descriptor tables (LDTs). Each Intel system must have one GDT defined; one or more LDTs can also be defined. The base linear address and limit of the GDT must be loaded into the GDTR; the format is shown in Figure 9.55. The basic segmentation mechanism can be used in a wide variety of system designs, ranging from flat models that make minimal use of segmentation to protect programs, to multi-segmented models that use segmentation to create a powerful operating environment. Examples of how segmentation is used in Intel systems to improve memory-management performance are provided in the following paragraphs. The basic flat model is the simplest memory model for a system; details of this model are shown in Figure 9.56. It gives the operating system and the programs access to a continuous, unsegmented address space. The Intel system implements this model in a way that hides the segmentation mechanism of the architecture from both the system designer and the application programmer. To give the greatest extensibility when implementing this model, the system must create at least two segment descriptors, one referencing a code segment and one referencing a data segment. Both segments are mapped to the entire linear space: both segment descriptors point to the same base address value of 0 and hold the same segment limit of 4 GB. The protected flat model is similar to the basic flat model; the difference is in the segment limits. In this model, the segment limit is set to include only the range of addresses for which physical memory actually exists. This model, shown in Figure 9.57, provides better protection by adding a simple paging structure. The multi-segmented model uses the full capabilities of the segmentation mechanism to provide protection of code, data structures, and tasks; the model is shown in Figure 9.58. In this model, each program is given its own table of segment descriptors and its own segments. The segments can be completely private to the assigned program or shared among programs. As shown in the figure, each segment register has a separate segment descriptor associated with it, and each segment descriptor points to a separate segment. Access to the segments and the execution environments of the individual programs running on an Intel
system is thus controlled by hardware. Four control registers facilitate the paging options. CR0 contains system control flags that control the operating mode and states of the processor. Bit 31 of CR0 is the paging (PG) flag. The PG flag enables the page-translation mechanism; this flag must be set when the processor implements a demand-paged virtual memory system or when the operating system is designed to run more than one program (task) in virtual-8086 mode. CR2 contains the linear address that caused a page fault. CR3 contains the physical address of the base of the page directory and is therefore referred to as the page-directory base register (PDBR); only the 20 most significant bits of the PDBR are specified, and the lower 12 bits are assumed to be 0. Bit 4 of CR4 is the page size extensions (PSE) flag. The PSE flag enables a larger page size of 4 MB; when the PSE flag is not set, the more common page length of 4 KB is used. Figure 9.59 shows the various configurations achieved by setting the PG and PSE flags, as well as the PS (page size) flag. When paging is enabled, the processor uses the information contained in three data structures to translate linear addresses into physical addresses. The first data structure is the page directory, an array of 32-bit page-directory entries (PDEs) contained in a 4 KB page; up to 1024 PDEs can be held in a page directory. The second data structure is the page table, an array of 32-bit page-table entries (PTEs) contained in a 4 KB page; up to 1024 PTEs can be held in a page table. The third data structure is the page itself: a 4 KB or 4 MB flat address space, depending on the setting of the PS flag. Figures", "url": "RV32ISPEC.pdf#segment357", "timestamp": "2023-10-17 20:16:13", "segment": "segment357", "image_urls": [], "Book": "computerorganization" }, { "section": "9.60 ", "content": "", "url": "RV32ISPEC.pdf#segment358", "timestamp": "2023-10-17 20:16:13", "segment": "segment358", "image_urls": [], "Book": "computerorganization" }, { "section": "9.61 ", "content": "show the formats of the page-directory entries and PTEs when 4 KB pages and 4 MB pages are used. The main difference between the formats is that the 4 MB format does not utilize page tables and as a result has no page-table entries. The functions of the flags and fields in these entries are as follows. Page base address (bits 12-31 or 22-31): for 4 KB pages, specifies the physical address of the first byte of a 4 KB page (in a PTE) or the physical address of the first byte of a page table (in a PDE); for 4 MB pages, the PDE specifies the physical address of the first byte of a 4 MB
page, in which case only bits 22-31 of the entry are used and the rest of the bits are reserved and must be set to 0. Present (P) flag (bit 0): indicates whether the page or page table pointed to by the entry is currently loaded in physical memory; when the flag is clear, the page is not in memory, and if the processor attempts to access the page, it generates a page-fault exception. Read/write (R/W) flag (bit 1): specifies the read/write privileges for a page or a group of pages (in the case of a PDE that points to a page table); when the flag is clear, the page is read-only, and when the flag is set, the page can be read and written. User/supervisor (U/S) flag (bit 2): specifies the user/supervisor privileges for a page or a group of pages; when the flag is clear, the page is assigned the supervisor privilege level, and when the flag is set, the page is assigned the user privilege level. Page-level write-through (PWT) flag (bit 3): controls the write-through or write-back caching policy of individual pages or page tables. Page-level cache disable (PCD) flag (bit 4): controls whether individual pages or page tables can be cached. Accessed (A) flag (bit 5): indicates whether a page or page table has been accessed. Dirty (D) flag (bit 6): indicates whether a page has been written to. Page size (PS) flag (bit 7): determines the page size; when the flag is clear, the page size is 4 KB and the PDE points to a page table (normal 32-bit addressing), and when the flag is set, the page size is 4 MB and the PDE points directly to a page. The pages associated with a page table are always 4 KB long. Global (G) flag (bit 8): when set, indicates a global page. Available-to-software bits (bits 9-11): these bits are available for use by software. TLBs are on-chip caches that the processor utilizes to store the most recently used page-directory entries and PTEs. The Pentium processor has separate TLBs for the data and instruction caches, as well as for 4 KB and 4 MB page sizes. Most paging is performed using the contents of the TLBs; bus cycles to the page directory and page tables in memory are performed only when the TLBs do not contain the translation information for a requested page. Figure", "url": "RV32ISPEC.pdf#segment359", "timestamp": "2023-10-17 20:16:13", "segment": "segment359", "image_urls": [], "Book": "computerorganization" }, { "section": "9.62 ", "content": "shows the page-directory and page-table hierarchy when mapping linear addresses to 4 KB pages. The entries in the page directory point to page tables, and the entries in the page tables point to pages in physical memory. This paging method can address up to 2^20 pages, which spans a linear address space of 2^32 bytes (4 GB). To select the various table entries, the linear address is divided into
three sections: bits 22-31 provide the offset of an entry in the page directory, and the selected entry provides the base physical address of a page table; bits 12-21 provide the offset of an entry in the selected page table, and that entry provides the base physical address of a page in physical memory; bits 0-11 provide the offset within the physical address of the page. Figure 9.63 shows how the page directory is used to map linear addresses to 4 MB pages. The entries in the page directory point to 4 MB pages in physical memory; page tables are not used for 4 MB pages, since these page sizes are mapped directly by one of the PDEs. The PDBR points to the base of the page directory; bits 22-31 provide the offset of an entry in the page directory, and the selected entry provides the base physical address of a 4 MB page; bits 0-21 provide the offset within the physical address of the page. As explained previously, when the PSE flag in CR4 is set, both 4 MB pages and page tables for 4 KB pages can be accessed from the same page directory. Mixing 4 KB and 4 MB pages can be useful; for example, an operating-system or executive kernel can be placed in a large page to reduce TLB misses and thus improve overall system performance. The processor maintains 4 MB page entries and 4 KB page entries in separate TLBs, so placing often-used code such as the kernel in a large page frees up 4 KB page TLB entries for application programs and tasks. The Intel architecture supports a wide variety of approaches to memory management using the paging and segmentation mechanisms. There is no forced correspondence between the boundaries of pages and segments: a page can contain the end of one segment and the beginning of another; likewise, a segment can contain the end of one page and the beginning of another. Segments can be mapped to pages in several ways; one implementation is demonstrated in Figure 9.64. This approach gives each segment its own page table by giving the segment a single entry in the page directory, which provides the access-control information for paging the entire segment.", "url": "RV32ISPEC.pdf#segment360", "timestamp": "2023-10-17 20:16:13", "segment": "segment360", "image_urls": [], "Book": "computerorganization" }, { "section": "9.9.2 Motorola 68020 Memory Management ", "content": "Figure 9.65 shows the block diagram of the MC68020 instruction cache; data are not cached in this system. The cache contains 64 entries; each entry consists of a 26-bit tag field and a 32-bit data (instruction) field. The tag field contains the most significant 24 bits (A8-A31) of the memory address, a
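The two-level 4-KB translation walk of Figure 9.62 can be sketched as follows; the dictionary model of physical memory, the entry values, and all addresses are illustrative assumptions:

```python
def split_linear(la):
    """Split a 32-bit linear address for 4-KB paging: 10-bit page-directory
    index (bits 22-31), 10-bit page-table index (bits 12-21), 12-bit offset."""
    return (la >> 22) & 0x3FF, (la >> 12) & 0x3FF, la & 0xFFF

def translate_4kb(pdbr, la, mem):
    """Two-level walk; `mem` maps physical addresses of 32-bit entries to
    their values. Entries hold a base address in bits 12-31 and the
    present (P) flag in bit 0."""
    d, t, off = split_linear(la)
    pde = mem[(pdbr & ~0xFFF) + d * 4]
    assert pde & 1, "page table not present"
    pte = mem[(pde & ~0xFFF) + t * 4]
    assert pte & 1, "page fault"
    return (pte & ~0xFFF) | off

# One mapped page: directory at 0x1000, page table at 0x2000,
# linear 0x00403ABC -> physical 0x9ABC (all values illustrative).
mem = {0x1000 + 1 * 4: 0x2001, 0x2000 + 3 * 4: 0x9001}
assert translate_4kb(0x1000, 0x00403ABC, mem) == 0x9ABC
```
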
valid bit, and a function-code bit FC2 (FC2 = 1 indicates supervisory mode, and FC2 = 0 indicates user-mode operation). Address bits A2-A7 are used to select one of the 64 entries in the cache; thus the cache is direct-mapped. The data field can hold two 16-bit instruction words. During a cache access, the tag field of the selected entry is first matched against A8-A31. If there is a match and the entry is valid, it is a hit, and A1 selects the upper or lower 16-bit word from the data area of the selected entry. If there is no match, or if the valid bit is 0, a miss occurs; the 32-bit instruction at the referenced address is then brought into the cache, and the valid bit is set to 1. In this system, an on-chip cache access requires two cycles, while an off-chip memory access requires three cycles; thus on hits the memory-access time is reduced considerably. In addition, data can be accessed simultaneously with an instruction fetch.", "url": "RV32ISPEC.pdf#segment361", "timestamp": "2023-10-17 20:16:13", "segment": "segment361", "image_urls": [], "Book": "computerorganization" }, { "section": "9.9.3 Sun-3 System Memory Management ", "content": "Figure 9.66 shows the structure of the Sun-3 workstation, a microcomputer system based on the MC68020. In addition to the processor, the system uses a floating-point coprocessor (MC68881), an MMU, a cache unit, 4 or 8 MB of memory, an Ethernet interface for networking, a high-resolution monitor with 1 MB of display memory, keyboard and mouse ports, two serial ports, a bootstrap EPROM, an ID PROM, a configuration EEROM, and a time-of-day clock. Through the memory-bus interface, external devices are interfaced to the 3/100 and 3/200 systems over a VME bus interface. We restrict this description to the memory-management aspects of the system. Although the MC68020 has a memory-management coprocessor (the MC68851), Sun Microsystems chose to use its own MMU design. The MMU supports mapping of 32 MB of physical memory at a time, out of a total virtual address space of 4 GB (Figure 9.67). The virtual address space is divided into four physical (i.e., real) address spaces: one for physical memory and three for I/O device spaces. Each physical address space is divided into 2048 segments, with each segment containing 16 pages and a page size of 8 KB. The segment and page tables are stored in high-speed memory attached to the MMU rather than in main memory; this speeds up address translation, since no page faults can occur on segment and page-table accesses. The MMU contains a 3-bit context register. In the address-translation mechanism shown in Figure 9.67, the contents of the context register are concatenated with the
11-bit segment number of the virtual address, and the resulting 14 bits are used to index the segment table and fetch a page-table pointer. The page-table pointer is concatenated with the 4-bit virtual page number and used to access the page descriptor in the page table. The page table yields a 19-bit physical page number, which is concatenated with the 13-bit displacement of the virtual address to derive a 32-bit physical address; this allows an addressing range of 4 GB across the four address spaces. In addition to the page number, the PTEs contain protection, statistics, and page-type fields. The protection field contains the following bits: a valid bit, which indicates that the page is valid (if it is not set, any access to the corresponding page results in a page fault); a write-enable bit, which when set indicates that write operations are allowed on the page; and a supervisor bit, which when set indicates that the page may be accessed only when the processor is in supervisory mode (no user-mode accesses are allowed). The statistics-field bits identify the most recently used pages: when a page is successfully accessed, the MMU sets the accessed bit for the page, and if the access is a write, the modified bit for the page is also set. These bits are used for page replacement; modified pages are transferred to the backing store during page replacement. The type bits indicate in which of the four physical spaces the page resides. On the basis of these bits, a memory-access request is routed to main memory, the on-board I/O bus, or the VME bus. Memory may reside on-board, on the I/O bus, or on the VME bus, but only memory mapped to user programs can be used as main memory. The MMU overlaps address translation with the physical memory access. When address translation is started, the 13-bit displacement value is sent to the memory as a row address; this activates the memory cells of every page whose corresponding row is selected to fetch their contents. When the translation is completed, the translated part of the address is sent as the column address to the memory, by which time the memory cycle is almost complete; the translated address selects the memory word in the appropriate row. With this overlapped operation, the translation time is effectively zero. The MMU controls all accesses to memory, including those by DMA devices. In this so-called direct virtual memory access (DVMA) mode, the devices are given virtual addresses, and the MMU interposes itself between the devices and memory, translating virtual addresses into real addresses transparently to the devices. This process places a high load on the address-translation logic; to avoid saturating the MMU's translation logic under this
load, a separate translation pipeline is provided for DMA operations. Sun-3 systems use two different cache systems. The 3/100 series has no external cache and uses the on-chip cache of the MC68020. The 3/200 series uses an external 64 KB cache (Figure 9.68) that works closely with the MMU. The cache is organized into 4096 lines of 16 bytes each and is direct-mapped; a tag-area RAM and a comparator are used for tag matching. The cache uses the write-back mechanism. Since the cache is maintained in a two-port RAM, write-back operations can be performed in parallel with cache accesses by the processor. The cache works with virtual rather than real addresses: address translation is started at each memory reference in parallel with the cache search, and on a cache hit the translation is aborted. The reader is referred to the manufacturer's manuals for further details.", "url": "RV32ISPEC.pdf#segment362", "timestamp": "2023-10-17 20:16:13", "segment": "segment362", "image_urls": [], "Book": "computerorganization" }, { "section": "9.10 SUMMARY ", "content": "The four basic types of memories (RAM, CAM, SAM, and DAM) were introduced in this chapter, along with the design of typical memory cells, large memory systems, and representative ICs. The main memory of a computer system is now built with semiconductor RAM devices, since advances in semiconductor technology have made them cost-effective. Magnetic and optical disks are the current popular secondary memory devices. The speed-versus-cost tradeoff is the basic parameter in designing memory systems. Although large-capacity semiconductor RAMs are available and used in system design, memory hierarchies consisting of a combination of random- and direct-access devices are still seen, even in smaller computer systems. The memory subsystem of modern-day computer systems consists of a memory hierarchy ranging from the fast, most expensive processor registers to the slow, least expensive secondary storage devices. The purpose of the hierarchy is the capacity/speed/cost tradeoff: to achieve the desired capacity and speed at the lowest possible cost. This chapter described popular schemes for enhancing the speed and capacity of memory systems. Virtual memory schemes are common even in microcomputer systems, and modern-day microprocessors offer on-chip cache and memory-management hardware to enable building large, fast memory systems. As memory capacity increases, the addressing range of the processor becomes a limitation; popular schemes to extend the
addressing range were also described in this chapter. The speed and cost characteristics of memory devices keep improving almost daily; as a result, we have avoided listing the characteristics of currently available devices and refer the reader to the manufacturers' manuals and the magazines listed in the references section of this chapter for that information.", "url": "RV32ISPEC.pdf#segment363", "timestamp": "2023-10-17 20:16:13", "segment": "segment363", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 10 ", "content": "ARITHMETIC/LOGIC UNIT ENHANCEMENT. The ASC arithmetic logic unit (ALU) was designed to perform addition, complementation, and single-bit shift operations on 16-bit binary data in parallel. The four basic arithmetic operations (addition, subtraction, multiplication, and division) can be performed using this ALU and the appropriate software routines described in Chapter 5. Advances in IC technology have made it practical to implement most ALU functions as required in hardware, thereby minimizing software and increasing speed. In this chapter we describe a number of enhancements that make the ASC ALU resemble a practical ALU. The majority of modern-day processors use bit-parallel ALUs. It is possible to use a bit-serial ALU where slow operation is acceptable; in a calculator IC, for example, serial arithmetic may be adequate due to the slow nature of the human interaction with the calculator. In building processors on single ICs, a considerable amount of silicon real estate can be conserved by using a serial ALU, and the silicon area thus saved can be used for implementing more complex operations, as discussed in Chapter 8. The most common operands an ALU deals with are binary (fixed- and floating-point), decimal, and character-string data. We describe ALU enhancements that facilitate fixed-point binary and logical data manipulation in the next section. Section 10.2 deals with decimal arithmetic, and Section 10.3 describes the pipelining concept as applied to ALUs. It is common to see either multiple functional units within an ALU or multiple ALUs within a CPU. Section 10.4 outlines the use of multiple units to enable parallelism of operations, thereby achieving higher speeds. Section 10.5 provides details of several commercially available systems.", "url": "RV32ISPEC.pdf#segment364", "timestamp": "2023-10-17
20:16:14", "segment": "segment364", "image_urls": [], "Book": "computerorganization" }, { "section": "10.1 LOGICAL AND FIXED-POINT BINARY OPERATIONS ", "content": "The ASC ALU was designed to perform only single-bit right and left shifts; other types of shift operations are useful in practice. The parallel binary adder used in the ASC ALU is slow, since the carry has to ripple from the least significant bit to the most significant bit before the addition is complete. Several algorithms for fast addition have been devised; we describe one such algorithm in this section. Various multiplication and division schemes have also been devised; we describe the most popular ones", "url": "RV32ISPEC.pdf#segment365", "timestamp": "2023-10-17 20:16:14", "segment": "segment365", "image_urls": [], "Book": "computerorganization" }, { "section": "10.1.1 Logical Operations ", "content": "The single-bit shifts performed by the ASC ALU are called arithmetic shift operations, since the results of the shift conform to the 2s complement arithmetic used by the ALU. The ASC ALU was designed to shift only 1 bit at a time. It is desirable to have the capability of performing multiple shifts using one instruction. Since the address field of the ASC instruction format is not used by shift instructions, it may be used to represent the number of shifts, as in SHR N and SHL N, where the operand N specifies the number of shifts. The ALU hardware can be designed to perform an arbitrary number of shifts in one cycle, or in multiple cycles by using the single-shift operation repeatedly; in either case, the control unit must be changed to recognize the shift count in the address field and perform the appropriate number of shifts. In practical ALUs it is common to see rotate operations (rotate right/left with or without carry, etc.), as described earlier (see Figure 8.6). Logic operations such as AND, OR, NOT, and EXCLUSIVE-OR are also useful ALU functions. The operands of these operations could be the contents of registers and/or memory locations; for instance, AND Z might imply bit-by-bit ANDing of the accumulator contents with those of memory location Z, and NOT might imply bit-by-bit complementation of the accumulator. Logical shift operations are useful in manipulating logical data such as characters and strings. In a logical shift, the bits are shifted left or right and zeros are typically inserted into the vacant positions; no sign copying is performed, as it is in arithmetic shifting", "url": "RV32ISPEC.pdf#segment366", "timestamp": "2023-10-17 20:16:14", "segment": "segment366", "image_urls": [], "Book": 
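The distinction drawn above between arithmetic and logical right shifts can be sketched as follows; this is a minimal illustration in Python, and the function names are illustrative, not from the text:

```python
def arithmetic_shift_right(x, n_bits=8):
    """1-bit arithmetic right shift of an n_bits-wide two's-complement
    value: the sign bit is copied into the vacated position."""
    sign = x & (1 << (n_bits - 1))      # isolate the sign bit
    return (x >> 1) | sign              # shift, then restore the sign

def logical_shift_right(x, n_bits=8):
    """1-bit logical right shift: a zero enters the vacated position."""
    return (x >> 1) & ((1 << n_bits) - 1)

# 0b10010110 is -106 as an 8-bit two's-complement value.
# Arithmetic shift keeps the sign: 0b11001011 (-53).
# Logical shift inserts a zero:    0b01001011 (+75).
```

The arithmetic version halves a signed value correctly because the sign bit is replicated; the logical version is what character- and string-manipulation code normally wants.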
"computerorganization" }, { "section": "10.1.2 Addition and Subtraction ", "content": "The pseudo-parallel adder used in the ASC ALU is slow, because the addition is not complete until the carry has rippled through to the most significant bit. Several techniques for increasing the speed of addition exist; we discuss one such technique here: carry lookahead. Consider the ith stage of a pseudo-parallel adder, with inputs Ai, Bi, and Ci-1 (the carry from the previous stage) and outputs Si and Ci (the carry to the next stage). We define two functions: the carry generate function Gi = Ai . Bi (the stage generates a carry when Ai = Bi = 1) and the carry propagate function Pi = Ai + Bi (the carry Ci-1 is propagated when Ai = 1 or Bi = 1). Substituting Gi and Pi into the sum and carry functions of the full adder, we get Equation 10.5: Ci = Gi + Pi . Ci-1. From this equation it can be seen that each Ci is a function of the Gs and Ps of the earlier stages, and hence all the Cis can be generated simultaneously once the Gs and Ps are available. Figure 10.1 shows the schematic and detailed circuitry of a 4-bit carry lookahead adder (CLA). This adder is faster than the ripple-carry adder, since its delay is independent of the number of stages and is equal to the delay of the three stages of circuitry in Figure 10.1 (i.e., six gate delays). 4- and 8-bit off-the-shelf CLA units are available, and it is possible to connect several such units to form larger adders; refer to Section 10.5 for details of an off-the-shelf ALU", "url": "RV32ISPEC.pdf#segment367", "timestamp": "2023-10-17 20:16:14", "segment": "segment367", "image_urls": [], "Book": "computerorganization" }, { "section": "10.1.3 Multiplication ", "content": "Multiplication can be performed by repeated addition of the multiplicand to itself, a number of times equal to the multiplier. In the binary system, multiplication by a power of 2 corresponds to shifting the multiplicand left by one position; hence, multiplication can be performed by a series of shift and add operations, as in Example 10.1. Note that for each nonzero multiplier bit, the multiplicand is shifted left by the appropriate number of bits and added to the partial product. Since the product of two n-bit numbers is 2n bits long, we can start with a 2n-bit accumulator containing all 0s and obtain the product by the following algorithm: (1) perform the following step n times (i = 0 through n - 1): (2) test the multiplier bit Ri; if Ri = 0, shift the accumulator right 1 bit; if Ri = 1, add the multiplicand to the most significant end of the accumulator and then shift the accumulator right 1 bit. Figure 10.2(a) shows the set of registers used for multiplying two 4-bit numbers. In general, for multiplying two n-bit numbers, the R registers are n bits long and the accumulator A is (n + 1) bits long; the concatenation of the registers (i.e., C.A.R) is used to store the product. 
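The shift-and-add loop just described can be sketched as follows; this is a minimal Python model of the accumulator/multiplier register pair (names and the 4-bit default width are illustrative, not from the text):

```python
def shift_add_multiply(multiplicand, multiplier, n=4):
    """Unsigned shift-and-add multiplication of two n-bit numbers.

    a models the accumulator (high end of the product, with room for
    the carry); r models the multiplier register, which gradually
    fills with the low-order product bits as the pair shifts right.
    """
    a = 0                      # accumulator starts at all 0s
    r = multiplier
    for _ in range(n):
        if r & 1:              # test the least significant multiplier bit
            a += multiplicand  # add to the most significant end
        # shift the (a, r) pair right by one bit
        r = (r >> 1) | ((a & 1) << (n - 1))
        a >>= 1
    return (a << n) | r        # the 2n-bit product C.A.R

# 13 x 11 = 143, well within the 2n = 8 product bits.
```

Note how the accumulator temporarily exceeds n bits after an add; that is the extra carry bit the text attributes to the A register.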
An extra bit is used for representing the sign of the product; as can be seen in the multiplication example shown in Figure 10.2(c), this extra bit is also needed to store the carry during the partial-product computation. The multiplication algorithm is shown in Figure 10.2(b). Various other multiplication algorithms perform the multiplication faster than this example; hardware multipliers are also available as off-the-shelf units, and one such multiplier is described in Section 10.5. The book by Hwang (1979), listed in the references at the end of this chapter, provides further details on multiplication algorithms and hardware", "url": "RV32ISPEC.pdf#segment368", "timestamp": "2023-10-17 20:16:14", "segment": "segment368", "image_urls": [], "Book": "computerorganization" }, { "section": "10.1.4 Division ", "content": "Division can be performed by repeated subtraction. We describe two division algorithms that utilize shift and add operations in this section. We assume that an n-bit integer X (the dividend) is divided by another n-bit integer Y (the divisor) to obtain an n-bit quotient Q and a remainder R. The first algorithm corresponds to the usual trial-and-error division procedure and is illustrated in Example 10.2. Note how the subtractions are performed in the example: the two numbers are first compared, and the smaller number is subtracted from the larger, with the result taking the sign of the larger number. In the first step of the example, 0001011 is smaller than 0011000; hence 0011000 - 0001011 = 0001101, and the result is negative. This is called the restoring division algorithm, since the dividend is restored to its previous value whenever the result of the subtraction at a step is negative. If the numbers are expressed in complement form, the subtraction can be replaced by addition. The algorithm for n-bit division can be generalized as follows: (1) assume that the initial value of the dividend is (n - 1) zeros concatenated with X; (2) perform the following for i = n - 1 through 0: subtract 2^i times the divisor from the dividend; if the result is negative, set qi = 0 and restore the dividend by adding back 2^i times the divisor; if the result is positive, set qi = 1; (3) collect the qi to form the quotient Q; the value remaining after the last step is the remainder. The second division algorithm is the nonrestoring division method. Example", "url": "RV32ISPEC.pdf#segment369", "timestamp": "2023-10-17 20:16:14", "segment": "segment369", "image_urls": [], "Book": "computerorganization" }, { "section": "10.3 ", "content": "10.3 illustrates this method. The method can be generalized into the following steps: (1) assume that the initial value of the dividend is (n - 1) 0s concatenated with X, and set i = n - 1; (2) subtract 2^i times the divisor; (3) if the result is negative, set qi = 0; if the result is positive, set qi = 1; decrement i by 1 and go to step 4; (4) if the previous result was negative, add 2^i times the divisor; otherwise, subtract 2^i times the divisor to form the new result; if i = 0, go to step 5; otherwise, go to step 3; (5) if the final result is negative, q0 = 0; otherwise, q0 = 1; if the final result is negative, it does not represent the correct remainder, and the correct remainder is obtained by adding the divisor back. The multiplication and division algorithms discussed above assume that the operands are positive; no provision is made for a sign bit, although the most significant bit of the result in the division algorithms is treated as a sign bit. Direct methods for the multiplication and division of numbers represented in 1s and 2s complement forms are available. The hardware implementation of multiply and divide is usually an optional feature on smaller machines. The multiply and divide algorithms can be implemented either in hardware, in firmware by developing a microprogram to implement the algorithms, or in a software routine as in Chapter 5. Figure", "url": "RV32ISPEC.pdf#segment370", "timestamp": "2023-10-17 20:16:14", "segment": "segment370", "image_urls": [], "Book": "computerorganization" }, { "section": "10.3 ", "content": "shows the register usage for the binary multiply and divide operations of the IBM 370", "url": "RV32ISPEC.pdf#segment371", "timestamp": "2023-10-17 20:16:14", "segment": "segment371", "image_urls": [], "Book": "computerorganization" }, { "section": "10.1 ", "content": "5 Stack-based ALU: As described in Chapter 8, a zero-address machine uses the top two levels of a stack as the operands for the majority of its operations. The two operands are discarded after the operation, and the result of the operation is pushed onto the stack. Stack-based ALU architectures offer the following advantages: (1) since the majority of operations are on the top two levels of the stack, faster execution times are possible, as address decoding and data fetch are not excessive; (2) the intermediate results of a computation are usually left on the stack, thereby reducing the memory accesses needed and increasing throughput; (3) program lengths are reduced, since zero-address instructions are shorter compared with one-, two-, and three-address instructions. Figure 10.4 shows a model of a stack. The data in the input register are pushed onto the stack when the PUSH control signal is activated, and the contents of the top level of the stack are always available at the top-level output. When POP is activated, the stack is popped, thereby moving the second level to the top level. An underflow 
error is generated when an attempt is made to pop an empty stack, and an overflow error condition occurs when an attempt is made to push data onto a full stack. Appendix B provides details of the two popular implementations of stacks in hardware, i.e., the shift-register-based and the RAM-based implementations. The shift-register-based implementation is used in ALU designs rather than the RAM-based implementation, since fast stack operations are desired. Some machines use a combination of shift-register- and RAM-based implementations for implementing stacks in the ALU: a set of ALU registers is organized as the top levels of the stack, and the subsequent levels are implemented in a RAM area; thus the levels that are needed most often can be accessed fast. Figure 10.5 shows the details of a stack-based ALU. The functional unit is a combinational circuit that performs operations (addition, subtraction, etc.) on the input operands X and Y to produce the result Z; the function of this module depends on the control signals from the control unit. In order to add two numbers on the top two levels of the stack, the following sequence of microoperations is needed. The control unit produces such microoperation sequences for each of the operations allowed by the ALU. Note that this sequence assumes that the data are on the top two levels of the stack as a result of previous ALU operations", "url": "RV32ISPEC.pdf#segment372", "timestamp": "2023-10-17 20:16:15", "segment": "segment372", "image_urls": [], "Book": "computerorganization" }, { "section": "10.2 DECIMAL ARITHMETIC ", "content": "Some ALUs allow a decimal arithmetic mode in which 4 bits are used to represent a decimal (BCD) digit, and the arithmetic is performed in one of two modes: bit serial or digit serial. Example 10.4 (Figure 10.6) shows the addition of two decimal digits. In case 1, the sum results in a valid digit. In cases 2 and 3, the sum exceeds the highest valid digit (9); hence 6 is added to the sum to bring it to the proper decimal value. Thus an add-6 correction is needed whenever the sum of the digits exceeds 9. Figure 10.7(a) shows a digit-serial, bit-parallel decimal adder; the bit-serial adder is shown in Figure 10.7(b). The bit-serial adder is similar to the serial adder discussed in Chapter 3, except for the add-6 correction circuit. The sum bits enter the B register through the left-hand input; after bit 4, the decimal correction circuit examines the sum bits s0, s1, s2, and s3 and returns the corrected sum to the B register, generating the appropriate carry for the next digit. Some processors use separate instructions for decimal arithmetic; others use special instructions to switch the ALU between decimal and binary arithmetic modes, and in the latter case the same set of instructions operates on both decimal and binary data. The 6502 uses a set decimal instruction to enter the decimal mode and a clear decimal instruction to return to binary mode; its arithmetic is performed in a digit-serial mode. The Hewlett-Packard 35 system uses serial arithmetic on 13-digit (52-bit) floating-point decimal numbers", "url": "RV32ISPEC.pdf#segment373", "timestamp": "2023-10-17 20:16:15", "segment": "segment373", "image_urls": [], "Book": "computerorganization" }, { "section": "10.3 PIPELINING ", "content": "The three steps involved in floating-point addition can be used to design the three-stage adder shown in Figure", "url": "RV32ISPEC.pdf#segment374", "timestamp": "2023-10-17 20:16:15", "segment": "segment374", "image_urls": [], "Book": "computerorganization" }, { "section": "10.8. ", "content": "The first stage performs the equalization of the exponents and passes the two modified operands to the second stage; the second stage performs the addition and passes the result to the third stage; and the third stage performs the normalization. Each stage receives its own control signals, and R1 and R2 are holding registers that hold the data between stages. This implementation is called a pipeline. Note that the three stages work independently on the data provided to them. Thus, as the operands move to the second stage from the first stage, a new set of operands can be inserted into the first stage; similarly, when the second stage passes its results to the third stage, it receives the next set of operands from the first stage. The operation continues in this manner and resembles a factory assembly line. Assume that each stage takes 1 time unit to complete its operation. The total time to compute one sum is then 3 units. However, due to the pipeline mode of operation, we see one result emerging from stage three every time unit after the first 3 time units (i.e., once the pipeline is full; at the beginning, the pipeline is empty). Thus, to add n sets of operands, the pipeline needs (3 + n - 1) time units, rather than the 3n time units needed in a nonpipelined implementation. The speedup offered by the pipeline is 3n/(3 + n - 1); for large values of n, the speedup tends to 3. In general, an m-stage pipeline offers a speedup approaching m. One requirement is that operands be available continuously to keep the pipeline busy; also, since the stages of the pipeline are required to operate independently, a more complex multistage 
control unit design is needed. We provide further details on pipelines in Chapter 11. Example 10.6: Figure 10.9 shows a pipeline scheme to add two 4-bit binary numbers. The carries propagate along the pipeline while the data bits to be added are input through the lateral inputs of the stages. Note that when the pipeline is full, each stage is performing an addition; at that time, four sets of operands are being processed, with a new set of operands entering the pipeline at each time step. The addition of each set of operands is complete in time 4d, where d is the processing time of a stage. It is necessary to store the sum bits produced during the four time slots to get the complete sum for a set of operands", "url": "RV32ISPEC.pdf#segment375", "timestamp": "2023-10-17 20:16:15", "segment": "segment375", "image_urls": [], "Book": "computerorganization" }, { "section": "10.4 ALU WITH MULTIPLE FUNCTIONAL UNITS ", "content": "An obvious method of increasing the throughput of an ALU is to employ multiple functional units. In such a design, each functional unit is either a general-purpose unit capable of performing all ALU operations or a dedicated unit that can perform only certain operations. As many functional units as needed can be activated by the control unit simultaneously, thereby achieving a high throughput. The control unit design becomes complex, since it is required to maintain the status of all the functional units and schedule the processing functions in an optimum manner to take advantage of all the possible parallelism. Consider the computation Z = A*B + C*D + E/F and a program to perform it on a two-address machine. With a machine that has one functional unit for each of the operations ADD, SUB, MPY, DIV, and MOVE, we can perform the two MPYs and the DIV simultaneously. The first ADD operation can be performed once the two MPYs are done; the second ADD must wait until the first ADD and the DIV are completed; and the MOVE is performed after the second ADD. Thus it is possible to invoke more than one operation simultaneously, as long as we obey the precedence constraints imposed by the computation. If desired, the machine could have two MPY units so that both MPYs are done simultaneously. To make such operation efficient, we need a complex instruction processing mechanism that retrieves instructions from the program and issues them to the appropriate functional units, based on the availability of the functional units. Processors with such mechanisms that invoke more than one instruction simultaneously are known as superscalar architectures. This description has assumed that each functional unit is capable of performing only one function; that need not be the case in practice. The units could be full-fledged processors that handle both integer and floating-point operations. There may 
also be multiples of each type of unit, and the units are often pipelined to offer better performance. Such architectures also use sophisticated compilers to schedule the operations appropriately and make the best use of the computing resources. The compilers generate approximate schedules of instructions, and the instruction processing hardware resolves the dependencies among the instructions. The Intel Pentium, described in Chapter 8, is a superscalar architecture with multiple integer, floating-point, and MMX units (see Figures 8.22 and 8.29). It has three instruction processing stages: fetch/decode, dispatch/execute, and retire. Retrieving instructions from an instruction pool, reordering them, and reserving resources allow it to run as many functional units as possible simultaneously. In very long instruction word (VLIW) architectures, the instruction word is capable of holding multiple instructions. As the instruction word is fetched, its component instructions are invoked, assuming that functional units are available to handle them. The compilers and assemblers pack multiple instructions into each instruction word. Assume a VLIW with five slots and one of each type of functional unit to perform the operations ADD, SUB, MPY, DIV, and MOVE, and rewrite the above program for this machine. If the machine could have two MPY units, the two MPYs could be packed into the first instruction word. In practice, VLIW architectures pack 4 to 8 instructions per instruction word and use sophisticated compilers to handle the dependencies among the instructions and package them appropriately. Since the instructions are scheduled at compile time, any change in the hardware environment at run time makes it impractical to obey that schedule, and recompilation of the code would be necessary. The Intel Itanium (IA-64), described in Chapter 11, is an example of a VLIW architecture", "url": "RV32ISPEC.pdf#segment376", "timestamp": "2023-10-17 20:16:15", "segment": "segment376", "image_urls": [], "Book": "computerorganization" }, { "section": "10.5 EXAMPLE SYSTEMS ", "content": "In this section we provide brief descriptions of the following commercially available systems, ranging in complexity from an ALU IC to a supercomputer system: (1) the SN74181, an ALU IC capable of performing 16 common arithmetic/logic functions; (2) the Texas Instruments MSP 430 hardware multiplier; (3) the Motorola 68881 floating-point coprocessor, which supports an integer processor; (4) the Control Data Corporation 
(CDC) 6600, an early system with multiple functional units, included here for its historical interest, since this system formed the basis for the architecture of the Cray series of supercomputers; and (5) the Cray X-MP, an early supercomputer system with multiple pipelined functional units", "url": "RV32ISPEC.pdf#segment377", "timestamp": "2023-10-17 20:16:15", "segment": "segment377", "image_urls": [], "Book": "computerorganization" }, { "section": "10.5.1 Multifunction ALU IC (74181) ", "content": "An ALU can be built using off-the-shelf ICs. An example of such an IC is the SN74181, shown in Figure 10.10. This multifunction ALU can perform 16 binary arithmetic operations on two 4-bit operands A and B. The function performed is selected by pins S0, S1, S2, and S3 and includes addition, subtraction, decrement, and straight transfer, as shown in Figure 10.10(b). Cn is the carry input and Cn+4 is the carry output, so the IC can be used as a 4-bit ripple-carry adder. The IC can also be used with the lookahead carry generator (74182) to build high-speed adders of multiple stages, forming an 8-, 12-, or 16-bit adder", "url": "RV32ISPEC.pdf#segment378", "timestamp": "2023-10-17 20:16:15", "segment": "segment378", "image_urls": [], "Book": "computerorganization" }, { "section": "10.5.2 ", "content": "Texas Instruments MSP 430 Hardware Multiplier. This section is extracted from the MSP430 hardware multiplier functions and applications note (SLAA042, April 1999), Texas Instruments Inc. The MSP430 hardware multiplier allows three different multiply operations (modes): (1) multiplication of unsigned 8- and 16-bit operands (MPY); (2) multiplication of signed 8- and 16-bit operands (MPYS); and (3) the multiply-and-accumulate function (MAC), using unsigned 8- and 16-bit operands. Any mixture of operand lengths (8 and 16 bits) is possible. Additional operations are possible with supplemental software, such as a signed multiply-and-accumulate. Figure", "url": "RV32ISPEC.pdf#segment379", "timestamp": "2023-10-17 20:16:15", "segment": "segment379", "image_urls": [], "Book": "computerorganization" }, { "section": "10.11 ", "content": "shows the hardware modules comprising the MSP430 multiplier; the accessible registers are explained in the following paragraphs. Figure", "url": "RV32ISPEC.pdf#segment380", "timestamp": "2023-10-17 20:16:15", "segment": "segment380", 
"image_urls": [], "Book": "computerorganization" }, { "section": "10.11 ", "content": "is not intended to depict physical reality; it illustrates the hardware multiplier from the programmer's point of view. Figure", "url": "RV32ISPEC.pdf#segment381", "timestamp": "2023-10-17 20:16:15", "segment": "segment381", "image_urls": [], "Book": "computerorganization" }, { "section": "10.12 ", "content": "shows the system structure when the MSP430 peripheral is the hardware multiplier. As part of the MSP430 CPU's peripherals, it does not interfere with CPU activities; the multiplier uses normal peripheral registers that are loaded and read using CPU instructions. The programmer-accessible registers are explained next. Operand 1 registers: The operational mode of the MSP430 hardware multiplier is determined by the address to which operand 1 is written: (1) address 130h executes unsigned multiplication (MPY); (2) address 132h executes signed multiplication (MPYS); (3) address 134h executes unsigned multiply-and-accumulate (MAC). The address of operand 1 alone determines the operation to be performed by the multiplier; no modification of operand 2 is necessary before an operation is started, only modification of operand 1. Example 10.7: a multiply unsigned (MPY) operation is defined and started while the two operands reside in R14 and R15. Operand 2 register: The operand 2 register (address 138h) is common to all three multiplier modes. Modification of this register (normally with a MOV instruction) starts the selected multiplication of the two operands contained in the operand 1 and operand 2 registers. The result is written immediately into the three hardware registers SUMEXT, SUMHI, and SUMLO and can be accessed with the next instruction, unless indirect addressing modes are used for the source addressing. SUMLO register: This 16-bit register contains the lower 16 bits of the calculated product or sum. All instructions may be used to access or modify the SUMLO register; the high byte can be accessed with byte instructions. SUMHI register: The contents of this 16-bit register depend on the previously executed operation, as follows: (1) MPY: the most significant word of the calculated product; (2) MPYS: the most significant word, including the sign, of the calculated product (2s complement notation is used for the product); (3) MAC: the most significant word of the calculated sum. All instructions may be used with the SUMHI register; the high byte can be accessed using byte instructions. SUMEXT register: The sum extension 
register SUMEXT allows calculations with results exceeding the 32-bit range. This read-only register holds the most significant part of the result (bits 32 and higher). The content of SUMEXT is different for each multiplication mode: (1) MPY: SUMEXT always contains zero; no carry is possible, because the largest possible result is 0FFFFh x 0FFFFh = 0FFFE0001h. (2) MPYS: SUMEXT contains the extended sign of the 32-bit result (bit 31). If the result of the multiplication is negative (MSB = 1), SUMEXT contains 0FFFFh; if the result is positive (MSB = 0), SUMEXT contains 0000h. (3) MAC: SUMEXT contains the carry of the accumulate operation; SUMEXT contains 0001h if a carry occurred during the accumulation of the new product to the old one, and zero otherwise. The SUMEXT register simplifies multiple-word operations: straightforward additions can be performed without conditional jumps, saving time and ROM space. Example 10.8: the new product of an MPYS operation (operands in R14 and R15) is added to a signed 64-bit result located in RAM at words RESULT through RESULT+6. Hardware multiplier rules: The hardware multiplier is essentially a word module. The hardware registers can be addressed in word mode or in byte mode, but byte mode can address only the lower bytes; the upper byte cannot be addressed. The operand registers of the hardware multiplier (addresses 0130h, 0132h, 0134h, and 0138h) behave like the CPU working registers R0-R15 when modified in byte mode: the upper byte is cleared in this case. This allows any combination of 8- and 16-bit multiplications. Multiplication modes: Three different multiplication modes are available, as explained in the following sections. Unsigned multiply: The two operands written to the operand registers 1 and 2 are treated as unsigned numbers, with 00000h the smallest number and 0FFFFh the largest number. The maximum possible result is obtained with the input operands 0FFFFh and 0FFFFh: 0FFFFh x 0FFFFh = 0FFFE0001h. No carry is possible, and the SUMEXT register always contains zero. Table 10.1 gives the products for some selected operands. Signed multiply: The two operands written to the 
operand registers 1 and 2 are treated as signed 2s complement numbers, with 08000h being the most negative number (-32768) and 07FFFh the most positive number (+32767). The SUMEXT register contains the extended sign of the calculated result: SUMEXT = 00000h if the result is positive, and SUMEXT = 0FFFFh if the result is negative. Table 10.2 gives the signed-multiply products for some selected operands. Multiply-and-accumulate (MAC): The two operands written to the operand registers 1 and 2 are treated as unsigned numbers (0h to 0FFFFh). The maximum possible result is obtained with the input operands 0FFFFh and 0FFFFh: 0FFFFh x 0FFFFh = 0FFFE0001h. The result is added to the previous content of the two sum registers (SUMLO and SUMHI). If a carry occurs during this operation, the SUMEXT register contains 1 and is cleared otherwise: SUMEXT = 00000h if no carry occurred during the accumulation, and SUMEXT = 00001h if a carry occurred. The results in Table 10.3 assume that SUMHI and SUMLO contain the accumulated content C0000000h before the execution of each example (see Table 10.1 for the results of an unsigned multiplication without accumulation). Multiplication word lengths: The MSP430 hardware multiplier allows all possible combinations of 8- and 16-bit operands. Notice that the input registers operand 1 and operand 2 behave like the CPU registers: the high register byte is cleared if the register is modified by a byte instruction. This simplifies the use of 8-bit operands. The following examples show 8-bit operand use with the three modes of the hardware multiplier. To use an 8-bit operand in R5 for an unsigned multiply: MOV.B R5,&MPY (the high byte is cleared). To use an 8-bit operand for a signed multiply: MOV.B R5,&MPYS (the high byte is cleared), then SXT &MPYS to extend the sign into the high byte. To use an 8-bit operand for a multiply-and-accumulate: MOV.B R5,&MAC (the high byte is cleared). Operand 2 is loaded in a similar fashion. This allows the four possible combinations of input operands: 16 x 16, 8 x 16, 16 x 8, and 8 x 8. Speed comparison with software multiplication: Table 10.4 shows the speed increase for the four different 16 x 16-bit multiplication modes. The software loop cycles include the subroutine call (CALL #MULxx), the multiplication subroutine, and the RET instruction; CPU registers are used for the multiplication (see the metering application report for details of the four multiplication subroutines). The cycles given for the hardware multiplier include loading the multiplier operands (operand 1 and operand 2) from CPU registers and, in the case of the signed MAC operation, the accumulation of the 48-bit result into three CPU registers. Refer to the application report cited above for details of programming the MSP430 and application details", "url": "RV32ISPEC.pdf#segment382", "timestamp": "2023-10-17 20:16:16", "segment": "segment382", "image_urls": [], "Book": "computerorganization" }, { "section": "10.5.3 Motorola 68881 Coprocessor 
", "content": "The MC68881 floating-point coprocessor provides IEEE Standard 754 floating-point operation capability as a logical extension of the integer data processing capabilities of the MC68000 series of microprocessors (MPUs). The major features of the MC68881 are: (1) eight general-purpose floating-point data registers; (2) a 67-bit arithmetic unit; (3) a 67-bit barrel shifter; (4) 46 instructions; (5) full conformance with the ANSI/IEEE 754-1985 standard; (6) support for functions additional to the IEEE standard; (7) seven data formats; (8) 22 constants available in on-chip ROM (pi, e, etc.); (9) virtual memory machine operations; (10) efficient mechanisms for exception processing; and (11) a variable-size data bus for compatibility. The coprocessor interface mechanism attempts to provide the programmer with an execution model based on sequential instruction execution in the MPU and the coprocessor: floating-point instructions are executed concurrently with MPU integer instructions, and the coprocessor and MPU communicate via standard bus protocols. Figure 10.13 shows the MC68881 simplified block diagram. The MC68881 is logically divided into two parts: (1) the bus interface unit (BIU) and (2) the arithmetic processing unit (APU). The task of the BIU is to communicate with the MPU; it monitors the state of the APU even though the two operate independently. The BIU contains the coprocessor interface registers, the communication status flags, and the timing control logic. The task of the APU is to execute the command words and operands sent to it by the BIU, and it must report its internal status to the BIU. The eight floating-point data registers and the control, status, and instruction address registers are located in the APU; the high-speed arithmetic unit and barrel shifter are also located in the APU. Figure 10.14 presents the programming model of the coprocessor. The MC68881 floating-point coprocessor supports seven data formats: (1) byte integer, (2) word integer, (3) long word integer, (4) single-precision real, (5) double-precision real, (6) extended-precision real, and (7) packed decimal string real. The integer data formats consist of the standard 2s complement formats, and the single-precision and double-precision floating-point formats are implemented as specified in the IEEE standard. Mixed-mode arithmetic is accomplished by converting integers to extended-precision floating-point numbers before they are used in a computation. The MC68881 floating-point 
coprocessor provides six classes of instructions: (1) moves between the coprocessor and memory, (2) move multiple registers, (3) monadic operations, (4) dyadic operations, (5) branch, set, or trap conditionally, and (6) miscellaneous. These classes provide 46 instructions, which include 35 arithmetic operations", "url": "RV32ISPEC.pdf#segment383", "timestamp": "2023-10-17 20:16:16", "segment": "segment383", "image_urls": [], "Book": "computerorganization" }, { "section": "10.5.4 Control Data 6600 ", "content": "The CDC 6600, first introduced in 1964, was designed for two types of use: large-scale scientific processing and time-sharing of smaller problems. To accommodate large-scale scientific processing, a high-speed floating-point CPU employing multiple functional units was used. Figure 10.15 shows the structure of the system. Peripheral activity is separated from CPU activity by using 12 I/O channels controlled by 10 peripheral processors, letting the CPU concentrate on its operation. As seen in Figure 10.15, the CPU comprises 10 functional units: 1 add, 1 double-precision add, 2 multiply, 1 divide, 2 increment, 1 shift, 1 Boolean, and 1 branch unit. The CPU obtains its programs and data from the central memory and can be interrupted by a peripheral processor. The 10 functional units of the central processor operate in parallel; each takes one or two 60-bit operands and produces a 60-bit result. The operands and results are provided in the operating registers shown in Figure 10.16. A functional unit is activated by the control unit as soon as its operands are available in the operating registers. Since the functional units work concurrently, a number of arithmetic operations can be performed in parallel. For example, the computation Z = A*B + C*D, where Z, A, B, C, and D are memory operands, progresses as follows. First, the contents of A, B, C, and D are loaded into a set of CPU registers, say R1, R2, R3, and R4, respectively. Then R1 and R2 are assigned as inputs to a multiply unit, and another register, R5, is allocated to receive the output; simultaneously, R3 and R4 are assigned as inputs to another multiply unit, with R6 as its output. R5 and R6 are assigned as inputs to the add unit, with R7 as its output. As soon as the multiply units provide their results in R5 and R6, they are added by the add unit, and the result in R7 is stored into Z. There is a queue associated with each functional unit. The CPU simply loads the queues with operand and result register information; when a functional unit is free, it retrieves the information from its queue and operates on it, 
thus providing a high degree of parallelism. There are 24 operating registers: eight 18-bit index registers, eight 18-bit address registers, and eight 60-bit floating-point registers. Figure 10.16 shows the data and address paths. Instructions are either 15 or 30 bits long, and an instruction stack capable of holding 32 instructions enhances instruction execution speed. The control unit maintains a scoreboard, which is a running file of the status of all registers and functional units. As new instructions are fetched, resources are allocated for their execution by referring to the scoreboard; instructions are queued for later processing when resources cannot be allocated. The central memory is organized into 32 banks of 4K words, with consecutive addresses calling different banks. Five memory trunks are provided between the memory and the five floating-point registers; an instruction calling for an address register implicitly initiates a memory reference on its trunk. Overlapped memory access and arithmetic operation are thus possible, giving concurrent operation of the functional units and high transfer rates between the registers and the memory. This separation of peripheral activity from processing activity makes the CDC 6600 a fast machine. It should be noted that the Cray supercomputer family, designed by Seymour Cray, the designer of the CDC 6600 series, retains the basic architectural features of the CDC 6600", "url": "RV32ISPEC.pdf#segment384", "timestamp": "2023-10-17 20:16:16", "segment": "segment384", "image_urls": [], "Book": "computerorganization" }, { "section": "10.5.5 Architecture of the Cray Series ", "content": "The Cray-1, a second-generation vector processor from Cray Research Incorporated, was described as the most powerful computer of the late 1970s. Benchmark studies showed that it was capable of sustaining computational rates of 138 MFLOPS over long periods of time and attaining speeds of up to 250 MFLOPS in short bursts, performance about 5 times that of the CDC 7600 and 15 times that of the IBM System/370 Model 168. The Cray-1 was thus uniquely suited to the solution of the computationally intensive problems encountered in fields such as weather forecasting, aircraft design, nuclear research, geophysical research, and seismic analysis. Figure 10.17 shows the structure of the Cray X-MP, the successor of the Cray-1; the four-processor system (X-MP/4) is shown. The Cray 
X-MP consists of four sections: multiple Cray-1-like CPUs, the memory system, the I/O system, and the processor interconnection. The following paragraphs provide a brief description of each section. Memory: The memory system is built of several sections, each divided into banks. Addressing is interleaved across the sections and, within the sections, across the banks. The total memory capacity is 16 megawords of 64 bits per word; associated with each memory word is an 8-bit field dedicated to single-error correction/double-error detection (SECDED). The memory system offers a bandwidth of 25-100 gigabits per second. It is multiported, with each CPU connected to four ports: two for reading, one for writing, and one for independent I/O. Accessing a port ties it up for one clock cycle, and a bank access takes four clock cycles. Memory contention can occur in several ways. A bank conflict occurs when a bank is accessed while it is still processing a previous access; a simultaneous conflict occurs when a bank is referenced simultaneously on independent lines from different CPUs; and a line conflict occurs when two data paths make memory requests to the same memory section during the same clock cycle. Memory conflict resolution may require that wait states be inserted. Since memory conflict resolution occurs element by element for vector references, it is possible for the arithmetic pipelines being fed by vector references to experience gaps of clock cycles in their input, producing a degradation of the arithmetic performance attained by the pipelined functional units. Memory performance is typically degraded 3 to 7 percent on average due to memory contention, and in particularly bad cases by 20 to 33 percent. The secondary memory, known as the solid-state device (SSD), is used as an exceptionally fast-access disk device. It is built of MOS random-access memory ICs, and the access time of the SSD is about 40 microseconds, compared with the 16-ms access time of the fastest disk drive from Cray Research Inc. The SSD is used for the storage of large-scale scientific programs that would otherwise exceed main memory capacity, and to reduce bottlenecks in I/O-bound applications. The central memory is connected to the SSD through either one or two 1000-megabyte-per-second channels; the I/O subsystem is directly connected to the SSD, thereby allowing the prefetching of large data sets from the disk system to the faster SSD. Processor interconnection: The interconnection of the CPUs assumes a coarse-grained multiprocessing environment; each processor ideally executes its task 
almost independently, requiring communication with the other processors only once every million or billion instructions. The processor interconnection comprises clusters of shared registers. Each processor may access the cluster allocated to it in either user or system (monitor) mode; a processor in monitor mode has the ability to interrupt any other processor and cause it to go into monitor mode. Central processor: Each CPU is composed of low-density ECL logic with 16 gates per chip; wire lengths are minimized to cut the propagation delay of signals to 650 ps. The CPU is a register-oriented vector processor (Figure 10.18): various sets of registers supply arguments to and receive results from several pipelined, independent functional units. There are eight 24-bit address registers (A0-A7), eight 64-bit scalar registers (S0-S7), and eight vector registers (V0-V7); each vector register can hold sixty-four 64-bit elements. The number of elements present in a vector register for a given operation is contained in the 7-bit vector length register (VL), and the 64-bit vector mask register (VM) allows the masking of elements of a vector register prior to an operation. Sixty-four 24-bit address-save registers (B0-B63) and sixty-four scalar-save registers (T0-T63) are used as buffer storage areas for the address and scalar registers, respectively. The address registers support an integer add and an integer multiply functional unit; the scalar and vector registers each support integer add, shift, logical, and population count functional units. Three floating-point arithmetic functional units (add, multiply, and reciprocal approximation) take their arguments from either the vector or the scalar registers. The result of a vector operation is either returned to another vector register or may replace one of the operands to the same functional unit (i.e., written back), provided there is no recursion. An 8-bit status register contains flags such as the processor number, program status, cluster number, and interrupt and error detection enables; this register is accessed through a register exchange. The exchange address register is an 8-bit register maintained by the operating system and is used to point to the current position of the exchange routine in memory. In addition, a program clock is used for accurately measuring the duration of intervals. As mentioned earlier, each CPU is provided with four ports to memory, with one port reserved for the input/output subsystem (IOS) and the other three, labeled A, B, and C, supporting data paths to the registers. 
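The payoff from these pipelined (segmented) functional units can be captured in a rough timing model; this is a sketch under the assumption that a unit has a fixed start-up latency of `startup` clock periods and then delivers one result per clock period (the names are mine, not from the text):

```python
def vector_op_time(n, startup):
    """Clock periods for an n-element vector operation on a pipelined
    functional unit: the first result appears after `startup` periods,
    then one result emerges per clock period."""
    return startup + (n - 1)

def scalar_op_time(n, startup):
    """Unpipelined (scalar-mode) timing: every element pays the full
    latency of the unit."""
    return n * startup
```

With the effective 8-clock-period start-up of the floating-point add (6 for the addition plus one each for vector-register read and write) and n = 64, this model gives 71 versus 512 clock periods, the figures used in the worked example.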
All of these data paths can be active simultaneously, as long as there are no memory-access conflicts. Data transfer between the scalar or address registers and memory occurs directly (i.e., individual elements are referenced in these registers); alternatively, block transfers can occur between the buffer registers and memory. Transfers between the scalar and address registers and their corresponding buffer registers are done directly, while transfers between memory and the buffer registers are done as block transfers. Block-transfer instructions are available for loading and storing the buffer registers: block loads of the B registers use port A and those of the T registers use port B, while block stores from the B and T registers to memory use port C. Loads and stores directly between the address or scalar registers and memory also use port C, at a maximum data rate of 1 word every 2 clock cycles. Transfers between the B or T registers and the address or scalar registers occur at a rate of 1 word per clock cycle, as do block transfers between memory and the B or T registers. Using the three separate memory ports, data can thus be moved between common memory and the registers at a combined rate of 3 words per clock cycle (1 word per port).

All functional units are fully segmented (i.e., pipelined): a new set of arguments may enter a unit every clock period (8.5 ns). Segmentation is performed by holding the operands entering the unit, and the partial results at the end of every segment, with a flag allowing them to proceed; the number of segments in a unit determines its startup time. Table 10.5 shows the functional-unit characteristics.

Example 10.9. Consider the vector addition C(I) = A(I) + B(I), I = 1, ..., N, and assume N ≤ 64. A and B are loaded into two vector registers, and the result vector is stored into another vector register. The unit time for a floating-point addition is six clock periods; including one clock period for transferring data from the vector registers into the add unit and one clock period to store each result into the result vector register, it would take 64 × 8 = 512 clock periods to execute the addition in scalar mode. When the vector operation is performed in pipeline mode, as shown in Figure 10.19, the first element of the result is stored into the vector register after 8 clock periods, and one further result arrives every clock period thereafter; the total execution time is therefore 8 + 63 = 71 clock periods. For N < 64, the execution time is reduced correspondingly. For N > 64, the computation is performed on units of 64 elements at a time; for example, if N = 300, the computation is performed on 4 sets of 64 elements each, followed by a final set of the remaining 44 elements. The vector length register contains the number of elements (n) to be operated upon in each computation. If the length of a vector exceeds the length of the vector registers of the machine, a program of the following form is used to execute the vector operation: for an arbitrary value of N, the first (N mod 64) elements are operated upon, followed by the remaining sets of 64 elements each. In practice, the vector length is often not known at compile time; the compiler generates similar code so that the vector operation is always performed on a set whose length is less than or equal to the known vector-register length. This is known as strip mining. The strip-mining overhead must also be included in the startup time of computations in the pipeline.

For multiple functional units to be active simultaneously, intermediate results must be stored in CPU registers. Properly programmed, the Cray architecture can arrange the CPU registers such that the results of one functional unit become the input of another, independent functional unit. Thus, in addition to the pipelining within functional units, it is possible to pipeline arithmetic operations between functional units; this is called chaining. Chaining, the overlapped and concurrent operation of the vector functional units, is an important characteristic of the architecture and produces a vast speedup of execution times. Example 10.10 shows a loop in which overlapping occurs, and Example 10.11 illustrates chaining.

Example 10.10. Consider the loop
for J = 1, 64
  C(J) = A(J) + B(J)
  E(J) = D(J) * F(J)
endfor
The addition and the multiplication are done in parallel, since the functional units are totally independent.

Example 10.11. Consider the loop
for J = 1, 64
  C(J) = A(J) + B(J)
  E(J) = C(J) * D(J)
endfor
Here the output of the add functional unit is an input operand of the multiply functional unit. With chaining, we need not wait until the entire array C is computed before beginning the multiplication: as soon as C(1) is computed, it can be used by the multiply functional unit, concurrently with the computation of C(2). The two functional units form the stages of a pipeline, as shown in Figure 10.20. Assuming the operands are in the vector registers, this computation done without vectorization (i.e., without pipelining and chaining) requires 64 × 8 = 512 clock periods for the add plus 64 × 9 = 576 for the multiply, or 1088 clock periods in all. When vectorization (pipelining and chaining) is employed, the chain start time of 17 clock periods is followed by the remaining 63 computations, for a total of 17 + 63 = 80 clock periods. Note that the effect of chaining is to increase the depth of the pipeline and hence the startup overhead.

So far, the operands were assumed to be already in the vector registers, so nothing needed to be loaded from memory and the first result did not need to be stored. The two load paths and the path that stores data to memory can themselves be treated as functional units for the purposes of chaining.
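The strip-mining scheme just described can be sketched as a short model (illustrative Python, not Cray code; the helper name and the mod-first strip order are assumptions consistent with the text):

```python
# Illustrative strip-mining model: a loop over n elements runs in strips of
# at most MVL = 64 elements, the vector-register length; the first strip
# takes the n mod 64 leftover elements, the remaining strips are full.
MVL = 64

def strip_mine_add(a, b):
    """c[i] = a[i] + b[i], computed strip by strip as a vectorizing
    compiler would schedule it for 64-element vector registers."""
    n = len(a)
    c = [0] * n
    start = 0
    vl = n % MVL or min(n, MVL)   # odd-sized first strip, then full strips
    while start < n:
        for i in range(start, start + vl):   # one vector instruction's work
            c[i] = a[i] + b[i]
        start += vl
        vl = MVL
    return c

# n = 300 runs as one 44-element strip followed by four 64-element strips.
c = strip_mine_add(list(range(300)), [1] * 300)
```

Each pass of the `while` loop corresponds to one vector instruction issued with the VL register set to the current strip length; only the strip lengths change, never the loop body.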
The startup time of a vector-load instruction is 17 cycles; thereafter, 1 value per cycle may be fetched, and an operation using that data may access 1 value per cycle beginning at cycle 18. Figure 10.21 shows an example: port A is used to read V0 while port B reads V1 in parallel. As soon as the first operands are in the vector registers, the floating-point add may begin processing, and as soon as the first result is placed into V2, port C may be used to store the results back to memory. A functional unit may appear only once in a chain, but two fetches and one store are possible within the same chain. The Cray systems supply only one functional unit of each type, so if the demands of a program require two floating-point adds, they must be executed sequentially. With two ports for fetching vectors and one port for storing them, the user may view the system as having two load functional units and one store functional unit. The Cray X-MP thus improves on the Cray-1, which has only one memory functional unit: a memory operand can serve as the input of one arithmetic functional unit in a chain, but an operand cannot, however, feed both inputs of a functional unit requiring two operands. A vector register is tied to its functional unit during a vector instruction: when a vector instruction is issued, the functional unit and the registers used by the instruction are reserved for the duration of the vector instruction.

Cray coined the term chime (chained vector time) to describe the timing of vector operations. A chime is not a specific amount of time but rather a timing concept, representing the number of clock periods required to complete one vector operation; it equates to the length of a vector register plus a few clock periods for chaining. In the Cray systems, a chime is thus equal to 64 clock periods plus a few more. The chime is a measure that allows the user to estimate the speedup available from the pipelining, chaining, and overlapping of instructions. The number of chimes needed to complete a sequence of vector instructions depends on several factors: since there are, in effect, three memory functional units, two fetches and one store may appear in one chime; a functional unit may be used only once within a chime; an operand may appear as the input of only one functional unit in a chime; and a store operation may chain onto a previous operation.

Example 10.12. Figure 10.22 shows the number of chimes required to perform a simple 64-element vector loop (for I = 1, 64 ... endfor) on the Cray X-MP, Cray-1, and Cray-2 systems (Levesque and Williamson, 1989). The Cray X-MP requires two chimes, the Cray-1 requires four, and the Cray-2, which does not allow chaining, requires six chimes to execute this code. The clock cycles of the Cray-1, Cray X-MP, and Cray-2 are 12.5, 8.5, and 4.1 ns, respectively. With a chime taken as 64 clock cycles, the time for each machine to complete the code is: Cray-1, 4 chimes × 64 × 12.5 ns = 3200 ns; Cray X-MP, 2 chimes × 64 × 8.5 ns = 1088 ns; Cray-2, 6 chimes × 64 × 4.1 ns ≈ 1574 ns. Thus, on instruction sequences where the Cray X-MP gets help from chaining, it can actually outperform the Cray-2 with its faster clock, since the Cray-2 does not allow such overlapping; for large array dimensions, the actual gain of the Cray X-MP may be even larger.

In vector operations, as many as 64 target addresses can be generated by 1 instruction. If a cache were used as an intermediate memory, the overhead of searching it for 64 addresses would be prohibitive; the use of individually addressed registers eliminates this overhead. The disadvantage, compared with using a cache, is that the programmer or compiler must generate the references to the individual registers, which adds complexity to the code and to compiler development.

Instruction fetch. The control registers, part of the special-purpose register set, are used to control the flow of instructions. Four instruction buffers, each containing 32 words (128 parcels of 16 bits each), are used to hold instructions fetched from memory. Each instruction buffer has an instruction buffer address register (IBAR) that serves to indicate which instructions are currently in the buffer: the IBAR holds the high-order 17 bits of the addresses of the words in the buffer, and an instruction buffer is always loaded on a 32-word boundary. The P register holds the 24-bit address of the next parcel to be executed. The current instruction parcel (CIP) register contains the instruction waiting to issue, and the next instruction parcel (NIP) register contains the parcel that will issue after the CIP; a last instruction parcel (LIP) register is also provided, to supply the second parcel of a 32-bit instruction without using an extra clock period. When the P register contains the address of the next instruction to be decoded, the buffers are checked to see whether the instruction is located in one of them. If the address is found, the instruction sequence continues; if it is not found, the instruction must be fetched from memory after the parcels in the CIP and NIP have issued. The least recently filled buffer is selected to be overwritten. The current instruction is among the first eight words read, and the rest of the buffer is then filled in a circular fashion; it takes three clock pulses to complete the filling of a buffer. A branch to an out-of-buffer address causes a delay in processing of 16 clock pulses. Beyond the per-CPU buffers, the processors of a system share one real-time clock, and the shared registers of the processor interconnection are organized as clusters, each consisting of 48 registers.
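The chime arithmetic of Example 10.12 can be verified directly from the chime counts and clock periods quoted above; this is a minimal sketch, with function and table names invented for illustration:

```python
# Chime counts and clock periods quoted in Example 10.12.
CLOCK_NS = {"Cray-1": 12.5, "Cray X-MP": 8.5, "Cray-2": 4.1}
CHIMES = {"Cray-1": 4, "Cray X-MP": 2, "Cray-2": 6}

def loop_time_ns(machine, clocks_per_chime=64):
    """Time for the 64-element loop: chimes x 64 clocks x clock period."""
    return CHIMES[machine] * clocks_per_chime * CLOCK_NS[machine]

for m in CLOCK_NS:
    print(m, loop_time_ns(m))
```

The computation confirms the ordering in the text: the X-MP's two chimes at 8.5 ns (1088 ns) beat the Cray-2's six chimes even at a 4.1 ns clock (about 1574 ns), because chaining trims chimes faster than the clock shrinks.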
Each cluster contains thirty-two 1-bit semaphore (synchronization) registers, eight 64-bit ST (shared-T) registers, and eight 24-bit SB (shared-B) registers. A system with two processors contains three clusters, while a four-processor system contains five clusters.

I/O system. Input and output in the X-MP are handled by the I/O subsystem (IOS), which is made up of 2-4 interconnected I/O processors. The IOS receives data over four 100-Mbyte-per-second channels connected directly to main memory; four 6-Mbyte-per-second channels are also provided to furnish control with the CPUs. Each IOS processor has its own local memory, and all of them share a common buffer. The IOS supports a variety of front-end processors and peripherals; disk drives find their support in the IOS rather than in a CPU. There are two types of I/O control registers: the current-address registers point to the current word being transferred over a channel, and the channel-limit registers contain the address of the last word to be transferred.

Later systems in the series. Cray continued the enhancement of the X-MP architecture with the Y-MP (Y-prime) series. The Cray Y-MP, introduced in 1988, extends the 24-bit addressing scheme of the X-MP to 32 bits, thus allowing an address space of 32 million 64-bit words; it runs on a 6-ns clock and uses eight processors, thus doubling the processing speed. A more recent system, the Cray XT4, utilizes 548 to 30,508 processing elements, each built with a 2.6-GHz AMD dual-core processor; its system memory ranges from 4.3 to 239 TB, and the system offers 5.6 to 318 teraflops of peak performance", "url": "RV32ISPEC.pdf#segment385", "timestamp": "2023-10-17 20:16:17", "segment": "segment385", "image_urls": [], "Book": "computerorganization" }, { "section": "10.6 SUMMARY ", "content": "There is a spectrum of enhancements possible for ALUs, spanning the range from employing faster hardware algorithms to the use of multiple functional units, as described in this chapter. Several algorithms have been proposed over the years to increase the speed of ALU functions; representative algorithms were described in this chapter, and the reader is referred to the books listed in the references section for further details. Advances in hardware technology have resulted in several fast, off-the-shelf ALU building blocks, and examples of such ICs were given, along with examples of pipelined ALU architectures. The trend in ALU design is to employ multiple processing units and to activate them in parallel; high speeds are achieved through the superscalar and VLIW architectures described in this chapter.", "url": "RV32ISPEC.pdf#segment386", "timestamp": "2023-10-17 20:16:18", "segment": "segment386", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 11 ", "content": "Control Unit Enhancement. Two popular implementations of the control unit, the HCU and the MCU, were described in Chapter 6, and subsequent chapters provided details of architectural features found in modern-day machines. The inclusion of each new feature in an architecture requires a corresponding enhancement of the control unit. The following parameters are usually considered in the design of a control unit. Speed: the control unit should generate control signals fast enough to utilize the processor bus structure efficiently and to minimize instruction and program execution times. Cost and complexity: the control unit is the most complex subsystem of a processor; its complexity should be reduced as much as possible, to make maintenance easier and to keep the cost low. In general, random-logic implementations of the control unit (i.e., HCUs) tend to be the most complex, while ROM-based designs (i.e., MCUs) tend to be the least complex. Flexibility: HCUs are inflexible in terms of adding new architectural features to the processor, since such additions require a redesign of the hardware; MCUs offer high flexibility, since the microprograms can easily be updated without a substantial redesign of the hardware involved. With the advances in hardware technology, faster and more versatile processors are introduced to the market rapidly. This requires that the design cycle time of newer processors be as small as possible, and the design costs, which must be recovered over the short life span of a new processor, must be minimized. MCUs offer such flexibility and low-cost redesign capability. Although they are inherently slow compared with HCUs, the speed differential between the two designs is getting smaller, since with current technology the MCU can be fabricated on the same IC (i.e., in the same technology) as the rest of the processor. This chapter concentrates on the popular speed-enhancement techniques used in contemporary machines.", "url": "RV32ISPEC.pdf#segment387", "timestamp": "2023-10-17 20:16:18", "segment": "segment387", "image_urls": [], "Book": "computerorganization" }, { "section": "11.1 SPEED ENHANCEMENT ", "content": "The ASC control unit fetches an instruction, decodes it, and executes it completely before fetching the next instruction, as dictated by the program
logic. That is, the control unit brings about the instruction cycles one after another, in a serial instruction-execution mode. One way to increase the speed of execution of the overall program is to minimize the instruction-cycle time of the individual instructions; this concept is called instruction-cycle speedup. The program execution time can also be reduced if the instruction cycles are overlapped, that is, if the next instruction is fetched and/or decoded during the current instruction cycle. This overlapped mode of operation is termed instruction-execution overlap, or more commonly, pipelining. Another obvious technique would be to bring more than one instruction into the instruction cycle simultaneously, a parallel mode of instruction execution. We now describe these speed-enhancement techniques.", "url": "RV32ISPEC.pdf#segment388", "timestamp": "2023-10-17 20:16:18", "segment": "segment388", "image_urls": [], "Book": "computerorganization" }, { "section": "11.1.1 Instruction Cycle Speedup ", "content": "Recall that the processor cycle time (i.e., the major cycle time) depends on the register-transfer time of the processor bus structure. If the processor structure consists of multiple buses, it is possible to perform several register transfers simultaneously; this requires that the control unit produce the appropriate control signals simultaneously. In a synchronous HCU, the processor cycle time is fixed by the slowest register transfer; thus even the fastest register-transfer operation consumes a complete processor cycle. In an asynchronous HCU, the completion of one register transfer triggers the next; therefore, a properly designed asynchronous HCU would be faster than a synchronous HCU. Since the design and maintenance of an asynchronous HCU is difficult, however, the majority of practical processors have synchronous HCUs. An MCU is slower than an HCU, since the microinstruction execution time is the sum of the processor cycle time and the CROM access time. The HCU of ASC has the simplest configuration possible: the instruction cycle is divided into one or more phases (states, or major cycles), each phase consisting of four processor cycles (i.e., minor cycles). The majority of actual control units are synchronous control units and are, in essence, enhanced versions of the ASC control unit. One optimization that can be performed on the ASC control unit is to reduce the number of major cycles needed to execute certain instructions; for SHR and SHL, for instance, the execute cycle need not be entered, since the required microoperations that implement these instructions can be completed in one major cycle. This kind of optimization is possible wherever it is not necessary to use a complete major cycle for the microoperations corresponding to instruction execution; fetch and defer can likewise be completed in part of a major cycle. For example, the microoperations for the execution of the branch instructions (BRU, BIP, BIN) could be completed in one minor cycle, rather than the complete major cycle used in the ASC control unit as currently implemented; three minor cycles could thus be saved during the execution of branch instructions by returning to the fetch cycle at the first minor cycle of the execute cycle. As such enhancements are implemented, the state-change circuitry of the control unit becomes more complex, while the execution speed increases. Note that in the case of an MCU, the concept of a major cycle is not present: the basic unit of work is the minor cycle (i.e., the processor cycle time plus the CROM access time), the lengths of the microprograms corresponding to the various instructions can differ, and each microinstruction is executed in one minor cycle; the microprograms do not include idle cycles. In developing the microprograms for ASC, the microoperation sequences of the HCU design were reorganized to make them as short as possible. Section 8.6.3 provided the instruction-cycle details of the Intel 8080, illustrating the instruction-cycle speedup concept; although it is an obsolete processor, the 8080 was selected for its simplicity, and recent Intel processors adopt these techniques extensively.", "url": "RV32ISPEC.pdf#segment389", "timestamp": "2023-10-17 20:16:18", "segment": "segment389", "image_urls": [], "Book": "computerorganization" }, { "section": "11.1.2 Instruction Execution Overlap ", "content": "Even in a processor as early as the Intel 8080, for instructions such as ADD r, the memory operand is fetched and the addition is performed by the CPU while it is fetching the next instruction in sequence from memory. This overlap of the instruction fetch and execute phases increases program execution speed. In general, the control unit can be envisioned as a device with three subfunctions: fetch, decode (and address computation), and execute. If the control unit is designed in a modular form, with one module for each of these functions, it is possible to overlap the instruction-processing functions. Such overlapped processing is brought about by the pipeline, described in the previous chapter. A pipeline structure, like an automobile assembly line, consists of several stations, each capable of performing a certain subtask. Work
flows from one station to the next: as work leaves a station, that station picks up the next unit of work, and when a unit of work leaves the last station of the pipeline, its task is complete. In a pipeline with n stations, where the work stays at each station t seconds, the complete processing time for one task is n × t seconds; however, since all n stations work in an overlapped manner on various tasks, the pipeline outputs one completed task every t seconds, once the initial period during which the pipeline is filled has passed. Figure 11.1 introduces the concept of an instruction-processing pipeline for a control unit with three modules. The processing sequence is shown in Figure 11.1(b): at time t2, the first module is fetching instruction 2 while the second module is decoding instruction 1; from t3 onward, with the last module executing instruction 1, the pipeline flows at its full throughput of one instruction per time slot. For simplicity, it is assumed that each module of the pipeline consumes the same amount of processing time; to realize this equal-time partitioning of the processing task, intermediate registers to hold results and flags to indicate the completion of one subtask and the beginning of the next are needed. It is also assumed that the instructions are always executed in the sequence in which they appear in the program. This assumption is valid as long as the program does not contain a branch instruction. When a branch instruction is encountered, the next instruction must be fetched from the target address of the branch. For a conditional branch, the target address would not be known until the instruction reaches the execute stage; if the branch is indeed taken, the instructions following the branch instruction that are already in the pipeline need to be discarded, and the pipeline needs to be filled from the target address. Another approach would be to stop fetching subsequent instructions into the pipeline as soon as a branch instruction is fetched, until the target address is known. The former approach is preferred for handling conditional branches, since there is a good chance that the branch will not occur, in which case the pipeline flows full; the pipeline flow suffers only when the branch does occur. For unconditional branches, the latter approach is used. We return to a detailed treatment of pipeline concepts later in this chapter. To implement the control unit as a pipeline, each stage must be designed to operate independently, performing its function while sharing resources such as the processor bus structure and the main memory system; such designs become complex. The pipeline concept is used extensively in modern processors.
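The timing argument above (n × t for the first task through an n-station pipeline, then one completed task every t) can be stated as a formula; a minimal sketch, with uniform stage times assumed:

```python
# Uniform-stage-time pipeline model: n stages, each taking t seconds.
def pipeline_time(n_stages, n_tasks, t=1.0):
    """First task emerges after n_stages * t; one more task every t."""
    return (n_stages + n_tasks - 1) * t

def serial_time(n_stages, n_tasks, t=1.0):
    """Without overlap, every task pays the full n_stages * t."""
    return n_stages * n_tasks * t

# For many tasks the speedup approaches the number of stages.
print(serial_time(3, 1000) / pipeline_time(3, 1000))
```

With three stages and 1000 instructions the speedup is 3000/1002, just under 3; the fill time (n - 1 extra slots) is all that separates the pipeline from an ideal n-fold speedup, which is why branches that drain the pipeline are costly.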
Typical processors today have four- or five-stage pipelines. As hardware technology progresses, processors with deeper pipelines (i.e., pipelines with a larger number of stages) are being introduced; these processors belong to the so-called superpipelined processor class, described in a subsequent section.", "url": "RV32ISPEC.pdf#segment390", "timestamp": "2023-10-17 20:16:18", "segment": "segment390", "image_urls": [], "Book": "computerorganization" }, { "section": "11.4 ", "content": "Subsequent chapters provide some examples of this class of machines.", "url": "RV32ISPEC.pdf#segment391", "timestamp": "2023-10-17 20:16:18", "segment": "segment391", "image_urls": [], "Book": "computerorganization" }, { "section": "11.1.3 Parallel Instruction Execution ", "content": "As seen from the descriptions in Chapters 8 through 10, the Intel Pentium series of processors contains two execution units: an integer unit and a floating-point unit. The control unit of these processors fetches instructions from the instruction stream (i.e., the program), decodes them, and delivers each to the appropriate execution unit. The execution units thus work in parallel, executing their instructions. The control unit must be capable of creating these parallel streams of execution and of synchronizing the streams appropriately, based on the precedence constraints imposed by the program: the result of the computation must be the same whether the program is executed in the serial or the parallel mode of instruction execution. The class of architectures in which more than one instruction stream is processed simultaneously is known as the superscalar architectures, introduced in the previous chapter of this book; Section 11.4 and subsequent chapters provide examples.", "url": "RV32ISPEC.pdf#segment392", "timestamp": "2023-10-17 20:16:18", "segment": "segment392", "image_urls": [], "Book": "computerorganization" }, { "section": "11.1.4 Instruction Buffer and Cache ", "content": "Instruction buffers and instruction caches, architectures introduced earlier in this book, also bring about an overlap in instruction processing, although not at the complete-instruction level of operation: the fetching of instructions into the buffer or cache (prefetching) is overlapped with the operation of retrieving and executing the instructions already in the buffer", "url": "RV32ISPEC.pdf#segment393",
"timestamp": "2023-10-17 20:16:18", "segment": "segment393", "image_urls": [], "Book": "computerorganization" }, { "section": "11.2 HARDWIRED VERSUS MICROPROGRAMMED CONTROL UNITS ", "content": "The speedup techniques described in the previous section are adopted in the HCUs of practical machines. As mentioned earlier, the main advantage of an HCU is its speed, and its disadvantage is inflexibility. Although asynchronous HCUs offer a higher speed capability than synchronous HCUs, the majority of practical machines have synchronous HCUs because of their simpler design. In the current VLSI era, where a complete processing system is fabricated on a single IC, the control unit becomes the most complex unit and occupies a large percentage of the chip real estate; this complexity increases as the number of instructions in the instruction set of the machine increases. Reduced instruction set computers (RISCs), introduced earlier in this book, address this complexity problem. In an MCU, the execution time of an instruction is proportional to the number of microoperations required, and hence to the length of the microprogram for that instruction. Since the MCU starts fetching the next instruction during the last microoperation of the microprogram of the current instruction, the MCU can be treated as an asynchronous control unit. An MCU is slower than an HCU because of the addition of the CROM access time to the register-transfer time, but it is more flexible than an HCU, requiring minimum changes in hardware if the instruction set is modified or enhanced. The speedup techniques described in Section 11.1 can be used in practical MCUs. In addition, another level of pipelining, shown in Figure 11.2, is possible in an MCU: the fetching of the next microinstruction is overlapped with the execution of the current microinstruction. The CROM word size is one of the design parameters of an MCU. Although the price of ROMs keeps decreasing, the cost of the data-path circuits required within the control unit increases as the CROM word size increases; the word size is therefore reduced where possible, to reduce the cost of the MCU. We now examine the microinstruction formats used in practical machines with respect to their cost effectiveness. The most common format for a microinstruction is shown in Figure 11.3(a). The instruction portion of the microinstruction is used for generating the control signals, and the address portion indicates the address of the next microinstruction; the execution of the microinstruction thus corresponds to the generation of the control signals and the transfer of the address portion into the MMAR to retrieve the next microinstruction. The advantage of this format is that very little external circuitry is needed to generate the next microinstruction address; its disadvantage is that conditional branches in the microprogram cannot easily be coded. The format shown in Figure 11.3(b) allows conditional branching: the branch address is used if the assumed condition is satisfied; otherwise, the MMAR is simply incremented to point to the next microinstruction in sequence. This, however, requires additional MMAR circuitry. The microinstruction format shown in Figure 11.3(c) explicitly codes the jump addresses corresponding to both outcomes of the test condition, thus reducing the external MMAR circuitry; but the CROM word size must be large for these formats, since they contain one or more address fields. It is possible to reduce the CROM word size if the address representation is made implicit. The format shown in Figure 11.3(d), similar to the one used in the MCU of ASC, distinguishes between two types of microinstructions by a mopcode: a type-1 microinstruction produces control signals, while a type-0 microinstruction manages the microprogram flow; the MMAR is simply incremented on the execution of a type-1 microinstruction, making the next-address representation implicit. As seen in Chapter 6, this microinstruction format requires fairly complex circuitry to manage the MMAR. Figure 11.4 shows the two popular forms of encoding the instruction portion of a microinstruction. In the horizontal (or unpacked) microinstruction, each bit of the instruction portion represents a control signal; hence, any set of control signals can be generated simultaneously and no external decoding is needed, making the MCU fast. The disadvantages are that it requires large CROM words, and that instruction encoding is cumbersome: a thorough familiarity with the processor hardware structure is needed to prevent the generation of control signals that would cause conflicting operations in the processor hardware. In the vertical (or packed) microinstruction, the instruction is divided into several fields, each field corresponding to either a resource or a function of the processing hardware (in the design of the ASC MCU, each field corresponded to a resource: ALU, BUS1, etc.). The vertical microinstruction reduces the CROM word size, but decoders are needed to generate the control signals from each field of the instruction, and they contribute delays to the control signals. Encoding a vertical microinstruction is easier than encoding a horizontal one, because of its function/resource partitioning. The foregoing
discussion assumed that all the control signals implied by a microinstruction are generated simultaneously and that the next clock pulse fetches a new microinstruction; this type of microinstruction encoding is called monophase encoding. It is also possible to associate a time value with each field (or bit) of the microinstruction, so that the execution of the microinstruction requires more than one clock pulse; this type of encoding is called polyphase encoding. Figure 11.5(a) shows monophase encoding, in which all the control signals are generated simultaneously at the clock pulse, while Figure 11.5(b) shows n-phase encoding, in which the microoperations m1 through mn are associated with the time values t1 through tn, respectively.", "url": "RV32ISPEC.pdf#segment394", "timestamp": "2023-10-17 20:16:19", "segment": "segment394", "image_urls": [], "Book": "computerorganization" }, { "section": "11.3 PIPELINE PERFORMANCE ISSUES ", "content": "Figure 11.6 shows an instruction-processing pipeline consisting of six stages. The first stage fetches instructions from memory, one instruction at a time; at the end of the fetch, this stage also updates the program counter to point to the next instruction in sequence. The decode stage decodes the instruction, and the next stage computes the effective address of the operand, followed by a stage that fetches the operand from memory. The operation called for by the instruction is performed in the execute stage, and the results are stored in memory by the next stage. The operation of the pipeline is depicted by a modified time-space diagram called a reservation table, shown in Figure 11.7. In a reservation table, each row corresponds to a stage in the pipeline and each column corresponds to a pipeline cycle. An X at the intersection of the ith row and jth column indicates that stage i would be busy performing a subtask at cycle j, where cycle 1 corresponds to the initiation of the task in the pipeline; the stage is reserved, and hence not available for any other task, at cycle j. The number of columns in the reservation table for a given task is determined by the sequence of subtasks corresponding to the flow of that task through the pipeline. The reservation table of Figure 11.7 shows that each stage completes its task in one cycle; hence the instruction cycle requires six cycles to complete, although one instruction is completed every cycle once the pipeline is full. The pipeline cycle time is determined by the slowest stage, and stages requiring memory access tend to be slower than the others. If we assume that a memory access takes 3T time units while each stage not requiring a memory access executes in T, the pipeline produces one result every 3T after the first 18T, during which the pipeline is filled. Compared with this, a purely sequential execution of each instruction would require 14T with an asynchronous control unit and 18T with a synchronous one.

The assumption so far has been that instruction execution is completely sequential. As long as this is true, the pipeline completes one instruction per cycle. In practice, program execution is not completely sequential, because of branch instructions. Consider an unconditional branch instruction entering the pipeline of Figure 11.6. The target of the branch is not known until the instruction reaches the address-calculation stage S3. If the pipeline is allowed to function normally, it would have fetched the two instructions following the branch instruction by the time the target address is known; since these instructions should not have entered the pipeline, they must be discarded and new instructions fetched from the target address of the branch. This pipeline-draining operation results in a degradation of pipeline throughput. One solution would be to freeze the pipeline from fetching further instructions as soon as the branch opcode is decoded (at stage S2), until the target address is known; this mode of operation prevents some traffic on the memory system and increases pipeline efficiency compared with the first method. Now consider an instruction entering the pipeline that is a conditional branch; here, the target of the branch is not known until the evaluation of the condition at stage S5. Three modes of pipeline handling are possible in this case. In the first mode, the pipeline is frozen from fetching subsequent instructions until the branch target is known, as in the case of the unconditional branch. In the second mode, the pipeline fetches subsequent instructions normally, ignoring the conditional branch; that is, the pipeline predicts that the branch will not be taken. If indeed the branch is not taken, the pipeline flows normally, and hence there is no degradation of performance; if the branch is taken, the pipeline must be drained and restarted at the target address. The second mode is preferred, since the pipeline functions normally about 50% of the time, on average. The third mode would start fetching the target instruction sequence into a buffer as soon as the target address is computed at S3, while the non-branch sequence is fed into the pipeline. If the branch is not taken, the pipeline continues its normal operation and the contents of the buffer are ignored; if the branch is taken, the instructions already in the pipeline are discarded (i.e., the pipeline is flushed) and the target instruction is fetched from the buffer. The advantage here is that fetching instructions from the buffer is faster than fetching them from memory. In the last two modes of operation, all activity of the pipeline with respect to the instructions entering it after the conditional branch must be marked temporary, and made permanent only if the branch is not taken. This problem introduced in pipelines by branch instructions is called a control hazard; we describe mechanisms to reduce the effect of control hazards on pipeline performance later in this section.

Example 11.1. Consider the following instruction sequence, executed on a processor that utilizes the pipeline of Figure 11.6:
LOAD R1, MEM1    ; R1 ← (MEM1)
LOAD R2, MEM2    ; R2 ← (MEM2)
MPY  R3, R1      ; R3 ← (R3) * (R1)
ADD  R4, R2      ; R4 ← (R4) + (R2)
Here R1 through R4 are registers, MEM1 and MEM2 are memory addresses, ( ) denotes 'the contents of', and ← indicates a data transfer. Figure 11.8 depicts the operation of the pipeline. In cycles 1 and 2, one memory access is needed per cycle. In cycle 3, two simultaneous read accesses to memory are needed, one due to CA and one due to FI. In cycle 4, three memory read accesses (FO, CA, and FI) are needed. Cycle 5 requires two memory reads (FO and CA) and one register write (EX), and cycle 6 requires one memory access (read or write) along with a register read and write. Cycles 7 through 9 require no memory access other than one memory write; cycle 7 requires one register read and one register write, and cycle 8 needs one register write. The memory and register accesses are summarized in Table 11.1. As seen from Table 11.1, the memory system must accommodate up to three accesses per cycle if the pipeline is to operate properly. Assume the machine has separate data and instruction caches, so that two simultaneous accesses can be handled; this solves the problem in cycles 5 and 6, assuming that the machine accesses the instruction cache for CA. In cycle 4, however, two accesses (FI and CA) to the instruction cache and one to the data cache would be needed. One way to solve this problem is to stall the ADD instruction, i.e., to initiate it in cycle 6 rather than in cycle 4, as shown in Figure 11.9. The stalling process results in a degradation of pipeline performance. The memory and register accesses of Figure 11.9 are summarized in Table 11.2. Note that the pipeline controller must evaluate the resource requirements of an instruction before the instruction enters the pipeline if this kind of structural hazard is to be eliminated. In addition to such collision problems, which are due to an improper initiation rate, data interlocks occur in pipelines when data are shared between the stages of the pipeline.
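The structural-hazard bookkeeping behind Example 11.1 can be sketched as follows. The stage indices (instruction fetch at stage 0, operand fetch at stage 3, store at stage 5) and the assumption that every instruction both fetches and stores an operand are illustrative simplifications, not the exact instruction mix of the example:

```python
# Assumed six-stage layout: stage 0 (instruction fetch), stage 3 (operand
# fetch), and stage 5 (store) touch memory; only the data-side accesses of
# stages 3 and 5 compete for the single data-cache port in this sketch.
DATA_STAGES = (3, 5)

def data_accesses_per_cycle(n_instr):
    """One instruction enters per cycle with no stalls; count how many
    data-memory accesses land in each cycle."""
    counts = {}
    for i in range(n_instr):          # instruction i enters at cycle i
        for stage in DATA_STAGES:
            cycle = i + stage
            counts[cycle] = counts.get(cycle, 0) + 1
    return counts

# Instruction i's operand fetch (cycle i + 3) collides with the store of
# instruction i - 2 (cycle i - 2 + 5); each cycle with two data accesses
# forces a one-cycle stall on a machine with a single data-cache port.
counts = data_accesses_per_cycle(4)
stalls = sum(c - 1 for c in counts.values() if c > 1)
```

Under these assumptions the collision pattern is periodic (operand fetch against a store issued two cycles earlier), which is exactly the kind of check a pipeline controller must perform before letting an instruction enter the pipeline.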
conditional branch instructions degrade performance instruction pipeline men tioned earli er problem common solu tions descr ibed sect ion", "url": "RV32ISPEC.pdf#segment395", "timestamp": "2023-10-17 20:16:20", "segment": "segment395", "image_urls": [], "Book": "computerorganization" }, { "section": "11.3.1 Data Inter locks ", "content": "instru ctionproc essing pipeline efcient whe n instru ctions ow stages smo oth man ner practice always possibl e becau se interin struction depend encies interi nstructio n depend encies due sharing resource memor location register instru ctions pipe line shar ing envi ronment com putation proce ed one stages op erating reso urce wait comple tion f operation examp le 112 consid er followi ng instruction seque nce loa r1 mem1 r1 mem 1 loa r2 mem2 r2 mem 2 mpy r1 r2 r1 r2 r1 add r1 r2 r1 r1 r2 figure 1110 shows opera tion pipe line figure 116 sequen ce note result second load instruction r2 loaded data mem2 cycle 6 mpy instruction reads r2 cycle 6 also general r2 guaranteed contain proper data end cycle 6 hence mpy instruction would operate erroneous data similar data hazard occurs cycle 7 results correct must ensure r2 r1 read written previous instruction cycles one possible solution forward data needed pipeline early possible instance since alu requires contents r2 cycle 6 memory read mechanism simply forward data alu written r2 thus accomplishing write read simultan eously concept internal forwarding described later section general following scenarios possible among instructions j j follows program 1 instruction produces result required j j delayed produces result 2 j required write common memory location register order writing might get reversed operation pipeline 3 j writes register whose previous contents must read i j must delayed register contents read i order operations reversed pipeline implied instruction sequence program result erroneous since instruction either reads resource writes four possible orders operations two 
instructions sharing resource 1 readread read read 2 readwrite read write 3 writeread write read 4 writewrite write write case rst operation earlier instruction second operation later instruction j orders reversed conict occurs readwrite conict occurs write operation performed j resource read reversing order readread detrimental since data changed either instructions hence considered conict writewrite conict result shared resource wrong one subsequent read operations established reads two writes pipeline allow write j disable write occurs order either readwrite writeread reversed instruction reading data gets erroneous value conicts must detected pipeline mechanism make sure results instruction execution remain specied program general two approaches resolve conicts rst one compare resources required instruction entering pipeline instructions already pipeline stall ie delay initiation entering instruction conict expected instruction sequence 1 j j 1 conict discovered instruction j entering pipeline instruction pipeline execution instructions j j 1 stopped passes conict point second approach allow instructions j j 1 enter pipeline handle conict resolution potential stage conict might occur suspend instruction j allow j 1 j 2 continue course suspending j allowing subsequent instructions continue might result conicts thus multilevel conict resolution mechanism may needed making pipeline control complex second approach known instruction deferral may offer better performance although requires complex hardware independent functional units section 1134 describes instruction deferral one approach avoid writeread conicts data forwarding instruction writes data also forwards copy data instructions waiting generalization technique concept internal forwarding described next internal forwarding internal forwarding technique replace unnecessary mem ory accesses registertoregister transfers sequence readoperatewrite operations data memory results higher throughput since slow 
Slow memory accesses are replaced by faster register-to-register operations. The scheme also resolves data interlocks between pipeline stages. Consider a memory location M and registers R1 and R2 that exchange data. Three possibilities are of interest (here "<-" designates a data transfer in which the destination's contents are replaced):

Write-read forwarding. The sequence M <- (R1); R2 <- (M) is replaced by M <- (R1); R2 <- (R1), thus saving one memory access.

Read-read forwarding. The sequence R1 <- (M); R2 <- (M) is replaced by R1 <- (M); R2 <- (R1), thus saving one memory access.

Write-write forwarding (overwriting). The sequence M <- (R1); M <- (R2) is replaced by M <- (R2), thus saving one memory access.

The internal forwarding technique can be applied to any sequence of operations, as shown in the following example.

Example 11.3 Consider the operation P = A * B + C * D, where P, A, B, C, and D are memory operands, performed by the following sequence:

    R1 <- (A)
    R2 <- (R1) * (B)
    R3 <- (C)
    R4 <- (R3) * (D)
    P  <- (R4) + (R2)

The data flow for these operations is shown in Figure 11.11(a). With internal forwarding, the data flow sequence is altered as in Figure 11.11(b): A and C are forwarded to the corresponding multiply units, eliminating registers R1 and R3 respectively, and the results of the multiply units are forwarded to the adder, eliminating the transfers through R2 and R4. This example can be generalized. Here an architecture that allows operations with memory and register operands in the second and fourth instructions, and register operands in the last instruction, is assumed. Two other possibilities are load/store and memory-memory architectures. In a load/store architecture, arithmetic operations always have two register operands, and memory is accessed only by load and store instructions; in a memory-memory architecture, operations can be performed on two memory operands directly. An assessment of the performance of this instruction sequence on these architectures is left as an exercise.

The internal forwarding technique described above is not restricted to pipelines alone; it is applicable in general multiple-processor architectures. In particular, internal forwarding serves as a pipeline mechanism to supply data produced by one stage directly to another stage that needs it, i.e., without storing it to and reading it from memory. The following example illustrates the technique in a pipeline.

Example 11.4 Consider the computation of the sum of the elements of an array, i.e., SUM = SUM + A(i). If the pipeline completes a task in one pipeline cycle time, the computation of the first accumulation would be possible from that point onwards, but the fetch-SUM unit has to wait two cycles to obtain the proper value of SUM. This data interlock results in a degradation of the throughput of the pipeline to one output every three cycles. One obvious remedy is to feed the output of the add unit back to its input, as shown in Figure 11.12(b). This requires buffering the intermediate sum value in the add unit while the elements are accumulated; the final value of SUM is then stored into memory. If the add unit requires one cycle to compute a sum, this solution still results in a degradation of throughput, since the adder becomes the bottleneck.

Kogge (1981) provided a solution to such problems by rewriting the summation as SUM(i) = SUM(i-d) + A(i), where i is the current iteration index, SUM(i-d) is the intermediate sum from d iterations ago, and d is the number of cycles required by the add unit. Since SUM(i-d) is available at the ith iteration, the computation can proceed every iteration. The computation results in d partial sums, each accumulating elements d apart; at the end, the d partial sums are added to obtain the final sum. Thus the pipeline works efficiently. For d = 2, this requires storing two partial sums, SUM1 and SUM2, obtained one and two cycles ago respectively, at the current iteration. If the buffer holding the partial sums is arranged as a first-in-first-out (FIFO) buffer, the fetch-SUM unit simply fetches the appropriate value at each iteration.

This type of solution is practical only where changing the order of computation does not matter, as with associative and commutative operations such as addition and multiplication. Even then, changing the order might result in unexpected errors. For instance, many numerical-analysis applications arrange the numbers to be added in order of relative magnitude; if the order of addition is changed, the structure of the computation changes (a large number might be added to a small number), thereby altering the error characteristics of the computation.
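Example 11.4's partial-sum rewriting can be sketched in software. The code below (the array values and the latency d = 2 are illustrative assumptions) keeps d independent partial sums, rotated FIFO-fashion, so each add combines a value produced d iterations earlier; the d partial sums are combined at the end:

```python
# Accumulate an array with an adder whose result is available only after
# d cycles, using Kogge's rewriting SUM(i) = SUM(i - d) + A(i):
# keep d partial sums, each accumulating elements d apart.

def pipelined_sum(a, d=2):
    """Sum a[] using d partial sums so no add waits on the previous add."""
    partial = [0] * d                 # FIFO of intermediate sums
    for i, x in enumerate(a):
        partial[i % d] += x           # uses the sum produced d iterations ago
    return sum(partial)               # final combine of the d partial sums

a = list(range(1, 11))                # 1 + 2 + ... + 10
print(pipelined_sum(a, d=2))          # 55, same result as sum(a)
```

Note how the final `sum(partial)` changes the order of the additions; as the text cautions, this is safe for integers but can alter the floating-point error characteristics of the computation.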
11.3.2 Conditional Branches

As described earlier in this chapter, conditional branches degrade the performance of an instruction pipeline. The hardware mechanisms described earlier to minimize the branch penalty are static in nature, in the sense that they do not take into consideration the dynamic behavior of the branch instruction during program execution. Two compiler-based static schemes and two hardware-based dynamic schemes are described below.

11.3.2.1 Branch Prediction

The approaches described earlier are in a way branch-prediction techniques, in the sense that one predicted that the branch would be taken while the other predicted that it would not be taken. The run-time characteristics of the program can also be utilized in predicting the target of a branch. For instance, if a conditional branch corresponds to the end-of-do-loop test in a Fortran program, it is safe to predict a branch to the beginning of the loop; this prediction would be correct every time through the loop except on the last iteration. Obviously, such predictions are performed by the compiler, which generates the appropriate flags to aid prediction during program execution. In general, the target of the branch is guessed and execution continues along that path, with the results marked tentative, until the actual target is known. Once the target is known, the tentative results are either made permanent or discarded. Branch prediction is effective as long as the guesses are correct. Refer to Section 13.1 for a description of the branch-prediction mechanism of the Intel Itanium.

11.3.2.2 Delayed Branching

The delayed branching technique is widely used in the design of MCUs. Consider the two-stage pipeline for the execution of instructions shown in Figure 11.13(a): the first stage fetches the instruction and the second executes it. The effective throughput of the pipeline is one instruction per cycle for sequential code. When a conditional branch is encountered, the pipeline operation suffers while the pipeline fetches the target instruction, as shown in Figure 11.13(b). If the branch instruction is instead interpreted as "execute the next instruction and then branch conditionally", the pipeline can be kept busy while the target instruction is being fetched, as shown in Figure 11.13(c). In this mode of operation, the pipeline executes one or more of the instructions following the branch before executing the target of the branch; this is called delayed branching. If n instructions enter the pipeline after the branch instruction and before the target is known, the branch delay slot is of length n, as shown below:

    branch instruction
    successor instruction 1
    successor instruction 2
    successor instruction 3
    ...
    successor instruction n
    branch target instruction

The compiler rearranges the program such that the branch instruction is moved n instructions prior to where it normally occurs; that is, the branch slot is filled with instructions that need to be executed prior to the branch, and the branch executes in a delayed manner. Obviously, the instructions in the branch delay slot must not affect the condition of the branch. The length of the branch slot is the number of pipeline cycles needed to evaluate the branch condition and the target address; as might be guessed, the earlier the branch is evaluated after the branch instruction enters the pipeline, the shorter the branch slot. This technique is fairly easy to adopt in architectures that offer single-cycle execution of instructions utilizing a two-stage pipeline, where the branch slot holds just one instruction. In other pipelines, as the length of the branch slot increases, it becomes difficult to sequence instructions properly so that they can be executed by the pipeline while it resolves the conditional branch. Modern-day RISC architectures adopt pipelines with a small number (typically 2-5) of stages for instruction processing and utilize the delayed branching technique extensively. The rearrangement of instructions to accommodate delayed branching is done by the compiler and is usually transparent to the programmer. Since the compiled code is dependent on the pipeline architecture, it is not easily portable to other processors; hence delayed branching is not considered a good architectural feature, especially in aggressive designs that use complex pipeline structures.

These techniques are static, in the sense that the predictions are made at compile time and do not change during program execution. The following techniques utilize hardware mechanisms to dynamically predict the branch target; the prediction changes if the branch changes its behavior during program execution.

11.3.2.3 Branch-Prediction Buffer

A branch-prediction buffer is a small memory buffer, indexed by the branch instruction address, that contains one bit per instruction indicating whether or not the branch was taken. The pipeline fetches the subsequent instruction based on this prediction bit; if the prediction turns out to be wrong, the prediction bit is inverted. Ideally, the branch-prediction buffer would be large enough to contain one bit for each branch instruction in the program, with the bit attached to each instruction and fetched along with it during instruction fetch, but this increases complexity. Typically, a small buffer indexed by several low-order bits of the branch instruction address is used to reduce the complexity. In this case, more than one instruction maps to each bit in the buffer, and hence the prediction may not be correct with respect to a given branch instruction, since the bit could have been altered by another instruction; nevertheless, the prediction is assumed correct. Losq (1984) named the branch-prediction buffer a decode history table. The disadvantage of this technique is that by the time the instruction is decoded and detected to be a conditional branch, other instructions have entered the pipeline; if the branch is successful, the pipeline must be refilled from the target address. To minimize this effect, the decode history table can also contain the target instruction in addition to the branch-taken information, so that the target instruction can enter the pipeline immediately after a branch success.

11.3.2.4 Branch History

The branch history technique (Sussenguth, 1971) uses a branch history table that stores, for each branch, the most probable target address; the target could very well be the one reached the last time during program execution. Figure 11.14 shows a typical branch history table. It consists of two entries for each instruction: the instruction address and the corresponding branch address, stored in a cache-like memory. As soon as an instruction is fetched, its address is compared with the first field of the table. If there is a match, the corresponding branch address is immediately known, and execution continues with the assumed branch, as in the branch-prediction technique above. The branch-address field of the table is continually updated as branch targets are resolved or guessed. The implementation of a branch history table requires an excessive number of accesses to the table, thereby creating a bottleneck at the cache containing the table.
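A minimal one-bit prediction buffer of the kind described in Section 11.3.2.3 can be sketched as follows. The table size and the index function are illustrative assumptions; a real decode history table would also store the target address:

```python
# A 1-bit branch-prediction buffer: a small table indexed by the low-order
# bits of the branch instruction address. Each entry predicts taken / not
# taken and is inverted whenever the prediction proves wrong.

class BranchPredictionBuffer:
    def __init__(self, size=16):          # size: a power of two (assumption)
        self.bits = [False] * size        # False = predict not taken
        self.mask = size - 1

    def predict(self, branch_addr):
        return self.bits[branch_addr & self.mask]

    def update(self, branch_addr, taken):
        # On a misprediction the single bit is simply set to the
        # actual outcome, i.e., inverted.
        self.bits[branch_addr & self.mask] = taken

bpb = BranchPredictionBuffer()
# A loop-closing branch at address 0x40: taken 4 times, then falls through.
outcomes = [True, True, True, True, False]
correct = 0
for taken in outcomes:
    if bpb.predict(0x40) == taken:
        correct += 1
    bpb.update(0x40, taken)
print(correct, "of", len(outcomes), "predicted correctly")  # 3 of 5
```

The sketch also shows the 1-bit scheme's weakness on loops: it mispredicts both the first iteration and the loop exit, which is why many later designs keep a 2-bit saturating counter per entry instead.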
11.3.2.5 Multiple Instruction Buffers

The use of multiple instruction buffers to reduce the effect of conditional branching on the performance of a pipeline was described earlier, but no details of a practical architecture utilizing this feature were provided there. Figure 11.15 shows the structure of the IBM 360/91 instruction-processing pipeline. The instruction fetch stage consists of two buffers: the S-buffer, for sequential instruction prefetch, and the T-buffer, for prefetch of the target instruction sequence. This stage is followed by decode and other stages similar to those of the pipelines described earlier in this chapter. The contents of the S-buffer are invalidated when a branch is successful, and the contents of the T-buffer are invalidated when a branch is not successful. The decode unit fetches instructions from the appropriate buffer: when the decoder issues a request for the next instruction, the S-buffer is looked up for sequential instructions, and the T-buffer is looked up if a conditional branch is successful. If the instruction is available in either buffer, it is brought into the decode stage without delay; otherwise, the instruction must be fetched from memory and incurs a memory-access delay.

Non-branch instructions enter the remaining stages of the pipeline once their decode is complete. For an unconditional branch instruction, the instruction at the target address is immediately requested by the decode unit, and no decoding is performed until that instruction arrives from memory. For a conditional branch instruction, sequential prefetching is suspended while the instruction traverses the remaining stages of the pipeline, and instructions are prefetched from the target address simultaneously, until the branch is resolved. If the branch is successful, instructions are fetched from the T-buffer; if the branch is not successful, normal fetching from the S-buffer continues.

11.3.3 Interrupts

Complex hardware and software support is needed to handle interrupts in a pipelined processor, since many instructions are executed in an overlapped manner and any of them can generate an interrupt. Ideally, if instruction i generates an interrupt, the execution of instructions i+1, i+2, etc., in the pipeline should be postponed until the interrupt is serviced; however, instructions ahead of i that have already entered the pipeline must be completed before the interrupt service is started. Such an ideal scheme is known as a precise interrupt scheme.

Note also that instructions are usually processed in order, i.e., in the order in which they appear in the program. When delayed branching is used, however, the instructions in the branch delay slot are not sequentially related to the branch target instruction. If an instruction in the branch delay slot generates an interrupt while the branch is taken, both the instructions in the branch delay slot and the branch target instruction that has been started must be accounted for when the interrupt is processed. This necessitates multiple program counters, since these instructions are not sequentially related.

Several instructions may generate interrupts simultaneously, and the interrupts must then be handled in order: the interrupt due to instruction i must be handled prior to that due to instruction i+1. In-order handling can be ensured as follows: a status vector is attached to each instruction as it traverses the pipeline, and the machine-state changes implied by the instruction are marked temporary until the instruction reaches the last stage. At the last stage, if the vector indicates no interrupt, the machine state is changed; if an interrupt is indicated, in-order processing of the interrupts is performed before the state changes are committed.

11.3.4 Instruction Deferral

Instruction deferral is used to resolve data-interlock conflicts in a pipeline. The concept is to process as much of an instruction as possible at the current time and defer its completion until its data conflicts are resolved, thus obtaining better overall performance than stalling the pipeline completely until the conflicts are resolved.

11.3.4.1 CDC 6600 Scoreboard

The processing unit of the CDC 6600, shown in Figure 11.16, consists of 16 independent functional units: 5 for memory access, 4 for floating-point operations, and 7 for integer operations. The input operands of each functional unit come from a pair of registers and the output is designated to one of the registers; thus three registers are allocated to each functional unit for the corresponding arithmetic instruction. The control unit contains a scoreboard, which maintains the status of the registers, the status of the functional units, and the register/functional-unit associations. It also contains an instruction queue called a reservation station, which instructions first enter. An arithmetic instruction corresponds to a 3-tuple consisting of two source-register designations and one destination-register designation.
Load and store instructions consist of a 2-tuple corresponding to a memory address and a register designation. A tag is associated with each source operand of an instruction: the tag indicates the availability of the operand, i.e., either that the operand is available in a register or the functional unit from which the operand is expected (Thornton, 1970). As each instruction enters, the scoreboard determines whether the instruction can be executed immediately, based on an analysis of its data dependencies. If it cannot be executed immediately, it is placed in the reservation station; the scoreboard monitors its operand requirements and decides when all its operands are available. The scoreboard also controls when a functional unit writes its result into the destination register. Thus the scoreboard resolves all conflicts. Its activity with respect to each instruction can be summarized in the following steps:

1. If the required functional unit is not available, the instruction is stalled. When the functional unit is free, the scoreboard allocates the instruction to it, provided no other active functional unit has the same destination register. This resolves structural hazards and write-write conflicts.

2. A source operand is said to be available if the register containing the operand has been written by an active functional unit, or if no active functional unit is going to write it. When all the source operands are available, the scoreboard allocates the instruction to the functional unit for execution; the functional unit reads the operands and executes the instruction. Write-read conflicts are resolved in this step. Note that instructions may be allocated to functional units out of order, i.e., not in the order specified by the program.

3. When a functional unit completes the execution of an instruction, it informs the scoreboard. The scoreboard decides when to write the result into the destination register, making sure that read-write conflicts are resolved: it will not allow a functional unit to write its result while an active instruction has that destination register as one of its operands, has not yet read its operands, and expects that operand to be produced by an earlier instruction. The writing of the result is stalled until the read-write conflict clears.

Since any or all of the functional units can be active at a time, an elaborate bus structure is needed to connect the registers and the functional units. The 16 functional units of the CDC 6600 were grouped into four groups, with a set of buses (data trunks) for each group; only one functional unit in each group could be active at a time. Note also that results are written to the register file and a subsequent instruction has to wait until the write takes place, since there is no forwarding of results. Thus, as long as write-write conflicts are infrequent, the scoreboard performs well.
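The three steps above can be sketched as a toy issue/complete check. This is a drastic simplification under assumed data structures: it folds steps 1 and 2 into one issue decision and omits the step-3 read-write (WAR) stall, the reservation station, and the functional-unit groups with their data trunks:

```python
# Toy scoreboard: decide whether an instruction may be issued to a functional
# unit, following steps 1 and 2 of the text. Registers pending a write by an
# active functional unit are "busy".

class Scoreboard:
    def __init__(self, units):
        self.free_units = set(units)   # functional units not currently active
        self.busy_regs = set()         # registers an active unit will write

    def try_issue(self, unit, srcs, dest):
        """Return True (and update state) if (unit, srcs -> dest) can start."""
        if unit not in self.free_units:
            return False                        # step 1: unit busy -> stall
        if dest in self.busy_regs:
            return False                        # step 1: write-write conflict
        if any(r in self.busy_regs for r in srcs):
            return False                        # step 2: operand not yet available
        self.free_units.discard(unit)
        self.busy_regs.add(dest)
        return True

    def complete(self, unit, dest):
        """Step 3 (simplified): unit done, result written, register freed."""
        self.free_units.add(unit)
        self.busy_regs.discard(dest)

sb = Scoreboard({"add", "mul"})
print(sb.try_issue("mul", srcs={"R1", "R2"}, dest="R3"))  # True
print(sb.try_issue("add", srcs={"R3", "R4"}, dest="R5"))  # False: R3 pending
sb.complete("mul", "R3")
print(sb.try_issue("add", srcs={"R3", "R4"}, dest="R5"))  # True
```

The second issue attempt illustrates the text's point that, with no forwarding, a dependent instruction waits until the producer's write to the register file has taken place.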
11.4 EXAMPLE SYSTEMS

The concepts introduced in this chapter have been used in the processors described in earlier chapters of this book. This section provides additional examples: Sun Microsystems' Niagara, a superscalar, superpipelined architecture; the Motorola 88000 series processors, selected to represent early RISC architectures; and the MIPS R10000 and Intel Itanium, representing contemporary superpipelined, superscalar architectures.

11.4.1 Sun Microsystems' Niagara Microprocessor

Niagara, released in 2006, is intended for the volume servers at the heart of data centers running the information web of businesses, universities, hospitals, factories, and the like. Niagara is named for the torrent of data and instructions that flow between the chip and its memory, and it is designed to do away with the impact of the latency with which data and instructions arrive from memory (Geppert, 2005).

Niagara uses a heavily pipelined architecture in which each stage of the pipeline performs one step of instruction execution every clock cycle. The first stage of the Niagara pipeline gets an instruction from memory; the second stage selects that instruction for execution; the third determines what kind of instruction it is; and the fourth executes the instruction. The fifth stage is used for getting data from memory: if the instruction has to access memory to get data into registers, it passes through the memory stage. The final stage of the pipeline writes the results of the operation back into a register. Every clock cycle a new instruction enters the pipeline, and if all goes well, instructions march through the pipeline with one instruction completed per clock cycle.

The performance of a processor can be enhanced by two techniques: increasing the clock frequency and increasing the number of instructions the processor can execute in one clock cycle. Clock frequencies have soared over the past 30-plus years by five orders of magnitude, from tens of kilohertz to 4 gigahertz. To increase the number of instructions per clock cycle, architects have added pipelines to microprocessors: the Intel Pentium 4 has eight pipelines, allowing it, in principle, to complete eight instructions in parallel in a single clock cycle. Some designs put the essential elements of two microprocessors on one piece of silicon, in a dual-core microprocessor. With eight pipelines per core and each core running at 4 GHz, such a chip could execute 64 billion instructions per second, if it could complete one instruction per pipeline per cycle.

But if an instruction in a pipeline needs data from memory, it has to wait for the data to arrive before it can actually execute. The length of the individual delays depends on where the sought-after data is: for high-speed on-chip cache memory, the wait could be a few clock cycles; if the data is not in the cache, however, retrieval from off-chip main memory may take hundreds or even thousands of clock cycles. Thus, minimizing latency is perhaps the most important aspect of processor design for improving throughput. Two mechanisms used are executing instructions in a different order from the way they occur in the instruction stream (out-of-order execution) and beginning the execution of instructions that may never be needed (speculative execution).

Niagara instead uses the concept of multithreading to improve performance: it divides the instruction stream into several smaller streams, known as threads. This concept was first developed for the Control Data Corp. CDC 6600 supercomputer in the 1960s. In the Niagara design, a pipeline can handle four threads. Each cycle, the pipeline begins execution of an instruction from a different thread. For example, while an instruction from thread one is at stage three of the pipeline, an instruction from thread two will be at stage two, and one from yet a different thread will be at stage one. If the pipeline cannot continue to execute the thread-one instruction because it needs data from memory, it stores the information of the stalled instruction in a special type of on-chip memory called a register file. At the same time, it continues the execution of the thread-two instruction, rotating among the three available threads. When the needed data for the stalled thread becomes available, the pipeline jumps back to that thread, using the register file to pick up exactly where it left off.

In conventional microprocessors, architects obtain multigigahertz speeds by increasing the number of stages in the pipeline, so that, basically, each processing step that could be completed in one clock cycle in a slower pipeline needs two clock cycles in a faster chip. Because of Niagara's ability to keep its pipelines running almost all the time, its architects did not have to run the microprocessor at multigigahertz speeds in order to get good performance. The slower speeds translate into a simpler pipeline, and a simpler pipeline and lower frequency let Niagara run at much lower power than comparable microprocessors.

Niagara uses Sun's SPARC architecture. The approach was to build a simple, straightforward pipeline, maximize the number of threads the pipeline can efficiently handle, and maximize the number of cores on the chip. The Niagara chip has eight cores that use the SPARC instruction set; the pipeline in each core switches among four threads, so a single Niagara chip can handle 32 threads and execute eight instructions per clock cycle, and the chip rarely wastes time waiting for data to come back from memory. To reduce latency, the Niagara architects used a fat, fast interface between the processor cores and two different kinds of on-chip memory: the chip has two levels of memory, a level-one memory for every core, while the whole group of cores shares a larger level-two memory. The level-two memory can send data to the cores at an aggregate rate measured in gigabytes per second.

11.4.2 Motorola MC88100/88200

The MC88100 was the first processor in the MC88000 family of RISC processors. The MC88000 family also includes the MC88200, a high-speed memory-caching and demand-paged memory-management unit; together they offer the user a powerful RISC system that was claimed to become the new standard in high-performance computing. The major features are:

1. The MC88100 instruction set contains 51 instructions. These include integer add, subtract, multiply, and divide; floating-point add, subtract, multiply, and divide; logical operations (AND, OR, XOR); bit-field instructions; memory-manipulation instructions; and flow-control instructions.
2. A small number of addressing modes: three addressing modes for data memory, four for instruction memory, and three register addressing modes.
3. A fixed-length instruction format: each instruction is 32 bits long.
4. Instructions are either executed in one processor cycle or dispatched to pipelined execution units that produce results every processor cycle.
5. Memory access is performed only by load and store instructions.
6. The CPU contains 32 32-bit user registers, as well as numerous registers used to control the pipelines and to save the context of the processor during exception processing.
7. The control unit is hardwired.
8. High-level-language support exists in the form of procedure parameter registers and a large register set in general.

An important design feature that contributes to the single-cycle execution of instructions is the use of multiple execution units (see Figure 11.17): the instruction unit, the data unit, the floating-point unit, and the integer unit. The integer unit and the floating-point unit execute all data-manipulation instructions. Data-memory accesses are performed by the data unit and instruction fetches by the instruction unit, and these operate independently and concurrently. The execution units allow the MC88100 to perform five operations in parallel: access program memory; execute an arithmetic, logical, or bit-field instruction; access data memory; execute a floating-point or integer-divide instruction; and execute a floating-point or integer-multiply instruction. Three of the execution units are pipelined, among them the data unit and the floating-point unit; the floating-point unit actually has two pipelines, the add pipe and the multiply pipe.

Data interlocks are avoided within the pipelines by using a scoreboard register. At the time an instruction enters a pipe, the bit corresponding to its destination register is set in the scoreboard register. A subsequent instruction entering a pipe checks whether a bit corresponding to any of its source registers is set, and if so it waits. Upon completion of the instruction, the pipeline mechanism clears the destination-register bit, freeing the register to be used as a source operand.

The MC88100 incorporates delayed branching to reduce the pipeline penalties associated with changes in program flow due to conditional branch instructions. This technique enables the instruction fetched after the branch instruction to be optionally executed, so that the pipeline flow is not broken.

There are also three internal buses within the MC88100: two source-operand buses and a destination bus. The buses enable the pipelines to access the operand registers concurrently; two different pipes can access the same operand register through the two source buses. Data and instructions are accessed via two non-multiplexed address buses. This scheme, known as the Harvard architecture, eliminates bus contention between data accesses and instruction fetches.

The MC88200 (see Figure 11.18) contains a memory-management unit as well as a data/instruction cache. The memory-management unit contains two address-translation caches, the BATC and the PATC. The BATC, used for block addresses, is fully associative and contains 10 entries for 512-KB blocks: eight entries are controlled by software, and two are hardwired for the 1-MB I/O pages. Memory protection is implemented with protection flags. The PATC, generally used for all other accesses, is also fully associative; it contains 56 entries for 4-KB pages and is hardware controlled, with memory protection again implemented via protection flags. The address-translation tables are standard two-level mapping tables, and the MC88200 searches the BATC and PATC in parallel.

The cache system has a capacity of 16 KB, organized as a four-way set-associative cache with 256 sets and four 32-bit words per line. For cache updating, write-through and copy-back memory-update policies are user selectable, and the selection can apply to an area, segment, page, or block. The MC88200 uses an LRU replacement algorithm; lines can be disabled line by line for fault-tolerant operation. Additionally, the MC88200 provides a snoop capability for cache coherency.

The MC88200 interfaces to the P-bus and the M-bus. The P-bus is a synchronous bus with dedicated address and data lines and an 80-MB/s peak throughput at 20 MHz; a checker mode is provided to allow shadowing. The M-bus is a synchronous 32-bit bus with multiplexed address and data lines (Figure 11.19).
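The MC88200's cache organization (16 KB: four-way set-associative, 256 sets, four 32-bit words per line, LRU replacement) can be sketched as follows. The address split and the lookup interface are illustrative assumptions, not Motorola's implementation:

```python
# Sketch of a 4-way set-associative cache with LRU replacement, dimensioned
# like the MC88200's 16-KB cache: 256 sets x 4 ways x 16-byte lines.

NUM_SETS, WAYS, LINE_BYTES = 256, 4, 16

class SetAssocCache:
    def __init__(self):
        # Each set is an ordered list of tags, most recently used last.
        self.sets = [[] for _ in range(NUM_SETS)]

    def access(self, addr):
        """Return True on hit, False on miss (filling the line on a miss)."""
        index = (addr // LINE_BYTES) % NUM_SETS     # which set
        tag = addr // (LINE_BYTES * NUM_SETS)       # high-order address bits
        ways = self.sets[index]
        if tag in ways:
            ways.remove(tag)
            ways.append(tag)          # move to the back: most recently used
            return True
        if len(ways) == WAYS:
            ways.pop(0)               # evict the least recently used line
        ways.append(tag)
        return False

cache = SetAssocCache()
a = 0x1000
print(cache.access(a))        # False: compulsory miss
print(cache.access(a + 4))    # True: same 16-byte line
print(cache.access(a + 16))   # False: next line maps to the next set
```

Keeping each set as a recency-ordered list is the textbook way to model LRU; hardware instead approximates the same ordering with a few status bits per set.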
11.4.3 MIPS R10000 Architecture

This section is extracted from the MIPS R10000 User Manual, Version 2.0 (MIPS Technologies, Inc., 1997), the MIPS R10000 Technical Brief, Version 2.0 (MIPS Technologies, Inc., 1997), and a ZDNet review of the R10000 (December 1997). The R10000 is a single-chip superscalar RISC microprocessor, a follow-on in the MIPS RISC processor family that includes, chronologically, the R2000, R3000, R6000, R4400, and R8000. The integer and floating-point performance of the R10000 makes it ideal for applications such as engineering workstations, scientific computing, 3D graphics workstations, database servers, and multi-user systems. The R10000 uses the MIPS architecture with non-sequential dynamic execution scheduling (ANDES), which supports two integer and two floating-point execute instructions plus one load/store instruction per cycle. The R10000 has the following major features:

- the 64-bit MIPS IV instruction set architecture (ISA)
- decoding of four instructions per pipeline cycle, appending them to one of three instruction queues
- five execution pipelines connected to separate internal integer and floating-point execution (functional) units
- dynamic instruction scheduling and out-of-order execution
- speculative instruction issue (also termed speculative branching)
- a precise exception mode (exceptions can be traced back to the instruction that caused them)
- non-blocking caches
- separate on-chip 32-KB primary instruction and data caches, individually optimized
- secondary cache system interface ports: an internal controller for the external secondary cache and an internal system-interface controller with multiprocessor support

A block diagram of the processor and its interfaces is shown in Figure 11.20.

11.4.3.1 Instruction Set (MIPS IV)

The R10000 implements the MIPS IV ISA. MIPS IV is a superset of the MIPS III ISA and is backward compatible. At a frequency of 200 MHz, the R10000 delivers a peak performance of 800 MIPS, with a peak data transfer rate of 3.2 GB/s to the secondary cache. The MIPS-defined ISAs have been implemented in the following sets of CPU designs: MIPS I in the R2000 and R3000; MIPS II in the R6000; MIPS III in the R4400; and MIPS IV in the R8000 and R10000, which added prefetch, conditional move, indexed load/store FP, and other instructions. The original MIPS CPU ISA has thus been extended forward three times, with each extension backward compatible. The ISA extensions are inclusive in the sense that each new architecture level (version) includes the former levels; as a result, a processor implementing MIPS IV is also able to run MIPS I, MIPS II, and MIPS III binary programs without change.

11.4.3.2 Superscalar Pipeline

A superscalar processor is one that can fetch, execute, and complete more than one instruction in parallel. The R10000 is a four-way superscalar architecture: it fetches and decodes four instructions per cycle. Each decoded instruction is appended to one of three instruction queues, and each queue can perform dynamic scheduling of instructions. The queues determine the execution order based on the availability of the required execution units. Instructions are initially fetched and decoded in order, but can be executed and completed out of order, allowing the processor to have up to 32 instructions in various stages of execution. Instructions are processed in six partially independent pipelines, shown in Figure 11.21. The fetch pipeline reads instructions from the instruction cache, decodes them, renames their registers, and places them in the three instruction queues. The instruction queues contain integer, address-calculate, and floating-point instructions; from these queues, instructions are dynamically issued to the five pipelined execution units.

Each pipeline in the R10000 includes stages for fetching (stage 1 in Figure 11.21), decoding (stage 2), issuing instructions (stage 3), reading register operands (stage 3), executing instructions (stages 4 through 6), and storing the result (stage 7).

The processor keeps the decoded instructions in three instruction queues: the integer queue, the address queue, and the floating-point queue. These queues allow the processor to fetch instructions at its maximum rate, without stalling because of instruction conflicts or dependencies. Each queue uses instruction tags to keep track of the instruction in each execution pipeline stage; these tags set a "done" bit in the active list as each instruction is completed.

The integer queue issues instructions to the two integer arithmetic units, ALU1 and ALU2. It contains 16 instruction entries; up to four instructions may be written per cycle, with newly decoded integer instructions written into empty entries in no particular order. Instructions remain in the queue only until they have been issued. Branch and shift instructions can be issued only to ALU1, integer multiply and divide instructions only to ALU2; other integer instructions can be issued to either ALU. The integer queue controls six dedicated ports to the integer register file: two operand-read ports and a destination-write port for each ALU.

The floating-point queue issues instructions to the floating-point multiplier and the floating-point adder. It contains 16 instruction entries; up to four instructions may be written per cycle into empty entries in random order, and instructions remain in the queue only until they have been issued to a floating-point execution unit. The floating-point queue controls six dedicated ports to the floating-point register file: two operand-read ports and a destination port for each execution unit. The queue uses the multiplier's issue port to issue instructions to the square-root and divide units, which also share the multiplier's register ports. The floating-point queue contains simple sequencing logic for multiple-pass instructions such as multiply-add, which require one pass through the multiplier and one pass through the adder.

The address queue issues instructions to the load/store unit and contains 16 instruction entries. Unlike the other two queues, the address queue is organized as a circular FIFO buffer: a newly decoded load/store instruction is written into the next available sequential empty entry, and up to four instructions may be written per cycle. The FIFO order maintains the program's original instruction sequence, so that memory-address dependencies may be computed easily. Instructions remain in this queue until they have graduated; they cannot be deleted immediately after being issued, since the load/store unit may not be able to complete the operation immediately. An issued instruction may fail to complete because of a memory dependency, a cache miss, or a resource conflict; in these cases, the queue must continue to reissue the instruction until it is completed. The address queue has three issue ports:

1. It issues each instruction once to the address-calculation unit. This unit uses a two-stage pipeline to compute the instruction's memory address and to translate it in the TLB; the addresses are stored in the queue's dependency logic. This port controls two dedicated read ports to the integer register file. If the cache is available, it is accessed at the same time as the TLB, and a tag check can be performed even if the data array is busy.

2. The address queue can reissue accesses to the data cache. The queue allocates usage of the four sections of the cache, which consist of the tag and data sections of the two cache banks. Load and store instructions begin with a tag-check cycle, which checks whether the desired address is already in the cache; if it is not, a refill operation is initiated and the instruction waits until it has completed. Load instructions also read and align a doubleword value from the data array; this access may be either concurrent with or subsequent to the tag check. If the data is present and no dependencies exist, the instruction is marked done in the queue.

3. The address queue can issue store instructions to the data cache. A store instruction may not modify the data cache until it graduates; only one store can graduate per cycle, but it may be anywhere within the four oldest instructions, provided all previous instructions have already been completed.

The access and store ports share four register-file ports: integer read and write, and floating-point read and write. These shared ports are also used for jump-and-link and jump-register instructions, and for move instructions between the integer and floating-point register files.

The three instruction queues can issue one new instruction per cycle to each of the five execution pipelines:

1. The integer queue issues instructions to the two integer ALU pipelines.
2. The address queue issues one instruction to the load/store-unit pipeline.
3. The floating-point queue issues instructions to the floating-point adder and multiplier pipelines.

A sixth pipeline, the fetch pipeline, reads and decodes instructions from the instruction cache.

The 64-bit integer pipeline has the following characteristics:

1. A 16-entry integer instruction queue that dynamically issues instructions.
2. A 64-bit, 64-location integer physical register file with seven read and three write ports.
3. Two 64-bit arithmetic logic units: (a) an ALU with a shifter and integer branch comparator, and (b) an ALU with an integer multiplier and divider.

The load/store pipeline has the following characteristics:

1. A 16-entry address queue that dynamically issues instructions and uses the integer register file for base and index registers.
2. A 16-entry address stack for use by non-blocking loads and stores.
3. A 44-bit virtual-address calculation unit.
4. A 64-entry, fully associative translation lookaside buffer (TLB), which converts virtual addresses to physical addresses using a 40-bit physical address; each entry maps two pages, with sizes ranging from 4 KB to 16 MB in powers of 4.

The 64-bit floating-point pipeline has the following characteristics:

1. A 16-entry instruction queue with dynamic issue.
2. A 64-bit, 64-location floating-point physical register file (32 logical registers) with five read and three write ports.
3. A 64-bit parallel multiply unit (3-cycle pipeline with 2-cycle latency), which also performs move instructions.
4. A 64-bit add unit (3-cycle pipeline with 2-cycle latency), which handles addition, subtraction, and miscellaneous floating-point operations.
5. Separate 64-bit divide and square-root units that can operate concurrently; these units share their issue and completion logic with the floating-point multiplier.
11.4.3.3 Functional Units

The five execution pipelines allow overlapped instruction issuing. Instructions are executed in the following five functional units: two integer ALUs (ALU1 and ALU2), the load/store unit (address calculate), the floating-point adder, and the floating-point multiplier. There are also three iterative units that compute more complex results. Integer multiply and divide operations are performed by an integer multiply/divide execution unit; these instructions are issued to ALU2, and ALU2 remains busy for the duration of the divide. Floating-point divides are performed by the divide execution unit; these instructions are issued to the floating-point multiplier. Floating-point square root is performed by the square-root execution unit; these instructions are also issued to the floating-point multiplier.

The instruction decode and rename unit has the following characteristics: (1) it processes four instructions in parallel; (2) it replaces logical register numbers with physical register numbers (register renaming); (3) it maps the integer registers with a 33-word-by-6-bit mapping table that has 4 write and 12 read ports; (4) it maps the floating-point registers with a 32-word-by-6-bit mapping table that has 4 write and 16 read ports; (5) it maintains a 32-entry active list of the instructions within the pipeline.

The branch unit has the following characteristics: (1) it allows one branch per cycle; (2) conditional branches can be executed speculatively, up to 4 deep; (3) it has a 44-bit adder to compute branch addresses; (4) a branch return cache contains the four instructions following a subroutine call, for rapid use when returning from leaf subroutines; (5) a program trace RAM stores the program counter of each instruction in the pipeline.

11.4.3.4 Pipeline Stages

Stage 1: The processor fetches four instructions each cycle, independent of their alignment in the instruction cache, except that the processor cannot fetch across a 16-word cache block boundary. These words are aligned in a four-word instruction register. Any instructions left over from the previous decode cycle are merged with new words from the instruction cache to fill the instruction register.

Stage 2: The four instructions in the instruction register are decoded and renamed. Renaming determines any dependencies between the instructions
and provides precise exception handling. The logical registers referenced by an instruction are mapped to physical registers; integer and floating-point registers are renamed independently. A logical register is mapped to a new physical register whenever that logical register is the destination of an instruction. Thus, when an instruction places a new value in a logical register, the logical register is renamed (mapped to a new physical register), while its previous value is retained in the old physical register. As each instruction is renamed, its logical register numbers are compared to determine whether any dependencies exist among the four instructions decoded during the same cycle. After the physical register numbers become known, the physical register busy table indicates whether or not each operand is valid. The renamed instructions are loaded into the integer or floating-point instruction queues. Only one branch instruction can be executed during stage 2; if the instruction register contains a second branch instruction, that branch is not decoded until the next cycle. The branch unit determines the next address for the program counter; if a branch is taken and later reversed, the branch resume cache provides the instructions to be decoded during the next cycle.

Stage 3: Decoded instructions are written into the queues. Stage 3 is also the start of each of the five execution pipelines.

Stages 4-6: Instructions are executed in the various functional units. Each unit's execution process is described below.

Floating-point multiplier: a three-stage pipeline in which single- and double-precision multiply and conditional-move operations are executed with a two-cycle latency and a one-cycle repeat rate. The multiplication is completed during the first two cycles; the third cycle is used to pack and transfer the result.

Floating-point divide and square-root units: single- and double-precision division and square-root operations can be executed in parallel by separate units; these units share their issue and completion logic with the floating-point multiplier.

Floating-point adder: a three-stage pipeline in which single- and double-precision add, subtract, compare, and convert operations are executed with a two-cycle latency and a one-cycle repeat rate. Although the final result is not calculated until the third pipeline stage, internal bypass paths set a two-cycle latency for dependent add and multiply instructions.

Integer ALU1: a one-stage pipeline in which integer add, subtract, shift, and logic operations are executed with a one-cycle latency and a one-cycle repeat rate. This ALU also verifies predictions made for branches
that are conditional on integer register values.

Integer ALU2: a one-stage pipeline in which integer add, subtract, and logic operations are executed with a one-cycle latency and a one-cycle repeat rate. Integer multiply and divide operations take more than one cycle.

A single memory address can be calculated every cycle for use by either an integer or a floating-point load or store instruction. For load operations, the calculated address is translated from a 44-bit virtual address into a 40-bit physical address using the TLB. The TLB contains 64 entries, each of which can translate two pages; each entry can select a page size ranging from 4 KB to 16 MB, inclusive, in powers of 4, as shown in Figure 11-22. Load instructions have a two-cycle latency if the addressed data are already within the data cache. Store instructions do not modify the data cache or memory until they graduate.

11.4.3.5 Cache

The R10000 contains an on-chip cache consisting of a 32-KB primary data cache and a separate 32-KB instruction cache, which is considered a large on-chip cache. The processor also controls an off-chip secondary cache that can range in size from 512 KB to 16 MB. The primary data cache consists of two equal 16-KB banks; the data cache is two-way interleaved between the two banks and is two-way set associative, using a least-recently-used (LRU) replacement algorithm. The data cache line size is 32 bytes; that is, the data cache has a fixed block size of eight words. The interleaved data cache design is used because the access times of currently available random-access memories are long relative to the processor cycle time. The interleaved data cache allows memory requests to be overlapped, which in turn provides the ability to hide the access and recovery times of each bank. To speed access and hide delays, the processor: (1) can execute up to 16 load and store instructions speculatively and out of order, using non-blocking primary and secondary caches; that is, it looks ahead in the instruction stream to find load and store instructions that can be executed early, and if the addressed data block is not in the primary cache, the processor initiates a cache refill as soon as possible; (2) completes a cache refill initiated by a speculatively executed load even if the load instruction is aborted; (3) gives priority to refill and interrogate operations in the external interface.
When the primary cache is refilled, the required data can be streamed directly to the waiting load instructions. (4) The external interface can handle up to four non-blocking memory accesses to the secondary cache and main memory.

The interleaved data cache increases the complexity required to support it. The complexity begins with each cache bank containing independent tag and data arrays, whose four sections can be allocated separately to achieve high utilization. Five separate circuits compete for cache bandwidth: address calculate, tag check, the load unit, the store unit, and the external interface. The data cache is non-blocking, which allows cache accesses to continue even though a cache miss has occurred. This is important to cache performance, because it gives the data cache the ability to stack memory references, queuing up multiple cache misses and servicing them simultaneously.

The data cache uses a write-back protocol, which means a cache store writes data into the cache instead of writing it directly to memory. When data is written into the data cache, it is tagged as dirty. Before a dirty block can be replaced by a new frame, it must be written back to the off-chip secondary cache; the secondary cache in turn writes back to main memory. This protocol is used to maintain data consistency. Note that the data cache is written back before the secondary cache writes back to main memory, because the data cache is a subset of the secondary cache; the data cache is said to be inconsistent when it has been modified relative to the corresponding data in the secondary cache.

A data cache block is in one of the following four states at any given time: invalid, clean exclusive, dirty exclusive, or shared. The processor requires each cache block to have a single owner at all times; thus the processor adheres to certain ownership rules. The processor assumes ownership of a cache block if the state of the block becomes dirty exclusive; on a processor upgrade request, the processor assumes ownership of the block after receiving an external acknowledge-completion response. The processor gives up ownership of a cache block if the state of the block changes to invalid, clean exclusive, or shared; clean exclusive and shared cache blocks are always considered owned by memory. The events that trigger a change of state in a data cache block include the following: (1) a primary data cache read/write miss; (2) a primary data cache hit; (3) subset enforcement; (4) a cache instruction; (5) an external intervention shared request; (6) an intervention exclusive request.

The secondary cache is located off-chip and can range in size from 512 KB to 16 MB. It is interfaced to the R10000 with a 128-bit data bus that can operate at a maximum of 200 MHz, yielding a maximum transfer rate of 3.2 GB/s. This dedicated cache bus cannot be interrupted by other bus traffic in the system; thus, when a cache miss occurs, access to the secondary cache is immediate. It is therefore fair to say that the secondary cache interface approaches zero-wait-state performance: when the cache receives a data request, it is always able to return the data on the following clock cycle. External interface circuitry is required for a secondary cache system. The secondary cache maintains many of the features of the primary data cache: it is two-way set associative, uses an LRU replacement algorithm, uses a write-back protocol, and is in one of the same four states: invalid, clean exclusive, dirty exclusive, or shared. The events that trigger a state change in a secondary cache block are those of the primary data cache, except for events 2 and 4: (1) a primary data cache read/write miss; (2) a data cache write hit to a shared or clean exclusive block; (3) a secondary cache read miss; (4) a secondary cache write hit to a shared or clean exclusive block; (5) a cache instruction; (6) an external intervention shared request; (7) an intervention exclusive request; (8) an invalidate request.

The R10000 has a 32-KB on-chip instruction cache. It is two-way set associative with a fixed block size of 16 words (a line size of 64 bytes) and uses an LRU replacement algorithm. At any given time an instruction cache block is in one of two states, valid or invalid. The following events cause an instruction cache block to change states: (1) a primary instruction cache read miss; (2) subset property enforcement; (3) various cache instructions; (4) external intervention exclusive and invalidate requests.

The behavior of the processor when executing load and store instructions is determined by the cache algorithm specified for the accessed address. The processor supports five different cache algorithms: (1) uncached; (2) cacheable noncoherent; (3) cacheable coherent exclusive; (4) cacheable coherent exclusive on write; (5) uncached accelerated. Loads and stores under the uncached cache algorithm bypass the primary and secondary caches. Under the cacheable noncoherent cache algorithm, load and store secondary cache misses result in processor noncoherent block read requests.
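The four-state block ownership scheme described above can be sketched as a small transition function. The state names come from the text; the event names and the transitions shown are illustrative simplifications of the ownership rules, not a complete or cycle-accurate protocol model:

```python
# Illustrative sketch of the R10000 data-cache block states and a few of
# the ownership rules described above (simplified, not a full protocol).
STATES = {"invalid", "clean_exclusive", "dirty_exclusive", "shared"}

def next_state(state, event):
    """Return the new block state for a given event (simplified model)."""
    assert state in STATES
    if event == "processor_write":
        # A write makes the block dirty; the processor assumes ownership.
        return "dirty_exclusive"
    if event == "intervention_shared":
        # An external agent requests a shared copy; ownership is given up.
        return "shared"
    if event == "intervention_exclusive":
        # An external agent requests exclusive access; block is invalidated.
        return "invalid"
    return state  # other events leave the state unchanged in this sketch

s = "clean_exclusive"
s = next_state(s, "processor_write")      # processor assumes ownership
s = next_state(s, "intervention_shared")  # ownership returned to memory
print(s)
```

Because clean exclusive and shared blocks are owned by memory, only the `dirty_exclusive` state obliges the processor to supply (write back) the data when the block is displaced or intervened upon.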
Under the cacheable coherent exclusive cache algorithm, load and store secondary cache misses result in processor coherent block read exclusive requests; these requests indicate to external agents containing caches that a coherency check must be performed and that the cache block must be returned in the exclusive state. The cacheable coherent exclusive on write cache algorithm is similar, except that load secondary cache misses result in processor coherent block read shared requests. The R10000 implements a new cache algorithm, uncached accelerated, which allows the kernel to mark TLB entries or regions of the physical address space so that certain blocks of data are uncached, signaling the hardware that it may gather a number of uncached writes together: a series of writes to the same address, or sequential writes to all the addresses in a block, can be put in the uncached accelerated buffer and issued to the system interface as processor block write requests. The uncached accelerated algorithm differs from the uncached algorithm in that this block write gathering is performed.

11.4.3.6 Processor Operating Modes

The R10000 has three operating modes, listed in order of decreasing system privilege: (1) kernel mode, the highest system privilege, in which the innermost core of the operating system runs and any register can be accessed or changed; (2) supervisor mode, which has fewer privileges and is used for less-critical sections of the operating system; (3) user mode, the lowest system privilege, which prevents users from interfering with one another. The selection among the three modes is made by the operating system (in kernel mode) by writing to the KSU field of the status register. The processor is forced into kernel mode while it is handling an error (the ERL bit is set) or an exception (the EXL bit is set). Figure 11-23 shows the selection of operating modes with respect to the KSU, EXL, and ERL bits; it also shows the different instruction sets and addressing modes enabled by the status register's XX, UX, SX, and KX bits. In kernel mode, the KX bit allows 64-bit addressing; all instructions are always valid. In supervisor mode, the SX bit allows 64-bit addressing and the MIPS III instructions; the MIPS IV ISA cannot be enabled at the same time as supervisor mode. In user mode, the UX bit allows 64-bit addressing and the MIPS III instructions, and the XX bit allows the new MIPS IV instructions.

The processor uses either 32-bit or 64-bit address spaces, depending on the operating and addressing modes set in the status register. The region bits of a virtual address (VA) select the address region; mapped virtual addresses are translated through the TLB, and a sign-extension bit of the VA distinguishes the 32-bit and 64-bit address modes. The memory address space is divided into many regions, as shown in Figure 11-24, each with specific characteristics and uses. The user can access only the useg region in 32-bit mode, or xuseg in 64-bit mode, as shown in the figure. The supervisor can access the user region as well as sseg in 32-bit mode, or xsseg and csseg in 64-bit mode, as shown in Figure 11-24. The kernel can access all regions except those restricted because some bits of the VA are not implemented in the TLB.

In user mode a single, uniform virtual address space, labeled the user segment, is available; its size is 2 GB in 32-bit mode (useg) and 16 TB in 64-bit mode (xuseg). When UX = 0 in the status register, user-mode addressing is compatible with the 32-bit addressing mode and a 2-GB user address space, labeled useg, is available. All valid user-mode virtual addresses have their most significant bit cleared to 0; an attempt to reference an address with the most significant bit set while in user mode causes an address error exception. When UX = 1 in the status register, user-mode addressing is extended to the 64-bit model shown in Figure 11-24. In 64-bit user mode the processor provides a single, uniform virtual address space of 2^44 bytes, labeled xuseg. All valid user-mode virtual addresses have bits 63:44 equal to 0; an attempt to reference an address with bits 63:44 not equal to 0 causes an address error exception (Figure 11-25).

Supervisor mode is designed for layered operating systems, in which a true kernel runs with the processor in kernel mode and the rest of the operating system runs in supervisor mode (Figure 11-26). When SX = 0 in the status register and the most significant bit of the 32-bit virtual address is 0, the suseg virtual address space is selected; it covers the full 2^31 bytes of the current user address space, and the virtual address is extended with the contents of the 8-bit ASID field to form a unique virtual address. When SX = 0 in the status register and the three most significant bits of the 32-bit virtual address are 110 (binary), the sseg virtual address space is selected; it covers the 2^29 bytes of the current supervisor address space (Figure 11-25). The processor operates in kernel
mode when the status register contains the kernel-mode bit values shown in Figure 11-23. In kernel mode, the virtual address space is divided into regions differentiated by the high-order bits of the virtual address. When KX = 0 in the status register and the most significant bit of the virtual address (A31) is cleared, the 32-bit kuseg virtual address space is selected; it covers the full 2 GB of the current user address space. When KX = 0 and the three most significant bits of the virtual address are 100 (binary), the 32-bit kseg virtual address space is selected; references to kseg are not mapped through the TLB. When KX = 0 and the three most significant bits of the 32-bit virtual address are 110 (binary), the ksseg virtual address space is selected: the current 512-MB supervisor virtual space. When KX = 1 in the status register and bits 63:62 of the 64-bit virtual address are 10 (binary), one of the xkphys virtual address spaces is selected from a set of eight kernel physical spaces. Each kernel physical space contains either one or four 2^40-byte physical pages. References to this space are not mapped; the physical address is taken directly from bits 39:0 of the virtual address. Bits 61:59 of the virtual address specify the cache algorithm; if the cache algorithm is either uncached or uncached accelerated, the space contains four physical pages, and an access to an address whose bits 56:40 are not equal to 0 causes an address error exception.

Virtual-to-physical address translations are maintained by the operating system, using page tables in memory. A subset of these translations is loaded into a hardware buffer, the TLB, which contains 64 entries, each mapping a pair of virtual pages. The formats of TLB entries are shown in Figure 11-27. The cache algorithm fields of the TLB EntryLo0 and EntryLo1 and of the Config registers indicate whether the data is cached. Figure 11-27 shows the TLB entry formats in 32-bit and 64-bit modes; each field of an entry corresponds to a field of the EntryHi, EntryLo0, EntryLo1, or PageMask registers. A 64-bit entry is unnecessarily large: only the low 44 address bits are translated, the high two virtual address bits (63:62) select the user, supervisor, or kernel address spaces, and the intermediate address bits 61:44 must be either all zeros or all ones, depending on the address region. For data cache accesses, the joint TLB (JTLB) translates addresses from the address calculation unit; for instruction accesses, the JTLB translates the PC address on a miss in the instruction TLB, and the entry is copied into the ITLB so that subsequent accesses are independent. Each task or process has a separate address space
and is assigned a unique 8-bit address-space identifier (ASID). This identifier is stored with each TLB entry to distinguish entries loaded for different processes. The ASID allows the processor to move from one process to another (a context switch) without having to invalidate TLB entries.

11.4.3.7 Floating-Point Units

The R10000 contains two primary floating-point units: the adder unit handles add operations, and the multiply unit handles multiply operations. In addition, two secondary floating-point units handle the long-latency divide and square-root operations. Addition, subtraction, and conversion instructions have a two-cycle latency and a one-cycle repeat rate and are handled within the adder unit. Instructions that convert integer values to single-precision floating-point values have a four-cycle latency, as they must pass through the adder twice; the adder is busy during the second cycle after such an instruction is issued. Floating-point multiply operations execute with a two-cycle latency and a one-cycle repeat rate and are handled within the multiplier unit; the multiplier performs only multiply operations. The floating-point divide and square-root units perform their calculations using iterative algorithms. These units are not pipelined and cannot begin another operation until the current operation has completed; thus the repeat rate approximately equals the latency. The ports of the multiplier are shared with the divide and square-root units, so a cycle is lost at the beginning of an operation (to fetch the operands) and at the end (to store the result).

The floating-point multiply-add operation, which occurs frequently, can be computed using separate multiply and add operations, but the combined multiply-add instruction (MADD) has a four-cycle latency and a one-cycle repeat rate; it improves performance by eliminating the fetching and decoding of an extra instruction. The divide and square-root units use separate circuitry and can operate simultaneously; however, the floating-point queue cannot issue both instructions during the same cycle.

The floating-point add, multiply, divide, and square-root units read their operands from, and store their results in, the floating-point register file; values are loaded to or stored from the register file by the load/store and move units. A logic diagram of floating-point operations is shown in Figure 11-28: data and instructions are read from the secondary
cache into the primary caches. After the processor decodes floating-point instructions, they are appended to the floating-point queue and then passed to the FP register file when dynamically issued to the appropriate functional unit for execution. From the functional unit, results can be stored in the register file or the primary data cache.

The floating-point queue can issue one instruction to the adder unit and one instruction to the multiplier unit in each cycle. The adder and the multiplier each have two dedicated read ports and a dedicated write port into the floating-point register file. Because of their low repeat rates, the divide and square-root units do not have issue ports of their own; instead, their instructions are issued through the multiplier unit, which uses its operand registers and bypass logic as appropriate, in the second cycle after storing its result. When an instruction is issued, its two operands are read from the dedicated read ports of the floating-point register file; when the operation is completed, the result is written back into the register file using the dedicated write port. For the add and multiply units, this write occurs four cycles after the operands are read.

Control of floating-point execution is shared among the following units: (1) the floating-point queue determines operand dependencies and dynamically issues instructions to the execution units; it also controls the destination registers and register bypass; (2) the execution units control the arithmetic operations and generate status; (3) the graduate unit saves the status of instructions until they graduate and then updates the floating-point status register.

The floating-point unit is a hardware implementation of Coprocessor 1 of the MIPS IV ISA. The MIPS IV ISA defines 32 logical floating-point general registers (FGRs), as shown in Figure 11-29. Each FGR is 64 bits wide and can hold either a 32-bit single-precision or a 64-bit double-precision value. The hardware actually contains 64 physical 64-bit registers in the floating-point register file, from which the 32 logical registers are taken. Each floating-point instruction uses a 5-bit logical number to select an individual FGR; logical numbers are mapped to physical registers by the rename unit (in pipeline stage 2), and the floating-point unit executes using the physical registers, selected with 6-bit addresses. The FR bit (bit 26) of the status register determines the number of logical floating-point registers. When FR = 1, floating-point loads and stores operate as follows: (1) single-precision operands are read from the low half of the register, with the upper half ignored, and single-precision results are written into the low half of the register; the high half of the result
register is architecturally undefined (the R10000 implementation sets it to zero); (2) double-precision arithmetic operations use the entire 64-bit contents of each operand and result register. With register renaming, every new result is written into a temporary register, and conditional-move instructions select between the new operand and the previous old value. The high half of the destination register of a single-precision conditional-move instruction is undefined, even if no move occurs.

The load/store unit consists of the address queue, the address calculation unit, the TLB, the address stack, the store buffer, and the primary data cache. The load/store unit performs load, store, prefetch, and cache instructions. Load and store instructions begin with a three-cycle sequence that issues the instruction, calculates its virtual address, and translates the virtual address to a physical address. After translation, the operation accesses the data cache and the required data transfer is completed, provided there is a primary data cache hit. On a cache miss, or if the necessary shared register ports are busy, the data cache and data cache tag access must be repeated, and the data is obtained from either the secondary cache or main memory. The TLB contains 64 entries and translates virtual addresses to physical addresses; the virtual address can originate from either the address calculation unit or the program counter (PC).

11.4.3.8 Interrupts

Figure 11-30 shows the interrupt structure of the R10000.

11.4.4 Intel Corporation's Itanium

The Itanium processor, with a speed of 733-800 MHz, was the first of a family of 64-bit processors from Intel, released in 2001; the Itanium 2 processor, at 900 MHz-1 GHz, was launched in 2002. Figure 11-31 shows the block diagram of the Itanium processor. The architectural components of the processor include: (1) the functional units; (2) a cache with three levels (L1, L2, and L3); (3) the register stack engine (RSE); (4) the bus. The processor consists of four floating-point units, four integer units, four multimedia extension (MMX) units, and an IA-32 decode
and control unit. Of the four floating-point units, two work with 82-bit precision and the other two are used for 32-bit precision operations. All four can perform fused multiply-accumulate (FMAC) operations, along with two-operand addition and subtraction as well as floating-point and integer multiplication; they can also execute several single-operand instructions, such as conversions between floating-point and integer, precision conversion, negation, and absolute value. The integer units are used for arithmetic, integer, and character manipulations. The MMX units accommodate instructions for multimedia operations. The IA-32 decode and control unit provides compatibility with the IA-32 families.

Figure 11-32 shows the cache hierarchy, which consists of three caches: L1, L2, and L3. L1 is separated into a data cache (L1D) and an instruction cache (L1I). The L1 instruction (L1I) cache is a dual-ported 16-KB cache; one port is used for instruction fetch and the other for prefetches. It is 4-way set-associative, fully pipelined, and physically indexed and tagged. The L1 data (L1D) cache is four-ported and 16 KB in size; it supports two concurrent loads and two stores and is used for caching integer data only. It is physically indexed and tagged for both loads and stores. The L2 cache is 256 KB and four-ported, supporting four concurrent accesses; it is 8-way set-associative and physically indexed and tagged. The L2 cache handles L1I and L1D cache misses and the accesses of the four floating-point data units: data can be requested through one, two, three, or four ports by the floating-point units, while two ports are used on an L1 cache miss. When a miss occurs in the L2 cache, the request is forwarded to the L3 cache. The L3 cache is either 1.5 MB or 3 MB in size, fully pipelined, single-ported, and 12-way set-associative; it can support eight requests and is physically indexed and tagged. It handles all requests caused by an L2 cache miss. The advanced load address table (ALAT) is a cache structure that enables data speculation in the Itanium 2 processor; it keeps information on speculative data loads in a fully associative array and handles two loads and two stores.

The Itanium 2 processor has two types of translation lookaside buffers: the data translation lookaside buffer (DTLB) and the instruction translation lookaside buffer (ITLB). There are two levels of DTLB: L1 DTLB and L2 DTLB. The first-level DTLB is fully associative and performs virtual-to-physical address translations for load transactions that hit in the L1 cache. It has three ports (two read ports and one write port) and supports 4-KB
pages. The second-level DTLB handles virtual-to-physical address translations for data memory references and stores; it is fully associative, has four ports, and supports page sizes from 4 KB to 4 GB. The ITLB is categorized into two levels: the level-1 ITLB (ITLB1) and the level-2 ITLB (ITLB2). ITLB1 is dual-ported and fully associative; it is responsible for virtual-to-physical address translations for instruction fetches that hit in the L1 cache and supports page sizes from 4 KB to 4 GB. ITLB2 is fully associative and is responsible for virtual-to-physical address translations for instruction memory references that miss ITLB1; it supports page sizes from 4 KB to 4 GB.

11.4.4.1 Registers

The Itanium contains 128 integer registers, 128 floating-point registers, 64 predicate registers, and 8 branch registers; machine state in the 128-register application store is controlled by the RSE, which manages the data in the integer registers. Figure 11-33 shows the complete register set, which consists of:

1. General-purpose registers: a set of 128 64-bit general-purpose registers, named GR0-GR127, partitioned into the static general registers (0 through 31) and the stacked general registers (32 through 127). GR0 always reads 0 when sourced as an operand, and an illegal-operation fault occurs on any attempt to write to GR0.
2. Floating-point registers: floating-point computation uses a set of 128 82-bit floating-point registers, named FR0-FR127, divided into two subsets: the static floating-point registers FR0 through FR31 and the rotating floating-point registers FR32-FR127. FR0 always reads +0.0 when sourced and FR1 always reads +1.0 when sourced; a fault occurs if either FR0 or FR1 is used as a destination.
3. Predicate registers: a set of 64 1-bit registers, named PR0-PR63, that hold the results of compare instructions. These registers are partitioned into the static predicate registers PR0-PR15 and the rotating predicate registers, which extend from PR16 to PR63. PR0 always reads 1, and the result is discarded if it is used as a destination. The rotating registers support software-pipelined loops, while the static predicate registers are used for conditional branching.
4. Branch registers: 8 64-bit branch registers, named BR0-BR7,
which are used to hold indirect branching information.
5. Kernel registers: 8 64-bit kernel registers, named KR0-KR7, used to communicate information from the operating-system kernel to an application.
6. Current frame marker (CFM): a 64-bit register that describes the state of the general register stack and is used in stack-frame operations.
7. Instruction pointer: a 64-bit pointer that holds the address of the current 16-byte-aligned bundle in IA-64 mode, or the offset of the 1-byte-aligned instruction in IA-32 mode.
8. Performance monitor data (PMD) registers: the data registers for the performance-monitor hardware.
9. User mask (UM): a set of single-bit values used to monitor floating-point register usage and the performance monitors.
10. Processor identifiers (CPUID): registers that describe implementation-dependent processor features.
11. Several 64-bit registers with operating-system-specific, hardware-specific, and application-specific uses, covering hardware control and system configuration.

The RSE is used to remove the latency (delay) caused by the saving and restoring of data-processing registers that would otherwise be necessary when entering and leaving a procedure. It provides for fast procedure calls by passing arguments in registers, as opposed to the stack. When a procedure is called, a new frame of registers is made available to the called procedure without the need for an explicit save of the caller's registers; the old registers remain in the large on-chip physical register file as long as there is enough physical capacity. When the number of registers needed overflows the available physical capacity, the state machine called the RSE saves registers to memory to free the registers needed for the upcoming call. The RSE thus maintains the illusion of an infinite number of registers. On a call return, the base register is restored to the value used by the caller; using this, registers from before the call can often be accessed as soon as the return is encountered, even if registers needed to be saved, making it unnecessary to restore them in such cases. If the RSE has saved the callee's registers, the processor stalls on the return until the RSE can restore the appropriate number of the callee's registers. The bus of the Itanium processor is 128 bits wide and operates at a clock frequency of 400 MHz, transferring 6.4 GB/s.

11.4.4.2 Memory

Memory is byte addressable and accessed with 64-bit
pointers; 32-bit pointers are manipulated in 64-bit registers. Memory in IA-64 can be addressed in units of 1, 2, 4, 8, 10, and 16 bytes. Although data in IA-64 can be aligned on any boundary, IA-64 recommends that items be aligned on naturally aligned boundaries for the object size; for example, words should be aligned on word boundaries. The one exception is 10-byte floating-point values, which should be aligned on 16-byte boundaries. Quantities loaded from memory into general registers are placed in the least significant portion of the register. There are two endian models: big endian and little endian. Little endian results in the least significant byte of the load operand being stored in the least significant byte of the target operand; big endian results in the most significant byte of the load operand being stored in the least significant byte of the target operand. IA-64 specifies which endian model is to be used; IA-32 CPUs are little endian. IA-64 instruction fetches are performed little endian regardless of the current endian mode, so an instruction appears in reverse byte order if instructions are read as big-endian data.

11.4.4.3 Instruction Package

There are six types of instructions, shown in Table 11-3. Three 41-bit instructions are grouped together into 128-bit sized and aligned containers called bundles; a pointer in bits 0-4 of the bundle indicates the kinds of instructions that are packed in it. The Itanium architecture allows the independent instructions in these bundles to be issued for parallel execution. Of the 32 possible kinds of packaging, 8 are not used, reducing the number to 24. The little-endian format of a bundle appears in Figure 11-34. The instructions in a bundle are executed in the order described below: (1) bundles are ordered from the lowest to the highest memory address; instructions in a bundle at a lower memory address precede instructions in bundles at higher memory addresses; (2) within a bundle, the instructions are ordered from instruction 1 to instruction 3.

11.4.4.4 Instruction Set Transition Model

Two operating environments are supported by the Itanium architecture: (1) the IA-32 system environment, which supports IA-32 32-bit
operating systems; (2) the Itanium system environment, which supports Itanium-based operating systems. The IA-64 processor can execute either IA-32 or Itanium instructions at any time. The Intel IA-64 architecture is compatible with 32-bit software: IA-32 software can run in real mode (16 bits), protected mode (32 bits), and virtual-86 mode (16 bits). The CPU is thus able to operate in IA-64 mode as well as in the IA-32 modes, and special instructions move it from one mode to the other, as shown in Figure 11-35. Three instructions, plus interruptions, make the transition between the IA-32 and Itanium (IA-64) instruction sets: (1) jmpe, an IA-32 instruction, jumps to a 64-bit instruction and changes to IA-64 mode; (2) br.ia, an IA-64 instruction, moves to a 32-bit instruction and changes to IA-32 mode; (3) rfi, an IA-64 instruction, is the return from interruption; the return is to either an IA-32 or an IA-64 situation, depending on the situation present at the moment the interruption was invoked; (4) interrupts transition the processor to the Itanium instruction set for all interrupt conditions.

The Itanium processor is based on explicitly parallel instruction computing (EPIC) technology; it provides pipelining and is capable of the parallel execution of up to six instructions. Some of the EPIC features provided by the Itanium processor follow.

Predication. EPIC uses predicated execution to reduce the branch penalty, as illustrated in the following example. Consider the C source code: if (x == 7) z = 2; else z = 7. The instruction flow would generally be written as: (1) compare x with 7; (2) if equal, go to line 5; (3) z = 2; (4) go to line 6; (5) z = 7; (6) the program continues. The code at line 2 or 4 causes at least one break (goto) in the instruction flow irrespective of the value of x; that is, the program flow is interrupted on every pass. The Itanium architecture instead assigns the result of the compare operation to a predicate bit, which allows or disallows the output to be committed. The C source code above could thus be written using the IA-64 instruction flow: compare x with 7 and store the result in predicate bit p; if p == 1, z = 2; if p == 0, z = 7. In this code, a value of predicate bit p equal to 1 indicates that x == 7, and z is assigned 2; otherwise z is assigned 7. Once p is set, it can be tested in subsequent code.

Speculation. Along with predication, EPIC supports control and data speculation to handle branches. Control speculation is the execution of an operation before the branch that guards it. Consider the code sequence: if (a > b) load(ld_addr1, target1); else load(ld_addr2, target2). If the operation load(ld_addr1, target1) were performed before the determination of (a > b), the operation would be control speculative with respect to the controlling condition (a > b). Under normal execution, the operation load(ld_addr1, target1) may or may not execute; if a control-speculative load causes an exception, the exception is serviced only if (a > b) is true. The compiler uses control speculation and leaves a check operation at the original location; the check verifies whether an exception has occurred and, if it has, branches to recovery code. Data speculation, also known as advance loading, is the execution of a memory load prior to a store that precedes it. Consider the code sequence: store(st_addr, data); load(ld_addr, target); use(target), where ld_addr and st_addr cannot be disambiguated (disambiguation is the process of determining at compile time the relationship between memory addresses). If the load were performed prior to the store, the load would be data speculative with respect to the store: if the memory addresses overlap, a data-speculative load executed before the store might return a different value than a regular load issued after the store. The compiler data-speculates the load and leaves a check instruction at the original location; the load check verifies whether an overlap has occurred and, if it has, branches to recovery code.

Register rotation. EPIC supports register renaming using the register rotation technique; that is, the name of the register is generated dynamically. The rotating register base (RRB) of a rotating register file is added to the register number given in the instruction and, modulo the number of registers in the rotating register file, the generated number is used as the actual register address.

11.5 Summary

The major parameters of concern in the design of control units are speed, cost, complexity, and flexibility. Hardwired control units (HCUs) offer higher speeds than microprogrammed control units (MCUs), while MCUs offer better flexibility. Speed-enhancement techniques applicable to either type of control unit were discussed in this chapter, and commonly used microinstruction formats were introduced along with their speed/cost tradeoffs. Pipelining techniques have been adopted extensively to enhance the performance of serial processing structures, and this chapter covered the basic principles and design methodologies of pipelines. Although the throughput of the machine is improved by employing pipeline techniques, the basic cycle
time of individual instructions remains what it is in nonpipelined implementations. Earlier machines with little pipelining averaged 5-10 clocks per instruction (CPI); modern RISC architectures have achieved CPI values close to 1. Pipelines exploit the instruction-level parallelism in a program, whereby instructions that do not depend on each other are executed in an overlapped manner. If the instruction-level parallelism is sufficient to keep the pipeline flowing full of independent instructions, we achieve the ideal machine with a CPI of 1. Three approaches have been tried to improve performance beyond the ideal CPI case: superpipelining, described in this chapter, and superscalar and VLIW architectures, described in Chapter 10. A superpipelined machine uses deeper pipelines with control mechanisms to keep many stages of the pipeline busy concurrently. Note that as the latency of a stage grows, the instruction issue rate drops, and there is also a higher potential for data interlocks. A superscalar machine allows the issue of more than one instruction per clock into the pipeline, thus achieving an instruction rate higher than the clock rate. In this architecture, hardware evaluates the dependencies among instructions and dynamically schedules a packet of independent instructions for issue each cycle. In a VLIW architecture, each machine instruction corresponds to a packet of independent instructions created by the compiler, and no dynamic hardware scheduling is used. Pipelining techniques are used extensively in architectures today, from microprocessors to supercomputers. The chapter provided details of four pipelined machines.
throughput inadequate, even with all these enhancements. It thus becomes necessary to develop newer architectures, with higher degrees of parallelism than the enhancements described so far provide, in order to circumvent the limits of technology and achieve faster processing speeds.

Suppose an application allows the development of processing algorithms with a degree of parallelism A, the language used to code the algorithm allows a degree of parallelism L, the compiler retains a degree of parallelism C, and the hardware structure of the machine has a degree of parallelism H. For the processing to be efficient, the following relation must be satisfied:

    H ≤ C ≤ L ≤ A

The development of algorithms with a high degree of parallelism is application-dependent. A great deal of research has been devoted to developing languages that allow parallel processing constructs, and compilers that either extract the parallelism from a sequential program or retain the parallelism of the source code through compilation to produce parallel code. This chapter describes popular hardware structures used in executing parallel code. Several hardware structures have evolved over the years, providing for the execution of programs with various degrees of parallelism at various levels. Flynn divided computer architectures into four main classes, based on the number of instruction and data streams:

1. Single instruction stream, single data stream (SISD) machines: uniprocessors such as ASC and the Intel 8080.

2. Single instruction stream, multiple data stream (SIMD) architectures: systems with multiple arithmetic-logic processors and a control processor. Each arithmetic-logic processor processes a data stream of its own, as directed by the single control processor. These are also called array processors or vector processors.

3. Multiple instruction stream, single data stream (MISD) machines: here, a single data stream is simultaneously acted upon by multiple instruction streams. A system with a pipelined ALU can be considered an MISD, although that extends the definition of a data stream somewhat.

4. Multiple instruction stream, multiple data stream (MIMD) machines: these contain multiple processors, each executing its own instruction stream to process the data stream allocated to it. A computer system with one processor and an I/O channel working in parallel is the simplest example of an MIMD (i.e., multiprocessor) system.

We provide details of the last three of the above classifications in Sections 12.1 through 12.3. Section 12.4 addresses the cache coherency problem. Section 12.5 provides a brief description of dataflow architectures, an experimental architecture type that provides the finest granularity of parallelism. Section 12.6 describes systolic architectures. The chapter ends with the description of a supercomputer system in Section 12.7. This chapter concentrates on so-called tightly coupled multiple-processor computer systems; loosely coupled multiple-processor computer systems, forming computer networks and distributed systems, are described in Chapter 14.
may be deactivated during certain operations. The activation and deactivation of processors is handled by the control processor.

The following example compares the operation of an SIMD with that of an SISD system.

Example 12.1. Consider the addition of two n-element vectors A and B, element by element, to create the sum vector C:

    C[i] = A[i] + B[i], 1 ≤ i ≤ n

This computation requires n add times plus the loop-control overhead on an SISD machine; the SISD processor must also fetch the instructions corresponding to the program from memory each time through the loop. Figure 12.3 shows the SIMD implementation of this computation using n PEs. This SIMD is identical to the model of Figure 12.2 and consists of multiple PEs, one CP, and the memory system; the processor interconnection network of Figure 12.2 is not needed for this computation. The elements of arrays A and B are distributed over the n memory blocks, and hence each PE can access one pair of the operands to be added. Thus the program for the SIMD consists of the single instruction

    C = A + B

This instruction is equivalent to C[i] = A[i] + B[i], 1 ≤ i ≤ n, where i represents the PE performing the addition of the ith elements; the range expression implies that all n PEs are active simultaneously. The total execution time of this computation is the time to fetch one instruction plus the time for one addition. The only overhead needed is that the data must be structured over the n memory blocks to provide simultaneous access to the n data elements. Thus SIMDs offer an n-fold throughput enhancement over the SISD, provided the application exhibits data-parallelism of degree n. SIMDs are special-purpose machines suited for array and vector processing.
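The SISD/SIMD contrast in Example 12.1 can be sketched in a few lines. This is only an illustration of the execution models, not real hardware: the SISD version pays one loop iteration per element, while the SIMD version models one broadcast instruction executed by all PEs in lockstep.

```python
# Sketch: SISD adds n elements in n sequential steps; an SIMD machine
# with n PEs performs the same element-wise addition in one lockstep step.

def sisd_add(a, b):
    c = []
    for i in range(len(a)):          # one add (plus loop overhead) per element
        c.append(a[i] + b[i])
    return c

def simd_add(a, b):
    # Each PE i holds a[i] and b[i] in its own memory block; the single
    # broadcast instruction "C = A + B" executes on all PEs at once.
    return [pe_a + pe_b for pe_a, pe_b in zip(a, b)]   # conceptually 1 step

assert sisd_add([1, 2, 3, 4], [5, 6, 7, 8]) == simd_add([1, 2, 3, 4], [5, 6, 7, 8])
```

Both produce the same sum vector; the difference lies only in how many instruction fetch/execute cycles the machine spends.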
the complexity of the PEs drastically. Several INs have been proposed and built over the last few years. This section introduces the terminology and performance measures associated with INs and describes common INs as applied to SIMD systems. The next section extends the description to MIMD architectures.
depending on the distance between the source and destination nodes.

In general, an IN should be able to connect all the nodes in the system to one another, transfer the maximum number of messages per second reliably, and offer minimum cost. Various performance measures used to evaluate INs are described below.

Connectivity (degree of a node): the number of nodes that are immediate neighbors of the node, i.e., the number of nodes that can be reached from the node in one hop. For instance, in a unidirectional ring each node has a degree of 1, since it is connected to only one neighboring node; in a bidirectional ring each node has a degree of 2.

Bandwidth: the total number of messages the network can deliver per unit time, where a message is simply a bit pattern of a certain length consisting of data and/or control information.

Latency: a measure of the overhead involved in transmitting a message over the network from the source node to the destination node. It can be defined as the time required to transmit a zero-length message.

Average distance: the distance between two nodes is the number of links in the shortest path between those nodes in the network. The average distance is given by

    average distance = (sum of d × N_d, for d = 1 to r) / (N − 1)

where N_d is the number of nodes a distance d apart, r is the diameter (i.e., the maximum over all pairs of nodes of the minimum distance between them), and N is the total number of nodes in the network. A low average distance is desirable, but a low average distance would result in nodes of higher degree, and the larger number of communication ports per node may be expensive to implement. Thus a normalized average distance is defined, taking into account P, the number of communication ports per node.

Hardware complexity: proportional to the total number of links and switches in the network; it is desirable to minimize the number of these elements.

Cost: usually measured as the network hardware cost as a fraction of the total system hardware cost.

Modularity: the incremental hardware cost, i.e., how much additional hardware and redesign is needed to expand the network to include additional nodes. It is also important to place a modularity measure on expandability, in terms of how easily the network structure can be expanded by utilizing additional modules.

Regularity: if the network has a regular structure, a pattern can be repeated to form a larger network. This property is especially useful in implementing the network with VLSI circuits.

Reliability and fault tolerance: a measure of the redundancy in the network that allows communication to continue in case of the failure of one or more links.

Additional functionality: a measure of the functions (computations such as message combining, arbitration, etc.) offered by the network in addition to the standard message-transmission function.

A complete (i.e., N-by-N) network with a link from each node to every other node is the ideal network, since it would satisfy the minimum-latency, minimum-average-distance, maximum-bandwidth, and simple-routing criteria. But the complexity of such a network becomes prohibitively large as the number of nodes increases, and its expandability also comes at a high cost. Hence, the complete interconnection scheme cannot be used in networks with a large number of nodes, and other topologies that provide a better cost/performance ratio must be used.

Topologies can be classified as either static or dynamic. In a static topology, the links between two nodes are passive, dedicated paths; they cannot be changed to establish a direct connection to other nodes. The interconnections are usually derived based on a complete analysis of the communication patterns of the application, and the topology does not change as the computation progresses. In dynamic topologies, the interconnection pattern changes as the computation progresses; this changing pattern is brought about by setting the network's active elements (i.e., switches).

The mode of operation of a network can be synchronous, asynchronous, or combined, as dictated by the data-manipulation characteristics of the application. The ring network used in the examples earlier in this chapter is an example of a synchronous network: communication paths are established and message transfer occurs synchronously. In asynchronous networks, connection requests are issued dynamically as the transmission of the message progresses, which means that message flow in an asynchronous network is less orderly than in a synchronous network. A combined network exhibits both modes of operation.
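The degree, diameter, and average-distance measures above can be computed mechanically for any topology. The sketch below (illustrative function names, assuming a bidirectional ring as in the text's running example) derives shortest-path distances by breadth-first search and reports the diameter and average distance.

```python
# Sketch: diameter and average distance of an N-node bidirectional ring,
# computed by breadth-first search over the neighbor lists.
from collections import deque

def distances_from(node, neighbors):
    """Shortest hop counts from `node` to every reachable node (BFS)."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in neighbors[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def ring_metrics(n):
    nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}   # degree 2
    per_node = [distances_from(i, nbrs) for i in range(n)]
    diameter = max(max(d.values()) for d in per_node)
    # The ring is symmetric, so distances from node 0 suffice for the average.
    avg = sum(per_node[0].values()) / (n - 1)
    return diameter, avg

# An 8-node bidirectional ring has diameter 4.
```

For the 8-node bidirectional ring, the distances from node 0 are 0, 1, 2, 3, 4, 3, 2, 1, giving an average distance of 16/7.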
is ideal when large messages are to be transmitted.

In packet switching, the message is broken into small units called packets. Each packet carries a destination address, which is used to route the packet to its destination through the network, with the nodes operating in a store-and-forward manner. The packet travels from the source to the destination one link at a time; at each intermediate node the packet is usually stored (buffered) and then forwarded on the appropriate link based on its destination address. Thus each packet might follow a different route to the destination, and the packets may arrive at the destination out of order; they are reassembled at the destination to form the complete message. Packet switching is efficient for short messages and frequent transmissions, but it increases the hardware complexity of the switches because of the buffering requirement. The packet-switching mode is analogous to the working of the postal system, with each letter acting as a packet: unlike the parts of a single message, the letters from a source to a destination need not follow the same route, and they do not usually arrive in the same order they were sent. The circuit-switching mode is analogous to the telephone network, where a dedicated path is first established and maintained throughout the conversation (although the underlying hardware of the telephone network may use packet switching, the user sees a virtual dedicated connection).

Wormhole switching (cut-through routing) is a combination of the two methods. Here the message is broken into small units called flow-control digits (flits). As in packet switching, the flits follow the same route to the destination; but unlike packets, since the leading flit sets the switches on the path and the others simply follow it, the store-and-forward buffering overhead is reduced.

The routing mechanism can be either static (deterministic) or dynamic (adaptive). Static schemes determine a unique path for a message from source to destination, based solely on the topology; since they do not take the state of the network into consideration, there is a potential for uneven usage of the network resources and for congestion. Dynamic routing, on the other hand, utilizes the state of the network in determining the path for a message and hence can avoid congestion by routing messages around heavily used links and nodes.

Routing is realized by setting the switching elements in the network appropriately. As mentioned earlier, the switch-setting (control) function can either be managed by a centralized controller or be distributed among the switching elements. Further detail on routing mechanisms is provided in the following sections, along with the description of various topologies.
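The store-and-forward versus wormhole tradeoff can be made concrete with first-order latency estimates. The formulas below are standard textbook approximations assumed here (they ignore switch setup and contention, which the text does not quantify): store-and-forward retransmits the whole message at every hop, while wormhole pipelines the flits behind the header.

```python
# Sketch: first-order latency estimates (assumed formulas, no contention).

def store_and_forward(msg_bits, bandwidth, hops):
    # The entire message is buffered and re-sent at each of `hops` links.
    return hops * (msg_bits / bandwidth)

def wormhole(msg_bits, flit_bits, bandwidth, hops):
    # Only the leading flit pays the per-hop cost; the rest pipeline behind it.
    return hops * (flit_bits / bandwidth) + msg_bits / bandwidth

# 1024-bit message, 8-bit flits, 1 bit per time unit, 5 hops:
# store-and-forward: 5 * 1024       = 5120 time units
# wormhole:          5 * 8 + 1024   = 1064 time units
```

The gap grows with both message length and hop count, which is why wormhole switching reduces the buffering overhead so effectively for long paths.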
in the data communication environment, where geographically dispersed nodes are connected for file transfers and resource sharing, and bit-serial data links are used as the communication paths. Note that when a loop network is used to interconnect the PEs of an SIMD system, the message transfers between PEs are simultaneous, since the PEs work in lockstep mode. The transfer is typically controlled by the CP: shift or rotate instructions are issued to transfer data between neighboring PEs, and are issued multiple times for transfers between remote PEs. The links in this case typically carry data in parallel, rather than in bit-serial fashion.

Two-dimensional mesh. The popular two-dimensional nearest-neighbor mesh is shown in Figure 12.5a. The nodes are arranged in the form of a two-dimensional matrix, with each node connected to its four neighbors (north, south, east, and west). The connectivity of the boundary nodes depends on the application. The IBM wire routing machine (WRM) uses a pure mesh, in which the boundary nodes have degree 3 and the corner nodes degree 2. In the mesh network of the ILLIAC-IV, the bottom node of each column is connected to the top node of the same column, and the rightmost node of each row is connected to the leftmost node of the next row; such a network is called a torus. The cost of the two-dimensional network is proportional to N, the number of nodes in the network, and its latency is proportional to the square root of N. The message-transmission delay depends on the distance of the destination node from the source node, so there is a wide range of delays, and the maximum delay increases with N.

Higher-dimensional meshes can be constructed analogously to the one- and two-dimensional meshes. A k-dimensional mesh is constructed by arranging its N nodes in the form of a k-dimensional array and connecting each node to its 2k neighbors by dedicated, bidirectional links. The diameter of such a network is k times the kth root of N.

The routing algorithm commonly employed in mesh networks traverses one dimension at a time. For instance, in a four-dimensional mesh, to find a path from the node labeled (a, b, c, d) to the node labeled (p, q, r, s), we first traverse along the first dimension to node (p, b, c, d), then along the next dimension to (p, q, c, d), then along the third dimension to (p, q, r, d), and finally along the remaining dimension to (p, q, r, s).

It is possible to add a single node to the structure, but in general the number of nodes that can be added and the additional hardware needed depend on the mesh dimensions; the cost and place-modularities are therefore poor. Because of the regular structure, routing is simple. Fault tolerance is high, since the failure of a node or a link can be compensated for by the other nodes and the alternative paths possible.

Star. In the star interconnection scheme (Figure 12.5b), each node is connected to a central switch through a bidirectional
link. The switch forms the apparent source and destination of all messages; it maintains the physical state of the system and routes the messages appropriately. The routing algorithm is trivial: the source node directs the message to the central switch, which in turn directs it to the appropriate destination over one of its dedicated links. Thus, unless one of the nodes involved in the message transmission is the central switch itself (in which case the path is the single link connecting them), the message path consists of two links. The cost and place-modularity of this scheme are good with respect to the PEs but poor with respect to the switch. The major problems are that the switch is a bottleneck and that a switch failure has a catastrophic effect on the system. Each additional node added to the system requires a bidirectional link to the switch and an extension of the switch facilities to accommodate the new node. Note that the central switch basically interconnects the nodes of the network; since the interconnection hardware is centralized in the switch, centralized (instead of distributed) control of routing is often allowed, and the interconnection structure within the switch could itself be of any topology.

Binary trees. In binary tree networks (Figure 12.5c), each interior node has degree 3 (two children and one parent), the leaves have degree 1, and the root has degree 2 (its two children). A simple routing algorithm can be used in tree networks: to reach node y from node x, traverse up the tree from x until an ancestor of y is reached, and then traverse down to y. To find the shortest path, we need to first find the ancestor of x and y at the lowest level of the tree; we ascend from x to that ancestor and then decide, at each step while descending toward y, whether to follow the right or left link. The nodes of the tree are typically numbered consecutively, starting with the root node as 1 and proceeding left to right at each level. The left and right children of node z are numbered 2z and 2z + 1, respectively. Node numbers at level i (with the root at level 1) are i bits long; thus the numbers of the left and right children of a node are obtained by appending a 0 (left child) or a 1 (right child) to the parent's number. With this numbering scheme, in a tree of order N, the path from node x to node y is found as follows: first extract the longest common bit pattern of the numbers of x and y; this is the node number of their common ancestor a. The difference between the lengths of the numbers of x and a is the number of levels to be traversed upward from x to reach a. Then remove the common significant bits from the number of y and traverse downward from a based on the remaining bits, going left on a 0 and right on a 1, until the bits are exhausted (Almasi and Gottlieb, 1989). A tree network with N nodes has a latency of log2 N and a cost proportional to N, and the degree of its nodes is 3, independent of N.

Complete interconnection. Figure 12.6 shows a complete IN, in which each node is
connected to every other node by a dedicated path. Thus a network with N nodes has nodes of degree N − 1 and N(N − 1)/2 links. Since all the nodes are directly connected, the minimal-length path between any two nodes is simply the link connecting them. The routing algorithm is trivial: the source node selects the path to the destination node from among the N − 1 alternative paths available, and the nodes must be equipped to receive messages over a multiplicity of paths. The network has poor cost modularity, since the addition of a node to an N-node network requires N extra links, and all the nodes must be equipped with an additional port to receive messages from the new node. Thus the complexity of the network grows very fast as the number of nodes is increased; hence, complete INs are used only in environments where a small number of nodes (4 to 16) are to be interconnected. The place modularity is also poor, for the same reasons. The network provides high bandwidth, and its fault-tolerance characteristics are good, since the failure of a link does not make the network inoperable: alternative paths are readily available, although the routing scheme then gets more complex.

The Intel Paragon and the MIT Alewife use two-dimensional networks. The MasPar MP-1 and MP-2 use a two-dimensional torus network in which each node is connected to its eight nearest neighbors using shared X connections. The Fujitsu AP3000 is a distributed-memory multicomputer with a two-dimensional torus network, and the Cray T3D and T3E use the three-dimensional torus topology.

Hypercube. The hypercube is a multidimensional near-neighbor network. A k-dimensional hypercube (or k-cube) contains 2^k nodes, each of degree k. If we label each node with a binary number of k bits, the labels of neighboring nodes differ in only one bit position. Figure 12.7 shows hypercubes for various values of k. A 0-cube has one node, as shown in Figure 12.7a. The 1-cube connects two nodes, labeled 0 and 1, as shown in Figure 12.7b. The 2-cube of Figure 12.7c connects four nodes, labeled 00 through 11; each node has degree 2, and the labels of neighboring nodes differ in one bit position. A 3-cube is shown in Figure 12.7d; it has eight nodes, with labels ranging from 000 through 111 (decimal 0 through 7). Node 000, for example, is connected directly to nodes 001, 010, and 100. Message transmission between neighboring nodes requires one hop. To transmit a message from node 000 to node 011, two routes are possible: 000 to 001 to 011, and 000 to 010 to 011. Both routes require two hops; note that the source and destination labels differ in two bits, implying the need for two hops. To generate the routes, a simple strategy can be used: first,
at node 000, the label 000 is compared with the destination address 011. Since they differ in bit positions 2 and 3, the message can be routed to either of the corresponding neighboring nodes, 010 or 001. When the message is at 010, its label is compared with 011; the difference is in position 3, implying that the message should be forwarded to 011. A similar process is used to route the message from 001 to 011. Thus the routing algorithm for the hypercube is simple: for a k-dimensional hypercube, the algorithm uses k steps, and in step i the message is routed to the adjacent node in dimension i if the ith bit of the difference between the current and destination labels is 1; otherwise, the message stays where it is.

Hypercube networks reduce the network latency by increasing the degree of the nodes, i.e., by connecting each of the N nodes to log2 N neighbors. The cost is of the order of N log2 N, and the latency is log2 N. Several commercial architectures using the hypercube topology (from Intel Corporation, Ncube Corporation, and Thinking Machines Corporation) have been available. One major disadvantage of the hypercube topology is that the number of nodes must always be a power of two; thus the number of nodes needs to double every time even a single additional node is required in the network.
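The k-step hypercube routing described above can be sketched directly: the bitwise difference (XOR) of the source and destination labels marks the dimensions to be traversed, one per step. The function name is illustrative.

```python
# Sketch of the k-step dimension-order hypercube routing described above:
# in step i, forward along dimension i if bit i of src XOR dst is 1.

def hypercube_route(src, dst, k):
    path = [src]
    node = src
    diff = src ^ dst              # bits set where the labels differ
    for i in range(k):            # one step per dimension
        if (diff >> i) & 1:
            node ^= 1 << i        # hop to the neighbor in dimension i
            path.append(node)
    return path

# Routing from 000 to 011 in a 3-cube: 000 -> 001 -> 011 (two hops),
# one of the two possible routes mentioned in the text.
```

Scanning the dimensions in the opposite order would produce the other route, 000 to 010 to 011; either way, the number of hops equals the number of differing bits.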
provides the least cost among the three types of dynamic networks, but also the lowest performance. Bus networks are not suitable for PE interconnection in an SIMD system; the next section provides details of bus networks in the context of MIMD architectures.

Crossbar network. The crossbar network is the highest-performance, highest-cost alternative among the three dynamic network types. It allows any PE to be connected to any other non-busy PE at any time. Figure 12.8 shows an n-by-n crossbar connecting n PEs to n memory elements. The numbers of PEs and memory elements need not be equal, although both are usually powers of 2; the number of memory elements is usually a small multiple of the number of PEs. There are n^2 crosspoints in the crossbar, one at each row/column intersection. If the PEs produce 16-bit addresses and work with 16-bit data units, each crosspoint of the crossbar corresponds to the intersection of 32 lines plus some control lines. Assuming 4 control lines, to build a 16-by-16 crossbar we would need at least 16 x 16 x 36 switching devices. To add one PE to the crossbar, one extra row of crosspoints is needed; thus, although the wire cost grows with the number of processors n, the switch cost grows as n^2, which is the major disadvantage of this network.

In the crossbar network of Figure 12.8, each PE can be connected to a memory block, and all n PEs can be connected to distinct memory blocks simultaneously. To establish a connection between a PE and a memory block, the switch settings at only one crosspoint need to be changed, since there is only one set of switches in each path; the crossbar therefore offers a uniform latency. If two PEs try to access the same memory block, there is contention and one of them needs to wait; contention problems can be minimized by appropriate memory organizations. The operation of an MIMD requires that the processor-memory interconnections change in a dynamic fashion, so high-speed switching is needed. In this high-frequency operation, capacitive and inductive effects result in noise problems that dominate crossbar design. Because of the noise problems and the high cost, large crossbar networks are not practical. Moreover, the complete connectivity offered by crossbars may not always be needed, depending on the application; a sparse crossbar network, in which certain crosspoints have no switches, may be sufficient as long as it satisfies the bandwidth and connectivity requirements. This description has used memory blocks as one set of nodes connected to the set of PEs; in general, a node could be either a PE, a memory block, or a complete computer system consisting of a PE,
memory, and I/O components.

In an SIMD architecture using a crossbar to interconnect its PEs, the switch settings are changed according to the connectivity requirements of the application. This can be done either by the CP or by a dedicated switch controller; alternatively, a decentralized control strategy can be used, in which the switches at each crosspoint forward the message toward the destination node.

Switching networks. Single- and multistage switching networks offer a cost/performance compromise between the two extremes of bus and crossbar networks. The majority of the switching networks proposed are based on an interconnection scheme known as the perfect shuffle. The following paragraphs illustrate the single- and multistage network concepts applied to SIMD architectures, using the perfect shuffle; further discussion of switching networks is deferred to the next section.

Perfect shuffle (Stone, 1971). This network derives its name from its property of rearranging data elements in the order of a perfect shuffle of a deck of cards, in which the deck is cut exactly in half and the two halves are merged such that cards in similar positions in the two halves are brought adjacent to each other.

Example 12.2. Figure 12.9 shows the shuffle network for eight PEs. (Two sets of PEs are shown for clarity; there are actually only eight PEs in the shuffle network.) The shuffle first partitions the PEs into two groups, the first containing 0, 1, 2, 3 and the second containing 4, 5, 6, 7. The two groups are then merged such that 0 is adjacent to 4, 1 to 5, 2 to 6, and 3 to 7. The interconnection pattern can also be derived through a cyclic shift of the addresses of the PEs. Number the PEs starting from 0, with each PE number consisting of n bits, where the total number of PEs is 2^n. To determine the destination PE of the shuffle, shift the n-bit address of the PE left cyclically by one position. The shuffle shown in Figure 12.9 is thus derived; in the following, the number of PEs is assumed to be a power of 2.

The shuffle network of Figure 12.9 is a single-stage network; if an operation requires multiple shuffles to complete, the network is used multiple times, i.e., the data recirculate through it. Figure 12.10 shows a multistage shuffle network for eight PEs, in which each stage corresponds to one shuffle: the data inserted at the first stage ripple through the multiple stages rather than recirculating through a single stage. In general, multistage network implementations provide faster computation at the expense of increased hardware, compared to single-stage network implementations. The following example illustrates the use of the network of Figure 12.10. The first shuffle makes vector elements that are originally a distance 2^(n-1)
apart adjacent to each other; the next shuffle brings elements originally 2^(n-2) apart adjacent to each other; in general, the ith shuffle brings elements that are 2^(n-i) apart adjacent to each other.

If, in addition to the shuffle network, the PEs are connected to their adjacent PEs so that they can exchange data, the combined network can be used efficiently for several computations. Figure 12.11 shows such a network for eight PEs. In addition to the perfect shuffle, adjacent PEs are connected through a function block capable of performing several operations on the data of the PEs connected to it. Several possible functions are shown in Figure 12.11b: the add function returns the sum of the two data values to the top PE; the order function rearranges the data in increasing or decreasing order; the swap function exchanges the data values; and the pass function retains them as they are.

Example 12.3. Figure 12.12 shows the utility of the network of Figure 12.11 for the accumulation of n data values, one from each PE. At each stage, the network shuffles the data and adds neighboring elements, the sum appearing at the upper output of each sum block. Four sum blocks are required in the first stage, only the first and third sum blocks are needed in the second stage, and only the first one in the last stage of the computation. If each shuffle-and-add cycle consumes one time unit, the addition requires log2 n (= 3) time units, compared to the order of n time units required by the sequential process. Note also that the CP needs to issue only one instruction (shuffle/add) for each stage of the addition.

The addition of n elements can also be accomplished by the single-stage network of Figure 12.11, with the functional block acting as a sum block, by circulating the data through the network log2 n times. The CP then executes the following program, rather than the single instruction per stage of the multistage network implementation:

    for i = 1 to log2 n
        shuffle
        add
    endfor

Assuming one time unit each for the shuffle and the add, the single-stage network implementation requires an execution time on the order of log2 n, plus the fetch/decode time of the instructions in the loop body each time through the loop, plus the loop-control overhead.
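The cyclic-shift derivation of the shuffle in Example 12.2 is easy to state as code. The sketch below (illustrative function name) computes the destination of PE p by rotating its n-bit address left one position; applying the shuffle n times returns every PE to its original position.

```python
# Sketch of the perfect-shuffle interconnection for 2**n PEs: the
# destination of PE p is its n-bit address cyclically left-shifted by one.

def shuffle_dest(p, n):
    msb = (p >> (n - 1)) & 1                     # bit shifted out on the left...
    return ((p << 1) & ((1 << n) - 1)) | msb     # ...re-enters on the right

# For eight PEs (n = 3): 0->0, 1->2, 2->4, 3->6, 4->1, 5->3, 6->5, 7->7,
# i.e., group 0,1,2,3 is interleaved with group 4,5,6,7.
```

Note that PEs 0 and 7 map to themselves, and that three successive shuffles (log2 8) restore the original arrangement, which is why the accumulation of Example 12.3 finishes after log2 n passes.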
logical-to-physical address mapping. The processor-to-I/O interconnection network enables the connection of any I/O channel to any of the processors. The processor-to-processor interconnection network is more of an interrupt network than a data-exchange network, since the majority of data exchanges are performed through the memory-to-processor interconnection; that interconnection is thus an important characteristic of the multiprocessor systems discussed in this chapter.

The processors function independently. Unlike in SIMD systems, where all processors execute the same instruction at any given instant of time, each processor of a multiprocessor system may be executing a different instruction at any instant of time. For this reason Flynn classified them as multiple instruction stream, multiple data stream (MIMD) computers.

As mentioned earlier, the need for parallel execution arises because device technology limits the speed of execution of any single processor. SIMD systems provide a manifold increase in performance, but simply due to data-parallelism; such parallel processing improves performance only for applications that can be organized as a series of repetitive operations on uniformly structured data. Since a large number of applications cannot be represented in this manner, the SIMD system is not a panacea, and this led to the evolution of the more general form, the MIMD architecture, in which each processing element has an arithmetic-logic unit (ALU), a control unit, and, as necessary, its own memory and input/output (I/O) devices. Thus each processing element is a computer system in itself, capable of performing a processing task totally independent of the other processing elements. The processing elements are interconnected in some manner to allow the exchange of data and programs and to synchronize their activities.

The major advantages of MIMD systems are:

1. Reliability: if a processor fails, its workload can be taken over by another processor, thus incorporating graceful degradation and better fault tolerance into the system.

2. High performance: in the ideal scenario, with all n processors working on useful computation all the time, the peak performance (processing speed) of the MIMD is n times that of a single-processor system. This peak performance is, however, difficult to achieve, due to the overhead involved in MIMD operation. The overhead is due to: (a) communication between processors; (b) synchronization of the work of one processor with that of another; (c) wastage of processor time when a processor runs out of tasks to execute; and (d) processor scheduling, i.e., the allocation of tasks to
processors.

A task is the entity to which a processor is assigned: a task may be a program, a function, or a procedure in execution on a given processor. Process is simply another word for task. A processor (processing element) is the hardware resource on which tasks are executed. A processor executes several tasks one after another; the sequence of tasks performed by a given processor in succession forms a thread. Thus a thread is the path of execution of a processor through a number of tasks, and multiprocessors provide for the simultaneous presence of a number of threads of execution in an application.

Example 12.4: Consider Figure 12.14, in which each block represents a task and each task has a unique number. In the application represented, task 2 can be executed only after task 1 is completed, and task 3 only after tasks 1 and 2 are completed. Thus the line of tasks 1, 2, and 3 forms a thread of execution; two other threads (b and c) are also shown. These threads limit the execution of their tasks to a specific serial manner (task 2 must follow task 1, and task 3 must follow task 2). Note, however, that task 4 can be executed in parallel with tasks 2 and 3, and similarly task 6 can be executed simultaneously with task 1. Suppose the MIMD has three processors and each task takes the same amount of time to execute. The seven tasks shown in the figure can then be executed in the following manner. Initially, since only two tasks can be executed in parallel, they are allocated to processors 1 and 2 (arbitrarily), while processor 3 sits idle. At the end of the first time slot, all three processors get a task each from three different threads; thus tasks 2, 4, and 7 are executed in parallel. Finally, tasks 3 and 5 are executed in parallel, with one of the processors sitting idle due to the lack of tasks at that instant. Figure 12.14 implies that the application at hand, partitioned into seven tasks, exhibits a degree of parallelism of 3, since at most three tasks can be executed simultaneously.

It was assumed that there is no interaction between the tasks in the figure. In practice tasks communicate, since each task depends on the results produced by other tasks. Obviously, if the communication between tasks is reduced to zero, the tasks can be combined into a single task run on a single processor (i.e., SISD mode). The R/C ratio, where R is the length of the run time of a task and C is the communication overhead produced by that task, signifies the task granularity: the ratio is a measure of how much overhead is produced per unit of computation. A high R/C ratio implies that the communication overhead is insignificant compared to the computation time, while a low R/C ratio implies that the communication overhead dominates the computation time and hence results in poorer performance. High R/C ratios imply coarse-
grain parallelism, and low R/C ratios result in fine-grain parallelism. The general tendency is to obtain maximum performance by resorting to the finest possible granularity, thus providing the highest degree of parallelism. However, care must be taken to ensure that maximum parallelism does not lead to maximum overhead; a trade-off is required to reach the optimum level of granularity.", "url": "RV32ISPEC.pdf#segment432", "timestamp": "2023-10-17 20:16:33", "segment": "segment432", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.1 MIMD Organization ", "content": "As mentioned earlier, in an MIMD system each processing element works independently of the others. Processor 1 is said to be working independently of processors 2 through n at a given instant if the task being executed by processor 1 has no interactions with the tasks being executed by processors 2 through n, and vice versa. However, the results of the tasks executed on some processor X may be needed by other processors sometime in the future. To make this possible, each processor must have the capability to communicate the results of the task it performs to the tasks requiring them. This can be done either by sending the results directly to the requesting process or by storing them in a shared memory to which every processor has equal and easy access. These two communication models have resulted in the two popular MIMD organizations:
1. Shared memory architecture
2. Message passing architecture", "url": "RV32ISPEC.pdf#segment433", "timestamp": "2023-10-17 20:16:33", "segment": "segment433", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.1.1 Shared Memory Architecture ", "content": "Figure 12.15(a) shows the structure of a shared-memory MIMD. Any processor i can access any memory module j through the interconnection network. The results of computations are stored in memory by the processor that executed the task; when those results are required by another task, they are easily accessed from memory. Note that each processor is a full-fledged SISD, capable of fetching instructions from memory and executing them on data retrieved from memory, and that no processor has a local memory of its own. This is called a tightly coupled architecture, since the processors are interconnected such that the interchange of data through the shared memory is quite rapid. A further advantage of this architecture is that the memory access time is the same for all processors, hence the name uniform memory architecture (UMA). If the processors in the system are non-homogeneous, data transformations are needed during
exchange. For instance, if the system consists of 16-bit and 32-bit processors and the shared memory consists of 32-bit words, each memory word must be converted into two words for use by the 16-bit processors, and vice versa, which is an overhead. Another problem is memory contention, which occurs whenever two or more processors try to access the same shared-memory block. Since a memory block can be accessed by only one processor at a time, the other processors requesting access to that block must wait until the first processor is through using it. In addition, if two processors simultaneously request access to the same memory block, one of the processors must be given preference. Memory organization concepts were discussed in Chapter 2.", "url": "RV32ISPEC.pdf#segment434", "timestamp": "2023-10-17 20:16:33", "segment": "segment434", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.1.2 Message Passing Architecture ", "content": "At the other extreme from the shared-memory system, each processor has a local memory block attached to it, and the conglomeration of these local memories is the total memory the system possesses. Figure 12.15(b) shows the block diagram of this configuration, also known as a loosely coupled or distributed-memory MIMD system. When a data exchange is required between two processors in this configuration, the requesting processor i sends a message to processor j, in whose local memory the required data are stored. In reply to this request, processor j reads the requested data from its local memory as soon as it can and passes them to processor i over the interconnection network. Thus communication between processors occurs through message passing. The requested processor usually finishes the task at hand, then accesses its memory for the requested data and passes the data to the interconnection network, which routes them toward the requesting processor. All this time the requesting processor sits idle waiting for the data, thus incurring a large overhead. The memory access time here varies from processor to processor, and hence these architectures are known as non-uniform memory architectures (NUMA). Thus the tightly coupled MIMD offers rapid data interchange between processors, while in the loosely coupled MIMD the memory contention problem is not present, since only one processor accesses any given memory block. Shared-memory architectures are also known as multiprocessor systems, while message-passing architectures are called multicomputer systems.
These two architectures are the extremes of MIMD systems; in practice it may be reasonable to mix the two, as shown in Figure 12.15(c). In this structure each processor operates in its local environment as far as possible, and interprocessor communication is by either shared memory or message passing. Several variations of this memory architecture have been used. For instance, the Data Diffusion Machine (DDM) (Hagersten, 1992) uses a cache-only memory architecture (COMA) system, in which all memory resides in large caches attached to the processors in order to reduce latency and network load. The IBM Research Parallel Processor (RP3) consists of 512 nodes, each containing 4 MB of memory. The interconnection between the nodes is such that the 512 memory modules can be used as one global shared memory, as purely local memories with a message-passing mode of communication, or as a combination of the two. MIMD systems can also be conceptually modeled as either private-address-space or shared-address-space machines, and both address-space models can be implemented on shared-memory and message-passing architectures. Private-memory, shared-address-space machines (NUMA architectures) offer the scalability benefits of message-passing architectures and the programming advantages of shared-memory architectures. An example of this type is the J-machine of MIT, which has a small private memory attached to each of a large number of nodes and a common address space across the whole system. The DASH machine of Stanford treats local memory as a cache of the large global address space, but the global memory is actually distributed. In general, the actual configuration of an MIMD system depends on the characteristics of the application for which the system is designed.", "url": "RV32ISPEC.pdf#segment435", "timestamp": "2023-10-17 20:16:33", "segment": "segment435", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.1.3 Memory Organization ", "content": "Two parameters of interest in MIMD memory system design are bandwidth and latency. For an MIMD system to be efficient, the memory bandwidth must be high enough to provide for the simultaneous operation of all the processors, and contention for shared memory must be minimized. In addition, the latency (the time elapsed between a processor's request for data from memory and its receipt) must be minimized. This section examines memory organization techniques that reduce these problems to a minimum (tolerable) level. Memory latency is reduced by increasing memory bandwidth, which in turn is accomplished by the
following mechanisms:
1. Building the memory system with multiple independent memory modules, thus providing for concurrent accesses to the modules. Banked and interleaved addressing architectures, and combinations of the two, have been used in such systems. A more recent trend is to use multiport memory modules in the design to achieve concurrent access.
2. Reducing memory access and cycle times by utilizing memory devices of the highest-speed technology available, usually accompanied by a high price. An alternative is to use cache memories in the design.

To understand the first method, consider an MIMD system with n processors and a single shared-memory unit. In the worst case, all but one processor may be waiting to get access to the memory, performing no useful computation, since only one processor can access the memory at any given instant. This bottlenecks the overall performance of the system. One solution is to organize the memory such that more than one simultaneous access to memory is possible.

Example 12.5: Figure 12.16(a) shows an MIMD structure with n memory modules connected to n processors through a crossbar interconnection network. The n memory modules can be accessed simultaneously by n different processors through the crossbar. To make the best possible use of this design, the instructions to be executed by one processor are kept in one memory module; thus a given processor accesses a given memory block for as long as possible, and concurrency of memory accesses is maintained for longer durations of time. This mode of operation requires a banked memory architecture. If an interleaved memory architecture is used instead, consecutive addresses lie in different memory modules, and the instructions corresponding to a task are spread over several memory modules. If two tasks require the same code segment, it is then possible to allow simultaneous access to the code segment, as long as one task starts slightly (at least one instruction-cycle time) earlier than the other: the processors accessing the code march one behind the other, spreading their memory accesses over different modules and minimizing contention.

Figure 12.16(b) shows the use of multiport memories. Each memory module is a three-port memory device; all three ports can be active simultaneously, reading and writing data to and from the memory block. The only restriction is that just one port can write to a given memory location at a time: if two or more ports try to access the same location for writing, the highest-priority port succeeds. Thus multiport memories have the contention-resolution logic built into them and provide concurrent access, at the expense of complex hardware.
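The difference between the banked and interleaved addressing of mechanism 1 is simply which address bits select the module. A minimal sketch (the function names are illustrative, not from the text):

```python
def banked(address, module_size):
    # Banked: consecutive addresses stay in the same module until it
    # fills, so one processor can monopolize one module for a long run.
    return address // module_size, address % module_size

def interleaved(address, num_modules):
    # Interleaved: consecutive addresses fall in different modules, so
    # a sequential instruction stream spreads across all the modules.
    return address % num_modules, address // num_modules
```

With four modules, addresses 0 through 3 map to modules 0, 1, 2, 3 under interleaving, but all land in module 0 under banking.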
Large multiport memories are still expensive because of this hardware complexity.

The architecture of Figure 12.16(c) depicts the second method of increasing the effective memory bandwidth. A cache memory is a high-speed buffer placed in close proximity to a processor (i.e., a local memory). Any time the processor wants to access something from memory, it first checks its cache; if the required data are found there (a cache hit), it need not access the main (shared) memory, which is usually four to twenty times slower. The success of this strategy depends on how well the application is partitioned: each processor should access its private memory as long as possible (i.e., achieve a high cache-hit ratio) and rarely access the shared memory. This also requires that interprocessor communication be minimized.

The following issues are of concern in the design of an MIMD system:
1. Processor scheduling: efficient allocation of processors to processing needs in a dynamic fashion as the computation progresses.
2. Processor synchronization: preventing processors from trying to change a unit of data simultaneously, and obeying the precedence constraints in data manipulation.
3. Interconnection network design: the processor-to-memory and processor-to-peripheral interconnection network is still probably the most expensive element of the system and can become a bottleneck.
4. Overhead: ideally an n-processor system should provide n times the throughput of a uniprocessor. This is not true in practice because of the overhead processing required to coordinate the activities of the various processors.
5. Partitioning: identifying parallelism in processing algorithms to invoke concurrent processing streams is not a trivial problem.

It is important to note that the architecture classification described in this section is not unique, in that a computer system may not clearly belong to one of the classes. For example, the Cray series of supercomputers could be classified under any of the four classes, depending on their operating mode at a given time. Several other classification schemes and taxonomies have been proposed; refer to IEEE Computer (November 1988) for a critique of taxonomies.", "url": "RV32ISPEC.pdf#segment436", "timestamp": "2023-10-17 20:16:34", "segment": "segment436", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.2 Interconnection Networks for MIMD Architectures ", "content": "The interconnection network is an important component of an MIMD system. The ease
and speed of processor-to-processor and processor-to-memory communication depend on the interconnection network design. A system may use either a static or a dynamic network, the choice depending on the data-flow and program characteristics of the application; each design and structure has its advantages and disadvantages. A number of interconnection networks were described earlier in this chapter; this section extends that description to MIMD systems. In a shared-memory MIMD system, data exchange between the processors occurs through the shared memory, so an efficient memory-to-processor interconnection network is a must. This network interconnects the nodes of the system, where a node is either a processor or a memory block. A processor-to-processor interconnection network is also present in such systems; commonly called a synchronization network, it provides for one processor to interrupt another to inform it that shared data are available in the memory. In a message-passing MIMD system, the interconnection network provides for the efficient transmission of messages between the nodes; a node here is typically a complete computer system consisting of a processor, memory, and I/O devices. The most common interconnection structures used in MIMD systems are:
1. Bus
2. Loop or ring
3. Mesh
4. Hypercube
5. Crossbar
6. Multistage switching networks

Details of the loop, mesh, hypercube, and crossbar networks as applied to SIMD systems were provided in the previous section. These networks are used in MIMD system design as well, except that communication occurs in an asynchronous manner rather than the synchronous communication mode of SIMD systems. The rest of this section highlights the characteristics of these networks as applied to MIMD systems and covers the bus and multistage switching networks in greater detail.", "url": "RV32ISPEC.pdf#segment437", "timestamp": "2023-10-17 20:16:34", "segment": "segment437", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.2.1 Bus Network ", "content": "Bus networks are simple to build and provide the least cost among the three types of dynamic networks discussed earlier. They also offer the lowest performance. The bandwidth of a bus is defined as the product of its clock frequency and the width of its data path. The bus bandwidth must be large enough to accommodate the communication needs of all the nodes connected to it. Since the bandwidth available to each node decreases as the number of nodes on the network increases, bus networks are
suitable only for interconnecting a small number of nodes. The bus bandwidth can be increased by raising the clock frequency, but the technological advances that make higher bus clock rates possible also provide faster processors; hence the ratio of processor speed to bus bandwidth is likely to remain about the same, limiting the number of processors that can be connected to a single-bus structure. The length of the bus also affects the bus bandwidth, since physical parameters such as capacitance, inductance, and signal degradation are proportional to the length of the wires; in addition, the capacitive and inductive effects grow with bus frequency, further limiting the bandwidth.

Figure 12.17 shows a shared-memory MIMD system in which the global memory and several nodes are connected to a bus. Each node consists of a processor, local memory, cache, and I/O devices. In the absence of caches and local memories, the nodes would always access the shared memory, and the single-bus structure could provide maximum performance only if the shared bus and shared memory had high enough bandwidths. These bottlenecks are reduced if the application is partitioned such that the majority of memory references made by each processor are to its own local memory and cache blocks, reducing the traffic on the common shared bus and to the shared memory. Of course, the presence of multiple caches in the system brings up the problem of cache coherency. If a multiport memory system is used as the shared memory, a multiple-bus interconnection structure can be used, with each port of the memory connected to its own bus; this reduces the number of processors on each bus. Another alternative is the bus-window scheme shown in Figure 12.18(a): each set of processors is connected to a bus through a switch (i.e., a bus window), and the buses are connected to form the overall system. The message-transmission characteristics are identical to those of a global bus, except that multiple bus segments are available and messages are retransmitted on paths other than those on which they were received. Figure 12.18(b) shows a fat-tree network, which is gaining popularity; here the communication links interconnecting the nodes become fatter (i.e., of higher bandwidth) toward the root of the tree. Note that in practice applications are partitioned so that the processes within a cluster communicate with each other often; the links near the root of the tree can then be thinner compared to what uniform traffic would require of them relative to the ones near the leaves. The CM-5 of Thinking Machines Incorporated uses a fat-tree interconnection network. Several standard bus configurations (Multibus, VME bus, etc.) have evolved over the years; they offer support, in terms of data, address, and control signals, for multiprocessor system
design.", "url": "RV32ISPEC.pdf#segment438", "timestamp": "2023-10-17 20:16:34", "segment": "segment438", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.2.2 Loop or Ring ", "content": "The ring network is suitable for message-passing MIMD systems. The nodes are interconnected by a ring with point-to-point interconnection between neighboring nodes; the ring can be either unidirectional or bidirectional. To transmit a message, the sender places the message on the ring. Each node in turn examines the message header and buffers the message if it is the designated destination. The message eventually returns to the sender, which removes it from the ring. One popular protocol used on rings is the token ring (IEEE 802.5) standard, in which a token (a unique bit pattern) circulates on the ring. When a node wants to transmit a message, it accepts the token (i.e., prevents it from moving to the next node) and places its message on the ring. Once the message has been accepted by the receiver and returns to the sender, the sender removes the message and places the token back on the ring. Thus a node can be a transmitter only when it has the token. Since the interconnections in the ring are point-to-point, the physical parameters can be controlled more readily than in bus interconnections, especially when high bandwidths are needed. One disadvantage of the token ring is that each node adds a one-bit delay to the message transmission, so the delay increases as the number of nodes in the system increases. The network can be viewed as a pipeline with a long delay; the bandwidth of the network can be utilized effectively if the nodes accommodate this mode of operation, usually by overlapping their computations with message transmission. One way to increase the transmission rate is to allow transmission of a new message as soon as the current message has been received by the destination node, rather than waiting for it to reach the sender and be removed.", "url": "RV32ISPEC.pdf#segment439", "timestamp": "2023-10-17 20:16:34", "segment": "segment439", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.2.3 Mesh Network ", "content": "Mesh networks are ideal for applications with a high degree of near-neighbor interaction. If the application requires a large number of global interactions, the efficiency of computation goes down, since global communications require multiple hops through the network. One way to improve performance is to augment the mesh network with another, global network; the MasPar architectures and the Intel iPSC architectures utilize such global interconnects. (c) 2007 Taylor and Francis Group, LLC",
"url": "RV32ISPEC.pdf#segment440", "timestamp": "2023-10-17 20:16:34", "segment": "segment440", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.2.4 Hypercube Network ", "content": "One advantage of hypercube networks is that routing is straightforward, and the network provides multiple paths for message transmission from each node. The network can also be partitioned into hypercubes of lower dimensions, so that multiple applications utilizing smaller networks can be run simultaneously. For instance, a four-dimensional hypercube with 16 processing nodes can be used as two three-dimensional hypercubes or four two-dimensional hypercubes. One disadvantage of the hypercube is its scalability, since the number of nodes must be increased in powers of two: to increase the number of nodes from 32 to 33, the network must be expanded from a five-dimensional to a six-dimensional network consisting of 64 nodes. In fact, the Intel Touchstone project switched from hypercubes to mesh networks because of this scalability issue.", "url": "RV32ISPEC.pdf#segment441", "timestamp": "2023-10-17 20:16:34", "segment": "segment441", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.2.5 Crossbar Network ", "content": "The crossbar network offers multiple simultaneous communications with the least amount of contention, but at a high hardware complexity. The number of memory blocks in the system must be at least equal to the number of processors, and each processor-memory path contains only one crosspoint delay. The hardware complexity (cost) of the crossbar is proportional to the number of crosspoints; since there are n^2 crosspoints in an (n x n) crossbar, the crossbar becomes expensive for large values of n.", "url": "RV32ISPEC.pdf#segment442", "timestamp": "2023-10-17 20:16:34", "segment": "segment442", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.2.6 Multistage Networks ", "content": "Multistage switching networks offer a cost/performance compromise between the two extremes of bus and crossbar networks. A large number of multistage networks have been proposed over the past years; examples are the omega, baseline, banyan, and Benes networks. These networks differ in their topology, operating mode, control strategy, and the type of switches used. All of them are capable of connecting any source (input) node to any destination (output) node, but they differ in the number of different N-to-N interconnection patterns they can achieve, where N is the number of nodes in the system.
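The straightforward hypercube routing noted above follows from the addressing: a node and its neighbor differ in exactly one bit of their binary labels, so a message reaches its destination by flipping the differing bits one dimension at a time. A sketch under assumptions (the fixed low-to-high dimension order and the function name are my choices):

```python
def hypercube_route(src, dst, dim):
    # Dimension-order (e-cube) routing on a dim-dimensional hypercube:
    # flip each differing address bit, lowest dimension first; every
    # flip is one hop to a directly connected neighbor.
    path = [src]
    node = src
    for d in range(dim):
        if (node ^ dst) & (1 << d):
            node ^= 1 << d
            path.append(node)
    return path
```

The number of hops equals the number of bits in which the two labels differ, at most dim; picking the dimensions in a different order yields the multiple alternative paths the text mentions.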
These networks are typically composed of 2-input, 2-output switches (see Figure 12.19) arranged in log2(N) stages; thus the cost of the network is on the order of N log2(N), compared to the N^2 of the crossbar. The communication paths between any two nodes in these networks are of equal length (i.e., log2(N) switches), and the latency is thus log2(N) times that of the crossbar, although in general practice a large crossbar has a longer cycle time than the small switches used in multistage networks. The majority of multistage networks are based on the perfect-shuffle interconnection scheme.", "url": "RV32ISPEC.pdf#segment443", "timestamp": "2023-10-17 20:16:34", "segment": "segment443", "image_urls": [], "Book": "computerorganization" }, { "section": "12.3.2.7 Omega Network ", "content": "The omega network (Almasi and Gottlieb, 1989) is the simplest of the multistage networks. The N-input, N-output omega interconnection topology, shown in Figure 12.20, consists of log2(N) stages with the perfect-shuffle interconnection used between stages. Each stage contains N/2 2-input, 2-output switches, and each switch can perform the four functions shown in Figure 12.19. The network employs a packet-switching mode of communication: each packet is composed of the data and a destination address. The address is read at each switch and the packet is forwarded to the next switch until it arrives at the destination node. The routing algorithm follows a simple scheme. Starting at the source, each switch examines the leading bit of the destination address and removes that bit; if the bit is 1, the message exits the switch on the lower port, and otherwise on the upper port. Figure 12.20 shows an example in which a message is sent from node 001 to node 110. Since the first bit of the destination address is 1, the switch in the first stage routes the packet to its lower port; the switch in the second stage also routes it to its lower port, since the second bit of the address is also 1; and the switch in the third stage routes it to its upper port, since the third bit of the address is 0. The path is shown as a solid line in the figure. This routing algorithm implies a distributed control strategy, since the control function is distributed among the switches. The alternative control strategy would be centralized control, in which a controller sends control signals to each switch based on the routing required; it may operate either on a stage-by-stage basis, starting from the first stage through the last, or it may preset the whole network at a time, depending on the communication requirements.
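The destination-tag routing just described can be traced in code. This is a sketch under assumptions: node labels are k-bit integers, the perfect shuffle between stages is modeled as a left rotation of the label, and bit 1 selects the lower switch port as in the text.

```python
def omega_route(src, dst, n):
    # Trace a packet through an n-input omega network (n = 2^k).
    # Before each stage the label undergoes a perfect shuffle (left
    # rotation); the switch output is then chosen by the next bit of
    # the destination address (0 = upper port, 1 = lower port).
    k = n.bit_length() - 1
    node = src
    ports = []
    for stage in range(k):
        node = ((node << 1) | (node >> (k - 1))) & (n - 1)  # shuffle
        bit = (dst >> (k - 1 - stage)) & 1
        ports.append(bit)
        node = (node & ~1) | bit  # exit via upper (0) or lower (1) port
    return node, ports
```

Tracing the example from the text, a packet from node 001 to node 110 exits lower, lower, upper and arrives at 110.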
Figure 12.20 also shows, as dotted lines, the path from node 101 to node 100. Note that the link between stages 1 and 2 is common to both paths shown in the figure; thus nodes 001 and 101 cannot send their packets simultaneously to the corresponding destination nodes. One of the packets is blocked until the common link is free; hence this network is called a blocking network. Note also that there is a unique path through the network from each input node to each output node. This is a disadvantage, since a message transmission gets blocked even if one link in its path is part of the path of another transmission in progress. One way to reduce the delays due to blocking is to provide buffers in the switching elements, so that packets can be retained locally until the blocked links become free. The switches can also be designed to combine messages bound for the same destination. Recall that the crossbar is a non-blocking network, though much more expensive than the omega network.

Progress in hardware technology has resulted in the availability of fast processor and memory devices, the two main components of an MIMD system. Standard bus systems (Multibus, VME bus) allow the implementation of bus-based MIMD systems. Although hardware modules implementing other interconnection structures (loop, crossbar, etc.) are appearing off the shelf, the design and implementation of an appropriate interconnection structure is still the most crucial and expensive part of MIMD design.", "url": "RV32ISPEC.pdf#segment444", "timestamp": "2023-10-17 20:16:35", "segment": "segment444", "image_urls": [], "Book": "computerorganization" }, { "section": "12.4 CACHE COHERENCE ", "content": "Consider the multiprocessor system of Figure 12.15(c), in which each processor has a local (private) memory; the local memory can be viewed as a cache. As the computation proceeds, each processor updates its cache. However, these updates to a private cache are not visible to the other processors. Thus if one processor updates a cache entry corresponding to a shared data item, the caches of the other processors containing that data item will not be updated, and the corresponding processors will operate on stale data. This problem, wherein the value of a data item is not consistent throughout the memory system, is known as cache incoherency. Hardware and software schemes must be applied to ensure that all processors see the most recent value of a data item at all times, i.e., to make the caches coherent. The two popular mechanisms for updating cache entries are write-through and write-back. In write-through, the processor updating the cache also simultaneously updates the corresponding entry in main
memory. In write-back, an updated cache block is written back to main memory only when that block is replaced in the cache. The write-back mechanism clearly cannot solve the problem of cache incoherency in a multiprocessor system, and write-through keeps the data coherent only in a single-processor environment. To see this, consider a multiprocessor system in which processors 1 and 2 have loaded the same block of main memory into their caches. Suppose processor 1 makes changes to the block in its cache and writes it through to main memory; processor 2 still sees the stale data in its cache, since it was not updated. Two possible solutions are: (i) update all caches that contain the shared data when the write-through takes place, although such a write-through will create an enormous overhead on the memory system; or (ii) invalidate the corresponding entry in every other processor's cache when the write-through occurs, which forces the other processors to copy the updated data into their caches when they need it later.

Cache coherence is an active area of research interest, and several cache-coherency schemes have evolved over the years. The popular schemes are outlined below.
1. The least complex scheme for achieving cache coherency is not to use private caches: in Figure 12.21 the cache is associated with the shared memory system rather than with each processor or memory block. On a write, the processor updates the common cache; if an item is present in the cache, it is seen by all processors. Although a simple solution, its major disadvantage is high cache contention, since all processors require access to the common cache.
2. Another simple solution is to keep the private-cache architecture of Figure 12.21 but cache only non-shared data items; shared data items are tagged as non-cached and stored only in the common memory. The advantage of this method is that each processor has a private cache for non-shared data, thus providing a higher bandwidth. One disadvantage is that the programmer and/or compiler must tag data items as cached or non-cached, whereas it would be preferable for cache-coherency schemes to be transparent to the user. Also, accesses to shared items can result in high contention.
3. Cache flushing is a modification of the previous scheme: shared data are allowed to be cached only when it is known that just one processor will be accessing the data. After the shared data in the cache have been accessed, the processor using them issues a flush-cache instruction, which causes all modified data in the cache to be written back to main memory and the corresponding cache locations to be invalidated. This scheme has the advantage of allowing shared areas to be cached, at the disadvantage of the extra time consumed by the flush-cache instruction; it also requires modification of the program code to insert the flush-cache instructions.
4. The above coherency schemes eliminate private caches, limit what may be cached, or require programmer intervention. A caching scheme that avoids these problems is bus watching, or bus snooping. Bus-snooping schemes incorporate hardware that monitors the shared bus for data loads and stores into each processor's cache, as shown in Figure 12.22. The snoopy cache controller controls the status of the data contained within its cache based on the loads and stores it sees on the bus. Suppose the cache architecture is write-through, so that every store to a cache is simultaneously written to main memory. In this case each snoopy controller sees every store and takes action based on it: typically, if a store is made to a locally cached block that is also cached in one or more remote caches, the snoopy controllers of those remote caches either update or invalidate the copies in their caches. The choice between updating and invalidating remote caches affects performance; the primary difference is the time involved in updating cache entries versus merely changing the status of a remotely cached block. Secondly, as the number of processors increases, the shared bus may become saturated: with write-through, main memory must be accessed on every store, so every store generates additional bus overhead, while loads are performed without additional overhead. As an instance of a snooping scheme, the following explanation of the Illinois cache coherence protocol (Papamarcos and Patel, 1984) is based on the explanation in Archibald (1986). Each cache holds a state per block, which is one of:
1. Invalid: the data block is not cached.
2. Valid-exclusive: the data block is valid and clean (identical to the data held in main memory), and it is the only cached copy of the block in the system.
3. Shared: the data block is valid and clean, and there are possibly other cached copies of the block in the system.
4. Dirty: the data block is valid but modified relative to memory, and it is the only cached copy of the block in the system.
Initially all blocks are invalid in all caches. The cache states change according to the bus transactions, as shown in Figure 12.23, in which the cache-state transitions of the requesting processor are shown as solid lines and those of the snooping processors as dotted lines. For instance, on a read, if no snooping processor has a copy of the block, the requesting processor makes a transition to state valid-exclusive; if some snooping processor has a copy of the block, the requesting processor makes a transition to state shared. In addition, all snooping processors with a copy of the block observe the read and make a transition to state shared; a snooping processor holding the block in
state dirty writes the data back to main memory at that time.
5. A solution appropriate for bus-organized multiprocessor systems was proposed by Goodman (1983). In this scheme an invalidate request is broadcast only when a block is written into a cache for the first time, and the updated block is simultaneously written through to main memory; a block in the cache that has been written to more than once needs a write-back to main memory only when it is being replaced. Thus the first store causes a write-through to main memory and also invalidates the remotely cached copies of the data; subsequent stores do not get written to main memory, and since the other cache copies are marked invalid, no copies exist in the other caches. When another processor executes a load for the data, its cache controller locates the unit holding the valid copy, main memory or a cache; if the data block is marked dirty in some cache, that cache supplies the data and also writes the block back to memory. This technique is called write-once. Since a modified cache block is loaded into main memory only when the block is replaced, this write-back cache scheme conserves bandwidth on the shared bus and is thus generally faster than write-through; the added throughput comes at the cost of a more complex bus-watching mechanism. In addition to watching the bus, the cache controller must also maintain ownership information for each cached block, allowing only one copy of a cached block at a time to be writable. This type of protocol is called an ownership protocol, and it works in general as follows. Each block of data has one owner, either main memory or the cache that owns the block; all other copies of the block are read-only (RO). When a processor needs to write to an RO block, a broadcast to main memory and all caches is made in an attempt to find any modified copies. If a modified copy exists in another cache, it is written to main memory and copied to the cache requesting read/write privileges, and those privileges are granted to the requesting cache.

This section has addressed only the primitive cache-coherency schemes; cache coherence is an active area of research that has resulted in several further schemes.", "url": "RV32ISPEC.pdf#segment445", "timestamp": "2023-10-17 20:16:35", "segment": "segment445", "image_urls": [], "Book": "computerorganization" }, { "section": "12.5 DATA-FLOW ARCHITECTURES ", "content": "Data-flow architectures tend to maximize the concurrency of operations (parallelism) by breaking the processing activity into the most primitive sets of operations possible. Computations in a data-flow machine are data-driven: an operation is performed as and when its operands are available, unlike in the control-driven machines described so far, in which the required data are gathered when an instruction needs them.
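A read miss under the Illinois-style snooping protocol outlined in Section 12.4 can be sketched as a toy simulation. Assumptions: each cache is modeled as a dict holding a single block's state and value, a dirty holder writes back before dropping to shared, and the helper name bus_read is mine, not from the text.

```python
# The four Illinois protocol states, one per cached block.
INVALID, VALID_EXCLUSIVE, SHARED, DIRTY = "I", "VE", "S", "D"

def bus_read(requester, caches, memory):
    # A read miss by `requester` appears on the bus. Snooping caches
    # holding the block drop to SHARED (a DIRTY holder writes back
    # first). The requester loads VALID_EXCLUSIVE if no other cache
    # held the block, SHARED otherwise.
    others_had_copy = False
    for c in caches:
        if c is requester or c["state"] == INVALID:
            continue
        if c["state"] == DIRTY:
            memory["value"] = c["value"]  # write-back before sharing
        c["state"] = SHARED
        others_had_copy = True
    requester["value"] = memory["value"]
    requester["state"] = SHARED if others_had_copy else VALID_EXCLUSIVE
```

The first reader of a block thus ends up valid-exclusive; a second reader moves both copies to shared, mirroring the solid and dotted transitions of Figure 12.23.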
The sequence of operations in a data-flow machine obeys the precedence constraints imposed by the algorithm used, rather than the control statements of a program. A data-flow architecture assumes that a number of functional units are available, that as many functional units as possible can be invoked at any given time, and that the functional units are purely functional in the sense that they induce no side effects on either the data or the computation sequence. As an example of a data-flow diagram, Figure 12.24 shows the computation of the roots of a quadratic equation. Assuming the a, b, and c values are available, -b, b^2, 4ac, and 2a are computed immediately, followed by the computation of (b^2 - 4ac) and then sqrt(b^2 - 4ac). In order, -b + sqrt(b^2 - 4ac) and -b - sqrt(b^2 - 4ac) are then simultaneously computed, followed by the simultaneous computation of the two roots. Note that the only requirement is that the operands be ready before an operation is invoked; no other time-sequence constraints are imposed.

Figure 12.25 shows a schematic view of a data-flow machine. The machine memory consists of a series of cells, where each cell contains an opcode and two operands. When both operands are ready, the cell is presented to the arbitration network, which assigns the cell to either a functional unit (for operations) or a decision unit (for predicates). The outputs of the functional units are presented to the distribution network, which stores each result into the appropriate cells as directed by the control network. To attain high throughput, algorithms must be represented with the maximum degree of concurrency possible, and the three networks of the processor must be designed to provide fast communication between the memory and the functional and decision units. Two experimental data-flow machines (at the University of Utah and at Toulouse, France) have been built. The data-flow project at the Massachusetts Institute of Technology has concentrated on the design of languages and representation techniques and on feasibility evaluation of data-flow concepts through simulation.", "url": "RV32ISPEC.pdf#segment446", "timestamp": "2023-10-17 20:16:35", "segment": "segment446", "image_urls": [], "Book": "computerorganization" }, { "section": "12.6 SYSTOLIC ARCHITECTURES ", "content": "Kung (1982) proposed systolic architectures as a means of solving the problems of special-purpose systems, which must often balance intensive computations with demanding I/O bandwidths. Systolic arrays are pipelined multiprocessors in which data are pulsed in a rhythmic fashion from memory through a network of processors before the results are returned to memory (see Figure 12.26). A global clock and explicit timing delays synchronize the pipeline.
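The firing order of Figure 12.24 above can be mirrored directly in code. This is a sketch in sequential Python (the function name is mine), with comments marking which nodes could fire concurrently on a data-flow machine:

```python
import math

def quadratic_roots(a, b, c):
    # Each statement is a data-flow node that fires as soon as its
    # operands are ready.
    neg_b = -b           # these four nodes depend only on a, b, c
    b_sq = b * b         # and could all fire concurrently
    four_ac = 4 * a * c
    two_a = 2 * a
    disc = math.sqrt(b_sq - four_ac)  # fires once b_sq, four_ac exist
    # the two root nodes fire simultaneously
    return (neg_b + disc) / two_a, (neg_b - disc) / two_a
```

The data dependences alone fix the schedule: four nodes in the first wave, then the subtraction, the square root, and finally the two division nodes in parallel.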
The data flowing through the pipeline consist of operands obtained from memory and partial results to be used by each processor. The processors are interconnected with regular, local interconnections, and during each time interval they execute a short, time-invariant sequence of instructions. Systolic architectures address the performance requirements of special-purpose systems by achieving significant parallel computation while avoiding I/O and memory-bandwidth bottlenecks. The high degree of parallelism is achieved by pipelining the data through multiple processors, typically arranged in a two-dimensional fashion. Systolic architectures maximize the computations performed on a datum once it has been fetched from memory or an external device: the datum enters the systolic array and is passed to any processor that needs it, without an intervening store to memory.

Example 12.6: Figure 12.27 shows a simple systolic array that calculates the product of two matrices. The zero inputs shown moving through the array are used for synchronization. Each processor begins with its accumulator set to zero and, during each cycle, adds the product of its two inputs to the accumulator. After five cycles the matrix product is complete. A variety of special-purpose systems have used systolic arrays as algorithm-specific architectures, particularly for signal-processing applications. Programmable and reconfigurable systolic architectures (the Intel iWarp, the Saxpy Matrix) have also been constructed.", "url": "RV32ISPEC.pdf#segment447", "timestamp": "2023-10-17 20:16:36", "segment": "segment447", "image_urls": [], "Book": "computerorganization" }, { "section": "12.7 EXAMPLE SYSTEMS ", "content": "Several of the example systems described earlier in this book operate in SIMD or MIMD mode to a certain extent, and pipelining (an MISD structure) is used extensively in modern-day machines. The Cray series of architectures described earlier in this book can operate in all four modes of the Flynn classification, depending on the context of execution. This section concentrates on examples of SIMD and MIMD architectures known as supercomputers. The traditional definition of supercomputers is that they are the most powerful computers available at any given time; the definition is dynamic, in that today's supercomputers will not be considered supercomputers a few years from now. Almost all supercomputers today are designed for high-speed floating-point operations, and on the basis of their size they are classified into high-end, mid-range, and single-user systems.
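The systolic matrix-product array of Example 12.6 can be simulated cycle by cycle. A sketch under assumptions (the skewing scheme is a common textbook formulation, not necessarily the exact arrangement of Figure 12.27): rows of A enter from the left and columns of B from the top, each delayed by one cycle per row/column with zero padding for synchronization; every PE multiplies its current inputs, adds the product to its accumulator, and passes the inputs on to its right and lower neighbors.

```python
def systolic_matmul(A, B):
    # Cycle-accurate simulation of an n x n systolic array computing
    # C = A * B in 3n - 2 pulses of the global clock.
    n = len(A)
    acc = [[0] * n for _ in range(n)]    # per-PE accumulators
    a_reg = [[0] * n for _ in range(n)]  # A values moving right
    b_reg = [[0] * n for _ in range(n)]  # B values moving down
    for t in range(3 * n - 2):
        # shift: each PE passes its inputs to the right/down neighbor
        for i in range(n):
            for j in range(n - 1, 0, -1):
                a_reg[i][j] = a_reg[i][j - 1]
        for i in range(n - 1, 0, -1):
            for j in range(n):
                b_reg[i][j] = b_reg[i - 1][j]
        # inject skewed boundary inputs (zeros outside the window)
        for i in range(n):
            k = t - i
            a_reg[i][0] = A[i][k] if 0 <= k < n else 0
        for j in range(n):
            k = t - j
            b_reg[0][j] = B[k][j] if 0 <= k < n else 0
        # pulse: every PE multiplies and accumulates simultaneously
        for i in range(n):
            for j in range(n):
                acc[i][j] += a_reg[i][j] * b_reg[i][j]
    return acc
```

The skew makes A[i][k] and B[k][j] meet at PE(i,j) on cycle i + j + k, so each accumulator collects exactly the terms of one output element.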
supercomputer architectures have been attempted since the 1960s to overcome the SISD throughput bottleneck. The majority of these architectures were implemented in quantities of one or two; nevertheless, the research and development efforts behind them have contributed immensely to the area of computer architecture. In the TOP500 list of supercomputers published in November 2006 (http://www.top500.org/lists/2006/11), the IBM BlueGene/L system installed at the DOE's Lawrence Livermore National Laboratory (LLNL) retained the number-one spot with a Linpack performance of 280.6 teraflops (trillions of floating-point calculations per second, or TFlops). The number-two system, Sandia National Laboratories' Cray Red Storm supercomputer, was the second system ever recorded to exceed the 100-TFlops mark, at 101.4 TFlops; the initial Red Storm system was ranked number 9 in the previous listing. Cray and IBM were selected by the High Productivity Computing Systems (HPCS) program of the Defense Advanced Research Projects Agency (DARPA) to develop more powerful and easier-to-use systems providing performance of one quadrillion floating-point operations per second (a petaflop). For details on the IBM BlueGene/L system, refer to Gara et al. (2005) at http://www.research.ibm.com/journal/rd/492/gara.html. We provide a brief description of the latest supercomputer system from Cray in the next section.", "url": "RV32ISPEC.pdf#segment448", "timestamp": "2023-10-17 20:16:36", "segment": "segment448", "image_urls": [], "Book": "computerorganization" }, { "section": "12.7.1 Cray XT4 ", "content": "This section is extracted from http://cray.com/downloads/crayxt4datasheet.pdf. The Cray XT4 system offers a new level of scalable computing: a single, powerful computing system that handles the most complex problems. Every component is engineered to run massively parallel processing (MPP) applications to completion, reliably and fast. The operating system and management system are tightly integrated and designed for ease of operation at massive scale. Scalable performance analysis and debugging tools allow rapid testing and fine-tuning of applications. Highly scalable global I/O performance ensures high efficiency for applications that require rapid I/O access to large datasets. The XT4 system brings new levels of scalability and sustained performance to high-performance computing (HPC). It is engineered to meet the demanding needs of capability-class HPC
applications. Every feature and function is selected to enable larger problems, faster solutions, and a greater return on investment. Designed to support the most challenging HPC workloads, the XT4 supercomputer delivers scalable power for the toughest computing challenges. Every aspect of the XT4 is engineered to deliver superior performance for massively parallel applications, including: (1) scalable processing elements, each with high-performance AMD processors and memory; (2) a high-bandwidth, low-latency interconnect; (3) an MPP-optimized operating system; (4) a standards-based programming environment; (5) sophisticated reliability, availability, and serviceability (RAS) and system-management features; and (6) a high-speed, highly reliable I/O system. The basic building block of the XT4 is the processing element (PE). Each PE is composed of one Opteron processor (single, dual, or quad core) coupled with its own memory and dedicated communication resource. This design eliminates the scheduling complexities and asymmetric performance problems associated with clusters of shared-memory processors (SMPs) and ensures that performance is uniform across distributed-memory processes, an absolute requirement for scalable algorithms. Each XT4 compute blade includes four compute PEs for high scalability in a small footprint; service blades include two service PEs and provide direct I/O connectivity. The Opteron microprocessor offers a number of advantages for superior performance and scalability. Its on-chip, highly associative data cache supports aggressive out-of-order execution, with up to nine instructions issued simultaneously. Its integrated memory controller eliminates the need for a separate memory-controller chip, providing an extremely low-latency path to local memory of less than 60 ns, a significant performance advantage, particularly for algorithms that require irregular memory access. The 128-bit-wide memory controller provides 10.6-12.8 GB/s of local memory bandwidth per AMD Opteron, more than one byte per flop; this balance brings a performance advantage to algorithms that stress local memory bandwidth. HyperTransport technology enables a 6.4 GB/s direct connection between the processor and the XT4 interconnect, removing the PCI bottleneck inherent in other interconnects. Each XT4 PE can be configured with 1 to 8 GB of DDR2 memory. Memory on compute PEs is unbuffered, which provides applications with the lowest possible memory latency. The XT4 system (Figure 12.28) incorporates a high-bandwidth, low-latency interconnect composed of Cray SeaStar2 chips and high-speed links based on HyperTransport and proprietary protocols. The interconnect directly connects all processing elements in an XT4 system in a 3D torus topology, eliminating the cost and complexity of external switches; this improves reliability and allows systems to economically scale to tens of thousands of nodes, well beyond the capacity of fat-tree switches. As the backbone of the XT4 system, the interconnect carries message-passing traffic as well as I/O traffic for the global file system. The Cray SeaStar2 chip (Figure 12.29) combines communication processing and high-speed routing in a single device. Each communication chip is composed of a HyperTransport link, a direct memory access (DMA) engine, a communication and management processor (PowerPC 440), a high-speed interconnect router, and a service port. The router in the Cray SeaStar2 chip provides six high-speed network links that connect to the six neighbors in the 3D torus. The peak bidirectional bandwidth of each link is 7.6 GB/s, with a sustained bandwidth in excess of 6 GB/s. The router also includes a reliable link protocol with error correction and retransmission. The DMA engine and the PowerPC 440 processor work together to offload message preparation and demultiplexing tasks from the AMD processor, leaving it free to focus exclusively on computing tasks. Logic within the SeaStar2 efficiently matches Message Passing Interface (MPI) send and receive operations, eliminating the need for the large, application-robbing memory buffers required by typical cluster-based systems. The DMA engine and the XT4 operating system work together to minimize latency by providing a path directly from the application to the communication hardware without the traps and interrupts associated with traversing a protected kernel. Each link on the chip runs a reliability protocol that supports cyclic redundancy checking (CRC) and automatic retransmission in hardware. In the presence of a bad connection, a link can be configured to run in degraded mode while still providing connectivity. The Cray SeaStar2 chip also provides a service port that bridges between the separate management network and the Cray SeaStar2 local bus. This service port
allows the management system to access all registers and memory in the system and facilitates booting, maintenance, and system monitoring. The XT4 operating system, UNICOS/lc, is designed to run large, complex applications and scale efficiently to 120,000 processor cores. As in previous-generation MPP systems from Cray, UNICOS/lc consists of two primary components: a microkernel on the compute PEs and a full-featured operating system on the service PEs. The XT4 microkernel runs on the compute PEs and provides a computational environment that minimizes system overhead, which is critical to allowing the system to scale to thousands of processors. The microkernel interacts with an application process in a limited way, including managing virtual memory addressing, providing memory protection, and performing basic scheduling. This special lightweight design means that virtually nothing stands between a user's scalable application and the bare hardware. The microkernel architecture ensures reproducible runtimes for MPP jobs, supports fine-grain synchronization at scale, and ensures high-performance, low-latency MPI and shared-memory (SHMEM) communication. The service PEs run a full Linux distribution and can be configured to provide login, I/O, system, or network services. Login PEs offer the programmer the look and feel of a Linux-based environment with full access to the programming environment and standard Linux utilities, commands, and shells, making program development easy and portable. Network PEs provide high-speed connectivity to other systems. I/O PEs provide scalable connectivity to the global, parallel file system. System PEs are used to run global system services such as the system database. Service PEs can be scaled to fit the size of the system and the specific needs of its users. Jobs are submitted interactively from the login PEs using the XT4 job-launch command, or through the PBS Pro batch program, which is tightly integrated with the system's PE scheduler. Jobs are scheduled on dedicated sets of compute PEs, and the system administrator can define batch and interactive partitions. The system provides accounting for parallel jobs as single entities with aggregated resource usage. The XT4 system maintains a single root file system across all nodes, ensuring that modifications are immediately visible throughout the system without transmitting changes to each individual PE. Fast boot times ensure that software upgrades can be completed quickly and with minimal downtime.
Designed around open system standards, the XT4 is an easy system to program. Its single-PE architecture and microkernel-based operating system ensure that system-induced performance issues are eliminated, allowing users to focus exclusively on their applications. The XT4 programming environment includes tools designed to complement and enhance each other, resulting in a rich, easy-to-use programming environment that facilitates the development of scalable applications. The AMD processor natively supports both 32-bit and 64-bit applications, and full x86-64 compatibility makes the XT4 system compatible with a vast quantity of existing compilers and libraries, including optimized C, C++, and Fortran 90 compilers and high-performance math libraries with optimized versions of BLAS, FFTs, LAPACK, ScaLAPACK, and SuperLU. Communication libraries include MPI and SHMEM. The MPI implementation is compliant with the MPI 2.0 standard and is optimized to take advantage of the scalable interconnect, offering scalable message-passing performance to tens of thousands of PEs. The SHMEM library is compatible with previous Cray systems and operates directly on the Cray SeaStar2 chip to ensure uncompromised communication performance. The Cray Apprentice performance analysis tools, also included with the XT4, allow users to analyze resource utilization throughout their code and help uncover load-balance issues when executing in parallel. The Cray RAS and Management System (CRMS), shown in Figure 12.30, integrates hardware and software components to provide system monitoring, fault identification, and recovery. With an independent system of control processors and a supervisory network, the CRMS monitors and manages all major hardware and software components of the XT4. In addition to providing recovery services in the event of a hardware or software failure, the CRMS controls power-up, power-down, and boot sequences, manages the interconnect, and displays the machine state to the system administrator. Because the CRMS is independent of the system's processors and supervisory network, the services it provides do not take resources from running applications. If a component fails, the CRMS continues to provide fault identification and recovery services and allows the functional parts of the system to continue operating. The XT4 is designed for high reliability: redundancy is built into critical components and single points of failure are minimized. For example,
the system could lose an I/O PE without losing the job using it, and a processor or local memory could fail, yet jobs routed through that node would continue uninterrupted. No system board contains moving parts, enhancing overall reliability. The XT4 processor and I/O boards use socketed components wherever possible; the SeaStar2 chip, RAS processor module, DIMMs, voltage regulator modules (VRMs), and AMD processors are all field-replaceable and upgradeable components. Redundant power, including redundant VRMs, is provided on all system blades. The XT4 I/O subsystem scales to meet the bandwidth needs of even the most data-intensive applications. The I/O architecture consists of storage arrays connected directly to I/O PEs that reside on the high-speed interconnect. The Lustre file system manages the striping of file operations across these arrays. This highly scalable I/O architecture enables customers to configure the XT4 with the desired bandwidth by selecting the appropriate number of arrays and service PEs, and it gives user applications access to a high-performance file system with a global name space. To maximize I/O performance, Lustre is integrated directly with applications running on the system microkernel: data moves directly between application space and the Lustre servers on the I/O PEs without an intervening data copy through the lightweight kernel. Thus the XT4 combines the scalability of a microkernel-based operating system with the I/O performance normally associated with large-scale SMP servers. Figure 12.31 shows the performance characteristics of various configurations of the XT4.", "url": "RV32ISPEC.pdf#segment449", "timestamp": "2023-10-17 20:16:37", "segment": "segment449", "image_urls": [], "Book": "computerorganization" }, { "section": "12.8 SUMMARY ", "content": "This chapter provided details of a popular architecture classification scheme and showed that practical machines do not fall neatly into any one classification of the scheme but span several classifications according to their mode of operation. Two supercomputer architectures (SIMD and MIMD) were detailed, along with a brief description of a commercially available supercomputer. The dataflow architectures described in this chapter provide structures for accommodating fine-grain parallelism by utilizing a data-driven paradigm. Systolic architectures are
specialized MIMD architectures. The major aim of all these advanced architectures is to exploit the parallelism inherent in processing algorithms. The parallelism exploited by these architectures spans the fine-grain parallelism provided by dataflow architectures to the coarse-grained parallelism provided by distributed systems.", "url": "RV32ISPEC.pdf#segment450", "timestamp": "2023-10-17 20:16:37", "segment": "segment450", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 13 ", "content": "Embedded Systems. An embedded system is a special-purpose system that embeds a computer designed to perform one or a few predefined tasks, usually with specific requirements. The computer system is completely encapsulated by the device it controls, as in a traffic-light controller, for instance. The system is designed to perform a single function, and at the heart of the system is an embedded computer. This is in contrast to desktop systems such as personal computers, which perform general-purpose computation tasks. Since embedded systems are dedicated to specific tasks, their design can be optimized to reduce the size and cost of the product; they are often mass-produced, thus multiplying the cost savings. Almost all the systems we use in our daily lives today embed computers. Examples are: (1) MP3 players; (2) handheld computers and personal digital assistants (PDAs); (3) cellular telephones; (4) automatic teller machines (ATMs); (5) automobile engine controllers and antilock brake controllers; (6) home thermostats, ovens, and security systems; (7) videogame consoles; (8) computer peripherals such as routers and printers; and (9) guidance and control systems for aircraft and missiles. Probably the first mass-produced embedded system was the Autonetics D-17 guidance computer, built in 1961 for the Minuteman missile. It was built with discrete transistor logic devices and had a hard disk for main memory. In 1966, the D-17 was replaced with a new design that constituted the first high-volume use of integrated circuits. In 1978, the National Electrical Manufacturers Association released a standard for programmable microcontrollers whose definition included single-board computers, numerical controllers, and sequential controllers that perform event-based instructions. Progress in hardware technology (VLSI) has provided the capability to build chips with enormous processing power and functionality at low cost. For instance, the first microprocessor, the Intel 4004, which was used in calculators and other small systems, required external memory and support chips. By the 1980s, external system components had been integrated onto the same chip as the processor, resulting in microcontrollers, the first generation of embedded systems. Embedded systems today are classified into microcontrollers, embedded processors, and systems on a chip (SoC), depending on the context, functionality, and complexity. An SoC is an application-specific integrated circuit (ASIC) that uses an intellectual-property (IP) processor architecture. The distinction between these classes continues to blur due to progress in hardware and software technologies; the concepts discussed in this chapter apply equally well to all three classes. Example 13.1: Consider the design of a traffic-light controller. The simplest design implements a fixed red-yellow-green light sequence, with each light staying on for a predetermined time period. There are no inputs to the system, and the three outputs correspond to the three lights. The system can be designed as a sequential circuit with three states and implemented using small- and medium-scale ICs. Alternatively, a programmable logic device (PLA or PAL) can be used along with flip-flops to reduce the part count, and if the volume justifies it, an FPGA can be used to implement the controller circuit. Let us now enhance the functionality of the traffic-light controller to add the capability to sense the number of cars passing the intersection in each direction and adjust the red-yellow-green periods based on the traffic count. We now need sensors on each street with corresponding inputs to the controller circuit, along with the outputs to handle the lights. The control algorithm is now more complex; although the hardware can still be implemented using PLAs, PALs, or FPGAs, it is advantageous to use a processor. Processors designed for such applications, called microcontrollers, are single-chip devices available from various vendors. They allow programming to implement and change the control algorithm, contain a small amount of RAM and ROM, provide input and output interfaces, and provide timing circuits. As the complexity of the system increases to where a single-chip microcontroller solution is no longer adequate, an ASIC solution is used if the quantities needed justify its cost. Many central processing unit (CPU) architectures are used in embedded designs: ARM, MIPS, ColdFire/68k, PowerPC, Intel x86, 8051, PIC microcontroller, Atmel AVR, Renesas H8, SH, V850, etc. High-volume applications use either FPGAs or ASICs.
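The fixed-sequence controller of Example 13.1 can be sketched as a three-state machine. This is a minimal illustration only; the state names and dwell times below are assumptions for the sketch, not values from the text.

```c
/* Sketch of the fixed red-yellow-green sequencer from Example 13.1.
   Dwell times are illustrative assumptions, not values from the text. */
#include <assert.h>

enum light { RED, GREEN, YELLOW };

/* Next-state logic of the three-state sequential circuit. */
enum light next_state(enum light s)
{
    switch (s) {
    case RED:    return GREEN;
    case GREEN:  return YELLOW;
    case YELLOW: return RED;
    }
    return RED; /* unreachable */
}

/* Predetermined on-time for each light, in seconds (assumed values). */
int dwell_seconds(enum light s)
{
    return (s == GREEN) ? 25 : (s == RED) ? 30 : 5;
}
```

A hardware implementation would encode `next_state` in the PLA/PAL combinational logic and hold the current state in flip-flops, advancing when a timer expires.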
Operating systems such as DOS, Linux, or NetBSD, or embedded real-time operating systems (RTOS) such as QNX or Inferno, are used in designing embedded systems. We outline embedded system characteristics in the next section, followed by a brief introduction to software architectures in Section 13.2. RTOSs are introduced in Section 13.3, and brief descriptions of two commercially available architectures are provided in Section 13.4.", "url": "RV32ISPEC.pdf#segment451", "timestamp": "2023-10-17 20:16:37", "segment": "segment451", "image_urls": [], "Book": "computerorganization" }, { "section": "13.1 CHARACTERISTICS ", "content": "As mentioned earlier, embedded systems are special-purpose machines designed for one or a few specific tasks. The application may also have real-time performance constraints that must be met. A major emphasis in embedded system design is to reduce hardware complexity and cost while the performance requirements dictate all other aspects. The input and output devices (switches, motors, lights, etc.) attached to the embedded processor's I/O interfaces are called field devices. The processor is usually integrated into the housing of the field devices, or may even be part of the circuit board of a field device. The processor examines the status of the input field devices, carries out the control plan (i.e., executes the program), and produces responses to control the output field devices. This cycle of activity is called a scan: during each scan, the inputs are tested, the control plan is evaluated, and the outputs are updated. The control program is stored in ROM or flash memory chips, and a small amount of RAM supports the program's operation. Embedded systems may reside in hostile environments and are expected to run continuously for years; in addition, they need to recover when an error occurs. To facilitate this, unreliable mechanical moving parts such as disk drives, switches, and buttons are avoided as far as possible, and the software is thoroughly tested before deployment. Special timing circuits called watchdog timers are used as an error-recovery mechanism. The watchdog timer is initialized to a certain value at the beginning of each scan or at defined time intervals and counts down as the scan progresses. The processor periodically notifies the watchdog to reset. If no notification is received and the watchdog's value reaches zero or a predefined value, it assumes that a fault condition has occurred and resets the processor, thus bringing the system back to normal operation. Watchdog timers may also take the system to a safe state by turning off
potentially dangerous subsystems until the fault is cleared. Early systems used assembly language for developing applications. More recent systems use high-level languages such as C and embedded versions of Java, along with embedded assembly-level code for critical sections of code. In addition to compilers, assemblers, and debuggers, software designers use in-circuit emulators (ICEs), CRC checkers, and other tools to develop embedded system software. An ICE is a special hardware device that replaces or plugs into the embedded processor and facilitates loading and debugging experimental code in the system. When most embedded systems start up, the firmware runs a self-test before starting the application code. The self-test covers the CPU, RAM, ROM, peripherals, and power supplies. Passing the self-test is usually indicated by LEDs or other visual means, providing simple diagnostics to technicians and users. In addition, safety tests are run within a safety interval to assure that the system is still reliable.", "url": "RV32ISPEC.pdf#segment452", "timestamp": "2023-10-17 20:16:37", "segment": "segment452", "image_urls": [], "Book": "computerorganization" }, { "section": "13.2 SOFTWARE ARCHITECTURES ", "content": "This section provides brief descriptions of the common software architectures for embedded systems.", "url": "RV32ISPEC.pdf#segment453", "timestamp": "2023-10-17 20:16:37", "segment": "segment453", "image_urls": [], "Book": "computerorganization" }, { "section": "13.2.1 Simple Control Loop (Round Robin) ", "content": "In this architecture, the control software consists of a simple loop; within the loop, calls are made to subroutines, each of which handles a part of the control function and/or manages a part of the hardware. A state-machine model of the software can be utilized to represent the set of states of the system and the changes between them. This is the simplest of the software architectures and is employed in small devices with a standalone microcontroller dedicated to a simple task. If the system imposes timing constraints, the loop's modules need to be designed to meet those constraints. Since the software is controlled by the main loop, with no interrupt handling, adding new features becomes difficult. Example 13.2: The code for a controller servicing several devices is shown below:

controller()
{
   while (true) {
      service_A();
      service_B();
      service_C();
      ...
      service_X();
   }
}

The devices are serviced in the order A, B, C, ..., X during each scan. Additional devices can be included simply
by including additional calls in the main loop. If the controller is servicing C when device B requires service, B has to wait until the next scan. If such long delays can be tolerated, this architecture is good enough to handle simple control tasks. Example 13.3: Consider a multimeter that can measure current (amps), resistance (ohms), and voltage (volts). The type of measurement needed is selected by a switch on the multimeter. The control software for the device is shown below:

multimeter_controller()
{
   while (true) {
      position = read_switch_position();
      switch (position) {
         case CURRENT:    read_amps();  output(); break;
         case VOLTAGE:    read_volts(); output(); break;
         case RESISTANCE: read_ohms();  output(); break;
      }
   }
}

The program services only the selected device during each scan; thus, the scan period is dictated by the slowest of the devices.", "url": "RV32ISPEC.pdf#segment454", "timestamp": "2023-10-17 20:16:37", "segment": "segment454", "image_urls": [], "Book": "computerorganization" }, { "section": "13.2.2 Interrupt-Controlled Loop (Round Robin with Interrupts) ", "content": "In the interrupt-controlled architecture, the tasks performed by the system are triggered by events: a timer trigger, an input port receiving a data byte, etc. These systems run a simple task in a main loop, but each interrupt is also handled by an interrupt event handler (i.e., an interrupt service routine). Example 13.4: The following code shows the software for a controller with three devices to handle. Three interrupt flags, one per device, are set by hardware when the corresponding device interrupts. The main loop checks each interrupt flag and services the device whose flag is set:

flag_A = false; flag_B = false; flag_C = false;

interrupt_handler_A()   /* service device A */
{  flag_A = true;  }
interrupt_handler_B()   /* service device B */
{  flag_B = true;  }
interrupt_handler_C()   /* service device C */
{  flag_C = true;  }

main()
{
   while (true) {
      if (flag_A) { flag_A = false; /* perform I/O for A */ }
      if (flag_B) { flag_B = false; /* perform I/O for B */ }
      if (flag_C) { flag_C = false; /* perform I/O for C */ }
   }
}

When an interrupt occurs, the controller gets out of the main loop and handles the interrupt. The interrupts are assigned appropriate priorities. The execution time of the interrupt handlers needs to be short to keep the interrupt latency to a minimum; longer tasks are usually added to a queue structure by the interrupt handler and processed by the main loop later. This architecture corresponds to a multitasking kernel with discrete processes.", "url": "RV32ISPEC.pdf#segment455", "timestamp": "2023-10-17 20:16:37", "segment": 
"segment455", "image_urls": [], "Book": "computerorganization" }, { "section": "13.2.3 Real Time Operating System (RTOS)-Based Architectures ", "content": "There are two RTOS-based software architectures: (1) Nonpreemptive multitasking (function-queue scheduling). In this architecture, the programmer implements the control algorithm as a series of tasks, each task running in its own environment. The tasks are arranged in an event queue, and a loop processes the events one at a time. Adding new functionality is easier, since it involves only including a new task by adding it to the queue interpreter. (2) Preemptive multitasking. In this architecture, the system uses an RTOS, allowing application programmers to concentrate on device functionality rather than on operating system services. Refer to the books by Simon (1999), Vahid and Givargis (2002), and Lewis (2002) listed in the references for details on these architectures. A brief description of operating systems (OS) follows.", "url": "RV32ISPEC.pdf#segment456", "timestamp": "2023-10-17 20:16:37", "segment": "segment456", "image_urls": [], "Book": "computerorganization" }, { "section": "13.3 OPERATING SYSTEM ", "content": "Each executing program is a task under the control of the operating system. An operating system that can execute multiple tasks simultaneously is said to be multitasking. The use of a multitasking operating system simplifies the design of complex software applications by enabling the application to be partitioned into a set of smaller, more manageable tasks. A multitasking OS allows the execution of these simpler tasks and facilitates intertask communication as needed. This mode of operation also removes complex timing and sequencing details from the application code and transfers that responsibility to the operating system. The major functions of an operating system are: (1) keeping track of the status of all resources (processors, memories, switches, and I/O devices) at any instant of time; (2) assigning tasks to processors in a justifiable manner (i.e., to maximize processor utilization); (3) spawning (creating) new processes such that they can be executed in parallel or independently of each other; and (4) when spawned processes are completed, collecting their individual results and passing them to other processes as required.", "url": "RV32ISPEC.pdf#segment457", "timestamp": "2023-10-17 20:16:38", "segment": "segment457", "image_urls": [], "Book": "computerorganization" }, { "section": "13.3.1 Multitasking versus Concurrency ", 
"content": "Consider a single-processor system, which can execute only a single task at a time. If the application is partitioned into multiple tasks with a strict serial dependency among them, only one task can run at a time, in the order the dependency dictates, and in this context a multitasking OS offers no advantage. If the application allows partitioning into multiple tasks that can run simultaneously, then even on a single-processor system a multitasking OS provides an advantage. Example 13.5: Figure 13.1 shows a timing diagram depicting the execution of three tasks on a single-processor system. Assume that each task is given a time slot and that the processor executes the tasks in the order shown, one per time slot. In practice, such round-robin (uniform) scheduling of tasks may not be possible: a task might require a resource (peripheral, register, etc.) that is not available, in which case the task is suspended, either by the OS or by the task itself, until the resource becomes available. The scheduler, part of the OS kernel, decides which task should be executing at any particular time; it can suspend and resume a task several times during the task's lifetime. In addition to being suspended involuntarily by the RTOS kernel, a task may choose to suspend itself. It does so if it either wants to delay (i.e., sleep) for a certain period or wait (i.e., block) for a resource to become available or an event to occur. A task execution environment is shown in Figure 13.2. At t1, Task 1 is executing. At t2, the kernel suspends Task 1 and starts (or resumes) Task 2; let us assume that Task 2 locks input device 1 for its exclusive access. At t3, the kernel suspends Task 2 and resumes Task 3. Suppose Task 3 tries to access input device 1 and finds the device locked; it cannot continue execution, and hence Task 3 suspends itself. Task 2 may resume at t4, complete its operation on device 1, and release it. The kernel resumes Task 1 at t5, followed by Task 3 at t6; Task 3 can now access device 1 and continues to execute until it is suspended by the kernel. Switching between tasks is equivalent to interrupt servicing. Recall that the processor status is saved on entering an interrupt service routine and restored on return from the interrupt service. Along the same lines, when a task is blocked or suspended, the resources it is holding and the contents of various registers, peripherals, etc. (the context) must be maintained to allow the task to resume properly; the operating system kernel is responsible for ensuring this. The process of saving the context of a task being suspended and restoring the context of the task being resumed is called context switching. Since real-time systems are designed to provide a timely response to real-world events, the RTOS scheduling policy must
ensure that the deadlines imposed by the system requirements are met. To achieve this objective, we typically assign a priority to each task; the scheduling policy of the RTOS then ensures that the highest-priority task able to execute at any particular time is the task scheduled to execute. Example 13.6: Consider a real-time system equipped with a keypad and LCD. It is required that the user get visual feedback of each key press within 50 ms; if the user does not see the key press accepted within 50 ms, the system will be awkward to use, while any response between 0 and 50 ms would be acceptable. The system also performs a control function: it samples a set of sensors (inputs), executes the control algorithm, and produces outputs to operate a set of valves. It is required that the control cycle be executed every 4 ms; a timer provides a timing trigger every 4 ms. Two tasks are required for this application, one to handle key presses and one to handle the control cycle. Note that the control task has the higher priority. Key_handler_task (key handling is implemented using an infinite loop): suspend waiting for a key press; then process the key press. Control_task: suspend waiting until 4 ms have elapsed since the beginning of the previous cycle; then sample the sensors, perform the control algorithm, and produce the outputs. In addition, assume that the kernel has an idle task to indicate that the system is idle and waiting; the idle task is always in a state in which it is able to execute. Hence, to begin with, the idle task is executing. When a key press is detected, the key-handler task is executed. When the timer issues a timing trigger while the system is idling, the control task is executed. If a key press occurs while the control task is running, processing of the key press is deferred until the control cycle is completed, because of the priorities assigned to the tasks.", "url": "RV32ISPEC.pdf#segment458", "timestamp": "2023-10-17 20:16:38", "segment": "segment458", "image_urls": [], "Book": "computerorganization" }, { "section": "13.3.2 Process Handling ", "content": "This section illustrates several aspects of process creation and handling. We assume a multiprocessor system as the context for this description, but the concepts discussed are equally applicable in a single-processor context, except that all the work is done by the single processor rather than shared among multiple processors. Example 13.7: Consider the element-by-element addition of two vectors A and B to create the vector C:

C[i] = A[i] + B[i],  1 <= i <= N   (13.1)

Clearly, this computation consists of N additions, which can be executed independently of each other. Thus an MIMD with N processors can perform the computation in the time taken by one addition. If the number of processors is less than N,
the first tasks are allocated to the available processors and the remaining tasks are held in a queue. When a processor completes the execution of a task, it is allocated a new task from the queue. This mode of operation continues until all tasks are completed, as represented by the following algorithm:

1. Spawn (N - 1) processes, each with a distinct process identification number k; each spawned process starts at label:
   for k = 1 to N - 1
      fork label(k)
2. The process that executed the fork loop is assigned k = N when it reaches this point (the spawned processes jump directly to label):
   k = N
3. Add the kth elements of the vectors; N different processes perform this operation, though not necessarily in parallel:
   label: C[k] = A[k] + B[k]
4. Terminate the N processes created by fork; only one process continues beyond this point:
   join N

The new aspects of this algorithm are the fork and join constructs, two typical commands of a multiprocessing operating system used to create and synchronize tasks (processes). The fork command requests the operating system to create a new process with a distinct process identification number (k in this example); the program segment corresponding to the new process starts at the statement marked label. Note that initially the entire algorithm constitutes one process, which is first allocated to one of the free processors in the system. That processor, by executing the fork loop in step 1, requests the operating system to create N - 1 tasks and then continues to step 2. Thus, after the execution of step 2, there are N processes waiting to be executed. The process with k = N continues on the processor that is already active (i.e., the processor that spawned the other processes). The remaining N - 1 processes created by the operating system enter the process queue; the processes waiting in the queue are allocated to processors as processors become available. In this example, the kth process adds the kth elements of A and B, creating the kth element of C. The program segment corresponding to each process ends with the join statement. The join command can be viewed as the inverse of the fork command. A counter, starting at 0, is associated with join. Each processor executing join increments the counter by 1 and compares it with N. If the value of the counter is not equal to N, the processor has nothing else to execute; it therefore terminates the process and returns to the available pool of processors for subsequent allocation of tasks. If the value of the counter is N, the process, unlike the others, is not terminated but continues execution beyond the join command. Thus, join ensures that all N processes spawned earlier have completed before the program proceeds. Several aspects of this algorithm are worth noting: 1. The algorithm does not use the number of processors as a parameter and thus works on systems with any
number of processors; the overall execution time, however, depends on the number of processors available. 2. The operating system functions (creating and queuing tasks, allocating processors, etc.) are performed by another processor or process not visible in this discussion. 3. This procedure for creating processes requires O(N) time; the time can be reduced to O(log2 N) by a more complex algorithm that makes each new process also perform forks, thus having more than one fork executing simultaneously. 4. Step 1 of the algorithm is executed N - 1 times; thus the Nth process is not specifically created by a fork but is simply given the processor that is active already. Alternatively, the fork loop could be executed N times, but then the processor executing the loop would have to be deallocated and brought back to the pool of available processors before a new process could be allocated to it; the procedure shown thus eliminates the overhead of deallocating and allocating one process to a processor. Typically, the creation and allocation of tasks results in considerable overhead, requiring the execution of 50-500 instructions. 5. Each process in this example performs only one addition and hence is equivalent to about three instructions (load A[k], add B[k], store C[k]). Thus, the process creation and allocation overhead mentioned above is of the order of 10-100 times the useful work performed by each process.", "url": "RV32ISPEC.pdf#segment459", "timestamp": "2023-10-17 20:16:38", "segment": "segment459", "image_urls": [], "Book": "computerorganization" }, { "section": "13.3.3 Synchronization Mechanisms ", "content": "There was no explicit interprocess communication in the example above. In practice, the various processes in the system need to communicate. In general, an application has a set of data items shared by the tasks that comprise it, while each task has its own private data items (stacks, queues, etc.). It is important that the shared data items be accessed properly among the competing tasks. Since the processes executing on the various processors are independent, and the relative speeds of execution of these processes cannot be easily estimated, a well-defined synchronization between processes is needed so that the communication, and hence the results of the computation, is correct. When processes operate in a cooperative manner, a sequence-control mechanism is needed to ensure the ordering of operations. Also, processes compete to gain access to shared data items (resources), so an access-control mechanism is needed to maintain orderly access. This section describes primitive synchronization techniques used for access and sequence control. Example 13.8: Consider two
processes P1 and P2 executed on two different processors, and let M be a shared variable in memory. The sequences of instructions for the two processes are as follows:

P1:
1. MOV Reg1, M    (the first operand is the destination)
2. INC Reg1       (increment Reg1)
3. MOV M, Reg1

P2:
1'. MOV Reg2, M
2'. ADD Reg2, 2   (add 2 to Reg2)
3'. MOV M, Reg2

The only thing we can be sure of, as far as the order of execution of these instructions is concerned, is that instruction 2 (2') is executed after instruction 1 (1') and instruction 3 (3') is executed after instructions 1 (1') and 2 (2'). Thus, the following three cases of actual execution order are possible:

1. (1, 2, 3, 1', 2', 3') or (1', 2', 3', 1, 2, 3): the location M finally has the value 3 (assuming M is 0 initially).
2. (1, 1', 2, 2', 3, 3') and similar interleavings in which 3' is executed last: M finally has the value 2.
3. (1', 1, 2', 2, 3', 3) and similar interleavings in which 3 is executed last: M finally has the value 1.

The desired answer is attained only in the first case, which is possible only if process P1 is executed in its entirety before process P2, or vice versa; that is, P1 and P2 need to execute in a mutually exclusive manner. Segments of code that must be executed in a mutually exclusive manner are called critical sections. Thus, for P1 and P2 in this example, the critical section is the complete code corresponding to each process. Example 13.9: As another example, consider the code shown in Figure 13.3 to compute the sum of all the elements of an N-element array A. Here, each of the N processes spawned by the fork loop adds one element to the shared variable sum. Obviously, only one process should be updating sum at any given time. Imagine that process 1 has read the value of sum but has not yet stored the updated value, and meanwhile process 2 reads sum and updates it before process 1 does; the resulting sum would be erroneous. To obtain the correct value of sum, we must make sure that once process 1 reads sum, no other process can access sum until process 1 writes the updated value; that is, there must be mutual exclusion between the processes. Mutual exclusion can be accomplished by the lock and unlock constructs. A flag is associated with this pair of constructs. When a process executes lock, it checks the value of the flag: a set flag implies that some other process is accessing sum, and hence the process waits until the flag is cleared. The process then sets the flag, gains access to sum, updates it, and executes unlock, which clears the flag. Thus, lock/unlock brings about the synchronization of the processes. Note that the lock operation's functions (fetching the flag, checking its value, and updating it) must be done in an indivisible manner: no other process can access the flag until these operations are complete. This indivisible operation is brought about by a hardware primitive known as test-and-set. The use of the test-and-set primitive is
show n figur e 134 k shar ed memor variabl e valu e either 0 1 0 testandset returns 0 sets k 1 process enters critical section k 1 testandset returns 1 thus locking process entering critical section process executing critical section resets k 0 thus allowing waiting process access critical section figure 133 critica l sectio n part loop th e range loop single statement terminated loop makes process wait k goes 0 enter critical section two modes implementing wait busy waiting spin lock task switching rst mode process stays active repeatedly checks value k 0 thus ifseveralprocessesarebusywaiting theykeepthecorrespondingprocessors busy useful work gets done second mode blocked process enqueued processor switched new task although mode allows better utilization processors taskswitching overhead usually high unless special hardware support provided several processes could performing testandset operation simultan eously thus competing access shared resource k hence process examining k setting must indivisible sense k accessed process testandset completed testandset usually instruction instruction set processor minimal hardware support needed build highlevel synchronization primitives testandset effectively serializes two processes execute mutually exclusive manner process critical section processes blocked entering testandset thus mutual exclusion brought serializing execution processes hence affecting parallelism addition blocked processes incur overhead due busy waiting task switching semaphores dijkstra 1965 introduced concept semaphores dened two highlevel synchronization primitives p v based semaphore variable s dened p wait 0 process invoking p delayed 0 0 s1 process invoking p enters critical section v signal 1 initialized 1 testing decrementing p indivisible incrementing v figure 135 shows use p v synchronize processes p1 p2 binary variable implementation p v integer variable called counting semaphore initialized processes critical section time fetchandadd 
fetchandadd similar testandset implementation nonblocking synchronization primitive allows operation several processes critical section parallel yet nonconict ing manner fetchandadd shown fetchandadd temp return temp two parameters passed fetchandadd shared variable integer initialized 0 two processes p1 p2 make call fetch andadd roughly time one reaching fetchandadd rst receives original value second one receives two processes execute independently although updating effectively serialized general fetchandadd gives contending process unique number allows execute simultaneously unlike testandset serializes contending processes another example instruction fetchandadd sum increment provides addition increment sum several processes simul taneously results correct value sum without use lockunlock fetchandadd useful cases variable accessed several contending processes fetchandadd different variables done sequentially variables reside memory implementation cost fetchandadd high limited environments updates become bottleneck 10s processes contending access shared variable one process requests access shared variable time testandset economical processes messagepassing mimd automatically synchronized since message received sent messageprocessing protocols deal problems missing overwritten messages sharing resources section described primitive synchronizing mechanisms refer silberschatz et al 2003 stallings 2005 tanenbaum woodhull 2006 details", "url": "RV32ISPEC.pdf#segment460", "timestamp": "2023-10-17 20:16:39", "segment": "segment460", "image_urls": [], "Book": "computerorganization" }, { "section": "13.3.4 Scheduling ", "content": "recall parallel program collection tasks tasks may run serially parallel optimal schedule determines allocation tasks processors mimd system execution order tasks achieve shortest execution time scheduling problem general np complete several constrained models evolved years currently problem active area research parallel computing scheduling 
techniques classied two groups static dynamic static scheduling task allocated particular processor based analysis precedence constraints imposed tasks hand time task executed allocated predetermined processor obviously method take consideration nondeterministic nature tasks brought condi tional branches loops progr tar get conditional branc h upper bounds loops know n program execu tion begi ns th us sta tic sched uling optim al dynamic sched uling tasks alloca ted processor based execu tion char acterist ics usu ally som e load balanc ing heuristic emp loyed determ ining optim al alloca tion since sched uler knowled ge local inf ormation p rogram instant time nding globa l optimum dif cult anot disadvantage increased overh ead since schedule dete rmined tasks runni ng refer adam et al 1974 bashi r et al 1983 elre wini lewis 1990 deta ils", "url": "RV32ISPEC.pdf#segment461", "timestamp": "2023-10-17 20:16:39", "segment": "segment461", "image_urls": [], "Book": "computerorganization" }, { "section": "13.3 .5 Real -Time Operat ing Syst ems ", "content": "rto offers serv ices descr ibed earlier section excep opera ting envi ronment tend much simpl er gener alpurp ose system kerne l core component within operating system typical consider serv ices memor man agement network software suppor debugging etc n ot p art kernel although serv ices provided os yet rtos realtime kerne l designatio ns interc hangeably used practice th ere sever al difference operation rtos com pared nonrto s desktop envi ronment exampl e os invok ed takes cont rol system soon powe r tur ned os allows invocat ion appl ications facilit ates com pilation linkin g loading new appl ica tions micro controlle r sta rtup softw starts appl ication turn cal ls upon rtos neede d rtos appl ications closel intert wined fault condi tion results crashing microcont roller rtos also goes system restar ted gener al os environment appl ication fault condi tions bring os general os congured applica tion rtos allows 
congured include needed services particular application allows optimized usage memory resources main consideration buildi ng emb edded system s chapte r 14 provi des additional details operating systems", "url": "RV32ISPEC.pdf#segment462", "timestamp": "2023-10-17 20:16:39", "segment": "segment462", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4 EXAMPLE SYSTEMS ", "content": "section provides brief description two systems used embedded appli cations rst popular microcontroller family second processor core used ip several embedded applications", "url": "RV32ISPEC.pdf#segment463", "timestamp": "2023-10-17 20:16:39", "segment": "segment463", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4.1 8051 Family of Microcontrollers ", "content": "mentioned earlier microcontroller basically entire computer single chip usually includes cpu rom ram parallel io serial io counters prime use microcontrollers control operation machine using xed program stored rom change lifetime system family microcontrollers group devices share basic elements basic group instructions several microcontroller families available market today leaders probably motorola 6811 microchip pic intel 8051 families following sections give brief overview 8051 microcontroller family one recent derivatives ds89c450 dallas semiconductor term 8051 loosely refers mcs51 family microcontrollers started intel 1980 8051 family composed 300 different ics microcontroller family boasts complement features suited particular design setting table 131 summarizes differences among popular 8051 family chips 8052 enhanced 8051 extra timer ram rom 8031 8032 identical 8051 8052 except rom area unused program code must stored external eprom memory chips 87c series advantage providing eprom instead rom intel 8051 extremely popular 1980s early 1990s currently largely superseded wide range enhanced derivatives 8051 compatible processor cores produced several independent manufacturers includ 
ing atmel dallas semiconductor cypress semiconductor silicon labs nxp formerly philips semiconductor texas instruments winbond dallas semiconductor offers several families 8051compatible microcontrollers including secure microcontroller highspeed microcontroller ultrahighspeed ash microcontroller families preserving instruction set object code com patibility families provide additional architectural features well enhanced performance power consumption compared older 8051 members one latest 8051derivitive microcontrollers dallas semiconductor ds89c450 uses ultrahighspeed core following overview detailed architecture characteristics ds89c450 presented dallas semicon ductor manual", "url": "RV32ISPEC.pdf#segment464", "timestamp": "2023-10-17 20:16:39", "segment": "segment464", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4.1.1 DS89C450 Ultra-High-Speed Flash Microcontroller ", "content": "ds89c450 fully static cmos microcontroller maintains pin software compatibility standard 8051 general software developed existing 8051 based systems works ds89c450 without modication exception critical timing routines ds89c450 performs instructions much faster given crystal selectioninaddition ds89c450can beused asa dropin replacement older 8051 microcontroller without circuit modication cases ds89c450 newly designed processor core executes instructions 12 times faster original 8051 similar crystal speed also operate maximum clock rate 33 mhz combined 12 times speed allows maximum performance 33 million instructions per second mips besides greater speed ds89c450 offers many added hardware features 8051 standard resources includes 1 kb data ram second full hardware serial port seven additional interrupts two extra levels interrupt priority programmable watchdog timer brownout monitor powerfail reset furthermore ds89c450 pro vides several peripherals hardware features including three 16bit timercoun ters two fullduplex serial ports ve levels interrupt priority 
dual data pointers 256 bytes direct ram 1 kb extra movx ram architectural features ds89c450 detailed section extracted ds89c450 user manual", "url": "RV32ISPEC.pdf#segment465", "timestamp": "2023-10-17 20:16:39", "segment": "segment465", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4.1.2 Internal Hardware Architecture ", "content": "mentioned earlier microcontroller highly integrated chip contains cpu form memory io ports timers detailed view specic ds89c450 shown figure 136 cpu controls activity instruc tions data travel back forth cpu memory data bus communication outside world takes place io ports timers provide realtime information interrupts processor terms well microcontroller features discussed next central processing unit discussed previously cpu administers activity microcontroller system performs operations data executes program instructions including arithmetic addition subtraction logic data transfer program branching operations external crystal provides timing reference clocking cpu addressdata bus device addresses 64 kb program 64 kb data memory area resides combination internal external memory external memory accessed ports 0 2 used multiplexed address data bus memory ultrahighspeed ash microcontrollers use several distinct memory areas including internal registers scratchpad ram program memory data memory registers located onchip program data memory spaces internal external ds89c450 uses memoryaddressing scheme separates program memory data memory 16bit address bus address memory area maximum 64 kb program data segments overlapped since accessed different manners ds89c450 64 kb onchip program memory internal rom 1 kb onchip data memory space sram 256 byte internal registers internal ram also capability address 64 kb external ram 64 kb external rom maximum address onchip program data memory exceeded ds89c450 performs external memory access using expanded memory bus details memory map organization presented later discuss 
ds89c450 programming model input output parallel ports without way exchange information outside world computer useless pcs fairly standardized io connections com1 ltp1 etc microcontrollers excel providing much adaptable ios ds89c450 offers four 8bit parallel io ports labeled p0 p1 p2 p3 port appears special function register sfr addressed byte eight individual bit locations io port used general purpose bidirectional parallel io port data written port latch serve set level direction data pin serial ports serial ports transfer single bits data one another taking least eight transfers exchange byte ds89c450 provides two uarts universal asynchronous receivertransmitter controlled accessed sfrs uart address used read write value contained uart address used read write operations read write operations distinguished instruction interrupts interrupt dened signal informing program event occurred program receives interrupt signal takes specied action ignore signal interrupt signals cause program suspend temporarily service interrupt following special set events routines called interrupt handlers interrupts mechanism microcon troller enables respond events moment occur regardless microcontroller time important provides connection microcontroller environment surrounds generally interrupt changes program ow interrupts executing interrupt subprogram interrupt routine continues point ds89c450 provides 13 interrupt sources interrupts exception power fail controlled series combination individual enable bits global enable ea interruptenable register ie7 setting ea logic 1 allows individual interrupts enabled setting ea logic 0 disables interrupts regardless individual interruptenable settings powerfail interrupt controlled individual enable ve levels interrupt priority level 4 0 highest interrupt priority level 4 reserved powerfail interrupt interrupts individual priority bits interrupt priority registers allow interrupt assigned priority level 3 0 powerfail interrupt always highest 
priority enabled interrupts also natural hierarchy manner set interrupts assigned priority second hierarchy determines interrupt allowed take precedence timerscounters ds89c450 incorporates three 16bit programmable timers watchdog timer programmable interval three timers used either counters external events 1to0 transitions port pin monitored counted timers count oscillator cycles timers 0 1 nearly identical timer 2 several additional features updown counting capture values optional output pin make unique timers 0 1 three common operating modes 13bit timercounter 16bit timercounter 8bit timercounter autoreload timer 0 additionally congured operate two 8bit timers watchdog timer programmable freerunning timer provides supervisory function applications afford run control watchdog timer reset provides cpu monitoring requiring software clear timer userselected interval expires timer reset cpu reset watchdog timing control ds89c450 microcontroller provides onchip oscillator use external crystal bypassed injecting clock source xtal1 pin clock source used create machine cycle timing four clocks ale psen watchdog timer serial baud rate timing addition onchip ring oscillator used provide approximately 10 mhz clock source frequency multiplier feature included selected sfr control multiply input clock source either two four allows lowerfrequency cost crystals used still allowing internal operation full 33 mhz limit power monitor bandgap reference analog circuitry incorporated monitor powersupply conditions vcc begins drop tolerance power monitor issues optional early warning powerfail interrupt power continues fall power monitor invokes reset condition remains power returns normal operating voltage programming model section provides programmer overview ds89c450 microcontroller core includes information memory map sfrs addressing modes instruction set memory map critical understand memory layout ds89c450 architecture program device complete memory map shown figure 137 
registers located 256 bytes onchip scratchpad ram labeled internal registers figure 137 divided two subareas 128 bytes separate classes instructions used access registers programdata memory upper 128 bytes overlapped 128 bytes sfrs memory map indirect addressing used access upper 128 bytes scratchpad ram sfr area accessed using direct addressing sfrs discussed detail later section lower 128 bytes accessed using direct indirect addressing contains 16 bytes 128 bits bitaddressable data memory allowing bit access character integer variables stored area also contains four banks eight working registers generalpurpose ram locations addressed within selected bank instructions use r0r7 register bank selection controlled program status register sfr area contents working registers used indirect addressing upper 128 bytes scratchpad ram internal 1 kb sram usable data program merged programdata memory upon poweron reset internal 1 kb memory disabled transparent program data memory maps sram enabled internal data memory memory addressed movx accesses rst 1 kb 0000h03ffh internal sram sram congured program memory memory addressed movc accesses second 1 kb 4000h07ffh internal sram program memory area instructions fetched inherently read onchip program memory begins address 0000h ends ffffh 64 kb ds89c450 exceeding maximum address onchip program memory causes device access offchip memory maximum on chip decoded address selectable software using romsize feature soft ware cause ds89c430 behave like device less onchip memory benecial overlapping external memory used maximum memory size dynamically variable thus portion memory removed memory map access offchip memory restored access onchip memory fact onchip memory removed memory map allowing full 64 kb memory space addressed offchip memory specialfunction registers ds89c450 contains several dedicated internal registers provide special functions cpu programmer dedi cated registers called sfrs peripherals operations explicit 
instructions ds89c450 controlled sfrs common features basic architecture mapped sfrs include cpu registers acc b psw data pointers stack pointer io ports timercoun ters serial ports many cases sfr controls individual function reportsthefunction sstatusthesfrsresideinregisterlocations80hffhandare acce ssible direct addressi ng table 132 shows sfrs locations following description important sfrs accumulator acc many operations involving math data movement decisions accumulator acts source destination even though bypassed highspeed instructions need use accumulator one argument b register b used second 8bit argument multiply divide operations used tasks b register used generalpurpose register program status word psw psw stores selection bit ags include carry ag auxiliary carry ag generalpurpose ag register bank select overow ag parity ag data pointers dptr dptr1 data pointers used allocate memory address movx instructions address point datamemory location either on offchip memorymapped peripheral moving data one memory area another memory memorymapped peripheral pointer needed source destination program counter pc program counter 16bit value designates next program address fetched onchip hardware automatically increments pc value move next program memory location stack pointer sp stack pointer indicates register location top stack recent used value although lower bytes normally used working registers user place stack anywhere scratchpad ram setting stack pointer desired location instruction set instructions 100 binary compatible industry standard 8051 different number machine cycles used instructions refer ds89c450 manuals complete instruction set instructions occupy 1 2 3 bytes instructions following format opcode destination source however based addressing mode destination andor source elds omitted instructions addressing modes ds89c450 microcontroller supports eight addressing modes 1 register addressing 2 direct addressing 3 register indirect addressing 4 immediate 
addressing 5 register indirect addressing displacement 6 relative addressing 7 page addressing 8 extended addressing five eight addressing modes used address operands remainder three used program control branching mode addressing summarized next note many instructions add multiple addressing modes available register addressing register addressing used operands located one eight working registers r7r0 determined current register bank select bits register bank selected using 2 bits psw two examples register addressing provided add r3 add register r3 accumulator inc r5 increment value register r5 rst example value r3 source operation latter r5 destination direct addressing direct addressing mode used access entire lower 128 byte scratchpad ram sfr area commonly used move value one register another two examples shown mov 72h 74h move value register 74 register 72 mov 90h 20h move value register 20 sfr 90h port 1 note instruction difference ram access sfr access direct addressing also extends bit addressing group instructions explicitly use bits address information provided instruction bit location rather register address example direct bit addressing follows mov c 0b7h move contents bit b7 carry ag register indirect addressing mode used access scratchpad ram locations 7fh also used reach lower ram 0h7fh needed address supplied contents working register specied instruction thus one instruction used reach many values altering contents designated working register note general r0 r1 used pointers example register indirect addressing follows mov r0 move contents ram location whose address held r0 accumulator mov r1 b move contents b ram location whose address held r1 immediate addressing immediate addressing used one operands predetermined coded software mode commonly used initialize sfrs mask particular bits without affecting others example follows orl 30h logical accumulator 30h register indirect displacement register indirect addressing displacement used access data 
lookup tables program memory space location created using base address index base address either pc dptr index accumulator result stored accumulator example follows movc dptr load accumulator contents program memory pointed contents dptr plus value accumulator relative addressing relative addressing used determine destination address conditional branch instructions includes 8bit value contains 2s complement address offset 127 128 added pc determine destination address destination branched tested condition true pc points program memory location immediately following branch instruction offset added tested condition true next instruction performed example follows jz 15 branch location pc 2 15 contents accumulator 0 page addressing page addressing used branching instructions specify destination address within 2 kb block next contiguous instruction full 16bit address calculated taking ve highest order bits next instruction pc 2 concatenating lowest order 11bit eld contained current instruction example follows 0800h acall 100h call subroutine address 100h plus current page address 800h 100h extended addressing extended addressing used branching instructions specify 16bit destination address within 64 kb address space destination address xed software absolute value example follows ljmp 0f712h jump address 0f712h", "url": "RV32ISPEC.pdf#segment466", "timestamp": "2023-10-17 20:16:41", "segment": "segment466", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4 .2 ARM (Adva nced RIS C Machi ne) Microproc essor ", "content": "device similar microcont roller micro proce ssora general purpose digita l compute r cpu execu tes stored set f instructions carry user den ed tasks key term describ ing desig n micro processor general purpos e implies hardware design arranged small large system con gured around microproc essor cpu appl ication dem ands microc ontroller othe r hand usual ly serve one speci c task ref erred special purp ose devi ces th e prime use 
micro processor read data perf orm exte nsive calculat ions store calculat ions mas storage devi ce display resu lt human use sect ion gives overv iew arm one popul ar proce ssors today aco rn comput ers england introdu ced arm corn risc machine rena med advanced risc mach ine betwee n 1983 1985 rst ris c micro proce ssor developed comme rcial purposes day arm becom e lea der mi croproce ssor market accou nting 75 32bi embedded cpus arm proce ssors evolved sign icantly since rst developed sever al features added seven major versions arm arch itecture rmv1a rmv7 exis today instruction set many versions qualied variant letter indicate collection additional instructions included version collections vary small v ariant large variant refer table 133 summary different arm versions main distinguishing features informa tion arm versions indication variant letter refer arm architecture reference manual section extracted unless otherwise specied following information applies arm version 4 higher", "url": "RV32ISPEC.pdf#segment467", "timestamp": "2023-10-17 20:16:41", "segment": "segment467", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4.2.1 Internal Hardware Architecture ", "content": "arm architecture probably widely used 16 32bit embedded risc solution world maintains good balance high performance low code size low power consumption low silicon area arm architecture incorp orates following risc architecture features 1 large uniform register le 2 loadstore architecture allows dataprocessing operations operate register contents instead directly operating memory contents 3 simple addressing modes 4 fixedlength instructions simplify instruction decode moreover arm provides additional features autoincrement auto decrement addressing modes optimize loops loading storing multiple instructions maximize data throughput arm also gives developer full control alu shifter every dataprocessing instruction allows conditional execution instructions maximize 
execution throughput sectio n describ es internal structure f two basic organ izations arm processor core threestage pipe line used earlier arm vers ions developed 1995 highe rperfo rmance ves tage pipeline", "url": "RV32ISPEC.pdf#segment468", "timestamp": "2023-10-17 20:16:41", "segment": "segment468", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4.2 .2 Arm Thr ee-Stage Pipe line Organiza tion ", "content": "pri mary eleme nts arm threesta ge pipeline show n figur e 138 explain ed belo w register bank used hold state processor two read ports one write port used access register additional read port additional write port give special access program counter barrel shifter shift rotate one operand number bits words performs logical shift left logical shift right arithmetic shift right rotate right operations arithmetic logic unit part processor performs arithmetic logic functions required instruction set address register incrementer selects holds memory addresses generates sequential addresses required data registers used store data memory instruction decoder control logic block contains mechanisms decode instruction control logic", "url": "RV32ISPEC.pdf#segment469", "timestamp": "2023-10-17 20:16:42", "segment": "segment469", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4 .2.3 Single- Cycle Instr uction Execu tion ", "content": "single cycl e data processi ng tw regist er operands accessed value b bus shifted com bined value bus alu resu lt wr itten int regist er bank pc value stored addre ss register increm ented increment er increment ed value copied pc register bank also addre ss register used address next instruction fetch simple threesta ge pipe line arm processor follow ing stages 1 fetch instruction fetched memory placed instruction pipeline 2 decode instruction decoded datapath control signals produced next cycle 3 execute register bank read operand shifted alu result gener ated written back destination register 
th e thr eestage pipeline opera tion singlecyc le instru ctions shown figur e 13 9 multicyc le instru ction executed o w less regul ar show n figur e 1310 th shows sequence sing lecycle add instru ctions data store multicycl e instru ction str fol lowing rst ad d th e light shadi ng repr esents cycl es acce ss main memory datapat h involved execu te cycles address cal culation data transf er decode logic always gener ates control signals data path use next cycl e addition expl icit decode cycles also generat es cont rol data transf er duri ng addre ss cal culation cycle str thus instruction sequence parts processor active every cycle simplest way exam ine breaks arm pipeline obser 1 instructions occupy datapath one adjacent cycles 2 cycle instruction occupies datapath occupies decode logic immediately preceding cycle 3 rst datapath cycle instruction issues fetch next instruction one 4 branch instructions ush rell instruction pipeline", "url": "RV32ISPEC.pdf#segment470", "timestamp": "2023-10-17 20:16:42", "segment": "segment470", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4.2 ", "content": "4 ar fivest age pipe line organiza tion higher perform ance major cri teria developm ent new p rocessors althoug h three stage pipeline cost effect ive higher perform ance needs proce ssor redesig n time requi red execute given program given ninst f clk fclk ninst number arm instru ctions execu ted course program cpi average numb er cloc k cycl es per instru ction fclk cloc k fre quency processor since ninst constant give n program ther e tw ways incr ease performanc e 1 increase clock rate fclk requires logic pipeline stage simplied number pipeline stages increased 2 reduce average number clock cycles per instruction cp requires either instructions occupy one pipeline slot threestage pipeline reimplemented occupy fewer slots else pipeline stalls caused dependencies instructions decreased combination 2007 taylor francis group llc basic problem 
decreasing cpi relative threestage core associated von neumann bottleneck storedprogram computer single instruction data memory performance restricted available memory bandwidth threestage arm core accesses memory almost every clock cycle either fetch instruction transfer data simply tightening cycles memory used yield small performance gain get signicantly better cpi memory system must deliver one value clock cycle either delivering 32 bits per cycle single memory separate memories instruction data accesses th e higherperf ormance arm core employ vestag e pipeline figu 1311 separat e instru ction data memorie s breaking instru ction execu tion ve compo nents rather three decreases maximum work must comp leted cloc k cycle hence allow higher clock fre quency b e used separat e instruction data memorie may separ ate caches connecte unie instru ction data main memory allow signican reduction core cpi ves tage pipeline fol lowing stages 1 fetch instruction fetched memory placed instruction pipeline 2 decode instruction decoded register operands read register le three operand read ports register le arm instructions source operands one cycle 3 execute operand shifted alu result generated instruction load store memory address generated alu 4 buffer da ta data memory accessed required otherwise alu result simply buffered one clock cycle give pipeline ow instructions 5 writeback results generated instruction written back register le including data loaded memory v estage pipe line b een used man risc processor con sidered classic way design processor principal conces sions arm instru ction set arch itecture organ ization show n figure 1311 thr ee source opera nd read ports two wr ite ports register le inclusion address incrementing hardware execute stage support load store multiple instructions", "url": "RV32ISPEC.pdf#segment471", "timestamp": "2023-10-17 20:16:42", "segment": "segment471", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4.2.5 Data 
Forwarding ", "content": "major source complexity vestage pipeline instruction execution spread across three pipeline stages way resolve data dependencies without stalling pipeline introduce forwarding paths data dependencies arise instruction needs use result one predecessors result returned register le forwarding paths allow results passed stages soon available vestage arm pipeline requires three source operands forwarded three intermediate result registers vestage pipeline reads instruction operands one stage earlier pipeline would naturally get different value pc 4 rather pc 8 would lead unacceptable code incompatibilities vestage pipeline arms emulate behavior older threestage designs incremented pc value fetch stage fed directly register le decode stage bypassing pipeline register two stages pc 4 next instruction equal pc 8 current instruction correct r15 value obtained without additional hardware", "url": "RV32ISPEC.pdf#segment472", "timestamp": "2023-10-17 20:16:42", "segment": "segment472", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4.2.6 Programming Model ", "content": "section introduces programmer overview arm processor provides information data types memory register set exceptions data types arm processors support following three data types 1 bytes 8bit signed unsigned bytes 2 halfwords 16bit signed unsigned halfwords aligned 2 byte boundaries 3 words 32bit signed unsigned words aligned 4 byte boundaries memory arm architecture single at address space 232 byte address range 0 232l based different arm versions address space consist 230 32bit word word aligned 231 16bit halfword halfword aligned 232 8bit byte words aligned address space means address consists 4 byte halfwordaligned address space means address consists 2 byte arm supports littleendian memory system however also cong ured work bigendian memory system littleendian memory system byte halfword wordaligned address least signicant byte halfword within word address 
Similarly, the byte at a halfword-aligned address is the least significant byte within the halfword at that address. In a big-endian memory system, the byte or halfword at a word-aligned address is the most significant byte or halfword within the word at that address; similarly, the byte at a halfword-aligned address is the most significant byte within the halfword at that address. The two systems are illustrated in Figure 13.12", "url": "RV32ISPEC.pdf#segment473", "timestamp": "2023-10-17 20:16:42", "segment": "segment473", "image_urls": [], "Book": "computerorganization" }, { "section": "13.4.2.7 Processor Modes ", "content": "The ARM architecture supports the following seven processor modes: 1. User (usr): the normal program execution mode. 2. Fast interrupt (fiq): supports a high-speed data transfer or channel process. 3. Interrupt (irq): used for general-purpose interrupt handling. 4. Supervisor (svc): a protected mode for the operating system. 5. Abort (abt): implements virtual memory and/or memory protection. 6. Undefined (und): supports software emulation of hardware coprocessors. 7. System (sys): runs privileged operating system tasks. A program application normally runs in user mode, which does not allow the running program to access protected system resources or to change processor modes. The other six modes are called privileged modes because they have full access to system resources and can change modes freely. Of these, the fiq, irq, supervisor, abort, and undefined modes are called exception modes, because they are entered when specific exceptions occur; system mode is not entered by any exceptions and is intended for use by operating system tasks. ARM register set: the ARM processor has 37 registers, namely 31 general-purpose registers and 6 status registers. All registers are 32 bits wide and are arranged in partially overlapping banks, with a different register bank for each processor mode, as shown in Figure 13.13. The general-purpose registers are R0 through R15. Registers R0-R7 refer to the same 32-bit physical registers in every processor mode, which means they are visible to all programs. Registers R8-R14, however, refer to different physical registers based on the current processor mode, which is why the total number of general-purpose registers is 31 rather than 15. Register R15 holds the program counter (PC) and is accessible in all processor modes. The status registers include the current program status register (CPSR) and five saved program status registers (SPSRs). The CPSR contains the
condition code flags (negative, zero, carry, overflow), the interrupt enable bits, the current processor mode, and other status and control information about the current process. The mode and interrupt enable bits are not accessible in user mode, only in the exception modes. An SPSR preserves the value of the CPSR when the associated exception occurs. Instruction set: except for the compressed 16-bit Thumb instructions, ARM instructions are all exactly one word (32 bits) and are aligned on a 4-byte boundary. The ARM instruction set can be divided into six major categories: 1. Branch instructions: instructions that change the control flow of program execution. 2. Data-processing instructions: instructions that perform calculations on the general-purpose registers, including arithmetic-logic instructions and comparison instructions. 3. Status register transfer instructions: instructions that transfer the contents of the CPSR or an SPSR to or from general-purpose registers. 4. Load and store instructions: load instructions copy memory values into registers; store instructions copy register values into memory. 5. Coprocessor instructions: instructions that start a coprocessor-specific internal operation, transfer coprocessor data to or from memory, or allow a coprocessor value to be transferred to or from an ARM register. 6. Exception-generating instructions: instructions designed to cause specific exceptions to occur. I/O system: ARM handles peripherals such as disk controllers and network interfaces as memory-mapped devices with interrupt support. The internal registers of these devices appear as addressable locations within the ARM memory map and may be read and written using load/store instructions. To attract the processor's attention, a peripheral makes an interrupt request using either the normal interrupt (IRQ) or the fast interrupt (FIQ) input. Both interrupt inputs are level sensitive and maskable. Normally, most interrupt sources share the IRQ input, with one or two time-critical sources connected to the higher-priority FIQ input. Some systems may include direct memory access (DMA) hardware external to the processor to handle high-bandwidth I/O traffic", "url": "RV32ISPEC.pdf#segment474", "timestamp": "2023-10-17 20:16:43", "segment": "segment474", "image_urls": [], "Book": "computerorganization" }, { "section": "13.5 SUMMARY ", "content": "Hardware and software technologies have progressed to the level of
providing cost-effective processors, and embedded processors have become common. This chapter introduced the concept of embedded processing and provided examples of two commercially available architectures. Software structures and operating system characteristics suitable for this architecture type were also discussed", "url": "RV32ISPEC.pdf#segment475", "timestamp": "2023-10-17 20:16:43", "segment": "segment475", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 14 ", "content": "Computer Networks and Distributed Processing. In the SIMD and MIMD architectures, several processors are connected to each other and to memory blocks through an interconnection network in a tightly coupled manner. These systems typically reside in one or more cabinets in a room rather than being widely dispersed geographically. They cannot easily be expanded in small increments, and they typically employ only one type of processor; hence they are suitable for environments with an array of specialized applications. They usually employ a fixed interconnection topology, thereby restricting users whose applications dictate a different, more efficient topology. Modern SIMD and MIMD systems address these shortfalls by using heterogeneous processing nodes and by scaling to a fairly large number of nodes. They have also merged the SIMD and MIMD concepts, as evidenced by the evolution of the Thinking Machines Connection Machine series: the earlier machines in the series were SIMDs, while the CM-5 operates in both modes. Computer networks are the most common multicomputer architectures today. Figure 14.1 shows the structure of a computer network, which is essentially a message-passing MIMD system except that the nodes are loosely coupled through the communication network. Each node is a host, an independent computer system, and a user can access the resources at the other nodes of the network. The important concept is that the user executes an application at the node he or she is connected to as far as possible, and submits jobs to other nodes that have the needed resources when those resources are not available at the local node. The Internet is the best example of a worldwide network. With the introduction of microprocessors, local networks have become popular. This system architecture provides a dedicated processor for local processing while offering the possibility of sharing the resources of other nodes. Various networks have been built using large-scale machines, minicomputers, and microcomputers, and a number of topologies for communication
networks exist. In a computer network, if a node fails, the resources at that node are no longer available. If, on the other hand, each node of the network is a general-purpose resource, processing can continue even when a node fails, although at a reduced rate. A distributed processing system is one in which the processing, data, and control are distributed: there are several general-purpose processors distributed geographically, no one processor is the master controller at any time, and there is no central database but rather a combination of sub-databases spread over the machines. Although an ideal distributed processing system of this kind does not exist to the author's knowledge, systems that adopt distributed processing concepts to various degrees do exist. The advantages of distributed processing systems are: 1. Efficient processing, since each node performs the processing most suited to it. 2. Dynamic reconfiguration of the system architecture to suit the processing loads. 3. Graceful degradation of the system in case of failure of a node, provided the needed redundancy is built in. The next step in the progression toward building powerful computing environments is grid computing. A grid is a network of computers that act as a single virtual computer system. Utilizing specialized scheduling software, grids identify resources and allocate tasks for processing on the fly; resource requests are processed wherever it is most convenient or wherever a particular function resides, with no centralized control. Grid computing exploits the underlying technologies of distributed computing, job scheduling, and workload management, which have been around for about a decade. Recent trends in hardware (commodity servers, blade servers, storage networks, high-speed networks, etc.) and software (Linux, Web services, open-source technologies, etc.) have contributed to making grid computing practical. The Hewlett-Packard Adaptive Enterprise initiative, the IBM on-demand computing effort, and the Sun Microsystems Network One framework are examples of commercial grid computing products. Section 14.1 provides details of networks. Section 14.2 addresses distributed processing, and Section 14.3 covers grid architectures", "url": "RV32ISPEC.pdf#segment476", "timestamp": "2023-10-17 20:16:43", "segment": "segment476", "image_urls": [], "Book": "computerorganization" }, { "section": "14.1 COMPUTER NETWORKS ", "content": "As mentioned earlier, a computer network is simply a system of interconnected
computing devices that share information and resources among each other. The term computing devices includes traditional personal computers (PCs) and laptops as well as personal digital assistants (PDAs), Web TVs, and smart phones; in this section we use the word computers to include all such devices. The connection among the networked computers is not necessarily via copper wire; fiber optics, microwaves, infrared, and communication satellites are also used. Computer networks involve several primary components: 1. Hosts: the computing devices connected to networks are called hosts or end systems. 2. Links: communication links are the paths that the transmitted information takes between the sending and receiving hosts. 3. Routers: a router takes information arriving on one of its incoming communication links and forwards it on one of its outgoing communication links. 4. Bridges: a bridge reduces the amount of traffic on a network by dividing the data into segments. 5. Protocols: protocols such as IP and TCP control the sending and receiving of information within a network", "url": "RV32ISPEC.pdf#segment477", "timestamp": "2023-10-17 20:16:43", "segment": "segment477", "image_urls": [], "Book": "computerorganization" }, { "section": "14.1.1 Network Architecture ", "content": "A computer network can be divided, based on its underlying architecture or design, into two main categories: client/server architecture and peer-to-peer architecture. In a client/server architecture, each computer on the network is either a client or a server. Servers are powerful computers that control the data and manage the network traffic. Clients, on the other hand, rely on servers for resources and to run applications. A typical client/server network arrangement is shown in Figure 14.2. In a peer-to-peer architecture, each host on the network has equivalent capabilities and responsibilities; there is no fixed division into clients and servers. A typical peer-to-peer network arrangement is shown in Figure 14.3. Peer-to-peer networks are generally simpler than client/server networks, but they usually offer lower performance under heavy loads", "url": "RV32ISPEC.pdf#segment478", "timestamp": "2023-10-17 20:16:43", "segment": "segment478", "image_urls": [], "Book": "computerorganization" }, { "section": "14.1.2 Network Reference Models ", "content": "A network reference model is a layered, abstract
description of communications and computer network protocol design, used to visualize and clearly describe the structure of a network. This section gives an overview of the two major network reference models: the OSI model and the TCP/IP model. The OSI model's protocols are seldom used anymore, but the model itself is still valid; the TCP/IP model, on the other hand, has commonly used and widely spread protocols (Tanenbaum, 2003). The two models are described next. OSI model: the Open System Interconnection (OSI) reference model describes how information is transferred from one end-user application to another end-user application through a network. The model is based on a proposal developed by the International Standards Organization (ISO) in the late 1970s as a first attempt at international standardization of the protocols used in the different layers. The model has seven layers, each with a specific functionality. Figure 14.4 shows the order of the layers, and Figure 14.5 illustrates how data is transferred through the different layers from a sending host through a router to a receiving host. The following is a brief description of each layer's functionality, starting from the top layer. Layer 7, the application layer, is the main interface through which the user interacts with the application and therefore with the network. It provides network services such as the Simple Mail Transfer Protocol (SMTP), the File Transfer Protocol (FTP), and Telnet that allow the end user to access information on the network. One widely used application protocol is the HyperText Transfer Protocol (HTTP), the basis of the World Wide Web (WWW). Layer 6, the presentation layer, transforms data to provide a standard interface for the application layer. It converts the local representation of data to its canonical form and vice versa; data compression, MIME encoding, data encryption, and similar manipulations of the presentation are done at this layer. Layer 5, the session layer, controls the connections (sessions) between computers. It defines the format of the data sent over the connections and establishes, maintains, and ends sessions across the network. It provides synchronization services by planning checkpoints in the data stream, so that if a session fails, only the data after the most recent checkpoint need be transmitted. This layer is also responsible for name identification, so that only the designated parties can join the session. Layer 4, the transport layer, subdivides user buffers into network-buffer-sized datagrams (segments) and enforces the desired transmission control. It controls the reliability of a given link through flow control,
segmentation/desegmentation, and error control. It also provides an error-checking mechanism to guarantee error-free data delivery, with neither losses nor duplications, and provides acknowledgments of successful transmissions and retransmission requests for packets that do not arrive error-free. The two best-known transport protocols are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Layer 3, the network layer, is responsible for routing, i.e., directing datagrams from one host to another, and for managing network problems such as packet switching and data congestion. It also translates logical network addresses and names into physical addresses so that routers can send the datagrams. If the source computer sends datagrams that are too large, the network layer may break them into smaller packets, and the network layer at the receiving host reassembles the fragmented datagram. The best-known example of a network-layer protocol is the Internet Protocol (IP). The Internet Protocol identifies each host with a 32-bit (4-byte) IP address, written as four dot-separated decimal numbers between 0 and 255, for example 123.145.67.240. The first 3 bytes of the IP address identify the network, and the remaining byte identifies the host on that network. Layer 2, the data-link layer, defines the format of data on the network and provides functional and procedural ways to transfer data from one network element to an adjacent network element, detecting and possibly correcting errors that may happen at the physical layer. The input data is broken by the sender node into data frames of a few hundred or a few thousand bytes, and each packet is transmitted sequentially to the receiver over the network; a data frame includes a checksum, the source and destination addresses, and the data. On the receiving and sending hosts, the data-link layer handles the data between the physical and network layers: the sending host's data-link layer turns packets from the network layer into raw bits to send to the physical layer, and the receiving end turns the raw data received from the physical layer into data frames to deliver to the network layer. Layer 1, the physical layer, defines the physical transmission medium. Its specifications include the voltage on cables and the pin layout, so as to successfully transmit a raw bit stream over the medium; all media are functionally equivalent, the key differences being convenience and the cost of installation and maintenance. TCP/IP model: even though the OSI model is widely used and often considered the standard, the TCP/IP model has been used by most UNIX workstation vendors. It has fewer, less strictly defined layers, which provide
an easier fit to real-world protocols. The TCP/IP model is also called the Internet model because it was originally created in the 1970s by the Defense Advanced Research Projects Agency (DARPA) for use in developing the Internet's protocols, and the structure of the Internet is still closely affected by the model. It is also called the DoD model, since it was designed for the Department of Defense. Unlike the OSI model, the TCP/IP model is not officially documented in any one document, nor is there agreement on the number or names of the model's layers; for this reason, different textbooks give different descriptions of the model. Earlier versions of the model described a simple four-layer scheme, as shown in Figure 14.6. In modern documentation, the model has evolved into a five-layer model by splitting the first layer (network access) into two layers (physical and data link); the internet (internetworking) layer has also been renamed the networking layer. The new model is shown in Figure 14.7. Obviously, the evolved version is closer to the OSI model than the original one, which is the reason some textbooks refer to this model as a hybrid of the OSI and TCP/IP models", "url": "RV32ISPEC.pdf#segment479", "timestamp": "2023-10-17 20:16:44", "segment": "segment479", "image_urls": [], "Book": "computerorganization" }, { "section": "14.1.3 Network Standardization ", "content": "Standards are considered one of the fundamentals of technology, and computer networking is no exception: communication among different computers would be rather impossible if the network vendors and suppliers had no way to agree on the important aspects. In the area of computer networks, standards are set by several organizations, including ISO, its American representative ANSI (American National Standards Institute), NIST (National Institute of Standards and Technology), and IEEE (Institute of Electrical and Electronics Engineers), the largest professional organization in the world. This section discusses IEEE 802, one of the IEEE's popular standards, which deals with computer networks. IEEE 802 standards: IEEE 802 refers to a family of IEEE standards dealing with local area networks (LANs) and metropolitan area networks (MANs), discussed later in this chapter. The number 802 was simply the next free number IEEE could assign, though 802 is sometimes associated with the date the first meeting was held: February 1980. The services and protocols specified in IEEE 802 are designated for the lower two layers (physical and data link) of the OSI reference model. As mentioned earlier, the IEEE 802
standards divide the data-link layer into two subgroups: logical link control (LLC), which manages data-link communication, defines the use of logical interface points called service access points (SAPs), and is responsible for sequencing frames and controlling frame traffic; and media access control (MAC), which provides shared access to the physical layer for the computers' NICs, recognizes frame addresses, checks frames for errors, and ensures the delivery of error-free data between two computers on the network. The IEEE 802 standard family is maintained by the IEEE 802 LAN/MAN Standards Committee (LMSC), a collection of working groups, listed in Table 14.1, that produce standards in different areas of networking. Some standards have become more successful than others: the Ethernet family, wireless LAN, Token Ring, bridging, and virtual bridged LANs are the most commonly used standards", "url": "RV32ISPEC.pdf#segment480", "timestamp": "2023-10-17 20:16:44", "segment": "segment480", "image_urls": [], "Book": "computerorganization" }, { "section": "14.1.4 Computer Network Types ", "content": "The most common types of computer networks, classified by scope or scale, are: 1. LAN (local area network), 2. WLAN (wireless local area network), 3. WAN (wide area network), 4. MAN (metropolitan area network), 5. CAN (campus area network), 6. DAN (desk area network), and 7. PAN (personal area network). This section examines the three most common network types: LAN, WAN, and WLAN. Local area network (LAN): LANs are privately owned networks limited to a relatively small spatial area such as a room, a single building, or an aircraft. They are commonly used to connect personal computers and workstations in schools, houses, and company offices to share resources (such as printers) and exchange information. Current LANs are most likely based on switched IEEE 802.3 Ethernet technology running at 10 to 1000 Mbps. In addition to its limited size and high transfer rate, the LAN's topology is one of its distinguishing characteristics; the various possible topologies for LANs are shown in Figure 14.8. Wide area network (WAN): in contrast to a LAN, a WAN covers a broad area, often a country or a continent, and uses communications circuits to connect the intermediate nodes. WANs are used to connect LANs and other types of networks together, so that users and computers in one location can communicate with users and computers in other locations. A WAN can be viewed as a geographically dispersed collection of LANs connected with routers
and switching circuits. The transmission rates are typically slower than in LANs and range from 2 to 625 Mbps, sometimes considerably more. There are two basic types of WANs: central and distributed. A central WAN consists of a server or group of servers in a central location to which the client computers connect, with the central servers providing most of the functionality of the network. A distributed WAN, on the other hand, consists of client and server computers distributed throughout the WAN, so the functionality of the network is distributed as well. Several WANs have been built, including public packet networks, large corporate networks, airline reservation networks, banking networks, and military networks. The largest and best-known example of a WAN is the Internet, which spans the earth. Wireless local area network (WLAN): a WLAN is a wireless LAN connecting two or more computers without using wires, which gives users the mobility to move around within a broad coverage area and still be connected to the network. A WLAN is usually based on the high-speed IEEE 802.11 (Wi-Fi) technology. This kind of network has recently become common in offices and homes owing to the increasing popularity of laptops", "url": "RV32ISPEC.pdf#segment481", "timestamp": "2023-10-17 20:16:44", "segment": "segment481", "image_urls": [], "Book": "computerorganization" }, { "section": "14.1.5 Internet and WWW ", "content": "When we see the term computer network, the first things that come to mind are the Internet and the World Wide Web (WWW), yet neither the Internet nor the WWW is a computer network: the Internet is not a single network but a network of networks, and the WWW is a distributed system that runs on top of the Internet. The terms Internet and WWW are often used interchangeably; however, they are not synonyms. The Internet is a worldwide, publicly accessible network of interconnected computer networks linked by copper wires, fiber-optic cables, wireless connections, and so on, while the Web is a collection of interconnected documents and resources linked by hyperlinks and URLs. The Internet is the outcome of the visionary thinking of people in the early 1960s who foresaw great potential in allowing computers to share information on research and development in scientific and military fields. J. C. R. Licklider of MIT proposed the idea of a global network of computers in his 1960 paper Man-Computer Symbiosis, and in 1962 he was appointed head of the U.S. Department of Defense DARPA information processing office, where he formed a research group to develop the idea. The Internet, then known as ARPANET, was first brought online in 1969. The first
ARPANET link was established to connect four major computers at universities in the southwestern United States: UCLA, the Stanford Research Institute, UCSB, and the University of Utah. The development of the TCP/IP architecture in the 1970s was a big jump in the advancement of the Internet: TCP/IP was adopted by the Defense Department in 1980, replacing the earlier Network Control Protocol (NCP), and was universally adopted by 1983. In the period from 1970 to 1980, the Internet and its predecessors had four main applications: e-mail, newsgroups, file transfer, and remote login. With these limited applications and the difficulty of use, the Internet was mostly limited to academic and government use. In the late 1980s a new application, the WWW, changed that and attracted millions of new, nonacademic users to the net. Tim Berners-Lee and others at the European laboratory for particle physics, popularly known as CERN, proposed a new protocol for information distribution based on hypertext, a system of embedding links in text. Along with the Mosaic browser, written by Marc Andreessen at the National Center for Supercomputing Applications in 1993, the WWW made it possible for a site to contain a number of pages of different information types, including text, pictures, sound, and video, with embedded links that transport the user to different pages with a simple click", "url": "RV32ISPEC.pdf#segment482", "timestamp": "2023-10-17 20:16:45", "segment": "segment482", "image_urls": [], "Book": "computerorganization" }, { "section": "14.2 DISTRIBUTED PROCESSING ", "content": "In a distributed computing system, a computational task is split into smaller chunks (subtasks) that are performed at the same time, independently, on the various processors of the system. The subtasks should be able to execute independently of each other; a task that cannot be split up this way is not suitable for running on a distributed system. Consider the computation of the column sums of an n x n matrix. This computation can be split into n tasks, where each task computes the sum of the elements in one column. The computational task is started on one of the processors of the distributed system, which creates and distributes the additional tasks; when the computation is complete, the results are gathered by one processor to generate the column sums. Each task corresponds to a program that computes the sum of the data elements in the column assigned to its memory space and, when run, returns its result. Several requirements must be met for this scheme to work. First, part of the job is to split the computation into n tasks and package the
appropriate column with each task. The tasks then need to be distributed to the various nodes, the computing nodes must send their results back, and one node must put the results together. Note also that the computation performed by each task here is minimal compared to the overhead needed to put the tasks together and coordinate their execution, so we may not gain much from this mode of operation; in this example we might package the computation of several columns into each task to balance the overhead. The simplest of the distributed computing models, client/server, consists of two main entities: a server and many clients. The server generates the work packages, which are passed on to the worker clients; the clients perform the task detailed in the work package and pass the completed work package back to the server. Figure 14.9 shows the protocol by which the clients and server execute the column-sum computation task, assuming n is a multiple of m and each subtask corresponds to computing the column sums of m columns. The server has three states: initialize (create and provide subtasks to the clients), receive results, and assemble. Each client has two states: request a task, and wait for and complete the task, after which it goes back to request mode. The server and clients are nodes on an interconnection network forming the distributed system, and the network transports the messages between the clients and the server. The ubiquitous Internet has become a transport medium for such systems: a Web server is permanently online and set up to distribute work packages, and in this environment the clients make an HTTP request to a URL and the server responds with a work package. In general, the client and server are not homogeneous, and we need to make sure the messages are translated appropriately at both ends to bring about the communication. In a general distributed system, the nodes are capable of both server and client modes of operation, and the mode can be changed dynamically", "url": "RV32ISPEC.pdf#segment483", "timestamp": "2023-10-17 20:16:45", "segment": "segment483", "image_urls": [], "Book": "computerorganization" }, { "section": "14.2.1 Processes and Threads ", "content": "A process has two main characteristics: it is a unit of resource ownership and a unit of dispatching. As a unit of resource ownership, a process is allocated a virtual address space to hold the process image and controls resources such as files and I/O devices. As a unit of dispatching, a process is an execution path through one or more programs; this execution has a state and a dispatching priority, and it may be interleaved with that of other processes. When these two characteristics are treated independently, the resource ownership characteristic is usually referred to as the process or task,
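The client/server column-sum scheme described in the previous section can be sketched in a few lines (a minimal single-machine sketch: a thread pool stands in for the worker clients, the function names are assumptions, and a real deployment would distribute the packages over a network):

```python
from concurrent.futures import ThreadPoolExecutor

def column_sums(columns):
    # Worker ("client") task: sum each column in its work package.
    return [sum(col) for col in columns]

def distributed_column_sums(matrix, m):
    # Server role: split the n columns into work packages of m columns
    # each, farm them out to the workers, and assemble the results.
    n = len(matrix[0])
    cols = [[row[j] for row in matrix] for j in range(n)]
    packages = [cols[i:i + m] for i in range(0, n, m)]
    with ThreadPoolExecutor() as pool:
        results = pool.map(column_sums, packages)
    return [s for package in results for s in package]

matrix = [[1, 2, 3, 4],
          [5, 6, 7, 8],
          [9, 10, 11, 12]]
print(distributed_column_sums(matrix, 2))  # [15, 18, 21, 24]
```

As the text notes, the per-task work here is tiny relative to the coordination overhead; packaging m > 1 columns per task is exactly the balancing measure the section suggests.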
while the unit of dispatching is usually referred to as a thread or lightweight process. Thus a thread also has an execution state (blocked, running, or ready), a thread context that is saved when it is not running, an execution stack, some static storage for local variables, and access to the memory address space and resources of its process, shared with all the other threads of that process. When one thread alters a non-private memory item, all the other threads of the process see the change, and a file opened by one thread is available to the others. If implemented correctly, threads have several advantages over multiple-process implementations: it takes less time to create a new thread than a process, since the newly created thread uses the process's address space; it takes less time to terminate a thread than a process; it takes less time to switch between two threads within a process; and the communication overhead is considerably reduced, since the threads share the resources of their process. Depending on the operating system's capabilities, four thread/process combinations are possible: a single process with a single thread (MS-DOS), a single process with multiple threads, multiple processes each with a single thread (UNIX), and multiple processes with multiple threads per process (Solaris). Figure 14.10 shows these combinations. A program with several activities that are not dependent upon one another can be implemented with multiple threads, one per activity, since one activity need not wait for another to complete; the application's responsiveness is thereby improved. If the application is designed to run on a multiprocessor system, the concurrency requirements of the application are translated into threads and processes, and the program depends only on the number of available processors: the performance of the application improves transparently as processors are added to the system. Thus applications with a high degree of parallelism run much faster when implemented with threads on a multiprocessor, and multithreaded programs are adaptive to variations in user demands where single-threaded programs are not. Multithreaded implementations also offer improved program structure, because they consist of multiple independent units of execution as compared to a single monolithic thread, and they use fewer system resources. Figure 14.11 shows the process models and the typical resources in each process implementation: a process has a process control block, a user address space, and user and kernel stacks, while the threads of a multithreaded process utilize the same
process control block and user address space, with each thread having its own thread control block and its own user and kernel stacks", "url": "RV32ISPEC.pdf#segment484", "timestamp": "2023-10-17 20:16:45", "segment": "segment484", "image_urls": [], "Book": "computerorganization" }, { "section": "14.2.2 Remote Procedure Call ", "content": "The remote procedure call (RPC) technique allows a program to cause a subroutine or procedure to execute in another address space, commonly on another computer, without the programmer explicitly coding the details of the remote interaction: the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. In object-oriented environments, RPC is referred to as remote method invocation (RMI). Several, often incompatible, technologies implement RPC; ONC RPC and DCE/RPC allow access between heterogeneous clients and servers. A number of standardized RPC systems are created with the use of an interface description language (IDL); the IDL files can then be used to generate code for the interface between client and server, with rpcgen being a common tool used for this purpose. RPC is a technique for constructing distributed, client/server-based applications. It extends the conventional local procedure calling environment so that the called procedure need not exist in the same address space as the calling procedure: the two processes may be on the same system, or they may be on different systems with a network connecting them. By using RPC, programmers of distributed applications avoid the details of the interface with the network, and since RPC is transport-independent, it isolates the application from the physical and logical elements of the data communication mechanisms and enables the application to use a variety of transports. Like a local function call, when an RPC is made the calling arguments are passed to the remote procedure and the caller waits for a response to be returned from the remote procedure. The client makes an RPC by sending a request to the server and waiting; the client's thread is blocked from processing until either a reply is received or the request times out. When the request arrives, the server calls a dispatch routine that performs the requested service and sends the reply to the client; once the RPC is completed, the client continues. Three steps are needed to develop an RPC application: 1. specifying the client/server communication protocol, 2. developing the client program, and 3. developing the server program. The communication protocol is implemented by generated stubs, and these stubs and the RPC libraries are linked
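The call pattern just described, in which a blocked client calls a procedure registered by a server through stubs, can be illustrated with Python's standard xmlrpc modules standing in for an RPC system (an illustration only, not rpcgen or ONC RPC; the procedure name add and the use of a background thread for the server are assumptions):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server program: register a procedure that clients may call remotely.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]  # port 0 asks the OS for a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client program: the call reads like a local procedure call; the proxy
# plays the role of the client stub, marshalling the arguments, sending
# the request, and blocking until the reply arrives.
client = ServerProxy(f"http://localhost:{port}")
print(client.add(2, 3))  # 5
```

Here XML over HTTP plays the role that XDR filters play in rpcgen-generated code: a machine-independent representation of the arguments and results.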
together. A protocol compiler such as rpcgen is used to define and generate the protocol. The protocol identifies the name of the service procedures and the data types of the parameters and return arguments; the protocol compiler reads the definition and automatically generates the client and server stubs. rpcgen uses its own language (RPC language, or RPCL) and typically generates four files: the client stub, the server stub, the XDR (external data representation) filters, and a header file needed by the XDR filters. XDR provides the data abstraction needed for machine-independent communication. The client application code must communicate via the procedures and data types specified in the protocol; the server side must register the procedures that may be called by the client and receive and return any data required for the processing. The client application then calls the remote procedure, passes the required data, and receives the returned data. The concept of RPC was first described in 1976 in RFC 707. Xerox implemented it in Courier in 1981. Sun's UNIX-based RPC (ONC RPC), first used as the basis of the Sun Network File System (NFS), is still widely used on several platforms. Apollo Computer's Network Computing System (NCS) RPC was used as the foundation of DCE/RPC in the Open Software Foundation's (OSF) Distributed Computing Environment (DCE), and Microsoft adopted DCE/RPC as the basis of Microsoft RPC (MSRPC), the mechanism on top of which DCOM was implemented. In the 1990s, Xerox PARC's ILU and the Object Management Group's CORBA (Common Object Request Broker Architecture) offered an RPC paradigm based on distributed objects with an inheritance mechanism, and Microsoft .NET Remoting offers RPC facilities for distributed systems implemented on the Windows platform. In Java, the Java RMI API provides functionality similar to the standard UNIX RPC methods. There are two common implementations of the Java RMI API. The original implementation depends on the Java virtual machine (JVM) class representation mechanisms and thus supports only making calls from one JVM to another; the protocol underlying this Java-only implementation is known as the Java Remote Method Protocol (JRMP). A version for non-JVM contexts, using CORBA, was later developed. Usage of the term RMI may denote solely the programming interface, or it may signify both the API and JRMP, whereas the term RMI-IIOP denotes the RMI interface delegating most of the functionality to the supporting CORBA implementation. The original RMI API was generalized somewhat to support different implementations, such as an HTTP transport. Additionally, work was done on
CORBA, adding a pass-by-value capability, to support the RMI interface; still, the RMI-IIOP and JRMP implementations do not have fully identical interfaces", "url": "RV32ISPEC.pdf#segment485", "timestamp": "2023-10-17 20:16:46", "segment": "segment485", "image_urls": [], "Book": "computerorganization" }, { "section": "14.2.3 Process Synchronization and Mutual Exclusion ", "content": "With multiple processors executing in a distributed system, it is essential that their activities be coordinated, to make sure they start and stop appropriately and that they access shared data items in an exclusive manner. For instance, a process may run to a certain point and then have to stop and wait for another process to complete certain computations, or a device or a location in memory may be shared by several processes and hence require exclusive access; the processes must coordinate among themselves to ensure that the access is fair and exclusive. A set of process synchronization techniques is available for coordinated execution among processes. As seen earlier in this book, in centralized systems it is common to enforce exclusive access to shared code: test-and-set locks, semaphores, and condition variables are used to bring about mutual exclusion. We now discuss popular algorithms used to achieve mutual exclusion in distributed systems. A unique identifier is used to represent each critical resource; this identifier, typically a name or a number recognizable by all processes, is passed as a parameter with all requests. Central server algorithm: the central server algorithm simulates a single-processor system. One of the processes in the distributed system is first elected as the coordinator (see Figure 14.12). When a process needs to enter a critical section, it sends a request, with the identification of the critical section, to the coordinator. If no process is currently in that critical section, the coordinator sends back a grant and marks that process as using the critical section. If another process has already claimed the critical section, the server does not reply, so the requesting process is blocked and enters a queue of processes requesting that critical section. When a process exits the critical section, it sends a release to the coordinator, and the coordinator then sends a grant to the next process in the queue. The algorithm is fair, in that processes' requests are granted in the order they are received, and it is easy to implement
and verify. Its major drawback is that the coordinator becomes a single point of failure; the centralized server can also become a bottleneck in the system. Distributed mutual exclusion: a distributed mutual exclusion algorithm was proposed by Ricart and Agrawala in 1981. In this algorithm, a process that wants to enter a critical section composes a message consisting of its identifier, the identifier of the critical section, and the current time (i.e., a timestamp), and sends the request to all the processes in the group. The requesting process then waits until all the processes in the group give it permission to enter the critical section. A process receiving the request takes one of three possible actions: 1. if it is not a contender for the critical section, it sends permission to the requesting process; 2. if the receiver is already in the critical section, it does not reply but adds the requester to a local queue of requests; 3. if the receiver is also a contender for the critical section and has sent its own request, it compares the timestamp of the received message with the one it sent, and the earliest timestamp wins. If the receiver is the loser, it sends permission to the requester; if the receiver is the winner, it does not reply but adds the requester to its queue. When a process exits the critical section, it sends permission to all the processes in its queue and deletes them from the queue. In Figure 14.13a, two processes A and C request access to the critical section: process A sends its request with timestamp 18, and process C sends its request with timestamp 22. Since process B is not interested in the critical section, it immediately sends back permission to both A and C. Process A is interested in the critical section and sees that the timestamp of process C is later than its own; thus process A wins and queues the request of C (Figure 14.13b). Process C, also interested in the critical section, compares its timestamp with that of the message received from process A and sees that it has lost; hence it sends permission to process A and continues to wait for all the processes to give it permission to enter the critical section (Figure 14.13c). As soon as process C sends its permission to process A, process A has received permissions from the entire group and can enter the critical section. When process A exits the critical section, it examines its queue of pending permissions, finds process C in the queue, and sends C permission to enter the critical section (Figure 14.13d). Process C has now received permission from all the processes and enters the critical section. This algorithm requires a total ordering of the events in the system, and the messages must be reliable. One drawback of the algorithm is that the single point of failure of the previous algorithm has been replaced by n points of failure.
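The receiver's three-way decision rule at the heart of the Ricart-Agrawala algorithm can be sketched as follows (a single-address-space simulation only, not a networked implementation; the state names idle, wanted, and in_cs and the dictionary representation are assumptions made for the sketch):

```python
def on_request(receiver, req_ts, requester):
    """Ricart-Agrawala receiver rule: send permission, or defer the
    requester by queueing it, depending on the receiver's state."""
    if receiver["state"] == "idle":          # not a contender
        return "permit"
    if receiver["state"] == "in_cs":         # already in the critical section
        receiver["queue"].append(requester)
        return "defer"
    # Both are contenders: the earlier timestamp wins the comparison.
    if req_ts < receiver["my_ts"]:
        return "permit"                      # receiver loses
    receiver["queue"].append(requester)      # receiver wins
    return "defer"

# The scenario of Figure 14.13: A (timestamp 18) and C (timestamp 22)
# both contend, while B is idle.
a = {"state": "wanted", "my_ts": 18, "queue": []}
b = {"state": "idle", "my_ts": None, "queue": []}
c = {"state": "wanted", "my_ts": 22, "queue": []}

print(on_request(b, 18, "A"))  # permit (B is not a contender)
print(on_request(c, 18, "A"))  # permit (C loses the timestamp comparison)
print(on_request(a, 22, "C"))  # defer  (A wins and queues C)
print(a["queue"])              # ['C']
```

On exit from the critical section, A would send permission to everything in its queue, which is how C eventually collects permissions from the whole group.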
This can be remedied by having every receiver always send a reply message, either yes or no, to each request; if a reply is lost, the sender times out and retries. Another drawback of the algorithm is its heavy message traffic. Token ring algorithm: this algorithm assumes that the group of processes has an inherent ordering, or that an ordering can be imposed on the group; for example, we can identify each process by its machine address and process ID and obtain an ordering from these. Using this imposed ordering, a logical ring is constructed in software. Each process is assigned a position in the ring, and each process must know its neighboring process in the ring. Figure 14.14 shows a ring with n nodes. The ring is initialized by giving a token to process 0. The token circulates around the ring: process k passes it to process (k + 1) mod n. When a process acquires the token, it checks to see whether it is attempting to enter the critical section; if so, it enters, does its work, exits, and then passes the token to its neighbor. A process not interested in entering the critical section simply passes the token along. Since only one process can hold the token at a time, and a process must have the token to work in the critical section, mutual exclusion is guaranteed. The order is also well-defined, so starvation cannot occur. The biggest drawback of this algorithm is that if the token is lost, it must be regenerated, and determining that the token has been lost is difficult", "url": "RV32ISPEC.pdf#segment486", "timestamp": "2023-10-17 20:16:46", "segment": "segment486", "image_urls": [], "Book": "computerorganization" }, { "section": "14.2.4 Election Algorithms ", "content": "From the earlier discussion we see that one process acts as coordinator. It may not matter which process takes this role, but the group must agree on one. We assume that the processes are exactly alike, with no distinguishing characteristics, but that each process can obtain a unique identifier (typically its machine address and process ID) and that every process knows every other process. Bully algorithm: the bully algorithm selects the process with the largest identifier as the coordinator, as follows. 1. When a process k detects that the coordinator is not responding to requests, it initiates an election in three steps: k sends an ELECTION message to all processes with higher numbers; if none of those processes responds, k takes over as coordinator; if one of the processes responds, process k's job is done. 2. When a process receives an ELECTION message from a lower-numbered process, it sends a reply (OK) back and holds an election itself, unless it is already holding one. 3. A process announces the result of an election by sending all processes a message telling them that it is the new coordinator. 4. When a crashed process recovers, it holds an election. Ring algorithm: the ring algorithm uses the ring arrangement of the token ring
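The bully election above reduces to a simple recursion once message passing is abstracted away. This is a sketch under strong assumptions: responsiveness is modeled as membership in an `alive` set, and the chain of "a higher process answers and takes over the election" is modeled as a recursive call rather than actual OK/ELECTION messages.

```python
def bully_election(pids, alive, initiator):
    """One bully-algorithm election.
    pids: all process ids (comparable); alive: the responsive subset.
    The initiator messages all higher-numbered processes; if any
    answers, that process takes over the election, so the highest
    alive id always ends up as coordinator."""
    higher = [p for p in pids if p > initiator and p in alive]
    if not higher:
        return initiator                 # nobody above answered: I win
    # the lowest responder takes over and recursively holds its own election
    return bully_election(pids, alive, min(higher))

# Processes 1..5; the old coordinator 5 has crashed and 2 notices.
assert bully_election([1, 2, 3, 4, 5], {1, 2, 3, 4}, 2) == 4
```

Whichever live process initiates, the result is the same highest live identifier, which is why simultaneous elections (step 2 of the algorithm) still converge on one coordinator.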
mutual exclusion algorithm, but does not employ a token. The processes are physically or logically ordered, so each one knows its successor. When a process detects failure of the coordinator, it constructs an ELECTION message containing its process ID and sends it to its successor. If the successor is down, it is skipped and the message is sent to the next party; this step is repeated until a running process is located. At each step the receiving process adds its own process ID to the list in the message. Eventually the message comes back to the process that started it; that process recognizes this by seeing its own ID in the list, changes the message type to COORDINATOR, and circulates the list to all processes, each process selecting the highest-numbered ID in the list to act as coordinator. When the COORDINATOR message has circulated fully, it is deleted. Multiple messages may circulate if multiple processes have detected the failure; although this creates additional overhead, it produces the same result", "url": "RV32ISPEC.pdf#segment487", "timestamp": "2023-10-17 20:16:46", "segment": "segment487", "image_urls": [], "Book": "computerorganization" }, { "section": "14.3 GRID COMPUTING ", "content": "There are several definitions of grid computing. Some architectures define clustered servers that share a common data pool as grids; others define grids as large, distributed, networked environments that make use of thousands of heterogeneous information systems and storage subsystems. Most definitions fall somewhere between these two ends of the spectrum. In general, a grid is a network architecture for connecting compute and storage resources, based on standards that allow heterogeneous systems and applications to share compute and storage resources transparently. A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities (Foster and Kesselman, 2004). The sharing of resources is not primarily file exchange but rather direct access to computers, software, data, and other resources, as required by a range of collaborative problem-solving and resource-brokering strategies emerging in industry, science, and engineering. This sharing is necessarily highly controlled, with resource providers and consumers defining clearly and carefully just what is shared, who is allowed to share, and the conditions under which sharing occurs (Foster et al., 2001). Thus, a grid system coordinates resources that are not subject to centralized control, uses standard, open, general-purpose protocols and interfaces, and delivers nontrivial qualities of service. A grid consists of networks
of computers and storage devices that pool and share their resources. The grid architecture provisions these resources to users and/or applications. The elements in the grid resource pool are considered virtual, i.e., they are activated when needed. Provisioning involves locating computing and storage resources and making them available to requestors (users/applications). There are several ways to provision resources: reconfiguring to meet a new workload, or repartitioning an existing server environment to provide additional storage; activating installed but inactive components (CPU, memory, storage) on a local server to provide local access to additional resources; accessing additional computing and storage resources that can be found and exploited over a LAN or across a WAN; and finding and exploiting additional computing and storage resources using a grid. The computing and storage resources are considered virtual and become real when activated. There are two approaches to exploiting virtualized resources in the local server environment: manual or programmatic reconfiguration, and activation of compute/storage/memory modules that are shipped with the systems but inactive (i.e., they are activated and paid for only when used). Virtualized resources can also be found over a network: for instance, clustered systems can provide access to additional CPU power when needed, utilizing a tightly coupled network, and grid software can find and exploit loosely coupled virtual resources over a LAN or WAN. In addition, virtualized services can be acquired externally using a utility computing model. Grids are generally deployed at the department level (departmental grids), across an enterprise (intergrids), or across multiple enterprises and multiple organizations (extragrids). Sun Microsystems' scalable virtual computing concept identifies three grid levels: 1. Cluster grid (departmental computing): the simplest grid deployment, giving maximum utilization of departmental resources, with resources allocated based on priorities. 2. Enterprise grid (enterprise computing): resources shared within the enterprise, with policies ensuring that computing on demand gives multiple groups seamless access to enterprise resources. 3. Global grid (Internet computing): resources shared over the Internet, giving a global view of distributed datasets; a growth path for enterprise grids. In 2001, the Global Grid Forum (GGF), the primary grid standards organization, put forward an architectural view of how grids and web services could
be joined, in an architecture called the Open Grid Services Architecture (OGSA). Since the GGF announced the OGSA view, strong progress has been made in the articulation of web services and grid standards. The following organizations are the leading standards organizations involved in articulating and implementing web services, service-oriented architectures (SOA), and grid architecture. The GGF is the primary standards-setting organization for grid computing; it works closely with OASIS (described later) as well as the Distributed Management Task Force to help build an interoperable web services management infrastructure for grid environments. The Organization for the Advancement of Structured Information Standards (OASIS) is active in setting standards for web services and works closely with the GGF to integrate web services standards with grid standards. The Distributed Management Task Force (DMTF) works with the GGF to help implement DMTF management standards, such as the DMTF Common Information Model (CIM) and Web-Based Enterprise Management (WBEM) standards, in grid architecture. The World Wide Web Consortium (W3C) is also active in setting web services standards, particularly standards that relate to XML. The Globus Alliance (formerly the Globus Project) has also been instrumental in grid standards, but from an implementation point of view: the Globus Alliance is a multi-institutional grid research and development organization that develops and implements basic grid technologies and builds a toolkit to help organizations implement grids, grid standards, and even OGSA-proposed standards", "url": "RV32ISPEC.pdf#segment488", "timestamp": "2023-10-17 20:16:47", "segment": "segment488", "image_urls": [], "Book": "computerorganization" }, { "section": "14.3.1 OGSA ", "content": "OGSA is an architectural vision of how web services and grid architectures can be combined. Four working groups help implement OGSA standards; these groups focus on defining clear programmatic interfaces, management interfaces, naming conventions, and directories. The specific OGSA working groups involved in these activities are the Open Grid Services Architecture Working Group (OGSA-WG), the Open Grid Services Infrastructure Working Group (OGSI-WG), the Open Grid Service Architecture Security Working Group (OGSA-SEC-WG), and the Database Access and Integration Services Working Group (DAIS-WG). The Open Grid
Services Infrastructure (OGSI) is an implementation/testbed of OGSA. Several standards are involved in building the SOA underlying grid architecture and supporting business process management. These standards form the basic architectural building blocks that allow applications and databases to execute service requests; moreover, the standards also make it possible to deploy business process management software that enables information systems executives to manage business process flow. Important grid and grid-related standards include: 1. Program-to-program communications: SOAP, WSDL, UDDI. 2. Data sharing: eXtensible Markup Language (XML). 3. Messaging: SOAP, WS-Addressing, MTOM (attachments). 4. Reliable messaging: WS-ReliableMessaging. 5. Managing workload: WS-Management. 6. Transaction handling: WS-Coordination, WS-AtomicTransaction, WS-BusinessActivity. 7. Managing resources: WSRF (Web Services Resource Framework). 8. Establishing security: WS-Security, WS-SecureConversation, WS-Trust, WS-Federation, Web Services Security Kerberos Binding. 9. Handling metadata: WSDL, UDDI, WS-Policy. 10. Building and integrating a web services architecture for a grid (see OGSA). 11. Orchestration: standards used to abstract business processes from application logic and data sources, with a set of rules that allow business processes to interact. 12. Overlaying business process flow: Business Process Engineering Language for Web Services (BPEL4WS). 13. Triggering process flow events: WS-Notification. Grids are used in a variety of scientific and commercial applications: aerospace and automotive (collaborative design and modeling), architecture (engineering and construction), electronics (design and testing), finance (stock portfolio analysis, risk management), life sciences (data mining, pharmaceuticals), manufacturing (inter/intra-team collaborative design, process management), and media/entertainment (digital animation). Famous scientific research grids include: 1. The SETI@home project: thousands of Internet PCs used to search for extraterrestrial life. 2. The Mersenne project: the Great Internet Mersenne Prime Search (GIMPS), a worldwide mathematics research project. 3. The NASA Information Power Grid: the IPG joins supercomputers and storage devices owned by participating organizations into
a single, seamless computing environment. The project will allow government researchers and industry to amass computing power and facilitate information exchange among NASA scientists. 4. The Oxford e-Science grid: Oxford University's e-Science project addresses scientific distributed global collaborations that require access to large data collections and large-scale computing resources, with high-performance visualization delivered back to the individual user scientists. 5. The Intel-United Devices cancer research project: this grid-based research project is designed to uncover new cancer drugs through the use of organizations and individuals willing to donate excess PC processing power; the excess power, applied to a grid infrastructure, is used to operate specialized software, with the research focusing on proteins that have been determined to be possible targets for cancer therapy. The largest grid effort currently under way is the TeraGrid scientific research project. TeraGrid was launched by the United States National Science Foundation in August 2001 as a multiyear effort to build the world's largest grid infrastructure for scientific computing. In 2004, TeraGrid will include 20 teraflops of computing power, almost one petabyte of data, and high-resolution visualization environments for modeling and simulation, with the supporting grid network expected to operate at 40 gigabits per second. Although the preponderance of compute grids serve scientific research and educational communities, there is strong growth of compute grids in commercial environments. At the end of 2003, the Office of Science of the US Department of Energy published a report called Facilities for the Future of Science: A 20-Year Outlook (located at http://www.er.doe.gov/sub/facilitiesforfuture/20yearoutlook-screen.pdf). The report details the US government's use of ultrascale computing and a high-speed grid approach with powerful servers and supercomputers to encourage discovery in the public sector. India has taken to building a national I-Grid (information grid). India's Centre for Development of Advanced Computing, makers of India's PARAM Padma supercomputers, sees its role as helping India carve a niche in the global arena of advanced information technology, expanding the frontiers of high-performance computing, utilizing the resulting intellectual property to benefit society, and converting this into an exciting
business opportunity by establishing a self-sustaining, wealth-creating operation. The United Kingdom has created e-Science centers that utilize data grids for scientific research projects. The National e-Science Centre coordinates projects with regional centers located in Belfast, Cambridge, Cardiff, London, Manchester, Newcastle, Oxford, and Southampton, as well as sites at Daresbury and Rutherford Appleton. These centers provide facilities to scientists and researchers who wish to collaborate on large, data-intensive projects. This project is one of dozens of governmental grid projects within the UK; numerous examples of government grids can be found at http://www.gridscenter.org/news/newsdeployment.asp", "url": "RV32ISPEC.pdf#segment489", "timestamp": "2023-10-17 20:16:47", "segment": "segment489", "image_urls": [], "Book": "computerorganization" }, { "section": "14.4 SUMMARY ", "content": "This chapter provided a brief introduction to three architecture types: computer networks, distributed systems, and grids. The Internet has made these architectures cost-effective; in fact, one can say that networked computer architectures have allowed us to build complex systems. The open and scalable features of these architectures have also contributed to increased requirements for security and reliability", "url": "RV32ISPEC.pdf#segment490", "timestamp": "2023-10-17 20:16:47", "segment": "segment490", "image_urls": [], "Book": "computerorganization" }, { "section": "CHAPTER 15 ", "content": "Performance Evaluation. Previous chapters have highlighted the parameters for performance evaluation of various components and architectural features of computer systems. Performance evaluation and estimation is necessary either when acquiring a new system or when evaluating enhancements made (or to be made) to an existing system. Ideally, it is best to develop the target application on the system being evaluated to determine its performance; the next best thing is to simulate the target application on an existing system. In practice, these modes of evaluation are not always possible and may not prove cost-effective. Analytical methods of evaluating performance and determining costs are then necessary, but as systems get complex, analytical methods become unwieldy, and benchmarking is used in practice to evaluate complex systems. This chapter introduces common
analytical techniques and benchmarking, and also provides a brief introduction to program optimization techniques. A major aim of system design is to maximize the performance-to-cost ratio of the target system, i.e., maximizing performance while minimizing cost. There are three major aspects of computer system performance from the maximization point of view: 1. Processor bandwidth: the fastest execution of instructions. 2. Memory bandwidth: the fastest instruction/data retrieval and storage. 3. I/O bandwidth: the maximum throughput. An application is said to be processor bound, memory bound, or I/O bound depending on which of these aspects limits its performance. The total time to execute a program is the product of the number of instructions in the program, the number of cycles (major or minor, in the context of ASC) per instruction, and the time per cycle. Processor design contributes to the last two items; program design contributes to the first. In addition, we have seen features such as pipelining, superscalar execution, and branch prediction contributing to the enhancement of program performance. Memory bandwidth depends on the cache and virtual memory structures, shared versus message-passing structures, and the speeds of the memory components; the architecture and the program should allow the best utilization of cache and virtual memory schemes. I/O bandwidth is a function of device speeds, bus and interconnect speeds, and control structures such as DMA and peripheral processors and their associated protocols. We rarely have the luxury of building a computer system that optimizes performance for a particular application, but a holistic approach can be used. As mentioned in the previous chapter, suppose the application allows the development of processing algorithms with a degree of parallelism A (the degree of parallelism is simply the number of computations that can be executed concurrently), the language used to code the algorithm allows the representation of algorithms with a degree of parallelism L, compilers produce object code that retains a degree of parallelism C, and the hardware structure of the machine has a degree of parallelism H. For the processing to be most efficient, the following relation must be satisfied: H ≥ C ≥ L ≥ A. (15.1) The objective is to minimize the computation time of the application at hand, and hence the processing structure that offers the least computation time is the most efficient one. For an architecture to be efficient, the development of application algorithms, programming languages, compilers, operating systems, and hardware structures must proceed together. This mode of development is possible for special-purpose applications; in the development of general-purpose architectures, however, the application characteristics cannot easily be taken into account, and the development of these components should proceed concurrently as far as possible. The development of algorithms with a high degree of parallelism is application dependent and is basically a human endeavor. A great deal of research has been devoted to developing languages that contain parallel-processing constructs, thereby enabling the coding of parallel algorithms. Compilers for parallel-processing languages retain the parallelism expressed in the source code through the compilation process, thus producing parallel object code; in addition, compilers that extract the parallelism in a serial program (thus producing parallel object code) have been developed. The progress in hardware technology has yielded a large number of hardware structures that can be used for executing parallel code. According to Amdahl's law, system speedup is maximized when the performance of the most frequently used component of the system is maximized. Stated another way, the overall system speedup is Speedup = 1 / ((1 − f) + f/k), (15.2) where f represents the fraction of the work performed by the enhanced component and k is the speedup of the enhanced component", "url": "RV32ISPEC.pdf#segment491", "timestamp": "2023-10-17 20:16:47", "segment": "segment491", "image_urls": [], "Book": "computerorganization" }, { "section": "15.1 PERFORMANCE MEASURES ", "content": "Several measures of performance are used in the evaluation of computer systems. The most common ones are millions of instructions per second (MIPS), millions of operations per second (MOPS), millions of floating-point operations per second (MFLOPS, or megaflops), billions of floating-point operations per second (GFLOPS, or gigaflops), and millions of logical inferences per second (MLIPS). Machines capable of a trillion floating-point operations per second (teraflops) are now available. The measure used depends on the type of operations one is interested in for the particular application for which the machine is being evaluated. We use the MIPS measure to illustrate various aspects of performance measurement in this section; the concepts are equally valid for the other performance measures. Example 15.1: Consider the SHL instruction of ASC (Chapter 6). Its instruction cycle requires four minor cycles. Assuming a minor cycle corresponds to one nanosecond, ASC can complete 1/(4 × 10⁻⁹) SHL instructions per second (IPS), i.e., 0.25 × 10⁹ IPS = 0.25 × 10³ MIPS = 250 MIPS.
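The rating arithmetic of Example 15.1 is easy to sketch. The helper below folds the unit conversions into a single expression (10⁹ instructions per second per nanosecond-cycle, divided by 10⁶ for "millions"); the function name and its default cycle time are conveniences of this sketch, not notation from the text.

```python
def mips(cycles_per_instr, cycle_time_ns=1.0):
    """MIPS rating for a stream averaging cycles_per_instr cycles
    at cycle_time_ns per cycle: (10^9 / (cycles * ns)) / 10^6."""
    return 1000.0 / (cycles_per_instr * cycle_time_ns)

# Example 15.1: SHL takes 4 one-nanosecond cycles, giving the 250 MIPS
# peak rate quoted above.
assert mips(4) == 250.0
```

Because the rating is a rate, averaging it across instructions with different cycle counts is exactly where the arithmetic, weighted, geometric, and harmonic means discussed next start to disagree.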
On the other hand, the LDA (load indirect) instruction of ASC requires 12 cycles; if only that instruction is executed, ASC has a MIPS rating of 250/3 = 83.3 MIPS. A salesman trying to sell ASC would have a tendency to quote the MIPS rating as 250, while a critical evaluator would use the 83.3 MIPS rating. Neither rating is useful in practice to evaluate the machine, since any application will use several of the instructions of ASC. Example 15.2: Let us suppose that an application uses four instructions whose instruction cycles take 8, 4, 12, and 12 cycles. The average (arithmetic mean) speed is (8 + 4 + 12 + 12)/4 = 9 cycles, resulting in a MIPS rating of 1/(9 × 10⁻⁹) instructions per second = 111.11 MIPS. We can extend this analysis to include all the instructions of ASC rather than just the four instructions used above. A still more realistic measure accounts for the frequency of instruction usage (i.e., the instruction mix), which depends on the application characteristics; a rating based on a mix of operations representative of their occurrence in the application is thus better. Example 15.3: Consider the following instruction mix. The weighted average (weighted arithmetic mean) instruction speed is 8 × 0.3 + 4 × 0.2 + 12 × 0.3 + 12 × 0.2 = 9.2 cycles. Assuming one nanosecond per cycle, the machine performs 1/(9.2 × 10⁻⁹) instructions per second = 108.69 MIPS. This rating is more representative of the machine's performance. The maximum rating of 250 MIPS was computed using the speed of execution (4 cycles) of the fastest instruction (shift); thus the performance rating could be either a peak rate (i.e., the MIPS rating the CPU cannot exceed) or a more realistic average or sustained rate. In addition, a comparative rating that compares the average rate of the machine with those of well-known machines (e.g., IBM MIPS, VAX MIPS, etc.) is also used. A better measure than the arithmetic mean for comparing relative performances is the geometric mean. The geometric mean of a set of positive data is defined as the nth root of the product of the members of the set, where n is the number of members: the geometric mean of the set a1, a2, ..., an is (a1 · a2 ··· an)^(1/n). The arithmetic mean is relevant any time several quantities add together to produce a total; it answers the question: if all the quantities had the same value, what would that value have to be in order to achieve the same total? The geometric mean, in contrast, is relevant any time several quantities multiply together to produce a product; it answers the question: if all the quantities had the same value, what would that value have to be in order to achieve the same product? The geometric mean of a data set is always less than or equal to the set's arithmetic mean; the two means are equal only when all members of the data set are equal. The geometric mean is useful for determining average factors. For example, if a stock rose 10% in the first year and 20% in the second year
and fell 15% in the third year, we compute the geometric mean of the factors 1.10, 1.20, and 0.85 as (1.10 × 1.20 × 0.85)^(1/3) = 1.0391, and we conclude that the stock rose 3.91% per year on average. Using the arithmetic mean for this calculation would be incorrect, since the data are multiplicative. Example 15.4: Assume that the ASC instruction speeds were improved, resulting in two versions, ASC2 and ASC3. The following table shows the cycles needed for the four instructions on the three versions of the machine (the original being ASC1), with speeds normalized with respect to ASC1; the last row shows the geometric means for the three machines. The ratio of the geometric means is an indication of relative performance: ASC2/ASC1 = 0.806, ASC3/ASC1 = 0.66, and ASC3/ASC2 = 0.66/0.806 = 0.819. These ratios remain consistent no matter which of the machines is used as the reference. Another measure commonly used is the harmonic mean: the harmonic mean of a group of terms is the number of terms divided by the sum of the terms' reciprocals. The harmonic mean H of the positive real numbers a1, ..., an is defined as H = n / (1/a1 + 1/a2 + ··· + 1/an). (15.4) This measure is useful for rates, such as instructions per second, and in environments with known workloads. In certain situations the harmonic mean provides the correct notion of average. For instance, if one travels at 40 km/h for half the distance of a trip and at 60 km/h for the other half, the average speed for the trip is given by the harmonic mean of 40 and 60, which is 48: the total time for the trip is the same as if one had traveled the entire trip at 48 km/h. Note, however, that if one had traveled half the time at one speed and half the time at the other, the arithmetic mean, 50 km/h, would provide the correct notion of average. The harmonic mean is never larger than the arithmetic or geometric means; it is equivalent to a weighted arithmetic mean in which each value's weight is the reciprocal of the value. Since the harmonic mean of a list of numbers tends strongly toward the least elements of the list, it tends (compared to the arithmetic mean) to mitigate the impact of large outliers and aggravate the impact of small ones. Example 15.5: Consider an application with four modules, where the performance of each module has been monitored. The following table lists the average number of instructions executed per second for each module. The modules were then enhanced to improve their performance; the new instructions-per-second ratings are also shown. Note that the performance of module 4 actually worsened. The harmonic means for both versions are shown in the table; according to the harmonic mean analysis, the overall performance improved by 5%", "url": "RV32ISPEC.pdf#segment492", "timestamp": "2023-10-17 20:16:48", "segment": "segment492", "image_urls": [], "Book": "computerorganization" }, { "section": "15.2 COST FACTOR ", "content": "The unit cost of a machine is usually expressed in dollars per MIPS (or MFLOPS). It is important to note that cost comparison should be performed on architectures of approximately the same performance level. For example, if the application at hand requires a performance level of n MIPS, it is usually overkill to select an architecture that delivers MIPS far greater than n, even though the unit cost of the latter system may be lower. On the other hand, if an architecture offers n/x MIPS at a lower unit cost, it would be better for the application at hand if it is possible to attain n MIPS using x such systems at a lower total cost, compared with an architecture delivering n MIPS; of course, only if multiple units of the (n/x)-MIPS machine can be configured to deliver n MIPS is it a candidate for comparison. This is obviously a simplification, since configuring multiple machines to form a system typically requires other considerations, such as partitioning the application into subtasks, reprogramming the sequential application into parallel form, and the overhead introduced by communication between the multiple processors; these considerations are discussed later in the book. The cost of a computer system is a composite of its software and hardware costs. The cost of hardware has fallen rapidly as hardware technology progressed, while software costs have been steadily rising as software complexity grew, despite the availability of sophisticated software engineering tools. If this trend continues, the cost of software will dictate the cost of the system, and the hardware will come essentially free with purchased software. The cost of either hardware or software depends on two factors: the up-front development cost and the per-unit manufacturing cost. The development cost is amortized over the life of the system and distributed over each unit produced; thus, as the number of systems produced increases, the development component of the cost decreases. The production cost characteristics of hardware and software differ. The production of each unit of hardware requires assembly and testing, and hence the cost of these operations never reaches zero, even when the cost of hardware components becomes negligible. In the case of software, if we assume that no changes to the software are developed (resulting in zero maintenance costs), the production cost becomes almost zero as the number of units produced grows large, since producing a copy of the software system and testing it (to make sure it is an accurate copy of the original, by bit-by-bit comparison) is not an expensive operation. However, this
assumption of zero maintenance costs is not realistic, since a software system always undergoes changes and enhancements are requested by users on a continual basis. Consider the effects of progress in hardware and software technologies on the cost of a system. A technology provides a certain level of performance; as performance requirements increase, we exhaust the capability of the technology and hence move to a new technology. This assumes that progress in technology is user driven. In practice, technology has also been driving user requirements, in the sense that progress in technology provides systems with higher performance at lower cost, thereby making older systems obsolete faster. This means that the life spans of systems are getting shorter, bringing an additional burden of recuperating development costs over a shorter period of time. Cost considerations thus lead to the following guideline: the system architect should make the architecture as general-purpose as possible in order to make it suitable for a large number of applications, thus increasing the number of units sold and reducing the cost per unit", "url": "RV32ISPEC.pdf#segment493", "timestamp": "2023-10-17 20:16:48", "segment": "segment493", "image_urls": [], "Book": "computerorganization" }, { "section": "15.3 BENCHMARKS ", "content": "The analytical techniques used in estimating performance are approximations; as the complexity of the system increases, these techniques become unwieldy. A practical method of estimating performance in such cases is using benchmarks. Benchmarks are standardized batteries of programs run on a machine to estimate its performance. The results of running a benchmark on a given machine can then be compared with those on a known or standard machine, using criteria such as CPU and memory utilization, throughput, and device utilization. Benchmarks are useful in evaluating hardware as well as software, and single-processor as well as multiprocessor systems. They are also useful in comparing the performance of a system before and after certain changes are made. As a high-level language host, a computer architecture should execute efficiently those features of a programming language that are most frequently used in actual programs; that ability is often measured by benchmarks, which are considered representative of classes of applications envisioned for the architecture. We now provide a brief description of common benchmarks. Real-world/application benchmarks use system- and user-level software code drawn from
real algorithms or full applications. They are commonly used in system-level benchmarking and usually have large code and data storage requirements. Derived benchmarks (also called algorithm-based benchmarks) extract the key algorithms from, and generate realistic data sets of, real-world applications; they are used for debugging, internal engineering, and competitive analysis. Single-processor benchmarks are low-level benchmarks used to measure performance parameters that characterize the basic architecture of the computer. These hardware/compiler parameters predict the timing performance of more complex kernels and applications; they are used to measure theoretical parameters that describe the overhead or a potential bottleneck, or the properties of some part of the hardware. Kernel benchmarks are code fragments extracted from real programs, where the code fragment is responsible for most of the execution time of the program; they have the advantages of small code size and long execution time. Examples are Linpack and the Lawrence Livermore Loops. Linpack (a linear algebra package) measures the MFLOPS rating of the machine in solving a system of linear equations using double-precision arithmetic in a Fortran environment with Basic Linear Algebra Subroutines (BLAS); the benchmark was developed at Argonne National Laboratory in 1984 to evaluate the performance of supercomputers, and C and Java versions of the benchmark suite are now available. The Lawrence Livermore Loops measure the MFLOPS rating in executing 24 common Fortran loops operating on data sets with 1001 or fewer elements. Local benchmarks are programs that are site-specific; they include in-house applications and are not widely available. Since the user is interested in the performance of the machine on his or her own applications, local benchmarks are the best means of evaluation. Partial benchmarks are partial traces of programs; they are in general difficult to reproduce, and since the benchmarked portion of the trace is unknown, they are of limited use. UNIX utility and application benchmarks are programs widely employed by the UNIX user community. The SPEC (Systems/Standard Performance Evaluation Cooperative) benchmark suite belongs to this category; it consists of 10 scenarios taken from a variety of science and engineering applications. The suite, developed by a consortium of 60 computer vendors for the evaluation of workstation performance, provides its performance rating in SPECmarks. Three main groups work on distinct aspects of performance evaluation: the Open Systems Group
(OSG) concentrates on desktop, workstation, and file server environments; the Graphics Performance Characterization Group (GPCG) concentrates on graphic-intensive and multimedia systems; and the High-Performance Computing Group (HPG) concentrates on multiprocessor systems and supercomputers. The groups select applications that represent typical workloads in the corresponding environments; the I/O and other non-CPU-intensive parts are removed from each application to obtain its kernel, and a composite of such kernels forms the benchmark suite. Synthetic benchmarks are small programs constructed specially for benchmarking: they do not perform any useful computation, but statistically approximate the average characteristics of real programs. Examples are the Whetstone and Dhrystone benchmarks. The Whetstone benchmark in its original form was a benchmark set published in 1976 and developed in ALGOL 60; Whetstone reflects mostly numerical computing, using a substantial amount of floating-point arithmetic, and is now chiefly used in its Fortran version. Its main characteristics are: a high degree of floating-point data and operations, since the benchmark is meant to represent numeric programs; a high percentage of execution time spent in mathematical library functions; and little use of local variables, since the issue of local versus global variables was hardly discussed when the benchmark was developed. Instead of local variables, a large number of global variables are used; therefore a compiler in which heavily used global variables are kept as register variables (as in C) will boost Whetstone performance. Since the benchmark consists of nine small loops, Whetstone has extremely high code locality, and a near-100% hit rate can be expected even for fairly small instruction caches. The distribution of the different statement types in this benchmark was determined in 1970, so the benchmark cannot be expected to reflect the features of modern programming languages (e.g., record and pointer data types); also, in more recent publications the interaction of programming languages and architecture is examined through more subtle aspects of program behavior (e.g., the locality of data references, local versus global) that were not explicitly considered in the earlier studies. The Dhrystone benchmark reflects early efforts in dealing with the performance of different computer architectures, where performance was usually measured using a collection of programs that happened to be available to the user. However, following the pioneering work of Knuth in the
early 1970s, an increasing number of publications provided statistical data on the actual usage of programming language features, and the Dhrystone benchmark program set is based on these statistics, particularly for systems programming. The benchmark suite contains no measurable quantity of floating-point operations. A considerable percentage of its execution time is spent in string functions (with C compilers, this number goes up to 40%). Unlike Whetstone, Dhrystone contains hardly any loops within the main measurement loop; therefore, for processors with small instruction caches, almost all memory accesses are cache misses, but as the cache becomes larger, all accesses become cache hits. Only a small amount of global data is manipulated, and the data size cannot be scaled. Parallel benchmarks are for evaluating parallel computer architectures. In 1985, a workshop at the National Institute of Standards and Technology (NIST) recommended the following suite for parallel computers: Linpack, Whetstone, Dhrystone, Livermore Loops, Fermi National Accelerator Laboratory codes used in equipment procurement, the NASA-Ames benchmark of 12 Fortran subroutines, the John Rice numerical problem set, and the Raul Mendez benchmarks for Japanese machines. The Stanford Small Programs were a concurrent development: with the development of the first RISC systems, John Hennessy and Peter Nye at Stanford's Computer Systems Laboratory collected a set of small C programs that became popular as the basis for the first comparisons of RISC and CISC processors. They were collected into one C program containing eight integer programs (permutations, Towers of Hanoi, eight queens, integer matrix multiplication, puzzle, quicksort, bubble sort, tree sort) and two floating-point programs (matrix multiplication and fast Fourier transform). The PERFECT (Performance Evaluation for Cost-Effective Transformations) benchmark suite consists of 13 Fortran subroutines spanning four application areas (signal processing, engineering design, physical and chemical modeling, fluid dynamics); the suite consists of complete applications with the input/output portions removed, and hence constitutes significant measures of performance. SLALOM (Scalable Language-Independent Ames Laboratory One-Minute Measurement) is designed to measure parallel computer performance as a function of problem size. The benchmark always runs in 1 minute; the speed of the system under test is determined by the
amount of computation performed in that 1 minute. Transaction processing benchmarks: transaction processing servers perform large numbers of concurrent, short-duration activities (transactions) that typically involve disk I/O and communications. IBM introduced the benchmark TP1 in the 1980s for evaluating transaction processing on mainframes, and several benchmarks augmenting TP1 were proposed later. The Transaction Processing Performance Council (TPC), a consortium of about 30 enterprise system vendors, has released several versions of its TPC benchmark suites. Many other benchmark suites are in use and development. It is important to note that benchmarks provide only a broad performance guideline; it is the responsibility of the user to select the benchmark that comes closest to his or her application and to evaluate the machine based on the scenarios expected in the application for which the machine is being evaluated", "url": "RV32ISPEC.pdf#segment494", "timestamp": "2023-10-17 20:16:49", "segment": "segment494", "image_urls": [], "Book": "computerorganization" }, { "section": "15.4 CODE OPTIMIZATION ", "content": "As mentioned earlier, the first step in optimizing the performance of a program is to select the appropriate algorithm; the algorithm is then coded using an appropriate language. Although optimizing compilers exist, application developers (especially on supercomputer systems) spend an enormous amount of time tweaking the code produced by compilers to optimize performance; in fact, this code tweaking also extends back to the source code, after observing the code produced by the compilers. This section provides a brief description of common code-tweaking techniques. Scalar renaming: typically, programmers use a scalar variable repeatedly, as in the following loop. Example 15.6: For I = 1, N: X = B(I) + 2; A(I) = X; X = C(I) + D(I); P(I) = X * 2; Endfor. If the second instance of X is renamed as shown next, the two code segments become data independent; data-independent code segments can be handled as two loops, with each loop running on a separate processor concurrently. For I = 1, N: X = B(I) + 2; A(I) = X; XX = C(I) + D(I); P(I) = XX * 2; Endfor. Scalar expansion: in the following code segment, X is assigned a value that is used in a subsequent statement. Example 15.7: For I = 1, N: X = B(I) + 2; A(I) = X; Endfor. If the scalar X is expanded into a vector as shown next, the two statements are made independent, thus allowing better vectorization: For I = 1, N: X(I) = B(I) + 2; A(I) = X(I); Endfor. Loop unrolling: when a loop has a small vector length, it is more efficient to eliminate the loop construct and expand
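The payoff of scalar renaming can be sketched in Python. The array names follow the renaming example above, but the arithmetic operators are an assumption of this sketch (the garbled source does not show them); the point is only that, after renaming, the loop body splits into two data-independent segments that could run concurrently on separate processors.

```python
# Segment 1 uses only X and B; segment 2 uses only XX, C, D, and P.
# Neither reads anything the other writes, so they are independent.

def segment1(b):
    # X = B(I) + 2 ; A(I) = X
    return [bi + 2 for bi in b]                        # the A vector

def segment2(c, d):
    # XX = C(I) + D(I) ; P(I) = XX * 2
    return [(ci + di) * 2 for ci, di in zip(c, d)]     # the P vector

b, c, d = [1, 2], [3, 4], [5, 6]
assert segment1(b) == [3, 4]
assert segment2(c, d) == [16, 20]
```

Before renaming, both segments read and wrote the single scalar X, which imposed a serial order on the loop body; the rename removes that false dependence without changing either result.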
Loop unrolling. If a loop has a small vector length, it is more efficient to eliminate the loop construct and expand out the iterations of the loop.

Example 15.8
The loop
for i = 1 to 3
    x(i) = a(i) + b(i)
endfor
can be unrolled into the following:
x(1) = a(1) + b(1)
x(2) = a(2) + b(2)
x(3) = a(3) + b(3)
This eliminates the looping overhead and allows the three computations to be performed independently. This is the case when the computations in each iteration are not dependent; if they are, the computations must first be partitioned into non-dependent sets.

Loop fusion (jamming). Two loops that are executed the same number of times using the same indices can be combined into one loop.

Example 15.9
Consider the following code segment:
for i = 1 to n
    x(i) = a(i) * z(i)
endfor
for i = 1 to n
    p(i) = x(i) + a(i)
endfor
Note that each loop would be equivalent to a vector instruction. Since x is stored back to memory by the first instruction and retrieved again by the second, fusing the two loops as shown below reduces the memory traffic:
for i = 1 to n
    x(i) = a(i) * z(i)
    p(i) = x(i) + a(i)
endfor
This assumes that enough vector registers are available for the processor to retain x. If the processor allows chaining, the loop can be reduced further:
for i = 1 to n
    p(i) = a(i) * z(i) + a(i)
endfor

Loop distribution. If a loop body contains dependent code (i.e., statements that are data dependent) as well as non-dependent code, one way to minimize the effect of the dependency is to break the loop into two: one containing the dependent code and one containing the non-dependent code.

Force maximum work into the inner loop. Since maximizing the vector length increases the speed of execution, the inner loop should always be made the longest. Dependency conflicts can be avoided by shifting dependencies in the inner loop out to the outer loop where possible.

Subprogram inlining. For small subprograms, the overhead of the control transfer can take longer than the actual subprogram execution. Calling a subprogram might consume 10-15 clock cycles with no arguments passed, and even one argument might nearly double that overhead. In such cases it is better to move the subprogram code into the calling program.

15.5 SUMMARY

The performance parameters of the various components and architectural features were discussed in the previous chapters, where various performance-enhancement techniques were also described. This chapter provided a brief introduction to common analytical techniques for performance evaluation, the cost factor, and common benchmarks. In addition to performance and cost, other factors should be considered in evaluating architectures: the generality of the architecture (how wide a range of applications it is suited for), its ease of use, and its expandability (scalability). One feature
receiving considerable attention is the openness of the architecture: an architecture is said to be open if its designers publish the architecture details so that others can easily integrate standard hardware and software systems to it.

CHAPTER 1 Introduction to Computer Systems

The technological advances witnessed in the computer industry are the result of a long chain of immensely successful efforts made by two major forces: academia, represented by university research centers, and industry, represented by computer companies. However, it is fair to say that the current technological advances in the computer industry owe their inception to university research centers. In order to appreciate the current technological advances in the computer industry, one has to trace back the history of computers and their development. The objective of such a historical review is to understand the factors affecting computing as we know it today, and hopefully to forecast the future of computation.

The great majority of the computers in daily use are known as general-purpose machines. These are machines that are built with no specific application in mind, but rather are capable of performing computation needed by a diversity of applications. These machines are to be distinguished from those built to serve (are tailored to) specific applications; the latter are known as special-purpose machines. A brief historical background is given in Section 1.1.

Computer systems have conventionally been defined through their interfaces at a number of layered abstraction levels, each providing functional support to its predecessor. Included among the levels are the application programs, the high-level languages, and the set of machine instructions. Based on the interface between different levels of the system, a number of computer architectures can be defined. The interface between the application programs and a high-level language is referred to as the language architecture. The instruction set architecture defines the interface between the basic machine instruction set and the runtime and I/O control. A different definition of computer architecture is built on four basic viewpoints: the structure, the organization, the implementation, and the performance. In this definition, the structure defines the interconnection of the various hardware components, the organization defines the dynamic interplay and management of the various components,
the implementation defines the detailed design of the hardware components, and the performance specifies the behavior of the computer system. Architectural development and styles are covered in Section 1.2, and a number of technological developments are presented in Section 1.3. Our discussion in this chapter concludes with a detailed coverage of CPU performance measures.

1.1. HISTORICAL BACKGROUND

In this section we would like to provide a historical background on the evolution of the cornerstone ideas in the computing industry. We should emphasize at the outset that the effort to build computers did not originate at one single place. There is every reason for us to believe that attempts to build the first computer existed in different geographically distributed places. We also firmly believe that building a computer requires teamwork. Therefore, when people attribute a machine to the name of a single researcher, what they actually mean is that such a researcher may have led the team that introduced the machine. We therefore see it as appropriate to mention the machine and the place where it was first introduced without linking that to a specific name. We believe that such an approach is fair and that it eliminates any controversy about researchers' names.

It is probably fair to say that the first program-controlled (mechanical) computer ever built was the Z1 (1938). This was followed in 1939 by the Z2, the first operational program-controlled computer with fixed-point arithmetic. However, the first recorded university-based attempt to build a computer originated on the Iowa State University campus in the early 1940s. Researchers on that campus were able to build a small-scale special-purpose electronic computer; however, that computer was never completely operational. At about the same time, a complete design of a fully functional, programmable special-purpose machine, the Z3, was reported in Germany in 1941; it appears that a lack of funding prevented that design from being implemented. History recorded that while these two attempts were in progress, researchers from different parts of the world had opportunities to gain first-hand experience through visits to the laboratories and institutes carrying out the work. It is assumed that such first-hand visits and the interchange of ideas enabled the visitors to embark on similar projects in their own laboratories back home.

As far as general-purpose machines are concerned, the University of Pennsylvania is recorded to have hosted the
building of the Electronic Numerical Integrator and Calculator (ENIAC), a machine completed in 1944. It was the first operational general-purpose machine built using vacuum tubes. The machine was primarily built to help compute artillery firing tables during World War II. It was programmable through the manual setting of switches and the plugging of cables. The machine was slow by today's standard, with a limited amount of storage and primitive programmability. An improved version of ENIAC was proposed on the same campus. The improved version, called the Electronic Discrete Variable Automatic Computer (EDVAC), was an attempt to improve the way programs are entered and to explore the concept of stored programs. It was not until 1952 that the EDVAC project was completed.

Inspired by the ideas implemented in ENIAC, researchers at the Institute for Advanced Study (IAS) at Princeton built (in 1946) the IAS machine, which was about 10 times faster than ENIAC.

In 1946, while the EDVAC project was in progress, a similar project was initiated at Cambridge University. That project was to build a stored-program computer, known as the Electronic Delay Storage Automatic Calculator (EDSAC). In 1949, EDSAC became the world's first full-scale, stored-program, fully operational computer. A spin-off of EDSAC resulted in a series of machines introduced at Harvard. The series consisted of Mark I, II, III, and IV. The latter two machines introduced the concept of separate memories for instructions and data; the term Harvard Architecture was given to such machines to indicate the use of separate memories. It should be noted that the term Harvard Architecture is used today to describe machines with separate caches for instructions and data.

The first general-purpose commercial computer, the UNIVersal Automatic Computer (UNIVAC), was on the market by the middle of 1951. It represented an improvement over the BINAC, which was built in 1949. IBM announced its first computer, the IBM 701, in 1952. The early 1950s witnessed a slowdown in the computer industry. In 1964 IBM announced a line of products under the name IBM 360 series. The series included a number of models that varied in price and performance. This led Digital Equipment Corporation (DEC) to introduce the first minicomputer, the PDP-8, which was considered a remarkably low-cost machine. Intel introduced the first microprocessor, the Intel 4004, in 1971.

The world witnessed the birth of the first personal computer (PC) in 1977, when the Apple computer series was first introduced. In 1977 the world also witnessed the introduction of the VAX-11/780 by DEC. Intel followed suit by introducing the first of its most popular microprocessors, the 80x86 series. Personal computers,
which were first introduced in 1977 by Altair, Processor Technology, North Star, Tandy, Commodore, Apple, and many others, enhanced the productivity of end-users in numerous departments. Personal computers from Compaq, Apple, IBM, Dell, and many others soon became pervasive and changed the face of computing.

In parallel with the small-scale machines, supercomputers were coming into play. The first supercomputer, the CDC 6600, was introduced in 1961 by Control Data Corporation. Cray Research Corporation introduced the best cost/performance supercomputer, the Cray-1, in 1976.

The 1980s and 1990s witnessed the introduction of many commercial parallel computers with multiple processors. They can generally be classified into two main categories: (1) shared-memory and (2) distributed-memory systems. The number of processors in a single machine ranged from several in a shared-memory computer to hundreds of thousands in a massively parallel system. Examples of parallel computers during this era include Sequent Symmetry, Intel iPSC, nCUBE, Intel Paragon, Thinking Machines (CM-2, CM-5), MasPar (MP), Fujitsu (VPP500), and others.

One of the clear trends in computing is the substitution of centralized servers by networks of computers. These networks connect inexpensive, powerful desktop machines to form unequaled computing power. Local area networks (LAN) of powerful personal computers and workstations began to replace mainframes and minis by 1990. These individual desktop computers were soon to be connected into larger complexes of computing by wide area networks (WAN).

The pervasiveness of the Internet created interest in network computing and, more recently, in grid computing. Grids are geographically distributed platforms of computation that should provide dependable, consistent, pervasive, and inexpensive access to high-end computational facilities. Table 1.1, a modified version of a table proposed by Lawrence Tesler (1995), shows the major characteristics of the different computing paradigms associated with each decade of computing, starting in 1960.
1.2. ARCHITECTURAL DEVELOPMENT AND STYLES

Computer architects have always been striving to increase the performance of their architectures. This has taken a number of forms. Among these is the philosophy that by doing more in a single instruction, one can use a smaller number of instructions to perform the same job. The immediate consequence of this is the need for fewer memory read/write operations and an eventual speedup of operations. It was also argued that increasing the complexity of instructions and the number of addressing modes has the theoretical advantage of reducing the "semantic gap" between the instructions in a high-level language and those in the low-level (machine) language. A single machine instruction to convert several binary coded decimal (BCD) numbers to binary is an example of such complex instructions. The huge number of addressing modes considered (more than 20 in the VAX machine) adds to the complexity of instructions. Machines following this philosophy have been referred to as complex instruction set computers (CISCs). Examples of CISC machines include the Intel Pentium, the Motorola MC68000, and the IBM & Macintosh PowerPC. It should be noted that as more capabilities were added to their processors, manufacturers realized that it was increasingly difficult to support higher clock rates that would have been possible otherwise, owing to the increased complexity of computations within a single clock period.

A number of studies from the mid-1970s to the early 1980s also identified that in typical programs more than 80% of the instructions executed are those using assignment statements, conditional branching, and procedure calls. It was also surprising to find out that simple assignment statements constitute almost 50% of those operations. These findings caused a different philosophy to emerge. This philosophy promotes the optimization of architectures by speeding up those operations that are most frequently used while reducing the instruction complexities and the number of addressing modes. Machines following this philosophy have been referred to as reduced instruction set computers (RISCs). Examples of RISC machines include the Sun SPARC and the MIPS machines.

The two philosophies in architecture design have led to an unresolved controversy as to which architecture style is "best". It should, however, be mentioned that the studies have indicated that RISC architectures would indeed lead to faster execution of programs, and the majority of contemporary microprocessor chips seem to follow the RISC paradigm. In this book we will present the salient features and examples of both CISC and RISC machines.
1.3. TECHNOLOGICAL DEVELOPMENT

Computer technology has shown an unprecedented rate of improvement. This includes the development of processors and memories. Indeed, it is the advances in technology that have fueled the computer industry. The integration of numbers of transistors (a transistor is a controlled on/off switch) into a single chip has increased from a few hundred to millions. This impressive increase has been made possible by the advances in the fabrication technology of transistors.

The scale of integration has grown from small-scale (SSI) to medium-scale (MSI) to large-scale (LSI), to very-large-scale integration (VLSI), and currently to wafer-scale integration (WSI). Table 1.2 shows the typical numbers of devices per chip in each of these technologies.

It should be mentioned that the continuous decrease in the minimum device feature size has led to a continuous increase in the number of devices per chip, which in turn has led to a number of developments. Among these is the increase in the number of devices in RAM memories, which in turn helps designers to trade off memory size for speed. The improvement in the feature size provides golden opportunities for introducing improved design styles.
1.4. PERFORMANCE MEASURES

In this section we consider the important issue of assessing the performance of a computer. In particular, we focus our discussion on a number of performance measures that are used to assess computers. Let us admit at the outset that there are various facets to the performance of a computer. For example, a user of a computer measures its performance based on the time taken to execute a given job (program). On the other hand, a laboratory engineer measures the performance of his system by the total amount of work done in a given time. While the user considers the program execution time a measure for performance, the laboratory engineer considers the throughput a more important measure for performance. A metric for assessing the performance of a computer helps comparing alternative designs.

Performance analysis should help answering questions such as how fast can a program be executed using a given computer. In order to answer such a question, we need to determine the time taken by a computer to execute a given job. We define the clock cycle time as the time between two consecutive rising (trailing) edges of a periodic clock signal (Fig. 1.1). Clock cycles allow counting of unit computations, because the storage of computation results is synchronized with the rising (trailing) clock edges. The time required to execute a job by a computer is often expressed in terms of clock cycles.

We denote the number of CPU clock cycles for executing a job to be the cycle count (CC), the cycle time by CT, and the clock frequency by f = 1/CT. The time taken by the CPU to execute a job can be expressed as

CPU time = CC x CT = CC/f

It may be easier to count the number of instructions executed in a given program as compared to counting the number of CPU clock cycles needed for executing that program. Therefore, the average number of clock cycles per instruction (CPI) has been used as an alternate performance measure. The following equation shows how to compute the CPI:

CPI = CPU clock cycles for the program / instruction count

It is known that the instruction set of a given machine consists of a number of instruction categories: ALU (simple assignment and arithmetic and logic instructions), load and store, branch, and so on. In the case where the CPI for each instruction category is known, the overall CPI can be computed as

CPI = (sum over all categories i of CPI_i x I_i) / instruction count

where I_i is the number of times an instruction of type i is executed in the program and CPI_i is the average number of clock cycles needed to execute such an instruction.

Example: Consider computing the overall CPI for a machine A for which the following performance measures were recorded when executing a set of benchmark programs. Assume that the clock rate of the CPU is 200 MHz.

Instruction category    Percentage of occurrence    Cycles per instruction
ALU                     38                          1
Load & store            15                          3
Branch                  42                          4
Others                  5                           5

Assuming the execution of 100 instructions, the overall CPI can be computed as

CPI_a = (38 x 1 + 15 x 3 + 42 x 4 + 5 x 5) / 100 = 2.76

It should be noted that the CPI reflects the organization and the instruction set architecture of the processor, while the instruction count reflects the instruction set architecture and the compiler technology used. This shows the degree of interdependence between the two performance parameters. Therefore, it is imperative that both the CPI and the instruction count are considered in assessing the merits of a given computer, or equivalently in comparing the performance of two machines.

A different performance measure that has been given a lot of attention in recent years is MIPS (million instructions-per-second), the rate of instruction execution per unit time, defined as

MIPS = instruction count / (execution time x 10^6) = clock rate / (CPI x 10^6)

Example: Suppose that the same set of benchmark programs considered above were executed on another machine, call it machine B, for which the following measures were recorded. What is the MIPS rating for the machine considered in the previous example (machine A) and for machine B, assuming a clock rate of 200 MHz?

CPI_a = (38 x 1 + 15 x 3 + 42 x 4 + 5 x 5) / 100 = 2.76
MIPS_a = clock rate / (CPI_a x 10^6) = (200 x 10^6) / (2.76 x 10^6) = 72.5 (approximately)

CPI_b = (35 x 1 + 30 x 2 + 20 x 5 + 15 x 3) / 100 = 2.4
MIPS_b = clock rate / (CPI_b x 10^6) = (200 x 10^6) / (2.4 x 10^6) = 83.3 (approximately)

Thus MIPS_b > MIPS_a.

It is interesting to note here that although MIPS has been used as a performance measure for machines, one has to be careful in using it to compare machines with different instruction sets. This is because MIPS does not track execution time. Consider, for example, the following measurement made on two different machines running a given set of benchmark programs. The example shows that although machine B has a higher MIPS rating compared to machine A, it requires a longer CPU time to execute the same set of benchmark programs.

Million floating-point instructions per second, MFLOPS (the rate of floating-point instruction execution per unit time), has also been used as a measure for machines' performance. It is defined as

MFLOPS = number of floating-point operations in a program / (execution time x 10^6)

While MIPS measures the rate of average instructions, MFLOPS is only defined for the subset of floating-point instructions. An argument against MFLOPS is the fact that the set of floating-point operations may not be consistent across machines, and therefore the actual floating-point operations will vary from machine to machine. Yet another argument is the fact that the performance of a machine for a given program, as measured in MFLOPS, cannot be generalized to provide a single performance metric for that machine.
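The CPI and MIPS arithmetic in the examples above can be reproduced in a few lines of Python. The helper names (`overall_cpi`, `mips`) are ours; the instruction mixes are the ones from the worked examples:

```python
# Overall CPI = sum(CPI_i * I_i) / instruction count; MIPS = f / (CPI * 10^6).
def overall_cpi(mix):
    # mix: list of (percentage of occurrence, cycles per instruction)
    return sum(pct * cpi for pct, cpi in mix) / sum(pct for pct, _ in mix)

def mips(clock_hz, cpi):
    return clock_hz / (cpi * 1e6)

machine_a = [(38, 1), (15, 3), (42, 4), (5, 5)]   # ALU, load/store, branch, others
machine_b = [(35, 1), (30, 2), (20, 5), (15, 3)]

cpi_a = overall_cpi(machine_a)   # (38 + 45 + 168 + 25) / 100 = 2.76
cpi_b = overall_cpi(machine_b)   # (35 + 60 + 100 + 45) / 100 = 2.40
print(cpi_a, cpi_b)
print(mips(200e6, cpi_a), mips(200e6, cpi_b))   # machine B has the higher MIPS
```

Note that the same code also shows the interdependence discussed above: changing either the mix (compiler technology) or the per-category CPI (organization) changes the result.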
The performance of a machine with regard to one particular program might not be interesting to a broad audience. The use of arithmetic and geometric means are the most popular ways to summarize performance with regard to larger sets of programs (e.g., benchmark suites). These are defined as

Arithmetic mean = (1/n) x (sum over i = 1..n of execution time_i)
Geometric mean = (product over i = 1..n of execution time_i)^(1/n)

where execution time_i is the execution time for the ith program and n is the total number of programs in the set of benchmarks. The following table shows an example for computing these metrics.

We conclude our coverage in this section with a discussion of what is known as Amdahl's law for the speedup (SU_o) due to an enhancement. In this case, we consider the speedup as a measure of how a machine performs after some enhancement relative to its original performance. The following relationship formulates Amdahl's law:

SU_o = performance after enhancement / performance before enhancement
     = execution time before enhancement / execution time after enhancement

Consider, for example, a possible enhancement to a machine that will reduce the execution time for some benchmarks from 25 s to 15 s. We say that the speedup resulting from such a reduction is SU_o = 25/15 = 1.67.

In its given form, Amdahl's law accounts for cases whereby the improvement can be applied to the whole instruction execution time. However, sometimes it may be possible to achieve performance enhancement for only a fraction of time, d. In this case a new formula has to be developed in order to relate the speedup SU_d, due to an enhancement for a fraction of time d, to the overall speedup SU_o. This relationship can be expressed as

SU_o = 1 / [(1 - d) + d/SU_d]

It should be noted that when d = 1, that is, when the enhancement is possible at all times, then SU_o = SU_d, as expected.

Consider, for example, a machine for which a speedup of 30 is possible after applying an enhancement. If under certain conditions the enhancement is only possible for 30% of the time, what is the speedup due to this partial application of the enhancement?

SU_o = 1 / [(1 - d) + d/SU_d] = 1 / [(1 - 0.3) + 0.3/30] = 1 / (0.7 + 0.01) = 1.4 (approximately)

It is interesting to note that the above formula can be generalized, as shown below, to account for the case whereby a number of different independent enhancements can be applied separately, each for a different fraction of time, d1, d2, ..., dn, leading respectively to the speedups SU_d1, SU_d2, ..., SU_dn:

SU_o = 1 / [(1 - (d1 + d2 + ... + dn)) + (d1/SU_d1 + d2/SU_d2 + ... + dn/SU_dn)]
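Amdahl's law and its generalization can likewise be sketched in code; `amdahl` and `amdahl_multi` are our illustrative names:

```python
# Amdahl's law: overall speedup when an enhancement with speedup su_d
# applies only for a fraction d of the execution time.
def amdahl(d, su_d):
    return 1.0 / ((1.0 - d) + d / su_d)

# Worked example from the text: a 30x enhancement usable 30% of the time.
print(round(amdahl(0.3, 30), 2))   # about 1.4, as in the text

# Generalized form: independent enhancements applied to disjoint fractions.
def amdahl_multi(fractions, speedups):
    serial = 1.0 - sum(fractions)
    return 1.0 / (serial + sum(d / s for d, s in zip(fractions, speedups)))

# With a single enhancement, the generalized form reduces to the simple one.
assert abs(amdahl_multi([0.3], [30]) - amdahl(0.3, 30)) < 1e-12
```

The `d = 1` sanity check (enhancement usable at all times) gives `amdahl(1.0, 30) == 30`, matching the expectation stated above.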
1.5. SUMMARY

In this chapter, we provided a brief historical background for the development of computer systems, starting from the first recorded attempt to build a computer, the Z1 in 1938, passing through the CDC 6600 and the Cray supercomputers, and ending with today's modern high-performance machines. We then provided a discussion on the RISC versus CISC architectural styles and their impact on machine performance. This was followed by a brief discussion on the technological development and its impact on computing performance. Our coverage in this chapter was concluded with a detailed treatment of the issues involved in assessing the performance of computers. In particular, we introduced a number of performance measures such as CPI, MIPS, MFLOPS, and the arithmetic/geometric performance means, none of which defines the performance of a machine consistently. Possible ways of evaluating the speedup for given partial or general improvement measurements of a machine were discussed at the end of the chapter.

CHAPTER 2 Instruction Set Architecture and Design

In this chapter we consider the basic principles involved in instruction set architecture and design. Our discussion starts with a consideration of memory locations and addresses. We present an abstract model of the main memory in which it is considered as a sequence of cells, each capable of storing n bits. We then address the issue of storing and retrieving information into and from the memory. The information stored and/or retrieved from the memory needs to be addressed; a discussion of a number of different ways to address memory locations (addressing modes) is the next topic in the chapter. A program consists of a number of instructions that are accessed in a certain order. That motivates us to explain the issue of instruction execution and sequencing in some detail. We then show the application of the presented addressing modes and instruction characteristics in writing sample segments of code for performing a number of simple programming tasks.

A unique characteristic of computer memory is that it should be organized in a hierarchy. In such a hierarchy, larger and slower memories are used to supplement smaller and faster ones. A typical memory hierarchy starts with a small, expensive, and relatively fast module, called the cache. The cache is followed in the hierarchy by a larger, less expensive, and relatively
slow main memory. The part of the hierarchy consisting of the cache and the main memory is built using semiconductor material. It is followed in the hierarchy by larger, less expensive, and far slower magnetic memories that consist of the (hard) disk and the tape. The characteristics and the factors influencing the success of the memory hierarchy of a computer are discussed in detail in Chapters 6 and 7. Our concentration in this chapter is on the main memory from the programmer's point of view. In particular, we focus on the way information is stored in and retrieved out of the memory.

2.1. MEMORY LOCATIONS AND OPERATIONS

The (main) memory can be modeled as an array of millions of adjacent cells, each capable of storing a binary digit (bit) having the value of 1 or 0. These cells are organized in the form of groups of a fixed number, say n, of cells that can be dealt with as an atomic entity. An entity consisting of 8 bits is called a byte. In many systems, the entity consisting of n bits that can be stored and retrieved in and out of the memory using one basic memory operation is called a word; the word is the smallest addressable entity. The typical size of a word ranges from 16 to 64 bits. It is, however, customary to express the size of the memory in terms of bytes. For example, the size of a typical memory of a personal computer is 256 Mbytes, that is, 256 x 2^20 = 2^28 bytes.

In order to be able to move a word in and out of the memory, a distinct address has to be assigned to each word. This address will be used to determine the location in the memory in which a given word is to be stored; this is called a memory write operation. Similarly, the address will be used to determine the memory location from which a word is to be retrieved; this is called a memory read operation.

The number of bits, L, needed to distinctly address M words in a memory is given by L = log2 M. For example, if the size of the memory is 64M (read 64 mega) words, then the number of bits in the address is log2 (64 x 2^20) = log2 (2^26) = 26 bits. Alternatively, if the number of bits in the address is L, then the maximum memory size, in terms of the number of words that can be addressed using these L bits, is M = 2^L. Figure 2.1 illustrates the concept of memory words and word addresses as explained above.

As mentioned above, there are two basic memory operations: the memory write and the memory read operations. During a memory write operation, a word is stored into a memory location whose address is specified. During a memory read operation, a word is read from a memory location whose address is specified. Typically, memory read and memory write operations are performed by the central processing unit (CPU).
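The relation L = log2 M between address width and memory size can be checked directly; this is a trivial sketch, and `address_bits` is our name:

```python
from math import log2

# L = log2(M): number of address bits needed to address M words;
# conversely, L bits can address 2**L words.
def address_bits(words):
    return int(log2(words))

print(address_bits(64 * 2**20))   # the text's example: 64M words need 26 bits
```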
Three basic steps are needed in order for the CPU to perform a write operation into a specified memory location:

1. The word to be stored into the memory location is first loaded by the CPU into a specified register, called the memory data register (MDR).
2. The address of the location into which the word is to be stored is loaded by the CPU into a specified register, called the memory address register (MAR).
3. A signal, called write, is issued by the CPU indicating that the word stored in the MDR is to be stored in the memory location whose address is loaded in the MAR.

Figure 2.2 illustrates the operation of writing the word given by 7E (hex) into the memory location whose address is 2005. Part a of the figure shows the status of the registers and memory locations involved in the write operation before the execution of the operation; part b of the figure shows the status after the execution of the operation. It is worth mentioning that the MDR and the MAR are registers used exclusively by the CPU and are not accessible to the programmer.

Similar to the write operation, three basic steps are needed in order to perform a memory read operation:

1. The address of the location from which the word is to be read is loaded into the MAR.
2. A signal, called read, is issued by the CPU indicating that the word whose address is in the MAR is to be read into the MDR.
3. After some time, corresponding to the memory delay in reading the specified word, the required word will be loaded by the memory into the MDR, ready for use by the CPU.

Figure 2.3 illustrates the operation of reading the word stored in the memory location whose address is 2010. Part a of the figure shows the status of the registers and memory locations involved in the read operation before the execution of the operation; part b of the figure shows the status after the read operation.
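The three-step write and read sequences can be mimicked with a toy model. The `Memory` class below is purely illustrative, with the MAR and MDR modeled as plain attributes:

```python
# Toy model of the MAR/MDR protocol described above.
class Memory:
    def __init__(self, size):
        self.cells = [0] * size
        self.mar = 0   # memory address register
        self.mdr = 0   # memory data register

    def write(self, address, word):
        self.mdr = word                   # step 1: load the word into the MDR
        self.mar = address                # step 2: load the address into the MAR
        self.cells[self.mar] = self.mdr   # step 3: assert the write signal

    def read(self, address):
        self.mar = address                # step 1: load the address into the MAR
        # step 2: assert read; step 3: after the memory delay, the MDR holds the word
        self.mdr = self.cells[self.mar]
        return self.mdr

m = Memory(4096)
m.write(2005, 0x7E)          # the text's example: store 7E (hex) at address 2005
assert m.read(2005) == 0x7E
```

In a real machine, of course, the MAR and MDR are not accessible to the programmer; here they are exposed only to make the three steps visible.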
2.2. ADDRESSING MODES

The information involved in any operation performed by the CPU needs to be addressed. In computer terminology, such information is called the operand. Therefore, any instruction issued by the processor must carry at least two types of information: the operation to be performed, encoded in what is called the opcode field, and the address information of the operand on which the operation is to be performed, encoded in what is called the address field.

Instructions can be classified based on the number of operands as: three-address, two-address, one-and-half-address, one-address, and zero-address. We explain these classes together with simple examples in the following paragraphs. It should be noted that in presenting these examples we use the convention operation, source, destination to express any instruction. In that convention, operation represents the operation to be performed, for example, add, subtract, write, or read. The source field represents the source operand; the source operand can be a constant, a value stored in a register, or a value stored in the memory. The destination field represents the place where the result of the operation is to be stored, for example, a register or a memory location.

A three-address instruction takes the form operation add-1, add-2, add-3. In this form, each of add-1, add-2, and add-3 refers to a register or to a memory location. Consider, for example, the instruction ADD R1, R2, R3. This instruction indicates that the operation to be performed is addition. It also indicates that the values to be added are those stored in registers R1 and R2, and that the result should be stored in register R3. An example of a three-address instruction that refers to memory locations may take the form ADD A, B, C. This instruction adds the contents of memory location A to the contents of memory location B and stores the result in memory location C.

A two-address instruction takes the form operation add-1, add-2. In this form, each of add-1 and add-2 refers to a register or to a memory location. Consider, for example, the instruction ADD R1, R2. This instruction adds the contents of register R1 to the contents of register R2 and stores the result in register R2. The original contents of register R2 are lost due to this operation, while the original contents of register R1 remain intact. This instruction is equivalent to a three-address instruction of the form ADD R1, R2, R2. A similar instruction that uses memory locations instead of registers can take the form ADD A, B. In this case, the contents of memory location A are added to the contents of memory location B, and the result is used to override the original contents of memory location B.

The operation performed by the three-address instruction ADD A, B, C can be performed using the two
two-address instructions MOVE B, C and ADD A, C. The first instruction moves the contents of location B into location C, and the second instruction adds the contents of location A to those of location C (the contents of location B) and stores the result in location C.

A one-address instruction takes the form ADD R1. In this case the instruction implicitly refers to a register, called the accumulator (Racc): the contents of the accumulator are added to the contents of register R1, and the result is stored back into the accumulator Racc. If a memory location is used instead of a register, then an instruction of the form ADD B is used. In this case, the instruction adds the content of the accumulator Racc to the content of memory location B and stores the result back into the accumulator Racc. The instruction ADD R1 is equivalent to the three-address instruction ADD R1, Racc, Racc, or to the two-address instruction ADD R1, Racc.

Between the one-address and the two-address instructions there is the one-and-half-address instruction. Consider, for example, the instruction ADD B, R1. In this case, the instruction adds the contents of register R1 to the contents of memory location B and stores the result in register R1. Owing to the fact that the instruction uses two types of addressing, a register and a memory location, it is called a one-and-half-address instruction. This is because register addressing needs a smaller number of bits than those needed by memory addressing.

It is interesting to indicate that there exist zero-address instructions. These are the instructions that use stack operations. A stack is a data organization mechanism in which the last data item stored is the first data item retrieved. Two specific operations can be performed on a stack: the push and the pop operations. Figure 2.4 illustrates these two operations. As can be seen, a specific register, called the stack pointer (SP), is used to indicate the stack location that can be addressed. In the stack push operation, the SP value is used to indicate the location (called the top of the stack) in which the value (5A in this case) is to be stored (in this case it is location 1023). After storing (pushing) this value, the SP is incremented to indicate location 1024. In the stack pop operation, the SP is first decremented, becoming 1021; the value stored at this location (DD in this case) is then retrieved (popped out) and stored in a register, as shown.

Different operations can be performed using the stack structure. Consider, for example, an instruction such as ADD (SP), (SP + 1). This instruction adds the contents of the stack location pointed to by the SP to those of the location pointed to by SP + 1 and stores the result in the stack location pointed to by the current value of the SP. Figure 2.5 illustrates such an addition operation. Table 2.1 summarizes the instruction classifications discussed above.
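The push and pop behavior described above (SP incremented after a push, decremented before a pop) can be sketched as follows; the `Stack` class and the dictionary-backed memory are illustrative assumptions, not the book's notation:

```python
# Toy stack with an upward-growing stack pointer, as in the text:
# push stores at the location SP indicates, then increments SP;
# pop first decrements SP, then retrieves the value stored there.
class Stack:
    def __init__(self, base):
        self.mem = {}
        self.sp = base   # stack pointer

    def push(self, value):
        self.mem[self.sp] = value
        self.sp += 1

    def pop(self):
        self.sp -= 1
        return self.mem[self.sp]

s = Stack(1023)
s.push(0x5A)            # stored at 1023; SP now indicates 1024
assert s.sp == 1024
assert s.pop() == 0x5A  # SP is decremented back to 1023 before the read
```

A zero-address ADD would then simply pop two values, add them, and push the sum, with no address fields needed in the instruction itself.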
The different ways in which operands can be addressed are called the addressing modes. Addressing modes differ in the way the address information of operands is specified. The simplest addressing mode is to include the operand itself in the instruction, that is, no address information is needed; this is called immediate addressing. A more involved addressing mode is to compute the address of the operand by adding a constant value to the content of a register; this is called indexed addressing. Between these two addressing modes there exist a number of other addressing modes, including absolute (direct) addressing and indirect addressing. A number of different addressing modes are explained below.

2.2.1. Immediate Mode

According to this addressing mode, the value of the operand is (immediately) available in the instruction itself. Consider, for example, the case of loading the decimal value 1000 into a register Ri. This operation can be performed using an instruction such as LOAD #1000, Ri. In this instruction, the operation to be performed is to load a value into a register; the source operand is (immediately) given as 1000, and the destination is the register Ri. It should be noted that in order to indicate that the value 1000 mentioned in the instruction is the operand itself and not its address (immediate mode), it is customary to prefix the operand with the special character (#). As can be seen, the use of the immediate addressing mode is simple. The use of immediate addressing, however, leads to poor programming practice: a change in the value of an operand requires a change in every instruction that uses the immediate value of that operand. A more flexible addressing mode is explained below.
2.2.2. Direct (Absolute) Mode

According to this addressing mode, the address of the memory location that holds the operand is included in the instruction. Consider, for example, the case of loading the value of the operand stored in memory location 1000 into register Ri. This operation can be performed using the instruction LOAD 1000, Ri. In this instruction, the source operand is the value stored in the memory location whose address is 1000, and the destination is the register Ri. Note that the value 1000 is not prefixed with any special character, indicating that it is the (direct or absolute) address of the source operand. Figure 2.6 shows an illustration of the direct addressing mode. In this example, the content of the memory location whose address is 1000 is 2345 at the time the instruction LOAD 1000, Ri is executed; the result of executing the instruction is to load the value 2345 into register Ri.

The direct (absolute) addressing mode provides more flexibility compared to the immediate mode. However, it requires the explicit inclusion of the operand address in the instruction. A more flexible addressing mechanism is provided through the use of the indirect addressing mode, explained below.
2.2.3. Indirect Mode

In the indirect mode, what is included in the instruction is not the address of the operand, but rather the name of a register or a memory location that holds the (effective) address of the operand. In order to indicate the use of indirection in the instruction, it is customary to include the name of the register or the memory location in parentheses. Consider, for example, the instruction LOAD (1000), Ri. This instruction has the memory location 1000 enclosed in parentheses, thus indicating indirection. The meaning of this instruction is to load register Ri with the contents of the memory location whose address is stored at memory address 1000. Because the indirection can be made through either a register or a memory location, we can identify two types of indirect addressing: register indirect addressing, in which a register is used to hold the address of the operand, and memory indirect addressing, in which a memory location is used to hold the address of the operand. The two types are illustrated in Figure 2.7.

2.2.4. Indexed Mode

In this addressing mode, the address of the operand is obtained by adding a constant to the content of a register, called the index register. Consider, for example, the instruction LOAD X(Rind), Ri. This instruction loads register Ri with the contents of the memory location whose address is the sum of the contents of register Rind and the value X. Index addressing is indicated in the instruction by including the name of the index register in parentheses and using the symbol X to indicate the constant to be added. Figure 2.8 illustrates indexed addressing. As can be seen, indexing requires an additional level of complexity over register indirect addressing.
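The effective-address computations of the immediate, direct, indirect, and indexed modes can be contrasted in a short sketch; the memory contents and register names below are illustrative assumptions:

```python
# Illustration of how the operand is located under each addressing mode.
memory = {1000: 2345, 2345: 99, 3000: 55}
registers = {"Rind": 2990}

def load_immediate(value):        # the operand is in the instruction itself
    return value

def load_direct(address):         # the instruction carries the operand's address
    return memory[address]

def load_indirect(address):       # the address holds the operand's address
    return memory[memory[address]]

def load_indexed(x, index_reg):   # effective address = X + (index register)
    return memory[x + registers[index_reg]]

assert load_immediate(1000) == 1000   # immediate: 1000 itself
assert load_direct(1000) == 2345      # direct: Figure 2.6's example
assert load_indirect(1000) == 99      # indirect: one extra memory access
assert load_indexed(10, "Rind") == 55 # indexed: 10 + 2990 = 3000
```

Note how the same constant 1000 yields a different operand under each mode, which is exactly the distinction the section is drawing.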
Other Modes ", "content": "The addressing modes presented above represent the most commonly used modes; they provide the programmer with sufficient means to handle most general programming tasks. However, a number of other addressing modes are used in a number of processors to facilitate the execution of specific programming tasks. These additional addressing modes are more involved compared with those presented above. Among them, the relative, autoincrement, and autodecrement modes represent the most well-known ones; they are explained below. Relative mode: Recall that in indexed addressing an index register, Rind, is used. Relative addressing is the same as indexed addressing except that the program counter (PC) replaces the index register. For example, the instruction load X(PC), Ri loads register Ri with the contents of the memory location whose address is the sum of the contents of the program counter and the value X. Figure 2.9 illustrates the relative addressing mode. Autoincrement mode: This addressing mode is similar to the register indirect addressing mode in the sense that the effective address of the operand is the content of a register, call it the autoincrement register, that is included in the instruction. However, with autoincrement, the content of the autoincrement register is automatically incremented after accessing the operand. As before, indirection is indicated by including the autoincrement register in parentheses; the automatic increment of the register content after accessing the operand is indicated by a plus sign following the parentheses. Consider, for example, the instruction load (Rauto)+, Ri. This instruction loads register Ri with the operand whose address is the content of register Rauto. After loading the operand into register Ri, the content of register Rauto is incremented, pointing, for example, to the next item in a list of items. Figure 2.10 illustrates the autoincrement addressing mode. Autodecrement mode: Similar to the autoincrement mode, the autodecrement mode uses a register to hold the address of the operand. However, in this case the content of the autodecrement register is first decremented, and the new content is used as the effective address of the operand. In order to reflect the fact that the content of the autodecrement register is decremented before accessing the operand, a minus sign is included before the indirection parentheses. Consider, for example, the instruction load -(Rauto), Ri. This instruction decrements the content of register Rauto and then uses the new content as the effective address of the operand that is to be loaded into register Ri. Figure 2.11 illustrates the autodecrement addressing mode. The seven addressing modes presented above are summarized in Table 2.2. In each case, the table shows the name of the addressing mode, its definition, and a generic example illustrating its use. In presenting the different addressing modes we have used the load instruction for illustration; it should be understood, however, that these modes apply equally to other types of instructions in a given machine. In the following section we elaborate on the different types of instructions that typically constitute the instruction set of a given machine.", "url": "RV32ISPEC.pdf#segment13", "timestamp": "2023-10-17 22:51:27", "segment": "segment13", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "2.3. INSTRUCTION TYPES ", "content": "The type of instructions forming the instruction set of a machine is an indication of the power of the underlying architecture. Machine instructions can in general be classified as discussed in the following Subsections 2.3.1 through 2.3.4.", "url": "RV32ISPEC.pdf#segment14", "timestamp": "2023-10-17 22:51:27", "segment": "segment14", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "2.3.1. Data Movement Instructions ", "content": "Data movement instructions are used to move data among the different units of the machine, most notably among the registers of the CPU. A simple register-to-register movement of data can be made through the instruction move Ri, Rj. This instruction moves the content of register Ri into register Rj. The effect of the instruction is to override the contents of the destination register Rj without changing the contents of the source register Ri. Data movement instructions also include those used to move data between registers and memory; these instructions are usually referred to as the load and store instructions, respectively. Examples of the two instructions are load 25838, Rj and store Ri, 1024. The first instruction loads the content of the memory location whose address is 25838 into destination register Rj; the content of that memory location is unchanged by executing the load instruction. The store instruction stores the content of source register Ri into memory location 1024; the content of the source register is unchanged by executing the store instruction. Table 2.3 shows some common data transfer operations and their meanings.", "url": "RV32ISPEC.pdf#segment15", "timestamp": "2023-10-17 22:51:27", "segment": "segment15",
"image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "2.3.2. Arithmetic and Logical Instructions ", "content": "Arithmetic and logical instructions are used to perform arithmetic and logical manipulation of registers and memory contents. Examples of arithmetic instructions include the add and subtract instructions, as in add R1, R2, R0 and subtract R1, R2, R0. The first instruction adds the contents of source registers R1 and R2 and stores the result in destination register R0. The second instruction subtracts the contents of the source registers R1 and R2 and stores the result in destination register R0. The contents of the source registers are unchanged by the add and subtract instructions. In addition to add and subtract, some machines have multiply and divide instructions. These two instructions are expensive to implement and could be substituted by the use of repeated addition or repeated subtraction; for this reason, multiply and divide are absent from some instruction sets, although most modern architectures do provide them. Table 2.4 shows some common arithmetic operations and their meanings. Logical instructions are used to perform logical operations such as shift, compare, and rotate. As their names indicate, these instructions perform, respectively, shift, compare, and rotate operations on register or memory contents. Table 2.5 presents a number of logical operations.", "url": "RV32ISPEC.pdf#segment16", "timestamp": "2023-10-17 22:51:27", "segment": "segment16", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "2.3.3.
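The remark above, that multiplication can be substituted by repeated addition on machines without a multiply instruction, can be sketched directly. This is an illustrative sketch (the function name is invented), written in the counter-and-branch style the chapter's loop examples use, assuming non-negative integer operands.

```python
def multiply_by_repeated_addition(x, y):
    """Compute x * y using only add, decrement, and a zero test --
    the primitives a machine without a multiply instruction provides.
    Assumes x and y are non-negative integers."""
    product = 0
    counter = y              # plays the role of a counter register
    while counter != 0:      # the 'branch if zero' style loop test
        product += x         # repeated ADD
        counter -= 1         # DECREMENT the counter
    return product

print(multiply_by_repeated_addition(5, 15))  # -> 75
```

The cost is proportional to the magnitude of one operand, which is exactly why hardware multiply instructions pay off despite their implementation expense.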
Sequencing Instructions ", "content": "Control (sequencing) instructions are used to change the sequence in which instructions are executed. They take the form of conditional branching (conditional jump), unconditional branching (jump), or call instructions. A common characteristic among these instructions is that their execution changes the program counter (PC) value. The change made to the PC value can be unconditional, for example in the case of unconditional branching or jump instructions. In this case, the earlier value of the PC is lost, and execution of the program starts at the new value specified by the instruction. Consider, for example, the instruction jump NewAddress. Execution of this instruction causes the PC to be loaded with the memory location represented by NewAddress, whereby the instruction stored at this new address is executed. On the other hand, the change made to the PC by a branching instruction can be conditional, based on the value of a specific flag. Examples of these flags include the negative (N), zero (Z), overflow (V), and carry (C) flags. These flags represent individual bits of a specific register, called the condition code (CC) register. The values of the flags are set based on the results of executing different instructions; the meanings of the flags are shown in Table 2.6. Consider, for example, the following group of instructions: load 100, R1; Loop: add (R2)+, R0; decrement R1; branch-if-greater-than Loop. The fourth instruction is a conditional branch instruction, which indicates that if the result of decrementing the contents of register R1 is greater than zero (that is, the Z flag is not set), then the next instruction to be executed is the one labeled Loop. It should be noted that conditional branch instructions can be used to execute program loops, as shown above. Call instructions are used to cause execution of the program to transfer to a subroutine. A call instruction has the same effect as a jump in terms of loading the PC with a new value from which the next instruction is to be executed. However, with a call instruction, the incremented value of the PC (pointing to the next instruction in sequence) is pushed onto the stack. Execution of a return instruction in the subroutine will load the PC with the value popped from the stack, with the effect of resuming program execution at the point where the branching to the subroutine occurred. Figure 2.12 shows a program segment that uses the call instruction. This program segment sums a number of values, N, and stores the result in memory location SUM. The values to be added are stored in N consecutive memory locations starting at NUM. The subroutine, called ADDITION, is used to perform the actual addition of the values, while the main program stores the result in SUM. Table 2.7 presents some common transfer-of-control operations.", "url": "RV32ISPEC.pdf#segment17", "timestamp": "2023-10-17 22:51:28", "segment": "segment17", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "2.3.4. Input/Output Instructions ", "content": "Input and output (I/O) instructions are used to transfer data between the computer and peripheral devices. The two basic I/O instructions used are the input and output instructions. The input instruction is used to transfer data from an input device to the processor; examples of input devices include the keyboard and the mouse. Input devices are interfaced with a computer through dedicated input ports, and computers can use dedicated addresses to address these ports. Suppose that the input port through which a keyboard is connected to a computer carries the unique address 1000. Execution of the instruction input 1000 will then cause the data stored in a specific register in the interface between the keyboard and the computer (call it the input data register) to be moved into a specific register in the computer (called the accumulator). Similarly, execution of the instruction output 2000 causes the data stored in the accumulator to be moved to the data output register of the output device whose address is 2000. Alternatively, a computer can address these ports in the usual way of addressing memory locations. In this case, the computer can input data from an input device by executing the instruction move Rin, R0. This instruction moves the content of register Rin into register R0. Similarly, the instruction move R0, Rin moves the contents of register R0 into register Rin, that is, it performs an output operation. This latter scheme is called memory-mapped input/output. Among the advantages of memory-mapped I/O are the ability to execute a number of memory-dedicated instructions on the registers of I/O devices, in addition to the elimination of the need for dedicated I/O instructions. Its main disadvantage is the need to dedicate part of the memory address space to I/O devices.", "url": "RV32ISPEC.pdf#segment18", "timestamp": "2023-10-17 22:51:28", "segment": "segment18", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "2.4.
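The memory-mapped I/O scheme described above, where ordinary move instructions double as input and output because device registers live in the memory address space, can be sketched as follows. The device addresses and names here are invented for illustration, not taken from the text.

```python
# Toy memory-mapped I/O: a slice at the top of the address space is
# reserved for device registers, so the single data-movement
# primitive also performs I/O. All addresses are illustrative.
MEMORY_SIZE = 4096
KEYBOARD_DATA = 4090   # hypothetical input data register of a keyboard
DISPLAY_DATA = 4091    # hypothetical output data register of a display
memory = [0] * MEMORY_SIZE

def move(dst, src):
    """One memory-to-memory move; because devices are mapped into the
    address space, the same instruction serves as input or output."""
    memory[dst] = memory[src]

memory[KEYBOARD_DATA] = ord("A")   # the device deposits a keystroke
move(100, KEYBOARD_DATA)           # 'input': device register -> memory
move(DISPLAY_DATA, 100)            # 'output': memory -> device register
print(memory[DISPLAY_DATA])        # -> 65
```

The sketch also makes the stated disadvantage concrete: locations 4090 and 4091 are no longer available as ordinary storage.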
PROGRAMMING EXAMPLES ", "content": "Having introduced addressing modes and instruction types, we now move on to illustrate the use of these concepts through a number of programming examples. In presenting these examples, generic mnemonics are used. This is done in order to emphasize the understanding and use of the different addressing modes in performing different operations, independent of the machine used. Applications of similar principles using real-life machine examples are presented in Chapter 3. Example 1: In this example, we would like to show a program segment that can be used to perform the task of adding 100 numbers stored in consecutive memory locations starting at location 1000. The result should be stored in memory location 2000. In this example, use is made of immediate (move 100, R1) and indexed (add 1000(R2), R0) addressing. Example 2: In this example, autoincrement addressing is used to perform the same task performed in Example 1. As can be seen, a given task can be performed using more than one programming methodology; the method used by a programmer depends on his or her experience as well as the richness of the instruction set of the machine used. Note also that the use of autoincrement addressing in Example 2 led to a decrease in the number of instructions used to perform the task. Example 3: This example illustrates the use of a subroutine, SORT, to sort N values in ascending order (Fig. 2.13). The numbers are originally stored in a list starting at location 1000. The sorted values are also stored in the list starting at location 1000. The subroutine sorts the data using the well-known bubble sort technique. The content of register R3 is checked at the end of every loop to find out whether the list is sorted. Example 4: This example illustrates the use of a subroutine, SEARCH, to search for a value VAL in a list of N values (Fig. 2.14). We assume that the list is not originally sorted; therefore, a brute-force search is used in which the value VAL is compared with every element in the list from top to bottom. The content of register R3 is used to indicate whether VAL was found. The first element of the list is located at address 1000. Example 5: This example illustrates the use of a subroutine, SEARCH, to search for a value VAL in a list of N values as in Example 4 (Fig. 2.15). Here, however, we make use of the stack to send the parameters VAL and N.", "url": "RV32ISPEC.pdf#segment19", "timestamp": "2023-10-17 22:51:28", "segment": "segment19", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "2.5.
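The task of Examples 1 and 2, summing 100 numbers from location 1000 into location 2000, can be simulated in the register-and-counter style the examples describe. A minimal sketch in the spirit of the generic mnemonics; the Python variable names (r0, r1, r2) stand in for the registers and are not the book's code.

```python
# Sum 100 numbers stored at consecutive locations starting at 1000,
# storing the result at location 2000 (Example 1 style: a counter in
# R1 plus indexed addressing 1000(R2)).
memory = [0] * 4096
for i in range(100):
    memory[1000 + i] = i + 1          # sample data: 1, 2, ..., 100

r0 = 0     # accumulator for the running sum
r1 = 100   # counter, loaded with the immediate value 100
r2 = 0     # index register used in 1000(R2)
while r1 != 0:
    r0 += memory[1000 + r2]           # add 1000(R2), R0
    r2 += 1                           # advance the index
    r1 -= 1                           # decrement R1; branch if non-zero
memory[2000] = r0
print(memory[2000])  # -> 5050
```

An autoincrement-style version (Example 2) would fold the index advance into the operand fetch, which is precisely why it needs fewer instructions on a machine that supports the mode.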
SUMMARY ", "content": "In this chapter we considered the main issues relating to instruction set design and characteristics. We presented a model of the main memory in which memory is abstracted as a sequence of cells, each capable of storing n bits. A number of addressing modes were presented, including immediate, direct, indirect, indexed, autoincrement, and autodecrement, together with examples showing their use. We also presented a discussion of the main instruction types, including data movement, arithmetic/logical, instruction sequencing, and input/output. The discussion concluded with a presentation of a number of examples showing how to use the principles and concepts discussed in the chapter in the programming solution of a number of sample problems. In the next chapter, we introduce the concepts involved in the programming solution of real-life problems using assembly language.", "url": "RV32ISPEC.pdf#segment20", "timestamp": "2023-10-17 22:51:28", "segment": "segment20", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "CHAPTER 3 Assembly Language Programming ", "content": "Chapter 2 introduced the basic concepts and principles involved in the design of an instruction set for a machine. This chapter considers the issues related to assembly language programming. Although high-level languages and compiler technology have witnessed great advances over the years, assembly language remains necessary in some cases. Programming in assembly can result in machine code that is much smaller and much faster than that generated by a compiler for a high-level language. Small and fast code can be critical in some embedded and portable applications, where resources may be limited. In such cases, small portions of the program that are heavily used may be written in assembly language. For the reader of this book, learning assembly languages and writing assembly code are extremely helpful in understanding computer organization and architecture. A computer program can be represented at different levels of abstraction. A program could be written in a machine-independent, high-level language such as Java or C++. On the other hand, a computer can execute programs only when they are represented in machine language, which is specific to its architecture. A machine language program for a given architecture is a collection of machine instructions represented in binary form. Programs written at any level higher than machine language must be translated to the binary representation before a computer can execute them. An assembly language program is a symbolic representation of a machine language program. Machine language is pure binary code, whereas assembly language is a direct mapping of that binary code onto a symbolic form that is easier for humans to understand and manage. Converting the symbolic representation into machine language is performed by a special program called the assembler. An assembler is a program that accepts a symbolic language program (the source) and produces its machine language equivalent (the target). In translating a program into binary code, the assembler will replace symbolic addresses with numeric addresses, replace symbolic operation codes with machine operation codes, reserve storage for instructions and data, and translate constants into machine representation. The purpose of this chapter is to give the reader a general overview of assembly language programming; it is not meant to be a manual of the assembly language of any specific architecture. We start the chapter with a discussion of a simple hypothetical machine, referred to throughout the chapter. The machine has five registers and an instruction set of 10 instructions. Using this simple machine, we define a rather simple assembly language that is easy to understand and that helps explain the main issues in assembly programming. We introduce instruction mnemonics and syntax, assembler directives and commands, and a discussion of the execution of assembly programs. We conclude the chapter by showing a real-world example of an assembly language: that of the x86 Intel CISC family.", "url": "RV32ISPEC.pdf#segment21", "timestamp": "2023-10-17 22:51:28", "segment": "segment21", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "3.1.
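The chapter describes the assembler's core job as replacing symbolic opcodes and symbolic addresses with their numeric equivalents. That substitution step can be sketched in a few lines; the opcode values and symbol addresses below are invented for illustration (the instruction format, 4-bit opcode plus 12-bit address in a 16-bit word, anticipates the simple machine defined in the next section).

```python
# A minimal sketch of symbolic-to-numeric translation: illustrative
# opcode values and a hand-built symbol table, not a real assembler.
OPCODES = {"LD": 0x1, "ST": 0x2, "ADD": 0x3, "STOP": 0x0}
SYMBOLS = {"X": 0x00C, "Y": 0x00E, "Z": 0x010}

def assemble_line(mnemonic, operand=None):
    """Encode one statement as a 16-bit word: a 4-bit opcode in the
    high bits and a 12-bit address (zero when no operand is needed)."""
    address = SYMBOLS.get(operand, 0)
    return (OPCODES[mnemonic] << 12) | address

print(hex(assemble_line("LD", "X")))   # -> 0x100c
print(hex(assemble_line("STOP")))      # -> 0x0
```

Everything beyond this substitution (storage reservation, constant translation, two-pass symbol resolution) is layered on the same idea, as the later sections of the chapter show.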
A SIMPLE MACHINE ", "content": "The machine language of a given processor is the native language of that processor. Since assembly language is just a symbolic form of machine language, each different type of processor has its own unique assembly language. To study the assembly language of a given processor, we need first to understand the details of that processor: the memory size and organization, the processor registers, the instruction format, and the entire instruction set. In this section, we present a simple hypothetical processor that will be used in explaining the different topics in assembly language throughout the chapter. Our simple machine is an accumulator-based processor with five 16-bit registers: the program counter (PC), the instruction register (IR), the address register (AR), the accumulator (AC), and the data register (DR). The PC contains the address of the next instruction to be executed. The IR contains the operation code portion of the instruction being executed. The AR contains the address portion (if any) of the instruction being executed. The AC serves as the implicit source and destination of data, and the DR is used to hold data on its way to and from memory. The memory unit is made up of 4096 words of storage, with a word size of 16 bits. The processor is shown in Figure 3.1. We assume that our simple processor supports three types of instructions: data transfer, data processing, and program control. The data transfer operations are load, store, and move data between the registers AC and DR. The data processing instructions are add and subtract. The program control instructions are jump and conditional jump. The instruction set of our processor is summarized in Table 3.1. The instruction size is 16 bits: 4 bits for the operation code and 12 bits for the address, where appropriate. Example 1: Let us write a machine language program that adds the contents of memory location 12 (00C hex), initialized to 350, and memory location 14 (00E hex), initialized to 96, and stores the result in location 16 (010 hex), initialized to 0. The program is given in binary instructions in Table 3.2. The first column gives the memory location, in binary, of each instruction and operand; the second column lists the contents of these memory locations. For example, the contents of location 0 are the instruction with opcode 0001 and operand address 0000 0000 1100. Please note that in the case of operations that do not require an operand, the operand portion of the instruction is shown as zeros. The program is expected to be stored in the indicated memory locations, starting at location 0, during execution. If the program were stored in different memory locations, the addresses in the instructions would need to be updated to reflect the new locations. It is clear that programs written in binary code are difficult to understand and, of course, to debug. Representing the instructions in hexadecimal will reduce the number of digits to four per instruction; Table 3.3 shows the program in hexadecimal.", "url": "RV32ISPEC.pdf#segment22", "timestamp": "2023-10-17 22:51:28", "segment": "segment22", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "3.2. INSTRUCTION MNEMONICS AND SYNTAX ", "content": "Assembly language is the symbolic form of machine language. Assembly programs are written using short abbreviations called mnemonics. A mnemonic is an abbreviation that represents the actual machine instruction. Assembly language programming is the writing of machine instructions in mnemonic form, where each machine instruction (binary or hex value) is replaced with a mnemonic. Clearly, the use of mnemonics is more meaningful than that of hex or binary values, which would make programming at this low level easier and more manageable. An assembly program consists of a sequence of assembly statements, where statements are written one per line. Each line of an assembly program is split into the following four fields: label, operation code (opcode), operand, and comments. Figure 3.2 shows the four-column format of an assembly instruction. Labels are used to provide symbolic names for memory addresses. A label is an identifier that can be used on a program line in order to branch to the labeled line; it can also be used to access data using a symbolic name. The maximum length of a label differs from one assembly language to another: some allow up to 32 characters in length, while others may be restricted to six characters. Some assembly languages (processors) require a colon after each label while others do not. For example, SPARC assembly requires a colon after every label, but Motorola assembly does not; Intel assembly requires colons after code labels but not data labels. The operation code (opcode) field contains the symbolic abbreviation of a given operation. The operand field consists of additional information or data that the opcode requires. The operand field may be used to specify a constant, a label, immediate data, a register, or an address. The comments field provides space for documentation to explain what has been done, for the purpose of debugging and maintenance. For our simple processor described in the previous section, we assume that the label field may be empty or may hold up to six characters, that there is no colon requirement after a label, and that comments are preceded by a designated comment character. The simple mnemonics for the ten
binary instructions of Table 3.1 are summarized in Table 3.4. Let us consider the following assembly instruction: Start: LD X (copy the contents of location X into AC). The label of this instruction is Start, and the instruction is LD X. The label can be used in the program as a reference, as shown in the following instruction: BRA Start (go to the statement with label Start). This jump instruction makes the processor jump to the memory address associated with the label Start, thus executing the instruction LD X immediately after the BRA instruction. In addition to program instructions, an assembly program may also include pseudo instructions, or assembler directives. Assembler directives are commands that are understood by the assembler and do not correspond to actual machine instructions. For example, the assembler can be asked to allocate memory storage. In the assembly language of our simple processor, we assume that we can use the pseudo instruction W to reserve a word (16 bits) in memory. For example, the following pseudo instruction reserves a word for the label X and initializes it to the decimal value 350: X: W 350 (reserve a word initialized to 350). The label of the pseudo instruction W 350 is X, which means that X names the memory address of this value. The following is the assembly code of the machine language program of Example 1 in the previous section (the label of the second data word, here written Y, is not legible in the source and is reconstructed): LD X (AC <- X); MOV (DR <- AC); LD Y (AC <- Y); ADD (AC <- AC + DR); ST Z (Z <- AC); STOP; X: W 350 (reserve a word initialized to 350); Y: W 96 (reserve a word initialized to 96); Z: W 0 (the result is stored here). Example 2: In this example, we write an assembly program to perform the multiplication operation Z <- X * Y, where X, Y, and Z are memory locations. As we know, the assembly language of our simple CPU does not have a multiplication operation. We can compute the product by applying the add operation multiple times. In order to perform the addition X times, we use N as a counter that is initialized to X and decremented by one after each addition step. The BZ instruction is used to test for the case when N reaches 0. We use a memory location to store N; it will need to be loaded into AC before the BZ instruction is executed. We also use a memory location to store the constant 1, and memory location Z holds the partial products and eventually the final result. The following is the assembly program, written in the assembly language of our simple processor. We assume that the values of X and Y are small enough to allow the product to be stored in a single word; for the sake of the example, let us assume that X and Y are initialized to 5 and 15, respectively.", "url": "RV32ISPEC.pdf#segment23", "timestamp": "2023-10-17 22:51:29", "segment": "segment23", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "3.3. ASSEMBLER DIRECTIVES AND COMMANDS ", "content": "The previous section introduced the reader to assembly and machine languages and provided several assembly code segments written using our simple machine model. In writing assembly language programs for a specific architecture, a number of practical issues need to be considered. Among these issues are the following: assembler directives, the use of symbols, the use of synthetic operations, assembler syntax, and interaction with the operating system. The use of assembler directives, also called pseudo-operations, is an important issue in writing assembly language programs. Assembler directives are commands that are understood by the assembler and do not correspond to actual machine instructions; they affect the way the assembler performs the conversion of assembly code to machine code. For example, special assembler directives can be used to instruct the assembler to place data items in proper alignment. Alignment of data in memory is required for the efficient implementation of some architectures. For proper alignment of data, data of n-byte width must be stored at an address divisible by n; for example, a word of two-byte width should be stored at locations with addresses divisible by two. In assembly language programs, symbols can be used to represent numbers, for example, immediate data. This is done to make the code easier to read, understand, and debug; the symbols are translated to their corresponding numerical values by the assembler. The use of synthetic operations helps assembly programmers use instructions that are not directly supported by the architecture; these are translated by the assembler into a set of instructions defined by the architecture. For example, some assemblers allow the use of a synthetic increment instruction on architectures in which an increment instruction is not defined, implementing it using an add instruction. Assemblers usually impose conventions for referring to hardware components such as registers and memory locations. One such convention is prefixing immediate values with a special character, or register names with another character. Not all of the underlying hardware of a machine can be accessed directly by a program. The operating system (OS) plays the role of mediating access to resources such as memory and I/O facilities. Interactions with an operating system take place in the form of code that causes the execution of a function that is part of the OS; these functions are called system
calls.", "url": "RV32ISPEC.pdf#segment24", "timestamp": "2023-10-17 22:51:29", "segment": "segment24", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "3.4. ASSEMBLY AND EXECUTION OF PROGRAMS ", "content": "As we know by now, a program written in assembly language needs to be translated into binary machine language before it can be executed. In this section, we learn how to get from the point of writing an assembly program to the execution phase. Figure 3.3 shows the three steps of the assembly and execution process. The assembler reads the source program in assembly language and generates the object program in binary form. The object program is passed to the linker. The linker checks the object file for calls to procedures in the link library; the linker combines the required procedures from the link library with the object program and produces the executable program. The loader loads the executable program into memory and branches the CPU to its starting address, and the program begins execution.", "url": "RV32ISPEC.pdf#segment25", "timestamp": "2023-10-17 22:51:29", "segment": "segment25", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "3.4.1.
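The assembler, linker, loader chain of Figure 3.3 can be reduced to a toy pipeline: translate, resolve the addresses the assembler could not know, then place the code at its load address. All names, the instruction tuples, and the library address below are invented for this sketch.

```python
# Toy assemble -> link -> load pipeline. Instructions are (op, arg)
# tuples; 'CALL_EXT' marks a call into the link library whose address
# is unknown at assembly time.
def toy_assembler(source):
    """Translate to 'object code', recording unresolved externals."""
    obj, externals = [], []
    for i, (op, arg) in enumerate(source):
        if op == "CALL_EXT":
            externals.append((i, arg))
            obj.append(("CALL", None))   # address filled in by the linker
        else:
            obj.append((op, arg))
    return obj, externals

def toy_linker(obj, externals, library):
    """Combine with library procedures, resolving unknown addresses."""
    for index, name in externals:
        obj[index] = ("CALL", library[name])
    return obj

def toy_loader(executable, base):
    """Relocate: bias every address operand by the load address."""
    return [(op, None if a is None else a + base) for op, a in executable]

source = [("LD", 12), ("CALL_EXT", "print"), ("ST", 16)]
obj, ext = toy_assembler(source)
exe = toy_linker(obj, ext, {"print": 100})
image = toy_loader(exe, base=512)
print(image)  # -> [('LD', 524), ('CALL', 612), ('ST', 528)]
```

Real toolchains split relocation information more finely, but the division of labor matches the text: the assembler emits placeholders, the linker resolves them, and the loader fixes the final addresses.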
Assemblers ", "content": "Assemblers are programs that generate machine code instructions from a source code program written in assembly language. The assembler will replace symbolic addresses with numeric addresses, replace symbolic operation codes with machine operation codes, reserve storage for instructions and data, and translate constants into machine representation. The functions of the assembler can be performed by scanning the assembly program and mapping its instructions to their machine code equivalents. Since symbols can be used in instructions before they are defined in later ones, a single scanning of the program might not be enough to perform the mapping. A simple assembler scans the entire assembly program twice, where each scan is called a pass. During the first pass, it generates a table that includes all symbols and their binary values; this table is called the symbol table. During the second pass, the assembler will use the symbol table and other tables to generate the object program and output some information needed by the linker.", "url": "RV32ISPEC.pdf#segment26", "timestamp": "2023-10-17 22:51:29", "segment": "segment26", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "3.4.2. Data Structures ", "content": "The assembler uses at least three tables to perform its functions: the symbol table, the opcode table, and the pseudo instruction table. The symbol table, which is generated in pass one, has an entry for every symbol in the program. Associated with each symbol are its binary value and other information. Table 3.5 shows the symbol table for the multiplication program segment of Example 2. We assume that the instruction LD X is the starting instruction, stored at location 0 in memory. Since each instruction takes two bytes, the value of the symbol Loop is 4 (004 in hexadecimal). The symbol N in this example is stored at decimal location 40 (028 in hexadecimal). The values of the other symbols are obtained in a similar way. The opcode table provides information about the operation codes. Associated with each symbolic opcode in the table are its numerical value and other information about its type, its instruction length, and its operands. Table 3.6 shows the opcode table for the simple processor described in Section 3.1. As an example, consider the information associated with the opcode LD: it has one operand, which is a memory address; its binary value is 0001; the instruction length of LD is 2 bytes; and its type is memory-reference. The entries of the pseudo instruction table are the pseudo instructions and their symbols; each entry refers the assembler to a procedure that processes the pseudo instruction when it is encountered in the program. For example, if END is encountered, the translation process is terminated. In order to keep track of instruction locations, the assembler maintains a variable called the instruction location counter (ILC). The ILC contains the value of the memory location assigned to the instruction or operand currently being processed. The ILC is initialized to 0 and incremented after processing each instruction: the ILC is incremented by the length of the instruction just processed, or by the number of bytes allocated as a result of a data allocation pseudo instruction. Figures 3.4 and 3.5 show simplified flowcharts of pass one and pass two of a two-pass assembler. Remember that the main function of pass one is to build the symbol table, while the main function of pass two is to generate the object code.", "url": "RV32ISPEC.pdf#segment27", "timestamp": "2023-10-17 22:51:29", "segment": "segment27", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "3.4.3. Linker and Loader ", "content": "The linker is the entity that can combine object modules that may have resulted from assembling multiple assembly modules separately. The loader is the operating system utility that reads the executable into memory and starts its execution. In summary, after assembly modules are translated into object modules, the functions of the linker and loader prepare the program for execution. These functions include combining object modules together, resolving addresses unknown at assembly time, allocating storage, and finally, executing the program.", "url": "RV32ISPEC.pdf#segment28", "timestamp": "2023-10-17 22:51:29", "segment": "segment28", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "3.5.
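The two-pass scheme just described, pass one building the symbol table with an instruction location counter (ILC), pass two emitting code from that table, can be sketched as follows. The 2-byte instruction lengths follow the simple machine of this chapter, but the opcode values and the sample program are invented for the sketch.

```python
# A minimal two-pass assembler sketch. Statements are
# (label, op, operand) tuples; every item occupies 2 bytes, matching
# the text's simple machine. Opcode values are illustrative.
LENGTHS = {"LD": 2, "ADD": 2, "BZ": 2, "ST": 2, "STOP": 2, "W": 2}
OPCODES = {"LD": 1, "ST": 2, "ADD": 3, "BZ": 5, "STOP": 0}

def pass_one(program):
    """Build the symbol table: a label's value is the current ILC."""
    symbol_table, ilc = {}, 0
    for label, op, _ in program:
        if label:
            symbol_table[label] = ilc
        ilc += LENGTHS[op]            # advance by the item's length
    return symbol_table

def pass_two(program, symbol_table):
    """Emit object code, resolving symbols through the table."""
    code = []
    for _, op, operand in program:
        if op == "W":                 # data allocation directive
            code.append(operand)
        else:
            code.append((OPCODES[op] << 12) | symbol_table.get(operand, 0))
    return code

program = [
    (None,   "LD",   "X"),
    ("Loop", "ADD",  "X"),
    (None,   "BZ",   "Done"),
    ("Done", "STOP", None),
    ("X",    "W",    350),
]
symbols = pass_one(program)
print(symbols)  # -> {'Loop': 2, 'Done': 6, 'X': 8}
```

The forward reference to Done in the BZ instruction is exactly why a single pass is not enough: its address only becomes known after the scan reaches it.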
EXAMPLE: THE X86 FAMILY ", "content": "In this section, we discuss the assembly language features and use of the x86 family. We present the basic organizational features of the system, the basic programming model, the addressing modes, a sample of the different instruction types used, and finally some examples showing the use of the assembly language of the system in programming sample real-life problems. In the late 1970s, Intel introduced the 8086, its first 16-bit microprocessor; this processor has a 16-bit external bus. The 8086 evolved into a series of faster and more powerful processors, starting with the 80286 and ending with the Pentium, the latter introduced in 1993. This Intel family of processors is usually called the x86 family. Table 3.7 summarizes the main features of the main members of the family. The Intel Pentium processor has more than three million transistors, and its computational power ranges from two to five times that of its predecessor, the 80486. A number of new features were introduced in the Pentium processor, among them the incorporation of a dual-pipelined superscalar architecture capable of processing more than one instruction per clock cycle. The basic programming model of the 386, 486, and Pentium is shown in Figure 3.6. It consists of three register groups: the general purpose registers, the segment registers, and the instruction pointer (program counter) and flag register. The first set consists of the general purpose registers A, B, C, and D, together with SI (source index), DI (destination index), SP (stack pointer), and BP (base pointer). It should be noted that in naming these registers, a prefix E (as in EAX) is used to indicate the extended 32-bit form. The second set of registers consists of CS (code segment), SS (stack segment), and the four data segment registers DS, ES, FS, and GS. The third set of registers consists of the instruction pointer (program counter) and the flags (status) register; the latter is shown in Figure 3.7. Among the status bits shown in Figure 3.7, the first five are identical to the bits introduced in the early 8085 8-bit microprocessor; the next bits, 6 through 11, are identical to those introduced in the 8086; flag bits 12 through 14 were introduced in the 80286; bits 16 and 17 were introduced in the 80386; and flag bit 18 was introduced in the 80486. Table 3.8 shows the meaning of the flags used in the x86 family. An instruction can perform an operation on one or two operands. In two-operand instructions, the second operand can be immediate data in 2's complement format. Data transfer, arithmetic, and logical instructions can act on immediate data, registers, or memory locations. In the x86 family, both direct and indirect memory addressing can be used. In direct addressing, a displacement address consisting of an 8-, 16-, or 32-bit word is used as a logical address; this logical address is added to the shifted contents of a segment register (the segment base address) to give the physical memory address. Figure 3.8 illustrates the direct addressing process. Address indirection in the x86 family is obtained using the content of a base pointer register (BPR), the content of an index register, or the sum of a base register and an index register. Figure 3.9 illustrates indirect addressing using the BPR. The x86 family of processors defines a number of instruction types, using the naming convention introduced before: data movement, arithmetic and logic, and sequencing (control transfer) instructions. In addition, the x86 family defines other instruction types: string manipulation, bit manipulation, and high-level language support. The data movement instructions in the x86 family include mainly four subtypes: general-purpose, accumulator-specific, address-object, and flag instructions; sample instructions are shown in Table 3.9. The arithmetic and logic instructions in the x86 family include mainly five subtypes: addition, subtraction, multiplication, division, and logic instructions. Sample arithmetic instructions are shown in Table 3.10. The logic instructions include the typical XOR and TEST; the latter performs a logic compare of the source and destination and sets the flags accordingly. In addition, the x86 family has a set of shift and rotate instructions, a sample of which is shown in Table 3.11. The control transfer instructions in the x86 family include mainly four subtypes: conditional, iteration, interrupt, and unconditional instructions; sample instructions are shown in Table 3.12. The processor control instructions in the x86 family include mainly three subtypes: external synchronization, flag manipulation, and general control instructions; sample instructions are shown in Table 3.13. Having introduced the basic features of the instruction set of the x86 processor family, we now move on to present a number of programming examples to show how this instruction set can be used. The examples presented here are those presented at the end of Chapter 2. Example 3: Adding 100 numbers stored in consecutive memory locations starting at location 1000, with the result stored in memory location 2000. The list is defined as an array of N elements, each of size byte. FLAG is a memory variable used to indicate whether the list is sorted. Register CX is used as a counter, together with the loop
instruction loop instruction decrements cx register branch result zero addressing mode used access array list bx 1 called based addressing mode noted since using bx bx 1 cx counter loaded value 999 order exceedthelist example 4 implement search algorithm 8086 instruction set list dened array n elements size word flag memory variable used indicate whether list sorted register cx used counter loop instruction loop instruction decrements noted example call procedure initiated ip register last value pushed top stack therefore care made avoid altering value ip register top stack thus saved temporary variable temp procedure entry restored exit", "url": "RV32ISPEC.pdf#segment29", "timestamp": "2023-10-17 22:51:30", "segment": "segment29", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "3.6. SUMMARY ", "content": "machine language collection machine instructions represented 0s 1s assembly language provides easier use symbolic representation alphanumeric equivalent machine language used onetoone corre spondence assembly language statements machine instructions assembler program accepts symbolic language program source program produces machine language equivalent target program although assembly language programming difcult compared programming highlevel languages still important learn assembly applications small por tions program heavily used may need written assembly language programming assembly result machine code smaller faster generated compiler highlevel language assembly pro grammers access hardware features target machine might accessible highlevel language programmers addition learning assem bly languages great help understanding low level details computer organization architecture chapter provided general overview assembly language programming programmer view x86 intel microprocessor family processors also introduced realworld example examples presented showing use x86 instruction set writing sample 
programs similar to those presented in Chapter 2", "url": "RV32ISPEC.pdf#segment30", "timestamp": "2023-10-17 22:51:30", "segment": "segment30", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "CHAPTER 4 Computer Arithmetic ", "content": "This chapter is dedicated to a discussion of computer arithmetic. Our goal is to introduce the reader to the fundamental issues related to the arithmetic operations and circuits used to support computation in computers. Our coverage starts with an introduction to number systems; in particular, we introduce issues such as number representations and base conversion. This is followed by a discussion of integer arithmetic; in this regard, we introduce a number of algorithms together with the hardware schemes used in performing integer addition, subtraction, multiplication, and division. We end the chapter with a discussion of floating-point arithmetic; in particular, we introduce issues such as floating-point representation, floating-point operations, and floating-point hardware schemes. The IEEE floating-point standard is the last topic discussed in the chapter", "url": "RV32ISPEC.pdf#segment31", "timestamp": "2023-10-17 22:51:30", "segment": "segment31", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.1. 
NUMBER SYSTEMS ", "content": "A number system uses a specific radix (base). Radices that are powers of 2 are widely used in digital systems; these radices include binary (base 2), quaternary (base 4), octal (base 8), and hexadecimal (base 16). The base-2 binary system is dominant in computer systems. An unsigned integer number can be represented using n digits in base b as (a(n-1) a(n-2) ... a2 a1 a0) in base b; this representation is called the positional representation, and each digit a(i) satisfies 0 <= a(i) <= b - 1. Using the positional representation, the decimal value of an unsigned integer number is given by the sum over i from 0 to n - 1 of a(i) * b^i. Consider, for example, the positional representation of the decimal number 106: using 8 digits in base 2, it is represented as 0*2^7 + 1*2^6 + 1*2^5 + 0*2^4 + 1*2^3 + 0*2^2 + 1*2^1 + 0*2^0. Using n digits, the largest value of an unsigned number is given by a(max) = b^n - 1. For example, the largest unsigned number obtained using 4 digits in base 2 is 2^4 - 1 = 15; in this case, decimal numbers ranging from 0 to 15, corresponding to binary 0000 to 1111, can be represented. Similarly, the largest unsigned number obtained using 4 digits in base 4 (Fundamentals of Computer Organization and Architecture, by M. Abd-El-Barr and H. El-Rewini, ISBN 0-471-46741-3, Copyright 2005 John Wiley & Sons, Inc.) is 4^4 - 1 = 255; in this case, decimal numbers ranging from 0 to 255, corresponding to 0000 to 3333, can be represented. Consider now the use of n digits to represent a real number X in radix b, with the most significant k digits representing the integral part and the least significant n - k digits representing the fraction part; the value of X is then given by the sum over i from -(n - k) to k - 1 of x(i) * b^i. It is often necessary to convert the representation of a number in a given base to another, for example, from base 2 to base 10. This can be achieved using a number of methods (algorithms). An important tool in some of these algorithms is the division algorithm. The basis of the division algorithm is representing an integer a in terms of another integer c using a base b; the basic relation used is a = c * q + r, where q is the quotient, r is the remainder with 0 <= r <= b - 1, and q = floor(a / c). Radix conversion is discussed next", "url": "RV32ISPEC.pdf#segment32", "timestamp": "2023-10-17 22:51:30", "segment": "segment32", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.1.1. 
Radix Conversion Algorithm ", "content": "The radix conversion algorithm is used to convert a number representation in a given radix r1 into another representation in a different radix r2. Consider first the conversion of the integral part of a number X, called X(int). The integral part can be expressed as X(int) = ((...(x(k-1) * r2 + x(k-2)) * r2 + ...) * r2 + x1) * r2 + x0. Dividing X(int) by r2 results in a quotient X(q) = (...(x(k-1) * r2 + x(k-2)) * r2 + ...) + x1 and a remainder X(rem) = x0. Repeating the division process on each quotient, retaining the remainders as the required digits, until a zero quotient is obtained, results in the required representation of X(int) in the new radix r2. Using a similar argument, it is possible to show that repeated multiplication of the fractional part of X, X(f), by r2, retaining the obtained integer parts as the required digits, results in the required representation of the fractional part in the new radix r2. It should be noted, however, that unlike the integral part conversion, the fractional part conversion may not terminate after a finite number of repeated multiplications; the process may therefore be terminated after a number of steps, thus leading to an acceptable approximation. Example: consider the conversion of the decimal number 67.575 to binary, that is, r1 = 10, r2 = 2, X(int) = 67, X(f) = 0.575. For the integral part, repeated division by 2 results in the following quotients and remainders: quotients 33, 16, 8, 4, 2, 1, 0; remainders 1, 1, 0, 0, 0, 0, 1. Therefore, the integral part in radix r2 = 2 is X(int) = 1000011. A similar method is used to obtain the fractional part, through repeated multiplication by 2: the successive fractional parts are 0.150, 0.300, 0.600, 0.200, 0.400, 0.800, 0.600, 0.200, with carry (integer) bits 1, 0, 0, 1, 0, 0, 1, 1. The fractional part is therefore X(f) = 0.10010011, and the resultant representation of the number 67.575 in binary is 1000011.10010011", "url": "RV32ISPEC.pdf#segment33", "timestamp": "2023-10-17 22:51:30", "segment": "segment33", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.1.2. 
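The repeated-division and repeated-multiplication procedures described above can be sketched in Python (a minimal sketch; the function names and the digit-alphabet string are our own, not from the text):

```python
def int_to_radix(x, r):
    """Convert the integral part x (a non-negative int) to radix r by
    repeated division, retaining the remainders as the digits."""
    if x == 0:
        return "0"
    digits = []
    while x > 0:
        x, rem = divmod(x, r)
        digits.append("0123456789ABCDEF"[rem])
    # Remainders are produced least-significant digit first.
    return "".join(reversed(digits))

def frac_to_radix(f, r, steps):
    """Convert the fractional part f (0 <= f < 1) to radix r by repeated
    multiplication, retaining the integer carries as the digits; the process
    is cut off after `steps` digits, giving an approximation."""
    digits = []
    for _ in range(steps):
        f *= r
        carry = int(f)
        digits.append("0123456789ABCDEF"[carry])
        f -= carry
    return "".join(digits)
```

With the example in the text, `int_to_radix(67, 2)` yields `"1000011"` and `frac_to_radix(0.575, 2, 8)` yields `"10010011"`, matching 67.575 = 1000011.10010011 in binary.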
Negative Integer Representation ", "content": "There exist a number of methods for the representation of negative integers. These include the sign-magnitude, radix complement, and diminished radix complement methods; they are briefly explained below", "url": "RV32ISPEC.pdf#segment34", "timestamp": "2023-10-17 22:51:30", "segment": "segment34", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.1.3. Sign-Magnitude ", "content": "According to this representation, the most significant of the n bits used to represent the number is used to represent the sign of the number: a 1 in the most significant bit position indicates a negative number, while a 0 in the most significant bit position indicates a positive number. The remaining n - 1 bits are used to represent the magnitude of the number. For example, the negative number -18 is represented using 6 bits, in base 2, in the sign-magnitude format as 110010, while +18 is represented as 010010. Although simple, the sign-magnitude representation is complicated when performing arithmetic operations; in particular, the sign bit has to be dealt with separately from the magnitude bits. Consider, for example, the addition of the two numbers +18 (010010) and -19 (110011) using the sign-magnitude representation. Since the two numbers carry different signs, the result should carry the sign of the larger number in magnitude, in this case -19. The remaining 5-bit magnitudes are subtracted (10011 - 10010) to produce 00001, that is, -1", "url": "RV32ISPEC.pdf#segment35", "timestamp": "2023-10-17 22:51:30", "segment": "segment35", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.1.4. 
Radix Complement ", "content": "According to this system, a positive number is represented the same way as in sign-magnitude; however, a negative number is represented using the b's complement (for base b numbers). Consider, for example, the representation of the number -19 using 2's complement. In this case, the number 19 is first represented as 010011; then each digit is complemented (hence the name radix complement) to produce 101100; finally, a 1 is added at the least significant bit position, giving the result 101101. Now consider the 2's complement representation of the number +18: since the number is positive, it is represented as 010010, the same as in the sign-magnitude case. Consider next the addition of the two numbers: in this case, we add the corresponding bits without giving any special treatment to the sign bit. Adding the two numbers produces 111111, the 2's complement representation of -1, as expected. The main advantage of the 2's complement representation is that no special treatment is needed for the sign of the numbers. Another characteristic of the 2's complement is the fact that a carry coming out of the most significant bit while performing arithmetic operations can be ignored without affecting the correctness of the result. Consider, for example, adding -19 (101101) and +26 (011010): the result is 1 000111, which is correct, +7, once the carry bit is ignored", "url": "RV32ISPEC.pdf#segment36", "timestamp": "2023-10-17 22:51:30", "segment": "segment36", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.1.5. 
Diminished Radix Complement ", "content": "This representation is similar to the radix complement except for the fact that no 1 is added to the least significant bit after complementing the digits of the number, as was done in the radix complement. According to this number system representation, -19 is represented as 101100, while +18 is represented as 010010. If we add the two numbers, we obtain 111110, the 1's complement of -1. The main disadvantage of the diminished radix representation is the need for a correction factor whenever a carry is obtained out of the most significant bit while performing arithmetic operations. Consider, for example, adding -3 (111100) to +18 (010010): we obtain 1 001110; the carry bit is then added to the least significant bit of the result, to obtain 001111, that is, +15, the correct result. Table 4.1 shows a comparison of the 2's complement and the 1's complement representations of an 8-bit number X", "url": "RV32ISPEC.pdf#segment37", "timestamp": "2023-10-17 22:51:30", "segment": "segment37", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.2. INTEGER ARITHMETIC ", "content": "In this section, we introduce a number of techniques used to perform integer arithmetic using the radix complement representation of numbers. Our discussion will focus on base 2, the binary representation", "url": "RV32ISPEC.pdf#segment38", "timestamp": "2023-10-17 22:51:30", "segment": "segment38", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.2.1. Two\u2019s Complement (2\u2019s) Representation ", "content": "In order to represent a number in 2's complement, we perform the following two steps: 1. perform the Boolean complement of each bit, including the sign bit; 2. add 1 to the least significant bit, treating the number as an unsigned binary integer; that is, -A = A' + 1, where A' is the bitwise complement of A. As an example, consider the representation of -22 using 2's complement", "url": "RV32ISPEC.pdf#segment39", "timestamp": "2023-10-17 22:51:30", "segment": "segment39", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.2.2. 
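The two-step rule just given (complement every bit, then add 1) can be sketched in Python; masking with 2^n - 1 performs both steps at once, since -A modulo 2^n equals A' + 1 (a minimal sketch; the helper names are our own):

```python
def twos_complement(value, n):
    """Encode a signed integer in n-bit 2's complement: complement every
    bit and add 1, which is equivalent to reducing modulo 2**n."""
    return value & ((1 << n) - 1)

def from_twos_complement(bits, n):
    """Decode an n-bit 2's complement pattern back to a signed integer:
    a set sign bit means the pattern stands for bits - 2**n."""
    return bits - (1 << n) if bits & (1 << (n - 1)) else bits
```

For instance, `twos_complement(-19, 6)` gives `0b101101`, the pattern derived step by step in Section 4.1.4, and adding it to `0b011010` (+26) and discarding the carry out of bit 5 leaves `0b000111` (+7).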
Two\u2019s Complement Arithmetic ", "content": "Addition. The addition of two n-bit numbers in 2's complement can be performed using an n-bit adder; any carry-out bit can be ignored without affecting the correctness of the result, as long as the result of the addition is in the range -2^(n-1) to +2^(n-1) - 1. Example: consider the addition of the two 2's complement numbers -7 and +4. The addition is carried out as (-7) + (+4) = -3, that is, 1001 + 0100 = 1101, which is -3 in 2's complement. The condition that the result be in the range -2^(n-1) to +2^(n-1) - 1 is important: a result outside this range leads to overflow and hence a wrong result. In simple terms, overflow occurs when the result produced by a given operation is outside the range of representable numbers. Consider the following two examples. Example: consider adding the two 2's complement numbers +7 and +6. The addition is done as (+7) + (+6) = +13, that is, 0111 + 0110 = 1101, a wrong result, since the correct result exceeds the largest representable value, +7. Example: consider adding the two 2's complement numbers -7 and -4. The addition is done as (-7) + (-4) = -11, that is, 1001 + 1100 = 0101, a wrong result, since the correct result is less than the smallest value, -8; notice that the original numbers are negative while the result is positive. The two examples lead to the following observation: when two numbers, both positive or both negative, are added, then overflow is detected if and only if the result has the opposite sign of the added numbers. Subtraction. In 2's complement, subtraction can be performed the same way addition is performed; for example, to perform B - A, we compute B + A' + 1, that is, subtracting A is done by adding the complement of A. Example: consider the subtraction 2 - 7 = -5; this is performed as 0010 + 1000 + 0001 = 1011, which is -5. The observation made earlier about the occurrence of overflow in the context of addition applies in the case of subtraction as well, since subtraction is addition of the complement; consider the following illustrative example. Example: consider the subtraction 7 - (-7) = 14; this is performed as 0111 + 0110 + 0001 = 1110, a wrong answer, since the correct result, +14, is outside the representable range. Hardware structures for addition and subtraction of signed numbers. The addition of two n-bit numbers A and B requires a basic hardware circuit that accepts three inputs, a(i), b(i), and c(i-1); these three bits represent, respectively, the two current bits of the numbers A and B at position i and the carry bit from the previous bit position. The circuit should produce two outputs, s(i) and c(i), representing, respectively, the sum and the carry at position i, according to the following truth table; the output logic functions are given by s(i) = a(i) XOR b(i) XOR c(i-1) and c(i) = a(i)b(i) + a(i)c(i-1) + b(i)c(i-1). The circuit used to implement these two functions is called a full-adder (FA), shown in Figure 4.1. The addition of two n-bit numbers A and B can be 
carried out using n consecutive FAs in an arrangement known as a carry-ripple adder (CRA); see Figure 4.2. The n-bit CRA shown in Figure 4.2 can be used to add 2's complement numbers A and B, in which b(n-1) and a(n-1) represent the sign bits. The same circuit can be used to perform subtraction using the relation B - A = B + A' + 1. Figure 4.3 shows the structure of a binary addition/subtraction logic network: in the figure, the two inputs A and B represent the arguments to be added or subtracted, and a control input determines whether an add or a subtract operation is to be performed. When the control input is 0, an add operation is performed; a subtract operation is performed when the control input is 1. A simple circuit that can implement the add/sub block of Figure 4.3 is shown in Figure 4.4 for the case of 4-bit inputs. One of the main drawbacks of the CRA circuit is the expected long delay between the inputs being presented to the circuit and the final output being obtained, due to the dependence of each stage on the carry output produced by the previous stage. This chain of dependence makes the CRA delay O(n), where n is the number of stages in the adder. In order to speed up the addition process, it is necessary to introduce addition circuits in which the chain of dependence among the adder stages is broken. A number of fast addition circuits exist in the literature; among these, the carry-lookahead (CLA) adder is well known. The CLA adder is introduced next. Consider the CRA circuit, in which the two logic functions realized are s(i) = a(i) XOR b(i) XOR c(i-1) and c(i) = a(i)b(i) + a(i)c(i-1) + b(i)c(i-1). These two functions can be rewritten in terms of two new subfunctions, the carry generate G(i) = a(i)b(i) and the carry propagate P(i) = a(i) XOR b(i). Using these two new subfunctions, we can rewrite the logic equation for the carry output of an arbitrary stage i as c(i) = G(i) + P(i)c(i-1). If we write out the sequence of carry outputs in this form, it shows total independence among the different carries, that is, a broken carry chain. Figure 4.5 shows the overall architecture of a 4-bit CLA adder. There are basically three blocks in a CLA: the first one is used to generate the G and P signals, the second is used to create the carry outputs, and the third block is used to generate the sum outputs. Regardless of the number of bits in the CLA, the delay through the first block is equivalent to one gate delay, the delay through the second block is equivalent to two gate delays, and the delay through the third block is equivalent to one gate delay. Figure 4.5 shows only the generation of the carry and sum outputs; the reader is encouraged to complete the design (see the Chapter Exercises). Multiplication. In discussing multiplication, we shall assume two input arguments: the multiplier Q, given by Q = q(n-1)q(n-2)...q(1)q(0), and the multiplicand M, given by M = m(n-1)m(n-2)...m(1)m(0). A number of methods exist for 
performing multiplication; some of these methods are discussed below. The paper-and-pencil method (for unsigned numbers). This is the simplest method for performing multiplication of two unsigned numbers; the method is illustrated by the example shown below. Example: consider the multiplication of the two unsigned numbers 14 and 10. The process is shown using the binary representation of the two numbers: the multiplicand M = 1110 (14) is multiplied by the multiplier Q = 1010 (10), producing the partial products 0000, 1110, 0000, and 1110, which, appropriately shifted and added, give the final product P = 10001100 (140). The multiplication can be performed using an array of cells, each consisting of an FA, in which each cell computes a given partial product; Figure 4.6 shows the basic cell and an example array for a 4 x 4 multiplier. What characterizes this method is the need for adding n partial products regardless of the values of the multiplier bits. It should be noted that if a given bit of the multiplier is 0, there is no need to compute the corresponding partial product; the following method makes use of this observation. The add-shift method. In this case, multiplication is performed as a series of n conditional addition and shift operations: if the given bit of the multiplier is 0, only a shift operation is performed, while if the given bit of the multiplier is 1, an addition of the partial products and a shift operation are performed. The following example illustrates the method. Example: consider the multiplication of the two unsigned numbers 11 and 13. The process is shown in tabular form. In this process, a 4-bit register A is initialized to all 0s, and C is the carry bit for the most significant bit position. The process is repeated n = 4 times, n being the number of bits in the multiplier Q. If the bit of the multiplier is 1, then M is added to A and the concatenation A,Q is shifted one bit position to the right, with the left-hand bit set to 0; if the bit is 0, only the shift operation is performed on A,Q. The structure required to perform such an operation is shown in Figure 4.7: in the figure, control logic is used to determine the operation to be performed depending on the least significant bit in Q, and an n-bit adder is used to add the contents of registers A and M. In order to speed up the multiplication operation, a number of techniques can be used. These techniques are based on the observation that the larger the number of consecutive zeros and ones in the multiplier, the fewer the partial products that have to be generated: a group of consecutive zeros in the multiplier requires no generation of new partial products, and a group of k consecutive ones in the multiplier requires the generation of fewer than k new partial products. One technique that makes use of this observation is the Booth algorithm; we discuss here the 2-bit Booth algorithm. In the Booth algorithm, two bits of the multiplier, 
q(i) and q(i-1) with 0 <= i <= n - 1, are inspected at a time, and the action taken depends on the binary values of these two bits. If the two values are, respectively, 01, then M is added to A; if the two values are 10, then M is subtracted from A; no action is needed when the values are 00 or 11. In all four cases, an arithmetic shift right operation on the concatenation A,Q is performed, and the whole process is repeated n times, where n is the number of bits in the multiplier. The Booth algorithm requires the inclusion of a bit q(-1) = 0 to the right of the least significant bit of the multiplier Q at the beginning of the multiplication process. The Booth algorithm is illustrated in Figure 4.8, and the following examples show how to apply its steps. The hardware structure shown in Figure 4.9 can be used to perform the operations required by the Booth algorithm. It consists of an ALU that can perform the add/sub operation depending on the two bits q(0) and q(-1); control circuitry is also required to perform the arithmetic shift right of A,Q and to issue the appropriate signals needed to control the number of cycles. The main drawbacks of the Booth algorithm are the variability in the number of add/sub operations and the inefficiency of the algorithm when the bit pattern in Q becomes a repeated pair of a 0 followed by a 1 (or a 1 followed by a 0); this last situation can be improved upon if three, rather than two, bits are inspected at a time. Division. Among the four basic arithmetic operations, division is considered the most complex and most time consuming. In its simplest form, an integer division operation takes two arguments, the dividend X and the divisor D, and produces two outputs, the quotient Q and the remainder R; the four quantities satisfy the relation X = Q * D + R, where R < D. A number of complications arise when dealing with division. The most obvious among these is the case D = 0. Another, more subtle, difficulty is the requirement that the resulting quotient should not exceed the capacity of the register holding it: this can be satisfied if Q < 2^(n-1), where n is the number of bits in the register holding the quotient, which implies that the relation X < 2^(n-1) * D must also be satisfied. Failure to satisfy these conditions leads to an overflow condition. We start by showing a division algorithm assuming that the values involved (dividend, divisor, quotient, and remainder) are interpreted as fractions; the process is also valid for integer values, as will be shown later. In order to obtain a positive fractional quotient Q = 0.q(1)q(2)...q(n-1), the division operation is performed as a sequence of repeated subtractions and shifts. In each step, the remainder is compared with the divisor D: if the remainder is larger, the quotient bit is set to 1, otherwise the quotient bit is set to 0. This can be represented by the following equation: r(i) = 2r(i-1) - q(i) * D, where r(i) and r(i-1) are the current and previous remainders, respectively, with r(0) = X and 1 <= i <= n - 1. In the example shown, the resultant quotient is Q = 0101, that is, 
5, and the remainder is R = 0010, that is, 2, the correct values. The hardware structure for binary division is shown in Figure 4.10. In the figure, the divisor and the contents of register A are added using an (n + 1)-bit adder, and control logic is used to perform the required shift-left operation (see the Exercises). The comparison of the remainder and the divisor is considered the most difficult step in the division process. One way to perform the comparison is to subtract D from 2r(i-1): if the result is negative, then q(i) is set to 0, which requires restoring the previous value by adding back the subtracted value; this is called restoring division. The alternative is to use the nonrestoring division algorithm. Step 1: repeat the following n times: 1. if the sign of A is 0, shift A,Q left and subtract D from A; otherwise, shift A,Q left and add D to A; 2. if the sign of A is 0, set q(0) to 1; otherwise, set q(0) to 0. Step 2: if the sign of A is 1, add D to A", "url": "RV32ISPEC.pdf#segment40", "timestamp": "2023-10-17 22:51:31", "segment": "segment40", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.3. FLOATING-POINT ARITHMETIC ", "content": "Having considered integer representation and arithmetic, we consider in this section floating-point representation and arithmetic", "url": "RV32ISPEC.pdf#segment41", "timestamp": "2023-10-17 22:51:31", "segment": "segment41", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.3.1. 
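The nonrestoring steps listed above can be sketched in Python (a toy model, not the book's hardware: register A and the shifted A,Q pair are simulated with ordinary integers, and the function name is our own):

```python
def nonrestoring_divide(x, d, n):
    """Nonrestoring division of an n-bit unsigned dividend x by divisor d.
    Returns (quotient, remainder) with x == quotient * d + remainder."""
    a, q = 0, x
    for _ in range(n):
        # Shift the A,Q pair left: the MSB of Q moves into the LSB of A.
        a = (a << 1) | ((q >> (n - 1)) & 1)
        q = (q << 1) & ((1 << n) - 1)
        # Step 1.1: subtract D if A is non-negative, otherwise add D.
        a = a - d if a >= 0 else a + d
        # Step 1.2: set the new quotient bit from the sign of A.
        if a >= 0:
            q |= 1
    # Step 2: final correction of the remainder if A ended negative.
    if a < 0:
        a += d
    return q, a
```

For example, dividing the 4-bit dividend 7 by 3 yields the quotient 2 and remainder 1.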
Floating-Point Representation (Scientific Notation) ", "content": "A floating-point (FP) number can be represented in the form +/- m * b^e, where m, called the mantissa, represents the fraction part of the number and is normally represented as a signed binary fraction, e represents the exponent, and b represents the base (radix) of the exponent. For example, Figure 4.11 shows the representation of a floating-point number with a 23-bit mantissa, an 8-bit exponent e, and a sign bit (1 bit); the value stored in the sign bit is 0 if the number is positive and 1 if the number is negative. The exponent field in this example can represent positive numbers from 0 to 255. To represent both positive and negative exponents, a fixed value, called the bias, is subtracted from the exponent field to obtain the true exponent. Assume, for example, that a bias of 128 is used; then true exponents can range from -128 (stored as 0 in the exponent field) to +127 (stored as 255 in the exponent field). Based on this representation of the exponent, an exponent of +4 is represented by storing 132 in the exponent field, while an exponent of -12 is represented by storing 116 in the exponent field. Assuming b = 2, the FP number 1.75 can be represented in the forms shown in Figure 4.12. To simplify performing operations on FP numbers and to increase their precision, they are always represented in what are called normalized forms: an FP number is said to be normalized if the leftmost bit of the mantissa is 1. Therefore, among the three possible representations of 1.75, only the first, normalized, representation is used. Since the most significant bit (MSB) of a normalized FP number is always 1, this bit is often not stored and is assumed to be a hidden bit to the left of the radix point; the stored mantissa then stands for 1.m, and a nonzero normalized number represents the value (-1)^S * (1.m) * 2^(e-128). Floating-point arithmetic: addition/subtraction. The difficulty of adding two FP numbers stems from the fact that they may have different exponents; therefore, before adding two FP numbers, their exponents must be equalized, that is, the mantissa of the number with the smaller magnitude exponent must be aligned. The steps required to add or subtract two floating-point numbers are: 1. compare the magnitudes of the two exponents and make a suitable alignment of the number with the smaller magnitude exponent; 2. perform the addition or subtraction; 3. perform normalization by shifting the resulting mantissa and adjusting the resulting exponent. Example: consider adding the two FP numbers 1.1100 * 2^4 and 1.1000 * 2^2. 1. Alignment: 1.1000 * 2^2 is aligned to 0.0110 * 2^4. 2. Addition: adding the two numbers gives 10.0010 * 2^4. 3. Normalization: the final normalized result is 0.1000 * 2^6, assuming 4 bits are allowed after the radix 
point. The addition/subtraction of two FP numbers is illustrated using the schematic shown in Figure 4.13. Multiplication. The multiplication of a pair of FP numbers X = m(x) * 2^a and Y = m(y) * 2^b is represented as X * Y = (m(x) * m(y)) * 2^(a+b). A general algorithm for multiplication of FP numbers consists of three basic steps: 1. compute the exponent of the product by adding the exponents together; 2. multiply the two mantissas; 3. normalize and round the final product. Example: consider multiplying the two FP numbers X = 1.000 * 2^-2 and Y = -1.010 * 2^-1. 1. Add exponents: (-2) + (-1) = -3. 2. Multiply mantissas: 1.000 * (-1.010) = -1.010000. The product is -1.0100 * 2^-3. The multiplication of two FP numbers is illustrated using the schematic shown in Figure 4.14. Division. The division of a pair of FP numbers X = m(x) * 2^a and Y = m(y) * 2^b is represented as X/Y = (m(x)/m(y)) * 2^(a-b). A general algorithm for division of FP numbers consists of three basic steps: 1. compute the exponent of the result by subtracting the exponents; 2. divide the mantissas and determine the sign of the result; 3. normalize and round the resulting value, if necessary. Example: consider the division of the two FP numbers X = 1.0000 * 2^-2 and Y = -1.0100 * 2^-1. 1. Subtract exponents: (-2) - (-1) = -1. 2. Divide the mantissas: 1.0000 / (-1.0100) = -0.1101. 3. The result is -0.1101 * 2^-1. The division of two FP numbers is illustrated using the schematic shown in Figure 4.15", "url": "RV32ISPEC.pdf#segment42", "timestamp": "2023-10-17 22:51:32", "segment": "segment42", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.3.3. 
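The three multiplication steps above (add exponents, multiply mantissas, normalize) can be sketched with mantissas held as scaled integers (a minimal sketch; the (mantissa, exponent) encoding, the `frac_bits` parameter, and the function name are our own, not the book's):

```python
def fp_multiply(mx, ex, my, ey, frac_bits=4):
    """Multiply two FP numbers given as (mantissa, exponent) pairs, where
    each mantissa is an integer scaled by 2**frac_bits (so 1.010 binary
    with frac_bits=3 is stored as 10)."""
    e = ex + ey            # step 1: add the exponents
    m = mx * my            # step 2: multiply mantissas (2*frac_bits fractional bits)
    m >>= frac_bits        # rescale back to frac_bits fractional bits (truncating)
    # step 3: normalize so that 1.0 <= |mantissa| < 2.0
    while abs(m) >= (2 << frac_bits):
        m >>= 1
        e += 1
    return m, e
```

With the example in the text, X = 1.000 * 2^-2 is (8, -2) and Y = -1.010 * 2^-1 is (-10, -1) at 3 fractional bits, and `fp_multiply` returns (-10, -3), that is, -1.010 * 2^-3.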
The IEEE Floating-Point Standard ", "content": "There are essentially two IEEE standard floating-point formats: the basic and the extended formats. For each of these, IEEE defines two formats, the single-precision and the double-precision formats. The single-precision format is 32 bits, while the double-precision format is 64 bits; the single extended format should have at least 44 bits, and the double extended format should have at least 80 bits. In the single-precision format, base 2 is used, thus allowing the use of a hidden bit, and the exponent field is 8 bits; the IEEE single-precision representation is shown in Figure 4.16. The 8-bit exponent allows 256 combinations, among which two combinations are reserved for special values: 1. e = 0 is reserved for zero (with fraction = 0) and for denormalized numbers (with fraction != 0); 2. e = 255 is reserved for +/- infinity (with fraction = 0) and for NaN, Not a Number (with fraction != 0). The single extended IEEE format extends the exponent field from 8 to 11 bits and the mantissa field from 23 to at least 32 bits (without a hidden bit), resulting in a total length of at least 44 bits; the single extended format is used in calculating intermediate results", "url": "RV32ISPEC.pdf#segment43", "timestamp": "2023-10-17 22:51:32", "segment": "segment43", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.3.4. Double-Precision IEEE Format ", "content": "In this format, the exponent field is 11 bits and the significand field is 52 bits; the format is shown in Figure 4.17. Similar to the single-precision format, the extreme values of e (0 and 2047) are reserved for the same purposes. A number of attributes characterizing the IEEE single- and double-precision formats are summarized in Table 4.2", "url": "RV32ISPEC.pdf#segment44", "timestamp": "2023-10-17 22:51:32", "segment": "segment44", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "4.4. 
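The single-precision field layout described above (1 sign bit, 8-bit biased exponent, 23-bit fraction) can be inspected with Python's standard `struct` module; a small sketch, with `decode_single` being our own helper name:

```python
import struct

def decode_single(x):
    """Split an IEEE single-precision value into its sign, biased-exponent,
    and fraction fields (bias 127; hidden bit implied when 0 < e < 255)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                 # 1 sign bit
    exponent = (bits >> 23) & 0xFF    # 8-bit biased exponent field
    fraction = bits & 0x7FFFFF        # 23-bit fraction field
    return sign, exponent, fraction
```

For example, 1.75 = 1.11 binary * 2^0 decodes to sign 0, exponent field 127 (true exponent 0), and fraction bits 110 followed by zeros.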
SUMMARY ", "content": "In this chapter, we have discussed a number of issues related to computer arithmetic. Our discussion started with an introduction to number representation and radix conversion techniques. We then discussed integer arithmetic; in particular, we discussed the four main operations: addition, subtraction, multiplication, and division. In each case, we have shown basic architectures and organization. The last topic discussed in the chapter was floating-point representation and arithmetic; we have also shown the basic architectures needed to perform the basic floating-point operations: addition, subtraction, multiplication, and division. We ended our discussion with the IEEE floating-point number representation", "url": "RV32ISPEC.pdf#segment45", "timestamp": "2023-10-17 22:51:32", "segment": "segment45", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "CHAPTER 5 ", "content": "Processing Unit Design. In previous chapters, we studied the history of computer systems and the fundamental issues related to memory locations, addressing modes, assembly language, and computer arithmetic. In this chapter, we focus our attention on the main component of any computer system, the central processing unit (CPU). The primary function of the CPU is to execute a set of instructions stored in the computer's memory. A simple CPU consists of a set of registers, an arithmetic logic unit (ALU), and a control unit (CU). In what follows, the reader will be introduced to the organization and main operations of the CPU", "url": "RV32ISPEC.pdf#segment46", "timestamp": "2023-10-17 22:51:32", "segment": "segment46", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.1. 
CPU BASICS ", "content": "A typical CPU has three major components: (1) a register set, (2) an arithmetic logic unit (ALU), and (3) a control unit (CU). The register set differs from one computer architecture to another; it is usually a combination of general-purpose and special-purpose registers. General-purpose registers can be used for any purpose, hence the name general purpose; special-purpose registers have specific functions within the CPU. For example, the program counter (PC) is a special-purpose register used to hold the address of the instruction to be executed next; another example of a special-purpose register is the instruction register (IR), which is used to hold the instruction currently being executed. The ALU provides the circuitry needed to perform the arithmetic, logical, and shift operations demanded by the instruction set; Chapter 4 covered a number of arithmetic operations and the circuits used to support computation in an ALU. The control unit is the entity responsible for fetching the instruction to be executed from main memory and for decoding and then executing it. Figure 5.1 shows the main components of the CPU and its interactions with the memory system and the input/output devices. The CPU fetches instructions from memory, reads and writes data from and to memory, and transfers data from and to input/output devices. A typical and simple execution cycle can be summarized as follows: 1. the next instruction to be executed, whose address is obtained from the PC, is fetched from memory and stored in the IR; 2. the instruction is decoded; 3. operands are fetched from memory and stored in CPU registers, if needed; 4. the instruction is executed; 5. the results are transferred from CPU registers to memory, if needed. The execution cycle is repeated as long as there are more instructions to execute. A check for pending interrupts is usually included in the cycle; examples of interrupts include an I/O device request, arithmetic overflow, or a page fault (see Chapter 7). When an interrupt request is encountered, a transfer to an interrupt handling routine takes place. Interrupt handling routines are programs invoked to collect the state of the currently executing program, correct the cause of the interrupt, and restore the state of the program. The actions of the CPU during an execution cycle are defined by micro-orders issued by the control unit; these micro-orders are individual control signals sent over dedicated control lines. For example, let us assume that we want to execute an instruction that moves the contents of register X to register Y, and let us also assume that both registers are connected to the data bus D. The control unit will issue a control signal to tell 
register X to place its contents on the data bus D; after some delay, another control signal will be sent to tell register Y to read the data from bus D. The activation of the control signals is determined using either hardwired control or microprogramming; these concepts are explained later in this chapter. The remainder of the chapter is organized as follows: Section 5.2 presents the register set and explains the different types of registers and their functions; Section 5.3 helps us understand what is meant by the datapath and control; the CPU instruction cycle and the control unit are covered in Sections 5.4 and 5.5, respectively", "url": "RV32ISPEC.pdf#segment47", "timestamp": "2023-10-17 22:51:32", "segment": "segment47", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.2. REGISTER SET ", "content": "Registers are essentially extremely fast memory locations within the CPU that are used to create and store the results of CPU operations and other calculations. Different computers have different register sets; they differ in the number of registers, the register types, and the length of each register. They also differ in the usage of each register: general-purpose registers can be used for multiple purposes and assigned a variety of functions by the programmer, while special-purpose registers are restricted to specific functions. In some cases, some registers are used only to hold data and cannot be used in the calculation of operand addresses; the length of a data register must be long enough to hold values of all supported data types, and some machines allow two contiguous registers to hold double-length values. Address registers may be dedicated to a particular addressing mode or may be used as general-purpose address registers; an address register must be long enough to hold the largest address. The number of registers in a particular architecture affects the instruction set design: a small number of registers may result in an increase in memory references. Another type of register is used to hold processor status bits, or flags; these bits are set by the CPU as the result of the execution of an operation, and the status bits can be tested at a later time as part of another operation", "url": "RV32ISPEC.pdf#segment48", "timestamp": "2023-10-17 22:51:32", "segment": "segment48", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.2.1. 
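The five-step execution cycle described in Section 5.1 can be sketched as a toy simulator; the accumulator machine below and its LOAD/ADD/STORE/HALT opcodes are hypothetical, purely for illustration, not the book's design:

```python
def run(program, memory):
    """Toy fetch-decode-execute loop for a hypothetical accumulator machine.
    Instructions are (opcode, operand) pairs: LOAD reads a memory word into
    the accumulator, ADD adds a memory word to it, STORE writes it back,
    and HALT stops the cycle."""
    pc, acc = 0, 0
    while True:
        ir = program[pc]          # 1. fetch: instruction addressed by PC -> IR
        pc += 1                   #    PC now points at the next instruction
        op, addr = ir             # 2. decode
        if op == "LOAD":          # 3-4. fetch the operand and execute
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":       # 5. transfer the result back to memory
            memory[addr] = acc
        elif op == "HALT":
            return memory
```

For example, the program LOAD 0, ADD 1, STORE 2, HALT run over the memory [5, 7, 0] leaves 12 in location 2.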
Memory Access Registers ", "content": "Two registers are essential in memory write and read operations: the memory data register (MDR) and the memory address register (MAR). The MDR and MAR are used exclusively by the CPU and are not directly accessible to programmers. In order to perform a write operation into a specified memory location, the MDR and MAR are used as follows: 1. the word to be stored in the memory location is first loaded by the CPU into the MDR; 2. the address of the location into which the word is to be stored is loaded by the CPU into the MAR; 3. a write signal is issued by the CPU. Similarly, to perform a memory read operation, the MDR and MAR are used as follows: 1. the address of the location from which the word is to be read is loaded into the MAR; 2. a read signal is issued by the CPU; 3. the required word is loaded by the memory into the MDR, ready for use by the CPU", "url": "RV32ISPEC.pdf#segment49", "timestamp": "2023-10-17 22:51:32", "segment": "segment49", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.2.2. Instruction Fetching Registers ", "content": "Two main registers are involved in fetching an instruction for execution: the program counter (PC) and the instruction register (IR). The PC is the register that contains the address of the next instruction to be fetched; the fetched instruction is loaded into the IR for execution. After a successful instruction fetch, the PC is updated to point to the next instruction to be executed. In the case of a branch operation, the PC is updated to point to the branch target instruction after the branch is resolved, that is, once the target address is known", "url": "RV32ISPEC.pdf#segment50", "timestamp": "2023-10-17 22:51:32", "segment": "segment50", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.2.3. Condition Registers ", "content": "Condition registers, or flags, are used to maintain status information. Some architectures contain a special program status word (PSW) register; the PSW contains bits that are set by the CPU to indicate the current status of an executing program. These indicators are typically for arithmetic operations, interrupts, memory protection information, and processor status", "url": "RV32ISPEC.pdf#segment51", "timestamp": "2023-10-17 22:51:32", "segment": "segment51", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.2.4. 
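The three-step write and read protocols above can be sketched as a toy model of CPU-memory traffic through the MAR and MDR (the class and method names are our own, for illustration only):

```python
class MemoryInterface:
    """Toy model of memory access routed through the MAR and MDR registers."""
    def __init__(self, size):
        self.memory = [0] * size
        self.mar = 0   # memory address register
        self.mdr = 0   # memory data register

    def write(self, address, word):
        self.mdr = word                      # 1. load the word into the MDR
        self.mar = address                   # 2. load the address into the MAR
        self.memory[self.mar] = self.mdr     # 3. issue the write signal

    def read(self, address):
        self.mar = address                   # 1. load the address into the MAR
        self.mdr = self.memory[self.mar]     # 2-3. issue read; memory fills the MDR
        return self.mdr
```

Note that the program only ever names an address and a word; the MAR and MDR themselves stay internal to the interface, mirroring the fact that they are not programmer-visible.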
Special-Purpose Address Registers ", "content": "Index register. As covered in Chapter 2, in index addressing the address of the operand is obtained by adding a constant to the content of a register, called the index register. The index register holds an address displacement; index addressing is indicated in the instruction by including the name of the index register in parentheses and using the symbol X to indicate the constant to be added. Segment pointers. As we will discuss in Chapter 6, in order to support segmentation, the address issued by the processor should consist of a segment number (base) and a displacement (offset) within the segment; a segment register holds the address of the base of the segment. Stack pointer. As shown in Chapter 2, a stack is a data organization mechanism in which the last data item stored is the first data item retrieved. Two specific operations can be performed on a stack: the push and pop operations. A specific register, called the stack pointer (SP), is used to indicate the stack location that can be addressed. In the stack push operation, the SP value is used to indicate the location, called the top of the stack, for storing (pushing) the value, after which the SP is incremented; in some architectures, e.g., X86, the SP is instead decremented, as the stack grows toward low memory", "url": "RV32ISPEC.pdf#segment52", "timestamp": "2023-10-17 22:51:33", "segment": "segment52", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.2.5. 
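The push and pop operations through a stack pointer can be sketched as follows (a toy model using the X86-style convention mentioned above, where the stack grows toward low memory and SP is decremented before a push; the class name is our own):

```python
class Stack:
    """Toy stack addressed through a stack pointer (SP); grows toward
    low memory, with SP pre-decremented on push, as in X86."""
    def __init__(self, size):
        self.memory = [0] * size
        self.sp = size          # SP starts just past the end of the stack area

    def push(self, value):
        self.sp -= 1                    # decrement SP, then store at the new top
        self.memory[self.sp] = value

    def pop(self):
        value = self.memory[self.sp]    # read the current top of the stack
        self.sp += 1                    # then increment SP
        return value
```

Because SP always points at the current top, the last value pushed is the first one popped.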
80386 Registers ", "content": "As discussed in Chapter 3, the Intel basic programming model of the 386, 486, and Pentium consists of three register groups: the general-purpose registers, the segment registers, and the instruction pointer (program counter) and flag register. Figure 5.2, which repeats Fig. 3.6, shows these three sets of registers. The first set consists of the general-purpose registers A, B, C, and D, together with SI (source index), DI (destination index), SP (stack pointer), and BP (base pointer). The second set of registers consists of CS (code segment), SS (stack segment), and four data segment registers, DS, ES, FS, and GS. The third set of registers consists of the instruction pointer (program counter) and the flags (status) register. Among the status bits, the first five are identical to those bits introduced as early as in the 8085 8-bit microprocessor; the next bits, 6-11, are identical to those introduced in the 8086; flag bits 12-14 were introduced in the 80286; bits 16-17 were introduced in the 80386; and flag bit 18 was introduced in the 80486", "url": "RV32ISPEC.pdf#segment53", "timestamp": "2023-10-17 22:51:33", "segment": "segment53", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.2.6. MIPS Registers ", "content": "The MIPS CPU contains 32 general-purpose registers, numbered 0-31; register x is designated by $x. Register $zero always contains the hardwired value 0. Table 5.1 lists the registers and describes their intended use. Registers $at (1), $k0 (26), and $k1 (27) are reserved for use by the assembler and operating system. Registers $a0-$a3 (4-7) are used to pass the first four arguments to routines; the remaining arguments are passed on the stack. Registers $v0 and $v1 (2, 3) are used to return values from functions. Registers $t0-$t9 (8-15, 24, 25) are caller-saved registers used for temporary quantities that need not be preserved across calls. Registers $s0-$s7 (16-23) are callee-saved registers that hold long-lived values that should be preserved across calls. Register $sp (29) is the stack pointer, which points to the last location in use on the stack. Register $fp (30) is the frame pointer; register $ra (31) is written with the return address on a function call. Register $gp (28) is a global pointer that points into the middle of a 64 K block of memory in the heap that holds constants and global variables; objects in this heap can be quickly accessed with a single load or store instruction", "url": "RV32ISPEC.pdf#segment54", "timestamp": "2023-10-17 22:51:33", "segment": "segment54", 
"image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.3. DATAPATH ", "content": "The CPU can be divided into a data section and a control section. The data section, also called the datapath, contains the registers and the ALU; the datapath is capable of performing certain operations on data items. The control section is basically the control unit, which issues control signals to the datapath. Internal to the CPU, data move from one register to another and between the ALU and registers. These internal data movements are performed via local buses, which may carry data, instructions, and addresses. Externally, data move from the registers to memory and I/O devices, often by means of a system bus. Internal data movement among registers and between the ALU and registers may be carried out using different organizations, including one-bus, two-bus, and three-bus organizations. Dedicated datapaths may also be used between components that transfer data between themselves frequently. For example, the contents of the PC are transferred to the MAR to fetch a new instruction at the beginning of each instruction cycle; hence, a dedicated datapath from the PC to the MAR could be useful in speeding up this part of instruction execution.", "url": "RV32ISPEC.pdf#segment55", "timestamp": "2023-10-17 22:51:33", "segment": "segment55", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.3.1. One-Bus Organization ", "content": "In the one-bus organization, the registers and the ALU use a single bus to move outgoing and incoming data. Since only a single data movement can take place on the bus within one clock cycle, two-operand operations need two cycles to fetch the operands for the ALU. Additional registers may also be needed to buffer data for the ALU. This bus organization is the simplest and least expensive, but it limits the amount of data transfer that can be done in the same clock cycle, which slows down overall performance. Figure 5.3 shows a one-bus datapath consisting of a set of general-purpose registers, a memory address register (MAR), a memory data register (MDR), an instruction register (IR), a program counter (PC), and an ALU.", "url": "RV32ISPEC.pdf#segment56", "timestamp": "2023-10-17 22:51:33", "segment": "segment56", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.3.2. 
Two-Bus Organization ", "content": "Using two buses is a faster solution than the one-bus organization. In this case, the general-purpose registers are connected to both buses, and data can be transferred from two different registers to the ALU input points at the same time. Therefore, a two-operand operation can fetch both operands in the same clock cycle. An additional buffer register may be needed to hold the output of the ALU while the two buses are busy carrying the two operands. Figure 5.4a shows a two-bus organization. In some cases, one of the buses may be dedicated to moving data into registers (in-bus), while the other is dedicated to transferring data out of the registers (out-bus). In this case, an additional buffer register may be used, as one of the ALU inputs, to hold one of the operands; the ALU output can then be connected directly to the in-bus to transfer the result into one of the registers. Figure 5.4b shows such a two-bus organization with in-bus and out-bus.", "url": "RV32ISPEC.pdf#segment57", "timestamp": "2023-10-17 22:51:33", "segment": "segment57", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.3.3. Three-Bus Organization ", "content": "In the three-bus organization, two of the buses may be used as source buses while the third is used as a destination bus. The source buses move data out of registers (out-buses), and the destination bus moves data into a register (in-bus). Each of the two out-buses is connected to an ALU input point, and the output of the ALU is connected directly to the in-bus. As can be expected, the more buses we have, the more data can be moved within a single clock cycle. However, increasing the number of buses also increases the complexity of the hardware. Figure 5.5 shows an example of a three-bus datapath.", "url": "RV32ISPEC.pdf#segment58", "timestamp": "2023-10-17 22:51:33", "segment": "segment58", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.4. 
CPU INSTRUCTION CYCLE ", "content": "The sequence of operations performed by the CPU during its execution of instructions is presented in Fig. 5.6. As long as there are instructions to execute, the next instruction is fetched from main memory. The instruction is then executed based on the operation specified by the opcode field of the instruction. At the completion of instruction execution, a test is made to determine whether an interrupt has occurred; an interrupt-handling routine needs to be invoked in case of an interrupt. The basic actions of fetching an instruction, executing it, and handling an interrupt are defined by a sequence of micro-operations, and a group of control signals must be enabled in a prescribed sequence to trigger the execution of each micro-operation. In this section, we show the micro-operations that implement instruction fetch, the execution of simple arithmetic instructions, and interrupt handling.", "url": "RV32ISPEC.pdf#segment59", "timestamp": "2023-10-17 22:51:33", "segment": "segment59", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.4.1. Fetch Instructions ", "content": "The sequence of events in fetching an instruction can be summarized as follows: 1. The contents of the PC are loaded into the MAR. 2. The value in the PC is incremented (this operation can be done in parallel with a memory access). 3. As a result of the memory read operation, the instruction is loaded into the MDR. 4. The contents of the MDR are loaded into the IR. Let us consider the one-bus datapath organization shown in Fig. 5.3. We will see that the fetch operation can be accomplished in the three steps shown in the table, at times t0, t1, and t2. Note that multiple operations separated by a semicolon imply that they are accomplished in parallel.", "url": "RV32ISPEC.pdf#segment60", "timestamp": "2023-10-17 22:51:33", "segment": "segment60", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.4.2. 
Execute Simple Arithmetic Operation ", "content": "Add R1, R2, R0. This instruction adds the contents of source registers R1 and R2, and stores the result in destination register R0. The addition is executed as follows: 1. The register numbers R0, R1, and R2 are extracted from the IR. 2. The contents of R1 and R2 are passed to the ALU for addition. 3. The output of the ALU is transferred to R0.", "url": "RV32ISPEC.pdf#segment61", "timestamp": "2023-10-17 22:51:33", "segment": "segment61", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.4.3. Interrupt Handling ", "content": "After the execution of each instruction, a test is performed to check for pending interrupts. If there is an interrupt request waiting, the following steps take place: 1. The contents of the PC are loaded into the MDR (to be saved). 2. The MAR is loaded with the address at which the PC contents are to be saved. 3. The PC is loaded with the address of the first instruction of the interrupt-handling routine.", "url": "RV32ISPEC.pdf#segment62", "timestamp": "2023-10-17 22:51:33", "segment": "segment62", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.5. CONTROL UNIT ", "content": "The control unit is the main component that directs the system operations by sending control signals to the datapath. These signals control the flow of data within the CPU and between the CPU and external units such as memory and I/O. Control buses generally carry signals between the control unit and other computer components in a clock-driven manner. The system clock produces a continuous sequence of pulses of a specified duration and frequency. A sequence of steps t0, t1, t2, ... is used to execute a certain instruction. The opcode field of the fetched instruction is decoded to provide the control signal generator with information about the instruction to be executed, and the step information generated by a logic circuit module is also used as input in generating the control signals. The signal generator can be specified simply as a set of Boolean equations for its outputs in terms of its inputs. Figure 5.7 shows a block diagram that describes how timing is used in generating the control signals. There are mainly two different types of control units: microprogrammed and hardwired. In a microprogrammed control unit, the control signals associated with operations are stored, as control words, in special memory units inaccessible to the programmer; a control word (microinstruction) specifies one or more micro-operations.
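The fetch steps of Section 5.4.1 and the add execution of Section 5.4.2 can be sketched as plain register transfers; a minimal simulation (the register names follow the text; the memory layout and the tuple instruction encoding are simplifying assumptions of ours):

```python
# Minimal register-transfer sketch of instruction fetch and Add R1, R2, R0.
# The tuple encoding ("ADD", rd, rs1, rs2) is an assumption, not the book's format.
def step(cpu, memory):
    cpu["MAR"] = cpu["PC"]              # t0: MAR <- PC
    cpu["PC"] += 1                      # incremented in parallel with the read
    cpu["MDR"] = memory[cpu["MAR"]]     # t1: MDR <- M[MAR] (memory read)
    cpu["IR"] = cpu["MDR"]              # t2: IR <- MDR
    op, rd, rs1, rs2 = cpu["IR"]        # decode: extract register numbers from IR
    if op == "ADD":                     # operands to the ALU, result written back
        cpu["R"][rd] = cpu["R"][rs1] + cpu["R"][rs2]

cpu = {"PC": 0, "MAR": 0, "MDR": None, "IR": None, "R": [0, 5, 7] + [0] * 29}
step(cpu, {0: ("ADD", 0, 1, 2)})        # Add R1, R2, R0
print(cpu["R"][0])                      # 12
```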
A sequence of microinstructions is called a microprogram, and it is stored in a ROM or RAM called the control memory (CM). In a hardwired control unit, fixed logic circuits that correspond directly to the Boolean expressions are used to generate the control signals. Clearly, hardwired control is faster than microprogrammed control. However, hardwired control can be expensive and complicated for complex systems; it is economical only for small control units. It should also be noted that microprogrammed control adapts more easily to changes in the system design: new instructions can be added without changing the hardware, whereas hardwired control may require a redesign of the entire system in case of any change. Example 1: Let us revisit the add operation Add R1, R2, R0, which adds the contents of source registers R1 and R2 and stores the result in destination register R0. As shown earlier, this operation can be done in one step using the three-bus datapath shown in Figure 5.5. Let us now examine the control sequence needed to accomplish this addition step by step. Suppose that the opcode field of the current instruction was decoded to the InstX type. First we need to select the source registers and the destination register, and to select Add as the ALU function to be performed. The table shows the needed control sequence for each step. Figure 5.9 shows the signals generated to execute InstX during time periods t0, t1, and t2. The AND gates ensure that the appropriate signals are issued only when the opcode is decoded as InstX and the appropriate time period is on. During t0, the signals R1out and Ain are issued to move the contents of R1 into A. Similarly, during t1, the signals R2out and Bin are issued to move the contents of R2 into B. Finally, the signals R0in and Add are issued during t2 to add the contents of A and B and move the result into R0.", "url": "RV32ISPEC.pdf#segment63", "timestamp": "2023-10-17 22:51:33", "segment": "segment63", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.5.1. 
Hardwired Implementation ", "content": "In hardwired control, a direct implementation is accomplished using logic circuits. For each control line, one must find the Boolean expression for that line in terms of the inputs to the control signal generator shown in Figure 5.7. Let us explain the implementation using a simple example. Assume that the instruction set of a machine has three instructions, InstX, InstY, and InstZ, and that A, B, C, D, E, F, G, and H are the control lines. The table shows which control lines are activated by each of the three instructions during the three steps t0, t1, and t2. The Boolean expressions for control lines A, B, and C can be obtained as follows: A = InstX.t1 + InstZ.t1 = (InstX + InstZ).t1; B = InstX.t0 + InstY.t2; C = InstX.t1 + InstX.t2 + InstY.t2 + InstZ.t1 = (InstX + InstZ).t1 + (InstX + InstY).t2. Figure 5.10 shows the logic circuits for control lines A, B, and C given these Boolean expressions; the rest of the control lines can be obtained in a similar way. Figure 5.11 shows a state diagram of the execution cycle of the instructions.", "url": "RV32ISPEC.pdf#segment64", "timestamp": "2023-10-17 22:51:33", "segment": "segment64", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.5.2. Microprogrammed Control Unit ", "content": "The idea of microprogrammed control units was introduced by M. V. Wilkes in the early 1950s. Microprogramming was motivated by the desire to reduce the complexities involved in hardwired control. As studied earlier, each instruction is implemented using a set of micro-operations, and associated with each micro-operation is a set of control lines that must be activated to carry out that micro-operation. The idea of microprogrammed control is to store the control signals associated with the implementation of a certain instruction as a microprogram in a special memory called the control memory (CM). A microprogram consists of a sequence of microinstructions, where a microinstruction is a vector of bits and each bit is a control signal, a condition code, or part of the address of the next microinstruction. Microinstructions are fetched from the CM in the same way program instructions are fetched from main memory (Fig. 5.12). When an instruction is fetched from memory, the opcode field of the instruction determines which microprogram is to be executed; in other words, the opcode is mapped to a microinstruction address in the control memory. The microinstruction processor uses that address to fetch the first microinstruction of the microprogram.
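The Boolean expressions derived in Section 5.5.1 for control lines A, B, and C can be evaluated mechanically; a small sketch (the function name and the dictionary encoding are ours; the expressions themselves follow the text):

```python
# Control lines A, B, C for the three-instruction example, as Boolean
# functions of the decoded instruction and the current time step.
def control_lines(inst: str, t: int) -> dict:
    x, y, z = inst == "InstX", inst == "InstY", inst == "InstZ"
    t0, t1, t2 = t == 0, t == 1, t == 2
    return {
        "A": (x or z) and t1,                         # A = (InstX + InstZ).t1
        "B": (x and t0) or (y and t2),                # B = InstX.t0 + InstY.t2
        "C": ((x or z) and t1) or ((x or y) and t2),  # C = (InstX+InstZ).t1 + (InstX+InstY).t2
    }

print(control_lines("InstX", 1))  # A and C are active at t1 for InstX
```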
After each microinstruction is fetched, the appropriate control lines are enabled: every control line corresponding to a 1 bit is turned on, and every control line corresponding to a 0 bit is left off. After completing the execution of one microinstruction, a new microinstruction is fetched and executed. If the condition code bits indicate that a branch must be taken, the next microinstruction is specified in the address bits of the current microinstruction; otherwise, the next microinstruction in sequence is fetched and executed. The length of a microinstruction is determined based on the number of micro-operations that can be specified in a microinstruction, the way the control bits are interpreted, and the way the address of the next microinstruction is obtained. A microinstruction may specify one or more micro-operations to be activated simultaneously, and the length of the microinstruction increases as the number of parallel micro-operations per microinstruction increases. Furthermore, if each control bit in the microinstruction corresponds to exactly one control line, the microinstruction can get quite long. The length of the microinstruction can be reduced if control lines are coded into specific fields of the microinstruction, with decoders used to map each field to the individual control lines. Clearly, using decoders reduces the number of control lines that can be activated simultaneously; there is thus a tradeoff between the length of the microinstructions and the amount of parallelism. It is important to reduce the length of microinstructions in order to reduce the cost and access time of the control memory, but it may also be desirable for micro-operations to be performed in parallel, with more control lines activated simultaneously. Horizontal versus vertical microinstructions: microinstructions can be classified as horizontal or vertical. The individual bits of horizontal microinstructions correspond to individual control lines. Horizontal microinstructions are long and allow maximum parallelism, since each bit controls a single control line. In vertical microinstructions, control lines are coded into specific fields within the microinstruction, and decoders are needed to map each field of k bits to 2^k possible combinations of control lines. For example, a 3-bit field in a microinstruction could be used to specify any one of eight possible lines. Because of this encoding, vertical microinstructions are much shorter than horizontal ones. Since the control lines encoded in the same field cannot be activated simultaneously, vertical microinstructions allow only limited parallelism. It should be noted that no decoding is needed for horizontal microinstructions, while decoding is necessary in the vertical case.
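The horizontal-versus-vertical tradeoff above can be illustrated with a k-bit field decoder; a sketch (field width and usage are illustrative):

```python
# A vertical microinstruction packs a k-bit field; a decoder expands it into
# 2**k one-hot control lines, so only one line per field can be active at a time.
def decode_field(value: int, k: int) -> list:
    assert 0 <= value < 2 ** k
    return [1 if i == value else 0 for i in range(2 ** k)]

# A 3-bit field selects exactly one of eight control lines (vertical encoding)...
print(decode_field(5, 3))  # [0, 0, 0, 0, 0, 1, 0, 0]
# ...whereas a horizontal microinstruction would carry all eight bits directly
# and could assert any subset of them simultaneously.
```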
Example 3: Consider the three-bus datapath shown in Figure 5.5, with the addition of PC, IR, MAR, and MDR. Assume 16 general-purpose registers, numbered R0-R15, and assume that the ALU supports eight functions, among them add, subtract, multiply, divide, shift left, and shift right. Consider the add operation Add R1, R2, R0, which adds the contents of source registers R1 and R2 and stores the result in destination register R0. In this example we study the format of the microinstruction under a horizontal organization. Using horizontal microinstructions, there is one control bit per control line. The format of the microinstruction has control bits for the ALU operations, the register outputs onto out-bus 1 (source 1), the register outputs onto out-bus 2 (source 2), the register inputs from the in-bus (destination), and the other operations shown; the table shows the number of bits needed for the ALU, source 1, source 2, and destination fields. Example 4: In this example we use vertical microinstructions, so decoders are needed. We again use the three-bus datapath shown in Figure 5.5, and assume 16 general-purpose registers and an ALU supporting eight functions. The tables show the encoding of the ALU functions, of the registers connected to out-bus 1 (source 1), of the registers connected to out-bus 2 (source 2), and of the registers connected to the in-bus (destination). Example 5: Using the encoding of Example 4, let us find the vertical microinstructions used in fetching an instruction. MAR <- PC: first we need to select PC as source 1, using 10000 in the source 1 field. Similarly, we select MAR as the destination, using 10010 in the destination field. We also need to use 0000 in the ALU field, which is decoded as none (as shown in the ALU encoding table of Example 4); none means that out-bus 1 is connected to the in-bus. The source 2 field is set to 10000, which means that none of the registers is selected. The microinstruction is shown in Figure 5.15. Memory read/write: memory operations are easily accommodated by adding one bit for read and another for write. The two microinstructions of Figure 5.16 perform a memory read and write, respectively. Fetch: fetching an instruction can be done using the three microinstructions of Figure 5.17. The first and second microinstructions were shown above; the third microinstruction moves the contents of the MDR into the IR (IR <- MDR). MDR is selected as source 1 using 10011 in the source 1 field; similarly, IR is selected as the destination using 10001 in the destination field. We also need to use 0000 (none) in the ALU field, which means that out-bus 1 is connected to the in-bus. The source 2 field is set to 10000, which means that 
none of the registers is selected.", "url": "RV32ISPEC.pdf#segment65", "timestamp": "2023-10-17 22:51:34", "segment": "segment65", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "5.6. SUMMARY ", "content": "The CPU is the part of a computer that interprets and carries out the instructions contained in the programs we write. The CPU's main components are the register file, the ALU, and the control unit. The register file contains general-purpose and special registers. General-purpose registers may be used to hold operands and intermediate results, while special registers may be used for memory access, sequencing, status information, or to hold the fetched instruction during decoding and execution. Arithmetic and logical operations are performed in the ALU. Internal to the CPU, data may move from one register to another and between registers and the ALU; data may also move between the CPU and external components such as memory and I/O. The control unit is the component that controls the state of the instruction cycle: as long as there are instructions to execute, the next instruction is fetched from main memory, and the instruction is executed based on the operation specified in the opcode field. The control unit generates the signals that control the flow of data within the CPU and between the CPU and external units such as memory and I/O. The control unit can be implemented using hardwired or microprogramming techniques.", "url": "RV32ISPEC.pdf#segment66", "timestamp": "2023-10-17 22:51:34", "segment": "segment66", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "CHAPTER 6 Memory System Design I ", "content": "In this chapter we study the computer memory system. As stated in Chapter 3, without memory no information can be stored or retrieved in a computer. It is interesting to observe that as early as 1946 it was recognized by Burks, Goldstine, and von Neumann that a computer memory has to be organized as a hierarchy. In such a hierarchy, larger and slower memories are used to supplement smaller and faster ones; this observation has since proven essential in constructing computer memories. Putting aside the set of CPU registers, which act as the first level for storing and retrieving information inside the CPU (see Chapter 5), a typical memory hierarchy starts with a small, expensive, and relatively fast unit called the cache. The cache is followed in the hierarchy by a larger, less expensive, and relatively slow main memory unit. The cache and main memory are 
built using solid-state semiconductor material, and they are followed in the hierarchy by far larger, less expensive, and much slower magnetic memories, which consist typically of the hard disk and tape. The deliberation of this chapter starts by discussing the characteristics of and the factors influencing the success of a memory hierarchy in a computer; we then direct our attention to the design and analysis of cache memory. A discussion of the main memory unit is conducted in Chapter 7, where issues related to virtual memory design are also discussed; a brief coverage of the different read-only memory (ROM) implementations is also provided in Chapter 7.", "url": "RV32ISPEC.pdf#segment67", "timestamp": "2023-10-17 22:51:34", "segment": "segment67", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.1. BASIC CONCEPTS ", "content": "In this section we introduce a number of fundamental concepts that relate to the memory hierarchy of a computer.", "url": "RV32ISPEC.pdf#segment68", "timestamp": "2023-10-17 22:51:34", "segment": "segment68", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.1.1. 
Memory Hierarchy ", "content": "As mentioned, a typical memory hierarchy starts with a small, expensive, and relatively fast unit called the cache, followed by a larger, less expensive, and relatively slow main memory unit. The cache and main memory are built using solid-state semiconductor material, typically CMOS transistors. (Fundamentals of Computer Organization and Architecture, by M. Abd-El-Barr and H. El-Rewini, ISBN 0-471-46741-3, Copyright 2005 John Wiley & Sons, Inc.) It is customary to call the fast memory levels the primary memory (solid-state memory). These are followed by far larger, less expensive, and far slower magnetic memories, consisting typically of the hard disk and tape; it is customary to call the disk the secondary memory, while the tape is conventionally called the tertiary memory. The objective behind designing a memory hierarchy is a memory system that performs as if it consisted entirely of the fastest unit, while its cost is dominated by the cost of the slowest unit. A memory hierarchy is characterized by a number of parameters, among them access type, capacity, cycle time, latency, bandwidth, and cost. The term access refers to the action that physically takes place during a read or write operation. The capacity of a memory level is usually measured in bytes. The cycle time is defined as the time elapsed from the start of a read operation to the start of a subsequent read. Latency is defined as the time interval between a request for information and the access to the first bit of that information. Bandwidth provides a measure of the number of bits per second that can be accessed. The cost of a memory level is usually specified in dollars per megabyte. Figure 6.1 depicts a typical memory hierarchy, and Table 6.1 provides typical values for the memory hierarchy parameters. The term random access refers to the fact that an access to any memory location takes a fixed amount of time, regardless of the actual memory location and/or the sequence of accesses that has taken place. For example, if a write operation to memory location 100 takes 15 ns, and this operation is followed by a read operation to memory location 3000, then the latter operation will also take 15 ns. Compare this to sequential access, in which, if an access to location 100 takes 500 ns and a consecutive access to location 101 takes 505 ns, then it is expected that an access to location 300 may take 1500 ns, since the memory has to cycle through locations 100 to 300, with each location requiring 5 ns. The effectiveness of a memory hierarchy depends on the principle of moving information into the fast memory infrequently and accessing it many times before replacing it with new information.
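The random-versus-sequential access contrast above can be expressed as a tiny timing model; a sketch (the function names are ours, and the numbers are the illustrative values from the text):

```python
# Random access: any location costs the same fixed time per reference.
def random_access_ns(loc: int) -> int:
    return 15                              # 15 ns regardless of the address

# Sequential access: the memory must cycle through intervening locations.
def sequential_access_ns(loc: int, start: int = 100,
                         base: int = 500, per_loc: int = 5) -> int:
    return base + per_loc * (loc - start)  # 5 ns per location past the start

print(random_access_ns(3000))     # 15
print(sequential_access_ns(101))  # 505
print(sequential_access_ns(300))  # 1500
```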
This principle is possible due to a phenomenon called locality of reference: within a given period of time, programs tend to reference a relatively confined area of memory repeatedly. There exist two forms of locality: spatial and temporal. Spatial locality refers to the phenomenon that when a given address has been referenced, it is most likely that addresses near it will be referenced within a short period of time, for example, consecutive instructions in a straight-line program. Temporal locality, on the other hand, refers to the phenomenon that once a particular memory item has been referenced, it is most likely that it will be referenced again, for example, an instruction in a program loop. The sequence of events that takes place when the processor makes a request for an item is as follows. First, the item is sought in the first memory level of the memory hierarchy. The probability of finding the requested item in the first level is called the hit ratio, h1; the probability of not finding (missing) the requested item in the first level is called the miss ratio, (1 - h1). When the requested item causes a miss, it is sought in the next subsequent memory level. The probability of finding the requested item in the second memory level is the hit ratio of that level, h2, and the miss ratio of the second level is (1 - h2). The process is repeated until the item is found; upon finding the requested item, it is brought and sent to the processor. If the memory hierarchy consists of three levels, the average memory access time can be expressed as follows: t_av = h1*t1 + (1 - h1)*[t1 + h2*t2 + (1 - h2)*(t2 + t3)] = t1 + (1 - h1)*t2 + (1 - h1)*(1 - h2)*t3. The average access time of a memory level is defined as the time required to access one word in that level; in this equation, t1, t2, and t3 represent, respectively, the access times of the three levels.", "url": "RV32ISPEC.pdf#segment69", "timestamp": "2023-10-17 22:51:34", "segment": "segment69", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.2. 
CACHE MEMORY ", "content": "Cache memory owes its introduction to Wilkes back in 1965. At that time, Wilkes distinguished between two types of main memory: the conventional and the slave memory. In Wilkes's terminology, a slave memory is a second level of unconventional high-speed memory; nowadays this corresponds to what is called cache memory (the term cache means a safe place for hiding or storing things). The idea behind using a cache as the first level of the memory hierarchy is to keep the information expected to be used more frequently by the CPU in the cache, a small high-speed memory near the CPU. The end result is that, at any given time, some active portion of the main memory is duplicated in the cache. Therefore, when the processor makes a request for a memory reference, the request is first sought in the cache. If the request corresponds to an element that currently resides in the cache, we call that a cache hit; if the request corresponds to an element that is not currently in the cache, we call that a cache miss. The cache hit ratio, hc, is defined as the probability of finding the requested element in the cache, and the cache miss ratio, (1 - hc), as the probability of not finding it. In the case that the requested element is not found in the cache, it has to be brought from a subsequent memory level in the hierarchy (assuming the element exists in the next level, the main memory) and placed in the cache. In expectation that the next requested element will reside in the neighboring locality of the currently requested element (spatial locality), upon a cache miss what is actually brought from the main memory is a block of elements that contains the requested element. The advantage of transferring a block from the main memory to the cache will be visible if it is possible to transfer such a block using one main memory access time. Such a possibility can be achieved by increasing the rate at which information is transferred between the main memory and the cache; one possible technique used to increase the bandwidth is memory interleaving. To achieve best results, we can assume that the block brought from the main memory to the cache upon a cache miss consists of elements that are stored in different memory modules, whereby consecutive memory addresses are stored in successive memory modules. Figure 6.2 illustrates a simple case of a main memory consisting of eight memory modules; it is assumed in this case that the block consists of 8 bytes. Having introduced the basic idea leading to the use of cache memory, we would like to assess the impact of temporal and spatial locality on the performance of the memory hierarchy. In order to make such an assessment, we will limit our deliberation to the simple case of a hierarchy 
consisting of two levels, that is, the cache and the main memory. We will assume that the main memory access time is tm and the cache access time is tc. We will measure the impact of locality in terms of the average access time, defined as the average time required to access an element (a word) requested by the processor in such a two-level hierarchy.", "url": "RV32ISPEC.pdf#segment70", "timestamp": "2023-10-17 22:51:35", "segment": "segment70", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.2.1. Impact of Temporal Locality ", "content": "In this case we assume that instructions in program loops, which are executed many times, for example, n times, once loaded into the cache are used n times before being replaced by new instructions. The average access time, t_av, is then given by t_av = (n*tc + tm)/n = tc + tm/n. In deriving this expression, it was assumed that the requested memory element created a cache miss, thus leading to the transfer of a main memory block in time tm, following which n accesses were made to the requested element, each taking tc. The expression reveals that as the number of repeated accesses n increases, the average access time decreases, a desirable feature of the memory hierarchy.", "url": "RV32ISPEC.pdf#segment71", "timestamp": "2023-10-17 22:51:35", "segment": "segment71", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.2.2. Impact of Spatial Locality ", "content": "In this case it is assumed that the size of the block transferred from the main memory to the cache, upon a cache miss, is m elements. We also assume that, due to spatial locality, all m elements are requested, one at a time, by the processor. Based on these assumptions, the average access time, t_av, is given by t_av = (m*tc + tm)/m = tc + tm/m. In deriving this expression, it was assumed that the requested memory element created a cache miss, thus leading to the transfer of a main memory block, consisting of m elements, in time tm, following which m accesses, one for each of the elements constituting the block, were made. The expression reveals that as the number of elements in a block, m, increases, the average access time decreases, a desirable feature of the memory hierarchy.", "url": "RV32ISPEC.pdf#segment72", "timestamp": "2023-10-17 22:51:35", "segment": "segment72", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.2.3. 
Impact of Combined Temporal and Spatial Locality ", "content": "In this case we assume that the element requested by the processor created a cache miss, leading to the transfer of a block, consisting of m elements, to the cache, taking time tm. Due to spatial locality, all m elements constituting the block were then requested, one at a time, by the processor, requiring m*tc. Following that, the originally requested element was accessed (n - 1) more times (temporal locality), for a total of n accesses to that element. Based on these assumptions, the average access time, t_av, is given by t_av = (tm + m*tc + (n - 1)*tc)/(m + n - 1). A simplifying assumption for this expression is tm = m*tc, in which case the expression simplifies to t_av = tc*(2m + n - 1)/(m + n - 1). The expression reveals that as the number of repeated accesses n increases, the average access time approaches tc, a significant performance improvement. It should be clear from the discussion above that as requests are made for items that do not exist in the cache, cache misses will occur and blocks will be brought into the cache. This raises two basic questions: where to place an incoming main memory block in the cache, and, in the case that the cache is totally filled, which cache block the incoming main memory block should replace. Placement of incoming blocks and replacement of existing blocks are performed according to specific protocols (algorithms), and these protocols are strongly related to the internal organization of the cache. Cache internal organization is discussed in the following subsections; however, before discussing cache organization, we first introduce the cache-mapping function.", "url": "RV32ISPEC.pdf#segment73", "timestamp": "2023-10-17 22:51:35", "segment": "segment73", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.2.4. 
Cache-Mapping Function ", "content": "Without loss of generality, we present the cache-mapping function by taking into consideration the interface between two successive levels of the memory hierarchy: a primary level and a secondary level. Here we focus on the interface between the cache and the main memory, with the cache representing the primary level and the main memory representing the secondary level; the same principles apply to the interface between any two memory levels in the hierarchy. It should be noted that a request for accessing a memory element is made by the processor by issuing the address of the requested element. The address issued by the processor may correspond to an element that currently exists in the cache (a cache hit); otherwise, it may correspond to an element that currently resides in the main memory. Therefore, address translation has to be made in order to determine the whereabouts of the requested element; this is one of the functions performed by the memory management unit (MMU). A schematic of the address-mapping function is shown in Figure 6.3. In this figure, the system address represents the address issued by the processor for the requested element. This address is used by an address translation function inside the MMU. If the address translation reveals that the issued address corresponds to an element currently residing in the cache, the element is made available to the processor. If, on the other hand, the element is not currently in the cache, it is brought (as part of a block) from the main memory and placed in the cache, and the requested element is then made available to the processor.", "url": "RV32ISPEC.pdf#segment74", "timestamp": "2023-10-17 22:51:35", "segment": "segment74", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.2.5. 
Cache Memory Organization ", "content": "There are three main organization techniques used for cache memory. The three techniques differ in two main aspects: (1) the criterion used to place, in the cache, an incoming block from the main memory, and (2) the criterion used to replace a cache block by an incoming block when the cache is full. Direct mapping is the simplest of the three techniques. Its simplicity stems from the fact that it places an incoming main memory block in a specific fixed cache block location. The placement is done based on a fixed relation between the incoming block number, j, the number of cache blocks, N, and the cache block number, i: i = j mod N. Example 1: Consider the case of a main memory consisting of 4K blocks and a cache memory consisting of 128 blocks, with a block size of 16 words. Figure 6.4 shows the division of the main memory and the cache according to the direct-mapped cache technique. The figure shows that a total of 32 main memory blocks map to any given cache block; for example, main memory blocks 0, 128, 256, 384, ..., 3968 all map to cache block 0. We therefore call the direct-mapping technique a many-to-one mapping. The main advantage of the direct-mapping technique is its simplicity in determining where to place an incoming main memory block in the cache. Its main disadvantage is the inefficient use of the cache: a number of main memory blocks may compete for a given cache block even when there exist empty cache blocks, and this disadvantage can lead to a low cache hit ratio. According to the direct-mapping technique, the MMU interprets the address issued by the processor by dividing it into three fields, as shown in Figure 6.5. The lengths, in bits, of the fields in Figure 6.5 are: 1. Word field = log2(B), where B is the size of the block in words. 2. Block field = log2(N), where N is the size of the cache in blocks. 3. Tag field = log2(M/N), where M is the size of the main memory in blocks. 4. The number of bits in the main memory address = log2(B x M). It should be noted that the total number of bits computed by the first three equations should add up to the length of the main memory address; this can be used to check the correctness of the computation. Example 2: Let us compute the four parameters for Example 1. Word field = log2(B) = log2(16) = log2(2^4) = 4 bits. Block field = log2(N) = log2(128) = log2(2^7) = 7 bits. Tag field = log2(M/N) = log2(2^12 / 2^7) = 5 bits. The number of bits in the main memory address = log2(B x M) = log2(2^4 x 2^12) = 16 bits, which is the sum of the three fields, as expected. Having shown the division of the main memory address, we can now proceed to explain the protocol used by the MMU to satisfy a request made by the processor for accessing a given element. We illustrate the protocol using the parameters given in the example presented above (see Fig. 6.6).
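The field widths of Example 2 follow directly from the logarithm formulas above; a minimal sketch (the function name is ours, the parameter values are those of Example 1):

```python
from math import log2

# Direct-mapped address split: word = log2(B), block = log2(N), tag = log2(M/N),
# where B = block size in words, N = cache blocks, M = main memory blocks.
def direct_mapped_fields(B: int, N: int, M: int) -> dict:
    fields = {"word": int(log2(B)), "block": int(log2(N)), "tag": int(log2(M // N))}
    assert sum(fields.values()) == int(log2(B * M))  # consistency check on the total
    return fields

print(direct_mapped_fields(B=16, N=128, M=4096))  # {'word': 4, 'block': 7, 'tag': 5}
```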
The steps of the protocol are as follows: 1. Use the block field to determine the cache block that should contain the element requested by the processor; the block field is used directly to determine the cache block sought, hence the name direct mapping. 2. Check the corresponding tag memory to see whether there is a match with the content of the tag field; a match between the two indicates that the targeted cache block, determined in step 1, is currently holding the main memory element requested by the processor, that is, a cache hit. 3. Among the elements contained in the cache block, the targeted element is selected using the word field. 4. If in step 2 no match is found, this indicates a cache miss; therefore, the required block has to be brought from the main memory and deposited into the cache, the targeted element is made available to the processor, and the cache tag memory and cache block memory are updated accordingly. The direct-mapping technique answers not only the placement question but also the replacement question: upon encountering a totally filled cache while a new main memory block has to be brought in, the replacement is trivially determined by the equation i = j mod N. The main advantages of the direct-mapping technique are its simplicity, measured in terms of the direct determination of the cache block (no search is needed), and its equally simple replacement mechanism. The main disadvantage of the technique is its expected poor utilization of the cache memory, represented in terms of the possibility that a given cache block is targeted repeatedly, requiring frequent replacement of blocks while the rest of the cache is not used. Consider, for example, a sequence of requests made by the processor for elements held in main memory blocks 1, 33, 65, 97, 129, and 161, and consider also a cache of size 32 blocks. It is clear that all these blocks map to cache block number 1; therefore, these blocks will compete for that single cache block, despite the fact that the remaining 31 cache blocks are not used. The expected poor utilization of the cache by the direct-mapping technique is mainly due to the restriction on the placement of incoming main memory blocks in the cache (the many-to-one property). If this restriction is relaxed to make it possible for an incoming main memory block to be placed in any empty (available) cache block, the resulting technique becomes more flexible, and more efficient utilization of the cache is possible. Such a flexible technique, called the associative-mapping technique, is explained next. Fully associative mapping: according to this technique, an incoming main memory block can be placed in any available cache block. Therefore, the address issued by the processor need only have two 
fields: the tag and word fields. The first uniquely identifies the block while it resides in the cache, and the second identifies the element within the block requested by the processor. The MMU interprets the address issued by the processor by dividing it into the two fields shown in Figure 6.7. The lengths, in bits, of the fields in Figure 6.7 are given by: 1. Word field = log2(B), where B is the size of the block in words. 2. Tag field = log2(M), where M is the size of the main memory in blocks. 3. The number of bits in the main memory address = log2(B x M). It should be noted that the total number of bits computed by the first two equations should add up to the length of the main memory address; this can be used to check the correctness of the computation. Example 3: Let us compute the parameters for a memory system with the following specification: the main memory is 4K blocks, the cache is 128 blocks, and the block size is 16 words; assume that the system uses associative mapping. Word field = log2(B) = log2(16) = log2(2^4) = 4 bits. Tag field = log2(M) = log2(2^12) = 12 bits. The number of bits in the main memory address = log2(B x M) = log2(2^4 x 2^12) = 16 bits. Having shown the division of the main memory address, we can now proceed to explain the protocol used by the MMU to satisfy a request made by the processor for accessing a given element. We illustrate the protocol using the parameters given in the example presented above (see Fig. 6.8). The steps of the protocol are as follows: 1. Use the tag field to search the tag memory for a match with any of the tags stored there. 2. A match in the tag memory indicates that the corresponding cache block is currently holding the main memory element requested by the processor, that is, a cache hit. 3. Among the elements contained in the cache block, the targeted element is selected using the word field. 4. If in step 1 no match is found, this indicates a cache miss; therefore, the required block has to be brought from the main memory and deposited into the first available cache block, the targeted element (word) is made available to the processor, and the cache tag memory and cache block memory are updated accordingly. It should be noted that the search made in step 1 requires matching the tag field of the address against every entry in the tag memory. Such a search, if done sequentially, could lead to a long delay; therefore, the tags are stored in an associative (content-addressable) memory, which allows the entire contents of the tag memory to be searched in parallel (associatively), hence the name associative mapping. It should also be noted that, regardless of the cache organization used, a mechanism is needed to ensure that any accessed cache block contains valid information. The validity of the information in a cache block is checked via a single bit per cache block, called the valid bit. The valid bit of a cache 
The valid bit of a cache block is updated in such a way that if the valid bit is 1, the corresponding cache block carries valid information; otherwise, the information in the cache block is invalid. When the computer system is powered up, all valid bits are set to 0, indicating that the blocks carry invalid information. As blocks are brought into the cache, their statuses are changed accordingly to indicate the validity of the information they contain.

The main advantage of the associative-mapping technique is the efficient use of the cache. This stems from the fact that there is no restriction on the placement of incoming main memory blocks: any unoccupied cache block can potentially be used to receive an incoming main memory block. The main disadvantage of the technique, however, is the hardware overhead required to perform the associative search conducted in order to find a match between the Tag field and the tags stored in the tag memory.

A compromise between the simple but inefficient direct organization and the efficient but involved associative organization can be achieved by conducting the search over a limited set of cache blocks while knowing ahead of time where in the cache an incoming main memory block is to be placed. This is the basis of the set-associative mapping technique, explained next.

Set-Associative Mapping. In the set-associative mapping technique, the cache is divided into a number of sets, each set consisting of a number of blocks. A given main memory block maps to a specific cache set based on the equation s = i mod S, where S is the number of sets in the cache, i is the main memory block number, and s is the specific cache set to which block i maps. However, an incoming block may map to any block within the assigned cache set. Therefore, the address issued by the processor is divided into three distinct fields: Tag, Set, and Word. The Set field uniquely identifies the specific cache set that should (ideally) hold the targeted block. The Tag field uniquely identifies the targeted block within the determined set. The Word field identifies the element (word) within the block that is requested by the processor. According to the set-associative mapping technique, the MMU interprets the address issued by the processor by dividing it into three fields, as shown in Figure 6.9. The lengths, in bits, of the fields in Figure 6.9 are given by:

1. Word field = log2 B, where B is the size of a block in words
2. Set field = log2 S, where S is the number of sets in the cache
3. Tag field = log2 (M/S), where M is the size of the main memory in blocks and S = N/Bs, N being the number of cache blocks and Bs the number of blocks per set
4. Number of bits in the main memory address = log2 (B x M)

It should be noted that the total number of bits computed by the first three equations should add up to the length of the main memory address; this can be used to check the correctness of the computation.
Example 4. Compute the three parameters (Word, Set, and Tag) for a memory system with the following specification: size of main memory is 4K blocks, size of cache is 128 blocks, and block size is 16 words. Assume that the system uses set-associative mapping with four blocks per set, so that S = 128/4 = 32 sets.

1. Word field = log2 B = log2 16 = log2 2^4 = 4 bits
2. Set field = log2 32 = 5 bits
3. Tag field = log2 (M/S) = log2 ((2^2 x 2^10)/32) = 7 bits
Number of bits in the main memory address = log2 (B x M) = log2 (2^4 x 2^12) = 16 bits

Having shown the division of the main memory address, we can proceed to explain the protocol used by the MMU to satisfy a request made by the processor for accessing a given element. We illustrate the protocol using the parameters given in the example presented above (see Fig. 6.10). The steps of the protocol are:

1. Use the Set field (5 bits) to determine, directly, the specified set (one of the 32 sets).
2. Use the Tag field to find a match among the (four) blocks of the determined set. A match in the tag memory indicates that the set determined in step 1 is currently holding the targeted block, that is, a cache hit.
3. Among the 16 words (elements) contained in the hit cache block, the requested word is selected using a selector, with the help of the Word field.
4. If in step 2 no match is found, this indicates a cache miss. The required block is therefore brought from main memory and deposited in the specified set, the targeted element (word) is made available to the processor, and the cache tag memory and cache block memory are updated accordingly.

It should be noted that the search made in step 2 requires matching the Tag field of the address against each of the entries of the tag memory of the specified set. This search is performed in parallel (associatively) over the set, hence the name set-associative mapping. The hardware overhead needed for the associative search within a set is less complex than that used in the fully associative technique.

The set-associative mapping technique is expected to produce moderate cache utilization: not as efficient as the fully associative technique, but not as poor as the direct technique. It also inherits the simplicity of the direct-mapping technique in determining the target set. An overall qualitative comparison of the three mapping techniques is shown in Table 6.2. Owing to its moderate complexity and moderate cache utilization, the set-associative technique has been used in the Intel Pentium line of processors.
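The field-width formulas for the three mapping techniques can be checked mechanically. The helper below is our own illustrative code (the function name and dictionary layout are assumptions); it reproduces the numbers of Examples 3 and 4:

```python
from math import log2

# Field widths (in bits) for the mapping techniques in the text:
# main memory of M blocks, cache of N blocks, blocks of B words.
def field_widths(M, N, B, blocks_per_set=None):
    word = int(log2(B))
    if blocks_per_set is None:          # fully associative: Tag + Word only
        return {"tag": int(log2(M)), "word": word}
    S = N // blocks_per_set             # number of sets
    return {"tag": int(log2(M // S)), "set": int(log2(S)), "word": word}

# Example 3: 4K-block main memory, 128-block cache, 16-word blocks, associative.
assert field_widths(4096, 128, 16) == {"tag": 12, "word": 4}            # 12 + 4 = 16-bit address
# Example 4: same system, set-associative with four blocks per set (32 sets).
assert field_widths(4096, 128, 16, 4) == {"tag": 7, "set": 5, "word": 4}  # 7 + 5 + 4 = 16
```

As the text notes, summing the field widths and comparing against the main memory address length is a quick correctness check on the computation.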
The discussion above shows that both the associative-mapping and the set-associative techniques answer the question of the placement of an incoming main memory block in the cache. There remains the important question posed at the beginning of our discussion of cache memory: replacement. Specifically, upon encountering a totally filled cache while a new main memory block has to be brought in, which of the cache blocks should be selected for replacement? This is discussed next.", "url": "RV32ISPEC.pdf#segment75", "timestamp": "2023-10-17 22:51:36", "segment": "segment75", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.2.6. Replacement Techniques ", "content": "A number of replacement techniques can be used. These include replacing a randomly selected block (random selection), replacing the block that has been in the cache the longest (first-in-first-out, FIFO), and replacing the block that has been used the least while residing in the cache (least recently used, LRU).

Random selection. Let us assume that when the computer system is powered up, a random number generator starts generating numbers between 0 and N - 1. As the name indicates, random selection of a cache block for replacement is done based on the output of the random number generator at the time of replacement. This technique is simple and does not require much additional overhead; its main shortcoming, however, is that it does not take locality into consideration. Random techniques have been found effective enough to be used, for example, in the Intel iAPX microprocessor series.

FIFO. The FIFO technique takes the time spent by a block in the cache as the measure for replacement: the block that has been in the cache the longest is selected for replacement, regardless of the recent pattern of access to it. This technique requires keeping track of the lifetime of each cache block; it is therefore not as simple as random selection. Intuitively, the FIFO technique is reasonable to use for straight-line programs, where locality of reference is not of concern.

LRU. According to the LRU replacement technique, the cache block that has been least recently used is selected for replacement. Among the three replacement techniques, the LRU technique is the most effective, because the history of block usage is taken into consideration as the criterion for replacement. The LRU algorithm requires a cache controller circuit that keeps track of references to all blocks residing in the cache. This can be achieved in a number of ways; among the possible implementations is the use of counters. In this case each cache block is assigned a counter. Upon a cache hit, the counter of the corresponding block is set to 0, counters whose values are smaller than the original value of the hit block's counter are incremented by 1, and counters with larger values are kept unchanged. Upon a cache miss, the block whose counter shows the maximum value is chosen for replacement, its counter is set to 0, and all other counters are incremented by 1.
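The counter protocol just described can be sketched as follows. This is our own illustrative code (class and variable names are assumptions); initializing the counters as a permutation of 0..n-1 keeps them within that range under the protocol, a convention we adopt for simplicity:

```python
# Counter-based LRU over a group of cache blocks, following the protocol in
# the text: on a hit, the hit block's counter goes to 0 and only counters
# smaller than its old value are incremented; on a miss, the block whose
# counter shows the maximum value is replaced. Illustrative sketch.

class LRUCounters:
    def __init__(self, n_blocks):
        self.tags = [None] * n_blocks
        self.ctr = list(range(n_blocks))   # permutation of 0..n-1 (our convention)

    def access(self, tag):
        if tag in self.tags:                # cache hit
            i = self.tags.index(tag)
            old = self.ctr[i]
            for j in range(len(self.ctr)):
                if self.ctr[j] < old:       # counters below the old value move up
                    self.ctr[j] += 1
            self.ctr[i] = 0
            return "hit"
        i = self.ctr.index(max(self.ctr))   # cache miss: evict max-counter block
        self.tags[i] = tag
        for j in range(len(self.ctr)):
            if j != i:
                self.ctr[j] += 1
        self.ctr[i] = 0
        return "miss"
```

With this invariant, the block with counter n-1 is always the least recently used, so eviction is a single maximum search.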
Having introduced the three cache mapping techniques, we offer the following example, which illustrates the main observations made about them.

Example 5. Consider the case of a 4 x 8 two-dimensional array of numbers, A. Assume that each number in the array occupies one word and that the array elements are stored column-major in main memory, from location 1000 to location 1031. The cache consists of eight blocks, each consisting of two words. Assume also that, whenever needed, an LRU replacement policy is used. We would like to examine the changes in the cache as each of the three mapping techniques is used, given the following sequence of requests for array elements made by the processor: a00, a01, a02, a03, a04, a05, a06, a07, a10, a11, a12, a13, a14, a15, a16, a17.

Solution. The distribution of the array elements in main memory is shown in Figure 6.11, together with the status of the cache as the requests are made.

Direct mapping: Table 6.3 shows 16 cache misses and a single cache hit; the number of replacements made is 12 (shown tinted in the table). It also shows that of the available eight cache blocks, only four (0, 2, 4, and 6) are used, while the remaining four are inactive all the time. This represents a 50% cache utilization.

Fully associative mapping: Table 6.4 shows eight cache hits, that is, 50% of the total number of requests, with no replacements made. It also shows a 100% cache utilization.

Set-associative mapping (two blocks per set): Table 6.5 shows 16 cache misses and a single cache hit; the number of replacements made is 12 (shown tinted). It also shows that of the available four cache sets, only two are used, while the remaining two are inactive all the time. This represents a 50% cache utilization.", "url": "RV32ISPEC.pdf#segment76", "timestamp": "2023-10-17 22:51:36", "segment": "segment76", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.2.7. 
Cache Write Policies ", "content": "discussed main issues related cache mapping techniques repla cement policies would like address important related issue cache coherence coherence cache word copy main memory maintained times possible number policies techniques used performing write operations main memory blocks residing cache policies determine degree coherence maintained cache words counterparts main memory following paragraphs discuss write policies particular dis cuss two main cases cache write policies upon cache hit cache write pol icies upon cache miss also discuss cache read policy upon cache miss cache read upon cache hit straightforward cache write policies upon cache hit essentially two possible write policies upon cache hit writethrough writeback writethrough policy every write operation cache repeated main memory time writeback policy writes made cache write main memory postponed replacement needed every cache block assigned bit called dirty bit indicate least one write operation made block residing cache replacement time dirty bit checked set block written back main memory otherwise simply overwritten incoming block writethrough policy maintains coherence cache blocks counterparts main memory expense extra time needed write main memory leads increase average access time hand writeback policy eliminates increase average access time however coherence guaranteed time replacement cache write policy upon cache miss two main schemes used writeallocate whereby main memory block brought cache updated scheme called writenoallocate whereby missed main memory block updated main memory brought cache general writethrough caches use writenoallocate policy writeback caches use writeallocate policy cache read policy upon cache miss two possible strategies used rst main memory missed block brought cache required word forwarded immediately cpu soon available second strategy missed main memory block entirely stored cache required word forwarded cpu discussed issues 
Having discussed the issues related to the design and analysis of cache memory, we briefly present formulae for the average access time of the memory hierarchy under the different cache write policies.

Case 1: Write-through policy. With write-allocate, the average access time of the memory system is given by

t_a = t_c + (1 - h) t_b + w (t_m - t_c)

where t_c is the cache access time, t_b is the time required to transfer a block to the cache, t_m is the main memory access time, h is the hit ratio, and w is the fraction of write operations; w (t_m - t_c) is the additional time incurred due to write operations. It should be noted that the data path organization may allow t_b = t_m; otherwise t_b = B t_m, where B is the block size in words. With write-no-allocate, the average access time is expressed as

t_a = t_c + (1 - w)(1 - h) t_b + w (t_m - t_c)

Case 2: Write-back policy. The average access time of a system that uses the write-back policy is given by

t_a = t_c + (1 - h) t_b + w_b (1 - h) t_b

where w_b is the probability that a block has been altered while in the cache.", "url": "RV32ISPEC.pdf#segment77", "timestamp": "2023-10-17 22:51:36", "segment": "segment77", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.2.8. Real-Life Cache Organization Analysis ", "content": "Intel Pentium 4 Processor Cache. The Intel Pentium 4 processor uses a two-level cache organization, shown schematically in Figure 6.12. In this figure, L1 represents an 8-KB, four-way set-associative data cache with a block size of 64 bytes. Consider the following example, tailored after the L1 Pentium cache.

Example 6. Cache organization: set-associative; main memory size: 16 MB; cache (L1) size: 8 KB; number of blocks per set: four; the CPU is byte-addressable. The main memory address is divided into three fields, Word, Set, and Tag (Fig. 6.13), whose lengths are computed as follows:

Number of main memory blocks = 2^24 / 2^6 = 2^18 blocks
Number of cache blocks N = 2^13 / 2^6 = 128 blocks; 128/4 = 32 sets
Set field = log2 32 = 5 bits
Word field = log2 B = log2 64 = log2 2^6 = 6 bits
Tag field = log2 (2^18 / 2^5) = 13 bits
Main memory address = log2 (B x M) = log2 (2^6 x 2^18) = 24 bits

The second cache level in Figure 6.12, L2, called the advanced transfer cache, is organized as an eight-way set-associative cache of 256 KB total size with a 128-byte block size. Following a similar set of steps to those shown for the L1 level, we obtain: number of main memory blocks = 2^24 / 2^7 = 2^17 blocks.
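The Example 6 arithmetic for the L1 cache can be verified in a few lines (our own illustrative sketch; variable names are ours):

```python
from math import log2

# Check Example 6's Pentium 4 L1 numbers: byte-addressable CPU, 16 MB main
# memory, 8 KB four-way set-associative cache, 64-byte blocks.
block = 64                                   # bytes per block
n_cache_blocks = 8 * 1024 // block           # 128 blocks
sets = n_cache_blocks // 4                   # 32 sets (four blocks per set)
mm_blocks = 16 * 1024 * 1024 // block        # 2**18 blocks

word = int(log2(block))                      # 6 bits
set_field = int(log2(sets))                  # 5 bits
tag = int(log2(mm_blocks // sets))           # 13 bits
assert (word, set_field, tag) == (6, 5, 13)
assert word + set_field + tag == 24          # 24-bit main memory address
```

The same computation with a 128-byte block and eight blocks per set yields the L2 field widths.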
The following table summarizes the L1 and L2 Pentium 4 cache performance in terms of cache hit ratio and cache latency:

CPU                  L1 hit ratio   L2 hit ratio   L1 latency   L2 latency   Average latency
Pentium 4, 1.5 GHz        90%            99%         1.33 ns      6.0 ns         1.8 ns

PowerPC 604 Processor Cache. The PowerPC cache is divided into separate data and instruction caches (the so-called Harvard organization). Both the instruction and the data cache are organized as 16-KB four-way set-associative caches. The following summarizes the basic characteristics of the PowerPC 604 cache: cache organization: set-associative; block size: 32 bytes; main memory size: 4 GB (128 Mega blocks); cache size: 16 KB (N = 512 blocks); number of blocks per set: four; number of cache sets: 128. The main memory address is divided into three fields, Word, Set, and Tag (Fig. 6.13), whose lengths are computed as follows:

Number of main memory blocks = 2^32 / 2^5 = 2^27 blocks
Number of cache blocks N = 2^14 / 2^5 = 512 blocks; 512/4 = 128 sets
Set field = log2 128 = 7 bits
Word field = log2 B = log2 32 = log2 2^5 = 5 bits
Tag field = log2 (2^27 / 2^7) = 20 bits
Main memory address = log2 (B x M) = log2 (2^5 x 2^27) = 32 bits

PMC-Sierra RM7000A 64-bit MIPS RISC Processor. The RM7000 uses a different cache organization compared to the Intel and PowerPC cases. Three separate caches are included:

1. Primary instruction cache: 16 KB, four-way set-associative, with a 32-byte block size (eight instructions).
2. Primary data cache: 16 KB, four-way set-associative, with a 32-byte block size (eight words).
3. Secondary cache: 256 KB, four-way set-associative, for both instructions and data.

In addition to the three on-chip caches, the RM7000 provides a dedicated tertiary cache interface that supports tertiary cache sizes of 512 KB, 2 MB, and 8 MB. The tertiary cache is accessed upon a secondary cache miss. The primary caches require one cycle to access; both have a 64-bit read data path and a 128-bit write data path, and both caches can be accessed simultaneously, giving an aggregate bandwidth of 4 GB per second. The secondary cache has a 64-bit data path and is accessed upon a primary cache miss, with a three-cycle miss penalty. Owing to this unusual cache organization, the RM7000 uses two cache access schemes, described below.

Non-blocking caches. In this scheme the caches do not stall on a miss; rather, the processor continues to operate out of the primary caches until one of the following events takes place:

1. Two cache misses are outstanding and a third load/store instruction appears on the instruction bus.
2. A subsequent instruction requires data from either of the instructions that caused a cache miss.
The use of non-blocking caches improves overall performance by allowing the caches to continue operating even though a cache miss has occurred.

Cache locking. In this scheme, critical code or data segments are locked into the primary and secondary caches. Locked contents are updated upon a write hit but cannot be selected for replacement upon a miss. The RM7000 allows the three caches to be locked separately; however, in each cache only two of the available four sets can be locked. In particular, the RM7000 allows a maximum of 128 KB of data/code to be locked in the secondary cache, a maximum of 8 KB of code to be locked in the instruction cache, and a maximum of 8 KB of data to be locked in the data cache.", "url": "RV32ISPEC.pdf#segment78", "timestamp": "2023-10-17 22:51:37", "segment": "segment78", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "6.3. SUMMARY ", "content": "In this chapter we considered the design and analysis of the first level of the memory hierarchy, the cache memory, in the context of which locality issues were discussed together with their effect on the average access time. We explained three cache mapping techniques, namely direct, associative, and set-associative mapping, and analyzed and compared their performance measures. We also introduced three replacement techniques: random, FIFO, and LRU replacement; the impact of these three techniques on the cache hit ratio was analyzed. Cache writing policies were also introduced and analyzed. Our discussion of the cache ended with a presentation of the cache memory organization and characteristics of three real-life examples: the Pentium IV, the PowerPC, and the PMC-Sierra RM7000 processors. In Chapter 7 we discuss the issues related to the design of the internal and external organization of the main memory, and also issues related to virtual memory.", "url": "RV32ISPEC.pdf#segment79", "timestamp": "2023-10-17 22:51:37", "segment": "segment79", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "CHAPTER 7 Memory System Design II", "content": "In Chapter 6 we introduced the concept of a memory hierarchy and characterized such a hierarchy in terms of locality of reference and its impact on the average access time. We then moved on to cover the different issues related to the first level of the hierarchy, the cache memory. The reader is advised to carefully review Chapter 6 before
proceeding with this chapter, in which we continue our coverage of the different levels of the memory hierarchy. In particular, we start with a discussion of the issues related to the design and analysis of the main memory unit. Issues related to virtual memory design are then discussed. A brief coverage of different read-only memory (ROM) implementations is provided at the end of the chapter.", "url": "RV32ISPEC.pdf#segment80", "timestamp": "2023-10-17 22:51:38", "segment": "segment80", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.1. MAIN MEMORY ", "content": "As the name implies, the main memory provides the main storage of the computer. Figure 7.1 shows a typical interface between the main memory and the CPU. Two CPU registers are used for this interface: the memory address register (MAR) and the memory data register (MDR). The MDR is used to hold the data to be stored into, and/or retrieved from, the memory location whose address is held in the MAR.

It is possible to visualize a typical internal main memory structure as consisting of rows and columns of basic cells, each cell capable of storing one bit of information. Figure 7.2 provides a conceptual internal organization of a memory chip. Cells belonging to a given row are assumed to form the bits of a given memory word. Address lines A(n-1), A(n-2), ..., A1, A0 are used as inputs to the address decoder in order to generate the word select lines W(2^n - 1), ..., W1, W0. A given word select line is common to all memory cells of the same row. At any given time, the address decoder activates one word select line while deactivating the remaining lines; the activated word select line enables the cells of its row to be read or written. Data (bit) lines are used to input or output the contents of the cells. Each memory cell is connected to two data lines, and a given data line is common to all cells of a given column.

In static CMOS technology, each main memory cell consists of six transistors, as shown in Figure 7.3. The six-transistor static CMOS memory cell consists of two inverters connected back to back. It should be noted that such a cell can exist in one of two stable states. For example, in Figure 7.3, if A = 1, then transistor N2 will be on, forcing point B to 0; this in turn causes transistor P1 to be on, thus keeping point A at 1. This represents one stable state of the cell, which we call state 1. In a similar way, one can show that with A = 0 and B = 1 the cell is in its other stable state, which we call
state 0. Two transistors, N3 and N4, are used to connect the cell to the two data (bit) lines. Normally, when the word select line is not activated, the two transistors are turned off, thus protecting the cell from the signal values carried by the data lines. The two transistors are turned on when the word select line is activated, and what then takes place depends on the intended memory operation. As shown, a read operation proceeds as follows:

1. Both lines B and B-bar are precharged high.
2. The word select line is activated, thus turning on transistors N3 and N4.
3. Depending on the internal value stored in the cell, point A or B will lead to the discharge of line B or B-bar.

A write operation proceeds as follows:

1. The bit lines are precharged according to the value to be written, for example B = 1 and B-bar = 0.
2. The word select line is activated, thus turning on transistors N3 and N4.
3. The bit line precharged to 0 forces the corresponding internal point (B or A) to 0, thereby writing 1 or 0 into the cell.

The internal organization of the memory array should satisfy an important memory design factor: efficient utilization of the memory chip area. Consider, for example, a 1K x 4 memory chip. Using the organization of Figure 7.2, the memory array is organized as 1K rows of cells, each row consisting of four cells. The chip then has 10 pins for the address and four pins for data. However, this may not lead to the best utilization of the chip area. Another possible organization of the memory cell array is 64 x 64, that is, 64 rows each consisting of 64 cells. In this case six address lines, forming what is called the row address, are needed in order to select one of the 64 rows, while the remaining four address lines, called the column address, are used to select the appropriate 4 bits among the 64 bits constituting a row. Figure 7.4 illustrates this organization.

Another important factor related to main memory design is the number of chip pins required for the integrated circuit. Consider, for example, the design of a memory subsystem whose capacity is 4K bits. Different organizations of the same memory capacity lead to different numbers of chip pins; Table 7.1 illustrates this observation. It is clear from the table that increasing the number of bits per addressable location results in an increase in the number of pins needed for the integrated circuit.

Another factor pertinent to the design of a main memory subsystem is the required number of memory chips. It is important to realize that the available per-chip memory capacity can be a limiting factor in designing memory subsystems. Consider, for example, the design of a 4M-byte main memory subsystem using 1M-bit chips. The number of required chips is 32.
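The chip-count and address-line arithmetic above can be checked directly (our own illustrative sketch):

```python
from math import log2

# Sizing the 4M-byte subsystem built from 1M-bit chips, as in the text.
capacity_bits = 4 * 2**20 * 8        # 4M bytes of storage
chip_bits = 2**20                    # 1M-bit basic building block
assert capacity_bits // chip_bits == 32    # 32 chips required

address_lines = int(log2(4 * 2**20))       # 4M byte locations
assert address_lines == 22                 # 22 address lines; 8 data lines
assert 32 == 4 * 8                         # arranged as four rows of eight chips
```

The high-order two of the 22 address lines select one of the four rows of chips, which is why a 2-to-4 decoder appears in the arrangement described next.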
It should be noted that the number of address lines required for the 4M-byte system is 22, while the number of data lines is 8. Figure 7.5 shows a block diagram of the intended memory subsystem, with the 1M-bit chip as the basic building block. The memory subsystem is arranged as four rows, each having eight chips; a schematic of this arrangement is shown in Figure 7.6. In this figure, the least significant 20 address lines, A19-A0, are used to address the basic building block (the 1M single-bit chips), while the high-order two address lines, A21-A20, are used as inputs to a 2-to-4 decoder in order to generate four enable lines, each connected to the CE line of the eight chips constituting one row.

The above discussion of main memory system design assumes the use of the six-transistor static cell. It is possible, however, to use a one-transistor dynamic cell. Dynamic memory depends on storing logic values using a capacitor together with one transistor that acts as a switch. The use of dynamic memory leads to a saving in chip area. However, due to the possible decay of the stored values (leakage of the stored charge on the capacitor), dynamic memory requires periodic refreshing, every few milliseconds, in order to restore the stored logic values. Figure 7.7 illustrates the dynamic memory array organization. The read/write circuitry in Figure 7.7 performs the functions of sensing the value on the bit line, amplifying it, and refreshing the value stored on the capacitor. In order to perform a read operation, the bit line is precharged high (as with static memory) and the word line is activated, which causes the value stored on the capacitor to appear on the bit line and thus on the data line Di. As can be seen, the read operation is destructive, as the capacitor is charged to the value on the bit line; therefore, every read operation is followed by a write operation of the same value. In order to perform a write operation, the intended value is placed on the bit line and the word line is activated: if the intended value is 1, the capacitor is charged; if the intended value is 0, the capacitor is discharged. Table 7.2 summarizes the operation of the control circuitry.

As discussed above, an appropriate internal organization of a memory subsystem can lead to a saving in the number of IC pins required, an important IC design factor. In order to reduce the number of pins required for a given dynamic memory subsystem, the normal practice is (as in the case of static memory) to divide the address lines into row and column address lines. In addition, the row and column address lines can be transmitted over the same pins, one after the other, in a scheme known as time-multiplexing. Time-multiplexing can potentially cut the number of address pins required in half.
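Under time-multiplexing, the 20-bit address of a 1024 x 1024 array is sent as a 10-bit row address followed by a 10-bit column address. A sketch of the split (our own illustrative code and naming):

```python
# Time-multiplexed DRAM addressing: a 1M x 1 array organized as 1024 x 1024
# needs 20 address bits, delivered as a 10-bit row address (latched on RAS)
# followed by a 10-bit column address (latched on CAS). Illustrative sketch.

def split_row_col(addr, cols=1024):
    return addr // cols, addr % cols     # (row address, column address)

row, col = split_row_col(0b1100110011_0101010101)
assert (row, col) == (0b1100110011, 0b0101010101)
# Only max(10, 10) = 10 address pins are needed instead of 20.
```

The two halves share the same ten pins, which is exactly the halving of address pins the text describes.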
Owing to the time-multiplexing of the address lines, it is necessary to add two extra control lines, the row address strobe (RAS) and the column address strobe (CAS). These two control lines are used to indicate to the memory chip when the row address lines are valid and when the column address lines are valid, respectively. Consider, for example, the design of a 1M x 1 dynamic memory subsystem. Figure 7.8 shows a possible internal organization of the memory cell array, organized as 1024 x 1024. Note that only 10 address lines are shown; these are used to multiplex both the row and column addresses (10 lines each). Latches are used to store the row and column addresses for a duration equal to the memory cycle. In this case a memory access consists of a RAS accompanying the row address, followed by a CAS accompanying the column address.", "url": "RV32ISPEC.pdf#segment80", "timestamp": "2023-10-17 22:51:38", "segment": "segment80", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2. VIRTUAL MEMORY ", "content": "The concept of virtual memory is in principle similar to that of cache memory, described in Section 6.2. A virtual memory system attempts to optimize the use of the main memory (the higher speed portion) together with the hard disk (the lower speed portion). In effect, virtual memory is a technique for using the secondary storage to extend the apparently limited size of the physical memory beyond its actual physical size. It is usually the case that the available physical memory space is not enough to host all parts of a given active program. The parts of a program that are currently active are brought into main memory, while those parts that are not active are stored on the magnetic disk. If a segment of the program containing the word requested by the processor is not in main memory at the time of the request, then that segment is brought from the disk into main memory.

The principles employed in virtual memory design are the same as those employed in cache memory: the relevant principle is that of keeping active segments in the high-speed main memory and moving inactive segments back to the hard disk. The movement of data between the disk and the main memory takes the form of pages. A page is a collection of memory words that is moved from the disk to main memory when the processor requests accessing a word on that page. A typical size of a page in modern computers ranges from 2K to 16K bytes. A page fault occurs when the page containing the word required by the processor does not exist in main memory and has to be brought from the disk. The movement of pages (programs and data) between the main memory and the disk is totally transparent to the application programmer; the operating system is responsible for this movement.
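The page bookkeeping just described can be sketched minimally as follows. The 4K-byte page size, the `resident` mapping, and all names here are our own illustrative assumptions, not from the text:

```python
# A virtual address names a (page, offset) pair; a page is either resident
# in main memory or must be fetched from disk (a page fault). Sketch only.

PAGE = 4096
resident = {0: 7, 3: 2}      # virtual page -> physical frame (valid entries only)

def translate(vaddr):
    vpage, offset = divmod(vaddr, PAGE)
    if vpage not in resident:
        return "page fault"              # OS must bring the page in from disk
    return resident[vpage] * PAGE + offset

assert translate(3 * PAGE + 5) == 2 * PAGE + 5
assert translate(1 * PAGE) == "page fault"
```

On a real system the fault is handled by the operating system, invisibly to the application, exactly as the transparency remark above states.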
It is useful to mention at this point that, although they are based on similar principles, a significant difference exists between cache and virtual memories: a cache miss causes a time penalty that is 5 to 10 times as costly as a cache hit, whereas a page fault is about 1000 times as costly as a page hit. It would therefore be unreasonable to make the processor wait, upon a page fault, until the page is transferred to main memory; thousands of instructions could be executed by a modern processor during the time of a page transfer.

The address issued by the processor in order to access a given word does not correspond to the physical memory space; it is therefore called a virtual (or logical) address. The memory management unit (MMU) is responsible for the translation of virtual addresses into their corresponding physical addresses. Three address translation techniques can be identified: the direct-mapping, associative-mapping, and set-associative-mapping techniques. In all of these, information about the main memory locations corresponding to virtual pages is kept in a table called the page table, which is itself stored in main memory. The information kept in the page table includes a bit indicating the validity of each page, the modification status of the page, and the authority for accessing the page. The valid bit is set if the corresponding page is actually loaded into main memory; the valid bits of all pages are reset when the computer is first powered on. Another control bit kept in the page table is the dirty bit: it is set if the corresponding page has been altered while residing in main memory, and reset otherwise. The dirty bit helps in deciding whether to write the contents of a page back to the disk at the time it is replaced (overridden by the contents of another page). In the following discussion we concentrate on the address translation techniques, keeping in mind the use of the different control bits stored in the page table.", "url": "RV32ISPEC.pdf#segment81", "timestamp": "2023-10-17 22:51:38", "segment": "segment81", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2.1. 
Direct Mapping ", "content": "figure 79 illustrates address translation process according directmapping technique case virtual address issued processor divided two elds virtual page number offset elds number bits virtual page number eld n number entries page table 2n virtual page number eld used directly address entry page table corresponding page valid indicated valid bit con tents specied page table entry correspond physical page address latter extracted concatenated offset eld order form physical address word requested processor hand specied entry page table contain valid physical page number represents page fault case mmu bring corresponding page hard disk load main memory indicate validity page translation process carried explained main advantage directmapping technique simplicity measured terms direct addressing page table entries main disadvantage expected large size page table order overcome need large page table associativemapping technique explained used", "url": "RV32ISPEC.pdf#segment82", "timestamp": "2023-10-17 22:51:38", "segment": "segment82", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2.2. 
Associative Mapping ", "content": "figure 710 illustrates address translation according associative mapping technique technique similar direct mapping virtual address issued processor divided two elds virtual page number offset elds however page table used associative mapping could far shorter direct mapping counterpart every entry page table divided two parts virtual page number physical page number match searched associatively virtual page number eld address virtual page numbers stored page table match found corresponding physical page number stored page table extracted concatenated offset eld order generate physical address word requested processor hand match could found represents page fault case mmu bring corresponding page hard disk load main memory indicate validity page translation process carried explained main advantage associativemapping technique expected shorter page table compared directmapping technique required translation process main disadvantage search required matching virtual page number eld virtual page numbers stored page table although search done associatively requires use added hardware overhead possible compromise complexity associative mapping simplicity direct mapping setassociative mapping technique hybrid technique explained", "url": "RV32ISPEC.pdf#segment83", "timestamp": "2023-10-17 22:51:38", "segment": "segment83", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2.3. 
Set-Associative Mapping ", "content": "figure 711 illustrates address translation according setassociative map ping case virtual address issued processor divided three elds tag index offset page table used setassociative map ping divided sets consisting number entries entry page table consists tag corresponding physical page address similar direct mapping index eld used directly determine set search conducted number bits index eld number sets page table 2s set determined search similar associative mapping conducted match tag eld entries specic set match found corresponding physical page address extracted concatenated offset eld order generate physical address word requested processor hand match could found represents page fault case mmu bring corresponding page hard disk load main memory update corresponding set indicate validity page translation process carried explained setassociativemapping technique strikes compromise inef ciency direct mapping terms size page table excessive hard ware overhead associative mapping also enjoys best two techniques simplicity direct mapping efciency associative mapping noted address translation techniques extra main memory access required accessing page table extra main memory access could potentially saved copy small portion page table kept mmu portion consists page table entries corre spond recent accessed pages case address translation attempted search conducted nd whether virtual page number tag virtual address eld could matched small portion kept table lookaside buffer tlb cache mmu explained", "url": "RV32ISPEC.pdf#segment84", "timestamp": "2023-10-17 22:51:38", "segment": "segment84", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2.4. 
Translation Look-Aside Buffer (TLB) ", "content": "In most modern computer systems, a copy of a small portion of the page table is kept on the processor chip. This portion consists of the page table entries that correspond to the most recently accessed pages, and it is kept in the translation look-aside buffer (TLB) cache. A search in the TLB precedes that in the page table: the virtual page field is first checked against the entries of the TLB in the hope that a match is found. A hit in the TLB results in the generation of the physical address of the word requested by the processor, thus saving the extra main memory access required to access the page table. It should be noted that a miss in the TLB is not equivalent to a page fault. Figure 7.12 illustrates the use of the TLB in the virtual address translation process. A typical size of a TLB ranges from 16 to 64 entries; even with such a small TLB size, a hit ratio of better than 90% is almost always possible. Owing to its limited size, the search in the TLB is done associatively, thus reducing the required search time.

To illustrate the effectiveness of the TLB, let us consider the case of a virtual memory system with the following specifications: number of TLB entries = 16, associative search time in the TLB = 10 ns, main memory access time = 50 ns, TLB hit ratio = 0.9. The average access time is then 0.9 x (10 + 50) + 0.1 x (10 + 2 x 50) = 0.9 x 60 + 0.1 x 110 = 65 ns, compared to the 100 ns access time needed in the absence of the TLB. It should be noted that, for simplicity, the existence of the cache has been overlooked in this illustration.

It should be clear from the above discussion that as requests arrive for items that do not exist in main memory, page faults occur and pages are brought from the hard disk into main memory; this will eventually lead to a totally filled main memory. The arrival of a new page from the hard disk to a totally filled main memory should promote the following question: which main memory page should be removed (replaced) in order to make room for the incoming page? Replacement algorithms (policies) are explained next.

It should be noted that the Intel Pentium 4 processor has a 36-bit address bus, which allows a maximum main memory size of 64 GB; according to Intel's specifications, its virtual memory is 64 TB (65,528 GB), which increases the processor memory access space from 2^36 to 2^46 bytes. The PowerPC 604, by comparison, has two 128-entry, two-way set-associative translation look-aside buffers (TLBs), one for instructions and one for data; its virtual memory space is therefore 2^52 bytes, that is, 4 petabytes.", "url": "RV32ISPEC.pdf#segment85", "timestamp": "2023-10-17 22:51:38", "segment": "segment85", "image_urls": [], "Book": 
"[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2.5. Replacement Algorithms (Policies) ", "content": "basic implementation virtual memory concept demand paging means operating system programmer controls swap ping pages main memory required active pro cesses process needs nonresident page operating system must decide resident page replaced requested page technique used virtual memory makes decision called replacement policy exists number possible replacement mechanisms main objective mechanisms select removal page expectedly referenced near future random replacement according replacement policy page selected randomly replacement simplest replacement mechanism implemented using pseudorandom number generator generates numbers correspond possible page frames time replacement random number generated indicate page frame must replaced although simple technique may result efcient use main memory low hit ratio h random replacement used intel i860 family risc processor firstinfirstout fifo replacement according replacement policy page loaded others main memory selected replacement basis page replacement technique time spent given page residing main memory regardless pattern usage page technique also simple however expected result accep table performance measured terms main memory hit ratio page refer ences made processor strict sequential order illustrate use fifo mechanism offer following example example consider following reference string pages made processor 6 7 8 9 7 8 9 10 8 9 10 particular consider two cases number page frames allocated main memory two b number page frames allocated three figure 713 illustrates trace reference string two cases seen gure number page frames two 11 page faults shown bold gure number page frames increased three number page faults reduced ve since ve pages referenced optimum con dition fifo policy results best minimum page faults reference string strict order increasing page number references least 
Least Recently Used (LRU) replacement: According to this technique, page replacement is based on the pattern of usage of a given page residing in main memory, regardless of the time spent in main memory; the page that has not been referenced for the longest time while residing in main memory is selected for replacement. The LRU technique matches most programs' characteristics and can therefore be expected to result in the best possible performance in terms of the main memory hit ratio; it is, however, more involved compared to the other techniques. To illustrate the use of the LRU mechanism, we offer the following example. Consider the following reference string of pages made by a processor: 4, 7, 5, 7, 6, 7, 10, 4, 8, 5, 8, 6, 8, 11, 4, 9, 5, 9, 6, 9, 12, 4, 7, 5, 7, and assume that the number of page frames allocated in main memory is four. Computing the number of page faults generated, the trace of the main memory contents is as shown in Figure 7.14; the number of page faults is 17. In presenting LRU we have used a particular implementation called stack-based LRU. In this implementation, the most recently accessed page is represented by the top page rectangle, and the rectangles do not represent specific page frames as they do in the FIFO diagram; thus any reference generating a page fault appears in the top row. It should be noted that if the number of pages allotted to the program changes, the page references in each row do not change, only the number of page faults changes: the set of pages in memory with n page frames is a subset of the set of pages with n + 1 page frames. In fact, the diagram can be considered a stack data structure, with the depth of the stack representing the number of page frames; if a page is not in the stack (i.e., it would be found at a depth greater than the number of page frames), then a page fault occurs. Clock replacement algorithm: This is a modified FIFO algorithm that takes into account both the time spent by a page residing in main memory (similar to FIFO) and the pattern of usage of the page (similar to the LRU technique); it is therefore sometimes called First-In-Not-Used-First-Out (FINUFO). To keep track of both time and usage, the technique uses a pointer to indicate where the next incoming page will be placed and a used bit to indicate the usage of a given page. The technique can be explained using the following three steps: (1) if the used bit is 1, reset the bit, increment the pointer, and repeat; (2) if the used bit is 0, replace the corresponding page and increment the pointer; (3) the used bit is set whenever a page is referenced after its initial loading. As an example, consider the following page requests in a three-page-frame main memory system using the FINUFO technique (Fig. 7.16): 2, 3, 2, 4, 6, 2, 5, 6, 1, 4, 6. The estimated hit ratio is 4/11.", "url": "RV32ISPEC.pdf#segment86", "timestamp": 
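The FIFO and clock (FINUFO) traces in the examples above are easy to check with a short simulation. The sketch below is ours; the frame counts and reference strings are the ones used in the text, and the clock variant follows the three steps as stated (the used bit is set only when a resident page is re-referenced after its initial loading).

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

def finufo_hits(refs, nframes):
    """Count hits under the clock (FINUFO) policy described in the text."""
    frames = [None] * nframes
    used = [False] * nframes
    ptr, hits = 0, 0
    for page in refs:
        if page in frames:
            hits += 1
            used[frames.index(page)] = True   # step 3: mark re-referenced page
            continue
        while used[ptr]:                      # step 1: clear used bit, advance
            used[ptr] = False
            ptr = (ptr + 1) % nframes
        frames[ptr] = page                    # step 2: replace the unused page
        ptr = (ptr + 1) % nframes
    return hits

refs = [6, 7, 8, 9, 7, 8, 9, 10, 8, 9, 10]
print(fifo_faults(refs, 2), fifo_faults(refs, 3))         # 11 5
print(finufo_hits([2, 3, 2, 4, 6, 2, 5, 6, 1, 4, 6], 3))  # 4  (hit ratio 4/11)
```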
"2023-10-17 22:51:39", "segment": "segment86", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2.6. Virtual Memory Systems with Cache Memory ", "content": "A typical computer system contains both a cache and a virtual memory with a TLB. When a virtual address is received from the processor, a number of different scenarios can occur, depending on the availability of the requested item in the cache, the main memory, or the secondary storage. Figure 7.17 shows a general flow diagram of the different scenarios. At the first level, address translation checks for a match between the received virtual address and the virtual addresses stored in the TLB. If a match occurs (a TLB hit), the corresponding physical address is obtained and used to access the cache. If a match occurs in the cache (a cache hit), the element requested by the processor is sent from the cache to the processor. If, on the other hand, a cache miss occurs, the block containing the targeted element is copied from main memory to the cache, and the requested element is sent to the processor, as discussed before. This scenario assumes a TLB hit. If a TLB miss occurs, the page table (PT) is searched for the existence in main memory of the page containing the targeted element. If a PT hit occurs, the corresponding physical address is generated as discussed before, and the search for the block containing the requested element is conducted as discussed before; this will, however, require updating the TLB. If, on the other hand, a PT miss takes place, indicating a page fault, the page containing the targeted element is copied from the disk to main memory, the block is copied to the cache, and the element is sent to the processor. This last scenario requires updating the page table, main memory, and the cache; a subsequent request for the same virtual address by the processor will also result in updating the TLB.", "url": "RV32ISPEC.pdf#segment87", "timestamp": "2023-10-17 22:51:39", "segment": "segment87", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2.7. 
Segmentation ", "content": "A segment is a block of contiguous locations of varying size. Segments are used by the operating system (OS) to relocate complete programs between the main and the disk memory. Segments can be shared between programs and provide a means of protection from unauthorized access and/or execution: it is not possible to enter segments from other segments unless the access has been specifically allowed; data segments and code segments are separated, and it is not possible to alter the information of a code segment while fetching an instruction, nor is it possible to execute data in a data segment.", "url": "RV32ISPEC.pdf#segment88", "timestamp": "2023-10-17 22:51:39", "segment": "segment88", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2.8. Segment Address Translation ", "content": "In order to support segmentation, the address issued by the processor should consist of a segment number (base) and a displacement (offset) within the segment. Address translation is performed directly via a segment table: the starting address of the entry for the targeted segment is obtained by adding the segment number to the contents of the segment table pointer. One important content of the segment table is the physical segment base address; adding the latter to the offset yields the required physical address. Figure 7.18 illustrates the segment address translation process. It is possible to include additional information in the segment table, for example: (1) the segment length; (2) memory protection (read-only, execute-only, system-only); (3) the replacement algorithm, similar to those used in paged systems; (4) the placement algorithm for finding a suitable place in main memory to hold an incoming segment, examples of which include (a) first fit, (b) best fit, and (c) worst fit.", "url": "RV32ISPEC.pdf#segment89", "timestamp": "2023-10-17 22:51:39", "segment": "segment89", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2.9. 
Paged Segmentation ", "content": "Segmentation and paging can be combined. In such systems, each segment is divided into a number of equal-sized pages, and the page is the basic unit of transfer of data between main memory and disk; at a given time, main memory may consist of pages from various segments. In this case, the virtual address is divided into a segment number, a page number, and a displacement within the page. Address translation proceeds as explained before, except that the physical segment base address obtained from the segment table is added to the virtual page number in order to obtain the appropriate entry in the page table. The output of the page table is the page physical address, which, concatenated with the word field of the virtual address, results in the physical address. Figure 7.19 illustrates paged segmentation address translation.", "url": "RV32ISPEC.pdf#segment90", "timestamp": "2023-10-17 22:51:39", "segment": "segment90", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.2.10. Pentium Memory Management ", "content": "In the Pentium processor, segmentation and paging are individually available and can also be individually disabled, so four distinct views of memory exist: (1) unsegmented unpaged memory; (2) unsegmented paged memory; (3) segmented unpaged memory; (4) segmented paged memory. With segmentation, a 16-bit segment number (two bits of which are used for protection) and a 32-bit offset produce a segmented virtual address space equal to 2^46 = 64 terabytes. The virtual address space is divided into two parts: one half (8K segments of 4 GB each) is global, shared by all processes; the other half is local and distinct for each process. For paging, a two-level table-lookup paging system is used. The first level is a page directory with 1024 entries, defining up to 1024 page groups, each 4 MB in length and each with its own page table; each page table contains 1024 entries, and each entry corresponds to a single 4-KB page.", "url": "RV32ISPEC.pdf#segment91", "timestamp": "2023-10-17 22:51:39", "segment": "segment91", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.3. 
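The paged-segmentation lookup (segment table selects a page table, the page number indexes it, and the frame base is concatenated with the word offset) can be sketched as a toy translation function. The table layout and names here are invented for illustration, not any real MMU format.

```python
def translate(seg, page, offset, seg_table, page_tables, page_size=4096):
    """Paged-segmentation translation as described in the text: the segment
    table entry selects a page table, the virtual page number indexes it,
    and the resulting page frame base is concatenated with the word offset."""
    page_table = page_tables[seg_table[seg]]   # segment table -> page table
    frame = page_table[page]                   # page table -> physical frame
    return frame * page_size + offset          # concatenate frame and offset

# Toy tables: segment 1 uses page table "pt1"; its page 2 lives in frame 7.
seg_table = {1: "pt1"}
page_tables = {"pt1": {2: 7}}
print(hex(translate(1, 2, 0x10, seg_table, page_tables)))  # 0x7010
```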
READ-ONLY MEMORY ", "content": "Random access memories, as well as cache memories, are examples of volatile memories; volatile storage is defined as storage that loses its contents when power is turned off, while nonvolatile memory storage retains the stored information if power is turned off. In addition to volatile storage, there is a need for nonvolatile storage in a computer system: boot subroutines, microcode control, and video game cartridges are examples of computer software that require the use of nonvolatile storage. Read-only memory (ROM) can also be used to realize combinational logic functions. The technology used in implementing ROM chips has evolved over the years. Early implementations of ROMs are called mask-programmed ROMs; in this case the ROM is made to order, that is, at manufacture time the ROM is programmed according to a specific encoding pattern supplied by the user. The structure of a 4 × 4 CMOS ROM chip is shown in Figure 7.20. In the figure, an n-type transistor is placed where a 1 is to be stored. A two-to-four address decoder is used to create the four word lines, each used to activate one row of transistors. When a 1 appears on a word line, the corresponding transistors are turned on, thus pulling the corresponding bit lines to 0; an inverter at the output of each bit line is used, so that a 1 is output for the pulled-down bit lines while the other outputs remain 0. Table 7.3 shows the patterns stored in the four ROM locations. Mask-programmed ROMs are primarily used to store machine microcode, desktop bootstrap loaders, and video game cartridges, all of which are programmed by the manufacturer. Mask-programmed ROMs are inflexible if a user would like to program his/her ROM on site; a different type of ROM, called programmable ROM (PROM), can then be used. In this case, fuses, instead of transistors, are placed at the intersections of the word and bit lines. The user can program a PROM by selectively blowing the appropriate fuses; this is done by allowing a high current to flow through the particular fuses, thus causing them to blow, a process known as burning the ROM. Although it allows added flexibility, a PROM is still restricted by the fact that it can be programmed by the user only once. A third type of ROM, called erasable PROM (EPROM), is reprogrammable: it allows stored data to be erased and new data to be stored. In order to provide this flexibility, EPROMs are constructed using a special type of transistor that is able to assume one of two statuses, normal or disabled. A disabled transistor acts like a switch that is turned off all the time, while a normal transistor can be programmed to become disabled (open all the time) by inducing a certain amount of charge to be trapped under its gate; a disabled transistor can become normal again by removing the
induced charge, which requires exposing the transistors to ultraviolet light. Exposing an EPROM chip to ultraviolet light leads to the erasure of the entire chip's contents, which is considered a major drawback of EPROMs. PROMs and EPROMs are used for prototyping and for moderate-size systems. Flash EPROMs (FEPROMs) have emerged as strong contenders to EPROMs: FEPROMs are compact, faster, and removable compared to EPROMs, and the erasure time of a FEPROM is far faster than that of an EPROM. A different type of ROM that overcomes the erasure drawback of the EPROM is the electrically erasable PROM (EEPROM). In this case, erasure of the EPROM is done electrically; moreover, it can be done selectively, that is, the contents of selected cells can be erased, leaving the other cells' contents untouched. FEPROMs and EEPROMs are used in applications requiring occasional updating of information, such as programmable TVs, VCRs, and automotives. Table 7.4 summarizes the main characteristics of the different types of ROM discussed.", "url": "RV32ISPEC.pdf#segment92", "timestamp": "2023-10-17 22:51:39", "segment": "segment92", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "7.4. SUMMARY ", "content": "The discussion in this chapter is a continuation of that conducted in Chapter 6. In particular, this chapter was dedicated to covering the design aspects that relate to the internal and external organization of the main memory. The design of a static RAM cell was introduced, with emphasis on the read and write operations. The discussion of virtual memory started with the issues related to address translation; three address translation techniques were discussed and compared: the direct, associative, and set-associative techniques. The use of a TLB to improve the average access time was explained. Three replacement techniques were introduced: FIFO, LRU, and clock replacement. Segmented and paged systems were also introduced. The discussion on virtual memory ended with an explanation of the virtual memory aspects of the Pentium IV processor. Toward the end of the chapter, we touched on a number of implementations for ROMs.", "url": "RV32ISPEC.pdf#segment93", "timestamp": "2023-10-17 22:51:39", "segment": "segment93", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "CHAPTER 8 Input\u2013Output Design and Organization ", "content": "Having considered the fundamental concepts related to instruction set design,
assembly language programming, processor design, and memory design, we now turn our attention to the issues related to input-output (I/O) design and organization. It should be emphasized at the outset that I/O plays a crucial role in any modern computer system; therefore, a clear understanding and appreciation of the fundamentals of I/O operations, devices, and interfaces are of great importance. Input/output (I/O) devices vary substantially in their characteristics. One distinguishing factor among input devices, and also among output devices, is their data processing rate, defined as the average number of characters that can be processed by a device per second. For example, while the data processing rate of an input device such as the keyboard is about 10 characters (bytes) per second, a scanner can send data at a rate of 200,000 characters per second. Similarly, a laser printer can output data at a rate of about 100,000 characters per second, while a graphic display can output data at a rate of about 30,000,000 characters per second. Striking a character on the keyboard of a computer will cause a character, in the form of an ASCII code, to be sent to the computer. The amount of time that passes before the next character is sent to the computer will depend on the skill of the user and sometimes even on his/her speed of thinking: it is often the case that the user knows what he/she wants to input, but sometimes needs to think before touching the next button on the keyboard. Therefore, input from a keyboard is slow and bursty in nature, and it would be a waste of time for the computer to spend its valuable time waiting for input from such slow input devices. A mechanism is therefore needed whereby a device can interrupt the processor, asking for attention whenever it is ready; this is called interrupt-driven communication between the computer and I/O devices (see Section 8.3). Consider also the case of a disk. A typical disk can be capable of transferring data at rates exceeding several million bytes per second. It would be a waste of time to transfer data byte by byte, or even word by word; therefore, it is always the case that data are transferred in the form of blocks, that is, entire programs. It is also necessary to allow the disk to transfer this huge volume of data without the intervention of the CPU; this allows the CPU to perform useful operations while the huge amount of data is being transferred between the disk and the memory. This is, in essence, the
direct memory access (DMA) mechanism, which is discussed in Section 8.4. We begin our discussion by offering some basic concepts in Section 8.1.", "url": "RV32ISPEC.pdf#segment94", "timestamp": "2023-10-17 22:51:39", "segment": "segment94", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "8.1. BASIC CONCEPTS ", "content": "Figure 8.1 shows a simple arrangement connecting the processor and the memory in a given computer system to an input device, for example a keyboard, and an output device, such as a graphic display. A single bus consisting of the required address, data, and control lines is used to connect the system's components in Figure 8.1. The way the processor and the memory exchange data was explained in Chapters 6 and 7; here we are concerned with the way the processor and the I/O devices exchange data. As indicated in the introductory part, there exists a big difference between the rate at which a processor can process information and the rates of the input and output devices. One simple way to accommodate this speed difference is to let the input device, for example a keyboard, deposit the character struck by the user in a register (the input register), which then indicates the availability of that character to the processor. When the input character has been taken by the processor, this is indicated to the input device so that it can proceed to input the next character. Similarly, when the processor has a character to output to the display, it deposits it in a specific register dedicated to communication with the graphic display (the output register); when the character has been taken by the graphic display, this is indicated to the processor so that it can proceed to output the next character. This simple way of communication between the processor and the I/O devices, called an I/O protocol, requires the availability of the input and output registers. In a typical computer system (Figure 8.1 shows a single-bus system), there is a number of input registers, each belonging to a specific input device, and also a number of output registers, each belonging to a specific output device. In addition, a mechanism according to which the processor can address those input and output registers must be adopted. More than one arrangement exists to satisfy the above-mentioned requirements; among these, two particular methods are explained next. In the first arrangement, the I/O devices are assigned particular addresses, isolated from the address space assigned to the memory. The execution of an input instruction at a given input device address will cause the character stored in the input register of that device to be transferred to a specific register in the CPU. Similarly, the execution of an output instruction at a given output
device address will cause the character stored in a specific register in the CPU to be transferred to the output register of that output device. This arrangement is called shared I/O and is shown schematically in Figure 8.2. In this case, the address and data lines of the CPU are shared between the memory and the I/O devices, while a separate control line is used, owing to the need for executing input and output instructions. In a typical computer system, there exists more than one input and more than one output device; therefore, there is a need for address decoder circuitry for device identification. There is also a need for status registers: the status of an input device, that is, whether it is ready to send data to the processor, is stored in the status register of that device; similarly, the status of an output device, that is, whether it is ready to receive data from the processor, is stored in the status register of that device. Input and output registers, status registers, and address decoder circuitry represent the main components of an I/O interface (module). The main advantage of the shared I/O arrangement is the separation between the memory address space and that of the I/O devices; its main disadvantage is the need for special input and output instructions in the processor instruction set. The shared I/O arrangement is mostly adopted by Intel. The second possible I/O arrangement is to deal with the input and output registers as if they were regular memory locations. In this case, a read operation from the address corresponding to the input register of an input device, for example Read Device 6, is equivalent to performing an input operation on the input register of Device 6; similarly, a write operation to the address corresponding to the output register of an output device, for example Write Device 9, is equivalent to performing an output operation into the output register of Device 9. This arrangement is called memory-mapped I/O and is shown in Figure 8.3. The main advantage of memory-mapped I/O is that the read and write instructions of the processor can be used to perform the input and output operations, respectively, which eliminates the need for introducing special I/O instructions; its main disadvantage is the need to reserve a certain part of the memory address space for addressing I/O devices, that is, a reduction of the available memory address space. Memory-mapped I/O is mostly adopted by Motorola.", "url": "RV32ISPEC.pdf#segment95", "timestamp": "2023-10-17 22:51:40", "segment": "segment95", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "8.2. 
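Under memory-mapped I/O, the difference between a memory access and a device-register access comes down to address decoding. A minimal sketch (the I/O base address and register layout are invented for illustration):

```python
IO_BASE = 0xFFFF0000  # hypothetical region reserved for device registers

class Bus:
    """Toy address decoder: reads and writes at or above IO_BASE go to
    device registers; everything below goes to ordinary memory."""
    def __init__(self):
        self.memory = {}
        self.io_regs = {}

    def write(self, addr, value):
        target = self.io_regs if addr >= IO_BASE else self.memory
        target[addr] = value

    def read(self, addr):
        target = self.io_regs if addr >= IO_BASE else self.memory
        return target.get(addr, 0)

bus = Bus()
bus.write(0x1000, 42)          # an ordinary store to memory
bus.write(IO_BASE + 4, 0x0A)   # same write instruction, lands in a device register
print(bus.read(0x1000), bus.read(IO_BASE + 4))  # 42 10
```

The point of the sketch is that the processor issues the same read/write operations in both cases; only the decoded address range differs, which is why no special I/O instructions are needed.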
PROGRAMMED I/O ", "content": "In this section we present the main hardware components required for communications between the processor and the I/O devices, and the way according to which such communications take place (the protocol). It is also indicated how the protocol can be programmed in the form of routines that run under the control of the CPU. Consider, for example, an input operation from Device 6 (which could be a keyboard) in the case of the shared I/O arrangement, and let us also assume that eight different I/O devices are connected to the processor. In this case (see Fig. 8.4), the following protocol steps (the program) are followed: (1) The processor executes an input instruction from Device 6, for example INPUT 6; the effect of executing this instruction is to send the device number to the address decoder circuitry of each input device in order to identify the specific input device involved; in this case, the output of the decoder of Device 6 is enabled while the outputs of all other decoders are disabled. (2) The buffers (in the figure we assumed eight) holding the data of the specified input device (Device 6) are enabled by the output of the address decoder circuitry. (3) The data output of the enabled buffers becomes available on the data bus. (4) The instruction decoding gates the data available on the data bus into a particular register in the CPU, normally the accumulator. Output operations are performed in a way similar to the input operation explained above; the difference is the direction of the data transfer, which is from a specific CPU register to the output register of the specified output device. I/O operations performed in this manner are called programmed I/O: they are performed under the full control of the CPU, and a complete instruction fetch-decode-execute cycle is executed for every input and every output operation. Programmed I/O is useful in cases whereby one character at a time is to be transferred, for example with a keyboard or a character-mode printer; although simple, programmed I/O is slow. One point that should not be overlooked in the description of programmed I/O above is how to handle the substantial speed difference between the I/O devices and the processor. A mechanism should be adopted in order to ensure that a character sent to the output register of an output device, for example a screen, is not overwritten by the processor (due to the processor's high speed) before it is displayed, and that a character available in the input register of the keyboard is read only once by the processor. This brings up the issue of the status of the input and output devices. The mechanism implemented requires the availability of a status bit, Bin, in the interface of each input device and a status bit, Bout, in the interface of each output device. Whenever an input device such as a keyboard has a character available in its input register, it indicates this by setting Bin to 1.
A program running on the processor can be used to continuously monitor Bin; when the program sees that Bin = 1, it interprets this to mean that a character is available in the input register of that device. Reading that character requires executing the protocol explained before. Whenever the character is read, the program resets Bin to 0, thus avoiding a multiple read of the same character. In a similar manner, the processor can deposit a character in the output register of an output device, for example a screen, only if Bout = 0; when the screen has displayed the character, it sets Bout to 1, indicating to the program monitoring Bout that the screen is ready to receive the next character. This process of checking the status of the I/O devices in order to determine their readiness for receiving and/or sending characters is called software I/O polling. A hardware I/O polling scheme is shown in Figure 8.5. In this figure, each of the N I/O devices has access to the interrupt line INR. Upon recognizing the arrival of a request (called an interrupt request) on INR, the processor polls the devices to determine the requesting device; this is done through the ⌈log2 N⌉ polling lines. The priority of the requesting devices determines the order in which their addresses are put on the polling lines: the address of the highest-priority device is put first, followed by that of the next-priority device, down to the least-priority device. In addition to I/O polling, two other mechanisms can be used to carry out I/O operations: interrupt-driven I/O and direct memory access (DMA), discussed in the next two sections.", "url": "RV32ISPEC.pdf#segment96", "timestamp": "2023-10-17 22:51:40", "segment": "segment96", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "8.3. 
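The Bin handshake described above amounts to a busy-wait loop on a status bit. A sketch with a simulated keyboard (the device behavior is faked so the loop terminates; in real software polling the loop would spin while Bin = 0):

```python
class Keyboard:
    """Simulated input device: bin = 1 means a character is waiting in reg."""
    def __init__(self, chars):
        self.pending = list(chars)
        self.reg = self.pending.pop(0) if self.pending else None
        self.bin = 1 if self.reg is not None else 0

    def ack(self):
        """The processor has read the register: reset Bin, load next char."""
        if self.pending:
            self.reg = self.pending.pop(0)
            self.bin = 1
        else:
            self.bin = 0

def poll_input(dev):
    """Software I/O polling: check Bin, read the character when Bin = 1,
    then reset Bin (via ack) so the same character is not read twice."""
    received = []
    while dev.bin == 1:
        received.append(dev.reg)
        dev.ack()
    return received

print("".join(poll_input(Keyboard("hi"))))  # hi
```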
INTERRUPT-DRIVEN I/O ", "content": "It is often necessary to interrupt the normal flow of a program, for example to react to abnormal events such as power failure. An interrupt can also be used to acknowledge the completion of a particular course of action, such as a printer indicating to the computer that it has completed printing a character and that it is ready to receive another. An interrupt can also be used in time-sharing systems to allocate CPU time among different programs. The instruction sets of modern CPUs often include instructions that mimic the actions of hardware interrupts. When the CPU is interrupted, it is required to discontinue its current activity, attend to the interrupting condition (serve the interrupt), and then resume its activity from wherever it stopped. Discontinuity of the processor's current activity requires finishing the execution of the current instruction, saving the processor status (mostly in the form of pushing register values onto a stack), and transferring control (jumping) to what is called the interrupt service routine (ISR). The service offered to an interrupt will depend on the source of the interrupt. For example, if the interrupt is due to power failure, then the action taken is to save the values of the processor registers and pointers so that resumption of correct operation can be guaranteed upon the return of power; in the case of an I/O interrupt, serving the interrupt means performing the required data transfer. Upon finishing serving an interrupt, the processor should restore its original status by popping the relevant values from the stack; once the processor returns to its normal state, it can enable the sources of interrupt again. One important point that was overlooked in the scenario described above is the issue of serving multiple interrupts, for example the occurrence of yet another interrupt while the processor is currently serving an interrupt. The response to the new interrupt will depend upon the priority of the newly arrived interrupt with respect to that of the interrupt currently being served. If the newly arrived interrupt has a priority less than or equal to that of the currently served one, then it can wait until the processor finishes serving the current interrupt. If, on the other hand, the newly arrived interrupt has a priority higher than that of the currently served interrupt, for example a power failure interrupt occurring while an I/O interrupt is being served, then the processor will have to push its status onto the stack and serve the higher-priority interrupt. Correct handling of multiple interrupts, in terms of storing and restoring the correct processor status, is guaranteed by the way the push and pop operations are performed: for example, to serve the first interrupt, Status 1 is pushed onto the stack;
upon receiving the second interrupt, Status 2 is pushed onto the stack; upon finishing serving the second interrupt, Status 2 is popped off the stack; and upon finishing serving the first interrupt, Status 1 is popped off the stack. It is possible for the interrupting device to identify itself to the processor by sending a code following the interrupt request. The code sent by a given I/O device can represent its I/O address or the memory address location of the start of the ISR for that device; this scheme is called vectored interrupt.", "url": "RV32ISPEC.pdf#segment97", "timestamp": "2023-10-17 22:51:40", "segment": "segment97", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "8.3.1. Interrupt Hardware ", "content": "In the discussion above, we assumed that the processor recognized the occurrence of an interrupt before proceeding to serve it. Computers are provided with interrupt hardware capability in the form of specialized interrupt lines to the processor; these lines are used to send interrupt signals to the processor. In the case of I/O, where there exists more than one I/O device, the processor should be provided with a mechanism that enables it to handle simultaneous interrupt requests and to recognize the interrupting device. Two basic schemes can be implemented to achieve this task: the first is called daisy chain bus arbitration (DCBA) and the second is called independent source bus arbitration (ISBA). According to DCBA (see Fig. 8.6a), the I/O devices present their interrupt requests on the interrupt request line INR (similar to the polling arrangement). Upon recognizing the arrival of an interrupt request, the processor, through a daisy-chained grant line (GL), sends its grant to the requesting device to start communication with the processor. The GL goes through the devices, starting from the first device (the one nearest to the processor) to the next device, until it reaches the last device (device N). If Device 1 has put up a request, it holds the grant signal and starts communication with the processor; if, on the other hand, Device 1 has no interrupt request, it passes the grant signal to Device 2, which repeats the same procedure. In the case of multiple requests, the DCBA arrangement therefore gives the highest priority to the device physically nearest to the processor, while the device furthest from the processor has the lowest priority. According to ISBA (see Fig. 8.6b), each I/O device has its own interrupt request line, through which it can send its interrupt request independently of the other devices; similarly, each I/O device has its own grant line, through which it receives the grant signal for its request so that it can start communicating with the processor. An I/O device's priority under ISBA does not depend on the device's
location; priority arbitration circuitry is needed in order to deal with simultaneous interrupt requests.", "url": "RV32ISPEC.pdf#segment98", "timestamp": "2023-10-17 22:51:40", "segment": "segment98", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "8.3.2. Interrupt in Operating Systems ", "content": "When an interrupt occurs, the operating system gains control: it saves the state of the interrupted process, analyzes the interrupt, and passes control to the appropriate routine to handle the interrupt. There are several types of interrupt, including I/O interrupts; an I/O interrupt notifies the operating system that an I/O device has completed or suspended its operation and needs some service from the CPU. To process an interrupt, the context of the current process must be saved and the interrupt-handling routine must be invoked; this process is called context switching. A process context has two parts: the processor context and the memory context. The processor context is the state of the CPU's registers, including the program counter (PC), the program status words (PSWs), and other registers; the memory context is the state of the program's memory, including the program and its data. The interrupt handler is the routine that processes each different type of interrupt. The operating system must therefore provide programs with a save area for their contexts, and it must also provide an organized way of allocating and deallocating memory for the interrupted processes. When the interrupt-handling routine finishes processing the interrupt, the CPU is dispatched either to the interrupted process or to the highest-priority ready process; this will depend on whether the interrupted process is preemptive or nonpreemptive. If the process is nonpreemptive, it gets the CPU again: first the context must be restored, then control is returned to the interrupted process. Figure 8.7 shows the layers of software involved in I/O operations: first the program issues an I/O request via an I/O call; the request is passed to the I/O device; when the device completes the I/O, an interrupt is sent and the interrupt handler is invoked; eventually, control is relinquished back to the process that initiated the I/O. Example 1: the 80386 interrupt architecture. The 8086 family of processors has two hardware interrupt pins, labeled INTR and NMI. NMI is the nonmaskable interrupt, which means that it cannot be blocked: the processor must respond to an NMI input, which is usually reserved for critical system functions. The INTR input is the maskable interrupt request line connecting the
CPU to the programmable interrupt controller, the 8259A PIC. Interrupts on INTR can be enabled and disabled using the instructions STI (set interrupt flag) and CLI (clear interrupt flag), respectively. Interrupt handlers are also called interrupt service routines (ISRs); the address of each interrupt service routine is stored in four consecutive memory locations in the interrupt vector table (IVT). The IVT stores pointers to the ISR for each type of interrupt; when an interrupt occurs, an 8-bit type number is supplied to the processor, which identifies the appropriate entry in the table. An interrupt generated by a device goes to the PIC. Multiple interrupts may be generated simultaneously; however, these are buffered in the PIC, and the PIC decides which one of the interrupts should be forwarded to the CPU. To inform the CPU that there is an outstanding interrupt waiting to be processed, the PIC sends an interrupt request (INTR) to the CPU; at the appropriate time, the CPU responds with an interrupt acknowledgment (INTA), at which time the PIC puts the 8-bit interrupt type number associated with the device on the bus, so that the CPU can identify which interrupt handler to invoke. In case several interrupts are pending, the PIC will send the next interrupt request when it receives an end-of-interrupt command from the current ISR. Figure 8.8 shows the simple protocol used to determine which ISR is to be invoked. Earlier computer designs (the PC and XT) used a single PIC, so eight different interrupt requests were allowed (IRQ0 to IRQ7); Table 8.1 shows a list of the standard interrupt type numbers for typical devices. Later designs added a second PIC, increasing the number of interrupt inputs to 15. Figure 8.9 shows how the two PICs are wired in cascade: one PIC is designated as the master and the other becomes the slave; as shown in the figure, the slave's interrupts are input via IRQ1 of the master, and in general eight different slaves can be accommodated by a single master PIC. Example 2: the ARM interrupt architecture. ARM stands for Advanced RISC Machines; ARM is a 16/32-bit architecture used in portable devices because of its low power consumption and reasonable performance. Interrupt requests to the ARM core are collected and controlled by an interrupt controller called the AITC. The interrupt controller provides an interface to the core that can collect up to 64 interrupt requests. The usual sequence of events for interrupts is as follows: an interrupt would first be enabled in the source peripheral, then enabled in the interrupt controller, and finally enabled in the core. When the interrupt occurs, the source signal is routed through the interrupt controller to the ARM core; at the interrupt controller,
the interrupt can be enabled or disabled to the core and can be assigned a priority level. When an interrupt request reaches the core, it halts the core's normal processing routines to allow the interrupt request to be serviced. Among the different interrupt requests, the ARM core can handle IRQ and FIQ requests. IRQ (normal interrupt request) is used for general-purpose interrupt handling; it has a lower priority than FIQ and is masked out when an FIQ sequence is entered. FIQ (fast interrupt request) is used to support high-speed data transfer or channel processes. Similarly to the 8086, the addresses of the interrupt handlers are stored in a vector table, as shown in Table 8.2. For example, when an IRQ is detected, the core accesses address 0x18 of the vector table and executes the instruction loaded at that address; normally, the instruction found at 0x18 of the vector table is of the form LDR PC, IRQ_Handler (load the address of the IRQ interrupt handler into the PC). When an FIQ is detected, the core accesses address 0x1C of the vector table and executes the instruction loaded at that address, normally of the form LDR PC, FIQ_Handler. When an interrupt occurs, the following happens inside the core: (1) the CPSR (current program status register) is copied to the SPSR (saved program status register) of the mode being entered; (2) the CPSR bits are set appropriately for the mode being entered, the core is set to ARM state, and the relevant interrupt disable flags are set; (3) the appropriate set of banked registers is banked in; (4) the return address is stored in the link register of the relevant mode; (5) the PC is set to the relevant vector address. For example, when an IRQ interrupt is detected, the ARM core copies the CPSR into SPSR_irq, enters IRQ mode by setting the mode bits in the CPSR to 10010, disables normal interrupts by setting the I bit in the CPSR, saves the address of the next instruction in R14_irq, and loads 0x18 into the PC; at address 0x18 there is an instruction that loads the address of the interrupt handler into the PC. Similarly, when an FIQ interrupt is detected, the ARM core copies the CPSR into SPSR_fiq, enters FIQ mode by setting the mode bits in the CPSR to 10001, disables both normal and fast interrupts by setting the I and F bits in the CPSR, saves the address of the next instruction in R14_fiq, and loads 0x1C into the PC; at address 0x1C there is an instruction that loads the address of the interrupt handler into the PC. The MC9328MX1/MXL AITC: the MC9328MX1/MXL AITC contains twenty-six 32-bit registers, described in Table 8.3. Using these registers, the AITC allows the selection of whether a pending interrupt source creates a normal interrupt (IRQ) or a fast interrupt (FIQ) to the core. This is accomplished via the
INTTYPEH and INTTYPEL registers. Each bit in these registers corresponds to an interrupt source available in the system: setting a bit selects the corresponding interrupt source as a fast interrupt, whereas clearing the bit selects the corresponding source as a normal interrupt. In the INTTYPEL register, bit 0 corresponds to interrupt source 0, bit 1 corresponds to interrupt source 1, and so on up to bit 31, which corresponds to interrupt source 31; in the INTTYPEH register, bit 0 corresponds to interrupt source 32, bit 1 corresponds to interrupt source 33, and so on up to bit 31, which corresponds to interrupt source 63. After determining the type of the pending interrupt, the next step is to enable the interrupt; this can be done via the INTENABLEH and INTENABLEL registers. To enable a pending interrupt to the core, the corresponding interrupt source bit in INTENABLEH or INTENABLEL must be set; likewise, to disable the interrupt, clear that bit. In the INTENABLEL register, bit 0 corresponds to interrupt source 0, bit 1 corresponds to interrupt source 1, and so on up to bit 31, which corresponds to interrupt source 31; in the INTENABLEH register, bit 0 corresponds to interrupt source 32, bit 1 corresponds to interrupt source 33, and so on up to bit 31, which corresponds to interrupt source 63. For example, to select interrupt source 15 as a normal interrupt, clear bit 15 in the INTTYPEL register; then, to enable this interrupt, set bit 15 in the INTENABLEL register. Likewise, to select interrupt source 45 as a fast interrupt, set bit 13 in the INTTYPEH register; then, to enable this interrupt, set bit 13 in the INTENABLEH register. The AITC also allows the programmer to prioritize the pending normal interrupt sources to one of 16 different priority levels; this can be done in the NIPRIORITY[7:0] registers.", "url": "RV32ISPEC.pdf#segment99", "timestamp": "2023-10-17 22:51:41", "segment": "segment99", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "8.4. 
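The INTTYPE/INTENABLE manipulations in the example above are plain bit operations. A sketch modeling the four registers as integers (the register names follow the text; the dictionary-based model itself is ours, not the hardware interface):

```python
# Model the four AITC registers as 32-bit integers (reset value 0).
regs = {"INTTYPEL": 0, "INTTYPEH": 0, "INTENABLEL": 0, "INTENABLEH": 0}

def select(source, fast):
    """Route interrupt `source` (0..63) as FIQ (set its INTTYPE bit) or
    IRQ (clear it), then enable it in the matching INTENABLE register.
    Sources 0..31 map to the L registers, 32..63 to the H registers."""
    half = "H" if source >= 32 else "L"
    bit = source % 32
    if fast:
        regs["INTTYPE" + half] |= 1 << bit
    else:
        regs["INTTYPE" + half] &= ~(1 << bit)
    regs["INTENABLE" + half] |= 1 << bit

select(15, fast=False)  # source 15 as normal interrupt: clear bit 15 of INTTYPEL
select(45, fast=True)   # source 45 as fast interrupt: set bit 13 of INTTYPEH
print(hex(regs["INTENABLEL"]), hex(regs["INTTYPEH"]))  # 0x8000 0x2000
```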
DIRECT MEMORY ACCESS (DMA) ", "content": "The main idea of direct memory access (DMA) is to enable peripheral devices to cut out the 'middleman' role of the CPU in data transfer: it allows peripheral devices to transfer data directly to or from memory without the intervention of the CPU. Having peripheral devices access memory directly allows the CPU to do other work, which leads to improved performance, especially in the case of large transfers. The DMA controller is a piece of hardware that controls one or more peripheral devices; it allows devices to transfer data to or from the system's memory without the help of the processor. In a typical DMA transfer, some event notifies the DMA controller that data need to be transferred to or from memory. Both the DMA controller and the CPU use the memory bus, and only one or the other can use the memory at the same time; the DMA controller therefore sends a request to the CPU asking for its permission to use the bus, and the CPU returns an acknowledgment to the DMA controller, granting it bus access. The DMA controller can then take control of the bus and independently conduct the memory transfer; when the transfer is complete, the DMA controller relinquishes its control of the bus to the CPU. Processors that support DMA provide one or more input signals that the bus requester can assert to gain control of the bus, and one or more output signals that the CPU asserts to indicate that it has relinquished the bus. Figure 8.10 shows how the DMA controller shares the CPU's memory bus. Direct memory access controllers require initialization by the CPU; typical setup parameters include the address of the source area, the address of the destination area, the length of the block, and whether the DMA controller should generate a processor interrupt once the block transfer is complete. A DMA controller has an address register, a word count register, and a control register. The address register contains an address that specifies the memory location of the data to be transferred; it is typically possible to have the DMA controller automatically increment the address register after each word transfer, so that the next transfer will be from the next memory location. The word count register holds the number of words to be transferred; the word count is decremented by one after each word transfer. The control register specifies the transfer mode. In direct memory access, data transfer can be performed either in burst mode or in single-cycle mode. In burst mode, the DMA controller keeps control of the bus until all the data has been transferred to (from) memory from (to) the
peripheral device mode transfer needed fast devices data transfer stopped entire transfer done singlecycle mode cycle stealing dma controller relinquishes bus transfer one data word mini mizes amount time dma controller keeps cpu controlling bus requires bus requestacknowledge sequence performed every single transfer overhead result degradation performance singlecycle mode preferred system tolerate cycles added interrupt latency peripheral devices buffer large amounts data causing dma controller tie bus excessive amount time following steps summarize dma operations 1 dma controller initiates data transfer 2 data moved increasing address memory reducing count words moved 3 word count reaches zero dma informs cpu termination means interrupt 4 cpu regains access memory bus dma controller may multiple channels channel associated address register count register initiate data transfer device driver sets dma channel address count registers together direction data transfer read write transfer taking place cpu free things transfer complete cpu interrupted direct memory access channels shared device drivers device driver must able determine dma channel use devices xed dma channel others exible device driver simply pick free dma channel use linux tracks usage dma channels using vector dmachan data structures one per dma channel dmachan data structure contains two elds pointer string describing owner dma channel ag indi cating dma channel allocated", "url": "RV32ISPEC.pdf#segment100", "timestamp": "2023-10-17 22:51:41", "segment": "segment100", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "8.5. 
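The register behavior described above (auto-incrementing address register, decrementing word count register, interrupt at zero) can be sketched as a toy model. This is a minimal illustration, not any real controller's programming interface; the class and register names are ours.

```python
# Toy model of the DMA controller registers described above: the address
# register auto-increments after each word, the word count register is
# decremented per word, and an interrupt is raised when the count hits zero.
# All names are illustrative; memory is modeled as a plain list of words.

class DMAController:
    def __init__(self, memory):
        self.memory = memory          # system memory
        self.address_reg = 0          # memory address for the next word
        self.count_reg = 0            # words remaining in the block
        self.interrupt_pending = False

    def setup(self, address, count):
        """CPU initialization: program the address and word count registers."""
        self.address_reg = address
        self.count_reg = count
        self.interrupt_pending = False

    def burst_transfer(self, device_words):
        """Burst mode: keep the bus until the whole block is in memory."""
        for word in device_words:
            self.memory[self.address_reg] = word
            self.address_reg += 1     # auto-increment to the next location
            self.count_reg -= 1       # one fewer word to move
            if self.count_reg == 0:
                self.interrupt_pending = True  # notify the CPU: block done
                break

memory = [0] * 16
dma = DMAController(memory)
dma.setup(address=4, count=3)
dma.burst_transfer([0xAA, 0xBB, 0xCC])
```

After the transfer, the three words sit at addresses 4 to 6, the count register is zero, and the interrupt flag is set, mirroring steps 2 and 3 of the summary above.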
8.5. BUSES

A bus, in computer terminology, represents a physical connection used to carry a signal from one point to another. The signal carried by a bus may represent an address, data, a control signal, or power. Typically, a bus consists of a number of connections running together, and each connection is called a bus line. A bus line is normally identified by a number. Related groups of bus lines are usually identified by a name; for example, the group of bus lines 1 to 16 in a given computer system may be used to carry the addresses of memory locations and would therefore be identified as address lines. Depending on the signals carried, there exist at least four types of buses: address, data, control, and power buses. Data buses carry data, control buses carry control signals, and power buses carry the supply and ground voltages. The size (number of lines) of the address, data, and control buses varies from one system to another.

Consider, for example, the bus connecting the CPU and the memory in a given system, called the CPU bus. If the size of the memory in that system is 512M words and each word is 32 bits, then the size of the address bus is log2(512 x 2^20) = 29 lines and the size of the data bus is 32 lines. At least one control line, R/W (read/write), must exist in that system. In addition to carrying control signals, a control bus can carry timing signals. These are signals used to determine the exact timing for data transfer over the bus, that is, when a given component in the computer system (processor, memory, or I/O device) can place data on the bus and when it can receive data from the bus.

A bus is synchronous if data transfer over the bus is controlled by a bus clock; the clock acts as the timing reference for all bus signals. A bus is asynchronous if data transfer over the bus is based on the availability of the data and not on a clock signal. Data is transferred over an asynchronous bus using a technique called handshaking. The operations of synchronous and asynchronous buses are explained below.

To understand the difference between synchronous and asynchronous operation, let us consider the case where a master (a CPU or a DMA controller) is the source of data to be transferred to a slave (an I/O device). The following is the sequence of events involving the master and the slave:
1. The master sends a request to use the bus.
2. The master's request is granted and the bus is allocated to the master.
3. The master places the address/data on the bus.
4. The slave is selected.
5. The master signals the data transfer.
6. The slave takes the data.
7. The master frees the bus.
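The address-bus sizing in the example above follows directly from the number of addressable words. A quick numerical check (the helper function name is ours):

```python
# Number of address lines needed to address a memory of num_words words:
# ceil(log2(num_words)). For 512M words = 512 * 2**20 = 2**29, this is 29,
# matching the CPU-bus example in the text.
import math

def address_bus_width(num_words: int) -> int:
    """Address lines required to select one of num_words locations."""
    return math.ceil(math.log2(num_words))

words = 512 * 2**20          # 512M words
print(address_bus_width(words))   # 29
```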
8.5.1. Synchronous Buses

In a synchronous bus, the steps of a data transfer take place at fixed clock cycles; everything is synchronized to the bus clock, and clock signals are made available to both the master and the slave. The bus clock is a square-wave signal. A cycle starts at one rising edge of the clock and ends at the next rising edge, which is the beginning of the next cycle. A transfer may take multiple bus cycles depending on the speed parameters of the bus and the two ends of the transfer.

One scenario would be that, in the first clock cycle, the master puts an address on the address bus, puts data on the data bus, and asserts the appropriate control lines. The slave recognizes its address on the address bus in the first cycle and reads the new value from the bus in the second cycle.

Synchronous buses are simple and easily implemented. However, when devices of varying speeds are connected to a synchronous bus, the slowest device will determine the speed of the bus. Also, the length of a synchronous bus may be limited in order to avoid clock-skew problems.

8.5.2. Asynchronous Buses

There are no fixed clock cycles in asynchronous buses; handshaking is used instead. Figure 8.11 shows the handshaking protocol. The master asserts the data-ready line (point 1 in the figure) until it sees a data-accept signal. When the slave sees the data-ready signal, it asserts the data-accept line (point 2 in the figure). The rising edge of the data-accept line triggers the falling edge of the data-ready line and the removal of the data from the bus. The falling edge of the data-ready line (point 3 in the figure) triggers the falling edge of the data-accept line (point 4 in the figure). This handshaking, which is called fully interlocked, is repeated until the data is completely transferred. An asynchronous bus is appropriate for connecting devices of different speeds.
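The four events of the fully interlocked handshake can be sketched as an explicit event sequence per word. This is a minimal simulation, not timing-accurate; all names are illustrative.

```python
# Sketch of the fully interlocked handshake of Section 8.5.2: for each word,
# the four events are (1) data-ready asserted, (2) data-accept asserted and
# data latched, (3) data-ready deasserted, (4) data-accept deasserted.

def fully_interlocked_transfer(words):
    """Transfer each word with a 4-event handshake; return (received, trace)."""
    received, trace = [], []
    for word in words:
        bus = word            # master places the word on the data lines
        trace.append("ready+")    # (1) master asserts data-ready
        received.append(bus)      # (2) slave sees ready, latches the data...
        trace.append("accept+")   #     ...and asserts data-accept
        trace.append("ready-")    # (3) accept+ triggers ready- (data removed)
        trace.append("accept-")   # (4) ready- triggers accept-
    return received, trace

received, trace = fully_interlocked_transfer([7, 8])
```

Because each edge is triggered by the previous one, the transfer rate adapts to the slower party, which is why the scheme suits devices of different speeds.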
8.5.3. Bus Arbitration

Bus arbitration is needed to resolve the conflict that arises when two or more devices want to become the bus master at the same time. In short, arbitration is the process of selecting the next bus master from among multiple candidates. Conflicts can be resolved based on fairness or priority, using centralized or distributed mechanisms.

Centralized Arbitration. In centralized arbitration schemes, a single arbiter is used to select the next master. A simple form of centralized arbitration uses a bus-request line, a bus-grant line, and a bus-busy line. These lines are shared by all potential masters, which are daisy-chained in a cascade. Figure 8.12 shows this simple centralized arbitration scheme. In the figure, any potential master can submit a bus request at any time. A fixed priority is set among the masters, from left to right. When a bus request is received, the central bus arbiter issues a bus grant by asserting the bus-grant line. When the potential master closest to the arbiter (potential master 1) sees the bus-grant signal, it checks whether it has made a bus request. If yes, it takes over the bus and stops the propagation of the bus-grant signal. If it has not made a request, it simply passes the bus-grant signal on to the next master to its right (potential master 2), and so on. When the transaction is complete, the busy line is deasserted.

Instead of using shared request and grant lines, multiple bus-request and bus-grant lines can be used. In one scheme, each master has its own independent request and grant lines, as shown in Figure 8.13; here the central arbiter can employ a priority-based or a fairness-based tiebreaker. Another scheme allows the masters to have multiple priority levels. For each priority level there is a bus-request line and a bus-grant line, and within each priority level a daisy chain is used. In this scheme, each device is attached to the daisy chain of one priority level. When the arbiter receives multiple bus requests from different levels, it grants the bus to the level with the highest priority, and daisy chaining is used among the devices of that level. Figure 8.14 shows an example in which four devices are arranged in two priority levels: potential master 1 and potential master 3 are daisy-chained on level 1, while potential master 2 and potential master 4 are daisy-chained on level 2.

Decentralized Arbitration. In decentralized arbitration schemes, priority-based arbitration is usually used in a distributed fashion. Each potential master has a unique arbitration number, which is used to resolve conflicts when multiple requests are submitted. For example, a conflict can always be resolved in favor of the device with the highest arbitration number. The question then is how to determine which device has the highest arbitration number. One method is for a requesting device to make its unique arbitration number available to all other devices; each device compares that number with its own arbitration number, and the device with the smaller number is always dismissed. Eventually, the requester with the highest arbitration number will survive and be granted bus access.

8.6. INPUT-OUTPUT INTERFACES

An interface is the data path between two separate devices in a computer system. Interfaces (and buses) can be classified, based on the number of bits transmitted at a given time, into serial versus parallel ports. Over a serial port, 1 bit of data is transferred at a time; mice and modems are usually connected to serial ports. A parallel port allows more than 1 bit of data to be processed at once; printers are among the common peripheral devices connected to parallel ports. Table 8.4 shows a summary of the variety of buses and interfaces used in personal computers.
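The decentralized self-selection rule of Section 8.5.3 (every requester broadcasts its arbitration number; a device withdraws as soon as it sees a larger one) can be sketched in a few lines. The function name and structure are ours.

```python
# Sketch of decentralized arbitration: each requester compares its unique
# arbitration number against every broadcast number and drops out when it
# sees a larger one, so the highest-numbered requester survives.

def arbitrate(requesters):
    """Return the arbitration number of the winning requester."""
    survivors = set(requesters)
    for dev in requesters:
        for other in requesters:
            if other > dev:          # dev sees a larger number: dismissed
                survivors.discard(dev)
                break
    assert len(survivors) == 1       # unique numbers -> exactly one winner
    return survivors.pop()

# Devices with arbitration numbers 3, 9, and 5 request the bus together.
print(arbitrate([3, 9, 5]))  # 9
```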
8.7. SUMMARY

One of the major features of a computer system is its ability to exchange data with other devices and to allow the user to interact with the system. This chapter focused on the I/O system and the way the processor and I/O devices exchange data in a computer system. The chapter described three ways of organizing I/O: programmed I/O, interrupt-driven I/O, and DMA. In programmed I/O, the CPU handles the transfers, which take place between registers and the devices. In interrupt-driven I/O, the CPU handles data transfers while an I/O module runs concurrently. In DMA, data is transferred between memory and I/O devices without the intervention of the CPU. We also studied two methods of synchronization: polling and interrupts. In polling, the processor polls the device while waiting for the I/O to complete; clearly, processor cycles are wasted in this method. Using interrupts, the processor is free to switch to other tasks, and I/O devices assert interrupts when the I/O is complete, although interrupt handling incurs a delay penalty. Two examples of interrupt handling were covered: the 8086 family and ARM. The chapter also covered buses and interfaces, and the wide variety of interfaces and buses used in personal computers was summarized.

CHAPTER 9. Pipelining Design Techniques

There exist two basic techniques to increase the instruction execution rate of a processor: increase the clock rate, thus decreasing the instruction execution time, or alternatively increase the number of instructions that can be executed simultaneously. Pipelining and instruction-level parallelism are examples of the latter technique. Pipelining owes its origin to car assembly lines. The idea is to have more than one instruction being processed by the processor at the same time, similar to an assembly line. The success of a pipeline depends upon dividing the execution of an instruction among a number of subunits (stages), each performing part of the required operations. A possible division is to consider instruction fetch, instruction decode, operand fetch, instruction execution, and store of results as the subtasks needed for the execution of an instruction. In this case, it is possible to have up to five instructions in the pipeline at the same time, thus reducing instruction execution latency. In this chapter, we discuss the basic concepts involved in designing instruction pipelines. Performance measures of a pipeline are introduced. The main issues contributing to instruction pipeline hazards are discussed and some possible solutions are introduced. In addition, we introduce the concept of arithmetic pipelining together with the problems involved in designing such a pipeline. Our coverage concludes with a review of recent pipeline processors.

9.1. GENERAL CONCEPTS

Pipelining refers to the technique in which a given task is divided into a number of subtasks that need to be performed in sequence. Each subtask is performed by a given functional unit. The units are connected in a serial fashion and all of them operate simultaneously. The use of pipelining improves the performance compared to the traditional sequential execution of tasks. Figure 9.1 shows an illustration of the basic difference between executing the four subtasks of a given instruction (in this case fetching F, decoding D, execution E, and writing the results W) using pipelining and using sequential processing. It is clear from the figure that the total time required to process three instructions I1, I2, I3 is only six time units if four-stage pipelining is used, compared to 12 time units if sequential processing is used: a possible saving of up to 50% in the execution time of these three instructions.

In order to formulate some performance measures for the goodness of a pipeline in processing a series of tasks, a space-time chart, called a Gantt chart, is used. The chart shows the succession of the subtasks in the pipe with respect to time. Figure 9.2 shows a Gantt chart. In this chart, the vertical axis represents the subunits (four in this case) and the horizontal axis represents time, measured in terms of the time units required for each unit to perform its task. In developing the Gantt chart, we assume that the time taken by each subunit to perform its task is the same; we call this the unit time. As can be seen from the figure, 13 time units are needed to finish executing 10 instructions (I1 to I10), compared to 40 time units if sequential processing is used (ten instructions each requiring four time units).

The following analysis provides three performance measures for the goodness of a pipeline: the speedup S(n), the throughput U(n), and the efficiency E(n). It should be noted that in this analysis we assume that the unit time is the same for all stages.

1. Speedup S(n). Consider the execution of m tasks (instructions) using an n-stage (unit) pipeline. As can be seen from the Gantt chart, n + (m - 1) time units are required, compared to m x n time units for sequential processing; hence the speedup is S(n) = (m x n) / (n + m - 1).

9.2. INSTRUCTION PIPELINE

The simple analysis made in Section 9.1 ignores an important aspect that can affect the performance of a pipeline, namely a pipeline stall. A pipeline operation is said to have been stalled if one unit (stage) requires more time to perform its function, thus forcing other stages to become idle. Consider, for example, the case of an instruction fetch that incurs a cache miss, and assume that a cache miss requires three extra time units. Figure 9.3 illustrates the effect of having instruction I2 incur a cache miss, assuming the execution of ten instructions I1 to I10. The figure shows that, due to the extra time units needed for instruction I2 to be fetched, the pipeline stalls: fetching of instruction I3 and all subsequent instructions is delayed. Such situations create what is known as a pipeline bubble, a form of pipeline hazard. The creation of a pipeline bubble leads to wasted unit times, thus leading to an overall increase in the number of time units needed to finish executing a given number of instructions. The number of time units needed to execute the 10 instructions, as shown in Figure 9.3, is now 16 time units, compared to 13 time units with no cache misses.

Pipeline hazards can take place for a number of reasons, among which are instruction dependency and data dependency. These are explained below.
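The timing measures of Section 9.1 can be checked numerically. This is a minimal sketch; the function names are ours, and the stall case simply substitutes the observed execution time for the stall-free n + (m - 1).

```python
# Pipeline timing measures from Section 9.1: m tasks on an n-stage pipeline
# (unit-time stages) take n + (m - 1) time units when stall-free, versus
# m * n time units sequentially.

def pipeline_time(m, n):
    return n + (m - 1)

def speedup(m, n, time=None):
    t = time if time is not None else pipeline_time(m, n)
    return m * n / t                 # sequential time / pipeline time

def throughput(m, n, time=None):
    t = time if time is not None else pipeline_time(m, n)
    return m / t                     # tasks completed per time unit

# Ten instructions on four stages: 13 time units versus 40 sequentially.
assert pipeline_time(10, 4) == 13

# With stalls, pass the actual time instead: seven instructions on five
# stages taking 16 units (as in Example 3 below) give speedup 35/16 and
# throughput 7/16.
print(round(speedup(7, 5, time=16), 2), round(throughput(7, 5, time=16), 2))
```

The printed values, 2.19 and 0.44, match the figures computed for Example 3 in Section 9.2.2.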
9.2.1. Pipeline "Stall" Due to Instruction Dependency

Correct operation of a pipeline requires that the operation performed by a stage must not depend on the operations performed by the other stages. Instruction dependency refers to the case whereby the fetching of an instruction depends on the results of executing a previous instruction. Instruction dependency manifests itself in the execution of a conditional branch instruction. Consider, for example, a "branch if negative" instruction: the next instruction to fetch will not be known until the result of executing the branch instruction is known. In the following discussion, we will assume that the instruction following a conditional branch instruction is not fetched until the result of executing the branch instruction is known (stored). The following example shows the effect of instruction dependency on a pipeline.

Example 1. Consider the execution of ten instructions I1 to I10 on a pipeline consisting of four pipeline stages: IF (instruction fetch), ID (instruction decode), IE (instruction execute), and IS (instruction results store). Assume that instruction I4 is a conditional branch instruction and that when it is executed, the branch is not taken, that is, the branch condition is not satisfied. Assume also that when the branch instruction is fetched, the pipeline stalls until the result of executing the branch instruction is stored. Show the succession of instructions in the pipeline, that is, show the Gantt chart.

Figure 9.4 shows the required Gantt chart. The bubble created due to the pipeline stall is clearly shown in the figure.
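The stall in Example 1 can be reproduced with a small timing model: the instruction after the branch may not be fetched until the branch leaves the store stage. This is an illustrative sketch with unit-time stages; the function and parameter names are ours.

```python
# Timing model for Example 1: a four-stage pipeline (IF, ID, IE, IS) in
# which the instruction following a conditional branch is held until the
# branch result has been stored. Returns the time unit at which the last
# instruction leaves the pipeline.

def total_time(num_instructions, branch_at, stages=4):
    fetch = 0
    finish = 0
    for i in range(1, num_instructions + 1):
        fetch = fetch + 1
        if i == branch_at + 1:
            # the previous instruction was the branch: wait for its IS stage
            fetch = max(fetch, finish + 1)
        finish = fetch + stages - 1
    return finish

# Ten instructions with a branch at I4: 16 time units vs. the stall-free 13.
print(total_time(10, branch_at=4))   # 16
```

The three-unit penalty equals n - 1 bubbles for n = 4 stages, matching the NOP count derived in Example 4 below.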
9.2.2. Pipeline "Stall" Due to Data Dependency

Data dependency in a pipeline occurs when a source operand of instruction Ii depends on the results of executing a preceding instruction Ij, i > j. It should be noted that although instruction Ii can be fetched, its operand(s) may not be available until the results of instruction Ij are stored. The following example shows the effect of data dependency on a pipeline.

Example 2. Consider the execution of the following piece of code:

ADD R1, R2, R3    ; R3 = R1 + R2
SL  R3            ; R3 = SL(R3)
SUB R5, R6, R4    ; R4 = R5 - R6

In this piece of code, the first instruction, call it Ii, adds the contents of registers R1 and R2 and stores the result in register R3. The second instruction, call it Ii+1, shifts the contents of R3 one bit position to the left and stores the result back in R3. The third instruction, call it Ii+2, stores the result of subtracting the content of R6 from the content of R5 in register R4. In order to show the effect of data dependency, we assume that the pipeline consists of five stages: IF, ID, OF, IE, and IS. Here the OF stage represents operand fetch; the functions of the remaining four stages are as explained before.

Figure 9.5 shows the Gantt chart for this piece of code. As shown in the figure, although instruction Ii+1 has been successfully decoded during time unit k + 2, it cannot proceed to the OF unit during time unit k + 3, because the operand it must fetch is the content of register R3 as modified by the execution of instruction Ii. The modified value of R3 will only be available at the end of time unit k + 4, which forces instruction Ii+1 to wait at the output of the ID unit until k + 5. Notice that instruction Ii+2 also has to wait, at the output of the IF unit, until instruction Ii+1 proceeds to the ID unit. The net result is that a pipeline stall takes place due to the data dependency between instructions Ii and Ii+1.

The data dependency in the above example arises because register R3 is the destination of both instructions Ii and Ii+1. This is called a write-after-write data dependency. Taking into consideration that a register can be written into or read from, a total of four different possibilities exist, including the write-after-write case; the other three are read-after-write, write-after-read, and read-after-read. Among the four cases, read-after-read does not lead to a pipeline stall, because a read operation does not change the content of the register. Among the remaining three cases, write-after-write (see the example above) and read-after-write lead to a pipeline stall. The following piece of code illustrates the read-after-write case:

ADD R1, R2, R3    ; R3 = R1 + R2
SUB R3, 1, R4     ; R4 = R3 - 1

Here, the first instruction modifies the content of register R3 (a write operation), while the second instruction uses the modified content of R3 (a read operation) to load a value into register R4. While these two instructions proceed within the pipeline, care should be taken so that the value of register R3 read by the second instruction is the updated value resulting from the execution of the previous instruction. Figure 9.6 shows the Gantt chart for this case, with the first instruction called Ii and the second called Ii+1. It is clear that the operand of the second instruction cannot be fetched during time unit k + 3 and has to be delayed until time unit k + 5, because the modified content of register R3 is not available before time slot k + 5. Fetching the operand of the second instruction during time slot k + 3 would lead to incorrect results.

Example 3. Consider the execution of a sequence of seven instructions, with data dependencies among them, on a five-stage pipeline consisting of IF, ID, OF, IE, and IS. It is required to show the succession of these instructions in the pipeline.

Figure 9.7 illustrates the progression of these instructions in the pipeline, taking into consideration the data dependencies. The assumption made in constructing the Gantt chart of Figure 9.7 is that the fetching of an operand that depends on the results of a previous instruction's execution is delayed until that result is stored. As shown in the figure, a total of 16 time units are required to execute the given seven instructions, taking into consideration the data dependencies among the different instructions. Based on these results, we can compute the speedup and the throughput for executing the piece of code of Example 3 as:

Speedup S(5) = (time using sequential processing) / (time using pipeline processing) = (7 x 5) / 16 = 2.19
Throughput U(5) = (number of tasks executed per unit time) = 7 / 16 = 0.44

The above discussion of pipeline stalls due to instruction and data dependencies reveals three main points about the problems associated with such dependencies:
1. Both instruction and data dependencies lead to added delay in the pipeline.
2. Instruction dependency can lead to the fetching of the wrong instruction.
3. Data dependency can lead to the fetching of the wrong operand.

There exist a number of methods to deal with the problems resulting from instruction and data dependencies. Some of these methods try to prevent the fetching of the wrong instruction or operand, while others try to reduce the delay incurred in the pipeline due to the existence of instruction or data dependency. A number of these methods are introduced below.

Use of NOP (No Operation). This method can be used to prevent the fetching of the wrong instruction, in the case of instruction dependency, or the fetching of the wrong operand, in the case of data dependency. Recall Example 1, in which the execution of ten instructions I1 to I10 on a pipeline of four stages (IF, ID, IE, IS) was considered. There, it was assumed that when the branch instruction is fetched, the pipeline stalls until the result of executing the branch instruction is stored. That assumption was needed in order to prevent fetching the wrong instruction after fetching the branch instruction. In real-life situations, a mechanism is needed to guarantee fetching the appropriate instruction at the appropriate time. The insertion of NOP instructions can help carry out this task. A NOP is an instruction that has no effect on the status of the processor.

Example 4. Consider the execution of ten instructions I1 to I10 on a pipeline consisting of four pipeline stages: IF, ID, IE, and IS. Assume that instruction I4 is a conditional branch instruction and that when it is executed, the branch is not taken. In order to execute this set of instructions while preventing the fetching of the wrong instruction, we assume that a specified number of NOP instructions have been inserted so that they follow instruction I4 in the sequence and precede instruction I5. Figure 9.8 shows the Gantt chart illustrating the execution of the new sequence of instructions. The figure shows that the insertion of three NOP instructions after instruction I4 guarantees that the correct instruction to fetch after I4, in this case I5, will only be fetched during time slot number 8, at which point the result of executing I4 has been stored and the branch condition is known. It should be noted that the number of NOP instructions needed is n - 1, where n is the number of pipeline stages.

Example 4 illustrates the use of NOP instructions to prevent fetching the wrong instruction in the case of instruction dependency. A similar approach can be used to prevent fetching the wrong operand in the case of data dependency. Consider the execution of the following piece of code on a five-stage pipeline (IF, ID, OF, IE, IS):

ADD R1, R2, R3    ; R3 = R1 + R2
SUB R3, 1, R4     ; R4 = R3 - 1
MOV R5, R6        ; R6 = R5

Note the read-after-write (RAW) data dependency between the first two instructions: the fetching of the operand of the second instruction (the content of R3) cannot proceed until the result of the first instruction has been stored. To achieve this, NOP instructions can be inserted between the first two instructions, as shown below:

ADD R1, R2, R3    ; R3 = R1 + R2
NOP
NOP
SUB R3, 1, R4     ; R4 = R3 - 1
MOV R5, R6        ; R6 = R5

The execution of the modified sequence of instructions is shown in Figure 9.9. The figure shows that the use of NOPs guarantees that during time unit 6, instruction I2 fetches the correct value of R3, which was stored as the result of executing instruction I1 during time unit 5.

Methods Used to Reduce Pipeline Stall Due to Instruction Dependency (Unconditional Branches). In order to reduce the pipeline stall due to unconditional branches, it is necessary to identify unconditional branches as early as possible and before fetching the wrong instruction. It may also be possible to reduce the stall by reordering the instruction sequence. These methods are explained below.

Reordering of Instructions. In this case, the sequence of instructions is reordered such that correct instructions are brought to the pipeline while guaranteeing the correctness of the final results produced by the reordered set of instructions. Consider, for example, the execution of the group of instructions I1, I2, I3, I4, I5, ..., Ij, Ij+1, ... on a pipeline consisting of three pipeline stages: IF, IE, and IS. In this group of instructions, I4 is an unconditional branch instruction whose target is instruction Ij. Execution of this group of instructions in the sequence given will lead to the incorrect fetching of instruction I5 after fetching instruction I4. Now consider the execution of the reordered sequence I1, I4, I2, I3, Ij, Ij+1, .... The execution of the reordered sequence on the three-stage pipeline is shown in Figure 9.10. The figure shows that reordering the instructions causes instruction Ij to be fetched during time unit 5, that is, after instruction I4 has been executed. Reordering of instructions can be done by a "smart" compiler that scans the sequence of code and decides on the appropriate reordering that produces the correct final results while minimizing the number of time units lost to the instruction dependency. One important condition that must be satisfied for the reordering method to produce correct results is that the instructions swapped with the branch instruction must hold no data and/or instruction dependency relationships among them.

Use of Dedicated Hardware in the Fetch Unit. In this case, the fetch unit is assumed to have an associated dedicated hardware unit capable of recognizing unconditional branch instructions and computing the branch target address as quickly as possible. Consider the execution of the same sequence of instructions illustrated above, and assume that the fetch unit has a dedicated hardware unit that can recognize unconditional branch instructions and compute the branch address without using any additional time units. Figure 9.11 shows the Gantt chart for this sequence of instructions; it shows that the correct sequence of instructions is executed without incurring any extra time units. The assumption of needing no additional time units to recognize branch instructions and compute target branch addresses is, however, unrealistic. In typical cases, the hardware added to the fetch unit requires at least one additional time unit to carry out its task. The number of extra time units needed can be reduced, and indeed may be eliminated altogether, if the hardware unit operates while other instructions are being executed. This is the essence of the next method.

Precomputing of Branches and Reordering of Instructions. This method can be considered a combination of the two methods discussed above. In this case, the dedicated hardware used to recognize branch instructions and compute the target branch address performs its task concurrently with the execution of other instructions. Consider the same sequence of instructions and assume that the dedicated hardware unit requires one time unit to carry out its task. In this case, reordering the instructions to become I1, I2, I4, I3, Ij, Ij+1, ... will produce the correct results without causing any additional lost time units, as illustrated by the Gantt chart in Figure 9.12. Notice that time unit 4 is used by the dedicated hardware unit to compute the target branch address concurrently with the fetching of instruction I3. It should be noted that the success of this method depends on the availability of instructions that can be executed concurrently with the target-branch-address computation. In the case presented above, it was assumed that reordering of the instructions provides such instructions. Whenever such reordering is not possible, the use of an instruction queue together with prefetching of instructions can help provide the needed conditions, as explained next.

Instruction Prefetching. This method requires that instructions be fetched and stored in an instruction queue before they are needed. The method also calls for the fetch unit to have the hardware needed to recognize branch instructions and compute the target branch address. Whenever pipeline stalls due to data dependency prevent new instructions from being fetched into the pipeline, the fetch unit can use that time to continue fetching instructions and adding them to the instruction queue. On the other hand, whenever there is a delay in the fetching of instructions, for example due to instruction dependency, the prefetched instructions in the instruction queue can be used to provide the pipeline with new instructions, thus eliminating some of the time units otherwise lost due to instruction dependency. Providing the pipeline with the appropriate instruction from the instruction queue is usually done by what is called a dispatch unit. The technique of prefetching instructions and executing them during a pipeline stall due to instruction dependency is called branch folding.

Conditional Branch Instructions. The techniques discussed above in the context of unconditional branch instructions may not work in the case of conditional branch instructions, because in conditional branching the target branch address is not known until the execution of the branch instruction is completed. Therefore, a number of other techniques are used to minimize the number of time units lost to the instruction dependency represented by conditional branching. These are discussed below.

Delayed Branch. Delayed branch refers to the case whereby it is possible to fill the location(s) following a conditional branch instruction, called the branch delay slot(s), with useful instruction(s) that can be executed until the target branch address is known. Consider, for example, the execution of the following program loop on a pipeline consisting of two stages: fetch (F) and execute (E).

I1: LOAD 5, R1     ; R1 = 5
I2: SUB R2, 1, R2  ; R2 = R2 - 1
I3: BNN            ; branch if the result is not negative
I4: ADD R4, R5, R3 ; R3 = R4 + R5

It should be noted that at the end of the first pass through the loop, either instruction I1 or instruction I4 will have to be fetched, depending on the result of executing instruction I3. One way to deal with this situation is to delay the fetching of the next instruction until the result of executing I3 is known, which incurs extra delay in the pipeline. This extra delay may, however, be avoided if the sequence of instructions is reordered to become:

SUB  R2, 1, R2  ; R2 = R2 - 1
LOAD 5, R1      ; R1 = 5
BNN             ; branch if the result is not negative
ADD  R4, R5, R3 ; R3 = R4 + R5

Figure 9.13 shows the Gantt chart for executing the modified piece of code, for the case R2 = 3 on entering the loop. The figure indicates that branching takes place one instruction later than the place where the branch instruction actually appears in the original sequence, hence the name delayed branch. It is also clear from Figure 9.13 that by reordering the sequence of instructions it was possible to fill the branch delay time slot with a useful instruction, thus eliminating the extra delay in the pipeline. A number of studies have shown that "smart" compilers are able to make use of one branch delay time slot
in more than 80% of the cases. The use of branch delay time slots has led to improved speedup and throughput in processors whose compilers can fill the slots.

Prediction of the Next Instruction to Fetch. This method tries to reduce the time units that can potentially be lost due to instruction dependency by predicting the next instruction to fetch after fetching a conditional branch instruction. On the basis that branch outcomes are random, it would be possible to save about 50% of the otherwise lost time. A simple way to carry out this technique is to assume that, whenever a conditional branch is encountered, the system predicts that the branch will be taken (or, alternatively, not taken). In this way, fetching of instructions in sequential address order (or, alternatively, fetching starting from the target branch instruction) continues until the execution of the branch instruction is completed and its result is known. At this point, a decision is made as to whether the instructions executed under the assumption that the branch would be taken (or not taken) were the intended correct sequence. The outcome of this decision is one of two possibilities. If the prediction was correct, then execution continues with no wasted time units. If, on the other hand, the wrong prediction was made, then care must be taken so that the status of the machine, measured in terms of memory and register contents, is restored as if no speculative execution took place.

A prediction scheme of this kind leads to the same branch prediction decision every time a given instruction is encountered, hence the name static branch prediction; in its simplest form it is done at compilation time. Another technique is dynamic branch prediction, in which prediction is done at run time rather than at compile time. When a branch is encountered, a record is checked to find out whether that same branch was encountered before and, if so, whether it was taken or not taken; a run-time decision is then made as to whether to take the branch. In making such a decision, a two-state algorithm, with states "likely to be taken" (LTK) and "likely not to be taken" (LNT), can be followed. If the current state is LTK and the branch is taken, the algorithm maintains the LTK state; otherwise it switches to LNT. If, on the other hand, the current state is LNT and the branch is not taken, the algorithm maintains the LNT state; otherwise it switches to LTK. This simple algorithm works fine, particularly for a branch that goes backwards, for example during the execution of a loop; however, it leads to a misprediction when control reaches the last pass through the loop. A more robust algorithm uses four states and is used in the ARM 11 microarchitecture (see below).

It is interesting to notice that a combination of dynamic and static branch prediction techniques can lead to performance improvement. An attempt to use dynamic branch prediction is made first; if that is not possible, the system resorts to the static prediction technique. Consider, for example, the ARM 11 microarchitecture, the first implementation of the ARMv6 instruction set architecture. This architecture uses a dynamic/static branch prediction combination. A record in the form of a 64-entry, four-state branch target address cache (BTAC) is used to help the dynamic branch prediction find out whether a given branch has been encountered before. If the branch has been encountered, the record also shows whether it was most frequently taken or most frequently not taken, and a prediction is made based on the previous outcomes. The four states are: strongly taken, weakly taken, strongly not taken, and weakly not taken. In case no record is found for a branch, the static branch prediction procedure is used. The static procedure investigates the branch to find out whether it is going backwards or forwards. A branch going backwards is assumed to be part of a loop and is predicted taken; a branch going forwards is predicted not taken. The ARM 11 employs an eight-stage pipeline, and every correctly predicted branch leads to a typical saving of five processor clock cycles. Around 80% of branches are correctly predicted using the dynamic/static combination in the ARM 11 architecture. The pipeline features of the ARM 11 are introduced in the next subsection.

A branch prediction technique based on a 16K-entry branch history record is employed in the UltraSPARC III RISC processor, which has a 14-stage pipeline. In this processor, the impact of misprediction, in terms of the number of cycles lost, is reduced using the following approach: branches are predicted taken and the branch target instructions are fetched, while the fall-through instructions are prepared for issue in parallel. The use of a four-entry branch miss queue (BMQ) reduces the misprediction penalty to two cycles. The UltraSPARC III achieved about 95% success in branch prediction. The pipeline features of the UltraSPARC III are introduced in the next subsection.

Methods Used to Reduce Pipeline Stall Due to Data Dependency.

Hardware Operand Forwarding. Hardware operand forwarding allows the result of one ALU operation to be available to another ALU operation in the cycle that immediately follows. Consider the following two instructions:

ADD R1, R2, R3    ; R3 = R1 + R2
SUB R3, 1, R4     ; R4 = R3 - 1

It is easy to notice that there
exists readafterwrite data dependency two instructions correct execution sequence vestage pipeline id ie cause stall second instruction decoding result rst instruction stored r3 time operand second instruction new value stored r3 fetched second instruction however possible result rst instruction forwarded alu time unit stored r3 possible reduce stall time illustrated figure 914 assumption operand second instruction forwarded immedi ately available stored r3 requires modication data path added feedback path created allow operand forwarding modication shown using dotted lines figure 915 noted needed modication achieve hardware operand forwarding expensive requires careful issuing control signals also noted possible perform instruction decoding operand fetching time unit lost time units software operand forwarding operand forwarding alternatively performed software compiler case compiler smart enough make result performing instructions quickly available operand subsequent instruction desirable feature requires compiler perform data dependency analysis order determine operand possibly made available forwarded subsequent instructions thus reducing stall time data dependency analysis requires recognition basically three forms explained using simple examples storefetch case represents data dependency result instruction stored memory followed request fetch result subsequent instruction consider following sequence two instructions sequence operand needed second instruction contents memory location whose address stored register r3 already available register r2 therefore immediately forwarded moved register r4 recognizes data dependency smart compiler replace sequence following sequence store r2 r3 r3 r2 move r2 r4 r4 r2 fetchfetch case represents data dependency data stored instruction also needed operand subsequent instruction consider following instruction sequence load r3 r2 r2 r3 load r3 r4 r4 r3 sequence operand needed rst instruction contents memory location whose 
address stored register r3 also needed operand second instruction therefore operand immediately forwarded moved register r4 recognizes data dependency smart compiler replace sequence following sequence load r3 r2 r2 r3 move r2 r4 r4 r2 storestore case data stored instruction overwrit ten subsequent instruction consider following instruction sequence store r2 r3 r3 r2 store r4 r3 r3 r4 sequence results written rst instruction content register r2 written memory location whose address stored register r3 overwritten second instruction contents register r4 assuming two instructions executed sequence result written rst instruction needed io operation example dma sequence two instructions replaced following single instruction store r4 r3 r3 r4", "url": "RV32ISPEC.pdf#segment111", "timestamp": "2023-10-17 22:51:45", "segment": "segment111", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "9.3. EXAMPLE PIPELINE PROCESSORS ", "content": "section briey present two pipeline processors use variety pipeline techniques presented chapter focus coverage pipeline features architectures two processors arm 1026ejs ultrasparc iii", "url": "RV32ISPEC.pdf#segment112", "timestamp": "2023-10-17 22:51:45", "segment": "segment112", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "9.3.1. 
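The store-fetch transformation just described can be sketched as a tiny peephole pass. This is an illustrative sketch, not the book's compiler: the tuple encoding of instructions and the function name forward_store_fetch are my own assumptions.

```python
def forward_store_fetch(prog):
    """Replace a LOAD that immediately follows a STORE to the same
    address register with a register-to-register MOVE, avoiding the
    memory round trip (software operand forwarding)."""
    out = []
    i = 0
    while i < len(prog):
        op = prog[i]
        nxt = prog[i + 1] if i + 1 < len(prog) else None
        # pattern: ("store", src, addr) followed by ("load", addr, dst)
        if nxt and op[0] == "store" and nxt[0] == "load" and op[2] == nxt[1]:
            out.append(op)                       # keep the STORE
            out.append(("move", op[1], nxt[2]))  # forward src straight into dst
            i += 2
        else:
            out.append(op)
            i += 1
    return out
```

Running it on the book's example, forward_store_fetch([("store", "r2", "r3"), ("load", "r3", "r4")]) keeps the STORE and replaces the LOAD by ("move", "r2", "r4"), exactly the rewritten sequence above.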
ARM 1026EJ-S Processor ", "content": "This processor is part of a family of RISC processors designed by the Advanced RISC Machines (ARM) company. The series is designed to suit high-performance, low-cost, and low-power embedded applications. The ARM 1026EJ-S integer core has multiple execution units, thus allowing a number of instructions to exist in the same pipeline stage; it also allows the simultaneous execution of instructions. The ARM 1026EJ-S can deliver a peak throughput of one instruction per cycle. The integer core consists of the following units:

1. Prefetch unit: this unit is responsible for instruction fetch; it also predicts the outcome of branches whenever possible.
2. Integer unit: this unit is responsible for decoding the instructions coming from the prefetch unit. It contains a barrel shifter, an ALU, and a multiplier, and it executes instructions such as MOV, ADD, and MUL. The integer unit helps the load/store unit to execute load and store instructions, and it also helps in executing coprocessor instructions.
3. Load/store unit: this unit can load or store two registers (64 bits) per cycle.

The ARM 1026EJ-S is a pipeline processor whose pipeline consists of six stages:

1. Fetch stage: instruction cache access and branch prediction for instructions already fetched.
2. Issue stage: initial instruction decoding.
3. Decode stage: final instruction decode, register read for ALU operations, forwarding, and initial interlock resolution.
4. Execute stage: data access address calculation, data processing (shift, shift and saturate), ALU operations, first stage of multiplication, flag setting, condition code check, branch mispredict detection, and store data register read.
5. Memory stage: data cache access, second stage of multiplication, and saturations.
6. Write stage: register write and instruction retirement.

In this arrangement, the fetch stage uses a first-in-first-out (FIFO) buffer that can hold up to three instructions. The issue and decode stages can each contain a predicted branch in parallel with one instruction. The execute, memory, and write stages can simultaneously contain the following: (1) a predicted branch, (2) an ALU or multiply instruction, (3) an ongoing multiply or load/store-multiple instruction, and (4) ongoing multicycle coprocessor instructions. The prefetch unit operates in the fetch stage. It can fetch 64 bits every cycle from the instruction-side cache; however, it can issue only one 32-bit instruction per cycle to the integer unit. Pending instructions are placed in a prefetch buffer in the prefetch unit. While an instruction is in the prefetch buffer, the branch prediction logic can decode it to see whether it is a predictable branch. Having the prefetch buffer hold three instructions enables the prefetch unit to (1) detect branch instructions ahead of the fetch stage, (2) predict those branches that are likely to be taken, and (3) remove those branches that are likely not to be taken. If a branch is predicted taken, the instruction fetch address is redirected to the branch target address; if a branch is predicted not taken, the next instruction is fetched. In case there is not enough time to completely remove a branch, the fetch address is redirected anyway, thus still reducing the branch penalty. The integer unit executes unpredictable branches; to quickly obtain the required address, a dedicated fast branch adder is used, which avoids passing through the barrel shifter. The prefetch buffer is flushed in the following cases: (1) entry into an exception-processing sequence, (2) a load into the program counter (PC), (3) arithmetic manipulation of the PC, (4) execution of an unpredicted branch, and (5) detection of an erroneously predicted branch. A taken predicted branch does not lead to an automatic flush of the prefetch buffer. Mispredicted branches and unpredicted taken branches lead to a three-cycle penalty.", "url": "RV32ISPEC.pdf#segment113", "timestamp": "2023-10-17 22:51:45", "segment": "segment113", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "9.3.2. 
UltraSPARC III Processor ", "content": "The UltraSPARC III is based on the SUN SPARC-V9 RISC architectural specifications. A number of features characterize SPARC-V9, among them the following. (1) Simple instruction formats: all instructions are 32 bits wide, and memory access is done exclusively using load and store instructions. (2) Few addressing modes: memory addressing is done in one of two modes, register + register and register + immediate. (3) Triadic register operands: most instructions operate on two register operands, or on one register and a constant operand, with the results in most cases stored in a third register. (4) A large windowed register file.

The UltraSPARC III processor uses six independent units (see Fig. 9.16):

1. Instruction issue unit (IIU): this unit predicts the program flow, fetches the predicted path from memory, and directs the fetched instructions into the execution pipeline; instructions are forwarded to either the IEU or the FPU. The IIU incorporates a four-way associative instruction cache, an address translation buffer, and a 16K-entry branch predictor.
2. Integer execute unit (IEU): this unit executes integer instructions, including integer loading and storing, integer arithmetic and logic, and branch instructions. The IEU is capable of executing up to four integer instructions concurrently in a given cycle.
3. Data cache unit (DCU): this unit contains three different level-one (L1) data caches and a data address translation buffer. The data caches are a demand-fetch cache (four-way associative, 64 KB, 32-byte block size), a prefetch cache (four-way associative, 2 KB, 64-byte block size), and a write cache (four-way associative, 2 KB, 64-byte block size).
4. Floating-point unit (FPU): this unit executes floating-point and graphical instructions.
5. External memory unit (EMU): this unit controls access to the two off-chip memory modules, the level-two (L2) data cache and main memory.
6. System interface unit (SIU): this unit provides the communication interface between the microprocessor system and external main memory, I/O devices, and other processors in a multiprocessing configuration.

The UltraSPARC III has a 14-stage instruction pipeline:

1. Address generation (A) unit: generates instruction fetch addresses.
2. Instruction prefetch (P) unit: performs the first cycle of instruction cache access and the first cycle of branch prediction.
3. Instruction fetch (F) unit: performs the second cycle of instruction cache access and the second cycle of branch prediction; the F unit also performs virtual-to-physical address translation.
4. Branch target calculation (B) unit: computes the target address of branches and performs the first cycle of instruction decoding.
5. Instruction decode (I) unit: performs the second cycle of instruction decoding and directs instructions into the queue.
6. Instruction steer (J) unit: directs instructions to the appropriate execution unit; integer instructions are directed to the integer execution unit, while floating-point and graphical instructions are directed to the floating-point unit.
7. Register file read (R) unit: reads operands from the integer register file.
8. Integer execution (E) unit: executes integer instructions.
9. Data cache access (C) unit: accesses the data cache and forwards load data for word and double-word loads; executes the first cycle of floating-point instructions.
10. Memory bypass (M) unit: performs load data alignment for half-word and byte loads; executes the second cycle of floating-point instructions.
11. Working register file write (W) unit: performs writes into the working integer register file; executes the third cycle of floating-point instructions.
12. Pipe extend (X) unit: extends the integer pipeline for precise floating-point traps; executes the fourth cycle of floating-point instructions.
13. Trap (T) unit: reports traps upon their occurrence.
14. Done (D) unit: writes the architectural register file.

Two main techniques are employed in the UltraSPARC III for dealing with branches, as explained below.

Branch prediction. The UltraSPARC III uses a branch prediction technique that combines the static and dynamic branch prediction techniques explained before. In this case, branch prediction takes place in the IIU, which uses a branch prediction table, a hardware implementation of a dynamic prediction algorithm.

Branch prediction table. The branch prediction table (BPT) is a hardware implementation of a table of two-bit finite state machines (FSMs), that is, saturating up-down counters. When a branch is encountered, the branch target address and the branch history are used to find the table index location holding the prediction for that branch. If found, the branch condition is predicted taken if the entry corresponds to one of the two FSM states strongly taken and weakly taken, and predicted not taken if it corresponds to one of the two FSM states weakly not taken and strongly not taken. The counter is incremented each time the branch is taken and decremented otherwise, hence the name up-down counter. When the counter reaches the strongly taken state (the 11 state) it stays there as long as the branch is taken, and when it reaches the strongly not taken state (the 00 state) it stays there as long as the branch is not taken, hence the name saturating. The BPT of the UltraSPARC III consists of 16K entries of (16K × 2-bit) saturating up-down counters.

Global share dynamic prediction algorithm. The global share (gshare) algorithm uses two levels of branch history information to dynamically predict the direction of branches. The first level registers the history of the last k branches faced and represents the global branching behavior. This level can be implemented by providing a global branch history register, basically a shift register into which a 1 is entered for every taken branch and a 0 for every untaken branch. The second level of branch history information registers the branching behavior for the last occurrences of the specific pattern of these k branches; this information is kept in the branch prediction table. The gshare algorithm works by taking the lower bits of the branch target address and XORing them with the history register to get the index used to access the prediction table. The UltraSPARC III uses a modified version of the gshare algorithm. The modification requires that the predictor be pipelined in two stages: if the original gshare algorithm were used, the predictor would be indexed with an old copy of the program counter (PC). With the modified gshare algorithm, each time the predictor is accessed, eight counters are read, and the three low-order bits of the PC register are used to select one of them in the B pipeline stage.

Instruction buffer queues. The UltraSPARC III instruction issue unit (IIU) incorporates two instruction buffering queues, the branch instruction queue (BIQ) and the branch miss queue (BMQ), introduced below. Branch instruction queue. The BIQ is a 20-entry queue that allows the fetch and execution units to operate independently. The fetch unit predicts the execution path and continuously fills the BIQ; when a taken branch is encountered, two fetch cycles are lost to refill the BIQ. Branch miss queue. During those two lost cycles, the sequential (fall-through) instructions that have already been accessed are buffered in the four-entry BMQ. If a branch is found to be mispredicted, instructions from the BMQ are directed to the execution unit directly.", "url": "RV32ISPEC.pdf#segment114", "timestamp": "2023-10-17 22:51:45", "segment": "segment114", "image_urls": [], "Book": 
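The gshare indexing just described (low PC bits XORed with the global history, selecting a 2-bit saturating counter) can be sketched as follows. This is a generic illustration of the algorithm, not the UltraSPARC III's pipelined implementation; the class name and the 4-bit history width are assumptions.

```python
class GsharePredictor:
    """Minimal gshare: 2-bit saturating counters indexed by
    (PC bits XOR global branch history register)."""

    def __init__(self, bits=4):
        self.mask = (1 << bits) - 1
        self.history = 0                  # global branch history shift register
        self.table = [0] * (1 << bits)    # 2-bit counters; 0 = strongly not taken

    def _index(self, pc):
        return ((pc >> 2) ^ self.history) & self.mask

    def predict(self, pc):
        # taken if the counter is in the weakly/strongly taken half
        return self.table[self._index(pc)] >= 2

    def update(self, pc, taken):
        i = self._index(pc)
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)
        # shift the outcome into the global history register
        self.history = ((self.history << 1) | int(taken)) & self.mask
```

After training on a branch that is always taken, the history register saturates and the selected counter climbs to strongly taken, so the predictor settles on "taken" for that branch.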
"[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "9.4. INSTRUCTION-LEVEL PARALLELISM ", "content": "Contrary to pipeline techniques, instruction-level parallelism (ILP) is based on the idea of multiple-issue processors (MIPs). An MIP has multiple pipelined datapaths (instruction execution pipelines), each of which can issue and execute one instruction per cycle. Figure 9.17 shows the case of a processor with three pipes; for comparison purposes, the figure also shows the sequential single-pipeline case. It is clear from the figure that while the limit on the number of cycles per instruction in the case of a single pipeline is CPI = 1, an MIP can achieve CPI < 1. In order to make full use of ILP, an analysis has to be made to identify the instruction and data dependencies that exist in a given program. This analysis leads to the appropriate scheduling of the groups of instructions that can be issued simultaneously while retaining program correctness. Static scheduling results in the use of (very) long instruction word (VLIW) architectures, while dynamic scheduling results in the use of superscalar architectures. A VLIW instruction represents a bundle of many operations to be issued simultaneously; the compiler is responsible for checking dependencies and making the appropriate groupings/scheduling of operations. In contrast, superscalar architectures rely entirely on hardware for scheduling instructions.

Superscalar architectures. While a scalar machine is able to perform one arithmetic operation at a time, a superscalar architecture (SPA) is able to fetch, decode, execute, and store the results of several instructions at the same time, transforming a static sequential instruction stream into a dynamic parallel one. In order to execute a number of instructions simultaneously and, upon their completion, re-establish the original sequential instruction stream, instructions are completed in their original order. In an SPA, instruction processing consists of the fetch, decode, issue, and commit stages. In the fetch stage, multiple instructions are fetched simultaneously; branch prediction and speculative execution are also performed in the fetch stage, in order to keep fetching instructions beyond branch and jump instructions. Decoding is done in two steps. Predecoding, performed between main memory and the cache, is responsible for identifying branch instructions; the actual decoding is used to determine the following about each instruction: (1) the operation to be performed, (2) the location of the operands, and (3) the location where the results are to be stored. In the issue stage, the instructions that can start execution, from among the dispatched ones, are identified. In the commit stage, the generated values/results are written into their destination registers. A crucial step in processing instructions in SPAs is dependency analysis. The complexity of this analysis grows quadratically with the instruction word size, which puts a limit on the degree of parallelism that can be achieved in SPAs: a degree of parallelism higher than four becomes impractical. Beyond that limit, dependence analysis and scheduling must be done by the compiler, which is the basis of the VLIW approach.

The (very) long instruction word (VLIW) approach. In the VLIW approach, the compiler performs the dependency analysis and determines the appropriate groupings/scheduling of operations. Operations that can be performed simultaneously are grouped into a (very) long instruction word (VLIW). The instruction word is therefore made long enough to accommodate the maximum possible degree of parallelism. For example, in the IBM DAISY machine the instruction word is eight operations long; it is called an 8-issue machine. In a VLIW, resource binding can be done by devoting each field of the instruction word to one and only one functional unit; however, this arrangement leads to a limit on the mix of instructions that can be issued per cycle. A more flexible approach is to allow a given instruction field to be occupied by different kinds of operations. For example, the Philips TriMedia machine is a 5-issue machine in which 27 functional units are mapped onto the 5 issue slots. In the IBM DAISY, every instruction implements a multiway path selection scheme: the first 72 bits of the VLIW, called the header, contain the information on tree-form condition tests and branch targets. The header is followed by eight 23-bit parcels, each encoding one operation. In order to solve the problem of providing operands to such a large number of functional units, the IBM DAISY keeps eight identical copies of the register file, one for each of the eight functional units.", "url": "RV32ISPEC.pdf#segment115", "timestamp": "2023-10-17 22:51:46", "segment": "segment115", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "9.5. 
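The compiler-side grouping that VLIW relies on can be illustrated with a toy greedy scheduler: walk the instruction stream and start a new bundle whenever the issue width is exhausted or the next operation depends on a result produced inside the current bundle. This is a simplified sketch under an assumed (dest, sources) tuple encoding; real VLIW compilers do far more sophisticated scheduling.

```python
def bundle(prog, width=8):
    """Greedily pack (dest, srcs) operations into VLIW-style bundles,
    starting a new bundle on a RAW/WAW conflict or when the bundle is full."""
    bundles, cur, written = [], [], set()
    for dest, srcs in prog:
        conflict = dest in written or any(s in written for s in srcs)
        if cur and (len(cur) == width or conflict):
            bundles.append(cur)
            cur, written = [], set()
        cur.append((dest, srcs))
        written.add(dest)
    if cur:
        bundles.append(cur)
    return bundles
```

For a four-instruction stream where the third instruction reads the first one's result, the scheduler emits two bundles of two independent operations each, rather than four sequential issues.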
ARITHMETIC PIPELINE ", "content": "The same principles used in instruction pipelining can be used to improve the performance of computers in performing arithmetic operations such as add, subtract, and multiply. In this case, the principles are used to realize the arithmetic circuits inside the ALU. In this section, we elaborate on the use of an arithmetic pipeline as a means to speed up arithmetic operations. We start with fixed-point arithmetic operations and then discuss floating-point operations.", "url": "RV32ISPEC.pdf#segment116", "timestamp": "2023-10-17 22:51:46", "segment": "segment116", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "9.5.1. Fixed-Point Arithmetic Pipelines ", "content": "The basic fixed-point arithmetic operation performed inside the ALU is the addition of two n-bit operands A = a(n−1) a(n−2) ... a2 a1 a0 and B = b(n−1) b(n−2) ... b2 b1 b0. The addition of the two operands can be performed using a number of techniques. These techniques differ basically in two attributes: the degree of complexity and the achieved speed. The two attributes are somewhat contradictory: a simple realization may lead to a slower circuit, while a complex realization may lead to a faster circuit. Consider, for example, the carry-ripple-through adder (CRTA) and the carry-lookahead adder (CLAA). While the CRTA is simple but slower, the CLAA is more complex but faster. It is, however, possible to modify the CRTA in such a way that a number of pairs of operands can be operated upon, pipelined, inside the adder, thus improving the overall speed of addition in the CRTA. Figure 9.18 shows an example of a modified 4-bit CRTA. In this case, the two operands A and B are presented to the CRTA with the use of synchronizing elements (clocked latches). The latches guarantee that the movement of partial carry values within the CRTA is synchronized with the input of the higher-order operand bits to subsequent stages of the adder. In the example, the arrival of the first carry c0 and the second pair of bits a1 and b1 are synchronized at the input of the second full adder (counting from the low-order to the high-order bits) using a latch. Although the operation of the modified CRTA remains in principle the same (the carry ripples through the adder), the provision of the latches allows the possibility of presenting multiple sets of pairs of operands to the adder at the same time. Consider, for example, the case of adding m pairs of operands, whereby the operands in each pair are n-bit each. The time needed to perform the addition of the m pairs using the nonpipelined CRTA is given by T_np = m × n × t_a, where t_a is the time needed to perform a single-bit addition. This is to be compared to the time needed to perform the same computation using the pipelined CRTA, given by T_pp = (n + m − 1) × t_a. For example, with m = 16 and n = 64 bits, T_np = 1024 t_a while T_pp = 79 t_a, thus resulting in a speedup of about 13. In the extreme case, whereby it is possible to present an unlimited number of pairs of operands to the CRTA at the same time, the speedup approaches 64, the number of bits per operand.", "url": "RV32ISPEC.pdf#segment117", "timestamp": "2023-10-17 22:51:46", "segment": "segment117", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "9.5.2. Floating-Point Arithmetic Pipelines ", "content": "Using a similar approach, it is possible to pipeline floating-point (FP) addition/subtraction. In this case, the pipeline is organized around the operations needed to perform FP addition. The main operations needed in FP addition are exponent comparison (EC), exponent alignment (EA), addition (AD), and normalization (NZ). Therefore, a possible pipeline organization is a four-stage pipeline, each stage performing one of the operations EC, EA, AD, and NZ. Figure 9.19 shows a schematic of such a pipelined FP adder. It is possible to have multiple sets of FP operands proceeding inside the adder at the same time, thus reducing the overall time needed for FP addition. Synchronizing latches are needed in order to synchronize the operands at the input of a given stage of the FP adder.", "url": "RV32ISPEC.pdf#segment118", "timestamp": "2023-10-17 22:51:46", "segment": "segment118", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "9.5.3. 
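The timing comparison above is easy to check numerically. A small sketch follows, with times expressed in units of t_a; the symbol m for the number of operand pairs is inferred from the worked numbers (16 pairs of 64-bit operands).

```python
def t_nonpipelined(n, m):
    """m pairs of n-bit operands through a non-pipelined carry-ripple adder."""
    return m * n

def t_pipelined(n, m):
    """Pipelined CRTA: fill the n stages once, then one result per t_a."""
    return n + m - 1

n, m = 64, 16
assert t_nonpipelined(n, m) == 1024
assert t_pipelined(n, m) == 79
print(round(t_nonpipelined(n, m) / t_pipelined(n, m), 2))  # speedup ≈ 12.96
```

Letting m grow without bound makes the ratio m·n / (n + m − 1) approach n = 64, which is the limiting speedup quoted in the text.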
Pipelined Multiplication Using Carry-Save Addition ", "content": "As indicated before, one of the main problems in addition is the fact that the carry has to ripple from one stage to the next. Carry rippling through the stages can be eliminated using a method called carry-save addition. Consider the case of adding 44 + 28 + 32 + 79. A possible way to add these numbers without carry ripple is illustrated in Figure 9.20. The idea is to delay the addition of the carries resulting from the intermediate stages to the last step of the addition; only in the last stage is a carry-ripple stage employed. Carry-save addition can be used to realize pipelined multiplication, with the carry-save adder as the building block. Consider, for example, the multiplication of two n-bit operands A and B. The multiplication operation can be transformed into additions, as shown in Figure 9.21; the figure illustrates the case of multiplying two 8-bit operands A and B. A carry-save-based multiplication scheme using the principle shown in Figure 9.21 is shown in Figure 9.22. The scheme is based on the idea of producing the set of partial products needed and then adding them using a carry-save addition scheme.", "url": "RV32ISPEC.pdf#segment119", "timestamp": "2023-10-17 22:51:46", "segment": "segment119", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "9.6. 
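The carry-save idea in the 44 + 28 + 32 + 79 example can be sketched directly: each carry-save step compresses three numbers into a sum word and a shifted carry word with no carry propagation, and only the final two-number addition ripples. A minimal sketch (the function names are mine):

```python
def carry_save(a, b, c):
    """3:2 compressor: bitwise sum plus shifted majority carry, no rippling."""
    return a ^ b ^ c, ((a & b) | (a & c) | (b & c)) << 1

def csa_sum(nums):
    """Reduce a list of addends to two with carry-save steps, then do a
    single ordinary (carry-ripple) addition at the very end."""
    nums = list(nums)
    while len(nums) > 2:
        s, carry = carry_save(nums[0], nums[1], nums[2])
        nums = [s, carry] + nums[3:]
    return nums[0] + nums[1] if len(nums) == 2 else nums[0]

print(csa_sum([44, 28, 32, 79]))  # → 183
```

Only the last line's `+` performs a rippling addition; every intermediate step is carry-free, which is exactly what makes the structure pipelinable as a multiplier building block.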
SUMMARY ", "content": "In this chapter, we have considered the basic principles involved in designing pipeline architectures. Our coverage started with a discussion of a number of metrics that can be used to assess the goodness of a pipeline. We then moved on to present a general discussion of the main problems that need to be considered in designing a pipelined architecture. In particular, we considered two main problems: instruction dependency and data dependency. The effect of these two problems on the performance of a pipeline was elaborated, and possible techniques that can be used to reduce the effect of instruction and data dependency were introduced and illustrated. Two examples of recent pipeline architectures, the ARM 11 microarchitecture and the UltraSPARC III processor, were presented. Our discussion in the chapter ended with an introduction to the ideas used in realizing pipelined arithmetic architectures.", "url": "RV32ISPEC.pdf#segment120", "timestamp": "2023-10-17 22:51:46", "segment": "segment120", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "CHAPTER 10 ", "content": "Reduced Instruction Set Computers (RISCs). This chapter is dedicated to the study of reduced instruction set computers (RISCs). These machines represent a noticeable shift in computer architecture paradigm, one that promotes simplicity rather than complexity. The RISC approach is substantiated by a number of studies indicating that assignment statements, conditional branching, and procedure calls/returns represent more than 90% of the operations performed, while complex operations such as long division represent only about 2% of the operations performed, in a typical set of benchmark programs. These studies showed also that, among those operations, procedure calls/returns are the most time-consuming. Based on such results, the RISC approach calls for enhancing architectures with the resources needed to make the execution of the most frequent and the most time-consuming operations efficient. The seeds of the RISC approach appeared in the early to mid-1970s, and its real-life manifestations appeared in the Berkeley RISC-I and the Stanford MIPS machines, introduced in the mid-1980s. Today, RISC-based machines are a reality, and they are characterized by a number of common features: a simple, reduced instruction set; a fixed instruction format; one instruction per machine cycle; pipelined instruction fetch/execute units; an ample number of general-purpose registers (or, alternatively, a compiler optimized for register usage and code generation); load/store memory operations; and a hardwired control unit design. Our coverage in this chapter starts with a discussion of the evolution of RISC architectures and a brief discussion of the performance studies that led to the adoption of the RISC paradigm. Overlapped register windows, an essential concept in RISC development, are then discussed. Toward the end of the chapter, we provide details on a number of RISC-based architectures: the Berkeley RISC, the Stanford MIPS, the Compaq Alpha, and the Sun UltraSPARC.", "url": "RV32ISPEC.pdf#segment121", "timestamp": "2023-10-17 22:51:46", "segment": "segment121", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.1. RISC/CISC EVOLUTION CYCLE ", "content": "The term RISC stands for reduced instruction set computers. It was originally introduced with the notion of architectures that can execute as fast as one instruction per clock cycle. RISC started as a notion in the mid-1970s and eventually led to the development of the first RISC machine, the IBM 801 minicomputer. The launching of the RISC notion announced the start of a new paradigm in the design of computer architectures. This paradigm promotes simplicity in computer architecture design; in particular, it calls for going back to basics, rather than providing extra hardware support for high-level languages. The paradigm shift relates to what is known as the semantic gap, a measure of the difference between the operations provided by high-level languages (HLLs) and those provided by computer architectures. It was recognized that a wider semantic gap has a number of undesirable consequences, including (a) execution inefficiency, (b) excessive machine program size, and (c) increased compiler complexity. With such expected consequences, the conventional response of computer architects was to add layers of complexity to newer architectures, including an increase in the number and complexity of instructions, together with an increase in the number of addressing modes. The architectures resulting from the adoption of this add-complexity approach became known as complex instruction set computers (CISCs). However, it soon became apparent that a complex instruction set has a number of disadvantages, including a complex instruction decoding scheme, an increased size of the control unit, and increased logic delays. These drawbacks prompted a number of computer architects to adopt the principle that less is actually more, and a number of studies were conducted to investigate the impact of complexity on performance, as discussed next.", "url": "RV32ISPEC.pdf#segment122", "timestamp": "2023-10-17 22:51:46", "segment": "segment122", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.2. RISCs DESIGN PRINCIPLES ", "content": "A computer with a minimum number of instructions has the disadvantage that a large number of instructions must be executed to realize even a simple function, resulting in a speed disadvantage. On the other hand, a computer with an inflated number of instructions has the disadvantage of complex decoding and hence also a speed disadvantage. It is natural to believe that a computer with a carefully selected, reduced set of instructions should strike a balance between the two design alternatives. The question then becomes: what constitutes a carefully selected, reduced set of instructions? In order to arrive at an answer, it was necessary to conduct in-depth studies of a number of aspects of computation. These aspects include (a) which operations are most frequently performed in the execution of typical benchmark programs, (b) which operations are the most time consuming, and (c) which types of operands are most frequently used. A number of early studies were conducted in order to find the typical breakdown of the operations performed in executing benchmark programs. The estimated distribution of operations is shown in Table 10.1. A careful look at the estimated percentages of operations performed reveals that assignment statements, conditional branches, and procedure calls constitute about 90% of the total operations performed; all other operations, however complex they may be, make up the remaining 10%. In addition to these findings, studies of the time-performance characteristics of operations revealed that, among these operations, procedure calls/returns are the most time consuming. As regards the types of operands used in a typical computation, it was noticed that the majority of references (no less than 60%) are made to simple scalar variables, and that no less than 80% of the scalars are local variables of procedures. These observations about typical program behavior led to the following conclusions. (1) Simple movement of data, represented by assignment statements, rather than complex operations, is substantial and should be optimized. (2) Conditional branches are predominant, and therefore careful attention should be paid to the sequencing of instructions; this is particularly true when it is known that pipelining is indispensable to use. (3) Procedure calls/returns are the most time-consuming operations, and therefore a mechanism should be devised to make the communication of parameters among the calling and called procedures cause the least possible number of instructions to execute. (4) A prime candidate for optimization is the mechanism for storing and accessing local scalar variables. These conclusions led to the argument that, instead of bringing the instruction set architecture closer to HLLs, it would be more appropriate to optimize the performance of the most time-consuming features of typical HLL programs. This obviously calls for making the architecture simpler, rather than more complex; remember that complex operations such as long division represent only a small portion, less than 2%, of the operations performed in a typical computation. One may then ask: how can this be achieved? The answer is (a) by keeping the most frequently accessed operands in CPU registers and (b) by minimizing register-to-memory operations. These two principles can be achieved using the following three mechanisms: (1) use a large number of registers to optimize operand referencing and reduce processor-memory traffic; (2) optimize the design of the instruction pipeline so that a minimum of compiler support for code generation is needed (see Chapter 8); (3) use a simplified instruction set, leaving out complex and unnecessary instructions. Two approaches have been identified for implementing these three mechanisms. (1) The software approach: use the compiler to maximize register usage by allocating registers to the variables that are used the most in a given time period. This is the philosophy adopted in the Stanford MIPS machine. (2) The hardware approach: use ample CPU registers, so that more variables can be held in registers for longer periods of time. This is the philosophy adopted in the Berkeley RISC machine. The hardware approach necessitates the use of a new register organization, called the overlapped register window, explained next.", "url": "RV32ISPEC.pdf#segment123", "timestamp": "2023-10-17 22:51:47", "segment": "segment123", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.3. 
OVERLAPPED REGISTER WINDOWS ", "content": "The main idea behind the use of register windows is to minimize memory accesses. In order to achieve this, a large number of CPU registers is needed. For example, the number of CPU general-purpose registers available in the original SPARC machine, one of the earliest RISCs, is 120. However, it may be desirable to make only a subset of these registers visible at a given time and to address that subset as if it were the only set of registers available. Therefore, the CPU registers are divided into multiple small sets, each assigned to a different procedure, and a procedure call automatically switches the CPU to use a different fixed-size window of registers. In order to minimize the actual movement of parameters among calling and called procedures, each set of registers is divided into three subsets: parameter registers, local registers, and temporary registers. When a procedure call is made, a new overlapping window is created such that the temporary registers of the caller are physically the same as the parameter registers of the called procedure. This overlap allows parameters to be passed among procedures without actual movement of data (Fig. 10.1). In addition, a set of a fixed number of CPU registers, identified as the global registers, is available to all procedures. For example, references to registers 0 to 7 in the SPARC architecture refer to the unique global registers, while references to registers 8 to 31 indicate registers in the current window. The current window is pointed to using what is normally called the current window pointer (CWP). When all windows are filled, the register windows wrap around, thus acting like a circular buffer. Table 10.2 shows the number of windows and the window size for a number of architectures. It should be noted that a study was conducted in 1985 to find the impact of using register windows on the performance of the Berkeley RISC. In this study, two versions of the machine were considered: the first was the actual design with register windows, and the second was a hypothetical Berkeley RISC implemented without windows. The results of the study indicated a decrease in memory traffic by a factor of 2 to 4, depending on the specific benchmark, due to the use of register windows.", "url": "RV32ISPEC.pdf#segment124", "timestamp": "2023-10-17 22:51:47", "segment": "segment124", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.4. 
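The overlap between a caller's temporary registers and a callee's parameter registers can be sketched as an index mapping onto one physical register file. The numbers below (8 globals, 16 fresh registers per call, 8 windows, SPARC-like register numbering) are illustrative assumptions, not the exact Berkeley RISC or SPARC layout.

```python
N_GLOBALS = 8          # r0-r7, visible to every procedure
REGS_PER_WINDOW = 16   # fresh locals + temporaries allocated per call
N_WINDOWS = 8

def phys(cwp, r):
    """Map architectural register r (0-31) of window cwp onto the physical
    file.  Assumed layout: r8-r15 incoming parameters, r16-r23 locals,
    r24-r31 outgoing temporaries (which become the callee's parameters)."""
    if r < N_GLOBALS:
        return r  # globals are shared by all windows
    windowed = (cwp * REGS_PER_WINDOW + (r - N_GLOBALS)) % (N_WINDOWS * REGS_PER_WINDOW)
    return N_GLOBALS + windowed

# caller's outgoing temporaries are physically the callee's parameters
assert phys(cwp=0, r=24) == phys(cwp=1, r=8)
# when all windows are filled, the mapping wraps around (circular buffer)
assert phys(cwp=N_WINDOWS - 1, r=24) == phys(cwp=0, r=8)
```

The two assertions capture the two properties described in the text: parameter passing without data movement, and the circular-buffer behavior of the CWP.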
RISCs VERSUS CISCs ", "content": "The choice of RISC versus CISC depends on the factors that must be weighed by a computer designer. These factors include size, complexity, and speed. A RISC architecture has to execute more instructions to perform the same function performed by a CISC architecture. To compensate for this drawback, RISC architectures use the chip area saved by not using complex instruction decoders to provide a large number of CPU registers, additional execution units, and instruction caches. The use of such resources leads to a reduction in the traffic between the processor and the memory. On the other hand, a CISC architecture, being richer in complex instructions, requires a smaller number of instructions than its RISC counterpart; however, a CISC architecture requires a complex decoding scheme and hence is subject to logic delays. It is therefore reasonable to consider that the RISC and CISC paradigms differ primarily in the strategy used to trade off different design factors. There is little reason to believe that an idea that improves performance in a RISC architecture would fail to do the same in a CISC architecture, and vice versa. For example, one of the key issues in RISC development is the use of an optimizing compiler to reduce the complexity of the hardware and to optimize the use of CPU registers; such ideas are applicable to CISC compilers as well, and increasing the number of CPU registers could very much improve the performance of a CISC machine. This could be the reason behind finding no pure commercially available RISC or CISC machine. It is not unusual to see a RISC machine with complex floating-point instructions (see the details of the SPARC architecture in the next section). It is equally expected to see CISC machines making use of the register-windows RISC idea; in fact, there exist studies indicating that a CISC machine, the Motorola 680xx, equipped with register windows can achieve the same 2 to 4 times decrease in memory traffic, the factor achieved by the RISC architecture (the Berkeley RISC) due to the use of register windows. It should, however, be noted that most processor developers (except Intel and its associates) have opted for RISC processors, and most computer system manufacturers, such as Sun Microsystems, are using RISC processors in their products; owing to compatibility concerns in the PC-based market, some companies are still producing CISC-based products. Tables 10.3 and 10.4 show a limited comparison between an example RISC machine and an example CISC machine in terms of performance and characteristics, respectively. A more elaborate comparison among a number of commercially available RISC and CISC machines is shown in Table 10.5. It is worth mentioning at this point the following set of common characteristics that can be observed among RISC machines: (1) fixed-length instructions; (2) a limited number of instructions (128 or fewer); (3) a limited set of simple addressing modes (at minimum two: indexed and PC-relative); (4) all operations are performed on registers, with no memory operations; (5) only two memory operations, load and store; (6) pipelined instruction execution; (7) a large number of general-purpose registers and/or the use of advanced compiler technology to optimize register usage; (8) one instruction per clock cycle; and (9) a hardwired control unit design, rather than microprogramming.", "url": "RV32ISPEC.pdf#segment125", "timestamp": "2023-10-17 22:51:47", "segment": "segment125", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.5. PIONEER (UNIVERSITY) RISC MACHINES ", "content": "In this section, we present brief descriptions of the main architectural features of two pioneer university-introduced RISC machines: the first is the Berkeley RISC, and the second is the Stanford MIPS machine. These machines are presented as a means to show what the original RISC machines looked like, and also to help the reader appreciate the advances made in RISC machine development since their inception.", "url": "RV32ISPEC.pdf#segment126", "timestamp": "2023-10-17 22:51:47", "segment": "segment126", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.5.1. 
The Berkeley RISC ", "content": "There are two Berkeley RISC machines: RISC-I and RISC-II. Unless otherwise mentioned, we refer to RISC-I in this discussion. The RISC is a 32-bit load/store architecture. There are 138 32-bit registers (R0-R137) available to the users. The first ten registers (R0-R9) are global registers seen by all procedures; register R0 is used to synthesize addressing modes and operations not directly available on the machine. Registers R10-R137 are divided according to an overlapping register window scheme, with 32 registers visible at any instant. A 5-bit variable called the current window pointer (CWP) is used to point to the current register set. RISC instructions occupy a full word (32 bits). The RISC instruction set is divided into four categories: ALU (a total of 12 instructions), load/store (a total of 16 instructions), branch and call (a total of seven instructions), and special instructions (a total of four instructions). Examples of RISC instructions are:

1. ALU: ADD Rs, Rd ; Rd ← Rd + Rs
2. Load/store: LDXW (Rx), Rd ; Rd ← (Rx)
3. Branch and call: JMPX COND, (Rx) ; PC ← Rx if COND is the condition
4. Special instructions: GETPSW Rd ; Rd ← PSW

Arithmetic and logical instructions have the three-operand form destination ← source1 op source2 (Fig. 10.2). Load and store instructions may use either of the indicated formats. While Rd is the register to be loaded or stored, the low-order 19 bits of the instruction are used to determine the effective address. These instructions load and store 8-, 16-, 32-, and 64-bit quantities into and from the 32-bit registers. Two methods are provided for calling procedures. The CALL instruction uses a 30-bit PC-relative offset (Fig. 10.3), while the JMP instruction uses the instruction formats used for arithmetic and logical operations, which allows the return address to be put in a register. RISC uses a three-address instruction format, with the availability of two one-address instructions. Two addressing modes are provided, the indexed mode and the PC-relative mode; the indexed mode can be used to synthesize three more modes: base-absolute (direct), register indirect, and indexed linear byte array modes. RISC uses a static two-stage pipeline: fetch and execute. The floating-point unit (FPU) contains thirty-two 32-bit registers, which can hold 32 single-precision (32-bit) floating-point operands, 16 double-precision (64-bit) operands, or eight extended-precision (128-bit) operands. The FPU can execute about 20 floating-point instructions in single, double, and extended precision, using the first instruction format used for arithmetic (addition) instructions. Instructions are provided for loading and storing the FPU's registers, and the CPU can also test the FPU's registers and branch conditionally on the results. RISC employs a conventional MMU supporting a single paged 32-bit address space. The RISC four-bus organization is shown in Figure 10.4.", "url": "RV32ISPEC.pdf#segment127", "timestamp": "2023-10-17 22:51:47", "segment": "segment127", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.5.2. Stanford MIPS (Microprocessor Without Interlock Pipe Stages) ", "content": "The MIPS is a 32-bit pipelined load/store machine that uses a five-stage pipeline consisting of instruction fetch (IF), instruction decode (ID), operand decode (OD), operand store/execution (OS/EX), and operand fetch (OF). The first three stages perform, respectively, instruction fetch, instruction decode, and operand decode. The OS/EX stage sends an operand to memory in the case of a store instruction, or uses the ALU in the case of instruction execution; the last stage receives an operand in the case of a load instruction. MIPS uses a mechanism called pipeline interlock in order to prevent an instruction from continuing until its needed operand is available. Unlike the Berkeley RISC, MIPS has a single set of sixteen 32-bit general-purpose registers; the MIPS compiler optimizes the use of these registers in whatever way is best for the program currently being compiled. In addition to the 16 general-purpose registers, MIPS is provided with four additional registers to hold the four previous PC values, in order to support backtracking and restart in case of a fault; a fifth register is used to hold a future PC value to support branch instructions. Four addressing modes are used in MIPS: immediate, indexed, based with offset, and base shifted. Four instruction groups are identified in MIPS: ALU, load/store, control, and special instructions. A total of 13 ALU instructions are provided; these include register-to-register instructions in both two- and three-operand formats (Fig. 10.5). A total of 10 load/store instructions are provided; these use 16 or 32 bits, in the latter case with indexed addressing performed by adding a 16-bit signed constant to a register, using the second format in Figure 10.5. A total of six control flow instructions are provided, including jumps, relative jumps, and compare instructions. Two special flow instructions are provided to support procedure and interrupt linkage.
examples mips instructions 1 alu add src1 src2 dst dst src1 src2 2 loadstore ld src1 src2 dst dst src1 src2 3 control jmp dst pc dst 4 special function savepc pc mips provide direct support oatingpoint operations floating point operations done specialized coprocessor surprisingly nonrisc instructions mult div included use special functional units contents two registers multiplied divided 64bit product kept two special registers lo hi procedure call made jump instruction shown figure 106 instruction uses 26bit jump target address mips virtual address 32 bits long thus allowing four gwords virtual address space virtual address divided 20bit virtual page number 12bit offset within page actual implementation mips restricted packaging constraints allowing 24 address pins actual physical address space 224 16 mwords 32 bits support offchip tlb address translation provided mips organization shown figure 107", "url": "RV32ISPEC.pdf#segment128", "timestamp": "2023-10-17 22:51:47", "segment": "segment128", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.6. EXAMPLE OF ADVANCED RISC MACHINES ", "content": "section introduce two representative advanced risc machines emphasis coverage pipeline features branch handling mech anisms used", "url": "RV32ISPEC.pdf#segment129", "timestamp": "2023-10-17 22:51:47", "segment": "segment129", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.6.1. 
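The overlapping register-window mechanism described for the Berkeley RISC can be sketched in a few lines of Python. This is a hypothetical illustration rather than the exact RISC I mapping: the window size, overlap, and numbering below are assumptions chosen so that ten globals plus one 22-register window give the 32 visible registers mentioned in the text, and so that a caller's "out" registers physically coincide with the callee's "in" registers.

```python
# Sketch of an overlapping register-window scheme in the style of the
# Berkeley RISC (hypothetical sizes: 10 globals, 22-register windows
# overlapping by 6, so 32 registers are visible at any instant).
GLOBALS = 10          # R0-R9, visible to every procedure
WINDOW = 22           # per-procedure registers (in + local + out)
OVERLAP = 6           # caller's "out" registers == callee's "in" registers
PHYS = 138            # total physical registers, as in the text

class WindowedRegFile:
    def __init__(self):
        self.regs = [0] * PHYS
        self.cwp = 0  # current window pointer (CWP)

    def _phys(self, r):
        # R0-R9 are global; R10-R31 map into the current window.
        if r < GLOBALS:
            return r
        base = GLOBALS + self.cwp * (WINDOW - OVERLAP)
        return base + (r - GLOBALS)

    def read(self, r):
        return self.regs[self._phys(r)]

    def write(self, r, v):
        self.regs[self._phys(r)] = v

    def call(self):
        self.cwp += 1   # slide the window; no memory traffic at call time

    def ret(self):
        self.cwp -= 1

rf = WindowedRegFile()
rf.write(26, 42)   # caller places a parameter in an "out" register
rf.call()
# The same physical register is now the callee's "in" register R10.
assert rf.read(10) == 42
```

Because `call()` only adjusts the CWP, parameters are passed without any actual movement of data, which is exactly the benefit the register-window scheme is designed to deliver.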
Compaq (Formerly DEC) Alpha 21264 ", "content": "The Alpha 21264 (EV6) is a third-generation Compaq (formerly DEC) RISC superscalar processor. It is a full 64-bit processor. The 21264 has an 80-entry integer register file and a 72-entry floating-point register file. It employs a two-level cache: the L1 data and instruction caches are 64 KB each, organized in a two-way set-associative manner; the L2 cache is 1 to 16 MB, shared between instructions and data, and organized using direct mapping with a block size of 64 bytes. The data cache can receive any combination of two loads or stores from the integer execution pipes every cycle; this is equivalent to a 64 KB on-chip data cache delivering 16 bytes every cycle and hence running at twice the clock speed of the processor. The 21264 memory system can support 32 in-flight loads, 32 in-flight stores, and 8 in-flight 64-byte cache-block fills (8 cache misses). The 64 KB two-way set-associative caches (instruction and data) also support two out-of-order operations (Fig. 10.8).", "url": "RV32ISPEC.pdf#segment130", "timestamp": "2023-10-17 22:51:48", "segment": "segment130", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.6.2. 
The Alpha 21264 Pipeline ", "content": "The Alpha 21264 instruction pipeline, shown in Figure 10.9, consists of seven stages: fetch, slot assignment, rename, issue, register read, execute, and memory. In the fetch stage, the 21264 can fetch and execute up to four instructions per cycle; a block diagram of the fetch stage is shown in Figure 10.10. This stage uses a unique block-and-set prediction technique, according to which the locations of the next four instructions, as well as the set (out of the two sets) in which they are located, are predicted. The block-and-set prediction technique combines the speed advantages of a direct-mapped cache with the lower miss ratio of a two-way set-associative cache; the technique achieves better than an 85% hit ratio, and the misprediction penalty is a single cycle. The 21264 uses speculative branch prediction. Branch prediction in the 21264 is a two-level scheme based on the observation that branches exhibit both local and global correlation: local correlation makes use of a branch's own past behavior, while global correlation makes use of the past behavior of previous branches. A combined local/global prediction is used in the 21264. It correlates branch behavior both with the pattern of local branch history (the execution history of a single branch at a unique PC location) and with the global branch history (the execution history of previous branches), and the scheme dynamically selects between local and global branch history (Fig. 10.11). The local branch predictor has two tables. The first is a 1024 x 10 local history table; each entry holds the 10-bit local history of the selected branch over its last executions, and the table is indexed by the instruction address (the PC). The second table is a 1024 x 3 local prediction table; each entry is a 3-bit saturating counter that predicts the branch outcome. As branches retire, the 21264 updates the local history table with the true branch direction, as well as the referenced counter; this enhances the probability of a correct prediction and is called predictor training. The global branch predictor is a 4096 x 2 global prediction table; each entry holds a 2-bit saturating counter, and the predictor keeps track of the global history of the last 12 branches. The global branch prediction table, as well as the 4096 x 2 choice prediction table, is indexed by this global history. As branches retire, the 21264 updates the referenced global prediction counter, again enhancing the probability of a correct prediction. Local prediction is useful, for example, in the case of an alternating taken/not-taken sequence for a given branch: in this case, the local history of the branch will eventually resolve to a pattern of ten alternating zeros and ones, indicating the success or failure of the branch on alternate encounters. As the branch executes multiple times, it saturates the prediction counters corresponding to these local history values and hence makes the prediction correct. Global prediction is useful when the outcome of a branch can be inferred from the direction of previous branches. Consider, for example, the case of repeated invocations of two branches, where the first branch checks whether a value is equal to 1001; if it succeeds, then the second branch, which checks whether the value is odd, must also succeed. The global history predictor can learn this pattern over repeated invocations of the two branches. The 4096 x 2 choice predictor table, each entry of which holds a 2-bit saturating counter, is used to implement the selection (tournament) scheme. When the predictions of the local and global predictors differ, the 21264 updates the selected choice prediction entry to support the correct predictor; the 21264 updates the choice prediction table as each branch retires. The slot assignment stage (stage 2) simply assigns instructions to slots associated with the integer and floating-point queues. The out-of-order (OOO) issue logic of the 21264 receives the four fetched instructions every cycle, renames (remaps) the registers to avoid unnecessary register dependencies, and queues the instructions until their operands and/or functional units become available; it dynamically issues up to six instructions every cycle: four integer and two floating-point instructions. Register renaming means mapping an instruction's (virtual) registers onto internal physical registers. There are 31 integer and 31 floating-point registers visible to users; these registers are renamed onto internal registers for execution, and when instructions are finished and retired, the internal registers are renamed back to the visible registers. Register renaming eliminates write-after-write and write-after-read data dependencies; it does, however, preserve the read-after-write dependencies that are necessary for correct computation. A list of pending instructions is maintained by the OOO queue logic. Each cycle, the integer and floating-point queues select the instructions that are ready to execute. The selection is made based on a scoreboard of the renamed registers: the scoreboard maintains the status of the renamed registers by tracking the progress of single-cycle, multiple-cycle, and variable-cycle instructions. Upon the availability of a functional unit or of load data results, the scoreboard unit notifies all instructions in the queue of the availability of the required register value, and the queue selects the oldest data-ready and functional-unit-ready instructions for execution each cycle. The 21264 integer queue statically assigns instructions to two of the four pipes, either the upper or the lower pipes (Fig. 10.12). The Alpha 21264 has four integer and two floating-point pipelines, which allows the processor to dynamically issue up to six instructions per cycle. The issue queue stage maintains an inventory from which to dynamically select and issue a maximum of six instructions: there is a 20-entry integer issue queue and a 15-entry floating-point issue queue, and instruction issue reordering takes place in the issue stage. The 21264 uses two integer register files, each with 80 entries, that store duplicate register contents. Two pipes access a single file to form a cluster, and the two clusters form the four-way integer instruction execution; results are broadcast from each cluster to the other cluster. Instructions are dynamically selected from the integer issue queue to execute on a given instruction pipe; an instruction is heuristically selected to execute in the cluster that produces its result. The 21264 has one 72-entry floating-point register file; the floating-point register file, together with its two instruction execution pipes, forms a cluster. Figure 10.12 shows the register read/execution pipes. As a final note, we indicate that the 21264 uses a write-invalidate cache coherence mechanism on its level-2 cache to provide support for shared-memory multiprocessing. It also supports the following cache states: modified, owned, shared, exclusive, and invalid.", "url": "RV32ISPEC.pdf#segment131", "timestamp": "2023-10-17 22:51:48", "segment": "segment131", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.6.3. 
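The local/global/choice organization described above can be sketched in Python. The table sizes (1024 x 10 local history, 1024 x 3 local counters, 4096 x 2 global and choice counters, 12-bit global history) come from the text; the indexing and update heuristics below are simplified assumptions and not the exact 21264 logic.

```python
# Minimal sketch of a 21264-style tournament branch predictor
# (table sizes from the text; indexing details are simplified).
class Tournament:
    def __init__(self):
        self.local_hist = [0] * 1024      # 10-bit local histories, PC-indexed
        self.local_ctr  = [3] * 1024      # 3-bit saturating counters
        self.global_ctr = [1] * 4096      # 2-bit saturating counters
        self.choice     = [1] * 4096      # 2-bit: prefer local vs. global
        self.ghist = 0                    # outcomes of the last 12 branches

    def predict(self, pc):
        lh = self.local_hist[pc % 1024]
        local_p  = self.local_ctr[lh] >= 4
        global_p = self.global_ctr[self.ghist] >= 2
        use_global = self.choice[self.ghist] >= 2
        return global_p if use_global else local_p

    def train(self, pc, taken):
        lh = self.local_hist[pc % 1024]
        local_p  = self.local_ctr[lh] >= 4
        global_p = self.global_ctr[self.ghist] >= 2
        # Train the choice ("tournament") counter only when the two disagree.
        if local_p != global_p:
            d = 1 if global_p == bool(taken) else -1
            self.choice[self.ghist] = min(3, max(0, self.choice[self.ghist] + d))
        d = 1 if taken else -1
        self.local_ctr[lh] = min(7, max(0, self.local_ctr[lh] + d))
        self.global_ctr[self.ghist] = min(3, max(0, self.global_ctr[self.ghist] + d))
        # Shift the true outcome into both history registers.
        self.local_hist[pc % 1024] = ((lh << 1) | taken) & 0x3FF
        self.ghist = ((self.ghist << 1) | taken) & 0xFFF
```

Training this sketch on a strictly alternating taken/not-taken branch drives the local counters indexed by the two alternating history patterns to opposite saturation, after which the prediction is correct on every encounter, which is the behavior the text describes for local prediction.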
SUN UltraSPARC III ", "content": "The UltraSPARC III is a high-performance superscalar RISC processor that implements the 64-bit SPARC V9 RISC architecture. There exist a number of implementations of the SPARC III processor; these include the UltraSPARC IIIi and the UltraSPARC III Cu. Our coverage in this section is independent of any particular implementation; however, we refer to specific implementations whenever appropriate. The UltraSPARC III is a third-generation 64-bit SPARC RISC microprocessor. It supports a 64-bit virtual address space and a 43-bit physical address space. The UltraSPARC III employs a multilevel cache architecture. For example, the UltraSPARC IIIi and the UltraSPARC III Cu architectures have a 32 KB four-way set-associative L1 instruction cache, a 64 KB four-way set-associative L1 data cache, a 2 KB prefetch cache, and a 2 KB write cache. The UltraSPARC IIIi supports a 1 MB four-way set-associative unified (instruction/data) on-chip L2 cache; a cache block size of 64 bytes is used in the UltraSPARC IIIi. The UltraSPARC III Cu architecture supports a 1, 4, or 8 MB two-way set-associative unified (instruction/data) external cache; the cache block size in the UltraSPARC III Cu varies from 64 bytes for the 1 MB cache to 512 bytes for the 8 MB cache (Fig. 10.13). The UltraSPARC III uses two instruction TLBs that are accessed in parallel and three data TLBs that are accessed in parallel. The two instruction TLBs are organized as a 16-entry fully associative TLB that can hold entries for 8 KB, 64 KB, 512 KB, and 4 MB page sizes, and a 128-entry two-way set-associative TLB used exclusively for 8 KB page sizes. The three data TLBs are organized as a 16-entry fully associative TLB (for the 8 KB, 64 KB, 512 KB, and 4 MB page sizes) and two 512-entry two-way set-associative TLBs, each of which can be programmed to hold one page size at a given time. The UltraSPARC III uses a write-allocate, write-back cache write policy. The UltraSPARC III pipeline was covered in Chapter 9 (pages 203-207). As a final note, it should be mentioned that the UltraSPARC III is designed to support one-to-four-way multiprocessing. For this purpose, it uses the JBus, which supports a small-scale multiprocessor system. The JBus is capable of delivering the high bandwidth needed for networking and embedded-systems applications. With the JBus, processors attach to a coherent shared bus with no needed glue logic (Fig. 10.14).", "url": "RV32ISPEC.pdf#segment132", "timestamp": "2023-10-17 22:51:48", "segment": "segment132", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "10.7. SUMMARY ", "content": "The RISC architecture saves the extra chip area that is used in CISC architectures for decoding and executing complex instructions. The saved chip area can be used to provide an on-chip instruction cache, which reduces the instruction traffic between the processor and the memory. The common characteristics shared by RISC designs are a limited and simple instruction set, a large number of general-purpose registers and/or the use of compiler technology to optimize register usage, and optimization of the instruction pipeline. Essential to the RISC philosophy is to keep the most frequently accessed operands in registers and to minimize register-memory operations. This can be achieved using one of two approaches. The software approach is to use the compiler to maximize register usage by allocating registers to those variables that are used the most in a given time period; this is the philosophy used in the Stanford MIPS machine. The hardware approach is to use more registers so that more variables can be held in registers for longer periods of time; this is the philosophy used in the Berkeley RISC machine. With register windows, multiple small sets of registers are provided, and a different set is assigned to each procedure; a procedure call automatically switches the CPU to use a different fixed-size window of registers, rather than saving registers in memory at call time, and at any time only one window of registers is visible and addressed as if it were the only set of registers. Window overlapping requires that the temporary registers at one level be physically the same as the parameter registers at the next level; this overlap allows parameters to be passed without the actual movement of data. It is worthwhile mentioning that a classification of processors as entirely pure RISC or entirely pure CISC is becoming inappropriate and may be irrelevant. What actually counts is how much performance gain can be achieved by including a given element of either design style. Modern processors use a calculated combination of elements of both design styles, and the decision on which elements of each design style to include is made based on a trade-off between the required improvement in performance and the expected added cost. A number of processors classified as RISC employ a number of CISC features, such as integer and floating-point division instructions; similarly, there exist processors classified as CISC that employ a number of RISC features, such as pipelining.", "url": "RV32ISPEC.pdf#segment133", "timestamp": "2023-10-17 22:51:48", "segment": "segment133", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "CHAPTER 11 ", "content": "Introduction to Multiprocessors. Having covered the essential issues in the design and analysis of uniprocessors, and having pointed out the main limitations of the single-stream machine, we begin in this chapter to pursue the issue of multiple processors. Here, a number of processors (two or more) are connected in a manner that allows them to share the simultaneous execution of a single task. The main argument for using multiprocessors is to create powerful computers by simply connecting many existing smaller ones. A multiprocessor is expected to reach a faster speed than the fastest uniprocessor. In addition, a multiprocessor consisting of a number of single uniprocessors is expected to be more cost-effective than building a high-performance single processor. An additional advantage of a multiprocessor consisting of n processors is that if a single processor fails, the remaining fault-free n - 1 processors should be able to provide continued service, albeit with degraded performance. Our coverage in this chapter starts with a section on the general concepts and terminology used. We then point to the different topologies used for interconnecting multiple processors. Different classification schemes for computer architectures are then introduced and analyzed. We then introduce a topology-based taxonomy for interconnection networks. Two memory-organization schemes for MIMD (multiple instruction, multiple data) multiprocessors are also introduced. Our coverage in this chapter ends with a touch on the analysis of the performance metrics of multiprocessors. It should be noted that interested readers are referred to more elaborate discussions on multiprocessors in Chapters 2 and 3 of our book on advanced computer architecture and parallel processing (see the reference list).", "url": "RV32ISPEC.pdf#segment134", "timestamp": "2023-10-17 22:51:48", "segment": "segment134", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.1. 
INTRODUCTION ", "content": "A multiple-processor system consists of two or more processors that are connected in a manner that allows them to share the simultaneous (parallel) execution of a given computational task. Parallel processing has been advocated as a promising approach for building high-performance computer systems. Two basic requirements are inevitable for the efficient use of the employed processors. These requirements are: (1) low communication overhead among the processors while executing a given task, and (2) a degree of inherent parallelism in the task. A number of communication styles exist for multiple-processor networks. These can be broadly classified according to (1) the communication model (CM) or (2) the physical connection (PC). According to the CM, networks can be further classified as (1) multiple processors (single address space, or shared memory computation) or (2) multiple computers (multiple address spaces, or message passing computation). According to PC, networks can be further classified as (1) bus-based or (2) network-based multiple processors. Typical sizes of such systems are summarized in Table 11.1. The organization and performance of a multiple-processor system are greatly influenced by the interconnection network used to connect the processors. On one hand, a single shared bus can be used as the interconnection network for multiple processors; on the other hand, a crossbar switch can be used as the interconnection network. While the first technique represents a simple, easy-to-expand topology, it is, however, limited in performance, since it does not allow more than one processor/memory transfer at any given time. The crossbar provides full, distinct processor/memory connections, but it is expensive. Multistage interconnection networks (MINs) strike a balance between the limitation of the single shared bus system and the expense of a crossbar-based system. In a MIN, more than one processor/memory connection can be established at the same time, and the cost of a MIN can be considerably less than that of a crossbar, particularly for a large number of processors and/or memories. The use of multiple buses to connect multiple processors to multiple memory modules has also been suggested as a compromise between the limited single bus and the expensive crossbar. Figure 11.1 illustrates the four types of interconnection networks mentioned above. Interested readers are referred to our book on advanced computer architecture and parallel processing (see the reference list).", "url": "RV32ISPEC.pdf#segment135", "timestamp": "2023-10-17 22:51:49", "segment": "segment135", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.2. CLASSIFICATION OF COMPUTER ARCHITECTURES ", "content": "Classification is a means to order a number of objects into categories according to some common features among them, such that a certain relationship can exist among the categories. In this regard, a classification scheme for computer architectures aims at categorizing architectures such that those with common features fall into one category and such that different categories represent distinct groups of architectures. In addition, a classification scheme for computer architectures should provide a basis for information ordering and a basis for predicting the features of a given architecture. Two broad schemes exist for computer architecture classification. The first is based on the external (morphological) features of architectures, while the second is based on the evolutionary features of architectures. The first scheme emphasizes the finished form of architectures, while the second scheme emphasizes the way an architecture is derived from its predecessor and suggests speculative views about its successor. While a morphological classification provides a basis for predictive power, an evolutionary classification provides a basis for a better understanding of architectures. By examining the extent to which a classification scheme satisfies these stated objectives, one could assess the pros and cons of each scheme. A number of classification schemes have been proposed over the last three decades. These include the Flynn classification (1966), Kuck (1978), Hwang and Briggs (1984), Erlangen (1981), Giloi (1983), Skillicorn (1988), and Bell (1992). A number of these are briefly discussed below.", "url": "RV32ISPEC.pdf#segment136", "timestamp": "2023-10-17 22:51:49", "segment": "segment136", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.2.1. 
Flynn's Classification ", "content": "Flynn's classification scheme is based on identifying two orthogonal streams in a computer: the instruction stream and the data stream. The instruction stream is defined as the sequence of instructions performed by the computer; the data stream is defined as the data traffic exchanged between the memory and the processing unit. According to Flynn's classification, either of the instruction or data streams can be single or multiple. This leads to four distinct categories of computer architectures: (1) single-instruction single-data streams (SISD); (2) single-instruction multiple-data streams (SIMD); (3) multiple-instruction single-data streams (MISD); (4) multiple-instruction multiple-data streams (MIMD). Figure 11.2 shows the orthogonal organization of the streams according to Flynn's classification. Schematics of the four categories of architectures resulting from Flynn's classification are shown in Figure 11.3, and Table 11.2 lists some commercial machines belonging to each of the four categories. A number of observations are in order about Flynn's classification: (1) Flynn's classification was among the first of its kind to be introduced, and it must have inspired subsequent classifications. (2) The classification helped in categorizing the architectures available at the time, as well as those introduced later; for example, the introduction of the SIMD and MIMD machine models in the classification must have inspired architects to introduce these new machine models. (3) The classification stresses the architectural relationship at the memory-processor level; other architectural levels are totally overlooked. (4) The classification stresses the external (morphological) features of architectures; no information is included about the evolutionary relationship of architectures that belong to the same category. (5) Owing to its pure abstractness, no practically viable machine has exemplified the MISD model introduced by the classification, at least so far; it should, however, be noted that some architects have considered pipelined machines (and perhaps systolic-array computers) as examples of MISD. (6) An important aspect lacking in Flynn's classification is the issue of machine performance: although the classification gives the impression that machines in the SIMD and MIMD categories are superior to their SISD and MISD counterparts, it gives no information about the relative performance of SIMD versus MIMD machines.", "url": "RV32ISPEC.pdf#segment137", "timestamp": "2023-10-17 22:51:49", "segment": "segment137", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.2.2. Kuck Classification Scheme ", "content": "Flynn's taxonomy has been considered too general and has been extended by a number of computer architects. One such extension of the classification was introduced by D. J. Kuck in 1978. In his classification, Kuck extended the instruction stream to single (scalar and array) and multiple (scalar and array) streams. The data stream in Kuck's classification is called the execution stream and is also extended to include single (scalar and array) and multiple (scalar and array) streams. The combination of these streams results in a total of 16 categories of architectures, as shown in Table 11.3. The main observation here is that both the Flynn and the Kuck classifications cover the entire architecture space; however, while Flynn's classification emphasizes the description of architectures at the instruction-set level, Kuck's classification emphasizes the description of architectures at the hardware level.", "url": "RV32ISPEC.pdf#segment138", "timestamp": "2023-10-17 22:51:49", "segment": "segment138", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.2.3. 
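The 16 Kuck categories are simply the cross-product of the extended instruction-stream and execution-stream attributes. A small sketch of that structure (the attribute names here are ours; Table 11.3 has the authoritative labels):

```python
from itertools import product

# Kuck extends each stream to single/multiple x scalar/array,
# giving 4 instruction-stream variants x 4 execution-stream variants.
multiplicity = ["single", "multiple"]
kind = ["scalar", "array"]

instruction = [f"{m} {k} instruction" for m, k in product(multiplicity, kind)]
execution   = [f"{m} {k} execution"   for m, k in product(multiplicity, kind)]

# The full taxonomy is the cross-product of the two extended streams.
categories = [(i, e) for i, e in product(instruction, execution)]
assert len(categories) == 16
```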
Hwang and Briggs Classification Scheme ", "content": "The main new contribution of the classification due to Hwang and Briggs is the introduction of the concept of classes as a refinement of Flynn's classification. For example, according to Hwang and Briggs, the SISD category is refined into two subcategories: single functional unit SISD (SISD-S) and multiple functional units SISD (SISD-M). The MIMD category is refined into loosely coupled MIMD (MIMD-L) and tightly coupled MIMD (MIMD-T). The SIMD category is refined into word-sliced processing (SIMD-W) and bit-sliced processing (SIMD-B). Therefore, the Hwang and Briggs classification added a level of hierarchy to machine classification: a given machine is first classified as SISD, SIMD, or MIMD, and is then further classified according to its constituents into a descendant subcategory. According to the Hwang and Briggs taxonomy, it is always true to predict that an SISD-M will perform better than an SISD-S. However, it is doubtful that such a prediction can be made with respect to SIMD-W and SIMD-B. For example, it has been indicated that, using the maximum degree of potential parallelism as a performance measure, the ILLIAC-IV machine (an SIMD-W) is inferior to the MPP machine (an SIMD-B). A final observation on the Hwang and Briggs taxonomy is that shared memory systems (see Chapter 4 of our book on advanced computer architecture and parallel processing; see the reference list) map naturally to the MIMD-T category, while nonshared memory systems map to the MIMD-L category.", "url": "RV32ISPEC.pdf#segment139", "timestamp": "2023-10-17 22:51:49", "segment": "segment139", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.2.4. 
Erlangen Classification Scheme ", "content": "In its simplest form, this classification scheme adds one level of detail about the internal structure of a computer compared to Flynn's scheme. In particular, the scheme considers, in addition to the control (CNTL) and processing (ALU) units, a third subunit called the elementary logic unit (ELU), used to characterize a given computer architecture. An ELU represents the circuitry required to perform bit-level processing within the ALU. An architecture is characterized using a three-tuple system (K, D, W), where K is the number of CNTLs, D is the number of ALU units associated with one control unit, and W is the number of ELUs per ALU (the width of a single data word). For example, one of the models of the ILLIAC-IV is made of a mesh-connected array of 64 64-bit ALUs controlled by a Burroughs B6700 computer; according to the Erlangen model, this ILLIAC-IV is characterized as (1, 64, 64). Postulating that pipelining can exist at the three levels of hardware processing, the classification includes three additional parameters: W' is the number of pipeline stages per ALU, D' is the number of functional units per ALU, and K' is the number of ELUs forming the control unit. Given the expected multi-unit nature of the three hardware processing levels, in general a six-tuple is used to characterize an architecture as follows: (K x K', D x D', W x W'). Figure 11.4 illustrates the Erlangen classification system. More complex systems can still be characterized using the Erlangen system by using two additional operators: the concatenation operator, denoted x, and the alternative operator, denoted v. For example, an architecture consisting of two computational subunits with six-tuples (k0 x k0', d0 x d0', w0 x w0') and (k1 x k1', d1 x d1', w1 x w1') is characterized using the concatenation of the two subunits: (k0 x k0', d0 x d0', w0 x w0') x (k1 x k1', d1 x d1', w1 x w1'). An architecture that can be expressed using either of two subunits is characterized as (k0 x k0', d0 x d0', w0 x w0') v (k1 x k1', d1 x d1', w1 x w1'). For example, a later design of the ILLIAC-IV consisted of two DEC PDP-10s as front-end controllers, with data accepted from only one PDP-10 at a time; this version of the ILLIAC-IV can be characterized as (2, 1, 36) x (1, 64, 64). Since the ILLIAC-IV can also work in a half-word mode, whereby there are 128 32-bit processors rather than 64 64-bit processors, the overall characterization of this ILLIAC-IV is given by (2, 1, 36) x [(1, 64, 64) v (1, 128, 32)]. As can be seen, this classification scheme can be regarded as a hierarchical classification that puts its emphasis on the internal structure of the processing hardware in order to provide a basis for classification and/or grouping of computer architectures. In particular, the classification overlooks the interconnection among the different units.", "url": "RV32ISPEC.pdf#segment140", "timestamp": "2023-10-17 22:51:49", "segment": "segment140", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.2.5. Skillicorn Classification Scheme ", "content": "Owing to the inherent nature of Flynn's classification, one may end up grouping computer systems with similar architectural characteristics but diverse functionality into one class. This observation was the main motive behind the Skillicorn classification, introduced in 1988. According to this classification, an abstract von Neumann machine is modeled as shown in Figure 11.5. As can be seen, the abstract model includes two memory subdivisions, the instruction memory (IM) and the data memory (DM), in addition to the instruction processor (IP) and the data processor (DP). In developing this classification scheme, the following possible interconnection relationships were considered: IP-DP, IP-IM, DP-DM, and DP-IP. The interconnection scheme takes into consideration the type and number of connections among data processors, data memories, instruction processors, and instruction memories; there may exist one-to-many and many-to-many connections. Table 11.4 illustrates the different connection schemes identified by the classification. Using the given connection schemes, Skillicorn arrived at 28 different classes; a sample of these classes is shown in Table 11.5, in which the rightmost column indicates the corresponding Flynn class. Figure 11.6 illustrates four example classes according to this classification. The major advantages of the Skillicorn classification include (1) its simplicity, (2) its proper consideration of the interconnectivity among units, (3) its flexibility, and (4) its ability to represent most current computer systems. However, the classification (1) lacks the inclusion of operational aspects such as pipelining and (2) suffers from the difficulty of predicting the relative power of machines belonging to the same class without explicit knowledge of the interconnection scheme used in each class. Multiple-processor systems can also be classified as tightly coupled versus loosely coupled. In a tightly coupled system, all processors have equal access to a global memory; in addition, each processor may also have a local cache memory. In a loosely coupled system, the memory is divided among the processors, with each memory attached to a processor; however, the processors still share the same memory address space, and a processor can directly access a remote memory. Examples of tightly coupled multiple processors include the CMU C.mmp, the Encore Computer Multimax, and the Sequent Corp. Balance series. Examples of loosely coupled multiple processors include the CMU Cm*, the BBN Butterfly, and the IBM RP3.", "url": "RV32ISPEC.pdf#segment141", "timestamp": "2023-10-17 22:51:49", "segment": "segment141", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.3. SIMD SCHEMES ", "content": "Recall that Flynn's classification results in four basic architectures. Among these, the SIMD and MIMD are the most frequently used in constructing parallel architectures. In this section, we provide some basic information on the SIMD paradigm. It is important at the outset to indicate that SIMD machines were mostly designed to exploit the inherent parallelism encountered in matrix (array) operations, which are required in applications such as image processing. Famous real-life SIMD machines that were commercially constructed include the ILLIAC-IV (1972), the STARAN (1974), and the MPP (1982). The two main SIMD configurations used in real-life machines are shown in Figure 11.7. In the first scheme, each processor has its own local memory, and processors communicate with each other through an interconnection network. If the interconnection network does not provide a direct connection between a given pair of processors, then this pair can exchange data via an intermediate processor. The ILLIAC-IV used such an interconnection scheme: the interconnection network in the ILLIAC-IV allowed each processor to communicate directly with four neighboring processors in an 8 x 8 matrix pattern, such that the ith processor can communicate directly with the (i-1)th, (i+1)th, (i-8)th, and (i+8)th processors. In the second SIMD scheme, processors and memory modules communicate with each other via an interconnection network; two processors can transfer data via an intermediate memory module, or possibly via an intermediate processor. Assume, for example, that each processor Pi is connected to memory modules M(i-1), M(i), and M(i+1). In this case, processor P1 can communicate with processor P5 via memory modules M2, M3, and M4 as intermediaries. The BSP (Burroughs Scientific Processor) used the second SIMD scheme. In order to illustrate the effectiveness of SIMD in handling array operations, consider the example of adding the corresponding elements of two one-dimensional arrays A and B and storing the results in a third one-dimensional array C. Assume also that each of the three arrays has N elements, and that an SIMD machine of scheme 1 is used. The N required additions can then be done in one step if the elements of the three arrays are distributed such that M0 contains the elements A(0), B(0), and C(0); M1 contains the elements A(1), B(1), and C(1); ...; and M(N-1) contains the elements A(N-1), B(N-1), and C(N-1). In this case, all processors can simultaneously execute an add instruction of the form C = A + B. After executing this single step by all processors, the elements of the resultant array C will be stored across the memory modules such that M0 stores C(0), M1 stores C(1), ..., and M(N-1) stores C(N-1). It is customary to formally represent an SIMD machine in terms of a five-tuple (N, C, I, M, F). The meaning of each argument is given below: (1) N is the number of processing elements, N = 2^k for k >= 1; (2) C is the set of control instructions used by the control unit (for example, step); (3) I is the set of instructions executed by the active processing units; (4) M is the subset of processing elements that are enabled; (5) F is the set of interconnection functions that determine the communication links among the processing elements.", "url": "RV32ISPEC.pdf#segment142", "timestamp": "2023-10-17 22:51:50", "segment": "segment142", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.4. MIMD SCHEMES ", "content": "MIMD machines use a collection of processors and memories that collaborate in executing a given task. In general, MIMD systems can be categorized based on their memory organization into shared-memory and message-passing architectures. The choice between the two categories depends on the cost of communication relative to computation and on the degree of load imbalance in the application.", "url": "RV32ISPEC.pdf#segment143", "timestamp": "2023-10-17 22:51:50", "segment": "segment143", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.4.1. 
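The single-step SIMD array addition described above can be sketched in Python. The sequential loop below is only a model of what the hardware does in lockstep: conceptually, all N processing elements execute the same `C = A + B` instruction simultaneously, each against its own local memory Mi.

```python
# Sketch of single-step SIMD array addition (scheme 1): N processing
# elements, each with a local memory Mi holding A(i), B(i), and room
# for C(i). The sample values 11*i are arbitrary test data.
N = 8
local_mem = [{"A": i, "B": 10 * i} for i in range(N)]  # element i lives in Mi

# One broadcast instruction: every PE executes C = A + B on its own memory.
# (The loop models steps that the SIMD hardware performs simultaneously.)
for m in local_mem:
    m["C"] = m["A"] + m["B"]

# The resultant array C is now distributed across the memory modules.
assert [m["C"] for m in local_mem] == [11 * i for i in range(N)]
```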
Shared Memory Organization ", "content": "recent growing interest distributed shared memory systems shared memory provides attractive conceptual model interprocess inter action even underlying hardware provides direct support shared memory model one processors communicate reading writing locations shared memory equally accessible processors processor may registers buffers caches local memory banks additional memory resources number basic issues design shared memory systems taken consideration include access control synchronization protection security access control determines process accesses possible resources access control models make required check every access request issued processors shared memory contents access control table latter contains ags determine legality access attempt access attempts resources desired access completed disallowed access attempts illegal processes blocked requests sharing processes may change contents access control table execution ags access control synchroniza tion rules determine system functionality synchronization constraints limit time accesses sharing processes shared resources appropriate synchro nization ensures information ows properly ensures system functiona lity protection system feature prevents processes making arbitrary access resources belonging processes sharing protection incom patible sharing allows access whereas protection restricts running two copies program two processors decrease per formance relative single processor due contention shared memory performance degrades three four copies program execute time shared memory computer system consists 1 set independent processors 2 set memory modules 3 interconnection network simplest shared memory system consists one memory module accessed two processors pa pb fig 118 requests arrive memory module two ports arbitration unit within memory module passes requests memory controller memory module busy single request arrives arbitration unit passes request memory 
controller, where the request is satisfied; meanwhile, the module is placed in the busy state until the request has been serviced. If a new request arrives while the memory is busy servicing a previous request, the memory module sends a wait signal, through the memory controller, to the processor making the new request. In response, the requesting processor may hold its request on the line until the memory becomes free, or it may repeat its request some time later. If the arbitration unit receives two requests, it selects one of them and passes it to the memory controller; the denied request is either held to be served next or may be repeated some time later. Although the arbitration unit may be adequate to organize the use of the memory module by the two processors, the main problem is the sequencing of the interactions of the memory accesses from the two processors. Consider the following two scenarios of accessing memory location 1000 by the two processors Pa and Pb (Fig. 11.9). Let us also assume that the initial value stored in memory location 1000 is 150. Note that in both cases the same sequence of instructions is performed by each processor; the only difference between the two scenarios is the relative time at which the two processors update the value in location 1000. A careful examination of the two scenarios shows that the value stored in location 1000 after the first scenario is 151, while the value stored following the second scenario is 152. This illustrative example presents a case of nonfunctional behavior of a simple shared memory system, and it demonstrates the basic requirements for the success of such systems. These requirements are (1) a mechanism for conflict resolution among rival processors, (2) a technique for specifying sequencing constraints, and (3) a mechanism for enforcing the sequencing specifications. Approaches to satisfying these basic requirements are covered in Chapter 4 of our book Advanced Computer Architecture and Parallel Processing (see reference list). The use of different interconnection networks in a shared memory multiprocessor system leads to systems with one of the following characteristics: (1) shared memory architecture with uniform memory access (UMA), (2) cache-only memory architecture (COMA), or (3) distributed shared memory architecture with nonuniform memory access (NUMA). Figure 11.10 shows a typical organization of the three above-mentioned shared memory architectures. In a UMA system, the shared memory is accessible to all processors through the interconnection network in the same way a single processor accesses its memory; therefore, all processors have equal access time to any memory location. The interconnection network used in a UMA system can be
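The two update scenarios on location 1000 described above can be reproduced with a small simulation. This is an illustrative sketch, not code from the book; the schedule encoding and the per-processor register names are invented for the example:

```python
# Illustrative sketch: two interleavings of the read-modify-write
# sequence "load M[1000]; add 1; store M[1000]" executed by processors
# Pa and Pb, starting from M[1000] = 150.

def run(schedule):
    """Execute a schedule of (processor, step) pairs on one shared cell."""
    memory = {1000: 150}
    regs = {"Pa": 0, "Pb": 0}               # one private register per processor
    for proc, step in schedule:
        if step == "load":
            regs[proc] = memory[1000]       # read the shared location
        elif step == "store":
            memory[1000] = regs[proc] + 1   # add 1 to the private copy, write back
    return memory[1000]

# Scenario 1: both processors load before either stores (overlapped updates).
overlapped = [("Pa", "load"), ("Pb", "load"), ("Pa", "store"), ("Pb", "store")]
# Scenario 2: Pa completes its update before Pb begins (serialized updates).
serialized = [("Pa", "load"), ("Pa", "store"), ("Pb", "load"), ("Pb", "store")]

print(run(overlapped))   # 151 -- Pb's store overwrites Pa's update
print(run(serialized))   # 152 -- both increments take effect
```

The same instruction sequence yields different results purely from the relative timing of the accesses, which is exactly why conflict resolution and sequencing constraints are required.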
a single bus, multiple buses, a crossbar, or a multiport memory. In a NUMA system, each processor has part of the shared memory attached to it as its own local memory, but the system has a single address space; therefore, any processor can access any memory location directly using its real address. However, the access time to the modules depends on their distance from the processor, which results in nonuniform memory access times. A number of architectures can be used to interconnect the processors to the memory modules in a NUMA system; among these are the tree and the hierarchical bus networks (see Chapter 2 of our book Advanced Computer Architecture and Parallel Processing, listed in the references). Similar to NUMA, each processor has part of the shared memory in a COMA system; in this case, however, the shared memory consists of cache memory, and a COMA system requires that data be migrated to the processor requesting it.", "url": "RV32ISPEC.pdf#segment144", "timestamp": "2023-10-17 22:51:50", "segment": "segment144", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.4.2. Message-Passing Organization ", "content": "Message passing represents an alternative method for communication and movement of data among multiprocessors. Local rather than global memories are used to communicate messages among processors; a message is defined as a block of related information that travels among processors over the direct links existing between them. A number of models of message passing exist. Examples of message-passing systems include the Cosmic Cube, workstation clusters, and the transputer. The introduction of the transputer system T212 in 1983 announced the birth of the first message-passing multiprocessor; subsequently, the T414 was announced in 1985, and INMOS introduced its VLSI transputer processor in 1986. Two subsequent transputer products, the T800 (1988) and the T9000 (1990), were later introduced. The Cosmic Cube, a message-passing multiprocessor designed at Caltech over the period 1981-1985, represented the first hypercube multiprocessor system that was made to work. Wormhole routing in message passing was introduced in 1987 as an alternative to traditional store-and-forward routing, in order to reduce the size of the required buffers and to decrease message latency. In wormhole routing, a packet is divided into smaller units called flits (flow control bits) that move in a pipeline fashion, with a header flit of the packet leading the way to the destination node. When the header flit is blocked due to network congestion, the remaining
flits are blocked as well; see Chapter 5 of our book Advanced Computer Architecture and Parallel Processing, Volume II (listed in the references), for more details. The elimination of the need for a large global memory, which is usually a reason for slowdown of the overall system, together with its asynchronous nature, gives message-passing schemes an edge over shared-memory schemes. Similar to shared-memory multiprocessors, application programs are divided into smaller parts that are executed by the individual processors in a concurrent manner. A simple example of a message-passing multiprocessor architecture is shown in Figure 11.11. As seen from the figure, the processors use local buses (internal channels) to communicate with their local memories, while communicating processors use interconnection networks (external channels) to exchange messages. Processes running on a given processor use the internal channels to exchange messages among themselves, while processes running on different processors use the external channels to exchange messages. This scheme offers a great deal of flexibility in accommodating a large number of processors and is readily scalable. It should be noted that a process and the processor it executes on are considered two separate entities. The size of a process is determined by the programmer and is described by its granularity. Three types of granularity can be distinguished: (1) coarse granularity, in which each process holds a large number of sequential instructions and takes a substantial amount of time to execute; (2) medium granularity, which, since the process communication overhead increases as granularity decreases, describes a middle ground whereby communication overhead is reduced enough that nodal communication takes a smaller amount of time; and (3) fine granularity, in which each process contains a small number of sequential instructions, down to a single instruction. Message-passing multiprocessors use mostly medium or coarse granularity. Message-passing multiprocessors employ static networks for local communication, with hypercube networks in particular having received special attention for use in message-passing multiprocessors. Nearest-neighbor two-dimensional and three-dimensional mesh networks have the potential to be used in message-passing systems as well. Two important factors have led to the suitability of hypercube and mesh networks for use as message-passing networks. These factors are (1) the ease of their VLSI implementation and (2) their suitability for two- and three-dimensional applications. Two
important design factors must be considered when designing such networks: (1) the link bandwidth and (2) the network latency. The link bandwidth is defined as the number of bits that can be transmitted per unit of time (bits per second). The network latency is defined as the time needed to complete a message transfer. For example, links could be unidirectional or bidirectional and could transfer one bit or several bits at a time. To estimate the network latency, one must first determine the path setup time, which depends on the number of nodes in the path; the actual transmission time, which depends on the message size, must also be considered. Information transfer from a given source over the network can be done in one of two ways. (1) Circuit-switching networks: in this type of network, no buffer is required at each node. The path from the source to the destination is first determined, and all links along that path are reserved; after the information transfer, the reserved links are released for use by other messages. Circuit-switching networks are characterized by producing the smallest amount of delay; inefficient link utilization is the main disadvantage of circuit-switching networks. Circuit-switching networks are therefore advantageously used in the case of large message transfers. (2) Packet-switching networks: messages are divided into smaller parts, called packets, that are transmitted from node to node, so each node must contain enough buffers to hold received packets before transmitting them. A complete path from source to destination may not be available at the start of transmission; as links become available, packets are moved from node to node until they reach the destination node. This technique is also known as the store-and-forward packet-switching technique. Although store-and-forward packet-switching networks eliminate the need for a complete path before the start of transmission, they tend to increase the overall network latency, because packets are expected to be stored in node buffers while waiting for the availability of outgoing links. In order to reduce the size of the required buffers and to decrease the incurred network latency, wormhole routing, touched on above, was introduced. Having introduced the machine categories based on Flynn's classification, we now provide an introduction to the interconnection networks used in such machines; a detailed coverage of multiprocessor interconnection networks is provided in Chapter 2 of our book Advanced Computer Architecture and Parallel Processing (see reference list).", "url": "RV32ISPEC.pdf#segment145", "timestamp": "2023-10-17 22:51:50", "segment": "segment145", "image_urls": [], "Book":
"[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.5. INTERCONNECTION NETWORKS ", "content": "A number of classification criteria exist for interconnection networks (INs); among these criteria are the following.", "url": "RV32ISPEC.pdf#segment146", "timestamp": "2023-10-17 22:51:50", "segment": "segment146", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.5.1. Mode of Operation ", "content": "According to their mode of operation, INs are classified as synchronous versus asynchronous. In the synchronous mode of operation, a single global clock is used by all components in the system, with the whole system operating in a lock-step manner. The asynchronous mode of operation, on the other hand, does not require a global clock; handshaking signals are used instead in order to coordinate the operation of asynchronous systems. While synchronous systems tend to be slower compared to asynchronous systems, they are race and hazard-free.", "url": "RV32ISPEC.pdf#segment147", "timestamp": "2023-10-17 22:51:50", "segment": "segment147", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.5.2. Control Strategy ", "content": "According to their control strategy, INs are classified as centralized versus decentralized. In centralized control systems, a single central control unit is used to oversee and control the operation of the components of the system. In decentralized control, the control function is distributed among different components in the system. The function and reliability of the central control unit can become the bottleneck in a centralized control system. While the crossbar is a centralized system, multistage interconnection networks are decentralized.", "url": "RV32ISPEC.pdf#segment148", "timestamp": "2023-10-17 22:51:50", "segment": "segment148", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.5.3.
Switching Techniques ", "content": "Interconnection networks can be classified according to their switching mechanism as circuit versus packet switching networks. In the circuit switching mechanism, a complete path has to be established prior to the start of communication between a source and a destination; the established path remains in existence during the whole communication period. In the packet switching mechanism, communication between a source and a destination takes place via messages that are divided into smaller entities, called packets. On their way to the destination, packets are sent from one node to another in a store-and-forward manner until they reach their destination. While packet switching tends to use network resources more efficiently compared to circuit switching, it suffers from variable packet delays.", "url": "RV32ISPEC.pdf#segment149", "timestamp": "2023-10-17 22:51:50", "segment": "segment149", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.5.4. Topology ", "content": "According to their topology, INs are classified as static versus dynamic networks. In dynamic networks, connections among inputs and outputs are made using switching elements; depending on the switch settings, different interconnections can be established. In static networks, direct fixed paths exist between nodes; there are no switching elements in static networks. Having introduced the general criteria for the classification of interconnection networks, we now introduce a possible taxonomy of INs based on topology; Figure 11.12 provides such a taxonomy. According to the shown taxonomy, INs are classified as either static or dynamic. Static networks are classified according to their interconnection patterns as one-dimensional (1D), two-dimensional (2D), or hypercubes (HCs). Dynamic networks, on the other hand, are classified according to the scheme of their interconnection as bus-based versus switch-based. Bus-based INs are classified as single bus or multiple bus. Switch-based dynamic networks are classified according to the structure of the interconnection network as single-stage (SS), multistage (MS), or crossbar networks. Multiprocessor interconnection networks are explained in detail in Chapter 2 of our book Advanced Computer Architecture and Parallel Processing (see reference list).", "url": "RV32ISPEC.pdf#segment150", "timestamp": "2023-10-17 22:51:50", "segment": "segment150", "image_urls": [],
"Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.6. ANALYSIS AND PERFORMANCE METRICS ", "content": "Having provided an introduction to the architecture of multiprocessors, we now provide the basic ideas behind the performance issues of multiprocessors; interested readers are referred to Chapter 3 of our book Advanced Computer Architecture and Parallel Processing, Volume II (see reference list), for more details. The fundamental question that is usually asked is how much faster a given problem can be solved using multiprocessors as compared to a single processor. This question can be formulated in terms of the speedup factor, defined as S(n) = (execution time using a single processor) / (execution time using n processors), that is, the increase in speed due to the use of a multiprocessor system consisting of n processors. A related question is how efficiently the n processors are utilized. This question can be formulated in terms of the efficiency, defined as E(n) = (S(n) / n) x 100%. In executing tasks (programs) using a multiprocessor, it may be assumed that a given task can be divided into n equal subtasks, each executed by one processor; under that assumption the expected speedup is given by S(n) = n and the efficiency by E(n) = 100%. The assumption that a given task can be divided into n equal subtasks, each executed by a processor, is, however, unrealistic. In Chapter 3 of our book Advanced Computer Architecture and Parallel Processing (see reference list), more meaningful computation models are developed and analyzed, and a number of performance metrics are also introduced and analyzed in that chapter.", "url": "RV32ISPEC.pdf#segment151", "timestamp": "2023-10-17 22:51:51", "segment": "segment151", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "11.7.
SUMMARY ", "content": "chapter navigated number concepts system congurations related issues multiprocessing particular provided general concepts terminology used context multiproces sors number taxonomies multiprocessors introduced analyzed two memory organization schemes introduced sharedmemory messagepassing systems addition introduced different topologies used interconnecting multiple processors chapter 2 book advanced computer architecture parallel processing see reference list said interconnection networks performance metrics sharedmemory messagepassing architectures explained chapters 4 5 respectively reference mentioned", "url": "RV32ISPEC.pdf#segment152", "timestamp": "2023-10-17 22:51:51", "segment": "segment152", "image_urls": [], "Book": "[Mostafa_Abd-El-Barr__Hesham_El-Rewini]_Fundamenta(BookZZ.org)" }, { "section": "Chapter 1 ", "content": "verification guidelines", "url": "RV32ISPEC.pdf#segment0", "timestamp": "2023-10-18 14:48:10", "segment": "segment0", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.1 Introduction ", "content": "imagine given job building house someone begin start choosing doors windows picking paint carpet colors selecting bathroom fixtures course first must consider owners use space budget decide type house build questions consider enjoy cooking want highend kitchen prefer watching movies home theater room eating takeout pizza want home office extra bedrooms budget limit basic house start learn details systemverilog language need understand plan verify particular design influences testbench structure houses kitchens bedrooms bathrooms testbenches share common structure stimulus gen eration response checking chapter introduces set guidelines coding styles designing constructing testbench meets par ticular needs techniques use concepts shown verification methodology manual systemverilog vmm bergeron et al 2006 without base classes important principle learn verification engineer bugs good shy away 
finding next bug hesitate ring bell time uncover one furthermore always keep track bug found entire project team assumes bugs design bug found tapeout one fewer ends customer hands need devious possible twisting torturing design extract possible bugs still easy fix let designers steal glory without craft cunning design might never work book assumes already know verilog language want learn systemverilog hardware verification language hvl typical features hvl distinguish hardware description language verilog vhdl constrainedrandom stimulus generation functional coverage higherlevel structures especially object oriented programming multithreading interprocess communication support hdl types verilog 4state values tight integration eventsimulator control design many useful features allow create test benches higher level abstraction able achieve hdl programming language c", "url": "RV32ISPEC.pdf#segment1", "timestamp": "2023-10-18 14:48:11", "segment": "segment1", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.2 The Verification Process ", "content": "goal verification answered finding bugs partly correct goal hardware design create device per forms particular task dvd player network router radar signal processor based design specification purpose verification engineer make sure device accomplish task successfully design accurate representation specification bugs get discrepancy behavior device used outside original purpose responsibility although want know boundaries lie process verification parallels design creation process designer reads hardware specification block interprets human language description creates corresponding logic machineread able form usually rtl code needs understand input format transformation function format output always ambiguity interpretation perhaps ambiguities original document missing details conflicting descriptions verifi cation engineer must also read hardware specification create verification plan 
follow build tests showing rtl code cor rectly implements features one person perform interpretation added redundancy design process verification engineer job read hardware specifications make independent assessment mean tests exercise rtl show matches interpretation types bugs lurking design easiest ones detect block level modules created single person alu correctly add two numbers every bus transaction successfully complete packets make portion network switch almost trivial write directed tests find bugs contained entirely within one block design block level next place look discrepancies bound aries blocks interesting problems arise two designers read description yet different interpretations given proto col signals change first designer builds bus driver one view specification second builds receiver slightly different view job find disputed areas logic maybe even help reconcile two different views simulate single design block need create tests generate stimuli surrounding blocks difficult chore benefit lowlevel simulations run fast however may find bugs design testbench latter great deal code provide stimuli missing blocks start integrate design blocks stimulate reducing workload multiple block simulations may uncover bugs also run slower highest level dut entire system tested simula tion performance greatly reduced tests strive blocks performing interesting activities concurrently io ports active processors crunching data caches refilled action data alignment timing bugs sure occur level able run sophisticated tests dut exe cuting multiple operations concurrently many blocks possible active happens mp3 player playing music user tries download new music host computer download user presses several buttons player know real device used someone going try built testing makes difference product seen easy use one locks verified dut performs designated functions correctly need see operates errors design handle partial transaction one corrupted data control fields trying 
enumerate possible problems difficult mention design recover error injection han dling challenging part verification design abstraction gets higher verification challenge show individual cells flow blocks atm router correctly streams different priority cell chosen next always obvious highest level may analyze statistics thousands cells see aggregate behavior correct one last point never prove bugs left need constantly come new verification tactics", "url": "RV32ISPEC.pdf#segment2", "timestamp": "2023-10-18 14:48:11", "segment": "segment2", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.3 The Verification Plan ", "content": "verification plan closely tied hardware specification con tains description features need exercised techniques used steps may include directed random testing assertions hw sw coverification emulation formal proofs use verification ip complete discussion verification see bergeron 2006", "url": "RV32ISPEC.pdf#segment3", "timestamp": "2023-10-18 14:48:11", "segment": "segment3", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.4 The Verification Methodology Manual ", "content": "book hands draws heavily upon vmm roots methodology developed janick bergeron others qualis design started industry standard practices refined based expe rience many projects vmm techniques originally developed use openvera language extended 2005 systemver ilog vmm predecessor reference verification methodology vera used successfully verify wide range hardware designs networking devices processors book uses many concepts book teach vmm like advanced tool vmm designed use expert user excels difficult prob lems charge verifying 10 million gate design many communication protocols complex error handling library ip vmm right tool job working smaller modules single protocol may need robust methodology remember block part larger system vmm still useful promote reuse cost verification goes beyond immediate 
project new verification little experience object oriented programming unfamiliar constrainedrandom tests techniques book might right path choose familiar find vmm easy step biggest thing missing book compared vmm set base classes data environment utilities managing log files interprocess communication useful out side scope book systemverilog language", "url": "RV32ISPEC.pdf#segment4", "timestamp": "2023-10-18 14:48:11", "segment": "segment4", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.5 Basic Testbench Functionality ", "content": "purpose testbench determine correctness design test dut accomplished following steps generate stimulus apply stimulus dut capture response check correctness measure progress overall verification goals steps accomplished automatically testbench others manually determined methodology choose determines steps carried", "url": "RV32ISPEC.pdf#segment5", "timestamp": "2023-10-18 14:48:11", "segment": "segment5", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.6 Directed Testing ", "content": "traditionally faced task verifying correctness design may used directed tests using approach look hardware specification write verification plan list tests concentrated set related features armed plan write stimulus vectors exercise features dut simu late dut vectors manually review resulting log files waveforms make sure design expect test works correctly check test verification plan move next incremental approach makes steady progress always popular managers want see project making headway also produces almost immediate results since little infrastructure needed guiding creation every stimulus vector given enough time staffing directed testing sufficient verify many designs figure 11 shows directed tests incrementally cover features verification plan test targeted specific set design ele ments given enough time write tests need 100 coverage entire verification plan necessary time 
resources to carry out the directed testing approach, you can see that you are always making forward progress as long as the slope remains the same; but when the design complexity doubles, it takes twice as long to complete the testing or requires twice as many people. Neither situation is desirable, so you need a methodology that finds bugs faster in order to reach the goal of 100% coverage. Figure 1-2 shows the total design space and the features that get covered by directed testcases. In this space are many features and bugs; you need to write tests that cover all the features and find the bugs.", "url": "RV32ISPEC.pdf#segment6", "timestamp": "2023-10-18 14:48:11", "segment": "segment6", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.7 Methodology Basics ", "content": "This book uses the following principles: constrained-random stimulus; functional coverage; a layered testbench using transactors; a common testbench shared by all tests; and test-specific code kept separate from the testbench. These principles are related. Random stimulus is crucial for exercising complex designs: a directed test finds the bugs you expect to be in the design, whereas a random test can find bugs you never anticipated. When using random stimulus, you need functional coverage to measure verification progress. Furthermore, once you start using automatically generated stimulus, you need an automated way to predict the results, generally a scoreboard or a reference model. Building this testbench infrastructure, including self-prediction, takes a significant amount of work. A layered testbench helps you control the complexity by breaking the problem into manageable pieces, and transactors provide a useful pattern for building those pieces. With appropriate planning, you can build a testbench infrastructure that is shared by all tests and does not have to be continually modified. You need to leave hooks so that tests can perform certain actions, such as shaping the stimulus and injecting disturbances; conversely, code specific to a single test must be kept separate from the testbench so that it does not complicate the infrastructure. Building a testbench in this style takes longer than building a traditional directed testbench, especially the self-checking portions, causing a delay before the first test can be run. This gap can cause a manager to panic, so make this effort part of the schedule. In Figure 1-3 you can see the initial delay before the first random test runs. The up-front work may seem daunting, but the payback is high: every test you create shares the common testbench, as opposed to directed tests, each of which is written from scratch.
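The payback described above comes from reusing one generator across many tests, each of which only tweaks the constraints. As an illustration only (the book's examples use SystemVerilog's `rand`/`constraint` machinery; the class and field names below are invented), the idea can be sketched in Python:

```python
import random

# A minimal sketch of constrained-random stimulus: one reusable generator,
# with each "test" supplying only a seed and a small constraint tweak.
# Field names (addr, opcode, length) are illustrative assumptions.

class StimulusGenerator:
    def __init__(self, seed, max_len=32):
        self.rng = random.Random(seed)   # one seed -> one reproducible stimulus stream
        self.max_len = max_len           # a constraint a test can tighten

    def transaction(self):
        return {
            "addr": self.rng.getrandbits(32),               # 32-bit address
            "opcode": self.rng.choice(["READ", "WRITE"]),
            "length": self.rng.randrange(1, self.max_len),  # length < max_len bytes
        }

# A test is a dozen lines, not a new testbench: new seed, narrowed length.
gen = StimulusGenerator(seed=1, max_len=8)
txns = [gen.transaction() for _ in range(100)]
assert all(1 <= t["length"] < 8 for t in txns)   # every value meets the constraint
assert all(t["addr"] < 2**32 for t in txns)
```

Rerunning with the same seed reproduces the exact stimulus stream, which is what makes a failing random test debuggable; a different seed yields a different set of legal stimuli from the same code.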
random test contains dozen lines code constrain stimulus certain direction cause desired exceptions creating protocol violation result single constrainedrandom testbench finding bugs faster many directed ones rate discovery begins drop create new random constraints explore new areas last bugs may found directed tests vast majority bugs found random tests", "url": "RV32ISPEC.pdf#segment7", "timestamp": "2023-10-18 14:48:11", "segment": "segment7", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.8 Constrained-Random Stimulus ", "content": "want simulator generate stimulus want totally random values use systemverilog language describe format stimulus address 32bits opcode x z length 32 bytes simulator picks values meet constraints constraining random values become relevant stimuli covered chapter 6 values sent design also highlevel model predicts result design actual output compared predicted output figure 14 shows coverage constrainedrandom tests total design space first notice random test often covers wider space directed one extra coverage may overlap tests may explore new areas anticipate new areas find bug luck new area legal need write constraints keep away lastly may still write directed tests find cases covered constrainedrandom tests figure 15 shows paths achieve complete coverage start upper left basic constrainedrandom tests run many different seeds look functional coverage reports find holes gaps make minimal code changes perhaps new con straints injecting errors delays dut spend time outer loop writing directed tests features unlikely reached random tests", "url": "RV32ISPEC.pdf#segment8", "timestamp": "2023-10-18 14:48:11", "segment": "segment8", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.9 What Should You Randomize? 
", "content": "think randomizing stimulus design first thing might think data fields stimulus easiest create call random problem gives low payback terms bugs found primary types bugs found random data data path errors perhaps bitlevel mistakes need find bugs control logic need think broadly design input following device configuration environment configuration input data protocol exceptions delays errors violations discussed following sections", "url": "RV32ISPEC.pdf#segment9", "timestamp": "2023-10-18 14:48:11", "segment": "segment9", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.9.1 Device and environment configuration ", "content": "common reason bugs missed testing rtl design enough different configurations tried tests use design comes reset apply fixed set initialization vectors put known state like testing pc operating sys tem right installed without applications installed course performance fine crashes real world environment dut configuration becomes ran dom longer use example helped company verify time division multiplexor switch 2000 input channels 12 output chan nels verification engineer said channels could mapped various configurations side input could used single channel divided multiple channels tricky part although standard ways breaking used time combination breakdowns legal leaving huge set possible cus tomer configurations test device engineer write several dozen lines directed testbench code configure channel result never able try configurations handful channels together wrote testbench randomized parameters single channel put loop configure switch channels confidence tests would uncover configurationrelated bugs would missed real world device operates environment containing components verifying dut connected testbench mimics environment randomize entire environment configuration including length simulation number devices configured course need create constraints make sure configuration legal another synopsys 
customer example: a company was creating an I/O switch chip that connected to multiple PCI buses and an internal memory bus. At the start of simulation, the testbench randomly chose the number of PCI buses (1-4), the number of devices on each bus (1-8), and the parameters for each device (master or slave, CSR addresses, etc.). The team kept track of the tested combinations using functional coverage, so they could be sure they had covered almost every possible one. Other environment parameters include test length, error injection rates, delay modes, etc. See Bergeron (2006) for more examples.", "url": "RV32ISPEC.pdf#segment10", "timestamp": "2023-10-18 14:48:12", "segment": "segment10", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.9.2 Input data ", "content": "When you read about random stimulus, you probably thought of taking a transaction such as a bus write or an ATM cell and filling the data fields with random values. Actually, this approach is fairly straightforward as long as you carefully prepare your transaction classes, as shown in Chapters 4 and 8. You need to anticipate layered protocols and error injection, plus scoreboarding and functional coverage.", "url": "RV32ISPEC.pdf#segment11", "timestamp": "2023-10-18 14:48:12", "segment": "segment11", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.9.3 Protocol exceptions, errors, and violations ", "content": "Few things are as frustrating as a device, such as a PC or cell phone, that locks up; many times the only cure is to shut it down and restart. Chances are that deep inside the product, a piece of logic experienced some sort of error condition from which it could not recover, and it thus stopped the device from working correctly. How can you prevent this from happening to the hardware you are building? If something can go wrong in the real hardware, you should try to simulate it. Look at all the errors that can occur: what happens if a bus transaction does not complete, or if an invalid operation is encountered? Does the design specification state that two signals are mutually exclusive? Drive them both and make sure the device continues to operate. Beyond trying to provoke the hardware with ill-formed commands, you should also try to catch these occurrences. For example, recall those mutually exclusive signals: you should add checker code to look for violations. Your code should at least print a warning message when a violation occurs, and preferably generate an error and wind down the test. It is frustrating to spend hours tracking back through code trying to find the root of a malfunction, especially when it could have been caught close to its source by a simple assertion.
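The checker idea for the mutually exclusive signals can be sketched as follows. In a real testbench this would be a SystemVerilog assertion; the Python class and the signal names `req_a`/`req_b` below are invented for illustration, and the `fatal` switch mirrors the advice to make the stop-on-error behavior easy to disable when testing error handling:

```python
# Illustrative sketch of a mutual-exclusion checker (not the book's code).

class MutexChecker:
    """Flag any cycle in which two supposedly mutually exclusive signals
    are both asserted, instead of letting the error propagate for hours."""

    def __init__(self, fatal=True):
        self.fatal = fatal        # set False to exercise the DUT's error handling
        self.violations = []      # cycles where the rule was broken

    def check(self, cycle, req_a, req_b):
        if req_a and req_b:
            self.violations.append(cycle)
            msg = f"cycle {cycle}: req_a and req_b asserted together"
            if self.fatal:
                raise AssertionError(msg)   # stop close to the source
            print("WARNING:", msg)          # otherwise just flag it

# Sample trace of (cycle, req_a, req_b) values; cycle 2 violates the rule.
chk = MutexChecker(fatal=False)
trace = [(0, 1, 0), (1, 0, 1), (2, 1, 1), (3, 0, 0)]
for cycle, a, b in trace:
    chk.check(cycle, a, b)
print(chk.violations)   # [2]
```

Recording the violating cycle points the debugger straight at the offending stimulus, which is the whole benefit of catching the problem at its source.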
see vijayaraghavan 2005 guidelines writing assertions testbench design code make sure disable code stops simulation error easily test error handling", "url": "RV32ISPEC.pdf#segment12", "timestamp": "2023-10-18 14:48:12", "segment": "segment12", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.9.4 Delays and synchronization ", "content": "fast testbench send stimulus always use constrained random delays help catch protocol bugs test uses shortest delays runs fastest create possible stimulus create test bench talks another block fastest rate subtle bugs often revealed intermittent delays introduced block may function correctly possible permutations stimulus single interface subtle errors may occur data flowing multiple inputs try coordinate various drivers communi cate different relative timing inputs arrive fastest possible rate output throttled back slower rate stimulus arrives multiple inputs concurrently staggered different delays use functional coverage discussed chapter 9 measure combinations randomly generated", "url": "RV32ISPEC.pdf#segment13", "timestamp": "2023-10-18 14:48:12", "segment": "segment13", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.9.5 Parallel random testing ", "content": "run tests directed test testbench pro duces unique set stimulus response vectors change stimulus need change test random test consists testbench code plus random seed run test 50 times unique seed get 50 different sets stimuli running multiple seeds broadens coverage test leverages work need choose unique seed simulation people use time day still cause duplicates using batch queuing system across cpu farm tell start 10 jobs mid night multiple jobs could start time different computers thus get random seed run stimulus blend processor name seed cpu farm includes multiprocessor machines could two jobs start running midnight seed also throw process id jobs get unique seeds need plan organize files handle 
You also need to plan how to organize your files to handle multiple simulations. Each job creates its own set of output files, such as log files and functional coverage data. You can run each job in a different directory, or try to give a unique name to each file.", "url": "RV32ISPEC.pdf#segment14", "timestamp": "2023-10-18 14:48:12", "segment": "segment14", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.10 Functional Coverage ", "content": "The previous sections showed how to create stimuli that randomly walk through the entire space of possible inputs. With this approach, your testbench visits some areas often, but takes too long to reach all possible states, and unreachable states will never be visited, even given unlimited simulation time. You need to measure what has been verified in order to check off items in your verification plan. The process of measuring and using functional coverage consists of several steps. First, you add code to the testbench to monitor the stimulus going into the device, and its reaction and response, to determine what functionality has been exercised. Next, the data from one or more simulations is combined into a report. Lastly, you analyze the results and determine how to create new stimulus to reach untested conditions and logic. Chapter 9 describes functional coverage in SystemVerilog.", "url": "RV32ISPEC.pdf#segment15", "timestamp": "2023-10-18 14:48:12", "segment": "segment15", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.10.1 Feedback from functional coverage to stimulus ", "content": "A random test evolves using feedback. The initial test can be run with many different seeds, creating many unique input sequences. Eventually, though, even with a new seed the test becomes less likely to generate stimulus that reaches new areas of the design space. As the functional coverage asymptotically approaches its limit, you need to change the test to find new approaches that reach the uncovered areas of the design. This is known as coverage-driven verification. What if the testbench were smart enough to do this for you? At a previous job, I wrote a test that generated every bus transaction for a processor, and additionally fired every bus terminator (success, parity error, retry) on every cycle. Before HVLs, I had written a long set of directed tests and spent days lining up the terminator code to fire at just the right cycles. After much hand analysis, I declared success: 100% coverage. Then the processor timing changed slightly and I had to reanalyze the test and change the stimuli. Not a productive testing strategy. A better one uses
random transactions and terminators together. It takes longer to run to higher coverage, but, as a bonus, the test can be made flexible enough to create valid stimuli even when the design timing changes. You could even add a feedback loop that looks at the stimulus created so far (have all the write cycles been generated yet?) and then changes the constraint weights (drop the write weight to zero). This improvement would greatly reduce the time needed to get full coverage, with little manual intervention. However, outside of trivial situations like this one, feedback from functional coverage to stimulus is difficult: for a real design, how do you change the stimulus to reach a desired design state? There are no easy answers, so dynamic feedback is rarely used with constrained-random stimulus; instead, manual feedback is used in coverage-driven verification. Feedback is used by formal analysis tools such as Magellan (Synopsys 2003). It analyzes a design to find all the unique, reachable states, then runs a short simulation to see how many states are visited, and lastly searches backwards from the state machines to the design inputs to calculate the stimulus needed to reach any remaining states, which Magellan then applies to the DUT.", "url": "RV32ISPEC.pdf#segment16", "timestamp": "2023-10-18 14:48:12", "segment": "segment16", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.11 Testbench Components ", "content": "In simulation, the testbench wraps around the DUT, just as a hardware tester connects to a physical chip. Both the testbench and the tester provide stimulus and capture responses. The difference between them is that your testbench needs to work over a wide range of levels of abstraction, creating transactions and sequences that are eventually transformed into bit vectors; a tester works only at the bit level. What goes into that testbench block? It is made of many bus functional models (BFMs), which you can think of as testbench components: to the DUT they look like real components, but they are part of the testbench, not RTL. If the real device connects to AMBA, USB, PCI, and SPI buses, you have to build equivalent components in your testbench that can generate stimulus and check the response. These are not detailed, synthesizable models, but instead high-level transactors that obey the protocol yet execute quickly. If you are prototyping using FPGAs or emulation, however, the BFMs do need to be synthesizable.", "url": "RV32ISPEC.pdf#segment17", "timestamp": "2023-10-18 14:48:12", "segment": "segment17", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.12 Layered Testbench ", "content": "A key
concept for any modern verification methodology is the layered testbench. While this process may seem to make the testbench more complex, it actually makes your task easier by dividing the code into smaller pieces that can be developed separately. Do not try to write a single routine that randomly generates all types of stimulus, both legal and illegal, plus injects errors, through a multi-layer protocol; such a routine quickly becomes complex and unmaintainable.", "url": "RV32ISPEC.pdf#segment18", "timestamp": "2023-10-18 14:48:12", "segment": "segment18", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.12.1 Flat testbench ", "content": "When you first learned Verilog and started writing tests, they probably looked like low-level code that drives a simplified APB (AMBA Peripheral Bus) write one signal at a time. (VHDL users may have written similar code in their day.) After a few days of writing code like this, you probably realized that it was repetitive, so you created tasks for the common operations, such as a bus write, as shown in Example 1.2.", "url": "RV32ISPEC.pdf#segment19", "timestamp": "2023-10-18 14:48:12", "segment": "segment19", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.12.2 The signal and command layers ", "content": "Figure 1.9 shows the lower layers of the testbench. At the bottom is the signal layer, which contains the design under test and the signals that connect it to the testbench. The next level up is the command layer. The DUT's inputs are driven by the driver, which runs single commands such as a bus read or write. The DUT's output drives the monitor, which takes signal transitions and groups them together into commands. Assertions also cross the command/signal layer: they look at individual signals, but watch for changes across an entire command.", "url": "RV32ISPEC.pdf#segment20", "timestamp": "2023-10-18 14:48:12", "segment": "segment20", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.12.3 The functional layer ", "content": "The functional layer feeds the command layer. The agent block (called the transactor in VMM) receives higher-level transactions, such as DMA read or write, and breaks them into individual commands. These commands are also sent to the scoreboard, which predicts the results of the transaction. The checker compares the commands from the monitor with those in the scoreboard.", "url": "RV32ISPEC.pdf#segment21", "timestamp": "2023-10-18 14:48:12", "segment": "segment21",
"image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.12.4 The scenario layer ", "content": "The functional layer is driven by the generator in the scenario layer. What is a scenario? Remember that your job as a verification engineer is to make sure that the device accomplishes its intended task. An example device is an MP3 player that can concurrently play music from storage, download new music from a host, and respond to input from the user, such as the volume and track controls; each of these operations is a scenario. Downloading a music file takes several steps: control register reads and writes to set up the operation, multiple DMA writes to transfer the song, and then another group of reads and writes. The scenario layer of the testbench orchestrates all these steps, with constrained-random values for parameters such as track size and memory location. The blocks of the testbench environment (inside the dashed line) are written at the start of development for the project. During the project they may evolve and you may add functionality, but these blocks should not change for individual tests. This is done by leaving hooks in the code so that a test can change the behavior of the blocks without rewriting them; you create these hooks with callbacks (Section 8.7) and factory patterns (Section 8.3).", "url": "RV32ISPEC.pdf#segment22", "timestamp": "2023-10-18 14:48:12", "segment": "segment22", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.12.5 The test layer and functional coverage ", "content": "At the top of the testbench is the test layer, as shown in Figure 1.12. Design bugs that occur between DUT blocks are harder to find, since they involve multiple people reading and interpreting multiple specifications. The top-level test is like a conductor who does not play a musical instrument but instead guides the efforts of others. The test contains the constraints that create the stimulus. Functional coverage measures the progress of all tests in fulfilling the requirements of the verification plan. The functional coverage code changes throughout the project as the various criteria for completeness are met, so it is constantly modified and is not part of the environment. What if you need to create a directed test in a constrained-random environment? Simply insert the directed section of the test case in the middle of a parallel random sequence: the directed code performs the work you want, while the random 'background noise' may cause a bug to become visible, perhaps in an unanticipated block. Do you need all these layers for your testbench? The answer depends on what your DUT looks like. A complicated design requires a sophisticated testbench. You always need the test
layer. For a simple design, the scenario layer may be so simple that you can merge it into the agent. When estimating the effort needed to test a design, do not count the number of gates; count the number of designers: every time you add another person to the team, you increase the chance of different interpretations of the specifications. You may need extra layers if your DUT has several protocol layers. If, to get to a given layer of the testbench environment, TCP traffic must for example be wrapped in IP and sent in Ethernet packets, consider using three separate layers for generation and checking; better yet, use existing verification components. One last note: the diagram shows all the possible connections between blocks, but your testbench may have a different set. A test may need to reach down to the driver layer to force physical errors. These are guidelines, not laws; let your needs guide what you create.", "url": "RV32ISPEC.pdf#segment23", "timestamp": "2023-10-18 14:48:12", "segment": "segment23", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.13 Building a Layered Testbench ", "content": "It is time to take the previous diagrams and learn how to map their components into SystemVerilog constructs.", "url": "RV32ISPEC.pdf#segment24", "timestamp": "2023-10-18 14:48:12", "segment": "segment24", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.13.1 Creating a simple driver ", "content": "First, take a closer look at one of the blocks: the driver. The driver receives commands from the agent. It may inject errors or add delays, and it then breaks the command into the individual signal changes, such as bus requests and handshakes. The general term for a testbench block like this is a transactor; its core is a loop that repeatedly gets the next command and carries it out.", "url": "RV32ISPEC.pdf#segment25", "timestamp": "2023-10-18 14:48:12", "segment": "segment25", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.14 Simulation Environment Phases ", "content": "Now that you are learning about the parts that make up the environment, when do these parts execute? To coordinate the testbench code so that everything in the project works together, we want to clearly define the phases of execution. There are three primary phases (Build, Run, and Wrap-up), each divided into smaller steps. The Build phase is divided into the following steps. Generate configuration: randomize the configuration of the DUT and surrounding environment. Build environment: allocate and connect the testbench components based on the configuration. A testbench component is one that
exists only in the testbench, as opposed to the physical components of the design, which are built from RTL code. For example, if the configuration chose three bus drivers, the testbench would allocate and initialize them in this step. Reset the DUT. Configure the DUT, based on the generated configuration. The Run phase is when the test actually runs, with the following steps. Start environment: run the testbench components, such as the BFMs and stimulus generators. Run the test: start the test and then wait for it to complete. It is easy to tell when a directed test has completed, but this can be complex for a random test; use the layers of the testbench as a guide. Starting at the top, wait for a layer to drain its inputs to the previous layer, wait for the current layer to become idle, and then wait for the next lower layer. You should also use timeout checkers to ensure that neither the DUT nor the testbench locks up. The Wrap-up phase has two steps. Sweep: when the lowest layer completes, wait for the final transactions to drain out of the DUT. Report: once the DUT is idle, sweep the testbench for lost data. Sometimes the scoreboard holds transactions that never came out, perhaps dropped by the DUT. With this information you can create the final report on whether the test passed or failed; if it failed, be sure to delete the functional coverage data, as it may not be correct. Not shown in the layer diagram of Figure 1.12 is how the test starts the environment and runs through these steps; more details can be found in Chapter 8.", "url": "RV32ISPEC.pdf#segment26", "timestamp": "2023-10-18 14:48:13", "segment": "segment26", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.15 Maximum Code Reuse ", "content": "To verify a complex device with hundreds of features, you might write hundreds of directed tests. With constrained-random stimulus you write far fewer tests; instead, the real work is put into constructing the testbench, which contains the lower testbench layers: scenario, functional, and command. This testbench code is used by all the tests, so it must remain generic. These guidelines may seem to recommend an overly sophisticated testbench, but remember that every line you put in the testbench eliminates a line from every single test; if you create a dozen tests, that is a high payback. Keep this in mind as you read Chapter 8.", "url": "RV32ISPEC.pdf#segment27", "timestamp": "2023-10-18 14:48:13", "segment": "segment27", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.16 Testbench Performance ", "content": "The first time you see this methodology, you probably have qualms about how well it works compared with directed testing. A common objection is testbench
performance. A directed test often simulates in less than a second, while constrained-random tests can wander around the state space for minutes to hours. The problem with this argument is that it ignores the real bottleneck in verification: the time required to create a test. You may be able to hand-craft a directed test in a day, but debugging it and manually verifying the results by hand takes another day or two; the actual simulation runtime is dwarfed by the amount of time you invested. There are several steps in creating a constrained-random test. The most significant is building the layered testbench, including the self-checking portion. This benefit requires a lot of work, but the work is shared between tests and is well worth the effort. The next step is creating the stimulus for a specific goal in the verification plan. This may mean crafting the random constraints in devious ways, or injecting errors and protocol violations. Building one of these may take more time than making several directed tests, but the payoff is much higher: a single constrained-random test tries thousands of different protocol variations and is worth the handful of directed tests that could be created in the same amount of time. The third step in constrained-random testing is functional coverage. This task starts with the creation of a strong verification plan with clear goals that are easily measured. Next, you need to create the SystemVerilog code that instruments the environment and gathers the data. Lastly, and most importantly, you need to analyze the results to determine whether you have met the goals and, if not, modify the tests.", "url": "RV32ISPEC.pdf#segment28", "timestamp": "2023-10-18 14:48:13", "segment": "segment28", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "1.17 Conclusion ", "content": "The continuous growth in complexity of electronic designs requires a modern, systematic, and automated approach to creating testbenches. The cost of fixing a bug grows by a factor of ten as the project moves through each step: from specification to RTL coding, to gate synthesis, to fabrication, and finally into the user's hands. Directed tests check only one feature at a time and cannot create the complex stimulus and configurations to which the device would be subjected in the real world. To produce robust designs, you must use constrained-random stimulus combined with functional coverage to create the widest possible range of stimulus.", "url": "RV32ISPEC.pdf#segment29", "timestamp": "2023-10-18 14:48:13", "segment": "segment29", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "Chapter 2 ",
"content": "Data Types", "url": "RV32ISPEC.pdf#segment30", "timestamp": "2023-10-18 14:48:13", "segment": "segment30", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.1 Introduction ", "content": "SystemVerilog offers many improved data structures compared with Verilog. Although many of them were created for designers, they are also useful for testbenches. In this chapter you will learn about the data structures most useful for verification. SystemVerilog introduces new data types with the following benefits. Two-state types: better performance and reduced memory usage. Queues and dynamic and associative arrays: automatic storage with reduced memory usage, plus built-in support for searching and sorting. Unions and packed structures: multiple views of the same data. Classes and structures: support for abstract data structures. Strings: built-in string support. Enumerated types: code that is easier to write and understand.", "url": "RV32ISPEC.pdf#segment31", "timestamp": "2023-10-18 14:48:13", "segment": "segment31", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.2 Built-in Data Types ", "content": "Verilog-1995 has two basic data types, variables (reg) and nets, both of which hold four-state values: 0, 1, Z, and X. RTL code uses variables to store combinational and sequential values. Variables can be unsigned single or multi-bit (reg [7:0]), signed 32-bit (integer), unsigned 64-bit (time), or floating point (real), and they can be grouped together into arrays that have a fixed size. All storage is static, meaning that every variable is alive for the entire simulation and routines cannot use a stack to hold arguments and local values. A net is used to connect parts of a design, such as gate primitives and module instances. Nets come in many flavors, but most designers use scalar and vector wires to connect together the ports of design blocks. SystemVerilog adds many new data types to help both hardware designers and verification engineers.", "url": "RV32ISPEC.pdf#segment32", "timestamp": "2023-10-18 14:48:13", "segment": "segment32", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.2.1 The logic type ", "content": "One thing in Verilog that always leaves new users scratching their heads is the difference between a reg and a wire. Which should they use when driving a port? And which when connecting blocks?
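As a preview of the answer, the logic type introduced next, here is a sketch along the lines of the book's Example 2.1 (my_dff is an assumed D-flip-flop module, not defined here):

```systemverilog
// One type for almost everything: logic can be driven procedurally,
// by a continuous assignment, by a gate primitive, or by a module.
module logic_data_type (input logic rst_n);
  logic q, q_l, d, clk, en;

  initial begin                  // driven procedurally
    clk = 0;
    forever #10 clk = ~clk;
  end

  not    n1 (q_l, q);            // driven by a gate primitive
  assign d = q_l & en;           // driven by a continuous assignment
  my_dff d1 (q, d, clk, rst_n);  // q driven by a module instance
endmodule
```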
SystemVerilog improves the classic reg data type so that it can be driven by continuous assignments, gates, and modules, in addition to being a variable. It is given the new name logic so that it does not look like a register declaration. The one limitation is that a logic variable cannot be driven by multiple drivers, such as when you are modeling a bidirectional bus; in that case the variable needs to be a net type such as wire. Example 2.1 shows the SystemVerilog logic type.", "url": "RV32ISPEC.pdf#segment33", "timestamp": "2023-10-18 14:48:13", "segment": "segment33", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.2.2 Two-state types ", "content": "SystemVerilog introduces several two-state data types to improve simulator performance and reduce memory usage, compared with four-state types. The simplest is the bit, which is always unsigned. You might be tempted to use types such as byte to replace more verbose declarations like logic [7:0], but hardware designers should be careful: the new types are signed variables, so a byte variable only counts up to 127, not the 255 you may expect (its range is -128 to 127). You could use byte unsigned, but that is more verbose than bit [7:0]. Signed variables may also cause unexpected results with randomization, as discussed in Chapter 6. Be careful when connecting two-state variables to the design under test, especially its outputs. If the hardware tries to drive an X or Z, the value is converted to a two-state value, and your testbench code may never know. Do not try to remember whether it is converted to 0 or 1; instead, always check for the propagation of unknown values with the $isunknown operator, which returns 1 if any bit of the expression is X or Z. Example 2.3 checks for four-state values: if $isunknown(iport) is true, the code displays an error giving the simulation time and reporting that a 4-state value was detected on the input port iport.", "url": "RV32ISPEC.pdf#segment34", "timestamp": "2023-10-18 14:48:13", "segment": "segment34", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.3 Fixed-Size Arrays ", "content": "SystemVerilog offers several flavors of arrays beyond the single-dimension, fixed-size arrays of Verilog-1995. Many enhancements have also been made to these classic arrays.", "url": "RV32ISPEC.pdf#segment35", "timestamp": "2023-10-18 14:48:13", "segment": "segment35", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.3.1 Declaring and initializing fixed-size array ", "content": "Verilog
requires that both the low and high array limits be given in the declaration. Since almost all arrays use a low index of 0, SystemVerilog lets you use the shortcut of giving just the array size, similar to C. Example 2.4, declaring fixed-size arrays: int lo_hi[0:15]; and int c_style[16]; each declare 16 ints, [0] through [15]. You can create multidimensional fixed-size arrays by specifying the dimensions after the variable name; these are unpacked arrays (packed arrays are shown later). The following creates several two-dimensional arrays of integers, 8 entries by 4, and sets the last entry to 1. Multidimensional arrays were introduced in Verilog-2001, but the compact declaration style is new. Example 2.5, declaring and using multidimensional arrays: int array2[0:7][0:3]; is the verbose declaration, int array3[8][4]; is the compact declaration, and array2[7][3] = 1; sets the last array element. SystemVerilog stores each element on a longword (32-bit) boundary, so a byte, shortint, or int is stored in a single longword, while a longint is stored in two longwords. Simulators frequently store four-state types such as logic and integer in two longwords. Example 2.6, an unpacked array declaration: bit [7:0] b_unpacked[3]; declares an unpacked array of bytes; b_unpacked is stored in three longwords (Figure 2.1, unpacked array storage).", "url": "RV32ISPEC.pdf#segment36", "timestamp": "2023-10-18 14:48:13", "segment": "segment36", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.3.2 The array literal ", "content": "You can initialize an array using an array literal, which is an apostrophe followed by curly braces.1 You can set some or all of the elements at once, and replicate values by putting a count before the curly braces.", "url": "RV32ISPEC.pdf#segment37", "timestamp": "2023-10-18 14:48:13", "segment": "segment37", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.3.3 Basic array operations — for and foreach ", "content": "The most common way to manipulate an array is with a for or foreach loop. In Example 2.8, the loop variable is declared local to the for loop, and the SystemVerilog function $size returns the size of the array. In a foreach statement you specify the array name and an index in square brackets; SystemVerilog then automatically steps through all the elements of the array, and the index variable is local to the loop. 1. VCS X-2005.06 follows the original Accellera standard for array literals, which uses curly braces without the leading apostrophe; VCS will change to the IEEE standard in an upcoming release.
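Array literals of the kind described in Section 2.3.2 might look like this sketch (the values are arbitrary):

```systemverilog
module array_literals;
  int ascend[4] = '{0,1,2,3};  // initialize all four elements

  initial begin
    int descend[5];
    descend      = '{4,3,2,1,0};        // set all five values
    descend[0:2] = '{5,6,7};            // set just the first three
    ascend       = '{4{8}};             // replicate: four values of 8
    descend      = '{9, 8, default:1};  // first two set, the rest get 1
  end
endmodule
```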
", "url": "RV32ISPEC.pdf#segment38", "timestamp": "2023-10-18 14:48:13", "segment": "segment38", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.3.4 Basic array operations – copy and compare ", "content": "You can perform aggregate compare and copy operations on arrays without loops. (An aggregate operation works on the entire array, as opposed to working on an individual element.) Comparisons are limited to equality and inequality. Example 2.11 shows several array compares; the conditional operator ?: acts as a mini if-statement choosing between two strings. You cannot perform aggregate arithmetic operations, such as addition, on arrays; use loops instead. For logical operations such as xor, either use a loop or use packed arrays, described below.", "url": "RV32ISPEC.pdf#segment39", "timestamp": "2023-10-18 14:48:13", "segment": "segment39", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.3.5 Bit and word subscripts, together at last ", "content": "A common annoyance in Verilog-1995 was that you could not use word and bit subscripts together. Verilog-2001 removed this restriction for fixed-size arrays. Example 2.12 prints the first array element (binary 101), its lowest bit (1), and the next two higher bits (binary 10). This did not change with the new SystemVerilog, but many users may not know about this useful improvement from Verilog-2001.", "url": "RV32ISPEC.pdf#segment40", "timestamp": "2023-10-18 14:48:13", "segment": "segment40", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.3.6 Packed arrays ", "content": "For some data types, you may want to access the entire value but also divide it into smaller elements. For example, you may have a 32-bit register that you sometimes want to treat as four 8-bit values and at other times as a single, unsigned value. A SystemVerilog packed array is treated as both an array and a single value: it is stored as a contiguous set of bits with no unused space, unlike an unpacked array.", "url": "RV32ISPEC.pdf#segment41", "timestamp": "2023-10-18 14:48:13", "segment": "segment41", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.3.7 Packed array examples ", "content": "For a packed array, the bit and word dimensions are specified as part of the type, before the variable name, and each dimension must be specified in [hi:lo] range format, not as a simple size.
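A packed declaration of this form, echoing the book's packed-byte example (the value assigned is arbitrary):

```systemverilog
module packed_bytes;
  bit [3:0][7:0] bytes;      // 4 bytes packed into one 32-bit word

  initial begin
    bytes       = 32'hDEADBEEF; // assign the whole longword at once
    bytes[3]    = 8'h01;        // the most significant byte
    bytes[3][7] = 1'b1;         // the most significant bit
  end
endmodule
```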
The variable bytes, declared as bit [3:0][7:0] bytes;, is a packed array of four bytes stored in a single longword. You can mix packed and unpacked dimensions; for instance, you may want to make an array that represents a memory that can be accessed as bits, bytes, or longwords. In Example 2.14, barray is an unpacked array of three packed elements. Example 2.14, declaration and use of a mixed packed/unpacked array: bit [3:0][7:0] barray[3]; declares the packed 3x32-bit array, and barray[0] = 32'h01234567; barray[0][3] = 8'h01; and barray[0][1][6] = 1'b1; assign to it. Since barray is an array of three elements, a single subscript gets you a longword of data, barray[2]; two subscripts get a byte of data, barray[0][3]; and three subscripts access a single bit, barray[0][1][6]. Note that one dimension is specified after the name, barray[3]; this dimension is unpacked, so you always need to use at least one subscript.", "url": "RV32ISPEC.pdf#segment42", "timestamp": "2023-10-18 14:48:13", "segment": "segment42", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.3.8 Choosing between packed and unpacked arrays ", "content": "Should you choose a packed or an unpacked array? A packed array is handy if you need to convert to and from scalars; for example, you might need to reference a memory as a byte or as a longword, and the array barray can handle that requirement. Of the fixed-size arrays, dynamic arrays, associative arrays, and queues shown in this chapter, only fixed-size arrays can be packed. If you need to wait for a change in an array, you have to use a packed array: perhaps your testbench needs to wake up when a memory changes value, so you want to use the @ operator, which is legal only with scalar values and packed arrays. Using the earlier examples, you can block on the variable lw and on barray[0], but not on the entire array barray, unless you expand it: @(barray[0] or barray[1] or barray[2]).", "url": "RV32ISPEC.pdf#segment43", "timestamp": "2023-10-18 14:48:13", "segment": "segment43", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.4 Dynamic Arrays ", "content": "The basic Verilog array type shown so far is known as a fixed-size array, as its size is set at compile time. What if you do not know the size of the array until runtime? You may, for example, choose a number of transactions randomly between 1000 and 100,000; if you wanted to use a fixed-size array, you would have to allocate the maximum, and it could sit half empty. SystemVerilog provides the dynamic array, which can be allocated and resized during simulation. A dynamic array is declared with empty word subscripts [], which means that you do not give the array a size at compile time; instead, you specify it at runtime.
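Allocation and resizing of a dynamic array can be sketched as follows (array names assumed; this loosely mirrors the book's dynamic-array example):

```systemverilog
module dyn_arrays;
  int dyn[], d2[];           // declared empty: size chosen at runtime

  initial begin
    dyn = new[5];            // allocate 5 elements
    foreach (dyn[j])
      dyn[j] = j;            // initialize 0..4
    d2  = dyn;               // copy: d2 gets its own 5 elements
    d2[0] = 5;               // changes d2 only, not dyn
    dyn = new[20](dyn);      // grow to 20 elements, keeping the old 5
    dyn = new[100];          // 100 fresh elements; old values are lost
    dyn.delete();            // back to empty
  end
endmodule
```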
The array starts out empty, and you must call the new[] operator to allocate space, passing the number of entries in the square brackets. If you pass the name of an array to the new[] operator, the existing values are copied into the new elements. The $size function returns the size of either a fixed-size or a dynamic array. Dynamic arrays also have several specialized routines, such as delete and size; the size method returns the same result as $size but does not work on fixed-size arrays. If you want to declare a constant array of values but do not want to bother counting the number of elements, use a dynamic array with an array literal, as in Example 2.16, which holds 9 masks of 8 bits each. Let SystemVerilog count them, rather than making a fixed-size array and accidentally choosing the wrong size, such as 8. You can make assignments between fixed-size and dynamic arrays as long as the base type, such as int, matches: you can assign a dynamic array to a fixed array as long as the number of elements matches, and when you copy a fixed-size array to a dynamic array, SystemVerilog calls the new[] constructor to allocate the space and copy the values.", "url": "RV32ISPEC.pdf#segment44", "timestamp": "2023-10-18 14:48:13", "segment": "segment44", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.5 Queues ", "content": "SystemVerilog introduces a new data type, the queue, which provides easy searching and sorting in a structure as fast as a fixed-size array yet as versatile as a linked list. Like dynamic arrays, queues can grow and shrink, and you can easily add and remove elements anywhere in a queue. Example 2.17 adds and removes values from a queue. When you create a queue, SystemVerilog actually allocates extra space so you can quickly add extra elements; note that you do not need to call the new[] operator for a queue. If you add enough elements that the queue runs out of space, SystemVerilog automatically allocates additional space, so you can grow and shrink a queue without the performance penalty of a dynamic array. It is efficient to push and pop elements at the front and back of a queue, taking a fixed amount of time no matter how large the queue is. Adding and deleting elements in the middle is slower, especially for larger queues, as SystemVerilog has to shift up to half the elements to make room. You can copy the contents of a fixed or dynamic array into a queue.", "url": "RV32ISPEC.pdf#segment45", "timestamp": "2023-10-18 14:48:14", "segment": "segment45", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.6 Associative Arrays ", "content": "Dynamic arrays are good when you occasionally want to create a large array, but what if you want something really huge?
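The queue operations of Section 2.5 can be sketched like this (values arbitrary; note that queue literals omit the apostrophe):

```systemverilog
module queue_ops;
  int j = 1,
      q2[$] = {3,4},         // queue literals use no apostrophe
      q[$]  = {0,2,5};

  initial begin
    q.insert(1, j);          // {0,1,2,5}  insert 1 before the 2
    q.delete(1);             // {0,2,5}    delete element #1
    q.push_front(6);         // {6,0,2,5}  cheap at the front...
    j = q.pop_back();        // {6,0,2}    ...and at the back: j = 5
    q.push_back(8);          // {6,0,2,8}
    q = {q, q2};             // {6,0,2,8,3,4}  concatenate q2 on the end
    q = {};                  // delete the whole queue
  end
endmodule
```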
Perhaps you are modeling a processor with a multi-gigabyte address range. During a typical test, the processor may touch only a few hundred or thousand memory locations containing executable code and data, so allocating and initializing gigabytes of storage is wasteful. SystemVerilog offers associative arrays to store entries in a sparse matrix: although you can address a large address space, SystemVerilog allocates memory for an element only when you write to it. In the following picture, the associative array holds values for addresses 0-3, 42, 1000, 4521, and 200,000; the memory used to store them is far less than would be needed to store a fixed or dynamic array with 200,000 entries. Example 2.18 shows declaring, initializing, and stepping through an associative array. The array is declared with the wildcard syntax [*]; you can remember this syntax by thinking of an array that can be indexed by almost any integer value. In Example 2.18, the associative array assoc has scattered elements at 1, 2, 4, 8, 16, etc. A simple for loop cannot step through this array; you need to use a foreach loop, and if you want finer control you can use the first and next functions in a do-while loop. These functions modify their index argument and return 0 or 1, depending on whether any elements are left in the array. Associative arrays can also be addressed with string indexes, similar to Perl hash arrays; Example 2.19 reads name-value pairs from a file into an associative array. If you try to read an element that has not been allocated yet, SystemVerilog returns 0 for two-state types and X for four-state types; use the function exists to check whether an element exists, as shown. (Strings are explained in Section 2.14.) An associative array is stored by the simulator as a tree. This has additional overhead, which is acceptable when you need to store arrays with widely separated index values, such as packets indexed with 32-bit addresses or 64-bit data values.", "url": "RV32ISPEC.pdf#segment46", "timestamp": "2023-10-18 14:48:14", "segment": "segment46", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.7 Linked Lists ", "content": "SystemVerilog provides a linked-list data structure, analogous to the STL (Standard Template Library) list container. The container is defined as a parameterized class, meaning that it can be customized to hold any data type. Even if you already know linked lists, perhaps as a C++ programmer familiar with the STL version, you should avoid using this one in SystemVerilog: queues are more efficient and easier to use.", "url": "RV32ISPEC.pdf#segment47", "timestamp": "2023-10-18 14:48:14", "segment": "segment47", "image_urls": [], "Book":
"book_systemverilog_for_verification" }, { "section": "2.8 Array Methods ", "content": "There are many array methods that you can use on any unpacked array type: fixed, dynamic, queue, or associative. These routines range from simple ones that give the current array size to ones that sort the elements.", "url": "RV32ISPEC.pdf#segment48", "timestamp": "2023-10-18 14:48:14", "segment": "segment48", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.8.1 Array reduction methods ", "content": "A basic array reduction method takes an array and reduces it to a scalar. The most common reduction method is sum, which adds together all the values in the array. Be careful of the SystemVerilog rules for handling the width of operations: by default, if you add up the values of a single-bit array, the result is a single bit; but if you store the result in a 32-bit variable, or compare it with a 32-bit variable, SystemVerilog uses 32 bits when adding the values of the array. Other array reduction methods include product and xor.", "url": "RV32ISPEC.pdf#segment49", "timestamp": "2023-10-18 14:48:14", "segment": "segment49", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.8.2 Array locator methods ", "content": "What is the largest value in an array? Does an array contain a certain value? The array locator methods find data in an unpacked array, and these methods always return a queue. Example 2.22 uses a fixed-size array f[6], a dynamic array d, and a queue q. The min and max functions find the smallest and largest elements of an array; note that they return a queue, not a scalar as you might expect. These methods also work with associative arrays. The unique method returns a queue of the unique values in the array; duplicate values are not included. You could search an array using a foreach loop, but SystemVerilog can do it in one operation with a locator method, where a with expression tells SystemVerilog how to perform the search. You can combine the array reduction sum with a with clause, and the results may surprise you: the sum operator adds up the number of times the expression is true. In the first statement of Example 2.23, two array elements are greater than 7 (the 9 and the 8), so count is set to 2. Note that with a sum-with statement you do not need to store the result in a temporary variable; you can use it directly, for instance in a $display statement.", "url": "RV32ISPEC.pdf#segment50", "timestamp": "2023-10-18 14:48:14", "segment": "segment50", "image_urls": [], "Book": "book_systemverilog_for_verification" }, {
"section": "2.9 Choosing a Storage Type ", "content": "Here are guidelines for choosing the right storage type based on flexibility, memory usage, speed, and sorting. These are rules of thumb, and results may vary between simulators.", "url": "RV32ISPEC.pdf#segment51", "timestamp": "2023-10-18 14:48:14", "segment": "segment51", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.9.1 Flexibility ", "content": "Use a fixed-size or dynamic array if it is accessed with consecutive positive integer indices: 0, 1, 2, 3, and so on. Choose a fixed-size array if the array size is known at compile time, and a dynamic array if the size is not known until runtime; variable-size packets, for example, are easily stored in a dynamic array. When you are writing routines to manipulate arrays, consider using dynamic arrays: one routine works with a dynamic array of any size, as long as the element type (int, string, etc.) matches. Likewise, you can pass a queue of any size to a routine, as long as the element type matches the queue argument, and associative arrays can also be passed regardless of size. A routine with a fixed-size array argument, however, accepts only arrays of the specified length. Choose associative arrays for nonstandard indices, such as widely separated values coming from random data values or addresses; associative arrays can also be used to model content-addressable memories. Queues are a good way to store data when the number of elements grows and shrinks a lot during simulation, such as a scoreboard that holds expected values. Lastly, queues are great for searching and sorting.", "url": "RV32ISPEC.pdf#segment52", "timestamp": "2023-10-18 14:48:14", "segment": "segment52", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.9.2 Memory usage ", "content": "If you want to reduce the simulation memory usage, use two-state elements, and choose data sizes that are multiples of 32 bits to avoid wasted space, as simulators usually store anything smaller in a 32-bit word. For example, an array of 1024 bytes wastes memory if the simulator puts each element in its own 32-bit word. Packed arrays can also help conserve memory. For arrays that hold up to a thousand elements, the type of array you choose does not make a big difference in memory usage, unless you have many instances of these arrays. For arrays with a thousand to a million active elements, fixed-size and dynamic arrays are the most memory efficient. You may want to reconsider your algorithms if you need arrays with more than a million
active elements. Queues are slightly less efficient to access than fixed-size or dynamic arrays because of additional pointers. However, if your data set grows and shrinks often, storing it in a dynamic array means manually calling new[] to allocate memory and copy the values; this expensive operation would wipe out any gains from using a dynamic array. Modeling memories larger than a few megabytes should be done with an associative array. Note that each element in an associative array can take several times more memory than one in a fixed-size or dynamic array because of pointer overhead.", "url": "RV32ISPEC.pdf#segment53", "timestamp": "2023-10-18 14:48:14", "segment": "segment53", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.9.3 Speed ", "content": "Choose your array type based on how many times it is accessed per clock cycle. For only a few reads and writes, you could use any type, as the overhead is minor compared with the DUT. If you use the array often, its size and type matter. Fixed-size and dynamic arrays are stored in contiguous memory, so any element can be found in the same amount of time, regardless of array size. Queues have almost the same access time as a fixed-size or dynamic array for reads and writes; the first and last elements can be pushed and popped with almost no overhead, but inserting or removing elements in the middle requires many elements to be shifted over to make room. If you need to insert new elements into a large queue, your testbench may slow down, so consider changing how you store the new elements. When reading or writing associative arrays, the simulator must search for the element in memory. The LRM does not specify how this is done; popular approaches are hash tables and trees. These require extra computation compared with the other arrays, and therefore associative arrays are the slowest.", "url": "RV32ISPEC.pdf#segment54", "timestamp": "2023-10-18 14:48:14", "segment": "segment54", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.9.4 Sorting ", "content": "Since SystemVerilog can sort any single-dimension array (fixed-size, dynamic, and associative arrays, plus queues), pick the type based on how often data is added to the array. If all the data is received at once, choose a fixed-size or dynamic array so that you only allocate the array once; if the data slowly dribbles in, choose a queue, since adding new elements to the head or tail is very efficient. If your values are noncontiguous and unique, such as 1, 10, 11, 50, you can store them in an associative array, using the values themselves as the index; the routines first, next, and prev then let you search the associative array for a value and find successive values.
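The value-indexed associative array described next can be sketched as follows (function and array names are made up for illustration):

```systemverilog
module assoc_scoreboard;
  bit expected[*];                  // sparse: an element exists only
                                    // after a write creates it
  function void note_write(bit [31:0] addr_val);
    expected[addr_val] = 1;         // allocate and mark in one step
  endfunction

  function bit check_and_clear(bit [31:0] addr_val);
    if (expected.exists(addr_val)) begin
      expected.delete(addr_val);    // done with this element
      return 1;
    end
    return 0;                       // value was never written
  endfunction
endmodule
```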
larger and smaller than the current value, and they support removing a value. However, an associative array is much faster for accessing a given element by its index. For example, you can use an associative array of bits to hold the set of expected 32-bit values. When a value is created, you write to that location. When you need to see whether a given value has been written, use the exists function, and when you are done with an element, use delete to remove it from the associative array", "url": "RV32ISPEC.pdf#segment55", "timestamp": "2023-10-18 14:48:14", "segment": "segment55", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.9.5 Choosing the best data structure ", "content": "Here are some suggestions on choosing a data structure. Network packets. Properties: fixed size, accessed sequentially. Use a fixed-size or dynamic array for fixed- or variable-size packets. Scoreboard of expected values. Properties: variable size, accessed by value. In general, use a queue, as you are constantly adding and deleting elements during simulation. If you can give every transaction a fixed id, such as 1, 2, 3, ..., you could use that as an index into the queue. If a transaction is filled with random values, you can just push it into a queue and search for the unique values. If the scoreboard may have hundreds of elements, and you are often inserting and deleting them from the middle, an associative array may be faster than the sorted structures. Use a queue if the data comes out in a predictable order, or an associative array if the order is unspecified. If the scoreboard never needs to be searched, just store the expected values in a mailbox, as shown in Section 7.6. Modeling large memories, greater than a million entries. If you do not need every location, use an associative array as a sparse memory. If you do need every location, try a different approach that does not need as much live data. If you are still stuck, be sure to use 2-state values packed into 32 bits. Command names or values from a file. Property: lookup by string. If you read strings from a file and need to look up commands, store them in an associative array, using the command string as the index. You can also create an array of handles that point to objects, as shown in Chapter 4, Basic OOP", "url": "RV32ISPEC.pdf#segment56", "timestamp": "2023-10-18 14:48:14", "segment": "segment56", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.10 Creating New Types with typedef ", "content": "You can create new types using the typedef statement. For example, your ALU may be configured at compile time to use 8, 16, 24, or 32-bit operands. In Verilog you would define a macro for the
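The associative-array scoreboard technique described in section 2.9.4 can be sketched as follows; the routine names record and check are illustrative, not from the book:

```systemverilog
bit expected[int];   // associative array of bits, indexed by the 32-bit value

// When a value is generated, mark its location
function void record(int value);
  expected[value] = 1;
endfunction

// When a value is seen, test for it with exists(), then delete() the element
function bit check(int value);
  if (expected.exists(value)) begin
    expected.delete(value);
    return 1;
  end
  return 0;
endfunction
```

Because only the locations actually written consume memory, this scales to sparse sets of noncontiguous values such as 1, 10, 11, 50.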
operand width and another for the type. These macros are not really creating a new type; they are just performing text substitution. In SystemVerilog you can create a true new type with the following code. This book uses the convention that user-defined types have the suffix _t. In general, SystemVerilog lets you copy between basic types without warning, either extending or truncating values whenever there is a width mismatch. Note that parameter and typedef statements can be made global by putting them in the root, as shown in Section 5.7. One of the most useful types you can create is an unsigned, 2-state, 32-bit integer, since in a testbench many values are positive integers, such as a field length or the number of transactions received. Put the following definition of uint in the root so it can be used anywhere in your simulation: typedef bit [31:0] uint;", "url": "RV32ISPEC.pdf#segment57", "timestamp": "2023-10-18 14:48:14", "segment": "segment57", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.11 Creating User-Defined Structures ", "content": "One of the biggest limitations of Verilog is its lack of data structures. In SystemVerilog you can create a structure using the struct statement, similar to what is available in C. However, a struct is a degenerate class; you should use a class instead, as shown in Chapter 4. A Verilog module combines data (signals) and code (always and initial blocks, plus routines). A class likewise combines data and routines, making an entity that can be easily debugged and reused. A struct created with typedef just groups data fields together, without any code that manipulates the data, so you are creating only half a solution. There are, however, several places where a typedef is useful: creating simple user-defined types, unions, enumerated types, and virtual interfaces", "url": "RV32ISPEC.pdf#segment58", "timestamp": "2023-10-18 14:48:14", "segment": "segment58", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.11.1 Creating a struct and a new type ", "content": "You can combine several variables into a structure. Example 2.27 creates a structure called pixel with three unsigned bytes, for red, green, and blue. Example 2.27 Creating a single pixel: struct {bit [7:0] r, g, b;} pixel; The problem with this declaration is that it creates a single pixel. To be able to share pixels, using them in ports and routines, you should create a new type instead, as in Example 2.28. Example 2.28 The pixel struct: typedef struct {bit [7:0] r, g, b;} pixel_s; pixel_s my_pixel; Using the suffix _s when declaring a struct makes it easier to share and reuse code", "url": "RV32ISPEC.pdf#segment59", 
"timestamp": "2023-10-18 14:48:14", "segment": "segment59", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.11.2 Making a union of several types ", "content": "hardware interpretation set bits register may depend value bits example processor instruction may many layouts based opcode immediatemode operands might store literal value operand field value may decoded differently integer instructions floating point instructions example 229 stores integer real number f location unions useful frequently need read write register several different formats however go over board especially save memory unions may help squeeze bytes structure expense create maintain complicated data struc ture instead make flat class discriminant variable shown section 854 kind variable indicates type transaction thus fields read write randomize need array values plus bits used packed array shown 236", "url": "RV32ISPEC.pdf#segment60", "timestamp": "2023-10-18 14:48:15", "segment": "segment60", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.11.3 Packed structures ", "content": "systemverilog allows control data laid memory using packed structures packed structure stored contiguous set bits unused space struct pixel shown used three data values stored three longwords even though needs three bytes specify packed smallest possible space example 230 packed structure typedef struct packed bit 70 r g b pixelps pixelps mypixel packed structures used underlying bits represent numerical value trying reduce memory usage example could pack together several bitfields make single register might pack together opcode operand fields make value contains entire processor instruction", "url": "RV32ISPEC.pdf#segment61", "timestamp": "2023-10-18 14:48:15", "segment": "segment61", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.12 Enumerated Types ", "content": "enumeration creates strong variable type limited 
set of specified names, such as instruction opcodes or state-machine values. Using names such as ADD, MOVE, or ROTW makes your code easier to write and maintain than using literals such as 8'h01. The simplest enumerated type declaration contains a list of constant names and one or more variables, which creates an anonymous enumerated type. Example 2.31 A simple enumerated type: enum {RED, BLUE, GREEN} color; You usually want to create a named enumerated type so that you can easily declare multiple variables, especially if they are used as routine arguments or module ports. You first create the enumerated type, and then the variables of this type. You can get the string representation of an enumerated variable with the function name", "url": "RV32ISPEC.pdf#segment62", "timestamp": "2023-10-18 14:48:15", "segment": "segment62", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.12.1 Defining enumerated values ", "content": "The actual values default to integers starting at 0 and then increasing. You can choose your own enumerated values. The following line uses the default value of 0 for INIT, then 2 for DECODE and 3 for IDLE. Example 2.33 Specifying enumerated values: typedef enum {INIT, DECODE=2, IDLE} fsmtype_e; Enumerated constants such as INIT follow the same scoping rules as variables. Consequently, you can use the same name in several enumerated types, such as INIT for different state machines, as long as they are declared in different scopes, such as modules, program blocks, routines, and classes. Enumerated types are stored as int unless you specify otherwise. Be careful when assigning values to enumerated constants, as the default value of an int is 0. In Example 2.34, position is initialized to 0, which is not a legal ordinal_e value. This behavior is not a tool bug; it is how the language is specified. Always specify an enumerated constant with the value of 0, just to catch this error. Example 2.34 Incorrectly specifying enumerated values: typedef enum {FIRST=1, SECOND, THIRD} ordinal_e; ordinal_e position; Example 2.35 Correctly specifying enumerated values: typedef enum {ERR_O=0, FIRST=1, SECOND, THIRD} ordinal_e; ordinal_e position;", "url": "RV32ISPEC.pdf#segment63", "timestamp": "2023-10-18 14:48:15", "segment": "segment63", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.12.2 Routines for enumerated types ", "content": "SystemVerilog provides several functions for stepping through enumerated types: first returns the
first member of the enumeration, and last returns the last member; next returns the next element, next(n) the nth next element, prev the previous element, and prev(n) the nth previous element. The functions next and prev wrap around when they reach the beginning or end of the enumeration. Note that there is no easy way to write a for loop that steps through all members of an enumerated type if you use an enumerated loop variable. You can get the starting member with first and each following member with next; the problem is creating the comparison for the final iteration of the loop. If you use the test current != current.last, the loop ends before using the last value. If you instead use current <= current.last, you get an infinite loop, as next never gives a value greater than the final value. Instead, use a loop whose test comes at the bottom to step through the values", "url": "RV32ISPEC.pdf#segment64", "timestamp": "2023-10-18 14:48:15", "segment": "segment64", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.12.3 Converting to and from enumerated types ", "content": "The default base type of an enumerated type is int, that is, 2-state. You can take the value of an enumerated variable and put it in an integer with a simple assignment, but SystemVerilog does not let you store an int or a 4-state integer in an enum without explicitly changing the type. SystemVerilog requires the explicit cast to make you realize that you could be writing an out-of-bounds value. The $cast function, shown in Example 2.37, tries to assign the right value to the left. If the assignment succeeds, $cast returns 1; if it fails because of an out-of-bounds value, no assignment is made and the function returns 0. If you use $cast as a task and the operation fails, SystemVerilog prints an error. You can also cast the value using type'(val), but this does no type checking, so the result may be out of bounds; you should not use this style", "url": "RV32ISPEC.pdf#segment65", "timestamp": "2023-10-18 14:48:15", "segment": "segment65", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.13 Constants ", "content": "There are several types of constants in SystemVerilog. The classic Verilog way to create a constant is with a text macro. On the plus side, macros have global scope and can be used for bit-field definitions and type definitions. On the negative side, because macros are global they can cause conflicts when you only need a local constant, and a macro requires the ` character so that it can be recognized and expanded. In SystemVerilog, parameters can be declared at the root level and so can be global. This approach can replace many Verilog
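One way to write the bottom-tested loop described above, assuming a simple three-value enumerated type, is a do...while that advances with next and stops after wrapping back to the first member:

```systemverilog
typedef enum {RED, BLUE, GREEN} color_e;
color_e color;

initial begin
  color = color.first();
  do begin
    $display("Color = %0d / %0s", color, color.name());
    color = color.next();                // wraps past the last member
  end while (color != color.first());   // stop once we are back at the start
end
```

Because the test is at the bottom, the body runs once for every member, including the last, without the off-by-one and infinite-loop problems of the top-tested versions.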
macros that were used as constants; use typedef to replace those clunky macros. The next choice is the parameter. The Verilog parameter was loosely typed and was limited in scope to a single module. Verilog-2001 added typed parameters, but the limited scope kept parameters from being widely used. SystemVerilog also supports the const modifier, which allows you to make a variable that is initialized in its declaration and cannot be written by procedural code. Example 2.38 Declaring a const variable: initial begin const byte colon = \":\"; ... end In Example 2.38, the value of colon is initialized when the initial block is entered. Example 3.10 shows a const routine argument", "url": "RV32ISPEC.pdf#segment66", "timestamp": "2023-10-18 14:48:15", "segment": "segment66", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.14 Strings ", "content": "If you have ever tried to use a Verilog reg variable to hold a string of characters, you know suffering. SystemVerilog's string type holds variable-length strings, where an individual character is of type byte. The elements of a string of length N are numbered 0 to N-1. Note that, unlike C, there is no null character at the end of the string; any attempt to use the character \\0 is ignored. Strings use dynamic memory allocation, so you do not have to worry about running out of space to store the string. Example 2.39 shows various string operations. The function getc(N) returns the byte at location N, while toupper returns an upper-case copy of the string and tolower a lowercase copy. Curly braces are used for concatenation. The task putc writes a byte into a string at a given location, which must be between 0 and the length as given by len. The substr(start, end) function extracts characters from location start through end. Note how useful dynamic strings are: in languages such as C you have to keep making temporary strings to hold the result from a function. In Example 2.39, the $psprintf function is used instead of $sformat from Verilog-2001. This new function returns a formatted temporary string that, as shown, can be passed directly to another routine, saving you from having to declare a temporary string and pass it between the formatting statement and the routine call", "url": "RV32ISPEC.pdf#segment67", "timestamp": "2023-10-18 14:48:15", "segment": "segment67", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.15 Expression Width ", "content": "A prime source of unexpected behavior in Verilog is the width of expressions. Example 2.40 adds 1 + 1 using four different styles. Addition (a)
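A short sketch of the string operations just described; the string values are illustrative:

```systemverilog
string s;

initial begin
  s = "IEEE ";
  $display(s.getc(0));        // byte at location 0: 73, the code for I
  $display(s.tolower());      // a lowercase copy: "ieee "
  s.putc(s.len()-1, "-");     // overwrite the trailing space: "IEEE-"
  s = {s, "P1800"};           // concatenation with curly braces
  $display(s.substr(2, 5));   // characters 2 through 5: "EE-P"
end
```

Because strings use dynamic memory, the concatenation simply grows s; no temporary buffers need to be declared.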
uses two 1-bit variables, so with a precision of one bit, 1 + 1 = 0. Addition (b) uses 8-bit precision because there is an 8-bit variable on the right side of the assignment, so 1 + 1 = 2. Addition (c) uses a dummy constant to force SystemVerilog to use 2-bit precision. Lastly, in addition (d), the first value is cast to a 2-bit value with the cast operator, so 1 + 1 = 2. (The $psprintf function was first implemented in Synopsys VCS and has been submitted for the next version of SystemVerilog; other simulators may have already implemented it.) There are several tricks you can use to avoid this problem. First, avoid situations where an overflow is lost, as in addition (a). Use a temporary, such as b8, with the desired width. Add another value to force the minimum precision, such as 2'b0. Lastly, with SystemVerilog you can cast one of the variables to the desired precision", "url": "RV32ISPEC.pdf#segment68", "timestamp": "2023-10-18 14:48:15", "segment": "segment68", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.16 Net Types ", "content": "Verilog allows you to use nets without first defining them, a feature called implicit nets. This shortcut helps netlisting tools and lazy designers, but it is guaranteed to cause problems if you ever misspell a net name. The solution is to disable this language feature with the Verilog-2001 compile directive `default_nettype none. Put this directive, which takes no terminating semicolon, before the first module of your Verilog code; any implicit net will then cause a compilation error. Example 2.41 Disabling implicit nets: `default_nettype none", "url": "RV32ISPEC.pdf#segment69", "timestamp": "2023-10-18 14:48:15", "segment": "segment69", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "2.17 Conclusion ", "content": "SystemVerilog provides many new data types and structures so that you can create high-level testbenches without having to worry about the bit-level representation. Queues work well for creating scoreboards, to which you constantly need to add and remove data. Dynamic arrays allow you to choose the array size at run-time, for maximum testbench flexibility. Associative arrays are used for sparse memories and for scoreboards with a single index. Enumerated types make your code easier to read and write by creating groups of named constants. Now you can go on to create procedural testbench constructs, and then explore the OOP capabilities of SystemVerilog in Chapter 4 to learn how to design your code at an even higher level of abstraction, thus creating robust,
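The four addition styles can be sketched as follows (variable names are illustrative); each comment reflects the precision rule stated above:

```systemverilog
bit       one = 1'b1;   // single-bit operand
bit [7:0] b8;

initial begin
  $displayb(one + one);           // (a) 1-bit precision: 1 + 1 = 0, carry lost
  b8 = one + one;                 // (b) 8-bit precision from the assignment: 2
  $displayb(one + one + 2'b0);    // (c) dummy constant forces 2-bit precision
  $displayb(2'(one) + one);       // (d) size cast forces 2-bit precision
end
```

In (a) the expression is self-determined at one bit, so the carry out of the sum is silently dropped; (b) through (d) each widen the context so the carry survives.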
reusable code", "url": "RV32ISPEC.pdf#segment70", "timestamp": "2023-10-18 14:48:15", "segment": "segment70", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "Chapter 3 ", "content": "procedural statements routines", "url": "RV32ISPEC.pdf#segment71", "timestamp": "2023-10-18 14:48:15", "segment": "segment71", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.1 Introduction ", "content": "verify design need write great deal code tasks functions systemverilog introduces many incremental improvements make easier making language look like c especially around argument passing background software engineering additions familiar", "url": "RV32ISPEC.pdf#segment72", "timestamp": "2023-10-18 14:48:15", "segment": "segment72", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.2 Procedural Statements ", "content": "systemverilog adopts many operators statements c c declare loop variable inside loop restricts scope loop variable prevent coding bugs increment decrement operators available pre post form label begin fork statement put label matching end join statement makes easier match start finish block also put label systemverilog end statements endmodule endtask endfunction others learn book example 31 demonstrates new constructs two new statements help loops first loop want skip rest statements next iteration use continue want leave loop immediately use break following loop reads commands file using amazing file io code part verilog2001 command blank line code continue skips processing command command done code break terminate loop", "url": "RV32ISPEC.pdf#segment73", "timestamp": "2023-10-18 14:48:15", "segment": "segment73", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.3 Tasks, Functions, and Void Functions ", "content": "verilog makes clear differentiation tasks functions important difference task consume time function function delay 100 
blocking statement such as @(posedge clock) or wait(ready), or call a task. Additionally, a Verilog function must return a value, and the value must be used, as in an assignment statement. In SystemVerilog, if you want to call a function and ignore its return value, cast the result to void. This might be done when calling a function just for its side effect. Example 3.3 Ignoring a function's return value: void'(my_func(42)); Some simulators, such as VCS, allow you to ignore the return value without using the void syntax. In SystemVerilog, a task that does not consume time should be made a void function, which does not return a value but can be called as a task. For maximum flexibility, any debug routine should be a void function rather than a task, so that it can be called from any task or function. Example 3.4 prints the values of a state machine. Example 3.4 Void function for debug: function void print_state(); $display(\"@%0d: state = %0s\", $time, cur_state.name); endfunction", "url": "RV32ISPEC.pdf#segment74", "timestamp": "2023-10-18 14:48:15", "segment": "segment74", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.4 Task and Function Overview ", "content": "SystemVerilog makes several small improvements to tasks and functions that make them look more like C and C++ routines", "url": "RV32ISPEC.pdf#segment75", "timestamp": "2023-10-18 14:48:15", "segment": "segment75", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.4.1 Routine begin...end removed ", "content": "The first improvement you may notice is that in SystemVerilog the begin...end blocks are optional; Verilog-1995 required them for anything but single-line routines. The task...endtask and function...endfunction keywords are enough to define the routine boundaries. Example 3.5 A simple task without begin...end: task multiple_lines; $display(\"First line\"); $display(\"Second line\"); endtask : multiple_lines", "url": "RV32ISPEC.pdf#segment76", "timestamp": "2023-10-18 14:48:15", "segment": "segment76", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.5 Routine Arguments ", "content": "Many of the SystemVerilog improvements to routines make it easier to declare arguments, and they expand the ways you can pass values to and from a routine", "url": "RV32ISPEC.pdf#segment77", "timestamp": "2023-10-18 14:48:15", "segment": "segment77", "image_urls": [], "Book": 
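The two declaration styles being contrasted in section 3.5.1 can be sketched like this; the task names are illustrative:

```systemverilog
// Verilog-1995 style: each argument declared twice, direction then type
task mytask2;
  output [31:0] x;
  reg    [31:0] x;
  input         y;
  begin
    x = {32{y}};
  end
endtask

// SystemVerilog C-style: declared once, in the header, no begin...end needed
task mytask1(output logic [31:0] x, input logic y);
  x = {32{y}};
endtask
```

The C-style header carries the direction, type, and name in one place, which is both shorter and harder to get out of sync.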
"book_systemverilog_for_verification" }, { "section": "3.5.1 C-style Routine Arguments ", "content": "systemverilog verilog2001 allow declare task function arguments cleanly less repetition following verilog task requires declare arguments twice direction type", "url": "RV32ISPEC.pdf#segment78", "timestamp": "2023-10-18 14:48:15", "segment": "segment78", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.5.2 Argument Direction ", "content": "take even shortcuts declaring routine arguments direction type default input logic sticky repeat similar arguments routine header written using verilog1995 style arguments b input logic 1 bit wide arguments u v 16bit output bit types", "url": "RV32ISPEC.pdf#segment79", "timestamp": "2023-10-18 14:48:15", "segment": "segment79", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.5.3 Advanced Argument Types ", "content": "verilog simple way handle arguments input inout copied local variable start routine output inout copied routine exited memories could passed verilog routine scalars systemverilog specify argument passed reference rather copying value argument type ref several benefits input output inout first pass array routine systemverilog allows pass array arguments without ref direction array copied onto stack expensive operation smallest arrays example 310 also shows const modifier result array initialized printsum called modified routine always use ref passing arrays routine want routine change array values use const ref type compiler checks routine modify array second benefit ref arguments task modify variable instantly seen calling function useful several threads executing concurrently want simple way pass information see chapter 7 details using forkjoin example 311 initial block access data memory soon busenable asserted even though busread task return bus transaction completes could several cycles later since data argument passed ref data statement 
triggers as soon as data changes in the task. If data had instead been declared as output, the @data statement would not trigger until the end of the bus transaction", "url": "RV32ISPEC.pdf#segment80", "timestamp": "2023-10-18 14:48:16", "segment": "segment80", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.5.4 Default Argument Values ", "content": "As your testbench grows in sophistication, you may want to add additional controls to your code, but without breaking existing code. For the function in Example 3.10 you might want to print the sum of just the middle values of the array, but you do not want to go back and rewrite every call to add extra arguments. In SystemVerilog you can specify a default value that is used if you leave the argument out of the call. You can then call the task in the following ways; note that the first call is compatible with both versions of the print_sum routine", "url": "RV32ISPEC.pdf#segment81", "timestamp": "2023-10-18 14:48:16", "segment": "segment81", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.5.5 Common Coding Errors ", "content": "The most common coding mistake you are likely to make with routines is forgetting that an argument's direction and type are sticky with respect to the previous argument, and that the default type of the first argument is single-bit input. Consider the following simple task header with two arguments, both input integers. While writing the task you realize you need access to an array, so you add a new array argument first, using the ref type so that it is not copied. But now the argument types of a and b take the direction from the previous argument, ref. Using ref on a simple variable such as an int is usually not needed, and you would not even get a warning from the compiler, so you would never realize you are using the wrong type. When the first argument of a routine is something other than the default input type, specify the direction of all arguments", "url": "RV32ISPEC.pdf#segment82", "timestamp": "2023-10-18 14:48:16", "segment": "segment82", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.6 Returning from a Routine ", "content": "SystemVerilog adds the return statement to make it easier to control the flow in your routines. The following task needs to return early when its error checking fails; otherwise it would need an else clause, whose extra indentation would make the code harder to read", "url": "RV32ISPEC.pdf#segment83", "timestamp": "2023-10-18 14:48:16", "segment": "segment83", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { 
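The early-return pattern described in section 3.6 might look like the following sketch; the task name load_array is illustrative:

```systemverilog
task load_array(int len, ref int array[]);
  if (len <= 0) begin
    $display("Bad len = %0d", len);
    return;                 // return early instead of nesting in an else clause
  end
  // the rest of the task stays at the outer indentation level
  foreach (array[i])
    array[i] = len;
endtask
```

Without return, everything after the error check would have to live inside an else block, pushing the main body one level deeper.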
"section": "3.7 Local Data Storage ", "content": "verilog created 1980s primary goal describing hardware objects language statically allocated particular routine arguments local variables stored fixed location rather pushing stack like programming languages build silicon representation recursive routine however software engineers used behavior stackbased languages c bitten subtle bugs limited ability create complex testbenches libraries routines", "url": "RV32ISPEC.pdf#segment84", "timestamp": "2023-10-18 14:48:16", "segment": "segment84", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.7.1 Automatic storage ", "content": "verilog1995 tried call task multiple places testbench local variables shared common static storage different threads stepped values verilog2001 specify tasks functions modules use automatic storage causes simulator use stack local variables systemverilog routines still use static storage default modules program blocks always make program blocks routines use automatic storage putting automatic keyword program statement chapter 5 learn program blocks hold testbench code always make programs automatic example 319 shows task monitor data written memory call task multiple times concurrently addr expectdata arguments stored separately call without automatic modifier called waitformem second time first still waiting second call would overwrite two arguments", "url": "RV32ISPEC.pdf#segment85", "timestamp": "2023-10-18 14:48:16", "segment": "segment85", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.7.2 Variable initialization ", "content": "similar problem occurs try initialize local variable declaration actually initialized start simulation general solution avoid initializing variable declaration anything constant use separate assignment statement give better control initialization done following task looks bus five cycles creates local variable attempts initialize current value 
of the address bus. The bug is that the variable local_addr is statically allocated, and so it is actually initialized at the start of simulation, not when the begin...end block is entered. The solution is to declare the program as automatic", "url": "RV32ISPEC.pdf#segment86", "timestamp": "2023-10-18 14:48:16", "segment": "segment86", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.8 Time Values ", "content": "SystemVerilog has several new constructs that allow you to unambiguously specify time values in your system", "url": "RV32ISPEC.pdf#segment87", "timestamp": "2023-10-18 14:48:16", "segment": "segment87", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.8.1 Time units and precision ", "content": "If you rely on the `timescale compiler directive, you must compile the files in the proper order to be sure all the delays use the proper scale and precision. The timeunit and timeprecision declarations eliminate this ambiguity by precisely specifying the values for every module. Example 3.22 shows these declarations. Note that, unlike `timescale, they must be put in every module that has a delay", "url": "RV32ISPEC.pdf#segment88", "timestamp": "2023-10-18 14:48:16", "segment": "segment88", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.8.2 Time literals ", "content": "SystemVerilog allows you to unambiguously specify a time value plus its units, so your code can use delays such as 0.1ns and 20ps. Remember to use timeunit and timeprecision, or `timescale. You can make your code even more time aware by using the classic Verilog $timeformat and $realtime routines", "url": "RV32ISPEC.pdf#segment89", "timestamp": "2023-10-18 14:48:16", "segment": "segment89", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "3.9 Conclusion ", "content": "The new SystemVerilog procedural constructs and task and function features make it easier to create testbenches, by making the language look more like a familiar programming language such as C or C++. Just be sure to stick with SystemVerilog's additional HDL constructs, such as timing controls and four-state logic, where they are needed", "url": "RV32ISPEC.pdf#segment90", "timestamp": "2023-10-18 14:48:16", "segment": "segment90", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "Chapter 4 ", 
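The time declarations and literals from sections 3.8.1 and 3.8.2 can be sketched together in one module; the module name is illustrative:

```systemverilog
module delay_example;
  timeunit 1ns;          // unlike `timescale, these must appear in
  timeprecision 1ps;     // every module that has a delay

  initial begin
    #0.1ns;              // time literal with explicit units
    #20ps;
    $display("Now: %t", $realtime);
  end
endmodule
```

Because the unit and precision are declared inside the module, its delays mean the same thing regardless of file compilation order.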
"content": "basic oop", "url": "RV32ISPEC.pdf#segment91", "timestamp": "2023-10-18 14:48:16", "segment": "segment91", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.1 Introduction ", "content": "procedural programming languages verilog c strong division data structures code uses dec larations types data often different file algorithms manipulate result difficult understand functionality program two halves separate verilog users even worse c users structures verilog bit vectors arrays wanted store information bus transaction would need multiple arrays one address one data one command information transaction n spread across arrays code create transmit receive transac tions module may may actually connected bus worst arrays static testbench allocated 100 array entries current test needed 101 would edit source code change size recompile result arrays sized hold greatest conceivable number transactions normal test memory wasted object oriented programming oop lets create complex data types tie together routines work create testbenches systemlevel models abstract level calling rou tines perform action rather toggling bits work transactions instead signal transitions productive bonus testbench decoupled design details making robust easier maintain reuse future projects already familiar oop skim chapter systemverilog follows oop guidelines fairly closely sure read section 418 learn build testbench chapter 8 presents advanced oop concepts inheritance testbench techniques read everyone", "url": "RV32ISPEC.pdf#segment92", "timestamp": "2023-10-18 14:48:16", "segment": "segment92", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.2 Think of Nouns, not Verbs ", "content": "grouping data code together helps creating maintaining large testbenches data code brought together start thinking would perform testbench job goal testbench apply stimulus design check result see correct data flows design grouped together 
transactions. The easiest way to organize your testbench is around the transactions and the operations that it performs. With OOP, a transaction is an object, the focus of your testbench. Think of an analogy to transportation. When you get in a car, you want to perform discrete actions such as starting, moving forward, turning, stopping, and listening to music as you drive. Early cars required detailed knowledge of their internals: you had to advance and retard the spark, open and close the choke, keep an eye on the engine speed, and be aware of the traction of your tires when you drove on a slippery surface such as a wet road. Today, your interactions with a car are at a high level. When you want to start the car, you turn the key in the ignition and it is done. You get the car moving by pressing the gas pedal and stop it with the brakes. When driving in snow, you do not have to worry about the anti-lock brakes that help you stop safely in a straight line. Your testbench should be structured the same way. Traditional testbenches are oriented around the operations that happen: create a transaction, transmit it, receive it, check it, and make a report. Instead, think about the structure of the testbench and its parts: the generator creates transactions and passes them to the next level; the driver talks to the design, which responds with transactions that are received by the monitor; and the scoreboard checks the results against the expected data. Divide your testbench into these blocks, and then define how they communicate", "url": "RV32ISPEC.pdf#segment93", "timestamp": "2023-10-18 14:48:16", "segment": "segment93", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.3 Your First Class ", "content": "A class encapsulates data together with the routines that manipulate it. Example 4.1 shows a class for a generic packet. The packet contains source and destination addresses and an array of data values. The BusTran class has one function to display the contents of the packet and another to compute the CRC (cyclic redundancy check) of the data. To make it easier to match the beginning and end of a named block, you can put a label on the end. The end labels in Example 4.1 may look redundant, but in real code with many nested blocks, labels help you find the mate for a simple end, endtask, endfunction, or endclass. Every company has its own naming style. This book uses the following convention: class names start with a capital letter and do not use underscores (BusTran, Packet); constants are upper case (CELL_SIZE); and variables are lower case (count, trans_type). You are free to use whatever style you want", "url": "RV32ISPEC.pdf#segment94", "timestamp": "2023-10-18 14:48:16", "segment": "segment94", "image_urls": [], "Book": 
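Example 4.1 is described but not reproduced here; a reconstruction consistent with that description might look like the following, where the CRC computation is an illustrative stand-in, not the book's:

```systemverilog
class BusTran;
  bit [31:0] addr, crc;
  bit [31:0] data[8];

  function void display();
    $display("BusTran: addr=%h, crc=%h", addr, crc);
  endfunction : display

  function void calc_crc();
    crc = addr ^ data.xor();   // illustrative stand-in for a real CRC
  endfunction : calc_crc
endclass : BusTran
```

Note the end labels (: display, : calc_crc, : BusTran), which make each closing keyword easy to match with its opening in deeply nested code.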
"book_systemverilog_for_verification" }, { "section": "4.4 Where to Define a Class ", "content": "define class systemverilog program module pack age outside classes used programs modules book shows classes used program block introduced chapter 5 think program block module holds test code program holds single test contains objects com prise testbench initial blocks create initialize run test many verification teams put either standalone class group closely related classes file bundle group classes systemverilog package instance might group together scsiata transactions single package compile package separately test system unrelated classes transactions score boards different protocols go separate files see systemverilog lrm information packages", "url": "RV32ISPEC.pdf#segment95", "timestamp": "2023-10-18 14:48:16", "segment": "segment95", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.5 OOP Terminology ", "content": "separates oop novice expert first thing words use already know oop concepts working verilog oop terms definitions rough equivalents ver ilog 2001 class basic building block containing routines variables analogue verilog module object instance class verilog need instantiate module use handle pointer object verilog use name instance refer signals methods outside module oop handle like address object stored pointer refer one type property variable holds data verilog signal register wire method procedural code manipulates variables contained tasks functions verilog modules tasks functions plus initial always blocks prototype header routine shows name type argument list body routine contains executable code book uses traditional terms verilog variable routine rather oop property method comfortable oop terms skim chapter verilog build complex designs creating modules instantiat ing hierarchically oop create classes instantiate creating objects create similar hierarchy brief analogy explain oop terms think class blueprint 
house. The plans describe the structure of the house, but you cannot live in a blueprint. An object is the actual house. Just as one set of blueprints can be used to build a whole subdivision of houses, a single class can be used to build many objects. The house's address is like a handle, as it uniquely identifies your house. Inside the house are things such as lights, and switches to control them; these are like the class's variables, which hold values, and its routines, which control those values. The class for a house might have many lights, so a single call such as turn_on_porch_light would set the porch light variable in a single house", "url": "RV32ISPEC.pdf#segment96", "timestamp": "2023-10-18 14:48:17", "segment": "segment96", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.6 Creating New Objects ", "content": "Verilog and OOP both have the concept of instantiation, but there are differences in the details. A Verilog module, such as a counter, is instantiated when the netlist is compiled; a SystemVerilog class, such as a network packet, is instantiated at run-time, as needed by the testbench. Verilog instances are static, like hardware; all that changes during simulation are the signal values. In a testbench, stimulus objects are constantly being created and used to drive the DUT and check the results. Later, the objects may be freed so that their memory can be used by new ones. The analogy between OOP and Verilog has some exceptions. The top-level Verilog module is usually not explicitly instantiated; however, a SystemVerilog class must be instantiated before it can be used. Next, a Verilog instance name refers to a single instance, while a SystemVerilog handle can refer to many objects, though only one at a time", "url": "RV32ISPEC.pdf#segment97", "timestamp": "2023-10-18 14:48:17", "segment": "segment97", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.6.1 No news is good news ", "content": "In Example 4.2, b is a handle that points to an object of type BusTran; to simplify this, we just say that b is a BusTran handle. Example 4.2 Declaring and using a handle: BusTran b; // Declare a handle ... b = new; // Allocate a BusTran object. When you declare the handle b, it is initialized to the special value null. Next, you call the new function to construct the BusTran object. new allocates space for a BusTran, initializes the variables to their default values (0 for 2-state variables and X for 4-state ones), and returns the address of the object, which is stored in the handle. Every class in SystemVerilog has a default new to allocate and initialize an object; see Section 4.6.2 for details on this function", "url": "RV32ISPEC.pdf#segment98", 
"timestamp": "2023-10-18 14:48:17", "segment": "segment98", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.6.2 Custom Constructor ", "content": "sometimes oop terminology make simple concept seem complex instantiation mean call new instantiate object allocating new block memory store variables object example bustran class two 32bit registers addr crc array eight values data total 11 longwords 44 bytes call new systemverilog allocates 44 bytes storage used c step similar malloc function note sys temverilog uses additional memory fourstate variables housekeeping information object type new function allocate memory also initializes values default sets variables default values 0 2state vari ables x 4state etc define new function set values prefer new function also called constructor builds object house constructed wood nails write new function note type always returns object type class code sets addr data fixed values leaves crc default value x systemverilog allocates space object automat ically use function argument default values make flexible constructor systemverilog know new function call looks type handle left side assignment example 45 call new inside driver constructor calls new function bustran even though one driver closer since bt bustran handle systemverilog right thing create object type bustran", "url": "RV32ISPEC.pdf#segment99", "timestamp": "2023-10-18 14:48:17", "segment": "segment99", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.6.3 Separating the declaration and construction ", "content": "avoid declaring handle calling construc tor new one statement legal syntax create ordering problems constructor called first procedural statement may want initialize objects certain order call new declaration control additionally forget use automatic storage constructor called start simulation", "url": "RV32ISPEC.pdf#segment100", "timestamp": "2023-10-18 14:48:17", "segment": "segment100", 
"image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.6.4 The difference between new() and new[] ", "content": "may noticed new function looks lot like new operator described section 24 used set size dynamic arrays allocate memory initialize values big difference new function called construct single object new opera tor building array multiple elements new take arguments setting object values new takes single value array size", "url": "RV32ISPEC.pdf#segment101", "timestamp": "2023-10-18 14:48:17", "segment": "segment101", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.6.5 Getting a handle on objects ", "content": "new oop users often confuse object handle two distinct declare handle construct object course simulation handle point many objects dynamic nature oop systemverilog get handle confused object example 46 b1 first points one object another would want create objects dynamically simulation may need create hundreds thousands transactions systemverilog lets create new ones automatically need verilog would use fixedsize array large enough hold maximum num ber transactions figure 41 handles objects note dynamic creation objects different anything else offered verilog language instance verilog module name bound together statically compilation even auto matic variables come go simulation name storage always tied together analogy handles people attending conference person similar object arrive badge constructed writing name badge handle used orga nizers keep track person take seat lecture space allocated may multiple badges attendee presenter organizer leave conference badge may reused writ ing new name handle point different objects assignment lastly lose badge nothing identify asked leave space take seat reclaimed use someone else", "url": "RV32ISPEC.pdf#segment102", "timestamp": "2023-10-18 14:48:17", "segment": "segment102", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { 
"section": "4.7 Object Deallocation ", "content": "know create object get rid example testbench creates send thousands transactions transactions dut know transaction completed suc cessfully gather statistics need keep around reclaim memory otherwise long simulation might run memory least run slowly garbage collection process automatically freeing objects longer referenced one way systemverilog tell object longer used keeping track number handles point last handle longer references object systemverilog releases memory it5 second line calls new construct object store address handle b next call new constructs second object stores address b overwriting previous value since handles point ing first object systemverilog deallocate systemverilog may delete object immediately wait last line explicitly clears handle second object deallocated familiar c concepts objects handles might look familiar important differences systemverilog handle point objects one type called typesafe c typical untyped pointer address memory set value modify operators preincrement sure pointer really valid systemverilog allow modi fication handle using handle one type refer object another type systemverilog oop specification closer java c secondly since systemverilog performs automatic garbage collection handles refer object sure code always uses valid handles c c pointer refer object longer exists garbage collection languages manual code suf fer memory leaks forget deallocate objects systemverilog garbage collect object refer enced handle create linked lists especially double linked lists circular lists systemverilog deallocate object need manually clear handles setting null object contains routine forks thread object deallocated thread running likewise objects used spawned thread may deallocated thread terminates see chapter 7 information threads", "url": "RV32ISPEC.pdf#segment103", "timestamp": "2023-10-18 14:48:17", "segment": "segment103", "image_urls": [], "Book": 
"book_systemverilog_for_verification" }, { "section": "4.8 Using Objects ", "content": "allocated object use going back verilog module analogy refer variables routines strict oop access variables object public methods get put accessing vari ables directly limits ability change underlying implementation future better simply different algorithm comes along future may able adopt would also need modify references variables problem methodology written large software applications lifetimes decade dozens programmers making modifications stability paramount creating testbench goal maximum control variables generate widest range stimulus values one ways accomplish constrained random stimulus generation done variable hidden behind screen methods get put methods fine compilers guis apis stick public variables directly accessed anywhere testbench", "url": "RV32ISPEC.pdf#segment104", "timestamp": "2023-10-18 14:48:17", "segment": "segment104", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.9 Static Variables vs. 
Global Variables ", "content": "every object local variables shared object two bustran objects addr crc data variables sometimes need variable shared objects certain type example might want keep running count number transactions created without oop would probably create global variable would global vari able used one small piece code visible entire testbench", "url": "RV32ISPEC.pdf#segment105", "timestamp": "2023-10-18 14:48:17", "segment": "segment105", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.9.1 Using a static variable ", "content": "systemverilog create static variable inside class vari able shared instances class scope limited class example 49 static variable count holds number objects created far initialized 0 declaration transactions beginning simulation time new object con structed tagged unique value count incremented example 49 one copy static variable count regardless many bustran objects created think count stored class object variable id static every bustran copy need make global variable count figure 42 static variables class using id field good way track objects flow design debugging testbench often need unique value systemverilog let print address object make id field whenever tempted make global variable consider making classlevel static variable class selfcontained out side references possible", "url": "RV32ISPEC.pdf#segment106", "timestamp": "2023-10-18 14:48:17", "segment": "segment106", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.9.2 Initializing static variables ", "content": "static variable usually initialized declaration easily initialize class constructor called every single new object elaborate initialization use initial block make sure static variables initialized first object con structed example 410 handle still null initialize task called legal task uses static variables created constructor", "url": "RV32ISPEC.pdf#segment107", "timestamp": "2023-10-18 
14:48:17", "segment": "segment107", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.10 Class Routines ", "content": "routine aka method class task function defined inside scope class example 411 defines display routines bustran pcitran systemverilog calls correct one based handle type routine class always uses automatic storage worry remembering automatic modifier", "url": "RV32ISPEC.pdf#segment108", "timestamp": "2023-10-18 14:48:18", "segment": "segment108", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.11 Defining Routines Outside of the Class ", "content": "good rule thumb limit piece code one page keep understandable may familiar rule routines also applies classes see everything class screen one time easily understand routine takes page whole class fit page systemverilog break routine prototype routine name arguments inside class body procedural code goes class create outofblock declarations copy first line routine name arguments add extern keyword beginning take entire routine move class body add class name two colons scope operator routine name classes could defined follows common coding mistake routine prototype match one body systemverilog requires prototype identical outofblock routine declaration except class name scope operator additionally oop compilers g vcs prohibit specifying default argument values prototype body since default argument values important code calls method implementation present class declaration another common mistake leave class name declare method result defined next higher scope probably program compiler gives error task tries access classlevel variables routines", "url": "RV32ISPEC.pdf#segment109", "timestamp": "2023-10-18 14:48:18", "segment": "segment109", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.12 Scoping Rules ", "content": "writing testbench need create refer many vari ables systemverilog follows basic rules 
verilog helpful improvements scope block code module program task function class beginend block foreach loops automatically create block index variable declared created local scope loop define new variables block new systemverilog ability declare variable unnamed beginend block shown loops declare index variable name relative current scope absolute starting root relative name systemverilog looks list scopes finds match want unambiguous use root start name example 414 uses name several scopes note real code would use meaningful names name limit used global variable program variable class variable task variable local variable initial block latter unnamed block label created tool dependent declare variables program initial block variable used inside single initial block counter declare avoid possible name conflicts blocks declare classes outside program module package approach shared testbenches declare temporary variables innermost possi ble level style also eliminates common bug happens forget declare variable inside class system verilog looks variable higher scopes variable name program block class uses instead warning example 415 function bug display declare loop variable systemverilog uses program level instead calling function changes value testi probably want", "url": "RV32ISPEC.pdf#segment110", "timestamp": "2023-10-18 14:48:18", "segment": "segment110", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.12.1 What is this? 
", "content": "use variable name systemverilog looks current scope parent scopes variable found algorithm used verilog deep inside class want unambiguously refer classlevel object style code commonly used constructors programmer uses name class variable argument example 416 keyword removes ambiguity let systemverilog know assigning local variable oname class variable oname", "url": "RV32ISPEC.pdf#segment111", "timestamp": "2023-10-18 14:48:18", "segment": "segment111", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.12.2 Referring to a variable out of scope ", "content": "inside class routines use local variables class variables variables defined program forget declare variable systemverilog looks higher scopes finds match cause subtle bugs two parts code unintentionally sharing variable perhaps forgot declare innermost scope example like use index variable careful two different threads testbench concurrently modify variable using loop may forget declare local variable class buggy shown program block declares global class uses global instead local intended might even notice unless two parts program try modify shared variable time threads covered chapter 7 solution declare variables smallest scope encloses uses variable example 417 declare index variables inside loops program scope level", "url": "RV32ISPEC.pdf#segment112", "timestamp": "2023-10-18 14:48:18", "segment": "segment112", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.13 Using One Class Inside Another ", "content": "class contain instance another class using handle object like verilog concept instantiating module inside another module build design hierarchy common reasons using containment reuse controlling complexity example every one transactions may statistics block timestamps transaction started ended information transactions outermost class bustran refer things statistics class using usual hierarchical syntax statsstart remember 
instantiate object otherwise handle stats null call start fails best done constructor outer class bustran classes become larger may become hard manage variable declarations routine prototypes grow larger page see logical grouping items class split several smaller ones also potential sign time refactor code ie split several smaller related classes see chapter 8 details class inheritance look trying class something could move one base classes ie decompose single class class hierarchy classic indication similar code appearing vari ous places class need factor code function current class one current class s parent classes", "url": "RV32ISPEC.pdf#segment113", "timestamp": "2023-10-18 14:48:18", "segment": "segment113", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.13.1 How big or small should my class be? ", "content": "may want split classes big also lower limit small class class one two members makes code harder understand adds extra layer hierarchy forces constantly jump back forth parent class children understand addition look often used small class instantiated might want merge parent class one synopsys customer put transaction variable class fine control randomization transaction separate object address crc data etc end approach made class hierar chy complex next project flattened hierarchy see section 85 ideas partitioning classes", "url": "RV32ISPEC.pdf#segment114", "timestamp": "2023-10-18 14:48:18", "segment": "segment114", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.13.2 Compilation order issue ", "content": "sometimes need compile class includes another class yet defined declaration included class handle causes error compiler recognize new type declare class name typedef statement shown", "url": "RV32ISPEC.pdf#segment115", "timestamp": "2023-10-18 14:48:18", "segment": "segment115", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.14 Understanding 
Dynamic Objects ", "content": "statically allocated language verilog every piece data usu ally variable associated example may wire called grant integer count module instance i1 oop onetoone correspondence many objects named handles testbench may allocate thousand transaction objects dur ing simulation may handles manipulate situation takes getting used written verilog code reality handle pointing every object handles may stored arrays queues another object like linked list objects stored mailbox handle internal systemverilog structure see section 76 information mailboxes", "url": "RV32ISPEC.pdf#segment116", "timestamp": "2023-10-18 14:48:18", "segment": "segment116", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.14.1 Passing objects to routines ", "content": "happens pass object routine perhaps routine needs read values object transmit routine may modify object like routine create packet either way call routine pass handle object object figure 44 generator task called transmit two handles generatorb transmitbtrans refer object call routine scalar variable nonarray nonobject use ref keyword systemverilog passes address scalar routine modify use ref systemverilog copies scalar value argument variable changes argument affect original value example 421 initial block allocates bustran object calls transmit task handle points object using handle transmit read write values object however transmit tries modify handle result seen initial block bt argument declared ref routine modify object even handle argument ref modifier frequently causes confu sion new users mix handle object shown transmit write timestamp object want object modified routine pass copy original data untouched see section 415 copying objects", "url": "RV32ISPEC.pdf#segment117", "timestamp": "2023-10-18 14:48:18", "segment": "segment117", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.14.2 Modifying a handle in a task ", "content": 
"common coding mistake forget use ref routine arguments want modify especially handles following code argument b declared ref change seen calling code", "url": "RV32ISPEC.pdf#segment118", "timestamp": "2023-10-18 14:48:18", "segment": "segment118", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.14.3 Modifying objects in flight ", "content": "common mistake forgetting create new object transaction testbench example 424 generatetrans task creates bustran object ran dom values transmits design takes several cycles symptoms mistake code creates one bustran every time loop generatorbad changes object time transmitted run dis play shows many addr values transmitted bustrans value addr bug occurs transmit stores object keeps using even transmit returns transmit task keep refer ence object recycle object need create new bustran pass loop", "url": "RV32ISPEC.pdf#segment119", "timestamp": "2023-10-18 14:48:18", "segment": "segment119", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.14.4 Arrays of handles ", "content": "write testbenches need able store reference many objects make arrays handles refers object example 426 shows storing ten bus transactions array array barray made handles objects need con struct object array using would normal handle way call new entire array handles", "url": "RV32ISPEC.pdf#segment120", "timestamp": "2023-10-18 14:48:18", "segment": "segment120", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.15 Copying Objects ", "content": "may want make copy object keep routine modify ing original generator preserve constraints either use simple builtin copy available new write complex classes section 83 make copy method", "url": "RV32ISPEC.pdf#segment121", "timestamp": "2023-10-18 14:48:18", "segment": "segment121", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.15.1 Copying an object with new ", "content": "using 
new copy object easy reliable new object con structed variables existing object copied however shallow copy similar photocopy original blindly transcribing values source destination class contains handle another class top level object copied new lower level one example 428 bustran class contains handle statistics class shown example 418 initial block creates first bustran object modifies variable contained object statistics call new make copy bustran object copied statistics one use new copy object call new function instead values variables han dles copied bustran objects point statistics object id", "url": "RV32ISPEC.pdf#segment122", "timestamp": "2023-10-18 14:48:19", "segment": "segment122", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.15.2 Writing your own simple copy function ", "content": "simple class contain references classes writing copy function easy", "url": "RV32ISPEC.pdf#segment123", "timestamp": "2023-10-18 14:48:19", "segment": "segment123", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.15.3 Writing your own deep copy function ", "content": "nontrivial classes always create copy function make deep copy calling copy functions contained objects copy function makes sure user fields id remain consistent downside making copy function need keep date add new variables forget one could spend hours debugging find missing value7 note also need write copy statistics class every class hierarchy", "url": "RV32ISPEC.pdf#segment124", "timestamp": "2023-10-18 14:48:19", "segment": "segment124", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.16 Public vs. 
Private ", "content": "core oop encapsulate data related routines class data kept private default keep one class poking around inside another class provides set accessor routines access modify data would also allow change class implementation without needing let users class know instance graphics package could change internal representation cartesian coordinates polar long user interface accessor routines functionality consider bustran class payload crc hardware detect errors conventional oop would make routine set payload also set crc would stay synchronized thus objects would always filled correct values however testbenches like programs web browser word processor testbench needs create errors want bad crc test hardware reacts errors oop languages c java allow specify visibility variables routines default everything private unless labeled otherwise systemverilog everything public unless labeled private stick default greatest con trol operation dut important longterm software stability example making crc visible allows easily inject errors dut crc private would write extra code bypass data hiding mechanisms resulting larger complex testbench", "url": "RV32ISPEC.pdf#segment125", "timestamp": "2023-10-18 14:48:19", "segment": "segment125", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.17 Straying Off Course ", "content": "new oop student may tempted skip extra thought needed group items class store data variables avoid temptation basic dut monitor samples several values interface store integers pass next stage saves minutes first eventually need group values together form complete transaction several transac tions may need grouped create higherlevel transaction dma transfer instead immediately put interface values transac tion class store related information port number receive time along data easily pass object rest testbench", "url": "RV32ISPEC.pdf#segment126", "timestamp": "2023-10-18 14:48:19", "segment": "segment126", 
"image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.18 Building a Testbench ", "content": "closer creating simple testbench classes dia gram chapter 1 obviously transactions example 48 objects block represented class also generator agent driver monitor checker score board classes modeled transactors described instantiated inside environment class simplicity test top hierarchy program instantiates environment class functional coverage definitions put inside outside environment class transactor made simple loop receives transaction object previous block makes transformations sends fol lowing one generator upstream block transactor constructs randomizes every transaction others driver receive transaction send dut signal transitions exchange transactions blocks procedural code could one object call next could use data structure fifo hold transactions flight blocks chapter 7 learn use mailboxes fifos ability stall thread data available", "url": "RV32ISPEC.pdf#segment127", "timestamp": "2023-10-18 14:48:19", "segment": "segment127", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "4.19 Conclusion ", "content": "using object oriented programming big step especially first computer language verilog payoff testbenches modular thus easier develop debug reuse patience first oop testbench may look like verilog classes added get hang new way thinking begin create manipulate classes transactions trans actors testbench manipulate chapter 8 learn oop techniques test change behavior underlying testbench without change existing code", "url": "RV32ISPEC.pdf#segment128", "timestamp": "2023-10-18 14:48:19", "segment": "segment128", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "Chapter 5 ", "content": "connecting testbench design", "url": "RV32ISPEC.pdf#segment129", "timestamp": "2023-10-18 14:48:19", "segment": "segment129", "image_urls": [], "Book": "book_systemverilog_for_verification" }, 
{ "section": "5.1 Introduction ", "content": "several steps needed verify design generate stimulus cap ture responses determine correctness measure progress first need proper testbench connected design testbench wraps around design sending stimulus captur ing design response testbench forms real world around design mimicking entire environment example processor model needs connect various busses devices modeled test bench bus functional models networking device connects multiple input output data streams modeled based standard protocols video chip connects buses send commands forms images written memory models key concept testbench simulates everything design test testbench needs higherlevel way communicate design verilog ports errorprone pages connections need robust way describe timing synchronous signals always driven sampled correct time interactions free race conditions common verilog models", "url": "RV32ISPEC.pdf#segment130", "timestamp": "2023-10-18 14:48:19", "segment": "segment130", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.2 Separating the Testbench and Design ", "content": "ideal world projects two separate groups one create design one verify real world limited budgets may require wear hats team set specialized skills cre ating synthesizable rtl code figuring new ways find bugs design two groups read original design specification make interpretations designer create code meets spec verification engineer find ways prove design match spec likewise testbench code separate block design code classic verilog goes separate module however using module hold testbench often causes timing problems around driving sampling systemverilog introduces program block separate testbench logically temporally details see section 54 designs grow complexity connections blocks increase two rtl blocks may share dozens signals must listed correct order communicate properly one mismatched misplaced connection design work subtle bug swapping 
pins toggle occasionally may notice prob lem time worse yet add new signal two blocks edit blocks add new port also higherlevel netlists wire devices one wrong connection level design stops working worse system works intermittently solution interface systemverilog construct represents bundle wires intelligence synchronization functional code interface instantiated like module also connected ports like signal", "url": "RV32ISPEC.pdf#segment131", "timestamp": "2023-10-18 14:48:19", "segment": "segment131", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.2.1 Communication between the testbench and DUT ", "content": "next sections show testbench connected arbiter using indi vidual signals using interfaces diagram top level design including testbench arbiter clock generator signals con nect trivial design concentrate systemverilog concepts get bogged design end chapter atm router shown", "url": "RV32ISPEC.pdf#segment132", "timestamp": "2023-10-18 14:48:19", "segment": "segment132", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.2.2 Communication with ports ", "content": "following code fragments show elements connecting rtl block testbench first header arbiter model uses verilog2001 style port declarations type direction header code declarations left clarity discussed section 221 systemverilog expanded classic reg type use like wire connect blocks recognition new capabilities reg type new name logic place use logic variable net multiple drivers must use net wire top netlist connects testbench dut example 53 netlists simple real designs hundreds pins require pages signal port declarations connections error prone signal moves several layers hierarchy declared connected worst want add new signal declared connected multiple files systemver ilog interfaces help cases", "url": "RV32ISPEC.pdf#segment133", "timestamp": "2023-10-18 14:48:19", "segment": "segment133", "image_urls": [], "Book": 
"book_systemverilog_for_verification" }, { "section": "5.3 The Interface Construct ", "content": "designs complex even communication may need separated separate entities model systemverilog uses interface construct think intelligent bundle wires contain connectivity synchronization optionally functionality communication two blocks con nect design blocks andor testbenches basic designlevel interfaces covered sutherland 2004 book covers interfaces connect design blocks testbenches", "url": "RV32ISPEC.pdf#segment134", "timestamp": "2023-10-18 14:48:19", "segment": "segment134", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.3.1 Using an interface to simplify connections ", "content": "first improvement bundle wires together interface block figure 53 shows testbench arbiter communicating using interface note interface extends two blocks representing drivers receivers functionally part test dut clock part interface separate port simplest interface bundle nondirectional signals use logic drive signals procedural statements instantiated top module connected follows example 56 shows testbench refer signal interface making hierarchical reference using instance name arbifrequest interface signals always driven using nonblocking assignments explained detail section 553 lastly device test arbiter uses interface instead ports see immediate benefit even small device connec tions become cleaner less prone mistakes wanted put new signal interface would add interface definition modules actually used would change mod ule top pass interface language feature greatly reduces chance wiring errors", "url": "RV32ISPEC.pdf#segment135", "timestamp": "2023-10-18 14:48:19", "segment": "segment135", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.3.2 Connecting interfaces and ports ", "content": "legacy design ports changed use interface connect interfaces signals individual ports example 58 connects original arbiter 
Example 5.1 to the interface of Example 5.4.", "url": "RV32ISPEC.pdf#segment136", "timestamp": "2023-10-18 14:48:19", "segment": "segment136", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.3.3 Grouping signals in an interface using modports ", "content": "Example 5.7 uses a point-to-point connection scheme, but there are no signal directions in the interface, information that was present in the original netlists using ports and that the compiler uses to check for wiring mistakes. The modport construct in an interface lets you group signals and specify their directions; a monitor modport also allows you to connect a monitor module. Now the arbiter model and testbench need to specify the modport in the port connection list. Note that you put the modport name, dut or test, after the interface name, arbif; the modport names are otherwise identical to the previous examples. The top-level model does not change from Example 5.5, because the modports are specified in the module headers, not where the modules are instantiated. The code has not changed much except that the interface grew larger, but the interface now more accurately represents a real design, especially the signal directions.", "url": "RV32ISPEC.pdf#segment137", "timestamp": "2023-10-18 14:48:19", "segment": "segment137", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.3.4 Using modports with a bus design ", "content": "Not every signal needs to go to every module that uses an interface. Consider a CPU-memory bus modeled with an interface. The CPU, the bus master, drives a subset of the signals such as request, command, and address; the memory slave receives those signals and drives ready; both master and slave can drive the data bus. An arbiter looks only at request and grant and ignores all other signals. The interface would have three modports, master, slave, and arbiter, plus an optional monitor modport.", "url": "RV32ISPEC.pdf#segment138", "timestamp": "2023-10-18 14:48:19", "segment": "segment138", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.3.5 Creating an interface monitor ", "content": "You can create a bus monitor using the monitor modport. The following is a trivial monitor for the arbiter; for a real bus it could decode commands and print the status of each transaction as completed, failed, etc.", "url": "RV32ISPEC.pdf#segment139", "timestamp": "2023-10-18 14:48:19", "segment": "segment139", "image_urls": [], "Book": 
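The modport and monitor ideas above can be sketched as a small interface in the style of the book's arbif examples. This is a minimal sketch: the exact names and widths (arb_if, request, grant, rst) are assumed for illustration, not copied from the book's listings.

```systemverilog
// Sketch of an arbif-style interface with directional modports.
// Names and widths are illustrative assumptions.
interface arb_if (input bit clk);
  logic [1:0] grant, request;
  logic       rst;

  modport TEST    (output request, rst, input  grant);
  modport DUT     (input  request, rst, output grant);
  modport MONITOR (input  request, grant, rst);   // everything read-only
endinterface

// Trivial monitor connected through the MONITOR modport
module monitor (arb_if.MONITOR arbif);
  always @(posedge arbif.request[0])
    $display("@%0t: request[0] asserted", $time);
endmodule
```

Because the directions live in the modports, a miswired module (say, one that tries to drive grant through the TEST modport) is rejected at compile time rather than silently creating a multiple-driver bug.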
"book_systemverilog_for_verification" }, { "section": "5.3.6 Interface trade-offs ", "content": "interface contain instances modules inter faces tradeoffs using interfaces modports compared traditional port connects signals advantages using interface follows interface ideal design reuse two blocks communicate specified protocol using two signals use interface signals repeated networking switch use virtual interface described chapter 10 interface takes jumble signals declare every module program puts central location reducing possibility misconnecting signals add new signal declare interface higherlevel modules reducing errors modports allow module easily tap subset signals interface specify signal direction additional checking disadvantages using interface follows pointtopoint connections interfaces modports almost verbose using ports lists signals declarations still one central location reducing chance making error must use interface name addition signal name possibly making modules verbose connecting two design blocks unique protocol reused interfaces may work wiring together ports difficult connect two different interfaces new interface busif may contain signals existing one arbif plus new signals address data etc since interfaces hierarchical break individual signals drive appropriately", "url": "RV32ISPEC.pdf#segment140", "timestamp": "2023-10-18 14:48:20", "segment": "segment140", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.3.7 More information and examples ", "content": "examples use systemverilog constructs systemverilog lrm specifies many ways use interfaces see sutherland 2004 examples using interfaces design", "url": "RV32ISPEC.pdf#segment141", "timestamp": "2023-10-18 14:48:20", "segment": "segment141", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.4 Stimulus Timing ", "content": "timing testbench design must carefully orches trated cycle level need drive receive synchronous 
signals at the proper time in relation to the clock: drive too late or sample too early and your testbench is off by a cycle. Even within a single time slot (for example, everything that happens at time 100ns), mixing design and testbench events can cause a race condition, such as when a signal is both read and written at the same time: did you read the old value or the one just written? In Verilog, nonblocking assignments helped; when the test module drove the DUT with them, the test could always be sure it sampled the last value driven from the design. SystemVerilog has several constructs that help you control the timing of this communication.", "url": "RV32ISPEC.pdf#segment142", "timestamp": "2023-10-18 14:48:20", "segment": "segment142", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.4.1 Controlling timing of synchronous signals with a clocking block ", "content": "An interface block can use a clocking block to specify the timing of synchronous signals relative to the clocks. Any signal in a clocking block is driven or sampled synchronously, ensuring that your testbench interacts with the signals at the right time. Clocking blocks are mainly used by testbenches, but they also allow you to create abstract synchronous models. An interface can contain multiple clocking blocks, one per clock domain, as there is a single clock expression in each block; typical clock expressions are posedge clk, or posedge clk1 and negedge clk2. You can specify a clock skew in the clocking block using a default statement. The default behavior is that input signals are sampled just before the design executes and outputs are driven back to the design in the current time slot; the next section provides details on the timing between design and testbench as defined by the clocking block. The testbench can wait for the clocking expression, as in @myinterface.cb, rather than spelling out the exact clock edge; if you change the clock edge in the clocking block, you do not have to change the testbench. In Example 5.13 the clocking block cb declares that the signals in the block are active at the positive edge of the clock. The signal directions are relative to the modport where the block is used: request is an output in the test modport and grant is an input.", "url": "RV32ISPEC.pdf#segment143", "timestamp": "2023-10-18 14:48:20", "segment": "segment143", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.4.2 Timing problems in Verilog ", "content": "Your testbench needs to be separated from the design not just logically but also temporally. Consider how a hardware tester interacts with a chip: synchronous
signals in real hardware pass between storage elements of the design, which latch their inputs at the active clock edge; the values propagate from the storage outputs through clouds of logic to the inputs of the next storage elements, and the time from the input of the first storage element to the next must be less than a clock cycle. A hardware tester therefore needs to drive the chip inputs just after a clock edge and read the outputs just before the following edge. A testbench should mimic this behavior, acting as another storage element plus logic: drive just after the active clock edge and sample as late as possible, as allowed by the protocol timing specification, before the active clock edge of the DUT. If the testbench is made of Verilog modules, this outcome is nearly impossible to achieve. If the testbench drives the DUT at the clock edge, there could be race conditions: does the clock propagate to the DUT inputs before or after the testbench stimulus? If the stimulus arrives a little later, so the inputs change outside the clock edges but at the same simulation time, do the design inputs get the value driven last cycle or the values from the current cycle? One way around the problem is to add small delays. A #0 forces the current thread of Verilog code to stop and be rescheduled after the other code; invariably, in a large design several sections all want to execute last, and whose #0 wins could vary from run to run and be unpredictable between simulators, so multiple threads using #0 delays cause indeterministic behavior. The next solution is to use a larger delay such as #1 in the RTL code, timing it one time unit after the clock edges so the clock logic has settled. But what if one module uses a time precision of 1ns and another uses a resolution of 10ps; does #1 mean 1ns, 10ps, or something else? You want to drive as soon as possible in the clock cycle, at the active clock edge, not at a time when anything else could happen; worse yet, the DUT may contain a mix of RTL and gate code, with and without delays.", "url": "RV32ISPEC.pdf#segment144", "timestamp": "2023-10-18 14:48:20", "segment": "segment144", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.4.3 Testbench \u2013 design race condition ", "content": "Example 5.14 shows a potential race condition between the testbench and the design. The race occurs when the test drives the start signal through the ports. The memory, waiting for the start signal, could wake up immediately and use the write_addr and data signals while they still have their old values. You could delay the signals slightly by using nonblocking assignments, as recommended in Cummings 2000, but remember that both the testbench and the design are using these assignments, so it is still possible to get a race condition between testbench and design when sampling
the design outputs: a similar problem arises. You want to grab the values at the last possible moment before the active clock edge, but perhaps all you know is that the next clock edge is at 100ns; if you sample right at the clock edge at 100ns, the design values may have already changed, so sample a Tsetup before the clock edge.", "url": "RV32ISPEC.pdf#segment145", "timestamp": "2023-10-18 14:48:20", "segment": "segment145", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.4.4 The program block and timing regions ", "content": "The root of the problem is the mixing of design and testbench events during a time slot, though even pure RTL can have this problem. What if there were a way to keep the events temporally separated in the code? At 100ns the testbench could sample the design outputs before the clock has a chance to change them and before any design activity has occurred; by definition those values would be the last possible ones from the previous time slot. Once the design events were done, the testbench would start. SystemVerilog needs to know how to schedule the testbench events separately from the design events, so SystemVerilog testbench code goes in a program block. This is similar to a module in that it can contain code and variables and be instantiated in modules; however, a program cannot have a hierarchy of instances of modules, interfaces, or other programs. To support this, a new division of the time slot was introduced in SystemVerilog. Verilog events execute in the Active region; the other regions, for nonblocking assignments, PLI execution, etc., are ignored for the purposes of this book (see the IEEE 1800-2005 Language Reference Manual for details). SystemVerilog introduces several new regions. The first to execute in a time slot is the Prepone region, which samples signals before any design activity; these samples are used by the testbench. Next is the Active region, where design events run, including RTL and gate code plus the clock generator. The third region is the Observed region, where assertions are evaluated, followed by the Reactive region, where the testbench executes. Note that time does not strictly flow forwards: events in the Observed and Reactive regions can trigger further design events in the Active region of the current cycle. A Verilog simulation continues while there are scheduled events yet to be executed; SystemVerilog adds an implicit $exit when a program block terminates because its initial blocks have completed. $exit terminates only the current program block, while $finish terminates the entire simulation. When all program blocks end, the simulation ends, even if there are events pending on the design side or threads spawned by the testbench still running. Example
5.15 shows part of the testbench code for the arbiter. Note that the statement @arbif.cb waits for the active edge of the clocking block, posedge clk, as shown in Example 5.13. Section 5.5 explains driving and sampling interface signals. As discussed in Section 3.7.1, always declare your program block as automatic so that it behaves like the routines in stack-based languages such as C that you may have worked with.", "url": "RV32ISPEC.pdf#segment146", "timestamp": "2023-10-18 14:48:20", "segment": "segment146", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.4.5 Specifying delays between the design and testbench ", "content": "The default timing for a clocking block is to sample inputs with a delay of 1step and to drive outputs with a delay of 0. The 1step delay specifies that signals are sampled in the Prepone region, before any design activity, so you get the output values from just before the clock changes. Testbench outputs, synchronous by virtue of the clocking block, flow directly into the design, and the program block, running in the Reactive region, retriggers the Active region of the same time slot, so the design reacts in the background. An easy way to remember this is by imagining that the clocking block inserts a synchronizer between the design and the testbench.", "url": "RV32ISPEC.pdf#segment147", "timestamp": "2023-10-18 14:48:20", "segment": "segment147", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.5 Interface Driving and Sampling ", "content": "A testbench needs to drive and sample signals in the design, primarily through interfaces with clocking blocks. The following sections use the arbiter interface from Example 5.13. Asynchronous signals such as reset pass through the interface without any delays, while signals in the clocking block are synchronized, as shown.", "url": "RV32ISPEC.pdf#segment148", "timestamp": "2023-10-18 14:48:20", "segment": "segment148", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.5.1 Interface synchronization ", "content": "You can use the Verilog @ and wait constructs to synchronize to signals in the testbench. The following code does not do anything useful except show the various constructs.", "url": "RV32ISPEC.pdf#segment149", "timestamp": "2023-10-18 14:48:20", "segment": "segment149", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.5.2 Interface signal sample ", "content": "When you read a
signal through a clocking block, you get the value sampled at the last clock edge, i.e., in the Prepone region. The following code shows a program block that reads the synchronous grant signal from the DUT. The arb module drives grant to 1 in the middle of cycle 2, before the clock edge; the waveforms show that in the program, arbif.cb.grant gets that value at the clock edge. If an interface input changes right at the clock edge, at time 30ns, the value does not propagate to the testbench until another cycle later, at time 40ns.", "url": "RV32ISPEC.pdf#segment150", "timestamp": "2023-10-18 14:48:20", "segment": "segment150", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.5.3 Interface signal drive ", "content": "A synchronous signal such as request must be prefixed with the interface name, arbif, and the clocking block name, cb, as is the case for any signal used in a clocking block: arbif.cb.grant is legal, but arbif.grant is not. This common coding mistake is caught by the SystemVerilog compiler. Always drive interface signals with a nonblocking assignment, the <= operator. What if you want the design signal to change immediately upon assignment, given the region in which the testbench executes? For such cases, could you use a blocking assignment or a force statement? The LRM is not clear on whether a program block can force an interface signal or a signal in the design; additionally, if an interface signal is passed as a ref argument to a routine, can the routine use a blocking or nonblocking assignment? SystemVerilog is an evolving language. The testbench runs in the Reactive region and design code in the Active region. If the testbench drives arbif.cb.request at 100ns, at the time of @arbif.cb (posedge clk), then according to the clocking block, request changes in the design at 100ns. If the testbench instead drives arbif.cb.request at time 101ns, between clock edges, the change does not propagate until the next clock edge; this way, drives are always synchronous. In Example 5.17, arbif.grant is driven in a module, so it can use a blocking assignment. When the testbench drives a synchronous interface signal at the active edge of the clock, the value propagates immediately into the design because of the default output delay of 0 in the clocking block; if the testbench drives the output after the active edge, the value is not seen by the design until the next active edge of the clock. Example 5.20 shows what happens when you drive a synchronous interface signal at various points in the clock cycle. If you want to wait two clock cycles before driving a signal, you can either use repeat (2) @bus.cb; or the cycle delay ##2. The ## delay only works as a prefix to a drive of a signal in a clocking block, since it needs to know which clock to use for the delay.", "url": 
"RV32ISPEC.pdf#segment151", "timestamp": "2023-10-18 14:48:20", "segment": "segment151", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.5.4 Bidirectional signals in the interface ", "content": "verilog want drive bidirectional signal port procedural code need continuous assignment connect reg wire systemverilog synchronous bidirectional signals interfaces easier use continuous assignment added write net program systemverilog actually writes temporary vari able drives net program reads directly wire seeing value resolved drivers design code module still uses classic register plus continuous assignment statement systemverilog lrm clear driving asynchronous bidirec tional signal using interface two possible solutions use cross module reference continuous assignment use virtual interface shown chapter 10", "url": "RV32ISPEC.pdf#segment152", "timestamp": "2023-10-18 14:48:21", "segment": "segment152", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.5.5 Why are always blocks not allowed in a program? 
", "content": "systemverilog put tasks functions classes initial blocks program always blocks may seem odd used verilog modules several reasons systemverilog pro grams closer program c one entry points verilog many small blocks concurrently executing hardware design always block might trigger every positive edge clock start simulation testbench hand goes initialization drive respond design activity completes last initial block completes simulation implicitly ends exe cuted finish always block would never stop would explicitly call exit signal program block completed despair really need always block use initial forever accomplish thing", "url": "RV32ISPEC.pdf#segment153", "timestamp": "2023-10-18 14:48:21", "segment": "segment153", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.5.6 The clock generator ", "content": "seen program block may wonder clock generator module clock closely tied design testbench clock generator remain module refine design create clock trees carefully control skews clocks enter system propagate blocks testbench much less picky wants clock edge know drive sample signals functional verification concerned provid ing right values right cycle fractional nanosecond delays relative clock skews program block place put clock generator example 523 tries put generator program block causes race condition clk outsig signals propagate reactive region design active region could cause race condition depending one arrived first avoid race conditions always putting clock generator module want randomize generator properties create class random variables skew frequency characteristics use class generator module testbench lastly try verify lowlevel timing functional verification testbenches described book check behavior dut timing better done static timing analysis tool", "url": "RV32ISPEC.pdf#segment154", "timestamp": "2023-10-18 14:48:21", "segment": "segment154", "image_urls": [], "Book": 
"book_systemverilog_for_verification" }, { "section": "5.6 Connecting It All Together ", "content": "design described module testbench program block interfaces connect together toplevel module instantiates connects pieces almost identical example 55 uses shortcut notation implicit port connection automatically connects module instance ports signals current level name data type", "url": "RV32ISPEC.pdf#segment155", "timestamp": "2023-10-18 14:48:21", "segment": "segment155", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.7 Top-Level Scope ", "content": "sometimes need create things simulation outside program module seen parts simulation verilog macros extend across module boundaries often used creating global constants systemverilog introduces compilation unit group source files compiled together scope outside boundaries module macromodule interface program package primitive known compilationunit scope also referred unit anything parameter defined scope similar global seen lowerlevel blocks truly global parameter seen compilation files leads confusion tools synopsys vcs com pile systemverilog code together unit global synopsys design compiler compiles single module group modules time unit may contents one files tools ven dors may compile files subset result unit portable book calls scope outside blocks toplevel scope define variables parameters data types even routines space example 526 declares toplevel parameter timeout used any hierarchy example also const string holds error message declare toplevel constants either way instance name root allows unambiguously refer names system starting toplevel scope respect root similar unix file system tools vcs compile files root unit equivalent name root also solves old verilog problem code refers name another module i1var compiler first looks local scope looks next higher scope reaches top may wanted use i1var top module instance named i1 intermediate scope may sidetracked search giving 
you the wrong variable. Use $root to make unambiguous cross-module references by specifying an absolute path. Example 5.27 shows a program instantiated in a module that is explicitly instantiated in the top-level scope; the program can use a relative or an absolute reference to the clk signal in the module. Note that if the module were implicitly instantiated, that is, if you took out the line with top t1, the absolute reference in the program would change to $root.top.clk. Use explicit instantiation of the top module if you plan on making cross-module references.", "url": "RV32ISPEC.pdf#segment156", "timestamp": "2023-10-18 14:48:21", "segment": "segment156", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.8 Program \u2013 Module Interactions ", "content": "A program block can read and write signals in modules and can call routines in modules, but a module has no visibility into a program. This is as it should be: the testbench needs to see and control the design, but the design should not depend on anything in the testbench. A program can call a routine in a module to perform various actions; the routine can set values on internal signals, also known as a backdoor load. Note that the current SystemVerilog standard does not define how to force signals from a program block; you need to write a task in the design to do the force and then call it from the program. Lastly, it is good practice for the testbench to use a function to get information from the DUT rather than reading signal values directly; that may work at first, but if the design code changes, the testbench may interpret the values incorrectly. A function in the module can encapsulate the communication between the two and make it easier for the testbench to stay synchronized with the design.", "url": "RV32ISPEC.pdf#segment157", "timestamp": "2023-10-18 14:48:21", "segment": "segment157", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.9 SystemVerilog Assertions ", "content": "You can create temporal assertions about signals in the design using SystemVerilog Assertions (SVA). Assertions are instantiated similarly to other design blocks and are active for the entire simulation. The simulator keeps track of which assertions have triggered, so you can gather functional coverage data on them.", "url": "RV32ISPEC.pdf#segment158", "timestamp": "2023-10-18 14:48:21", "segment": "segment158", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.9.1 Procedural assertions ", "content": "Your testbench's procedural code can check the values of design
signals and testbench variables and take action when there is a problem. For example, if you assert the bus request, you expect grant to be asserted two cycles later. You could use an if-statement, but an assertion is more compact than the if-statement. Note, however, that the logic is reversed compared with the if-statement: you now want the expression inside the parentheses to be true; otherwise an error is printed. If the grant signal is asserted correctly, the test continues; if the signal does not have the expected value, the simulator produces a message similar to the following, which says that on line 7 of the file test.sv, the assertion top.t1.a1 started at 55ns, and its check of the signal bus.cb.grant failed immediately. You may be tempted to use the full SystemVerilog assertion syntax to check elaborate sequences over a range of time, but use care: assertions are declarative code and execute very differently from the surrounding procedural code. A few lines of assertions can verify behavior for which the equivalent procedural code would be far more complicated and verbose.", "url": "RV32ISPEC.pdf#segment159", "timestamp": "2023-10-18 14:48:21", "segment": "segment159", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.9.2 Customizing the assertion actions ", "content": "A procedural assertion has optional then- and else-clauses for when you want to augment the default message or add your own. SystemVerilog provides four functions to print messages: $info, $warning, $error, and $fatal. They are only allowed inside assertions, not in procedural code, though future versions of SystemVerilog may allow that. You can use the then-clause to record when an assertion completes successfully.", "url": "RV32ISPEC.pdf#segment160", "timestamp": "2023-10-18 14:48:21", "segment": "segment160", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.9.3 Concurrent assertions ", "content": "Another type of assertion is the concurrent assertion. Think of it as a small model that runs continuously, checking the values of signals for the entire simulation; you need to specify a sampling clock for the assertion. The following small assertion checks that the arbiter signals never have X or Z values, except during reset.", "url": "RV32ISPEC.pdf#segment161", "timestamp": "2023-10-18 14:48:21", "segment": "segment161", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.9.4 Exploring assertions ", "content": "There are many uses for assertions. For example, you can put assertions in an
interface, so that the interface not only transmits signal values but also checks the protocol. This section provides only a brief introduction to assertions; for more information on SystemVerilog Assertions, see Vijayaraghavan 2005.", "url": "RV32ISPEC.pdf#segment162", "timestamp": "2023-10-18 14:48:21", "segment": "segment162", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.10 The Four-Port ATM Router ", "content": "The arbiter example is a good introduction to interfaces, but real designs have more than a single input and output. This section shows a four-port ATM router.", "url": "RV32ISPEC.pdf#segment163", "timestamp": "2023-10-18 14:48:21", "segment": "segment163", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.10.1 ATM router with ports ", "content": "The following code fragments show the tangle of wires you would have to endure to connect an RTL block to a testbench. First is the header of the ATM router model, which uses Verilog-1995-style port declarations, with the type and direction separate from the header; the actual code of the router is crowded out by nearly a page of port declarations.", "url": "RV32ISPEC.pdf#segment164", "timestamp": "2023-10-18 14:48:21", "segment": "segment164", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.10.2 ATM top-level netlist with ports ", "content": "Shown next is the top-level netlist. You have just seen three pages of code just for the connectivity between the testbench and the design. Interfaces provide a better way to organize all this information and to eliminate the repetitive parts, which are error prone.", "url": "RV32ISPEC.pdf#segment165", "timestamp": "2023-10-18 14:48:21", "segment": "segment165", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.10.3 Using interfaces to simplify connections ", "content": "The diagram shows the ATM router connected to the testbench, with the signals grouped into interfaces.", "url": "RV32ISPEC.pdf#segment166", "timestamp": "2023-10-18 14:48:21", "segment": "segment166", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.10.4 ATM interfaces ", "content": "Here are the Rx and Tx interfaces with modports and clocking blocks.", "url": "RV32ISPEC.pdf#segment167", "timestamp": 
"2023-10-18 14:48:21", "segment": "segment167", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.10.5 ATM router model using an interface ", "content": "atm router model testbench need specify modport port connection list note put modport name interface name rxif", "url": "RV32ISPEC.pdf#segment168", "timestamp": "2023-10-18 14:48:21", "segment": "segment168", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.10.6 ATM top level netlist with interfaces ", "content": "top netlist shrunk considerably along chances making mistake", "url": "RV32ISPEC.pdf#segment169", "timestamp": "2023-10-18 14:48:21", "segment": "segment169", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.10.7 ATM testbench with interface ", "content": "example 542 shows part testbench captures cells coming tx port router note interface names hardcoded duplicate code four times 4x4 atm switch chapter 10 shows simplify code using virtual interfaces", "url": "RV32ISPEC.pdf#segment170", "timestamp": "2023-10-18 14:48:21", "segment": "segment170", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "5.11 Conclusion ", "content": "chapter learned use systemverilog interfaces organize communication design blocks testbench design construct replace dozens signal connections sin gle interface making code easier maintain improve reducing number wiring mistakes systemverilog also introduces program block hold testbench reduce race conditions device test testbench clocking block interface testbenches drive sample design signals correctly relative clock", "url": "RV32ISPEC.pdf#segment171", "timestamp": "2023-10-18 14:48:21", "segment": "segment171", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "Chapter 6 ", "content": "randomization", "url": "RV32ISPEC.pdf#segment172", "timestamp": "2023-10-18 14:48:21", "segment": "segment172", "image_urls": [], 
"Book": "book_systemverilog_for_verification" }, { "section": "6.1 Introduction ", "content": "designs grow larger becomes difficult create complete set stimuli needed check functionality write directed testcase check certain set features write enough directed testcases number features keeps doubling project worse yet interactions features source devious bugs least likely caught going laundry list features solution create testcases automatically using constrainedrandom tests crt directed test finds bugs think crt finds bugs never thought using random stimulus restrict test scenarios valid interest using constraints creating crt environment takes work creating one directed tests simple directed test applies stimulus man ually check result results captured golden log file compared future simulations see whether test passes fails crt environment needs create stimulus also predict result using reference model transfer function techniques how ever environment place run hundreds tests without handcheck results thereby improving productivity tradeoff testauthoring time work cpu time machine work makes crt valuable crt made two parts test code uses stream random values create input dut seed pseudorandom number generator prng shown section 6151 make crt behave dif ferently using new seed feature allows leverage test functional equivalent many directed tests changing seeds able create equivalent tests using techniques directed testing may feel random tests like throwing darts know covered aspects design stimulus space large generate every possible input using forloops need generate useful subset chapter 9 learn measure verifica tion progress using functional coverage many ways use randomization chapter gives wide range examples highlights useful techniques choose works best", "url": "RV32ISPEC.pdf#segment173", "timestamp": "2023-10-18 14:48:21", "segment": "segment173", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.2 What to Randomize ", 
"content": "think randomizing stimulus design first thing may think data fields easiest create call random problem approach low payback terms bugs found find datapath bugs perhaps bitlevel mistakes test still inherently directed challenging bugs control logic result need randomize decision points dut everywhere control paths diverge randomization increases probability take different path test case need think broadly design input following device configuration environment configuration primary input data encapsulated input data protocol exceptions delays transaction status errors violations", "url": "RV32ISPEC.pdf#segment174", "timestamp": "2023-10-18 14:48:22", "segment": "segment174", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.2.1 Device configuration ", "content": "common reason bugs missed testing rtl design enough different configurations tried tests use design comes reset apply fixed set initial ization vectors put known state like testing pc operating system right installed without applica tions course performance fine crashes time real world environment dut configuration becomes random example synopsys customer verify timedivision multiplexor switch 600 input channels 12 output channels device installed endcustomer system chan nels would allocated deallocated point time would little correlation adjacent channels words configuration would seem random test device verification engineer write several dozen lines tcl code configure channel result never able try configurations handful channels enabled using crt methodology wrote testbench randomized parameters sin gle channel put loop configure whole device confidence tests would uncover bugs previously would missed", "url": "RV32ISPEC.pdf#segment175", "timestamp": "2023-10-18 14:48:22", "segment": "segment175", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.2.2 Environment configuration ", "content": "device designing operates environment 
containing other devices. When you are verifying the DUT, it is connected to a testbench that mimics this environment. You should randomize the entire environment, including the number of objects and how they are configured. Another company was creating an I/O switch chip that connected multiple PCI buses to an internal memory bus. At the start of simulation, the customer used randomization to choose the number of PCI buses (1 to 4), the number of devices on each bus (1 to 8), and the parameters for each device (master or slave, CSR addresses, etc.). Even though there were many possible combinations, the company knew which had been covered.", "url": "RV32ISPEC.pdf#segment176", "timestamp": "2023-10-18 14:48:22", "segment": "segment176", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.2.3 Primary input data ", "content": "This is probably what you thought of first when you read about random stimulus: take a transaction such as a bus write or an ATM cell and fill it with random values. How hard can that be? Actually it is fairly straightforward, as long as you carefully prepare your transaction classes and anticipate layered protocols and error injection.", "url": "RV32ISPEC.pdf#segment177", "timestamp": "2023-10-18 14:48:22", "segment": "segment177", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.2.4 Encapsulated input data ", "content": "Many devices process multiple layers of stimulus. For example, a device may create TCP traffic that is then encoded in the IP protocol and finally sent out inside Ethernet packets. Each level has its own control fields that can be randomized to try new combinations, so you are randomizing the data and the layers that surround it. You need to write constraints that create valid control fields but also allow injecting errors.", "url": "RV32ISPEC.pdf#segment178", "timestamp": "2023-10-18 14:48:22", "segment": "segment178", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.2.5 Protocol exceptions, errors, and violations ", "content": "If anything can go wrong, it will, eventually. The most challenging part of design and verification is how to handle errors in the system. You need to anticipate all the cases where things can go wrong, inject them into the system, and make sure the design handles them gracefully, without locking up or going into an illegal state. A good verification engineer tests the behavior of the design at the edge of the functional specification and sometimes even beyond it. When two devices communicate, what happens if the
transfer stops part way testbench simulate breaks error detection correction fields must make sure combinations tried random component errors testbench able send functionally correct stimuli flip configura tion bit start injecting random types errors random intervals", "url": "RV32ISPEC.pdf#segment179", "timestamp": "2023-10-18 14:48:22", "segment": "segment179", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.2.6 Delays ", "content": "many communication protocols specify ranges delays bus grant comes one three cycles request data memory valid fourth tenth bus cycle however many directed tests optimized fastest simulation use shortest latency except one test tries various delays testbench always use random legal delays every test try find hopefully one combination exposes design bug cycle level designs sensitive clock jitter sliding clock edges back forth small amounts make sure design overly sensitive small changes clock cycle clock generator module outside testbench creates events active region along design events however generator parameters frequency offset set testbench configuration phase note looking functional errors timing errors testbench try violate setup hold requirements bet ter validated using timing analysis tools", "url": "RV32ISPEC.pdf#segment180", "timestamp": "2023-10-18 14:48:22", "segment": "segment180", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.3 Randomization in SystemVerilog ", "content": "random stimulus generation systemverilog useful used oop first create class hold group related random variables randomsolver fill random values create constraints limit random values legal values test spe cific features note randomize individual variables case least interesting true constrainedrandom stimuli created transaction level one value time", "url": "RV32ISPEC.pdf#segment181", "timestamp": "2023-10-18 14:48:22", "segment": "segment181", "image_urls": [], "Book": 
"book_systemverilog_for_verification" }, { "section": "6.3.1 Simple class with random variables ", "content": "class four random variables first three use rand modi fier every time randomize class variables assigned value think rolling dice roll could new value repeat cur rent one kind variable randc means random cyclic random solver repeat random value every possible value assigned think dealing cards deck deal every card deck random order shuffle deck deal cards different order note constraint expression grouped using curly braces code declarative procedural uses begin end randomize function returns 0 problem found con straints procedural assertion used check result shown section 59 need find toolspecific switches cause assertion terminate simulation book uses assert test result randomize may want test result call special routine prints useful information gracefully shuts simulation constraint example 61 expression limits values src variable case systemverilog chooses values 11 12 13 14 variables classes random public gives test maximum control dut stimulus control always turn random variable show section 6102 forget make variable ran dom must edit environment want avoid", "url": "RV32ISPEC.pdf#segment182", "timestamp": "2023-10-18 14:48:22", "segment": "segment182", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.3.2 Checking the result from randomize ", "content": "randomize assigns random values variable class labeled rand randc also makes sure active constraints obeyed randomization fail code conflicting constraints see next section always check status check vari ables may get unexpected values causing simulation fail example 61 checks status randomize using procedural assertion randomization succeeds function returns 1 fails randomize returns 0 assertion checks result prints error failure set simulator switches terminate error found alternatively might want call special routine end simulation housekeeping chores like 
printing sum mary report", "url": "RV32ISPEC.pdf#segment183", "timestamp": "2023-10-18 14:48:22", "segment": "segment183", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.3.3 The constraint solver ", "content": "process solving constraint expressions handled system verilog constraint solver solver chooses values satisfy constraints values come systemverilog prng started initial seed give systemverilog simulator seed testbench always produces results solver specific simulation vendor constrainedrandom test may give results run different simulators even different versions tool systemverilog standard specifies meaning expressions legal values created detail precise order solver operate see section 615 details random number generators image devicegray width 120 height 120 bpc 8", "url": "RV32ISPEC.pdf#segment184", "timestamp": "2023-10-18 14:48:22", "segment": "segment184", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.4 Constraint Details ", "content": "useful stimulus random values relationships variables otherwise may take long generate interesting stimulus values stimulus might contain illegal values define interactions systemverilog using constraint blocks contain one constraint expressions systemverilog solves expressions concur rently choosing random values satisfy expressions least one variable expression random either rand randc following class fails ran domized solution add modifier rand randc variable son randomize function tries assign new values random variables make sure constraints satisfied example 62 since random variables randomize checks value son see bounds specified constraint cteenager unless variable hap pens fall range 1319 randomize fails", "url": "RV32ISPEC.pdf#segment185", "timestamp": "2023-10-18 14:48:22", "segment": "segment185", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.4.1 Constraint introduction ", "content": "following 
sections use example random class con straints specific constructs explained later section", "url": "RV32ISPEC.pdf#segment186", "timestamp": "2023-10-18 14:48:22", "segment": "segment186", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.4.2 Simple expressions ", "content": "example 63 shows class constraint block several expres sions first two control values len variable shown variable used multiple expressions maximum one relational operator expression want put multi ple variables fixed order b c use multiple expressions", "url": "RV32ISPEC.pdf#segment187", "timestamp": "2023-10-18 14:48:22", "segment": "segment187", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.4.3 Equivalence expressions ", "content": "make assignments constraint block contains expressions instead use equivalence operator set random variable value eg len42 build complex relationships one random variables eg len headeraddrmode 4 payloadsize", "url": "RV32ISPEC.pdf#segment188", "timestamp": "2023-10-18 14:48:22", "segment": "segment188", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.4.4 Set membership and the inside operator ", "content": "create sets values inside operator systemverilog gathers values chooses values equal probability unless constraints variable always use variables sets example 63 first two expressions could replaced len inside 1999 example 65 systemverilog uses values lo hi determine range possible values use parameterize constraints testbench alter behavior stimulus generator without rewriting constraints set contain multiple values ranges shown example 63 note lo hi empty set formed constraint fails want value long inside set invert constraint values set chosen equally even appear multiple times need weight values others use dist opera tor shown section 646", "url": "RV32ISPEC.pdf#segment189", "timestamp": "2023-10-18 14:48:22", "segment": "segment189", "image_urls": 
[], "Book": "book_systemverilog_for_verification" }, { "section": "6.4.5 Using an array in a set ", "content": "choose set values storing array example 68 chooses day week list enumerated values change list choices fly make choice randc variable simulator tries every possible value repeating name function returns string name enumerated value want dynamically add remove values set think twice using inside operator performance example perhaps set values want chosen could use inside choose values queue delete slowly shrink queue requires solver solve n constraints n number elements left queue instead use randc variable points array choices choosing randc value takes short con stant time solving large number constraints expensive especially dozen values", "url": "RV32ISPEC.pdf#segment190", "timestamp": "2023-10-18 14:48:22", "segment": "segment190", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.4.6 Weighted Distributions ", "content": "dist operator allows create weighted distributions values chosen often others dist operator takes list values weights separated operator values weights constants variables values single value range lo hi weights percentages add 100 operator specifies weight every specified value range operator specifies weight equally divided values example 610 src gets value 0 1 2 3 weight 0 40 1 2 3 weights 60 total 220 probability choosing 0 40220 probability choosing 1 2 3 60220 next dst gets value 0 1 2 3 weight 0 40 1 2 3 share total weight 60 total 100 probability choosing 0 40100 probability choosing 1 2 3 20100 values weights constants variables use variable weights change distributions fly even eliminate choices setting weight zero example 611 len enumerated variable three values con straint defaults choosing longword lengths wlwrd largest value", "url": "RV32ISPEC.pdf#segment191", "timestamp": "2023-10-18 14:48:23", "segment": "segment191", "image_urls": [], "Book": "book_systemverilog_for_verification" }, 
{ "section": "6.4.7 Bidirectional Constraints ", "content": "may realized constraint blocks procedural code executing top bottom declarative code active time constrain variable inside operator set 1050 another expression constrains variable greater 20 systemverilog chooses values 21 50 systemverilog constraints bidirectional means solver looks constraints side expression consider following constraint systemverilog solver looks four constraints simultaneously b less less 30 b constrained equal c greater 25 even though direct constraint lower value constraint c restricts choices", "url": "RV32ISPEC.pdf#segment192", "timestamp": "2023-10-18 14:48:23", "segment": "segment192", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.4.8 Conditional constraints ", "content": "normally constraint expressions active block want expression active time example bus table 61 solutions bidirectional constraints supports byte word longword reads longword writes system verilog supports two implication operators ifelse choosing list expressions enumerated type implication operator lets create caselike block parentheses around expression required make code eas ier read constraint blocks use curly braces group multiple expres sions begin end keywords procedural code", "url": "RV32ISPEC.pdf#segment193", "timestamp": "2023-10-18 14:48:23", "segment": "segment193", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.4.9 Choose the right arithmetic operator to boost efficiency ", "content": "simple arithmetic operators addition subtraction bit extracts shifts handled efficiently solver constraint however multiplication division modulo expensive 32bit values remember constant without explicit size 42 treated 32bit value want generate random addresses near page boundary page 4096 bytes could write following code solver may take long time find suitable values addr many constants hardware powers 2 take advantage using bit extraction 
rather division modulo likewise multiplication power two replaced shift", "url": "RV32ISPEC.pdf#segment194", "timestamp": "2023-10-18 14:48:23", "segment": "segment194", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.5 Solution Probabilities ", "content": "whenever deal random values need understand prob ability outcome systemverilog guarantee exact solution found random constraint solver influence distribution time work random numbers look thousands millions values average noise changing tool version ran dom seed cause different results simulators synopsys vcs multiple solvers allow trade memory usage vs performance", "url": "RV32ISPEC.pdf#segment195", "timestamp": "2023-10-18 14:48:23", "segment": "segment195", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.5.1 Unconstrained ", "content": "start two variables constraints eight possible solutions constraints probability run thousands randomizations see actual results approach listed probabilities9", "url": "RV32ISPEC.pdf#segment196", "timestamp": "2023-10-18 14:48:23", "segment": "segment196", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.5.2 Implication ", "content": "example 618 value depends value x indi cated implication operator following constraint example rest section also behave way implica tion operator possible solutions probability see ran dom solver recognizes eight combinations x solutions x0 ad merged together", "url": "RV32ISPEC.pdf#segment197", "timestamp": "2023-10-18 14:48:23", "segment": "segment197", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.5.3 Implication and bidirectional constraints ", "content": "note implication operator says x0 forced 0 y0 constraint x however implication bidirec tional forced nonzero value x would 1 example 619 constraint 0 x never 0", "url": "RV32ISPEC.pdf#segment198", "timestamp": "2023-10-18 14:48:23", "segment": 
"segment198", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.5.4 Guiding distribution with solve...before ", "content": "guide systemverilog solver using solve constraint seen example 620 solve constraint change solution space probability results solver chooses values x 0 1 equal probability 1000 calls randomize x 0 500 times 1 500 times x 0 must 0 x 1 0 1 2 3 equal probability use solve dissatisfied often values occur excessive use slow con straint solver make constraints difficult others understand", "url": "RV32ISPEC.pdf#segment199", "timestamp": "2023-10-18 14:48:23", "segment": "segment199", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.6 Controlling Multiple Constraint Blocks ", "content": "class contain multiple constraint blocks class might naturally divide two sets variables data vs control may want constrain separately might want separate constraint test perhaps one constraint would restrict data length create small transactions great testing congestion another would make long transactions runtime use constraintmode routine turn con straints used handleconstraint method controls single constraint used handle controls con straints object", "url": "RV32ISPEC.pdf#segment200", "timestamp": "2023-10-18 14:48:23", "segment": "segment200", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.7 Valid Constraints ", "content": "good randomization technique create several constraints ensure correctness random stimulus known valid constraints example bus readmodifywrite command might allowed longword data length know bus transaction obeys rule later want violate rule use constraintmode turn one constraint naming convention make constraints stand using prefix valid shown", "url": "RV32ISPEC.pdf#segment201", "timestamp": "2023-10-18 14:48:23", "segment": "segment201", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.8 In-line 
Constraints ", "content": "write tests end many constraints interact unexpected ways extra code enable disable adds test complexity additionally constantly adding editing constraints class could cause problems team environment many tests randomize objects one place code systemver ilog allows add extra constraint using randomize equivalent adding extra constraint existing ones effect exam ple 623 shows base class constraints two randomize statements extra constraints added existing ones effect use constraintmode need disable conflicting constraint note inside statement systemverilog uses scope class example 623 used addr taddr common mistake surround inline constraints parenthesis instead curly braces remember constraint blocks use curly braces inline con straint must use braces declarative code", "url": "RV32ISPEC.pdf#segment202", "timestamp": "2023-10-18 14:48:23", "segment": "segment202", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.9 The pre_randomize and post_randomize Functions ", "content": "sometimes need perform action immediately every randomize call immediately afterwards example may want set nonrandom class variables limits randomization starts may need calculate error correction bits random data systemverilog lets two special void functions prerandomize postrandomize section 33 showed void function return value task con sume time want call debug routine prerandomize postrandomize must function", "url": "RV32ISPEC.pdf#segment203", "timestamp": "2023-10-18 14:48:23", "segment": "segment203", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.9.1 Building a bathtub distribution ", "content": "applications want nonlinear random distribution instance small large packets likely find design bug buffer overflow mediumsized packets want bathtub shaped distribution high ends low middle could build elaborate dist constraint might require lots tweaking get shape want verilog several functions 
nonlinear distribution distexponential none bathtub however build one using exponential function twice every time object randomized variable value gets updated across many randomizations see desired nonlinear distribution use verilog1995 distribution functions way plus sev eral new systemverilog useful functions include following distexponential exponential decay shown figure 61 distnormal bellshaped distribution distpoisson bellshaped distribution distuniform flat distribution random flat distribution returning signed 32bit random urandom flat distribution returning unsigned 32bit random urandomrange flat distribution range consult statistics book details functions", "url": "RV32ISPEC.pdf#segment204", "timestamp": "2023-10-18 14:48:23", "segment": "segment204", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.9.2 Note on void functions ", "content": "functions prerandomize postrandomize call functions tasks could consume time delay middle call randomize debugging randomization problem call display routines planned ahead made void functions", "url": "RV32ISPEC.pdf#segment205", "timestamp": "2023-10-18 14:48:23", "segment": "segment205", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.10 Constraints Tips and Techniques ", "content": "create constrainedrandom tests easily modified several tricks use", "url": "RV32ISPEC.pdf#segment206", "timestamp": "2023-10-18 14:48:23", "segment": "segment206", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.10.1 Constraints with variables ", "content": "constraint examples book use constants make readable example 625 size randomized range uses vari able upper bound default class creates random sizes 1 100 changing variable maxsize vary upper limit use variables dist constraint turn values ranges example 626 bus command different weight variable default constraint produces command equal probability want greater number read8 
commands increase read8wt weight variable importantly turn generation commands dropping weight 0", "url": "RV32ISPEC.pdf#segment207", "timestamp": "2023-10-18 14:48:23", "segment": "segment207", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.10.2 Using nonrandom values ", "content": "set constraints produces stimulus almost want quite could call randomize set variable value want use random one however stimulus values may correct according constraints created check validity variables want override use randmode make nonrandom example 627 packet length stored one variable payload dynamic array first half test randomizes length variable contents payload array second half calls randmode make length nonrandom variable sets 42 calls randomize constraint sets payload size constant 42 array still filled random values", "url": "RV32ISPEC.pdf#segment208", "timestamp": "2023-10-18 14:48:23", "segment": "segment208", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.10.3 Checking values using constraints ", "content": "randomize object modify variables check object still valid checking constraints still obeyed call handlerandomize null systemverilog treats variables non random state variables ensures constraints satisfied", "url": "RV32ISPEC.pdf#segment209", "timestamp": "2023-10-18 14:48:23", "segment": "segment209", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.10.4 Turn constraints off and on ", "content": "simple testbench may use data class constraints want two tests different flavors data could use implication operators ifelse build single elaborate constraint controlled nonrandom variables see one large constraint quickly get control add expressions operand addressing modes etc modular approach use separate constraint flavor instruction disable one need constraints simpler approach process turn ing complex example turn constraints create data also disabling ones 
check data validity", "url": "RV32ISPEC.pdf#segment210", "timestamp": "2023-10-18 14:48:23", "segment": "segment210", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.10.5 Specifying a constraint in a test using in-line constraints ", "content": "keep adding constraints class becomes hard manage control soon everyone checking file source con trol system many times constraint used single test visible every test one way localize effects constraint use inline constraints randomize shown section 68 works well new constraint additive default constraints worst case disable constraint conflicts trying example one test injects particular flavor corrupted data could first turn particular validity constraint checks error several problems using inline constraints first constraints multiple locations add new constraint original class may conflict inline constraint second hard reuse inline constraint across multiple tests definition inline constraint exists one piece code could put routine separate file call needed point become nearly external constraint", "url": "RV32ISPEC.pdf#segment211", "timestamp": "2023-10-18 14:48:23", "segment": "segment211", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.10.6 Specifying a constraint in a test with external constraints ", "content": "body constraint defined within class routine body defined externally shown section 411 data class could defined one file one empty constraint test could define version constraint generate flavors stimulus external constraints several advantages inline constraints put file thus reused tests external constraint applies instances class inline constraint affects single call randomize consequently external constraint provides primitive way change class without learn advanced oop tech niques add constraints alter existing ones need define external constraint prototype original class like inline constraints external constraints cause problems 
constraints spread across multiple files final consideration happens body external con straint never defined systemverilog lrm currently specify happen case build testbench many external constraints find simulator handles missing definitions error prevents simulation warning message", "url": "RV32ISPEC.pdf#segment212", "timestamp": "2023-10-18 14:48:23", "segment": "segment212", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.10.7 Extending a class ", "content": "chapter 8 learn extend class take testbench uses given class swap extended class additional redefined constraints routines variables learning oop techniques requires little study flexibility new approach repays great rewards", "url": "RV32ISPEC.pdf#segment213", "timestamp": "2023-10-18 14:48:23", "segment": "segment213", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.11 Common Randomization Problems ", "content": "may comfortable procedural code writing constraints understanding random distributions requires new way thinking issues may encounter trying create random stimulus", "url": "RV32ISPEC.pdf#segment214", "timestamp": "2023-10-18 14:48:23", "segment": "segment214", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.11.1 Use care with signed variables ", "content": "creating testbench may tempted use int byte signed types counters simple variables use random constraints unless really want signed values values pro duced class example 632 randomized two random variables wants make sum 64 obviously could get pairs values 32 32 2 62 could also see 64 128 legitimate solution equation even though may wanted avoid meaningless values negative lengths use unsigned random variables even version causes problems large values pkt1len pkt2len 32 h80000040 32 h80000000 wrap around added together give 32 d64 32 h40 might think adding another pair constraints restrict values two variables best approach make wide 
needed avoid using 32 bit variables constraints", "url": "RV32ISPEC.pdf#segment215", "timestamp": "2023-10-18 14:48:24", "segment": "segment215", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.12 Iterative and Array Constraints ", "content": "constraints presented far allow specify limits scalar vari ables want randomize array foreach statement several array functions let shape distribution values using foreach constraint creates many constraints slow simulation good solver quickly solve hun dreds constraints may slow thousands especially slow nested foreach constraints pro duce n2 constraints array size n see section 6125 algorithm used randc variables instead nested foreach", "url": "RV32ISPEC.pdf#segment216", "timestamp": "2023-10-18 14:48:24", "segment": "segment216", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.12.1 Array size ", "content": "easiest array constraint understand size function specifying number elements dynamic array queue using inside constraint lets set lower upper boundary array size many cases may want empty array size0 remember specify upper limit otherwise end thousands millions elements cause random solver take excessive amount time", "url": "RV32ISPEC.pdf#segment217", "timestamp": "2023-10-18 14:48:24", "segment": "segment217", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.12.2 Sum of elements ", "content": "send random array data design also use control flow perhaps interface transfer four data words words sent consecutively many cycles strobe signal tells data valid legitimate strobe patterns sending four values ten cycles create patterns using random array constrain four bits enabled entire range using sum function remember chapter 2 sum singlebit variables would normally single bit eg 0 1 example 636 compares strobesum 3bit value 3 h4 sum calculated 3bit precision", "url": "RV32ISPEC.pdf#segment218", "timestamp": "2023-10-18 
14:48:24", "segment": "segment218", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.12.3 Issues with array constraints ", "content": "sum function looks simple cause several problems verilog arithmetic rules start simple concept want generate one eight trans actions total length less 1024 bytes first attempt len field byte original transaction generates smaller lengths sum sometimes negative always less 127 definitely wanted try time unsigned field display function unchanged example 642 subtle problem sum transaction lengths always less 256 even though constrained array sum less 1024 problem verilog sum many 8bit values computed using 8bit result bump len field 32 bits using uint type chapter 2 wow happened similar signed problem sec tion 6111 sum two large numbers wrap around small number need limit size based comparison constraint work either individual len field fields 8 bits len values often greater 255 need specify len field 1 255 use 9bit field sum cor rectly requires constraining every element array", "url": "RV32ISPEC.pdf#segment219", "timestamp": "2023-10-18 14:48:24", "segment": "segment219", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.12.4 Constraining individual array and queue elements ", "content": "systemverilog lets constrain individual elements array using foreach might able write constraints fixedsize array listing every element foreach style compact practi cal way constrain dynamic array queue foreach addition constraint individual elements fixed example note len array 10 bits wide must unsigned specify constraints array elements long careful endpoints following class creates ascending list values comparing element previous except first complex constraints become constraints writ ten solve einstein problem logic puzzle five people five separate attributes eight queens problem place eight queens chess board none capture sudoku", "url": "RV32ISPEC.pdf#segment220", 
"timestamp": "2023-10-18 14:48:24", "segment": "segment220", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.12.5 Generating an array of unique values ", "content": "create array random values unique exam ple may need assign id numbers n bus drivers range 0 max1 max n may tempted use constraint nested foreach specifying j systemverilog solver expands equations array 30 values creates almost 1000 constraints instead try procedural code postrandomize function uses randc variable put randc variable helper class randomize variable", "url": "RV32ISPEC.pdf#segment221", "timestamp": "2023-10-18 14:48:24", "segment": "segment221", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.13 Atomic Stimulus Generation vs. Scenario Generation ", "content": "seen atomic random transactions learned make single random bus transaction single network packet single processor instruction job verify design works realworld stimuli bus may long sequences transactions dma transfers cache fills network traffic consists extended sequences packets simultaneously read email browse web page download music net parallel processors deep pipelines filled code routine calls loops interrupt handlers generating transactions one time unlikely mimic scenarios", "url": "RV32ISPEC.pdf#segment222", "timestamp": "2023-10-18 14:48:24", "segment": "segment222", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.13.1 An atomic generator with history ", "content": "easiest way create stream related transactions atomic generator base random values ones previous trans actions class might constrain bus transaction repeat previous command write 80 time also use previous destina tion address plus increment use postrandomize function make copy generated transaction use next call randomize scheme works well smaller cases gets trouble need information entire sequence ahead time example dut may need know length sequence 
network transactions starts", "url": "RV32ISPEC.pdf#segment223", "timestamp": "2023-10-18 14:48:24", "segment": "segment223", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.13.2 Randsequence ", "content": "next way generate sequence transactions using randse quence construct systemverilog randsequence describe grammar transaction using syntax similar bnf backusnaur form example 653 generates sequence called stream stream either cfgread ioread memread random sequence engine ran domly picks one cfgread label weight 1 ioread twice likely chosen memread likely chosen weight 5 cfgread either single call cfgreadtask call task followed another cfgread result task always called least possibly many times one big advantage randsequence procedural code debug stepping though execution adding display state ments call randomize object either works fails see steps taken get result several problems using randsequence code gen erate sequence separate different style classes data constraints used sequence use randomize randsequence master two different forms randomiza tion seriously want modify sequence perhaps add new branch action modify original sequence code make extension see chapter 8 extend class add new code data constraints without edit original class", "url": "RV32ISPEC.pdf#segment224", "timestamp": "2023-10-18 14:48:24", "segment": "segment224", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "6.13.3 Random array of objects ", "content": "last form generating random sequences randomize entire array objects create constraints refer previous next objects array systemverilog solver solves constraints simul taneously since entire sequence generated extract information total number transactions checksum data values first transaction sent alternatively build sequence dma transfer constrained exactly 1024 bytes let solver pick right number transactions reach goal", "url": "RV32ISPEC.pdf#segment225", "timestamp": 
6.13.4 Combining sequences

You can combine multiple sequences to make a realistic flow of transactions. For example, a network device could make one sequence that resembles downloading email, a second for viewing a web page, and a third for entering single characters into a web-based form. The techniques for combining these flows are beyond the scope of this book; to learn more, see the VMM, described in Bergeron et al. (2005).

6.14 Random Control

At this point you may be thinking that this whole process is a great way to create long streams of random input for your design. But you may also think that it is a lot of work when you just want to occasionally make a random decision in your code, and you may prefer a set of procedural statements that you can step through using a debugger.

6.14.1 Introduction to randcase

You can use randcase to make a weighted choice between several actions, without creating a class and instance. Example 6.54 chooses one of three branches based on the weights. SystemVerilog adds up the weights (1 + 8 + 1 = 10), chooses a value in this range, and picks the appropriate branch. The branches are order dependent, the weights can be variables, and they need not add up to 100. The $urandom_range function returns a random number in a specified range.

You could write Example 6.54 using a class and the randomize function. The OOP version of this small case statement is a little larger; however, as part of a larger set of classes and constraints, it would be more compact than the equivalent randcase statement. Code using randcase is difficult to override or modify; as with random constraints, the only way to modify the random results is to rewrite the code to use variable weights.

Be careful using randcase, as it leaves no tracks behind. For example, you could use it to decide whether to inject an error into a transaction. The problem is that the downstream transactors and the scoreboard need to know about this choice, and the best way to inform them would be with a variable in the transaction or environment. If you are going to create this variable, you might as well make it part of your classes, where it could be made random.
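A minimal sketch of a weighted randcase decision using the 1/8/1 weighting described above; the branch bodies are illustrative, not the book's Example 6.54:

```systemverilog
// randcase picks one branch; the weights (1+8+1=10) need not sum
// to 100 and may be variables. Branch bodies are illustrative.
initial begin
  int len;
  randcase
    1: len = 1;                       // 1-in-10: shortest
    8: len = $urandom_range(2, 255);  // 8-in-10: medium length
    1: len = 256;                     // 1-in-10: longest
  endcase
  $display("len = %0d", len);
end
```

Note that nothing records which branch was taken; if downstream code needs to know, store the decision in a class variable instead.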
Such a variable can then be used in constraints to change the behavior for different tests.

6.14.2 Building a decision tree with randcase

You can use randcase when you need to create a decision tree. Example 6.56 has two levels; since this is procedural code, you can see its extended use.

6.15 Random Generators

Just how random is SystemVerilog? On the one hand, your testbench depends on an uncorrelated stream of random values to create stimulus patterns that go beyond what a directed test can do. On the other hand, you need to repeat those patterns to debug a particular test, even as the design and testbench undergo minor changes.

6.15.1 Pseudorandom number generators

Verilog uses a simple PRNG that you can access with the $random function. The generator has an internal state that you can set by providing a seed to $random. All IEEE 1364-compliant Verilog simulators use the same algorithm to calculate values. Example 6.57 shows a simple PRNG, though not the one used by SystemVerilog. This PRNG has a 32-bit state. To calculate the next random value, square the state to produce a 64-bit value, take the middle 32 bits, and add the original value. As you can see, this simple code produces a stream of values that seem random, and the stream can be repeated by starting with the same seed value. SystemVerilog calls its PRNG for new values in randomize and randcase.

6.15.2 Random Stability — multiple generators

Verilog uses a single PRNG for the entire simulation. What would happen if SystemVerilog kept this approach? Testbenches often have several stimulus generators running in parallel, creating data for the design under test. If two streams share one PRNG, problems arise.
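The middle-square style generator described in Section 6.15.1 can be sketched as follows. This is an illustration of the idea only, and is neither the exact code of Example 6.57 nor the algorithm SystemVerilog actually uses:

```systemverilog
// Simple middle-square style PRNG: square the 32-bit state,
// take the middle 32 bits of the 64-bit product, add the old state.
bit [31:0] state = 32'h12345678;  // the seed

function bit [31:0] next_random();
  bit [63:0] sq;
  sq    = state * state;          // 64-bit square of the state
  state = sq[47:16] + state;      // middle 32 bits plus original value
  return state;
endfunction
```

Starting from the same seed always reproduces the same stream, which is exactly the repeatability a testbench needs for debug.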
Each stream would get only a subset of the random values. Figure 6.3 shows two stimulus generators and a single PRNG producing the values a, b, c, and so on. Gen2 has two random objects, so every cycle it uses twice as many random values as Gen1. A problem occurs when one of the classes changes: if Gen1 gets an additional random variable, it now consumes two random values every time it is called. This changes not only the values used by Gen1, but also those seen by Gen2. In SystemVerilog there is a separate PRNG for every object and thread, so changes in one object do not affect the random values seen by the others.

6.15.3 Random Stability — hierarchical seeding

Every object and thread has its own PRNG with a unique seed. When a new object or thread is started, its PRNG is seeded from its parent's PRNG. Thus a single seed specified at the start of simulation can create many streams of random stimulus that are distinct from one another.

6.16 Random Device Configuration

An important part of the DUT to test is its configuration — both the internal DUT settings and the system that surrounds it, as described in Section 6.2.1. If your tests randomize the configuration and environment, you can be confident that you have tested as many modes as possible. Example 6.58 shows how to create a random testbench configuration and modify the results as needed for a test. At the top level, the eth_cfg class describes the configuration of a 4-port Ethernet switch. It is instantiated in the environment class, which in turn is used in a test; the test overrides one of the configuration values, enabling all 4 ports.

The configuration class is used by the environment class in several phases. The configuration object is constructed in the environment's constructor and randomized in the gen_cfg phase. This split allows you to turn constraints on or off before randomize is called, and to override the generated values afterwards. The build phase then creates the virtual components that surround the DUT. The test, described in a program block, instantiates the environment class and runs through these steps. Between the steps you may want to override the random configuration, perhaps to reach a corner case. The following test randomizes the configuration class and then enables all the ports.
6.17 Conclusion

Constrained-random tests are the practical way to generate the stimulus needed to verify a complex design. SystemVerilog offers many ways to create random stimulus, and this chapter has presented many alternatives. Your tests need to be flexible, allowing you either to use the values generated by default, or to constrain and override the values to reach your goals. Always plan ahead when creating the testbench, leaving sufficient hooks so that you can steer the testbench from the test without modifying existing code.

Chapter 7

Threads and Interprocess Communication

7.1 Introduction

In real hardware, sequential logic is activated by clock edges, while combinational logic is constantly changing as its inputs change. This parallel activity is simulated in Verilog RTL using initial and always blocks, plus the occasional gate or continuous assignment statement. To stimulate and check these blocks, your testbench uses many threads of execution running in parallel. The blocks of the testbench environment are modeled as transactors that each run in their own thread, with the SystemVerilog scheduler acting as a traffic cop that chooses which thread runs next. You can use the techniques in this chapter to control these threads, and thus your testbench.

Your threads need to communicate with their neighbors. In Figure 7.1, the generator passes stimulus to the agent and onwards through the environment. A class needs to know when the generator completes, so it can tell the rest of the testbench threads to terminate. This is all done with interprocess communication (IPC) constructs: standard Verilog events, event control, and wait constructs, plus SystemVerilog mailboxes and semaphores.
7.2 Working with Threads

All the thread constructs can be used in modules and program blocks, but your testbenches belong in program blocks. As a result, your code always starts in initial blocks, which begin executing at time 0 when the simulator starts. You cannot put an always block in a program, but you can easily get around this by using a forever loop inside an initial block.

Classic Verilog has two ways of grouping statements: with a begin...end or a fork...join. Statements in a begin...end run sequentially, while those in a fork...join execute in parallel. The latter is limited in that all statements inside the fork...join must finish before the rest of the block can continue; as a result, it is rare for Verilog testbenches to use this feature. SystemVerilog introduces two new ways to create threads, the fork...join_none and fork...join_any statements. Your testbench communicates with, synchronizes, and controls these threads using the existing constructs of events, event control, wait, and disable statements, plus the new language elements: semaphores and mailboxes.

7.2.1 Using fork...join and begin...end

Example 7.1 has a fork...join parallel block enclosed in a begin...end sequential block, and shows the difference between the two.¹⁰ (¹⁰The SystemVerilog LRM uses the terms thread and process interchangeably. The term "process" is commonly associated with Unix processes, which contain a program running in its own memory space. Threads, or lightweight processes, may share common code and memory, and consume far fewer resources than a typical process. This book uses the term "thread"; however, "interprocess communication" is the common term, and is the one used in this book.)

Note the output of this code: the fork...join executes its statements in parallel, so statements with shorter delays execute before those with longer delays. The fork...join completes only after its last statement finishes, so the statement after the join starts at time 50.

7.2.2 Spawning threads with fork...join_none

A fork...join_none block schedules each statement in the block, but execution continues in the parent thread. The following code is identical to Example 7.1 except that the join has been converted to a join_none. The diagram for this block is similar to Figure 7.3.
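The difference between the two join styles can be sketched as follows; the delays and messages are illustrative, not those of Example 7.1:

```systemverilog
// fork...join waits for all child threads; fork...join_none does not.
initial begin
  fork
    #30 $display("@%0t: thirty", $time);
    #10 $display("@%0t: ten",    $time);   // shorter delay prints first
  join                                     // blocks until both finish
  $display("@%0t: after join", $time);     // prints at time 30

  fork
    #20 $display("@%0t: twenty", $time);
  join_none                                // does not block
  $display("@%0t: after join_none", $time);  // prints immediately, at 30
end
```

The spawned #20 thread keeps running after the join_none, printing later at time 50.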
Note that the statement after the join_none executes before any statement inside the fork...join_none.

7.2.3 Synchronizing threads with fork...join_any

A fork...join_any block schedules each statement in the block; then, as the first statement completes, execution continues in the parent thread. The remaining threads continue to run. The following code is identical to the previous examples except that the join has been converted to a join_any. Note in the results that the statement after the join_any is displayed as soon as the first statement of the parallel block completes.

7.2.4 Creating threads in a class

Use fork...join_none to start a thread from code such as a random transactor or generator. Example 7.7 shows a generator class whose run task creates n packets. In a full testbench, the classes for the driver, monitor, checker, and other transactors all need to run in parallel. There are several points to notice in Example 7.7. First, the transactor is not started in the new task: the constructor initializes values but does not start any threads. Separating the constructor from the code that does the real work allows you to change variables before starting the thread of execution inside the object, which lets you inject errors and modify defaults to alter the behavior of the object. Next, the run task starts the thread with a fork...join_none block, so the thread is an implementation detail of the transactor, spawned from within the class itself.

7.2.5 Dynamic threads

In Verilog, threads are predictable: you can read the source code and count the initial, always, and fork...join blocks to know how many threads there are in a module. SystemVerilog lets you create threads dynamically and does not require you to wait for them to finish. In Example 7.8, a testbench generates random transactions and sends them to a DUT that stores each one for a predetermined time and then returns it to the testbench. The testbench has to wait for each transaction to complete, but does not want to stop the generator. After a transaction is generated, the wait_for_tr task is called; it spawns a thread to watch the bus for the matching transaction.
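The pattern of Section 7.2.4 — construction separated from a run task that spawns the thread — can be sketched as follows; the class and message names are illustrative, not those of Example 7.7:

```systemverilog
// A transactor that does no work in its constructor; the run task
// spawns the worker thread with fork...join_none and returns at once.
class Generator;
  int n_packets;

  function new(int n_packets = 10);  // initialize only -- no threads here
    this.n_packets = n_packets;
  endfunction

  task run();                        // caller continues immediately
    fork
      repeat (n_packets)
        #1 $display("@%0t: sending packet", $time);
    join_none
  endtask
endclass

initial begin
  Generator gen = new(4);
  // you could modify gen's variables here, before any thread starts
  gen.run();
end
```

Because new starts nothing, the test can inject errors or change defaults between construction and run.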
In a normal simulation, many of these threads run concurrently. In this simple example, the thread just prints a message, but you could add more elaborate controls.

7.2.6 Automatic variables in threads

A common but subtle bug occurs when a loop spawns threads that do not save variable values as the next iteration starts. Example 7.8 works because the program block uses automatic storage. If wait_for_tr used static storage, every thread would share the same variable tr, and later calls would overwrite the value set by earlier ones. Likewise, a fork...join_none inside a repeat loop would try to match all the incoming transactions using one tr value — a value that changes with the next pass through the loop. Always use automatic variables to hold values used by concurrent threads.

Consider Example 7.9, a fork...join_none inside a loop. SystemVerilog schedules the threads inside the fork...join_none, but they are not executed until the original code blocks, here at the #0 delay. As a result, Example 7.9 prints "3 3 3": the value of the index variable j after the loop terminates. A #0 delay blocks the current thread and reschedules it to start later in the current time slot, as in Example 7.10; such a delay makes the current thread run after the threads spawned by the fork...join statement. While this delay is useful for blocking a thread, be careful: excessive use of #0 causes race conditions and unexpected results.

Instead, use an automatic variable inside the fork...join statement to save a copy of the variable, as shown in Example 7.11. The fork...join_none block is now split into two parts. The automatic variable declaration and its initialization run in the thread of the enclosing loop, so on each pass a copy of k is created and set to the current value of j. The body of the fork...join_none, the $write, is scheduled, including its own copy of k. After the loop finishes, the #0 blocks the current thread, and then the three spawned threads run, each printing the value of its copy of k. When the threads complete and nothing else is left in the current time-slot region, SystemVerilog advances to the next statement and the final display executes. Example 7.12 traces the code and variables of Example 7.11; the three copies of the automatic variable k are called k0, k1, and k2.
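A minimal sketch of the bug and its fix described above, modeled on the pattern of Examples 7.9 and 7.11 but not copied from them:

```systemverilog
// Without the automatic copy, all three threads would print the
// loop's final value of j (3). The automatic variable k is
// initialized in the loop's thread, so each spawned thread keeps
// its own copy.
program automatic test;
  initial begin
    for (int j = 0; j < 3; j++)
      fork
        automatic int k = j;   // copy made now, in the loop's thread
        $write("%0d ", k);     // runs later, after the loop blocks
      join_none
    #0;                        // block so the spawned threads can run
    $display("\nafter the fork threads");
  end
endprogram
```

Deleting the `automatic int k = j;` line and printing j instead reproduces the "3 3 3" behavior of Example 7.9.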
7.2.7 Disabling a single thread

Just as you need to create threads in a testbench, you may also need to stop them. The Verilog disable statement works on SystemVerilog threads. This version of the wait_for_tr task uses a fork...join_any plus disable to create a watchdog timeout. The task's outermost fork...join_none is identical to the Example 7.8 version; inside it, two threads run in a fork...join_any: a simple wait for the done signal, and, in parallel, a delayed display. If the correct bus address comes back quickly enough, the wait construct completes, the join_any executes, and the disable kills the remaining thread. However, if the bus address never gets the right value, the timeout delay completes, the error message is printed, the join_any executes, and the disable kills the thread containing the wait.

7.2.8 Disabling multiple threads

Example 7.13 used the classic Verilog disable statement to stop threads in a named block. SystemVerilog introduces the disable fork statement to stop all child threads spawned from the current thread. Watch out: you might unintentionally stop too many threads, including ones created in routines called from your code. Always surround the target code with a fork...join to limit the scope of the disable fork statement. In Example 7.14, an additional begin...end block inside the fork...join makes its statements sequential.

The following sections show how asynchronously disabling multiple threads can cause unexpected behavior. Watch for side effects when a thread is stopped midstream; you may instead want to design your algorithm to check for interrupts at stable points and gracefully give up its resources. The next examples use the wait_for_tr task of Example 7.13 — think of the task as timeout code. The code that calls wait_for_tr starts as thread 0. The next fork...join creates thread 1. Inside it, one more thread is spawned by the wait_for_tr task, and the innermost fork...join spawns thread 4 by calling the task again. After a delay, the disable fork stops all the child threads: threads 2, 3, and 4, and thread 1, are the ones stopped. Thread 0, being outside the fork...join block containing the disable, is unaffected. Example 7.15 is a more robust version of Example 7.14: a disable with a label explicitly names the threads you want to stop.
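The timeout pattern of Section 7.2.7, with the scope-limiting fork...join of Section 7.2.8, can be sketched as follows; the event name and delay are illustrative:

```systemverilog
// Wait for a done event, but give up after a timeout. join_any
// returns when the first branch finishes; disable fork then kills
// the loser. The outer fork...join limits disable fork's scope.
event done;

task wait_with_timeout(int timeout);
  fork begin
    fork
      wait (done.triggered);                  // success path
      #(timeout) $display("Error: timeout");  // watchdog path
    join_any
    disable fork;          // kill whichever branch is still running
  end join
endtask
```

Without the enclosing fork...join, the disable fork could also kill unrelated threads spawned earlier by the calling code.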
7.2.9 Waiting for all spawned threads

In SystemVerilog, when all the initial blocks in a program are done, the simulator exits. However, you may have spawned threads that are still running; use the wait fork statement to wait for all child threads to complete.

7.3 Interprocess Communication

The threads in your testbench need to synchronize and exchange data. At the most basic level, one thread waits for another, such as the environment object waiting for the generator to complete. Multiple threads might try to access a single resource, such as a bus in the DUT; the testbench needs to ensure that one and only one thread is granted access at a time. At the highest level, threads need to exchange data, such as the transaction objects passed from the generator to the agent.

7.4 Events

A Verilog event synchronizes threads; it is similar to a phone, where one person waits for a call from another. A Verilog thread waits for an event with the @ operator. This operator is edge sensitive, so it always blocks, waiting for the event to change. Another thread triggers the event with the -> operator, unblocking the first thread. SystemVerilog enhances Verilog events in several ways. First, an event is now a handle to a synchronization object that can be passed around to routines. This feature allows you to share events across objects without making the events global — you hand out the phone number of the object so it can be called later. There is always the possibility of a race condition in Verilog: one thread blocks on an event at the same time as another triggers it. If the triggering thread executes before the blocking thread, the trigger is missed. SystemVerilog introduces the triggered function, which lets you check whether an event has been triggered, including during the current time slot. A thread can wait on this function instead of blocking on the @ operator.
7.4.1 Blocking on the edge of an event

When you run the following code, the first initial block starts, triggers its event, and then blocks on the other event. The second block starts, triggers its event (waking the first block), and then blocks on the first event. But that second thread locks up: it missed the first event, a zero-width pulse.

7.4.2 Waiting for an event trigger

Instead of the edge-sensitive block @e1, use the level-sensitive wait (e1.triggered). It does not block if the event has been triggered during this time step; otherwise, it waits until the event is triggered. When you run this code, the first initial block starts, triggers its event, and then blocks on the other event. The second block starts, triggers its event (waking the first block), and then waits on the first event — and since that event was already triggered during this time step, it does not block.

7.4.3 Passing events

As described above, a SystemVerilog event can be passed as an argument to a routine. In Example 7.21, an event is used by a transactor to signal that it has completed.

7.4.4 Waiting for multiple events

In Example 7.21, a single generator fired a single event. In a full testbench, the environment class must wait for multiple child processes to finish, such as n generators. The easiest way is to use wait fork, which waits for all child processes to end. The problem is that it also waits for the transactors, drivers, and every other thread spawned by the environment; you need to be more selective. If you still want to use events to synchronize the parent and child threads, you could use a loop in which the parent waits for each event in turn. This would work if thread 0 finished, then thread 1, then thread 2, and so on — but the threads can finish in any order, and you could end up waiting on an event that was triggered many cycles ago. One solution is to spawn a new thread for each event, with each child simply blocking on its event; a wait fork over just those children is then selective. Another way to solve the problem is to keep track of the number of events triggered, or, slightly less complicated, to get rid of the events entirely and wait on a count of the number of running generators, kept in a static variable of the generator class.
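The race described in Sections 7.4.1 and 7.4.2 can be sketched with a minimal two-thread example; this is an illustration, not the book's code:

```systemverilog
// Thread 1 triggers e1 then waits on e2; thread 2 triggers e2 then
// waits on e1. An edge-sensitive @e1 in thread 2 would deadlock
// (the zero-width pulse is already gone); wait(...triggered) sees a
// trigger from the same time slot and falls through.
event e1, e2;

initial begin
  -> e1;
  wait (e2.triggered);
  $display("thread 1 done");
end

initial begin
  -> e2;
  wait (e1.triggered);   // @e1 here would block forever
  $display("thread 2 done");
end
```

Swapping the wait for @e1 reproduces the lock-up of Section 7.4.1.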
Note that all the thread-manipulation code is then replaced by a single wait construct. The last block in Example 7.24 waits for the count using the handle gen[0]; any handle to an object gives access to the class's static variables.

7.5 Semaphores

A semaphore allows you to control access to a resource. Imagine that you and your spouse share a car. Obviously, only one person can drive it at a time. You manage this by agreeing that whoever holds the key can drive it; when you are done with the car, you give up the key so the other person can use it. The semaphore makes sure only one person has access to the car. In operating system terminology, this is known as mutually exclusive access, and a semaphore used this way is known as a mutex.

Semaphores are useful in a testbench when you have a resource, such as a bus, that may have multiple requestors inside the testbench but, as part of the physical design, can have only one driver. In SystemVerilog, a thread that requests a key when one is not available always blocks, and multiple blocked threads are queued in FIFO order.

7.5.1 Semaphore operations

There are three operations on a semaphore: create a semaphore with one or more keys using the new method, get one or more keys with get, and return one or more keys with put. If you want to try to get a semaphore without blocking, use the try_get function: it returns 1 if there are enough keys, and 0 if there are insufficient keys.

7.5.2 Semaphores with multiple keys

There are two things to watch for when using semaphores. First, always put back the same number of keys you took; otherwise you may suddenly have two keys for one car. Secondly, be careful if your testbench needs to get and put multiple keys. Perhaps there is one key left and a thread requests two, causing it to block. Now a second thread requests a single key — what should happen? In SystemVerilog, the second request is blocked by the FIFO ordering, even though there are enough keys for it. If you need to let smaller requests jump ahead of larger ones, you will have to write your own class. Perhaps your hardware resembles a restaurant queue of people waiting for tables.
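The basic operations of Section 7.5.1 can be sketched as follows; the resource and task names are illustrative:

```systemverilog
// Two threads share one bus; a single-key semaphore (a mutex)
// serializes their access in FIFO order.
semaphore sem = new(1);            // one key

task drive_bus(int id);
  sem.get(1);                      // block until the key is free
  $display("@%0t: thread %0d owns the bus", $time, id);
  #10 sem.put(1);                  // return the key after the transfer
endtask

initial fork
  drive_bus(1);
  drive_bus(2);                    // blocks until thread 1 calls put
join
```

Replacing get with try_get would let a thread test for the key and do other work instead of blocking.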
A party of two is able to jump ahead of groups of four in the line if there is space available at a smaller table.

7.6 Mailboxes

How do you pass information between two threads? Perhaps a generator needs to create many transactions and pass them to a driver. You might be tempted to have the generator thread call a task in the driver. If you do that, the generator needs to know the hierarchical path to the driver's task, making the code less reusable. Additionally, this style forces the generator to run at the speed of the driver, which can cause synchronization problems if one generator needs to control multiple drivers. Instead, think of the generator and driver as transactors — autonomous objects that communicate through a channel. Each object gets a transaction from its upstream object (the generator creates them in this case), does some processing, and passes it to the downstream object. The channel must allow its sender and receiver to operate asynchronously. You may be tempted to use a shared array or queue, but it is difficult to create code that reads and writes them while correctly blocking the threads. The solution is the SystemVerilog mailbox.

From a hardware point of view, the easiest way to think about a mailbox is as a FIFO with a source and a sink. The source puts data into the mailbox, and the sink gets values from it. Mailboxes have a maximum size, which can be unlimited. When the source puts a value into a sized mailbox that is full, it blocks until data is removed. Likewise, if the sink tries to remove data from a mailbox that is empty, it blocks until data is put in.

A mailbox is an object and thus has to be instantiated by calling the new function, which takes an optional size argument to limit the number of entries in the mailbox; if the size is 0 or unspecified, the mailbox is unbounded and can hold an unlimited number of entries. You put data into a mailbox with the put task and remove it with the get task; put blocks if the mailbox is full, and get blocks if it is empty. The peek task gets a copy of the data in the mailbox without removing it. The data is a single value, such as an integer, a logic of any size, or a handle. The default mailbox type lets you put in a mix of data, but you should stick with one type per mailbox.

A classic bug is a loop that randomizes objects and puts them in a mailbox, but with the object constructed outside the loop. Since only one object is ever constructed and randomized, the mailbox ends up holding multiple handles that all point to that single object, and the code that later gets the handles from the mailbox sees only the last set of random values. The solution is to make sure your loop has all three steps: constructing the object, randomizing it, and putting it in the mailbox.
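The three-step loop described above can be sketched as follows; the Transaction class with one rand field is assumed for illustration:

```systemverilog
// Construct, randomize, and put INSIDE the loop. Constructing the
// object outside the loop would fill the mailbox with handles to a
// single object, all showing its last set of random values.
class Transaction;
  rand bit [7:0] data;
endclass

mailbox mbx = new();

initial begin
  Transaction tr;
  repeat (5) begin
    tr = new();             // step 1: construct a fresh object
    void'(tr.randomize());  // step 2: randomize it
    mbx.put(tr);            // step 3: put the handle in the mailbox
  end
end
```

Moving the `tr = new();` above the repeat loop reproduces the classic bug.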
This bug is so common that it is also mentioned in Section 4.14.3. This type of loop is known as the factory pattern, described in Section 8.3. If you do not want your code to block, use the try_get and try_peek functions: if successful, they return a nonzero value; otherwise they return 0. These are more reliable than the num function, since the number of entries can change before your next access to the mailbox.

7.6.1 Mailbox in a testbench

Example 7.26 shows a generator and driver exchanging transactions using a mailbox.

7.6.2 Bounded mailboxes

By default, mailboxes are similar to an unlimited FIFO: a producer can put any number of objects into the mailbox before the consumer gets them out. However, you may want the two threads to operate in lockstep, with the producer blocking until the consumer is done with the object. You can specify a maximum size for the mailbox when you construct it. The default mailbox size is 0, which creates an unbounded mailbox; any size greater than 0 creates a bounded mailbox. If you attempt to put more objects than the limit, put blocks until you get an object out of the mailbox, creating room.

Example 7.27 creates the smallest mailbox, which stores a single message. The producer thread tries to put three messages (the integers 1, 2, and 3) into the mailbox, while the consumer thread slowly gets them, one every 1 ns. In the output shown in Example 7.28, the first put succeeds, but when the producer tries to put 2, it blocks until the consumer wakes up and gets message 1 from the mailbox; only then can the producer finish putting message 2. The bounded mailbox acts as a buffer between the two processes. You can see that the producer never gets more than one message ahead of the consumer.

7.6.3 Unsynchronized threads communicating with a mailbox

If you want the producer and consumer threads to run in lockstep, they need an additional handshake. In Example 7.29, the producer and consumer classes exchange integers using a mailbox, with no explicit synchronization between the two objects. The output in Example 7.31 shows no synchronization: the producer puts all three integers in the mailbox before the consumer gets the first one. Each thread continues running until it hits a blocking statement; the producer has none, while the consumer thread blocks at its first call to mbx.get.
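The bounded-mailbox behavior of Section 7.6.2 can be sketched as follows; the sizes and delays are illustrative, not those of Example 7.27:

```systemverilog
// A one-entry mailbox: the producer's second put blocks until the
// consumer has removed the first message, so the producer can never
// get more than one message ahead.
mailbox mbx = new(1);    // bounded: holds a single entry

initial begin            // producer
  for (int i = 1; i <= 3; i++) begin
    mbx.put(i);
    $display("@%0t: put %0d", $time, i);
  end
end

initial begin            // consumer
  int msg;
  repeat (3) begin
    #1 mbx.get(msg);     // slowly drain, one message per time unit
    $display("@%0t: got %0d", $time, msg);
  end
end
```

Constructing the mailbox with new() instead of new(1) removes the backpressure and lets the producer finish all three puts at time 0.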
7.6.4 Synchronized threads using a mailbox and events

You may want the two threads to use a handshake so that the producer does not get ahead of the consumer. The consumer already blocks, waiting for the producer, by using the mailbox; the producer additionally needs to block, waiting for the consumer to finish the transaction. It can do this with a blocking statement such as an event, a semaphore, or a second mailbox. Example 7.32 uses an event: the producer blocks after it puts data in the mailbox, and the consumer triggers the event when it has consumed the data. If you use wait (handshake.triggered) in a loop, be sure to advance time while waiting, since wait can unblock repeatedly within the given time slot without moving to another. Example 7.32 instead uses the edge-sensitive blocking statement @handshake to ensure that the producer stops after sending each transaction. The edge-sensitive statement works even multiple times per time slot, though there may be ordering problems if the trigger and the block happen in the same time slot. The producer does not advance until the consumer triggers the event; in the output you can see the producer and consumer running in lockstep.

7.6.5 Synchronized threads using two mailboxes

Another way to synchronize the two threads is to use a second mailbox that sends a completion message from the consumer back to the producer. Here the return message in the rtn mailbox is the negated version of the original integer.

7.6.6 Other synchronization techniques

You can also complete the handshake with a blocking variable, a semaphore, or an event. The event is the simplest construct, followed by a blocking variable; a semaphore is comparable to using a second mailbox, except that no information is exchanged.

7.7 Building a Testbench with Threads and IPC

Now that you know how to use threads and interprocess communication, you can construct a testbench.
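The mailbox-plus-event handshake of Section 7.6.4 can be sketched as follows; this is an illustration, not Example 7.32:

```systemverilog
// The producer blocks on the event after each put; the consumer
// triggers it after consuming. The two threads run in lockstep.
mailbox mbx = new();
event   handshake;

initial begin                 // producer
  for (int i = 1; i <= 3; i++) begin
    mbx.put(i);
    @handshake;               // wait until the consumer is done
    $display("@%0t: producer saw ack for %0d", $time, i);
  end
end

initial begin                 // consumer
  int msg;
  repeat (3) begin
    mbx.get(msg);             // blocks when the mailbox is empty
    #1 $display("@%0t: consumed %0d", $time, msg);
    -> handshake;             // release the producer
  end
end
```

The #1 delay keeps the trigger in a later time slot than the producer's @handshake, avoiding the same-slot ordering problem mentioned above.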
The basic testbench is built from transactors.

7.7.1 Basic transactor

The following transactor is an agent that sits between the generator and the driver.

7.7.2 Environment class

The generator, agent, driver, monitor, checker, and scoreboard classes are all instantiated in the environment class.

7.7.3 Test program

The main test goes in the top-level program.

7.8 Conclusion

Your design is modeled as many independent blocks running in parallel, and your testbench must likewise generate multiple stimulus streams and check the responses using parallel threads, organized into a layered testbench and orchestrated by a top-level environment. SystemVerilog introduces the powerful fork...join_none and fork...join_any constructs for dynamically creating new threads, in addition to the standard fork...join. These threads communicate and synchronize using events, semaphores, and mailboxes, as well as the classic @ event control and wait statements. Lastly, the disable command can be used to terminate threads. These thread and control constructs complement the dynamic nature of OOP, in which objects are created and destroyed as they run in independent threads, allowing you to build a powerful, flexible testbench environment.

Chapter 8

Advanced OOP Guidelines

8.1 Introduction
How would you create a complex class for a bus transaction that also performs error injection and has variable delays? The first approach is to put everything in one large, flat class. While this approach is simple to build, and the code easy to understand since it is all right there in one class, it is slow to develop and debug. Additionally, a large class is a maintenance burden: anyone who wants to add a new transaction behavior has to edit that one file. You would never create a complex RTL design using one giant Verilog module, so break your classes into smaller, reusable blocks.

Your next choice is composition, as you learned in Chapter 4: instantiate one class inside another, just as you instantiate modules inside one another when building a hierarchical testbench. You can then write and debug the classes one at a time. Look for natural partitions when deciding where the variables and methods go in the various classes; sometimes it is difficult to divide the functionality into separate parts. Take the example of a bus transaction plus error injection. When you write the original transaction class, you may not think of every possible error case. Ideally, you would like to keep one class for good transactions and later add different error injectors. The transaction has data fields and an error-checking CRC field generated from the data; one form of error injection is corruption of the CRC field. If you use composition, you need separate classes for the good transaction and the error transaction, and the testbench code that used good objects would have to be rewritten to process the new error objects.

What you really need is something that resembles the original class but adds new variables and methods. This is accomplished with inheritance. Inheritance allows a new class to be derived from an existing one in order to share its variables and routines. The original class is known as the base or super class; the new one, since it extends the capability of the base class, is called the extended class. Inheritance provides reusability: you add features such as error injection to an existing class of basic transactions without modifying the base class. The real power of OOP comes from the ability to take an existing class and selectively change parts of its behavior, replacing routines without touching the surrounding infrastructure. When planning, create a testbench solid enough to send basic transactions, yet able to accommodate extensions.
Those extensions are then added as needed by each test.

8.2 Introduction to Inheritance

Figure 8.1 shows a simple testbench: the generator creates a transaction, randomizes it, and sends it to the driver. The rest of the testbench has been left out. (Figure 8.1: Simplified layered testbench.)

8.2.1 Base transaction

The base transaction class has variables for the source and destination addresses, eight data words, and a CRC for error checking, plus routines for displaying the contents and calculating the CRC. The calc_crc function is tagged virtual so that it can be redefined as needed, as shown in the next section; virtual routines are explained in more detail later in this chapter.

8.2.2 Extending the Transaction class

Suppose your testbench sends good transactions to the DUT, and now you want to inject errors. You can take the existing transaction class and extend it to create a new class. Following the guidelines of this chapter, you want to make as few code changes as possible to the existing testbench, and to reuse the existing transaction class. This is done by declaring the new class, BadTr, as an extension of the current class; Transaction is called the base class, and BadTr is known as the extended class. Note that in Example 8.2 the variable crc is used without a hierarchical identifier: the BadTr class can see all the variables of the original Transaction class, plus its own new variable bad_crc. The calc_crc function in the extended class calls the calc_crc in the base class using the super prefix. You can only call one level up: going across multiple levels, as in super.super.new, is not allowed in SystemVerilog. We mention this style only to point out that reaching across multiple boundaries violates the rules of encapsulation.

Always declare the routines inside your classes as virtual so they can be redefined in an extended class. This applies to tasks and functions — but not the new function, which is called when the object is constructed.
extend systemverilog always calls new function based handle type", "url": "RV32ISPEC.pdf#segment272", "timestamp": "2023-10-18 14:48:27", "segment": "segment272", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.2.3 Quick OOP glossary ", "content": "quick glossary terms explained chapter 4 oop term variable class property task function called method extend class original class transaction called parent class super class extended class badtr known derived class sub class base class one derived class prototype routine first line shows argument list return type prototype used move body routine outside class needed describe routine communicated others shown section 411", "url": "RV32ISPEC.pdf#segment273", "timestamp": "2023-10-18 14:48:27", "segment": "segment273", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.2.4 Constructors in extended classes ", "content": "start extending classes one rule constructors new function keep mind base class constructor arguments extend constructor must constructor must call base constructor first line", "url": "RV32ISPEC.pdf#segment274", "timestamp": "2023-10-18 14:48:27", "segment": "segment274", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.2.5 Driver class ", "content": "following driver class receives transactions generator drives dut class stimulates dut transaction objects oop rules say handle base type transaction also point object extended type badtr handle tr reference variables src dst crc data routine calccrc send badtr objects driver without changing driver calls trcalccrc systemverilog looks type object stored tr task declared virtual object type transaction systemverilog calls transaction calccrc type badtr systemverilog calls badtr calccrc", "url": "RV32ISPEC.pdf#segment275", "timestamp": "2023-10-18 14:48:27", "segment": "segment275", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": 
"8.2.6 Simple generator class ", "content": "generator testbench creates random transaction puts mailbox driver following example shows might create class learned far note avoids common testbench bug constructing transaction inside loop instead outside problem generator run task constructs transaction immediately randomizes means transaction uses whatever constraints turned default way change would edit transaction class goes verification guidelines worse yet generator uses transaction objects way use extended object badtr fix separate construction tr randomization shown see generator driver classes common structure enforce extensions transactor class virtual methods gencfg build run wrapup vmm extensive set base classes transactors data much", "url": "RV32ISPEC.pdf#segment276", "timestamp": "2023-10-18 14:48:27", "segment": "segment276", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.3 Factory Patterns ", "content": "useful oop technique factory pattern build factory make signs need know shape every possible sign advance need stamping machine change die cut different shapes likewise want build transactor generator know build every type transaction need able stamp new ones similar given transaction instead constructing immediately using object example 85 instead construct blueprint object cutting die modify constraints even replace extended object randomize blueprint random values want make copy object send copy downstream transactor beauty technique change blueprint object factory creates differenttype object using sign analogy change cutting die square triangle make yield signs blueprint hook allows change behavior generator class without change class code generator class using factory pattern important thing notice blueprint object constructed one place build task used another run task previous coding guidelines said separate declaration construction similarly need separate construction randomization blueprint object copy 
function discussed section 86 remember must add transaction badtr classes", "url": "RV32ISPEC.pdf#segment277", "timestamp": "2023-10-18 14:48:27", "segment": "segment277", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "Chapter 1 discussed the three phases of execution: Build, Run, and Wrap- ", "content": "example 87 shows environment class instantiates testbench components runs three phases", "url": "RV32ISPEC.pdf#segment278", "timestamp": "2023-10-18 14:48:27", "segment": "segment278", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.3.2 A simple testbench ", "content": "test contained toplevel program basic test lets environment run defaults", "url": "RV32ISPEC.pdf#segment279", "timestamp": "2023-10-18 14:48:27", "segment": "segment279", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.3.3 Using the extended Transaction class ", "content": "inject error need change blueprint object transaction object badtr build run phases environment toplevel testbench runs phase environment changes blueprint note references badtr one file change environment generator classes want restrict scope badtr used standalone begin end block used middle initial block", "url": "RV32ISPEC.pdf#segment280", "timestamp": "2023-10-18 14:48:27", "segment": "segment280", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.4 Type Casting and Virtual Methods ", "content": "start use inheritance extend functionality classes need oop techniques control objects functionality particular handle refer object certain class extended class happens base handle points extended object happens call method exists base extended classes section explains happens using several examples", "url": "RV32ISPEC.pdf#segment281", "timestamp": "2023-10-18 14:48:27", "segment": "segment281", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.4.1 Type casting with 
$cast ", "content": "type casting conversion refers changing entity one data type another outside oop convert integer real visa versa oop easiest way think type casting consider base class extended class assign extended handle base handle nothing special needed class extended variables routines inherited integer src exists extended object assignment second line permitted reference using base handle tr valid trsrc trdisplay try going opposite direction copying base object extended handle shown example 812 fails base object missing properties exist extended class badcrc systemverilog compiler static check handle types compile second line always illegal assign base handle extended handle allowed base handle actually points extended object cast routine checks type object handle copy address extended object base handle tr extended handle b2 use cast task systemverilog checks type source object runtime gives error compatible eliminate error using cast function checking result 0 incompatible types non0 compatible types", "url": "RV32ISPEC.pdf#segment282", "timestamp": "2023-10-18 14:48:27", "segment": "segment282", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.4.2 Virtual methods ", "content": "comfortable using handles extended classes happens try call routine using one handles use virtual methods systemverilog uses type object handle decide routine call final statement tr points extended object badtr badtr calccrc called left virtual modifier calccrc systemverilog would use type handle transaction object last statement would call transaction calccrc probably wanted oop term multiple routines sharing common name polymorphism solves problem similar computer architects faced trying make processor could address large address space small amount physical memory created concept virtual memory code data program could reside memory disk compile time program know parts resided taken care hardware plus operating system runtime virtual address 
could mapped ram chips swap file disk programmers longer needed worry virtual memory mapping wrote code knew processor would find code data runtime see also denning 2005", "url": "RV32ISPEC.pdf#segment283", "timestamp": "2023-10-18 14:48:27", "segment": "segment283", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.4.3 Signatures ", "content": "one downside using virtual routines define one extended classes define virtual routine must use signature ie number type arguments add remove argument extended virtual routine means need plan ahead", "url": "RV32ISPEC.pdf#segment284", "timestamp": "2023-10-18 14:48:27", "segment": "segment284", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.5 Composition, Inheritance, and Alternatives ", "content": "build testbench decide group related variables routines together classes chapter 4 learned build basic classes include one class inside another previously chapter saw basics inheritance section shows decide two styles also shows alternative", "url": "RV32ISPEC.pdf#segment285", "timestamp": "2023-10-18 14:48:27", "segment": "segment285", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.5.1 Deciding between composition and inheritance ", "content": "tie together two related classes composition uses has relationship packet header body inheritance uses isa relationship badtr transaction information following table quick guide detail 1 several small classes want combine larger class example may data class header class want make packet class systemverilog support multiple inheritance one class derives several classes instead use composition alternatively could extend one classes new class manually add information others 2 example 814 transaction badtr classes bus transactions created generator driven dut thus inheritance makes sense 3 lowerlevel information src dst data must always present driver send transaction 4 example 814 new 
BadTr class has the new field bad_crc and the extended calc_crc function. The generator class transmits the transaction and does not care about this additional information. If you had instead used composition to create the error bus transaction, the generator class would have to be rewritten to handle the new type. If two objects seem related by both 'is-a' and 'has-a', you may need to break them into smaller components.", "url": "RV32ISPEC.pdf#segment286", "timestamp": "2023-10-18 14:48:28", "segment": "segment286", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.5.2 Problems with composition ", "content": "The classical OOP approach to building a class hierarchy partitions functionality into small blocks that are easy to understand. But as discussed in Section 4.16 on public vs. private attributes, testbenches are not standard software development projects: concepts such as information hiding using private variables conflict with building a testbench that needs maximum visibility and controllability. Similarly, dividing a transaction into smaller pieces may cause more problems than it solves. When creating a class to represent a transaction, you may want to partition it to keep the code manageable. For example, you may have an Ethernet MAC frame that your testbench uses in two flavors: normal (II) and Virtual LAN (VLAN). Using composition, you could create a basic cell, EthMacFrame, with the common fields such as da and sa, plus a discriminant variable, kind, to indicate the type. A second class would hold the VLAN information and would be included in EthMacFrame. There are several problems with composition here. First, it adds an extra layer of hierarchy, so you constantly have to add an extra name to every reference: the VLAN information is referenced as eth_h.vlan_h.vlan, and if you start adding more layers, the hierarchical names become a real burden. A more subtle issue occurs when you want to instantiate and randomize these classes. How does the EthMacFrame constructor know whether to create a VLAN object? kind is random, so its value is not yet known when new is called. The class constraints set the variables in the EthMacFrame and VLAN objects based on the random kind field; however, randomization only works on objects that have been instantiated, and you have to instantiate the object before kind is chosen. The only solution to these construction and randomization problems is to always instantiate the VLAN object in the EthMacFrame new function, even when you are not always using it. The alternative is to not divide the Ethernet cell into two different classes.", "url": "RV32ISPEC.pdf#segment287", "timestamp": "2023-10-18 14:48:28", "segment": "segment287", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.5.3 Problems with inheritance ", "content": "Inheritance solves some of these issues. Variables in extended classes can be referenced without the extra hierarchy, as in eth_h.vlan, and you don't strictly need the discriminant, though you may find it easier to have one variable to test rather than doing type-checking. The downside is that a set of classes using inheritance always requires more effort to design, build, and debug than a set of classes without it. Your code must use $cast whenever there is an assignment from a base handle to an extended handle. Building a set of virtual routines can be challenging: if one prototype needs an extra argument, you need to go back and edit the entire set, and possibly every routine call. There are also problems with randomization: how do you make a constraint that randomly chooses between the two kinds of frame and sets the proper variables? You can't put a constraint in EthMacFrame that references the vlan field. The final issue is multiple inheritance. In Figure 8-7 you can see that the VLAN frame is derived from the normal MAC frame. The problem is that the different standards have reconverged: since SystemVerilog does not support multiple inheritance, you could not create the VLAN SNAP control frame using inheritance.", "url": "RV32ISPEC.pdf#segment288", "timestamp": "2023-10-18 14:48:28", "segment": "segment288", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.5.4 A real-world alternative ", "content": "Composition leads to large hierarchies, while inheritance requires extra code and planning to deal with the different classes, and both make construction and randomization difficult. Instead, you can make a single, flat class with all the variables and routines. This approach leads to a large class, but it handles the variants cleanly. Use the discriminant variable to tell which variables are valid, as shown in Example 8-18, which contains several conditional constraints that apply in different cases depending on the value of kind. Define the typical behavior with constraints in the class, and use inheritance to inject new behavior at the test level.", "url": "RV32ISPEC.pdf#segment289", "timestamp": "2023-10-18 14:48:28", "segment": "segment289", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.6 Copying an Object ", "content": "In Example 8-6, the generator first randomized and then copied the blueprint to make a new transaction. Let's take a closer look at the copy function.
When you extend the Transaction class to make the class BadTr, the copy function must still return a Transaction object: an extended virtual function must match the base Transaction copy, including the arguments and return type.", "url": "RV32ISPEC.pdf#segment290", "timestamp": "2023-10-18 14:48:28", "segment": "segment290", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.6.1 The copy_data routine ", "content": "One optimization is to break the copy function in two by creating a separate copy_data function, so that each class is responsible for copying its local data. This makes the copy function more robust and reusable. The function is simple in the base class, but in the extended class copy_data is a little more complicated, as it extends the original copy_data routine. The argument is a Transaction handle, which the routine can use when calling Transaction's copy_data; but to copy bad_crc it needs a BadTr handle, so it first casts the base handle to the extended type.", "url": "RV32ISPEC.pdf#segment291", "timestamp": "2023-10-18 14:48:28", "segment": "segment291", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.6.2 Specifying a destination for copy ", "content": "The existing copy routine always constructs a new object. An improvement is to let the caller of copy specify the location where the copy should be put. This technique is useful when you want to reuse an existing object rather than allocate a new one. The only differences are the additional argument to specify the destination, and code to test whether a destination object was passed in: if nothing is passed, the default is to construct a new object; else, use the existing one.", "url": "RV32ISPEC.pdf#segment292", "timestamp": "2023-10-18 14:48:28", "segment": "segment292", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.7 Callbacks ", "content": "One of the main guidelines of this book is to create a verification environment that you can use for many tests without changes. The key requirement is that the testbench must provide a hook so the test program can inject new code without modifying the original classes. In the driver you may want to do any of the following: inject errors, drop a transaction, delay a transaction, synchronize this transaction with others, put the transaction in the scoreboard, or gather functional coverage data. Rather than try to anticipate every possible error, delay, or disturbance in the flow of transactions, the driver just needs to call back a routine that is defined in the top-level test. The beauty of this technique is that the callback routine can be defined differently in every test; as a result, a test can add new functionality to the driver using callbacks without editing the Driver class. In Figure 8-8, the driver's run task loops forever, with a call to a transmit task to send each transaction. Before sending the transaction, run calls the pre-transmit callback; after sending it, run calls the post-transmit callback task.", "url": "RV32ISPEC.pdf#segment293", "timestamp": "2023-10-18 14:48:28", "segment": "segment293", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.7.1 Creating a callback ", "content": "A callback task is created in the top-level test and called from the driver, the lowest level of the environment. However, the driver has no knowledge of the test, so it uses a generic class that the test then extends. The driver uses a queue to hold the callback objects, which simplifies adding new objects.", "url": "RV32ISPEC.pdf#segment294", "timestamp": "2023-10-18 14:48:28", "segment": "segment294", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.7.2 Using a callback to inject disturbances ", "content": "A common use of a callback is to inject a disturbance such as an error or delay. The following testbench randomly drops packets using a callback object. Callbacks can also be used to send data to the scoreboard or to gather functional coverage values. Note that you can use push_back or push_front depending on the order in which you want the callbacks invoked. For example, you probably want the scoreboard called after any tasks that may delay, corrupt, or drop a transaction, so that you only gather coverage for transactions that were successfully transmitted.", "url": "RV32ISPEC.pdf#segment295", "timestamp": "2023-10-18 14:48:28", "segment": "segment295", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.7.3 Connecting to the scoreboard with a callback ", "content": "The following testbench creates an extension of the driver callback class and adds a reference to it in the driver's callback queue. Always use callbacks for scoreboards and functional coverage. In a monitor transactor, use a callback to compare received transactions with expected ones; a monitor callback is also the perfect place to gather functional coverage on transactions actually sent to the DUT. You may have thought of using a separate transactor for functional coverage, connected to the testbench with a mailbox. This is a poor solution: coverage and scoreboards are passive testbench components that wake up only when the testbench has data for them, and they never pass information to a downstream transactor. Additionally, you may need to sample data at several points in the testbench, whereas a transactor is designed around a single source. Instead, make the scoreboard and coverage interfaces passive, and give the rest of the testbench access to them through callbacks.", "url": "RV32ISPEC.pdf#segment296", "timestamp": "2023-10-18 14:48:28", "segment": "segment296", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.7.4 Using a callback to debug a transactor ", "content": "If a transactor with callbacks is not working as you expect, you can use an additional callback to debug it. Start by adding a callback to display the transaction. If there are multiple instances, display the hierarchical path as part of the message. You can move the debug callback around to locate the instance causing the problem. Even when debugging, you want to avoid making changes to the testbench environment.", "url": "RV32ISPEC.pdf#segment297", "timestamp": "2023-10-18 14:48:28", "segment": "segment297", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "8.8 Conclusion ", "content": "The software concept of inheritance, in which new functionality is added to an existing class, parallels the hardware practice of extending a design's features from one generation to the next while still maintaining backwards compatibility. For example, you can upgrade a PC by adding a larger-capacity disk: as long as it uses the same interface as the old one, you can replace part of the system, and yet the overall functionality is improved. Likewise, you can create a new test by upgrading the existing driver class to inject errors, or by using an existing callback in the driver, with no change to the testbench infrastructure. You do need to plan ahead if you want to use these OOP techniques. By using virtual routines and providing sufficient callback points, a test can modify the behavior of the testbench without changing its code. The result is a robust testbench. You do not need to anticipate every type of disturbance you may want — error injection, delays, synchronization — as long as you leave a hook so the test can inject its own behavior. Your tests become smaller and easier to write: the testbench does the hard work of sending stimulus and checking responses, while the test makes small tweaks to cause specialized behavior.", "url": "RV32ISPEC.pdf#segment298",
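The callback mechanism described in the sections above can be sketched as follows. This is a minimal illustration assuming the chapter's Transaction class; names such as Driver_cbs, pre_tx, and post_tx are illustrative stand-ins, and transmit is a stub.

```systemverilog
// Sketch: a generic callback base class plus a driver with a callback queue.
class Driver_cbs;                        // callback "hook" base class
  virtual task pre_tx(Transaction tr, ref bit drop);
  endtask                                // empty defaults; tests override these
  virtual task post_tx(Transaction tr);
  endtask
endclass

class Driver;
  Driver_cbs cbs[$];                     // queue of callback objects

  task transmit(Transaction tr);
    // drive tr onto the bus (details omitted in this sketch)
  endtask

  task run(Transaction tr);
    bit drop;
    forever begin
      // ... get the next transaction from the generator into tr ...
      drop = 0;
      foreach (cbs[i]) cbs[i].pre_tx(tr, drop);  // hook before sending
      if (!drop) transmit(tr);                   // a callback may drop it
      foreach (cbs[i]) cbs[i].post_tx(tr);       // hook after sending
    end
  endtask
endclass

// In a test: extend the callback class and push it onto the driver's queue.
class Driver_cbs_drop extends Driver_cbs;
  virtual task pre_tx(Transaction tr, ref bit drop);
    drop = ($urandom_range(99) == 0);    // randomly drop 1 packet in 100
  endtask
endclass
```

Because the driver only knows the Driver_cbs base type, every test can inject different behavior by pushing its own extension onto cbs, without editing the Driver class.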
"timestamp": "2023-10-18 14:48:28", "segment": "segment298", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "Chapter 9 ", "content": "functional coverage", "url": "RV32ISPEC.pdf#segment299", "timestamp": "2023-10-18 14:48:28", "segment": "segment299", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.1 Introduction ", "content": "designs become complex effective way verify thoroughly constrainedrandom testing crt approach elevates tedium writing individual directed tests one feature design however testbench taking random walk space design states know reached destina tion whether using random directed stimulus gauge progress using coverage functional coverage measure design features exer cised tests start design specification create verification plan detailed list test example design connects bus tests need exercise possible interactions design bus including relevant design states delays error modes verification plan map show go information creating verification plan see bergeron 2006 use feedback loop analyze coverage results decide actions take order converge 100 coverage first choice run existing tests seeds second build new constraints resort creating directed tests absolutely necessary back exclusively wrote directed tests verification planning limited design specification listed 100 features write 100 tests coverage implicit tests register move test moved combinations registers back forth measuring progress easy completed 50 tests halfway done chapter uses explicit implicit describe coverage specified explicit coverage described directly test environment using systemverilog features implicit coverage implied test register move directed test passes hopefully covered register transactions crt freed hand crafting every line input stimulus need write code tracks effectiveness test respect verification plan still productive working higher level abstraction moved tweaking indi vidual bits describing 
interesting design states reaching 100 functional coverage forces think want observe direct design states", "url": "RV32ISPEC.pdf#segment300", "timestamp": "2023-10-18 14:48:28", "segment": "segment300", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.1.1 Gathering coverage data ", "content": "run random testbench simply chang ing random seed generate new stimulus individual simulation generates database functional coverage information trail footprints random walk merge information together measure overall progress using functional coverage analyze coverage data decide modify tests coverage levels steadily growing may need run existing tests new random seeds even run longer tests coverage growth started slow add additional constraints generate interesting stimuli reach plateau parts design exercised need create tests lastly functional coverage values near 100 check bug rate bugs still found may measuring true coverage areas design simulation vendor format storing coverage data well analysis tools need perform following actions tools run test multiple seeds given set constraints cov erage groups compile testbench design single execute able need run constraint set different random seeds use unix system clock seed careful batch system may start multiple jobs simulta neously jobs may run different servers may start single server multiple processors check passfail functional coverage information valid successful simulation simulation fails design bug coverage information must discarded cover age data measures many items verification plan com plete plan based design specification design match specification coverage data useless verification teams periodically measure functional coverage scratch reflects current state design analyze coverage across multiple runs need measure successful constraint set time yet getting 100 coverage areas targeted constraints amount still growing run seeds coverage level plateaued recent progress time 
modify the constraints. If you think reaching the last few test cases for one particular section would take too long in constrained-random simulation, consider writing a directed test for them, and even then continue to use random stimulus for the other sections of the design, in case this background noise finds a bug.", "url": "RV32ISPEC.pdf#segment301", "timestamp": "2023-10-18 14:48:29", "segment": "segment301", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.2 Coverage Types ", "content": "Coverage is a generic term for measuring progress toward completing design verification. Your simulations slowly paint the canvas of the design as you try to cover all the legal combinations. Coverage tools gather information during a simulation and then post-process it to produce a coverage report. Use this report to look for coverage holes, and then modify existing tests or create new ones to fill the holes. This iterative process continues until you are satisfied with the coverage level.", "url": "RV32ISPEC.pdf#segment302", "timestamp": "2023-10-18 14:48:29", "segment": "segment302", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.2.1 Code coverage ", "content": "The easiest way to measure verification progress is with code coverage: measuring how many lines of code have been executed (line coverage), which paths through the code and expressions have been executed (path coverage), which single-bit variables have had both values 0 and 1 (toggle coverage), and which states and transitions of a state machine have been visited (FSM coverage). You don't have to write any extra HDL code; the tool instruments the design automatically by analyzing the source code and adding hidden code to gather statistics. You then run all your tests, and the code coverage tool creates a database. Many simulators include a code coverage tool, and a post-processing tool converts the database into a readable form. The end result is a measure of how much your tests exercise the design code. Note that you are primarily concerned with analyzing the design code, not the testbench: untested design code could conceal a hardware bug, or it may just be redundant code. Code coverage measures how thoroughly your tests exercised the implementation of the design, not the specification or the verification plan. Just because your tests reached 100% code coverage, your job is not done. What if you made a mistake that the tests didn't catch? Worse yet, what if the implementation is missing a feature? In the following module, a D flip-flop, can you see the mistake? The reset logic was accidentally left out. A code coverage tool would report that every line was exercised, yet the model was not implemented correctly.", "url": "RV32ISPEC.pdf#segment303", "timestamp": "2023-10-18 14:48:29", "segment": "segment303", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.2.2 Functional coverage ", "content": "The goal of verification is to ensure that a design behaves correctly in its real environment: an MP3 player, network router, or cell phone. The design specification details how the device should operate, and the verification plan lists how that functionality is to be stimulated, verified, and measured. When you gather measurements on which functions were covered, you are performing design coverage. For example, the verification plan for a D flip-flop would mention not only its data storage but also how it resets to a known state. Until a test checks both of these design features, you do not have 100% functional coverage. Functional coverage is tied to the design intent and is sometimes called 'specification coverage', while code coverage measures the design implementation. Consider what happens when a block of code is missing from the design: code coverage cannot catch this mistake, but functional coverage can.", "url": "RV32ISPEC.pdf#segment304", "timestamp": "2023-10-18 14:48:29", "segment": "segment304", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.2.3 Bug rate ", "content": "An indirect way to measure coverage is to look at the rate at which fresh bugs are found. Keep track of how many bugs are found each week over the life of the project. At the start, you may find many bugs through inspection as you create the testbench and read the design spec, where you may find inconsistencies that are hopefully fixed before the RTL is written. Once the testbench is up and running, a torrent of bugs is found as you check each module in the system. The bug rate drops, hopefully to zero, as the design nears tape-out. However, even when the bug rate approaches zero, you are not yet done: every time the rate sags, it is time to find different ways to create corner cases. The bug rate can vary per week based on many factors: project phases, recent design changes, which blocks are being integrated, personnel changes, and even vacation schedules. Unexpected changes in the rate could signal a potential problem. As shown in Figure 9-3, it is not uncommon to keep finding bugs even after tape-out, and even after the design ships to customers.", "url": "RV32ISPEC.pdf#segment305", "timestamp": "2023-10-18 14:48:29", "segment": "segment305", "image_urls": [], "Book":
"book_systemverilog_for_verification" }, { "section": "9.2.4 Assertion Coverage ", "content": "assertions pieces declarative code check relationships design signals either period time simulated along design testbench proven formal tools sometimes write equivalent check using systemverilog proce dural code many assertions easily expressed using systemverilog assertions sva assertions local variables perform simple data checking need check complex protocol determining whether packet successfully went router procedural code often better suited job large overlap sequences coded procedurally using sva see vijayaraghavan 2005 cohen 2005 chapters 3 7 vmm book bergeron et al 2006 informa tion sva familiar assertions look errors two signals mutually exclusive request never followed grant error checks stop simulation soon detect problem assertions also check arbitration algorithms fifos hardware coded assert property statement assertions might look interesting signal values design states successful bus transaction coded cover property statement measure often assertions trig gered test using assertion coverage cover property observes sequences signals cover group described samples data val ues transactions simulation two constructs overlap cover group trigger sequence completes additionally sequence collect information used cover group", "url": "RV32ISPEC.pdf#segment306", "timestamp": "2023-10-18 14:48:29", "segment": "segment306", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.3 Functional Coverage Strategies ", "content": "write first line test code need anticipate key design features corner cases possible failure modes write verification plan think terms data values instead think information encoded design plan spell significant design states", "url": "RV32ISPEC.pdf#segment307", "timestamp": "2023-10-18 14:48:29", "segment": "segment307", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.3.1 Gather 
information, not data ", "content": "classic example fifo sure thoroughly tested 1k fifo memory could measure values read write indices million possible combinations even able simulate many cycles would want read cover age report abstract level fifo hold 0 n1 possible values compare read write indices measure full empty fifo would still 1k coverage values test bench pushed 100 entries fifo pushed 100 really need know fifo ever 150 values long successfully read values corner cases fifo full empty make fifo go empty state reset full back empty covered levels interesting states involve indices pass 1 0 coverage report cases easy understand may noticed interesting states independent fifo size look information data values design signals large range dozen possible values broken smaller ranges plus corner cases example dut may 32bit address bus certainly need col lect 4 billion samples check natural divisions memory io space counter pick interesting values always try rollover counter values 1 back 0", "url": "RV32ISPEC.pdf#segment308", "timestamp": "2023-10-18 14:48:29", "segment": "segment308", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.3.2 Only measure what you are going to use ", "content": "gathering functional coverage data expensive measure analyze use improve tests simulations may run slower simulator monitors signals functional coverage approach lower overhead gathering waveform traces measuring code coverage simulation completes database saved disk multiple testcases multiple seeds fill disk drives func tional coverage information never look final coverage reports perform initial measurements several ways control cover data compilation instantiation triggering could use switches provided simulation vendor con ditional compilation suppression gathering coverage data last less desirable postprocessing report filled sections 0 coverage making harder find enabled ones", "url": "RV32ISPEC.pdf#segment309", "timestamp": 
"2023-10-18 14:48:29", "segment": "segment309", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.3.3 Measuring completeness ", "content": "like kids backseat family vacation manager con stantly asks yet tell fully tested design need look coverage measurements consider bug rate see reached destination start project code functional coverage low develop tests run different random seeds longer see increasing values functional coverage create additional constraints tests explore new areas save testseed combinations give high coverage use regression testing functional coverage high code coverage low tests exercising full design need revise verifica tion plan add functional coverage points locate untested functionality difficult situation high code coverage low functional cover age even though testbench giving design good workout unable put interesting states first see design implements specified functionality functionality tests reach might need formal verification tool extract design states create appropriate stimulus goal high code functional coverage plan vacation yet trend bug rate significant bugs still pop ping worse yet found deliberately testbench happen stumble across particular combination states one anticipated hand low bug rate may mean existing strategies run steam look different approaches try different approaches new combinations design blocks error generators", "url": "RV32ISPEC.pdf#segment310", "timestamp": "2023-10-18 14:48:29", "segment": "segment310", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.4 Simple Functional Coverage Example ", "content": "measure functional coverage begin verification plan write executable version simulation systemverilog test code coverage code coverage bench sample values variables expressions sample locations known cover points multiple cover points sampled time transaction completes place together cover group following design transaction comes eight 
flavors, and the testbench generates the port variable randomly. The verification plan requires that every value be tried. Example 9.2 creates a random transaction and drives it onto the interface; the testbench samples the value of the port field using the CovPort cover group, which has eight possible values. After 32 random transactions, the part of the coverage report from VCS shows that the testbench generated the values 1 through 7 but never generated port 0. The At Least column specifies how many hits are needed before a bin is considered covered; see the at_least option in Section 9.9.3. To improve functional coverage, the easiest strategy is to run the simulation for more cycles or to try new random seeds. Look at the coverage report. If items have only one or two hits, chances are you just need to make the simulation run longer or try new seed values. If a cover point has zero or one hit, you should probably try a new strategy, because the testbench is not creating the proper stimulus. In this example, the next random transaction, number 33, had a port value of 0, giving 100% coverage.", "url": "RV32ISPEC.pdf#segment311", "timestamp": "2023-10-18 14:48:29", "segment": "segment311", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.5 Anatomy of a Cover Group ", "content": "A cover group is similar to a class: you define it once and instantiate it one or more times. It contains cover points, options, formal arguments, and an optional trigger. A cover group encompasses one or more data points, all sampled at the same time. Create clear cover group names that explicitly indicate what you are measuring and, if possible, reference the verification plan. A name such as Parity_Errors_In_Hexaword_Cache_Fills may seem verbose, but when you are trying to read a coverage report with dozens of cover groups, you will appreciate the extra detail. You can also use the comment option for additional descriptive information, as shown in Section 9.9.1. A cover group can be defined at the class, program, or module level, and can sample any visible variable: program or module variables, signals from an interface, or any signal in the design, using a hierarchical reference. A cover group inside a class can sample variables in that class as well as data values from embedded classes. Do not define a cover group for every data field in a transaction class, as that causes additional overhead when gathering coverage data. Imagine trying to track how many beers are consumed by the patrons of a pub. Would you try to follow every bottle as it flowed from the loading dock to the bar to each person? Instead, you could have each patron check in with the type and number of beers 
consumed, as shown in van der Schoot (2006). In SystemVerilog, define cover groups at an appropriate level of abstraction: at the boundary between the testbench and the design, at the transaction level as data is read and written by transactors, or in the environment configuration class, wherever they are needed. Sampling of a transaction must wait until it has actually been received by the DUT. If you inject an error in the middle of a transaction, causing an aborted transmission, you need to change how you treat it for functional coverage, perhaps using a different cover point created for error handling. A class can contain multiple cover groups. This approach allows you to have separate groups that can be enabled and disabled as needed; additionally, each group may have its own separate trigger, allowing you to gather data from many sources. A cover group must be instantiated for it to collect data. If you forget, no error message about null handles is printed at runtime, but the coverage report will not contain any mention of the cover group. This rule applies to cover groups defined either inside or outside of classes.", "url": "RV32ISPEC.pdf#segment312", "timestamp": "2023-10-18 14:48:30", "segment": "segment312", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.5.1 Defining a cover group in a class ", "content": "A cover group can be defined in a program, module, or class. In all cases, you must explicitly instantiate it to start sampling. If the cover group is defined in a class, you do not make a separate instance name; instead, use the original cover group name. Example 9.5 is similar to the first example in this chapter, except that it embeds the cover group in a transactor class and thus needs no separate instance name.", "url": "RV32ISPEC.pdf#segment313", "timestamp": "2023-10-18 14:48:30", "segment": "segment313", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.6 Triggering a Cover Group ", "content": "The two major parts of functional coverage are the sampled data values and the time at which they are sampled. When new values are ready, for example when a transaction has completed, the testbench triggers the cover group. This can be done directly with the sample task, as shown in Example 9.5, or by using a blocking expression in the covergroup definition. The blocking expression can use a wait or @ to block on signals or events. Use the sample method if you want to explicitly trigger coverage from procedural code, when there is no existing signal or event that tells when to sample, or when there are multiple instances of a cover group that trigger separately. Use a blocking statement in the covergroup 
declaration if you want to tap existing events or signals to trigger the coverage.", "url": "RV32ISPEC.pdf#segment314", "timestamp": "2023-10-18 14:48:30", "segment": "segment314", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.6.1 Sampling using a callback ", "content": "One of the better ways to integrate functional coverage into your testbench is to use callbacks, originally shown in Section 8.7. This technique allows you to build a flexible testbench without restricting where coverage is collected. You can decide, for every point in the verification plan, where the data should be sampled. If you need an extra hook in the environment, a callback can always be added in an unobtrusive manner, and it only fires if a test registers a callback object. You can create many separate callbacks and cover groups with little overhead. As explained in Section 8.7.3, callbacks are superior to using a mailbox to connect the testbench to coverage objects. You might need multiple mailboxes to collect transactions from different points in the testbench, every mailbox requires a transactor to receive transactions, and multiple mailboxes would cause you to juggle multiple threads. Instead of an active transactor, use a passive callback. Example 8.25 shows a driver class with two callback points, before and after a transaction is transmitted, and Example 8.24 shows the base callback class. In Example 8.26, a test uses an extended callback class to send data to a scoreboard. Here, make the extension Driver_cbs_coverage of the base callback class Driver_cbs call the sample task of the cover group in post_tx, then push an instance of the coverage callback class into the driver callback queue, and your coverage code will trigger the cover group at the right time. The following two examples define and use the callback Driver_cbs_coverage.", "url": "RV32ISPEC.pdf#segment315", "timestamp": "2023-10-18 14:48:30", "segment": "segment315", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.6.2 Cover group with an event trigger ", "content": "In Example 9.8, the cover group CovPort is sampled when the testbench triggers the trans_ready event. The advantage of using an event over calling the sample routine directly is that you may be able to use an existing event, such as one triggered by an assertion, as shown in Example 9.10.", "url": "RV32ISPEC.pdf#segment316", "timestamp": "2023-10-18 14:48:30", "segment": "segment316", "image_urls": [], 
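The event-triggered style described in Section 9.6.2 can be sketched as follows. This is a minimal sketch, not the book's numbered Example 9.8; the trans_ready event name comes from the text, while the 3-bit port width is an assumption carried over from the earlier port example.

```systemverilog
// Sketch: cover group sampled by an event trigger (assumed declarations).
event trans_ready;          // fired by the testbench when a transaction completes
bit [2:0] port;             // sampled variable, eight possible values

covergroup CovPort @(trans_ready);  // sampled each time trans_ready fires
  coverpoint port;                  // automatic bins, one per value
endgroup

CovPort cp = new();  // instantiation is required, or no data is collected

// Procedural code elsewhere triggers sampling with:  -> trans_ready;
```

Because the trigger is part of the covergroup declaration, no explicit call to the sample method is needed.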
"Book": "book_systemverilog_for_verification" }, { "section": "9.6.3 Triggering on a SystemVerilog Assertion ", "content": "As you have already seen, SVA can look for useful events such as the completion of a transaction. Add an event trigger to an assertion and it can wake up a cover group.", "url": "RV32ISPEC.pdf#segment317", "timestamp": "2023-10-18 14:48:30", "segment": "segment317", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7 Data Sampling ", "content": "How is coverage information gathered? When you specify a variable or expression in a cover point, SystemVerilog creates a number of bins to record how many times each value has been seen. These bins are the basic units of measurement for functional coverage. If you sample a one-bit variable, a maximum of two bins are created. Imagine that SystemVerilog drops a token into one bin or the other every time the cover group is triggered. At the end of the simulation, a database is created holding all the bins and their tokens. You then run an analysis tool that reads the databases and generates a report with the coverage for each part of the design, plus the total coverage.", "url": "RV32ISPEC.pdf#segment318", "timestamp": "2023-10-18 14:48:30", "segment": "segment318", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.1 Individual bins and total coverage ", "content": "To calculate the coverage for a point, first determine the total number of possible values, also known as the domain. There may be one value per bin or multiple values. Coverage is the number of sampled values divided by the number of bins in the domain. For a cover point on a 3-bit variable, the domain is 0-7 and it is normally divided into eight bins. If, during simulation, values belonging to seven of those bins are sampled, the report shows 7/8, or 87.5% coverage for that point. All the points are combined to show the coverage for the entire group, and all the groups are combined to give a coverage percentage across the simulation databases. This is the status of a single simulation; you also need to track coverage over time and look at the trends, so that you can see, as you run more simulations and add new constraints and tests, how to better predict when verification of the design will be completed.", "url": "RV32ISPEC.pdf#segment319", "timestamp": "2023-10-18 14:48:30", "segment": "segment319", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.2 Creating bins automatically ", "content": "As you saw in the report in Example 9.3, SystemVerilog automatically creates bins for cover points. It looks at the 
domain of the sampled expression to determine the range of possible values. For an expression that is N bits wide, there are 2^N possible values; for the 3-bit port example, there are eight. For enumerated data types, the domain is the number of named values, as shown in Section 9.7.8. You can also explicitly define bins, as shown in Section 9.7.5.", "url": "RV32ISPEC.pdf#segment320", "timestamp": "2023-10-18 14:48:30", "segment": "segment320", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.3 Limiting the number of automatic bins created ", "content": "The cover group option auto_bin_max specifies the maximum number of bins to create automatically; the default is 64 bins. If the domain of values of the cover point variable or expression is greater than this option, SystemVerilog divides the range evenly into auto_bin_max bins. For example, a 16-bit variable has 65,536 possible values, so each of the 64 bins covers 1,024 values. In reality, you may find this approach impractical, as it is difficult to find the needle of missing coverage in a haystack of auto-generated bins. Lower the limit to 8 or 16 bins, or, better yet, explicitly define the bins as shown in Section 9.7.5. The following code takes the first example of this chapter and adds a cover point option that sets auto_bin_max to two bins. The sampled variable is still port, which is three bits wide, so its domain of eight possible values is split: the first bin holds the lower half of the range, 0-3, and the other holds the upper values, 4-7. The coverage report from VCS shows the two bins. The simulation achieved 100% coverage because all eight port values were generated and mapped onto the two bins, and both bins got samples. Example 9.11 used the auto_bin_max option for a single cover point; you can also use the option for the entire group.", "url": "RV32ISPEC.pdf#segment321", "timestamp": "2023-10-18 14:48:30", "segment": "segment321", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.4 Sampling expressions ", "content": "When you sample expressions, always check the coverage report to be sure you are getting the values you expect; you may have to adjust the width of a computed expression, as shown in Section 2.15. For example, sampling a 3-bit header length of 0-7 plus a 4-bit payload length of 0-15 creates only 2^4 = 16 bins, which may not be enough, as the transactions can actually be 0-23 bytes long. In Example 9.14, the cover group samples the total transaction length. One cover point has a label to make the coverage 
report easier to read. The second cover point samples an expression with an additional dummy constant so that the transaction length is computed with 5-bit precision, giving a maximum of 32 auto-generated bins. A quick run of 200 transactions showed that len16 had 100% coverage across 16 bins, while the cover point len32 had 68% coverage across 32 bins. Neither cover point is correct, since the maximum length domain is 0-23; auto-generated bins only work out evenly when the maximum length is a power of 2.", "url": "RV32ISPEC.pdf#segment322", "timestamp": "2023-10-18 14:48:30", "segment": "segment322", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.5 User-defined bins find a bug ", "content": "Automatically generated bins are okay for anonymous data values such as counter values and addresses, and for values that are powers of 2. For other values, explicitly name the bins to improve accuracy and ease the analysis of the coverage report. SystemVerilog automatically creates bin names for enumerated types, but for other variables you need to give names to the interesting states. After sampling 2,000 random transactions, the group had only 95.83% coverage. A quick look at the report shows the problem: a length of 23 (17 hex) was never seen. But the longest header is 7 and the longest payload is 15, for a total of 22, not 23. Change the bins declaration to use 0-22 and coverage jumps to 100%. The user-defined bins found a bug in the test.", "url": "RV32ISPEC.pdf#segment323", "timestamp": "2023-10-18 14:48:30", "segment": "segment323", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.6 Naming the cover point bins ", "content": "Example 9.17 samples a 4-bit variable kind with 16 possible values. The first bin, called zero, counts the number of times kind 0 is sampled. The next three values, 1-3, are grouped into the single bin lo. The upper eight values, 8-15, are kept in the separate bins hi8, hi9, hia, hib, hic, hid, hie, and hif; note how the hi bin expression uses the shorthand $ notation for the largest value of the sampled variable. Lastly, misc holds all values not previously chosen, namely 4-7. Note that the additional information for the coverpoint is grouped using curly braces, because a bin specification is declarative code; procedural code would be grouped with begin and end. Also note that the final curly brace is followed by a semicolon. The end result is that you can easily see whether a bin such as hi8 got hits. When you define bins, you are restricting the values used for coverage to those you find interesting; SystemVerilog no longer 
automatically creates bins, and it ignores values that do not fall into a predefined bin. More importantly, only the bins you create are used to calculate functional coverage: you get 100% coverage as long as every specified bin is hit, and values that fall outside the specified bins are ignored. This rule is useful when the sampled value, such as a transaction length, is not a power of 2. In general, when specifying bins, always use the default bin specifier to catch any values you may have forgotten. Use the iff keyword to add a condition to a cover point. The most common reason is to turn off coverage during reset so that stray triggers are ignored; Example 9.19 gathers values of port only when reset is 0 (reset is active-high). Alternately, use the start and stop functions to control individual instances of cover groups.", "url": "RV32ISPEC.pdf#segment324", "timestamp": "2023-10-18 14:48:30", "segment": "segment324", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.8 Creating bins for enumerated types ", "content": "For enumerated types, SystemVerilog creates one bin for each possible value; part of a coverage report from VCS shows the bins for an enumerated type. If you want to group multiple values into a single bin, define the bins yourself. Values outside the enumerated values are ignored unless you define a bin with the default specifier. When you gather coverage on enumerated types, auto_bin_max does not apply.", "url": "RV32ISPEC.pdf#segment325", "timestamp": "2023-10-18 14:48:30", "segment": "segment325", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.9 Transition coverage ", "content": "You can specify state transitions for a cover point. This way you can tell not only which interesting values were seen, but also which sequences, for example whether port ever went from 0 to 1, 2, or 3. You can quickly specify multiple transitions using ranges: the expression (1,2 => 3,4) creates four transitions, 1=>3, 1=>4, 2=>3, and 2=>4. You can specify transitions of any length. Note that you have to sample once for each state, so the transition 0 => 1 => 2 is different from 0 => 1 => 1 => 2. If you need to repeat values, as in the last sequence, you can use a shorthand form such as 0 => 1[*2] => 2. To repeat the value 1 for 3, 4, or 5 times, use 1[*3:5].", "url": "RV32ISPEC.pdf#segment326", "timestamp": "2023-10-18 14:48:30", "segment": "segment326", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.10 Wildcard states and transitions ", "content": "Use the wildcard keyword to 
create multiple states and transitions. Any X or Z in the expression is treated as a wildcard for 0 or 1. The following creates a cover point with one bin for even values and one for odd.", "url": "RV32ISPEC.pdf#segment327", "timestamp": "2023-10-18 14:48:30", "segment": "segment327", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.11 Ignoring values ", "content": "Some cover points never get all possible values. For instance, a 3-bit variable may be used to store only the six values 0-5; if you use automatic bin creation, you never get beyond 75% coverage. There are two ways to solve this problem. You can explicitly define the bins you want to cover, as shown in Section 9.7.5. Alternatively, let SystemVerilog automatically create bins and then use ignore_bins to tell which values to exclude from the functional coverage calculation. The original range of low_ports_0_5, a three-bit variable, is 0-7; the ignore_bins excludes the last two bins, which reduces the range to 0-5. The total coverage of a group is the number of bins that got samples divided by the total number of bins. Whether you define bins explicitly or use the auto_bin_max option, ignored bins do not contribute to the calculation of coverage. In Example 9.26, four bins are initially created using auto_bin_max: 0-1, 2-3, 4-5, and 6-7. The uppermost bin is eliminated by ignore_bins, so in the end three bins are created, and the cover point coverage can be 0%, 33%, 66%, or 100%.", "url": "RV32ISPEC.pdf#segment328", "timestamp": "2023-10-18 14:48:30", "segment": "segment328", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.12 Illegal bins ", "content": "Some sampled values should not merely be ignored; they should also cause an error if they are seen. This checking is best done in the testbench monitor code, but it can also be done by labeling a bin with illegal_bins. Use illegal_bins to catch states that were missed by the test error checking. It also double-checks the accuracy of your bin creation: if an illegal value is found by the cover group, the problem is either in your testbench or in the bin definitions.", "url": "RV32ISPEC.pdf#segment329", "timestamp": "2023-10-18 14:48:30", "segment": "segment329", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.7.13 State machine coverage ", "content": "Have you noticed that a cover group can be used on a state machine? Use bins to list the specific states, and transitions for the arcs. But does that mean you should use SystemVerilog functional coverage to measure state 
machine coverage? You would have to extract the states and arcs manually, and even if you do this correctly the first time, you might miss future changes to the design code. Instead, use a code coverage tool that extracts the state register, states, and arcs automatically, saving you from possible mistakes. However, such an automatic tool extracts this information from the design exactly as it was coded, mistakes and all, so you may still want to monitor small, critical state machines manually using functional coverage.", "url": "RV32ISPEC.pdf#segment330", "timestamp": "2023-10-18 14:48:31", "segment": "segment330", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.8 Cross Coverage ", "content": "A cover point records the observed values of a single variable or expression. You may want to know not only which bus transactions occurred, but also which errors happened during those transactions, or the source and destination of each transaction. For this you need cross coverage, which measures the values seen at two or more cover points at the same time. Note that when you measure cross coverage of a variable with N values and another with M values, SystemVerilog needs NxM cross bins to store all the combinations.", "url": "RV32ISPEC.pdf#segment331", "timestamp": "2023-10-18 14:48:31", "segment": "segment331", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.8.1 Basic cross coverage example ", "content": "The previous examples measured coverage of the transaction kind and the port number. The two can be combined, so that you try every kind of transaction on every port. The cross construct in SystemVerilog records the combined values of two or more cover points in a group. The cross statement takes only cover points or simple variable names. If you want to use expressions, hierarchical names, or variables in an object such as handle.variable, you must first specify the expression in a coverpoint with a label, then use the label in the cross statement. Example 9.28 creates cover points for tr.kind and tr.port; the two points are then crossed to show all the combinations. SystemVerilog creates a total of 128 (8 x 16) bins. Even a simple cross can result in a large number of bins. A random testbench created 200 transactions and produced the coverage report of Example 9.29. Note that even though all possible kind and port values were generated, only 18 cross combinations were seen.", "url": "RV32ISPEC.pdf#segment332", "timestamp": "2023-10-18 14:48:31", "segment": "segment332", "image_urls": [], 
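The kind-by-port cross described in Section 9.8.1 can be sketched as follows. This is a minimal sketch under assumed declarations, not the book's Example 9.28 verbatim; the variable names and widths follow the text (a 4-bit kind and a 3-bit port), and the coverpoint labels are hypothetical.

```systemverilog
// Sketch: basic cross coverage of two labeled cover points.
bit [3:0] kind;  // 16 possible values
bit [2:0] port;  // 8 possible values

covergroup CovKindPort;
  kind_cp : coverpoint kind;         // labeled so the cross can refer to it
  port_cp : coverpoint port;
  kp      : cross kind_cp, port_cp;  // 16 x 8 = 128 cross bins
endgroup

CovKindPort ckp = new();
// Call ckp.sample() each time a transaction completes.
```

Labeling the cover points is what lets the cross statement name them; without labels, only simple variable names can be crossed.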
"Book": "book_systemverilog_for_verification" }, { "section": "9.8.2 Labeling cross coverage bins ", "content": "If you want more readable cross coverage bin names, label the individual cover point bins, and SystemVerilog will use those names when creating the cross bins. When you define bins that contain multiple values, the coverage statistics change. In this report, the number of bins dropped from 128 to 88 (kind now has 11 bins: zero, lo, hi8, hi9, hia, hib, hic, hid, hie, hif, and misc), and the percentage of coverage jumped from 87.5% to 90.91%, because a single value in the lo bin, such as 2, allows that bin to be marked as covered even if other values such as 0 and 3 were never seen.", "url": "RV32ISPEC.pdf#segment333", "timestamp": "2023-10-18 14:48:31", "segment": "segment333", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.8.3 Excluding cross coverage bins ", "content": "To reduce the number of bins, use ignore_bins with cross coverage. Specify the cover point with binsof and the set of values with intersect, so that a single ignore_bins construct can sweep away many individual bins. The first ignore_bins excludes the bins where port is 7, for any value of kind; since kind is a 4-bit value, this statement excludes 16 bins. The second ignore_bins is more selective, ignoring the bins where port is 0 and kind is 9, 10, or 11, for a total of 3 bins. An ignore_bins can use the bins defined in the individual cover points: the ignore_bins with lo uses the bin name to exclude kind.lo, the values 1, 2, and 3. These must be names defined at compile time, such as the zero and lo bins. The bins hi8 through hif are automatically generated, and their names cannot be used at compile time in ignore_bins statements, since those names are created at run time during report generation. Note that binsof uses parentheses, while intersect specifies a range and therefore uses curly braces.", "url": "RV32ISPEC.pdf#segment334", "timestamp": "2023-10-18 14:48:31", "segment": "segment334", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.8.4 Excluding cover points from the total coverage metric ", "content": "The total coverage for a group is based on all the simple cover points and the cross coverage. If you sample a variable or expression in a coverpoint only so that it can be used in a cross statement, set its weight to 0 so that it does not contribute to the total coverage.", "url": "RV32ISPEC.pdf#segment335", "timestamp": "2023-10-18 14:48:31", "segment": "segment335", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": 
"9.8.5 Merging data from multiple domains ", "content": "One problem with cross coverage is that you may need to sample values from different timing domains. You might want to know if the processor ever received an interrupt in the middle of a cache fill. The interrupt hardware is probably distinct from the cache hardware and may use different clocks, but a previous design had a bug of this sort, so you want to make sure it has been tested. The solution is to create a timing domain separate from both the cache and the interrupt hardware. Make copies of the signals in temporary variables, then sample them in a new coverage group that measures the cross coverage.", "url": "RV32ISPEC.pdf#segment336", "timestamp": "2023-10-18 14:48:31", "segment": "segment336", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.8.6 Cross coverage alternatives ", "content": "If your cross coverage definition becomes too elaborate, you may spend considerable time specifying which bins should be used and which should be ignored. You may have two random bits, a and b, with three interesting states: a==0 with b==0, a==1 with b==0, and b==1. Example 9.34 shows how you can name the bins of the a and b cover points and then gather cross coverage using those bins. Alternatively, you can make a cover point that samples the concatenation of the values, and then define bins using the less complex cover point syntax. Use the style of Example 9.34 if you already have bins defined for the individual cover points and want to use them to build the cross coverage bins. Use Example 9.35 if you need to build cross coverage bins without predefined cover point bins. Use Example 9.36 if you want the tersest format.", "url": "RV32ISPEC.pdf#segment337", "timestamp": "2023-10-18 14:48:31", "segment": "segment337", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.9 Coverage Options ", "content": "You can specify additional information in a cover group using options. Options placed at the cover group level apply to all the cover points in the group; put an option inside a single cover point for finer control. You have already seen the auto_bin_max option; here are several more.", "url": "RV32ISPEC.pdf#segment338", "timestamp": "2023-10-18 14:48:31", "segment": "segment338", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.9.1 Cover group comment ", "content": "You can add a comment to the coverage reports to make them easier to analyze. A comment could be as simple as the section number of the 
verification plan, or tags to be used by a report parser to automatically extract the relevant information from the sea of data.", "url": "RV32ISPEC.pdf#segment339", "timestamp": "2023-10-18 14:48:31", "segment": "segment339", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.9.2 Per-instance coverage ", "content": "If your testbench instantiates a coverage group multiple times, by default SystemVerilog groups together the coverage data from all the instances. But you may have several generators creating different streams of transactions and want to see a separate report for each one; for example, one generator may be creating long transactions while another makes short ones. The following cover group is instantiated in each separate generator and keeps track of coverage per instance.", "url": "RV32ISPEC.pdf#segment340", "timestamp": "2023-10-18 14:48:31", "segment": "segment340", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.9.3 Coverage threshold using at_least ", "content": "You may not have sufficient visibility into the design to gather robust coverage information. Suppose you are verifying whether a DMA state machine can handle bus errors but you cannot access its current state. If you know the range of cycles needed for a transfer, you can repeatedly cause errors across that range and be fairly confident that you have exercised every combination. You could set option.at_least to 8 or more to specify the number of hits before a bin is considered covered. If you define option.at_least at the cover group level, it applies to all cover points; define it inside a point, and it applies only to that single point. However, Example 9.2 showed that even with 32 attempts, the random kind variable still did not hit all possible values, so do not treat at_least as a direct way to measure coverage.", "url": "RV32ISPEC.pdf#segment341", "timestamp": "2023-10-18 14:48:31", "segment": "segment341", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.9.4 Printing the empty bins ", "content": "By default, the coverage report shows only the bins with samples. But your job is to verify that everything listed in the verification plan actually happened, so you are really interested in the bins without samples. Use the option cross_num_print_missing to tell the simulation and report tools to show you all the bins, especially the ones with no hits. Set it to a large value, as shown in Example 9.39, but no larger than you are willing to read. This option affects only the report, and it may be deprecated in a future version of the SystemVerilog IEEE standard.", "url": "RV32ISPEC.pdf#segment342", "timestamp": "2023-10-18 14:48:31", "segment": "segment342", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.9.5 Coverage goal ", "content": "The goal for a cover group or point is the level at which the group or point is considered fully covered. The default is 100% coverage. If you set this level below 100, you are requesting less than complete coverage, which is probably not desirable. This option affects only the coverage report.", "url": "RV32ISPEC.pdf#segment343", "timestamp": "2023-10-18 14:48:31", "segment": "segment343", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.10 Parameterized Cover Groups ", "content": "Once you start writing cover groups, you will find several that are close to one another. SystemVerilog allows you to parameterize a cover group, so you can create a generic definition and then specify a few unique details when you instantiate it. You cannot pass the trigger to a coverage group instance this way; instead, put the coverage group in a class and pass the trigger into the constructor.", "url": "RV32ISPEC.pdf#segment344", "timestamp": "2023-10-18 14:48:31", "segment": "segment344", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.10.1 Pass cover group parameters by value ", "content": "Example 9.41 shows a cover group that uses a parameter to split a range into two halves. You pass the midpoint value to the cover group new function.", "url": "RV32ISPEC.pdf#segment345", "timestamp": "2023-10-18 14:48:31", "segment": "segment345", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.10.2 Pass cover group parameters by reference ", "content": "If you specify that the variable being sampled is passed by reference, the cover group samples the value throughout the entire simulation, not just the value when the constructor is called.", "url": "RV32ISPEC.pdf#segment346", "timestamp": "2023-10-18 14:48:31", "segment": "segment346", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.11 Analyzing Coverage Data ", "content": "In general, assume that you need more seeds and fewer constraints: it is easier to run more tests than to construct new constraints, and be careful, because new constraints can easily restrict the search space. If a cover point has 
zero or one sample, your constraints are probably not targeting that area at all, and you need to add constraints that pull the solver into new areas. In Example 9.15, the transaction length had an uneven distribution, similar to the distribution you see when rolling two dice and looking at the total value. The problem is that in the class each length is evenly weighted; a dist constraint is not an effective alternative on the total, because it is also constrained to be the sum of the two lengths.", "url": "RV32ISPEC.pdf#segment347", "timestamp": "2023-10-18 14:48:31", "segment": "segment347", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.12 Measuring Coverage Statistics During Simulation ", "content": "You can query the level of functional coverage on the fly during simulation using the get_coverage and get_inst_coverage functions. This approach allows you to check whether you have reached your coverage goals, and possibly to control a random test. The most practical use of these functions is to monitor coverage over a long test. If the coverage level does not advance after a given number of transactions or cycles, the test should stop; hopefully another seed or test will increase the coverage. It would be nice if a test could perform more sophisticated actions based on functional coverage results, but it is hard to write this sort of test. A given test and random seed pair may uncover new functionality, but it may take many runs to reach a goal. If the test finds it has reached 100% coverage, should it keep running, and for how many more cycles? If you change the stimulus being generated, can you correlate the change in input to the level of functional coverage? The one reliable thing you can change is the random seed, but use only one seed per simulation; otherwise you cannot reproduce a design bug whose stimulus depends on multiple random seeds. If you query functional coverage statistics, you may want to create your own coverage database. Several verification teams have built SQL databases fed by functional coverage data from simulation. This setup allows greater control over the data, but requires a lot of work outside of creating tests. Formal verification tools can extract the state of a design and create input stimulus to reach all possible states; do not try to duplicate this in your testbench.", "url": "RV32ISPEC.pdf#segment348", "timestamp": "2023-10-18 14:48:31", "segment": "segment348", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "9.13 Conclusion ", "content": "When you switch from writing directed tests, hand-crafting every bit of stimulus, to constrained-random testing, you might worry that your tests are no 
longer under your command. By measuring coverage, especially functional coverage, you regain control by knowing which features have been tested. Using functional coverage requires a detailed verification plan and much time spent creating the cover groups, analyzing the results, and modifying tests to create the proper stimulus. But this effort is less than would be required to write the equivalent directed tests, and the coverage work helps you better track your progress in verifying the design.", "url": "RV32ISPEC.pdf#segment349", "timestamp": "2023-10-18 14:48:31", "segment": "segment349", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "Chapter 10 ", "content": "Advanced Interfaces", "url": "RV32ISPEC.pdf#segment350", "timestamp": "2023-10-18 14:48:31", "segment": "segment350", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.1 Introduction ", "content": "In Chapter 5 you learned how to use interfaces to connect the design and the testbench. These physical interfaces represent real signals, similar to the wires connected to ports in Verilog-1995, and the testbench uses these interfaces statically, connecting through its ports. However, for many designs, the testbench needs to connect to the design dynamically. In a network switch, a single Driver class may connect to many interfaces, one for each input channel of the DUT. You do not want to write a unique driver for each channel; you want to write a generic driver, instantiate it N times, and have it connect to each of the N physical interfaces. You can do this in SystemVerilog by using a virtual interface, which is merely a handle to a physical interface. Likewise, you may need to write a testbench that attaches to several different configurations of your design. In one configuration, the pins of the chip may drive a USB bus, while in another those same pins may drive an I2C serial bus. Once again, you can use a virtual interface so the testbench can decide at run time which drivers to use. A SystemVerilog interface is more than just signals: you can put executable code inside it. This might include routines to read and write the interface, initial and always blocks that run code inside the interface, and assertions that constantly check the status of the signals. However, do not put testbench code in an interface; program blocks were created expressly for building a testbench, including scheduling its execution in the Reactive region, as described in the LRM.", "url": "RV32ISPEC.pdf#segment351", "timestamp": "2023-10-18 14:48:31", "segment": 
"segment351", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.2 Virtual Interfaces with the ATM Router ", "content": "The most common use of a virtual interface is to allow objects in the testbench to refer to items in a replicated interface using a generic handle rather than the actual name. Virtual interfaces are the primary mechanism for bridging the dynamic world of objects with the static world of modules and interfaces.", "url": "RV32ISPEC.pdf#segment352", "timestamp": "2023-10-18 14:48:31", "segment": "segment352", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "Chapter 5 showed how to build an interface to connect a 4x4 ATM router ", "content": "Here is the testbench for the ATM interfaces and the DUT, with the modports removed for clarity. The interfaces are used in a program block as follows; note that the procedural code has hard-coded interface names such as Rx0 and Tx0.", "url": "RV32ISPEC.pdf#segment353", "timestamp": "2023-10-18 14:48:31", "segment": "segment353", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.2.2 Testbench with virtual interfaces ", "content": "A good OOP technique is to create a class that uses a handle to refer to an object, rather than a hard-coded object name. In this case, make a single driver class and a single monitor class that operate on a handle to the data, and pass the handle at run time. The following program block is still passed the 4 Rx and 4 Tx interfaces as ports. Example 10.2 creates arrays of virtual interfaces, vrx and vtx, which are passed to the constructors of the drivers and monitors. The Driver class looks similar to the code in Example 10.2, except that it uses the virtual interface name rx instead of the physical interface Rx0.", "url": "RV32ISPEC.pdf#segment354", "timestamp": "2023-10-18 14:48:32", "segment": "segment354", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.3 Connecting to Multiple Design Configurations ", "content": "A common challenge when verifying a design is that it may have several configurations. You could make a separate testbench for each configuration, but that could lead to a combinatorial explosion if you explore every alternative. Instead, use virtual interfaces to dynamically connect to the optional interfaces.", "url": "RV32ISPEC.pdf#segment355", "timestamp": 
"2023-10-18 14:48:32", "segment": "segment355", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.3.1 A mesh design ", "content": "Example 10.5 is built around a simple replicated component, an 8-bit counter, which resembles a DUT such as a network chip or processor instantiated repeatedly in a mesh configuration. The key idea is that the top-level netlist creates an array of interfaces and counters, and the testbench connects an array of virtual interfaces to the physical ones. The top-level netlist uses a generate statement to instantiate the interfaces and counters, one of which is connected to the testbench. The key line in the testbench is where the local virtual interface array vxi is assigned to point to the array of physical interfaces in the top module, top.xi. To simplify Example 10.7, the environment class has been merged into the test, and the generator, agent, and driver layers have been compressed into the driver. The testbench assumes there is at least one counter, and thus at least one Xi interface; if the design could have zero counters, you would have to use dynamic arrays, since fixed-size arrays cannot have a size of zero. The Driver class uses a single virtual interface to drive and sample the signals of one counter.", "url": "RV32ISPEC.pdf#segment356", "timestamp": "2023-10-18 14:48:32", "segment": "segment356", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.3.2 Using typedefs with virtual interfaces ", "content": "The type virtual Xi_if.TB can be replaced with a typedef, as shown in the following code snippets for the testbench and the driver.", "url": "RV32ISPEC.pdf#segment357", "timestamp": "2023-10-18 14:48:32", "segment": "segment357", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.3.3 Passing virtual interface array using a port ", "content": "The previous examples passed the array of virtual interfaces using a cross-module reference (XMR). An alternative is to pass the array through a port. Since the array in the top netlist is static, referencing it with an XMR style makes sense; a port is normally used to pass changing values. The port style is cleaner in that the testbench does not need to know the hierarchical path to the top netlist, but code needs to be replicated, and the testbench must be modified if the DUT interface changes, which in real life can be quite frequent. Additionally, you must provide an argument for every possible interface, even though the selected configuration attaches only some of them. 
example 1012 uses global parameter define number x inter faces snippet top netlist testbench still needs create array virtual interfaces pass constructor driver class", "url": "RV32ISPEC.pdf#segment358", "timestamp": "2023-10-18 14:48:32", "segment": "segment358", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.4 Procedural Code in an Interface ", "content": "class contains variables routines interface con tain code routines assertions initial always blocks recall interface includes signals functionality communi cation two blocks interface block bus contain signals also routines perform commands read write inner workings routines hidden external blocks allowing defer actual implementation access routines controlled using modport statement signals task function imported modport visible block uses modport routines used design testbench approach ensures using protocol eliminating com mon source testbench bugs assertions interface used verify protocol assertion check illegal combinations protocol violations unknown val ues display state information stop simulation immediately easily debug problem assertion also fire good transactions occur functional coverage code uses type assertion trigger gathering coverage information", "url": "RV32ISPEC.pdf#segment359", "timestamp": "2023-10-18 14:48:32", "segment": "segment359", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.4.1 Interface with parallel protocol ", "content": "creating system may know whether choose paral lel serial protocol interface example 1014 two tasks initiatorsend targetrcv send transaction two blocks using interface signals sends address data parallel across two 8bit buses", "url": "RV32ISPEC.pdf#segment360", "timestamp": "2023-10-18 14:48:32", "segment": "segment360", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.4.2 Interface with serial protocol ", "content": "interface example 1015 implements 
serial interface sending receiving address data values interface rou tine names example 1014 swap two without change design testbench code", "url": "RV32ISPEC.pdf#segment361", "timestamp": "2023-10-18 14:48:32", "segment": "segment361", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.4.3 Limitations of interface code ", "content": "tasks interfaces fine rtl functionality strictly defined tasks poor choice type verification ip interfaces code extended overloaded dynamically instantiated based configuration code verification needs maxi mum flexibility configurability go classes program block", "url": "RV32ISPEC.pdf#segment362", "timestamp": "2023-10-18 14:48:32", "segment": "segment362", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "10.5 Conclusion ", "content": "interface construct systemverilog provides powerful technique group together connectivity timing functionality communica tion blocks chapter saw create single testbench connects many different design configurations containing multiple interfaces signal layer code connect variable number physical interfaces runtime virtual interfaces additionally interface contain procedural code drives signals assertions check protocol many ways interface resemble class pointers encapsula tion abstraction lets create interface model system higher level verilog traditional ports wires remember keep testbench program block", "url": "RV32ISPEC.pdf#segment363", "timestamp": "2023-10-18 14:48:32", "segment": "segment363", "image_urls": [], "Book": "book_systemverilog_for_verification" }, { "section": "Chapter-1 ", "content": "anatomy computer", "url": "RV32ISPEC.pdf#segment0", "timestamp": "2023-10-18 21:06:55", "segment": "segment0", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.1. What is CISC Microprocessor? 
", "content": "ans cisc stands complex instruction set computer developed intel cisc type design computers cisc based computer shorter programs made symbolic machine language number instructions cisc processor", "url": "RV32ISPEC.pdf#segment1", "timestamp": "2023-10-18 21:06:55", "segment": "segment1", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.2. What is RISC Microprocessor? ", "content": "ans risc stands reduced instruction set computer architecture properties design large number general purpose registers use computers optimize register usage ii limited simple instruction set iii emphasis optimizing instruction pyre line", "url": "RV32ISPEC.pdf#segment2", "timestamp": "2023-10-18 21:06:55", "segment": "segment2", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.3. What are the different types of Memory? ", "content": "ans memory computer made semiconductions semiconduction memories two types 1 ram random access memory 2 rom read memory 1 ram read write rw memory computer called ram user write information read information random access memory location accessed random memory without going memory location ram volatile memory means information written accessed long image devicergb width 795 height 717 bpc 8 power soon power accessed two basic types ram static ram ii dynamic ram sram retains stored information long power supply static rams costlier consume power higher speed drams store information hiphope ii dram loses stored information short time milli sec even power supply dram binary static stored gate source stray capacitor transfer presence charge stray capacitor shows 1 absence 0 drams cheaper lower rams edo extended data output ram edo rams memory location accessed stores 256 bytes data information latches latches hold next 256 bytes information programs sequentially executed data available without wait states b sdram synchronous drams sgrams synchronous graphic rams ram chips use clock rate cpu uses transfer data 
cpu expects ready c ddrsdram double data rate sdram ram transfers data edges clock therefore transfer rate data becomes doubles 2 rom read memory non volatile memory ie information stored lost even power supply goes its used permanent storage information also posses random access property information written rom usersprogrammers words contents roms decided manufactures following types roms listed prom its programmable rom contents decided user user store permanent programs data etc prom data fed using prom programs image devicergb width 795 height 717 bpc 8 ii eprom eprom erasable prom stored data eproms erased exposing uv light 20 min its easy erase eprom ic removed computer exposed uv light entire data erased selected portions user eproms cheap reliable iii eeprom electrically erasable prom chip erased reprogrammed board easily byte byte erased milliseconds limit number times eeproms reprogrammed ie usually around 10000 times flash memory electrically erasable programmable permanent type memory uses one transistor memory resulting high packing density low power consumption lower cost higher reliability its used power digital cameras mp3 players etc image devicergb width 795 height 717 bpc 8 image devicergb width 503 height 35 bpc 8", "url": "RV32ISPEC.pdf#segment3", "timestamp": "2023-10-18 21:06:55", "segment": "segment3", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Chapter-2 ", "content": "basic computer architecture", "url": "RV32ISPEC.pdf#segment4", "timestamp": "2023-10-18 21:06:55", "segment": "segment4", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.1. Explain the different types of Memory Modules. 
", "content": "ans two types memory modules simm single inline memory modules ii dimm double inline memory modules small printed circuit cards pcc several drams memory chips placed cards plugged system board computer simm circuit cards contain several memory chips contacts placed one edge pcc whereas dimm its sides pcc", "url": "RV32ISPEC.pdf#segment5", "timestamp": "2023-10-18 21:06:55", "segment": "segment5", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.2. Explain about the System Clock. ", "content": "ans every computer got system clock its located microprocessor clock design piece quartz crystal system clock keeps computer system coordinated its electronic system keeps oscillating specified times intervals 0 1 speed oscillation takes place called cycle clock time taken reach 0 1 back called clock cycle speed system clock measured terms hz", "url": "RV32ISPEC.pdf#segment6", "timestamp": "2023-10-18 21:06:56", "segment": "segment6", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.3. Explain about the System Bus. 
", "content": "ans bus means electronic path various components bus refers particular types cable cable bus carries information one bit buses 3 types 1 address bus image devicergb width 795 height 717 bpc 8 2 data bus 3 control bus 1 address bus carries address memory location required instructions data address bus unidirectional ie data flows one direction cpu memory address bus data determines maximum number memory addresses capacity measured binary form eg 2 bit address bus provide 22 addresses 2 data bus data bus electronic path connects cpu memory hw devices data bus carries data cpu memory ip op devices vice versa its directional bus transmit data either direction processing speed computer increases data bus large takes data one time 3 control bus control bus controls memory io devices bus bidirectional cpu sends signals control bus enable op addressed memory devices data bus standard bus standard represents architecture bus following important data bus standards industry standard architecture isa bus standard first standard released ibm 24 address lines 16 data lines used single user system isa bus low cost bus low data transfer rate could take full advantage 32bit micro processor ii micro channel architecture mca ibm developed mca bus standard bus speed elevated 833 mhz 10mhz increased 20 mhz bandwidth increased 16 bits 32 bits iii enhanced industry standard architecture eisa buses 32 bit helpful multiprogramming due low data transfer speed isa used multi tasking multiusersystems eisa appropriate multi user systems data transfer rate eisa double isa size eisa isa eisa isa cards fixed eisa connector slot eisa connectors quite expenses iv peripheral component interconnect pci bus standard developed intel its 64 bit bus works 66 mhz earlier 32 bit pci bus developed speed 33 mhz pci bus greater image devicergb width 795 height 717 bpc 8 application presentation session transport network data link physical speed 4 interrupt channels also pci bridge bus connected 
various devices", "url": "RV32ISPEC.pdf#segment7", "timestamp": "2023-10-18 21:06:56", "segment": "segment7", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.4 Explain the role of Expansion Slots. ", "content": "ans main function mother board enable connectivity various parts computer processor memory various hardware cards fixed mother board save different purposes mother boards slots fix various cardslike video card modem sound cards etc expansion slots motherboard used fro following purposes connect internal devices computer eg hard disk etc computer bus ii connect computer external devices like mouse printer etc functions carried help adapters", "url": "RV32ISPEC.pdf#segment8", "timestamp": "2023-10-18 21:06:56", "segment": "segment8", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.5. List out various Cards and elaborate about them? ", "content": "ans 1 sound card card used ip op sound microphone used ip speaker used op sound sound card converts sound computer language vice versa sound cards based midi musical instrument digital interface represents music electronic form main part sound card dsp digital signal processor uses arithmetic logic bring sound effects sound card comes 16bit computers dac digital analog adc analog digital sound card uses dma direct memory access channel read write digital audio data 2 scsi small compute system interface technology used high speed hard disk its often used servers high volume data used present different versions scsi used capacity scsi determined bus width speed interface scsi computers bus extended means cable its extension computer bus 3 network cards nw card versatile device performs number tasks contribute entire process transmitting receiving data osi model computers links computer another computer nw cable wires sevenlayer model osi open system interface used internet receiving transmitting data information passes seven layers nw card implements physical layer half data 
link layer", "url": "RV32ISPEC.pdf#segment9", "timestamp": "2023-10-18 21:06:56", "segment": "segment9", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.6. Describe briefly about different types of Ports. ", "content": "ans computers interface called ports peripheral devices interfaced computers ports data flows ports ports 2 types parallel serial parallel port allows transfer bits word simultaneously parallel interface multiple lines connect peripherals port parallel interface used transfer data faster rate higher speed peripherals disk tapes serial port allows serial data transfer serial data transfer one bit data transmitted time serial interface one line pair line used transmit data its used slow speed peripherals terminal printers employ either serial interface parallel interface disadvantage serial parallel port one device connected port", "url": "RV32ISPEC.pdf#segment10", "timestamp": "2023-10-18 21:06:56", "segment": "segment10", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.7. Explain about RS 232 C. ", "content": "ans rs 232 c standard serial data transfer specifies standard 25 signals hand shake signals used dce dte voltage levels maximum capacitance signal lines also described standard standard rs232 c interface usually provided computers serial data transfer voltage 3 v 15 v load used high logic mark voltage 3 v 15 v load used low logic space voltage levels ttl compatible image devicergb width 795 height 717 bpc 8 14 image devicergb width 503 height 35 bpc 8", "url": "RV32ISPEC.pdf#segment11", "timestamp": "2023-10-18 21:06:56", "segment": "segment11", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Chapter-3 ", "content": "input output devices", "url": "RV32ISPEC.pdf#segment12", "timestamp": "2023-10-18 21:06:56", "segment": "segment12", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.1. Give short notes on various Input and Output Devices. 
", "content": "ans devices used input data programs computer known input devices devices convert input form understandable computer provides man machine communication io devices explained 1 keyboard data instructions input typing keyboard message typed keyboard reaches memory unit computer its connected computer via cable apart alphabet numeral keys function keys performing different functions 2 mouse its pointing device mouse rolled mouse pad turn controls movement cursor screen click double click drag mouse mouses ball beneath rotates mouse moved ball 2 wheels sides turn mousse movement ball sensor notifies speed movements computer turn moves cursorpointer screen 3 scanner scanners used enter information directly computers memory device works like xerox machine scanner converts type printed written information including photographs digital pulses manipulated computer 4 track ball track ball similar upside design mouse user moves ball directly device remains stationary user spins ball various directions effect screen movements image devicergb width 795 height 717 bpc 8 5 light pen input device used draw lines figures computer screen its touched crt screen detect raster screen passes 6 optical character rader its device detects alpha numeric characters printed written paper text scanned illuminated low frequency light source light absorbed dark areas reflected bright areas reflected light received photocells 7 bar code reader device reads bar codes coverts electric pulses processed computer bar code nothing data coded form light dark bars 8 voice input systems devices converts spoken words mc language form micro phone used convert human speech electric signals signal pattern transmitted computer its compared dictionary patterns previously placed storage unit computer close match found word recognized 9 plotter plotter op device used produce graphical op papers uses single color multi color pens draw pictures blue print etc 10 digital camera converts graphics 
directly digital form looks like ordinary camera film used therein instead ccd changed coupled divide electronic chip used light falls chip though lens converts light waves electrical waves", "url": "RV32ISPEC.pdf#segment13", "timestamp": "2023-10-18 21:06:57", "segment": "segment13", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.2. What is a Printer and what are the different types of Printers? ", "content": "ans printers op devices used prepare permanent op paper printers divided two main categories 1 impact printers hammers pins strike ribbon paper print text mechanism known electromechanical mechanism two types character printer ii line printer character printer prints one character time relatively slower speed eg dot matrix printers dot matrix printer prints characters combination dots dot matrix printers popular among serial printers matrix pins print head printer form character computer memory sends one character time printed printer carbon pins paper words get printed paper pin strikes carbon generally 24 pins ii line printer prints one line text time higher speed compared character printers printers poor quality op chain printers drum printers examples line printers 2 nonimpact printers printers use nonimpact technology inkjet laser technology printers provide better quality op higher speed printers two types inkjet printer prints characters spraying patterns ink paper nozzle jet prints nozzles fine holes image devicergb width 795 height 717 bpc 8 specially made ink pumped create various letters shapes ink comes nozzle form vapors passing reflecting plate forms desired lettershape desired place ii laser printer prints entire page one go printers photo sensitive drum made silicon drum coated recharge photoconductive extremely sensitive light drum exposed laser rays reflected shapes printed area rays fall gets discharged drum rotating comes contact toner toner gets attached discharged area drum drum comes contact paper toner got 
attached drum original shape gets attached paper hence printing takes place paper slightly heated toner gets permanently attached", "url": "RV32ISPEC.pdf#segment14", "timestamp": "2023-10-18 21:06:57", "segment": "segment14", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.3. What is the Refresh Rate? ", "content": "ans refresh rate number times second display data its given distinct measure rate refresh rate includes repeated drawing identical trans rate measures video source lead entire frame new data display", "url": "RV32ISPEC.pdf#segment15", "timestamp": "2023-10-18 21:06:57", "segment": "segment15", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.4. What are the different kinds of Resolutions in the Monitor? ", "content": "ans resolution refers sharpness detail usual image its primary function monitor its determined beam size dot pitch screen made number pixels completes screen image consists thousand pixels screen resolution maximum displayable pixels higher resolution pixels displayed resolutions different different video standards listed vga 1640 x 480 b svga 800 x 600 c xga 1024 x 768 sxga 1400 x 1050 image devicergb width 795 height 717 bpc 8", "url": "RV32ISPEC.pdf#segment16", "timestamp": "2023-10-18 21:06:57", "segment": "segment16", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.5. Explain about LCD Monitors. ", "content": "ans lcd stands liquid crystal display pixel lcd typically consists layer molecules aligned 2 transparent electrodes 2 polarizing filters axis transmission perpendicular surface electrodes contact liquid crystal material treated align liquid crystal molecules particular direction", "url": "RV32ISPEC.pdf#segment17", "timestamp": "2023-10-18 21:06:57", "segment": "segment17", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.6. Explain about Video Controller. 
", "content": "ans video display controller vdc ic main component video signal generator device responsible production tv video signal compulsory games system", "url": "RV32ISPEC.pdf#segment18", "timestamp": "2023-10-18 21:06:57", "segment": "segment18", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.7. Explain the different types of Printer: ", "content": "ans thermal wax printer uses wax coated ribbon heated pairs magenta yellow black ribbon passes front print head heated pins melt wax paper hardens thermal wax printers produce vibrant colors require smooth specially coated paper best op dye sublimation its printer employs printing process user heat transfer dye medium plastic card printed paper poster paper fabric process usually lay one color times using ribbon color panels iris printer its large format color inkjet printer used digital prepress proofing uses continuous inkjet technology produce continuous tone op various media including paper canvas etc low costs", "url": "RV32ISPEC.pdf#segment19", "timestamp": "2023-10-18 21:06:57", "segment": "segment19", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.8. What is Magnet-Optical Storage Media? ", "content": "ans used erasable disks mo system includes basic principles magnetic optical storage systems mo systems write magnetically read optically two standard forms 525 inches 35 inches image devicergb width 795 height 717 bpc 8", "url": "RV32ISPEC.pdf#segment20", "timestamp": "2023-10-18 21:06:57", "segment": "segment20", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.9. 
Explain thee following terms - ", "content": "ans ide de stands integrated drive electronic its high speed intelligent pathway connect peripheral computers ide standard according ide interface made ii eide stands enhanced ide interface hard disks floppy disks optical disk tape drives provides 4 channels two eide devices connected channel thus total 8 eide devices interfaced pc motherboard 2 connectors eide interface iii fast scsi increased maximum scsi data put 5 mbps 10 mbps wide scsi increased speed 10 mbps 20 mbps iv ultra scsi also called fast 20 enhancement scsi results doubling fast scsi data throughput speeds 20 mbps 8 bit 40mbps 16 bit processor", "url": "RV32ISPEC.pdf#segment21", "timestamp": "2023-10-18 21:06:57", "segment": "segment21", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.10. What are RAID Levels? ", "content": "ans redundant arrays independent disks raid system multiple disks operate parallel store information improves storage reliability eliminates risk data loss one disk fails also large file stored several disk units breaking file number smaller pieces storing pieces different disks called data stripping", "url": "RV32ISPEC.pdf#segment22", "timestamp": "2023-10-18 21:06:57", "segment": "segment22", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.11. Explain about the Power PC Processes. ", "content": "ans power pc microprocessors jointly developed ibm motorola apple high performances risc processors term superscalar used architecture uses one pipe line execution instructions power pc designed work multiprocessor systems power pc contain floating point math processor memory management unit chip its 32 bit 66 mhz microprocessor", "url": "RV32ISPEC.pdf#segment23", "timestamp": "2023-10-18 21:06:57", "segment": "segment23", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.12. Describe in brief about Motorola Process Microprocessors. 
", "content": "ans motorola introduced first 8bit microprocessor 6800 1974 widely used industry controlling equipment image devicergb width 795 height 717 bpc 8 1979 motorola introduced advanced 16 bit mp 68000 though data bus 16 bit wide intended architecture 32 bits could directly address 16 mb memory motorola 680x0 series mps similar programming point view improved mc series run software predecessor series 1980s 680x0 series used desktops serves computers also used embedded applications", "url": "RV32ISPEC.pdf#segment24", "timestamp": "2023-10-18 21:06:58", "segment": "segment24", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.13. What are Pits and Lands in CD\u2019s. ", "content": "ans write 1 0s cd laser beam used write 1 laser beam turned turns pit reflecting layer write 0 laser beam turned hence pit burned surface pit called land", "url": "RV32ISPEC.pdf#segment25", "timestamp": "2023-10-18 21:06:58", "segment": "segment25", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.14. What are the features of Pentium Microprocessor? ", "content": "ans pentium intel 32 bit superscalar cisc microprocessor term superscalar used processor contains multiple alus execute one instruction simultaneously parallel per clock cycle pentium contains 2 alus execute 2 instructions per clock cycle besides 2 alus also contains one onchip fpu 28 kb cache memory one instruction data pentium 32bit address bus 64 bit data bus data bus used 64 bit view supply data faster rates got 4 varieties pentium ii pentium iii pentium iv", "url": "RV32ISPEC.pdf#segment26", "timestamp": "2023-10-18 21:06:58", "segment": "segment26", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.15. What is PLD & PLA. 
", "content": "ans implement combinational sequential circuits interconnect serial ssi msi chips making connection external ic package logic circuits also designed using programmable logic devices plds gates necessary logic circuit design single package devices provisions perform inter connections gates internally desired logic implemented programmable logic array pla type fixed architecture logic device programmable gates followed programmable gates pla used implement complex combinational circuit vlsi design plas used area required regular arrays less area required randomly inter connected gates 22 image devicergb width 503 height 35 bpc 8", "url": "RV32ISPEC.pdf#segment27", "timestamp": "2023-10-18 21:06:58", "segment": "segment27", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Chapter-4 ", "content": "storage devices", "url": "RV32ISPEC.pdf#segment28", "timestamp": "2023-10-18 21:06:58", "segment": "segment28", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.1. How can you classify Storage Devices? What are its different types elaborate? 
", "content": "ans storage devices secondary storage devices used store data instruction permanently used computers supplement limited storage capacity ram storage devices categorized two parts floppy disk its circular disk coated magnetic oxide enclosed within square plastic cover jacket its available different size commonly used floppy 3 data 144 mb stored data written tiny magnetic spots dish surface creating new data disk surface eraser data previously stored location floppies available 2 sizes 35 inch 525 inch 35 inch size floppy mostly used 525 inch floppy kept flexible cover its safe store 12 mb data hard disk hard disks made aluminum metal alloys coated sides magnetic material unlike floppy disks hark disks removable computer remain storing capacity several disks packed together mounted common drive form disk pack disk also called platter magnetic tape magnetic tape mass storage device its used back storage its serial access type storage device main advantage stores data sequentially standard sizes inch inch 8mm 3mm wide head names tapes dat digital audio tape dlt digital liner tape etc optical memory information written read optical disk tape using laser beam optical disks suitable memory storage units access time hard disks advantage high storage capacity types optical memory cd rom cdr cd rw dvdrom dvdr dvdrw information cdrom written time manufacture cdrw 700 mb available dvdrom similar cdrom uses shorter wave length laser beam hence stores data cdrom", "url": "RV32ISPEC.pdf#segment29", "timestamp": "2023-10-18 21:06:58", "segment": "segment29", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.2. Explain about Modem. 
", "content": "ans modem abbreviation modulator demodulator modems used data transfer one computer another telephone lines computer works digital mode analog technology used carrying massages across phone lines modem converts information digital mode analog mode transmitting end converts analog digital receiving end modems two types internal modem ii external modem", "url": "RV32ISPEC.pdf#segment30", "timestamp": "2023-10-18 21:06:58", "segment": "segment30", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.3. What is Formatting? ", "content": "ans process magnetically mapping floppy called formatting storing data floppy needs magnetically mapped data stored right place every new floppy needs formatted use formatting means creating tracks sectors floppy tracks shape circles floppy divide various segments number tracks depends upon density floppy high density floppy 80 tracks created floppy 80 tracks track 20 sectors number sector would 1600", "url": "RV32ISPEC.pdf#segment31", "timestamp": "2023-10-18 21:06:58", "segment": "segment31", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Chapter-5 ", "content": "history computers", "url": "RV32ISPEC.pdf#segment32", "timestamp": "2023-10-18 21:06:58", "segment": "segment32", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.1. Explain about the evolution of Digital Computers. 
", "content": "ans successful general purpose mechanical computers developed 1930 mechanical calculations built automatic addition subtraction multiplication division calculator programmable device different eras evolution computer listed 1 mechanical era many attempts create mc could help perform various calculations 1823 charles babbage tried build mechanical computing mc capable performing automatic mathematical calculations designed compute tables functions logs functions etc 1830s babbage made powerful mechanical computer mc designed perform mathematical calculation automatically could perform addition etc memory unit capacity 1000 numbers consisting 50 digits mc programmable mc mechanism enabling program change sequence operations automatically late 19th century punched cards commercially used soon ibm formed 1924 konand zuse developed mechanical computer z1 1938 germany 2 electronic era first electronic computer using valves developed john v atanas late 1930s contained add subtract unit relatively small computer used 300 valves memory unit consisted capacitors mounted rotating drum used io devices including card punch card reader first popular general electronic digital computer eniac electronic numerical interpreter calculator john von neumann consultant eniac project eniac used high speed memory store programs well data program execution neumann colleagues designed build ias computers used ram consisting cathode ray tube transistors invented 1948 bell laboratories slowly replaced vacuum tubes ics first introduced ie designed fabricated 195859 examples computers using ics are ibm 370 pdp8 1970 lsi chips introduced form memory units computers built 1970s onwards used micro process lsi vlsi ulsi components", "url": "RV32ISPEC.pdf#segment33", "timestamp": "2023-10-18 21:06:58", "segment": "segment33", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.2. What were the different Computer Generations? 
", "content": "ans various generations computers listed first generation 19461954 digital computes using electronic values vacuum tubes known first generation computers high cost vacuum tubes prevented use main memory stored information form propagating sound waves ii second generation 19551964 secondgeneration computer used transistors cpu components ferrite cores main memory magnetic disks tapes secondary memory used highlevel languages fortran 1956 algol 1960 cobol 1960 io processor included control io operations iii third generation 19651974 thirdgeneration computers used ics ssi msi cpu components semiconductor memories lsi chips magnetic disk tapes used secondary memory cache memory also incorporated computers 3rd generation micro programming parallel memory multiprogramming etc introduced eg third generation computers pdp ii etc iv fourth generation 4th generation computers microprocessors used cpus vlsi chips used cpu memory supporting chips computer generation fast 8 16 32 bit microprocessors developed period main memory used fast semiconductors chips 4 bits size hard disks used secondary memory keyboards dot matrix printers etc developed ossuch msdos unix apples macintosh available object oriented language c etc developed v fifth generation 1991 continued 5th generation computers use ulsi ultralarge scale integration chips millions transistors placed single ic ulsi chips 64 bit microprocessors developed period data flow epic architecture processors developed risc cisc types designs used modern processors memory chips flash memory 1 gb hard disks 600 gb optical disks 50 gb developed", "url": "RV32ISPEC.pdf#segment34", "timestamp": "2023-10-18 21:06:58", "segment": "segment34", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.3. Explain about the Von-Neumann Architectures. 
", "content": "ans type architecture computer consisted cpu memory io devices program stored memory cpu fetches instruction memory time executes thus instructions executed sequentially slow process neumann mc called control flow computer instruction executed sequentially controlled program counter increase speed parallel processing computer developed serial cpus connected parallel solve problem even parallel computers basic building blocks neumann processors 28 image devicergb width 503 height 35 bpc 8", "url": "RV32ISPEC.pdf#segment35", "timestamp": "2023-10-18 21:06:59", "segment": "segment35", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Chapter-6 ", "content": "logic gates flip flops", "url": "RV32ISPEC.pdf#segment36", "timestamp": "2023-10-18 21:06:59", "segment": "segment36", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.1. What are Logic Gates? ", "content": "ans digital computer uses binary number system operation binary system 2 digits 0 1 manipulation binary information done logic circuit known logic gates important logical operations frequently performed design digital system 1 2 3 4 nand 5 exclusive electronic circuit performs logical operation called logic gate", "url": "RV32ISPEC.pdf#segment37", "timestamp": "2023-10-18 21:06:59", "segment": "segment37", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.2. Explain design of digital multiplexer. ", "content": "ans digital multiplexer n inputs one output applying control signals one input made available output terminal its called data sector design shown select lines control signals applied select select desired input", "url": "RV32ISPEC.pdf#segment38", "timestamp": "2023-10-18 21:06:59", "segment": "segment38", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.3. What are Combinational & Sequential Circuits? 
", "content": "ans two types logic circuits combinational sequential combinational circuit one state op instant entirely determined states ips time combinational circuits logic circuits whose operation completely described truth table sequential circuit consists combinational logic storage elements op sequential circuit function present ips also past ips state storage elements depends upon preceding ips preceding states elements", "url": "RV32ISPEC.pdf#segment39", "timestamp": "2023-10-18 21:06:59", "segment": "segment39", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.4. What are Flip Flops? ", "content": "ans device said bistable 2 stable elements flipflop bistable device 2 stable stales op remains either high low high stable state called set stable state called reset store binary bit rather 0 1 thus storing capability ie memory", "url": "RV32ISPEC.pdf#segment40", "timestamp": "2023-10-18 21:06:59", "segment": "segment40", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.5. What are the different types of Flip-Flops? 
", "content": "ans following types flipflops listed sr flipflop sr flip flop realized connecting 2 gates shown q op flipflop q complement op thus high keeping r 0 ip make q 0 q one ips upper gate ip r q upper gate low output q high flopflop stores binary bit 1 made high even set ip removed op q remain 1 q one ips lower gate similarly reset input r made high keeping 0 q become low r made high simultaneously make ops gates low basic definition flip flop ii jk flipflops sr flip flop state op unpredictable sr 1 jk flip flop allows inputs jk 1 situation state op changed complement previous state available output terminal j k 0 ops gates low ie r low r low change op state j 0 k 1 op upper gate ie becomes low therefore possible set flip flop k1 op lower gate ie r high input gate q high iii dflip flops sr flip flop 2 inputs r store 1 high low r required store 0 high r low needed thus 2 signals generated drive sr flip flop dflip flop cab realized using sr flipflop shown iv tflip flop tflip flop acts toggle switch toggle means switch opposite state realized using jk flip flop making t j k 1 shown figure", "url": "RV32ISPEC.pdf#segment41", "timestamp": "2023-10-18 21:06:59", "segment": "segment41", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.6. What are Shift Registers? ", "content": "ans shift register flipflops connected series op flipflop connected ip adjacent flip flop contents shift register shifted within registers without changing order bits data shifted one position left right time 1 clock pulse applied shift register used temporary storage data suited processing serial data converting serial data parallel data vice versa 4 types shift registers ii serial serial iii serial parallel iv parallel serial v parallel parallel", "url": "RV32ISPEC.pdf#segment42", "timestamp": "2023-10-18 21:06:59", "segment": "segment42", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.7. What is a Counter? Explain. 
", "content": "ans digital counter consists number flip flops function count electrical pulses count certain events electrical pulses proportional event produced counting purpose counters also used count time interval frequency purposes clock pulses applied counter clock pulses occur certain known intervals pulses proportional time therefore time intervals easily measured frequency inversely proportional time frequency also measured 2 types counters synchronous asynchronous synchronous counter flipflops clocked simultaneously hand asynchronous counter flip flops clocked simultaneously flip flop triggered pervious flip flop", "url": "RV32ISPEC.pdf#segment43", "timestamp": "2023-10-18 21:06:59", "segment": "segment43", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.8. Explain about De-coders. ", "content": "ans decoder similar demultiplexer without data input digital system require decoding data decoding necessary applications data demultiplexing digital display digital analog converters etc decoder logic circuit converts nbit binary input code data 2n op lines op lines activated one possible combinations ips decoder number op greater number ips circuit basic binary decoder shown", "url": "RV32ISPEC.pdf#segment44", "timestamp": "2023-10-18 21:06:59", "segment": "segment44", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.9. What is an Encoder? ", "content": "ans encoder digital circuit performs inverse operation decoder hence opposite decoding process called encoding encoder combinational logic circuit converts active input signal coded op signal n ip lines one active time op lines encodes one active ips loaded binary op bits encoder number ops less number ips", "url": "RV32ISPEC.pdf#segment45", "timestamp": "2023-10-18 21:06:59", "segment": "segment45", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.10. What are Comparators? 
", "content": "ans magnitude comparator combinational circuit compares magnitude 2 nos b generates following ops b b b block diagram single bit magnitude comparator shown implement comparator exnor gates gates used property exnor gate used find whether 2 binary digits equal gates used find whether binary digit less greater digit", "url": "RV32ISPEC.pdf#segment46", "timestamp": "2023-10-18 21:06:59", "segment": "segment46", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.11. Explain about Adders. ", "content": "ans half adder performs addition 2 binary digits half address 2 inputs 2 ops 2 ips two 1 bit numbers b two ops sum b carry bit denoted c truth table given full adder half adder 2 ips provision add carry coming lower order bits multi bit addition performed purpose full adder designed full adder combinational circuit performs arithmetic sum 3 inputs bits produces sum op carry", "url": "RV32ISPEC.pdf#segment47", "timestamp": "2023-10-18 21:06:59", "segment": "segment47", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.12. Explain BCD to 7-Segment Decoder. ", "content": "ans seven segment display normally used displaying one decimal digits 0 9 bcd 7segment decoder accepts decimal digit bcd generates corresponding seven segment code fig shows sevensegment display composed seven segments segment made material emits lights current passed", "url": "RV32ISPEC.pdf#segment48", "timestamp": "2023-10-18 21:06:59", "segment": "segment48", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.13. Explain about Sub tractors. 
", "content": "ans subtractors two types half subtractor its combinational circuit used perform subtraction 2 bits 2 inputs x minuend subtrahend 2 ops logic symbol shown full subtractor full sub tractor combinational circuit performing subtraction 3 bits namely minuend bit subtrahend bit borrow previous stage logical symbol shown computer architecture 41", "url": "RV32ISPEC.pdf#segment49", "timestamp": "2023-10-18 21:06:59", "segment": "segment49", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Chapter-7 ", "content": "addressing concepts", "url": "RV32ISPEC.pdf#segment50", "timestamp": "2023-10-18 21:06:59", "segment": "segment50", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.1. Explain in detail about the concept of Addressing in the Memory. ", "content": "ans instruction needs data perform specified operation data may accumulator gpr general purpose registers specified memory location techniques specifying address data known addressing modes important addressing modes follows direct addressing ii register addressing iii register indirect addressing iv immediate addressing v relation addressing direct addressing address data specified within instruction example direct addressing sta 2500h store contents accumulator memory location 2500h ii register addressing register addressing operands located general purpose registers words contents register operands therefore name register specified instruction eg register addressing mov b transfer contents register b register iii register indirect addressing address operand given directly contents register registers pairs address operand example lx1 h 2400h load hl pair 2400 h mov move contents memory location whose address hl pair accumulator iv immediate addressing operand given instruction eg mvi 06 move 06 accumulator v relation addressing signed displacement added current value program counter form effective address also known pc relative addressing", "url": 
"RV32ISPEC.pdf#segment51", "timestamp": "2023-10-18 21:07:00", "segment": "segment51", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.2. Explain the concept of Paging in contest of Memory. ", "content": "ans page oriented memory memory divided pages page fixed length 4kb 4mb length logical address represented page address page offset page address points descriptor table function descriptor case memory segment scheme demanded page present physical memory page fault triggered informs os swap desired page type memory management schemes known demandpaged virtual memory scheme", "url": "RV32ISPEC.pdf#segment52", "timestamp": "2023-10-18 21:07:00", "segment": "segment52", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.3. What is an Instruction Cycle? ", "content": "ans main function cpu execute programs program converts sequence instruction perform particular task program stored memory cpu fetcher one instruction time memory executes fetcher vent instruction execute cpu repeats process till executes instruction program thereafter may take another program execute necessary steps processor carry fetching instruction memory executing instruction memory executing constitute instruction cycle instruction cycle consists 2 parts fetch cycle execute cycle fetch cycle cpu fetcher mc code instruction memory necessary steps carried fetch opcode memory constitute fetch cycle execute cycle instruction executed necessary carried execute instruction constitute execute cycle 44 image devicergb width 503 height 35 bpc 8", "url": "RV32ISPEC.pdf#segment53", "timestamp": "2023-10-18 21:07:00", "segment": "segment53", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Chapter-8 ", "content": "instructions io subsystems", "url": "RV32ISPEC.pdf#segment54", "timestamp": "2023-10-18 21:07:00", "segment": "segment54", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.1. What is an Instruction? 
", "content": "ans instruction command given computer perform specified operation given data instruction consists 2 parts opcode operand first part instruction specifies operation performed known opcode second part instruction called operand data computer perform specified operation computer understands instructions form 0 1 instruction data fed computer binary form written binary codes known machine codes convenience user codes written hexical form instructions classified following three types according word length single byte instruction ii two byte instruction iii three byte instruction", "url": "RV32ISPEC.pdf#segment55", "timestamp": "2023-10-18 21:07:00", "segment": "segment55", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.2. Explain about the I/O Subsystem. ", "content": "ans inputoutput devices secondary units computer called peripherals term peripheral used sense includes interfacing devices ip port programmable peripheral interface dma controller communication interface counter internal timers etc computer architecture 45 ip devices data instruction entered computer ip devices ip device converts ip data instruction suitable binary form accepted computer examples ip devices keyboards b mouse c joystick trackball e touch screen etc ii op devices op devices receive information computer provide users computer sends information op devices binary coded forms op devices count form used users printed form display screen", "url": "RV32ISPEC.pdf#segment56", "timestamp": "2023-10-18 21:07:00", "segment": "segment56", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.3. Explain CPU Organization. 
", "content": "ans cpu brain computer main function execute programs three main sections arithmetic logical units alu ii control unit iii accumulator general special purpose registers alu function alu perform basic arithmetic logical operation take addition b subtraction etc perform exponential logarithmic trigonometric operations ii control unit control units cpu controls entire operation computer also controls devices memory input output devices fetcher instruction memory decodes instruction interprets instruction know tasks performed sends suitable control signals components perform operation maintains order directs operation entire system controls data flow cpu peripherals control cu instructions fetched memory one another execution instructions executed iii register cpu contains number register store data temporarily execution program number registers differs processor processor register classified follows general purpose registers registers store data intermediate results execution program accessible users instructions users working assembly language b accumulator important gpr multiple functions its efficient data movement arithmetic logical operation special features gpr execution arithmetic logical instruction result placed accumulator iv special purpose register cpu contains number special purpose register different purposes program counter pc b stack pointer sp c instruction register ir index register e memory address register mar f memory buffer register mbr pc pc keeps track address instruction executed next holds address memory location contains rent instruction fetched memory b stack pointer sp stack sequence memory location defined user its used save contents register its required execution program sp holds address last occupied memory location stack c status register flag register flag register contains number flags either indicate certain conditions arising alu operation control certain operations flags indicate condition called control flags 
flags used to control certain operations are called control flags. A typical microprocessor contains the following condition flags: (1) carry flag, which indicates whether there is a carry; (2) zero flag, which indicates whether the result is zero or non-zero; (3) sign flag, which indicates whether the result is positive or negative; (4) parity flag, which indicates whether the result contains an odd number of 1s or an even number of 1s. (d) Instruction register: it holds the instruction that is being decoded. (e) Index register: one of the registers is designated as the index register for indexed addressing. The address of an operand is the sum of the contents of the index register and a constant: an instruction involving the index register contains a constant, and this constant is added to the contents of the index register to form the effective address. (f) Memory address register (MAR): it holds the address of the instruction or data to be fetched from memory. The CPU transfers the address of the next instruction from the PC to the MAR, and from the MAR it is sent to memory over the address bus. (g) Memory buffer register (MBR): it holds the instruction code or data being received from or sent to memory; it is connected to the data bus. Data to be written into memory are held in this register until the write operation is completed.", "url": "RV32ISPEC.pdf#segment57", "timestamp": "2023-10-18 21:07:00", "segment": "segment57", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "Q.4. What is System Bus and what are its different architecture and standards? 
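The four condition flags listed above can be derived from an ALU result with a few bit operations. A minimal sketch, assuming an 8-bit ALU and the common convention (as on the 8085) that the parity flag is set for an even number of 1s; the function name is illustrative.

```python
def update_flags(raw_result, width=8):
    """Compute carry, zero, sign and parity flags from a raw ALU result."""
    mask = (1 << width) - 1
    r = raw_result & mask
    return {
        "carry": int(raw_result > mask or raw_result < 0),  # out-of-range result
        "zero": int(r == 0),
        "sign": (r >> (width - 1)) & 1,                     # MSB as sign bit
        "parity": int(bin(r).count("1") % 2 == 0),          # 1 = even parity
    }
```

For example, adding 0x80 + 0x80 on an 8-bit ALU overflows to 0, setting both carry and zero.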
", "content": "ans memory peripheral devices connected processor group lines called bus three types buses namely address bus data bus control bus already discussed important bus standards isa bus ii pci bus iii agp iv eisa", "url": "RV32ISPEC.pdf#segment58", "timestamp": "2023-10-18 21:07:00", "segment": "segment58", "image_urls": [], "Book": "Computer_Architecture" }, { "section": "CHAPTER 1 ", "content": "introduction review", "url": "RV32ISPEC.pdf#segment0", "timestamp": "2023-11-05 21:10:33", "segment": "segment0", "image_urls": [], "Book": "stack_computers_book" }, { "section": "1.1 OVERVIEW ", "content": "hardware supported last first lifo stacks used computers since late 1950 originally stacks added increase execution efficiency high level languages algol since fallen favor hardware designers eventually becoming incorporated secondary data handling structure computers many stack advocate dismay computers use hardware stacks primary data handling mechanism never really found wide acceptance enjoyed registerbased machines introduction large scale integration vlsi pro cessors conventional methods computer design questioned complex instruction set computers ciscs evolved complicated processors comprehensive instruction sets reduced instruction set computers riscs challenged approach using simplified processing cores achieve higher raw processing speeds applications time come consider stack machines alterna tive design styles new stack machine designs based vlsi design technology provide additional benefits found previous stack machines new stack computers use synergy features attain impressive combination speed flexibility simplicity stack machines offer processor complexity much lower cisc machines overall system complexity lower either risc cisc machines without requiring complicated compilers cache control hardware good performance also attain competitive raw performance superior performance given price programming environments first successful application 
area has been in real-time embedded control environments, where they outperform other system design approaches by a wide margin. Stack machines also show great promise for executing logic programming languages such as Prolog, functional programming languages such as Miranda and Scheme, and artificial intelligence research languages such as OPS-5 and Lisp. A major difference between this new breed of stack machine and the older stack machines is that large, high-speed, dedicated stack memories are now cost effective. Whereas previously stacks were kept mostly in program memory, newer stack machines maintain separate memory chips or even an area of on-chip memory for the stacks. These stack machines provide extremely fast subroutine calling capability and superior performance for interrupt handling and task switching. Put together, these traits create computer systems that are fast, nimble and compact. We shall start this chapter with a discussion of the role of stacks in computing, followed in subsequent chapters by a taxonomy of hardware stack support in computer design, a discussion of an abstract stack machine and of several commercially implemented stack machines, results of research into stack machine performance characteristics, hardware and software considerations, and predictions of future directions that may be taken in stack machine design.", "url": "RV32ISPEC.pdf#segment1", "timestamp": "2023-11-05 21:10:33", "segment": "segment1", "image_urls": [], "Book": "stack_computers_book" }, { "section": "1.2 WHAT IS A STACK? ", "content": "LIFO stacks, also known as push-down stacks, are conceptually the simplest way of saving information in a temporary storage location for such common computer operations as mathematical expression evaluation and recursive subroutine calling.", "url": "RV32ISPEC.pdf#segment2", "timestamp": "2023-11-05 21:10:33", "segment": "segment2", "image_urls": [], "Book": "stack_computers_book" }, { "section": "1.2.2 Example software implementations LIFO stacks may be programmed into conventional computers in a number of ways. The most straightforward way is to allocate an array in memory, and keep a variable with the array index number of the topmost active element. 
Those programmers who value execution efficiency will refine this technique by allocating a block of memory locations and maintaining a pointer with the actual address of the top stack element. In either case, \u2018pushing\u2019 a stack element refers to the act of allocating a new word on the stack and placing data into it. \u2018Popping\u2019 the stack refers to the action of ", "content": "removing the top element from the stack and returning its data value to the routine requesting the pop. Stacks are often placed in the uppermost address regions of the machine. They usually grow from the highest memory location towards lower memory locations, allowing maximum flexibility in the use of the memory between the end of program memory and the top of the stack. In our discussions, whether a stack grows up or down in memory is largely irrelevant. The top element of the stack is the element that was last pushed and will be the first to be popped; the bottom element of the stack is the one that, when removed, leaves the stack empty. A very important property of stacks is that, in their purest form, they only allow access to the top element of the data structure. As we shall see later, this property has profound implications in the areas of program compactness, hardware simplicity and execution speed. Stacks make excellent mechanisms for temporary storage of information within procedures. A primary reason for this is that they allow recursive invocations of procedures without risk of destroying data from previous invocations of the routine; they also support reentrant code. As an added advantage, stacks may be used to pass parameters between procedures. Finally, they can conserve memory space by allowing different procedures to use the same memory space over and over again for temporary variable allocation, instead of reserving room within each procedure's memory for temporary variables. There are other ways of creating stacks in software besides the array approach. Linked lists of elements may be used to allocate stack words, with the elements of the stack not necessarily in any order with respect to actual memory addresses. Also, a software heap may be used to allocate stack space, although this is really begging the question, since heap management is a superset of stack management.", "url": "RV32ISPEC.pdf#segment3", "timestamp": "2023-11-05 21:10:33", "segment": "segment3", "image_urls": [], "Book": "stack_computers_book" }, { "section": "1.3 WHY ARE STACK MACHINES IMPORTANT? 
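The array-plus-index implementation described above can be sketched directly. A minimal sketch, assuming a fixed-size backing array and a top-of-stack index of -1 for an empty stack; the class name is illustrative.

```python
class Stack:
    """Array-based LIFO stack: an array plus an index of the
    topmost active element, as described in the text."""
    def __init__(self, size=64):
        self.mem = [0] * size
        self.top = -1          # -1 means the stack is empty

    def push(self, value):
        # Allocate a new word on the stack and place data into it.
        self.top += 1
        self.mem[self.top] = value

    def pop(self):
        # Remove the top element and return its data value.
        value = self.mem[self.top]
        self.top -= 1
        return value
```

Note that only the top element is ever accessed, which is the property the text says has profound implications for hardware simplicity and speed.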
", "content": "theoretical viewpoint stacks important since stacks basic natural tool used processing well structured code wirth 1968 machines lifo stacks also required compile computer languages may requirement transla tion natural languages evey 1963 computer hardware support stack structures probably execute applications requiring stacks efficiently machines say programming stack machines easier programming conventional machines stack machine programs run reliably programs mckeeman 1975 stack machines easier write compilers since fewer exceptional cases complicate compiler lipovski 1975 since running compilers take significant percentage machine resources installations building machine efficient compiler important earnest 1980 shall see book stack machines also much efficient running certain types programs registerbased machines particu larly programs well modularized stack machines also simpler machines provide good computational power using little hardware particularly favorable application area stack machines realtime embedded control applications require combination small size high processing speed excellent support interrupt handling stack machines provide", "url": "RV32ISPEC.pdf#segment4", "timestamp": "2023-11-05 21:10:33", "segment": "segment4", "image_urls": [], "Book": "stack_computers_book" }, { "section": "1.4 WHY ARE STACKS USED IN COMPUTERS? ", "content": "hardware software stacks used support four major computing requirements expression evaluation subroutine return address storage dynamically allocated local variable storage subroutine para meter passing", "url": "RV32ISPEC.pdf#segment5", "timestamp": "2023-11-05 21:10:33", "segment": "segment5", "image_urls": [], "Book": "stack_computers_book" }, { "section": "1.4.1 Expression evaluation stack Expression evaluation stacks were the first kind of stacks to be widely supported by special hardware. 
As a compiler interprets an arithmetic expression, it must keep track of intermediate stages and precedence of operations using an evaluation stack. In the case of an interpreted language, two stacks are kept. One stack contains the pending operations that await completion of higher precedence operations. The other stack contains the intermediate inputs that are associated with the pending operations. In a compiled language, the compiler keeps track of the pending operations during its instruction generation, and the hardware uses a single expression evaluation stack to hold intermediate results. To see why stacks are well suited to expression evaluation, consider how the following arithmetic expression would be computed: ", "content": "X = (A + B) * (C + D). First, A and B would be added together, and the intermediate result must be saved somewhere; let us say that it is pushed onto the expression evaluation stack. Next, C and D are added, and that result is also pushed onto the expression evaluation stack. Finally, the top two stack elements, (A + B) and (C + D), are multiplied, and the result is stored in X. The expression evaluation stack provides automatic management of intermediate results of expressions, and allows as many levels of precedence in the expression as there are available stack elements. Readers who have used Hewlett-Packard calculators, which use Reverse Polish Notation, have direct experience with an expression evaluation stack. The use of an expression evaluation stack is so basic to the evaluation of expressions that even compilers for register-based machines often allocate registers as if they formed an expression evaluation stack.", "url": "RV32ISPEC.pdf#segment6", "timestamp": "2023-11-05 21:10:33", "segment": "segment6", "image_urls": [], "Book": "stack_computers_book" }, { "section": "1.4.3 The local variable stack Another problem that arises when using recursion, and especially when also allowing reentrancy (the possibility of multiple uses of the same code by different threads of control), is the management of local variables. 
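The evaluation just walked through can be sketched as a Reverse Polish Notation evaluator; in postfix form the expression becomes `A B + C D + *`, and the stack automatically manages the two intermediate sums. A minimal sketch supporting only the two operators the example needs; the function name is illustrative.

```python
def eval_rpn(tokens):
    """Evaluate a postfix token list: push operands; on an operator,
    pop the top two intermediate results, combine, and push back."""
    stack = []
    for tok in tokens:
        if tok == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif tok == "*":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            stack.append(tok)
    return stack.pop()

# X = (A + B) * (C + D) with A=2, B=3, C=4, D=5, in postfix: A B + C D + *
x = eval_rpn([2, 3, "+", 4, 5, "+", "*"])
```

This is exactly the entry discipline used on RPN calculators mentioned in the text.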
Once again, in older languages like FORTRAN, management of information for a subroutine was handled simply by allocating storage assigned permanently to the subroutine code. This kind of statically allocated storage is fine for programs which are neither reentrant nor recursive. However, as soon as it is possible for a subroutine to be used by multiple threads of control simultaneously or to be recursively called, statically defined local variables within the procedure become almost impossible to maintain properly. The values of the variables for one thread of execution can be easily corrupted by another competing thread. The solution that is most frequently used is to allocate the space on a local variable stack. New ", "content": "blocks of memory are allocated on the local variable stack with each subroutine call, creating working storage for the subroutine. Even if registers are used to hold temporary values within the subroutine, a local variable stack of some sort is required to save the register values of the calling routine so that they are not destroyed. The local variable stack not only allows reentrancy and recursion, but can also save memory compared with subroutines that use statically allocated local variables, since those variables take up space whether or not the subroutine is active. With a local variable stack, the space on the stack is reused as subroutines are called and the stack depth increases and decreases.", "url": "RV32ISPEC.pdf#segment7", "timestamp": "2023-11-05 21:10:33", "segment": "segment7", "image_urls": [], "Book": "stack_computers_book" }, { "section": "1.5 THE NEW GENERATION OF STACK COMPUTERS ", "content": "The new breed of stack computer that forms the focus of this book draws upon the rich history of stack machine design and the new opportunities offered by VLSI fabrication technology. This combination produces a unique blend of simplicity and efficiency that has in the past been lacking in computers of all kinds. The reasons for the advantages of these stack machines in many applications will be explored in great detail in the following chapters. The features that produce these results, and that distinguish these machines from conventional designs, are: multiple stacks with hardware stack buffers; zero-operand stack-oriented instruction sets; and the capability for fast procedure calls. These design characteristics lead to a number of features in 
the resulting machines. Among these features are: high performance without pipelining, simple processor logic, low system complexity, small program size, fast program execution, low interrupt response overhead, consistent program execution speeds across time scales, and low cost for context switching. Some of these results are obvious with clear thought; some are completely contrary to the conventional wisdom of the computer architecture community. Many of the designs for stack computers have their roots in the Forth programming language. Forth forms both a high-level language and an assembly language for a stack machine with two hardware stacks: one for expression evaluation/parameter passing, and one for return addresses. In a sense, the Forth language actually defines a stack-based computer architecture that is emulated by the host processor when executing Forth programs. The similarities between the language and the hardware designs are no accident: the members of the current generation of stack machines have, without exception, been designed and promoted by people with Forth programming backgrounds. An interesting point to note is that, although these machines were designed primarily to run Forth, they are also good at running conventional languages. Thus, while they may not take over as the processors of choice inside personal computers and workstations anytime soon, it is quite practical to use them for many applications programmed in conventional languages. Of special interest are applications that require the special advantages of stack machines: small system size, good response to external events, and efficient use of limited hardware resources.", "url": "RV32ISPEC.pdf#segment8", "timestamp": "2023-11-05 21:10:34", "segment": "segment8", "image_urls": [], "Book": "stack_computers_book" }, { "section": "1.6 HIGHLIGHTS FROM 
THE REST OF THE BOOK ", "content": "There are many different facets of stack machines explored in the following chapters. For the curious, here is a preview of the most important points to be discussed. Stack machines of all kinds may be classified with a taxonomy based upon the number of stacks, the size of any dedicated stack buffer memory, and the number of operands in the instruction format. The stack machines featured in this book have multiple stacks and 0-operand addressing; the size of their stack buffer memory reflects a design tradeoff between system cost and operating speed. The bulk of this volume on stack machines refers to these particular machines. Such stack machines have small program size, low system complexity, high system performance, and good performance consistency under varying conditions. Stack machines can run conventional languages reasonably well, using less hardware for a given level of performance than register-based machines. Stack machines are very good at running Forth, a language known for rapid program development, interactivity and flexibility; Forth is also known for producing compact code well suited to real-time control problems. Four 16-bit stack machine designs that span a range of design tradeoffs with respect to level of integration and speed are presented in detail: the WISC CPU/16, the MISC M17, the Novix NC4016 and the Harris RTX 2000. Three 32-bit stack machine designs that span a wide range of design tradeoffs are presented in detail: the Johns Hopkins/APL FRISC 3 (also known as the Silicon Composers SC32), the Harris RTX 32P, and the Wright State University SF1. Understanding stack machines requires the gathering and analysis of extensive metrics for comparison with the operation of register-based machines. The measurements presented include: dynamic and static Forth instruction frequencies over approximately 10 million executed instructions; the effects of combining opcodes with subroutine calls in an instruction on the RTX 32P; stack buffer size requirements; stack buffer overflow management strategies; and performance degradation in the face of frequent interrupts and context switching. Software selection for stack machines must encompass a large number of factors. Applications written largely in conventional languages can be quite efficient on stack machines, especially if a small effort is made to optimize the most frequently used sections of code. A good application area for stack machines is embedded real-time control, an 
application area encompasses large portion possible uses computers interesting applications also discussed future hardware software trends stack machines probably include increasingly efficient support conventional programming languages well hardware suffer ill effects limits memory bandwidth much processors", "url": "RV32ISPEC.pdf#segment9", "timestamp": "2023-11-05 21:10:34", "segment": "segment9", "image_urls": [], "Book": "stack_computers_book" }, { "section": "CHAPTER 2 ", "content": "taxonomy hardware stack support historically computer designs promise great deal support high level language processing offered hardware stack support support ranges stack pointer register multiple hardware stack memories within central processing unit two recent classes pro cessors provided renewed interest hardware stack support risc processors frequently feature large register file arranged stack stack oriented realtime control processors use stack instructions reduce program size processor complexity taxonomy important step understanding nature stack oriented computers good taxonomy allows making observations global design tradeoff issues without delving implementation details particular machine taxonomy also helps understanding proposed architecture stands respect existing designs purpose beginning discussion stack machines taxonomy geta glimpse bigger picture focus multiplestack 0operand machines following chapters section 21 shall describe taxonomy stack machines based three attributes number stacks size stack buffer memories number operands instruction format shall also discuss strengths weaknesses inherent design tradeoffs result particular machine belonging one taxonomy categories section 22 shall categorize published stack machine designs according taxonomy section 23 shall looking similarities differences within taxonomy groupings similarities within grouping differences groupings show taxonomy helps us think tradeoffs made designing stack machines", "url": 
"RV32ISPEC.pdf#segment10", "timestamp": "2023-11-05 21:10:34", "segment": "segment10", "image_urls": [], "Book": "stack_computers_book" }, { "section": "2.1 THE THREE-AXIS STACK DESIGN SPACE ", "content": "The stack computer design space may be categorized with coordinates along the three-axis system shown in Fig. 2.1. The three dimensions of the design space are: the number of stacks supported in hardware, the size of the dedicated buffer for stack elements, and how many operands are permitted in the instruction format. In some respects these three dimensions present a continuum, but for the purposes of the taxonomy we shall break the design space into 12 categories using these values along the three dimensions: number of stacks (single or multiple), size of stack buffers (small or large), and number of operands (0, 1, or 2).", "url": "RV32ISPEC.pdf#segment11", "timestamp": "2023-11-05 21:10:34", "segment": "segment11", "image_urls": [], "Book": "stack_computers_book" }, { "section": "2.1.1 Single vs. multiple stacks ", "content": "The most obvious example of a stack supported function is a single stack used to support subroutine return addresses. Often this stack is also used to pass parameters to subroutines. Sometimes one or more additional stacks are added to allow processing subroutine calls without affecting parameter lists, or to allow processing values on an expression stack separately from subroutine information. Single-stack computers are computers with exactly one stack supported by the instruction set. This stack is often intended for state saving for subroutine calls and interrupts, and may also be used for expression evaluation. In either case, it is probably used for subroutine parameter passing by compilers for some languages. In general, a single stack leads to simple hardware, at the expense of intermingling data parameters with return address information. An advantage of a single stack is that it is easier for an operating system to manage one block of variable-sized memory per process. Machines built for structured programming languages often employ a single stack that combines subroutine parameters and the subroutine return address, often using some sort of frame pointer mechanism. A disadvantage of a single stack is that parameter and return address information are forced to become mutually well nested. This imposes an overhead if modular software design techniques force elements of a parameter list to be propagated through multiple layers of software interfaces, repeatedly copied into new activation records. Multiple-stack computers have two or more stacks supported by the instruction set. One stack is usually intended to store return addresses, the other stack is for expression evaluation and subroutine parameter passing. Multiple stacks allow separating control flow information from data operands. In the case where the parameter stack is separate from the return address stack, software may pass a set of parameters through several layers of subroutines without the overhead of recopying the data into new parameter lists. An important advantage of multiple stacks is one of speed. Multiple stacks allow access to multiple values within a single clock cycle. For example, a machine that has simultaneous access to a data stack and a return address stack can perform subroutine calls and returns in parallel with data operations.", "url": "RV32ISPEC.pdf#segment12", "timestamp": "2023-11-05 21:10:34", "segment": "segment12", "image_urls": [], "Book": "stack_computers_book" }, { "section": "2.1.2 Size of stack buffers The amount of dedicated memory used to buffer stack elements is a crucial performance issue. Implementation strategies range from using only program memory to store stack elements, to having a few top-of-stack registers in the processor, to having a completely separate stack memory unit. The taxonomy divides the design space into those designs that have stacks residing mostly in program memory (with perhaps a few buffering elements in the CPU) and those designs that provide significant stack buffering. An architecture with a Small Stack Buffer typically views the stack as a reserved portion of the general-purpose program memory address space. Stacks use the same memory subsystem as instructions and variables, allowing the regular memory reference instructions to access stack operands if desired. Stack elements may also be addressed by an offset from a stack pointer or frame pointer into memory. To be competitive in speed, a stack machine must have at least one or two stack elements buffered inside the processor. 
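The memory-cycle argument in this passage can be sketched in executable form. This is a hypothetical illustration, not code from the book: the `CountingMemory` class and both `add_*` helpers are invented names, and the cycle counts simply mirror the reasoning in the text (three memory cycles for an unbuffered addition versus one refill cycle when the top two elements are buffered in registers).

```python
class CountingMemory:
    """Memory that counts read/write cycles, standing in for program RAM."""
    def __init__(self):
        self.cells = {}
        self.cycles = 0
    def read(self, addr):
        self.cycles += 1
        return self.cells.get(addr, 0)
    def write(self, addr, value):
        self.cycles += 1
        self.cells[addr] = value

def add_unbuffered(mem, sp):
    # Whole stack in memory: two operand reads plus one result write.
    a = mem.read(sp)
    b = mem.read(sp - 1)
    mem.write(sp - 1, a + b)
    return sp - 1  # the stack shrinks by one element

def add_buffered(mem, sp, tos, second):
    # Top two elements held in registers: the sum itself needs no memory
    # traffic; one read refills the second-from-top slot, filling the
    # hole left by the consumed operand.
    new_tos = tos + second
    new_second = mem.read(sp)
    return sp - 1, new_tos, new_second
```

Running `add_unbuffered` costs three memory cycles while `add_buffered` costs one, matching the counts argued in the text.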
To see the reason for this, consider an addition operation on a machine with unbuffered stacks. A single instruction fetch for the addition would generate three more memory cycles to fetch both operands and store the result. With two elements in a ", "content": "stack buffer, only one additional memory cycle is generated by an addition. That memory cycle is used to fetch a new second-from-top stack element, filling the hole created by the addition's consumption of a stack argument. A small stack buffer, with the primary stacks residing in program memory, allows quick switching between stacks for different tasks, since the stack elements are predominately memory resident at all times. The fact that a small dedicated stack buffer is simple to implement and easy to manage makes it popular. In particular, the fact that stack elements reside in main memory makes managing pointers, strings, and data structures quite easy. A disadvantage of this approach is that significant main memory bandwidth is consumed to read and write stack elements; in an architecture with a large enough stack buffer, main memory bandwidth is not usually consumed by access to stack elements. In an architecture with a Large Stack Buffer, the large buffer may take one of several forms. It may be a large set of registers accessed using a register window scheme as in the RISC processor (Sequin & Patterson 1982), a separate memory unit isolated from program memory, or a dedicated stack memory cache in the processor (Ditzel & McLellan 1982). In any event, the stack buffer is considered large if several levels of subroutines (say, 5) may be processed without exhausting the capacity of the stack memory. In the case of a stack used for expression evaluation, the stack is large if it holds approximately 16 elements, since single expressions seldom nest deeply (Haley 1962). In Chapter 6 we shall examine program execution statistics that give insight into how large is large enough. The advantage of a large stack buffer is that few program memory cycles are consumed accessing data elements and subroutine return addresses. This can lead to significant speedups, particularly in subroutine intensive environments. A disadvantage is that a separate stack memory unit may not be large enough for some applications; in this case, spilling data into program memory to make room for new stack entries may be required. Also, saving the entire stack buffer when switching between tasks in a multitasking environment may impose an unacceptably large context switching overhead, although it has been noted that this can be solved by dividing the stack memory into separate areas for separate tasks. At a lower level, separate data buses for off-chip stack memories and program memory add pins and expense to a microprocessor. Clearly, the delineation between large and small stack buffers can get hazy, but in practice it is usually clear which of the two alternatives a designer had in mind.", "url": "RV32ISPEC.pdf#segment13", "timestamp": "2023-11-05 21:10:35", "segment": "segment13", "image_urls": [], "Book": "stack_computers_book" }, { "section": "2.1.3 0-, 1-, and 2-operand addressing The number of operands in the machine instruction format might at first not seem to have much to do with hardware support for stacks. In practice, ", "content": "however, the number of addressing modes has a tremendous effect on how stacks are constructed and how stacks are used by programs. 0-operand instructions do not allow any operands to be associated with the opcode. All operations are implicitly specified to be performed on the top stack element(s). This kind of addressing is often called 'pure' stack addressing. A 0-operand stack architecture must, of course, use one of its stacks for expression evaluation. Even a pure stack machine must have a few instructions that specify addresses: loading and storing variables in program memory, loading literal (constant) values, subroutine calls, and conditional branching instructions. These instructions tend to have extremely simple formats, often using a whole memory word for the opcode or to hold an operand. There are several advantages to the simplicity of 0-operand instructions. Only the top one or two stack locations can be referenced by an instruction. This can simplify construction of the stack memory by allowing the use of a single ported memory with one or two top-of-stack registers. A speed advantage may also be gained by loading the operand registers in parallel with instruction decoding, since the operands for each instruction are known in advance to be the top stack elements. This can completely eliminate the need for pipelining to fetch and store operands. Another advantage is that individual instructions can be extremely compact, with an 8-bit instruction format sufficing for 256 different opcodes. Furthermore, instruction decoding is simplified, since no operand addressing modes need be interpreted by the decoding hardware. A disadvantage of the 0-operand addressing mode is that complex addressing modes for data structure accessing may take several instructions to synthesize. Also, data elements buried deeply on the stack can be difficult to access unless provisions are made for copying a stack element buried N elements deep to the top of the stack. A machine with a 1-operand instruction format usually performs operations on the specified operand and uses the top stack element as the implicit second operand. 1-operand addressing, also called stack/accumulator addressing, offers more flexibility than 0-operand addressing, since it combines fetching an operand with the operation on the stack. Keedy (1978a, b) has argued that a stack/accumulator architecture uses fewer instructions than a pure stack architecture for expression evaluation. This argument suggests that the overall program size of 1-operand designs may be smaller than that of 0-operand designs. Of course, there is a tradeoff involved: since an operand is specified by the instruction, an efficient implementation must either incorporate an operand-fetching pipeline or lengthen the clock cycle to allow for operand access time. In the case where the operand is resident on a subroutine parameter stack or evaluation stack, the stack memory must be addressed with an offset, and the operand fetch requires either extra execution time or pipelining hardware so that the top elements are prefetched while waiting for the operation. A 1-operand stack architecture almost always has an evaluation stack; 1-operand architectures can also support a 0-operand addressing mode to save instruction bits when the operand field would otherwise be unused. 2-operand instruction formats (which for the purposes of this taxonomy include 3-operand instruction formats as a special case) allow each instruction to specify both a source and a destination. In the case where stacks are used only to store return addresses, a 2-operand machine is simply a general purpose register machine. If subroutine parameters are passed on a stack, the 2 operands can either specify an offset from a stack frame pointer or specify a pair of registers in the current register window for the operation. 2-operand machines need no expression evaluation stack, but instead place the burden of tracking intermediate results of evaluated expressions on the compiler. 2-operand machines offer maximum flexibility, but require more complicated hardware to perform efficiently, since the operands are not known until the instruction is decoded, and a data pipeline and dual ported register file must be used to supply operands to the execution unit.", "url": 
"RV32ISPEC.pdf#segment14", "timestamp": "2023-11-05 21:10:35", "segment": "segment14", "image_urls": [], "Book": "stack_computers_book" }, { "section": "2.3 INTERESTING POINTS IN THE TAXONOMY ", "content": "Perhaps the most surprising feature of the taxonomy space is that all twelve categories are populated with designs. This indicates that a significant amount of research on diverse stack architectures has been performed. Another observation is that the different machine types tend to group along the operand axis as the major design parameter, with the number and size of stack buffers creating distinctions within each grouping. The taxonomy categories with 0-operand addressing are the 'pure' stack machines. Unsurprisingly, these categories contain the academic and conceptual design entries, since they include the canonical stack machine forms with their inherent simplicity. The SS0 machine category is populated by designs constrained by scarce hardware resources or a limited design budget. Designs in the SS0 category may have efficiency problems, because return addresses and data elements are intermingled on the stack, unless an efficient deep stack element copy operation is provided. The SL0 category seems applicable mainly to special-purpose machines used for combinator graph reduction, a technique used to execute functional programming languages (Jones 1987). Graph reduction requires performing a tree traversal to evaluate the program, using a stack to store node addresses during the traversal. An expression evaluation stack is not required, since results are stored in the tree memory. The MS0 and ML0 categories contain similarly designed machines, distinguished mainly by the amount of chip or board area spent on buffering stack elements. Forth language machines and many other high level language machines fall into these categories. Machines in these categories are finding increasing use as real-time embedded control processors because of their simplicity, high processing speed, and small program sizes (Danile & Malinowski 1987, Fraeman et al. 1986). Many MS0 and ML0 designs allow fast, even zero-cycle, subroutine calls and returns. The entries with 1-operand addressing are designs that attempt to break the bottlenecks that may arise in 0-operand designs by altering the pure stack model. The stack/accumulator SS1 designs use an address to access local variables off frame pointers more easily than SS0 designs can. In general, the perceived advantage of a 1-operand design is that a push operation can be combined with an arithmetic operation, saving instructions in some circumstances. Additionally, the Pascal and Modula-2 machines use 1-operand addressing because of the nature of P-code and M-code. The entries with 2-operand addressing tend to be the mainstream designs. Conventional microprocessors fall into the SS2 category, and RISC designs fall into the SL2 category. Register window designs fall into the MS2 category. The categorization of the 680x0 family reflects the flexibility of that machine in using any one of its eight address registers as a stack pointer. The ML2 entry, the PSP machine, reflects an attempt at a conceptual design to carry the register window approach to its extreme for speeding up subroutine calls. The SF1 machine also uses multiple stacks, and dedicates a hardware stack to each active process in a real-time control environment. From the preceding discussion we see that designs have been found to fall into all twelve categories of the taxonomy. Designs within each group of the taxonomy display strong similarities, while designs from different groups have been shown to have differences that affect implementation and system operation. Thus, the taxonomy is a useful tool for gaining insight into the nature of stack-oriented computers. In the next chapter we shall focus on a certain sector of the stack machine design space: the MS0 and ML0 machines. In future references we shall mean either an MS0 or an ML0 machine when we use the term 'stack machine'.", "url": "RV32ISPEC.pdf#segment15", "timestamp": "2023-11-05 21:10:35", "segment": "segment15", "image_urls": [], "Book": "stack_computers_book" }, { "section": "CHAPTER 3 ", "content": "Multiple-Stack, 0-Operand Machines. This chapter focuses attention on multiple-stack, 0-operand machines, comprising the MS0 and ML0 categories of the taxonomy described in Chapter 2. In Section 3.1 we shall compare stack machines with conventional Complex Instruction Set Computer (CISC) and Reduced Instruction Set Computer (RISC) architectures. In Section 3.2 we shall describe a generic stack machine architecture, called the Canonical Stack Machine, covering it at the hardware block diagram level as well as the implementation of an instruction set for this two stack machine. It will serve as a point of departure for discussions of real stack machines in subsequent chapters. In Section 3.3 we shall briefly discuss the Forth programming language. Forth is an unconventional programming language that uses a two-stack model of computation and strongly encourages frequent calls to many small procedures. Many ML0 and MS0 designs have their roots in the Forth language, and are well suited to executing Forth programs.", "url": "RV32ISPEC.pdf#segment16", "timestamp": "2023-11-05 21:10:35", "segment": "segment16", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.1 WHY THESE MACHINES ARE INTERESTING ", "content": "Multiple-stack, 0-operand machines have two inherent advantages over other machines: 0-operand addressing leads to a small instruction size, and multiple stacks allow concurrent subroutine calls and data manipulations. These features, with others, lead to small programs, low system complexity, and high system performance. The main difference between MS0 and ML0 machines is that MS0 machines give up some performance in order to reduce CPU cost by minimizing the resources used for stack buffers. We shall examine the details behind how stack machines achieve their advantages in Chapter 6, but for now let us summarize the benefits. Stack machines support small program sizes by encouraging the frequent use of subroutines, which reduces code size; in addition, stack machines have short instructions. Small program sizes reduce memory costs, component count, and power requirements, and can improve system speed by allowing cost effective use of smaller, higher speed memory chips. Additional benefits include better performance in a virtual memory environment and a requirement for less cache memory to achieve a given hit ratio. 0-operand stack machines tend to have smaller code size than other machines. Decreased system complexity decreases system development time. If chip size is decreased, the smaller chip size leaves room for on-chip semicustom features or program memory. System performance includes not only raw execution speed, but also total system cost and system adaptability when used in real-world applications. The execution speed component of system performance includes not only how many instructions can be performed per second on straight line code, but also the speed of handling interrupts and context switches, and the performance degradation due to factors such as conditional branches and procedure calls. Stack machines' 0-operand addressing mode and frequent subroutine calls not only reduce code size and system complexity, but can actually result in improved system performance for many application programs. An additional benefit comes from the fact that stack processors support efficient procedure calls: well structured code with many small procedures is encouraged by the architecture. This increases maintainability by encouraging better coding practices, and increases code reuse by allowing the use of small subroutines as building blocks.", "url": "RV32ISPEC.pdf#segment17", "timestamp": "2023-11-05 21:10:36", "segment": "segment17", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2 A GENERIC STACK MACHINE ", "content": "Before embarking on a detailed review of real MS0 and ML0 designs, a baseline for comparison needs to be established. We shall therefore explore the design of a canonical ML0 machine. The design presented is as simple as possible, in order to present a common point of departure for comparing other designs.", "url": "RV32ISPEC.pdf#segment18", "timestamp": "2023-11-05 21:10:36", "segment": "segment18", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.1.2 Data stack The data stack is a memory with an internal mechanism to implement a LIFO stack. A common implementation for this might be a conventional ", "content": "memory with an up/down counter used for address generation. The data stack allows two operations: push and pop. The push operation allocates a new cell on the top of the stack and writes the value on the data bus into that cell. The pop operation places the value in the top cell of the stack onto the data bus, then deletes the cell, exposing the next cell on the stack for the next processor operation.", "url": "RV32ISPEC.pdf#segment19", "timestamp": "2023-11-05 21:10:36", "segment": "segment19", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.1.3 Return stack ", "content": "The return stack is a LIFO stack implemented in an identical manner to the data stack. The only difference is that the return stack is used to store subroutine return addresses instead of instruction operands.", "url": "RV32ISPEC.pdf#segment20", "timestamp": "2023-11-05 21:10:36", "segment": "segment20", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.1.4 ALU and top-of-stack register ", "content": "The ALU functional block performs arithmetic and logical computations on pairs of data elements. One data element of each pair is the top-of-stack (TOS) register, which holds the topmost element of the data stack as used by the programmer. Thus, the top element of the data stack memory block is actually the second item on the data stack as perceived by the programmer, while the top perceived data stack element is kept in the one-register TOS buffer. This scheme allows using a single ported data stack memory while still allowing operations, such as addition, on the top two stack elements. The ALU supports the standard primitive operations needed by any computer; for purposes of illustration, this includes addition, subtraction, logical functions (AND, OR, XOR), and a test for zero. For the purposes of the conceptual design the arithmetic is on integers, but there is no reason why floating point arithmetic could not be added to the ALU as a generalization of the concept.", "url": "RV32ISPEC.pdf#segment21", "timestamp": "2023-11-05 21:10:36", "segment": "segment21", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.2.1 Reverse Polish Notation Stack machines execute data manipulation operations using postfix operations. These operations are usually called 'Reverse Polish' after the Reverse Polish Notation (RPN) that is often used to describe postfix operations. Postfix operations are distinguished by the fact that the operands come before the operation. ", "content": "Consider the expression 98 * (12 + 45). In this expression, parentheses are used to force the addition to occur before the multiplication. Even in expressions without parentheses, implied operator precedence forces an evaluation order; in this example, without the parentheses the multiplication would be done before the addition. The equivalent parenthesized expression would be written in postfix notation as: 98 12 45 + *. In postfix notation, an operator acts upon the most recently seen operands, using an implied stack for evaluation. In the postfix example, the numbers 98, 12, and 45 would be pushed onto the stack as shown in Fig. 3.2.", "url": "RV32ISPEC.pdf#segment22", "timestamp": "2023-11-05 21:10:36", "segment": "segment22", "image_urls": [], "Book": "stack_computers_book" }, { "section": "Then the + operator would act upon the top two stack elements (namely ", "content": "45 and 12), leaving the result 57 on the stack. Then the * operator would work upon the new top two stack elements 57 and 98, leaving 5586. Postfix notation enjoys an economy of expression compared to infix notation, since neither operator precedence nor parentheses are necessary, and it is much better suited to the needs of computers. In fact, compilers as a matter of course translate infix expressions in languages such as C and FORTRAN into postfix machine code, sometimes using explicit register allocation instead of an expression stack. The Canonical Stack Machine described in the preceding section is designed to execute postfix operations directly, without burdening the compiler with register allocation bookkeeping.", "url": "RV32ISPEC.pdf#segment23", "timestamp": "2023-11-05 21:10:36", "segment": "segment23", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.2.2 Arithmetic and logical operators ", "content": "In order to accomplish basic arithmetic, the Canonical Stack Machine needs arithmetic and logical operators. In the following sections each instruction discussed is described in terms of a register transfer level pseudocode that should be self explanatory. As an example, consider the first operation, addition. The + operation takes the top two elements of the user perceived data stack (N1 and N2), pops both, adds them, and pushes the result N3 back onto the stack. From an implementation point of view, this means popping the DS (which gives N1), adding it to the TOSREG value (which contains N2), and placing the result back in TOSREG, leaving N3 on top of the user perceived data stack. The DS element accessed by POP(DS) is actually the second-to-top element of the stack as seen by the programmer, since the top element seen by the programmer is kept in TOSREG; the operation POP(DS) is therefore consistent with the notion of TOSREG as the top-of-stack register seen by the user. Note that by keeping TOSREG as a 1-element stack buffer, a pop of element N2 and a subsequent push of element N3 are eliminated. Operation: + ; Stack: N1 N2 -> N3 ; Description: add N1 and N2, giving the sum N3. Pseudocode: TOSREG = TOSREG + POP(DS). Operation: - ; Stack: N1 N2 -> N3 ; Description: subtract N2 from N1, giving the difference N3. Pseudocode: TOSREG = POP(DS) - TOSREG. Operation: AND ; Stack: N1 N2 -> N3 ; Description: perform a logical AND of N1 and N2, giving the result N3. Pseudocode: TOSREG = TOSREG AND POP(DS). Operation: OR ; Stack: N1 N2 -> N3 ; Description: perform a logical OR of N1 and N2, giving the result N3. Pseudocode: TOSREG = TOSREG OR POP(DS). Operation: XOR ; Stack: N1 N2 -> N3 ; Description: perform a logical exclusive OR of N1 and N2, giving the result N3. Pseudocode: TOSREG = TOSREG XOR POP(DS). It should be obvious that the top-of-stack buffer register saves a considerable amount of work in performing these operations.", "url": "RV32ISPEC.pdf#segment24", "timestamp": "2023-11-05 21:10:36", "segment": "segment24", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.2.3 Stack manipulation operators One of the problems associated with pure stack machines is that they are able to access only the top two stack elements for arithmetic operations. Therefore, some overhead instructions must be spent on preparing operands to be consumed by other operations. Of course, it should be said that some register-based machines also spend a large number of instructions doing register-to-register moves to prepare for operations, so the question of which approach is better becomes complicated. The following instructions all deal with manipulating stack elements. ", "content": "Operation: DROP ; Stack: N1 -> ; Description: drop N1 from the stack. Pseudocode: TOSREG = POP(DS). In this and many subsequent instruction definitions, notation similar to TOSREG = POP(DS) is used. In order to accomplish this operation, the data stack information is placed onto the data bus, brought through the ALU (performing a dummy operation such as adding 0), and placed into the top-of-stack register. Operation: DUP ; Stack: N1 -> N1 N1 ; Description: duplicate N1, returning a second copy of it on the stack. Pseudocode: PUSH(DS) = TOSREG. Operation: OVER ; Stack: N1 N2 -> N1 N2 N1 ; Description: push a copy of the second element on the stack, N1, onto the top of the stack. Pseudocode: PUSH(RS) = TOSREG (place N2 on RS); TOSREG = POP(DS) (place N1 in TOSREG); PUSH(DS) = TOSREG (push N1 onto DS); PUSH(DS) = POP(RS) (push N2 onto DS). OVER seems conceptually simple when looking at its description; however, the operation is complicated by the need to store N2 temporarily to get it out of the way. In actual machines, one or more temporary storage registers are added to the system to reduce this thrashing for OVER, SWAP, and other stack manipulation instructions. Operation: SWAP ; Stack: N1 N2 -> N2 N1 ; Description: swap the order of the top two stack elements. Pseudocode: PUSH(RS) = TOSREG (save N2 on RS); TOSREG = POP(DS) (place N1 in TOSREG); PUSH(DS) = POP(RS) (push N2 onto DS). Operation: >R ; Stack: N1 -> ; Description: push N1 onto the return stack. Pseudocode: PUSH(RS) = TOSREG; TOSREG = POP(DS). Pronounced 'to-R', the instruction >R and its complement R> allow shuffling data elements between the data and return stacks. This technique is used to access stack elements buried more than two elements deep on the stack, by temporarily placing the topmost data stack elements on the return stack. Operation: R> ; Stack: -> N1 ; Pronounced 'R-from'. Description: pop the top element of the return stack, and push it onto the data stack as N1. Pseudocode: PUSH(DS) = TOSREG; TOSREG = POP(RS).", "url": "RV32ISPEC.pdf#segment25", "timestamp": "2023-11-05 21:10:36", "segment": "segment25", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.2.4 Memory fetching and storing Even though all arithmetic and logical operations are performed on data elements on the stack, there must be some way of loading information onto the stack before operations are performed, and storing information from the stack into memory. The Canonical Stack Machine uses a simple load/store architecture, so it has only a single load instruction '@' and a single store instruction '!'. Since instructions do not have an operand field, the memory address is taken from the stack. This eases access to data structures, since the stack may be used to hold a pointer that indexes through array elements. Since memory must be accessed once for the instruction and once for the data, these instructions take two memory cycles to execute. ", "content": "", "url": "RV32ISPEC.pdf#segment26", "timestamp": "2023-11-05 21:10:36", "segment": "segment26", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.2.5 Literals Somehow there must be a way to get a constant value onto the stack. The instruction to do this is called the literal instruction, which is often called a load-immediate instruction in register-based architectures. The literal instruction uses two consecutive instruction words: one for the actual instruction, and one to hold a literal value to be pushed onto the stack. 
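The two-word literal scheme can be sketched as a toy interpreter over a flat list of program cells. This is a hypothetical illustration, not the book's implementation: the `run` function and its cell encoding are invented for this sketch, and only the LIT and + operations are modeled; `tosreg` mirrors the one-register top-of-stack buffer described earlier.

```python
def run(program):
    """Execute a flat list of cells; 'LIT' pushes the following cell.

    Each trip through the loop costs one memory cycle for the opcode
    fetch; 'LIT' costs a second cycle to fetch the constant from the
    next program cell, matching the two-cycle cost stated in the text.
    """
    pc, ds, tosreg = 0, [], None
    while pc < len(program):
        op = program[pc]          # instruction fetch
        pc += 1
        if op == "LIT":
            if tosreg is not None:
                ds.append(tosreg)  # PUSH(DS) = TOSREG
            tosreg = program[pc]   # TOSREG = MEMORY (second cycle)
            pc += 1
        elif op == "+":
            tosreg = ds.pop() + tosreg  # TOSREG = TOSREG + POP(DS)
    return ds + ([tosreg] if tosreg is not None else [])

print(run(["LIT", 12, "LIT", 45, "+"]))  # -> [57]
```

Note how the constant occupies its own program cell immediately after the opcode, which is why the literal instruction consumes two consecutive instruction words.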
Literal requires two memory cycles: one for the instruction, one for the data element. ", "content": "This implementation assumes that the PC is pointing to the memory location of the next instruction word (the word after the current opcode). Operation: LIT ; Stack: -> N1 ; Description: treat the value in the next program cell as an integer constant, and push it onto the stack as N1. Pseudocode: MAR = PC (address of the literal); PC = PC + 1; PUSH(DS) = TOSREG; TOSREG = MEMORY.", "url": "RV32ISPEC.pdf#segment27", "timestamp": "2023-11-05 21:10:36", "segment": "segment27", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.3.1 Program Counter The Program Counter is the register that holds a pointer to the next instruction to be executed. After fetching an instruction, the program ", "content": "counter is automatically incremented to point to the next word in memory. In the case of a branch or subroutine call instruction, the program counter is instead loaded with a new value to accomplish the branch. The following implied instruction fetching sequence is performed for every instruction. Operation: instruction fetch. Pseudocode: MAR = PC; INSTRUCTION_REGISTER = MEMORY; PC = PC + 1.", "url": "RV32ISPEC.pdf#segment28", "timestamp": "2023-11-05 21:10:36", "segment": "segment28", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.3.2 Conditional branching In order to be able to make decisions, the machine must have available some method for conditional branching. The Canonical Stack Machine uses the simplest method possible. A conditional branch may be performed based on whether the top stack element is equal to 0. This approach eliminates the need for condition codes, yet allows implementation of all control flow structures. 
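The branch-if-top-is-zero scheme can be sketched as a toy interpreter. This is a hypothetical illustration rather than the book's code: the `run` function, the HALT opcode, and the convention of holding the branch target in the cell after the opcode are invented for the sketch; only the take-the-branch-when-the-popped-top-is-0 behavior follows the text.

```python
def run(program):
    """Execute cells; 'IF' pops the top of stack and branches to the
    address in the next program cell when that value is 0 (false),
    otherwise falls through past the address cell."""
    pc, stack = 0, []
    while pc < len(program):
        op = program[pc]
        pc += 1
        if op == "LIT":
            stack.append(program[pc])  # push the constant in the next cell
            pc += 1
        elif op == "IF":
            target = program[pc]       # branch address in next program cell
            pc += 1
            if stack.pop() == 0:       # false value 0 => take the branch
                pc = target
        elif op == "HALT":
            break
    return stack

# Cells 4-6 are the fall-through arm, cells 7-9 the branch-taken arm.
prog = ["LIT", 0, "IF", 7, "LIT", 111, "HALT", "LIT", 222, "HALT"]
print(run(prog))  # -> [222], because the flag 0 takes the branch
```

Because the test consumes the flag from the stack, no condition-code register is needed, which is the point made in the text.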
", "content": "Operation: IF ; Stack: N1 -> ; Description: if N1 is false (the value 0), perform a branch to the address contained in the next program cell; otherwise continue. Pseudocode: IF TOSREG = 0 THEN MAR = PC; PC = MEMORY; ELSE PC = PC + 1; ENDIF; TOSREG = POP(DS).", "url": "RV32ISPEC.pdf#segment29", "timestamp": "2023-11-05 21:10:36", "segment": "segment29", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.3.3 Subroutine calls Finally, the Canonical Stack Machine must have a method of efficiently implementing subroutine calls. Since there is a dedicated return address stack, subroutine calls simply require pushing the current program counter value onto the stack, then loading the program counter with a new value. We will assume that the instruction format for subroutine calls allows specifying a full address for the subroutine call within a single instruction word, and will ignore the mechanics of actually extracting the address field from the instruction. Real machines in later chapters will offer a variety of methods for accomplishing this with extremely low hardware overhead. Subroutine returns are accomplished by simply popping the return address from the top of the return address stack and placing the address in the program counter. Since data parameters are maintained on the data ", "content": "stack, no pointers to memory locations need be manipulated by the subroutine call instruction. Operation: CALL ; Stack: -> ; Description: perform a subroutine call. Pseudocode: PUSH(RS) = PC (save the return address); PC = address field of the INSTRUCTION_REGISTER. Operation: EXIT ; Stack: -> ; Description: perform a subroutine return. Pseudocode: PC = POP(RS).", "url": "RV32ISPEC.pdf#segment30", "timestamp": "2023-11-05 21:10:36", "segment": "segment30", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.3.4 Hardwired vs. microcoded instructions While the Canonical Stack Machine description has been kept free from implementation considerations for conceptual simplicity, a discussion of one major design tradeoff that is seen in real implementations is in order. The tradeoff is one between hardwired control and microcoded control. An introduction to the concepts of hardwired versus microcoded implementation techniques may be found in Koopman (1987a). Hardwired designs traditionally allow faster and more space efficient implementations to be made. The cost for this performance increase is usually increased complexity in designing decoding circuitry, and a major risk of requiring a complete control logic redesign if the instruction set specification is changed near the end of the product design cycle. With a stack machine, the instruction format is extremely simple (just an opcode) and the usual combinatorial explosion of operand/type combinations is completely absent. For this reason hardwired design of a stack machine is relatively straightforward. As an additional benefit, if a stack machine has a 16-bit or larger word length, the word size is very large compared to the few bits needed to specify the possible operations. Hardwired stack machines usually exploit this situation by using pre-decoded instruction formats to further simplify control hardware and increase flexibility. Pre-decoded (also called unencoded) instructions have a microcode-like format in that specific bit fields of the instruction perform specific tasks. This allows for the possibility of combining several independent operations (such as DUP and [EXIT]) in the same instruction. While 16-bit instructions may seem wastefully large, the selection of a fixed length instruction simplifies hardware for decoding, and allows a subroutine call to be encoded in the same length word as other instructions. 
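The high-bit encoding trick can be sketched as a decoder. This is a hypothetical illustration: the field layout of the unencoded format below (ALU field, EXIT bit, miscellaneous bits) is invented for the sketch and is not the Novix NC4016's actual layout; only the rule that a clear high bit marks a 15-bit subroutine-call address follows the text.

```python
def decode(word):
    """Decode a 16-bit instruction word: high bit 0 = subroutine call
    with a 15-bit address; high bit 1 = unencoded opcode whose bit
    fields drive independent operations (illustrative layout only)."""
    assert 0 <= word <= 0xFFFF
    if word & 0x8000 == 0:
        return ("CALL", word & 0x7FFF)   # 15-bit subroutine address
    # Unencoded format: independent bit fields, so several operations
    # (e.g. an ALU op combined with a subroutine EXIT) can share a word.
    return ("OP", {"alu": (word >> 8) & 0x7F,
                   "exit": bool(word & 0x0080),
                   "misc": word & 0x007F})

print(decode(0x1234))  # -> ('CALL', 0x1234)
```

A decoder like this needs no operand-mode interpretation at all, which is why hardwired control of such a format stays simple.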
A simple strategy for encoding a subroutine call is to simply set the highest ", "content": "bit to 0 for a subroutine call, giving a 15-bit address field, or to 1 for an opcode, giving a 16-bit unencoded instruction field. The general speed advantage of a fixed length instruction, combined with the possibility of compressing multiple operations into an instruction, makes the selection of a fixed length instruction justifiable. The technique of unencoded hardwired design for stack machines was pioneered by the Novix NC4016 and has since been used on other machines. With these benefits of hardwired instruction decoding for stack machines, it might seem at first glance that microcoded instruction execution would never be used. However, there are several advantages to using a microcoded implementation scheme. The major advantage of the microcoded approach is flexibility. Since not all combinations of bits in an unencoded hardwired instruction format are useful, a microcoded machine can use fewer bits to specify the possible instructions, including optimized instructions that perform a sequence of stack functions. This leaves room for user-specified opcodes in the architecture. A microcoded machine can have several complex, multicycle, user-specific instructions that would be unfeasible to implement using hardwired design techniques. If the microcode memory is constructed of RAM instead of ROM, the instruction set can even be customized for each user or each application program. One potential drawback of using a microcoded design is that it often establishes a microinstruction fetching pipeline to avoid a speed penalty when accessing the microcode program memory. The result is a requirement that instructions take more than one clock cycle, whereas hardwired designs can be optimized for single-clock-cycle execution. This turns out not to be a real penalty: with a realistic match between processor speed and affordable memory speed, such processors can perform two internal stack operations in the time it takes to make a single memory access. Thus both a hardwired design and a microcoded design can execute instructions in a single memory cycle. Furthermore, since the microcoded design can slip twice as many operations into each memory cycle period, the opportunities for code optimization and user-specified instructions are much greater. As a practical matter, microcoded implementations are more convenient to implement in the discrete component designs that predominate at the board level, while single-chip implementations favor hardwired control.", "url": "RV32ISPEC.pdf#segment31", "timestamp": "2023-11-05 21:10:37", "segment": "segment31", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.4 State changing An important consideration in real-time control applications is how the processor can handle interrupts and task switches. The specified instruction set for the Canonical Stack Machine sidesteps these issues to a certain extent, so we will talk about the standard ways of handling these events to build a base upon which to contrast designs in the next sections. A potential liability for stack machines with independent stack memories is the large state that must be saved if the stacks are swapped into program ", "content": "memory to change tasks. We shall see how this state change can be avoided much of the time when Chapter 6 speaks of advanced design techniques that reduce the penalties of a task switch.", "url": "RV32ISPEC.pdf#segment32", "timestamp": "2023-11-05 21:10:37", "segment": "segment32", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.2.4.3 Task switching Task switching occurs when a processor switches between programs to give the appearance of multiple simultaneously executing programs. 
The state of the program which is stopped at a task switch must be preserved so that it ", "content": "may be resumed at a later time. The state of the program being started must be properly placed into the machine before its execution is resumed. The traditional way of accomplishing this is to use a timer and swap tasks on every timer tick, sometimes subject to prioritization and scheduling algorithms. On a simple processor this can lead to a large overhead for saving both stacks to memory and reloading them on every context swap. A technique that can be used to combat this problem is programming with light-weight tasks that use little stack space. Such tasks push their parameters on top of the existing stack elements and remove those stack elements when terminating, thus eliminating the potentially expensive saving and restoring of the stacks required by a heavy-weight process. Another solution is to use multiple sets of stack pointers for multiple tasks within the same stack memory hardware.", "url": "RV32ISPEC.pdf#segment33", "timestamp": "2023-11-05 21:10:37", "segment": "segment33", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.3.2 The Forth virtual machine In order to solve the original telescope control problem, Forth needed several important qualities. It had to be suitable for real-time control, highly interactive for easy use by nonprogrammers, and had to fit within severe memory space constraints. From these origins, the language took on two major features: the use of threaded code, and 0-operand stack instructions. In order to conceptualize the operation of the language, the Forth virtual machine is used as a model for computation. The Forth virtual machine has two stacks: a data stack and a return stack. 
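The two-stack Forth virtual machine described here can be sketched with a tiny Python model. The way definitions are stored (Python lists standing in for threaded code, callables standing in for primitives) and the word names `double` and `quad` are assumptions of this sketch, not details from the text.

```python
# Sketch of the Forth virtual machine: a data stack for parameters and a
# return stack for nesting into tree-structured subroutine definitions.

def execute(word, ds, rs):
    """Run a word: a primitive (callable) or a threaded list of words."""
    if callable(word):
        word(ds)
        return
    ip = 0                          # instruction pointer into the definition
    while True:
        if ip >= len(word):         # end of definition: subroutine return
            if not rs:
                return
            word, ip = rs.pop()     # pop return address from the return stack
            continue
        w = word[ip]
        ip += 1
        if callable(w):
            w(ds)                   # 0-operand primitive stack instruction
        else:
            rs.append((word, ip))   # nest: save return address
            word, ip = w, 0

# Primitive 0-operand stack instructions (hypothetical minimal set).
def dup(ds):  ds.append(ds[-1])
def plus(ds): ds.append(ds.pop() + ds.pop())

# Tree-like program structure: each word calls a few underlying words.
double = [dup, plus]        # n -> 2n
quad = [double, double]     # n -> 4n, built from calls to double
```

The tree-like calling pattern (quad calls double, which calls primitives) mirrors the text's point that Forth programs are mostly subroutine calls over a small collection of underlying words.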
Forth programs are actually an emulation of MS0 machine ", "content": "code running on the host hardware. Forth programs consist of small subroutines that execute calls to other subroutines or primitive stack operation instructions. Programs are built in a tree-like fashion, with each subroutine calling upon a small collection of underlying subroutines. It is easy to see that the natural machine language for Forth is that of a 0-operand stack machine, as exemplified by the Canonical Stack Machine. As noted before, even though a processor may be designed as a Forth processor, it is still capable of executing other high level languages. The primitives of the Forth language are defined at such a low level that they correspond closely to the machine code operations that would be present in any stack machine. Thus, a machine advertised as a Forth machine is usually suitable for running other languages as well.", "url": "RV32ISPEC.pdf#segment34", "timestamp": "2023-11-05 21:10:37", "segment": "segment34", "image_urls": [], "Book": "stack_computers_book" }, { "section": "3.3.3 Emphasis on interactivity, flexibility A major advantage of the Forth programming language is that it provides an unprecedented level of interactivity in its development environment. The development tools include an integrated incremental compiler and editor which allow interactive testing and modification of individual procedures. 
", "content": "encouragement writing small procedures modular code allows easy fast testing development greatly reduced need fixing words first written consensus among forth programmers use forth language reduces development time factor 10 compared languages wide range applications forth programs tend emphasize flexibility problem solving since forth extensible language new data control structures may added language support specific application areas flexibility allows one two programmers solve problem might require larger team effort languages reducing project management over head thus magnifying productivity increase forth used extensively extremely large programming efforts effectiveness large applications yet unknown", "url": "RV32ISPEC.pdf#segment35", "timestamp": "2023-11-05 21:10:37", "segment": "segment35", "image_urls": [], "Book": "stack_computers_book" }, { "section": "CHAPTER 4 ", "content": "architecture 16bit systems chapter shall discuss representative selection 16bit stack computer designs designs chosen span wide range implementation philosophies tradeoffs section 41 discusses char acteristics 16bit systems important consideration 16bit hardware compact enough allow complete systems single chip embedded control applications remaining sections discuss four different 16bit stack computers sections arranged order increasing integration level system made offtheshelf discrete components highly integrated processoronachip section 42 discuss wisc cpu16 discrete component implementation generalized stack processor writable control store cpu16 technology development platform designed simplicity flexibility section 43 discuss misc m17 processor m17 targeted low end pricesensitive applications consequently keeps stacks program memory eliminate cost separate stack memory hardware section 44 discuss novix nc4016 first forth chip enter marketplace nc4016 provides intermediate range price performance dedicated offchip stack memories section 45 discuss harris rtx 
2000 high performance microcontroller based novix nc4016 design rtx 2000 uses standard cell design approach enables include on chip stack memory speed compactness standard cell approach also allows addition hardware multiplier countertimers processor chip cpu16 nc4016 rtx 2000 ml0 stack machines m17 keeping emphasis low cost ms0 stack machine", "url": "RV32ISPEC.pdf#segment36", "timestamp": "2023-11-05 21:10:37", "segment": "segment36", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.1 CHARACTERISTICS OF 16-BIT DESIGNS ", "content": "systems discussed 16 bits wide smallest configuration makes sense commercial stack processor applications", "url": "RV32ISPEC.pdf#segment37", "timestamp": "2023-11-05 21:10:37", "segment": "segment37", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.2.1 Introduction The WISC Technologies CPU/16 was designed by this author as a very simple (in terms of TTL component count) stack machine with a good mixture of flexibility and speed. The WISC CPU/16 uses discrete MSI components throughout. 
It is a 16-bit machine that features a completely RAM-based microcode memory (writable control store) to allow full user ", "content": "programmability. The CPU/16 is implemented as a pair of printed circuit boards that plug into an IBM PC compatible computer as a coprocessor. The name WISC comes from Writable Instruction Set Computer, although a more complete term for the technology used would be WISC/stack, since hardware stacks are an integral part of the design. The primary purpose of developing the CPU/16 was to investigate the technology and design alternatives for designing the RTX 32P described in Chapter 5. The resulting product is a reasonably fast processor in its own right, with a simple and uncluttered design. The original wire wrapped prototype of the CPU/16 fitted onto a single IBM PC expansion card (13 inches by 4 inches) with 16K bytes of program memory. The use of RAM microcode memory and a simple microinstruction format makes the processor useful as a teaching tool in computer design courses.", "url": "RV32ISPEC.pdf#segment38", "timestamp": "2023-11-05 21:10:37", "segment": "segment38", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.2.2 Block diagram Fig. 4.1 is an architectural block diagram of the CPU/16. The Data Stack and Return Stack are implemented as identical hardware stacks consisting of an 8-bit up/down counter (the Stack Pointer) feeding an address to a 256 by 16-bit memory. The stack pointers are readable and writable by the system to provide an efficient capability to access deeply buried stack elements. The ALU section includes a standard multifunction ALU built from 74LS181 chips with a DHI register for holding intermediate results. By convention, the DHI register acts as a buffer for the top stack element. This means that the Data Stack Pointer actually addresses the element perceived by the programmer to be the second-to-top stack element. 
The result is that an operation on the top two stack elements, such as addition, can be performed in a single cycle, with the A side of the ALU reading the second stack element from the Data Stack and the B side of the ALU reading the top stack element from the Data Hi register. There are no condition codes visible to machine language programs. Add-with-carry and other multiple precision operations are supported by microcoded instructions that push the carry flag onto the data stack as a logical value (0 for carry clear, -1 for carry set). The DLO register acts as a temporary holding register for intermediate results within a single instruction. Both the DHI and the DLO registers are shift registers, connected to allow 32-bit shifting for multiplication and division. The Program Counter is connected directly to the memory address bus. This allows fetching the next instruction in parallel with data operations in the rest of the system. Thus, the system can overlap data operations involving the ALU and the Data Stack with instruction-fetching operations. In order to save the program counter for subroutine call operations, the Program Counter Save register captures the program counter value before it is loaded with the subroutine starting address. The Program Counter Save register is then pushed onto the Return Stack during the subroutine calling process. 
During subroutine returns, the saved Program Counter value is ", "content": "routed from the Return Stack through the ALU, which adds 1 before storing the new Program Counter value. This saves a program counter increment that would otherwise cost a clock cycle. Program memory is organized as 64K words of 16 bits, accessed on word boundaries. A microcoded byte-swapping operation is supported to allow manipulation of single-byte quantities. Microprogram memory is a read/write memory containing 2K elements of 32 bits. This memory is addressed as 256 pages of 8 words each. The Microprogram Counter supplies the 8-bit page address, and microprograms execute within an 8-word page. The addressing scheme allows supplying 3 bits of the next microprogram instruction address within each microinstruction, with one bit modified by the result of a conditional microbranch selection. This allows conditional branching and looping during the execution of a single opcode. Instruction decoding is accomplished simply by loading the 8-bit opcode into the Microprogram Counter, using it as the page address for microprogram memory. Since the Microprogram Counter is built of counter hardware, operations can span more than one 8-microinstruction page when required. The Microinstruction Register holds the output of the microprogram memory, forming a 1-stage pipeline. This pipeline allows the next microinstruction to be accessed from microprogram memory in parallel with execution of the current microinstruction, which completely removes the delay for microprogram memory access time from the system's critical path. It also enforces a lower limit of two clock cycles per instruction: if an instruction requires only a single clock cycle, a second no-op microinstruction must be added to allow the next instruction to flow through the pipeline properly. The Host Interface block allows the CPU/16 to operate in two possible modes: Master Mode and Slave Mode. In Slave Mode, the CPU/16 is controlled by the personal computer host to allow program loading, microprogram loading, alteration of any register or memory location, system initialization, and debugging. In Master Mode, the CPU/16 runs its program freely while the host computer monitors a status register for a request for service from the CPU/16. In Master Mode the host computer may enter a dedicated service loop, or alternatively may perform other tasks such as prefetching the next block of a disk input stream or displaying an image while periodically polling the status register; the CPU/16 can wait for service from the host for as long as necessary.", "url": "RV32ISPEC.pdf#segment39", "timestamp": "2023-11-05 21:10:37", "segment": "segment39", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.2.3 Instruction set summary The CPU/16 has two instruction formats: one for invoking microcoded opcodes and one for subroutine calls. Fig. 4.2(a) shows the operation instruction format used to invoke microcoded instructions. Since 256 possible opcodes are supported by the 256 pages of Microcode Memory, only 8 bits of each instruction are needed to specify the opcode. This results in an instruction format for a microcoded opcode which has the highest 8 bits set to ones. This allows the subroutine ", "content": "call format shown in Fig. 4.2(b) to use any address in which the 8 highest bits are not all set to 1. This strategy eliminates the constraint of 15-bit subroutine addresses found in the other designs in this chapter. The disadvantage of this strategy is that no parameters for instructions can be contained in the instruction word. A consequence of this is that the targets of conditional branches are stored in the memory word after the instruction, as opposed to a small offset within the instruction itself. This design tradeoff was made in the interest of minimizing the amount of instruction decoding logic used. Since the CPU/16 uses RAM chips for its microcode memory, the microcode may be completely changed by the user if desired. The standard software environment for the CPU/16 is MVP-FORTH, a Forth-79 dialect (Haydon 1983). The Forth instructions included in the standard microcoded instruction set are shown in Table 4.1. Of course, other software environments are possible, but none except Forth have been implemented. One thing that is noticeable about the instruction set is the diversity of instructions supported. The instructions in Table 4.1(a) are a large set of Forth primitive operations. Table 4.1(b) shows common Forth word combinations available as single instructions. Table 4.1(c) shows words used to support the underlying Forth operations for subroutine call and exit. Table 4.1(d) lists high level Forth words that use specialized microcode to speed up execution. Table 4.1(e) shows words added to support extended precision integer operations and 32-bit floating point calculations. The execution time of instructions varies with their complexity over a considerable range. Simple instructions that manipulate the data stack, such as SWAP, take 2 or 3 microcycles. Complex instructions can take many clock cycles: for example, Q+ (64-bit addition) takes 18 cycles, which is still much faster than comparable high level code. If desired, microcoded loops can be written that potentially last for thousands of clock cycles for block memory moves and other repetitive operations. As mentioned earlier, each instruction invokes a sequence of microinstructions in the microprogram memory page corresponding to the 8-bit opcode of the instruction. Fig. 4.3 shows the microcode format for a microinstruction. The microcode used is horizontal, meaning that there is only one format, and the microcode is broken into separate fields that control different portions of the machine. Because of the simplicity of the stack machine approach in the CPU/16 hardware, only 32 bits are needed for each microinstruction. This 32-bit format can be contrasted with the microinstruction formats of 48 bits and wider found in other horizontally microcoded machines, such as a machine using the AMD 2900 series bit-slice components. This simplicity makes microprogramming not much harder than assembly language programming on a conventional machine. For example, the pseudocode description of the addition operation for the Canonical Stack Machine, TOSREG <- TOSREG + POP(DS), as an operation in CPU/16 microcode would be written as the microinstruction: SOURCE=DS, ALU=A+B, DEST=DHI, INC(DP). The SOURCE=DS micro-operation routes the current top element of the hardware Data Stack onto the data bus. ALU=A+B directs the ALU to add its A input (the data bus) to its B input (the top-of-stack element buffered in DHI). DEST=DHI deposits the result back into the Data Hi register. Meanwhile, the INC(DP) micro-operation increments the Data Stack Pointer as the Data Stack is read, thus popping the stack.", "url": "RV32ISPEC.pdf#segment40", "timestamp": "2023-11-05 21:10:38", "segment": "segment40", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.2.4 Architectural features The CPU/16 is very similar to the Canonical Stack Machine. This probably has a lot to do with the fact that both were designed by this author, and the major goal for both the Canonical Stack Machine and the CPU/16 is simplicity. The major efficiency improvement of the CPU/16 over the Canonical Stack Machine is the replacement of the Memory Address Register with the Program Counter. 
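The CPU/16 add microinstruction described above (SOURCE=DS, ALU=A+B, DEST=DHI, increment DP) can be modeled in a few lines of Python. The register and field names follow the text; the execution model itself (a class with a push helper, a downward-growing stack RAM) is an assumption of this sketch.

```python
# Sketch: the CPU/16 ADD operation as one horizontal microinstruction.
# DHI buffers the top stack element; the Data Stack Pointer (dp) addresses
# the second-from-top element in the 256-word hardware stack RAM.

class CPU16:
    def __init__(self):
        self.stack_ram = [0] * 256   # hardware data stack memory
        self.dp = 0                  # data stack pointer (8-bit counter)
        self.dhi = 0                 # Data Hi register: buffered top-of-stack

    def micro_add(self):
        """SOURCE=DS, ALU=A+B, DEST=DHI, INC(DP): pop-and-add in one step."""
        bus = self.stack_ram[self.dp]         # SOURCE=DS: second element -> bus
        self.dhi = (bus + self.dhi) & 0xFFFF  # ALU=A+B, DEST=DHI (16-bit wrap)
        self.dp = (self.dp + 1) & 0xFF        # INC(DP): pops the data stack

    def push(self, value):
        """Helper (not a CPU/16 microinstruction): bury DHI, load new top."""
        self.dp = (self.dp - 1) & 0xFF
        self.stack_ram[self.dp] = self.dhi
        self.dhi = value & 0xFFFF
```

The sketch assumes the stack grows toward lower addresses, which is consistent with the text's note that incrementing the stack pointer pops the stack.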
This has the advantage of allowing the next instruction to be fetched without tying up the data bus, so that stack computations can be overlapped with instruction fetches. A disadvantage of this technique is that the @ and ! operations require overwriting the Program Counter with the ", "content": "[Table 4.1 notes: duplicate and temporary words, and floating point numbers with a 32-bit mantissa and a 16-bit integer. Since the CPU/16 uses RAM microcode memory, the user may add or modify instructions as desired; the list merely indicates the instructions supplied with the standard development software package.] memory address, and then restoring the Program Counter contents from the Program Counter Save register. Obviously, the Program Counter, a Memory Address Register, and the DHI register could all be multiplexed onto the RAM address bus, but this would introduce added complexity and components. The DLO register was added to the design primarily to provide efficient 32-bit shifting to support multiplication and division. However, the presence of this intermediate storage register measurably improves performance, since four intermediate results can be available at one time: DHI, DLO, the top of the Data Stack, and a temporary result pushed onto the Return Stack. As an example, the DLO register is used as an intermediate storage location by the SWAP operation, which is conceptually cleaner than using the return stack for that purpose. An important implementation feature of the CPU/16 is that all resources of the machine can be directly controlled by the host computer. This is done through the host interface, which supports microinstruction register loading and single-step clocking features. With these features, any microinstruction desired can be executed by first loading values into the registers of the system, loading the microinstruction, cycling the clock, and then reading data values back to examine the results. This design technique makes writing microcode extremely straightforward and avoids the need for expensive microcode development support hardware. It also makes diagnostic programs simple to write. The CPU/16 is not designed to handle interrupts.", "url": "RV32ISPEC.pdf#segment41", "timestamp": "2023-11-05 21:10:38", "segment": "segment41", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.2.5 Implementation and featured application areas The CPU/16 is constructed using conservative (some might even say obsolete) 74LS00 series chips and relatively slow 150 ns static RAMs for stack and program memories. The design priorities for the CPU/16 are, in decreasing order of importance: simplicity, minimum design & development costs, compactness, flexibility, and speed. The CPU/16 clock cycle time is 280 ns, with an average of three clock cycles per instruction. Discrete components were chosen because they are inexpensive and require little initial development investment when compared to a single-chip gate array. Discrete component designs are also much easier and cheaper to change for bug extermination and design upgrades. This is in keeping with the philosophy of a design exploration project. It also results in a much slower processor than could be produced using a single-chip design. Even so, at the time of its introduction, the CPU/16 was speed competitive with the slower versions of the Novix NC4016 (the leading stack machine of the time) when running many application programs. In order to allow increased flexibility and to limit the required microinstruction width, the CPU/16 uses discrete ALU chips (74LS181) instead of bit-sliced components. The primary application area is general-purpose stack processing as a coprocessor for the IBM PC family of personal computers. While the redefinable instruction set makes the machine suitable for most languages, the primary application language is Forth. An additional application area that is attractive is that of a teaching aid in computer architecture courses. Since the machine is constructed using less than 100 simple TTL chips including memory, a student can readily understand the design. An additional result of using discrete component technology is that all signals in the system are accessible with external 
An additional result of using discrete component technology is that all signals in the system are accessible with external ", "content": "probes making system suitable experimentation students learning hardware software microcode interact information section derived cpu16 technical reference manual koopman 1986 additional information cpu16 may found haydon koopman 1986 koopman haydon 1986 also koopman 1987b describes conceptual wisc architecture extremely similar cpu16", "url": "RV32ISPEC.pdf#segment42", "timestamp": "2023-11-05 21:10:38", "segment": "segment42", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.3.2 Block diagram Fig. 4.4 shows the block diagram of the M17. Both the Data Stack and the Return Stack reside in program memory, with the top elements of each held in on-chip registers for speed. The X. Y, and Z registers hold the top three elements of the data stack, with X being the top element. These registers are connected with multiplexers so that values can be transferred between registers in a single clock cycle. Simulta\u00ad neously, the Z register can be read from or written to the portion of the stack resident in program memory. Thus, a Data Stack popping operation (Forth DROP operation) is accomplished by simultaneously reading Z from memory, copying Z to Y, and copying Y to X. Similarly, a Data Stack pushing operation (such as the Forth DUP operation) is accomplished by copying X to Y while retaining the old value of X, copying Y to Z, and writing Z to program memory. The LASTX register can be updated with the contents of the X register on each instruction cycle. It therefore contains the top-of-stack value that was overwritten by the previous instruction, which is useful for many instruction sequences. The ALU on the M17 is designed to generate all possible ALU functions simultaneously, only at the last moment selecting the correct function output for writing back to the X and/or Y registers. 
This technique allows the ALU delay to overlap the instruction decoding time, since once the instruction is decoded its only task is to select the correct ALU output from the functions already computed. ", "content": "The M17 has an 8-bit I/O bus that allows concurrent operation of the ALU while performing data transfers. This feature, not found on the other 16-bit single-chip Forth machines discussed, allows high speed I/O without tying up the memory data bus. The Return Stack is kept in program memory, with the top element of the Return Stack buffered in the INDEX register. The INDEX register doubles as a count-down counter for use in program loops and the instruction repeat feature. The Instruction Pointer is a conventional program counter. It is loaded from the instruction register for subroutine calls, from the memory data bus for branches, and from the INDEX register for subroutine returns. The INDEX register is also loaded from the Instruction Pointer to save the return address on subroutine calls. The Return Stack Pointer is an up/down counter that contains the memory address of the top element of the Return Stack resident in program memory, which is actually the second-from-top element as seen by the programmer, since the INDEX register contains the top element. Similarly, the Data Stack Pointer points to the top Data Stack element resident in program memory, which is actually the fourth element on the stack, since the top three elements are buffered in X, Y, and Z. The Data Stack grows from high memory locations to low memory locations, while the Return Stack grows from low memory locations to high memory. With this arrangement the free space between the top of the Data Stack and the top of the Return Stack is shared for efficient use of memory space. The M17 directly addresses five segments of 64K words of 16-bit wide memory. Byte swapping, byte packing, and byte unpacking instructions are available to allow access to 8-bit quantities. The M17 provides five signal pins that indicate which memory space is active: data stack, return stack, code space, buffer A, or buffer B. The activated pin indicates which address space is being used on the address bus. In simple systems these pins can be ignored, but in somewhat larger systems each pin can control memory chips, providing five independent banks of 64K words of memory. Using a companion memory controller chip, up to 16M words of memory can be addressed. The M17 takes two clock cycles per instruction: one clock cycle to load the instruction from program memory, and another clock cycle to perform the operation and read or write one of the stacks in program memory. By performing two-cycle instruction execution, the memory bus is kept continuously busy, and simple systems can operate with just two 8-bit memory packages. The M17 also has six instruction cache registers. These registers form a short history buffer that retains sequences of consecutive instructions as they are executed. A repeat sequence is triggered by a special instruction; the one to six retained instructions form a loop that is repeated until an exit condition becomes true. The loop executes at one clock cycle per instruction instead of two on the second and subsequent iterations, since the instructions need not be fetched from memory. In order to simplify the interrupt control logic, loops are required to be properly aligned within an address range evenly divisible by 8. The sequence is interruptible, and the interrupt service routine is responsible for saving a special flag if it intends to use the repeat sequence itself. A final feature of the M17 is support for variable length clock cycles using an asynchronous memory interface. In asynchronous mode operation, the M17 provides a memory request signal for each memory cycle, and the responding memory device is responsible for asserting a device-ready signal when data is valid. This handshaking process actually eliminates the need for an oscillator and results in asynchronous operation of the system. One advantage of this scheme is that different speed memory devices may be used, with different device-ready delays, to avoid wasting memory bandwidth. Another advantage is that only a short delay is needed for clock cycles that do not address memory, allowing internal operation cycles to proceed faster than memory reference cycles. In extremely cost sensitive applications, an ordinary clock oscillator can be used to run the entire system.", "url": "RV32ISPEC.pdf#segment43", "timestamp": "2023-11-05 21:10:38", "segment": "segment43", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.3.3 Instruction set summary Fig. 4.5 shows the instruction formats for the M17. Instructions are accomplished in two clock cycles: one for the instruction fetch, and one for the operation and stack memory access. All of the Canonical Stack Machine's primitive operations listed in Table 3.1 can be executed in a single instruction cycle (two clock cycles). 
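The X/Y/Z register buffering described in Section 4.3.2 can be sketched as follows. The memory-resident part of the stack is modeled as a Python list, and `push` is generalized to accept a new value rather than duplicating X as Forth DUP does; both simplifications are assumptions of this sketch.

```python
# Sketch of the M17 data stack: the top three elements live in the X, Y,
# and Z registers, with the rest of the stack resident in program memory.

class M17Stack:
    def __init__(self):
        self.x = self.y = self.z = 0   # X is the programmer-visible top
        self.mem = []                  # memory-resident part (end = 4th element)

    def push(self, value):
        """Push: X moves to Y, Y to Z, and Z spills to program memory."""
        self.mem.append(self.z)
        self.z, self.y, self.x = self.y, self.x, value

    def pop(self):
        """Forth DROP: Y to X, Z to Y, refill Z from program memory."""
        top = self.x
        self.x, self.y = self.y, self.z
        self.z = self.mem.pop() if self.mem else 0
        return top
```

In hardware all three register moves plus the one memory transfer happen in the same cycle, which is why only one stack memory access per instruction is needed.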
The details of operation of some instructions are slightly different on the M17 to accomplish single-instruction-cycle execution. For example, a memory store operation does not pop the data and address from the stack because this would require two additional memory transactions. Fig. 4.5(a) shows the subroutine call instruction. A subroutine call is made by using the address of the subroutine (which must be an even address) as the instruction. The zero in bit 0 of the instruction designates a subroutine call. This forces subroutines to start on even memory locations, but allows code to span the entire 64K words of address space. The M17 has three conditional instructions: SET, RETURN, and JUMP. Fig. 4.5(b) shows the format of a generic conditional instruction. Bits 6-15 indicate which conditions are selected as inputs into a logical OR condition evaluation function. For example, if bits 15 and 13 are set, a \u2018less than or equal to zero\u2019 condition is selected. When bit 5 is set, it causes a logical inversion of the condition value. For example, if bits 15,13, and 5 are set, a \u2018greater than zero\u2019 condition is selected. Bit 4 controls the INDEX register and its function. For RETURN, it allows programmer control of the return stack drop. For SET and JUMP it selects a test for zero and decrement INDEX step. In this way many useful conditions based on the data in X, Y, Z, or INDEX can be created in one instruction step. 
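The condition-selection scheme just described (bits 6-15 each select a condition, the selected conditions are ORed together, and bit 5 inverts the result) can be sketched as a small evaluator. The mapping from bit positions to named flags is a placeholder assumption, since the text identifies only a few of them.

```python
# Sketch: M17-style condition evaluation. Bits 6-15 of the instruction each
# select a condition flag; selected flags are ORed together, and bit 5
# logically inverts the result. Which flag sits on which bit is assumed.

def evaluate_condition(instr, flags):
    """instr: 16-bit conditional instruction; flags: dict of bit -> bool."""
    result = False
    for bit in range(6, 16):                 # bits 6-15 select conditions
        if instr & (1 << bit):
            result = result or flags.get(bit, False)
    if instr & (1 << 5):                     # bit 5: logical inversion
        result = not result
    return result

# Hypothetical flag assignment: bit 15 = negative, bit 13 = zero, so bits
# 15 and 13 together select 'less than or equal to zero', as in the text.
flags = {15: False, 13: True}                # value is exactly zero
le_zero = (1 << 15) | (1 << 13)
gt_zero = le_zero | (1 << 5)                 # adding bit 5: 'greater than zero'
```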
It is important to note that conditional instructions in the M17 do not ", "content": "change the data stack; they simply extract a condition code value from data in the system to perform the conditional operation. For example, selecting the carry condition with bit 9 will give the carry bit of X added to Y without actually modifying the contents of either X or Y. No results of a conditional evaluation are retained unless the SET instruction is used. Fig. 4.5(c) shows the format of the SET conditional instruction. This instruction sets a user flag, which may be thought of as a conventional condition code register, to the value of the condition code selected by bits 4-15. The user flag can be tested by instructions later in the program for branching. Bit 3 specifies whether the top stack element is popped (the equivalent of a Forth DROP operation) after the evaluation is performed. Fig. 4.5(d) shows the format of the conditional subroutine return instruction. If bit 4 is 0, the instruction performs a conditional subroutine return, performing the return by popping the return address from the top of the Return Stack (resident in the INDEX register) if the condition evaluates as true. If bit 4 is set to 1, a branch to the address on top of the Return Stack is still made, but the Return Stack is not popped if the condition is false. This is a convenient way of implementing a BEGIN...UNTIL conditional control structure that stores the start address of the loop in the INDEX register and uses data stack conditions to determine when to terminate. The conditional jump instruction is shown in Fig. 4.5(e). This instruction evaluates the specified condition and, if true, jumps to a destination address stored in the memory location after the jump instruction. If the jump condition is false, the M17 skips over the jump destination value and executes the instruction in the next memory location after the second word of the jump instruction. The jump instruction can also be used to implement a count-down loop using the INDEX register by setting bit 4 to 1. Fig. 4.5(f) shows the process instruction format. This instruction has several independent control fields, reminiscent of the horizontal microcode format seen on the CPU/16. Bits 3-5 specify control of the Z register, bits 6-7 the Y register, bit 13 the LASTX register, and bit 14 the X register. Additionally, bits 8-12 select the ALU/shifter function to be performed, with results loaded into the X register. Finally, bit 15 can cause the Data Stack Pointer to be updated by the instruction. Fig. 4.5(g) shows the access instruction format. This instruction is similar in format to the process instruction. The major difference is that bits 8-11 specify a source or source/destination pair for routing data around the processor. Bits 12 and 14 control updating of the source and destination registers, allowing exchanges between internal registers. The M17 handles interrupts as a hardware-forced subroutine call to memory address 0, or to another address supplied by the interrupting device. There is also a context register that allows saving the state of the processor upon receiving an interrupt.", "url": "RV32ISPEC.pdf#segment44", "timestamp": "2023-11-05 21:10:39", "segment": "segment44", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.3.4 Architectural features The biggest difference between the M17 and the Canonical Stack Machine described in Chapter 3 is that the M17's stack memory and program memory accesses use the same bus, and may reside in the same memory chips. In order to maintain a reasonably high level of performance, the M17 buffers the top three Data Stack elements and the top Return Stack element in internal registers. In contrast to the single internal bus used by the Canonical Stack Machine, the M17 provides a rich interconnect structure between registers. These interconnects not only allow moving data along the LASTX/X/Y/Z register chain to perform pushes and pops, but also allow routing to perform fairly complex stack manipulations within a single decode/execute clock cycle pair. Since stacks are kept in program memory, a multiplexer is used to select the address to be fed to program memory. An advantage of placing stacks in program memory is that the amount of information that must be saved from the chip on a context swap is quite low. Instead of copying the elements of an ", "content": "on-chip stack to a holding area in main memory, only the top-of-stack registers need be flushed to memory, and the stack pointers are then redirected to point to a different memory block to activate the new task.", "url": "RV32ISPEC.pdf#segment45", "timestamp": "2023-11-05 21:10:39", "segment": "segment45", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.4.2 Block diagram Fig. 4.6 shows the block diagram of the NC4016. 
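The cheap task switch described in Section 4.3.4 (flush the top-of-stack registers, then redirect the stack pointers) can be sketched like this; the task-block layout and field names are assumptions of the sketch.

```python
# Sketch: task switching on a machine whose stacks live in program memory.
# Only the few top-of-stack registers are flushed; the stack pointer is then
# redirected to another task's stack region instead of copying whole stacks.

class Task:
    def __init__(self, stack_base):
        self.sp = stack_base            # saved data stack pointer
        self.x = self.y = self.z = 0    # saved top-of-stack registers

class Machine:
    def __init__(self):
        self.sp = 0
        self.x = self.y = self.z = 0

    def switch(self, old, new):
        """Flush registers into the old task block, reload from the new one."""
        old.sp, old.x, old.y, old.z = self.sp, self.x, self.y, self.z
        self.sp, self.x, self.y, self.z = new.sp, new.x, new.y, new.z
```

The point of the sketch is what is absent: no loop copies stack contents to a holding area, so context-swap cost stays constant regardless of stack depth.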
The ALU section contains a 2-element buffer for the top elements of the data stack (T for Top data stack element, and N (Next) for the second-from-top data stack element). It also contains a special MD register for support of multiplication and division as well as an SR register for fast integer square roots. The ALU may perform operations on the T register and any one of the N, MD, or SR registers. ", "content": "The data stack is an off-chip memory holding 256 elements. The data stack pointer on-chip provides the stack address to the off-chip memory. A separate 16-bit stack data bus allows the data stack to be read or written in parallel with other operations. As noted previously, the top two data stack elements are buffered in the T and N registers of the ALU. The return stack is a separate memory similar to the data stack, with the exception that the top return stack element is buffered in the on-chip index register. Since Forth keeps loop counters as well as subroutine return addresses on the return stack, the index register can be decremented to implement countdown loops efficiently. The stacks have on-chip underflow and overflow protection. For a multitasking environment, an off-chip stack page register, controlled using the I/O ports, can give each task a separate piece of a larger-than-256-word stack memory. This gives hardware protection to avoid one task overwriting another task's stack, and reduces context swapping overhead to a minimum. The program counter points to the location of the next instruction to be fetched from external program memory, and is automatically altered by jump, loop, and subroutine call instructions. Program memory is arranged as 16-bit words; byte addressing is not directly supported. The NC4016 also has two I/O busses leading off-chip on dedicated pins: the B-port, a 16-bit I/O bus, and the X-port, a 5-bit I/O bus. These I/O ports allow direct access to I/O devices in control applications without stealing bandwidth from the memory bus. Bits of the I/O ports can also be used to extend the program memory address space by providing high-order memory address bits. The NC4016 can use four separate 16-bit busses for data transfers in every clock cycle for high performance: the program memory, data stack, return stack, and I/O busses", "url": "RV32ISPEC.pdf#segment46", "timestamp": "2023-11-05 21:10:39", "segment": "segment46", "image_urls": [], "Book": 
"stack_computers_book" }, { "section": "4.4.3 Instruction set summary The NC4016 pioneered the use of unencoded instruction formats for stack machines. In the NC4016 the ALU instruction is formatted in independent fields of bits that simultaneously control different parts of the machines, much like horizontal microcode. The NC4016, and many of its Forth processor successors, are the only 16-bit computers that use this technique. Using an unencoded instruction format allows simple hardware decoding of instructions. Fig. 4.7 shows the instruction formats for the NC4016. Fig. 4.7(a) shows the instruction format for subroutine calls. In this format, the highest bit of the instruction is set to 0, and the remainder of the instruction is used to hold a 15-bit subroutine address. This limits programs to 32K words of memory. Fig. 4.7(b) shows the conditional branch instruction format. Bits 12 and 13 select either a branch if T is zero, an unconditional branch, or a decrement and branch-if-zero using the index register for implementing loops. 
Bits 0-11 specify the lowest 12 bits of the target address, restricting ", "content": "control the operation of the shifter at the ALU output. Bit 2 specifies a non-restoring division cycle. Bit 3 enables shifting of the T and N registers connected as a 32-bit shift register. Bit 5 of the ALU instruction indicates a subroutine return operation. This allows subroutine returns to be combined with preceding arithmetic operations to obtain free subroutine returns in many cases. Bit 6 specifies whether a stack push is accomplished and, combined with bit 4, controls pushing and popping of stack elements. Bits 7 and 8 control the input select for the ALU, as well as allowing specification of step operations for the iterative multiply and square root functions. Bits 9-11 specify the ALU function to be performed. Fig. 4.7(d) shows the format for memory reference instructions. These instructions take two clock cycles: one cycle for the instruction fetch and one cycle for the actual reading or writing of the operand. The address for the memory access is always taken from the T register. Bit 12 indicates whether the operation is a memory read or write. Bits 0-4 specify a small constant to be added to or subtracted from the address value to perform autoincrement and autodecrement addressing functions. Bits 5-11 of the instruction specify ALU control functions almost identical to those used in the ALU instruction format. Fig. 4.7(e) shows the miscellaneous instruction format. This instruction is used to read or write the 32-word user space residing in the first 32 words of program memory, saving the time taken to push a memory address onto the stack before performing a fetch or store. It is also used to transfer values between registers within the chip, and to push either a 5-bit literal (in a single clock cycle) or a 16-bit literal (in two clock cycles) onto the stack. Bits 5-11 of the instruction specify ALU control functions similar to the ALU instruction format. The NC4016 was specifically designed to execute the Forth language. With the unencoded format, many instructions for machine operations that correspond to a sequence of Forth operations can be encoded in a single instruction. Table 4.2 shows the Forth primitives and instruction sequences supported by the NC4016", "url": "RV32ISPEC.pdf#segment47", "timestamp": "2023-11-05 21:10:39", "segment": "segment47", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.4.4 Architectural features The internal structure of the NC4016 is designed 
for single-clock-cycle instruction execution. All primitive operations except memory fetch, memory store, and long literal fetch execute in a single clock cycle. This requires many more on-chip interconnection paths than are present on the Canonical Stack Machine, but provides much better performance. The NC4016 allows combining nonconflicting sequential operations into the same instruction. For example, a value can be fetched from memory and added to the top stack element using the sequence @+ in a Forth program. These operations can be combined into a single instruction on the NC4016. The NC4016 subroutine return bit allows combining a subroutine return ", "content": "with other instructions in a similar manner. This results in subroutine exit instructions executing for free in combination with other instructions. An optimization performed by NC4016 compilers is tail-end recursion elimination. Tail-end recursion elimination involves replacing a subroutine call/subroutine exit instruction pair with an unconditional branch to the subroutine that would have been called. Another innovation of the NC4016 is a mechanism to access the first 32 locations of program memory as global user variables. This mechanism can ease the problems associated with implementing high level languages by allowing key information, such as a task pointer or an auxiliary stack in main memory, to be kept in a rapidly accessible variable. It also allows reasonable performance when using high level language compilers that may have originally been developed for register machines, by allowing the 32 fast-access variables to be used to simulate a register set", "url": "RV32ISPEC.pdf#segment48", "timestamp": "2023-11-05 21:10:39", "segment": "segment48", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.4.5 Implementation and featured application areas The NC4016 is implemented using fewer than 4000 gates on a 3.0 micron HCMOS gate array technology, packaged in a 121-pin Pin Grid Array (PGA). The NC4016 runs at up to 8 MHz. When the NC4016 was designed, gate array technology did not permit placing the stack memories on-chip. 
Therefore a minimum NC4016 system consists of three 16-bit memories: one for programs and data, one for the data stack, and one for the return stack. Because the NC4016 executes most instructions, including conditional branches and subroutine calls, in a single cycle, there is a significant amount of time between the beginning of the clock cycle and the time that the memory address is valid for fetching the next instruction. This time is approximately half the clock cycle, meaning that program memory access time must be approximately twice as fast as the clock rate. The NC4016 was originally designed as a proof-of-concept and prototype machine. It therefore has some inconveniences that can be largely overcome by software and external hardware. For example, the NC4016 was intended to handle interrupts, but a bug in the gate array design causes improper interrupt response. Novix has since published an application note showing how to use a 20-pin PAL to overcome this problem. A successor product will eliminate these implementation difficulties and add additional capabilities. The NC4016 is aimed at the embedded control market. It delivers very high performance with a reasonably small system. Among the appropriate applications for the NC4016 are: laser printer control, graphics CRT display control, telecommunications control (T1 switches, facsimile controllers, etc.), local area network controllers, and optical character recognition. The information in this section is derived from Golden et al. (1985), Miller (1987), Stephens & Watson (1985), and Novix\u2019s Programmers\u2019 Introduction to the NC4016 Microprocessor (Novix 1985). 
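The timing relationship described above (the next-instruction address becomes valid roughly halfway through the cycle, so program memory must respond within the remaining half cycle) can be made concrete with a small calculation. A rough sketch, assuming the half-cycle figure stated in the text; the function name is illustrative:

```python
def required_memory_access_ns(clock_mhz, address_valid_fraction=0.5):
    """Return the memory access time (ns) left after the address is valid.

    If the address becomes valid address_valid_fraction of the way into the
    cycle, memory must deliver data in the remaining fraction of the cycle.
    """
    cycle_ns = 1000.0 / clock_mhz          # clock period in nanoseconds
    return cycle_ns * (1.0 - address_valid_fraction)

# At the NC4016's 8 MHz maximum, the cycle is 125 ns, so program memory
# must deliver the next instruction in roughly 62.5 ns.
```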
", "content": "", "url": "RV32ISPEC.pdf#segment49", "timestamp": "2023-11-05 21:10:39", "segment": "segment49", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.5.3 Instruction set summary The instructions of the RTX 2000 are quite similar in function to those of the NC4016, but are sufficiently different in format to merit a separate descrip\u00ad tion. Fig. 4.9 shows the instruction formats for the RTX 2000. Fig. 4.9(a) shows the instruction format for subroutine calls. In this format, the highest bit of the instruction is set to 0, and the remainder of the instruction is used to hold a 15-bit subroutine address. This limits programs to 32K words of memory. Fig. 4.9(b) shows the conditional branch instruction format. Bits 11 and 12 select either a branch if T is zero (with either a conditional or an unconditional popping of the data stack), an unconditional branch, or a decrement and branch-if-zero using the index register for implementing loops. Bits 0-8 specify the lowest 9 bits of the target address, while bits 9 and 10 control an incrementer/decrementer for the upper 6 bits of the branch address to allow branching within the same 512-byte-memory page, to adjacent pages, or to page 0. 
", "content": "fig 49 c shows format alu instruction bits 03 control operation shifter shifts output alu bit 5 alu instruction indicates subroutine return operation allows subroutine returns combined preceding arithmetic operations obtain free subroutine returns many cases bits 811 select alu function bit 7 controlling whether output alu inverted fig 49 shows format alu instruction multistep mode format quite similar alu instruction format bits 03 select shift control function bit 5 controls subroutine return function bits 911 selects alu operation multistep mode bit 8 selects either multiplydivide register square root register special operations bits 67 select special multistep control functions primary use multistep mode repeated multiplication division step operations fig 49 e shows format memory reference instruction instructions take two clock cycles one instruction fetch one actual reading writing operand address memory access always taken register bit 12 selects either byte word memory access since rtx 2000 uses word memory addresses bit selects low halfhigh half full word operation selected memory word bits 6 7 indicate whether operation memory read write well control information bits 04 specify small constant added subtracted value perform autoincrement autodecrement addressing function bits 811 instruction specify alu functions alu instruction format fig 49 f shows miscellaneous instruction format instruction used read write word 32word relocatable user space saving time taken push memory address stack performing fetch store also used transfer registers within chip push either 5bit literal single clock cycle 16bit literal two clock cycles onto stack bits 811 instruction specify alu functions alu instruction format rtx 2000 specifically designed execute forth language unencoded format many instructions machine operations correspond sequence forth operations encoded single instruction table 43 shows forth primitives instruction sequences executed rtx 2000", 
"url": "RV32ISPEC.pdf#segment50", "timestamp": "2023-11-05 21:10:39", "segment": "segment50", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.5.5 Implementation and featured application areas The RTX 2000 is implemented on 2.0 micron CMOS standard cell techno\u00ad logy, packaged in an 84-pin Pin Grid Array (PGA). The RTX 2000 runs at up to 10 MHz. A large advantage of standard cell technology is that RAM and logic may be mixed on the same chip, allowing both the return stacks and the data stacks to be placed on-chip. Because the RTX 2000 executes most instructions, including conditional branches and subroutine calls, in a single cycle, there is a significant amount of time between the beginning of the clock cycle and the time that the memory address is valid for fetching the next instruction. This time is approximately half the clock cycle, meaning that program memory must be approximately twice as fast as the clock rate. While the RTX 2000 was originally based on the NC4016 design, it has been substantially improved and does not have the hardware anomalies found on the NC4016. The RTX 2000 is aimed at the high end 16-bit microcontroller market. Because it is implemented with semicustom technology, specialized versions of the processor can be made for specific design applications. Some possible applications include laser printer control, Graphics CRT display control. 
", "content": "information section derived rtx2000 data sheet harris semiconductor 1988a rtx 2000 instruction set manual harris semiconductor 1988b", "url": "RV32ISPEC.pdf#segment51", "timestamp": "2023-11-05 21:10:39", "segment": "segment51", "image_urls": [], "Book": "stack_computers_book" }, { "section": "4.5.6 Standard cell designs The ", "content": "harris rtx 2000 derives many benefits fact built using standard cell technology instead gate array difference gate array designer customizing regular pattern preplaced logic gates silicon standard cell design designer working library logic functions arbitrarily arranged silicon predetermined gate arrangement scheme used gate arrays predefined memory areas coming use flexibility afforded standard cell design techniques equalled gate array approach thus major differences nc4016 rtx 2000 become apparent rtx 2000 able take advantage flexibility standard cells include stack ram cells onchip flexibility family rtx 2000 processors differing capabilities planned using core processor large standard cell design process addition standard product versions rtx family users benefit applicationspecific hardware examples specialpurpose hardware include serial communication ports fft address generators data compression circuitry hardware might otherwise built offchip standard cell technology users tailored versions chip made use tailoring include process technology gets denser 20 microns significant amount program ram rom onchip", "url": "RV32ISPEC.pdf#segment52", "timestamp": "2023-11-05 21:10:40", "segment": "segment52", "image_urls": [], "Book": "stack_computers_book" }, { "section": "CHAPTER 5 ", "content": "architecture 32bit systems 32bit stack computers beginning come production 1989 soon play central role future stack machines section 51 shall discuss strengths problems associated 32bit stack processors section 52 shall discuss johns hopkins universityapplied physics laboratory frisc 3 design also known silicon composers 
sc32 frisc 3 hardwired stack processor designed spirit nc4016 successors flexibility uses fairly small onchip stack buffers managed automatic buffer control circuitry section 53 shall discuss harris rtx 32p design rtx 32p microcoded processor descendent wisc cpu16 twochip implementation wisc cpu32 processor rtx 32p uses rambased microprogram memory achieve flexibility also rather large onchip stack buffers rtx 32p prototype processor commercial 32bit stack processor development section 54 shall discuss wright state university sf1 design sf1 actually ml1 stack machine uses stack frames multiple hardware stacks support c conventional languages however sf1 strong mlo roots forms interesting example ml1 design stacks mlo designs implementation strategies three processors quite different accomplish goal high speed execution stack programs", "url": "RV32ISPEC.pdf#segment53", "timestamp": "2023-11-05 21:10:40", "segment": "segment53", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.1 WHY 32-BIT SYSTEMS? 
", "content": "16bit processors described chapter 4 sufficiently powerful wide variety applications especially embedded control environ ment applications require added power 32 bit processor applications involve extensive use 32bit integer arithmetic large memory address spaces floating point arithmetic one difficult technical challenges arises designing 32 bit stack processor management stacks brute force approach separate offchip stack memories manner nc4016 unfortunately 32bit design requires 64 extra pins data bits making approach unpractical costsensitive appli cations frisc 3 solves problem maintaining two automatically managed topofstack buffers onchip using normal ram data pins spill individual stack elements program memory rtx 32p simply allocates large amount chip space onchip stacks performs block moves stack elements memory stack spilling chapter 6 goes detail tradeoffs involved approaches", "url": "RV32ISPEC.pdf#segment54", "timestamp": "2023-11-05 21:10:40", "segment": "segment54", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.2.2 Block diagram Fig. 5.1 is an architectural block diagram of the FRISC 3. The Data Stack and Return Stack are implemented as identical hardware stacks. They each consist of a stack pointer with special control logic feeding an address to a 16-element by 32-bit stack memory arranged as a circular buffer. The top four elements of both stacks are directly readable onto the Bbus. In addition, the topmost element of the Data Stack may be read onto the Tbus (Top-of-stack bus) and the topmost element of the Return Stack may be read onto the Abus (return Address bus). 
Both stack buffers are ", "content": "dual-ported, which allows two potentially different elements of the stacks to be read simultaneously, while one stack element may be written at a time. One of the innovative features of the FRISC 3 is the use of stack management logic associated with the stack pointers. This logic automatically moves stack items between the 16-word on-chip stacks and a program memory stack spilling area to guarantee that the on-chip stack buffers never experience overflow or underflow. The logic steals program memory cycles from the processor to accomplish this, avoiding extra stack data pins on the chip in exchange for a small performance degradation spread throughout program execution. The designers of the FRISC 3 call this feature a stack cache, since it caches the top stack elements for quick access on-chip. This cache does not, like normal data and instruction caches, employ an associative memory lookup structure to allow access to data residing in scattered areas of memory. The ALU section of the FRISC 3 includes a standard ALU fed by latches from the Bbus and Tbus. The two ALU sources on separate busses allow the topmost data stack element (via the Tbus) and any of the top four data stack elements (via the Bbus) to be operated upon in an instruction. Since the data stack is dual-ported, the Bbus can feed a non-stack bus source to the B side of the ALU as well. The latches on the Bbus and Tbus feeds to the ALU inputs are used to capture data in the first half of a clock cycle. This allows the Bbus to be used to write data from the ALU to registers within the chip in the second half of the clock cycle. The shift block on the B input of the ALU can be used to shift the B input left by one bit for division, or can pass data through unshifted. Similarly, the shift unit on the ALU output can shift data right by one bit for multiplication if desired. Feeding the Bbus latch from the ALU output allows pointer-plus-offset addressing for memory accesses. In the first clock cycle of a memory fetch or store, the ALU adds a literal field value (via the Tbus) to a selected data stack word (on the Bbus). In the second cycle, the Bbus is used to transfer data between the selected bus destination and memory. The flag register FL is used to store one of 16 selectable condition codes generated by the ALU, for use in conditional branches and multiple precision arithmetic. The ZERO register is used to supply the constant value 0 to the Bbus. Four user registers are provided to store pointers and memory values. Two other registers are reserved for use by the stack control logic to store the location of the top element of the program-memory-resident portions of the data stack and return stack. The program counter PC is used to supply the Abus with program memory addresses for fetching instructions. The PC may also be routed via the ALU to the return stack for subroutine calls. The return stack may be used to drive the Abus instead of the PC for subroutine returns. The instruction register may be used to drive the Abus for instruction fetching on subroutine calls and branching", "url": "RV32ISPEC.pdf#segment55", "timestamp": "2023-11-05 21:10:40", "segment": "segment55", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.2.3 Instruction set summary Fig. 5.2 shows FRISC 3\u2019s four instruction formats: one for control flow, one for memory loads and stores, one for ALU operations, and one for shift operations. The FRISC 3 uses unencoded instruction formats similar in spirit to those found on the NC4016, RTX 2000, and M17. All instruction formats use the highest 3 bits of the instruction to specify the instruction type. Fig. 5.2(a) shows the control flow instruction format. The three control flow instructions are subroutine call, unconditional branch, and conditional branch. The conditional branch instruction is taken if the FL register was set to zero by the most recent instruction to set the FL register. The address field contains a 29-bit absolute address. Unconditional branches may be used by the compiler to accomplish tail-end recursion elimination. Fig. 5.2(b) shows the memory access instruction format. Bits 0-15 contain an unsigned offset to be added to the address supplied by the bus source operand. This is accomplished by latching the bus source and the offset field from the instruction at the ALU inputs, performing an addition, and routing the resultant ALU output to the Abus for memory addressing. Bits 16-19 specify control information for incrementing and decrementing the Return Stack Pointer and/or Data Stack Pointer. Bits 20-23 specify the Bbus Destination. 
In this notation, \u2018TOS\u2019 means Top of Data Stack, \u2018SOS\u2019 means Second on Data Stack, \u20183OS\u2019 means 3rd element of Data Stack, \u2018TOR\u2019 means Top of Return Stack, etc. Bits 24-27 specify the Bus Source for the Bbus in a similar manner. Bit 28 specifies whether the next instruction fetched will be addressed by the top element of the Return Stack or the Program Counter. Using bit 28 to specify the Return Stack as the instruction address is combined with a Return Stack pop operation to accomplish a subroutine return in parallel with other operations. Bits 29-31 specify the instruction type. For the memory access format instructions, the four possible instructions are: load from memory, store to memory, load address (low), and load address (high). The load and store from/to memory instructions use the bus source to supply an address, and the bus destination field to specify the data register destination or source. The load and store instructions are the only instructions that take two clock cycles, since they must access memory twice to accomplish both data movement and the next instruction fetch. The two load address instructions simply load the computed memory address into the destination register without accessing memory at all. This may also be thought of as an add-immediate instruction. The load address high instruction shifts the offset left 16 bits before performing the addition. The load address instructions are also the means for loading literal values, since the address register can be selected to the ZERO register. In this manner a load address high followed by a load address low instruction can be used to synthesize a full 32-bit literal. Fig. 5.2(c) shows the ALU instruction format. 
Bits 0-6 of this instruction ", "content": "format specify the ALU operation to be performed, with one side of the ALU connected to the Tbus and the B side connected to the Bbus. Bit 7 enables loading the FL register with the condition code selected by bits 10-13 of the instruction. The condition codes provide various combinations of the zero bit, negative bit, carry bit, and overflow bit, as well as the constants 0 and 1. Bits 8-9 select the carry-in for the ALU operation. Bit 14 selects whether the actual ALU result or the contents of the FL register is driven onto the Bbus. Bit 15 is 0, indicating that the instruction is an ALU operation. Bits 16-28 are identical to the memory access instruction format shown in Fig. 5.2(b). Bits 29-31 specify the ALU/shift operation instruction type. Fig. 5.2(d) shows the shift instruction format. Bit 0 of this instruction format is unused. Bit 1 specifies whether the FL register input is taken from the condition codes selected by bits 10-13 or from the shift-out bit of the selected shift register. Bits 2-3 select special step operations for performing multiplication and restoring division. Bit 4 selects whether the shift-right input bit comes from the FL register or an ALU condition code. Bits 5-6 specify either a left or right shift operation. Bit 7 specifies whether the FL register is loaded with the shift output bit or with the condition code generated by bits 10-13 when bit 1 is set. Bits 8-9 select the carry-in for the ALU operation. Bit 14 determines whether the ALU output or the FL register is driven onto the Bbus. Bit 15 is 1, indicating that the instruction is a shift operation. Bits 16-28 are identical to the memory access instruction format shown in Fig. 5.2(b). Bits 29-31 specify the ALU/shift operation instruction type. All instructions execute in one clock cycle, with the exception of the memory load and memory store instructions, which take two clock cycles. Each clock cycle is broken into an execution source phase and a destination phase. In the source phase, the selected Bbus source and Tbus value are read into the ALU input latches. In the destination phase, the Bbus destination is written. Each instruction is fetched in parallel with the execution of the previous instruction, so subroutine calls are accomplished in a single clock cycle, and subroutine returns do not take extra time to the extent that they can be combined with other instructions. The usual Forth primitives, as well as manipulations of the top four stack elements of the data stack and return stack, are supported by the FRISC 3 instruction set. Table 5.1 shows a representative sample of FRISC 3 instructions", "url": "RV32ISPEC.pdf#segment56", "timestamp": "2023-11-05 21:10:40", "segment": "segment56", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.2.4 Architectural features Like all the other machine designs discussed so far, the FRISC 3 has a separate memory address bus (the Abus) for fetching instructions in parallel with other operations. In addition, the FRISC 3 does not have a dedicated top-of-stack register for the Data Stack, but instead uses a dual-ported stack memory to allow arbitrary access to any of the top four stack elements. This provides a more general capability than a pure stack machine and can speed up some code sequences. ", "content": "The stack control logic provides a means to prevent catastrophic stack overflow and underflow during program execution by dribbling elements onto and off of the stack to keep at least 4 elements on the stack at all times without overflowing. This demand-fed approach to stack buffer management is discussed in greater detail in Section 6.4.2.2. Each stack of 16 words is used as a circular buffer. The stack controllers perform stack data movement to and from memory whenever there would be fewer than four or more than 12 elements on the on-chip stack. Movement is performed one element at a time, since the stack pointers can only be incremented or decremented once per instruction. Each stack element transfer to or from memory consumes two clock cycles. Chapter 6 discusses the cost of these extra cycles, which the FRISC 3 designers claim is typically 2% of overall program execution time for this machine", "url": "RV32ISPEC.pdf#segment57", "timestamp": "2023-11-05 21:10:40", "segment": "segment57", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.3.1 Introduction The Harris Semiconductor RTX 32P is a 32-bit member of the Real Time Express (RTX) processor family. The RTX 32P is a prototype machine that is the basis of Harris\u2019 commercial 32-bit stack machine design. The RTX 32P is a CMOS chip implementation of the WISC Technologies CPU/32 (Koopman 1987c) which was originally built using discrete TTL components. The CPU/32 was in turn developed from the WISC CPU/16 described in Chapter 4. 
Because of this history, the RTX 32P is a microcoded machine, with on-chip microcode RAM and on-chip stacks. The RTX 32P is a 2-chip stack processor designed primarily for maximum flexibility as an architectural evaluation platform. It contains very large data and return stacks on-chip, as well as a large amount of on-chip microcode memory. This large amount of high speed RAM forced the design to use two chips, but this was consistent with the goal of producing a research and development vehicle. Real-time control is the primary application area for the RTX 32P. ", "content": "The primary language for programming the RTX 32P is Forth. However, the RTX 32P's commercial successor will be enhanced for excellent support of conventional languages such as C, Ada, and Pascal, as well as special purpose languages such as LISP, Prolog, and functional programming languages. An important design philosophy of the RTX 32P is that, as processor speeds increase, the ALU can be cycled twice for every off-chip memory access. Therefore, the RTX 32P executes two microinstructions for each main memory access, including instruction fetches. Every instruction is two clock cycles in length, with a different microinstruction executed on each clock cycle. The reasons for adopting this strategy are discussed at greater length in Section 9.4", "url": "RV32ISPEC.pdf#segment58", "timestamp": "2023-11-05 21:10:40", "segment": "segment58", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.3.2 Block diagram Fig. 5.3 is an architectural block diagram of the RTX 32P. The Data Stack and Return Stack are implemented as identical hardware stacks consisting of a 9-bit up/down counter (the Stack Pointer) feeding an address to a 512-element by 32-bit-wide memory. The stack pointers are readable and writable by the system to provide an efficient way of accessing deeply buried stack elements. The ALU section includes a standard multifunction ALU with a DHI register for holding intermediate results. By convention, the DHI register acts as a buffer for the top stack element. 
This means that the Data Stack Pointer actually addresses the element perceived by the programmer to be the second-from-top stack element. The result is that an operation on the top two stack elements, such as addition, can be performed in a single cycle, with the B side of the ALU reading the second stack element from the Data Stack and the A side of the ALU reading the top stack element from the Data Hi register. The Data Latch on the B side of the ALU input is a normally transparent latch that can be used to retain data for one clock cycle. This speeds up swap operations between the DHI register and the Data Stack. There are no condition codes visible to machine language programs. Add with carry and other multiple precision operations are supported by microcoded instructions that push the carry flag onto the data stack as a logical value (0 for carry clear, -1 for carry set). The DLO register acts as a temporary holding register for intermediate results within a single instruction. Both the DHI and the DLO registers are shift registers, connected to allow 64-bit shifting for multiplication and division. An off-chip Host Interface is used to connect to the personal computer host. Since all on-chip storage is RAM-based, an external host is required for initializing the CPU. The RTX 32P has no program counter. Every instruction contains the address of the next instruction or refers to the address on the top of the return address stack. This design decision is in keeping with the observation that Forth programs contain a very high proportion of subroutine calls. 
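The program-counter-less control flow described above can be sketched as a tiny interpreter: every instruction carries the address of its successor (or designates a return), and calls push a resume address onto the return stack. The instruction tuples and opcode names here are illustrative, not the RTX 32P's actual encoding.

```python
def run(memory, start, stack, rstack):
    """Execute instructions threaded by explicit next-address fields.

    memory maps address -> (op, arg, next_addr). A 'return' with an empty
    return stack halts the machine (a simplification for this sketch).
    """
    addr = start
    while addr is not None:
        op, arg, next_addr = memory[addr]
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "call":
            rstack.append(next_addr)   # remember where to resume
            next_addr = arg            # jump to the subroutine
        elif op == "return":
            next_addr = rstack.pop() if rstack else None
        addr = next_addr
    return stack
```

Because the successor address travels with the instruction, a call or jump costs nothing extra: the fetch logic simply follows a different pointer, which is exactly why this design suits Forth's call-heavy programs.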
", "content": "section 633 discusses affects rtx 32p instruction format greater detail instead program counter block described memory address logic contains next address register nar holds pointer fetching next instruction memory address logic uses top element return stack address memory subroutine returns uses ram address register addr reg memory fetches stores efficiently memory address logic also contains incrementby4 circuit generating return addresses subroutine call operations since return stack memory address logic isolated system data bus subroutine calls subroutine returns unconditional jumps performed parallel operations results control transfer operations costing zero clock cycles many cases program memory organized 4g bytes memory addressable byte boundaries instructions 32bit data items required aligned 32bit memory boundaries since data accessed 32bit words memory actual rtx 32p chips address 8m bytes limited number pins package microprogram memory onchip readwrite memory containing 2k elements 30 bits memory addressed 256 pages 8 words opcode machine allocated page 8 words microprogram counter supplies 9bit page address lowest 8 bits used implementation scheme allows supplying 3 bits current microinstruction lowest bit result lin8 conditional microbranch selection address next microinstruction within microcode page allows conditional branching looping execution single opcode instruction decoding accomplished simply loading 9bit opcode microprogram counter using page address microprogram memory since microprogram counter built counter circuit operations span one 8microinstruction page required microinstruction register mir holds output micropro gram memory allows next microinstruction accessed microprogram memory parallel execution current microin struction mir completely removes microprogram memory access delay system critical path use also enforces lower limit two clock cycles instructions instruction could accomplished single clock cycle second 
no-op microinstruction must be added to allow the next instruction to flow through the MIR in the proper fetching sequence.
The Host Interface allows the RTX 32P to operate in two possible modes: Master Mode and Slave Mode. In Slave Mode, the RTX 32P is controlled by the personal computer host to allow program loading, microprogram loading, and alteration of any register or memory location for system initialization and debugging. In Master Mode, the RTX 32P runs its program freely while the host computer monitors a status register for requests for service from the RTX 32P. In Master Mode the host computer may enter a dedicated service loop, or it may perform tasks such as prefetching the next block of a disk input stream or displaying an image while periodically polling the status register; the RTX 32P can wait for service from the host as long as necessary.

5.3.3 Instruction set summary

The RTX 32P has only one instruction format, shown in Fig. 5.4. Every instruction contains a 9-bit opcode which is used as the page number for addressing microcode. It also contains a 2-bit program flow control field that invokes either an unconditional branch, a subroutine call, or a subroutine exit. In the case of either a subroutine call or an unconditional branch, bits 2-22 are used to specify the high 21 bits of a 23-bit word-aligned target address. This design limits program sizes to 8M bytes unless the page register in the Memory Address Logic is used with special far jump and call instructions. Data fetches and stores see the memory as a contiguous 4G byte address space.
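The single instruction format can be sketched as a decode routine. Bits 2-22 holding the high 21 bits of a word-aligned target are stated above; placing the 2-bit flow field in bits 0-1 and the 9-bit opcode in bits 23-31 is an assumption made here so that the fields tile a 32-bit word:

```python
# Hypothetical decoder for the RTX 32P's single instruction format.
# Stated in the text: a 9-bit opcode (the microcode page number), a 2-bit
# program flow control field, and bits 2-22 = the high 21 bits of a
# 23-bit word-aligned target address. The exact positions of the opcode
# and flow fields are assumptions for illustration.

def decode(instr):
    flow = instr & 0x3                        # assumed bits 0-1: flow control
    target = ((instr >> 2) & 0x1FFFFF) << 2   # bits 2-22, low 2 bits zero
    opcode = (instr >> 23) & 0x1FF            # assumed bits 23-31: opcode
    microcode_entry = opcode * 8              # each opcode owns an 8-word page
    return opcode, flow, target, microcode_entry

op, flow, target, entry = decode((0x1A3 << 23) | (0x1000 << 2) | 0b01)
assert (op, flow, target) == (0x1A3, 1, 0x4000)
assert entry == 0x1A3 * 8   # first microinstruction of the opcode's page
```

Note how instruction decode reduces to a multiply-by-8: loading the opcode into the microprogram counter directly selects the 8-word microcode page.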
", "content": "wherever possible rtx 32p compiler compacts opcode followed subroutine call return jump single instruction cases compaction possible nop opcode compiled call jump return jump next inline instruction compiled opcode tailend recursion elimination performed compressing subroutine call followed subroutine return simple jump beginning subroutine called saving cost return would otherwise executed calling routine since rtx 32p uses ram microcode memory microcode may completely changed user desired standard software environment cpu32 version mvpforth forth79 dialect haydon 1983 forth instructions included standard microcoded instruction set shown table 52 one thing rtx 32p instruction set may extended user incorporate stack manipulation primitives required particular application note rtx 32p uses ram microcode memory user may add modify instructions desired list merely indicates instructions supplied standard development software package noticeable instruction set number complexity instructions supported table", "url": "RV32ISPEC.pdf#segment60", "timestamp": "2023-11-05 21:10:41", "segment": "segment60", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.2(b) ", "content": "shows common forth word combinations available single instructions table", "url": "RV32ISPEC.pdf#segment61", "timestamp": "2023-11-05 21:10:41", "segment": "segment61", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.2(c) ", "content": "shows words used support underlying forth operations subroutine call exit table", "url": "RV32ISPEC.pdf#segment62", "timestamp": "2023-11-05 21:10:41", "segment": "segment62", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.2(d) ", "content": "lists high level forth words directly supported specialized microcode table", "url": "RV32ISPEC.pdf#segment63", "timestamp": "2023-11-05 21:10:41", "segment": "segment63", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.2(e) ", "content": 
"shows words added microcode support extended precision integer operations 32bit floating point calculations since instructions vary considerably complexity execution time instructions ranges accordingly simple instructions manipulate data stack swap take 2 microcycles one memory cycle complex microinstructions q 128bit addition may take 10 microinstructions still much faster comparable high level code desired microcoded loops written potentially last thousands clock cycles things block memory moves mentioned earlier instruction invokes sequence microin structions microprogram memory page corresponding 9bit opcode instruction fig 55 shows microinstruction format microcode used horizontal means one format microcode format broken separate fields control different portions machine wisc cpu16 simplicity stack machine approach rtx 32p hardware results simple microcode format case using 30 bits per microinstruction microcode format rtx 32p similar cpu16 discussed previous chapter bits 03 microinstruction specify source system data bus two bus sources used special control signals configure rtx 32p oneclockcycleperbit multiplication nonrestoring division 3264bit numbers bits 89 specify data bus destination two special cases destina tions exist dlo may independently specified bus destination using bits 2223 dhi register always loaded alu output bits 89 1011 specify data stack pointer return stack pointer control respectively bits 1213 control shifter output alu shifter allows shifting left right well 8bit rotation function bits 1415 microinstruction unused therefore included microcode ram bits 1620 control function alu bit 21 specifies carryin 0 1 synthesize multiple precision arithmetic microcode conditional microbranch based carryout low half result forces next carryin 0 1 appropriate bits 2223 control loading shifting dlo register bits 2429 microinstruction used compute 3bit offset microprogram page fetching next microinstruction bits 2426 select one eight condition 
codes form lowest address bit bits 2728 used constants generate two high order address bits allows jumping 2way conditional branching anywhere within microprogram page every clock cycle bit 29 used increment contents 9bit micro program counter allow opcodes use 8 microcode memory locations bit 30 initiates instruction decod ing sequence next instruction required since instructions variable number clock cycles long bit 31 controls return address incrementer use counter memory block data accesses one microinstruction executed every clock cycle two microinstructions executed every machine macroinstruction", "url": "RV32ISPEC.pdf#segment64", "timestamp": "2023-11-05 21:10:41", "segment": "segment64", "image_urls": [], "Book": "stack_computers_book" }, { "section": "5.3.4 Architectural features The heritage of the WISC CPU/16 in the RTX 32P architecture is unmistak\u00ad able . The most obvious area of improvement is the addition of more efficient Memory Address Logic and the isolation of the Return Address Stack from the Data Bus during subroutine call and return operations. 
These changes, along with the RTX 32P's unique instruction format, allow subroutine calls, returns, and jumps to be processed for free to the extent that they can be combined with opcodes. The RTX 32P's clock runs at twice the speed at which main memory is accessed, giving two clock cycles per memory cycle and a minimum of two clock cycles per instruction.
There are a number of uses for the RTX 32P instruction format that are not immediately obvious. One of these is in executing conditional branches. The RTX 32P has no direct hardware support for conditional branches, since this would either slow down the rest of the hardware too much or require excessively fast program memory. Conditional branches are instead accomplished using a special 0BRANCH opcode combined with a subroutine call to the branch target. The subroutine call is processed by the hardware in parallel with the opcode's evaluation of whether the top stack element is zero, in which case the branch is to be taken. If the branch is taken, the return stack is popped, converting the subroutine call into a jump, and execution continues at the branch target. If the branch is not taken, the microcode pops the return stack and uses the value to fetch the branch fall-through instruction, in effect performing an immediate subroutine return. The cost of a conditional branch is 3 clock cycles to take the branch and 4 clock cycles not to take the branch (remember that a processor memory cycle is 2 clock cycles).
Another interesting capability of the RTX 32P is quick access to a memory location used as a variable, even though the 0-operand instruction format would seem to require a second memory location to specify the variable's address for the following operation. Here a special opcode is compiled with a subroutine call, where the address of the 'subroutine' is actually the address of the desired variable. The variable's value is fetched when the microcode steals the value read by the instruction fetching logic and forces a subroutine return before the value can be executed as an instruction.
The two methods just discussed illustrate several significant capabilities of the hardware that are not immediately obvious to programmers used to conventional machines. These capabilities are especially useful for programming data structure accesses, for example in expert system decision trees, and actually allow direct execution of data structures. Direct execution is accomplished by storing data in a tagged format, with a 9-bit tag corresponding to special user-defined opcodes and a 23-bit address forming a subroutine call or jump to the next data element in the structure, or a subroutine return for a nil
pointer.
An important implementation feature of the RTX 32P is that all resources of the machine can be directly controlled by the host computer. This is done through the Host Interface, which supports microinstruction register load and single-step clock features. With these features, any microinstruction desired can be executed by first loading values into the registers of the system, then loading the microinstruction and cycling the clock, then reading data values back to examine the results. This design technique makes writing microcode extremely straightforward by eliminating the need for expensive external analysis hardware. It also makes test and diagnostic programs simple to write.
The RTX 32P supports interrupt handling, including interrupts on stack underflow and overflow for both the data stack and the return stack. The usual technique for handling overflows and underflows is to page half of the on-chip stack contents to or from a holding area in program memory. This allows programs to use arbitrarily deep stacks, although with the 512-element hardware stack buffer size, typical Forth programs never experience a stack overflow.

5.4.1 Introduction

The Wright State University's SF1 (which stands for Stack Frame computer number 1) is an experimental multi-stack processor designed to efficiently execute high level languages, including both Forth and C.
It is designed to have a large number of stacks, with five stacks used in the implementation described here. The SF1 has its roots in the Forth language, but crosses the boundary between ML0 and ML2 machines by allowing each instruction to address elements in two stacks directly as stack memory. It has an interesting mix of features found in the FRISC 3 and the RTX 32P, as well as its own unique innovations.
Wright State University has developed a series of stack-based computers, starting with the RUFOR (Grewe & Dixon 1984), a purely Forth-based processor built from bit-slice components. In 1985-1986 a computer architecture class built a discrete component prototype of a generalized stack processor called the SF1. In 1986-1987 a VLSI class extended the architecture and made a multi-chip custom silicon implementation, the VLSI version of the SF1, which is the version described in the following sections. The intended application area of the SF1 is real-time control using Forth and C as high level languages.

5.4.2 Block diagram

Fig. 5.6 is an architectural block diagram of the SF1. The SF1 has two busses. The MBUS is multiplexed to transfer addresses to program memory and then instructions and data to or from program memory. The SBUS is used to transfer data between system resources. The two-bus design allows instructions to be fetched on the MBUS while data operations are taking place on the SBUS. The ALU has an associated top-of-stack register (TOS) which receives the results of all ALU operations. The ALU input (ALUI) register acts as a holding buffer to contain the second operand for the ALU. ALUI may be loaded from either the SBUS for most operations, or the MBUS for memory fetches. Both the ALUI and the TOS may be routed to the MBUS or the SBUS. The TOS register by convention contains the top stack element of whatever stack is being used for a particular instruction, although it is up to the programmer to ensure it is managed properly.
There are eight different sources and destinations connected to the stack bus: S, F, R, L, G, C, I, and P. S is a general-purpose stack for parameters, L is a loop counter stack, G is a global stack, F is a frame stack, R is a return address stack, C is an inline constant value, I is the I/O address space, and P is the program counter. All eight of these are referred to as 'stacks' in the machine's reference material, although in reality C, I, and P are resources with special non-stack structures.
The stacks S, L, G, F, and R are for the most part interchangeable; in practice any of them may be used as a general purpose stack, although R is somewhat specialized in that subroutine return addresses are automatically pushed onto it. Any one of the top 8192 elements of a stack may be specified as a bus source or destination. Whenever a stack is read, the top element may be either retained or popped; if the stack is popped, the top element is always shifted out of stack memory regardless of which element was actually read. Similarly, when a stack is used as a bus destination, any one of the top 8192 elements of the stack may be written, or the top of the stack may be pushed with a value from the SBUS. C, when used as a bus source, returns a 13-bit signed constant from the address field of the instruction. P may be used to load or store the program counter, and I is used to address the I/O address space of 8K words. The program counter (PC) is a counter that is asserted onto the MBUS to provide addresses for instructions, and it is loaded from the MBUS on jumps and subroutine calls. The PC may also be read and written from the SBUS to save and restore subroutine return addresses.

5.4.3 Instruction set summary

The SF1 has two instruction formats as shown in Fig. 5.7. The first instruction format is used for jumps and subroutine calls, the second for all other instructions. Fig. 5.7(a) shows the jump/subroutine call instruction format. This instruction format is selected by a 0 in bit 0 of the instruction. Bit 1 of the instruction selects a jump if set to 1, or a subroutine call if set to 0. Bits 2-31 of the instruction select a word-aligned address as the jump/call target. This instruction format is quite similar to that of the RTX 32P shown in Fig. 5.4, but without the opcode in the highest order bits.
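The jump/call format of Fig. 5.7(a), as described above, can be decoded with a few mask operations. A sketch, with the function name purely illustrative:

```python
# Sketch of SF1 jump/call decoding as described in the text: bit 0 = 0
# selects this format, bit 1 = 1 means jump and 0 means call, and bits
# 2-31 hold a word-aligned target address (the low two bits read as zero).

def decode_flow(instr):
    assert instr & 0x1 == 0          # bit 0 must be 0 for this format
    kind = "jump" if instr & 0x2 else "call"
    target = instr & 0xFFFFFFFC      # bits 2-31, word aligned
    return kind, target

assert decode_flow(0x00001000 | 0x2) == ("jump", 0x1000)
assert decode_flow(0x00002000) == ("call", 0x2000)
```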
Both jump and subroutine call instructions take one clock cycle. Fig. 5.7(b) shows the operation instruction format, which is more like the FRISC 3 ALU instruction format shown in Fig. 5.2(c). Bit 0 is a constant 1 which selects this instruction format. Bit 1 selects between a no-branch and a skip operation. If a skip operation is selected and the zero status flag is set, the next instruction in the instruction stream is fetched. This can be used to implement a conditional branch-on-zero instruction sequence. Bits 2-7 select the ALU operations. A special ALU operation returns the status flags from the previous ALU instruction. These flags can be used as an offset into a multi-way jump table for branching on multiple condition codes. This conditional branching is slower than using a skip instruction, but is more flexible. Before covering the operation of bits 8-28, we should describe the way the SBUS works during a clock cycle. The SBUS is used twice during each clock cycle. During the first half of the clock cycle, the SBUS is used to read one of 8 bus sources. The data read is always placed into the ALUI register. During the second half of the clock cycle, the ALU performs its operation on the new ALUI value and the old TOS value. Simultaneously, the old TOS value is written to one of 8 bus destinations. Bit 29 in the instruction format can override the selection of TOS as the value to be written by forcing ALUI (which was just loaded on the first half of the clock cycle) to be asserted on the SBUS during the second half of the clock cycle. Bits 8-11 select the SBUS destination. This destination is written with the TOS value set by the previous instruction during the second half of the clock cycle. Bit 8 selects whether the destination stack is pushed or just written. Similarly, bits 12-15 select the SBUS source, which is read during the first half of the clock cycle. Bit 12 selects whether the source stack is popped. 
Bits 16-28 provide an address that is used when reading and writing the stacks. This address allows reading or writing any one of the top 8K stack elements directly. Note that there is only one address in the instruction, so the source and destination stacks must both use the same address in a given cycle.
Bit 29 is used to override the selection of TOS as the value written to the SBUS destination. If this bit is set, the ALUI register is used instead of the TOS register. This allows direct data movement between two SBUS resources by loading a value into ALUI during the first half of the clock cycle and storing that value during the second half of the clock cycle.
Bits 30-31 are used to control memory accesses. Bit 31 selects an extended instruction cycle, which uses a second clock cycle to access RAM via the MBUS; the first clock cycle is used to fetch the next instruction. Bit 30 specifies a RAM read or write operation. The TOS register is read on the first clock cycle to provide the address, then read or written on the second clock cycle to provide or receive the RAM data. Note that since RAM reads and writes are performed on the second of the two clock cycles, bits 2-29 may be used to perform a normal instruction on the first of the two cycles. The first clock cycle is often used to reload the TOS register, since it contained the address for the value written to RAM on the second clock cycle.

5.4.4 Architectural features

Once again, we see the importance of providing a dedicated path for instruction fetching in the form of the MBUS, with a second path for data manipulations in the form of the SBUS. As with the other stack machines, the SF1 is designed to support fast instruction execution and, in particular, quick subroutine calls. The use of operands in the SF1 operation instruction format is novel. The use of a single top-of-stack register as one input for all ALU operations and the fact that only a single address field is provided makes the architecture feel like a 1-operand stack machine.
However, the fact that both a source and a destination may be specified for each instruction makes the machine feel more like a 2-operand machine. Perhaps this instruction format is most properly called a 1-operand instruction, since only a single address is available while both a source and a destination may be selected. The reason for having the top 8K elements of each stack directly addressable is to provide support for languages such as C which have the notion of a stack frame. In addition, one of the stacks can be used as a very large (8K word) register file by simply never pushing or popping that particular stack. The reason for having several hardware stacks is to support fast context switching in real-time control applications. Although the implementation described only contains five hardware stacks, this number can be increased in other versions of the design. A simple way to allocate the stacks would be to dedicate one hardware stack to each of four high priority tasks, with the fifth stack saved and restored to and from program memory as required to process low priority tasks.
Subroutine returns are accomplished under program control by popping the return address stack and writing the top stack element to the PC. Because of the prefetch pipeline, the instruction following the subroutine return is also executed while the return takes effect. 32-bit literal values are obtained by using PC-relative addressing with the 13-bit offset to access a constant stored in program space. The constant is typically placed after the unconditional branch or subroutine return at the end of the procedure in which it is used.

5.4.5 Implementation and featured application areas

The SF1 is implemented in 3.0 micron CMOS MOSIS technology using a full-custom approach. Two custom chip designs are used. One chip contains the ALU and PC, while the other chip implements a 32-bit-wide stack.
Control and instruction decoding is accomplished using programmable logic, but will eventually be incorporated onto custom VLSI as well. The implementation of the stack chips is quite different from what has been seen on other stack machines. Since the stack must be designed for random access of elements, an obvious design method would be to incorporate an adder with a stack pointer into a standard memory. This method has the disadvantage that it is slow and difficult to expand to multi-chip stacks. The approach taken in the SF1 is completely different. Each stack memory is actually a giant shift register that moves the stack elements between adjacent memory words when the stack is pushed or popped. Addressing the Nth word in the stack is done simply by addressing the Nth word in memory, since the top element of the stack is always kept shifted into the 0th memory address. One disadvantage of this approach is that shift register cells are larger than regular memory cells, so the largest stack chip made for the SF1 contains only 128 words by 32 bits of memory. The SF1 is primarily a research platform, with an emphasis on real-time control with fast context switching (by dedicating a stack chip to each task) and support for high level languages. The information in this section is based on the description of the SF1 given by Dixon (1987) and Longway (1988).
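The shift-register stack organization is easy to model in software. The toy class below (illustrative only, not a gate-level description) shows why addressing the Nth stack element reduces to addressing memory word N:

```python
# Toy model of the SF1's shift-register stack: pushing and popping move
# every element between adjacent memory words, so the top of stack always
# lives at address 0 and "the Nth stack element" is just memory word N.

class ShiftRegisterStack:
    def __init__(self, size=128):           # largest SF1 stack chip: 128 words
        self.mem = [0] * size

    def push(self, value):
        self.mem = [value] + self.mem[:-1]  # shift everything down one word

    def pop(self):
        top, self.mem = self.mem[0], self.mem[1:] + [0]
        return top

    def read(self, n):
        return self.mem[n]                  # Nth stack element = address N

s = ShiftRegisterStack()
s.push(10); s.push(20); s.push(30)
assert s.read(0) == 30 and s.read(2) == 10
assert s.pop() == 30 and s.read(0) == 20
```

In hardware the "shift" is a parallel move of all words at once, which is what makes random access of the top elements free of any pointer-add delay.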
", "content": "understanding stack machines preceding chapters covered abstract description stack machine several examples real stack machines built shall examine designed way stack machines certain inherent advantages conventional designs three different approaches computer design used reference points chapter first reference point complex instruction set computer cisc typified digital equipment corporation vax series microprocessors used personal computers eg 680x0 80x86 second reference point reduced instruction set computer risc patterson 1985 typified berkeley risc project sequin patterson 1982 stanford mips project hennesy 1984 third reference point stack machines described preceding chapters section 61 discusses history debates taken place years among advocates registerbased machines stack based machines storagetostoragebased machines related topic recent debates proponents high level language cisc architectures risc architectures section 62 discusses advantages stack machines stack machines smaller program sizes lower hardware complexity higher system performance better execution consistency processors many application areas section 63 presents results study instruction frequencies forth programs surprisingly subroutine calls returns constitute significant percentage instruction mix forth programs section 64 examines issue stack management using results stack access simulation results indicate fewer 32 stack elements needed many application programs section also discusses four different methods handling stack overflows large stacks demandfed stack manager paging stack manager associative cache memory section 65 examines cost interrupts multitasking stack based machine simulation shows context switching stack buffers minor cost environments furthermore cost context switching stack buffers may reduced appropria", "url": "RV32ISPEC.pdf#segment70", "timestamp": "2023-11-05 21:10:42", "segment": "segment70", "image_urls": [], "Book": "stack_computers_book" }, { 
"section": "CHAPTER 6 ", "content": "tely programmed interrupts using lightweight tasks partitioning stack buffer multiple small buffer areas", "url": "RV32ISPEC.pdf#segment71", "timestamp": "2023-11-05 21:10:42", "segment": "segment71", "image_urls": [], "Book": "stack_computers_book" }, { "section": "6.1 AN HISTORICAL PERSPECTIVE ", "content": "debate designers machines hardware supported stacks designers long history debate split two major areas debate registerbased nonregisterbased machine designers debate high level language machine designers risc designers hope put forth definitive answers questions raised debates ideas presented references given worthy consideration interested reader", "url": "RV32ISPEC.pdf#segment72", "timestamp": "2023-11-05 21:10:42", "segment": "segment72", "image_urls": [], "Book": "stack_computers_book" }, { "section": "6.1.1 Register vs. nonregister machines The debate on whether to design a machine that makes registers explicitly available to the assembly language programmer dates back to design decisions made in the late 1950s. The existence of the stack-based KDF.9 computer (Haley 1962) is evidence that computer architects had begun thinking of alternatives to the standard register-based way of designing computers many years ago. The debate on whether or not to use register-based machines involves a number of alternatives. These alternatives include: pure stack-based machines, single-operand stack-based machines (also called stack/accumu- iator machines), and storage-to-storage machines. The pure stack machines, which fall into the SSO, MSO, SLO, and MLO taxonomy categories, are exceedingly simple. An obvious argument in favor of stack-based machines is that expression evaluation requires the use of a stack-like structure. Register-based machines spend some of their time emulating a stack-based machine while evaluating expressions. 
However, pure stack machines may require more instructions than a stack/accumulator machine (SS1, MS1, SL1, ML1 taxonomy categories), since they cannot fetch a value and perform an arithmetic operation upon that value at the same time. The astute reader will notice that the 32-bit stack machines discussed in Chapter 5 use multiple-operation instructions such as "@ +" to compensate for this problem to a large degree. Storage-to-storage machines, in which all instruction operands are in memory, are seen as being valuable in running high level languages such as C and Pascal. The reason given for this is that most assignment statements in these languages have only one or two variables on the right-hand side of the assignment operator. This means that most expressions can be handled with a single instruction, which eliminates the instructions that would otherwise be required to shuffle data into and out of registers. The CRISP architecture (Ditzel et al. 1987a, 1987b) is an example of a sophisticated storage-to-storage processor. Some of the most frequently cited references in this debate are a sequence of articles that appeared in Computer Architecture News: Keedy (1978a), Keedy (1978b), Keedy (1979), Myers (1977), Schulthess & Mumprecht (1977), and Sites (1978). These articles are dated in some respects, but nonetheless form a good starting point for those interested in the historical roots of this ongoing debate.

6.1.2 High level language vs.
RISC machines

A related debate concerns the relative merits of high level language machines and the RISC philosophy of machine design. High level language machines may be thought of as one of the more advanced evolutionary paths of the CISC philosophy. These machines have potentially complex instructions that map directly onto the functions of one or more high level languages. In some cases, the output of the front end of a compiler is used to generate intermediate level code that is executed directly by the machine, such as P-code for Pascal or M-code for Modula-2. The ultimate extension of this philosophy was probably the SYMBOL project (Ditzel & Kwinn 1980), which implemented all system functions in hardware, including program editing and compilation.
The RISC philosophy of high level language support is one of providing the simplest possible building blocks for the compiler to use in synthesizing high level language operations. This usually involves sequences of loads, stores, and arithmetic operations to implement each high level language statement. RISC proponents claim that such collections of code sequences can be made to run faster on a RISC machine than the equivalent complex instructions on a CISC machine.
The stack machine design philosophy falls somewhere between the high level language and RISC philosophies. Stack machines provide simple primitives that may be executed in a single memory cycle, in the spirit of the RISC philosophy. However, efficient programs on stack machines make extensive use of application specific code accessed via cheap subroutine calls, and this collection of subroutines may be thought of as a virtual instruction set tailored to the needs of high level language compilers, without requiring complicated hardware support. A good sampling of references on the topic of high level language machines versus RISC machines includes: Cragon (1980), Ditzel & Patterson (1980), Kavipurapu & Cragon (1980), Kavi et al. (1982), Patterson & Piepho (1982), and Wirth (1987).

6.2 ARCHITECTURAL DIFFERENCES FROM CONVENTIONAL MACHINES

The most obvious difference between stack machines and conventional machines is their use of 0-operand stack addressing instead of register or memory based addressing schemes. This difference,
combined with support for quick subroutine calls, makes stack machines superior to conventional machines in the areas of program size, processor complexity, system complexity, processor performance, and consistency of program execution.

6.2.1 Program size

A popular saying is that 'memory is cheap'. Anyone who has watched the historically rapid growth in memory chip sizes knows that the amount of memory available on a processor can be expected to increase dramatically with time. The problem is that even as memory chip capacity increases, the size of the problems that people are calling upon computers to solve is growing at an even faster rate. This means that the size of programs and their data sets is growing even faster than available memory size. Further aggravating the situation is the widespread use of high level languages for all phases of programming. This results in bulkier programs, but of course improves programmer productivity. Not surprisingly, this explosion in program complexity leads to a seeming contradiction, the saying that 'programs expand to fill all available memory, and then some'. The amount of program memory available for an application is fixed by the economics of the actual cost of the memory chips and printed circuit board space. It is also affected by mechanical limits such as power, cooling, or the number of expansion slots in the system (limits which also figure in the economic picture). Even with an unlimited budget, electrical loading considerations and the speed-of-light wiring delay limit bring an ultimate limit to the number of fast memory chips that may be used by a processor. Small program sizes reduce memory costs, component count, and power requirements, and can improve system speed by allowing the cost effective use of smaller, higher speed memory chips.
Additional benefits include better performance in a virtual memory environment (Sweet & Sandman 1982, Moon 1985), and a requirement for less cache memory to achieve a given hit ratio. Some applications, notably embedded microprocessor applications, are very sensitive to the costs of printed circuit board space and memory chips, since these resources form a substantial proportion of all system costs (Ditzel et al. 1987b). The traditional solution for a growing program size is to employ a hierarchy of memory devices with a series of capacity/cost/access-time tradeoffs. A hierarchy might consist of (from cheapest/biggest/slowest to most expensive/smallest/fastest): magnetic tape, optical disk, hard disk, dynamic memory, off-chip cache memory, and on-chip instruction buffer memory. So a more correct version of the saying that \u2018memory is cheap\u2019 might be that \u2018slow memory is cheap, but fast memory is very dear indeed\u2019. The memory problem comes down to one of supplying a sufficient quantity of memory fast enough to support the processor at a price that can be afforded. This is accomplished by fitting the most program possible into the fastest level of the memory hierarchy. The usual way to manage the fastest level of the memory hierarchy is by using cache memories. Cache memories work on the principle that a small section of a program is likely to be used more than once within a short period of time. Thus, the first time a small group of instructions is referenced, it is copied from slow memory into the fast cache memory and saved for later use. 
This decreases the access delay on the second and subsequent accesses to the program fragments. Since a cache memory has a limited capacity, each instruction fetched into the cache is eventually discarded when its slot must be used to hold a more recently fetched instruction. The problem is that the cache memory must be big enough to hold enough program fragments long enough for this eventual reuse to occur.
A cache memory that is big enough to hold a certain number of instructions, called the working set, can significantly improve system performance. How does the size of a program affect this performance increase? Assume that a given number of high level language operations form the working set, and consider the effect of increasing the compactness of the instruction encoding. Intuitively, if a sequence of instructions that accomplishes a high level language statement is more compact on machine A than on machine B, then machine A needs a smaller number of bytes of cache to hold the instructions generated for the same source code as machine B. This means that machine A needs a smaller cache to achieve the same average memory response time, and therefore the same performance.
As an example of this, Davidson and Vaughan (1987) suggest that RISC computer programs are 2.5 times bigger than CISC versions of the same programs (although other sources, especially RISC vendors, would place the number at perhaps 1.5 times bigger). They also suggest that RISC computers need a cache size that is twice as large as a CISC cache to achieve the same performance. Furthermore, a RISC machine with twice the cache of a CISC machine still generates twice the number of cache misses (since a constant miss ratio generates twice as many misses on twice as many cache accesses), resulting in a need for higher speed main memory devices as well to obtain equal performance. This is corroborated by the rule of thumb that a RISC processor in the 10 MIPS (million RISC instructions per second) performance range needs 128K bytes of cache memory for satisfactory performance, while high end CISC processors typically need no more than 64K bytes.
Stack machines have much smaller programs than either RISC or CISC machines. Stack machine programs can be 2.5 to 8 times smaller than CISC code (Harris 1980, Ohran 1984, Schoellkopf 1980), although there are some limitations on this observation, discussed later. This means that a RISC processor's cache memory may need to be bigger than a stack processor's entire program memory to achieve comparable memory response times. As anecdotal evidence of this effect, consider the following
situation: Unix/C programmers on RISC processors are unhappy with less than 8M to 16M bytes of memory and want a 128K byte cache, while Forth programmers are still engaged in heated debate as to whether more than 64K bytes of program space is really needed on stack machines.

The small program size of stack machines not only decreases system costs by eliminating memory chips, but can actually improve system performance. This happens by increasing the chance that an instruction will be resident in high speed memory when needed, possibly by using the small program size as a justification for placing an entire program in fast memory.

Two factors account for the extremely small program sizes possible on stack machines. The obvious factor, and the one usually cited in the literature, is that stack machines have small instruction formats. Conventional architectures must specify on each instruction not only the operation, but also operands and addressing modes. For example, a typical register-based machine instruction to add two numbers together might be ADD R1,R2. This instruction must specify not only the ADD opcode, but also the fact that the addition is done on two registers, and that the registers are R1 and R2.

On the other hand, a stack-based instruction set need only specify an ADD opcode, since the operands have an implicit address: the current top of the stack. The only time an operand is present is when performing a load or store instruction, or pushing an immediate data value onto the stack. The WISC CPU/16 and Harris RTX 32P use 8 and 9-bit opcodes, respectively, yet have many more opcodes than are actually needed to run programs efficiently. The more loosely encoded instructions found in the other processors discussed in this book, exemplified by the Novix NC4016, allow packing 2 operations into an instruction, achieving with little sacrifice the code density of a byte-oriented machine.

The less obvious, but actually more important, reason that stack machines have compact code is that they efficiently support code with many frequently reused subroutines, often called threaded code (Bell 1973, Dewar 1975). While such code is possible on conventional machines, the execution speed penalty is severe. In fact, one of the most elementary compiler optimizations for both RISC and CISC machines is to compile procedure calls as in-line macros. This, added to the fact that programmers learn from experience that too many procedure calls on a conventional machine will destroy program performance, leads to significantly larger programs on conventional machines. Stack-oriented machines, on the other hand, are built to support
procedure calls efficiently. Since all working parameters are always present on the stack, procedure call overhead is minimal, requiring no memory cycles for parameter passing. On most stack processors, procedure calls take one clock cycle, and procedure returns take zero clock cycles in the frequent case where they are combined with other operations.

There are several qualifications associated with the claim that stack machines have more compact code than other machines, especially since we are not presenting the results of a comprehensive study here. Program size measures depend largely on the language being used, the compiler, and programming style, as well as the instruction set of the processor being used. Also, the studies by Harris, Ohran, and Schoellkopf were mostly of stack machines that used variable length instructions, while the machines described in this book use 16 or 32-bit fixed length instructions. Counterbalancing the fixed instruction length is the fact that these processors run Forth, and Forth programs are smaller than programs in other languages because they use frequent subroutine calls, allowing a high degree of code reuse within a single application program. And, as we shall see in a later section, a fixed instruction length, even on 32-bit processors such as the RTX 32P, does not cost as much program memory space as one might think.

6.2.2 Processor and system complexity

When speaking of the complexity of a computer, two levels are important: processor complexity and system complexity. Processor complexity is the amount of logic (measured in chip area, number of transistors, etc.) in the actual core of the processor that does the computations. System complexity considers the processor embedded in a fully functional system, which contains support circuitry, the memory hierarchy, and software.

CISC computers have become substantially more complex over the years. This complexity arises from the need to be good at many functions simultaneously. A large degree of the complexity stems from an attempt to tightly encode a wide variety of instructions using a large number of instruction formats. Added complexity comes from support of multiple programming and data models; a machine that must be reasonably efficient at processing COBOL packed decimal data types on a time sliced basis while running double precision floating point FORTRAN matrix
operations and LISP expert systems is bound to be complex.

The complexity of CISC machines is partially the result of encoding instructions to keep programs relatively small. The goal is a reduction of the semantic gap between high level languages and the machine, to produce more efficient code. Unfortunately, this may lead to a situation where almost all available chip area is used for the control and data paths, as in, for instance, the Motorola 680x0 and Intel 80x86 products. Additionally, the argument has been made by RISC proponents that CISC designs may be paying a performance penalty as well as a size penalty.

While in the extremes CISC processors take complexity in the core processor to a level that may seem excessive, they are driven by a common and well founded goal: the establishment of a consistent and simple interface between hardware and software. The success of this approach is demonstrated by the IBM System/370 line of computers. This computer family encompasses a vast range of price and performance, from personal computer plug-in cards to supercomputers, all with the same assembly language instruction set. A clean and consistent interface between hardware and software at the assembly language level means that compilers need not be excessively complex to produce reasonable code, and that they may be reused among many different machines of the same family. Another advantage of CISC processors is that, since their instructions are compact, they do not require a large cache memory for acceptable system performance. So CISC machines have traded increased processor complexity for reduced system complexity.

The concept behind RISC machines is to make the processor faster by reducing its complexity. To this end, RISC processors have fewer transistors in the actual processor control circuitry than CISC machines. This is accomplished by having simple instruction formats and instructions with low semantic content; they don't do much work, but don't take much time to do it. The instruction formats are usually chosen to correspond to the requirements of running a particular programming language and task, typically integer arithmetic in the C programming language.

This reduced processor complexity does not come without a substantial cost. RISC processors have a large bank of registers to allow quick reuse of frequently accessed data. These register banks must be dual-ported memory, allowing two simultaneous accesses at different addresses, to allow fetching both source operands on every cycle. Furthermore, because of the low semantic content of their instructions, RISC processors need much higher memory bandwidth to keep instructions flowing into the
CPU. This means that substantial on-chip and system-wide resources must be devoted to cache memory to attain acceptable performance. Also, RISC processors characteristically have an internal instruction pipeline, which means that extra hardware and compiler techniques must be provided to manage the pipeline, and special attention and extra hardware resources must be used to ensure that the pipeline state is correctly saved and restored when interrupts are received. Finally, the different RISC implementation strategies make significant demands on compilers, such as scheduling pipeline usage to avoid hazards, filling branch delay slots, and managing the allocation and spilling of register banks. The decreased complexity of the processor makes it easier to get bug-free hardware, but even so, the complexity shows up again in the compiler, which is bound to make compilers complex as well as expensive to develop and debug. So, the reduced complexity of RISC processors comes with an offsetting, perhaps even more severe, increase in system complexity.

Stack machines strive to achieve a balance between processor complexity and system complexity. Stack machine designs realize processor simplicity not by restricting the number of instructions, but rather by limiting the data upon which instructions may operate: all operations work on the top stack elements. In this sense, stack machines are 'reduced operand set computers' as opposed to 'reduced instruction set computers.'

Limiting operand selection instead of how much work an instruction may do has several advantages. Instructions may be very compact, since they need specify only the actual operation, not where the sources are to be obtained. The on-chip stack memory can be single ported, since only a single element needs to be pushed or popped from the stack per clock cycle (assuming the top two stack elements are held in registers). More importantly, since the operands are known in advance to be the top stack elements, no pipelining is needed to fetch operands; the operands are always immediately available in the top-of-stack registers. As an example of this, consider the T and N registers in the NC4016 design, in contrast with the dozens or hundreds of randomly accessible registers found on a RISC machine.

Implicit operand selection also simplifies instruction formats. Even RISC machines must have multiple instruction formats. Consider, though, that stack machines have few instruction formats, even to the extreme of having only one instruction format on the RTX 32P. Limiting the number of instruction formats simplifies instruction
decoding logic and speeds up system operation.

Stack machines are extraordinarily simple: 16-bit stack machines typically use only 20 to 35 thousand transistors for the processor core. In contrast, the Intel 80386 chip has 275 thousand transistors and the Motorola 68020 has 200 thousand. Even taking into account that the 80386 and 68020 are 32-bit machines, the difference is significant.

Stack machine compilers are also simple, because instructions are very consistent in format and operand selection. In fact, compilers for register machines often go through a stack-like view of the source program for expression evaluation, then map that information onto the register set. Stack machine compilers have that much less work to do in mapping the stack-like version of the source code into assembly language. Forth compilers, in particular, are well known for being exceedingly simple and flexible.

Stack computer systems are also simple as a whole. Because stack programs are so small, exotic cache control schemes are not required for good performance; typically the entire program can fit into cache-speed memory chips without the complexity of cache control circuitry. In cases where the program and/or data are too large to fit in affordable memory, a software-managed memory hierarchy can be used: frequently used subroutines and program segments are placed in high speed memory, while infrequently used program segments are placed in slow memory. Inexpensive single-cycle calls to the frequent sections in high speed memory make this technique effective. The data stack acts as a data cache for most purposes, such as procedure parameter passing, and data elements can be moved in and out of high speed memory under software control as desired. A traditional data cache, and to a lesser extent an instruction cache, might give some speed improvements, but is certainly not required, nor even desirable, for small or medium-sized applications.

Stack machines therefore achieve reduced processor complexity by limiting the operands available to an instruction. This does not force a reduction in the number of potential instructions available, nor does it cause an explosion in the amount of support hardware and software required to operate the processor. The result of this reduced complexity is that stack computers have room left over on-chip for program memory or special purpose hardware. An interesting implication is that, since stack programs are so small, the program memory for many applications can be entirely on-chip. This on-chip memory is faster than off-chip cache memory would be, eliminating the need for complex cache control
circuitry while sacrificing none of the speed.

6.2.3.1 Instruction execution rate

The most sophisticated RISC processors boast that they have the highest possible instruction execution rate: one instruction per processor clock cycle. This is accomplished by pipelining instructions into some sequence of instruction address generation, instruction fetch, instruction decode, data fetch, instruction execute, and data store cycles as shown in Fig. 6.1(a). This breakdown of instruction execution accelerates overall instruction flow, but introduces a number of problems. The most significant of these problems is the management of data to avoid hazards caused by data dependencies. This problem comes about when one instruction depends upon the result of the previous instruction. This can create a problem, because the second instruction must wait for the first instruction to store its results before it can fetch its own operands. There are several hardware and software strategies to alleviate the impact of data dependencies, but none of them completely solves it.

Stack machines can execute programs as quickly as RISC machines, perhaps even faster, without the data dependency problem. It has been said that register machines are more efficient than stack machines because register machines can be pipelined for speed while stack machines cannot, the problem supposedly being that each instruction depends upon the effect of the previous instruction on the stack, which is of course the whole point of a stack machine. However, stack machines do not need to be pipelined to get the same speed as RISC machines.

Consider how the RISC machine instruction pipeline can be modified and redesigned for a stack machine. Both machines need to fetch the instruction, and on both machines this can be done in parallel with processing previous instructions. For convenience, we shall lump this stage in with instruction decoding. Both RISC and stack machines need to decode the instruction, although stack machines such as the RTX 32P do not need to perform conditional operations to extract parameter fields from the instruction (because of the instruction format chosen for use), and are therefore simpler than RISC machines.

With the next step of the pipeline, a major difference becomes apparent. RISC machines must spend a pipeline stage accessing operands for the instruction after at least some of the decoding is completed. A RISC instruction
specifies two registers as inputs to the ALU operation, while a stack machine does not need to fetch its data: the data is already waiting on top of the stack when needed. This means that, at a minimum, the stack machine can dispense with the operand fetch portion of the pipeline. Actually, the stack access can also be made faster than the register access, since a single-ported stack can be made smaller, and therefore faster, than a dual-ported register memory.

The instruction execute portion of the pipeline for RISC and stack machines can be judged the same, since the same sort of ALU can be used by both systems. Even in this area stack machines can gain an advantage over RISC machines by precomputing ALU functions based on the top-of-stack elements before the instruction is even decoded, as is done in the M17 stack machine.

The operand storage phase takes another pipeline stage in RISC designs, since the result must be written back into the register file. This write conflicts with reads that need to take place for new instructions beginning execution, causing delays, the need for a triple-ported register file, or the requirement of holding the ALU output in a register and then using that register on the next clock cycle as the source for the register file write operation. Conversely, the stack machine simply deposits the ALU output result in the top-of-stack register, and is done. As an additional problem, extra data forwarding logic must be provided in a RISC machine to prevent waiting for the result to be written back into the register file when the ALU output is needed as an input to the next instruction. The stack machine always has the ALU output available as one of the implied inputs to the ALU.

Fig. 6.1(b) shows that RISC machines need at least three pipeline stages, and perhaps four, to maintain full throughput: instruction fetch, operand fetch, and instruction execute/operand store. We have also noted that several problems inherent in the RISC approach, such as data dependencies and resource contention, are simply not present in the stack machine. Fig. 6.1(c) shows that stack machines need only a two stage pipeline: instruction fetch and instruction execute. So there is no reason for stack machines to be any slower than RISC machines in executing instructions, and there is a good chance that stack machines can be made faster and simpler using the same fabrication technology.

6.2.3.2 System Performance

System performance is even more difficult to measure than raw processor performance.
System performance includes not only how many instructions can be performed per second on straight-line code, but also speed in handling interrupts, context switches, and system performance degradation because of factors such as conditional branches and procedure calls. Approaches such as the Three-Dimensional Computer Performance technique (Rabbat et al. 1988) are better measures of system performance than the raw instruction execution rate.

RISC and CISC machines are usually constructed to execute straight-line code as the general case. Frequent procedure calls can seriously degrade the performance of these machines. The cost for procedure calls includes not only the cost of saving the program counter and fetching a different stream of instructions, but also the cost of saving and restoring registers, arranging parameters, and any pipeline breaking that may occur. The very existence of a structure called the Return Address Stack should imply how much importance stack machines place upon flow-of-control structures such as procedure calls. Since stack machines keep all working variables on a hardware stack, the setup time required for preparing parameters to pass to subroutines is very low, usually a single DUP or OVER instruction.

Conditional branches are a difficult thing for any processor to handle. The reason is that instruction prefetching schemes and pipelines depend upon uninterrupted program execution to keep busy, and conditional branches force a wait while the branch outcome is being resolved. The only other option is to forge ahead on one of the possible paths in the hope that there is nondestructive work to be done while waiting for the branch to take effect. RISC machines handle the conditional branch problem by using a 'branch delay slot' (McFarling & Hennessy 1986) and placing a nondestructive instruction or no-op, which is always executed, after the branch.
Stack machines handle branches in different manners, all of which result in a single-cycle branch without the need for a delay slot and the compiler complexity that it entails. The NC4016 and RTX 2000 handle the problem by specifying memory faster than the processor cycle. This means that there is time in the processor cycle to generate an address based on a conditional branch and still have the next instruction fetched by the end of the clock cycle. This approach works well, but runs into trouble as processor speed increases beyond affordable program memory speed.

The FRISC 3 generates the condition for a branch with one instruction, then accomplishes the branch with the next instruction. This is really a rather clever approach, since no comparison operation is needed by the branch itself on this machine. Instead, the comparison operation (usually a subtraction) is performed ahead of time, and the FRISC 3 also specifies which condition code is of interest to the next branch. This moves much of the branching decision into the comparison instruction, requiring only the testing of a single bit when executing the succeeding conditional branch.

The RTX 32P uses its microcode to combine comparisons and branches. The two-instruction-cycle combination takes the same equivalent time as a comparison instruction followed by a conditional branch. For example, a comparison followed by 0BRANCH is combined into a single four-microcycle, two-instruction-cycle operation.

Interrupt handling is much simpler on stack machines than on either RISC or CISC machines. On CISC machines, complex instructions that take many cycles may be so long that they need to be interruptible, which can force a great amount of processing overhead and control logic to save and restore the state of the machine within the middle of an instruction. RISC machines are much better off, since only the pipeline needs to be saved and restored on an interrupt. The registers also need to be saved and restored in order to give the interrupt service routine resources with which to work. It is common to spend several microseconds responding to an interrupt on a RISC or CISC machine.

Stack machines, on the other hand, can typically handle interrupts within a few clock cycles. Interrupts are treated as hardware invoked subroutine calls. There is no pipeline to flush or save, so the only thing a stack processor needs to do to process an interrupt is to insert the interrupt response address as a subroutine call into the instruction stream, and push the interrupt mask register onto the stack while masking interrupts to prevent
infinite recursion on interrupt service calls. When the interrupt service routine is entered, no registers need be saved, since the new routine can simply push its data onto the top of the stack. As an example of fast interrupt servicing, the stack processor RTX 2000 spends only 4 clock cycles (400 ns) between the time an interrupt request is asserted and the time the first instruction of the interrupt service routine is being executed.

Context switching is perceived as being slower on a stack machine than on other machines. However, experimental results presented later show that this is not the case.

A final advantage of stack machines is that their simplicity leaves room for algorithm specific hardware on customized microcontroller implementations. For example, the Harris RTX 2000 has an on-chip hardware multiplier. Examples of application specific hardware for semicustom components might be an FFT address generator, A/D and D/A converters, or communication ports. These features can significantly reduce the parts count in a finished system and dramatically decrease program execution time.

6.3 A STUDY OF FORTH INSTRUCTION FREQUENCIES

Now that we have a conceptual understanding of how stack machines differ from other computers, let us look at some quantitative results that show how stack machines perform. Measurements of instruction frequencies and code sizes for both stack-based and register-based machines abound; references include Blake (1977), Cook & Donde (1982), Cook & Lee (1980), Cragon (1979), Haikala (1982), McDaniel (1982), Sweet & Sandman (1982), and Tanenbaum (1978). Unfortunately, most of these measurements are for programs written in conventional languages, not in an inherently stack-based language such as Forth. Hayes et al. (1987) have previously published execution statistics for Forth programs; we shall expand upon those findings here. The results in this chapter are based on programs written in Forth, since these programs take full advantage of the capabilities of a stack machine. The usual cautions about benchmarks are still applicable: the results should be used only as rough approximations of the truth, whatever that is.

Six different benchmark programs are referred to in the following sections. Except where noted, the programs were written and run on a 16-bit Forth system. They are as follows:

Frac is a fractal landscape generation program. It uses a random number generator that is always
seeded with the same initial value, for consistency, to generate a graphics image (Koopman 1987e, Koopman 1987f).

Life is a simple implementation of Conway's game of Life on an 80-column by 25-row character display. The program run computes ten generations of a screen full of gliders.

Math is a 32-bit floating point package written in high level Forth code with machine-specific primitives (for normalization, etc.). The program run generates a table of sine, cosine, and tangent values for integer degrees from 1 to 10 (Koopman 1985).

Compile is a script used to compile several Forth programs, measuring the execution of the Forth compiler.

Fib is a computation of the 24th Fibonacci number using a recursive procedure, commonly called the 'dumb' Fibonacci algorithm.

Hanoi is the Towers of Hanoi problem, written as a recursive procedure. The program run computes the result for 12 disks.

Queens is the N queens problem, derived from the 8 queens chess board puzzle, written as a recursive procedure. The program finds the first acceptable placement of N queens on an NxN board. The program run computes the result for 12 queens.

The first three programs represent a mix of different application areas: Math uses intensive stack manipulation to manage 32-bit quantities and a 48-bit temporary floating point format on a 16-bit stack; Life does intensive management of an array of memory cells with much conditional branching; Frac does graphics line drawing and rudimentary graphics projections. The Compile benchmark is also useful, since it reflects the activities a compiler must perform, such as tokenizing input streams and identifier searches.

6.3.1 Dynamic instruction frequencies

Table 6.1 shows dynamic instruction execution frequencies for the most frequently executed primitives for Frac, Life, Math, and Compile. The dynamic frequency of an instruction is the number of times it is executed during a program run. Appendix C contains the unabridged version of the instruction frequencies given in Table 6.1. The AVE column shows the equally weighted average of the four benchmarks, which is a rough approximation of execution frequency for most Forth programs.
The Forth words selected for inclusion in this table were either in the top ten of the AVE column, or in one of the top ten words for a particular program. For example, EXECUTE, with an AVE value of 0.65%, has a value of 2.45% for Compile, which is the tenth largest value in the Compile measurements.

The first thing that is obvious from the numbers is that subroutine calls and exits dominate the other operations. This supports the well known fact that Forth-derived stack processors should place a heavy emphasis on efficient subroutine calls. The subroutine exit numbers are somewhat less than the subroutine call numbers because some Forth operations pop the return stack to climb up two levels of subroutine calls at once, performing a conditional premature exit of the calling routine.

The amount of time spent on stack manipulation primitives is also interesting. For the instructions in the sample, approximately 25% of execution is spent manipulating the stacks. At first this seems rather high. However, since most stack processors have the capability of combining stack manipulations with other useful work in combinations, the number is much higher than would be seen in practice. Also, the 25% is skewed about 5% high by the heavy usage of R> and >R in the floating point math package to manipulate 32-bit quantities; this cost would not be present on a 32-bit processor, or on a 16-bit processor that used fast access to the user memory space (as on the NC4016 and RTX 2000) to store intermediate results.

Also of interest is the process of getting data onto the stack to be manipulated. This important process involves VARIABLE, LIT, CONSTANT, and USER words. Fortunately, stack machines are able to combine these instructions with other operations as well.

A final observation is that many of the instructions shown in Appendix C have dynamic execution frequencies of less than 1%. However, these instructions cannot be immediately dismissed as unimportant, because many of them have long execution times if not supported by hardware. It is not enough to look at execution frequency alone to determine the importance of an instruction.

6.3.2 Static instruction frequencies

Table 6.2 shows static instruction compilation frequencies for the most often compiled primitives for Frac, Life, and Math, as well as the most often compiled primitives used by all the programs compiled in the Compile benchmark (which includes Frac, Queens, Hanoi, and Fib). The static
frequency of an instruction is the number of times it appears in the source program. The AVE column shows the equally weighted average of the four benchmarks, which is a rough approximation of compilation frequency for Forth programs. The Forth words selected for inclusion in this table were either in the top ten of the AVE column, or in one of the top ten words for a particular program.

In the static measurements, subroutine calls are by far the most frequent, accounting for approximately one in four instructions compiled. Note that Frac is counted twice, since it is also included in the Compile benchmark, so the actual subroutine call number is somewhat lower than it would otherwise be.

6.3.3.1 Execution speed gains

Table 6.3 has four profiles of dynamic program execution with different optimizations for the RTX 32P. Part (a) of the table shows the results of executing programs with no compression of opcodes and subroutines, and no peephole optimization of adjacent opcodes (opcode combination). Part (b) of the table shows the effects of combining common opcode sequences (such as SWAP DROP, OVER +, @ and @+) into single instructions. The column marked OP-OP is the number of combinations of two opcodes treated as a single opcode in the OP, OP-CALL, OP-EXIT, and OP-CALL-EXIT measurements. The special cases of LITERAL +, LITERAL AND, etc. are all designated as LITERAL-OP. The special cases of @ and ! are designated VARIABLE-OP. The special cases of @+ and @- are designated VARIABLE-OP-OP. All the literal and variable special cases require a full instruction to hold an opcode and address, so are not combinable with other instructions. For the example programs, peephole optimization of opcodes was able to achieve a 10% reduction in the number of instructions executed.
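The opcode-combination idea can be sketched as a one-pass peephole optimizer over an instruction stream. This is only an illustrative model: SWAP DROP and OVER + are combinations named in the text, while the combined-opcode names NIP and OVER+ in the pair table are stand-ins chosen here, not the RTX 32P's actual encodings:

```python
# Peephole combination: merge adjacent opcode pairs into single opcodes.
# The PAIRS table is illustrative; SWAP DROP and OVER + are combinations
# mentioned in the text, NIP and OVER+ are hypothetical merged opcodes.
PAIRS = {("SWAP", "DROP"): "NIP", ("OVER", "+"): "OVER+"}

def combine(ops):
    out = []
    for op in ops:
        if out and (out[-1], op) in PAIRS:
            out[-1] = PAIRS[(out[-1], op)]   # merge with the previous opcode
        else:
            out.append(op)
    return out

prog = ["DUP", "SWAP", "DROP", "OVER", "+", "EXIT"]
print(combine(prog))   # ['DUP', 'NIP', 'OVER+', 'EXIT']
```

A six-instruction fragment shrinks to four instructions here; the 10% dynamic reduction quoted in the text is the aggregate effect of this kind of merging over whole program runs.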
", "content": "part c table shows effects using instruction compression instead opcode combination means wherever possible opcodes combined following subroutine calls exits subroutine calls followed exits also combined unconditional jumps accomplish tailend recursion elimination result total 24 instructions combine opcodes subroutine callsreturns translates 40 subroutine calls original program executed free almost subroutine exits executed free exceptions special instructions literals return stack manipulations combined subroutine exits part table shows effects turning opcode combination instruction compression resulting code takes 25 fewer instructions original programs performance speedup possible almost software processing hardware expense inherent parallelism subroutine calls opcodes interesting point execution time math benchmark reduced 61 million instructions 16bit system 940 thousand instructions rtx 32p testimony need 32bit processor floating point calculations life benchmark mostly 8bit data manipulation remained almost systems frac benchmark apparently increased factor four fact 32bit version used higher graphics resolution requiring 4 times number points computed takes approximately 4 times many instructions", "url": "RV32ISPEC.pdf#segment83", "timestamp": "2023-11-05 21:10:45", "segment": "segment83", "image_urls": [], "Book": "stack_computers_book" }, { "section": "6.3.3.2 Memory size cost The performance speedup of combining opcodes with subroutine calls is worthwhile, especially since it takes essentially no extra hardware inside the processor. In fact, it actually simplifies the hardware by requiring only one instruction format. The question that must still be resolved is, what is the cost in memory space? Fortunately, Forth programs have a static subroutine call frequency that is even higher than the dynamic frequency. This provides a ripe opportunity for opcode/subroutine call combinations. 
Table 6.4 shows the difference in static program size between raw programs with no compression and programs on which both instruction compression and opcode compression have been performed. The RTX 32P uses 9-bit opcodes, 21-bit addresses, and 2-bit control fields. If we were to assume an optimally packed instruction format, we might design an instruction format that used 11 bits to specify an opcode with a single subroutine exit bit, and a 23-bit subroutine call/jump format. Also, let us be generous and assume that this instruction format would get all subroutine exits for free by combining them with opcodes or using a jump instead of call format. This supposes a machine with variable word width (11 or 23 bits), but let us not worry about that, since we are computing a theoretical minimum.

In the optimized form, the three programs together would consist of 1953 opcodes (at 11 bits each), 1389 subroutine calls (at 23 bits each), and 565 combination opcodes/address fields (at 34 bits each). This adds up to a total of 72 640 bits. Now consider the actual program compiled using the optimizations on the RTX 32P. Considering that each instruction category uses a fixed 32-bit encoding with some potentially unused fields, the total is 3300 instructions at 32 bits, or 105 600 bits. The memory cost is then 32 960 bits, or 31% of memory 'wasted' over the theoretical minimum. Of course, designing a machine to use 11-bit opcodes and 23-bit subroutine calls would be a neat trick.
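The arithmetic behind these totals is easy to check directly; all of the counts and field widths below come from the text:

```python
# Theoretical minimum: optimally packed variable-width instruction formats.
# 1953 plain opcodes at 11 bits, 1389 subroutine calls at 23 bits, and
# 565 combined opcode/address instructions at 34 (= 11 + 23) bits.
minimum = 1953 * 11 + 1389 * 23 + 565 * 34

# Actual RTX 32P encoding: every instruction is a fixed 32 bits.
actual = 3300 * 32

waste = actual - minimum
print(minimum, actual, waste)        # 72640 105600 32960
print(round(100 * waste / actual))   # 31  (% of memory 'wasted')
```

The fixed 32-bit word thus costs about a third of program memory relative to an idealized variable-width encoding, which is the tradeoff the next paragraphs quantify more realistically.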
In a more practical vein, we should consider the number of empty opcode fields in the compressed version of the programs, which is 766 (at 9 bits each), and the number of empty subroutine call fields, which is 917 (at 23 bits each). This gives a total of 27 985 bits, or 27% 'wasted'. This is in exchange for the 25% fewer instructions executed; we are getting a significant speedup at a relatively low cost over even a variable-length instruction format.

There is a slight problem with the measurements presented in this section: they are of several relatively small programs (although the programs do perform fairly complex operations). This observation is in part supportive evidence that stack machine programs are compact; the problem is that large Forth programs are difficult to find. Nevertheless, the programs chosen represent a reasonable cross-section of commonly used Forth code, and it is the author's considered opinion that the results are reasonably close to those that would be obtained by measuring a larger sample of programs. Of course, one way to get much larger programs would be to use the output of a conventional language compiler. That kind of code would probably have different characteristics, since programmers solve problems much differently in C and FORTRAN than in Forth. We shall revisit this thought in Chapter 7.

6.4 STACK MANAGEMENT ISSUES

Since stack machines depend on accessing their high speed stack memory on every instruction, the characteristics of the use of this stack memory are of vital importance. In particular, as processors get faster, the question of how much stack memory needs to be placed on-chip to obtain good performance must be answered. The answer to this question is crucial, since it affects both the cost and performance of high-end stack processors, which place stack memory on-chip. An equally important question is how the stacks should be managed, especially in the realm of multitasking environments.

6.4.1 Estimating stack size: an experiment

The first question, the one of the size of the on-chip stack buffer, is best resolved by a simulation of various programs with different size stack buffers.
This simulation measures the amount of traffic to and from memory generated by stack overflows and underflows. Overflows need to copy elements from the hardware stack buffer to a save area in memory. Underflows cause a copying of the elements back from memory to the stack buffer. Table 6.5 and Fig. 6.2 show the results of a simulator that monitored the number of memory cycles spent on data stack buffer spilling and restoring for Life, Hanoi, Frac, Fib, Math, and Queens. While the 'toy' benchmarks Fib, Hanoi, and Queens are not representative of typical programs, all are deeply recursive and are representative of the worst one might expect of stack programs.

The spilling algorithm used spilled exactly one element from the stack buffer each time a push operation was performed with a full stack buffer, and read back exactly one element into the stack buffer each time a read/pop operation was performed with an empty stack buffer. The simulation assumed that hardware automatically handled the spilling at a cost of one memory cycle per element read or written. The RTX 32P instruction set was used for the simulation, where each instruction is approximately twice as complex as would be seen on a hardwired processor such as the RTX 2000, so the number of cycles measured are memory cycles, not microcycles. The purpose of the simulation is to show the best behavior that could be expected, but the results are certainly within a factor of three or four of the costs of other implementations.

Surprisingly, Frac behaves almost as badly as Hanoi in its use of the stack. This is because Frac pushes 6 elements onto the data stack at each step of its recursive subdivision algorithm for dividing its mesh of points, so it is an 'obviously' recursive program with the potential to generate a large number of elements on the stack.

The good news is that, as the stack size increases, stack overflow and underflow memory traffic tapers off at a steep exponential rate for all the programs. With a stack buffer size of 24, even Hanoi generates stack spills on fewer than 1% of instructions. As a practical matter, a stack size of 32 will eliminate stack buffer overflows for almost all programs.

Table 6.6 and Fig. 6.3 show the simulator results for return stack spills and restores for the same programs. The results are similar, except that Math emerges as an unexpectedly heavy user of the return stack. This is because the Math package was written to be extremely modular and easy to port to different systems, so it uses a large number of deeply nested
subroutines. In addition, Math uses the return stack for storing a large number of temporary variables while manipulating 48-bit data on a 16-bit processor.

6.4.2 Overflow handling

Having examined how stack overflows and underflows occur during program execution, we can now ask how they should be handled. There are four possible ways of handling spills: ensure that they can never happen, and treat them as catastrophic system failures; use a demand-driven stack controller; use a paging stack control mechanism; or use a data cache memory. Each approach has its strengths and weaknesses.

6.4.2.1 A very large stack memory

The simplest way to solve the stack problem is simply to assume that stack overflows will never happen. While this may seem like a foolish strategy at first, it has some merit. The nicest result from using this strategy is that system performance is totally predictable (no stack spilling traffic to slow down the system) and that no stack management hardware is required.
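The spill-traffic experiment above can be sketched in code. This is a hypothetical minimal model, not the book's simulator: it applies the single-element policy described (spill exactly one element to memory on a push into a full buffer, restore exactly one on a pop from an empty buffer), charging one memory cycle per element moved, and shows how traffic falls off as the buffer grows.

```python
# Minimal model of the demand spilling policy described in the text:
# one element spilled per push on a full buffer, one element restored
# per pop on an empty buffer, at one memory cycle per element.

class SpillingStackBuffer:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.buffer = []          # on-chip portion (top of stack at end)
        self.memory = []          # spilled portion resident in memory
        self.memory_cycles = 0    # traffic spent on spills and restores

    def push(self, value):
        if len(self.buffer) == self.buffer_size:
            # Overflow: copy the bottom-most buffered element to memory.
            self.memory.append(self.buffer.pop(0))
            self.memory_cycles += 1
        self.buffer.append(value)

    def pop(self):
        if not self.buffer:
            # Underflow: copy one element back from memory.
            self.buffer.append(self.memory.pop())
            self.memory_cycles += 1
        return self.buffer.pop()

def spill_traffic(depth, buffer_size):
    """Memory cycles for pushing `depth` elements, then popping them all,
    imitating a deeply recursive call pattern like Hanoi or Fib."""
    stack = SpillingStackBuffer(buffer_size)
    for i in range(depth):
        stack.push(i)
    for _ in range(depth):
        stack.pop()
    return stack.memory_cycles

print(spill_traffic(100, 8))    # small buffer: heavy spill traffic
print(spill_traffic(100, 128))  # buffer larger than the stack: no traffic
```

A worst-case workload like this one overstates real programs, which mix pushes and pops; as the text notes, real traffic tapers off at a steep exponential rate with buffer size.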
The approach of using a very large stack memory to avoid overflows is the one taken by the MISC M17, on which a stack overflow cannot occur until program memory capacity is exceeded. It is also the approach taken by the NC4016, which uses high speed off-chip stack memories that may be expanded to several thousand elements deep. These processors have solved the stack overflow problem by simply designing it away. The price paid for this approach is the tradeoff between off-chip memory size/speed and processor speed, compared with the case where small on-chip stacks are used. The approach of treating overflows as a catastrophic system event can still be taken once programs are debugged, by simply declaring that the programmer has X elements on the stacks to work with and is responsible for never overflowing that limit. This approach is practical for small, simple programs, with the value of X no greater than 16 or 32. The WISC CPU/16 uses this approach with a stack size of 256 elements to keep its hardware simple.

6.4.2.2 Demand-fed single-element stack manager

Given that stack overflows are allowed to occur on a regular basis, the most conceptually appealing way to deal with the problem is to use a demand-fed stack manager that moves single elements on and off the stack as required. To implement this strategy, the stack buffer is set up as a circular buffer with a head and tail pointer. A pointer to memory is also needed to keep track of the top element of the memory-resident portion of the stack. Whenever a stack overflow is encountered, the bottom-most buffer-resident element is copied to memory, freeing a buffer location. Whenever an underflow is encountered, one element from memory is copied into the buffer. This technique has the appeal that the processor never moves a stack element to or from memory unless absolutely necessary, guaranteeing the minimum amount of stack traffic. A possible embellishment of this scheme would be to have the stack manager always keep a few elements empty and at least several elements full on the stack.
This management could be done using otherwise unused memory cycles, and would reduce the number of overflow and underflow pauses. Unfortunately, this embellishment is of little value on real stack machines, since they all strive to use program memory 100% of the time for fetching instructions and data, leaving no memory bandwidth left over for the stack manager to use. The benefit to demand-fed stack management is that very good use is made of available stack buffer elements. Therefore, it is suitable for use in systems where chip space for stack buffers is at a premium. As an additional benefit, the stack underflows and overflows are spread throughout program execution at a maximum of two per instruction for the case of a data stack spill combined with a subroutine return underflow. The cost of this good performance is that reasonably complex control hardware and three counters for each stack are needed to implement the scheme. The FRISC 3 stack management scheme is similar to the demand-fed strategy. The architects of this system have done considerable research in this area. A generalization of this algorithm, called the cutback-K algorithm, was proposed by Hasegawa & Shigei (1985). Stanley & Wedig (1987) have also discussed top-of-stack buffer management for RISC machines.

6.4.2.4 An associative cache

The method used by many conventional processors for managing the program stack is to use a conventional data cache memory, usually mapped into the program memory space. This method involves significant hardware complexity but does not provide any advantage over the previously mentioned methods for stack machines, since stack machines do not skip about much in accessing their stack elements.
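The demand-fed single-element scheme above can be sketched as code. This is a hypothetical illustration, assuming a fixed buffer size of 8: a circular buffer with head and tail indices plus a memory-resident overflow area, moving exactly one element on overflow or underflow, as described.

```python
# Sketch of a demand-fed stack manager: circular on-chip buffer with
# head/tail indices, plus a memory area holding the spilled portion
# (mem[-1] is the top of the memory-resident part of the stack).

SIZE = 8  # assumed buffer size for illustration

class DemandFedStack:
    def __init__(self):
        self.buf = [0] * SIZE
        self.head = 0      # index where the next push lands
        self.tail = 0      # index of the bottom-most buffered element
        self.count = 0     # elements currently in the buffer
        self.mem = []      # memory-resident portion of the stack

    def push(self, value):
        if self.count == SIZE:
            # Overflow: spill the bottom-most element, freeing one slot.
            self.mem.append(self.buf[self.tail])
            self.tail = (self.tail + 1) % SIZE
            self.count -= 1
        self.buf[self.head] = value
        self.head = (self.head + 1) % SIZE
        self.count += 1

    def pop(self):
        if self.count == 0:
            # Underflow: fetch one element back from memory.
            self.tail = (self.tail - 1) % SIZE
            self.buf[self.tail] = self.mem.pop()
            self.count += 1
        self.head = (self.head - 1) % SIZE
        self.count -= 1
        return self.buf[self.head]

s = DemandFedStack()
for i in range(20):                      # push well past the buffer size
    s.push(i)
popped = [s.pop() for _ in range(20)]
print(popped == list(range(19, -1, -1))) # strict LIFO order is preserved
```

The three counters the text mentions per stack correspond here to `head`, `tail`, and the memory pointer (implicit in `mem`); no element moves to or from memory unless an overflow or underflow actually occurs.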
It does provide an advantage when variable length data structures such as strings and records are pushed onto a ‘stack’ as defined in a C or Ada programming environment. Other publications of interest also discuss the stack management issue.

6.5 INTERRUPTS AND MULTI-TASKING

There are three components to the performance of processing interrupts. The first component is the amount of time that elapses between the time an interrupt request is received by the processor and the time the processor takes action to begin processing the interrupt service routine. This delay is called the interrupt latency. The second component of interrupt service performance is the interrupt processing time: the amount of time the processor spends actually saving the machine state of the interrupted job and diverting execution to the interrupt service routine. Usually the amount of machine state saved is minimal, with the presumption that the interrupt service routine will minimize costs by saving only the additional registers it plans to use. Sometimes one sees the term interrupt latency used to describe the sum of the first two components. The third component of interrupt service performance, which we shall call the state saving overhead, is the amount of time taken to save machine registers that are not automatically saved by the interrupt processing logic but must be saved in order for the interrupt service routine to do its job. The state saving overhead can vary considerably depending upon the complexity of the interrupt service routine. In the extreme case, the state saving overhead can involve a complete context switch between multitasking jobs. Of course, the costs of restoring the machine state when returning to the interrupted routine are also a consideration in determining overall system performance. We shall not consider them explicitly, since they tend to be roughly equal to the state saving time (since everything that is saved must be restored), and are not as important in meeting a time-critical deadline when responding to an interrupt.

6.5.1 Interrupt response latency

CISC
machines may have instructions which take a very long time to execute, degrading interrupt response latency performance. Stack machines, like RISC machines, can have a very quick interrupt response latency. This is because most stack machine instructions are only a single cycle long, so at worst only a few clock cycles elapse before an interrupt request is acknowledged and the interrupt is processed. Once the interrupt is processed, however, the difference between RISC and stack machines becomes apparent. RISC machines must go through a tricky pipeline saving procedure upon recognizing an interrupt, as well as a pipeline restoring procedure when returning from the interrupt, in order to avoid losing information from partially processed instructions. Stack machines, on the other hand, have no instruction execution pipeline, so only the address of the next instruction to be executed needs to be saved. This means that stack machines can treat an interrupt as a hardware generated procedure call. Of course, since procedure calls are very fast, interrupt processing time is low.

6.5.1.1 Instruction restartability

There is one possible problem with stack machine interrupt response latency. That is the issue of streamed instructions and microcoded loops. Streamed instructions are used to repetitively execute an operation such as writing the top data stack element to memory. These instructions are implemented using an instruction repeat feature on the NC4016 and RTX 2000, an instruction buffer on the M17, and microcoded loops on the CPU/16 and RTX 32P. These primitives are very useful since they can be used to build efficient string manipulation primitives and stack underflow/overflow service routines. The problem is that, in most cases, these instructions are also noninterruptible.
One solution is to make these instructions interruptible with extra control hardware, which may increase processor complexity quite a bit. A potentially hard problem that nonstack processors have with this solution is the issue of saving intermediate results. With a stack processor this is not a problem, since intermediate results are already resident on a stack, which is the ideal mechanism for saving state during an interrupt. Another approach that is used by stack processors is to use a software restriction on the size of the repeat count allowed to be used with streaming instructions. This means that if a block of 100 characters is to be moved in memory, the action may be accomplished by moving several groups of 8 characters at a time. This keeps interrupt latency reasonable without sacrificing much performance. As expected, there is a tradeoff between absolute machine efficiency (with long streamed instructions) and interrupt response latency. In microcoded machines, the tradeoffs are much the same. However, there is a very simple microcode strategy to provide the best of both worlds which is designed into the RTX 32P commercial version. The strategy is to have a condition code bit visible to the microcode indicating whether an interrupt is pending. At each iteration of a microcoded loop, the interrupt pending bit is tested, with no cost in execution time. If no interrupt is pending, another iteration is made through the loop. If an interrupt is pending, the address of the streamed instruction is pushed onto the return stack as the address to be executed upon return from the interrupt, and the interrupt is allowed to be processed. As long as the streamed instruction keeps all its state on the stack (which is simple with an operation such as a character block move), there is very little overhead associated with this method when processing an interrupt, and no overhead during normal program execution.
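The interrupt-pending strategy above can be illustrated with a sketch. This is a hypothetical model of the idea, not RTX 32P microcode: a streamed block-move loop polls a pending flag each iteration, and on an interrupt it pushes its own resume state on the return stack so the operation can restart where it left off.

```python
# Sketch of a streamed instruction that checks an interrupt-pending
# flag on each loop iteration. All loop state (the remaining count)
# lives on the return stack, so resuming after the interrupt is just
# re-entering the instruction with that saved state.

def streamed_block_move(src, dst, count, interrupt_pending, return_stack):
    """Move `count` items from src to dst, one per iteration."""
    while count > 0:
        dst.append(src.pop(0))
        count -= 1
        if count > 0 and interrupt_pending():
            # Save resume state on the return stack, then yield to
            # the interrupt service routine.
            return_stack.append(("block_move", count))
            return "interrupted"
    return "done"

# Simulate an interrupt request arriving after three iterations.
ticks = {"n": 0}
def pending():
    ticks["n"] += 1
    return ticks["n"] == 3

src, dst, rstack = list(range(10)), [], []
status = streamed_block_move(src, dst, 10, pending, rstack)
print(status, rstack)    # interrupted partway, with resume state saved

# After the interrupt is serviced, execution resumes from the stack.
op, remaining = rstack.pop()
status = streamed_block_move(src, dst, remaining, lambda: False, rstack)
print(status, len(dst))  # the move completes
```

The poll costs nothing extra in the microcoded original because the pending bit is a condition code tested as part of the loop branch; the Python flag check here only stands in for that test.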
6.5.3 Context switches

Context switching overhead is usually said to be the reason why ‘stack machines are no good at multi-tasking’. The argument behind such reasoning is usually based on having to save a tremendous amount of stack buffer space into program memory. This idea that stack machines are any worse at multi-tasking than other machines is patently false. Context switching is a potentially expensive operation on any system. In RISC and CISC computers with cache memories, context switching can be more expensive than the manufacturers would have one believe, as a result of hidden performance degradations caused by increased cache misses after the context switch. To the extent that RISC machines use large register files, they face exactly the same problems that are faced by stack machines. An added disadvantage of RISC machines is that their random access to registers dictates saving all registers (or adding complicated hardware to detect which registers are in use), whereas a stack machine can readily save only the active area of the stack buffer.

6.5.3.2 Multiple stack spaces for multi-tasking

There is an approach that can be used with stack machines which can eliminate even the modest costs associated with context switching that we have seen.
Instead of using a single large stack for all programs, high-priority/time-critical portions of a program can each be assigned their own stack space. This means that each process uses stack pointer and stack limit registers to carve out a piece of the stack for its own use. Upon encountering a context switch, the process manager simply saves the current stack pointer for the old process (since it already knows the stack limits), and the stack pointer and stack limit registers are loaded with new values for the new process, which is then ready to execute. No time at all is spent copying stack elements to and from memory. The amount of stack memory needed by programs is typically rather small, and furthermore can be guaranteed by design to be small for short time-critical processes, so even a modest stack buffer of 128 elements can be divided among four processes at 32 elements each. If more than four processes are needed in a multitasking system, one of the buffers can be designated as a low priority scratch buffer, shared using copy-in and copy-out among the low priority tasks. From this discussion we can see that the notion that stack processors have a large amount of state to save, making them ineffective at multitasking, is a myth. In fact, in many cases stack processors are better at multitasking and interrupt processing than any other kind of computer. Hayes & Fraeman (1988) have independently obtained results for stack spilling and context switching costs on the FRISC 3 that are similar to the results reported in this chapter.

CHAPTER 7 SOFTWARE ISSUES

A computer system is worthless without software, so hardware that effectively supports software requirements is of the utmost importance. Stack machines offer new tradeoffs and choices when considering software issues. Section 7.1 discusses the importance of fast subroutine calls, which directly and indirectly affect not only program execution speed but also software quality and programmer productivity. Section 7.2 explains the choices and tradeoffs involved in selecting an appropriate language for programming stack machines; stack machines can support conventional languages efficiently. Section 7.3 discusses the interfaces among the levels of a program written on a stack machine; the uniformity of software interface present on stack machines is not possible on register-based machines, and gives significant advantages.
7.1 THE IMPORTANCE OF FAST SUBROUTINE CALLS

‘Programmers have learnt to avoid procedure calls and parameter passing for reasons of efficiency. However, they are an important tool in the design of well organized programs, and stack architectures carry the potential for efficient invoking of procedures.’ (Schulthess & Mumprecht 1977, p. 25)

This sentiment, that expensive procedure calls lead to poorly structured programs by inhibiting programmers through efficiency considerations, is echoed by software stylists as well as RISC and CISC advocates alike (Atkinson & McCreight 1987, Ditzel & McLellan 1982, Parnas 1972, Sequin & Patterson 1982, Wilkes 1982). In fact, Lampson (1982) goes so far as to say that procedure calls should be made to run as fast as unconditional jumps.

7.1.1 The importance of small procedures

The use of a large number of small procedures when writing a program reduces the complexity of each piece that must be written, tested, debugged, and understood by the programmer. Lower software complexity implies lower development and maintenance costs, as well as better reliability. Why then, would programmers not make extensive use of small procedures?
Most application programs are written in general-purpose languages such as Fortran, COBOL, PL/1, Pascal, and C. The early high level programming languages such as Fortran were direct extensions of the philosophy of the machines they ran on: sequential von Neumann machines with registers. Consequently, the languages in general usage developed to emphasize long sequences of assignment statements with occasional conditional branches and procedure calls. In recent years, however, the complexion of software has begun to change. Currently accepted best practice in software design involves structured programming using modular designs. On a large scale, the use of modules is essential for partitioning tasks among the members of programming teams. On a smaller scale, modules control complexity by limiting the amount of information that a programmer must deal with at any given time. Advanced languages such as Modula-2 and Ada have been designed specifically to promote modular design. One hardware innovation that resulted from the increasing popularity of modular, structured languages was a register used as a stack pointer into main memory; but with the exception of this stack pointer and some complex instructions that are not always usable by compilers, CISC hardware has not added much support for subroutine calls over the years. The machine code output of optimizing compilers for modern languages still tends to look a lot like the output for earlier non-structured languages. And herein lies the problem: conventional computers are still optimized for executing programs made up of streams of serial instructions. Execution traces of programs show that procedure calls make up a rather small proportion of all instructions, although this is of course partially attributable to the fact that programmers avoid using them. Conversely, modern programming practices stress the importance of non-sequential control flow and small procedures. The clash between these two realities leads to a suboptimal, and therefore costly, hardware/software environment on today's general purpose computers. This does not mean that programs have failed to become better organized and more maintainable using structured languages; rather, efficiency considerations in the use of hardware that encourages writing sequential programs have prevented modular languages from achieving all that they might. Although the current philosophy is to break programs into small procedures, programs still contain fewer, larger, and more complicated procedures than they should.
7.1.2 The proper size for a procedure

How many functions should a typical procedure have? Miller gives evidence that the number seven, plus or minus two, applies to many aspects of thinking (Miller 1967). The way the human mind copes with complicated information is by chunking groups of similar objects into fewer, more abstract objects. In a computer program, this means that each procedure should contain approximately seven fundamental operations, such as assignment statements or other procedure calls, in order to be easily grasped. If a procedure contains more than seven distinct operations, it should be broken apart by chunking related portions into subordinate procedures, reducing the complexity of that portion of the program. In another part of his book, Miller shows that the human mind can only grasp two or three levels of nesting of ideas within a single context. This strongly suggests that deeply nested loops and conditional structures should be arranged as nested procedure calls rather than as convoluted indented structures within a single procedure.

7.1.4 Architectural support for procedures

The problems that arise from poor performance on subroutine calls have been dealt with by computer architects in a variety of ways. RISC designers have taken two different approaches. The Stanford MIPS team uses compiler technology to expand procedures as in-line code wherever possible.
The MIPS compiler then does very clever register allocation to avoid saving and restoring registers on procedure calls. Statistics support this choice of strategy when taken from programs that follow traditional software design methods, with fairly large, deeply nested procedures. While the MIPS approach appears to work well with existing software, it may create a stifling effect on better software development strategies, as we saw with CISC machines. The second RISC approach, the one originally advocated by the Berkeley RISC team, uses register windows to form a register frame stack. A pointer into the register stack is moved to push or pop a group of registers quickly, supporting quick subroutine calls. This approach has many of the advantages of the stack machine approach, and detailed implementation questions in real life may determine the success or failure of a product: single vs. multiple stacks, fixed vs. variable-sized register frames, spill management strategies, and overall machine complexity.

7.2 LANGUAGE CHOICE

The choice of which programming language to use to solve a particular problem should not be taken lightly. Sometimes the choice is dictated by external forces, such as using Ada on a US Department of Defense contract. In other cases the language choice is constrained by the existence of only a limited number of compilers. In general, though, considerable choice is available when selecting a language for a new system design. Software selection should not be considered an isolated issue. The language used should reflect the entire system being developed, including: the system operating environment, the suitability of the language for solving the problem at hand, development time and costs, maintainability of the finished product, the strengths of the underlying processor in running various languages, and the previous programming experience of the programmers assigned to the project. Note that the experience of the programmers is placed last on this list; a poor choice based on programmer bias towards a familiar language can result in problems that more than offset any perceived gain in productivity.
7.2.1 Forth: strengths and weaknesses

Forth is the most obvious language to consider using on a stack machine. That is because the Forth language is based upon a set of primitives that execute on a virtual stack machine architecture. All the stack machines presented in this book support highly efficient implementations of Forth. All of these machines can use a Forth compiler to generate efficient machine code. The biggest advantage of using Forth then, is that the highest processing rate possible can be squeezed from the machine. One of the characteristics of Forth is its very high use of subroutine calls. This promotes an unprecedented level of modularity, with approximately 10 instructions per procedure being the norm. Tied in with this high degree of modularity is the interactive development environment used by Forth compilers. In this environment, programs are designed from the top down using stubs as appropriate, and built from the bottom up, testing every short procedure interactively as it is written. On large projects the top-down and bottom-up phases are repeated in cycles. Since Forth is a stack-based, interactive language, testing programs requires no scaffolding to be written; instead, the values to be passed to a procedure are pushed onto the stack from the keyboard, the procedure to be tested is executed, and the results are returned on the top of the stack. This interactive development of modular programs is widely claimed by experienced Forth programmers to result in a factor of 10 improvement in programmer productivity, with improved software quality and reduced maintenance costs. Part of this gain may come from the fact that Forth programs are usually quite small compared to equivalent programs in other languages, requiring less code to be written and debugged. A 32K byte Forth program (exclusive of symbol table information) is considered a monster, and may take several hundred thousand bytes of source code to generate. One of the advantages of Forth as a programming language is that it covers the full spectrum of language levels. Some languages, such as assembly language, allow dealing with the hardware at a very low level. Other languages, such as Fortran, deal with problems at an abstract level removed from the underlying machine. Forth programs can span the full range of programming abstraction. At the lowest level,
Forth allows direct access to hardware ports in the system for real-time I/O handling and interrupt servicing. At the highest level, a Forth program can manage a sophisticated knowledge base. One facet of Forth that is interesting, and baffling to many casual observers, is that it is an extensible language. Every procedure that is added to the language becomes, to all appearances, a part of the language available to the programmer, and the language grows in this manner. In Forth, much as in Lisp, no distinction is made between the core procedures of the language and the extensions added by the programmer. This enables the language to be flexible to an extent beyond the comprehension of people who have not extensively used this capability. The extensibility of Forth is, however, a mixed blessing. Forth tends to act as a programmer amplifier: good programmers become exceptional when programming in Forth, excellent programmers can become phenomenal, mediocre programmers generate code that works, and bad programmers go back to programming in other languages. Forth also has a moderately difficult learning curve, since it is different enough from other programming languages that bad habits must be unlearned, and new ways of conceptualizing solutions to problems must be acquired with practice. Once these new skills are acquired, though, it is a common experience that Forth-based problem solving skills involving the modularization and partitioning of programs actually improve programmer effectiveness in other languages as well. Another problem is that Forth systems often do not include a rich enough set of programming tools to suit many programmers, and older Forth systems cooperate poorly with resident operating systems. These traits stem from Forth's history of use on small machines with few hardware resources in real-time control applications. The limitations are generally not much of a problem in those applications, which have little need for better support tools; fortunately, the trend is for newer Forth systems to provide much better development environments and library support than in the past. The result of all these effects is that Forth is best used for medium-sized programming projects involving two or three programmers with compatible programming styles. In a large programming project, clashing styles and abilities tend to prevent the production of extremely high quality software. Within these constraints, however, Forth programs are consistently delivered in a short time with excellent results, often solving problems that could not be solved in any other language, or at least not solved within the budget and development time constraints.
7.2.2 C and other conventional languages

Of course, there will always be applications that are better done in conventional languages. Probably the most common reason for using a conventional language will be the existence of a large body of existing source code that must be ported onto a better processor. To illustrate the tradeoffs involved, let us look at the problem of porting an application written in C onto a stack processor using a C compiler written for the stack machine. We shall skip over the problem of translating the program from the source C code into an intermediate form, since this is independent of the machine upon which the program is to run. The portion of the C compiler that is of interest is the so-called ‘back end’. The back end is the portion of the compiler that takes a predigested intermediate form of a program and produces code for the target machine. Actually, generation of stack-based code for expression evaluation is relatively straightforward. The topic of converting infix arithmetic expressions to stack-based (postfix/RPN) expressions is well researched (Bruno & Lassagne 1975, Couch & Hamm 1977, Randell & Russell 1964). The problem in generating code for stack machines from C is that there are several assumptions about the operating environment deeply entrenched in the language. The most profound of these is that there must be a single program-memory-resident stack that contains both data and subroutine return information. This assumption cannot be violated without ‘breaking’ many C programs. As an example, consider the case of a pointer that references a local variable. That local variable must reside in the program memory space, or it cannot be properly referenced by the program.
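The infix-to-postfix conversion cited above is well studied; a minimal sketch of the classic shunting-yard approach is shown here for single-character operands and left-associative binary operators. This is an illustration of the general technique, not a C compiler back end.

```python
# Minimal shunting-yard conversion from infix to postfix (RPN).
# Operands go straight to the output; operators wait on a stack
# until an operator of no-higher precedence (or a parenthesis)
# forces them out.

PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(tokens):
    output, ops = [], []
    for tok in tokens:
        if tok.isalnum():                 # operand: emit immediately
            output.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":         # flush back to the open paren
                output.append(ops.pop())
            ops.pop()                     # discard the "("
        else:                             # left-associative operator
            while (ops and ops[-1] != "(" and
                   PRECEDENCE[ops[-1]] >= PRECEDENCE[tok]):
                output.append(ops.pop())
            ops.append(tok)
    while ops:                            # flush remaining operators
        output.append(ops.pop())
    return output

# a + b * (c - d)  =>  a b c d - * +
print(" ".join(to_postfix(list("a+b*(c-d)"))))
```

Each token of the postfix output maps directly onto a stack machine instruction (push the operand, or apply the operator to the top stack elements), which is why this part of code generation is relatively straightforward.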
To make matters worse, C programs typically push large volumes of data, including strings and data structures, onto the C stack. Then, C programs make arbitrary accesses within the area of the current stack frame (the portion of the stack containing variables belonging to the current procedure). These restrictions make it unfeasible to attempt to keep the C stack on a stack machine's Data Stack. How, then, can a stack machine be made efficient at running C programs? The answer is that the stack machine must efficiently support frame-pointer-plus-offset addressing into program memory. The RTX 2000 can use its user pointer to accomplish this efficiently. The FRISC 3 can use one of its user defined registers with its load/store-with-offset instructions. The commercial successor to the RTX 32P has a frame pointer register with a dedicated adder for computing memory addresses. In all these cases, an access to a local variable can be made in the time required for a memory operation: two memory cycles, one for the instruction and one for the data. This is the best that can be hoped for in any processor that does not resort to expensive techniques such as separate data and instruction caches. A notion often found in C and other high level languages that does not map well onto stack machines is the register variable. Since stack machines have no set of registers, this implies that compiler optimization opportunities may be missed on stack machines. This is only partially true. While it is true that stack machines are not well suited to juggling a large number of temporary values on their stacks, a small number of frequently accessed values can be kept on the stack for quick reference. For example, such values might include a loop counter kept on the return stack, or the two addresses for a string compare kept on the data stack. In this manner, most of the efficiency of the hardware can be captured for the majority of C programs. There is one additional concept that can make C programs nearly as fast as Forth programs on stack machines. That concept is supporting Forth as the ‘assembly language’ of the processor, an approach vigorously pursued by several stack machine vendors. Using this approach, existing C programs are transferred to the stack machine and their execution characteristics profiled. The profiling information is used to identify the critical loops within the program. These loops are then rewritten in Forth for better speed, perhaps augmented, in the case of the RTX 32P, with application specific microcode. Using this technique, C programs can attain virtually the performance of all-Forth programs
with little effort. When this good performance is added to the stack machine qualities of low system complexity and high processing speed, C becomes a viable language for programming stack machines.

7.2.3 Rule-based systems and functional programming

There is evidence that programming languages used to implement rule-based systems, such as those written in Prolog, LISP, and OPS-5, are very well suited to stack machines. One very exciting possibility is the marriage of real-time control applications with rule-based system knowledge bases. Preliminary research into this area has been encouraging. Much work has been done using Forth as an implementation vehicle. Areas explored include: LISP implementations (Hand 1987, Carr & Kessler 1987), an OPS-5 implementation (Dress 1986), a Prolog implementation (Odette 1987), neural network simulations (Dress 1987), and development environments for real-time expert systems (Matheus 1986, Park 1986). Some of these Forth implementations have subsequently been ported to stack machine hardware with excellent results. For example, the rule-based Expert-5 system described by Park (1986) runs 15 times faster on a WISC CPU/16 than on a standard IBM PC. A similar rule-based system (actually closer to Park's Expert-4, which is slower than Expert-5) runs approximately 740 times faster on the RTX 32P than on a standard 4.77 MHz 8088 PC. While a speedup of nearly three orders of magnitude seems astonishing, it merely reflects the suitability of using a stack machine, which is good at tree traversal, for solving problems that use decision trees. The speedup observed for the rule-based system is actually based on a principle that applies to a wide variety of problem areas: stack machines can treat a data structure as an executable program. Consider for a moment an example tree data structure with pointers at the internal nodes and program action tokens at the leaves. In such trees, the pointers giving the addresses of children can be made to equate to subroutine calls on many stack processors, and the leaves of the trees can be executable instructions or subroutine calls to procedures that accomplish some task. Where a conventional processor
would use an interpreter to traverse the tree in search of the leaves, a stack processor can directly execute the tree instead. Since stack machines execute subroutine calls very quickly, the result is an extremely efficient technique for directly executing tree-formatted data structures, and it is responsible for the tremendous speed of the RTX 32P in the example cited in the previous paragraph. Stack machines are well suited to Lisp programming as well as to expert systems. Lisp and Forth are similar languages in many respects: both treat programs as lists of function calls to other lists, both are extensible languages, and both use Polish notation for arithmetic operations. The major difference is that Lisp involves dynamic storage allocation of its cells while Forth uses static storage allocation. Since there is no reason why a stack machine should be any worse at garbage collection than other machines, Lisp should run efficiently on a stack machine. Many of the arguments for stack machines' suitability for Lisp apply to Prolog as well. During a Prolog implementation for the RTX 32P, the author made an additional discovery about how to efficiently map Prolog onto stack machines. Prolog uses typed data that is either an actual data element or a pointer to data. One possible encoding for Prolog data elements uses the highest 9 bits of a 32-bit word as a data type tag, with the lowest 23 bits used as either a pointer to another node, a pointer to a 32-bit literal value, or a short literal value. Using this data format, data items are actually executable instructions. The instructions of the RTX 32P can be constructed to allow traversing an arbitrarily long series of pointer dereferences at the rate of one dereference per memory cycle, simply by executing the data structure as a program. Nil pointer checking is accomplished by defining the nil pointer value as a subroutine call to an error trapping routine. These kinds of data handling efficiencies are simply not possible on other types of processors. Functional programming languages offer the promise of a new way of solving problems using a different model of computation than that used by conventional computers (Backus 1978). One particular method of executing functional programs is the use of graph reduction techniques. The direct execution of program graphs discussed for rule-based systems is equally applicable to graph reduction, so stack machines should be good at executing functional programming languages. Belinfante (1987) has published a Forth-based implementation of graph reduction. Koopman & Lee
1989 describe threaded interpretive graph reduction engine theoretical point view efficient graph reduction machines gmachine norma fall slo category taxonomy chapter 2 mlo machines superset capabilities slo machines therefore efficient graph reduction well initial investigations area author show rtx 32p simple stack machine compete quite effectively even complex graph reduction machines norma one side effects using functional programming language high degree parallelism available program execution raises idea massively parallel computer made stack processors programmed functional programming language", "url": "RV32ISPEC.pdf#segment104", "timestamp": "2023-11-05 21:10:47", "segment": "segment104", "image_urls": [], "Book": "stack_computers_book" }, { "section": "7.3 UNIFORMITY OF SOFTWARE INTERFACES ", "content": "key conceptual feature stack machines uniformity interface high level code machine instructions procedure calls opcodes use stack means passing data consistent interface several positive impacts software development source code program reflect manner instructions directly supported machine func tions implemented procedures forth language level capability suggests use low level stack language similar forth target compilation languages given assembler target language actual machine user freed worry particular functions implemented means various imple mentations architecture made compatible stackbased source code level without actually provide instruc tions lowcost implementations interface used conven tional languages well forth combinations c code forth code code languages intermingled without problems microcoded machines rtx 32p interface exploited one step application specific microcode used replace critical sequences instructions application program common compiler generated code sequences transparently user fact common method application code development microcoded stack machine first write entire application high level code go back microcode critical 
loops rewriting subroutines high level language microcode invisible rest program except speed increase speed increase speedup factor approximately two many applications", "url": "RV32ISPEC.pdf#segment105", "timestamp": "2023-11-05 21:10:48", "segment": "segment105", "image_urls": [], "Book": "stack_computers_book" }, { "section": "CHAPTER 8 ", "content": "applications stack machines like computers suitable wide variety applications system high speed processor low system complexity needed good candidate using stack processor section 81 discusses one application area requirements ideal match stack processors application area realtime embedded control realtime control applications require small size low weight low cost low power high reliability section 82 examines different capabilities tradeoffs inherent choice 16bit 32bit hardware selection correctly sized processor vital success design section 83 discusses system implementation considerations choice hardwired microcoded systems involves tradeoffs among complexity speed flexibility choice integration level similarly affects system characteristics section 84 lists eleven broad areas suitable stack processors detailed lists possible applications", "url": "RV32ISPEC.pdf#segment106", "timestamp": "2023-11-05 21:10:48", "segment": "segment106", "image_urls": [], "Book": "stack_computers_book" }, { "section": "8.1 REAL-TIME EMBEDDED CONTROL ", "content": "realtime embedded control processors computers built pieces usually complicated equipment cars airplanes computer peripherals audio electronics military vehiclesweapons pro cessor embedded built piece equipment considered computer", "url": "RV32ISPEC.pdf#segment107", "timestamp": "2023-11-05 21:10:48", "segment": "segment107", "image_urls": [], "Book": "stack_computers_book" }, { "section": "8.1.1 Requirements of real-time control Often the fact that a computer is present in an embedded system is completely invisible to the user, such as in an automobile anti-skid 
braking system. Often times, a processor is used to replace expensive and bulky components of a system while providing increased functions and a lower cost. At other times, the fact a computer is present may be obvious, such as in an aircraft autopilot. In all cases, however, the computer is just a component of a larger system. Most embedded systems place severe constraints on the processor in terms of requirements for size, weight, cost, power, reliability and operating ", "content": "environment processor component larger system operating requirements manufacturing constraints time however processor must deliver maximum possible performance respond realtime events realtime events typically external stimulae system require response within matter microseconds milliseconds example high perfor mance jet aircraft inherently unstable depend computer control systems keep flying airborne computer must light small yet make unreasonable demands power cooling time must fall behind task keeping plane flying properly supersonic plane moving perhaps 1000 feet per second speeds milliseconds make difference crashing flying", "url": "RV32ISPEC.pdf#segment108", "timestamp": "2023-11-05 21:10:48", "segment": "segment108", "image_urls": [], "Book": "stack_computers_book" }, { "section": "8.1.2 How stack machines meet these needs The manufacturers of the stack machines described in Chapters 4 and 5 all have real-time control applications in mind as possible uses for their technology. What is it that makes stack machines so suitable for these applications? 
", "content": "size weight seen stack computers simple terms processor complexity however number gates processor determines overall system size weight rather overall system complexity processor large number pins takes precious printed circuit board area one needs cache memory controller chips large amounts memory takes even printed circuit board area systems require hard disk virtual memory management software environment huge usually question key winning size weight issue keep component count small stack machines low hardware system complexity small program memory requirements well since stack machines less complex machines reliable well power cooling processor complexity affect power consumption system amount power used processor related number transistors especially number pins processor chip processors rely exotic process technology speed usually power hogs processors need huge numbers powerconsuming high speed memory devices likewise break power budget stack computers tend low power requirements fabrication technology used greatly affect power consumption newer cmos chips often minuscule power requirements compared bipolar nmos designs course power consumption directly affects cooling requirements since power used computer eventually given heat cooler operation cmos components reduce number component failures enhancing system reliability operating environment embedded processing applications notorious extreme operating conditions especially automotive military equipment process ing system must deal vibration shock extreme heat cold perhaps radiation remotely installed applications spacecraft undersea applications system must able survive without field service technicians make repairs general rule avoiding problems caused operating environments keep component count number pins low possible stack machines low system complexity high levels integration well standing extreme operating environments cost cost processor may important low medium performance systems 
since cost chip related number transistors number pins chip low complexity stack processors inherent cost advantage high performance systems cost processor may over whelmed cost multilayered printed circuit boards support chips high speed memory chips cases low system complexity stack machines provides additional advantages computing performance computing performance realtime embedded control environment simply instructionspersecond rating raw computational performance important factors make break system include interrupt response characteristics context swapping overhead additional desirable characteristic good performance programs riddled procedure calls means reducing program memory size even cost fast memory chips object lack cubic inches printed circuit board real estate may force program small memory space previous discussions characteristics stack machines show excel areas", "url": "RV32ISPEC.pdf#segment109", "timestamp": "2023-11-05 21:10:48", "segment": "segment109", "image_urls": [], "Book": "stack_computers_book" }, { "section": "8.2 16-BIT VERSUS 32-BIT HARDWARE ", "content": "fundamental decision stack processor select particu lar application size processor data elements 16 bits 32 bits decision 16 32bit processors driven factors cost size performance", "url": "RV32ISPEC.pdf#segment110", "timestamp": "2023-11-05 21:10:48", "segment": "segment110", "image_urls": [], "Book": "stack_computers_book" }, { "section": "8.2.2 32-bit hardware sometimes required Most traditional real-time control applications are well served by 16-bit processors. They offer high processing speed in a small system at minimum cost. Of course, part of the reason that traditional applications are well served by 16-bit processors is that capable 32-bit processors have not been widely available for very long. As the more capable 32-bit processors come into greater usage, new application areas will be discovered to put them to good use. 
32-bit stack processors should be used instead of 16-bit processors only in cases where the application requires high efficiency at one or more of the following: 32-bit integer calculations, access to large amounts of memory, or floating point arithmetic. 32-bit integer calculations are obviously well suited to a 32-bit processor. Occasions where 32-bit integers are required include graphics and manipulation of large data structures. While a 16-bit processor can simulate 32-bit arithmetic using double-precision operands, 32-bit processors are much more efficient. While 16-bit processors can use segment registers to access more than 64K elements of memory, this technique becomes awkward and slow if it must be used frequently. A program that must continually change the segment register to access data structures (especially single data structures that are bigger than 64K in size) can waste a considerable amount of time computing segment values. Even worse, since the addresses that must be manipulated when computing data record locations are greater than 16 bits wide, address computations are also slower because of all the double-precision math involved. A 32-bit processor can offer a linear 32-bit address space with accompanying quick address calculations on a 32-bit data path. Floating point calculations also require a 32-bit processor for good efficiency. 
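The segmented-addressing overhead just described can be made concrete with a small arithmetic sketch (illustrative only; the 8086-style 16-byte paragraph granularity, the base address, and the record size below are assumptions, not figures from the book):

```python
# Hypothetical numbers for illustration: a record array larger than 64K.
BASE = 0x20000        # assumed base address of the array (fits in 20 bits)
RECORD_SIZE = 24      # assumed bytes per record

def linear_address(index):
    """32-bit flat address: one multiply and one add on a 32-bit data path."""
    return BASE + index * RECORD_SIZE

def segmented_address(index):
    """16-bit style segment:offset pair (8086-like, 16-byte paragraphs).

    The full byte address still has to be computed with double-precision
    math and then split across two 16-bit quantities; crossing a 64K
    boundary inside a data structure forces a segment register reload.
    """
    byte_addr = BASE + index * RECORD_SIZE
    segment = byte_addr >> 4        # value loaded into the segment register
    offset = byte_addr & 0xF        # remaining low bits
    return segment, offset

# Both schemes name the same byte, but the segmented one does more work:
for i in (0, 3000, 30000):
    seg, off = segmented_address(i)
    assert (seg << 4) + off == linear_address(i)
```

The point is not the Python itself but the shape of the arithmetic: the segmented path recomputes and splits the address on every access, which is the time spent "computing segment values" that the text refers to.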
16-bit processors spend a significant amount of time manipulating stack elements when dealing with floating point numbers, whereas 32-bit ", "content": "processors are naturally suited to that size of data element. In many instances, scaled integer arithmetic is appropriate in place of floating point numbers to increase speed, and in those cases a 16-bit processor may suffice. However, floating point math must often be used to reduce the cost of a programming project or to support code written in high level languages. Also, with the advent of fast floating point processing hardware, the traditional speed advantage of integer operations over floating point operations is decreasing. The disadvantages of 32-bit processors are cost and system complexity. 32-bit processor chips tend to cost more, since they have more transistors and pins than 16-bit chips. They also require 32-bit-wide program memory and a generally larger printed circuit board than 16-bit processors, and they have less room on-chip for extra features such as hardware multipliers, although these items will appear on-chip as fabrication technology gets denser.", "url": "RV32ISPEC.pdf#segment111", "timestamp": "2023-11-05 21:10:48", "segment": "segment111", "image_urls": [], "Book": "stack_computers_book" },
{ "section": "8.3 SYSTEM IMPLEMENTATION APPROACHES ", "content": "Once the decision has been made between a 16-bit and a 32-bit processor, there still remains the choice of selecting a manufacturer. The seven stack machines covered in detail in this book each embody a different set of tradeoffs in the areas of system complexity, flexibility, and performance, and these tradeoffs reflect suitability for different applications. One of these tradeoffs is the decision between hardwired and microcoded control.", "url": "RV32ISPEC.pdf#segment112", "timestamp": "2023-11-05 21:10:48", "segment": "segment112", "image_urls": [], "Book": "stack_computers_book" },
{ "section": "8.3.1 Hardwired systems vs. microcoded systems The question of whether the control circuitry should be hardwired or microcoded is an old debate within all computing circles. The advantage of the hardwired approach is that it can be faster for executing those instructions that are directly supported by the system. 
The disadvantage is that hardwired machines tend to only support simple instructions, and must often execute many instructions to synthesize a complex operation. Microcoded machines are more flexible than hardwired machines. This is because an arbitrarily long sequence of microcode may be executed to implement very complicated instructions. Each instruction may be thought of as a subroutine call to a microcoded procedure. In machines with microcode RAM, the instruction set may be enhanced with application specific instructions to provide significant speed increases for a particular program. The hardwired stack machines all support some rather complex stack operations that are combinations of data stack manipulations, arithmetic operations, and subroutine exits. This is accomplished by manipulating different fields in the instruction format. To the degree that this is possible, the hardwired machine instruction formats are rather like microcode. In fact, Novix has called the NC4016 instructions a form of 'external microcode'. 
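A rough sketch of what such field-packed instructions look like may help (the field layout below is invented for illustration and is not the actual NC4016 encoding):

```python
# Hypothetical 16-bit instruction word split into independent control fields,
# in the spirit of the "external microcode" idea: one word can specify an ALU
# operation, a stack adjustment, and a subroutine exit all at once.
# (Field positions and widths are assumptions, not the real NC4016 format.)

ALU_OPS = {0: "nop", 1: "add", 2: "sub", 3: "and", 4: "or", 5: "xor"}

def decode(word):
    """Split a 16-bit word into the (assumed) fields a hardwired machine
    would route directly to its functional units in parallel."""
    return {
        "alu": ALU_OPS.get((word >> 12) & 0xF, "nop"),  # bits 15..12
        "pops": (word >> 10) & 0x3,                     # bits 11..10
        "ret": bool((word >> 9) & 0x1),                 # bit 9: fold in exit
        "literal": word & 0x1FF,                        # bits 8..0
    }

# "add, drop one cell, and return from subroutine" packed into one word:
fields = decode((1 << 12) | (1 << 10) | (1 << 9))
assert fields == {"alu": "add", "pops": 1, "ret": True, "literal": 0}
```

Because the fields are independent, a hardwired machine can act on all of them in a single cycle, which is what makes such combined operations nearly free.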
In the microcoded stack machines, simple operations such as additions ", "content": "may often take longer than on a hardwired machine, but complicated opcodes such as double-precision arithmetic operations can be packed into a single instruction. On hardwired machines such complex instructions are not available, so microcoded machines can run faster by providing special complex opcodes. In general, the increased flexibility does not eliminate the raw speed gap between the two kinds of processors, and no final conclusion about which type of processor is faster for a particular application can be drawn without evaluating both approaches. The important point is to perform a careful evaluation of the requirements of the application before selecting a stack processor.", "url": "RV32ISPEC.pdf#segment113", "timestamp": "2023-11-05 21:10:48", "segment": "segment113", "image_urls": [], "Book": "stack_computers_book" },
{ "section": "8.3.2 Integration level and system cost/performance In addition to exploring the implementation tradeoffs between hardwired control and microcoded control, the 16-bit stack processors discussed in Chapter 4 display the full range of integration level decisions. Integration level is the amount of system hardware that is placed onto the processor chip. The more system functions that are placed on the processor chip, the higher the integration level. Also at issue, however, are the cost/performance tradeoffs made in the design with respect to the minimum number and type of components necessary to run the system. The WISC CPU/16 displays the lowest integration level of those processors examined. It uses off-the-shelf building blocks to create a processor with dozens of components. Of course, this design approach eliminates the need to repay the large initial chip layout investment required when producing a single-chip version. The MISC M17 is a simple single-chip stack processor. Since it uses program memory for its stacks, only the processor chip and program memory are required for operation. The integration level is reasonably high, and the system complexity is low. 
The penalty paid for the simplicity of the design is that speed is somewhat slower than what is possible with separated stack memories. The Novix NC4016 is also a single-chip processor, and has an integration level comparable to that of the M17. Not surprisingly, both processors are fabricated using gate arrays of roughly comparable sizes. The major distinction of the NC4016 is that it uses separate memory chips for both stacks. Separate stack memories provide faster potential processing rates for a given clock speed because of the increased memory bandwidth available, but at the cost of requiring more components at the system level. The Harris RTX 2000 increases the level of system integration beyond the NC4016 by including on-chip stack memories. This actually reduces system complexity while providing potential speed increases, since on-chip memory can be faster than off-chip memory. The cost is more transistors on the chip. However, these extra transistors do not necessarily increase chip size by much. This is because the RTX 2000 uses a different design methodology called standard cell design that is well suited to providing on-chip memories. 
In fact, RTX 2000 customized systems can be designed that ", "content": "include program memory as well as stack memory on-chip, providing a single-chip stack computer system. It is likely that more stack computers will be designed in the future with differing tradeoffs in the areas of data path width (16-bit and 32-bit widths for general processing, perhaps 24-bit widths for signal processing, and 36-bit widths for tagged data architectures), level of system integration, required off-chip support, and raw performance. All these characteristics must be taken into consideration in matching a processor selection to the cost and performance requirements of a target application.", "url": "RV32ISPEC.pdf#segment114", "timestamp": "2023-11-05 21:10:48", "segment": "segment114", "image_urls": [], "Book": "stack_computers_book" },
{ "section": "8.4 EXAMPLE APPLICATION AREAS ", "content": "The application areas for stack computers, like those for computers in general, are limited only by the imagination. Applications that seem especially well suited to stack machines include the following. Image processing: object recognition, including optical character recognition, thumb print recognition, and handwriting recognition, as well as image enhancement, requires extremely powerful processors and has wide application; many commercially interesting applications require a processor that is also small, inexpensive, and portable. Robotics: controllers for robot arms with 5 or 6 joints (degrees of freedom) typically use a microcontroller for each joint plus a more powerful processor for centralized control; more powerful microcontrollers at each joint could perform complex positional calculations in real time, and in a mobile system small size and low power consumption are vital. Digital filters: filters require high speed multiplications to keep up with high data flow rates, and stack processors have room for on-chip hardware multipliers or algorithm specific hardware to quickly perform digital filter calculations. Process control: powerful processors can go beyond simple process control techniques and apply expert system technology to real-time process monitoring and control, and stack machines are particularly well suited to rule-based systems. Computer graphics: several special-purpose graphics accelerator chips are on the market, but they tend to concentrate on primitives such as drawing lines and moving blocks of bits; an exciting opportunity in this area is interpreting high level graphics command languages for laser printers and device independent screen display languages, and one of the predominant such languages, PostScript, is similar to Forth. Computer peripherals: the low system cost of a stack machine makes it well suited to controlling computer peripherals such as disk drives and communication links. Telecommunications: high speed controllers can provide the capability for data compression, and therefore lower transmission costs, in telefax and modem applications, and can also monitor the performance of transmission equipment. Automotive control: the automotive market places severe restrictions on cost and environmental requirements; in this business a minute difference in cost per component can add up to large profits or losses, so a high level of system integration is mandatory, and computers can improve car performance and safety while even reducing system cost, with applications including computerized ignition, braking, fuel distribution, anti-theft devices, collision alert systems, and dash display systems. Consumer electronics: consumer electronics are even more sensitive than automotive products to pricing and system integration level; anyone who has taken apart an inexpensive calculator or digital watch knows what miracles can be accomplished with a few pieces of plastic and a single chip, and opportunities to use high speed, portable, inexpensive stack processors abound in music synthesis and MIDI compatible devices, compact laser disk sound and video playback devices, digital tape devices, slow scan television via telephone lines, interactive cable TV services, and video games. Military and spaceborne control applications: spaceborne applications may be used for commercial purposes, but their reliability and environmental requirements resemble those of many military applications; stack processors are well suited to high speed control applications involving missiles and aircraft, and in addition there are applications in acoustic and electronic signal processing, image enhancement, communications, fire control, and battlefield management. Parallel processing: preliminary research shows that stack machines can execute functional programming languages efficiently, and programs written in these languages have a great deal of inherent parallelism that may be exploited by a multiprocessor stack machine system. The future of stack computers: the stack machines reviewed in earlier chapters represent the first generation of commercially available stack processors. As these machines come into wide use, designs will be refined to meet market requirements and to improve efficiency. The questions addressed in this chapter are: what kinds of refinements are we likely to see, and how will they affect stack machine architectures and applications? The only way to answer these questions for certain is to see how stack machines perform in many different circumstances. However, there are a number of important topics upon which we can speculate, and the opinions and reasoning presented herein may form a basis for exploration of stack machine concepts. The ideas in this chapter should be taken as speculations, not proven facts. Section 9.1 discusses areas that need to be examined in providing support for conventional programming languages on stack machines; it turns out that existing stack machine designs handle most of the problems well already. Section 9.2 discusses the issue of virtual memory and memory protection. Virtual memory support is not found in current stack machines, but may be needed for some application areas; memory protection, also not supported, will be needed in some applications of the future. Section 9.3 examines the need for a third stack, and proposes a memory-resident stack frame to meet the need for both a third stack and conventional language support at the same time. Section 9.4 discusses the impending limits of memory bandwidth and the history behind the use of memory hierarchies in computers. Stack machines offer a solution to the memory bandwidth problem that is well suited to important application areas. Section 9.5 introduces two ideas for stack machine design that are intriguing but not used in current designs. One idea involves the elimination of conditional branches by using conditional subroutine returns instead. The other involves using a stack to hold temporarily assembled programs. Section 9.6 offers speculation on the impact of stack machines on computing.", "url": "RV32ISPEC.pdf#segment115", "timestamp": "2023-11-05 21:10:49", "segment": "segment115", "image_urls": [], "Book": "stack_computers_book" },
{ "section": "CHAPTER 9 ", "content": "", "url": "RV32ISPEC.pdf#segment116", "timestamp": "2023-11-05 21:10:49", "segment": "segment116", "image_urls": [], "Book": "stack_computers_book" },
{ "section": "9.1 SUPPORT FOR CONVENTIONAL LANGUAGES ", "content": 
"initial market stack machines realtime control area high level system integration possible stack machines may also lead use low cost high performance coprocessor cards personal computers lowend workstations well coprocessor cards may well appli cation specific certain class problems even single important software package environments require running much application code conventional programming languages conventional languages implemented easily stack machines problem pure stack machines probably perform quite well register machines running conventional programs written normal programming style problem mostly one mismatch stack machine capabilities requirements conventional languages conventional programs tend use pro cedure calls large numbers local variables stack machines tend good running programs many small procedures local variables part difference due programming styles encour aged common practice structure conventional programming languages extent difference register machines well suited generalpurpose data processing whereas stack machines perform best realtime control environment rate performance conventional languages stack machines brought close even highest performance register machines applications providing modest level hardware support idea course approximately match registerbased machines best sacrificing features make stack machines better areas", "url": "RV32ISPEC.pdf#segment117", "timestamp": "2023-11-05 21:10:49", "segment": "segment117", "image_urls": [], "Book": "stack_computers_book" }, { "section": "9.1.1 Stack frames The issue, then, is to identify high level language structures that require additional hardware support. Strangely, the high level language run-time stack is the only major area in which pure stack machines fail to support conventional high level languages. 
This is because high level languages have the notion of 'activation records', which are 'frames' of elements pushed onto a software-managed stack in program memory upon every subroutine call. In the usual implementation, each stack frame is allocated only once as a large chunk of memory in the preamble to the subroutine being called. This frame contains the input parameters (which were actually allocated by the calling routine), the user declared local variables, and any compiler generated intermediate variables that might be necessary. During the course of the subroutine, arbitrary accesses are made within the stack frame without actually performing any pushes or pops. The stack frame is used as a temporary memory allocation device for subroutine calling, not as a traditional pushdown stack for storing intermediate results between successive calculations. This means that it is incompatible with hardware stacks built into stack machines such as those we have been studying. An obvious approach to modify stack machines to meet this ", "content": "requirement would be to build a primary stack that is allocated in large chunks and allows random access within each frame. This is precisely what the RISC machines with register windows, described as SL2 machines in Chapter 2, do to solve the problem. There is no reason, however, to make stack machines pay on all data accesses the penalties associated with operand fields in the instruction format. An alternative approach is to build a secondary hardware stack for accessing local variables. While it would be slower than the primary stack, most data manipulations would still be done on the LIFO hardware data stack. This would give us the best of both worlds, but not without cost: such a secondary register frame stack would in general need to be 5 to 10 times as big as the data stack to have good operating characteristics.", "url": "RV32ISPEC.pdf#segment118", "timestamp": "2023-11-05 21:10:49", "segment": "segment118", "image_urls": [], "Book": "stack_computers_book" },
{ "section": "9.1.2 Aliasing of registers and memory If this were the end of the tradeoff discussion, we might still be tempted to build a chip with on-chip frames. 
But, there is a deciding factor which tilts the balance in favor of placing the stack frames in program memory. That additional factor is that the semantics of conventional languages allow access to these local variables by memory address. The C language is notorious for this problem, which affects register machines and stack machines alike. While the aliasing of registers to memory addresses can be handled with clever hardware or compilers, the costs in hardware and/or software complexity are not in keeping with the stack machine design philosophy of maximum performance with minimum complexity. Therefore, the best choice for a stack machine is to maintain the conventional language stack frames in program memory, with a Frame Pointer register available as a hardware pointer for stack frame accesses. If chip space is plentiful, a stack machine might provide on-chip RAM as part of the program memory space to speed up access to the stack frames. It is indeed tempting to write complex compilers in an attempt to keep most local variables on the hardware stack while executing conventional languages. An experiment by this author with some stack machines has shown, however, that the difference between a C compiler that keeps all local variables on the hardware data stack and one that uses program memory with a frame pointer is small. In fact, if the machine has a hardware frame pointer, keeping the frames in program memory is actually somewhat faster. Normally, one would think that keeping local variables on the hardware data stack would be faster than placing them in memory. The reason this is not so is that stack machines are relatively inefficient at accessing deeply buried elements, especially if they may have been spilled out of the stack buffer and into program memory. Much of the time in the all-on-hardware-stack approach is spent thrashing elements about on the stack. 
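The frame-pointer arrangement just described can be sketched in a few lines (a simulation with invented helper names, not code for any particular machine): locals live in a memory-resident frame addressed FP-relative, and the whole frame is allocated and freed in one chunk.

```python
# Minimal sketch (assumptions: flat "program memory" as a Python list,
# hypothetical helper names). Conventional-language locals live in a
# memory-resident frame addressed relative to a frame pointer, rather
# than being buried on the hardware data stack.

memory = [0] * 1024   # simulated program memory holding the frame stack
fp = 1024             # frame pointer; frames grow downward

def enter(nlocals):
    """Subroutine preamble: allocate the whole frame in one chunk."""
    global fp
    fp -= nlocals
    return fp

def leave(nlocals):
    """Subroutine exit: free the frame in one chunk."""
    global fp
    fp += nlocals

def local_get(offset):          # random access within the frame --
    return memory[fp + offset]  # no pushes or pops required

def local_set(offset, value):
    memory[fp + offset] = value

# A call with three locals: arbitrary reads/writes by FP-relative offset.
enter(3)
local_set(0, 7)                  # e.g. an input parameter
local_set(1, local_get(0) * 6)   # a declared local
assert local_get(1) == 42
leave(3)
assert fp == 1024
```

Note that no element is ever pushed or popped during the body of the call; that is the property that makes the frame incompatible with a pure LIFO hardware stack, and cheap to support with a single Frame Pointer register.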
While access to memory-resident frame elements is somewhat slower than access to data on the hardware stack, in the end the savings on stack manipulations make up the difference. ", "content": "", "url": "RV32ISPEC.pdf#segment119", "timestamp": "2023-11-05 21:10:49", "segment": "segment119", "image_urls": [], "Book": "stack_computers_book" },
{ "section": "9.1.4 Conventional language execution efficiency From this discussion, we can see that stack machines can be reasonably efficient at supporting conventional programming languages with their stack frame approach. Of course, stack machines that use a hardware frame pointer into program memory cannot be expected to be as efficient as RISC machines with massive on-chip register windows for direct frame support, or with exceedingly clever optimizing compilers that do sophisticated global register allocation. To the extent that programs in conventional languages conform to the models used by high performance register machines, stack machines will seem to perform poorly. Aspects of conventional programs that cause this effect include: large segments of straight-line code, near-100% cache hit ratios, large numbers of local variables used across long procedures, and shallowly nested procedures that can be compiled as in-line code. To the extent that programs use structures which run efficiently on stack machines, stack machines can approach or even exceed the performance of register-based machines. Code which is run well by stack machines contains: highly modular procedures with many levels of nesting and perhaps recursion; a relatively small number of frequently used subroutines and inner loops that may be placed in fast memory, and perhaps provided with microcode support; small numbers of local variables passed down through many layers of interface routines; and deeply nested subroutine calls. Also, programs that operate in environments with frequent interrupts and context switches can benefit from using stack machines. ", "content": "A practical method of using conventional languages on stack machines is to adopt the traditional approach of implementing the bulk of the program in a moderately efficient high level language, then recoding the inner loops of the program in assembly language. On a machine that affords high performance for a modest amount of effort, as is the case in projects where stack machines are needed for their excellent real-time processing characteristics, this approach yields the maximum processing speed for the programmer time invested. One should also keep in mind that the reasons to select a computer go beyond raw processing speed on single programs. Reasons that might tilt the balance in favor of stack machines include interrupt processing speed, task switching speed, low overall system complexity, and the need for application specific microcode and/or hardware support. In the final analysis, stack machines can probably run many conventional language programs about as quickly as register-based machines once all these considerations are weighed, and those considerations largely cancel the drawbacks, especially in real-time control applications, making stack machines an excellent alternative.", "url": "RV32ISPEC.pdf#segment120", "timestamp": "2023-11-05 21:10:49", "segment": "segment120", "image_urls": [], "Book": "stack_computers_book" },
{ "section": "9.2 VIRTUAL MEMORY AND MEMORY PROTECTION ", "content": "The concepts of virtual memory and memory protection have not yet been widely incorporated into existing stack machines. Stack machine applications to date have involved relatively small programs whose tight constraints on hardware and software requirements leave no room for these techniques.", "url": "RV32ISPEC.pdf#segment121", "timestamp": "2023-11-05 21:10:49", "segment": "segment121", "image_urls": [], "Book": "stack_computers_book" },
{ "section": "9.2.2 Virtual memory is not used in controllers Likewise, there is no reason why stack machines cannot be provided with virtual memory capabilities. 
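As background for the third-stack discussion in Section 9.3: loop counters on current stack machines are generally kept as the top element of the return address stack. That Forth discipline (the words >R, R@, and R>) can be simulated briefly (illustrative Python, not machine code):

```python
# Illustrative simulation of Forth-style use of the return stack to hold a
# loop counter. The counter sits on top of the return-address stack for the
# duration of the loop, so no third hardware stack is needed.

data_stack = []
return_stack = []

def to_r():            # Forth >R : move top of data stack to return stack
    return_stack.append(data_stack.pop())

def r_fetch():         # Forth R@ : copy top of return stack (the loop index)
    data_stack.append(return_stack[-1])

def r_from():          # Forth R> : move top of return stack back
    data_stack.append(return_stack.pop())

return_stack.append(0x1234)   # pretend caller's return address

# Sum 0..4 with the counter kept on the return stack:
total = 0
data_stack.append(0)          # initial index
to_r()
for _ in range(5):
    r_fetch()                 # index available without burying data below it
    total += data_stack.pop()
    return_stack[-1] += 1     # increment the counter in place
r_from()
data_stack.pop()              # discard final index
assert total == 0 + 1 + 2 + 3 + 4
assert return_stack == [0x1234]   # return address undisturbed
```

Because subroutines and loops are mutually well nested, the counter on top of the return stack never interferes with the return addresses beneath it.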
The one problem with virtual memory is the ", "content": "effect of a virtual memory miss, which may require retrying an instruction. Since stack machines are in essence load/store machines, instruction restartability is no harder than on a RISC machine. In fact, since handling interrupts is quicker on a stack machine because of the lack of an instruction pipeline, stack machines may actually be better at handling virtual memory. The reason stack machines are not designed with virtual memory is simple: stack machines are targeted at real-time control applications, where the performance variations and the large hard disk and hardware requirements associated with virtual memory are simply inappropriate for a real-time embedded control environment.", "url": "RV32ISPEC.pdf#segment122", "timestamp": "2023-11-05 21:10:49", "segment": "segment122", "image_urls": [], "Book": "stack_computers_book" }, { "section": "9.3 THE USE OF A THIRD STACK ", "content": "An often proposed design alternative for stack machines is the use of a third hardware stack. The purposes usually given for adding a third hardware stack are storage for loop counters and local variables. Loop counters on current stack machines are generally kept as the top element of the return address stack. Subroutines and loops are mutually well nested, and it is considered bad programming style for a subroutine to attempt to access the loop index of a parent procedure, so there is conceptual merit to this arrangement. While keeping loop indices on a separate stack would avoid cluttering the return stack with non-address data, the performance and program effectiveness gains are not sufficient to justify the hardware expense. Local variable storage is another issue. Even when using the Forth language, programmers have found that the concept of compiler-managed local variables can make programs easier to create and maintain. In order to work efficiently, the hardware needs access to stack-allocated frames with random access to locations within each frame. This requirement is much like that of supporting conventional languages. In fact, the best solution is probably not a third hardware stack, but rather stack machines that support a frame pointer into a software-managed stack in program memory. The same stack used for conventional language support can then support local variables for Forth.", "url": "RV32ISPEC.pdf#segment123", "timestamp": "2023-11-05 21:10:49", "segment": "segment123", "image_urls": [], "Book": "stack_computers_book" }, { 
"section": "9.4 THE LIMITS OF MEMORY BANDWIDTH ", "content": "Perhaps the greatest challenge faced by computer architects over the years has been the problem of memory bandwidth. Memory bandwidth is the amount of information that can be transferred to and from memory per unit time. Put another way, memory bandwidth determines how often values can be accessed from memory. The crux of the issue is that program memory is usually much bigger, in terms of number of transistors or devices, than the processor. This means that the CPU can easily be made faster than memory while keeping within a budget. This comes from the general observation that faster components tend to be more expensive, consume more power, etc., for a given fabrication technology and geometry.", "url": "RV32ISPEC.pdf#segment124", "timestamp": "2023-11-05 21:10:49", "segment": "segment124", "image_urls": [], "Book": "stack_computers_book" }, { "section": "9.4.2 Current memory bandwidth concerns With the latest generations of computers, there is a new problem. Cache memory chips will not be fast enough to keep up with processors of the future. This is not really because processor transistor switching speeds are ", "content": "increasing faster than memory chip speeds. The problem is probably that the pins on processor and memory chips are beginning to dominate the timing picture. The way that pins become a bottleneck is that while transistors keep getting smaller and faster as chips become denser, pins, which must be soldered or otherwise connected together, cannot become much smaller except with exotic packaging technologies. The number of electrons that must be pushed through a pin and the wire connected to that pin becomes significant compared to the ability of the transistors to push those electrons. As pins become the bottleneck, off-chip memory can end up an order of magnitude slower than on-chip memory solely because of the delays introduced by going between chips. We may reach a situation where off-chip memory is too slow to keep the processor busy, which creates the need for an additional layer of fast-response memory: on-chip cache memory. Unfortunately, there is a fundamental problem with this approach compared to previous memory approaches. Printed circuit boards may be made quite large without problems: the yield of a circuit board varies linearly with the number of chips, and circuit boards are repairable when defects are discovered. Unfortunately, the yield of chips grows exponentially worse with area, and chips are not easily repairable. When using separate cache memory chips, adding chips to a printed circuit board can provide as much cache memory as needed, within reason. However, in a single-chip system without enough on-chip cache memory, increasing the chip size to provide more memory can make the processor unmanufacturable because of yield problems. The tremendous amounts of memory needed by high-speed processors, especially RISC machines, seem to indicate that the best we may hope for is a modest amount of high-speed cache memory on-chip and a large amount of slower off-chip cache memory, with program performance at the mercy of the hit ratios of two different caches.", "url": "RV32ISPEC.pdf#segment125", "timestamp": "2023-11-05 21:10:49", "segment": "segment125", "image_urls": [], "Book": "stack_computers_book" }, { "section": "9.4.3 The stack machine solution Stack machines provide a much different way to solve the problem. Conventional machines with their caches attempt to capture bits and pieces of programs as they are run. This is part of an attempt to reuse already fetched instructions as they are executed in loops or very frequently called procedures. Part of the problem that interferes with cache performance is that conventional programming languages and compilers produce rambling, sprawling code. Stack machines, on the other hand, encourage the writing of compact code with much reuse of procedures. The impact of the way stack machine programs behave is that stack machines should not use dynamically allocated cache memory. They instead should use small, statically allocated or operating system managed program memory for high speed execution of selected subroutines. 
Frequently used subroutines may be placed in these portions of program memory, which can be used more freely by the compiler and the user with the knowledge that ", "content": "they will run quickly. Since stack machine code is compact, a significant amount of a program can reside in high speed on-chip memory. This encourages the use of modular, reused procedures, with the knowledge that this will actually help performance instead of hurting it, as is often the case on other machines. Of course, since on-chip program memory does not need the complex and bulky control circuitry required to manage a cache, there is room available for extra program memory. With 16-bit stack processors, it is quite reasonable for an entire real-time control program and its data memory, including local variables, to reside entirely on-chip. As process technology reaches submicron levels, this will begin to be true for 32-bit stack processors as well. To consider how this different approach to memory hierarchies might work, consider a microcoded machine such as the RTX 32P. Large dynamic RAMs may be used to contain the bulk of program and data. Actually this is an extreme case, since programs for the RTX 32P seldom need even the capacity of static memory chips, but let us assume it is true anyway. The dynamic RAM forms the storage element for the highest levels of the program, which are executed infrequently, and for data that is accessed sparsely or infrequently. Next, static memory chips can be added to the system for the medium-level layers of the program, which are executed fairly frequently, and also for program data that is manipulated frequently. This data may be permanently resident in that memory, or may be copied to and from dynamic memory for the period of time it is needed. In practice there may be two levels of static memory chips, large slow ones and small fast ones, with different power, cost, and printed circuit board space characteristics. On-chip program memory comes next in the hierarchy. The inner loops and most important procedures of the program can reside there for quick access by the processor. Several hundred bytes of program RAM can easily fit onto the processor chip, along with data for that program. In the case of chips that run dedicated programs, as is often the case in a real-time embedded system environment, several thousand bytes of program ROM may reside on-chip. In practice, the language in use has many common subroutines in ROM to assist the programmer and compiler. Finally, the microcode memory resides on-chip for the actual control of the CPU. In a sense this memory hierarchy makes microcode memory just another level of program memory: it contains the most frequently executed actions of the processor, those that correspond to supported machine instructions, and a mixture of ROM and RAM is appropriate. Of course, the data stack acts as a fast access device for holding intermediate computation results. This layered hierarchy of memory sizes and speeds throughout the system is not a new concept. What is new is the thought that it need not be managed by hardware at run time; the compiler or programmer can easily manage it instead. This is the key point: since stack programs are small, significant amounts of code can reside at each level of a statically allocated hierarchy, and since stack machines support fast procedure calls, only the inner loops and small segments of code that are frequently executed need to be stored in high-speed memory, not the entire bulk of user-defined procedures. This means that dynamic memory allocation is not really required.", "url": "RV32ISPEC.pdf#segment126", "timestamp": "2023-11-05 21:10:50", "segment": "segment126", "image_urls": [], "Book": "stack_computers_book" }, { "section": "9.5 TWO IDEAS FOR STACK MACHINE DESIGN ", "content": "There are two interesting stack machine design details that are not in common usage, but may prove useful in future designs.", "url": "RV32ISPEC.pdf#segment127", "timestamp": "2023-11-05 21:10:50", "segment": "segment127", "image_urls": [], "Book": "stack_computers_book" }, { "section": "9.5.2 Use of the stack for holding code Another interesting proposal for stack machine program execution was put forth by Tsukamoto (1977). He examined the conflicting virtues and pitfalls of self-modifying code. While self-modifying code can be very efficient, it is almost universally shunned by software professionals as being too risky. Self-modifying code corrupts the contents of a program, so that the programmer cannot count on an instruction generated by the compiler or assembler being correct during the full course of a program run. Tsukamoto's idea allows the use of self-modifying code without the pitfalls. He simply suggests using the run-time stack to store modified program segments for execution. 
Code can be generated by the application program and executed at run-time, yet does not corrupt the program ", "content": "memory. The code is executed from the stack and then thrown away by simply popping the stack. Neither of these techniques is in common use today, but either one may eventually find an important application.", "url": "RV32ISPEC.pdf#segment128", "timestamp": "2023-11-05 21:10:50", "segment": "segment128", "image_urls": [], "Book": "stack_computers_book" }, { "section": "9.6 THE IMPACT OF STACK MACHINES ON COMPUTING ", "content": "We have seen that stack machines are at least as fast as register-based machines in terms of raw instructions executed per second, and that they also display superior characteristics for real-time control applications. They still fall short of register-based machines in environments that use many local variables in programs with little use of nested procedure calls. The question of whether RISC, CISC, or stack machines are best is not an appropriate query; each of these design techniques has its place among different applications. Stack machines do not seem best suited for use as the primary CPUs in the workstation and minicomputer markets, and for this reason they may receive less attention than they perhaps deserve in the areas to which they are well suited. As a second thought, though, we may speculate that the problem in supporting these sorts of computing tasks lies not with stack machines, but rather with current programming practices. Consider the kinds of programs that stack machines run well: highly modular programs with many small, deeply nested procedures; programs that pass a small number of variables between procedures, hiding the details of operation; programs that frequently reuse small procedures to reduce program size and complexity; programs that are easily debugged because of small procedure size; and programs that present a uniform level of interface at all levels of module abstraction, from high level subroutines down to instructions. These characteristics seem desirable, yet unfortunately they are seldom practiced today. Perhaps the use of stack machines can help improve the situation. It may be that a deep-seated knowledge of the strong points of register-based machines has formed the characteristics of conventional programming languages and their use on that hardware: procedure calls are used infrequently because they are time consuming; procedures are made somewhat long; and language syntax, instead of making procedures extremely simple to define and use, requires various amounts of specification code, parameter lists, typing information, and the like. Even project management styles can require separate paperwork and formal procedures each time a new subroutine is created. So using many small procedures is made difficult in small degrees, fewer procedures are used, and the cycle is fueled further. Stack machines present an opportunity to change this cycle. On them, procedure calls are extremely inexpensive, and languages such as Forth provide a minimum of overhead for defining new procedures; they actually provide an environment that encourages the development and testing of modularized, easily tested code. What may be needed is the design of new programming languages well suited to high level language machines (Chen et al. 1980), and to stack machines in particular. Before that happens, we may see extensions and variations of traditional languages that incorporate control structures to better exploit stack machine hardware. Register-based machines offer performance rewards for poorly structured programs, often at the cost of harder maintenance, more difficult debugging, and increased program size. By rewarding programmers for writing well structured code, stack machines may encourage better programming practices, which in turn may influence the evolutionary paths of conventional languages toward providing better means of creating, maintaining, and executing programs.", "url": "RV32ISPEC.pdf#segment129", "timestamp": "2023-11-05 21:10:50", "segment": "segment129", "image_urls": [], "Book": "stack_computers_book" } ]