(or wrong)—yet—we think we do right by exploring the similarity between hypervisors and microkernels a bit more.

The main reason the first hypervisors emulated the complete machine was the lack of availability of source code for the guest operating system (e.g., for Windows) or the vast number of variants (e.g., for Linux). Perhaps in the future the hypervisor/microkernel API will be standardized, and subsequent operating systems will be designed to call it instead of using sensitive instructions. Doing so would make virtual machine technology easier to support and use.

The difference between true virtualization and paravirtualization is illustrated in Fig. 7-5. Here we have two virtual machines being supported on VT hardware. On the left is an unmodified version of Windows as the guest operating system. When a sensitive instruction is executed, the hardware causes a trap to the hypervisor, which then emulates it and returns. On the right is a version of Linux modified so that it no longer contains any sensitive instructions. Instead, when it needs to do I/O or change critical internal registers (such as the one pointing to the page tables), it makes a hypervisor call to get the work done, just like an application program making a system call in standard Linux.

[Figure 7-5. True virtualization and paravirtualization. On the left, an unmodified Windows guest runs on a type 1 hypervisor and traps on every sensitive instruction; on the right, a modified Linux guest runs on a microkernel-style hypervisor and traps only on explicit hypervisor calls.]

In Fig. 7-5 we have shown the hypervisor as being divided into two parts separated by a dashed line. In reality, only one program is running on the hardware. One part of it is responsible for interpreting trapped sensitive instructions, in this case, from Windows. The other part of it just carries out hypercalls. In the figure the latter part is labeled ‘‘microkernel.’’ If the hypervisor is intended to run only paravirtualized guest operating systems, there is no need for the emulation of sensitive instructions and we have a true microkernel, which just provides very basic services such as process dispatching and managing the MMU. The boundary between a type 1 hypervisor and a microkernel is vague already and will get even less clear as hypervisors begin acquiring more and more functionality and hypercalls, as seems likely. Again, this subject is controversial, but it is increasingly clear that the program running in kernel mode on the bare hardware should be small and reliable and consist of thousands, not millions, of lines of code.
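To make the contrast between the two approaches concrete, here is a minimal sketch in C of the two paths for one representative operation, loading a new page-table base. The hypercall number and the hypercall1 stub are hypothetical, not the ABI of any real hypervisor; the point is only that the paravirtualized guest contains no sensitive instruction at all, just an explicit request to the layer below it.

    /* Hypothetical guest-side code; HCALL_SET_PT_BASE and hypercall1()
     * are invented for illustration, not any real hypervisor's interface. */

    typedef unsigned long phys_addr_t;

    #define HCALL_SET_PT_BASE 1

    /* Provided by a hypervisor-specific stub; transfers control to the
     * hypervisor, much like a system call transfers control to a kernel. */
    extern long hypercall1(int number, unsigned long arg);

    /* Fully virtualized guest: executes the sensitive instruction itself.
     * On VT hardware this traps, and the hypervisor emulates it. */
    static void load_pt_base_native(phys_addr_t pt_base)
    {
        __asm__ volatile("mov %0, %%cr3" : : "r"(pt_base) : "memory");
    }

    /* Paravirtualized guest: no sensitive instruction, just a hypercall. */
    static void load_pt_base_paravirt(phys_addr_t pt_base)
    {
        hypercall1(HCALL_SET_PT_BASE, pt_base);
    }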
Paravirtualizing the guest operating system raises a number of issues. First, if the sensitive instructions are replaced with calls to the hypervisor, how can the operating system run on the native hardware? After all, the hardware does not understand these hypercalls. And second, what if there are multiple hypervisors available in the marketplace, such as VMware, the open source Xen originally from the University of Cambridge, and Microsoft’s Hyper-V, all with somewhat different hypervisor APIs? How can the kernel be modified to run on all of them?

Amsden et al. (2006) have proposed a solution. In their model, the kernel is modified to call special procedures whenever it needs to do something sensitive. Together these procedures, called the VMI (Virtual Machine Interface), form a low-level layer that interfaces with the hardware or hypervisor. These procedures are designed to be generic and not tied to any specific hardware platform or to any particular hypervisor.

An example of this technique is given in Fig. 7-6 for a paravirtualized version of Linux they call VMI Linux (VMIL). When VMI Linux runs on the bare hardware, it has to be linked with a library that issues the actual (sensitive) instruction needed to do the work, as shown in Fig. 7-6(a). When running on a hypervisor, say VMware or Xen, the guest operating system is linked with different libraries that make the appropriate (and different) hypercalls to the underlying hypervisor. In this way, the core of the operating system remains portable yet is hypervisor friendly and still efficient.

[Figure 7-6. VMI Linux running on (a) the bare hardware, linked with the VMIL/HW interface library that executes the sensitive instructions directly; (b) VMware, linked with a VMIL-to-VMware library that issues hypervisor calls; (c) Xen, linked with a VMIL-to-Xen library that issues hypervisor calls.]

Other proposals for a virtual machine interface have also been made. Another popular one is called paravirt ops. The idea is conceptually similar to what we described above, but different in the details. Essentially, a group of Linux vendors that included companies like IBM, VMware, Xen, and Red Hat advocated a hypervisor-agnostic interface for Linux. The interface, included in the mainline kernel from version 2.6.23 onward, allows the kernel to talk to whatever hypervisor is managing the physical hardware.
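The essence of VMI and paravirt ops can be sketched as a table of function pointers that the kernel calls instead of issuing sensitive instructions directly; the table is filled in once at boot with either native or hypervisor-specific implementations. The structure and names below are invented for illustration and are far simpler than the real interfaces.

    struct vmi_ops {
        void (*set_pt_base)(unsigned long pt_base);
        void (*flush_tlb_page)(unsigned long virt_addr);
        void (*disable_interrupts)(void);
        void (*enable_interrupts)(void);
    };

    /* Back ends: one issues the real (sensitive) instructions, the others
     * issue hypercalls to their respective hypervisors. They correspond to
     * the different libraries the kernel is linked with in Fig. 7-6. */
    extern const struct vmi_ops native_ops, vmware_ops, xen_ops;

    static struct vmi_ops ops;          /* selected once, early at boot */

    enum platform { NATIVE, VMWARE, XEN };

    void vmi_init(enum platform p)
    {
        switch (p) {
        case VMWARE: ops = vmware_ops; break;
        case XEN:    ops = xen_ops;    break;
        default:     ops = native_ops; break;
        }
    }

    /* The rest of the kernel never touches sensitive state directly. */
    void switch_address_space(unsigned long new_pt_base)
    {
        ops.set_pt_base(new_pt_base);
    }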
7.6 MEMORY VIRTUALIZATION

So far we have addressed the issue of how to virtualize the CPU. But a computer system has more than just a CPU. It also has memory and I/O devices. They have to be virtualized, too. Let us see how that is done.

Modern operating systems nearly all support virtual memory, which is basically a mapping of pages in the virtual address space onto pages of physical memory. This mapping is defined by (multilevel) page tables. Typically the mapping is set in motion by having the operating system set a control register in the CPU that points to the top-level page table. Virtualization greatly complicates memory management. In fact, it took hardware manufacturers two tries to get it right.

Suppose, for example, a virtual machine is running, and the guest operating system in it decides to map its virtual pages 7, 4, and 3 onto physical pages 10, 11, and 12, respectively. It builds page tables containing this mapping and loads a hardware register to point to the top-level page table. This instruction is sensitive. On a VT CPU, it will trap; with dynamic translation it will cause a call to a hypervisor procedure; on a paravirtualized operating system, it will generate a hypercall. For simplicity, let us assume it traps into a type 1 hypervisor, but the problem is the same in all three cases.

What does the hypervisor do now? One solution is to actually allocate physical pages 10, 11, and 12 to this virtual machine and set up the actual page tables to map the virtual machine’s virtual pages 7, 4, and 3 to use them. So far, so good. Now suppose a second virtual machine starts and maps its virtual pages 4, 5, and 6 onto physical pages 10, 11, and 12 and loads the control register to point to its page tables. The hypervisor catches the trap, but what should it do? It cannot use this mapping because physical pages 10, 11, and 12 are already in use. It can find some free pages, say 20, 21, and 22, and use them, but it first has to create new page tables mapping the virtual pages 4, 5, and 6 of virtual machine 2 onto 20, 21, and 22. If another virtual machine starts and tries to use physical pages 10, 11, and 12, it has to create a mapping for them. In general, for each virtual machine the hypervisor needs to create a shadow page table that maps the virtual pages used by the virtual machine onto the actual pages the hypervisor gave it.

Worse yet, every time the guest operating system changes its page tables, the hypervisor must change the shadow page tables as well. For example, if the guest OS remaps virtual page 7 onto what it sees as physical page 200 (instead of 10), the hypervisor has to know about this change. The trouble is that the guest operating system can change its page tables by just writing to memory. No sensitive operations are required, so the hypervisor does not even know about the change and certainly cannot update the shadow page tables used by the actual hardware.

A possible (but clumsy) solution is for the hypervisor to keep track of which page in the guest’s virtual memory contains the top-level page table. It can get this information the first time the guest attempts to load the hardware register that points to it, because this instruction is sensitive and traps. The hypervisor can create
a shadow page table at this point and also map the top-level page table and the page tables it points to as read only. Subsequent attempts by the guest operating system to modify any of them will cause a page fault and thus give control to the hypervisor, which can analyze the instruction stream, figure out what the guest OS is trying to do, and update the shadow page tables accordingly. It is not pretty, but it is doable in principle.

Another, equally clumsy, solution is to do exactly the opposite. In this case, the hypervisor simply allows the guest to add new mappings to its page tables at will. As this is happening, nothing changes in the shadow page tables. In fact, the hypervisor is not even aware of it. However, as soon as the guest tries to access any of the new pages, a fault will occur and control reverts to the hypervisor. The hypervisor inspects the guest’s page tables to see if there is a mapping that it should add, and if so, adds it and reexecutes the faulting instruction. What if the guest removes a mapping from its page tables? Clearly, the hypervisor cannot wait for a page fault to happen, because it will not happen. Removing a mapping from a page table happens by way of the INVLPG instruction (which is really intended to invalidate a TLB entry). The hypervisor therefore intercepts this instruction and removes the mapping from the shadow page table also. Again, not pretty, but it works.

Both of these techniques incur many page faults, and page faults are expensive. We typically distinguish between ‘‘normal’’ page faults that are caused by guest programs that access a page that has been paged out of RAM, and page faults that are related to ensuring that the shadow page tables and the guest’s page tables are in sync. The former are known as guest-induced page faults, and while they are intercepted by the hypervisor, they must be reinjected into the guest. This is not cheap at all. The latter are known as hypervisor-induced page faults and they are handled by updating the shadow page tables.

Page faults are always expensive, but especially so in virtualized environments, because they lead to so-called VM exits. A VM exit is a situation in which the hypervisor regains control. Consider what the CPU needs to do for such a VM exit. First, it records the cause of the VM exit, so the hypervisor knows what to do. It also records the address of the guest instruction that caused the exit. Next, a context switch is done, which includes saving all the registers. Then, it loads the hypervisor’s processor state. Only then can the hypervisor start handling the page fault, which was expensive to begin with. Oh, and when it is all done, it should reverse these steps. The whole process may take tens of thousands of cycles, or more. No wonder people bend over backward to reduce the number of exits.

In a paravirtualized operating system, the situation is different. Here the paravirtualized OS in the guest knows that when it is finished changing some process’ page table, it had better inform the hypervisor. Consequently, it first changes the page table completely, then issues a hypervisor call telling the hypervisor about the new page table. Thus, instead of a protection fault on every update to the page table, there is one hypercall when the whole thing has been updated, obviously a more efficient way to do business.
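A minimal sketch of this batching idea follows, with an invented hypercall (no real hypervisor's ABI is implied): the guest queues its page-table writes and hands them to the hypervisor in one call, so there is a single VM exit per batch rather than a protection fault per update.

    struct pt_update {
        unsigned long pte_addr;    /* guest-physical address of the entry */
        unsigned long new_value;   /* value the guest wants stored there  */
    };

    #define BATCH_MAX 64

    static struct pt_update batch[BATCH_MAX];
    static int batch_len;

    /* Hypothetical hypercall: the hypervisor validates each update and
     * fixes up its shadow (or nested) tables while applying it. */
    extern long hypercall_mmu_update(struct pt_update *updates, int count);

    static void flush_pte_updates(void)
    {
        if (batch_len > 0) {
            hypercall_mmu_update(batch, batch_len);  /* one exit, many updates */
            batch_len = 0;
        }
    }

    static void queue_pte_update(unsigned long pte_addr, unsigned long value)
    {
        batch[batch_len].pte_addr  = pte_addr;
        batch[batch_len].new_value = value;
        if (++batch_len == BATCH_MAX)
            flush_pte_updates();
    }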
Hardware Support for Nested Page Tables

The cost of handling shadow page tables led chip makers to add hardware support for nested page tables. Nested page tables is the term used by AMD; Intel refers to them as EPT (Extended Page Tables). They are similar and aim to remove most of the overhead by handling the additional page-table manipulation all in hardware, without any traps. Interestingly, the first virtualization extensions in Intel’s x86 hardware did not include support for memory virtualization at all. While these VT-extended processors removed many bottlenecks concerning CPU virtualization, poking around in page tables was as expensive as ever. It took a few years for AMD and Intel to produce the hardware to virtualize memory efficiently.

Recall that even without virtualization, the operating system maintains a mapping between the virtual pages and the physical pages. The hardware ‘‘walks’’ these page tables to find the physical address that corresponds to a virtual address. Adding more virtual machines simply adds an extra mapping. As an example, suppose we need to translate a virtual address of a Linux process running on a type 1 hypervisor like Xen or VMware ESX Server to a physical address. In addition to the guest virtual addresses, we now also have guest physical addresses and subsequently host physical addresses (sometimes referred to as machine physical addresses).

We have seen that without EPT, the hypervisor is responsible for maintaining the shadow page tables explicitly. With EPT, the hypervisor still has an additional set of page tables, but now the CPU is able to handle much of the intermediate level in hardware also. In our example, the hardware first walks the ‘‘regular’’ page tables to translate the guest virtual address to a guest physical address, just as it would do without virtualization. The difference is that it also walks the extended (or nested) page tables without software intervention to find the host physical address, and it needs to do this every time a guest physical address is accessed. The translation is illustrated in Fig. 7-7.

[Figure 7-7. Extended/nested page tables are walked every time a guest physical address is accessed—including the accesses for each level of the guest’s page tables. The 64-bit guest virtual address is split into level 1 through level 4 offsets plus a page offset, and every guest pointer used during the walk requires a lookup in the nested page tables.]

Unfortunately, the hardware may need to walk the nested page tables more frequently than you might think. Let us suppose that the guest virtual address was not cached and requires a full page-table lookup. Every level in the paging hierarchy incurs a lookup in the nested page tables. In other words, the number of memory references grows quadratically with the depth of the hierarchy. Even so, EPT dramatically reduces the number of VM exits. Hypervisors no longer need to map the guest’s page tables read only and can do away with shadow page-table handling. Better still, when switching virtual machines, the hypervisor just changes this mapping, the same way an operating system changes the mapping when switching processes.
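The two-dimensional walk can be sketched as follows (simplified, with invented helper names; real hardware does all of this in the memory-management unit, not in software). With four levels in each dimension, translating one guest virtual address can touch on the order of 24 memory locations: each of the four guest page-table levels costs a nested walk plus the guest entry itself, and the final guest physical address needs one more nested walk. This is why large TLBs and paging-structure caches matter so much with EPT.

    #define LEVELS 4                   /* four levels per dimension on x86-64 */

    typedef unsigned long gva_t;       /* guest virtual address   */
    typedef unsigned long gpa_t;       /* guest physical address  */
    typedef unsigned long hpa_t;       /* host physical address   */

    hpa_t nested_walk(gpa_t gpa);          /* walk the EPT/nested tables      */
    gpa_t read_guest_pte(hpa_t entry);     /* read one guest page-table entry */
    unsigned index_at(gva_t va, int lvl);  /* 9-bit table index for a level   */

    hpa_t translate(gva_t va, gpa_t guest_pt_root)
    {
        gpa_t table = guest_pt_root;

        for (int lvl = 0; lvl < LEVELS; lvl++) {
            /* The guest table itself lives at a guest-physical address, so
             * even finding it requires a full walk of the nested tables. */
            hpa_t table_hpa = nested_walk(table);
            /* Flag bits are stripped for simplicity; what remains is the
             * guest-physical address of the next level (or final frame). */
            table = read_guest_pte(table_hpa + index_at(va, lvl) * 8) & ~0xfffUL;
        }
        /* One final nested walk turns the guest frame into a host frame. */
        return nested_walk(table) + (va & 0xfff);
    }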
Reclaiming Memory

Having all these virtual machines on the same physical hardware, all with their own memory pages and all thinking they are the king of the mountain, is great—until we need the memory back. This is particularly important in the event of overcommitment of memory, where the hypervisor pretends that the total amount of memory for all virtual machines combined is more than the total amount of physical memory present on the system. In general, this is a good idea, because it allows the hypervisor to admit more and beefier virtual machines at the same time. For instance, on a machine with 32 GB of memory, it may run three virtual machines each thinking it has 16 GB of memory. Clearly, this does not fit. However, perhaps the three machines do not really need the maximum amount of physical memory at the same time. Or perhaps they share pages that have the same content (such as the Linux kernel) in different virtual machines, an optimization known as deduplication. In that case, the three virtual machines use a total amount of memory that is less than 3 times 16 GB.

We will discuss deduplication later; for the moment the point is that what looks like a good distribution now may be a poor distribution as the workloads change. Maybe virtual machine 1 needs more memory, while virtual machine 2 could do with fewer pages. In that case, it would be nice if the hypervisor could transfer resources from one virtual machine to another and make the system as a whole benefit. The question is, how can we take away memory pages safely if that memory is given to a virtual machine already?

In principle, we could use yet another level of paging. In case of memory shortage, the hypervisor would then page out some of the virtual machine’s pages, just as an operating system may page out some of an application’s pages. The drawback of this approach is that the hypervisor should do this, and the hypervisor has no clue about which pages are the most valuable to the guest. It is very likely to page out the wrong ones. Even if it does pick the right pages to swap (i.e., the pages that the guest OS would also have picked), there is still more trouble ahead.
For instance, suppose that the hypervisor pages out a page P. A little later, the guest OS also decides to page out this page to disk. Unfortunately, the hypervisor’s swap space and the guest’s swap space are not the same. In other words, the hypervisor must first page the contents back into memory, only to see the guest write it back out to disk immediately. Not very efficient.

A common solution is to use a trick known as ballooning, where a small balloon module is loaded in each VM as a pseudo device driver that talks to the hypervisor. The balloon module may inflate at the hypervisor’s request by allocating more and more pinned pages, and deflate by deallocating these pages. As the balloon inflates, memory scarcity in the guest increases. The guest operating system will respond by paging out what it believes are the least valuable pages—which is just what we wanted. Conversely, as the balloon deflates, more memory becomes available for the guest to allocate. In other words, the hypervisor tricks the operating system into making tough decisions for it. In politics, this is known as passing the buck (or the euro, pound, yen, etc.).

7.7 I/O VIRTUALIZATION

Having looked at CPU and memory virtualization, we next examine I/O virtualization. The guest operating system will typically start out probing the hardware to find out what kinds of I/O devices are attached. These probes will trap to the hypervisor. What should the hypervisor do? One approach is for it to report back that the disks, printers, and so on are the ones that the hardware actually has. The guest will then load device drivers for these devices and try to use them. When the device drivers try to do actual I/O, they will read and write the device’s hardware device registers. These instructions are sensitive and will trap to the hypervisor, which could then copy the needed values to and from the hardware registers, as needed.

But here, too, we have a problem. Each guest OS could think it owns an entire disk partition, and there may be many more virtual machines (hundreds) than there are actual disk partitions. The usual solution is for the hypervisor to create a file or region on the actual disk for each virtual machine’s physical disk. Since the guest OS is trying to control a disk that the real hardware has (and which the hypervisor understands), the hypervisor can convert the block number being accessed into an offset into the file or disk region being used for storage and do the I/O.

It is also possible for the disk that the guest is using to be different from the real one. For example, if the actual disk is some brand-new high-performance disk (or RAID) with a new interface, the hypervisor could advertise to the guest OS that it has a plain old IDE disk and let the guest OS install an IDE disk driver. When this driver issues IDE disk commands, the hypervisor converts them into commands to drive the new disk. This strategy can be used to upgrade the hardware without changing the software.
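A minimal sketch of the block-to-offset conversion (the helper names are invented; a real hypervisor also deals with caching, request queues, and virtual disk formats): each guest disk is backed by an ordinary file, and a trapped request for block number b simply becomes a read or write at offset b times the block size.

    #include <stdint.h>
    #include <unistd.h>

    #define BLOCK_SIZE 512

    struct virtual_disk {
        int backing_fd;        /* file (or disk region) backing this guest disk */
        uint64_t num_blocks;   /* size of the virtual disk, in blocks */
    };

    /* Invoked when a guest disk request traps into the hypervisor. */
    ssize_t vdisk_read(struct virtual_disk *vd, uint64_t block,
                       void *buf, uint64_t nblocks)
    {
        if (block + nblocks > vd->num_blocks)
            return -1;                              /* request out of range */
        return pread(vd->backing_fd, buf, nblocks * BLOCK_SIZE,
                     (off_t)(block * BLOCK_SIZE));
    }

    ssize_t vdisk_write(struct virtual_disk *vd, uint64_t block,
                        const void *buf, uint64_t nblocks)
    {
        if (block + nblocks > vd->num_blocks)
            return -1;
        return pwrite(vd->backing_fd, buf, nblocks * BLOCK_SIZE,
                      (off_t)(block * BLOCK_SIZE));
    }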
In fact, this ability of virtual machines to remap hardware devices was one of the reasons VM/370 became popular: companies wanted to buy new and faster hardware but did not want to change their software. Virtual machine technology made this possible.

Another interesting trend related to I/O is that the hypervisor can take the role of a virtual switch. In this case, each virtual machine has a MAC address and the hypervisor switches frames from one virtual machine to another—just like an Ethernet switch would do. Virtual switches have several advantages. For instance, it is very easy to reconfigure them. Also, it is possible to augment the switch with additional functionality, for instance for additional security.

I/O MMUs

Another I/O problem that must be solved somehow is the use of DMA, which uses absolute memory addresses. As might be expected, the hypervisor has to intervene here and remap the addresses before the DMA starts. However, hardware already exists with an I/O MMU, which virtualizes the I/O the same way the MMU virtualizes the memory. I/O MMUs exist in different forms and shapes for many processor architectures. Even if we limit ourselves to the x86, Intel and AMD have slightly different technology. Still, the idea is the same. This hardware eliminates the DMA problem.

Just like regular MMUs, the I/O MMU uses page tables to map a memory address that a device wants to use (the device address) to a physical address. In a virtual environment, the hypervisor can set up the page tables in such a way that a device performing DMA will not trample over memory that does not belong to the virtual machine on whose behalf it is working.

I/O MMUs offer different advantages when dealing with a device in a virtualized world. Device pass through allows the physical device to be directly assigned to a particular virtual machine. In general, it would be ideal if the device address space were exactly the same as the guest’s physical address space. However, this is unlikely—unless you have an I/O MMU. The MMU allows the addresses to be remapped transparently, and both the device and the virtual machine are blissfully unaware of the address translation that takes place under the hood.

Device isolation ensures that a device assigned to a virtual machine can directly access that virtual machine without jeopardizing the integrity of the other guests. In other words, the I/O MMU prevents rogue DMA traffic, just as a normal MMU prevents rogue memory accesses from processes—in both cases accesses to unmapped pages result in faults.
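In outline, the hypervisor programs the I/O MMU roughly as sketched below before handing a device to a guest (the function names are invented; real interfaces such as Intel VT-d or AMD-Vi differ in detail). Only the guest's own memory is mapped, so any DMA outside it faults instead of corrupting another virtual machine.

    typedef unsigned long dma_addr_t;   /* address as seen by the device */
    typedef unsigned long hpa_t;        /* host-physical address         */

    #define IOMMU_READ   0x1
    #define IOMMU_WRITE  0x2
    #define PAGE_SIZE    4096UL

    struct iommu_domain;                /* one protection domain per VM  */

    /* Invented interface: install one device-address -> host-physical
     * mapping in the domain's I/O page tables. */
    int iommu_map(struct iommu_domain *dom, dma_addr_t iova,
                  hpa_t hpa, unsigned long len, int perms);

    /* Before giving a guest direct use of a device, map the guest's own
     * pages (and nothing else) at the device addresses the guest will use;
     * here the device address is simply the guest-physical address. */
    int iommu_map_guest_memory(struct iommu_domain *dom,
                               const hpa_t *gpfn_to_host, unsigned long npages)
    {
        for (unsigned long gpfn = 0; gpfn < npages; gpfn++) {
            int err = iommu_map(dom, gpfn * PAGE_SIZE, gpfn_to_host[gpfn],
                                PAGE_SIZE, IOMMU_READ | IOMMU_WRITE);
            if (err)
                return err;    /* DMA to anything unmapped will now fault */
        }
        return 0;
    }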
DMA and addresses are not the whole I/O story, unfortunately. For completeness, we also need to virtualize interrupts, so that the interrupt generated by a device arrives at the right virtual machine, with the right interrupt number. Modern I/O MMUs therefore support interrupt remapping. Say a device sends a message signaled interrupt with number 1. This message first hits the I/O MMU, which will use the interrupt remapping table to translate it to a new message destined for the CPU that currently runs the virtual machine and with the vector number that the VM expects (e.g., 66).

Finally, having an I/O MMU also helps 32-bit devices access memory above 4 GB. Normally, such devices are unable to access (e.g., DMA to) addresses beyond 4 GB, but the I/O MMU can easily remap the device’s lower addresses to any address in the larger physical address space.

Device Domains

A different approach to handling I/O is to dedicate one of the virtual machines to run a standard operating system and reflect all I/O calls from the other ones to it. This approach is enhanced when paravirtualization is used, so the command being issued to the hypervisor actually says what the guest OS wants (e.g., read block 1403 from disk 1) rather than being a series of commands writing to device registers, in which case the hypervisor has to play Sherlock Holmes and figure out what it is trying to do. Xen uses this approach to I/O, with the virtual machine that does I/O called domain 0.

I/O virtualization is an area in which type 2 hypervisors have a practical advantage over type 1 hypervisors: the host operating system contains the device drivers for all the weird and wonderful I/O devices attached to the computer. When an application program attempts to access a strange I/O device, the translated code can call the existing device driver to get the work done. With a type 1 hypervisor, the hypervisor must either contain the driver itself, or make a call to a driver in domain 0, which is somewhat similar to a host operating system. As virtual machine technology matures, future hardware is likely to allow application programs to access the hardware directly in a secure way, meaning that device drivers can be linked directly with application code or put in separate user-mode servers (as in MINIX 3), thereby eliminating the problem.

Single Root I/O Virtualization

Directly assigning a device to a virtual machine is not very scalable. With four physical network cards you can support no more than four virtual machines that way. For eight virtual machines you need eight network cards, and to run 128 virtual machines—well, let’s just say that it may be hard to find your computer buried under all those network cables.

Sharing devices among multiple virtual machines in software is possible, but often not optimal, because an emulation layer (or device domain) interposes itself between the hardware and the drivers and the guest operating systems. The emulated device frequently does not implement all the advanced functions supported by the hardware. Ideally, the virtualization technology would offer the equivalent of device pass through of a single device to multiple virtual machines, without any overhead. Virtualizing a single device to trick every virtual machine into believing that it has
exclusive access to its own device is much easier if the hardware actually does the virtualization for you. On PCIe, this is known as single root I/O virtualization.

Single root I/O virtualization (SR-IOV) allows us to bypass the hypervisor’s involvement in the communication between the driver and the device. Devices that support SR-IOV provide independent memory space, interrupts, and DMA streams to each virtual machine that uses them (Intel, 2011). The device appears as multiple separate devices and each can be configured by a separate virtual machine. For instance, each will have a separate base address register and address space. A virtual machine maps one of these memory areas (used, for instance, to configure the device) into its address space.

SR-IOV provides access to the device in two flavors: PFs (Physical Functions) and VFs (Virtual Functions). PFs are full PCIe functions and allow the device to be configured in whatever way the administrator sees fit. Physical functions are not accessible to guest operating systems. VFs are lightweight PCIe functions that do not offer such configuration options. They are ideally suited for virtual machines. In summary, SR-IOV allows devices to be virtualized in (up to) hundreds of virtual functions that trick virtual machines into believing they are the sole owner of a device. For example, given an SR-IOV network interface, a virtual machine is able to handle its virtual network card just like a physical one. Better still, many modern network cards have separate (circular) buffers for sending and receiving data, dedicated to these virtual machines. For instance, the Intel I350 series of network cards has eight send and eight receive queues.

7.8 VIRTUAL APPLIANCES

Virtual machines offer an interesting solution to a problem that has long plagued users, especially users of open source software: how to install new application programs. The problem is that many applications are dependent on numerous other applications and libraries, which are themselves dependent on a host of other software packages, and so on. Furthermore, there may be dependencies on particular versions of the compilers, scripting languages, and the operating system.

With virtual machines now available, a software developer can carefully construct a virtual machine, load it with the required operating system, compilers, libraries, and application code, and freeze the entire unit, ready to run. This virtual machine image can then be put on a CD-ROM or a Website for customers to install or download. This approach means that only the software developer has to understand all the dependencies. The customers get a complete package that actually works, completely independent of which operating system they are running and which other software, packages, and libraries they have installed. These ‘‘shrink-wrapped’’ virtual machines are often called virtual appliances. As an example, Amazon’s EC2 cloud has many pre-packaged virtual appliances available for its clients, which it offers as convenient software services (‘‘Software as a Service’’).
7.9 VIRTUAL MACHINES ON MULTICORE CPUS

The combination of virtual machines and multicore CPUs creates a whole new world in which the number of CPUs available can be set by the software. If there are, say, four cores, and each can run, for example, up to eight virtual machines, a single (desktop) CPU can be configured as a 32-node multicomputer if need be, but it can also have fewer CPUs, depending on the software. Never before has it been possible for an application designer to first choose how many CPUs he wants and then write the software accordingly. This is clearly a new phase in computing.

Moreover, virtual machines can share memory. A typical example where this is useful is a single server hosting multiple instances of the same operating system. All that has to be done is map physical pages into the address spaces of multiple virtual machines. Memory sharing is already available in deduplication solutions. Deduplication does exactly what you think it does: it avoids storing the same data twice. It is a fairly common technique in storage systems, but is now appearing in virtualization as well. In Disco, it was known as transparent page sharing (which requires modification to the guest), while VMware calls it content-based page sharing (which does not require any modification). In general, the technique revolves around scanning the memory of each of the virtual machines on a host and hashing the memory pages. Should some pages produce an identical hash, the system has to first check to see if they really are the same, and if so, deduplicate them, creating one page with the actual content and two references to that page (a sketch of this scan appears at the end of this section). Since the hypervisor controls the nested (or shadow) page tables, this mapping is straightforward. Of course, when either of the guests modifies a shared page, the change should not be visible in the other virtual machine(s). The trick is to use copy on write so the modified page will be private to the writer.

If virtual machines can share memory, a single computer becomes a virtual multiprocessor. Since all the cores in a multicore chip share the same RAM, a single quad-core chip could easily be configured as a 32-node multiprocessor or a 32-node multicomputer, as needed.

The combination of multicore, virtual machines, hypervisors, and microkernels is going to radically change the way people think about computer systems. Current software cannot deal with the idea of the programmer determining how many CPUs are needed, whether they should be a multicomputer or a multiprocessor, and how minimal kernels of one kind or another fit into the picture. Future software will have to deal with these issues. If you are a computer science or engineering student or professional, you could be the one to sort out all this stuff. Go for it!
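Here is the promised sketch of the scanning approach, with invented helper names: pages are hashed; on a hash match the contents are compared byte for byte before the copies are merged into a single shared page that is mapped copy-on-write into every virtual machine that had it.

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* Invented helpers: a table of previously seen pages keyed by a hash
     * of their contents, and a primitive that merges two identical pages
     * into one shared, copy-on-write mapping. */
    uint64_t hash_page(const void *data);
    void *lookup_candidate(uint64_t hash);              /* NULL if none   */
    void  insert_candidate(uint64_t hash, void *page);
    void  share_pages_cow(void *keep, void *discard);   /* remap and free */

    void dedup_scan(void **pages, unsigned long npages)
    {
        for (unsigned long i = 0; i < npages; i++) {
            uint64_t h = hash_page(pages[i]);
            void *match = lookup_candidate(h);

            /* A matching hash is only a hint: compare the actual bytes
             * before merging, since hashes can collide. */
            if (match && memcmp(match, pages[i], PAGE_SIZE) == 0)
                share_pages_cow(match, pages[i]);
            else
                insert_candidate(h, pages[i]);
        }
    }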
7.10 LICENSING ISSUES

Some software is licensed on a per-CPU basis, especially software for companies. In other words, when they buy a program, they have the right to run it on just one CPU. What’s a CPU, anyway? Does this contract give them the right to run the software on multiple virtual machines all running on the same physical machine? Many software vendors are somewhat unsure of what to do here.

The problem is much worse in companies that have a license allowing them to have n machines running the software at the same time, especially when virtual machines come and go on demand.

In some cases, software vendors have put an explicit clause in the license forbidding the licensee from running the software on a virtual machine or on an unauthorized virtual machine. For companies that run all their software exclusively on virtual machines, this could be a real problem. Whether any of these restrictions will hold up in court and how users respond to them remains to be seen.

7.11 CLOUDS

Virtualization technology played a crucial role in the dizzying rise of cloud computing. There are many clouds. Some clouds are public and available to anyone willing to pay for the use of resources; others are private to an organization. Likewise, different clouds offer different things. Some give their users access to physical hardware, but most virtualize their environments. Some offer the bare machines, virtual or not, and nothing more, but others offer software that is ready to use and can be combined in interesting ways, or platforms that make it easy for their users to develop new services. Cloud providers typically offer different categories of resources, such as ‘‘big machines’’ versus ‘‘little machines,’’ etc.

For all the talk about clouds, few people seem really sure about what they are exactly. The National Institute of Standards and Technology, always a good source to fall back on, lists five essential characteristics:

1. On-demand self-service. Users should be able to provision resources automatically, without requiring human interaction.

2. Broad network access. All these resources should be available over the network via standard mechanisms so that heterogeneous devices can make use of them.

3. Resource pooling. The computing resources owned by the provider should be pooled to serve multiple users, with the ability to assign and reassign resources dynamically. The users generally do not even know the exact location of ‘‘their’’ resources or even which country they are located in.

4. Rapid elasticity. It should be possible to acquire and release resources elastically, perhaps even automatically, to scale immediately with the users’ demands.

5. Measured service. The cloud provider meters the resources used in a way that matches the type of service agreed upon.
7.11.1 Clouds as a Service

In this section, we will look at clouds with a focus on virtualization and operating systems. Specifically, we consider clouds that offer direct access to a virtual machine, which the user can use in any way he sees fit. Thus, the same cloud may run different operating systems, possibly on the same hardware. In cloud terms, this is known as IAAS (Infrastructure As A Service), as opposed to PAAS (Platform As A Service, which delivers an environment that includes things such as a specific OS, database, Web server, and so on), SAAS (Software As A Service, which offers access to specific software, such as Microsoft Office 365 or Google Apps), and many other types of as-a-service. One example of an IAAS cloud is Amazon EC2, which happens to be based on the Xen hypervisor and counts multiple hundreds of thousands of physical machines. Provided you have the cash, you can have as much computing power as you need.

Clouds can transform the way companies do computing. Overall, consolidating the computing resources in a small number of places (conveniently located near a power source and cheap cooling) benefits from economy of scale. Outsourcing your processing means that you need not worry so much about managing your IT infrastructure, backups, maintenance, depreciation, scalability, reliability, performance, and perhaps security. All of that is done in one place and, assuming the cloud provider is competent, done well. You would think that IT managers are happier today than ten years ago. However, as these worries disappeared, new ones emerged. Can you really trust your cloud provider to keep your sensitive data safe? Will a competitor running on the same infrastructure be able to infer information you wanted to keep private? What law(s) apply to your data (for instance, if the cloud provider is from the United States, is your data subject to the PATRIOT Act, even if your company is in Europe)? Once you store all your data in cloud X, will you be able to get them out again, or will you be tied to that cloud and its provider forever, something known as vendor lock-in?

7.11.2 Virtual Machine Migration

Virtualization technology not only allows IAAS clouds to run multiple different operating systems on the same hardware at the same time, it also permits clever management. We have already discussed the ability to overcommit resources, especially in combination with deduplication. Now we will look at another management issue: what if a machine needs servicing (or even replacement) while it is running lots of important machines? Probably, clients will not be happy if their systems go down because the cloud provider wants to replace a disk drive.

Hypervisors decouple the virtual machine from the physical hardware. In other words, it does not really matter to the virtual machine if it runs on this machine or that machine. Thus, the administrator could simply shut down all the virtual machines and restart them again on a shiny new machine. Doing so, however, results
in significant downtime. The challenge is to move the virtual machine from the hardware that needs servicing to the new machine without taking it down at all.

A slightly better approach might be to pause the virtual machine, rather than shut it down. During the pause, we copy over the memory pages used by the virtual machine to the new hardware as quickly as possible, configure things correctly in the new hypervisor, and then resume execution. Besides memory, we also need to transfer storage and network connectivity, but if the machines are close, this can be relatively fast. We could make the file system network-based to begin with (like NFS, the network file system), so that it does not matter whether the virtual machine is running on hardware in server rack 1 or 3. Likewise, the IP address can simply be switched to the new location. Nevertheless, we still need to pause the machine for a noticeable amount of time. Less time perhaps, but still noticeable.

Instead, what modern virtualization solutions offer is something known as live migration. In other words, they move the virtual machine while it is still operational. For instance, they employ techniques like pre-copy memory migration. This means that they copy memory pages while the machine is still serving requests. Most memory pages are not written much, so copying them over is safe. Remember, the virtual machine is still running, so a page may be modified after it has already been copied. When memory pages are modified, we have to make sure that the latest version is copied to the destination, so we mark them as dirty. They will be recopied later. When most memory pages have been copied, we are left with a small number of dirty pages. We now pause very briefly to copy the remaining pages and resume the virtual machine at the new location. While there is still a pause, it is so brief that applications typically are not affected. When the downtime is not noticeable, it is known as a seamless live migration.

7.11.3 Checkpointing

Decoupling of virtual machine and physical hardware has additional advantages. In particular, we mentioned that we can pause a machine. This in itself is useful. If the state of the paused machine (e.g., CPU state, memory pages, and storage state) is stored on disk, we have a snapshot of a running machine. If the software makes a royal mess of the still-running virtual machine, it is possible to just roll back to the snapshot and continue as if nothing happened.

The most straightforward way to make a snapshot is to copy everything, including the full file system. However, copying a multiterabyte disk may take a while, even if it is a fast disk. And again, we do not want to pause for long while we are doing it. The solution is to use copy-on-write techniques, so that data is copied only when absolutely necessary.

Snapshotting works quite well, but there are issues. What to do if a machine is interacting with a remote computer? We can snapshot the system and bring it up again at a later stage, but the communicating party may be long gone. Clearly, this is a problem that cannot be solved.
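Returning to the copy-on-write idea for a moment, here is a minimal sketch for the disk part of a snapshot (structures and names invented): after the snapshot, the original image is frozen and writes go to an overlay; reads come from the overlay if the block has been written since the snapshot, and fall through to the frozen image otherwise. Rolling back simply means discarding the overlay.

    #include <stdbool.h>
    #include <stdint.h>

    struct cow_disk {
        struct disk *frozen;     /* read-only image taken at snapshot time */
        struct disk *overlay;    /* receives all writes after the snapshot */
        bool *in_overlay;        /* one flag per block                     */
    };

    /* Invented block-device primitives. */
    void disk_read(struct disk *d, uint64_t block, void *buf);
    void disk_write(struct disk *d, uint64_t block, const void *buf);

    void cow_write(struct cow_disk *cd, uint64_t block, const void *buf)
    {
        disk_write(cd->overlay, block, buf);    /* never touch the snapshot */
        cd->in_overlay[block] = true;
    }

    void cow_read(struct cow_disk *cd, uint64_t block, void *buf)
    {
        if (cd->in_overlay[block])
            disk_read(cd->overlay, block, buf);
        else
            disk_read(cd->frozen, block, buf);  /* unmodified since snapshot */
    }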
7.12 CASE STUDY: VMWARE

Since 1999, VMware, Inc. has been the leading commercial provider of virtualization solutions with products for desktops, servers, the cloud, and now even cell phones. It provides not only hypervisors but also the software that manages virtual machines on a large scale.

We will start this case study with a brief history of how the company got started. We will then describe VMware Workstation, a type 2 hypervisor and the company’s first product, the challenges in its design, and the key elements of the solution. We then describe the evolution of VMware Workstation over the years. We conclude with a description of ESX Server, VMware’s type 1 hypervisor.

7.12.1 The Early History of VMware

Although the idea of using virtual machines was popular in the 1960s and 1970s in both the computing industry and academic research, interest in virtualization was totally lost after the 1980s and the rise of the personal computer industry. Only IBM’s mainframe division still cared about virtualization. Indeed, the computer architectures designed at the time, and in particular Intel’s x86 architecture, did not provide architectural support for virtualization (i.e., they failed the Popek/Goldberg criteria). This is extremely unfortunate, since the 386 CPU, a complete redesign of the 286, was done a decade after the Popek-Goldberg paper, and the designers should have known better.

In 1997, at Stanford, three of the future founders of VMware had built a prototype hypervisor called Disco (Bugnion et al., 1997), with the goal of running commodity operating systems (in particular UNIX) on a very large-scale multiprocessor then being developed at Stanford: the FLASH machine. During that project, the authors realized that using virtual machines could solve, simply and elegantly, a number of hard system software problems: rather than trying to solve these problems within existing operating systems, one could innovate in a layer below existing operating systems. The key observation of Disco was that, while the high complexity of modern operating systems made innovation difficult, the relative simplicity of a virtual machine monitor and its position in the software stack provided a powerful foothold to address limitations of operating systems. Although Disco was aimed at very large servers, and designed for the MIPS architecture, the authors realized that the same approach could equally apply, and be commercially relevant, for the x86 marketplace.

And so, VMware, Inc. was founded in 1998 with the goal of bringing virtualization to the x86 architecture and the personal computer industry. VMware’s first product (VMware Workstation) was the first virtualization solution available for 32-bit x86-based platforms. The product was first released in 1999, and came in two variants: VMware Workstation for Linux, a type 2 hypervisor that ran on top of Linux host operating systems, and VMware Workstation for Windows, which
similarly ran on top of Windows NT. Both variants had identical functionality: users could create multiple virtual machines by first specifying the characteristics of the virtual hardware (such as how much memory to give the virtual machine, or the size of the virtual disk) and could then install the operating system of their choice within the virtual machine, typically from the (virtual) CD-ROM.

VMware Workstation was largely aimed at developers and IT professionals. Before the introduction of virtualization, a developer routinely had two computers on his desk, a stable one for development and a second one where he could reinstall the system software as needed. With virtualization, the second test system became a virtual machine.

Soon, VMware started developing a second and more complex product, which would be released as ESX Server in 2001. ESX Server leveraged the same virtualization engine as VMware Workstation, but packaged it as part of a type 1 hypervisor. In other words, ESX Server ran directly on the hardware without requiring a host operating system. The ESX hypervisor was designed for intense workload consolidation and contained many optimizations to ensure that all resources (CPU, memory, and I/O) were efficiently and fairly allocated among the virtual machines. For example, it was the first to introduce the concept of ballooning to rebalance memory between virtual machines (Waldspurger, 2002).

ESX Server was aimed at the server consolidation market. Before the introduction of virtualization, IT administrators would typically buy, install, and configure a new server for every new task or application that they had to run in the data center. The result was that the infrastructure was very inefficiently utilized: servers at the time were typically used at 10% of their capacity (during peaks). With ESX Server, IT administrators could consolidate many independent virtual machines into a single server, saving time, money, rack space, and electrical power.

In 2002, VMware introduced its first management solution for ESX Server, originally called Virtual Center, and today called vSphere. It provided a single point of management for a cluster of servers running virtual machines: an IT administrator could now simply log into the Virtual Center application and control, monitor, or provision thousands of virtual machines running throughout the enterprise. With Virtual Center came another critical innovation, VMotion (Nelson et al., 2005), which allowed the live migration of a running virtual machine over the network. For the first time, an IT administrator could move a running computer from one location to another without having to reboot the operating system, restart applications, or even lose network connections.

7.12.2 VMware Workstation

VMware Workstation was the first virtualization product for 32-bit x86 computers. The subsequent adoption of virtualization had a profound impact on the industry and on the computer science community: in 2009, the ACM awarded its
authors the ACM Software System Award for VMware Workstation 1.0 for Linux. The original VMware Workstation is described in a detailed technical article (Bugnion et al., 2012). Here we provide a summary of that paper.

The idea was that a virtualization layer could be useful on commodity platforms built from x86 CPUs and primarily running the Microsoft Windows operating systems (a.k.a. the WinTel platform). The benefits of virtualization could help address some of the known limitations of the WinTel platform, such as application interoperability, operating system migration, reliability, and security. In addition, virtualization could easily enable the coexistence of operating system alternatives, in particular, Linux.

Although there existed decades’ worth of research and commercial development of virtualization technology on mainframes, the x86 computing environment was sufficiently different that new approaches were necessary. For example, mainframes were vertically integrated, meaning that a single vendor engineered the hardware, the hypervisor, the operating systems, and most of the applications.

In contrast, the x86 industry was (and still is) disaggregated into at least four different categories: (a) Intel and AMD make the processors; (b) Microsoft offers Windows and the open source community offers Linux; (c) a third group of companies builds the I/O devices and peripherals and their corresponding device drivers; and (d) a fourth group of system integrators such as HP and Dell put together computer systems for retail sale. For the x86 platform, virtualization would first need to be inserted without the support of any of these industry players.

Because this disaggregation was a fact of life, VMware Workstation differed from classic virtual machine monitors that were designed as part of single-vendor architectures with explicit support for virtualization. Instead, VMware Workstation was designed for the x86 architecture and the industry built around it. VMware Workstation addressed these new challenges by combining well-known virtualization techniques, techniques from other domains, and new techniques into a single solution.

We now discuss the specific technical challenges in building VMware Workstation.

7.12.3 Challenges in Bringing Virtualization to the x86

Recall our definition of hypervisors and virtual machines: hypervisors apply the well-known principle of adding a level of indirection to the domain of computer hardware. They provide the abstraction of virtual machines: multiple copies of the underlying hardware, each running an independent operating system instance. The virtual machines are isolated from other virtual machines, each appears as a duplicate of the underlying hardware, and ideally each runs with the same speed as the real machine. VMware adapted these core attributes of a virtual machine to an x86-based target platform as follows:
1. Compatibility. The notion of an ‘‘essentially identical environment’’ meant that any x86 operating system, and all of its applications, would be able to run without modifications as a virtual machine. A hypervisor needed to provide sufficient compatibility at the hardware level such that users could run whichever operating system (down to the update and patch version) they wished to install within a particular virtual machine, without restrictions.

2. Performance. The overhead of the hypervisor had to be sufficiently low that users could use a virtual machine as their primary work environment. As a goal, the designers of VMware aimed to run relevant workloads at near native speeds, and in the worst case to run them on then-current processors with the same performance as if they were running natively on the immediately prior generation of processors. This was based on the observation that most x86 software was not designed to run only on the latest generation of CPUs.

3. Isolation. A hypervisor had to guarantee the isolation of the virtual machine without making any assumptions about the software running inside. That is, a hypervisor needed to be in complete control of resources. Software running inside virtual machines had to be prevented from any access that would allow it to subvert the hypervisor. Similarly, a hypervisor had to ensure the privacy of all data not belonging to the virtual machine. A hypervisor had to assume that the guest operating system could be infected with unknown, malicious code (a much bigger concern today than during the mainframe era).

There was an inevitable tension between these three requirements. For example, total compatibility in certain areas might lead to a prohibitive impact on performance, in which case VMware’s designers had to compromise. However, they ruled out any trade-offs that might compromise isolation or expose the hypervisor to attacks by a malicious guest. Overall, four major challenges emerged:

1. The x86 architecture was not virtualizable. It contained virtualization-sensitive, nonprivileged instructions, which violated the Popek and Goldberg criteria for strict virtualization. For example, the POPF instruction has different (yet nontrapping) semantics depending on whether the currently running software is allowed to disable interrupts or not. This ruled out the traditional trap-and-emulate approach to virtualization. Even engineers from Intel Corporation were convinced their processors could not be virtualized in any practical sense.

2. The x86 architecture was of daunting complexity. The x86 architecture was a notoriously complicated CISC architecture, including
legacy support for multiple decades of backward compatibility. Over the years, it had introduced four main modes of operation (real, protected, v8086, and system management), each of which enabled in different ways the hardware’s segmentation model, paging mechanisms, protection rings, and security features (such as call gates).

3. x86 machines had diverse peripherals. Although there were only two major x86 processor vendors, the personal computers of the time could contain an enormous variety of add-in cards and devices, each with their own vendor-specific device drivers. Virtualizing all these peripherals was infeasible. This had dual implications: it applied to both the front end (the virtual hardware exposed in the virtual machines) and the back end (the real hardware that the hypervisor needed to be able to control) of peripherals.

4. Need for a simple user experience. Classic hypervisors were installed in the factory, similar to the firmware found in today’s computers. Since VMware was a startup, its users would have to add the hypervisors to existing systems after the fact. VMware needed a software delivery model with a simple installation experience to encourage adoption.

7.12.4 VMware Workstation: Solution Overview

This section describes at a high level how VMware Workstation addressed the challenges mentioned in the previous section.

VMware Workstation is a type 2 hypervisor that consists of distinct modules. One important module is the VMM, which is responsible for executing the virtual machine’s instructions. A second important module is the VMX, which interacts with the host operating system.

The section covers first how the VMM solves the nonvirtualizability of the x86 architecture. Then, we describe the operating system-centric strategy used by the designers throughout the development phase. After that, we describe the design of the virtual hardware platform, which addresses one-half of the peripheral diversity challenge. Finally, we discuss the role of the host operating system in VMware Workstation, and in particular the interaction between the VMM and VMX components.

Virtualizing the x86 Architecture

The VMM runs the actual virtual machine; it enables it to make forward progress. A VMM built for a virtualizable architecture uses a technique known as trap-and-emulate to execute the virtual machine’s instruction sequence directly, but
safely, on the hardware. When this is not possible, one approach is to specify a virtualizable subset of the processor architecture and port the guest operating systems to that newly defined platform. This technique is known as paravirtualization (Barham et al., 2003; Whitaker et al., 2002) and requires source-code-level modifications of the operating system. Put bluntly, paravirtualization modifies the guest to avoid doing anything that the hypervisor cannot handle. Paravirtualization was infeasible at VMware because of the compatibility requirement and the need to run operating systems whose source code was not available, in particular Windows.

An alternative would have been to employ an all-emulation approach. In this, the instructions of the virtual machines are emulated by the VMM on the hardware (rather than directly executed). This can be quite efficient; prior experience with the SimOS (Rosenblum et al., 1997) machine simulator showed that the use of techniques such as dynamic binary translation running in a user-level program could limit the overhead of complete emulation to a factor-of-five slowdown. Although this is quite efficient, and certainly useful for simulation purposes, a factor-of-five slowdown was clearly inadequate and would not meet the desired performance requirements.

The solution to this problem combined two key insights. First, although trap-and-emulate direct execution could not be used to virtualize the entire x86 architecture all the time, it could actually be used some of the time. In particular, it could be used during the execution of application programs, which accounted for most of the execution time on relevant workloads. The reason is that these virtualization-sensitive instructions are not sensitive all the time; rather, they are sensitive only in certain circumstances. For example, the POPF instruction is virtualization-sensitive when the software is expected to be able to disable interrupts (e.g., when running the operating system), but is not virtualization-sensitive when software cannot disable interrupts (in practice, when running nearly all user-level applications).

Figure 7-8 shows the modular building blocks of the original VMware VMM. We see that it consists of a direct-execution subsystem, a binary translation subsystem, and a decision algorithm to determine which subsystem should be used. Both subsystems rely on some shared modules, for example to virtualize memory through shadow page tables, or to emulate I/O devices.

[Figure 7-8. High-level components of the VMware virtual machine monitor (in the absence of hardware support): a direct-execution subsystem, a binary translation subsystem, a decision algorithm that chooses between them, and shared modules (shadow MMU, I/O handling, etc.).]

The direct-execution subsystem is preferred, and the dynamic binary translation subsystem provides a fallback mechanism whenever direct execution is not possible. This is the case, for example, whenever the virtual machine is in such a state that it could issue a virtualization-sensitive instruction. Therefore, each subsystem constantly reevaluates the decision algorithm to determine whether a switch of subsystems is possible (from binary translation to direct execution) or necessary (from direct execution to binary translation). This algorithm has a number of input parameters, such as the current execution ring of the virtual machine, whether interrupts can be enabled at that level, and the state of the segments. For example, binary translation must be used if any of the following is true:
504 VIRTUALIZATION AND THE CLOUD CHAP. 7 VMM Shared modules (shadow MMU, I/O handling, …) Direct Execution Binary translation Decision Alg. Figure 7-8. High-level components of the VMware virtual machine monitor (in the absence of hardware support). 1. The virtual machine is currently running in kernel mode (ring 0 in the x86 architecture). 2. The virtual machine can disable interrupts and issue I/O instructions (in the x86 architecture, when the I/O privilege level is set to the ring level). 3. The virtual machine is currently running in real mode, a legacy 16-bit execution mode used by the BIOS among other things. The actual decision algorithm contains a few additional conditions. The details can be found in Bugnion et al. (2012). Interestingly, the algorithm does not depend on the instructions that are stored in memory and may be executed, but only on the value of a few virtual registers; therefore it can be evaluated very efficiently in just a handful of instructions. The second key insight was that by properly configuring the hardware, particu- larly using the x86 segment protection mechanisms carefully, system code under dynamic binary translation could also run at near-native speeds. This is very dif- ferent than the factor-of-fiv e slowdown normally expected of machine simulators. The difference can be explained by comparing how a dynamic binary translator converts a simple instruction that accesses memory. To emulate such an instruction in software, a classic binary translator emulating the full x86 instruction-set archi- tecture would have to first verify whether the effective address is within the range of the data segment, then convert the address into a physical address, and finally to copy the referenced word into the simulated register. Of course, these various steps can be optimized through caching, in a way very similar to how the processor cached page-table mappings in a translation-lookaside buffer. But even such opti- mizations would lead to an expansion of individual instructions into an instruction sequence. The VMware binary translator performs none of these steps in software. In- stead, it configures the hardware so that this simple instruction can be reissued
SEC. 7.12 CASE STUDY: VMWARE 505 with the identical instruction. This is possible only because the VMware VMM (of which the binary translator is a component) has previously configured the hard- ware to match the exact specification of the virtual machine: (a) the VMM uses shadow page tables, which ensures that the memory management unit can be used directly (rather than emulated) and (b) the VMM uses a similar shadowing ap- proach to the segment descriptor tables (which played a big role in the 16-bit and 32-bit software running on older x86 operating systems). There are, of course, complications and subtleties. One important aspect of the design is to ensure the integrity of the virtualization sandbox, that is, to ensure that no software running inside the virtual machine (including malicious software) can tamper with the VMM. This problem is generally known as software fault isola- tion and adds run-time overhead to each memory access if the solution is imple- mented in software. Here also, the VMware VMM uses a different, hardware-based approach. It splits the address space into two disjoint zones. The VMM reserves for its own use the top 4 MB of the address space. This frees up the rest (that is, 4 GB −4 MB, since we are talking about a 32-bit architecture) for the use by the vir- tual machine. The VMM then configures the segmentation hardware so that no vir- tual machine instructions (including ones generated by the binary translator) can ev er access the top 4-MB region of the address space. A Guest Operating System Centric Strategy Ideally, a VMM should be designed without worrying about the guest operat- ing system running in the virtual machine, or how that guest operating system con- figures the hardware. The idea behind virtualization is to make the virtual machine interface identical to the hardware interface so that all software that runs on the hardware will also run in a virtual machine. Unfortunately, this approach is practi- cal only when the architecture is virtualizeable and simple. In the case of x86, the overwhelming complexity of the architecture was clearly a problem. The VMware engineers simplified the problem by focusing only on a selection of supported guest operating systems. In its first release, VMware Workstation sup- ported officially only Linux, Windows 3.1, Windows 95/98 and Windows NT as guest operating systems. Over the years, new operating systems were added to the list with each revision of the software. Nevertheless, the emulation was good enough that it ran some unexpected operating systems, such as MINIX 3, perfectly, right out of the box. This simplification did not change the overall design—the VMM still provided a faithful copy of the underlying hardware, but it helped guide the development process. In particular, engineers had to worry only about combinations of features that were used in practice by the supported guest operating systems. For example, the x86 architecture contains four privilege rings in protected mode (ring 0 to ring 3) but no operating system uses ring 1 or ring 2 in practice (save for OS/2, a long-dead operating system from IBM). So rather than figure out
506 VIRTUALIZATION AND THE CLOUD CHAP. 7 how to correctly virtualize ring 1 and ring 2, the VMware VMM simply had code to detect if a guest was trying to enter into ring 1 or ring 2, and, in that case, would abort execution of the virtual machine. This not only removed unnecessary code, but more importantly it allowed the VMware VMM to assume that ring 1 and ring 2 would never be used by the virtual machine, and therefore that it could use these rings for its own purposes. In fact, the VMware VMM’s binary translator runs at ring 1 to virtualize ring 0 code. The Virtual Hardware Platform So far, we hav e primarily discussed the problem associated with the virtu- alization of the x86 processor. But an x86-based computer is much more than its processor. It also has a chipset, some firmware, and a set of I/O peripherals to con- trol disks, network cards, CD-ROM, keyboard, etc. The diversity of I/O peripherals in x86 personal computers made it impossible to match the virtual hardware to the real, underlying hardware. Whereas there were only a handful of x86 processor models in the market, with only minor variations in instruction-set level capabilities, there were thousands of I/O devices, most of which had no publicly available documentation of their interface or functionality. VMware’s key insight was to not attempt to have the virtual hardware match the specific underlying hardware, but instead have it always match some configuration composed of selected, canonical I/O devices. Guest operating systems then used their own existing, built-in mechanisms to detect and operate these (virtual) de- vices. The virtualization platform consisted of a combination of multiplexed and emulated components. Multiplexing meant configuring the hardware so it can be directly used by the virtual machine, and shared (in space or time) across multiple virtual machines. Emulation meant exporting a software simulation of the selected, canonical hardware component to the virtual machine. Figure 7-9 shows that VMware Workstation used multiplexing for processor and memory and emulation for everything else. For the multiplexed hardware, each virtual machine had the illusion of having one dedicated CPU and a configurable, but a fixed amount of contiguous RAM starting at physical address 0. Architecturally, the emulation of each virtual device was split between a front- end component, which was visible to the virtual machine, and a back-end compo- nent, which interacted with the host operating system (Waldspurger and Rosen- blum, 2012). The front-end was essentially a software model of the hardware de- vice that could be controlled by unmodified device drivers running inside the virtu- al machine. Regardless of the specific corresponding physical hardware on the host, the front end always exposed the same device model. For example, the first Ethernet device front end was the AMD PCnet ‘‘Lance’’ chip, once a popular 10-Mbps plug-in board on PCs, and the back end provided
network connectivity to the host's physical network. Ironically, VMware kept supporting the PCnet device long after physical Lance boards were no longer available, and actually achieved I/O that was orders of magnitude faster than 10 Mbps (Sugerman et al., 2001). For storage devices, the original front ends were an IDE controller and a Buslogic controller, and the back end was typically either a file in the host file system, such as a virtual disk or an ISO 9660 image, or a raw resource such as a drive partition or the physical CD-ROM.

Multiplexed virtual hardware (front end), with its back end:
- 1 virtual x86 CPU, with the same instruction-set extensions as the underlying hardware CPU: scheduled by the host operating system on either a uniprocessor or multiprocessor host
- Up to 512 MB of contiguous DRAM: allocated and managed by the host OS (page by page)

Emulated virtual hardware (front end), with its back end:
- PCI bus: fully emulated, compliant PCI bus
- 4x IDE disks and 7x Buslogic SCSI disks: virtual disks (stored as files) or direct access to a given raw device
- 1x IDE CD-ROM: ISO image or emulated access to the real CD-ROM
- 2x 1.44-MB floppy drives: physical floppy or floppy image
- 1x VMware graphics card with VGA and SVGA support: ran in a window and in full-screen mode; SVGA required the VMware SVGA guest driver
- 2x serial ports COM1 and COM2: connect to a host serial port or a file
- 1x printer (LPT): can connect to the host LPT port
- 1x keyboard (104-key): fully emulated; keycode events are generated when they are received by the VMware application
- 1x PS/2 mouse: same as keyboard
- 3x AMD Lance Ethernet cards: bridge mode and host-only modes
- 1x Soundblaster: fully emulated

Figure 7-9. Virtual hardware configuration options of the early VMware Workstation, ca. 2000.

Splitting front ends from back ends had another benefit: a VMware virtual machine could be copied from one computer to another, possibly one with different hardware devices. Yet the virtual machine would not have to install new device drivers, since it interacted only with the front-end component. This attribute, called hardware-independent encapsulation, has a huge benefit today in server environments and in cloud computing. It enabled subsequent innovations such as suspend/resume, checkpointing, and the transparent migration of live virtual machines
508 VIRTUALIZATION AND THE CLOUD CHAP. 7 across physical boundaries (Nelson et al., 2005). In the cloud, it allows customers to deploy their virtual machines on any available server, without having to worry of the details of the underlying hardware. The Role of the Host Operating System The final critical design decision in VMware Workstation was to deploy it ‘‘on top’’ of an existing operating system. This classifies it as a type 2 hypervisor. The choice had two main benefits. First, it would address the second part of peripheral diversity challenge. VMware implemented the front-end emulation of the various devices, but relied on the device drivers of the host operating system for the back end. For example, VMware Workstation would read or write a file in the host file system to emulate a virtual disk device, or draw in a window of the host’s desktop to emulate a video card. As long as the host operating system had the appropriate drivers, VMware Workstation could run virtual machines on top of it. Second, the product could install and feel like a normal application to a user, making adoption easier. Like any application, the VMware Workstation installer simply writes its component files onto an existing host file system, without per- turbing the hardware configuration (no reformatting of a disk, creating of a disk partition, or changing of BIOS settings). In fact, VMware Workstation could be in- stalled and start running virtual machines without requiring even rebooting the host operating system, at least on Linux hosts. However, a normal application does not have the necessary hooks and APIs necessary for a hypervisor to multiplex the CPU and memory resources, which is essential to provide near-native performance. In particular, the core x86 virtu- alization technology described above works only when the VMM runs in kernel mode and can furthermore control all aspects of the processor without any restric- tions. This includes the ability to change the address space (to create shadow page tables), to change the segment tables, and to change all interrupt and exception handlers. A device driver has more direct access to the hardware, in particular if it runs in kernel mode. Although it could (in theory) issue any privileged instructions, in practice a device driver is expected to interact with its operating system using well-defined APIs, and does not (and should never) arbitrarily reconfigure the hardware. And since hypervisors call for a massive reconfiguration of the hardware (including the entire address space, segment tables, exception and interrupt hand- lers), running the hypervisor as a device driver was also not a realistic option. Since none of these assumptions are supported by host operating systems, run- ning the hypervisor as a device driver (in kernel mode) was also not an option. These stringent requirements led to the development of the VMware Hosted Architecture. In it, as shown in Fig. 7-10, the software is broken into three sepa- rate and distinct components.
SEC. 7.12 CASE STUDY: VMWARE 509 CPU VMM Context Host OS Context Kernel mode User mode Disk int handler int handler IDTR Any Proc. Host OS write() fs scsi VMM Driver world switch VMM VMX Virtual Machine (i) (ii) (iii) (iv) (v) Figure 7-10. The VMware Hosted Architecture and its three components: VMX, VMM driver and VMM. These components each have different functions and operate independently from one another: 1. A user-space program (the VMX) which the user perceives to be the VMware program. The VMX performs all UI functions, starts the vir- tual machine, and then performs most of the device emulation (front end), and makes regular system calls to the host operating system for the back end interactions. There is typically one multithreaded VMX process per virtual machine. 2. A small kernel-mode device driver (the VMX driver), which gets in- stalled within the host operating system. It is used primarily to allow the VMM to run by temporarily suspending the entire host operating system. There is one VMX driver installed in the host operating sys- tem, typically at boot time. 3. The VMM, which includes all the software necessary to multiplex the CPU and the memory, including the exception handlers, the trap-and- emulate handlers, the binary translator, and the shadow paging mod- ule. The VMM runs in kernel mode, but it does not run in the context of the host operating system. In other words, it cannot rely directly on services offered by the host operating system, but it is also not con- strained by any rules or conventions imposed by the host operating system. There is one VMM instance for each virtual machine, created when the virtual machine starts.
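To make the control flow among the three components concrete, here is a minimal sketch of how a VMX-like user-space process might drive a virtual machine through a kernel-mode driver. It is an assumption-laden illustration only: the device node /dev/vmmdrv, the VMM_RUN ioctl, the exit codes, and the run_state structure are all invented for this sketch and are not VMware's actual interfaces.

```c
/*
 * Minimal sketch of a VMX-style user-space loop driving a VM through a
 * kernel-mode driver.  The device node, ioctl number, exit codes, and
 * run_state layout are invented for illustration; they are not VMware's
 * actual interfaces.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define VMM_RUN   _IO('v', 1)   /* ask the driver to world-switch to the VMM */
#define EXIT_IO   1             /* VMM needs user-space device emulation     */
#define EXIT_HALT 2             /* guest halted; tear the VM down            */

struct run_state {
    int exit_reason;
    unsigned short port;        /* I/O port touched by the guest */
    unsigned int value;         /* value written, if any         */
};

int main(void)
{
    int vmm = open("/dev/vmmdrv", O_RDWR);        /* hypothetical device node */
    if (vmm < 0) { perror("open"); return 1; }

    struct run_state rs;
    for (;;) {
        /* The driver suspends the host OS context, switches worlds to the
         * VMM, and returns only when the VMM needs a host service. */
        if (ioctl(vmm, VMM_RUN, &rs) < 0) { perror("ioctl"); break; }

        if (rs.exit_reason == EXIT_IO) {
            /* Front-end device emulation happens here, in user space. */
            printf("emulate I/O: port 0x%x value 0x%x\n", rs.port, rs.value);
        } else if (rs.exit_reason == EXIT_HALT) {
            close(vmm);
            return 0;
        }
    }
    close(vmm);
    return 1;
}
```

The point of the sketch is only the division of labor: the user-space process asks the kernel driver to enter the VMM, and regains control whenever the VMM needs a host service such as back-end device emulation.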
510 VIRTUALIZATION AND THE CLOUD CHAP. 7 VMware Workstation appears to run on top of an existing operating system, and, in fact, its VMX does run as a process of that operating system. However, the VMM operates at system level, in full control of the hardware, and without de- pending on any way on the host operating system. Figure 7-10 shows the relation- ship between the entities: the two contexts (host operating system and VMM) are peers to each other, and each has a user-level and a kernel component. When the VMM runs (the right half of the figure), it reconfigures the hardware, handles all I/O interrupts and exceptions, and can therefore safely temporarily remove the host operating system from its virtual memory. For example, the location of the inter- rupt table is set within the VMM by assigning the IDTR register to a new address. Conversely, when the host operating system runs (the left half of the figure), the VMM and its virtual machine are equally removed from its virtual memory. This transition between these two totally independent system-level contexts is called a world switch. The name itself emphasizes that everything about the soft- ware changes during a world switch, in contrast with the regular context switch im- plemented by an operating system. Figure 7-11 shows the difference between the two. The regular context switch between processes ‘‘A’’ and ‘‘B’’ swaps the user portion of the address space and the registers of the two processes, but leaves a number of critical system resources unmodified. For example, the kernel portion of the address space is identical for all processes, and the exception handlers are also not modified. In contrast, the world switch changes everything: the entire address space, all exception handlers, privileged registers, etc. In particular, the kernel ad- dress space of the host operating system is mapped only when running in the host operating system context. After the world switch into the VMM context, it has been removed from the address space altogether, freeing space to run both the VMM and the virtual machine. Although this sounds complicated, this can be im- plemented quite efficiently and takes only 45 x86 machine-language instructions to execute. Host OS Context Process A Process B Normal Context Switch VMware World Switch VMM Context Linear Address space VMM A (user-space) Kernel Address space B (user-space) VMX (user-space) Kernel Address space Kernel Address space (host OS) Virtual Machine Figure 7-11. Difference between a normal context switch and a world switch.
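A code-shaped way to see the difference is to list the state each kind of switch must swap. The sketch below is only an illustration of the idea in Fig. 7-11 under stated assumptions; the structure layout and helper functions are hypothetical stand-ins for what would really be hand-written assembly, so it compiles but is not meant to link or run as-is.

```c
/*
 * Illustrative sketch (not VMware code) of the state a world switch swaps,
 * compared with an ordinary context switch.  The helpers are hypothetical
 * placeholders for low-level assembly.
 */
struct desc_ptr { unsigned short limit; unsigned long base; };  /* table base/limit pair */

struct system_context {
    unsigned long cr3;          /* page-table root: the whole address space */
    struct desc_ptr gdtr;       /* segment descriptor table                 */
    struct desc_ptr idtr;       /* interrupt and exception handler table    */
    unsigned long gp_regs[16];  /* general-purpose registers                */
};

/* Hypothetical low-level helpers. */
void save_current_context(struct system_context *ctx);
void load_idtr(const struct desc_ptr *p);
void load_gdtr(const struct desc_ptr *p);
void write_cr3(unsigned long cr3);
void restore_gp_registers(const unsigned long *regs);

/*
 * A normal context switch changes gp_regs and the user part of the address
 * space only; kernel mappings and the exception handlers stay put.  A world
 * switch swaps everything in system_context, so afterward the host OS is not
 * mapped at all.
 */
void world_switch(struct system_context *save, const struct system_context *load)
{
    save_current_context(save);
    load_idtr(&load->idtr);      /* interrupts and exceptions now go to the VMM */
    load_gdtr(&load->gdtr);
    write_cr3(load->cr3);        /* replace the entire address space             */
    restore_gp_registers(load->gp_regs);
}
```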
SEC. 7.12 CASE STUDY: VMWARE 511 The careful reader will have wondered: what of the guest operating system’s kernel address space? The answer is simply that it is part of the virtual machine ad- dress space, and is present when running in the VMM context. Therefore, the guest operating system can use the entire address space, and in particular the same loca- tions in virtual memory as the host operating system. This is very specifically what happens when the host and guest operating systems are the same (e.g., both are Linux). Of course, this all ‘‘just works’’ because of the two independent contexts and the world switch between the two. The same reader will then wonder: what of the VMM area, at the very top of the address space? As we discussed above, it is reserved for the VMM itself, and those portions of the address space cannot be directly used by the virtual machine. Luckily, that small 4-MB portion is not frequently used by the guest operating sys- tems since each access to that portion of memory must be individually emulated and induces noticeable software overhead. Going back to Fig. 7-10: it further illustrates the various steps that occur when a disk interrupt happens while the VMM is executing (step i). Of course, the VMM cannot handle the interrupt since it does not have the back-end device driver. In (ii), the VMM does a world switch back to the host operating system. Specifically, the world-switch code returns control to the VMware driver, which in (iii) emulates the same interrupt that was issued by the disk. So in step (iv), the interrupt handler of the host operating system runs through its logic, as if the disk interrupt had oc- curred while the VMware driver (but not the VMM!) was running. Finally, in step (v), the VMware driver returns control to the VMX application. At this point, the host operating system may choose to schedule another process, or keep running the VMware VMX process. If the VMX process keeps running, it will then resume ex- ecution of the virtual machine by doing a special call into the device driver, which will generate a world switch back into the VMM context. As you see, this is a neat trick that hides the entire VMM and virtual machine from the host operating sys- tem. More importantly, it provides the VMM complete freedom to reprogram the hardware as it sees fit. 7.12.5 The Evolution of VMware Workstation The technology landscape has changed dramatically in the decade following the development of the original VMware Virtual Machine Monitor. The hosted architecture is still used today for state-of-the-art interactive hyper- visors such as VMware Workstation, VMware Player, and VMware Fusion (the product aimed at Apple OS X host operating systems), and even in VMware’s product aimed at cell phones (Barr et al., 2010). The world switch, and its ability to separate the host operating system context from the VMM context, remains the foundational mechanism of VMware’s hosted products today. Although the imple- mentation of the world switch has evolved through the years, for example, to
512 VIRTUALIZATION AND THE CLOUD CHAP. 7 support 64-bit systems, the fundamental idea of having totally separate address spaces for the host operating system and the VMM remains valid today. In contrast, the approach to the virtualization of the x86 architecture changed rather dramatically with the introduction of hardware-assisted virtualization. Hard- ware-assisted virtualizations, such as Intel VT-x and AMD-v were introduced in two phases. The first phase, starting in 2005, was designed with the explicit pur- pose of eliminating the need for either paravirtualization or binary translation (Uhlig et al., 2005). Starting in 2007, the second phase provided hardware support in the MMU in the form of nested page tables. This eliminated the need to main- tain shadow page tables in software. Today, VMware’s hypervisors mostly uses a hardware-based, trap-and-emulate approach (as formalized by Popek and Goldberg four decades earlier) whenever the processor supports both virtualization and nested page tables. The emergence of hardware support for virtualization had a significant impact on VMware’s guest operating system centric-strategy. In the original VMware Workstation, the strategy was used to dramatically reduce implementation com- plexity at the expense of compatibility with the full architecture. Today, full archi- tectural compatibility is expected because of hardware support. The current VMware guest operating system-centric strategy focuses on performance optimiza- tions for selected guest operating systems. 7.12.6 ESX Server: VMware’s type 1 Hypervisor In 2001, VMware released a different product, called ESX Server, aimed at the server marketplace. Here, VMware’s engineers took a different approach: rather than creating a type 2 solution running on top of a host operating system, they de- cided to build a type 1 solution that would run directly on the hardware. Figure 7-12 shows the high-level architecture of ESX Server. It combines an existing component, the VMM, with a true hypervisor running directly on the bare metal. The VMM performs the same function as in VMware Workstation, which is to run the virtual machine in an isolated environment that is a duplicate of the x86 architecture. As a matter of fact, the VMMs used in the two products use the same source code base, and they are largely identical. The ESX hypervisor replaces the host operating system. But rather than implementing the full functionality expected of an operating system, its only goal is to run the various VMM instances and to efficiently manage the physical resources of the machine. ESX Server therefore contains the usual subsystem found in an operating system, such as a CPU sched- uler, a memory manager, and an I/O subsystem, with each subsystem optimized to run virtual machines. The absence of a host operating system required VMware to directly address the issues of peripheral diversity and user experience described earlier. For periph- eral diversity, VMware restricted ESX Server to run only on well-known and certi- fied server platforms, for which it had device drivers. As for the user experience,
SEC. 7.12 CASE STUDY: VMWARE 513 x86 ESX hypervisor VMM VMM VMM VMM VM ESX VM VM VM Figure 7-12. ESX Server: VMware’s type 1 hypervisor. ESX Server (unlike VMware Workstation) required users to install a new system image on a boot partition. Despite the drawbacks, the trade-off made sense for dedicated deployments of virtualization in data centers, consisting of hundreds or thousands of physical ser- vers, and often (many) thousands of virtual machines. Such deployments are some- times referred today as private clouds. There, the ESX Server architecture provides substantial benefits in terms of performance, scalability, manageability, and fea- tures. For example: 1. The CPU scheduler ensures that each virtual machine gets a fair share of the CPU (to avoid starvation). It is also designed so that the dif- ferent virtual CPUs of a given multiprocessor virtual machine are scheduled at the same time. 2. The memory manager is optimized for scalability, in particular to run virtual machines efficiently even when they need more memory than is actually available on the computer. To achieve this result, ESX Ser- ver first introduced the notion of ballooning and transparent page sharing for virtual machines (Waldspurger, 2002). 3. The I/O subsystem is optimized for performance. Although VMware Workstation and ESX Server often share the same front-end emula- tion components, the back ends are totally different. In the VMware Workstation case, all I/O flows through the host operating system and its API, which often adds overhead. This is particularly true in the case of networking and storage devices. With ESX Server, these de- vice drivers run directly within the ESX hypervisor, without requiring a world switch. 4. The back ends also typically relied on abstractions provided by the host operating system. For example, VMware Workstation stores vir- tual machine images as regular (but very large) files on the host file system. In contrast, ESX Server has VMFS (Vaghani, 2010), a file
514 VIRTUALIZATION AND THE CLOUD CHAP. 7 system optimized specifically to store virtual machine images and ensure high I/O throughput. This allows for extreme levels of per- formance. For example, VMware demonstrated back in 2011 that a single ESX Server could issue 1 million disk operations per second (VMware, 2011). 5. ESX Server made it easy to introduce new capabilities, which re- quired the tight coordination and specific configuration of multiple components of a computer. For example, ESX Server introduced VMotion, the first virtualization solution that could migrate a live vir- tual machine from one machine running ESX Server to another ma- chine running ESX Server, while it was running. This achievement re- quired the coordination of the memory manager, the CPU scheduler, and the networking stack. Over the years, new features were added to ESX Server. ESX Server evolved into ESXi, a small-footprint alternative that is sufficiently small in size to be pre-installed in the firmware of servers. Today, ESXi is VMware’s most important product and serves as the foundation of the vSphere suite. 7.13 RESEARCH ON VIRTUALIZATION AND THE CLOUD Virtualization technology and cloud computing are both extremely active re- search areas. The research produced in these fields is way too much to enumerate. Each has multiple research conferences. For instance, the Virtual Execution Envi- ronments (VEE) conference focuses on virtualization in the broadest sense. You will find papers on migration deduplication, scaling out, and so on. Likewise, the ACM Symposium on Cloud Computing (SOCC) is one of the best-known venues on cloud computing. Papers in SOCC include work on fault resilience, scheduling of data center workloads, management and debugging in clouds, and so on. Old topics never really die, as in Penneman et al. (2013), which looks at the problems of virtualizing the ARM in the light of the Popek and Goldberg criteria. Security is perpetually a hot topic (Beham et al., 2013; Mao, 2013; and Pearce et al., 2013), as is reducing energy usage (Botero and Hesselbach, 2013; and Yuan et al., 2013). With so many data centers now using virtualization technology, the net- works connecting these machines are also a major subject of research (Theodorou et al., 2013). Virtualization in wireless networks is also an up-and-coming subject (Wang et al., 2013a). One interesting area which has seen a lot of interesting research is nested virtu- alization (Ben-Yehuda et al., 2010; and Zhang et al., 2011). The idea is that a vir- tual machine itself can be further virtualized into multiple higher-level virtual ma- chines, which in turn may be virtualized and so on. One of these projects is appro- priately called ‘‘Turtles,’’ because once you start, ‘‘It’s Turtles all the way down!’’
SEC. 7.13 RESEARCH ON VIRTUALIZATION AND THE CLOUD 515 One of the nice things about virtualization hardware is that untrusted code can get direct but safe access to hardware features like page tables, and tagged TLBs. With this in mind, the Dune project (Belay, 2012) does not aim to provide a ma- chine abstraction, but rather it provides a process abstraction. The process is able to enter Dune mode, an irreversible transition that gives it access to the low-level hardware. Nevertheless, it is still a process and able to talk to and rely on the ker- nel. The only difference that it uses the VMCALL instruction to make a system call. PROBLEMS 1. Give a reason why a data center might be interested in virtualization. 2. Give a reason why a company might be interested in running a hypervisor on a ma- chine that has been in use for a while. 3. Give a reason why a software developer might use virtualization on a desktop machine being used for development. 4. Give a reason why an individual at home might be interested in virtualization. 5. Why do you think virtualization took so long to become popular? After all, the key paper was written in 1974 and IBM mainframes had the necessary hardware and soft- ware throughout the 1970s and beyond. 6. Name two kinds of instructions that are sensitive in the Popek and Goldberg sense. 7. Name three machine instructions that are not sensitive in the Popek and Goldberg sense. 8. What is the difference between full virtualization and paravirtualization? Which do you think is harder to do? Explain your answer. 9. Does it make sense to paravirtualize an operating system if the source code is avail- able? What if it is not? 10. Consider a type 1 hypervisor that can support up to n virtual machines at the same time. PCs can have a maximum of four disk primary partitions. Can n be larger than 4? If so, where can the data be stored? 11. Briefly explain the concept of process-level virtualization. 12. Why do type 2 hypervisors exist? After all, there is nothing they can do that type 1 hypervisors cannot do and the type 1 hypervisors are generally more efficient as well. 13. Is virtualization of any use to type 2 hypervisors? 14. Why was binary translation invented? Do you think it has much of a future? Explain your answer. 15. Explain how the x86’s four protection rings can be used to support virtualization. 16. State one reason as to why a hardware-based approach using VT-enabled CPUs can perform poorly when compared to translation-based software approaches.
516 VIRTUALIZATION AND THE CLOUD CHAP. 7
17. Give one case where translated code can be faster than the original code, in a system using binary translation.
18. VMware does binary translation one basic block at a time, then it executes the block and starts translating the next one. Could it translate the entire program in advance and then execute it? If so, what are the advantages and disadvantages of each technique?
19. What is the difference between a pure hypervisor and a pure microkernel?
20. Briefly explain why memory is so difficult to virtualize.
21. Running multiple virtual machines on a PC is known to require large amounts of memory. Why? Can you think of any ways to reduce the memory usage? Explain.
22. Explain the concept of shadow page tables, as used in memory virtualization.
23. One way to handle guest operating systems that change their page tables using ordinary (nonprivileged) instructions is to mark the page tables as read only and take a trap when they are modified. How else could the shadow page tables be maintained? Discuss the efficiency of your approach vs. the read-only page tables.
24. Why are balloon drivers used? Is this cheating?
25. Describe a situation in which balloon drivers do not work.
26. Explain the concept of deduplication as used in memory virtualization.
27. Computers have had DMA for doing I/O for decades. Did this cause any problems before there were I/O MMUs?
28. Give one advantage of cloud computing over running your programs locally. Give one disadvantage as well.
29. Give an example of IAAS, PAAS, and SAAS.
30. Why is virtual machine migration important? Under what circumstances might it be useful?
31. Migrating virtual machines may be easier than migrating processes, but migration can still be difficult. What problems can arise when migrating a virtual machine?
32. Why is migration of virtual machines from one machine to another easier than migrating processes from one machine to another?
33. What is the difference between live migration and the other kind (dead migration)?
34. What were the three main requirements considered while designing VMware?
35. Why was the enormous number of peripheral devices available a problem when VMware Workstation was first introduced?
36. VMware ESXi has been made very small. Why? After all, servers at data centers usually have tens of gigabytes of RAM. What difference does a few tens of megabytes more or less make?
37. Do an Internet search to find two real-life examples of virtual appliances.
8 MULTIPLE PROCESSOR SYSTEMS Since its inception, the computer industry has been driven by an endless quest for more and more computing power. The ENIAC could perform 300 operations per second, easily 1000 times faster than any calculator before it, yet people were not satisfied with it. We now hav e machines millions of times faster than the ENIAC and still there is a demand for yet more horsepower. Astronomers are try- ing to make sense of the universe, biologists are trying to understand the implica- tions of the human genome, and aeronautical engineers are interested in building safer and more efficient aircraft, and all want more CPU cycles. However much computing power there is, it is never enough. In the past, the solution was always to make the clock run faster. Unfortunate- ly, we hav e begun to hit some fundamental limits on clock speed. According to Einstein’s special theory of relativity, no electrical signal can propagate faster than the speed of light, which is about 30 cm/nsec in vacuum and about 20 cm/nsec in copper wire or optical fiber. This means that in a computer with a 10-GHz clock, the signals cannot travel more than 2 cm in total. For a 100-GHz computer the total path length is at most 2 mm. A 1-THz (1000-GHz) computer will have to be smal- ler than 100 microns, just to let the signal get from one end to the other and back once within a single clock cycle. Making computers this small may be possible, but then we hit another funda- mental problem: heat dissipation. The faster the computer runs, the more heat it generates, and the smaller the computer, the harder it is to get rid of this heat. Al- ready on high-end x86 systems, the CPU cooler is bigger than the CPU itself. All 517
518 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 in all, going from 1 MHz to 1 GHz simply required incrementally better engineer- ing of the chip manufacturing process. Going from 1 GHz to 1 THz is going to re- quire a radically different approach. One approach to greater speed is through massively parallel computers. These machines consist of many CPUs, each of which runs at ‘‘normal’’ speed (whatever that may mean in a given year), but which collectively have far more computing power than a single CPU. Systems with tens of thousands of CPUs are now com- mercially available. Systems with 1 million CPUs are already being built in the lab (Furber et al., 2013). While there are other potential approaches to greater speed, such as biological computers, in this chapter we will focus on systems with multi- ple conventional CPUs. Highly parallel computers are frequently used for heavy-duty number crunch- ing. Problems such as predicting the weather, modeling airflow around an aircraft wing, simulating the world economy, or understanding drug-receptor interactions in the brain are all computationally intensive. Their solutions require long runs on many CPUs at once. The multiple processor systems discussed in this chapter are widely used for these and similar problems in science and engineering, among other areas. Another relevant development is the incredibly rapid growth of the Internet. It was originally designed as a prototype for a fault-tolerant military control system, then became popular among academic computer scientists, and long ago acquired many new uses. One of these is linking up thousands of computers all over the world to work together on large scientific problems. In a sense, a system consist- ing of 1000 computers spread all over the world is no different than one consisting of 1000 computers in a single room, although the delay and other technical charac- teristics are different. We will also consider these systems in this chapter. Putting 1 million unrelated computers in a room is easy to do provided that you have enough money and a sufficiently large room. Spreading 1 million unrelat- ed computers around the world is even easier since it finesses the second problem. The trouble comes in when you want them to communicate with one another to work together on a single problem. As a consequence, a great deal of work has been done on interconnection technology, and different interconnect technologies have led to qualitatively different kinds of systems and different software organiza- tions. All communication between electronic (or optical) components ultimately comes down to sending messages—well-defined bit strings—between them. The differences are in the time scale, distance scale, and logical organization involved. At one extreme are the shared-memory multiprocessors, in which somewhere be- tween two and about 1000 CPUs communicate via a shared memory. In this model, every CPU has equal access to the entire physical memory, and can read and write individual words using LOAD and STORE instructions. Accessing a mem- ory word usually takes 1–10 nsec. As we shall see, it is now common to put more than one processing core on a single CPU chip, with the cores sharing access to
SEC. 8.1 MULTIPROCESSORS 519 main memory (and sometimes even sharing caches). In other words, the model of shared-memory multicomputers may be implemented using physically separate CPUs, multiple cores on a single CPU, or a combination of the above. While this model, illustrated in Fig. 8-1(a), sounds simple, actually implementing it is not really so simple and usually involves considerable message passing under the cov- ers, as we will explain shortly. Howev er, this message passing is invisible to the programmers. C C C C C C C C M C C C C C Shared memory Inter- connect CPU Local memory (a) (b) (c) M C C M C M C M C C M C C M C M C C M M M M C+ M C+ M C+ M C+ M C+ M C+ M Complete system Internet Figure 8-1. (a) A shared-memory multiprocessor. (b) A message-passing multi- computer. (c) A wide area distributed system. Next comes the system of Fig. 8-1(b) in which the CPU-memory pairs are con- nected by a high-speed interconnect. This kind of system is called a message-pas- sing multicomputer. Each memory is local to a single CPU and can be accessed only by that CPU. The CPUs communicate by sending multiword messages over the interconnect. With a good interconnect, a short message can be sent in 10–50 μsec, but still far longer than the memory access time of Fig. 8-1(a). There is no shared global memory in this design. Multicomputers (i.e., message-passing sys- tems) are much easier to build than (shared-memory) multiprocessors, but they are harder to program. Thus each genre has its fans. The third model, which is illustrated in Fig. 8-1(c), connects complete com- puter systems over a wide area network, such as the Internet, to form a distributed system. Each of these has its own memory and the systems communicate by mes- sage passing. The only real difference between Fig. 8-1(b) and Fig. 8-1(c) is that in the latter, complete computers are used and message times are often 10–100 msec. This long delay forces these loosely coupled systems to be used in different ways than the tightly coupled systems of Fig. 8-1(b). The three types of systems differ in their delays by something like three orders of magnitude. That is the difference between a day and three years. This chapter has three major sections, corresponding to each of the three mod- els of Fig. 8-1. In each model discussed in this chapter, we start out with a brief
520 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 introduction to the relevant hardware. Then we move on to the software, especially the operating system issues for that type of system. As we will see, in each case different issues are present and different approaches are needed. 8.1 MULTIPROCESSORS A shared-memory multiprocessor (or just multiprocessor henceforth) is a computer system in which two or more CPUs share full access to a common RAM. A program running on any of the CPUs sees a normal (usually paged) virtual ad- dress space. The only unusual property this system has is that the CPU can write some value into a memory word and then read the word back and get a different value (because another CPU has changed it). When organized correctly, this prop- erty forms the basis of interprocessor communication: one CPU writes some data into memory and another one reads the data out. For the most part, multiprocessor operating systems are normal operating sys- tems. They handle system calls, do memory management, provide a file system, and manage I/O devices. Nevertheless, there are some areas in which they hav e unique features. These include process synchronization, resource management, and scheduling. Below we will first take a brief look at multiprocessor hardware and then move on to these operating systems’ issues. 8.1.1 Multiprocessor Hardware Although all multiprocessors have the property that every CPU can address all of memory, some multiprocessors have the additional property that every memory word can be read as fast as every other memory word. These machines are called UMA (Uniform Memory Access) multiprocessors. In contrast, NUMA (Nonuni- form Memory Access) multiprocessors do not have this property. Why this dif- ference exists will become clear later. We will first examine UMA multiprocessors and then move on to NUMA multiprocessors. UMA Multiprocessors with Bus-Based Architectures The simplest multiprocessors are based on a single bus, as illustrated in Fig. 8-2(a). Tw o or more CPUs and one or more memory modules all use the same bus for communication. When a CPU wants to read a memory word, it first checks to see if the bus is busy. If the bus is idle, the CPU puts the address of the word it wants on the bus, asserts a few control signals, and waits until the memory puts the desired word on the bus. If the bus is busy when a CPU wants to read or write memory, the CPU just waits until the bus becomes idle. Herein lies the problem with this design. With two or three CPUs, contention for the bus will be manageable; with 32 or 64 it will be unbearable. The system will be totally limited by the bandwidth of the bus, and most of the CPUs will be idle most of the time.
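The interprocessor communication pattern described at the start of this section (one CPU writes a word into shared memory and another CPU later reads it back) can be illustrated with a few lines of portable code. The sketch below is only an example: it uses standard C11 atomics and POSIX threads, and the thread and variable names are made up for the illustration.

```c
/*
 * Interprocessor communication through shared memory: one thread stores a
 * value and raises a flag; another thread (possibly running on a different
 * CPU) spins on the flag and then reads the value.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int shared_data;                 /* the word being communicated */
static atomic_int ready;                /* the flag protecting it      */

static void *producer(void *arg)
{
    (void)arg;
    shared_data = 42;                   /* write the data...               */
    atomic_store_explicit(&ready, 1, memory_order_release);  /* ...publish */
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;                               /* spin until the flag is raised   */
    printf("consumer read %d\n", shared_data);   /* guaranteed to see 42   */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```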
SEC. 8.1 MULTIPROCESSORS 521 CPU CPU M Shared memory Shared memory Bus (a) CPU CPU M Private memory (b) CPU CPU M (c) Cache Figure 8-2. Three bus-based multiprocessors. (a) Without caching. (b) With caching. (c) With caching and private memories. The solution to this problem is to add a cache to each CPU, as depicted in Fig. 8-2(b). The cache can be inside the CPU chip, next to the CPU chip, on the processor board, or some combination of all three. Since many reads can now be satisfied out of the local cache, there will be much less bus traffic, and the system can support more CPUs. In general, caching is not done on an individual word basis but on the basis of 32- or 64-byte blocks. When a word is referenced, its en- tire block, called a cache line, is fetched into the cache of the CPU touching it. Each cache block is marked as being either read only (in which case it can be present in multiple caches at the same time) or read-write (in which case it may not be present in any other caches). If a CPU attempts to write a word that is in one or more remote caches, the bus hardware detects the write and puts a signal on the bus informing all other caches of the write. If other caches have a ‘‘clean’’ copy, that is, an exact copy of what is in memory, they can just discard their copies and let the writer fetch the cache block from memory before modifying it. If some other cache has a ‘‘dirty’’ (i.e., modified) copy, it must either write it back to mem- ory before the write can proceed or transfer it directly to the writer over the bus. This set of rules is called a cache-coherence protocol and is one of many. Yet another possibility is the design of Fig. 8-2(c), in which each CPU has not only a cache, but also a local, private memory which it accesses over a dedicated (private) bus. To use this configuration optimally, the compiler should place all the program text, strings, constants and other read-only data, stacks, and local vari- ables in the private memories. The shared memory is then only used for writable shared variables. In most cases, this careful placement will greatly reduce bus traf- fic, but it does require active cooperation from the compiler. UMA Multiprocessors Using Crossbar Switches Even with the best caching, the use of a single bus limits the size of a UMA multiprocessor to about 16 or 32 CPUs. To go beyond that, a different kind of interconnection network is needed. The simplest circuit for connecting n CPUs to k
522 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 memories is the crossbar switch, shown in Fig. 8-3. Crossbar switches have been used for decades in telephone switching exchanges to connect a group of incoming lines to a set of outgoing lines in an arbitrary way. At each intersection of a horizontal (incoming) and vertical (outgoing) line is a crosspoint. A crosspoint is a small electronic switch that can be electrically open- ed or closed, depending on whether the horizontal and vertical lines are to be con- nected or not. In Fig. 8-3(a) we see three crosspoints closed simultaneously, allow- ing connections between the (CPU, memory) pairs (010, 000), (101, 101), and (110, 010) at the same time. Many other combinations are also possible. In fact, the number of combinations is equal to the number of different ways eight rooks can be safely placed on a chess board. Memories CPUs Closed crosspoint switch Open crosspoint switch (a) (b) (c) Crosspoint switch is closed Crosspoint switch is open 000 001 010 011 100 101 110 111 100 101 110 111 000 001 010 011 Figure 8-3. (a) An 8 × 8 crossbar switch. (b) An open crosspoint. (c) A closed crosspoint. One of the nicest properties of the crossbar switch is that it is a nonblocking network, meaning that no CPU is ever denied the connection it needs because some crosspoint or line is already occupied (assuming the memory module itself is available). Not all interconnects have this fine property. Furthermore, no advance planning is needed. Even if seven arbitrary connections are already set up, it is al- ways possible to connect the remaining CPU to the remaining memory.
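The nonblocking property is easy to mimic in a toy software model. The sketch below is not a model of any real switch, and its array and function names are invented; its only point is that a request can be refused solely because the target memory module is already claimed, never because the interconnect has run out of paths.

```c
/*
 * Toy model of an 8 x 8 crossbar (not any real hardware): a request fails
 * only if the target memory module is already owned by another CPU.
 */
#include <stdbool.h>
#include <stdio.h>

#define NMEM 8

static int module_owner[NMEM];          /* -1 = free, otherwise owning CPU */

static bool connect_cpu(int cpu, int mem)
{
    if (module_owner[mem] != -1)
        return false;                   /* memory contention, not a switch limit */
    module_owner[mem] = cpu;            /* close the crosspoint (cpu, mem)       */
    return true;
}

int main(void)
{
    for (int m = 0; m < NMEM; m++)
        module_owner[m] = -1;

    /* The three connections of Fig. 8-3(a): (010,000), (101,101), (110,010). */
    connect_cpu(2, 0);
    connect_cpu(5, 5);
    connect_cpu(6, 2);

    /* A fourth, arbitrary request to a free module is still granted. */
    printf("CPU 7 -> module 7: %s\n", connect_cpu(7, 7) ? "granted" : "blocked");
    return 0;
}
```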
SEC. 8.1 MULTIPROCESSORS 523 Contention for memory is still possible, of course, if two CPUs want to access the same module at the same time. Nevertheless, by partitioning the memory into n units, contention is reduced by a factor of n compared to the model of Fig. 8-2. One of the worst properties of the crossbar switch is the fact that the number of crosspoints grows as n2. With 1000 CPUs and 1000 memory modules we need a million crosspoints. Such a large crossbar switch is not feasible. Nevertheless, for medium-sized systems, a crossbar design is workable. UMA Multiprocessors Using Multistage Switching Networks A completely different multiprocessor design is based on the humble 2 × 2 switch shown in Fig. 8-4(a). This switch has two inputs and two outputs. Mes- sages arriving on either input line can be switched to either output line. For our purposes, messages will contain up to four parts, as shown in Fig. 8-4(b). The Module field tells which memory to use. The Address specifies an address within a module. The Opcode gives the operation, such as READ or WRITE. Finally, the op- tional Value field may contain an operand, such as a 32-bit word to be written on a WRITE. The switch inspects the Module field and uses it to determine if the mes- sage should be sent on X or on Y. A B X Y (a) (b) Module Address Opcode Value Figure 8-4. (a) A 2 × 2 switch with two input lines, A and B, and two output lines, X and Y. (b) A message format. Our 2 × 2 switches can be arranged in many ways to build larger multistage switching networks (Adams et al., 1987; Garofalakis and Stergiou, 2013; and Kumar and Reddy, 1987). One possibility is the no-frills, cattle-class omega net- work, illustrated in Fig. 8-5. Here we have connected eight CPUs to eight memo- ries using 12 switches. More generally, for n CPUs and n memories we would need log2 n stages, with n/2 switches per stage, for a total of (n/2) log2 n switches, which is a lot better than n2 crosspoints, especially for large values of n. The wiring pattern of the omega network is often called the perfect shuffle, since the mixing of the signals at each stage resembles a deck of cards being cut in half and then mixed card-for-card. To see how the omega network works, suppose that CPU 011 wants to read a word from memory module 110. The CPU sends a READ message to switch 1D containing the value 110 in the Module field. The switch takes the first (i.e., leftmost) bit of 110 and uses it for routing. A 0 routes to the upper output and a 1 routes to the lower one. Since this bit is a 1, the message is routed via the lower output to 2D.
524 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 CPUs b b b b a a a a 3 Stages Memories 000 001 010 011 100 101 110 111 000 001 010 011 100 101 110 111 1A 1B 1C 1D 2A 2B 2C 2D 3A 3B 3C 3D Figure 8-5. An omega switching network. All the second-stage switches, including 2D, use the second bit for routing. This, too, is a 1, so the message is now forwarded via the lower output to 3D. Here the third bit is tested and found to be a 0. Consequently, the message goes out on the upper output and arrives at memory 110, as desired. The path followed by this message is marked in Fig. 8-5 by the letter a. As the message moves through the switching network, the bits at the left-hand end of the module number are no longer needed. They can be put to good use by recording the incoming line number there, so the reply can find its way back. For path a, the incoming lines are 0 (upper input to 1D), 1 (lower input to 2D), and 1 (lower input to 3D), respectively. The reply is routed back using 011, only reading it from right to left this time. At the same time all this is going on, CPU 001 wants to write a word to memo- ry module 001. An analogous process happens here, with the message routed via the upper, upper, and lower outputs, respectively, marked by the letter b. When it arrives, its Module field reads 001, representing the path it took. Since these two requests do not use any of the same switches, lines, or memory modules, they can proceed in parallel. Now consider what would happen if CPU 000 simultaneously wanted to access memory module 000. Its request would come into conflict with CPU 001’s request at switch 3A. One of them would then have to wait. Unlike the crossbar switch, the omega network is a blocking network. Not every set of requests can be proc- essed simultaneously. Conflicts can occur over the use of a wire or a switch, as well as between requests to memory and replies from memory. Since it is highly desirable to spread the memory references uniformly across the modules, one common technique is to use the low-order bits as the module number. Consider, for example, a byte-oriented address space for a computer that
mostly accesses full 32-bit words. The 2 low-order bits will usually be 00, but the next 3 bits will be uniformly distributed. By using these 3 bits as the module number, consecutive words will be in consecutive modules. A memory system in which consecutive words are in different modules is said to be interleaved. Interleaved memories maximize parallelism because most memory references are to consecutive addresses. It is also possible to design switching networks that are nonblocking and offer multiple paths from each CPU to each memory module to spread the traffic better.

NUMA Multiprocessors

Single-bus UMA multiprocessors are generally limited to no more than a few dozen CPUs, and crossbar or switched multiprocessors need a lot of (expensive) hardware and are not that much bigger. To get to more than 100 CPUs, something has to give. Usually, what gives is the idea that all memory modules have the same access time. This concession leads to the idea of NUMA multiprocessors, as mentioned above. Like their UMA cousins, they provide a single address space across all the CPUs, but unlike the UMA machines, access to local memory modules is faster than access to remote ones. Thus all UMA programs will run without change on NUMA machines, but the performance will be worse than on a UMA machine. NUMA machines have three key characteristics that all of them possess and which together distinguish them from other multiprocessors:

1. There is a single address space visible to all CPUs.
2. Access to remote memory is via LOAD and STORE instructions.
3. Access to remote memory is slower than access to local memory.

When the access time to remote memory is not hidden (because there is no caching), the system is called NC-NUMA (Non Cache-coherent NUMA). When the caches are coherent, the system is called CC-NUMA (Cache-Coherent NUMA). A popular approach for building large CC-NUMA multiprocessors is the directory-based multiprocessor. The idea is to maintain a database telling where each cache line is and what its status is. When a cache line is referenced, the database is queried to find out where it is and whether it is clean or dirty. Since this database is queried on every instruction that touches memory, it must be kept in extremely fast special-purpose hardware that can respond in a fraction of a bus cycle.

To make the idea of a directory-based multiprocessor somewhat more concrete, let us consider as a simple (hypothetical) example a 256-node system, each node consisting of one CPU and 16 MB of RAM connected to the CPU via a local bus. The total memory is 2^32 bytes and it is divided up into 2^26 cache lines of 64 bytes each. The memory is statically allocated among the nodes, with 0-16M in node 0, 16M-32M in node 1, etc. The nodes are connected by an interconnection network,
as shown in Fig. 8-6(a). Each node also holds the directory entries for the 2^18 64-byte cache lines comprising its 2^24-byte memory. For the moment, we will assume that a line can be held in at most one cache.

Figure 8-6. (a) A 256-node directory-based multiprocessor. (b) Division of a 32-bit memory address into fields (8-bit node, 18-bit block, 6-bit offset). (c) The directory at node 36.

To see how the directory works, let us trace a LOAD instruction from CPU 20 that references a cached line. First the CPU issuing the instruction presents it to its MMU, which translates it to a physical address, say, 0x24000108. The MMU splits this address into the three parts shown in Fig. 8-6(b). In decimal, the three parts are node 36, line 4, and offset 8. The MMU sees that the memory word referenced is from node 36, not node 20, so it sends a request message through the interconnection network to the line's home node, 36, asking whether its line 4 is cached, and if so, where.

When the request arrives at node 36 over the interconnection network, it is routed to the directory hardware. The hardware indexes into its table of 2^18 entries, one for each of its cache lines, and extracts entry 4. From Fig. 8-6(c) we see that the line is not cached, so the hardware issues a fetch for line 4 from the local RAM and after it arrives sends it back to node 20. It then updates directory entry 4 to indicate that the line is now cached at node 20.
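The field split used in this walk-through can be checked with a few lines of code. The program below is only an illustration of Fig. 8-6(b); the structure and function names are invented. It extracts the 8-bit node, 18-bit line, and 6-bit offset fields from the example address 0x24000108 and reproduces the values used above.

```c
/*
 * Splitting the example physical address into the (node, line, offset)
 * fields of Fig. 8-6(b): 8 bits of node number, 18 bits of cache-line
 * number within the node, and 6 bits of byte offset within the 64-byte line.
 */
#include <stdint.h>
#include <stdio.h>

struct dir_ref { unsigned node, line, offset; };

static struct dir_ref split_address(uint32_t pa)
{
    struct dir_ref r;
    r.offset = pa & 0x3F;              /* low 6 bits: byte within the line   */
    r.line   = (pa >> 6) & 0x3FFFF;    /* next 18 bits: line within the node */
    r.node   = pa >> 24;               /* top 8 bits: home node (0..255)     */
    return r;
}

int main(void)
{
    struct dir_ref r = split_address(0x24000108);
    printf("node %u, line %u, offset %u\n", r.node, r.line, r.offset);
    /* Prints "node 36, line 4, offset 8", matching the walk-through above. */
    return 0;
}
```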
Now let us consider a second request, this time asking about node 36’s line 2. From Fig. 8-6(c) we see that this line is cached at node 82. At this point the hardware could update directory entry 2 to say that the line is now at node 20 and then send a message to node 82 instructing it to pass the line to node 20 and invalidate its cache. Note that even a so-called ‘‘shared-memory multiprocessor’’ has a lot of message passing going on under the hood.

As a quick aside, let us calculate how much memory is being taken up by the directories. Each node has 16 MB of RAM and 2^18 9-bit entries to keep track of that RAM. Thus the directory overhead is about 9 × 2^18 bits divided by 16 MB or about 1.76%, which is generally acceptable (although it has to be high-speed memory, which increases its cost, of course). Even with 32-byte cache lines the overhead would only be 4%. With 128-byte cache lines, it would be under 1%.

An obvious limitation of this design is that a line can be cached at only one node. To allow lines to be cached at multiple nodes, we would need some way of locating all of them, for example, to invalidate or update them on a write. On many multicore processors, a directory entry therefore consists of a bit vector with one bit per core. A ‘‘1’’ indicates that the cache line is present on the core, and a ‘‘0’’ that it is not. Moreover, each directory entry typically contains a few more bits. As a result, the memory cost of the directory increases considerably.

Multicore Chips

As chip manufacturing technology improves, transistors are getting smaller and smaller and it is possible to put more and more of them on a chip. This empirical observation is often called Moore’s Law, after Intel co-founder Gordon Moore, who first noticed it. In 1974, the Intel 8080 contained a little over 2000 transistors, while Xeon Nehalem-EX CPUs have over 2 billion transistors.

An obvious question is: ‘‘What do you do with all those transistors?’’ As we discussed in Sec. 1.3.1, one option is to add megabytes of cache to the chip. This option is serious, and chips with 4–32 MB of on-chip cache are common. But at some point increasing the cache size may run the hit rate up only from 99% to 99.5%, which does not improve application performance much.

The other option is to put two or more complete CPUs, usually called cores, on the same chip (technically, on the same die). Dual-core, quad-core, and octa-core chips are already common; and you can even buy chips with hundreds of cores. No doubt more cores are on their way. Caches are still crucial and are now spread across the chip. For instance, the Intel Xeon 2651 has 12 physical hyperthreaded cores, giving 24 virtual cores. Each of the 12 physical cores has 32 KB of L1 instruction cache and 32 KB of L1 data cache. Each one also has 256 KB of L2 cache. Finally, the 12 cores share 30 MB of L3 cache.

While the CPUs may or may not share caches (see, for example, Fig. 1-8), they always share main memory, and this memory is consistent in the sense that there is always a unique value for each memory word. Special hardware circuitry makes
528 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 sure that if a word is present in two or more caches and one of the CPUs modifies the word, it is automatically and atomically removed from all the caches in order to maintain consistency. This process is known as snooping. The result of this design is that multicore chips are just very small multiproces- sors. In fact, multicore chips are sometimes called CMPs (Chip MultiProces- sors). From a software perspective, CMPs are not really that different from bus- based multiprocessors or multiprocessors that use switching networks. However, there are some differences. To start with, on a bus-based multiprocessor, each of the CPUs has its own cache, as in Fig. 8-2(b) and also as in the AMD design of Fig. 1-8(b). The shared-cache design of Fig. 1-8(a), which Intel uses in many of its processors, does not occur in other multiprocessors. A shared L2 or L3 cache can affect performance. If one core needs a lot of cache memory and the others do not, this design allows the cache hog to take whatever it needs. On the other hand, the shared cache also makes it possible for a greedy core to hurt the other cores. An area in which CMPs differ from their larger cousins is fault tolerance. Be- cause the CPUs are so closely connected, failures in shared components may bring down multiple CPUs at once, something unlikely in traditional multiprocessors. In addition to symmetric multicore chips, where all the cores are identical, an- other common category of multicore chip is the System On a Chip (SoC). These chips have one or more main CPUs, but also special-purpose cores, such as video and audio decoders, cryptoprocessors, network interfaces, and more, leading to a complete computer system on a chip. Manycore Chips Multicore simply means ‘‘more than one core,’’ but when the number of cores grows well beyond the reach of finger counting, we use another name. Manycore chips are multicores that contain tens, hundreds, or even thousands of cores. While there is no hard threshold beyond which a multicore becomes a manycore, an easy distinction is that you probably have a manycore if you no longer care about losing one or two cores. Accelerator add-on cards like Intel’s Xeon Phi have in excess of 60 x86 cores. Other vendors have already crossed the 100-core barrier with different kinds of cores. A thousand general-purpose cores may be on their way. It is not easy to im- agine what to do with a thousand cores, much less how to program them. Another problem with really large numbers of cores is that the machinery needed to keep their caches coherent becomes very complicated and very expen- sive. Many engineers worry that cache coherence may not scale to many hundreds of cores. Some even advocate that we should give it up altogether. They fear that the cost of coherence protocols in hardware will be so high that all those shiny new cores will not help performance much because the processor is too busy keeping the caches in a consistent state. Worse, it would need to spend way too much mem- ory on the (fast) directory to do so. This is known as the coherency wall.
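Before putting numbers on that worry, here is a quick back-of-the-envelope check, in C, of the directory costs involved. It simply reproduces the 1.76% figure computed earlier for the 16-MB-per-node example and then shows how a bit-vector directory grows with the core count; the core counts chosen are illustrative, and 64-byte lines are assumed throughout.

#include <stdio.h>

int main(void)
{
    double mem_bytes  = 16.0 * 1024 * 1024;       /* 16 MB of RAM per node  */
    double line_bytes = 64.0;
    double lines      = mem_bytes / line_bytes;   /* 2^18 lines per node    */

    /* Single-holder directory: 9-bit entries (8-bit holder + a cached bit). */
    double ptr_bits = 9.0 * lines;
    printf("pointer directory: %.2f%% of memory\n",
           100.0 * ptr_bits / (8.0 * mem_bytes));            /* about 1.76% */

    /* Bit-vector directory: one bit per core in every entry.               */
    for (int cores = 64; cores <= 1024; cores *= 4) {
        double vec_bits = (double)cores * lines;
        printf("%4d cores: %3d-byte entries, %5.1f%% of memory\n",
               cores, cores / 8, 100.0 * vec_bits / (8.0 * mem_bytes));
    }
    return 0;
}

For 1024 cores the entries are already 128 bytes, larger than the 64-byte lines they describe, which is exactly the awkward situation discussed next.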
SEC. 8.1 MULTIPROCESSORS 529 Consider, for instance, our directory-based cache-coherency solution discussed above. If each directory entry contains a bit vector to indicate which cores contain a particular cache line, the directory entry for a CPU with 1024 cores will be at least 128 bytes long. Since cache lines themselves are rarely larger than 128 bytes, this leads to the awkward situation that the directory entry is larger than the cache- line it tracks. Probably not what we want. Some engineers argue that the only programming model that has proven to scale to very large numbers of processors is that which employs message passing and distributed memory—and that is what we should expect in future manycore chips also. Experimental processors like Intel’s 48-core SCC have already dropped cache consistency and provided hardware support for faster message passing in- stead. On the other hand, other processors still provide consistency even at large core counts. Hybrid models are also possible. For instance, a 1024-core chip may be partitioned in 64 islands with 16 cache-coherent cores each, while abandoning cache coherence between the islands. Thousands of cores are not even that special any more. The most common manycores today, graphics processing units, are found in just about any computer system that is not embedded and has a monitor. A GPU is a processor with dedi- cated memory and, literally, thousands of itty-bitty cores. Compared to gener- al-purpose processors, GPUs spend more of their transistor budget on the circuits that perform calculations and less on caches and control logic. They are very good for many small computations done in parallel, like rendering polygons in graphics applications. They are not so good at serial tasks. They are also hard to program. While GPUs can be useful for operating systems (e.g., encryption or processing of network traffic), it is not likely that much of the operating system itself will run on the GPUs. Other computing tasks are increasingly handled by the GPU, especially com- putationally demanding ones that are common in scientific computing. The term used for general-purpose processing on GPUs is—you guessed it— GPGPU. Un- fortunately, programming GPUs efficiently is extremely difficult and requires spe- cial programming languages such as OpenGL, or NVIDIA’s proprietary CUDA. An important difference between programming GPUs and programming gener- al-purpose processors is that GPUs are essentially ‘‘single instruction multiple data’’ machines, which means that a large number of cores execute exactly the same instruction but on different pieces of data. This programming model is great for data parallelism, but not always convenient for other programming styles (such as task parallelism). Heterogeneous Multicores Some chips integrate a GPU and a number of general-purpose cores on the same die. Similarly, many SoCs contain general-purpose cores in addition to one or more special-purpose processors. Systems that integrate multiple different breeds
530 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 of processors in a single chip are collectively known as heterogeneous multicore processors. An example of a heterogeneous multicore processor is the line of IXP network processors originally introduced by Intel in 2000 and updated regularly with the latest technology. The network processors typically contain a single gener- al purpose control core (for instance, an ARM processor running Linux) and many tens of highly specialized stream processors that are really good at processing net- work packets and not much else. They are commonly used in network equipment, such as routers and firewalls. To route network packets you probably do not need floating-point operations much, so in most models the stream processors do not have a floating-point unit at all. On the other hand, high-speed networking is high- ly dependent on fast access to memory (to read packet data) and the stream proc- essors have special hardware to make this possible. In the previous examples, the systems were clearly heterogeneous. The stream processors and the control processors on the IXPs are completely different beasts with different instruction sets. The same is true for the GPU and the general-pur- pose cores. However, it is also possible to introduce heterogeneity while main- taining the same instruction set. For instance, a CPU can have a small number of ‘‘big’’ cores, with deep pipelines and possibly high clock speeds, and a larger num- ber of ‘‘little’’ cores that are simpler, less powerful, and perhaps run at lower fre- quencies. The powerful cores are needed for running code that requires fast sequential processing while the little cores are useful for tasks that can be executed efficiently in parallel. An example of a heterogeneous architecture along these lines is ARM’s big.LITTLE processor family. Programming with Multiple Cores As has often happened in the past, the hardware is way ahead of the software. While multicore chips are here now, our ability to write applications for them is not. Current programming languages are poorly suited for writing highly parallel programs and good compilers and debugging tools are scarce on the ground. Few programmers have had any experience with parallel programming and most know little about dividing work into multiple packages that can run in parallel. Syn- chronization, eliminating race conditions, and deadlock avoidance are such stuff as really bad dreams are made of, but unfortunately performance suffers horribly if they are not handled well. Semaphores are not the answer. Beyond these startup problems, it is far from obvious what kind of application really needs hundreds, let alone thousands, of cores—especially in home environ- ments. In large server farms, on the other hand, there is often plenty of work for large numbers of cores. For instance, a popular server may easily use a different core for each client request. Similarly, the cloud providers discussed in the previ- ous chapter can soak up the cores to provide a large number of virtual machines to rent out to clients looking for on-demand computing power.
SEC. 8.1 MULTIPROCESSORS 531 8.1.2 Multiprocessor Operating System Types Let us now turn from multiprocessor hardware to multiprocessor software, in particular, multiprocessor operating systems. Various approaches are possible. Below we will study three of them. Note that all of these are equally applicable to multicore systems as well as systems with discrete CPUs. Each CPU Has Its Own Operating System The simplest possible way to organize a multiprocessor operating system is to statically divide memory into as many partitions as there are CPUs and give each CPU its own private memory and its own private copy of the operating system. In effect, the n CPUs then operate as n independent computers. One obvious opti- mization is to allow all the CPUs to share the operating system code and make pri- vate copies of only the operating system data structures, as shown in Fig. 8-7. Has private OS CPU 1 Has private OS CPU 2 Has private OS CPU 3 Has private OS CPU 4 Memory I/O 1 2 Data Data 3 4 Data Data OS code Bus Figure 8-7. Partitioning multiprocessor memory among four CPUs, but sharing a single copy of the operating system code. The boxes marked Data are the operat- ing system’s private data for each CPU. This scheme is still better than having n separate computers since it allows all the machines to share a set of disks and other I/O devices, and it also allows the memory to be shared flexibly. For example, even with static memory allocation, one CPU can be given an extra-large portion of the memory so it can handle large programs efficiently. In addition, processes can efficiently communicate with one another by allowing a producer to write data directly into memory and allowing a consumer to fetch it from the place the producer wrote it. Still, from an operating systems’ perspective, having each CPU have its own operating system is as primi- tive as it gets. It is worth mentioning four aspects of this design that may not be obvious. First, when a process makes a system call, the system call is caught and handled on its own CPU using the data structures in that operating system’s tables. Second, since each operating system has its own tables, it also has its own set of processes that it schedules by itself. There is no sharing of processes. If a user logs into CPU 1, all of his processes run on CPU 1. As a consequence, it can hap- pen that CPU 1 is idle while CPU 2 is loaded with work.
532 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 Third, there is no sharing of physical pages. It can happen that CPU 1 has pages to spare while CPU 2 is paging continuously. There is no way for CPU 2 to borrow some pages from CPU 1 since the memory allocation is fixed. Fourth, and worst, if the operating system maintains a buffer cache of recently used disk blocks, each operating system does this independently of the other ones. Thus it can happen that a certain disk block is present and dirty in multiple buffer caches at the same time, leading to inconsistent results. The only way to avoid this problem is to eliminate the buffer caches. Doing so is not hard, but it hurts per- formance considerably. For these reasons, this model is rarely used in production systems any more, although it was used in the early days of multiprocessors, when the goal was to port existing operating systems to some new multiprocessor as fast as possible. In research, the model is making a comeback, but with all sorts of twists. There is something to be said for keeping the operating systems completely separate. If all of the state for each processor is kept local to that processor, there is little to no sharing to lead to consistency or locking problems. Conversely, if multiple proc- essors have to access and modify the same process table, the locking becomes complicated quickly (and crucial for performance). We will say more about this when we discuss the symmetric multiprocessor model below. Master-Slave Multiprocessors A second model is shown in Fig. 8-8. Here, one copy of the operating system and its tables is present on CPU 1 and not on any of the others. All system calls are redirected to CPU 1 for processing there. CPU 1 may also run user processes if there is CPU time left over. This model is called master-slave since CPU 1 is the master and all the others are slaves. Master runs OS CPU 1 Slave runs user processes CPU 2 Slave runs user processes CPU 3 User processes OS CPU 4 Memory I/O Bus Slave runs user processes Figure 8-8. A master-slave multiprocessor model. The master-slave model solves most of the problems of the first model. There is a single data structure (e.g., one list or a set of prioritized lists) that keeps track of ready processes. When a CPU goes idle, it asks the operating system on CPU 1 for a process to run and is assigned one. Thus it can never happen that one CPU is
idle while another is overloaded. Similarly, pages can be allocated among all the processes dynamically and there is only one buffer cache, so inconsistencies never occur.

The problem with this model is that with many CPUs, the master will become a bottleneck. After all, it must handle all system calls from all CPUs. If, say, 10% of all time is spent handling system calls, then 10 CPUs will pretty much saturate the master, and with 20 CPUs it will be completely overloaded. Thus this model is simple and workable for small multiprocessors, but for large ones it fails.

Symmetric Multiprocessors

Our third model, the SMP (Symmetric MultiProcessor), eliminates this asymmetry. There is one copy of the operating system in memory, but any CPU can run it. When a system call is made, the CPU on which the system call was made traps to the kernel and processes the system call. The SMP model is illustrated in Fig. 8-9.

Figure 8-9. The SMP multiprocessor model.

This model balances processes and memory dynamically, since there is only one set of operating system tables. It also eliminates the master CPU bottleneck, since there is no master, but it introduces its own problems. In particular, if two or more CPUs are running operating system code at the same time, disaster may well result. Imagine two CPUs simultaneously picking the same process to run or claiming the same free memory page. The simplest way around these problems is to associate a mutex (i.e., lock) with the operating system, making the whole system one big critical region. When a CPU wants to run operating system code, it must first acquire the mutex. If the mutex is locked, it just waits. In this way, any CPU can run the operating system, but only one at a time. This approach is sometimes called a big kernel lock.

This model works, but is almost as bad as the master-slave model. Again, suppose that 10% of all run time is spent inside the operating system. With 20 CPUs, there will be long queues of CPUs waiting to get in. Fortunately, it is easy to improve. Many parts of the operating system are independent of one another. For
534 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 example, there is no problem with one CPU running the scheduler while another CPU is handling a file-system call and a third one is processing a page fault. This observation leads to splitting the operating system up into multiple inde- pendent critical regions that do not interact with one another. Each critical region is protected by its own mutex, so only one CPU at a time can execute it. In this way, far more parallelism can be achieved. However, it may well happen that some ta- bles, such as the process table, are used by multiple critical regions. For example, the process table is needed for scheduling, but also for the fork system call and also for signal handling. Each table that may be used by multiple critical regions needs its own mutex. In this way, each critical region can be executed by only one CPU at a time and each critical table can be accessed by only one CPU at a time. Most modern multiprocessors use this arrangement. The hard part about writ- ing the operating system for such a machine is not that the actual code is so dif- ferent from a regular operating system. It is not. The hard part is splitting it into critical regions that can be executed concurrently by different CPUs without inter- fering with one another, not even in subtle, indirect ways. In addition, every table used by two or more critical regions must be separately protected by a mutex and all code using the table must use the mutex correctly. Furthermore, great care must be taken to avoid deadlocks. If two critical re- gions both need table A and table B, and one of them claims A first and the other claims B first, sooner or later a deadlock will occur and nobody will know why. In theory, all the tables could be assigned integer values and all the critical regions could be required to acquire tables in increasing order. This strategy avoids dead- locks, but it requires the programmer to think very carefully about which tables each critical region needs and to make the requests in the right order. As the code evolves over time, a critical region may need a new table it did not previously need. If the programmer is new and does not understand the full logic of the system, then the temptation will be to just grab the mutex on the table at the point it is needed and release it when it is no longer needed. However reasonable this may appear, it may lead to deadlocks, which the user will perceive as the sys- tem freezing. Getting it right is not easy and keeping it right over a period of years in the face of changing programmers is very difficult. 8.1.3 Multiprocessor Synchronization The CPUs in a multiprocessor frequently need to synchronize. We just saw the case in which kernel critical regions and tables have to be protected by mutexes. Let us now take a close look at how this synchronization actually works in a multi- processor. It is far from trivial, as we will soon see. To start with, proper synchronization primitives are really needed. If a process on a uniprocessor machine (just one CPU) makes a system call that requires ac- cessing some critical kernel table, the kernel code can just disable interrupts before
touching the table. It can then do its work knowing that it will be able to finish without any other process sneaking in and touching the table before it is finished. On a multiprocessor, disabling interrupts affects only the CPU doing the disable. Other CPUs continue to run and can still touch the critical table. As a consequence, a proper mutex protocol must be used and respected by all CPUs to guarantee that mutual exclusion works.

The heart of any practical mutex protocol is a special instruction that allows a memory word to be inspected and set in one indivisible operation. We saw how TSL (Test and Set Lock) was used in Fig. 2-25 to implement critical regions. As we discussed earlier, what this instruction does is read out a memory word and store it in a register. Simultaneously, it writes a 1 (or some other nonzero value) into the memory word. Of course, it takes two bus cycles to perform the memory read and memory write. On a uniprocessor, as long as the instruction cannot be broken off halfway, TSL always works as expected.

Now think about what could happen on a multiprocessor. In Fig. 8-10 we see the worst-case timing, in which memory word 1000, being used as a lock, is initially 0. In step 1, CPU 1 reads out the word and gets a 0. In step 2, before CPU 1 has a chance to rewrite the word to 1, CPU 2 gets in and also reads the word out as a 0. In step 3, CPU 1 writes a 1 into the word. In step 4, CPU 2 also writes a 1 into the word. Both CPUs got a 0 back from the TSL instruction, so both of them now have access to the critical region and the mutual exclusion fails.

Figure 8-10. The TSL instruction can fail if the bus cannot be locked. These four steps show a sequence of events in which the failure is demonstrated.

To prevent this problem, the TSL instruction must first lock the bus, preventing other CPUs from accessing it, then do both memory accesses, then unlock the bus. Typically, locking the bus is done by requesting the bus using the usual bus request protocol, then asserting (i.e., setting to a logical 1 value) some special bus line until both cycles have been completed. As long as this special line is being asserted, no other CPU will be granted bus access. This instruction can only be implemented on a bus that has the necessary lines and (hardware) protocol for using them. Modern buses all have these facilities, but on earlier ones that did not, it was not possible to
536 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 implement TSL correctly. This is why Peterson’s protocol was invented: to synchro- nize entirely in software (Peterson, 1981). If TSL is correctly implemented and used, it guarantees that mutual exclusion can be made to work. However, this mutual exclusion method uses a spin lock be- cause the requesting CPU just sits in a tight loop testing the lock as fast as it can. Not only does it completely waste the time of the requesting CPU (or CPUs), but it may also put a massive load on the bus or memory, seriously slowing down all other CPUs trying to do their normal work. At first glance, it might appear that the presence of caching should eliminate the problem of bus contention, but it does not. In theory, once the requesting CPU has read the lock word, it should get a copy in its cache. As long as no other CPU attempts to use the lock, the requesting CPU should be able to run out of its cache. When the CPU owning the lock writes a 0 to it to release it, the cache protocol automatically invalidates all copies of it in remote caches, requiring the correct value to be fetched again. The problem is that caches operate in blocks of 32 or 64 bytes. Usually, the words surrounding the lock are needed by the CPU holding the lock. Since the TSL instruction is a write (because it modifies the lock), it needs exclusive access to the cache block containing the lock. Therefore every TSL invalidates the block in the lock holder’s cache and fetches a private, exclusive copy for the requesting CPU. As soon as the lock holder touches a word adjacent to the lock, the cache block is moved to its machine. Consequently, the entire cache block containing the lock is constantly being shuttled between the lock owner and the lock requester, generat- ing even more bus traffic than individual reads on the lock word would have. If we could get rid of all the TSL-induced writes on the requesting side, we could reduce the cache thrashing appreciably. This goal can be accomplished by having the requesting CPU first do a pure read to see if the lock is free. Only if the lock appears to be free does it do a TSL to actually acquire it. The result of this small change is that most of the polls are now reads instead of writes. If the CPU holding the lock is only reading the variables in the same cache block, they can each have a copy of the cache block in shared read-only mode, eliminating all the cache-block transfers. When the lock is finally freed, the owner does a write, which requires exclu- sive access, thus invalidating all copies in remote caches. On the next read by the requesting CPU, the cache block will be reloaded. Note that if two or more CPUs are contending for the same lock, it can happen that both see that it is free simul- taneously, and both do a TSL simultaneously to acquire it. Only one of these will succeed, so there is no race condition here because the real acquisition is done by the TSL instruction, and it is atomic. Seeing that the lock is free and then trying to grab it immediately with a TSL does not guarantee that you get it. Someone else might win, but for the correctness of the algorithm, it does not matter who gets it. Success on the pure read is merely a hint that this would be a good time to try to acquire the lock, but it is not a guarantee that the acquisition will succeed.
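As a concrete illustration of the read-before-TSL idea just described, here is a minimal test-and-test-and-set spin lock in C. It is a sketch, not a production lock: the GCC/Clang atomic builtins stand in for a real TSL instruction, and the refinements discussed next (backoff and per-CPU queue locks) are omitted.

#include <stdbool.h>

typedef struct { int locked; } spinlock_t;            /* 0 = free, 1 = held */

static void spin_lock(spinlock_t *l)
{
    for (;;) {
        /* Pure read: while the lock is held we spin on our own cached copy
         * of the word, generating no writes and no extra bus traffic.      */
        while (__atomic_load_n(&l->locked, __ATOMIC_RELAXED))
            ;
        /* The lock looked free; try to grab it atomically (the "TSL").
         * Another CPU may have won the race, in which case we loop again.  */
        if (!__atomic_exchange_n(&l->locked, 1, __ATOMIC_ACQUIRE))
            return;
    }
}

static void spin_unlock(spinlock_t *l)
{
    __atomic_store_n(&l->locked, 0, __ATOMIC_RELEASE);
}

The pure-read loop is what keeps the polling inside each requester’s cache; only the occasional atomic exchange generates the invalidation traffic described above.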
SEC. 8.1 MULTIPROCESSORS 537 Another way to reduce bus traffic is to use the well-known Ethernet binary exponential backoff algorithm (Anderson, 1990). Instead of continuously polling, as in Fig. 2-25, a delay loop can be inserted between polls. Initially the delay is one instruction. If the lock is still busy, the delay is doubled to two instructions, then four instructions, and so on up to some maximum. A low maximum gives a fast response when the lock is released, but wastes more bus cycles on cache thrashing. A high maximum reduces cache thrashing at the expense of not noticing that the lock is free so quickly. Binary exponential backoff can be used with or without the pure reads preceding the TSL instruction. An even better idea is to give each CPU wishing to acquire the mutex its own private lock variable to test, as illustrated in Fig. 8-11 (Mellor-Crummey and Scott, 1991). The variable should reside in an otherwise unused cache block to avoid conflicts. The algorithm works by having a CPU that fails to acquire the lock allo- cate a lock variable and attach itself to the end of a list of CPUs waiting for the lock. When the current lock holder exits the critical region, it frees the private lock that the first CPU on the list is testing (in its own cache). This CPU then enters the critical region. When it is done, it frees the lock its successor is using, and so on. Although the protocol is somewhat complicated (to avoid having two CPUs attach themselves to the end of the list simultaneously), it is efficient and starvation free. For all the details, readers should consult the paper. CPU 3 CPU 3 spins on this (private) lock CPU 4 spins on this (private) lock CPU 2 spins on this (private) lock When CPU 1 is finished with the real lock, it releases it and also releases the private lock CPU 2 is spinning on CPU 1 holds the real lock Shared memory 4 2 3 1 Figure 8-11. Use of multiple locks to avoid cache thrashing. Spinning vs. Switching So far we have assumed that a CPU needing a locked mutex just waits for it, by polling continuously, polling intermittently, or attaching itself to a list of wait- ing CPUs. Sometimes, there is no alternative for the requesting CPU to just wait- ing. For example, suppose that some CPU is idle and needs to access the shared
538 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 ready list to pick a process to run. If the ready list is locked, the CPU cannot just decide to suspend what it is doing and run another process, as doing that would re- quire reading the ready list. It must wait until it can acquire the ready list. However, in other cases, there is a choice. For example, if some thread on a CPU needs to access the file system buffer cache and it is currently locked, the CPU can decide to switch to a different thread instead of waiting. The issue of whether to spin or to do a thread switch has been a matter of much research, some of which will be discussed below. Note that this issue does not occur on a uniproc- essor because spinning does not make much sense when there is no other CPU to release the lock. If a thread tries to acquire a lock and fails, it is always blocked to give the lock owner a chance to run and release the lock. Assuming that spinning and doing a thread switch are both feasible options, the trade-off is as follows. Spinning wastes CPU cycles directly. Testing a lock re- peatedly is not productive work. Switching, however, also wastes CPU cycles, since the current thread’s state must be saved, the lock on the ready list must be ac- quired, a thread must be selected, its state must be loaded, and it must be started. Furthermore, the CPU cache will contain all the wrong blocks, so many expensive cache misses will occur as the new thread starts running. TLB faults are also like- ly. Eventually, a switch back to the original thread must take place, with more cache misses following it. The cycles spent doing these two context switches plus all the cache misses are wasted. If it is known that mutexes are generally held for, say, 50 μsec and it takes 1 msec to switch from the current thread and 1 msec to switch back later, it is more efficient just to spin on the mutex. On the other hand, if the average mutex is held for 10 msec, it is worth the trouble of making the two context switches. The trouble is that critical regions can vary considerably in their duration, so which approach is better? One design is to always spin. A second design is to always switch. But a third design is to make a separate decision each time a locked mutex is encountered. At the time the decision has to be made, it is not known whether it is better to spin or switch, but for any giv en system, it is possible to make a trace of all activity and analyze it later offline. Then it can be said in retrospect which decision was the best one and how much time was wasted in the best case. This hindsight algorithm then becomes a benchmark against which feasible algorithms can be measured. This problem has been studied by researchers for decades (Ousterhout, 1982). Most work uses a model in which a thread failing to acquire a mutex spins for some period of time. If this threshold is exceeded, it switches. In some cases the threshold is fixed, typically the known overhead for switching to another thread and then switching back. In other cases it is dynamic, depending on the observed history of the mutex being waited on. The best results are achieved when the system keeps track of the last few observed spin times and assumes that this one will be similar to the previous ones. For example, assuming a 1-msec context switch time again, a thread will spin for a
maximum of 2 msec, but observe how long it actually spun. If it fails to acquire a lock and sees that on the previous three runs it waited an average of 200 μsec, it should spin for 2 msec before switching. However, if it sees that it spun for the full 2 msec on each of the previous attempts, it should switch immediately and not spin at all.

Some modern processors, including the x86, offer special instructions to make the waiting more efficient in terms of reducing power consumption. For instance, the MONITOR/MWAIT instructions on x86 allow a program to block until some other processor modifies the data in a previously defined memory area. Specifically, the MONITOR instruction defines an address range that should be monitored for writes. The MWAIT instruction then blocks the thread until someone writes to the area. Effectively, the thread is spinning, but without burning many cycles needlessly.

8.1.4 Multiprocessor Scheduling

Before looking at how scheduling is done on multiprocessors, it is necessary to determine what is being scheduled. Back in the old days, when all processes were single threaded, processes were scheduled—there was nothing else schedulable. All modern operating systems support multithreaded processes, which makes scheduling more complicated.

It matters whether the threads are kernel threads or user threads. If threading is done by a user-space library and the kernel knows nothing about the threads, then scheduling happens on a per-process basis as it always did. If the kernel does not even know threads exist, it can hardly schedule them.

With kernel threads, the picture is different. Here the kernel is aware of all the threads and can pick and choose among the threads belonging to a process. In these systems, the trend is for the kernel to pick a thread to run, with the process it belongs to having only a small role (or maybe none) in the thread-selection algorithm. Below we will talk about scheduling threads, but of course, in a system with single-threaded processes or threads implemented in user space, it is the processes that are scheduled.

Process vs. thread is not the only scheduling issue. On a uniprocessor, scheduling is one dimensional. The only question that must be answered (repeatedly) is: ‘‘Which thread should be run next?’’ On a multiprocessor, scheduling has two dimensions. The scheduler has to decide which thread to run and which CPU to run it on. This extra dimension greatly complicates scheduling on multiprocessors.

Another complicating factor is that in some systems, all of the threads are unrelated, belonging to different processes and having nothing to do with one another. In others they come in groups, all belonging to the same application and working together. An example of the former situation is a server system in which independent users start up independent processes. The threads of different processes are unrelated and each one can be scheduled without regard to the other ones.
An example of the latter situation occurs regularly in program development environments. Large systems often consist of some number of header files containing macros, type definitions, and variable declarations that are used by the actual code files. When a header file is changed, all the code files that include it must be recompiled. The program make is commonly used to manage development. When make is invoked, it starts the compilation of only those code files that must be recompiled on account of changes to the header or code files. Object files that are still valid are not regenerated.

The original version of make did its work sequentially, but newer versions designed for multiprocessors can start up all the compilations at once. If 10 compilations are needed, it does not make sense to schedule 9 of them to run immediately and leave the last one until much later since the user will not perceive the work as completed until the last one has finished. In this case it makes sense to regard the threads doing the compilations as a group and to take that into account when scheduling them.

Moreover, sometimes it is useful to schedule threads that communicate extensively, say in a producer-consumer fashion, not just at the same time, but also close together in space. For instance, they may benefit from sharing caches. Likewise, in NUMA architectures, it may help if they access memory that is close by.

Time Sharing

Let us first address the case of scheduling independent threads; later we will consider how to schedule related threads. The simplest scheduling algorithm for dealing with unrelated threads is to have a single systemwide data structure for ready threads, possibly just a list, but more likely a set of lists for threads at different priorities as depicted in Fig. 8-12(a). Here the 16 CPUs are all currently busy, and a prioritized set of 14 threads are waiting to run. The first CPU to finish its current work (or have its thread block) is CPU 4, which then locks the scheduling queues and selects the highest-priority thread, A, as shown in Fig. 8-12(b). Next, CPU 12 goes idle and chooses thread B, as illustrated in Fig. 8-12(c). As long as the threads are completely unrelated, doing scheduling this way is a reasonable choice and it is very simple to implement efficiently.

Having a single scheduling data structure used by all CPUs timeshares the CPUs, much as they would be in a uniprocessor system. It also provides automatic load balancing because it can never happen that one CPU is idle while others are overloaded. Two disadvantages of this approach are the potential contention for the scheduling data structure as the number of CPUs grows and the usual overhead in doing a context switch when a thread blocks for I/O.

It is also possible that a context switch happens when a thread’s quantum expires. On a multiprocessor, that has certain properties not present on a uniprocessor. Suppose that the thread happens to hold a spin lock when its quantum expires. Other CPUs waiting on the spin lock just waste their time spinning until that
thread is scheduled again and releases the lock. On a uniprocessor, spin locks are rarely used, so if a process is suspended while it holds a mutex, and another thread starts and tries to acquire the mutex, it will be immediately blocked, so little time is wasted.

Figure 8-12. Using a single data structure for scheduling a multiprocessor.

To get around this anomaly, some systems use smart scheduling, in which a thread acquiring a spin lock sets a processwide flag to show that it currently has a spin lock (Zahorjan et al., 1991). When it releases the lock, it clears the flag. The scheduler then does not stop a thread holding a spin lock, but instead gives it a little more time to complete its critical region and release the lock.

Another issue that plays a role in scheduling is the fact that while all CPUs are equal, some CPUs are more equal. In particular, when thread A has run for a long time on CPU k, CPU k’s cache will be full of A’s blocks. If A gets to run again soon, it may perform better if it is run on CPU k, because k’s cache may still contain some of A’s blocks. Having cache blocks preloaded will increase the cache hit rate and thus the thread’s speed. In addition, the TLB may also contain the right pages, reducing TLB faults.

Some multiprocessors take this effect into account and use what is called affinity scheduling (Vaswani and Zahorjan, 1991). The basic idea here is to make a serious effort to have a thread run on the same CPU it ran on last time. One way to create this affinity is to use a two-level scheduling algorithm. When a thread is created, it is assigned to a CPU, for example based on which one has the smallest load at that moment. This assignment of threads to CPUs is the top level of the algorithm. As a result of this policy, each CPU acquires its own collection of threads.

The actual scheduling of the threads is the bottom level of the algorithm. It is done by each CPU separately, using priorities or some other means. By trying to
keep a thread on the same CPU for its entire lifetime, cache affinity is maximized. However, if a CPU has no threads to run, it takes one from another CPU rather than go idle.

Two-level scheduling has three benefits. First, it distributes the load roughly evenly over the available CPUs. Second, advantage is taken of cache affinity where possible. Third, by giving each CPU its own ready list, contention for the ready lists is minimized because attempts to use another CPU’s ready list are relatively infrequent.

Space Sharing

The other general approach to multiprocessor scheduling can be used when threads are related to one another in some way. Earlier we mentioned the example of parallel make as one case. It also often occurs that a single process has multiple threads that work together. For example, if the threads of a process communicate a lot, it is useful to have them running at the same time. Scheduling multiple threads at the same time across multiple CPUs is called space sharing.

The simplest space-sharing algorithm works like this. Assume that an entire group of related threads is created at once. At the time it is created, the scheduler checks to see if there are as many free CPUs as there are threads. If there are, each thread is given its own dedicated (i.e., nonmultiprogrammed) CPU and they all start. If there are not enough CPUs, none of the threads are started until enough CPUs are available. Each thread holds onto its CPU until it terminates, at which time the CPU is put back into the pool of available CPUs. If a thread blocks on I/O, it continues to hold the CPU, which is simply idle until the thread wakes up. When the next batch of threads appears, the same algorithm is applied.

At any instant of time, the set of CPUs is statically partitioned into some number of partitions, each one running the threads of one process. In Fig. 8-13, we have partitions of sizes 4, 6, 8, and 12 CPUs, with 2 CPUs unassigned, for example. As time goes on, the number and size of the partitions will change as new threads are created and old ones finish and terminate.

Figure 8-13. A set of 32 CPUs split into four partitions, with two CPUs available.
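A compact sketch of this admit-all-or-wait policy is shown below. The data structures and names are invented for illustration; a real scheduler would also have to handle groups larger than the machine, priorities, and blocked threads.

#include <stdbool.h>

#define NCPUS 32

static bool cpu_busy[NCPUS];          /* which CPUs are currently dedicated */
static int  cpus_free = NCPUS;

/* Try to admit a group of n related threads.  Either every thread gets its
 * own dedicated CPU (recorded in cpus[]) or none of them is started.       */
bool admit_group(int n, int cpus[])
{
    if (n > cpus_free)
        return false;                 /* not enough CPUs: the group waits   */

    for (int i = 0, got = 0; got < n; i++)
        if (!cpu_busy[i]) {
            cpu_busy[i] = true;       /* dedicate this CPU to one thread    */
            cpus[got++] = i;
        }
    cpus_free -= n;
    return true;
}

/* When the group terminates, its CPUs go back into the pool.               */
void release_group(int n, const int cpus[])
{
    for (int i = 0; i < n; i++)
        cpu_busy[cpus[i]] = false;
    cpus_free += n;
}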
Periodically, scheduling decisions have to be made. In uniprocessor systems, shortest job first is a well-known algorithm for batch scheduling. The analogous algorithm for a multiprocessor is to choose the process needing the smallest number of CPU cycles, that is, the thread whose CPU-count × run-time is the smallest of the candidates. However, in practice, this information is rarely available, so the algorithm is hard to carry out. In fact, studies have shown that, in practice, beating first-come, first-served is hard to do (Krueger et al., 1994).

In this simple partitioning model, a thread just asks for some number of CPUs and either gets them all or has to wait until they are available. A different approach is for threads to actively manage the degree of parallelism. One method for managing the parallelism is to have a central server that keeps track of which threads are running and want to run and what their minimum and maximum CPU requirements are (Tucker and Gupta, 1989). Periodically, each application polls the central server to ask how many CPUs it may use. It then adjusts the number of threads up or down to match what is available.

For example, a Web server can have 5, 10, 20, or any other number of threads running in parallel. If it currently has 10 threads and there is suddenly more demand for CPUs and it is told to drop to five, when the next five threads finish their current work, they are told to exit instead of being given new work. This scheme allows the partition sizes to vary dynamically to match the current workload better than the fixed system of Fig. 8-13.

Gang Scheduling

A clear advantage of space sharing is the elimination of multiprogramming, which eliminates the context-switching overhead. However, an equally clear disadvantage is the time wasted when a CPU blocks and has nothing at all to do until it becomes ready again. Consequently, people have looked for algorithms that attempt to schedule in both time and space together, especially for processes that create multiple threads, which usually need to communicate with one another.

To see the kind of problem that can occur when the threads of a process are independently scheduled, consider a system with threads A0 and A1 belonging to process A and threads B0 and B1 belonging to process B. Threads A0 and B0 are timeshared on CPU 0; threads A1 and B1 are timeshared on CPU 1. Threads A0 and A1 need to communicate often. The communication pattern is that A0 sends A1 a message, with A1 then sending back a reply to A0, followed by another such sequence, common in client-server situations.

Suppose luck has it that A0 and B1 start first, as shown in Fig. 8-14. In time slice 0, A0 sends A1 a request, but A1 does not get it until it runs in time slice 1 starting at 100 msec. It sends the reply immediately, but A0 does not get the reply until it runs again at 200 msec. The net result is one request-reply sequence every 200 msec. Not very good performance.
Figure 8-14. Communication between two threads belonging to process A that are running out of phase.

The solution to this problem is gang scheduling, which is an outgrowth of co-scheduling (Ousterhout, 1982). Gang scheduling has three parts:

1. Groups of related threads are scheduled as a unit, a gang.
2. All members of a gang run at once on different timeshared CPUs.
3. All gang members start and end their time slices together.

The trick that makes gang scheduling work is that all CPUs are scheduled synchronously. Doing this means that time is divided into discrete quanta as we had in Fig. 8-14. At the start of each new quantum, all the CPUs are rescheduled, with a new thread being started on each one. At the start of the next quantum, another scheduling event happens. In between, no scheduling is done. If a thread blocks, its CPU stays idle until the end of the quantum.

An example of how gang scheduling works is given in Fig. 8-15. Here we have a multiprocessor with six CPUs being used by five processes, A through E, with a total of 24 ready threads. During time slot 0, threads A0 through A5 are scheduled and run. During time slot 1, threads B0, B1, B2, C0, C1, and C2 are scheduled and run. During time slot 2, D’s five threads and E0 get to run. The remaining six threads belonging to process E run in time slot 3. Then the cycle repeats, with slot 4 being the same as slot 0 and so on.

The idea of gang scheduling is to have all the threads of a process run together, at the same time, on different CPUs, so that if one of them sends a request to another one, it will get the message almost immediately and be able to reply almost immediately. In Fig. 8-15, since all the A threads are running together during one quantum, they may send and receive a very large number of messages, thus eliminating the problem of Fig. 8-14.

Figure 8-15. Gang scheduling.
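A toy construction of the slot table of Fig. 8-15 is sketched below. The names are invented; note that, as in the figure, a gang that does not fit in the remainder of a slot (process E here) simply continues in the next one, and a real gang scheduler must also cope with blocking, priorities, and gangs that change size.

#include <stdio.h>

#define NCPUS  6
#define NSLOTS 4

static char table[NSLOTS][NCPUS][4];     /* thread name per (slot, CPU)     */
static int  slot = 0, cpu = 0;           /* next free position              */

static void place_gang(char proc, int nthreads)
{
    for (int t = 0; t < nthreads; t++) {
        snprintf(table[slot][cpu], sizeof table[slot][cpu], "%c%d", proc, t);
        if (++cpu == NCPUS) {            /* this slot is full: use the next */
            cpu = 0;
            slot++;
        }
    }
}

int main(void)
{
    /* The five processes of Fig. 8-15: A-E with 6, 3, 3, 5, and 7 threads. */
    place_gang('A', 6);
    place_gang('B', 3);
    place_gang('C', 3);
    place_gang('D', 5);
    place_gang('E', 7);

    for (int s = 0; s < NSLOTS; s++) {   /* print the resulting slot table  */
        printf("slot %d:", s);
        for (int c = 0; c < NCPUS; c++)
            printf(" %-3s", table[s][c]);
        printf("\n");
    }
    return 0;
}

Running it reproduces the assignment described above: slot 0 holds A0-A5, slot 1 holds B0-B2 and C0-C2, slot 2 holds D0-D4 and E0, and slot 3 holds E1-E6.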
8.2 MULTICOMPUTERS

Multiprocessors are popular and attractive because they offer a simple communication model: all CPUs share a common memory. Processes can write messages to memory that can then be read by other processes. Synchronization can be done using mutexes, semaphores, monitors, and other well-established techniques. The only fly in the ointment is that large multiprocessors are difficult to build and thus expensive. And very large ones are impossible to build at any price. So something else is needed if we are to scale up to large numbers of CPUs.

To get around these problems, much research has been done on multicomputers, which are tightly coupled CPUs that do not share memory. Each one has its own memory, as shown in Fig. 8-1(b). These systems are also known by a variety of other names, including cluster computers and COWS (Clusters Of Workstations). Cloud computing services are always built on multicomputers because they need to be large.

Multicomputers are easy to build because the basic component is just a stripped-down PC, without a keyboard, mouse, or monitor, but with a high-performance network interface card. Of course, the secret to getting high performance is to design the interconnection network and the interface card cleverly. This problem is completely analogous to building the shared memory in a multiprocessor [e.g., see Fig. 8-1(b)]. However, the goal is to send messages on a microsecond time scale, rather than access memory on a nanosecond time scale, so it is simpler, cheaper, and easier to accomplish.

In the following sections, we will first take a brief look at multicomputer hardware, especially the interconnection hardware. Then we will move on to the software, starting with low-level communication software, then high-level communication software. We will also look at a way shared memory can be achieved on systems that do not have it. Finally, we will examine scheduling and load balancing.
546 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 8.2.1 Multicomputer Hardware The basic node of a multicomputer consists of a CPU, memory, a network in- terface, and sometimes a hard disk. The node may be packaged in a standard PC case, but the monitor, keyboard, and mouse are nearly always absent. Sometimes this configuration is called a headless workstation because there is no user with a head in front of it. A workstation with a human user should logically be called a ‘‘headed workstation,’’ but for some reason it is not. In some cases, the PC con- tains a 2-way or 4-way multiprocessor board, possibly each with a dual-, quad- or octa-core chip, instead of a single CPU, but for simplicity, we will assume that each node has one CPU. Often hundreds or even thousands of nodes are hooked together to form a multicomputer. Below we will say a little about how this hard- ware is organized. Interconnection Technology Each node has a network interface card with one or two cables (or fibers) com- ing out of it. These cables connect either to other nodes or to switches. In a small system, there may be one switch to which all the nodes are connected in the star topology of Fig. 8-16(a). Modern switched Ethernets use this topology. (a) (d) (b) (e) (c) (f) Figure 8-16. Various interconnect topologies. (a) A single switch. (b) A ring. (c) A grid. (d) A double torus. (e) A cube. (f) A 4D hypercube.
As an alternative to the single-switch design, the nodes may form a ring, with two wires coming out the network interface card, one into the node on the left and one going into the node on the right, as shown in Fig. 8-16(b). In this topology, no switches are needed and none are shown.

The grid or mesh of Fig. 8-16(c) is a two-dimensional design that has been used in many commercial systems. It is highly regular and easy to scale up to large sizes. It has a diameter, which is the longest path between any two nodes, and which increases only as the square root of the number of nodes. A variant on the grid is the double torus of Fig. 8-16(d), which is a grid with the edges connected. Not only is it more fault tolerant than the grid, but the diameter is also less because the opposite corners can now communicate in only two hops.

The cube of Fig. 8-16(e) is a regular three-dimensional topology. We have illustrated a 2 × 2 × 2 cube, but in the most general case it could be a k × k × k cube. In Fig. 8-16(f) we have a four-dimensional cube built from two three-dimensional cubes with the corresponding nodes connected. We could make a five-dimensional cube by cloning the structure of Fig. 8-16(f) and connecting the corresponding nodes to form a block of four cubes. To go to six dimensions, we could replicate the block of four cubes and interconnect the corresponding nodes, and so on. An n-dimensional cube formed this way is called a hypercube.

Many parallel computers use a hypercube topology because the diameter grows linearly with the dimensionality. Put in other words, the diameter is the base 2 logarithm of the number of nodes. For example, a 10-dimensional hypercube has 1024 nodes but a diameter of only 10, giving excellent delay properties. Note that in contrast, 1024 nodes arranged as a 32 × 32 grid have a diameter of 62, more than six times worse than the hypercube. The price paid for the smaller diameter is that the fanout, and thus the number of links (and the cost), is much larger for the hypercube.

Two kinds of switching schemes are used in multicomputers. In the first one, each message is first broken up (either by the user software or the network interface) into a chunk of some maximum length called a packet. The switching scheme, called store-and-forward packet switching, consists of the packet being injected into the first switch by the source node’s network interface board, as shown in Fig. 8-17(a). The bits come in one at a time, and when the whole packet has arrived at an input buffer, it is copied to the line leading to the next switch along the path, as shown in Fig. 8-17(b). When the packet arrives at the switch attached to the destination node, as shown in Fig. 8-17(c), the packet is copied to that node’s network interface board and eventually to its RAM.

While store-and-forward packet switching is flexible and efficient, it does have the problem of increasing latency (delay) through the interconnection network. Suppose that the time to move a packet one hop in Fig. 8-17 is T nsec. Since the packet must be copied four times to get it from CPU 1 to CPU 2 (to A, to C, to D, and to the destination CPU), and no copy can begin until the previous one is finished, the latency through the interconnection network is 4T. One way out is to
One way out is to design a network in which a packet can be logically divided into smaller units. As soon as the first unit arrives at a switch, it can be forwarded, even before the tail has arrived. Conceivably, the unit could be as small as 1 bit.

Figure 8-17. Store-and-forward packet switching.

The other switching regime, circuit switching, consists of the first switch first establishing a path through all the switches to the destination switch. Once that path has been set up, the bits are pumped all the way from the source to the destination nonstop as fast as possible. There is no intermediate buffering at the intervening switches. Circuit switching requires a setup phase, which takes some time, but is faster once the setup has been completed. After the packet has been sent, the path must be torn down again. A variation on circuit switching, called wormhole routing, breaks each packet up into subpackets and allows the first subpacket to start flowing even before the full path has been built.

Network Interfaces

All the nodes in a multicomputer have a plug-in board containing the node's connection to the interconnection network that holds the multicomputer together. The way these boards are built and how they connect to the main CPU and RAM have substantial implications for the operating system. We will now briefly look at some of the issues here. This material is based in part on the work of Bhoedjang (2000).

In virtually all multicomputers, the interface board contains substantial RAM for holding outgoing and incoming packets. Usually, an outgoing packet has to be copied to the interface board's RAM before it can be transmitted to the first switch. The reason for this design is that many interconnection networks are synchronous, so that once a packet transmission has started, the bits must continue flowing at a
SEC. 8.2 MULTICOMPUTERS 549 constant rate. If the packet is in the main RAM, this continuous flow out onto the network cannot be guaranteed due to other traffic on the memory bus. Using a ded- icated RAM on the interface board eliminates this problem. This design is shown in Fig. 8-18. CPU CPU CPU CPU Switch Node 2 Main RAM Main RAM Node 4 Interface board Optional on- board CPU Interface board RAM Node 3 Main RAM Main RAM Node 1 3 2 1 4 5 User OS Figure 8-18. Position of the network interface boards in a multicomputer. The same problem occurs with incoming packets. The bits arrive from the net- work at a constant and often extremely high rate. If the network interface board cannot store them in real time as they arrive, data will be lost. Again here, trying to go over the system bus (e.g., the PCI bus) to the main RAM is too risky. Since the network board is typically plugged into the PCI bus, this is the only connection it has to the main RAM, so competing for this bus with the disk and every other I/O device is inevitable. It is safer to store incoming packets in the interface board’s private RAM and then copy them to the main RAM later. The interface board may have one or more DMA channels or even a complete CPU (or maybe even multiple CPUs) on board. The DMA channels can copy pack- ets between the interface board and the main RAM at high speed by requesting block transfers on the system bus, thus transferring several words without having to request the bus separately for each word. However, it is precisely this kind of block transfer, which ties up the system bus for multiple bus cycles, that makes the inter- face board RAM necessary in the first place. Many interface boards have a CPU on them, possibly in addition to one or more DMA channels. They are called network processors and are becoming in- creasingly powerful (El Ferkouss et al., 2011). This design means that the main CPU can offload some work to the network board, such as handling reliable trans- mission (if the underlying hardware can lose packets), multicasting (sending a packet to more than one destination), compression/decompression, encryption/de- cryption, and taking care of protection in a system that has multiple processes.
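To make the copy-to-board step concrete, here is a minimal sketch, in C, of a transmit path in which an outgoing packet is first copied into the interface board's private RAM (modeled here as an ordinary array) and a descriptor is marked ready for the board. All names and the descriptor layout are invented for illustration; a real board exposes this through device registers and DMA descriptors instead.

    #include <string.h>
    #include <stdint.h>
    #include <stddef.h>

    #define BOARD_RAM_SLOTS 64
    #define MAX_PKT         1518

    struct tx_desc {             /* one transmit descriptor (hypothetical layout) */
        uint16_t len;            /* number of valid bytes in the slot             */
        volatile uint8_t ready;  /* set by the host, cleared by the board         */
    };

    static uint8_t board_ram[BOARD_RAM_SLOTS][MAX_PKT]; /* stands in for on-board RAM */
    static struct tx_desc tx_ring[BOARD_RAM_SLOTS];
    static unsigned tx_head;                             /* next free slot            */

    /* Copy a packet from main RAM into board RAM and mark it ready to transmit.
       Returns 0 on success, -1 if the packet is too big or the slot is still busy. */
    int send_packet(const void *pkt, size_t len)
    {
        unsigned slot = tx_head % BOARD_RAM_SLOTS;

        if (len > MAX_PKT || tx_ring[slot].ready)
            return -1;

        memcpy(board_ram[slot], pkt, len);   /* the copy discussed in the text     */
        tx_ring[slot].len = (uint16_t)len;
        tx_ring[slot].ready = 1;             /* board can now stream it at line rate */
        tx_head++;
        return 0;
    }

Because the packet now lives in dedicated board RAM, contention on the memory bus can no longer stall the outgoing bit stream.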
However, having two CPUs means that they must synchronize to avoid race conditions, which adds extra overhead and means more work for the operating system.

Copying data across layers is safe, but not necessarily efficient. For instance, a browser requesting data from a remote web server will create a request in the browser's address space. That request is subsequently copied to the kernel so that TCP and IP can handle it. Next, the data are copied to the memory of the network interface. On the other end, the inverse happens: the data are copied from the network card to a kernel buffer, and from a kernel buffer to the Web server. Quite a few copies, unfortunately. Each copy introduces overhead, not just the copying itself, but also the pressure on the cache, TLB, etc. As a consequence, the latency over such network connections is high.

In the next section, we discuss techniques to reduce the overhead due to copying, cache pollution, and context switching as much as possible.

8.2.2 Low-Level Communication Software

The enemy of high-performance communication in multicomputer systems is excess copying of packets. In the best case, there will be one copy from RAM to the interface board at the source node, one copy from the source interface board to the destination interface board (if no storing and forwarding along the path occurs), and one copy from there to the destination RAM, a total of three copies. However, in many systems it is even worse. In particular, if the interface board is mapped into kernel virtual address space and not user virtual address space, a user process can send a packet only by issuing a system call that traps to the kernel. The kernels may have to copy the packets to their own memory both on output and on input, for example, to avoid page faults while transmitting over the network. Also, the receiving kernel probably does not know where to put incoming packets until it has had a chance to examine them. These five copy steps are illustrated in Fig. 8-18.

If copies to and from RAM are the bottleneck, the extra copies to and from the kernel may double the end-to-end delay and cut the throughput in half. To avoid this performance hit, many multicomputers map the interface board directly into user space and allow the user process to put the packets on the board directly, without the kernel being involved. While this approach definitely helps performance, it introduces two problems.

First, what if several processes are running on the node and need network access to send packets? Which one gets the interface board in its address space? Having a system call to map the board in and out of a virtual address space is expensive, but if only one process gets the board, how do the other ones send packets? And what happens if the board is mapped into process A's virtual address space and a packet arrives for process B, especially if A and B have different owners, neither of whom wants to put in any effort to help the other?

One solution is to map the interface board into all processes that need it, but then a mechanism is needed to avoid race conditions. For example, if A claims a
SEC. 8.2 MULTICOMPUTERS 551 buffer on the interface board, and then, due to a time slice, B runs and claims the same buffer, disaster results. Some kind of synchronization mechanism is needed, but these mechanisms, such as mutexes, work only when the processes are as- sumed to be cooperating. In a shared environment with multiple users all in a hurry to get their work done, one user might just lock the mutex associated with the board and never release it. The conclusion here is that mapping the interface board into user space really works well only when there is just one user process running on each node unless special precautions are taken (e.g., different processes get different portions of the interface RAM mapped into their address spaces). The second problem is that the kernel may well need access to the intercon- nection network itself, for example, to access the file system on a remote node. Having the kernel share the interface board with any users is not a good idea. Sup- pose that while the board was mapped into user space, a kernel packet arrived. Or suppose that the user process sent a packet to a remote machine pretending to be the kernel. The conclusion is that the simplest design is to have two network inter- face boards, one mapped into user space for application traffic and one mapped into kernel space for use by the operating system. Many multicomputers do pre- cisely this. On the other hand, newer network interfaces are frequently multiqueue, which means that they hav e more than one buffer to support multiple users efficiently. For instance, the Intel I350 series of network cards has 8 send and 8 receive queues, and is virtualizable to many virtual ports. Better still, the card supports core affin- ity. Specifically, it has its own hashing logic to help steer each packet to a suitable process. As it is faster to process all segments in the same TCP flow on the same processor (where the caches are warm), the card can use the hashing logic to hash the TCP flow fields (IP addresses and TCP port numbers) and add all segments with the same hash on the same queue that is served by a specific core. This is also useful for virtualization, as it allows us to give each virtual machine its own queue. Node-to-Network Interface Communication Another issue is how to get packets onto the interface board. The fastest way is to use the DMA chip on the board to just copy them in from RAM. The problem with this approach is that DMA may use physical rather than virtual addresses and runs independently of the CPU, unless an I/O MMU is present. To start with, al- though a user process certainly knows the virtual address of any packet it wants to send, it generally does not know the physical address. Making a system call to do the virtual-to-physical mapping is undesirable, since the point of putting the inter- face board in user space in the first place was to avoid having to make a system call for each packet to be sent. In addition, if the operating system decides to replace a page while the DMA chip is copying a packet from it, the wrong data will be transmitted. Worse yet, if the operating system replaces a page while the DMA chip is copying an incoming
552 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 packet to it, not only will the incoming packet be lost, but also a page of innocent memory will be ruined, probably with disastrous consequences shortly. These problems can be avoided by having system calls to pin and unpin pages in memory, marking them as temporarily unpageable. However, having to make a system call to pin the page containing each outgoing packet and then having to make another call later to unpin it is expensive. If packets are small, say, 64 bytes or less, the overhead for pinning and unpinning every buffer is prohibitive. For large packets, say, 1 KB or more, it may be tolerable. For sizes in between, it de- pends on the details of the hardware. Besides introducing a performance hit, pin- ning and unpinning pages adds to the software complexity. Remote Direct Memory Access In some fields, high network latencies are simply not acceptable. For instance, for certain applications in high-performance computing the computation time is strongly dependent on the network latency. Likewise, high-frequency trading is all about having computers perform transactions (buying and selling stock) at ex- tremely high speeds—every microsecond counts. Whether or not it is wise to have computer programs trade millions of dollars worth of stock in a millisecond, when pretty much all software tends to be buggy, is an interesting question for dining philosophers to consider when they are not busy grabbing their forks. But not for this book. The point here is that if you manage to get the latency down, it is sure to make you very popular with your boss. In these scenarios, it pays to reduce the amount of copying. For this reason, some network interfaces support RDMA (Remote Direct Memory Access), a technique that allows one machine to perform a direct memory access from one computer to that of another. The RDMA does not involve either of the operating system and the data is directly fetched from, or written to, application memory. RDMA sounds great, but it is not without its disadvantages. Just like normal DMA, the operating system on the communicating nodes must pin the pages invol- ved in the data exchange. Also, just placing data in a remote computer’s memory will not reduce the latency much if the other program is not aware of it. A suc- cessful RDMA does not automatically come with an explicit notification. Instead, a common solution is that a receiver polls on a byte in memory. When the transfer is done, the sender modifies the byte to signal the receiver that there is new data. While this solution works, it is not ideal and wastes CPU cycles. For really serious high-frequency trading, the network cards are custom built using field-programmable gate arrays. They hav e wire-to-wire latency, from re- ceiving the bits on the network card to transmitting a message to buy a few million worth of something, in well under a microsecond. Buying $1 million worth of stock in 1 μsec gives a performance of 1 terabuck/sec, which is nice if you can get the ups and downs right, but is not for the faint of heart. Operating systems do not play much of a role in such extreme settings.
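The polling convention described above can be sketched in a few lines of C. The fragment below assumes, purely for illustration, a buffer that the remote side fills with RDMA writes, with the last byte acting as a completion flag that the sender sets only after the payload has been written; the names are invented, and a real setup would also need the memory registration and page pinning discussed earlier.

    #include <stdint.h>
    #include <stdatomic.h>

    #define BUF_SIZE 4096

    /* Buffer written remotely via RDMA; buf[BUF_SIZE-1] is the completion flag. */
    static volatile uint8_t rdma_buf[BUF_SIZE];

    /* Receiver side: spin until the sender flips the flag byte, then use the data.
       This burns CPU cycles while waiting, exactly the drawback noted above. */
    const volatile uint8_t *wait_for_rdma_message(void)
    {
        while (rdma_buf[BUF_SIZE - 1] == 0)
            ;                                       /* busy-wait: no interrupt, no syscall */
        atomic_thread_fence(memory_order_acquire);  /* do not read payload before the flag */
        return rdma_buf;
    }

    /* Sender side (conceptually): RDMA-write the payload bytes first, then
       RDMA-write a nonzero value into rdma_buf[BUF_SIZE-1] as the signal. */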
8.2.3 User-Level Communication Software

Processes on different CPUs on a multicomputer communicate by sending messages to one another. In the simplest form, this message passing is exposed to the user processes. In other words, the operating system provides a way to send and receive messages, and library procedures make these underlying calls available to user processes. In a more sophisticated form, the actual message passing is hidden from users by making remote communication look like a procedure call. We will study both of these methods below.

Send and Receive

At the barest minimum, the communication services provided can be reduced to two (library) calls, one for sending messages and one for receiving them. The call for sending a message might be

send(dest, &mptr);

and the call for receiving a message might be

receive(addr, &mptr);

The former sends the message pointed to by mptr to a process identified by dest and causes the caller to be blocked until the message has been sent. The latter causes the caller to be blocked until a message arrives. When one does, the message is copied to the buffer pointed to by mptr and the caller is unblocked. The addr parameter specifies the address to which the receiver is listening. Many variants of these two procedures and their parameters are possible.

One issue is how addressing is done. Since multicomputers are static, with the number of CPUs fixed, the easiest way to handle addressing is to make addr a two-part address consisting of a CPU number and a process or port number on the addressed CPU. In this way each CPU can manage its own addresses without potential conflicts.

Blocking versus Nonblocking Calls

The calls described above are blocking calls (sometimes called synchronous calls). When a process calls send, it specifies a destination and a buffer to send to that destination. While the message is being sent, the sending process is blocked (i.e., suspended). The instruction following the call to send is not executed until the message has been completely sent, as shown in Fig. 8-19(a). Similarly, a call to receive does not return control until a message has actually been received and put in the message buffer pointed to by the parameter. The process remains suspended in receive until a message arrives, even if it takes hours. In some systems, the receiver can specify from whom it wishes to receive, in which case it remains blocked until a message from that sender arrives.
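To make the blocking semantics concrete, here is a minimal sketch in C of how a worker process might use these two calls. The calls are modeled on the ones shown above, but the message layout, the two-part address encoding, and the reply_to field are assumptions made for the example.

    #include <stdint.h>

    #define MSG_SIZE 256

    struct message {
        int32_t reply_to;            /* address of the sender, for the reply     */
        uint8_t payload[MSG_SIZE];   /* request or reply data                    */
    };

    /* Library calls as introduced above (prototypes only). */
    void send(int dest, struct message *mptr);    /* blocks until message is sent   */
    void receive(int addr, struct message *mptr); /* blocks until a message arrives */

    #define MAKE_ADDR(cpu, port) (((cpu) << 16) | (port))   /* two-part address */

    void worker_loop(int my_cpu, int my_port)
    {
        struct message req, reply;
        int my_addr = MAKE_ADDR(my_cpu, my_port);

        for (;;) {
            receive(my_addr, &req);       /* sleep here until a request arrives      */
            /* ... perform the requested work on req.payload here ... */
            reply.reply_to = my_addr;
            send(req.reply_to, &reply);   /* blocks until the reply has been sent    */
        }
    }

While this worker is blocked in receive, it consumes no CPU time, which is precisely the appeal of the blocking style when multiple threads are available.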
Figure 8-19. (a) A blocking send call. (b) A nonblocking send call.

An alternative to blocking calls is the use of nonblocking calls (sometimes called asynchronous calls). If send is nonblocking, it returns control to the caller immediately, before the message is sent. The advantage of this scheme is that the sending process can continue computing in parallel with the message transmission, instead of having the CPU go idle (assuming no other process is runnable). The choice between blocking and nonblocking primitives is normally made by the system designers (i.e., either one primitive is available or the other), although in a few systems both are available and users can choose their favorite.

However, the performance advantage offered by nonblocking primitives is offset by a serious disadvantage: the sender cannot modify the message buffer until the message has been sent. The consequences of the process overwriting the message during transmission are too horrible to contemplate. Worse yet, the sending process has no idea of when the transmission is done, so it never knows when it is safe to reuse the buffer. It can hardly avoid touching it forever.

There are three possible ways out. The first solution is to have the kernel copy the message to an internal kernel buffer and then allow the process to continue, as shown in Fig. 8-19(b). From the sender's point of view, this scheme is the same as a blocking call: as soon as it gets control back, it is free to reuse the buffer.
Of course, the message will not yet have been sent, but the sender is not hindered by this fact. The disadvantage of this method is that every outgoing message has to be copied from user space to kernel space. With many network interfaces, the message will have to be copied to a hardware transmission buffer later anyway, so the first copy is essentially wasted. The extra copy can reduce the performance of the system considerably.

The second solution is to interrupt (signal) the sender when the message has been fully sent to inform it that the buffer is once again available. No copy is required here, which saves time, but user-level interrupts make programming tricky, difficult, and subject to race conditions, which makes them irreproducible and nearly impossible to debug.

The third solution is to make the buffer copy on write, that is, to mark it as read only until the message has been sent. If the buffer is reused before the message has been sent, a copy is made. The problem with this solution is that unless the buffer is isolated on its own page, writes to nearby variables will also force a copy. Also, extra administration is needed because the act of sending a message now implicitly affects the read/write status of the page. Finally, sooner or later the page is likely to be written again, triggering a copy that may no longer be necessary.

Thus the choices on the sending side are

1. Blocking send (CPU idle during message transmission).
2. Nonblocking send with copy (CPU time wasted for the extra copy).
3. Nonblocking send with interrupt (makes programming difficult).
4. Copy on write (extra copy probably needed eventually).

Under normal conditions, the first choice is the most convenient, especially if multiple threads are available, in which case while one thread is blocked trying to send, other threads can continue working. It also does not require any kernel buffers to be managed. Furthermore, as can be seen from comparing Fig. 8-19(a) to Fig. 8-19(b), the message will usually be out the door faster if no copy is required.

For the record, we would like to point out that some authors use a different criterion to distinguish synchronous from asynchronous primitives. In the alternative view, a call is synchronous only if the sender is blocked until the message has been received and an acknowledgement sent back (Andrews, 1991). In the world of real-time communication, synchronous has yet another meaning, which can lead to confusion, unfortunately.

Just as send can be blocking or nonblocking, so can receive. A blocking call just suspends the caller until a message has arrived. If multiple threads are available, this is a simple approach. Alternatively, a nonblocking receive just tells the kernel where the buffer is and returns control almost immediately.
An interrupt can be used to signal that a message has arrived. However, interrupts are difficult to program and are also quite slow, so it may be preferable for the receiver to poll for incoming messages using a procedure, poll, that tells whether any messages are waiting. If so, the caller can call get_message, which returns the first arrived message. In some systems, the compiler can insert poll calls in the code at appropriate places, although knowing how often to poll is tricky.

Yet another option is a scheme in which the arrival of a message causes a new thread to be created spontaneously in the receiving process' address space. Such a thread is called a pop-up thread. It runs a procedure specified in advance and whose parameter is a pointer to the incoming message. After processing the message, it simply exits and is automatically destroyed.

A variant on this idea is to run the receiver code directly in the interrupt handler, without going to the trouble of creating a pop-up thread. To make this scheme even faster, the message itself contains the address of the handler, so when a message arrives, the handler can be called in a few instructions. The big win here is that no copying at all is needed. The handler takes the message from the interface board and processes it on the fly. This scheme is called active messages (Von Eicken et al., 1992). Since each message contains the address of the handler, active messages work only when senders and receivers trust each other completely.
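As a small illustration, the sketch below combines the polling procedures mentioned above with the active-message idea of carrying the handler's address inside the message. The poll and get_message signatures, the message layout, and the do_useful_work helper are assumptions made for the example; a real active-message layer would run the handler straight from the interrupt handler or interface board instead.

    struct message {
        void (*handler)(struct message *);   /* active messages: handler address */
        unsigned char payload[256];          /* data carried in the message      */
    };

    /* Assumed primitives: poll() says whether a message is waiting and
       get_message() returns a pointer to the first one that arrived. */
    int poll(void);
    struct message *get_message(void);

    void do_useful_work(void);               /* whatever the process computes    */

    void receiver_loop(void)
    {
        for (;;) {
            while (poll()) {                 /* drain everything that arrived    */
                struct message *m = get_message();
                m->handler(m);               /* run the handler named in the     */
            }                                /* message, active-message style    */
            do_useful_work();                /* then compute before polling again */
        }
    }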
8.2.4 Remote Procedure Call

Although the message-passing model provides a convenient way to structure a multicomputer operating system, it suffers from one incurable flaw: the basic paradigm around which all communication is built is input/output. The procedures send and receive are fundamentally engaged in doing I/O, and many people believe that I/O is the wrong programming model.

This problem has long been known, but little was done about it until a paper by Birrell and Nelson (1984) introduced a completely different way of attacking the problem. Although the idea is refreshingly simple (once someone has thought of it), the implications are often subtle. In this section we will examine the concept, its implementation, its strengths, and its weaknesses.

In a nutshell, what Birrell and Nelson suggested was allowing programs to call procedures located on other CPUs. When a process on machine 1 calls a procedure on machine 2, the calling process on 1 is suspended, and execution of the called procedure takes place on 2. Information can be transported from the caller to the callee in the parameters and can come back in the procedure result. No message passing or I/O at all is visible to the programmer. This technique is known as RPC (Remote Procedure Call) and has become the basis of a large amount of multicomputer software. Traditionally the calling procedure is known as the client and the called procedure is known as the server, and we will use those names here too.

The idea behind RPC is to make a remote procedure call look as much as possible like a local one. In the simplest form, to call a remote procedure, the client program must be bound with a small library procedure called the client stub that represents the server procedure in the client's address space. Similarly, the server is bound with a procedure called the server stub. These procedures hide the fact that the procedure call from the client to the server is not local.

The actual steps in making an RPC are shown in Fig. 8-20. Step 1 is the client calling the client stub. This call is a local procedure call, with the parameters pushed onto the stack in the normal way. Step 2 is the client stub packing the parameters into a message and making a system call to send the message. Packing the parameters is called marshalling. Step 3 is the kernel sending the message from the client machine to the server machine. Step 4 is the kernel passing the incoming packet to the server stub (which would normally have called receive earlier). Finally, step 5 is the server stub calling the server procedure. The reply traces the same path in the other direction.

Figure 8-20. Steps in making a remote procedure call. The stubs are shaded gray.

The key item to note here is that the client procedure, written by the user, just makes a normal (i.e., local) procedure call to the client stub, which has the same name as the server procedure. Since the client procedure and client stub are in the same address space, the parameters are passed in the usual way. Similarly, the server procedure is called by a procedure in its address space with the parameters it expects. To the server procedure, nothing is unusual. In this way, instead of doing I/O using send and receive, remote communication is done by faking a normal procedure call.
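To see what a client stub's marshalling might look like, here is a minimal sketch for a hypothetical remote procedure int add(int x, int y). The wire format, the send and receive calls, and the address constants are all invented for the example; real RPC systems generate stubs like this automatically from an interface definition.

    #include <stdint.h>
    #include <string.h>

    struct message { uint8_t data[64]; };      /* simplified wire format             */

    void send(int dest, struct message *m);    /* blocking send, as in Sec. 8.2.3    */
    void receive(int addr, struct message *m); /* blocking receive                   */

    #define SERVER_ADDR 42                     /* assumed address of the add server  */
    #define CLIENT_ADDR 7                      /* where the reply should arrive      */

    /* Client stub: same name and signature as the remote procedure. */
    int add(int x, int y)
    {
        struct message req, reply;
        int32_t args[2] = { x, y };
        int32_t result;

        memcpy(req.data, args, sizeof(args));        /* marshal the parameters (step 2) */
        send(SERVER_ADDR, &req);                     /* kernel ships it to the server   */
        receive(CLIENT_ADDR, &reply);                /* block until the reply comes back */
        memcpy(&result, reply.data, sizeof(result)); /* unmarshal the return value      */
        return result;
    }

A client program simply writes add(3, 4) and never sees the messages flowing underneath.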
Implementation Issues

Despite the conceptual elegance of RPC, there are a few snakes hiding under the grass. A big one is the use of pointer parameters. Normally, passing a pointer to a procedure is not a problem. The called procedure can use the pointer the same way the caller can because the two procedures reside in the same virtual address space. With RPC, passing pointers is impossible because the client and server are in different address spaces.

In some cases, tricks can be used to make it possible to pass pointers. Suppose that the first parameter is a pointer to an integer, k. The client stub can marshal k and send it along to the server. The server stub then creates a pointer to k and passes it to the server procedure, just as it expects. When the server procedure returns control to the server stub, the latter sends k back to the client, where the new k is copied over the old one, just in case the server changed it. In effect, the standard calling sequence of call-by-reference has been replaced by copy restore. Unfortunately, this trick does not always work, for example, if the pointer points to a graph or other complex data structure. For this reason, some restrictions must be placed on parameters to procedures called remotely.

A second problem is that in weakly typed languages, like C, it is perfectly legal to write a procedure that computes the inner product of two vectors (arrays), without specifying how large either one is. Each could be terminated by a special value known only to the calling and called procedures. Under these circumstances, it is essentially impossible for the client stub to marshal the parameters: it has no way of determining how large they are.

A third problem is that it is not always possible to deduce the types of the parameters, not even from a formal specification or the code itself. An example is printf, which may have any number of parameters (at least one), and they can be an arbitrary mixture of integers, shorts, longs, characters, strings, floating-point numbers of various lengths, and other types. Trying to call printf as a remote procedure would be practically impossible because C is so permissive. However, a rule saying that RPC can be used provided that you do not program in C (or C++) would not be popular.

A fourth problem relates to the use of global variables. Normally, the calling and called procedures may communicate using global variables, in addition to communicating via parameters. If the called procedure is now moved to a remote machine, the code will fail because the global variables are no longer shared.

These problems are not meant to suggest that RPC is hopeless. In fact, it is widely used, but some restrictions and care are needed to make it work well in practice.

8.2.5 Distributed Shared Memory

Although RPC has its attractions, many programmers still prefer a model of shared memory and would like to use it, even on a multicomputer. Surprisingly enough, it is possible to preserve the illusion of shared memory reasonably well, even when it does not actually exist, using a technique called DSM (Distributed Shared Memory) (Li, 1986; and Li and Hudak, 1989). Despite being an old topic, research on it is still going strong (Cai and Strazdins, 2012; Choi and Jung, 2013; and Ohnishi and Yoshida, 2011). DSM is a useful technique to study as it shows
SEC. 8.2 MULTICOMPUTERS 559 many of the issues and complications in distributed systems. Moreover, the idea it- self has been very influential. With DSM, each page is located in one of the mem- ories of Fig. 8-1(b). Each machine has its own virtual memory and page tables. When a CPU does a LOAD or STORE on a page it does not have, a trap to the oper- ating system occurs. The operating system then locates the page and asks the CPU currently holding it to unmap the page and send it over the interconnection net- work. When it arrives, the page is mapped in and the faulting instruction restarted. In effect, the operating system is just satisfying page faults from remote RAM in- stead of from local disk. To the user, the machine looks as if it has shared memory. The difference between actual shared memory and DSM is illustrated in Fig. 8-21. In Fig. 8-21(a), we see a true multiprocessor with physical shared mem- ory implemented by the hardware. In Fig. 8-21(b), we see DSM, implemented by the operating system. In Fig. 8-21(c), we see yet another form of shared memory, implemented by yet higher levels of software. We will come back to this third option later in the chapter, but for now we will concentrate on DSM. (a) Machine 1 Machine 2 Run-time system Operating system Shared memory Application Hardware Run-time system Operating system Application Hardware (b) Machine 1 Machine 2 Run-time system Operating system Shared memory Application Hardware Run-time system Operating system Application Hardware (c) Machine 1 Machine 2 Run-time system Operating system Shared memory Application Hardware Run-time system Operating system Application Hardware Figure 8-21. Various layers where shared memory can be implemented. (a) The hardware. (b) The operating system. (c) User-level software. Let us now look in some detail at how DSM works. In a DSM system, the ad- dress space is divided up into pages, with the pages being spread over all the nodes in the system. When a CPU references an address that is not local, a trap occurs,
and the DSM software fetches the page containing the address and restarts the faulting instruction, which now completes successfully. This concept is illustrated in Fig. 8-22(a) for an address space with 16 pages and four nodes, each capable of holding six pages.

Figure 8-22. (a) Pages of the address space distributed among four machines. (b) Situation after CPU 0 references page 10 and the page is moved there. (c) Situation if page 10 is read only and replication is used.

In this example, if CPU 0 references instructions or data in pages 0, 2, 5, or 9, the references are done locally. References to other pages cause traps. For example, a reference to an address in page 10 will cause a trap to the DSM software, which then moves page 10 from node 1 to node 0, as shown in Fig. 8-22(b).
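The heart of such a DSM system is its page-fault handler. The C-like sketch below shows the idea under several simplifying assumptions: a small table that records which node currently owns each page, and helper routines (request_page_from, map_page, and so on) that stand in for the real messaging and MMU manipulation. None of these names or data structures is specified in the text; a real system would typically use a distributed directory rather than a single table.

    #define NPAGES 16

    static int owner[NPAGES];            /* which node currently holds each page   */
    static int my_node;                  /* identity of this node                  */

    /* Assumed helpers: fetch a page's contents from a remote node, install it in
       the local MMU, and resume the instruction that faulted. */
    void request_page_from(int node, int page, void *buf);
    void map_page(int page, void *buf);
    void restart_faulting_instruction(void);

    void dsm_page_fault(int page)
    {
        static char buf[4096];

        if (owner[page] == my_node)      /* spurious fault: page is already here   */
            return;

        /* Ask the current owner to unmap the page and ship it over the network. */
        request_page_from(owner[page], page, buf);
        owner[page] = my_node;           /* this node is now the owner             */
        map_page(page, buf);             /* install it locally, mark it valid      */
        restart_faulting_instruction();  /* the LOAD or STORE now completes        */
    }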
Replication

One improvement to the basic system that can improve performance considerably is to replicate pages that are read only, for example, program text, read-only constants, or other read-only data structures. For example, if page 10 in Fig. 8-22 is a section of program text, its use by CPU 0 can result in a copy being sent to CPU 0 without the original in CPU 1's memory being invalidated or disturbed, as shown in Fig. 8-22(c). In this way, CPUs 0 and 1 can both reference page 10 as often as needed without causing traps to fetch missing memory.

Another possibility is to replicate not only read-only pages, but also all pages. As long as reads are being done, there is effectively no difference between replicating a read-only page and replicating a read-write page. However, if a replicated page is suddenly modified, special action has to be taken to prevent having multiple, inconsistent copies in existence. How inconsistency is prevented will be discussed in the following sections.

False Sharing

DSM systems are similar to multiprocessors in certain key ways. In both systems, when a nonlocal memory word is referenced, a chunk of memory containing the word is fetched from its current location and put on the machine making the reference (main memory or cache, respectively). An important design issue is how big the chunk should be. In multiprocessors, the cache block size is usually 32 or 64 bytes, to avoid tying up the bus with the transfer too long. In DSM systems, the unit has to be a multiple of the page size (because the MMU works with pages), but it can be 1, 2, 4, or more pages. In effect, doing this simulates a larger page size.

There are advantages and disadvantages to a larger page size for DSM. The biggest advantage is that because the startup time for a network transfer is fairly substantial, it does not really take much longer to transfer 4096 bytes than it does to transfer 1024 bytes. By transferring data in large units, when a large piece of address space has to be moved, the number of transfers may often be reduced. This property is especially important because many programs exhibit locality of reference, meaning that if a program has referenced one word on a page, it is likely to reference other words on the same page in the immediate future.

On the other hand, the network will be tied up longer with a larger transfer, blocking other faults caused by other processes. Also, too large an effective page size introduces a new problem, called false sharing, illustrated in Fig. 8-23. Here we have a page containing two unrelated shared variables, A and B. Processor 1 makes heavy use of A, reading and writing it. Similarly, processor 2 uses B frequently. Under these circumstances, the page containing both variables will constantly be traveling back and forth between the two machines.
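The usual remedy, as in ordinary shared-memory programming, is to keep unrelated hot variables on separate transfer units. The C sketch below contrasts the two layouts; the 4-KB page size is an assumption, and the padded version only works as intended if the structure itself starts on a page boundary, so it is illustrative rather than a recipe.

    #define PAGE_SIZE 4096             /* assumed DSM transfer unit */

    /* Bad layout: A and B are unrelated but end up on the same page, so heavy
       use of A on one node and of B on another ping-pongs the page between them. */
    struct shared_bad {
        long a;                        /* updated constantly by node 1 */
        long b;                        /* updated constantly by node 2 */
    };

    /* Better layout: pad A out to a full page so that B lands on the next page
       (assuming the structure is allocated starting on a page boundary). */
    struct shared_good {
        long a;
        char pad[PAGE_SIZE - sizeof(long)];
        long b;
    };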
Figure 8-23. False sharing of a page containing two unrelated variables.

The problem here is that although the variables are unrelated, they appear by accident on the same page, so when a process uses one of them, it also gets the other. The larger the effective page size, the more often false sharing will occur, and conversely, the smaller the effective page size, the less often it will occur. Nothing analogous to this phenomenon is present in ordinary virtual memory systems. Clever compilers that understand the problem and place variables in the address space accordingly can help reduce false sharing and improve performance. However, saying this is easier than doing it. Furthermore, if the false sharing consists of node 1 using one element of an array and node 2 using a different element of the same array, there is little that even a clever compiler can do to eliminate the problem.

Achieving Sequential Consistency

If writable pages are not replicated, achieving consistency is not an issue. There is exactly one copy of each writable page, and it is moved back and forth dynamically as needed. Since it is not always possible to see in advance which pages are writable, in many DSM systems, when a process tries to read a remote page, a local copy is made and both the local and remote copies are set up in their respective MMUs as read only. As long as all references are reads, everything is fine.

However, if any process attempts to write on a replicated page, a potential consistency problem arises because changing one copy and leaving the others alone is unacceptable. This situation is analogous to what happens in a multiprocessor when one CPU attempts to modify a word that is present in multiple caches. The solution there is for the CPU about to do the write to first put a signal on the bus telling all other CPUs to discard their copy of the cache block. DSM systems typically work the same way. Before a shared page can be written, a message is sent to
SEC. 8.2 MULTICOMPUTERS 563 all other CPUs holding a copy of the page telling them to unmap and discard the page. After all of them have replied that the unmap has finished, the original CPU can now do the write. It is also possible to tolerate multiple copies of writable pages under carefully restricted circumstances. One way is to allow a process to acquire a lock on a por- tion of the virtual address space, and then perform multiple read and write opera- tions on the locked memory. At the time the lock is released, changes can be prop- agated to other copies. As long as only one CPU can lock a page at a given moment, this scheme preserves consistency. Alternatively, when a potentially writable page is actually written for the first time, a clean copy is made and saved on the CPU doing the write. Locks on the page can then be acquired, the page updated, and the locks released. Later, when a process on a remote machine tries to acquire a lock on the page, the CPU that wrote it earlier compares the current state of the page to the clean copy and builds a message listing all the words that have changed. This list is then sent to the acquiring CPU to update its copy instead of invalidating it (Keleher et al., 1994). 8.2.6 Multicomputer Scheduling On a multiprocessor, all processes reside in the same memory. When a CPU finishes its current task, it picks a process and runs it. In principle, all processes are potential candidates. On a multicomputer the situation is quite different. Each node has its own memory and its own set of processes. CPU 1 cannot suddenly decide to run a process located on node 4 without first doing a fair amount of work to go get it. This difference means that scheduling on multicomputers is easier but allocation of processes to nodes is more important. Below we will study these is- sues. Multicomputer scheduling is somewhat similar to multiprocessor scheduling, but not all of the former’s algorithms apply to the latter. The simplest multiproces- sor algorithm—maintaining a single central list of ready processes—does not work however, since each process can only run on the CPU it is currently located on. However, when a new process is created, a choice can be made where to place it, for example to balance the load. Since each node has its own processes, any local scheduling algorithm can be used. However, it is also possible to use multiprocessor gang scheduling, since that merely requires an initial agreement on which process to run in which time slot, and some way to coordinate the start of the time slots. 8.2.7 Load Balancing There is relatively little to say about multicomputer scheduling because once a process has been assigned to a node, any local scheduling algorithm will do, unless gang scheduling is being used. However, precisely because there is so little control
once a process has been assigned to a node, the decision about which process should go on which node is important. This is in contrast to multiprocessor systems, in which all processes live in the same memory and can be scheduled on any CPU at will. Consequently, it is worth looking at how processes can be assigned to nodes in an effective way. The algorithms and heuristics for doing this assignment are known as processor allocation algorithms.

A large number of processor (i.e., node) allocation algorithms have been proposed over the years. They differ in what they assume is known and what the goal is. Properties that might be known about a process include the CPU requirements, memory usage, and amount of communication with every other process. Possible goals include minimizing wasted CPU cycles due to lack of local work, minimizing total communication bandwidth, and ensuring fairness to users and processes. Below we will examine a few algorithms to give an idea of what is possible.

A Graph-Theoretic Deterministic Algorithm

A widely studied class of algorithms is for systems consisting of processes with known CPU and memory requirements, and a known matrix giving the average amount of traffic between each pair of processes. If the number of processes is greater than the number of CPUs, k, several processes will have to be assigned to each CPU. The idea is to perform this assignment so as to minimize network traffic.

The system can be represented as a weighted graph, with each vertex being a process and each arc representing the flow of messages between two processes. Mathematically, the problem then reduces to finding a way to partition (i.e., cut) the graph into k disjoint subgraphs, subject to certain constraints (e.g., total CPU and memory requirements below some limits for each subgraph). For each solution that meets the constraints, arcs that are entirely within a single subgraph represent intramachine communication and can be ignored. Arcs that go from one subgraph to another represent network traffic. The goal is then to find the partitioning that minimizes the network traffic while meeting all the constraints.

As an example, Fig. 8-24 shows a system of nine processes, A through I, with each arc labeled with the mean communication load between those two processes (e.g., in Mbps). In Fig. 8-24(a), we have partitioned the graph with processes A, E, and G on node 1, processes B, F, and H on node 2, and processes C, D, and I on node 3. The total network traffic is the sum of the arcs intersected by the cuts (the dashed lines), or 30 units. In Fig. 8-24(b) we have a different partitioning that has only 28 units of network traffic. Assuming that it meets all the memory and CPU constraints, this is a better choice because it requires less communication. Intuitively, what we are doing is looking for clusters that are tightly coupled (high intracluster traffic flow) but which interact little with other clusters (low intercluster traffic flow). Some of the earliest papers discussing the problem are Chow and Abraham (1982), Lo (1984), and Stone and Bokhari (1978).
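The objective function in this formulation is simple to compute. As a rough sketch (with a placeholder traffic matrix rather than the values of Fig. 8-24), the C fragment below evaluates the network traffic of a given assignment of processes to nodes by summing the traffic of every arc whose endpoints land on different nodes; a real allocator would then search over assignments subject to the CPU and memory constraints as well.

    #define NPROC 9

    /* traffic[i][j]: mean communication load between processes i and j (symmetric).
       The values would be filled in from measurements; they are placeholders here. */
    static int traffic[NPROC][NPROC];

    /* assignment[i]: the node on which process i has been placed. */
    static int assignment[NPROC];

    /* Total intermachine traffic for this assignment: the sum over all arcs
       that cross a cut, i.e., whose two processes are on different nodes. */
    int network_traffic(void)
    {
        int total = 0;

        for (int i = 0; i < NPROC; i++)
            for (int j = i + 1; j < NPROC; j++)
                if (assignment[i] != assignment[j])
                    total += traffic[i][j];
        return total;
    }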
Figure 8-24. Two ways of allocating nine processes to three nodes.

A Sender-Initiated Distributed Heuristic Algorithm

Now let us look at some distributed algorithms. One algorithm says that when a process is created, it runs on the node that created it unless that node is overloaded. The metric for overloaded might involve too many processes, too big a total working set, or some other metric. If it is overloaded, the node selects another node at random and asks it what its load is (using the same metric). If the probed node's load is below some threshold value, the new process is sent there (Eager et al., 1986). If not, another machine is chosen for probing. Probing does not go on forever. If no suitable host is found within N probes, the algorithm terminates and the process runs on the originating machine. The idea is for heavily loaded nodes to try to get rid of excess work, as shown in Fig. 8-25(a), which depicts sender-initiated load balancing.

Figure 8-25. (a) An overloaded node looking for a lightly loaded node to hand off processes to. (b) An empty node looking for work to do.
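In code, the sender-initiated heuristic is just a short probing loop. The sketch below assumes helper routines for measuring the local load, picking a random peer, querying a peer's load, and migrating a newly created process; all of these names, and the threshold and probe-limit values, are made up for the illustration.

    #define MAX_PROBES 3              /* N in the text: give up after this many probes */
    #define THRESHOLD  5              /* load below which a node will accept work      */

    int  local_load(void);            /* e.g., number of runnable processes here       */
    int  random_node(void);           /* pick some other node at random                */
    int  remote_load(int node);       /* ask that node for its load (same metric)      */
    void run_locally(int proc);
    void migrate_to(int node, int proc);

    /* Called when a new process 'proc' is created on this node. */
    void place_new_process(int proc)
    {
        if (local_load() < THRESHOLD) {        /* not overloaded: keep the work        */
            run_locally(proc);
            return;
        }
        for (int i = 0; i < MAX_PROBES; i++) { /* overloaded: probe a few random peers */
            int node = random_node();
            if (remote_load(node) < THRESHOLD) {
                migrate_to(node, proc);        /* hand the new process off             */
                return;
            }
        }
        run_locally(proc);                     /* no luck: run it here after all       */
    }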
566 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 Eager et al. constructed an analytical queueing model of this algorithm. Using this model, it was established that the algorithm behaves well and is stable under a wide range of parameters, including various threshold values, transfer costs, and probe limits. Nevertheless, it should be observed that under conditions of heavy load, all machines will constantly send probes to other machines in a futile attempt to find one that is willing to accept more work. Few processes will be off-loaded, but con- siderable overhead may be incurred in the attempt to do so. A Receiver-Initiated Distributed Heuristic Algorithm A complementary algorithm to the one discussed above, which is initiated by an overloaded sender, is one initiated by an underloaded receiver, as shown in Fig. 8-25(b). With this algorithm, whenever a process finishes, the system checks to see if it has enough work. If not, it picks some machine at random and asks it for work. If that machine has nothing to offer, a second, and then a third machine is asked. If no work is found with N probes, the node temporarily stops asking, does any work it has queued up, and tries again when the next process finishes. If no work is available, the machine goes idle. After some fixed time interval, it begins probing again. An advantage of this algorithm is that it does not put extra load on the system at critical times. The sender-initiated algorithm makes large numbers of probes precisely when the system can least tolerate it—when it is heavily loaded. With the receiver-initiated algorithm, when the system is heavily loaded, the chance of a machine having insufficient work is small. However, when this does happen, it will be easy to find work to take over. Of course, when there is little work to do, the re- ceiver-initiated algorithm creates considerable probe traffic as all the unemployed machines desperately hunt for work. However, it is far better to have the overhead go up when the system is underloaded than when it is overloaded. It is also possible to combine both of these algorithms and have machines try to get rid of work when they hav e too much, and try to acquire work when they do not have enough. Furthermore, machines can perhaps improve on random polling by keeping a history of past probes to determine if any machines are chronically underloaded or overloaded. One of these can be tried first, depending on whether the initiator is trying to get rid of work or acquire it. 8.3 DISTRIBUTED SYSTEMS Having now completed our study of multicores, multiprocessors, and multicomputers we are now ready to turn to the last type of multiple processor sys- tem, the distributed system. These systems are similar to multicomputers in that
each node has its own private memory, with no shared physical memory in the system. However, distributed systems are even more loosely coupled than multicomputers.

To start with, each node of a multicomputer generally has a CPU, RAM, a network interface, and possibly a disk for paging. In contrast, each node in a distributed system is a complete computer, with a full complement of peripherals. Next, the nodes of a multicomputer are normally in a single room, so they can communicate by a dedicated high-speed network, whereas the nodes of a distributed system may be spread around the world. Finally, all the nodes of a multicomputer run the same operating system, share a single file system, and are under a common administration, whereas the nodes of a distributed system may each run a different operating system, each of which has its own file system, and be under a different administration. A typical example of a multicomputer is 1024 nodes in a single room at a company or university working on, say, pharmaceutical modeling, whereas a typical distributed system consists of thousands of machines loosely cooperating over the Internet. Figure 8-26 compares multiprocessors, multicomputers, and distributed systems on the points mentioned above.

Item                     | Multiprocessor   | Multicomputer            | Distributed System
Node configuration       | CPU              | CPU, RAM, net interface  | Complete computer
Node peripherals         | All shared       | Shared exc. maybe disk   | Full set per node
Location                 | Same rack        | Same room                | Possibly worldwide
Internode communication  | Shared RAM       | Dedicated interconnect   | Traditional network
Operating systems        | One, shared      | Multiple, same           | Possibly all different
File systems             | One, shared      | One, shared              | Each node has own
Administration           | One organization | One organization         | Many organizations

Figure 8-26. Comparison of three kinds of multiple CPU systems.

Multicomputers are clearly in the middle using these metrics. An interesting question is: "Are multicomputers more like multiprocessors or more like distributed systems?" Oddly enough, the answer depends strongly on your perspective. From a technical perspective, multiprocessors have shared memory and the other two do not. This difference leads to different programming models and different mindsets. However, from an applications perspective, multiprocessors and multicomputers are just big equipment racks in a machine room. Both are used for solving computationally intensive problems, whereas a distributed system connecting computers all over the Internet is typically much more involved in communication than in computation and is used in a different way.

To some extent, loose coupling of the computers in a distributed system is both a strength and a weakness. It is a strength because the computers can be used for a wide variety of applications, but it is also a weakness, because programming these applications is difficult due to the lack of any common underlying model.
568 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 Typical Internet applications include access to remote computers (using telnet, ssh, and rlogin), access to remote information (using the World Wide Web and FTP, the File Transfer Protocol), person-to-person communication (using email and chat programs), and many emerging applications (e.g., e-commerce, telemedicine, and distance learning). The trouble with all these applications is that each one has to reinvent the wheel. For example, email, FTP, and the World Wide Web all basi- cally move files from point A to point B, but each one has its own way of doing it, complete with its own naming conventions, transfer protocols, replication techni- ques, and everything else. Although many Web browsers hide these differences from the average user, the underlying mechanisms are completely different. Hiding them at the user-interface level is like having a person at a full-service travel agent Website book a trip from New York to San Francisco, and only later learn whether she has purchased a plane, train, or bus ticket. What distributed systems add to the underlying network is some common paradigm (model) that provides a uniform way of looking at the whole system. The intent of the distributed system is to turn a loosely connected bunch of machines into a coherent system based on one concept. Sometimes the paradigm is simple and sometimes it is more elaborate, but the idea is always to provide something that unifies the system. A simple example of a unifying paradigm in a different context is found in UNIX, where all I/O devices are made to look like files. Having keyboards, print- ers, and serial lines all operated on the same way, with the same primitives, makes it easier to deal with them than having them all conceptually different. One method by which a distributed system can achieve some measure of uni- formity in the face of different underlying hardware and operating systems is to have a layer of software on top of the operating system. The layer, called middle- ware, is illustrated in Fig. 8-27. This layer provides certain data structures and op- erations that allow processes and users on far-flung machines to interoperate in a consistent way. In a sense, middleware is like the operating system of a distributed system. That is why it is being discussed in a book on operating systems. On the other hand, it is not really an operating system, so the discussion will not go into much detail. For a comprehensive, book-length treatment of distributed systems, see Dis- tributed Systems (Tanenbaum and van Steen, 2007). In the remainder of this chap- ter, we will look quickly at the hardware used in a distributed system (i.e., the un- derlying computer network), then its communication software (the network proto- cols). After that we will consider a variety of paradigms used in these systems. 8.3.1 Network Hardware Distributed systems are built on top of computer networks, so a brief introduc- tion to the subject is in order. Networks come in two major varieties, LANs (Local Area Networks), which cover a building or a campus, and WANs (Wide Area
SEC. 8.3 DISTRIBUTED SYSTEMS 569 Pentium Windows Middleware Middleware Middleware Middleware Application Pentium Linux Application SPARC Solaris Application Mac OS Application Macintosh Common base for applications Network Figure 8-27. Positioning of middleware in a distributed system. Networks), which can be citywide, countrywide, or worldwide. The most impor- tant kind of LAN is Ethernet, so we will examine that as an example LAN. As our example WAN, we will look at the Internet, even though technically the Internet is not one network, but a federation of thousands of separate networks. However, for our purposes, it is sufficient to think of it as one WAN. Ethernet Classic Ethernet, which is described in IEEE Standard 802.3, consists of a co- axial cable to which a number of computers are attached. The cable is called the Ethernet, in reference to the luminiferous ether through which electromagnetic ra- diation was once thought to propagate. (When the nineteenth-century British phys- icist James Clerk Maxwell discovered that electromagnetic radiation could be de- scribed by a wav e equation, scientists assumed that space must be filled with some ethereal medium in which the radiation was propagating. Only after the famous Michelson-Morley experiment in 1887, which failed to detect the ether, did physi- cists realize that radiation could propagate in a vacuum.) In the very first version of Ethernet, a computer was attached to the cable by li- terally drilling a hole halfway through the cable and screwing in a wire leading to the computer. This was called a vampire tap, and is illustrated symbolically in Fig. 8-28(a). The taps were hard to get right, so before long, proper connectors were used. Nevertheless, electrically, all the computers were connected as if the cables on their network interface cards were soldered together.
570 MULTIPLE PROCESSOR SYSTEMS CHAP. 8 Computer Ethernet Switch Computer Ethernet (b) (a) Vampire tap Figure 8-28. (a) Classic Ethernet. (b) Switched Ethernet. With many computers hooked up to the same cable, a protocol is needed to prevent chaos. To send a packet on an Ethernet, a computer first listens to the cable to see if any other computer is currently transmitting. If not, it just begins transmitting a packet, which consists of a short header followed by a payload of 0 to 1500 bytes. If the cable is in use, the computer simply waits until the current transmission finishes, then it begins sending. If two computers start transmitting simultaneously, a collision results, which both of them detect. Both respond by terminating their transmissions, waiting a random amount of time between 0 and T μsec and then starting again. If another collision occurs, all colliding computers randomize the wait into the interval 0 to 2T μsec, and then try again. On each further collision, the maximum wait interval is doubled, reducing the chance of more collisions. This algorithm is known as binary exponential backoff. We saw it earlier to reduce polling overhead on locks. An Ethernet has a maximum cable length and also a maximum number of computers that can be connected to it. To exceed either of these limits, a large building or campus can be wired with multiple Ethernets, which are then con- nected by devices called bridges. A bridge is a device that allows traffic to pass from one Ethernet to another when the source is on one side and the destination is on the other. To avoid the problem of collisions, modern Ethernets use switches, as shown in Fig. 8-28(b). Each switch has some number of ports, to which can be attached a computer, an Ethernet, or another switch. When a packet successfully avoids all collisions and makes it to the switch, it is buffered there and sent out on the port where the destination machine lives. By giving each computer its own port, all collisions can be eliminated, at the cost of bigger switches. Compromises, with just a few computers per port, are also possible. In Fig. 8-28(b), a classical Ethernet with multiple computers connected to a cable by vampire taps is attached to one of the ports of the switch.
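Referring back to the collision handling described above, the backoff computation itself is tiny. The C sketch below shows one way to pick the wait time after the nth consecutive collision; the base interval T is an arbitrary value chosen for the example, and the cap on the exponent is an assumption (real Ethernet implementations also limit the total number of attempts).

    #include <stdlib.h>

    #define SLOT_TIME_USEC 51.2      /* assumed base interval T for the example  */
    #define MAX_EXPONENT   10        /* stop doubling after this many collisions */

    /* After the nth consecutive collision (n = 1, 2, 3, ...), wait a random time
       drawn uniformly from [0, 2^(n-1) * T): the interval doubles on each retry. */
    double backoff_usec(int collisions)
    {
        if (collisions > MAX_EXPONENT)
            collisions = MAX_EXPONENT;                       /* cap the doubling  */

        unsigned long slots = 1UL << (collisions - 1);       /* 1, 2, 4, 8, ...   */
        double r = (double)rand() / ((double)RAND_MAX + 1.0); /* in [0, 1)        */
        return r * (double)slots * SLOT_TIME_USEC;
    }

Because every colliding station randomizes independently, the chance that two of them pick nearly the same wait time, and therefore collide again, halves with each doubling of the interval.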
|
clipped_os_Page_570_Chunk6787
|
The Internet

The Internet evolved from the ARPANET, an experimental packet-switched network funded by the U.S. Dept. of Defense Advanced Research Projects Agency. It went live in December 1969 with three computers in California and one in Utah. It was designed at the height of the Cold War to be a highly fault-tolerant network that would continue to relay military traffic even in the event of direct nuclear hits on multiple parts of the network, by automatically rerouting traffic around the dead machines.

The ARPANET grew rapidly in the 1970s, eventually encompassing hundreds of computers. Then a packet radio network, a satellite network, and eventually thousands of Ethernets were attached to it, leading to the federation of networks we now know as the Internet.

The Internet consists of two kinds of computers, hosts and routers. Hosts are PCs, notebooks, handhelds, servers, mainframes, and other computers owned by individuals or companies that want to connect to the Internet. Routers are specialized switching computers that accept incoming packets on one of many incoming lines and send them on their way along one of many outgoing lines. A router is similar to the switch of Fig. 8-28(b), but also differs from it in ways that will not concern us here. Routers are connected together in large networks, with each router having wires or fibers to many other routers and hosts. Large national or worldwide router networks are operated by telephone companies and ISPs (Internet Service Providers) for their customers.

Figure 8-29 shows a portion of the Internet. At the top we have one of the backbones, normally operated by a backbone operator. It consists of a number of routers connected by high-bandwidth fiber optics, with connections to backbones operated by other (competing) telephone companies. Usually, no hosts connect directly to the backbone, other than maintenance and test machines run by the telephone company. Attached to the backbone routers by medium-speed fiber optic connections are regional networks and routers at ISPs. In turn, corporate Ethernets each have a router on them and these are connected to regional network routers. Routers at ISPs are connected to modem banks used by the ISP's customers. In this way, every host on the Internet has at least one path, and often many paths, to every other host.

All traffic on the Internet is sent in the form of packets. Each packet carries its destination address inside it, and this address is used for routing. When a packet comes into a router, the router extracts the destination address and looks (part of) it up in a table to find which outgoing line to send the packet on and thus to which router. This procedure is repeated until the packet reaches the destination host. The routing tables are highly dynamic and are updated continuously as routers and links go down and come back up and as traffic conditions change. The routing algorithms have been intensively studied and modified over the years.
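The per-packet forwarding decision described above, extracting the destination address and looking (part of) it up in a table to pick an outgoing line, can be sketched as follows. The table layout and the linear longest-prefix search are simplifying assumptions for illustration; real routers use far more elaborate data structures and update protocols.

```c
#include <stdint.h>

struct route {
    uint32_t prefix;      /* network part of the destination, e.g. 130.37.0.0 */
    uint32_t mask;        /* which leading bits of the prefix are significant */
    int      out_line;    /* outgoing line for packets that match this entry  */
};

/* Return the outgoing line for dst, preferring the most specific match,
 * or -1 if no entry in the table covers this destination. */
static int lookup_route(const struct route *table, int n, uint32_t dst)
{
    int best = -1;
    uint32_t best_mask = 0;
    for (int i = 0; i < n; i++) {
        if ((dst & table[i].mask) == table[i].prefix &&
            table[i].mask >= best_mask) {      /* longer prefix wins */
            best = table[i].out_line;
            best_mask = table[i].mask;
        }
    }
    return best;
}
```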
|
clipped_os_Page_571_Chunk6788
|
[Figure 8-29. A portion of the Internet.]

8.3.2 Network Services and Protocols

All computer networks provide certain services to their users (hosts and processes), which they implement using certain rules about legal message exchanges. Below we will give a brief introduction to these topics.

Network Services

Computer networks provide services to the hosts and processes using them. Connection-oriented service is modeled after the telephone system. To talk to someone, you pick up the phone, dial the number, talk, and then hang up. Similarly, to use a connection-oriented network service, the service user first establishes a connection, uses the connection, and then releases the connection. The essential aspect of a connection is that it acts like a tube: the sender pushes objects (bits) in at one end, and the receiver takes them out in the same order at the other end.

In contrast, connectionless service is modeled after the postal system. Each message (letter) carries the full destination address, and each one is routed through the system independent of all the others. Normally, when two messages are sent to the same destination, the first one sent will be the first one to arrive. However, it is possible that the first one sent can be delayed so that the second one arrives first. With a connection-oriented service this is impossible.
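One concrete way to see the two service models is through the Berkeley sockets API, where they correspond to different socket types: SOCK_STREAM gives a connection-oriented byte stream (TCP), while SOCK_DGRAM gives connectionless datagrams (UDP). A minimal sketch:

```c
#include <sys/socket.h>
#include <netinet/in.h>

/* Connection-oriented: establish a connection first, then exchange bytes. */
int make_stream_socket(void)
{
    return socket(AF_INET, SOCK_STREAM, 0);  /* TCP; call connect() before send() */
}

/* Connectionless: no setup; every datagram carries its own destination. */
int make_datagram_socket(void)
{
    return socket(AF_INET, SOCK_DGRAM, 0);   /* UDP; use sendto() with an address */
}
```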
|
clipped_os_Page_572_Chunk6789
|
Each service can be characterized by a quality of service. Some services are reliable in the sense that they never lose data. Usually, a reliable service is implemented by having the receiver confirm the receipt of each message by sending back a special acknowledgement packet so the sender is sure that it arrived. The acknowledgement process introduces overhead and delays, which are necessary to detect packet loss, but which do slow things down.

A typical situation in which a reliable connection-oriented service is appropriate is file transfer. The owner of the file wants to be sure that all the bits arrive correctly and in the same order they were sent. Very few file-transfer customers would prefer a service that occasionally scrambles or loses a few bits, even if it is much faster.

Reliable connection-oriented service has two relatively minor variants: message sequences and byte streams. In the former, the message boundaries are preserved. When two 1-KB messages are sent, they arrive as two distinct 1-KB messages, never as one 2-KB message. In the latter, the connection is simply a stream of bytes, with no message boundaries. When 2K bytes arrive at the receiver, there is no way to tell if they were sent as one 2-KB message, two 1-KB messages, 2048 1-byte messages, or something else. If the pages of a book are sent over a network to an imagesetter as separate messages, it might be important to preserve the message boundaries. On the other hand, with a terminal logging into a remote server system, a byte stream from the terminal to the computer is all that is needed. There are no message boundaries here.

For some applications, the delays introduced by acknowledgements are unacceptable. One such application is digitized voice traffic. It is preferable for telephone users to hear a bit of noise on the line or a garbled word from time to time than to introduce a delay to wait for acknowledgements.

Not all applications require connections. For example, to test the network, all that is needed is a way to send a single packet that has a high probability of arrival, but no guarantee. Unreliable (meaning not acknowledged) connectionless service is often called datagram service, in analogy with telegram service, which also does not provide an acknowledgement back to the sender.

In other situations, the convenience of not having to establish a connection to send one short message is desired, but reliability is essential. The acknowledged datagram service can be provided for these applications. It is like sending a registered letter and requesting a return receipt. When the receipt comes back, the sender is absolutely sure that the letter was delivered to the intended party and not lost along the way.

Still another service is the request-reply service. In this service the sender transmits a single datagram containing a request; the reply contains the answer. For example, a query to the local library asking where Uighur is spoken falls into this category. Request-reply is commonly used to implement communication in the client-server model: the client issues a request and the server responds to it. Figure 8-30 summarizes the types of services discussed above.
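The acknowledged-datagram idea sketched above, send, wait for a receipt, resend if nothing comes back, can be illustrated with UDP sockets and a receive timeout. The one-second timeout, the three retries, and the assumption that any reply counts as a receipt are illustrative choices, not part of any standard protocol.

```c
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/time.h>

/* Send msg and wait up to 1 second for any reply; retry a few times.
 * Returns 0 on acknowledged delivery, -1 if no receipt ever arrived. */
static int acked_send(int sock, const struct sockaddr_in *dst,
                      const char *msg, size_t len)
{
    struct timeval tv = { 1, 0 };
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    for (int attempt = 0; attempt < 3; attempt++) {
        char ack[16];
        sendto(sock, msg, len, 0, (const struct sockaddr *)dst, sizeof(*dst));
        if (recvfrom(sock, ack, sizeof(ack), 0, NULL, NULL) > 0)
            return 0;          /* a receipt came back: the datagram arrived */
    }
    return -1;                 /* no acknowledgement after three tries: give up */
}
```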
|
clipped_os_Page_573_Chunk6790
|
Figure 8-30. Six different types of network service.

    Connection-oriented services:
        Reliable message stream  -- example: sequence of pages of a book
        Reliable byte stream     -- example: remote login
        Unreliable connection    -- example: digitized voice
    Connectionless services:
        Unreliable datagram      -- example: network test packets
        Acknowledged datagram    -- example: registered mail
        Request-reply            -- example: database query

Network Protocols

All networks have highly specialized rules for what messages may be sent and what responses may be returned in response to these messages. For example, under certain circumstances (e.g., file transfer), when a message is sent from a source to a destination, the destination is required to send an acknowledgement back indicating correct receipt of the message. Under other circumstances (e.g., digital telephony), no such acknowledgement is expected. The set of rules by which particular computers communicate is called a protocol. Many protocols exist, including router-router protocols, host-host protocols, and others. For a thorough treatment of computer networks and their protocols, see Computer Networks, 5/e (Tanenbaum and Wetherall, 2010).

All modern networks use what is called a protocol stack to layer different protocols on top of one another. At each layer, different issues are dealt with. For example, at the bottom level protocols define how to tell where in the bit stream a packet begins and ends. At a higher level, protocols deal with how to route packets through complex networks from source to destination. And at a still higher level, they make sure that all the packets in a multipacket message have arrived correctly and in the proper order.

Since most distributed systems use the Internet as a base, the key protocols these systems use are the two major Internet protocols: IP and TCP. IP (Internet Protocol) is a datagram protocol in which a sender injects a datagram of up to 64 KB into the network and hopes that it arrives. No guarantees are given. The datagram may be fragmented into smaller packets as it passes through the Internet. These packets travel independently, possibly along different routes. When all the pieces get to the destination, they are assembled in the correct order and delivered.

Two versions of IP are currently in use, v4 and v6. At the moment, v4 still dominates, so we will describe that here, but v6 is up and coming. Each v4 packet starts with a 20-byte header that contains a 32-bit source address and a 32-bit destination address among other fields. These are called IP addresses and form the basis of Internet routing. They are conventionally written as four decimal numbers
|
clipped_os_Page_574_Chunk6791
|
in the range 0-255 separated by dots, as in 192.31.231.65. When a packet arrives at a router, the router extracts the IP destination address and uses that for routing.

Since IP datagrams are not acknowledged, IP alone is not sufficient for reliable communication in the Internet. To provide reliable communication, another protocol, TCP (Transmission Control Protocol), is usually layered on top of IP. TCP uses IP to provide connection-oriented streams. To use TCP, a process first establishes a connection to a remote process. The remote process is specified by the IP address of a machine and a port number on that machine, on which processes interested in receiving incoming connections listen. Once that has been done, the process just pumps bytes into the connection and they are guaranteed to come out the other end undamaged and in the correct order. The TCP implementation achieves this guarantee by using sequence numbers, checksums, and retransmissions of incorrectly received packets. All of this is transparent to the sending and receiving processes. They just see reliable interprocess communication, just like a UNIX pipe.

To see how all these protocols interact, consider the simplest case of a very small message that does not need to be fragmented at any level. The host is on an Ethernet connected to the Internet. What happens exactly? The user process generates the message and makes a system call to send it on a previously established TCP connection. The kernel protocol stack adds a TCP header and then an IP header to the front. Then it goes to the Ethernet driver, which adds an Ethernet header directing the packet to the router on the Ethernet. This router then injects the packet into the Internet, as depicted in Fig. 8-31.

[Figure 8-31. Accumulation of packet headers.]

To establish a connection with a remote host (or even to send it a datagram), it is necessary to know its IP address. Since managing lists of 32-bit IP addresses is inconvenient for people, a scheme called DNS (Domain Name System) was invented as a database that maps ASCII names for hosts onto their IP addresses. Thus it is possible to use the DNS name star.cs.vu.nl instead of the corresponding IP address 130.37.24.6. DNS names are commonly known because Internet email
|
clipped_os_Page_575_Chunk6792
|
addresses are of the form user-name@DNS-host-name. This naming system allows the mail program on the sending host to look up the destination host's IP address in the DNS database, establish a TCP connection to the mail daemon process there, and send the message as a file. The user-name is sent along to identify which mailbox to put the message in.

8.3.3 Document-Based Middleware

Now that we have some background on networks and protocols, we can start looking at different middleware layers that can overlay the basic network to produce a consistent paradigm for applications and users. We will start with a simple but well-known example: the World Wide Web. The Web was invented by Tim Berners-Lee at CERN, the European Nuclear Physics Research Center, in 1989 and since then has spread like wildfire all over the world.

The original paradigm behind the Web was quite simple: every computer can hold one or more documents, called Web pages. Each Web page contains text, images, icons, sounds, movies, and the like, as well as hyperlinks (pointers) to other Web pages. When a user requests a Web page using a program called a Web browser, the page is displayed on the screen. Clicking on a link causes the current page to be replaced on the screen by the page pointed to. Although many bells and whistles have recently been grafted onto the Web, the underlying paradigm is still clearly present: the Web is a great big directed graph of documents that can point to other documents, as shown in Fig. 8-32.

[Figure 8-32. The Web is a big directed graph of documents.]

Each Web page has a unique address, called a URL (Uniform Resource Locator), of the form protocol://DNS-name/file-name. The protocol is most commonly http (HyperText Transfer Protocol), but ftp and others also exist. Then comes the DNS name of the host containing the file. Finally, there is a local file name telling which file is needed. Thus a URL uniquely specifies a single file worldwide.
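To make the DNS-plus-TCP machinery described above concrete, here is a minimal sketch of what a client must do to fetch a URL such as http://www.minix3.org/getting-started/index.html: resolve the host name, open a TCP connection to port 80, and send an HTTP GET request. Error handling is omitted and the request is the bare minimum; the step-by-step description in the next passage spells out the same sequence.

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

/* Resolve host via DNS, connect to TCP port 80, and ask for path. */
static int fetch(const char *host, const char *path)
{
    struct addrinfo hints = { 0 }, *res;
    hints.ai_socktype = SOCK_STREAM;                 /* we want a TCP stream */
    if (getaddrinfo(host, "80", &hints, &res) != 0)  /* DNS name -> IP address */
        return -1;

    int sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    connect(sock, res->ai_addr, res->ai_addrlen);    /* TCP connection setup */
    freeaddrinfo(res);

    char req[512];
    snprintf(req, sizeof(req),
             "GET /%s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n",
             path, host);
    write(sock, req, strlen(req));                   /* the request */

    char buf[4096];
    while (read(sock, buf, sizeof(buf)) > 0)         /* the reply: headers + page */
        ;                                            /* a real browser would parse it */
    close(sock);
    return 0;
}
```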
|
clipped_os_Page_576_Chunk6793
|
The way the whole system hangs together is as follows. The Web is fundamentally a client-server system, with the user being the client and the Website being the server. When the user provides the browser with a URL, either by typing it in or clicking on a hyperlink on the current page, the browser takes certain steps to fetch the requested Web page. As a simple example, suppose the URL provided is http://www.minix3.org/getting-started/index.html. The browser then takes the following steps to get the page.

1. The browser asks DNS for the IP address of www.minix3.org.
2. DNS replies with 66.147.238.215.
3. The browser makes a TCP connection to port 80 on 66.147.238.215.
4. It then sends a request asking for the file getting-started/index.html.
5. The www.minix3.org server sends the file getting-started/index.html.
6. The browser displays all the text in getting-started/index.html.
7. Meanwhile, the browser fetches and displays all images on the page.
8. The TCP connection is released.

To a first approximation, that is the basis of the Web and how it works. Many other features have since been added to the basic Web, including style sheets, dynamic Web pages that are generated on the fly, Web pages that contain small programs or scripts that execute on the client machine, and more, but they are outside the scope of this discussion.

8.3.4 File-System-Based Middleware

The basic idea behind the Web is to make a distributed system look like a giant collection of hyperlinked documents. A second approach is to make a distributed system look like a great big file system. In this section we will look at some of the issues involved in designing a worldwide file system.

Using a file-system model for a distributed system means that there is a single global file system, with users all over the world able to read and write files for which they have authorization. Communication is achieved by having one process write data into a file and having other ones read them back. Many of the standard file-system issues arise here, but also some new ones related to distribution.

Transfer Model

The first issue is the choice between the upload/download model and the remote-access model. In the former, shown in Fig. 8-33(a), a process accesses a file by first copying it from the remote server where it lives. If the file is only to be
|
clipped_os_Page_577_Chunk6794
|
read, the file is then read locally, for high performance. If the file is to be written, it is written locally. When the process is done with it, the updated file is put back on the server. With the remote-access model, the file stays on the server and the client sends commands to the server to get the work done there, as shown in Fig. 8-33(b).

[Figure 8-33. (a) The upload/download model. (b) The remote-access model.]

The advantages of the upload/download model are its simplicity, and the fact that transferring entire files at once is more efficient than transferring them in small pieces. The disadvantages are that there must be enough storage for the entire file locally, moving the entire file is wasteful if only parts of it are needed, and consistency problems arise if there are multiple concurrent users. A minimal code sketch contrasting the two models appears at the end of this passage.

The Directory Hierarchy

Files are only part of the story. The other part is the directory system. All distributed file systems support directories containing multiple files. The next design issue is whether all clients have the same view of the directory hierarchy. As an example of what we mean, consider Fig. 8-34. In Fig. 8-34(a) we show two file servers, each holding three directories and some files. In Fig. 8-34(b) we have a system in which all clients (and other machines) have the same view of the distributed file system. If the path /D/E/x is valid on one machine, it is valid on all of them.

In contrast, in Fig. 8-34(c), different machines can have different views of the file system. To repeat the preceding example, the path /D/E/x might well be valid on client 1 but not on client 2. In systems that manage multiple file servers by remote mounting, Fig. 8-34(c) is the norm. It is flexible and straightforward to implement, but it has the disadvantage of not making the entire system behave like a single old-fashioned timesharing system. In a timesharing system, the file system looks the same to any process, as in the model of Fig. 8-34(b). This property makes a system easier to program and understand.
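As promised above, here is a minimal sketch contrasting the two transfer models. The helpers get_file, put_file, and remote_read are hypothetical RPC calls introduced only for this illustration; they are not part of any particular distributed file system, so the sketch compiles but is not linkable as-is.

```c
/* Hypothetical RPC helpers -- assumed to exist for this sketch only. */
int get_file(const char *name, const char *local_copy);       /* download whole file */
int put_file(const char *name, const char *local_copy);       /* upload whole file   */
int remote_read(const char *name, long off, void *buf, int n); /* read on the server */

/* Upload/download model: copy the whole file, work on it locally, copy it back. */
void edit_with_upload_download(const char *name)
{
    get_file(name, "/tmp/workcopy");
    /* ... read and write /tmp/workcopy with ordinary local file I/O ... */
    put_file(name, "/tmp/workcopy");
}

/* Remote-access model: the file stays on the server; each access is a request. */
void read_with_remote_access(const char *name, long off, void *buf, int n)
{
    remote_read(name, off, buf, n);   /* one round trip per operation */
}
```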
|
clipped_os_Page_578_Chunk6795
|
[Figure 8-34. (a) Two file servers. The squares are directories and the circles are files. (b) A system in which all clients have the same view of the file system. (c) A system in which different clients have different views of the file system.]

A closely related question is whether or not there is a global root directory, which all machines recognize as the root. One way to have a global root directory is to have the root contain one entry for each server and nothing else. Under these circumstances, paths take the form /server/path, which has its own disadvantages, but at least is the same everywhere in the system.

Naming Transparency

The principal problem with this form of naming is that it is not fully transparent. Two forms of transparency are relevant in this context and are worth distinguishing. The first one, location transparency, means that the path name gives no hint as to where the file is located. A path like /server1/dir1/dir2/x tells everyone
|
clipped_os_Page_579_Chunk6796
|
that x is located on server 1, but it does not tell where that server is located. The server is free to move anywhere it wants to in the network without the path name having to be changed. Thus this system has location transparency.

However, suppose that file x is extremely large and space is tight on server 1. Furthermore, suppose that there is plenty of room on server 2. The system might well like to move x to server 2 automatically. Unfortunately, when the first component of all path names is the server, the system cannot move the file to the other server automatically, even if dir1 and dir2 exist on both servers. The problem is that moving the file automatically changes its path name from /server1/dir1/dir2/x to /server2/dir1/dir2/x. Programs that have the former string built into them will cease to work if the path changes. A system in which files can be moved without their names changing is said to have location independence. A distributed system that embeds machine or server names in path names clearly is not location independent. One based on remote mounting is not, either, since it is not possible to move a file from one file group (the unit of mounting) to another and still be able to use the old path name. Location independence is not easy to achieve, but it is a desirable property to have in a distributed system.

To summarize what we said earlier, there are three common approaches to file and directory naming in a distributed system:

1. Machine + path naming, such as /machine/path or machine:path.
2. Mounting remote file systems onto the local file hierarchy.
3. A single name space that looks the same on all machines.

The first two are easy to implement, especially as a way to connect existing systems that were not designed for distributed use. The latter is difficult and requires careful design, but makes life easier for programmers and users.

Semantics of File Sharing

When two or more users share the same file, it is necessary to define the semantics of reading and writing precisely to avoid problems. In single-processor systems the semantics normally state that when a read system call follows a write system call, the read returns the value just written, as shown in Fig. 8-35(a). Similarly, when two writes happen in quick succession, followed by a read, the value read is the value stored by the last write. In effect, the system enforces an ordering on all system calls, and all processors see the same ordering. We will refer to this model as sequential consistency.

In a distributed system, sequential consistency can be achieved easily as long as there is only one file server and clients do not cache files. All reads and writes go directly to the file server, which processes them strictly sequentially.

In practice, however, the performance of a distributed system in which all file requests must go to a single server is frequently poor. This problem is often solved
|
clipped_os_Page_580_Chunk6797
|
[Figure 8-35. (a) Sequential consistency. (b) In a distributed system with caching, reading a file may return an obsolete value.]

by allowing clients to maintain local copies of heavily used files in their private caches. However, if client 1 modifies a cached file locally and shortly thereafter client 2 reads the file from the server, the second client will get an obsolete file, as illustrated in Fig. 8-35(b).

One way out of this difficulty is to propagate all changes to cached files back to the server immediately. Although conceptually simple, this approach is inefficient. An alternative solution is to relax the semantics of file sharing. Instead of requiring a read to see the effects of all previous writes, one can have a new rule that says: "Changes to an open file are initially visible only to the process that made them. Only when the file is closed are the changes visible to other processes." The adoption of such a rule does not change what happens in Fig. 8-35(b), but it does redefine the actual behavior (B getting the original value of the file) as being the correct one. When client 1 closes the file, it sends a copy back to the server, so that subsequent reads get the new value, as required. Effectively, this is the
|
clipped_os_Page_581_Chunk6798
|
upload/download model shown in Fig. 8-33. This semantic rule is widely implemented and is known as session semantics.

Using session semantics raises the question of what happens if two or more clients are simultaneously caching and modifying the same file. One solution is to say that as each file is closed in turn, its value is sent back to the server, so the final result depends on who closes last. A less pleasant, but slightly easier to implement, alternative is to say that the final result is one of the candidates, but leave the choice of which one unspecified.

An alternative approach to session semantics is to use the upload/download model, but to automatically lock a file that has been downloaded. Attempts by other clients to download the file will be held up until the first client has returned it. If there is a heavy demand for a file, the server could send messages to the client holding the file, asking it to hurry up, but that may or may not help. All in all, getting the semantics of shared files right is a tricky business with no elegant and efficient solutions.

8.3.5 Object-Based Middleware

Now let us take a look at a third paradigm. Instead of saying that everything is a document or everything is a file, we say that everything is an object. An object is a collection of variables that are bundled together with a set of access procedures, called methods. Processes are not permitted to access the variables directly. Instead, they are required to invoke the methods.

Some programming languages, such as C++ and Java, are object oriented, but these are language-level objects rather than run-time objects. One well-known system based on run-time objects is CORBA (Common Object Request Broker Architecture) (Vinoski, 1997). CORBA is a client-server system, in which client processes on client machines can invoke operations on objects located on (possibly remote) server machines. CORBA was designed for a heterogeneous system running a variety of hardware platforms and operating systems and programmed in a variety of languages. To make it possible for a client on one platform to invoke a server on a different platform, ORBs (Object Request Brokers) are interposed between client and server to allow them to match up. The ORBs play an important role in CORBA, even providing the system with its name.

Each CORBA object is defined by an interface definition in a language called IDL (Interface Definition Language), which tells what methods the object exports and what parameter types each one expects. The IDL specification can be compiled into a client stub procedure and stored in a library. If a client process knows in advance that it will need to access a certain object, it is linked with the object's client stub code. The IDL specification can also be compiled into a skeleton procedure that is used on the server side. If it is not known in advance which CORBA objects a process needs to use, dynamic invocation is also possible, but how that works is beyond the scope of our treatment.
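The definition of an object given above, variables bundled with methods that callers must go through, can be illustrated even in plain C with a structure containing function pointers. This is a language-level illustration of the idea only, not how CORBA represents run-time objects.

```c
/* An "object": bundled state plus the methods callers are allowed to use. */
struct counter {
    int value;                            /* the bundled variable            */
    void (*increment)(struct counter *);  /* method: change the state        */
    int  (*get)(const struct counter *);  /* method: observe the state       */
};

static void counter_increment(struct counter *c) { c->value++; }
static int  counter_get(const struct counter *c) { return c->value; }

/* Clients receive an initialized object and invoke its methods;
 * by convention they never touch the value field directly. */
static struct counter make_counter(void)
{
    struct counter c = { 0, counter_increment, counter_get };
    return c;
}
```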
|
clipped_os_Page_582_Chunk6799
|
When a CORBA object is created, a reference to it is also created and returned to the creating process. This reference is how the process identifies the object for subsequent invocations of its methods. The reference can be passed to other processes or stored in an object directory.

To invoke a method on an object, a client process must first acquire a reference to the object. The reference can come either directly from the creating process or, more likely, by looking it up by name or by function in some kind of directory. Once the object reference is available, the client process marshals the parameters to the method calls into a convenient structure and then contacts the client ORB. In turn, the client ORB sends a message to the server ORB, which actually invokes the method on the object. The whole mechanism is similar to RPC.

The function of the ORBs is to hide all the low-level distribution and communication details from the client and server code. In particular, the ORBs hide from the client the location of the server, whether the server is a binary program or a script, what hardware and operating system the server runs on, whether the object is currently active, and how the two ORBs communicate (e.g., TCP/IP, RPC, shared memory, etc.).

In the first version of CORBA, the protocol between the client ORB and the server ORB was not specified. As a result, every ORB vendor used a different protocol and no two of them could talk to each other. In version 2.0, the protocol was specified. For communication over the Internet, the protocol is called IIOP (Internet InterOrb Protocol).

To make it possible to use objects that were not written for CORBA with CORBA systems, every object can be equipped with an object adapter. This is a wrapper that handles chores such as registering the object, generating object references, and activating the object if it is invoked when it is not active. The arrangement of all these CORBA parts is shown in Fig. 8-36.

[Figure 8-36. The main elements of a distributed system based on CORBA. The CORBA parts are shown in gray.]
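Marshaling, mentioned above, simply means packing the parameters of a method call into a flat buffer that an ORB (or any RPC layer) can ship across the network. The sketch below uses an invented wire format (a 4-byte big-endian length followed by the bytes) purely for illustration; CORBA's real encoding (CDR) is more involved.

```c
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

/* Pack one string parameter as: 4-byte big-endian length, then the bytes.
 * Returns the number of bytes written into buf. */
static size_t marshal_string(unsigned char *buf, const char *s)
{
    uint32_t len = (uint32_t)strlen(s);
    uint32_t wire = htonl(len);            /* network byte order, so any machine */
    memcpy(buf, &wire, sizeof(wire));      /* can unpack it regardless of CPU    */
    memcpy(buf + sizeof(wire), s, len);
    return sizeof(wire) + len;
}

/* The receiving side reverses the steps to recover the parameter. */
static size_t unmarshal_string(const unsigned char *buf, char *out, size_t outsz)
{
    uint32_t wire;
    memcpy(&wire, buf, sizeof(wire));
    uint32_t len = ntohl(wire);
    if (len >= outsz)
        len = (uint32_t)outsz - 1;         /* truncate rather than overflow */
    memcpy(out, buf + sizeof(wire), len);
    out[len] = '\0';
    return sizeof(wire) + len;
}
```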
|
clipped_os_Page_583_Chunk6800
|