{ "course": "Distributed_Systems", "course_id": "CO3071", "schema_version": "material.v1", "slides": [ { "page_index": 0, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_001.png", "page_index": 0, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:28:43+07:00" }, "raw_text": "Introduction to ystems Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab - CSE- HCMUT" }, { "page_index": 1, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_002.png", "page_index": 1, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:28:48+07:00" }, "raw_text": "Applications Finance and commerce eCommerce e.g. Amazon and eBay, PayPal, online banking and trading The information society Web information and search engines, ebooks, Wikipedia; social networking: Facebook and MySpace. Creative industries ana online gaming, music and film in the home, user-generated content, e.g. 
YouTube, Flickr. Healthcare: health informatics, online patient records, monitoring patients. Education: e-learning, virtual learning environments; distance learning. Transport and logistics: GPS in route finding systems, map services: Google Maps, Google Earth. Science: The Grid as an enabling technology for collaboration between scientists. Environmental management: sensor technology to monitor earthquakes, floods or tsunamis. HPC Lab - CSE - HCMUT 2" }, { "page_index": 2, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_003.png", "page_index": 2, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:28:53+07:00" }, "raw_text": "A Distributed System is ... (diagram: computers connected by a communication network) Multiple connected CPUs working together. Components located at networked computers communicate and coordinate their actions only by message passing. A collection of independent computers that appears to its users as a single coherent system. A collection of autonomous computers interconnected by a computer network and equipped with distributed system software to form an integrated computing facility. Multiple independent machines interconnected through a network to coordinate to achieve a common, consistent service or function. 
Examples: networked machines, Internet, Intranet, mobile and ubiquitous computing HPC Lab - CSE - HCMUT 3" }, { "page_index": 3, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_004.png", "page_index": 3, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:28:57+07:00" }, "raw_text": "A Distributed System is ... \"... a system in which the failure of a computer you didn't even know existed can render your own computer unusable.\" - Leslie Lamport ... multiple computers communicating via a network ... ... trying to achieve some task together Consists of \"nodes\" (computer, phone, car, robot, ...) HPC Lab - CSE - HCMUT" }, { "page_index": 4, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_005.png", "page_index": 4, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:00+07:00" }, "raw_text": "A Distributed System is ... \"... a system in which the failure of a computer you didn't even know existed can render your own computer unusable.\" - Leslie Lamport ... multiple computers communicating via a network ... ... trying to achieve some task together Consists of \"nodes\" (computer, phone, car, robot, ...) 
HPC Lab - CSE - HCMUT 5" }, { "page_index": 5, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_006.png", "page_index": 5, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:04+07:00" }, "raw_text": "Advantages It's inherently distributed e.g. sending a message from your mobile phone to your friend's phone Communication and resource sharing possible For better reliability even if one node fails, the system as a whole keeps functioning For better performance get data from a nearby node rather than one halfway round the world Scalability o To solve bigger problems e.g. huge amounts of data, can't fit on one machine Potential for incremental growth Economics - price-performance ratio HPC Lab - CSE - HCMUT" }, { "page_index": 6, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_007.png", "page_index": 6, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:09+07:00" }, "raw_text": "Disadvantages Distribution-aware PLs, OSs and applications Security and privacy Network connectivity essential o Communication may fail (and we might not even know it has failed) o Processes may crash (and we might not know) o All of this may happen nondeterministically. Fault tolerance: we want the system as a whole to continue working, even when some parts are faulty This is hard. Writing a program to run on a single computer is comparatively easy?! 
HPC Lab - CSE - HCMUT 7" }, { "page_index": 7, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_008.png", "page_index": 7, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:11+07:00" }, "raw_text": "Challenges Heterogeneity Openness Security Scalability Failure handling Concurrency Transparency HPC Lab - CSE - HCMUT 8" }, { "page_index": 8, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_009.png", "page_index": 8, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:17+07:00" }, "raw_text": "Transparency (Transparency / Description): Access: Enables local and remote resources to be accessed using identical operations. Location: Enables resources to be accessed without knowledge of their physical or network location (for example, which building or IP address). Concurrency: Enables several processes to operate concurrently using shared resources without interference between them. Replication: Enables multiple instances of resources to be used to increase reliability and performance without knowledge of the replicas by users or application programmers. Failure: Enables the concealment of faults, allowing users and application programs to complete their tasks despite the failure of hardware or software components. Mobility: Allows the movement of resources and clients within a system without affecting the operation of users or programs. Performance: Allows the system to be reconfigured to improve performance as loads vary. Scaling: Allows the system and applications to 
expand in scale without change to the system structure or the application algorithms. HPC Lab - CSE - HCMUT 9" }, { "page_index": 9, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_010.png", "page_index": 9, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:21+07:00" }, "raw_text": "Scalability problems (Concept / Example): Centralized services: A single server for all users. Centralized data: A single on-line telephone book. Centralized algorithms: Doing routing based on complete information. HPC Lab - CSE - HCMUT 10" }, { "page_index": 10, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_011.png", "page_index": 10, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:23+07:00" }, "raw_text": "Architecture models HPC Lab - CSE - HCMUT" }, { "page_index": 11, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_012.png", "page_index": 11, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:26+07:00" }, "raw_text": " Client-server first and most commonly used Multiple servers to improve performance and reliability o e.g. search engines (1000s of computers) Proxy servers to reduce load on network, provide access through firewall Peer processes when faster interactive response needed. 
HPC Lab - CSE - HCMUT 12" }, { "page_index": 12, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_013.png", "page_index": 12, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:29+07:00" }, "raw_text": "Client server. Diagram: clients send invocations to servers; servers return results. Key: Process, Computer. HPC Lab - CSE - HCMUT 13" }, { "page_index": 13, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_014.png", "page_index": 13, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:33+07:00" }, "raw_text": "A service provided by multiple servers. Diagram: clients access a service implemented by several servers. HPC Lab - CSE - HCMUT 14" }, { "page_index": 14, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_015.png", "page_index": 14, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:37+07:00" }, "raw_text": "Proxy servers. Diagram: clients on an intranet reach web servers in the outside world through a proxy server and firewall. HPC Lab - CSE - HCMUT 15" }, { "page_index": 15, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": 
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_016.png", "page_index": 15, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:42+07:00" }, "raw_text": "Peer-to-peer Peer 2 Peer 1 App App 96 Sharable 96 86 objects C Peer 3 App O 88 Peers 4 .... N 96 9 8e HPC Lab - CSE- HCMUT 6" }, { "page_index": 16, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_017.png", "page_index": 16, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:45+07:00" }, "raw_text": "Client server and mobility Mobile code downloaded from server, runs on locally e.g. web applets . Mobile agent (code + data) travels from computer to another collects information, returning to origin Beware! Security risks HPC Lab - CSE - HCMUT /" }, { "page_index": 17, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_018.png", "page_index": 17, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:49+07:00" }, "raw_text": "Web applets a) client request results in the downloading of applet code Web Client server Applet code b) client interacts with the applet Web Client. 
Applet server HPC Lab - CSE - HCMUT 18" }, { "page_index": 18, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_019.png", "page_index": 18, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:52+07:00" }, "raw_text": "Design requirements for DSs Judging how good the architecture is... Performance o how fast will it respond? Quality of Service o are video frames and sound synchronized? Dependability o does it work correctly? HPC Lab - CSE - HCMUT 19" }, { "page_index": 19, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_020.png", "page_index": 19, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:29:56+07:00" }, "raw_text": "Performance Responsiveness o fast interactive response delayed by remote requests o use of caching, replication Throughput o dependent on speed of server and data transfer. 
Load balancing o use of applets, multiple servers HPC Lab - CSE - HCMUT 20" }, { "page_index": 20, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_021.png", "page_index": 20, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:00+07:00" }, "raw_text": "Quality of Service (QoS) Non-functional properties experienced by users: Deadline properties o hard deadlines (must be met within T time units) o soft deadlines (there is a 90% chance that the video frame will be delivered within T time units) Multimedia traffic: video/sound synchronization depends on availability of sufficient resources Adaptability o ability to adapt to changing system configuration HPC Lab - CSE - HCMUT 21" }, { "page_index": 21, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_022.png", "page_index": 21, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:03+07:00" }, "raw_text": "Dependability Correctness o correct behavior wrt specification - e.g. use of verification Fault-tolerance o ability to tolerate/recover from faults - e.g. use of redundancy Security o ability to withstand malicious attack - e.g. 
use of encryption, etc. HPC Lab - CSE - HCMUT 22" }, { "page_index": 22, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_023.png", "page_index": 22, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:05+07:00" }, "raw_text": "Distributed Operating Systems HPC Lab - CSE - HCMUT 23" }, { "page_index": 23, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_024.png", "page_index": 23, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:09+07:00" }, "raw_text": "Multiprocessor Operating Systems 
Like a uniprocessor operating system Manages multiple CPUs transparently to the user Each processor has its own hardware cache Maintain consistency of cached data HPC Lab - CSE - HCMUT 24" }, { "page_index": 24, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_025.png", "page_index": 24, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:13+07:00" }, "raw_text": "Uniprocessor Operating Systems An OS acts as a resource manager or an arbitrator o Manages CPU, I/O devices, memory OS provides a virtual interface that is easier to use than hardware Structure of uniprocessor operating systems o Monolithic (e.g., MS-DOS, early UNIX) One large kernel that handles everything o Layered design Functionality is decomposed into N layers Each layer uses services of layer N-1 and implements new service(s) for layer N+1 HPC Lab - CSE - HCMUT 25" }, { "page_index": 25, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_026.png", "page_index": 25, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:17+07:00" }, "raw_text": "Uniprocessor Operating Systems Microkernel architecture Small kernel; user-level servers implement additional functionality o No direct data exchange between modules Diagram: user application, memory module, process module, and file module run in user mode behind the OS interface; system calls enter the microkernel (kernel mode) above the hardware. HPC Lab - CSE - HCMUT 26" }, { "page_index": 26, "chapter_num": 1, "source_file": 
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_027.png", "page_index": 26, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:20+07:00" }, "raw_text": "Distributed Operating System Manages resources in a distributed system o Seamlessly and transparently to the user Looks to the user like a centralized Os o But operates on multiple independent CPUs Provides transparency o Location, migration, concurrency, replication,... Presents users with a virtual uniprocessor. HPC Lab - CSE - HCMUT 27" }, { "page_index": 27, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_028.png", "page_index": 27, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:25+07:00" }, "raw_text": "Types of Distributed OSs System Description Main Goal Tightly-coupled operating system for multi-processors Hide and manage hardware DOS and homogeneous multi-computers resources Loosely-coupled operating system for heterogeneous Offer local services to remote NOS multicomputers (LAN and WAN) clients Additional layer atop of NOs implementing general- Middleware Provide distribution transparency purpose services HPC Lab - CSE - HCMUT 28" }, { "page_index": 28, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_029.png", "page_index": 28, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", 
"timestamp": "2025-11-01T09:30:28+07:00" }, "raw_text": "Multicomputer - Operating Systems Machine A Machine B Machine O Distributed applications Distributed operating system services Kernel Kernel Kernel Network HPC Lab - CSE - HCMUT 29" }, { "page_index": 29, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_030.png", "page_index": 29, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:31+07:00" }, "raw_text": "Network Operating System (1) Machine A Machine B Machine C Distributed applications Network OS Network OS Network OS services services services Kernel Kernel Kernel Network HPC Lab - CSE - HCMUT 30" }, { "page_index": 30, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_031.png", "page_index": 30, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:35+07:00" }, "raw_text": "Network Operating System (2) File server Disks on which Client 1 Client 2 shared file system Employs is stored Request Repy o Minim o Additic Network HPC Lab - CSE - HCMUT 3" }, { "page_index": 31, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_032.png", "page_index": 31, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:39+07:00" }, "raw_text": "Middleware-based systems General structure of a 
distributed system as middleware Machine A Machine B Machine C Distributed applications Middleware services Network OS services Network OS services Network OS services Kernel Kernel Kernel Network HPC Lab - CSE - HCMUT 32" }, { "page_index": 32, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_1/slide_033.png", "page_index": 32, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:45+07:00" }, "raw_text": "Comparison between Systems (Item / Distributed OS Multiproc. / Distributed OS Multicomp. / Network OS / Middleware-based OS): Degree of transparency: Very High / High / Low / High. Same OS on all nodes: Yes / Yes / No / No. Number of copies of OS: 1 / N / N / N. Basis for communication: Shared memory / Messages / Files / Model specific. Resource management: Global, central / Global, distributed / Per node / Per node. Scalability: No / Moderately / Yes / Varies. Openness: Closed / Closed / Open / Open. HPC Lab - CSE - HCMUT 33" }, { "page_index": 33, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_001.png", "page_index": 33, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:48+07:00" }, "raw_text": "Distributed Systems Communication Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab-CSE-HCMUT 1" }, { "page_index": 34, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": 
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_002.png", "page_index": 34, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:51+07:00" }, "raw_text": "Communication Issues in communication Message-oriented Communication Remote Procedure Calls Transparency but poor for passing references Remote Method Invocation o RMIs are essentially RPCs but specific to remote objects o System wide references passed as parameters Stream-oriented Communication HPC Lab-CSE-HCMUT 2" }, { "page_index": 35, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_003.png", "page_index": 35, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:30:56+07:00" }, "raw_text": "Communication Protocols . Protocols are agreements/rules on communication . 
Protocols could be connection-oriented or connectionless. Diagram: a message sent by the Sender passes down through the layers (Application, Presentation, Session, Transport, Network, Data link, Physical), across the communication medium, and up to the Recipient. HPC Lab-CSE-HCMUT 3" }, { "page_index": 36, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_004.png", "page_index": 36, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:00+07:00" }, "raw_text": "Layered Protocols A typical message as it appears on the network Data link layer header Network layer header Transport layer header Session layer header Presentation layer header Application layer header Message Data link layer trailer Bits that actually appear on the network HPC Lab-CSE-HCMUT" }, { "page_index": 37, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_005.png", "page_index": 37, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:05+07:00" }, "raw_text": "Client-Server TCP (a) Normal TCP between Client and Server: SYN; SYN,ACK(SYN); ACK(SYN); request; FIN; ACK(req+FIN); answer; FIN; ACK(FIN). (b) Transactional TCP: SYN,request,FIN; SYN,ACK(FIN),answer,FIN; ACK(FIN). HPC Lab-CSE-HCMUT 5" }, { "page_index": 38, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_006.png", "page_index": 38, "language": "en", "ocr_engine": "PaddleOCR 
3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:09+07:00" }, "raw_text": "Middleware Protocols Middleware: o Layer that resides between an OS and an application May implement general-purpose protocols that warrant their own layers. Ex: distributed commit Application protocol Application 6 Middleware protocol viddleware 5 Transport protocol Transport 4 Network protocol Network 3 Data link protocol 2 Data link Physical protocol Physical 1 Network HPC Lab-CSE-HCIMlU I" }, { "page_index": 39, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_007.png", "page_index": 39, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:13+07:00" }, "raw_text": "Client-Server communication model Structure: group of servers offering service to clients Based on a request/response paradigm Techniques. 
o Socket, remote procedure calls (RPC), Remote Method Invocation (RMI). Diagram: a client process and file, process, and terminal servers, each with its own kernel. HPC Lab-CSE-HCMUT 7" }, { "page_index": 40, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_008.png", "page_index": 40, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:16+07:00" }, "raw_text": "Issues in Client-Server communication Addressing Blocking versus non-blocking Buffered versus unbuffered Reliable versus unreliable Server architecture: concurrent versus sequential Scalability HPC Lab-CSE-HCMUT 8" }, { "page_index": 41, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_009.png", "page_index": 41, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:20+07:00" }, "raw_text": "Addressing Issues Question: how is the server located? 
Hard-wired address o Machine address and process address are known Broadcast-based o Server chooses address from a sparse address space o Client broadcasts request o Can cache response for future use Name server (NS) o Locate address via name server HPC Lab-CSE-HCMUT" }, { "page_index": 42, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_010.png", "page_index": 42, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:23+07:00" }, "raw_text": "Blocking versus Non-blocking Blocking communication (synchronous) o Send blocks until message is actually sent o Receive blocks until message is actually received Non-blocking communication (asynchronous) o Send returns immediately o Receive does not block either HPC Lab-CSE-HCMUT 10" }, { "page_index": 43, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_011.png", "page_index": 43, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:27+07:00" }, "raw_text": "Buffering issues Unbuffered communication o Server must call receive before client can call send Buffered communication o Client sends to a mailbox o Server receives from a mailbox HPC Lab-CSE-HCMUT" }, { "page_index": 44, 
"language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:32+07:00" }, "raw_text": "Reliability Unreliable channel o Need acknowledgements (ACKs) o Applications handle ACKs o ACKs for both request and reply Reliable channel o Reply acts as ACK for request o Explicit ACK for response Reliable communication on unreliable channels o Transport protocol handles lost messages HPC Lab-CSE-HCMUT 12" }, { "page_index": 45, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_013.png", "page_index": 45, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:36+07:00" }, "raw_text": "Remote Procedure Calls (RPC) Goal: Make distributed computing look like centralized computing Allow remote services to be called as procedures Transparency with regard to location, implementation, language Issues How to pass parameters Bindings Semantics in face of errors Two classes: integrated into prog. language and separate HPC Lab-CSE-HCMUT 13" }, { "page_index": 46, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_014.png", "page_index": 46, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:40+07:00" }, "raw_text": "Conventional procedure call (a) Parameter passing in a local procedure call: the stack before the call to read (b) The stack while the called procedure is active [Figure: stack pointer; main program's local variables; bytes, buf, fd; return address; read's local variables] HPC Lab-CSE-HCMUT 14" }, { "page_index": 47, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_015.png", "page_index": 47, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:42+07:00" }, "raw_text": "Example: Local Procedure Call Machine Process sum(i, j) int i, j; n = sum(4, 7); return (i+j); HPC Lab-CSE-HCMUT 15" }, { "page_index": 48, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_016.png", "page_index": 48, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:47+07:00" }, "raw_text": "Example: RPC Stubs Client Process: n = sum(4, 7); Server Process: sum(i, j) int i, j; return (i+j); [Figure: client and server stubs exchange messages carrying \"sum\", 4, 7 via the client and server OS] HPC Lab-CSE-HCMUT 16" }, { "page_index": 49, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_017.png", "page_index": 49, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:50+07:00" }, "raw_text": "Parameter passing Local procedure parameter passing o Call-by-value o Call-by-reference: arrays, complex data structures Remote procedure calls simulate this through Stubs - proxies o Flattening - marshalling Related issue: global variables are not allowed in RPCs HPC Lab-CSE-HCMUT 17" }, { "page_index": 50,
"chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_018.png", "page_index": 50, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:55+07:00" }, "raw_text": "Client and Server Stubs Principle of RPC between a client and server program Wait for result Client Call remote Return procedure from call Request Reply Server Call local procedure Time and return results HPC Lab-CSE-HCMUT 18" }, { "page_index": 51, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_019.png", "page_index": 51, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:31:58+07:00" }, "raw_text": "Stubs Client makes procedure call (just like a local procedure call) to the client stub Server is written as a standard procedure Stubs take care of packaging arguments and sending messages Packaging parameters is called marshalling Stub compiler generates stub automatically from specs in an Interface Definition Language (IDL) o Simplifies programmer task HPC Lab-CSE-HCMUT 19" }, { "page_index": 52, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_020.png", "page_index": 52, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:02+07:00" }, "raw_text": "Steps of an RPC 1. Client procedure calls client stub in normal way 2.
Client stub builds message, calls local OS 3. Client's OS sends message to remote OS 4. Remote OS gives message to server stub 5. Server stub unpacks parameters, calls server 6. Server does work, returns result to the stub 7. Server stub packs it in message, calls local OS 8. Server's OS sends message to client's OS 9. Client's OS gives message to client stub 10. Stub unpacks result, returns to client HPC Lab-CSE-HCMUT 20" }, { "page_index": 53, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_021.png", "page_index": 53, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:08+07:00" }, "raw_text": "Example of an RPC Client machine Server machine Client process: k = add(i,j) with client stub Server process: implementation of add with server stub 1. Client call to procedure 2. Stub builds message [proc: \"add\", int: val(i), int: val(j)] 4. Server OS hands message to server stub 5. Stub unpacks message 6. Stub makes local call to \"add\" 3.
Message is sent across the network HPC Lab-CSE-HCMUT 21" }, { "page_index": 54, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_022.png", "page_index": 54, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:13+07:00" }, "raw_text": "Marshalling Problem: different machines have different data formats o Intel: little endian, SPARC: big endian Solution: use a standard representation o Example: external data representation (XDR) Problem: how do we pass pointers? o If it points to a well-defined data structure, pass a copy and the server stub passes a pointer to the local copy What about data structures containing pointers? o Prohibit o Chase pointers over network Marshalling: transform parameters/results into a byte stream HPC Lab-CSE-HCMUT 22" }, { "page_index": 55, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_023.png", "page_index": 55, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:17+07:00" }, "raw_text": "Binding Problem: how does a client locate a server?
o Use binding Server o Export server interface during initialization o Send name, version no., unique identifier, handle (address) to binder Client o First RPC: send message to binder to import server interface Binder: check to see if server has exported interface Return handle and unique identifier to client HPC Lab-CSE-HCMUT 23" }, { "page_index": 56, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_024.png", "page_index": 56, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:21+07:00" }, "raw_text": "Case Study: SUNRPC One of the most widely used RPC systems Developed for use with NFS Built on top of UDP or TCP TCP: stream is divided into records UDP: max packet size < 8912 bytes UDP: timeout plus limited number of retransmissions TCP: return error if connection is terminated by server Multiple arguments marshaled into a single structure At-least-once semantics if reply received, at-least-zero semantics if no reply. With UDP tries at-most-once Use SUN's eXternal Data Representation (XDR) Big endian order for 32 bit integers, handle arbitrarily large data structures HPC Lab-CSE-HCMUT 24" }, { "page_index": 57, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_025.png", "page_index": 57, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:26+07:00" }, "raw_text": "Binder: Port Mapper Server start-up: create port Server stub calls svc_register to register prog.
#, version # with local port mapper Port mapper stores prog #, version #, and port Client start-up: call clnt_create to locate server port Upon return, client can call procedures at the server [Figure: server registers with the portmapper on the server machine; client sends request, server sends reply] HPC Lab-CSE-HCMUT 25" }, { "page_index": 58, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_026.png", "page_index": 58, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:30+07:00" }, "raw_text": "Rpcgen: generating stubs RPC specification file Q.x is input to rpcgen, which generates Q.h, Q_xdr.c (does the XDR conversion), Q_svc.c (server stub) and Q_clnt.c (client stub); these are compiled (cc) together with the server procedures, the client application and the RPC run-time library Detailed example: later in this course HPC Lab-CSE-HCMUT 26" }, { "page_index": 59, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_027.png", "page_index": 59, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:35+07:00" }, "raw_text": "Lightweight RPCs Many RPCs occur between client and server on same machine o Need to optimize RPCs for this special case => use a lightweight RPC mechanism (LRPC) Server S exports interface to remote procedures Client C on same machine imports interface OS kernel creates data structures including an argument stack shared between S and C HPC Lab-CSE-HCMUT 27" }, { "page_index": 60, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_028.png", "metadata": { "doc_type":
"slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_028.png", "page_index": 60, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:39+07:00" }, "raw_text": "Lightweight RPCs RPC execution o Push arguments onto stack Trap to kernel Kernel changes mem map of client to server address space Client thread executes procedure (OS upcall) o Thread traps to kernel upon completion o Kernel changes the address space back and returns control to client Called \"doors\" in Solaris HPC Lab-CSE-HCMUT 28" }, { "page_index": 61, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_029.png", "page_index": 61, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:44+07:00" }, "raw_text": "Doors Computer Client process Server process Server: server_door(...) { ... door_return(...); } main() { fd = door_create(...); fattach(fd, door_name, ...); /* register door */ } Client: main() { fd = open(door_name, ...); door_call(fd, ...); } Operating system: invoke registered door at other process; return to calling process Which RPC to use?
- run-time bit allows stub to choose between LRPC and RPC HPC Lab-CSE-HCMUT 29" }, { "page_index": 62, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_030.png", "page_index": 62, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:49+07:00" }, "raw_text": "Other RPC models Asynchronous RPC o Request-reply behavior often not needed Server can reply as soon as request is received and execute procedure later Deferred-synchronous RPC o Use two asynchronous RPCs o Client needs a reply but can't wait for it; server sends reply via another asynchronous RPC One-way RPC o Client does not even wait for an ACK from the server o Limitation: reliability not guaranteed (Client does not know if procedure was executed by the server) HPC Lab-CSE-HCMUT 30" }, { "page_index": 63, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_031.png", "page_index": 63, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:54+07:00" }, "raw_text": "Asynchronous RPC Client Wait for result Client Wait for acceptance Call remote Return Call remote Return procedure from call procedure from call Request Request Accept request Reply Server Time Server Call local procedure Time Call local procedure and return results (a) (b) a) The interconnection between client and server in a traditional RPC b) The interaction using asynchronous RPC HPC Lab-CSE-HCMUT 31" }, { "page_index": 64, "chapter_num": 2, "source_file":
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_032.png", "page_index": 64, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:32:59+07:00" }, "raw_text": "Deferred Synchronous RPC A client and server interacting through two asynchronous RPCs Interrupt client Wait for acceptance Client Call remote Return from call Return procedure Acknowledge results Accept request Server Time Call local procedure Call client with one-way RPC HPC Lab-CSE-HCMUT 32" }, { "page_index": 65, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_033.png", "page_index": 65, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:02+07:00" }, "raw_text": "Remote Method Invocation (RMI) RPCs applied to objects, i.e., instances of a class o Class: object-oriented abstraction; module with data and operations Separation between interface and implementation Interface resides on one machine, implementation on another RMIs support system-wide object references o Parameters can be object references HPC Lab-CSE-HCMUT 33" }, { "page_index": 66, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_034.png", "page_index": 66, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:08+07:00" }, "raw_text": "Distributed Objects Client machine Server
machine [Figure: client invokes a method on a proxy with the same interface as the object; the skeleton on the server invokes the same method at the object, which holds the state; the marshalled invocation is passed across the network between client OS and server OS] When a client binds to a distributed object, load the interface (\"proxy\") into client address space Proxy analogous to stubs Server stub is referred to as a skeleton HPC Lab-CSE-HCMUT 34" }, { "page_index": 67, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_035.png", "page_index": 67, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:11+07:00" }, "raw_text": "Proxies and Skeletons Proxy: client stub o Maintains server ID, endpoint, object ID Sets up and tears down connection with the server [Java:] does serialization of local object parameters o In practice, can be downloaded/constructed on the fly (why can't this be done for RPCs in general?)
Skeleton: server stub o Does deserialization and passes parameters to server and sends result to proxy HPC Lab-CSE-HCMUT 35" }, { "page_index": 68, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_036.png", "page_index": 68, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:16+07:00" }, "raw_text": "Java RMI Server o Defines interface and implements interface methods Server program Creates server object and registers object with \"remote object\" registry Client o Looks up server in remote object registry o Uses normal method call syntax for remote methods Java tools o Rmiregistry: server-side name server o Rmic: uses server interface to create client and server stubs HPC Lab-CSE-HCMUT 36" }, { "page_index": 69, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_037.png", "page_index": 69, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:20+07:00" }, "raw_text": "Server socket bind listen accept read write close 1 1 1 Synchronization point - Communication - 1 socket connect write read close Client HPC Lab-CSE-HCMUT 37" }, { "page_index": 70, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_a/slide_038.png", "page_index": 70, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": 
"2025-11-01T09:33:24+07:00" }, "raw_text": "Berkeley socket primitives Primitive Meaning Socket Create a new communication endpoint Bind Attach a local address to a socket Listen Announce willingness to accept connections Accept Block caller until a connection request arrives Connect Actively attempt to establish a connection Send Send some data over the connection Receive Receive some data over the connection Close Release the connection HPC Lab-CSE-HCMUT 38" }, { "page_index": 71, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_001.png", "page_index": 71, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:28+07:00" }, "raw_text": "Distributed Systems Streaming Data Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab-CSE-HCMUT 1" }, { "page_index": 72, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_002.png", "page_index": 72, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:32+07:00" }, "raw_text": "Classification of real-time systems [Table: Classification | Examples | Latency measured in | Tolerance for delay] Hard | Pacemaker, anti-lock brakes | Microseconds-milliseconds | None - total system failure, potential loss of life. Soft | Airline reservation system, online stock quotes, VoIP (Skype) | Milliseconds-seconds | Low - no system failure, no life at risk. Near | Skype video, home automation | Seconds-minutes | High - no system failure, no life at risk. Data streaming???
HPC Lab-CSE-HCMUT 2" }, { "page_index": 73, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_003.png", "page_index": 73, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:36+07:00" }, "raw_text": "Data streaming architecture Data source Applications Collection tier Message queuing tier Analysis tier In-memory data store Data access tier Sometimes we need to reach back to get data that has just been analyzed Long term store We want to persist analyzed data for future use HPC Lab-CSE-HCMUT 3" }, { "page_index": 74, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_004.png", "page_index": 74, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:38+07:00" }, "raw_text": "Data collection: data ingestion Request/response Publish/subscribe One-way Request/acknowledge Stream HPC Lab-CSE-HCMUT" }, { "page_index": 75, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_005.png", "page_index": 75, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:47+07:00" }, "raw_text": "2. Uncertainty: Request/response - the requester does not know when the response will be received, and may not know whether the response has failed or hit an error 3. Suited to high load and uneven processing: 4.
Use callbacks or a notification system: instead of waiting for the response, the requester usually registers a callback or uses a notification system to process the result when it is ready. 5. Used in real-time systems and high-quality applications: 6. Error handling: the system must carefully handle error and failure cases, since there is no immediate response to report them. [Figure: (a) Sync RR - the request and the response happen over the same connection; (b) Half-async RR; (c) Full-async RR - other work is performed by the client while waiting for the response] HPC Lab-CSE-HCMUT 5" }, { "page_index": 76, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_006.png", "page_index": 76, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:53+07:00" }, "raw_text": "Construction 1 mile ahead, alternate route recommended Traffic conditions On-board navigation service [Collection tier, Analysis tier, In-memory data store] Synchronous Request/Response Request and response together: in this way of working, the request and the response occur in lockstep, meaning the requester waits until the response is received before continuing with other tasks. Synchronization: the request and response are synchronized; the requester does not continue with other tasks until the response is received or the wait times out (timeout).
Simplicity: this approach is usually simple to implement and understand, but it can become a bottleneck if many requests must be processed at the same time Half-Asynchronous Request/Response: Request and response are not simultaneous: in this model, the requester does not have to wait for the response immediately after sending the request; instead, it can continue with other tasks without being blocked waiting for the response. No synchronization: the request and response are not synchronized, allowing the requester to keep working while the receiver processes the request and prepares the response. Suited to high load: this model fits systems that must handle many simultaneous requests without slowing down. HPC Lab-CSE-HCMUT 6" }, { "page_index": 77, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_007.png", "page_index": 77, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:33:58+07:00" }, "raw_text": "Request/acknowledge As the visitor is browsing our site we're collecting data about each page they visit and every link they click The request/acknowledge pattern occurs on the first page they visit The acknowledgment is nothing more than a unique identifier [Figure: get propensity-to-buy score, passing in the acknowledgment; Propensity service; Collection, Message queuing, Analysis, In-memory data store, Data access tiers] When we call the propensity service, we can pass the unique identifier we obtained on the very first visit.
HPC Lab-CSE-HCMUT" }, { "page_index": 78, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_008.png", "page_index": 78, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:06+07:00" }, "raw_text": "Publish/subscribe [Figure: cars (producers) publish current-traffic messages to a Traffic topic at the broker; a car (consumer) receives them through a traffic subscription. Producers send messages to topics at the broker; a subscription delivers a topic's messages to its consumers] HPC Lab-CSE-HCMUT 8" }, { "page_index": 79, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_009.png", "page_index": 79, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:09+07:00" }, "raw_text": "One-way The \"fire and forget\" message pattern The system making the request doesn't need a response The client does not even know whether the request was received by the service Ex: o (Environment) sensors o Servers send data to the Monitoring System o RFID tag to RFID receiver HPC Lab-CSE-HCMUT" }, { "page_index": 80, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_010.png", "page_index": 80, "language": "en", "ocr_engine":
"PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:15+07:00" }, "raw_text": "Stream The stream pattern flips things around: as in other patterns, a client sends a request to a service and gets a response (single message), but here the service becomes the client A single request results in no data or a continual flow of data as a response The collection tier connects to a stream source and pulls data in [Figure: Request/response (optional) versus a continuous streaming response from a stream source] HPC Lab-CSE-HCMUT 10" }, { "page_index": 81, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_011.png", "page_index": 81, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:19+07:00" }, "raw_text": "Producer, broker and consumer [Figure: messages being produced (sent to the broker); the broker's message queue, with messages being written and read; messages being consumed (read from the broker)] The three core parts to a message queuing system Producer sends a message to a broker Broker puts the message into a queue Consumer reads the message from the broker HPC Lab-CSE-HCMUT" }, { "page_index": 82, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_012.png", "page_index": 82, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:25+07:00" }, "raw_text": "Kafka Data source Applications Message queuing tier Broker Collection tier Analysis tier In-memory data store Data access tier Producer Message queue Consumer Sometimes
we need to reach back to get data that has just analyzed Long term store We may want to persist analyzed data for future used HPC Lab-CSE-HCMUT l 2" }, { "page_index": 83, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_013.png", "page_index": 83, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:30+07:00" }, "raw_text": "Step A: this looks pretty normal Collection Analysis tier tier DataData DAIF and is what we would like to see A Normal data flow Step B: we can tell something is not quite right backpressure is Collection Analysis tier building tier (B) Backpressure building Step C: our data pipe broke under pressure, and data is now virtually dropping onto the floor Collection D Analysis tier tier and is gone forever. C Data on the floor Data is gone.lost forever HPC Lab-CSE-HCMUT 3" }, { "page_index": 84, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_014.png", "page_index": 84, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:35+07:00" }, "raw_text": "Message delivery semantics At most once: a message may get lost, but it will never be reread by a consumer At /east once: a message will never be lost, but it may be reread by a consumer Exactly-once: a message is never lost and is read by a consumer once and only once. Messages are going over a network to get to their next destination, maybe even the internet. What happens if there is a networking problem? 
Producer Broker Consumer 1 3 6 Message queue 4 HPC Lab-CSE-HCMUT 14" }, { "page_index": 85, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_015.png", "page_index": 85, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:40+07:00" }, "raw_text": "Exactly-once semantics Apache Kafka and Apache ActiveMQ: NOT support Enough metadata about the messages that you can implement exactly-once semantics with some coordination between producer(s) and consumer(s): > Do not retry to send messages o Read data from the broker to verify that the message you didn't receive an acknowledgment > Store metadata for last message Storing some data about the last message we read, e.g. message offset in Apache Kafka Taking into consideration is what to do if there's a failure storing the metadata 2.Storemetadata for lastmessage Producer M Broker M Consumer M Persistent 1. 
Do not retry to storage send messages Message queue HPC Lab-CSE-HCMUT l 5" }, { "page_index": 86, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_016.png", "page_index": 86, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:42+07:00" }, "raw_text": "Kafka HPC Lab-CSE-HCMUT 16" }, { "page_index": 87, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_017.png", "page_index": 87, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:47+07:00" }, "raw_text": "Message . 
models Point-to-Point Receiver msg Sender msg Queue Receiver Receiver Publish/Subscribe Subscriber msg Publisher msg Topics msg Subscriber msg Subscriber HPC Lab-CSE-HCMUT l 7" }, { "page_index": 88, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_018.png", "page_index": 88, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:51+07:00" }, "raw_text": "Publish/Subscribe model Consumer Group A1 Pub/Sub Consumer Queue A Producer Consumer Producer Group A2 Consumer Queue B Producer Consumer Group B1 Producer Consumer HPC Lab-CSE-HCMUT 8" }, { "page_index": 89, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_019.png", "page_index": 89, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:34:55+07:00" }, "raw_text": "Data streaming architecture: Pub/Sub Data source Kafka Spark Applications Collection tier Message queuing tier Analysis tier In-memory data store Data access tier ThingsBoard Sometimes we need to reach back to get data that has just been analyzed Long term store We want to persist analyzed data for future use HPC Lab-CSE-HCMUT 9" }, { "page_index": 90, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_020.png", "page_index": 90, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": 
"2025-11-01T09:34:58+07:00" }, "raw_text": "Kafka Kafka is a \"publish-subscribe messaging rethought as a distributed commit logj Fast Scalable Durable Distributed HPC Lab-CSE-HCMU l 20" }, { "page_index": 91, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_021.png", "page_index": 91, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:02+07:00" }, "raw_text": "Kafka Open source project Created by LinkedIn, now maintained by Confluence Real time, distributed, resilient, fault tolerant and scalable (100s of broker millions of msg per sec) Used by thousands of companies including over 60% of the Fortune 100 .l & airbnb ORACLE The CISCO New Hork Times Spotify CLOUDFLARE Linkedin HPC Lab-CSE-HCMUT 2" }, { "page_index": 92, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_022.png", "page_index": 92, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:07+07:00" }, "raw_text": "How fast is Kafka? \"Up to 2 million writes/sec on 3 cheap machines' Using 3 producers on 3 different machines, 3x async replication Only 1 producer/machine because NiC already saturated Sustained throughput as stored data grows Slightly different test config than 2M writes/sec above. 
Throughput vs Size 000008 000009000000000000 0 200 400 600 800 1000 1200 1400 Data Written (GBs) HPC Lab-CSE-HCMUT 22" }, { "page_index": 93, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_023.png", "page_index": 93, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:12+07:00" }, "raw_text": "Why is Kafka so fast? Fast writes: While Kafka persists all data to disk, essentially all writes go to the page cache of OS,i.e. RAM Fast reads. Very efficient to transfer data from page cache to a network socket Linux: sendfileO system call Combination of the two = fast Kafka! Example (Operations): On a Kafka cluster where the consumers are mostly caught up you will see no read activity on the disks as they will be serving data entirely from cache HPC Lab-CSE-HCMUT 23" }, { "page_index": 94, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_024.png", "page_index": 94, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:16+07:00" }, "raw_text": "A first look The who is who producer producer producer Producers write data to brokers Consumers read data from brokers 0 kafka All this is distributed cluster The data Data is stored in topics consumer consumer consumer Topics are split into partitions, which are replicated HPC Lab-CSE-HCMUT 24" }, { "page_index": 95, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": 
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_025.png", "page_index": 95, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:23+07:00" }, "raw_text": "Apache Kafka Topic Kafka cluster Px active replica(id y ofpartition x Ry Offset for topic \"zerg.hydra\" Px active replica (idyof partition x Ry this broker is leader for thatpartition broker1 Broker PO P2 Partition R1 R1 broker2 Replica producer PO P1 consumer R2 R2 (\"zerg.hydra\" (\"zerg.hydra\") Kafka cluster broker3 ZooKeeper P1 P2 R3 R3 Data collection Brokers, producers and consumers use ZooKeeper Pre-processing to manage and share state Sampling ZooKeeper Filtering Integration HPC Lab-CSE-HCMUT 25" }, { "page_index": 96, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_026.png", "page_index": 96, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:27+07:00" }, "raw_text": "Topic, Partition and Offset (1) Topics: Place to store/read all messages Each messages must be organized into at least one specific topic Each topic has its own name as id Topics are divided into a number of partitions Partition contains records in an immutable order Each record is identified using a non-stop incremental integer number called offset Allows multiple consumers to read from a topic in parallel HPC Lab-CSE-HCMUT 26" }, { "page_index": 97, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_027.png", "page_index": 97, "language": "en", "ocr_engine": "PaddleOCR 3.2", 
"extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:30+07:00" }, "raw_text": "Topic, Partition and Offset t (2) Be careful!: Offset comparison and record order is only guaranteed within a partition Record has a limited lifetime (default: 1 week) Once the data is written, you cannot update it (but you can delete). 1 Incoming data is stored at a random partition unless a key is provided HPC Lab-CSE-HCMUT 27" }, { "page_index": 98, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_028.png", "page_index": 98, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:34+07:00" }, "raw_text": "Topic (1) Topic: feed name to which messages are published Example: \"zerg.hydra Kafka prunes \"head\" based on age or max size or \"key Producer A1 Kafka topic Producer A2 new Producer An Older msgs Newer msgs Producers always append to \"tail (think: append to a file) Broker(s) HPC Lab-CSE-HCMU 28" }, { "page_index": 99, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_029.png", "page_index": 99, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:38+07:00" }, "raw_text": "Topic (2) Consumers use an \"offset pointer\" to Consumer group C1 track/control their read progress (and decide the pace of consumption) Consumer group C2 Producer A1 W W Producer A2 new Producer An Older msgs Newer msgs Producers always append to \"tail\" (think: append to a file) Broker(s) HPC Lab-CSE-HCMU l 29" }, { "page_index": 100, "chapter_num": 2, "source_file": 
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_030.png", "page_index": 100, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:45+07:00" }, "raw_text": "Partitions (1) A topic consists of partitions Partition: ordered + immutable sequence of messages that is continually appended to Anatomy of a Topic Partition 1 0 0 2 3 4 5 6 7 8 9 0 2 Partition 0 2 3 4 5 6 7 8 9 Writes 1 1 Partition 1 0 1 2 3 4 5 6 7 8 9 2 0 1 2 Old New HPC Lab-CSE-HCMUT 30" }, { "page_index": 101, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_031.png", "page_index": 101, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:50+07:00" }, "raw_text": "Partitions (2) No. of partitions of a topic is configurable No. of partitions determines max consumer (group) parallelism cf. 
parallelism of Storm's KafkaSpout via builder.setSpout(,,N) Kafka Cluster Server1- Server 2 PO P3 P1 P2 C1 C2 C3 C4 C5 C6 Consumer Group A- Consumer Group B Consumer group A, with 2 consumers, reads from a 4-partition topic Consumer group B, with 4 consumers, reads from the same topic HPC Lab-CSE-HCMUT 3" }, { "page_index": 102, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_032.png", "page_index": 102, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:35:57+07:00" }, "raw_text": "Partition offsets Offset: messages in the partitions are each assigned a unique (per partition) and sequential id called the offset Consumers track their pointers via (offset, partition, topic) tuples Consumer group C1 W Partition 1 0 0 2 3 5 6 7 8 9 0 V Partition 0 5 6 7 8 9 Writes 3 1 V Partition 1 0 1 2 3 4 5 6 7 8 9 2 0 2 Old New HPC Lab-CSE-HCMUT 32" }, { "page_index": 103, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_033.png", "page_index": 103, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:00+07:00" }, "raw_text": "Replications of a partition Replicas: \"backups\" of a partition They exist solely to prevent data loss Replicas are never read from, never written to They do NOT help to increase producer or consumer parallelism! 
Kafka tolerates (numReplicas - 1) dead brokers before losing data LinkedIn: numReplicas = 2 -> 1 broker can die HPC Lab-CSE-HCMUT 33" }, { "page_index": 104, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_034.png", "page_index": 104, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:07+07:00" }, "raw_text": "Replication Topics Kafka Brokers Leader - server1 Consumer group Partition1 replica . p1 1 consumer1 I producer1 Follower Partition2 Server2 Read data write data Consumer2 0123 p2 replica 2 producer2 Follower Partition3 Consumer 3 Server3 replica O p3 3 old New HPC Lab-CSE-HCMUT 34" }, { "page_index": 105, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_035.png", "page_index": 105, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:13+07:00" }, "raw_text": "Partition Kafka Cluster Kafka write scalability: 6Brokers,3Replicas Multiple messages can beread concurrently Consumers 1Topic,4Partitions using multiplepartitions on different brokers. 
Read messagesfrom topic Concurrentreads TOPIC M1 M2 M3 M4 From brokers1.2.5,6 R 1 A Broker1 Broker2 Broker3 PartitionO Partition C Partition O Leader Follower Follower Partition1 Partition1 Partition1 Follower Leader Follower Broker4 Broker5 Broker6 Partition2 Partition2 Partition2 Follower Follover Leader Partition3 Partition3 Partition3 Follower Leader Follower HPC Lab-CSE-HCMUT 35" }, { "page_index": 106, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_036.png", "page_index": 106, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:22+07:00" }, "raw_text": "Write Kafka Cluster Kafka write scalability: 6Brokers,3Replicas Multiple messages can bewritten concurrently Producers 1Topic,4Partitions using muttiple partitions on different brokers. 
Send messages to topic Topic isreplicatedover Concurrentwrites M1 sent to leader TOPIC M1 M2 multiplepartitionsand Toleaderson brokers1and6 ofpartition 0 brokers and followerson brokers2.3,4,5 Broker1 Broker2 Broker3 write Partition O PartitionO write PartitionO Leader Follower Follower write Partition1 Partition1 Partition1 Follower Leader Follower Broker4 Broker5 Broker6 M2concurrently sent to leaderof write Partition2 Partition2 Partition2 partition2on anotherbroker Follower write Follower Leader write Partition3 Partition3 Partition3 Follower Leader Follower HPC Lab-CSE-HCMUT 36" }, { "page_index": 107, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_037.png", "page_index": 107, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:27+07:00" }, "raw_text": "ZooKeeper (1) Kafka Architecture Consumer Group Kafka Cluster Producer-1 Broker-1 Consumer-1 Push Pull messages messages Broker-2 Producer-2 Consumer-2 Broker-3 Producer-3 Consumer-3 Fetch Update kafka offset broker Zookeeper id HPC Lab-CSE-HCMUT 37" }, { "page_index": 108, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_038.png", "page_index": 108, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:31+07:00" }, "raw_text": "ZooKeeper (2) Zookeeper is a centralized service to maintain naming and configuration data. provide flexible and robust synchronization within distributed systems Zookeeper keeps track of status of the Kafka brokers, topics partitions,... 
and notify all changes to Kafka Kafka Cluster Kafka cannot run without Zookeeper! Kafka Producer Topic A Kafka Consumer Kafka Producer Kafka Consumer Topic B Kafka Producer Kafka Consumer APACHE ZooKeeper Kafka Producer Kafka Consumer Zookeeper Cluster Consumer Metadata HPC Lab-CSE-HCMUT 38" }, { "page_index": 109, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_039.png", "page_index": 109, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:37+07:00" }, "raw_text": "ZooKeeper (3) ACL: access control list To ensure the coordination and synchronization of Kafka brokers, ZooKeeper maintains a list (or directory) that contains information about all the brokers currently functioning within the Kafka cluster. This list is known as \"Cluster Membership\" and is essentially a registry of active brokers. It contains details such as the broker's ID, network address, and other relevant metadata. How Zookeeper is helping Kafka: Kafka Brokers' state & quotas: Zookeeper determines the state of all brokers of the cluster, following the replication option. It also keeps track of how much data is each client allowed to read and write Configuration of Topics: Zookeeper keeps all configuration regarding all the topics including the list of existing topics, the number of partitions for each topic, the location of all the replicas, list of configuration overrides for all topics, and which node is the preferred leader, etc. 
Access Control Lists: Access control lists or ACLs for all the topics are also maintained within Zookeeper Cluster membership: Zookeeper also maintains a list of all the brokers that are functioning at any given moment and are a part of the cluster Controller Election: If a node for some reason is shutting down, Zookeeper will select one of working replicas to act as partition leaders Consumer Offsets and Registry: Zookeeper keeps all information about how many messages Kafka consumer consumes HPC Lab-CSE-HCMUT 39" }, { "page_index": 110, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_040.png", "page_index": 110, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:42+07:00" }, "raw_text": "Once Kafka receives the messages from producers, it forwards these messages to the consumers Consumer will receive the message and process it Once the messages are processed, consumer will send an acknowledgement to the Kafka broker Once Kafka receives an acknowledgement, it changes the offset to the new value and updates it in the Zookeeper. Since offsets are maintained in the Zookeeper, the consumer can read next message correctly even during server outrages (server ngng hot ng) This above flow will repeat until the consumer stops the request Consumer has the option to rewind/skip to the desired offset of a topic at any time and read all the subseguent messages. 
HPC Lab-CSE-HCMUT 40" }, { "page_index": 111, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_041.png", "page_index": 111, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:47+07:00" }, "raw_text": "Workflow of consumer g group (1) Producers send message to a topic in a regular interval Kafka stores all messages in the partitions configured for that particular topic similar to the earlier scenario A single consumer subscribes to a specific topic, assume Topic-01 with Group ID as Group-1 Kafka interacts with the consumer in the same way as Pub-Sub Messaging until new consumer subscribes the same topic, Topic-01 with the same Group ID as Group-1 Once the new consumer arrives, Kafka switches its operation to share mode and shares the data between the two consumers. This sharing will go on until the number of consumers reach the number of partition configured for that particular topic. HPC Lab-CSE-HCMUT 4I" }, { "page_index": 112, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_042.png", "page_index": 112, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:51+07:00" }, "raw_text": "Once the number of consumer exceeds the number of partitions, the new consumer will not receive any further message until any one of the existing consumer unsubscribes. 
This scenario arises because each consumer in Kafka will be assigned a minimum of one partition and once all the partitions are assigned to the existing consumers, the new consumers will have to wait This feature is also called as Consumer Group. In the same way, Kafka will provide the best of both the systems in a very simple and efficient manner. HPC Lab-CSE-HCMUT 42" }, { "page_index": 113, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_043.png", "page_index": 113, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:36:58+07:00" }, "raw_text": "Producer and Consumer (1) Producers send data to broker to write into topics Producer is provided which broker and partition to store its data ack=1 Data To guarantee the data write operation successfully Broker1 we can set the producer wait for the received? 
Sending Topic-A Producer data data acknowledgement, based on 3 Partition0 yes (Leader) options: Automaticload Balancing data ack=0: Producer do not need to Broker2 know about any ack S Topic-A Partition1 (ISR) ack=1: Producer will wait for the leader ack Broker3 ack=2: Producer will wait for the Topic-A Partition2 acks from leader and replicas (ISR) Receiving Successfull data acknowledgement HPC Lab-CSE-HCMUT 43" }, { "page_index": 114, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_044.png", "page_index": 114, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:02+07:00" }, "raw_text": "Producer and Consumer (2) Producer can use a message key while sending data If a key is provided, all message with a same key will be stored at a same partition Otherwise, the data is written round robin at brokers Use message key to make sure your data is ordered for a specific field (ex: user_id) Broker r cluster of machines - key A partition A Consumer Producer key B key C Consumer partition B - partition C HPC Lab-CSE-HCMUT" }, { "page_index": 115, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_045.png", "page_index": 115, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:08+07:00" }, "raw_text": "Producer and Consumer (3) Consumer retrieve data of a topic from brokers Consumer is also provided which broker and partition to pull data Record is read in order within each partition Consumers can read messages starting from any offset point they 
choose 1 Broker 1 Consumer1 Topic-T 4 Reading PartitionO data inorder Broker2 Topic-T 2 Partition1 Reading data simultaneously inorder Consumer2 Broker3 Topic-T 23 4 Partition2 HPC Lab-CSE-HCMUT 45" }, { "page_index": 116, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_046.png", "page_index": 116, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:13+07:00" }, "raw_text": "Producer and Consumer (4) Consumers are organized into Consumer Group (using group name retrieve data from exclusive partitions Consumer Group -> Guarantee each new message will be Consumer Producer record] record1 processed by only one consumer of a consumer Topic A recordi Consumer group Consumer Group Example: NOTIFICATION GROUP Consumer DATABASE PERSIST GROUP Consumer Group Producer retord1 Note: If consumers > partitions, some consumers record1 Topic B Cord2 record2 Consumer re might not receive anything Producer katka A distrib.r.ed tfrc:c.r ing platknrri HPC Lab-CSE-HCMUT 46" }, { "page_index": 117, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_047.png", "page_index": 117, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:18+07:00" }, "raw_text": "Producer and Consumer (5) Each consumer group has Consumer Offsets, containing current reading offsets After retrieving data successfully, the consumer will update the corresponding offset Producers We can find these offsets at the topic named consumer offset writes A failure consumer can continue reading 2 456 
8 0 1 9 current data after restarting 0 reads Consumer A Consumer B (offset=9) (offset=11) HPC Lab-CSE-HCMUT 47" }, { "page_index": 118, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_048.png", "page_index": 118, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:24+07:00" }, "raw_text": "Semantics Messages are going over a network to get to their next There are 3 delivery semantics to commit the new offset: destination, maybe even the internet. What happens if there is a Exactly one: networking problem? Guarantees that all messages will always be delivered exactly once Producer Broker Consumer Only possible through the Streams API 1 3 6 in Apache Kafka At most once: Message queue Offsets are updated as soon as the message is retrieved Will lose current message if there are any exceptions while processing At least once (preferred): Offsets are updated after the message is processed successfully If there are any errors, the message will be read again until operation successfully -> can cause infinite loop A message can be processed twice or more -> Cause multiple effects (ex: duplicated notifications, emails) HPC Lab-CSE-HCMUT 48" }, { "page_index": 119, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_049.png", "page_index": 119, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:29+07:00" }, "raw_text": "Kafka advantages High-throughput Low latency Fault-Tolerant Durability Kafka architecture: Kafka operations with commands: 
Scalability Variety of use cases Distributed Real-time handling Message broker capabilities High concurrency Persistent by default Consumer friendly Batch handling capable (ETL-like functionality) HPC Lab-CSE-HCMUT" }, { "page_index": 120, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_050.png", "page_index": 120, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:33+07:00" }, "raw_text": "Kafka disadvantages No complete set of monitoring tools Issues with message tweaking (the content of a message cannot be modified; it can only be deleted) It can perform quite well if the message is unchanged because it uses the capabilities of the system No support for wildcard topic selection Kafka monitoring - methods & tools: Lack of pace There can be a problem because of the lack of pace, while APIs which are needed by other languages are maintained by different individuals and corporates Reduces performance The brokers and consumers start compressing these messages as the size increases Behaves clumsily when the number of queues in a Kafka cluster increases Lacks some messaging paradigms Some of the messaging paradigms are missing in Kafka, like request/reply and point-to-point queues HPC Lab-CSE-HCMUT 50" }, { "page_index": 121, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_051.png", "page_index": 121, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:38+07:00" }, "raw_text": "Kafka applications Metrics: Kafka is often used for operational monitoring data; This
involves aggregating statistics from distributed applications to produce centralized feeds of operational data Log Aggregation Solution: Kafka can be used across an organization to collect logs from multiple services and make them available in a standard format to multiple consumers Stream Processing: Popular frameworks such as Storm and Spark Streaming read data from a topic, process it, and write processed data to a new topic where it becomes available for users and applications; Kafka's strong durability is also very useful in the context of stream processing HPC Lab-CSE-HCMUT 51" }, { "page_index": 122, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_2_b/slide_052.png", "page_index": 122, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:42+07:00" }, "raw_text": "Kafka summary Kafka Cluster: topics, partitions, replication, partition leader & in-sync replicas (ISR), offsets topic, leader/follower. Producers: round robin, key based ordering, acks strategy. Consumers: consumer offsets, consumer groups, at least once, at most once. Zookeeper: broker management. [Diagram: Source Systems -> Producers -> Broker 101, Broker 102, Broker 109 -> Consumers -> Target Systems] HPC Lab-CSE-HCMUT 52" }, { "page_index": 123, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_001.png", "page_index": 123, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:46+07:00" }, "raw_text": "Distributed Systems & Fog/Edge computing Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer
Science and Engineering HCMC University of Technology HPC Lab-CSE-HCMUT" }, { "page_index": 124, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_002.png", "page_index": 124, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:37:51+07:00" }, "raw_text": "Many applications/services Smart cities Many machines Internet, Intranet: network [Infographic: Libelium Smart World] http://www.libelium.com/libelium-smart-world-infographic-smart-cities-internet-of-things/ HPC Lab-CSE-HCMUT -1.2-" }, { "page_index": 125, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_003.png", "page_index": 125, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:38:02+07:00" }, "raw_text": "[Infographic: FUTURE FARMS - survey drones, fleet of agribots, farming data, texting cows, smart tractors; only the panel titles are legible in the scan]" }, { "page_index": 126, "chapter_num": 3, "source_file":
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_004.png", "page_index": 126, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:38:08+07:00" }, "raw_text": "Figure 4: The Internet of Things and Services - Networking people, objects and systems. [Diagram: Internet of People (Social Web), Internet of Things, Internet of Services (Business Web), and CPS platforms; applications: Smart Grid, Smart Factory, Smart Building, Smart Home] Source: Bosch Software Innovations 2012 HPC Lab-CSE-HCMUT" }, { "page_index": 127, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_005.png", "page_index": 127, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:38:15+07:00" }, "raw_text": "INDUSTRIAL IoT DATA PROCESSING LAYER STACK CLOUD LAYER (slower): big data processing, business analytics/intelligence, business logic, data warehousing. FOG LAYER: local network data analysis & reduction, fog nodes/servers, control response, virtualization/standardization. EDGE LAYER (faster): large-volume real-time data processing at source/on premises, data visualization, industrial PCs, embedded systems, gateways, micro data storage, sensors & controllers (data origination). Processing speed/response time increases from cloud to edge. Source: https://www.winsystems.com/cloud-fog-and-edge-computing-whats-the-difference/ HPC Lab-CSE-HCMUT 5" }, { "page_index": 128, "chapter_num": 3, "source_file":
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_006.png", "page_index": 128, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:38:19+07:00" }, "raw_text": "Edge computing . Computation takes place at the edge of a device's network, which is known as edge computing . A computer is connected with the network of the device; it processes the data and sends the data to the cloud in real-time . That computer is known as \"edge computer\" or \"edge node\" . It is not a physical entity; it is a logical or virtual processing entity . Data is processed and transmitted to the devices instantly . Edge nodes transmit all the data captured or generated by the device regardless of the importance of the data. HPC Lab-CSE-HCMUT 6" }, { "page_index": 129, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_007.png", "page_index": 129, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:38:23+07:00" }, "raw_text": "Fog computing Processing scope: Edge node: edge nodes perform localized data processing and immediate response tasks, often in real-time or near-real-time, for data generated at the edge. Fog node: fog nodes handle more extensive data processing tasks, including aggregation, analysis, and filtering, for a group of edge nodes or devices within a specific geographical area. . Fog computing is an extension of cloud computing . It is a layer in between the edge and the cloud . When edge computers send huge amounts of data to the cloud, fog nodes receive the data and analyze what's important.
Then the fog nodes transfer the important data to the cloud to be stored, and delete the unimportant data or keep it with themselves for further analysis. In this way, fog computing saves a lot of space in the cloud and transfers important data quickly HPC Lab-CSE-HCMUT" }, { "page_index": 130, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_008.png", "page_index": 130, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:38:31+07:00" }, "raw_text": "Fog (including Edge) computing Fog technology complements the role of cloud computing and distributes the data processing at the edge of the network, which provides faster responses to application queries and saves the network resources. Fog computing model: fog nodes at T1, T2, T3, etc. levels, each with storage, computing, network and control, sit between the cloud and the sensors, actuators, mobile phones, tablets, vehicles and smart devices. Benefits of Fog computing: Move data to the best place for processing; Optimize latency; Conserve network bandwidth; Collect and secure data. HPC Lab-CSE-HCMUT 8" }, { "page_index": 131, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_009.png", "page_index": 131, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:38:37+07:00" }, "raw_text": "Cloud is the centralized storage situated further from the endpoints than any other type of storage.
This explains the highest latency, bandwidth cost, and network requirements. On the other hand, cloud is a powerful global solution that can handle huge amounts of data and scale effectively by engaging more computing resources and server space. It works great for big data analytics, long-term data storage, and historical data analysis, with the cloud handling data that doesn't need to be processed on the go. At the same time, fog is placed closer to the edge. If necessary, it engages local computing and storage resources for real-time analytics and quick response to events. Just like edge, fog is decentralized, meaning that it consists of many nodes. However, unlike edge, fog has a network architecture: fog nodes are connected with each other and can redistribute computing and storage to better solve given tasks. Edge, in contrast, performs computing and stores some (only limited) volume of data directly on devices, applications, and edge gateways. It usually has a loosely connected structure where edge nodes work with data independently. This is what differentiates edge from network-based fog. Here's a cloud vs. fog vs. edge computing comparison chart that gives a quick overview of these and other differences between these approaches. HPC Lab-CSE-HCMUT 9" }, { "page_index": 132, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_010.png", "page_index": 132, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:38:45+07:00" }, "raw_text": "Turn off the cloud data center => fog cannot work well, while edge can work normally (without cloud services). No. / Edge computing / Fog computing: 1 Edge: Less scalable than fog computing. Fog: Highly scalable when compared to edge computing. 2 Edge: Billions of nodes are present. Fog:
Millions of nodes are present. 3 Edge: Nodes are installed far away from the cloud. Fog: Nodes are installed closer to the cloud (the remote database where data is stored). 4 Edge computing is a subdivision of fog computing. Fog computing is a subdivision of cloud computing. 5 Edge: The bandwidth requirement is very low, because data comes from the edge nodes themselves. Fog: The bandwidth requirement is high, as data originating from edge nodes is transferred to the cloud. 6 Edge: Operational cost is higher. Fog: Operational cost is comparatively lower. 7 Edge: High privacy; attacks on data are very low. Fog: The probability of data attacks is higher. 8 Edge: Edge devices are the inclusion of the IoT devices or the client's network. Fog: Fog is an extended layer of cloud. 9 Edge: The power consumption of nodes is low. Fog: The power consumption of nodes is high. 10 Edge: Edge computing helps devices to get faster results by processing the data received from the devices locally. Fog: Fog computing helps in filtering important information from the massive amount of data collected from the device and saves it in the cloud by sending the filtered data.
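The fog behaviour in the last row (filter the important information out of the mass of collected data and send only the filtered part to the cloud) can be sketched in a few lines. A minimal illustrative sketch, assuming each reading carries a precomputed importance score (the scoring itself is outside these slides):

```python
# Illustrative fog-node filter: edge readings arrive with an "importance"
# score; only important ones are forwarded to the cloud, the rest are kept
# locally, saving cloud storage and network bandwidth.

def fog_filter(readings, threshold):
    to_cloud, kept_locally = [], []
    for r in readings:
        if r["importance"] >= threshold:
            to_cloud.append(r)       # forwarded for long-term storage
        else:
            kept_locally.append(r)   # retained at the fog node (or dropped)
    return to_cloud, kept_locally

readings = [
    {"sensor": "temp-1", "value": 21.5, "importance": 0.2},
    {"sensor": "temp-1", "value": 85.0, "importance": 0.9},  # anomaly
    {"sensor": "hum-1",  "value": 40.0, "importance": 0.1},
]
to_cloud, local = fog_filter(readings, threshold=0.5)
print(len(to_cloud), len(local))     # 1 2
```

Only one of three readings crosses the wide-area link here, which is the bandwidth-saving effect the comparison table attributes to fog.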
HPC Lab-CSE-HCMUT 10" }, { "page_index": 133, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_011.png", "page_index": 133, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:38:49+07:00" }, "raw_text": "Cloudlets A cloudlet is a mobility-enhanced small-scale cloud datacenter that is located at the edge of the Internet. The main purpose of the cloudlet is supporting resource-intensive and interactive mobile applications by providing powerful computing resources to mobile devices with lower latency. It is a new architectural element that extends today's cloud computing infrastructure. It represents the middle tier of a 3-tier hierarchy: mobile device - cloudlet - cloud. [Diagram: clients reach the cloudlet over the local area network (LAN); the cloudlet reaches the cloud over the wide area network (WAN)] HPC Lab-CSE-HCMUT" }, { "page_index": 134, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_012.png", "page_index": 134, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:38:55+07:00" }, "raw_text": "Edge computing paradigms comparison [Table: Cloud computing / Cloudlets / Fog computing / Mobile edge computing] Context-awareness: No / Low / Medium / High. Geo-distribution: Centralized / Distributed / Distributed / Distributed. Latency: High / Low / Low / Low. Mobility support: No or Limited / Yes / Yes / Yes. Distance: Multi hop / Single hop / Single hop or Multi hop / Single hop. Scalability: Yes / Yes / Yes / Yes. Flexibility: Yes / Yes / Yes / Yes. Deployment cost: High / Low / Low / High. HPC Lab-CSE-HCMUT 12" }, { "page_index": 135, "chapter_num": 3, "source_file":
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_013.png", "page_index": 135, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:39:01+07:00" }, "raw_text": "From cloud to fog to edge Cloud Computing: SaaS, Big Data Analytics, Scalability, Resource Pooling, Elastic Compute. Fog Computing: Device Fog Federation, Dedicated Secure Access Management, DIM Analytics, App Hosting, IaaS and PaaS, Data Service, Data Ownership Protection, Secure Multi-Cloud interworking. Edge Computing: Embedded Real-Time Control & HA Communication, OS. HPC Lab-CSE-HCMUT 13" }, { "page_index": 136, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_014.png", "page_index": 136, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:39:09+07:00" }, "raw_text": "Evolution of edge computing [Timeline figure] 1960-1989: Mainframe Computing, Mini Computing (Information Management System, Virtual Machines, Customer Information Control System, Multiple Virtual Storage) - deployed at network core, centralized architecture, high processing power, high storage, no support for mobility, industrial usage; mini computing with low storage and low processing power. 1990-2006: Desktop Computing, Client Server Computing (AS400, .Net, Relational Database, J2EE, WebSphere) - centralized control, easier management, increased security. 2007-2008: Cloud Computing, Mobile Cloud Computing (iOS, Android) - deployed at network core, centralized architecture, high latency, high scalability, elastic services, support for mobility. 2009-2019: Edge Computing (Cloudlets, Fog Computing, Mobile Edge Computing) - deployed at network edge, low latency, mobility support, average scalability, elastic services, context-awareness, high complexity in management. HPC Lab-CSE-HCMUT 14" }, { "page_index": 137, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_015.png", "page_index": 137, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:39:15+07:00" }, "raw_text": "Smart irrigation [Diagram: soil moisture and temperature sensors feed data collection nodes and a gateway; an in-stream analytics node and a data collection centre perform analytics, batch processing and app storage, and control the pumps] Scenario of soil moisture Smart irrigation HPC Lab-CSE-HCMUT 15" }, { "page_index": 138, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_016.png", "page_index": 138, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:39:21+07:00" }, "raw_text": "Challenges: Low latency and location awareness; Wide-spread geographical distribution; Very large number of nodes; Predominant role of wireless access; Strong presence of streaming; Real-time applications; Heterogeneity; Mobility. [Diagram: smart bus example - GPS positioning data, passenger counting, camera videos; pre-processing and video processing produce bus number, bus position and available-seats data; the cloud performs big data analytics and long-term storage] HPC Lab-CSE-HCMUT 16" }, { "page_index": 139, "chapter_num": 3, "source_file":
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_017.png", "page_index": 139, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:39:30+07:00" }, "raw_text": "Smart Bus System: Edge/Fog/Cloud computing [Pipeline over incoming IoT continuous data streams] Edge: data ingestion of raw data, then data cleaning, data filtering, data statistics and contextualization produce cleaned data, extracted data, contextualized data and descriptive knowledge (statistical results) - answering: What is the problem with smart parking in Saint John? Fog: data query, data aggregation and data clustering over \"Occupied\" and \"Empty\" events (with data ingestion and data duplication handling) produce aggregated data and diagnostic insights (clusters), such as parking usage and frequency patterns - answering: Why are these an issue in Saint John? Cloud: data prediction creation over occupied & empty event data produces predictive knowledge (predicted results) - answering: What could be improved in the future?
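The edge/fog/cloud division of labour in this smart-parking pipeline can be sketched as chained stages. All function names and data shapes below are illustrative assumptions, not part of any framework:

```python
# Illustrative edge -> fog -> cloud pipeline for parking events.

def edge_clean(raw_events):
    # data cleaning/filtering at the edge: drop malformed readings
    return [e for e in raw_events
            if e.get("spot") and e.get("state") in ("occupied", "empty")]

def fog_aggregate(events):
    # data aggregation at the fog: per-spot occupancy counts
    counts = {}
    for e in events:
        if e["state"] == "occupied":
            counts[e["spot"]] = counts.get(e["spot"], 0) + 1
    return counts

def cloud_predict(counts):
    # toy "predictive knowledge": rank spots by observed demand
    return sorted(counts, key=counts.get, reverse=True)

raw = [
    {"spot": "A", "state": "occupied"}, {"spot": "A", "state": "occupied"},
    {"spot": "B", "state": "occupied"}, {"spot": "B", "state": "empty"},
    {"spot": None, "state": "occupied"},     # malformed, dropped at the edge
]
ranking = cloud_predict(fog_aggregate(edge_clean(raw)))
print(ranking)    # ['A', 'B']
```

Each tier only passes the reduced form of the data upward, mirroring the descriptive -> diagnostic -> predictive progression on the slide.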
HPC Lab-CSE-HCMUT 17" }, { "page_index": 140, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_018.png", "page_index": 140, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:39:34+07:00" }, "raw_text": "Data streaming architecture [Pipeline: Data source -> Collection tier -> Message queuing tier (Pub/Sub, Kafka) -> Analysis tier (Spark) -> In-memory data store -> Data access tier -> Applications (ThingsBoard)] Sometimes we need to reach back to get data that has just been analyzed. Long term store: we want to persist analyzed data for future use. HPC Lab-CSE-HCMUT 18" }, { "page_index": 141, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_019.png", "page_index": 141, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:39:39+07:00" }, "raw_text": "ThingsBoard [Diagram: IoT devices reach the ThingsBoard IoT Gateway over MQTT; a message queue feeds the ThingsBoard Core Services and Rule engine; real-time dashboards serve end users over HTTP/Websockets; devices and external systems connect via MQTT/HTTP/CoAP] Collection data node Supports many protocols Rule engine Pre-processing: sampling, filtering, integration Message queue + etc.
=> A small scale solution (does not need Kafka + Spark) HPC Lab-CSE-HCMUT 19" }, { "page_index": 142, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_020.png", "page_index": 142, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:39:42+07:00" }, "raw_text": "PUSH/PULL PUSH: the data is PUSHed as fast as possible from the collection tier into the message queuing tier. PULL: the data is PULLed as fast as possible by the analysis tier. [Diagram: Collection tier -> Message queuing tier -> Analysis tier] HPC Lab-CSE-HCMUT 20" }, { "page_index": 143, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_021.png", "page_index": 143, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:39:49+07:00" }, "raw_text": "Non-streaming & streaming system A traditional DBMS (RDBMS, Hadoop, HBase, Cassandra, and so on): the query is sent to the data; in those non-streaming systems the data is at rest, and we query it for answers; the DBMS controls the storage and processes the query; the result is returned to the application. A streaming system: the data is sent "through" the query; in-flight data: the data is moved through the query; continuous query model: the query is constantly being evaluated as new data arrives; the result is streamed out to the application. In summary, the important concepts to remember are that in traditional systems, data is typically at rest and is queried to answer questions. In streaming systems, data is often in-flight, and continuous queries are evaluated as new data arrives.
This allows streaming systems to process data in real-time and respond quickly to incoming data. [Figure: Traditional DBMS vs Streaming System] HPC Lab-CSE-HCMUT 21" }, { "page_index": 144, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_022.png", "page_index": 144, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:39:55+07:00" }, "raw_text": "Comparison of traditional DBMS to streaming system Query model - DBMS: queries are based on a one-time model and a consistent state of the data; in a one-time model, the user executes a query and gets an answer, and the query is forgotten; this is a pull model. Streaming system: the query is continuously executed based on the data that is flowing into the system; a user registers a query once, and the results are regularly pushed to the client. Changing data - DBMS: during down time, the data cannot change. Streaming system: many stream applications continue to generate data while the streaming analysis tier is down, possibly requiring a catch-up following a crash. Query state - DBMS: if the system crashes while a query is being executed, it is forgotten; it is the responsibility of the application (or user) to re-issue the query when the system comes back up. Streaming system: registered continuous queries may or may not need to continue where they left off; many times it is as if they never stopped in the first place.
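The push-based continuous query model from this comparison can be contrasted with a one-time pull query in a few lines of plain Python (a sketch, not any particular streaming engine):

```python
# One-time query over data at rest vs. a continuous query that is
# re-evaluated as each new record arrives (data moves "through" the query).

def one_time_max(data_at_rest):
    return max(data_at_rest)                 # pull model: ask once, forget

def continuous_max(stream, threshold):
    best, alerts = None, []
    for value in stream:                     # evaluated on every arrival
        best = value if best is None else max(best, value)
        if best > threshold:
            alerts.append((value, best))     # result pushed to the client
    return best, alerts

print(one_time_max([3, 9, 4]))               # 9
best, alerts = continuous_max(iter([3, 9, 4, 12]), threshold=10)
print(best, len(alerts))                     # 12 1
```

The registered query (`continuous_max`) keeps running state across arrivals, which is exactly the "query state" that a streaming system may need to recover after a crash.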
HPC Lab-CSE-HCMUT 22" }, { "page_index": 145, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_023.png", "page_index": 145, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:02+07:00" }, "raw_text": "(1) Tools: (Apache) Spark Streaming, Storm, Flink, and Samza. Streaming manager: a component that your streaming application is submitted to; this is similar to how Hadoop MapReduce works: your application is sent to a node in the cluster that executes your application; the manager controls the lifecycle of the stream processors. Stream processors: separate nodes in the cluster execute your streaming algorithms. Data sources are the input to the streaming algorithms. Diagram: the streaming application is submitted through the application driver to the streaming manager, which controls the stream processors; each stream processor (where your algorithm runs) reads from its data source(s) - the streaming data source (Twitter, IoT, Network, File...)
and output store. HPC Lab-CSE-HCMUT 23" }, { "page_index": 146, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_024.png", "page_index": 146, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:07+07:00" }, "raw_text": "(2) Application driver: with some streaming systems, this will be the client code that defines your streaming program and communicates with the streaming manager. Streaming manager: the streaming manager has the general responsibility of getting your streaming job to the stream processor(s); in some cases it will control or request the resources required by the stream processors. Stream processor: the place where your job runs; although this may take many shapes based on the streaming platform in use, the job remains the same: to execute the job that was submitted. Data source(s): this represents the input and potentially the output data from your streaming job. With some platforms your job may be able to ingest data from multiple sources in a single job, whereas others may only allow ingestion from a single source. HPC Lab-CSE-HCMUT 24" }, { "page_index": 147, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_025.png", "page_index": 147, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:15+07:00" }, "raw_text": "Apache Spark The de facto platform for general-purpose distributed computation. Programming languages: Java, Scala, Python, and R. Modules: Spark Streaming, MLlib (machine learning), SparkR (integration with R), GraphX (for graph processing). Spark StreamingContext is the driver, contained in your program (the Spark client); it handles scheduling the jobs to run on the workers. A job in Spark Streaming is the logic of your program that's bundled and passed to the Spark workers. Spark workers run on any number of computers (from one to thousands) and are where your job (your streaming algorithm) is executed; they receive data from an external data source (Twitter, IoT, Network, File...) and the output store, and communicate with the Spark StreamingContext that's running as part of the driver. HPC Lab-CSE-HCMUT 25" }, { "page_index": 148, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_026.png", "page_index": 148, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:19+07:00" }, "raw_text": "Apache Spark Stack: Spark SQL (SQL queries), Streaming, MLlib (machine learning), GraphX (graph processing) on top of the Spark Core API (Scala, Python, Java, R); the compute engine handles memory management, task scheduling, fault recovery, and interaction with cluster management; underneath sit the Cluster Resource Manager and Distributed Storage; structured & unstructured data analytics. HPC Lab-CSE-HCMUT 26" }, { "page_index": 149, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_027.png", "page_index": 149, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0",
"timestamp": "2025-11-01T09:40:23+07:00" }, "raw_text": "Message delivery semantics At-most-once - a message may get lost, but it will never be processed Simple a second time => s At-least-once - a message will never be lost, but it may be processed more than once If every time the streaming job receives the same message, it produces the same result => the duplicate-messages situation Exact/y-once - a message is never lost and will be processed only once Detect and ignore duplicates HPC Lab-CSE-HCMUT 27" }, { "page_index": 150, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_028.png", "page_index": 150, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:26+07:00" }, "raw_text": "Algorithms for (streaming) data analytics HPC Lab-CSE-HCMU l 28" }, { "page_index": 151, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_029.png", "page_index": 151, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:32+07:00" }, "raw_text": "Streaming data queries Ad-hoc queries - These are queries asked one time about a stream Ex: What is the maximum value seen so far in the stream? 
This style of query is the same kind you would execute against an RDBMS Continuous queries - These are queries that are, in essence, asked about the stream at all times Ex: Determine the maximum value ever seen in the stream, emitted every five minutes, and generate an alert if it exceeds a given threshold Product / Query language support: Apache Storm - as of version 1.1.0 Storm has had SQL support (http://storm.apache.org/releases/1.1.0/storm-sql.html); as of this writing it is still considered experimental and not ready for production use Apache Samza - since version 0.9 there has been a JIRA open for adding query language support; as of this writing that JIRA is still open, and Samza does not have any query language support: https://issues.apache.org/jira/browse/SAMZA-390 Apache Flink - Table API supporting SQL-like expressions (http://ci.apache.org/projects/flink/flink-docs-release-0.9/libs/table.html) Apache Spark Streaming - SparkSQL/Hive language support (http://spark.apache.org/docs/latest/sql-programming-guide.html) HPC Lab-CSE-HCMUT 29" }, { "page_index": 152, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_030.png", "page_index": 152, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:38+07:00" }, "raw_text": "Constraints and relaxations One-pass (a single pass over the data) o We must assume that the data is not being archived and that we only have one chance to process it Many traditional data-mining algorithms are iterative and require multiple passes over the data; this poses a challenge for maintaining and updating predictive models so that they stay suited to new data Concept drift (a change in the underlying concept)
This is a phenomenon that may impact your predictive models. Concept drift may happen over time as your data evolves and various statistical properties of it change Resource constraints o A temporary peak in the data speed or volume => an algorithm may have to drop tuples that can't be processed in time, called load shedding; it may be necessary to drop some data that cannot be processed in real time Domain constraints o Huge collected data => challenges in analytics; there may be domain-specific constraints or regulations that must be complied with HPC Lab-CSE-HCMUT 30" }, { "page_index": 153, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_031.png", "page_index": 153, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:41+07:00" }, "raw_text": "Stream time and event time Stream time is the time at which an event enters the streaming system: Tstream(e) Event time is the time at which the event occurs: Tevent(e) Skew: Tstream(e) > Tevent(e) Time skew = Tstream(e) - Tevent(e) HPC Lab-CSE-HCMUT 31" }, { "page_index": 154, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_032.png", "page_index": 154, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:46+07:00" }, "raw_text": "Windows of time Due to its size and never-ending nature, the stream-processing engine can't keep an entire stream of data in memory Cannot perform traditional batch processing on it A window of data represents a certain amount of data that we can
perform computations on The trigger policy defines the rules a stream-processing system uses to notify our code that it's time to process all the data that is in the window The eviction policy defines the rules used to decide if a data element should be evicted from the window Both policies are driven by either time or the quantity of data in the window. HPC Lab-CSE-HCMUT 32" }, { "page_index": 155, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_033.png", "page_index": 155, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:50+07:00" }, "raw_text": "Sliding window The sliding window technique uses eviction and trigger policies that are based on time The window length represents the eviction policy - the duration of time that data is retained and available for processing The sliding interval defines the trigger policy (Figure: window length and sliding interval shown on a 0-3 second time axis) HPC Lab-CSE-HCMUT 33" }, { "page_index": 156, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_034.png", "page_index": 156, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:40:56+07:00" }, "raw_text": "Sliding window support in popular stream-processing frameworks Framework / Sliding window / Event or stream time / Comments: Spark Streaming / Yes / Stream time / Spark Streaming doesn't allow custom policies. Storm / No / N/A / Storm doesn't provide native support for sliding windowing, but it could be implemented using timers.
Flink / Yes / Both / Flink allows a user to define custom eviction and trigger policies. Samza / No / N/A / Samza doesn't provide direct support for sliding windows. HPC Lab-CSE-HCMUT 34" }, { "page_index": 157, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_035.png", "page_index": 157, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:01+07:00" }, "raw_text": "Tumbling window The eviction policy is always based on the window being full The trigger policy is based on either the count of items in the window (count-based, e.g. count=2) or on time (temporal-based, e.g. time=2) (Figure: count-based and temporal-based tumbling windows shown on a time axis in seconds) HPC Lab-CSE-HCMUT 35" }, { "page_index": 158, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_036.png", "page_index": 158, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:06+07:00" }, "raw_text": "Tumbling window support in popular stream-processing frameworks Framework / Count / Temporal / Comments: Spark Streaming / No / No / Currently you would need to build this. Storm / Yes / Yes / Although Storm does not have the native windowing support, we can easily implement this. Flink / Yes / Yes / Flink has built-in support for both types of tumbling windows. Samza / No / Yes / Samza does not provide direct support for count-based tumbling windows.
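To make the trigger/eviction vocabulary concrete, here is a minimal time-based sliding window in Python. The class name, the callback style, and the numeric timestamps are illustrative assumptions, not the API of any framework in the tables above: the window length drives eviction, the sliding interval drives triggering.

```python
from collections import deque

class SlidingWindow:
    """Time-based sliding window: the window length is the eviction
    policy, the sliding interval is the trigger policy."""

    def __init__(self, window_length, sliding_interval, on_trigger):
        self.window_length = window_length
        self.sliding_interval = sliding_interval
        self.on_trigger = on_trigger        # callback fired on each trigger
        self.items = deque()                # (timestamp, value) pairs
        self.next_trigger = sliding_interval

    def add(self, timestamp, value):
        self.items.append((timestamp, value))
        # Eviction policy: drop elements older than the window length.
        while self.items and self.items[0][0] <= timestamp - self.window_length:
            self.items.popleft()
        # Trigger policy: fire once per elapsed sliding interval.
        while timestamp >= self.next_trigger:
            self.on_trigger([v for _, v in self.items])
            self.next_trigger += self.sliding_interval

# Usage: a 2-second window that triggers every second.
results = []
w = SlidingWindow(window_length=2, sliding_interval=1, on_trigger=results.append)
for t, v in [(0.5, 'a'), (1.0, 'b'), (1.5, 'c'), (2.5, 'd')]:
    w.add(t, v)
```

At the second trigger, 'a' (stamped 0.5) has aged out of the 2-second window, so only the newer elements are handed to the callback.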
HPC Lab-CSE-HCMUT 36" }, { "page_index": 159, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_037.png", "page_index": 159, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:11+07:00" }, "raw_text": "Algorithms Sampling (since the stream is continuous and large and cannot be stored in full, a common approach is to sample a small portion of the data for computation or analysis; the sample gives an overview of the data without storing all of it) o Since we cannot store the entire stream, one obvious approach is to store a sample Membership (an existence check: has a specific element appeared in the stream before? useful for tracking occurrences of particular events) o Has this stream element ever occurred in the stream before? Frequency o How many times has stream element X occurred?
Counting distinct elements o Count the distinct items in a stream, but remember we are constrained by memory and don't have the luxury of storing the entire stream HPC Lab-CSE-HCMUT 37" }, { "page_index": 160, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_038.png", "page_index": 160, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:16+07:00" }, "raw_text": "Sampling Distributed (big data) sampling Fixed-proportion: sample a fixed proportion of elements in the stream (say 1 in 10) Use a hash function that hashes keys of tuples uniformly into 10 buckets Fixed-size: maintain a random sample of fixed size over a potentially infinite stream Reservoir sampling: Store all the first s elements of the stream in S Suppose we have seen n-1 elements, and now the nth element arrives (n > s) With probability s/n, keep the nth element, else discard it If we picked the nth element, then it replaces one of the s elements in the sample S, picked uniformly at random (choose one element to remove if the new data is accepted) HPC Lab-CSE-HCMUT 38" }, { "page_index": 161, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_039.png", "page_index": 161, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:21+07:00" }, "raw_text": "Membership Has this stream element ever occurred in the stream before?
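The reservoir-sampling steps on the sampling slide translate almost line for line into code. This is a small sketch (the function name and the injectable RNG are assumptions for illustration); it maintains a uniform random sample of fixed size s over a stream of unknown length:

```python
import random

def reservoir_sample(stream, s, rng=random):
    """Maintain a uniform random sample of fixed size s over a stream."""
    sample = []
    for n, item in enumerate(stream, start=1):
        if n <= s:
            sample.append(item)              # store the first s elements in S
        elif rng.random() < s / n:           # keep the nth element w.p. s/n
            sample[rng.randrange(s)] = item  # evict a victim uniformly at random
    return sample
```

A single pass, O(s) memory, and every element seen so far is in the sample with equal probability s/n — exactly the one-pass constraint discussed earlier.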
Bloom filtering o A binary bit array of length m (Target_Bit) associated with a set of k independent hash functions For every item x and every hash function: bit[hi(x)] = 1 MEMBERSHIP of stream element z = AND(bit[h1(z)], bit[h2(z)], ..., bit[hk(z)]) False positive: the filter says the element was present, but it was not False negative: the filter says the element was absent, but it was present (Figure: an item is hashed by h1...h4 into the Target_Bit array of length m) HPC Lab-CSE-HCMUT 39" }, { "page_index": 162, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_040.png", "page_index": 162, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:27+07:00" }, "raw_text": "Spam mail with Bloom filtering (1) y = 1B e-mails (darts) = number of items x = 8B bits (targets) = length of the bit array Probability that a given target won't be hit by any dart (i.e. the bit stays zero)?
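The Bloom-filter scheme just described — a length-m bit array, k hash functions, membership as the AND of the probed bits — can be sketched as follows. Deriving the k hash functions from salted SHA-256 digests is an assumption for illustration, not part of the slides:

```python
import hashlib

class BloomFilter:
    """Bit array of length m probed by k hash functions.
    False positives are possible; false negatives are not."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _indexes(self, item):
        # Illustrative assumption: derive k hashes from salted SHA-256.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1              # bit[h_i(x)] = 1 for all i

    def might_contain(self, item):
        # Membership = AND over all k probed bits.
        return all(self.bits[idx] for idx in self._indexes(item))
```

Because `add` only ever sets bits, any element that was inserted will always answer true — which is why the filter can over-report (false positives) but never under-report.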
Probability that a given dart will not hit a given target is (x-1)/x = 1 - 1/x Probability that none of y darts will hit a given target is ((x-1)/x)^y = (1 - 1/x)^y ~ e^(-y/x) Probability that any given bit will be zero is e^(-y/x) = e^(-1/8) Probability that a given bit will be 1 is 1 - e^(-1/8) = 0.1175, slightly less than 1/8 = 0.125 (the false-positive rate) HPC Lab-CSE-HCMUT 40" }, { "page_index": 163, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_041.png", "page_index": 163, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:31+07:00" }, "raw_text": "S has m members, the array has n bits, and there are k hash functions Targets x = n Number of darts y = km We want the proportion of 0s to be large, so that elements outside S hash to a zero bit at least once Probability of 0 is e^(-y/x) = e^(-km/n) Probability of 1 is 1 - e^(-km/n) Probability of a false positive: (1 - e^(-km/n))^k Choose k to be n/m or less; with k = n/m, e^(-km/n) = e^(-1) = 0.37 = 37% HPC Lab-CSE-HCMUT" }, { "page_index": 164, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_042.png", "page_index": 164, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:35+07:00" }, "raw_text": "Spam mail with Bloom filtering (3) In the previous example the fraction of 1s is 0.1175, which is also the probability of a false positive Use two different hash functions: 2B darts on 8B targets => accuracy increases Probability of zero is e^(-1/4); probability of a false positive is (1 - e^(-1/4))^2 = 0.0493 HPC Lab-CSE-HCMUT 42" }, { "page_index": 165, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_043.png", "metadata": { "doc_type": "slide",
"course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_043.png", "page_index": 165, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:38+07:00" }, "raw_text": "Frequency How many times has stream element X occurred? Count-Min Sketch o A point query: a particular stream element A range query: frequencies in a given range. An inner product query: the join size of two sketches; Ex: we may use this to provide a summarization to this question: What products were viewed after an ad was served? HPC Lab-CSE-HCMUT 43" }, { "page_index": 166, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_044.png", "page_index": 166, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:46+07:00" }, "raw_text": "Count-Min sketch Item 82065728809f4befb24dce98df4a7a9e w numeric arrays, often called counters, the hashing length of each is defined by the length m (m << n items) Each array is indexed starting at 0 and has a range of {0...m - 1} h1 0 Each counter must be associated with a h2 0 W h3 different hash function (h1, h2, h3,...) 
, which must be pairwise independent Count first and compute the minimum next: Count step: hash the item value using the hash function for each respective row and then increment the count of the cell the value hashes to by 1 Min step: the minimum value across the w cells represents the approximate count of the number of times the item was seen This algorithm will never undercount, but could overcount With a width of 8 and a count of 128 (a 2-dimensional array of 8 x 128), the relative error was approximately 1.5% and the probability of the relative error being 1.5% is 99.6% HPC Lab-CSE-HCMUT" }, { "page_index": 167, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_045.png", "page_index": 167, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:51+07:00" }, "raw_text": "Counting distinct elements Count the distinct items in a stream, but remember we are constrained by memory and do not have the luxury of storing the entire stream Bit-pattern-based o Observation of patterns of bits that occur at the beginning of the binary value of each element of the stream.
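The count/min steps above can be sketched as a small class. The MD5-based per-row hashing is an illustrative assumption (any pairwise-independent family would do), and `w`/`m` match the slide's "w arrays of m counters":

```python
import hashlib

class CountMinSketch:
    """w rows of m counters, one hash function per row.
    Never undercounts; may overcount due to collisions."""

    def __init__(self, w, m):
        self.w, self.m = w, m
        self.table = [[0] * m for _ in range(w)]

    def _index(self, row, item):
        # Illustrative assumption: salt MD5 with the row number.
        digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.m

    def add(self, item):
        # Count step: increment one cell in every row.
        for row in range(self.w):
            self.table[row][self._index(row, item)] += 1

    def estimate(self, item):
        # Min step: the smallest cell is an upper bound on the true count.
        return min(self.table[row][self._index(row, item)]
                   for row in range(self.w))
```

Collisions only ever add to a cell, so the minimum over the w rows can exceed the true count but never fall below it — the "never undercount, could overcount" property from the slide.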
Using the bit pattern - more specifically, the leading zeros in the binary representation of a hash of the stream element - the cardinality is determined o LogLog, HyperLogLog, and HyperLogLog++ Order statistics-based The algorithms in this class are based on order statistics, such as the smallest values that appear in a stream MinCount and Bar-Yossef HPC Lab-CSE-HCMUT 45" }, { "page_index": 168, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_3/slide_046.png", "page_index": 168, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:56+07:00" }, "raw_text": "Flajolet-Martin algorithm Each element a is hashed to a sufficiently long bit string h(a) The number of zeros at the end of h(a) is its tail length There are more possible hash values than elements, e.g. 64 bits for URLs (2^64 values) Let r be the maximum tail length seen so far; estimate the number of distinct elements as 2^r As the number of distinct elements increases, the number of different hash values increases, and so does the probability of an unusual hash value ending in many 0s Probability that any given element has tail length at least r is 2^(-r) For m distinct elements in the stream, the probability that none of them has tail length at least r is (1 - 2^(-r))^m = ((1 - 2^(-r))^(2^r))^(m 2^(-r)) ~ e^(-m 2^(-r)) Estimate of m: estimate the cardinality as 2^r / phi, where phi = 0.77351.
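A minimal sketch of the Flajolet-Martin estimate 2^r / phi, with phi = 0.77351 as on the slide; using SHA-1 as the hash, and returning tail length 0 for a hash of zero, are simplifying assumptions for illustration:

```python
import hashlib

def trailing_zeros(n):
    """Tail length: number of zero bits at the end of n's binary form.
    Simplification: a hash value of 0 is treated as tail length 0."""
    if n == 0:
        return 0
    count = 0
    while n % 2 == 0:
        n //= 2
        count += 1
    return count

def flajolet_martin(stream, phi=0.77351):
    """Estimate the number of distinct elements as 2**r / phi, where r is
    the maximum tail length of h(a) seen over the stream."""
    r = 0
    for item in stream:
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16)
        r = max(r, trailing_zeros(h))
    return (2 ** r) / phi
```

Note that duplicates cannot change the estimate: a repeated element hashes to the same value, so it never raises the maximum tail length r — which is exactly why the estimate tracks distinct elements rather than stream length.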
HPC Lab-CSE-HCMUT 46" }, { "page_index": 169, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_001.png", "page_index": 169, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:41:59+07:00" }, "raw_text": "Distributed Systems Virtual time & Global states (a timestamp is one kind of virtual time) Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab - CSE - HCMUT" }, { "page_index": 170, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_002.png", "page_index": 170, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:02+07:00" }, "raw_text": "Contents Time ordering and clock synchronization
Virtual time (logical clock) Distributed snapshot (global state) Consistent/Inconsistent global state Rollback Recovery Debugging & Race messages HPC Lab - CSE - HCMUT 2" }, { "page_index": 171, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_003.png", "page_index": 171, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:08+07:00" }, "raw_text": "Clock synchronization Time is unambiguous in centralized systems o System clock keeps time, all entities use this for time Distributed systems: each node has its own system clock o Crystal-based clocks are less accurate (1 part in a million) o Problem: an event that occurred after another may be assigned an earlier time (Figure: the computer running the compiler sees local clock times 2144-2147 when output.o is created, while the computer running the editor sees 2142-2145 when output.c is created, so output.o can appear older than output.c) HPC Lab - CSE - HCMUT 3" }, { "page_index": 172, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_004.png", "page_index": 172, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:12+07:00" }, "raw_text": "Physical clocks: a primer Accurate clocks are atomic oscillators o 1 s = 9,192,631,770 transitions of the cesium-133 atom Most clocks are less accurate (e.g., mechanical watches) Computers use crystal-based clocks (one part in a million) Results in clock drift How do you tell time?
o Use astronomical metrics (solar day) Universal coordinated time (UTC) - international standard based on atomic time Add leap seconds to be consistent with astronomical time UTC broadcast on radio (satellite and earth) Receivers accurate to 0.1 - 10 ms Need to synchronize machines with a master or with one another HPC Lab - CSE - HCMUT" }, { "page_index": 173, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_005.png", "page_index": 173, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:16+07:00" }, "raw_text": "Clock synchronization Each clock has a maximum drift rate r: 1-r <= dC/dt <= 1+r Two clocks may drift apart by 2r*Dt in time Dt o To limit the skew to d, resynchronize every d/2r seconds (Figure: clock time C against UTC t; a fast clock has dC/dt > 1, a perfect clock dC/dt = 1, a slow clock dC/dt < 1) HPC Lab - CSE - HCMUT 5" }, { "page_index": 174, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_006.png", "page_index": 174, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:20+07:00" }, "raw_text": "Cristian's algorithm (assume Treq and Treply are the same) Synchronize machines to a time server with a UTC receiver Machine P requests time from the server every d/2r seconds o Receives time t from the server; P sets its clock to t + treply, where treply is the time taken to send the reply t to P Improve accuracy by making a series of measurements HPC Lab - CSE - HCMUT 6" }, { "page_index": 175, "chapter_num": 4, "source_file":
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_007.png", "page_index": 175, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:23+07:00" }, "raw_text": "Berkeley algorithm Used in systems without a UTC receiver o Keep clocks synchronized with one another One computer is the master, the others are slaves Master periodically polls slaves for their times Average the times and return differences to the slaves Communication delays compensated as in Cristian's algorithm o Failure of master => election of a new master HPC Lab - CSE - HCMUT 7" }, { "page_index": 176, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_008.png", "page_index": 176, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:29+07:00" }, "raw_text": "Berkeley algorithm (Figure: with machines at 3:00, 2:50, and 3:25, the time daemon computes the average 3:05 and sends each machine its adjustment: +5, +15, and -20) a) The time daemon asks all the other machines for their clock values b) The machines answer c) The time daemon tells everyone how to adjust their clock HPC Lab - CSE - HCMUT 8" }, { "page_index": 177, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_009.png", "page_index": 177, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:35+07:00" }, "raw_text": "Centralized and distributed
approaches (Centralized: the main server is Server A, and all other servers periodically take their time from Server A. Decentralized: every server (Server A, Server B, Server C, ...) participates in sending and receiving times; all collected time values are averaged to synchronize the network. In practice, protocols such as NTP use a decentralized approach to keep computer networks synchronized and accurate.) Both approaches studied thus far are centralized Decentralized algorithms: use resynchronization intervals o Broadcast time at the start of the interval o Collect all other broadcasts that arrive in a period S o Use the average value of all reported times o Can throw away the few highest and lowest values Approaches in use today o rdate: synchronizes a machine with a specified machine o Network Time Protocol (NTP): uses advanced techniques for accuracies of 1-50 ms HPC Lab - CSE - HCMUT 9" }, { "page_index": 178, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_010.png", "page_index": 178, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:38+07:00" }, "raw_text": "Logical clocks For many problems, internal consistency of clocks is important o Absolute time is less important o Use logical clocks Key idea: o Clock synchronization need not be absolute o If two machines do not interact, no need to synchronize them More importantly, processes need to agree on the order in which events occur rather than the time at which they occurred.
HPC Lab - CSE - HCMUT 10" }, { "page_index": 179, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_011.png", "page_index": 179, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:43+07:00" }, "raw_text": "Event ordering Problem: define a total ordering of all events that occur in a system Events in a single-processor machine are totally ordered In a distributed system: o No global clock, local clocks may be unsynchronized o Cannot order events on different machines using local times Key idea [Leslie Lamport] o Processes exchange messages A message must be sent before it is received Send/receive used to order events (and synchronize clocks) HPC Lab - CSE - HCMUT 11" }, { "page_index": 180, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_012.png", "page_index": 180, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:48+07:00" }, "raw_text": "Happened-Before relation If A and B are events in the same process and A executed before B, then A->B If A represents the sending of a message and B is the receipt of this message, then A->B The relation is transitive: (A->B) and (B->C) => A->C The happened-before relation defines a partial ordering on events; events not related by -> are concurrent HPC Lab - CSE - HCMUT 12" }, { "page_index": 181, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file":
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_013.png", "page_index": 181, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:42:52+07:00" }, "raw_text": "Event ordering using HB Goal: define the notion of the time of an event such that - If A->B then C(A) < C(B) - If A and B are concurrent, then C(A) may be less than, equal to, or greater than C(B) Solution: each processor i maintains a logical clock LCi Whenever an event occurs locally at i, LCi = LCi + 1 When i sends a message to j, piggyback LCi When j receives a message from i: if LCj < LCi then LCj = LCi + 1, else do nothing Claim: this algorithm meets the above goals HPC Lab - CSE - HCMUT 13" }, { "page_index": 182, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_014.png", "page_index": 182, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:01+07:00" }, "raw_text": "Lamport's logical clocks (Figure: three processes whose clocks tick at different rates; (a) shows the uncorrected clock values, (b) shows the values after Lamport's correction, where a receive event's clock is advanced past the message's timestamp) HPC Lab - CSE - HCMUT 14" }, { "page_index": 183, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_015.png", "page_index": 183, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:05+07:00" }, "raw_text": "More canonical problems Causality o Vector timestamps Global state and termination detection
Election algorithms HPC Lab - CSE - HCMUT 15" }, { "page_index": 184, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_016.png", "page_index": 184, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:10+07:00" }, "raw_text": "Causality Lamport's logical clocks o If A -> B then C(A) < C(B) o The reverse is not true! Nothing can be said about events by comparing timestamps: if C(A) < C(B), then ??? Need to maintain causality o Causal delivery: if send(m) -> send(n) => deliver(m) -> deliver(n) o Capture causal relationships between groups of processes o Need a timestamping mechanism such that: if T(A) < T(B) then A should have causally preceded B (A happened before B) HPC Lab - CSE - HCMUT 16" }, { "page_index": 185, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_017.png", "page_index": 185, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:15+07:00" }, "raw_text": "Vector Clocks Each process i maintains a vector Vi o Vi[i]: number of events that have occurred at process i o Vi[j]: number of events occurred at process j that process i knows about Update vector clocks as follows o Local event: increment Vi[i] o Send a message: piggyback the entire vector Vi o Receipt of a message: Vj[j] = Vj[j] + 1; the receiver is told how many events the sender knows occurred at another process k: Vj[k] = max(Vj[k], Vi[k]) Homework: convince yourself that if V(A) < V(B), then A causally precedes B HPC Lab - CSE - HCMUT 17" }, { "page_index": 186, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_018.png", "page_index": 186, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:21+07:00" }, "raw_text": "[Figure: vector-clock execution for processes P0, P1, P2, all starting at [0,0,0]: A1=[1,0,0], A2=[2,0,0], A3=[3,3,0], A4=[4,3,3]; B1=[2,1,0], B2=[2,2,0], B3=[2,3,0]; C1=[0,0,1], C2=[2,2,2], C3=[2,2,3].] HPC Lab - CSE - HCMUT 18" }, { "page_index": 187, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_019.png", "page_index": 187, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:27+07:00" }, "raw_text": "Vector Clocks: Happened-Before Event A in process i has the vector VA and event B in process j has the vector VB. A -> B iff VB[i] >= VA[i] (stand at B and look back at A); B -> A iff VA[j] >= VB[j] (stand at A and look back at B); A || B iff neither A -> B nor B -> A. [Figure: orderings derived from the vectors above: A1 -> A2, A2 -> A3, A3 -> A4; B1 -> B2, B2 -> B3; C1 -> C2, C2 -> C3; A2 -> B1, B2 -> C2, B3 -> A3, C3 -> A4; A1 -> B1, A2 -> B2, B1 -> C2, B2 -> C3, C2 -> A4; A1 -> B2, A1 -> C2, A1 -> C3, A1 -> B3, B1 -> C3, B1 -> A4, B3 -> A4, C1 -> A4; concurrent pairs: A1 || C1, A2 || C1, A3 || C1, B1 || C1, B2 || C1, B3 || C1, B3 || C2, A3 || C2, ...]
HPC Lab - CSE - HCMUT 20" }, { "page_index": 189, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_021.png", "page_index": 189, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:36+07:00" }, "raw_text": "[Figure: event diagram for A1-A4, B1-B3, C1-C3 with derived orderings: A1 -> A2, A2 -> A3, A3 -> A4; B1 -> B2, B2 -> B3; C1 -> C2, C2 -> C3; A2 -> B1, B2 -> C2, B3 -> A3, C3 -> A4; A1 -> B1, A2 -> B2, B1 -> C2, B2 -> C3, C2 -> A4, B3 -> A4; A1 -> B2, A1 -> C2, A1 -> C3, A1 -> B3, B1 -> C3, B1 -> A4; concurrent pairs: A1 || C1, A2 || C1, A3 || C1, B1 || C1, B2 || C1, B3 || C1, B3 || C2, A3 || C2, ...] HPC Lab - CSE - HCMUT 21" }, { "page_index": 190, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_022.png", "page_index": 190, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:41+07:00" }, "raw_text": "[Figure: the same event diagram on P0, P1, P2 partitioned, relative to one event, into The Past, The Future, and two Concurrency regions; derived orderings and concurrent pairs as on the previous slide: A1 -> A2, ...; A1 || C1, ...; B3 || C2, A3 || C2, ...]
HPC Lab - CSE - HCMUT 22" }, { "page_index": 191, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_023.png", "page_index": 191, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:46+07:00" }, "raw_text": "Global State [Annotation: a checkpoint system call in Linux produces an image of the process: its code (executable commands), data, and stack.] Global state of a distributed system o Local state of each process o Messages sent but not received (state of the queues) Many applications need to know the state of the system o Failure recovery, distributed deadlock detection Problem: how can you figure out the state of a distributed system? o Each process is independent o No global clock or synchronization Distributed snapshot: a consistent global state HPC Lab - CSE - HCMUT 23" }, { "page_index": 192, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_024.png", "page_index": 192, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:51+07:00" }, "raw_text": "Consistent/Inconsistent cuts [Figure: processes P1-P3 exchanging messages m1, m2, m3 over time, with (a) a consistent cut and (b) an inconsistent cut.] In (b) the sender of m2 cannot be identified with this cut: m2 is sent in the future of the cut but received in its past (an orphan message), so the cut is inconsistent. An in-transit message, sent before the cut and received after it, is allowed in a consistent cut. (a) A consistent cut (b) An inconsistent cut HPC Lab - CSE - HCMUT 24" }, { "page_index": 193, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_025.png", "metadata": { "doc_type": "slide", "course_id":
"CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_025.png", "page_index": 193, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:55+07:00" }, "raw_text": "Distributed snapshot algorithm Assume each process communicates with other processes using unidirectional point-to-point channels (e.g., TCP connections). Any process can initiate the algorithm o Checkpoint local state o Send a marker on every outgoing channel On receiving a marker o Checkpoint state if it is the first marker, and send markers on all outgoing channels o Save messages arriving on all other channels until a subsequent marker arrives on a channel, then stop saving messages for that channel HPC Lab - CSE - HCMUT 25" }, { "page_index": 194, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_026.png", "page_index": 194, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:43:59+07:00" }, "raw_text": "Distributed Snapshot A process finishes when it receives a marker on each incoming channel and processes them all. State: local state plus the state of all channels. Send state to the initiator. Any process can initiate a snapshot o Multiple snapshots may be in progress o Each is separate, and each is distinguished by tagging the marker with the initiator ID (and a sequence number) [Figure: processes A, B, C exchanging markers M.] HPC Lab - CSE - HCMUT 26" }, { "page_index": 195, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_027.png", "page_index": 195, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp":
"2025-11-01T09:44:03+07:00" }, "raw_text": "Snapshot algorithm example (1) [Figure: organization of process Q and its channels: incoming and outgoing messages, the process state, a marker M initiated by Q, and a local filesystem for the recorded state.] (a) Organization of a process and channels for a distributed snapshot HPC Lab - CSE - HCMUT 27" }, { "page_index": 196, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_028.png", "page_index": 196, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:08+07:00" }, "raw_text": "Snapshot algorithm example (2) [Annotation: after the first marker, Q records messages a, b, c, d; meeting the marker again finishes the recording.] (b) Process Q receives a marker for the first time and records its local state (c) Q records all incoming messages (d) Q receives a marker on its incoming channel and finishes recording the state of the incoming channel HPC Lab - CSE - HCMUT 28" }, { "page_index": 197, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_029.png", "page_index": 197, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:11+07:00" }, "raw_text": "Recovery
Techniques thus far allow failure handling by restoring a correct state. Techniques o Checkpointing: periodically checkpoint state; upon a crash, roll back to a previous checkpoint with a consistent state HPC Lab - CSE - HCMUT 29" }, { "page_index": 198, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_030.png", "page_index": 198, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:17+07:00" }, "raw_text": "Independent checkpointing [Figure: processes P1 and P2 over time with initial states, checkpoints, a failure at P1, and messages m and m'.] Each process periodically checkpoints independently of other processes. Upon a failure, work backwards to locate a consistent cut. Problem: if the most recent checkpoints form an inconsistent cut, we will need to keep rolling back until a consistent cut is found. Cascading rollbacks can lead to a domino effect.
HPC Lab - CSE - HCMUT 30" }, { "page_index": 199, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_031.png", "page_index": 199, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:19+07:00" }, "raw_text": "Coordinated checkpointing Take a distributed snapshot. Upon a failure, roll back to the latest snapshot o All processes restart from the latest snapshot HPC Lab - CSE - HCMUT 31" }, { "page_index": 200, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_032.png", "page_index": 200, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:23+07:00" }, "raw_text": "Message logging Checkpointing is expensive o All processes restart from the previous consistent cut o Taking a snapshot is expensive o Infrequent snapshots => all computations after the previous snapshot will need to be redone [wasteful] Combine checkpointing (expensive) with message logging (cheap) o Take infrequent checkpoints o Log all messages between checkpoints to local stable storage o To recover: simply replay messages from the previous checkpoint; avoids recomputation from the previous checkpoint HPC Lab - CSE - HCMUT 32" }, { "page_index": 201, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_033.png", "page_index": 201, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0",
"timestamp": "2025-11-01T09:44:26+07:00" }, "raw_text": "Debugging HPC Lab - CSE - HCMUT 33" }, { "page_index": 202, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_034.png", "page_index": 202, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:30+07:00" }, "raw_text": "Debugging [Figure: events A1-A5, B1-B6, C1-C5 on three processes; an error X is detected at B4.] Error is detected at B4. Where is the root of the error? o P1: somewhere before B4 o P0, P2? HPC Lab - CSE - HCMUT 34" }, { "page_index": 203, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_035.png", "page_index": 203, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:35+07:00" }, "raw_text": "Debugging [Figure: the same event diagram with the Past of B4 highlighted.] Error is detected at B4. If event E -> B4, then actions at E may lead to the error at B4. Where is the root of the error? Debugging area = Past(B4) o P1: somewhere before B4 o P0, P2?
HPC Lab - CSE - HCMUT 35" }, { "page_index": 204, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_036.png", "page_index": 204, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:37+07:00" }, "raw_text": "Race messages HPC Lab - CSE - HCMUT 36" }, { "page_index": 205, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_037.png", "page_index": 205, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:41+07:00" }, "raw_text": "Race messages [Figure: two runs in which messages m1 = 1 and m2 = 2 race to the same process; depending on which arrives first, S = B1 - B2 = 2 - 1 = 1 or S = B1 - B2 = 1 - 2 = -1.] Non-determinism of a parallel/distributed program HPC Lab - CSE - HCMUT 37" }, { "page_index": 206, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_038.png", "page_index": 206, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:45+07:00" }, "raw_text": "[Figure: messages m1 and m2 from process A to process B under FIFO ordering and non-FIFO ordering.] FIFO ordering per process: (Send(m1) -> Send(m2)) => (Receive(m1) -> Receive(m2)) HPC Lab - CSE - HCMUT 38" }, { "page_index": 207, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file":
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_4/slide_039.png", "page_index": 207, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:50+07:00" }, "raw_text": "Race messages Receive from any source, with FIFO ordering per process. [Figure: messages m1-m7 among three processes.] (B2 || C2) => m4 may be received at B2. The opposite is wrong? (A1 -> B3), but m1 may be received at B3. Finding all race messages at a receive event is a problem. [Annotation: m6 cannot be received at B2.] HPC Lab - CSE - HCMUT 39" }, { "page_index": 208, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_001.png", "page_index": 208, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:53+07:00" }, "raw_text": "Distributed Systems Distributed File Systems Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab - CSE - HCMUT" }, { "page_index": 209, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_002.png", "page_index": 209, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:44:57+07:00" }, "raw_text": "Contents Distributed File System architecture
NFS, HDFS [Annotation: stateful vs. stateless; a stateless server does not depend on current state, whereas a server crash loses the current state of a stateful server.] HPC Lab - CSE - HCMUT 2" }, { "page_index": 210, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_003.png", "page_index": 210, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:01+07:00" }, "raw_text": "What is a file system? Persistent stored data sets Hierarchical name space visible to all processes API with the following characteristics: o access and update operations on persistently stored data sets o Sequential access model (with additional random facilities) o Sharing of data between users, with access control o Concurrent access: certainly for read-only access; what about updates? o Other features: mountable file stores https://docs.google.com/document/d/1wJXpHEM0kj68jUcL 0 DPRPFHizkHUYTCEyslSncYaJEM/edit?usp=sharing more? ... HPC Lab - CSE - HCMUT" }, { "page_index": 211, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_004.png", "page_index": 211, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:07+07:00" }, "raw_text": "What is a file system? UNIX file system operations filedes = open(name, mode) Opens an existing file with the given name. filedes = creat(name, mode) Creates a new file with the given name. Both operations deliver a file descriptor referencing the open file. The mode is read, write or both. status = close(filedes) Closes the open file filedes.
count = read(filedes, buffer, n) Transfers n bytes from the file referenced by filedes to buffer. count = write(filedes, buffer, n) Transfers n bytes to the file referenced by filedes from buffer. Both operations deliver the number of bytes actually transferred and advance the read-write pointer. pos = lseek(filedes, offset, whence) Moves the read-write pointer to offset (relative or absolute, depending on whence). status = unlink(name) Removes the file name from the directory structure. If the file has no other names, it is deleted. status = link(name1, name2) Adds a new name (name2) for a file (name1). status = stat(name, buffer) Gets the file attributes for file name into buffer. 4 HPC Lab - CSE - HCMUT" }, { "page_index": 212, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_005.png", "page_index": 212, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:11+07:00" }, "raw_text": "What is a file system? File attribute record structure: File length, Creation timestamp, Read timestamp, Write timestamp, Attribute timestamp, Reference count (updated by system); Owner, File type, Access control list (updated by owner). E.g.
for UNIX: rw-rw-r-- 5 HPC Lab - CSE - HCMUT" }, { "page_index": 213, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_006.png", "page_index": 213, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:15+07:00" }, "raw_text": "Model file service architecture [Figure: a client computer running application programs and a client module, and a server computer running a directory service (Lookup, AddName, UnName, GetNames) and a flat file service (Read, Write, Create, Delete, GetAttributes, SetAttributes). Remember this.] 6 HPC Lab - CSE - HCMUT" }, { "page_index": 214, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_007.png", "page_index": 214, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:20+07:00" }, "raw_text": "Flat file service (i = position of first byte): Read(FileId, i, n) -> Data; Write(FileId, i, Data); Create() -> FileId; Delete(FileId); GetAttributes(FileId) -> Attr; SetAttributes(FileId, Attr). Directory service: Lookup(Dir, Name) -> FileId; AddName(Dir, Name, FileId); UnName(Dir, Name); GetNames(Dir, Pattern) -> NameSeq. 7 HPC Lab - CSE - HCMUT" }, { "page_index": 215, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_008.png", "page_index": 215, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:22+07:00" }, "raw_text":
"Network File System - NFS HPC Lab - CSE - HCMUT 8" }, { "page_index": 216, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_009.png", "page_index": 216, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:29+07:00" }, "raw_text": "NFS The Network File System (NFS) was developed to allow machines to mount a UNIX disk partition on a remote machine as if it were on a local hard drive. This allows for fast, seamless sharing of files across a network. [Figure: client and server computers; application programs make system calls into the UNIX kernel, whose virtual file system layer routes local requests to the UNIX file system and remote requests through the NFS client to the NFS server via the NFS protocol.] 9 HPC Lab - CSE - HCMUT" }, { "page_index": 217, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_010.png", "page_index": 217, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:34+07:00" }, "raw_text": "NFS (1) server operations lookup(dirfh, name) -> fh, attr Returns file handle and attributes for the file name in the directory dirfh. create(dirfh, name, attr) -> newfh, attr Creates a new file name in directory dirfh with attributes attr and returns the new file handle and attributes. remove(dirfh, name) -> status Removes file name from directory dirfh. getattr(fh) -> attr Returns file attributes of file fh. (Similar to the UNIX stat system call.) setattr(fh, attr) -> attr Sets the attributes (mode, user id, group id, size, access time and modify time) of a file.
Setting the size to 0 truncates the file. read(fh, offset, count) -> attr, data Returns up to count bytes of data from a file starting at offset. Also returns the latest attributes of the file. write(fh, offset, count, data) -> attr Writes count bytes of data to a file starting at offset. Returns the attributes of the file after the write has taken place. rename(dirfh, name, todirfh, toname) -> status Changes the name of file name in directory dirfh to toname in directory todirfh. link(newdirfh, newname, dirfh, name) -> status Creates an entry newname in the directory newdirfh which refers to the file name in the directory dirfh. 10 HPC Lab - CSE - HCMUT" }, { "page_index": 218, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_011.png", "page_index": 218, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:40+07:00" }, "raw_text": "NFS (2) server operations symlink(newdirfh, newname, string) -> status Creates an entry newname in the directory newdirfh of type symbolic link with the value string. The server does not interpret the string but makes a symbolic link file to hold it. readlink(fh) -> string Returns the string that is associated with the symbolic link file identified by fh. mkdir(dirfh, name, attr) -> newfh, attr Creates a new directory name with attributes attr and returns the new file handle and attributes. rmdir(dirfh, name) -> status Removes the empty directory name from the parent directory dirfh. Fails if the directory is not empty. readdir(dirfh, cookie, count) -> entries Returns up to count bytes of directory entries from the directory dirfh. Each entry contains a file name, a file handle, and an opaque pointer to the next directory entry, called a cookie.
The cookie is used in subsequent readdir calls to start reading from the following entry. If the value of cookie is 0, reads from the first entry in the directory. statfs(fh) -> fsstats Returns file system information (such as block size, number of free blocks and so on) for the file system containing a file fh. 11 HPC Lab - CSE - HCMUT" }, { "page_index": 219, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_012.png", "page_index": 219, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:46+07:00" }, "raw_text": "NFS overview Remote Procedure Calls (RPC) for communication between client and server. Client implementation o Provides transparent access to the NFS file system o The UNIX kernel contains a Virtual File System (VFS) layer o Vnode: interface for procedures on an individual file o Translates Vnode operations to NFS RPCs. Server implementation o Stateless: must not keep anything only in memory o Implication: all modified data is written to stable storage before returning control to the client o Servers often add NVRAM to improve performance 12 HPC Lab - CSE - HCMUT" }, { "page_index": 220, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_013.png", "page_index": 220, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:51+07:00" }, "raw_text": "Mapping Unix system calls to NFS operations Unix system call: fd = open(\"/dir/foo\") Traverse the pathname to get the file handle for foo: dirfh = lookup(rootdirfh, \"dir\"); fh = lookup(dirfh, \"foo\"); Record mapping from fd file descriptor to fh NFS
file handle. Set initial file offset to 0 for fd. Return fd file descriptor. Unix system call: read(fd, buffer, bytes) Get current file offset for fd. Map fd to fh NFS file handle. Call data = read(fh, offset, bytes) and copy data into buffer. Increment file offset by bytes. Unix system call: close(fd) Free resources associated with fd. 13 HPC Lab - CSE - HCMUT" }, { "page_index": 221, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_014.png", "page_index": 221, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:57+07:00" }, "raw_text": "[Figure: directory trees of Server 1 (/export/people containing big, jon, bob), the Client (/usr with students and staff as remote mounts, alongside vmunix), and Server 2 (/nfs/users containing jim, ann, jane, joe).] Note: The file system mounted at /usr/students in the client is actually the sub-tree located at /export/people in Server 1; the file system mounted at /usr/staff in the client is actually the sub-tree located at /nfs/users in Server 2.
14 HPC Lab - CSE - HCMUT" }, { "page_index": 222, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_015.png", "page_index": 222, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:45:59+07:00" }, "raw_text": "Hadoop HPC Lab - CSE - HCMUT 15" }, { "page_index": 223, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_016.png", "page_index": 223, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:46:04+07:00" }, "raw_text": "Hadoop: main components [Figure: Server 1 to Server N, each with CPU, Mem and Disk, grouped into racks connected by switches.] Rack 1 to
Rack M Example with number of replicas per chunk = 2 16 HPC Lab - CSE - HCMUT" }, { "page_index": 224, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_017.png", "page_index": 224, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:46:09+07:00" }, "raw_text": "Hadoop: main components Switch Switch Switch Switch CPU CPU CPU CPU Mem Mem Mem Mem Disk Disk Disk Disk HDFS Server 1 Server 2 Server N-1 Server N Rack 1 Rack Rack M HDFS: Hadoop Distributed File System Example with number of replicas per chunk = 2 17 HPC Lab - CSE - HCMUT" }, { "page_index": 225, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_018.png", "page_index": 225, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:46:17+07:00" }, "raw_text": "Hadoop: main components Switch Switch Switch Switch CPU CPU CPU CPU Mem Mem Mem Mem Disk Disk Disk Disk C0 C1 C6 C2 C1 C0 C4 C4 HDFS C7 C2 C3 C6 C5 C3 C5 C7 Server 1 Server 2 Server N-1 Server N Rack 1 Rack...
Rack M Example with number of replicas per chunk = 2 18 HPC Lab - CSE - HCMUT" }, { "page_index": 226, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_019.png", "page_index": 226, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:46:20+07:00" }, "raw_text": "What is HDFS HDFS is a distributed file system that is fault tolerant, scalable and extremely easy to expand HDFS is the primary distributed storage for Hadoop applications HDFS provides interfaces for applications to move themselves closer to data HDFS is designed to 'just work'; however, a working knowledge helps in diagnostics and improvements 19 HPC Lab - CSE - HCMUT" }, { "page_index": 227, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_020.png", "page_index": 227, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:46:24+07:00" }, "raw_text": "Components of HDFS There are two (and a half) types of machines in an HDFS cluster NameNode is the heart of an HDFS filesystem; it maintains and manages the file system metadata.
E.g., what blocks make up a file and on which DataNodes those blocks are stored DataNode: where HDFS stores the actual data; there are usually quite a few of these 20 HPC Lab - CSE - HCMUT" }, { "page_index": 228, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_021.png", "page_index": 228, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:46:29+07:00" }, "raw_text": "HDFS Architecture fsImage Metadata (name, replicas, block id) /users/pkothuri/data/part0, r:3,{1,3,5} /users/pkothuri/data/part1, r:2,{2,4} HDFS Client Name Node Secondary Name Node (namespace backup) Replication, balancing, heartbeats etc Data Node Data Node Data Node Data Node Data Node local disks local disks local disks local disks local disks 21 HPC Lab - CSE - HCMUT
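The NameNode metadata shown on the architecture slide (e.g. "/users/pkothuri/data/part0, r:3,{1,3,5}": replication 3, block stored on DataNodes 1, 3 and 5) can be modeled as a small lookup table. The dict layout and helper below are illustrative, not Hadoop's internal representation.

```python
# Toy model of the NameNode's metadata from the slide: for each file,
# which blocks make it up and on which DataNodes each block is stored.
namenode_meta = {
    "/users/pkothuri/data/part0": {"replication": 3,
                                   "blocks": {"blk_1": ["dn1", "dn3", "dn5"]}},
    "/users/pkothuri/data/part1": {"replication": 2,
                                   "blocks": {"blk_2": ["dn2", "dn4"]}},
}

def datanodes_for(path):
    """Return every DataNode that holds some block of `path`."""
    nodes = set()
    for replicas in namenode_meta[path]["blocks"].values():
        nodes.update(replicas)
    return sorted(nodes)
```

This is exactly the mapping the NameNode must answer for every client open: file name to block list, and block to replica locations.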
Just add more DataNodes and re-balance Industry standard - Other distributed applications are built on top of HDFS (HBase, MapReduce) HDFS is designed to process large data sets with a write-once-read-many access pattern; it is not for low-latency access 22 HPC Lab - CSE - HCMUT" }, { "page_index": 230, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_023.png", "page_index": 230, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:46:39+07:00" }, "raw_text": "HDFS - Data organization Each file written into HDFS is split into data blocks Each block is stored on one or more nodes Each copy of the block is called a replica Block placement policy First replica is placed on the local node Second replica is placed in a different rack Third replica is placed in the same rack as the second replica 23 HPC Lab - CSE - HCMUT" }, { "page_index": 231, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_024.png", "page_index": 231, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:46:45+07:00" }, "raw_text": "Read Operation in HDFS Address of block location on node 1 Distributed 2. getBlockLocations (RPC) Address of block location on node 2 FileSystem 1. open() HDFS Client 7. close() FSDataInputStream Address of block location on node n Client JVM Client Node Metadata NameNode Packets Packets Packets 4. read() 5. read() 6.
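The three-rule block placement policy above (first replica on the local node, second in a different rack, third in the same rack as the second) can be sketched directly. The function and data layout are illustrative stand-ins, not Hadoop's `BlockPlacementPolicy` API, and the sketch assumes at least two racks with at least two nodes each.

```python
import random

def place_replicas(writer_node, nodes_by_rack):
    """Sketch of HDFS's default placement for replication factor 3.

    writer_node:   (rack_id, node_name) where the writing client runs.
    nodes_by_rack: dict mapping rack_id -> list of node names.
    """
    local_rack, local_node = writer_node
    replicas = [local_node]                          # 1st replica: local node
    remote_rack = random.choice(
        [r for r in nodes_by_rack if r != local_rack])
    second = random.choice(nodes_by_rack[remote_rack])
    replicas.append(second)                          # 2nd replica: different rack
    third = random.choice(
        [n for n in nodes_by_rack[remote_rack] if n != second])
    replicas.append(third)                           # 3rd replica: same rack as 2nd
    return replicas
```

The policy trades a little durability (two of three replicas share a rack) for lower cross-rack write traffic, since only one copy crosses the rack switch.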
read() Blocks Blocks Blocks DataNode 1 DataNode 2 DataNode n 24 HPC Lab - CSE - HCMUT" }, { "page_index": 232, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_5/slide_025.png", "page_index": 232, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:46:49+07:00" }, "raw_text": "Write Operation in HDFS 2. create() Distributed 7. complete() FileSystem NameNode HDFS Client 3. write() 6. close() FSDataOutputStream namenode Client JVM Client Node 4. write packet 5. ack packet Pipeline of datanodes DataNode DataNode DataNode datanode 1 datanode 2 datanode 3 25 HPC Lab - CSE - HCMUT
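The read figure's flow — the client asks the NameNode for block locations, then streams each block directly from a DataNode — can be sketched as below. The `Fake*` classes are simple stand-ins for illustration, not the Hadoop client API.

```python
# Sketch of the HDFS client read path: open -> getBlockLocations RPC to
# the NameNode -> read each block directly from a DataNode -> close.

class FakeNameNode:
    def __init__(self, files):
        self.files = files                    # path -> [(block_id, [replica nodes])]

    def get_block_locations(self, path):
        return self.files[path]

class FakeDataNode:
    def __init__(self, blocks):
        self.blocks = blocks                  # block_id -> bytes

    def read_block(self, block_id):
        return self.blocks[block_id]

def hdfs_read(namenode, datanodes, path):
    data = b""
    for block_id, nodes in namenode.get_block_locations(path):
        # A real client picks the closest replica; here we take the first.
        data += datanodes[nodes[0]].read_block(block_id)
    return data
```

Note that file data never flows through the NameNode: it only serves metadata, which is what lets read bandwidth scale with the number of DataNodes.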
HPC Lab - CSE - HCMUT 2" }, { "page_index": 235, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_003.png", "page_index": 235, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:46:58+07:00" }, "raw_text": "Contents Transaction is ... Transaction APIs ACID Two-Phase Locking (2PL) Write-Ahead Logging (WAL) HPC Lab - CSE - HCMUT 3" }, { "page_index": 236, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_004.png", "page_index": 236, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:02+07:00" }, "raw_text": " A key component in any distributed application is a (distributed) database that maintains shared state Two challenges of building a non-distributed DB: Handling failures: failures are inevitable but they create the potential for partial computations and correctness of computations after restart Handling concurrency: concurrency is vital for performance (e.g., I/O is slow so need to overlap with computation), but it creates races. Need to use some form of synchronization to avoid those.
HPC Lab - CSE - HCMUT" }, { "page_index": 237, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_005.png", "page_index": 237, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:06+07:00" }, "raw_text": "Transactions Turing-award-winning idea. Abstraction provided to programmers that encapsulates a unit of work against a database. Guarantees that the unit of work is executed atomically in the face of failures and is isolated from concurrency. (figure: a sequence of instructions encapsulated in a transaction) Do all of them, or do nothing! HPC Lab - CSE - HCMUT 5" }, { "page_index": 238, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_006.png", "page_index": 238, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:10+07:00" }, "raw_text": "Transaction APIs Simple but very powerful: txID = Begin() Starts a transaction. Returns a unique ID for the transaction Outcome = Commit(txID) Attempts to commit a transaction; returns whether or not the commit was successful. If successful, all operations in the transaction have been applied to the DB. If unsuccessful, none of them has been applied. Abort(txID) Cancels all operations of a transaction and erases their effects on the DB. Can be invoked by the programmer or by the database engine itself.
If a failure occurs, call Abort: none of the operations take effect. HPC Lab - CSE - HCMUT 6" }, { "page_index": 239, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_007.png", "page_index": 239, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:16+07:00" }, "raw_text": "Semantics By wrapping a set of accesses in a transaction, the database can hide failures and concurrency under meaningful guarantees One such set of guarantees is ACID: Atomicity: Either all operations in the transaction will complete successfully (commit outcome), or none of them will (abort outcome), regardless of failures. Isolation: A transaction's behavior is not impacted by the presence of concurrently executing transactions Durability: The effects of committed transactions survive failures Consistency: each transaction takes the database from one consistent state to another (example below) Example: Suppose a database has a constraint that requires all customer orders to be associated with valid customer IDs. If a transaction attempts to insert an order with an invalid customer ID, the consistency property ensures that the transaction is rolled back, and the database remains in a consistent state where all orders have valid customer IDs.
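The Begin/Commit/Abort semantics described above can be illustrated with a toy in-memory store: writes go to a per-transaction buffer and reach the shared store only on Commit, while Abort discards them. This is an illustrative sketch of the API's contract, not a real database engine (no durability, no locking).

```python
class MiniDB:
    """Toy transactional key-value store illustrating Begin/Commit/Abort."""

    def __init__(self):
        self.store = {}        # committed state
        self.pending = {}      # txID -> buffered writes
        self.next_id = 1

    def begin(self):
        tx_id = self.next_id                  # returns a unique ID
        self.next_id += 1
        self.pending[tx_id] = {}
        return tx_id

    def read(self, tx_id, key):
        # A transaction sees its own uncommitted writes, then the store.
        return self.pending[tx_id].get(key, self.store.get(key))

    def write(self, tx_id, key, val):
        self.pending[tx_id][key] = val        # provisional until commit

    def commit(self, tx_id):
        self.store.update(self.pending.pop(tx_id))  # all writes apply at once
        return True

    def abort(self, tx_id):
        self.pending.pop(tx_id)               # effects erased: nothing happened
```

Until `commit`, no other client can observe the transaction's writes, which is the "all of them, or nothing" guarantee from the previous slide in miniature.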
HPC Lab - CSE - HCMUT 7" }, { "page_index": 240, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_008.png", "page_index": 240, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:21+07:00" }, "raw_text": "Example TRANSFER(src, dst, x) REPORT_SUM(acc1, acc2) 1 src_bal = Read(src) 1 acc1_bal = Read(acc1) 2 if (src_bal > x): 2 acc2_bal = Read(acc2) 3 src_bal -= x 3 Print(acc1_bal + acc2_bal) 4 Write(src_bal, src) 5 dst_bal = Read(dst) (note: the printed sum can reflect a stale balance read before the transfer finished, not the present one) 6 dst_bal += x 7 Write(dst_bal, dst) The initial balances of accounts A, B are $100, $200 Invocation: TRANSFER(A,B, 50); Invocation: REPORT_SUM(A,B) Without transactions: What could go wrong?
Think of crashes or inopportune interleavings between concurrent TRANSFER and REPORT_SUM processes HPC Lab - CSE - HCMUT 8" }, { "page_index": 241, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_009.png", "page_index": 241, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:27+07:00" }, "raw_text": "Example TRANSFER(src, dst, x) REPORT_SUM(acc1, acc2) 1 src_bal = Read(src) 1 acc1_bal = Read(acc1) 2 if (src_bal > x): 2 acc2_bal = Read(acc2) 3 src_bal -= x 3 Print(acc1_bal + acc2_bal) 4 Write(src_bal, src) 5 dst_bal = Read(dst) 6 dst_bal += x 7 Write(dst_bal, dst) The initial balances of accounts A, B are $100, $200 Invocation: TRANSFER(A,B, 50); Invocation: REPORT_SUM(A,B) With transactions: How to fix these challenges with transactions?
HPC Lab - CSE - HCMUT 9" }, { "page_index": 242, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_010.png", "page_index": 242, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:33+07:00" }, "raw_text": "Example TRANSFER(src, dst, x) REPORT_SUM(acc1, acc2) 0 txID = Begin() 0 txID = Begin() 1 src_bal = Read(src) 1 acc1_bal = Read(acc1) 2 if (src_bal > x): 2 acc2_bal = Read(acc2) 3 src_bal -= x 3 Print(acc1_bal + acc2_bal) 4 Write(src_bal, src) 4 Commit(txID) 5 dst_bal = Read(dst) 6 dst_bal += x 7 Write(dst_bal, dst) 8 return Commit(txID) 9 Abort(txID) 10 return FALSE The initial balances of accounts A, B are $100, $200 Invocation: TRANSFER(A,B, 50); Invocation: REPORT_SUM(A,B) HPC Lab - CSE - HCMUT 10
Key mechanism is write-ahead logging: log to disk sufficient information about each operation before you apply it to the database, such that in the event of a failure in the middle of a transaction, you can undo the effects of its operations on the database. Isolation Operations included in a transaction all witness the database in a coherent state, independent of other transactions. Key mechanism is locking: DB acquires locks on all rows read or written and maintains them until the end of the transaction. HPC Lab - CSE - HCMUT" }, { "page_index": 244, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_012.png", "page_index": 244, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:41+07:00" }, "raw_text": "Two-Phase Locking (2PL) HPC Lab - CSE - HCMUT 2" }, { "page_index": 245, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_013.png", "page_index": 245, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:46+07:00" }, "raw_text": "TRANSFER(src, dst, x) REPORT_SUM(acc1, acc2) 0 txID = Begin() 0 txID = Begin() 1 src_bal = Read(src) 1 acc1_bal = Read(acc1) 2 if (src_bal > x): 2 acc2_bal = Read(acc2) 3 src_bal -= x 3 Print(acc1_bal + acc2_bal) 4 Write(src_bal, src) 4 Commit(txID) 5 dst_bal = Read(dst) 6 dst_bal += x 7 Write(dst_bal, dst) 8 return Commit(txID) 9 Abort(txID) What locks to take, when, and for how long to keep them?
10 return FALSE HPC Lab - CSE - HCMUT 3" }, { "page_index": 246, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_014.png", "page_index": 246, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:51+07:00" }, "raw_text": "Option 1: Global lock for entire transaction TRANSFER(src, dst, x) REPORT_SUM(acc1, acc2) 0 txID = Begin() <- lock(table) 0 txID = Begin() <- lock(table) 1 src_bal = Read(src) 1 acc1_bal = Read(acc1) 2 if (src_bal > x): 2 acc2_bal = Read(acc2) 3 src_bal -= x 3 Print(acc1_bal + acc2_bal) 4 Write(src_bal, src) 4 Commit(txID) <- unlock(table) 5 dst_bal = Read(dst) 6 dst_bal += x 7 Write(dst_bal, dst) 8 return Commit(txID) <- unlock(table) 9 Abort(txID) <- unlock(table) Problem? 10 return FALSE HPC Lab - CSE - HCMUT 14" }, { "page_index": 247, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_015.png", "page_index": 247, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:47:58+07:00" }, "raw_text": "Option 1: Global lock for entire transaction TRANSFER(src, dst, x) REPORT_SUM(acc1, acc2) 0 txID = Begin() <- lock(table) 0 txID = Begin() <- lock(table) 1 src_bal = Read(src) 1 acc1_bal = Read(acc1) 2 if (src_bal > x): 2 acc2_bal = Read(acc2) 3 src_bal -= x 3 Print(acc1_bal + acc2_bal) 4 Write(src_bal, src) 4 Commit(txID) <- unlock(table) 5 dst_bal = Read(dst) 6 dst_bal += x 7 Write(dst_bal, dst) 8 return Commit(txID) <- unlock(table) Problem: poor performance.
9 Abort(txID) <- unlock(table) Serializes all transactions against that 10 return FALSE table, even if they don't conflict. HPC Lab - CSE - HCMUT 15" }, { "page_index": 248, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_016.png", "page_index": 248, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:02+07:00" }, "raw_text": "Option 2: Row-level locks, release after access TRANSFER(src, dst, x) 0 txID = Begin() 1 src_bal = Read(src) <- lock(src) 2 if (src_bal > x): 3 src_bal -= x 4 Write(src_bal, src) <- unlock(src) 5 dst_bal = Read(dst) <- lock(dst) 6 dst_bal += x 7 Write(dst_bal, dst) <- unlock(dst) 8 return Commit(txID) 9 Abort(txID) Problem? 10 return FALSE HPC Lab - CSE - HCMUT 16" }, { "page_index": 249, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_017.png", "page_index": 249, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:08+07:00" }, "raw_text": "Option 2: Row-level locks, release after access TRANSFER(src, dst, x) 0 txID = Begin() 1 src_bal = Read(src) <- lock(src) 2 if (src_bal > x): 3 src_bal -= x 4 Write(src_bal, src) <- unlock(src) REPORT_SUM(src,dst) 5 dst_bal = Read(dst) <- lock(dst) 6 dst_bal += x 7 Write(dst_bal, dst) <- unlock(dst) 8 return Commit(txID) Problem: insufficient isolation.
9 Abort(txID) Allows other transactions to read src 10 return FALSE before dst is updated HPC Lab - CSE - HCMUT 17" }, { "page_index": 250, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_018.png", "page_index": 250, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:13+07:00" }, "raw_text": "Two-Phase Locking (2PL) >Phase 1: acquire locks >Phase 2: release locks - You cannot get more locks after you release one - Typically implemented by the DB releasing locks automatically at end of commit()/abort() TRANSFER(src, dst, x) 0 txID = Begin() 1 src_bal = Read(src) <- lock(src) 2 if (src_bal > x): 3 src_bal -= x 4 Write(src_bal, src) 5 dst_bal = Read(dst) <- lock(dst) 6 dst_bal += x 7 Write(dst_bal, dst) 8 return Commit(txID) <- unlock(src, dst) 9 Abort(txID) <- unlock(src, dst) Problem? 10 return FALSE (Lock src and dst; transactions that don't touch src or dst can still execute normally, so concurrency is preserved.) HPC Lab - CSE - HCMUT 18" }, { "page_index": 251, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_019.png", "page_index": 251, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:17+07:00" }, "raw_text": "2PL can lead to deadlocks tx1: lock(x); tx2: lock(y); tx1: lock(y); tx2: lock(x); tx1 might get the lock for x, then tx2 gets the lock for y; then both transactions wait trying to get the other lock.
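The 2PL discipline above (grow: take row locks as rows are touched; shrink: release everything only at commit or abort) can be sketched as a tiny lock manager. This is an illustrative model with exclusive locks only, not a real DB lock manager (no shared locks, no deadlock handling).

```python
import threading

class TwoPhaseLocks:
    """Minimal sketch of strict two-phase locking over named rows."""

    def __init__(self):
        self.locks = {}                # row -> threading.Lock
        self.held = {}                 # txID -> set of rows it holds
        self.mutex = threading.Lock()  # protects the lock table itself

    def acquire(self, tx_id, row):     # phase 1: growing
        with self.mutex:
            lock = self.locks.setdefault(row, threading.Lock())
        if row not in self.held.setdefault(tx_id, set()):
            lock.acquire()             # blocks until the current holder releases
            self.held[tx_id].add(row)

    def release_all(self, tx_id):      # phase 2: only at commit()/abort()
        for row in self.held.pop(tx_id, set()):
            self.locks[row].release()
```

Because no lock is released before all locks have been taken, any interleaving of transactions is equivalent to some serial order; the price, as the next slide shows, is the possibility of deadlock.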
HPC Lab - CSE - HCMUT 19" }, { "page_index": 252, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_020.png", "page_index": 252, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:21+07:00" }, "raw_text": "Preventing deadlock Option 1: Each transaction gets all its locks at once Not always possible (e.g., think foreign key-based navigation in a DB system: rows to lock are determined at runtime). Option 2: Each transaction gets its locks in predefined order As before, not always possible. Typically: detect deadlock and abort some transactions as needed to break the deadlock HPC Lab - CSE - HCMUT 20" }, { "page_index": 253, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_021.png", "page_index": 253, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:25+07:00" }, "raw_text": "Deadlock detection and resolution Construct a waits-for graph: Each vertex in the graph is a transaction. There is an edge T1 -> T2 if T1 is waiting for a lock T2 holds. There is a deadlock iff there is a cycle in the waits-for graph To resolve, the database unilaterally calls Abort() on one or a few ongoing transactions to break the cycle.
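The detection rule above — deadlock iff the waits-for graph has a cycle — is a plain cycle search. A minimal DFS sketch, with the graph given as a dict from each transaction to the set of transactions it waits on (the representation is an assumption for illustration):

```python
def find_deadlock(waits_for):
    """Return a transaction on some cycle of the waits-for graph, or None.

    waits_for: dict mapping txn -> set of txns whose locks it waits for.
    A GRAY node reached again during DFS means a back edge, i.e. a cycle;
    the returned transaction is a candidate victim to Abort().
    """
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(t):
        color[t] = GRAY                    # on the current DFS path
        for u in waits_for.get(t, ()):
            c = color.get(u, WHITE)
            if c == GRAY:
                return u                   # back edge: cycle found
            if c == WHITE:
                found = dfs(u)
                if found is not None:
                    return found
        color[t] = BLACK                   # fully explored, cycle-free
        return None

    for t in list(waits_for):
        if color.get(t, WHITE) == WHITE:
            victim = dfs(t)
            if victim is not None:
                return victim
    return None
```

In the slide's 2PL example, tx1 waits for tx2's lock on y and tx2 waits for tx1's lock on x, so the graph `{tx1: {tx2}, tx2: {tx1}}` has a cycle and one of the two must be aborted.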
HPC Lab - CSE - HCMUT 21" }, { "page_index": 254, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_022.png", "page_index": 254, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:29+07:00" }, "raw_text": "To remember Remember this point: For concurrency control, a database may decide on its own to kill ongoing client transactions! So Abort is a really critical function, which helps address both concurrency control issues and atomicity issues. But how exactly to Abort()? Answer: WAL HPC Lab - CSE - HCMUT 22" }, { "page_index": 255, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_023.png", "page_index": 255, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:31+07:00" }, "raw_text": "Write-Ahead Logging (WAL) HPC Lab - CSE - HCMUT 23" }, { "page_index": 256, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_024.png", "page_index": 256, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:36+07:00" }, "raw_text": "Write-Ahead Logging In addition to evolving the state in RAM and on disk, keep a separate, on-disk log of all operations Transaction begin, commit, abort All updates (e.g., X = X - $20; Y = Y + $20) A transaction's operations are provisional until \"commit\" outcome is logged to disk
The result of these operations will not be revealed to other clients in the meantime (i.e., new value of X will only be revealed after transaction is committed) Observation: Disk writes of single pages/blocks are atomic, but disk writes across pages may not be. HPC Lab - CSE - HCMUT 24" }, { "page_index": 257, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_025.png", "page_index": 257, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:39+07:00" }, "raw_text": "Begin/commit/abort records Log Sequence Number (LSN) Usually implicit, the address of the first byte of the log entry LSN of previous record for transaction Linked list of log records for each transaction Transaction ID Operation type HPC Lab - CSE - HCMUT 25" }, { "page_index": 258, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_026.png", "page_index": 258, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:42+07:00" }, "raw_text": "Update records Need all information to undo and redo the update prevLSN + xID + opType as before The update itself, e.g.: the update location (usually pageID, offset, length) old-value new-value HPC Lab - CSE - HCMUT 26" }, { "page_index": 259, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_027.png", "page_index": 259, "language": "en",
"ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:46+07:00" }, "raw_text": "xID = begin(); // suppose xID <- 42 Log src.bal -= 20; dst.bal += 20; commit(xID); Disk Page cache 11 10 src.bal: 100 14 dst.bal: 3 Transaction table: Dirty page table: HPC Lab - CSE - HCMUT 27" }, { "page_index": 260, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_028.png", "page_index": 260, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:50+07:00" }, "raw_text": "xID = begin(); // suppose xID <- 42 Log src.bal -= 20; 780 prevLSN: 0 xID: 42 dst.bal += 20; type: begin commit(xID) Disk Page cache 11 10 src.bal: 100 14 dst.bal: 3 Transaction table: 42: prevLSN = 780 Dirty page table: HPC Lab - CSE - HCMUT 28" }, { "page_index": 261, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_029.png", "page_index": 261, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:48:56+07:00" }, "raw_text": "xID = begin(); // suppose xID <- 42 LSN: log sequence number Log src.bal -= 20; 780 prevLSN: 0 xID: 42 dst.bal += 20; type: begin commit(xID); Disk Page cache 860 prevLSN: 780 11 11 10 xID: 42 type: update src.bal: 100 src.bal: 80 page: 11 offset: 10 src.bal length: 4 integer => 4 old-val: 100 new-val: 80 14 dst.bal: 3 Transaction table: 42: prevLSN = 860 Dirty page table: 11: firstLSN = 860, lastLSN = 860 HPC Lab - CSE - HCMUT 29" }, { "page_index": 262, "chapter_num": 6, "source_file":
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_030.png", "page_index": 262, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:01+07:00" }, "raw_text": "xID = begin(); // suppose xID = 42 Log src.bal -= 20; 780 prevLSN: 0 xID: 42 > dst.bal += 20; type: begin commit(xID); Disk Page cache 860 prevLSN: 780 xID: 42 11 11 10 type: update page: 11 src.bal: 100 src.bal: 80 offset: 10 src.bal length: 4 old-val: 100 14 14 new-val: 80 dst.bal: 3 dst.bal: 23 902 prevLSN: 860 Transaction table: xID: 42 42: prevLSN = 902 type: update page: 14 offset: 10 dst.bal Dirty page table: length: 4 11: firstLSN = 860, lastLSN = 860 old-val: 3 14: firstLSN = 902, lastLSN = 902 new-val: 23 HPC Lab - CSE - HCMUT 30" }, { "page_index": 263, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_6/slide_031.png", "page_index": 263, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:09+07:00" }, "raw_text": "Log xID = begin(); // suppose xID = 42 780 prevLSN: 0 xID: 42 src.bal -= 20; type: begin dst.bal += 20; commit(xID); 860 prevLSN: 780 xID: 42 Disk flush Page cache type: update page: 11 11 11 10 offset: 10 src.bal length: 4 src.bal: 100 src.bal: 80 old-val: 100 new-val: 80 14 14 902 dst.bal: 3 dst.bal: 23 prevLSN: 860 xID: 42 type: update Transaction table: page: 14 offset: 10 dst.bal length: 4 Dirty page table: old-val: 3 new-val: 23 11: firstLSN = 860, lastLSN = 860 14: firstLSN = 902, lastLSN = 902 960 prevLSN: 902 xID: 42 if fail, roll back.
960 => 902 => 860 => 780 type: commit HPC Lab - CSE - HCMUT 31" }, { "page_index": 264, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_001.png", "page_index": 264, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:12+07:00" }, "raw_text": "Distributed Systems Distributed Transactions Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab - CSE - HCMUT" }, { "page_index": 265, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_002.png", "page_index": 265, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:14+07:00" }, "raw_text": "Slides: Distributed Systems, Roxana Geambasu, Columbia University.
HPC Lab - CSE - HCMUT 2" }, { "page_index": 266, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_003.png", "page_index": 266, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:17+07:00" }, "raw_text": "Contents - Web service architecture - Two-Phase Commit (2PC) HPC Lab - CSE - HCMUT 3" }, { "page_index": 267, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_004.png", "page_index": 267, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:21+07:00" }, "raw_text": "Web service architecture Web front end (FE), database server (DB), network. Web FE FE is stateless, all state in DB Suppose the FE implements a banking application (supporting account transfers, listings, and other functionality) Suppose the DB supports ACID transactions and the FE uses transactions. network DB Question: How do we make this: Scalable? Fault tolerant?
HPC Lab - CSE - HCMUT" }, { "page_index": 268, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_005.png", "page_index": 268, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:26+07:00" }, "raw_text": "Scalability: Sharding FE and DB are both sharded: Web Web Web FE FE FE FEs accept requests from end-users' browsers and process them concurrently $$ $$ $$ DB is sharded, say by user IDs. network Suppose each DB backend is its own transactional (ACID) database. Then, FE issues transactions DB DB DB against one or more DB shards. Shard 1 Shard 2 Shard n HPC Lab - CSE - HCMUT 5" }, { "page_index": 269, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_006.png", "page_index": 269, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:31+07:00" }, "raw_text": "Fault Tolerance: Replication FE is stateless, so the fact that it is sharded means it's Web Web Web FE FE FE also replicated/fault tolerant replica But DB is stateful, so active replication is needed for $$ $$ $$ each shard.
Each shard is managed by a replica group, which cooperates to keep itself up to date with respect to the updates the FE sends. Requests for different shards go to different replica groups. replica group for shard 1 network DB DB DB Shard Shard Shard 1 2 n HPC Lab - CSE - HCMUT" }, { "page_index": 270, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_007.png", "page_index": 270, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:37+07:00" }, "raw_text": "Challenges Question: What are the challenges of Web Web Web FE FE FE implementing ACID across the entire sharded & replicated DB service? replica $$ $$ $$ group for shard 1 network DB DB DB Shard Shard Shard 1 2 n HPC Lab - CSE - HCMUT" }, { "page_index": 271, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_008.png", "page_index": 271, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:41+07:00" }, "raw_text": "Challenges due to Sharding Web Web Web Ignore replication. Implementing ACID across all DB FE FE FE shard servers: $$ $$ $$ o Case 1: No transactions ever span multiple shards Easy: individual DB shard performs transaction. network o Case 2: Transactions can span multiple shards. Challenge: shards participating in any transaction DB DB DB need to agree on (1) whether or not to commit a Shard 1 Shard 2 Shard n transaction and (2) when to release the locks.
HPC Lab - CSE - HCMUT 8" }, { "page_index": 272, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_009.png", "page_index": 272, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:47+07:00" }, "raw_text": "Challenges due to Sharding (cont.) Web Web Web Example: FE FE FE Say the FE service is a banking service that supports the $$ $$ $$ TRANSFER and REPORT SUM functions from the previous lecture. network If the two accounts are stored on different shards, then the two operations (deduct from one and add to DB DB DB the other) will need to be executed either both or Shard 1 Shard 2 Shard n neither. Unfortunately, the two machines can fail, or decide to unilaterally abort, INDEPENDENTLY. HPC Lab - CSE - HCMUT" }, { "page_index": 273, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_010.png", "page_index": 273, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:52+07:00" }, "raw_text": "Challenges due to Sharding (cont.) Web Web Web Example (continued) FE FE FE So, you need an agreement protocol, and in this case $$ $$ $$ the most suitable is an atomic commitment protocol (why?) network Well-known atomic commitment protocol: two-phase commit.
DB DB DB Shard 1 Shard 2 Shard n HPC Lab - CSE - HCMUT 10" }, { "page_index": 274, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_011.png", "page_index": 274, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:49:59+07:00" }, "raw_text": "Challenges due to Replication replica Ignore sharding. Implementing ACID across all replicas Web Web Web group FE FE FE of a given shard: for Shard 1 Challenge: All replicas of the shard must execute all $$ $$ $$ operations in the same order. Shard 1 / If the operations are deterministic, then agreeing on network Replica A the order keeps the copies of the database on the Shard 1 / different replicas evolving identically, i.e., they will Replica B DB DB all be kept consistent. Shard 1 / Shard Shard Replica C 2 n So, the challenge is to establish and maintain this agreed-upon order of operations to keep the replicas in sync and ensure the overall consistency of the distributed database. HPC Lab - CSE - HCMUT" }, { "page_index": 275, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_012.png", "page_index": 275, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:05+07:00" }, "raw_text": "Challenges due to Replication (cont.)
Example: replica Web Web Web group FE FE FE Suppose there are two transactions, each with a for single operation, against the same cell in the Shard 1 $$ $$ $$ database: TX1: x += 1 Shard 1 / network Replica A TX2: x *= 2 Shard 1 / Internally, all three replicas are ACID databases, so Replica B DB DB they will serialize these transactions, e.g., either Shard 1 / Shard Shard Replica C (TX1, TX2) OR (TX2, TX1) 2 n If Replica A processes (TX1, TX2) and Replica B processes (TX2, TX1), then after executing these transactions, the DB copies on the two replicas will diverge to x=8 and x=7, respectively HPC Lab - CSE - HCMUT 12" }, { "page_index": 276, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_013.png", "page_index": 276, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:11+07:00" }, "raw_text": "Challenges due to Replication (cont.) Example (continued) replica Web Web Web group FE FE FE The problem of agreement on the order in which to for execute operations can be cast as an instance of the Shard 1 $$ $$ $$ consensus problem (why?)
Well-known consensus protocols: Paxos, Raft (see consensus). Shard 1 / network Replica A Shard 1 / Replica B DB DB Shard 1 / Shard Shard Replica C 2 n HPC Lab - CSE - HCMUT 13" }, { "page_index": 277, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_014.png", "page_index": 277, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:14+07:00" }, "raw_text": "Two-Phase Commit (2PC) HPC Lab - CSE - HCMUT 14" }, { "page_index": 278, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_015.png", "page_index": 278, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:18+07:00" }, "raw_text": "Two-Phase Commit (2PC) Prepare Phase Commit Phase A A 1. PREPARE 2. PREPARE OK/FAIL TC B TC B C C HPC Lab - CSE - HCMUT 15" }, { "page_index": 279, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_016.png", "page_index": 279, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:22+07:00" }, "raw_text": "2PC for distributed transactions How 2PC integrates with WAL, 2PL that we studied for local transactions. Here's a rough description of a client lib for distributed transactions: begin(): Client lib begin()'s a transaction on each separate shard. This produces a separate txID on each server (Tx.S1, Tx.S2,...).
As part of the distributed Tx, the client sends the operation to the corresponding shard server. Say op1 goes to S1, op2 goes to S2. Each server grabs local locks, adds the op to its local WAL. abort(): Client sends the ABORT message to S1, S2, ... HPC Lab - CSE - HCMUT 16" }, { "page_index": 280, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_017.png", "page_index": 280, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:26+07:00" }, "raw_text": "2PC commit() Prepare Phase Commit Phase S1 S1 COMMIT/ABORT PREPARE TC S2 TC S2 either the client lib or one of the Si S3 becomes TC. S3 HPC Lab - CSE - HCMUT 17" }, { "page_index": 281, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_018.png", "page_index": 281, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:30+07:00" }, "raw_text": "2PC commit() Prepare Phase Commit Phase (while holding locks for Tx.S1) S1 2.1. can commit transaction Tx.S1? 2.2. write PREPARE-OK/FAIL to WAL 2.3. send PREPARE-OK/FAIL to TC 1. PREPARE(Tx.S1) 2.4.
wait to hear response from TC PREPARE (continue holding locks for Tx.S1) TC S2 S3 S3 HPC Lab - CSE - HCMUT 18" }, { "page_index": 282, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_019.png", "page_index": 282, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:35+07:00" }, "raw_text": "2PC commit() Prepare Phase Commit Phase 3.1. upon receipt of PREPARE-OK/FAIL or S1 TIMEOUT from all participants: if all PREPARE-OK: outcome = COMMIT COMMIT/ABORT(Tx.S1) else: outcome = ABORT 3.2. write outcome to WAL (if COMMIT, TC S2 flush). 3.3. send outcome to participants, client. S3 S3 HPC Lab - CSE - HCMUT 19" }, { "page_index": 283, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_020.png", "page_index": 283, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:41+07:00" }, "raw_text": "2PC commit() Prepare Phase Commit Phase (recall S1 is holding locks for Tx.S1) 4.1. enter COMMIT/ABORT in its WAL S1 (if COMMIT, also flush) 4.2. if ABORT, revert Tx.S1 using WAL 4.3. release all locks for Tx.S1. OK 4.4.
send OK to TC (who will keep TC S2 retrying to send outcome to participants until it has OK from all) S3 HPC Lab - CSE - HCMUT 20" }, { "page_index": 284, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_021.png", "page_index": 284, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:45+07:00" }, "raw_text": "Timeouts and Failures Prepare Phase Commit Phase S1 S1 PREPARE COMMIT/ABORT PREPARE OK/FAIL TC S2 TC S2 S3 S3 HPC Lab - CSE - HCMUT 21" }, { "page_index": 285, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_022.png", "page_index": 285, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:49+07:00" }, "raw_text": "Timeouts Prepare Phase Commit Phase Situation: S1 times out waiting for S1 PREPARE from TC for a transaction. 1. PREPARE TC S2 S3 S3 HPC Lab - CSE - HCMUT 22" }, { "page_index": 286, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_023.png", "page_index": 286, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:53+07:00" }, "raw_text": "Timeouts Prepare Phase Commit Phase Situation: S1 times out waiting for S1 PREPARE from TC for a transaction Action: Safe for S1 to unilaterally abort().
TC S2 S3 S3 HPC Lab - CSE - HCMUT 23" }, { "page_index": 287, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_024.png", "page_index": 287, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:50:57+07:00" }, "raw_text": "Timeouts Prepare Phase Situation: TC times out waiting for PREPARE-OK/FAIL from S1. S1 1. PREPARE PREPARE OK TC S2 TC S2 S3 S3 HPC Lab - CSE - HCMUT 24" }, { "page_index": 288, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_025.png", "page_index": 288, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:01+07:00" }, "raw_text": "Timeouts Prepare Phase Situation: TC times out waiting for PREPARE-OK/FAIL from S1. S1 Action: Safe for TC to initiate distributed abort() by sending ABORT outcome. 1. PREPARE PREPARE OK/FAIL TC S2 TC S2 S3 S3 HPC Lab - CSE - HCMUT 25" }, { "page_index": 289, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_026.png", "page_index": 289, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:06+07:00" }, "raw_text": "Situation: S1 times out waiting for outcome from TC Timeouts Action: Is it safe for S1 to unilaterally commit or abort?
Prepare Phase Commit Phase S1 S1 X PREPARE COMMIT/ABORT PREPARE TC OK/FAIL S2 TC S2 S3 S3 HPC Lab - CSE - HCMUT 26" }, { "page_index": 290, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_027.png", "page_index": 290, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:10+07:00" }, "raw_text": "Situation: S1 times out waiting for outcome from TC Timeouts Case 1: S1 had sent PREPARE-FAIL in Prepare phase Action: Safe for S1 to unilaterally abort Prepare Phase Commit Phase S1 S1 X PREPARE COMMIT/ABORT PREPARE TC OK/FAIL S2 TC S2 S3 S3 HPC Lab - CSE - HCMUT 27" }, { "page_index": 291, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_028.png", "page_index": 291, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:15+07:00" }, "raw_text": "Situation: S1 times out waiting for outcome from TC. Case 2: S1 had sent PREPARE-OK in Prepare phase (S1 is Timeouts said to be in the uncertainty period). Action: Can't commit/abort. Runs a termination protocol.
Prepare Phase Commit Phase S1 S1 X PREPARE PREPARE OK/FAIL TC S2 TC S2 S3 S3 HPC Lab - CSE - HCMUT 28" }, { "page_index": 292, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_029.png", "page_index": 292, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:20+07:00" }, "raw_text": "Termination protocol Wait for TC to come back (might take a while, and recall the Si hold locks!) Could also ask other participants whether they got the outcome. If one did, they can all terminate the protocol accordingly. Final message: (COMMIT -> COMMIT, ABORT -> ABORT) If none did (e.g., TC died or got partitioned right before it sent the outcome), then participants are BLOCKED till TC comes back. S1 S1 PREPARE 1. PREPARE 2. TC S2 TC S2 S3 S3 HPC Lab - CSE - HCMUT 29" }, { "page_index": 293, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_030.png", "page_index": 293, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:24+07:00" }, "raw_text": "Failures Similar analysis applies for failures Some cases: o if participant is not in uncertainty period, on recovery, it can decide what to do (unilaterally abort if no decision, otherwise do what the decision says.)
o if participant is in uncertainty period, it cannot decide on its own, must invoke the termination protocol (which, as before, may not actually terminate if TC fails) HPC Lab - CSE - HCMUT 30" }, { "page_index": 294, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_031.png", "page_index": 294, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:26+07:00" }, "raw_text": "2PC Limitations HPC Lab - CSE - HCMUT 31" }, { "page_index": 295, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_032.png", "page_index": 295, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:30+07:00" }, "raw_text": "2PC is blocking A process can block indefinitely in its uncertainty period until a TC or network failure is resolved. If TC is also a participant, then a single-site failure can cause 2PC to block indefinitely! And it blocks while each shard server is holding locks, preventing other transactions that don't even interact with the failed shard server from making progress! This is why 2PC is called a blocking protocol and cannot be used as a basis for fault tolerance. 
HPC Lab - CSE - HCMUT 32" }, { "page_index": 296, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_7/slide_033.png", "page_index": 296, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:33+07:00" }, "raw_text": "2PC is expensive Time complexity: 3 message latencies on the critical path: PREPARE -> PREPARE-OK/FAIL -> ABORT/COMMIT Message complexity: common case for n participants + 1 TC: 3n messages That's expensive, esp. if shards are geo-distributed Optimizations, or adding an extra phase (3PC), cannot address the blocking/performance problems of 2PC while maintaining its semantics. HPC Lab - CSE - HCMUT 33" }, { "page_index": 297, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_001.png", "page_index": 297, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:37+07:00" }, "raw_text": "Distributed Systems Coordination (Mutual exclusion & Election) Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab - CSE - HCMUT 1" }, { "page_index": 298, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_002.png", "page_index": 298, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:39+07:00" }, "raw_text": "Mutual exclusion HPC
Lab - CSE - HCMUT 2" }, { "page_index": 299, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_003.png", "page_index": 299, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:44+07:00" }, "raw_text": "Mutual exclusion Problem o A number of processes in a distributed system want exclusive access to some resource Goals (avoiding) o Starvation: some process never gets a chance to access the resource o Deadlocks: several processes are waiting for each other to proceed Basic solutions o Permission-based: A process wanting to enter its critical section, or access a resource, needs permission from other processes o Token-based: A token is passed between processes. The one who has the token may proceed in its critical section, or pass it on when not interested. HPC Lab - CSE - HCMUT 3" }, { "page_index": 300, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_004.png", "page_index": 300, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:49+07:00" }, "raw_text": "Critical section Critical section o Considering a system of N processes p_i (i = 1, 2, ..., N) that do not share variables.
The processes access common resources, but they do so in a critical section The application-level protocol for executing a critical section is as follows: o enter(): enter critical section - block if necessary o resourceAccesses(): access shared resources in critical section o exit(): leave critical section - other processes may now enter Mutual exclusion requirements ME1: (safety) At most one process may execute in the critical section (CS) at a time ME2: (liveness) no starvation Requests to enter and exit the critical section eventually succeed ME3: (optional, stronger) Requests to enter granted according to causality order HPC Lab - CSE - HCMUT" }, { "page_index": 301, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_005.png", "page_index": 301, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:54+07:00" }, "raw_text": "Permission-based, centralized (1) Simple use of a coordinator One process is elected as the coordinator, which only lets one process at a time access the resource: a) Process P1 asks the coordinator for permission to access a shared resource. Permission is granted. b) Process P2 then asks permission; the coordinator does not reply and queues the request. c) When P1 releases the resource, it tells the coordinator, which then replies to P2.
Request OK Release Request No reply (queued) OK Queue is empty Coordinator (a) (b) (c) HPC Lab - CSE - HCMUT 5" }, { "page_index": 302, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_006.png", "page_index": 302, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:51:58+07:00" }, "raw_text": "Permission-based, centralized (2) Pros o Fairness o No starvation o Simplicity, only three messages (request, grant, release) Cons o Coordinator is a single point of failure and performance bottleneck o Distinguishing a crashed coordinator from permission denied. HPC Lab - CSE - HCMUT" }, { "page_index": 303, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_007.png", "page_index": 303, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:52:02+07:00" }, "raw_text": "A distributed algorithm (B2 sends message to B1 => B2 wants to access CS) Return a response to a request only when: o The receiving process has no interest in the shared resource (B1 doesn't want to go to CS); or o The receiving process is waiting for the resource, but has lower priority (known through comparison of timestamps). In all other cases, the reply is deferred, implying some more local administration.
If already in CS, no reply HPC Lab - CSE - HCMUT 7" }, { "page_index": 304, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_008.png", "page_index": 304, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:52:07+07:00" }, "raw_text": "Multicast FIFO ordering: If a correct process issues multicast(g,m) and then multicast(g,m'), then every correct process that delivers m' will have already delivered m Causal ordering: If multicast(g,m) -> multicast(g,m') then any correct process that delivers m' will have already delivered m o Note that -> counts multicast messages delivered to the application, rather than all network messages Total ordering: If a correct process delivers message m before m', then any other correct process that delivers m' will have already delivered m. 
[Figure: examples of FIFO, causal and total ordering of multicast messages] HPC Lab - CSE - HCMUT 8" }, { "page_index": 305, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_009.png", "page_index": 305, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:52:12+07:00" }, "raw_text": "A distributed algorithm: Ricart & Agrawala (1981) On initialization: state := RELEASED To enter the section: state := WANTED; Multicast request to all processes; (request processing deferred here) Wait until (number of replies received = (N - 1)); state := HELD; On receipt of a request (Ti, pi) at pj (i != j): if (state = HELD or (state = WANTED and (T, pj) < (Ti, pi))) then queue request from pi without replying else reply immediately to pi end if To exit the critical section: state := RELEASED; reply to any queued requests; HPC Lab - CSE - HCMUT 9" }, { "page_index": 306, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_010.png", "page_index": 306, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:52:17+07:00" }, "raw_text": "Ricart & Agrawala (1981) (1) Based on total ordering of all events in the system. A process wanting to access a resource builds a message containing the name of the resource, its process number and the current (logical) time. The process sends the message to all other processes. A process receiving the request has three alternatives: 1. If the receiver is not accessing the resource and does not want to access it, it sends back an OK message to the sender 2. If the receiver already has access to the resource, it simply does not reply. Instead, it queues the request 3. 
If the receiver wants to access the resource as well but has not yet done so, it compares the timestamp of the incoming message with the one contained in the message that it has sent everyone. The lowest one wins. If the incoming message has a lower timestamp, the receiver sends back an OK message. If its own message has a lower timestamp, the receiver queues the incoming request and sends nothing. The process waits until everyone has given permission. HPC Lab - CSE - HCMUT 10" }, { "page_index": 307, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_011.png", "page_index": 307, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:52:22+07:00" }, "raw_text": "Ricart & Agrawala (1981): an example [Figure: (a) two processes multicast requests for the critical section with timestamps 8 and 12; (b) the process with timestamp 8 accesses the resource; (c) when it is done, it sends OK and the other process accesses the resource] 1. Two processes want to access a shared resource at the same moment 2. The process with the lowest timestamp (8) wins 3. When the winning process is done, it sends an OK also, so the other process can now go ahead. 
HPC Lab - CSE - HCMUT" }, { "page_index": 308, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_012.png", "page_index": 308, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:52:30+07:00" }, "raw_text": "[Figure, steps (1)-(4): six processes P1..P6, all initially DO-NOT-WANT; P1 becomes WANTED and multicasts a message with (Timestamp, ID) = (102,1); the others reply ok; P1 enters the critical section (CS) in state HELD] HPC Lab - CSE - HCMUT 12" }, { "page_index": 309, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_013.png", "page_index": 309, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:52:42+07:00" }, "raw_text": "[Figure, steps (5)-(11): while P1 is HELD, P3 and P5 become WANTED with requests (115,3) and (110,5); replies to them are deferred until P1 exits the critical section (CS)] single point of failure. Questions. 
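The Ricart & Agrawala rules above can be sketched as a single-threaded simulation. Real message passing is replaced by direct method calls and replies are counted synchronously; all class and method names are illustrative assumptions.

```python
RELEASED, WANTED, HELD = "RELEASED", "WANTED", "HELD"

class Process:
    """One Ricart & Agrawala participant; peers maps pid -> Process."""

    def __init__(self, pid, peers=None):
        self.pid = pid
        self.peers = peers or {}
        self.state = RELEASED
        self.ts = None          # timestamp of our own pending request
        self.deferred = []      # pids whose replies we are withholding

    def request_cs(self, ts):
        """Multicast a timestamped request; HELD only if all N-1 reply OK."""
        self.state, self.ts = WANTED, ts
        oks = sum(p.on_request(ts, self.pid) for p in self.peers.values())
        if oks == len(self.peers):
            self.state = HELD
        return self.state

    def on_request(self, ts, pid):
        """Defer the reply if we hold the CS, or we want it and our own
        request wins; ties are broken by process id via (ts, pid) pairs."""
        if self.state == HELD or (
            self.state == WANTED and (self.ts, self.pid) < (ts, pid)
        ):
            self.deferred.append(pid)
            return False        # reply deferred
        return True             # reply OK immediately

    def release_cs(self):
        """Leave the CS and reply to every queued request."""
        self.state, self.ts = RELEASED, None
        granted, self.deferred = self.deferred, []
        return granted
```

As in the slides' example, a request stamped 8 beats one stamped 12: the later requester stays WANTED until the holder exits and the deferred OK is sent.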
o If a coordinator is chosen dynamically, to what extent can we speak about a centralized or distributed solution? o Is a fully distributed solution, i.e. one without a coordinator, always more robust than any centralized/coordinated solution? HPC Lab - CSE - HCMUT 25" }, { "page_index": 322, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_026.png", "page_index": 322, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:53:31+07:00" }, "raw_text": "Basic assumptions All processes have unique ids. All processes know the ids of all processes in the system (but not whether they are up or down) Election means identifying the most suitable process based on different factors, e.g., highest id. HPC Lab - CSE - HCMUT 26" }, { "page_index": 323, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_027.png", "page_index": 323, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:53:35+07:00" }, "raw_text": "Election by bullying (1) Principle (Garcia-Molina 1982) Consider N processes {P0, ..., PN-1} and let id(Pk) = k. When a process Pk notices that the coordinator is no longer responding to requests, it initiates an election: 1. Pk sends an ELECTION message to all processes with higher identifiers: Pk+1, Pk+2, ..., PN-1 2. If no one responds, Pk wins the election and becomes coordinator 3. If one of the higher-ups answers, it takes over and Pk's job is done. 
HPC Lab - CSE - HCMUT 27" }, { "page_index": 324, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_028.png", "page_index": 324, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:53:41+07:00" }, "raw_text": "Election by bullying (2) [Figure: (a)-(e) ELECTION, OK and COORDINATOR message flow after the previous coordinator has crashed] 1. (a) Process 4 first notices that the coordinator has crashed and sends ELECTION to the processes with higher numbers 5, 6, and 7 2. (b)-(d) Election proceeds, converging on process 6 winning 3. (e) By sending a COORDINATOR message, process 6 announces it is ready to take over 4. If process 7 is restarted, it will send a COORDINATOR message to the others and bully them into submission HPC Lab - CSE - HCMUT 28" }, { "page_index": 325, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_029.png", "page_index": 325, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:53:49+07:00" }, "raw_text": "Election by bullying (2) [Figure: stages 1-2 of an election among p1..p4, with election and answer messages and timeouts] Principle (Garcia-Molina 1982) Consider N processes {P0, ..., PN-1} and let id(Pk) = k. When a process Pk notices that the coordinator is no longer responding to requests, it initiates an election: 1. Pk sends an ELECTION message to all processes with higher identifiers: Pk+1, Pk+2, ..., PN-1 2. If no one responds, Pk wins the election and becomes coordinator 3. 
If one of the higher-ups answers, it takes over and Pk's job is done. [Figure: stages 3-4 - eventually the election of coordinator P2, after the failure of P4 and then P3] 29 HPC Lab - CSE - HCMUT" }, { "page_index": 326, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_030.png", "page_index": 326, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:53:53+07:00" }, "raw_text": "Election in a ring Process priority is obtained by organizing processes into a (logical) ring Process with the highest priority should be elected as coordinator o Any process can start an election by sending an election message to its successor. If a successor is down, the message is passed on to the next successor o If a message is passed on, the sender adds itself to the list. When it gets back to the initiator, everyone has had a chance to make its presence known o The initiator sends a coordinator message around the ring containing a list of all living processes. The one with the highest priority is elected as coordinator. HPC Lab - CSE - HCMUT 30" }, { "page_index": 327, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_031.png", "page_index": 327, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:53:58+07:00" }, "raw_text": "Election in a ring: an example [Figure: two elections proceed around the ring, accumulating member lists [3], [3,4], [3,4,5], [3,4,5,6], [3,4,5,6,0], [3,4,5,6,0,1], [3,4,5,6,0,1,2] and [6], [6,0], [6,0,1], [6,0,1,2], [6,0,1,2,3], [6,0,1,2,3,4], [6,0,1,2,3,4,5]] The initiators are P6 and P3. 
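Both election schemes above can be sketched as plain functions over a set of live process ids. The simulation assumes ids double as priorities and that crashed processes are simply absent from `alive`; the function names are illustrative.

```python
def bully_election(initiator, alive):
    """Bully algorithm: the initiator sends ELECTION to all higher ids;
    if nobody answers it wins, otherwise an answering process takes the
    election over, so the highest live id ends up as coordinator."""
    higher = sorted(p for p in alive if p > initiator)
    if not higher:
        return initiator                 # no answer: initiator wins
    return bully_election(higher[0], alive)  # a responder takes over

def ring_election(ring, alive, initiator):
    """Ring election: an election message circulates, each live process
    appends its id; back at the initiator, a coordinator message
    announces the highest id on the collected list."""
    n = len(ring)
    i = ring.index(initiator)
    members = [initiator]                # the initiator adds itself first
    j = (i + 1) % n
    while ring[j] != initiator:
        if ring[j] in alive:             # dead successors are skipped
            members.append(ring[j])
        j = (j + 1) % n
    return max(members), members
```

With ring [0..6], all alive and initiator 3, this reproduces the slide's accumulated list [3,4,5,6,0,1,2]; in the bully example, initiator 4 with process 7 crashed converges on coordinator 6.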
31 HPC Lab - CSE - HCMUT" }, { "page_index": 328, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_8/slide_032.png", "page_index": 328, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:07+07:00" }, "raw_text": "Election in wireless networks (1) [Figure, steps (1)-(5): the broadcasting node a floods an election message through the ad-hoc network; each node forwards the broadcast it receives first (e.g. g receives the broadcast from b first, e from g first, f from e first), and reported (node, capacity) pairs such as [h,8] propagate back toward the source] HPC Lab - CSE - HCMUT 32" }, { "page_index": 329, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_001.png", "page_index": 329, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:10+07:00" }, "raw_text": "Distributed Systems Consensus Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab - CSE - HCMUT" }, { "page_index": 330, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_002.png", "page_index": 330, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:14+07:00" }, "raw_text": "Multicast/Broadcast FIFO ordering: If a correct process issues multicast/broadcast(g,m) and then 
multicast/broadcast(g,m'), then every correct process that delivers m' will have already delivered m Causal ordering: If multicast/broadcast(g,m) -> multicast/broadcast(g,m') then any correct process that delivers m' will have already delivered m o Note that -> counts multicast/broadcast messages delivered to the application, rather than all network messages Total ordering: If a correct process delivers message m before m', then any other correct process that delivers m' will have already delivered m. HPC Lab - CSE - HCMUT 2
Replica N Server 1 Server 2 Server N Online Store Service Payment Service Payment Service Payment charge update on other replicates before sending success Svccess Multiple copies Agreeing on the same values/operations over time HPC Lab - CSE - HCMUT 3" }, { "page_index": 332, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_004.png", "page_index": 332, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:23+07:00" }, "raw_text": " The main idea behind is that a single process (the leader) broadcasts the operations that change its state to other process, the followers (replicas) Total order broadcast: every node delivers the same messages in the same order The followers execute the same sequence of operation as the leader, then the state of each follower will match the leader State machine replication (SMR): - every replica acts as SM FlFO-total order broadcast: every update to all replicas Replica deliver update message: apply to own state o Applying an update is deterministic - even errors Replica is a state machine: starts in fixed initial state, goes through same sequence of state transitions in the same order -> all replicas end up in the same state. 
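The state machine replication idea above - a fixed initial state plus the same deterministic updates delivered in the same order - can be sketched as follows. The `Replica` class and the toy key-value updates are illustrative assumptions, and the broadcast layer is reduced to a loop.

```python
class Replica:
    """State machine replica: starts in a fixed initial state and applies
    the update messages delivered by total order broadcast, in order."""

    def __init__(self):
        self.state = {}              # fixed initial state

    def deliver(self, update):
        key, value = update          # deterministic update: key <- value
        self.state[key] = value

def total_order_broadcast(replicas, updates):
    """Stand-in for a real broadcast layer: delivers every update to
    every replica in one agreed order."""
    for u in updates:
        for r in replicas:
            r.deliver(u)
```

Because every replica applies the same deterministic sequence from the same initial state, all replicas end in identical states - the property the slide relies on.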
HPC Lab - CSE - HCMUT" }, { "page_index": 333, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_005.png", "page_index": 333, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:27+07:00" }, "raw_text": "Closely related ideas: Serializable transactions (execute in delivery order) - Active/Passive replication - Blockchains, distributed ledgers, smart contracts Limitations: Cannot update state immediately, have to wait for delivery through broadcast Need fault-tolerant total order broadcast HPC Lab - CSE - HCMUT 5" }, { "page_index": 334, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_006.png", "page_index": 334, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:31+07:00" }, "raw_text": "Other broadcasts Broadcast | Assumptions about the state update function: Total order broadcast - Deterministic (SMR); Causal broadcast - Deterministic, concurrent updates commute (messages unrelated to each other give the same result); Reliable broadcast - Deterministic, all updates commute (order does not matter); Best-effort broadcast - Deterministic, commutative, idempotent, tolerates message loss. When updates are commutative, replicas can process updates in different orders and still end up in the same state. 
HPC Lab - CSE - HCMUT 6" }, { "page_index": 335, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_007.png", "page_index": 335, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:37+07:00" }, "raw_text": "Consensus . A fundamental problem studied in distributed systems, which requires a set of processes to agree on a value in fault tolerant way so that: Every non-faulty process eventually agrees on a value The final decision of every non-faulty process is the same everywhere The value that has been agreed on has been proposed by a process Consensus has a large number of practical applications o Commit transactions, Decisions in general (votes are involved) Hold a lock (Mutual exclusion), Failure detection (Byzantine) Any problem that requires consensus can be solved with a state machine replication. HPC Lab - CSE - HCMUT 7" }, { "page_index": 336, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_008.png", "page_index": 336, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:41+07:00" }, "raw_text": "Consensus and total order broadcast Leader regulates the consensus with the nodes via total order broadcast Single point of failure o Failover: human operator chooses a new leader, e.g., databases Election algorithms can automate the selection of the leader (properties?) 
Consensus and total order broadcast are formally equivalent Common consensus algorithms: o Paxos: single-value consensus o Multi-Paxos: generalization to total order broadcast o Raft, Viewstamped Replication, Zab: FIFO-total order broadcast HPC Lab - CSE - HCMUT 8" }, { "page_index": 337, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_009.png", "page_index": 337, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:46+07:00" }, "raw_text": "Consensus system models Paxos, Raft, etc., assume a partially synchronous crash-recovery model. Why not asynchronous? No solution exists: o FLP result (Fischer, Lynch, Paterson): There is no deterministic consensus algorithm that is guaranteed to terminate in an asynchronous crash-stop system model Paxos, Raft and others use clocks only for timeouts/failure detection, to ensure progress. 
Safety (correctness) does not depend on timing There are also consensus algorithms for a partially synchronous Byzantine system model (used in blockchains) Practical considerations o ZooKeeper (https://zookeeper.apache.org/) o etcd (https://etcd.io/) HPC Lab - CSE - HCMUT 9" }, { "page_index": 338, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_010.png", "page_index": 338, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:51+07:00" }, "raw_text": "Leader in consensus Some consensus algorithms use a leader to sequence messages o Use a failure detector (timeout) to determine a suspected crashed or unavailable leader o On suspected leader crash, a new leader is elected o Prevent two leaders at the same time (\"split-brain\") [Figure: nodes A, B, C, D, E; A, B and C elect a leader (B); D and E cannot elect a different leader as C already voted for B] Ensure one leader per term: o Term is incremented every time a leader election is started o A node can only vote once per term o Require a quorum of nodes to elect a leader in a term HPC Lab - CSE - HCMUT 10" }, { "page_index": 339, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_011.png", "page_index": 339, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:54+07:00" }, "raw_text": "A single leader? Can we guarantee a unique leader per term? 
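The one-vote-per-term-plus-quorum rule above can be sketched as follows. `Node`, `request_vote` and `run_election` are illustrative names, and message passing is reduced to direct calls.

```python
class Node:
    """A voter that grants at most one vote per election term."""

    def __init__(self):
        self.current_term = 0
        self.voted_in = {}        # term -> candidate this node voted for

    def request_vote(self, term, candidate):
        """Grant the vote iff this node has not voted for anyone else
        in this term (this is what prevents two leaders per term)."""
        if term < self.current_term:
            return False          # stale request from an old term
        self.current_term = max(self.current_term, term)
        if term not in self.voted_in:
            self.voted_in[term] = candidate
        return self.voted_in[term] == candidate

def run_election(candidate, term, nodes):
    """The candidate wins the term iff a quorum (majority) votes for it."""
    votes = sum(n.request_vote(term, candidate) for n in nodes)
    return votes > len(nodes) // 2
```

With five nodes, once A wins term 1 no other candidate can gather a quorum in that same term; B has to start term 2 - exactly the per-term uniqueness the slide argues for.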
Cannot prevent having multiple leaders from different terms Example: Node 1 is leader in term t, but due to a network partition it can no longer communication with Node 2 and 3 Node 1 Node 2 Node 3 x Node 2 and 3 may elect a new leader in term t + 1 Node 1 may not even know that a new leader has been elected! HPC Lab - CSE - HCMUT" }, { "page_index": 340, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_012.png", "page_index": 340, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:54:59+07:00" }, "raw_text": "Checking if a leader has been voted out Leader Follower 1 Follower 2 Am I still be leader in term t? yes yes Can we deliver message w next in term t yes yes Right, now deliver w please For every decision (message to deliver), the leader must first get acknowledgements from a quorum. [source] Distributed Systems course given by Dr. 
Martin Kleppmann (University of Cambridge, UK) HPC Lab - CSE - HCMUT 12" }, { "page_index": 341, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_013.png", "page_index": 341, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:01+07:00" }, "raw_text": "Raft https://raft.github.io/ HPC Lab - CSE - HCMUT 13" }, { "page_index": 342, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_014.png", "page_index": 342, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:06+07:00" }, "raw_text": "Raft A modern solution to the problem of consistency An algorithm that guarantees the strongest consistency possible Raft is based on state machine replication In Raft, time is divided into election terms A term is represented by a logical clock and only increases forward A term starts with an election to decide who becomes the leader Raft guarantees that for any term there is at most one leader. 
HPC Lab - CSE - HCMUT 14" }, { "page_index": 343, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_015.png", "page_index": 343, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:09+07:00" }, "raw_text": "(Raft algorithm) [Figure: state diagram - a node starts up as Follower; on timeout it starts an election and becomes Candidate; a Candidate times out into a new election, or receives votes from the majority and becomes Leader; a Candidate that discovers the current leader or a new term, and a Leader that discovers a new term, return to Follower] HPC Lab - CSE - HCMUT 15" }, { "page_index": 344, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_016.png", "page_index": 344, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:13+07:00" }, "raw_text": "Raft algorithm overview Every process starts as a follower A follower expects to receive a periodic heartbeat from the leader containing the election term the leader was elected in If the follower does not receive any heartbeat within a certain period of time, a timeout fires and the leader is presumed dead The follower starts a new election by incrementing the current election term and transitioning to the candidate state It then votes for itself and sends a request to all processes in the system to vote for it, stamping the request with the current election term. 
HPC Lab - CSE - HCMUT 16" }, { "page_index": 345, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_017.png", "page_index": 345, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:19+07:00" }, "raw_text": "Raft terms [Figure: term 1 (election, then normal operation), term 2, term 3 (no emerging leader), term 4] Outcome o The candidate wins the election: the candidate becomes a leader and starts sending out heartbeats to the other processes o Another process wins the election: in this case terms between processes are compared; if another process claims to be the leader with a term greater than or equal to the candidate's term, the candidate accepts the new leader and returns to the follower state o A period of time goes by with no winner: very unlikely, but if it happens, the candidate will eventually time out and start a new election Is the single-leader guarantee enough? o One way to avoid trouble with changing leaders is to use a fencing token (a number that increases every time a distributed lock is acquired - a logical clock). 
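The follower/candidate/leader transitions described above can be sketched as a tiny state machine. Timeouts and RPCs are modelled as plain method calls, and every name here is an illustrative assumption rather than the Raft paper's exact interface.

```python
FOLLOWER, CANDIDATE, LEADER = "follower", "candidate", "leader"

class RaftNode:
    """Minimal sketch of Raft's role transitions for one node."""

    def __init__(self, node_id, cluster_size):
        self.node_id = node_id
        self.cluster_size = cluster_size
        self.state = FOLLOWER
        self.term = 0
        self.votes = 0

    def on_heartbeat_timeout(self):
        """No heartbeat arrived: presume the leader dead, increment the
        term, become candidate, vote for ourselves, and request votes
        stamped with the new term."""
        self.term += 1
        self.state = CANDIDATE
        self.votes = 1                       # votes for itself
        return ("RequestVote", self.term, self.node_id)

    def on_vote_granted(self):
        """Count a vote; a majority (quorum) makes this node leader."""
        self.votes += 1
        if self.votes > self.cluster_size // 2:
            self.state = LEADER
        return self.state

    def on_message_with_term(self, term):
        """Seeing a leader claim with a greater-or-equal term: step down."""
        if term >= self.term:
            self.term = term
            self.state = FOLLOWER
        return self.state
```

In a 3-node cluster, a candidate's own vote plus one granted vote is already a quorum, matching the 'receives votes from majority' edge in the state diagram.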
HPC Lab - CSE - HCMUT 17" }, { "page_index": 346, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_018.png", "page_index": 346, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:29+07:00" }, "raw_text": "Raft log replication [Figure: log indexes 1-8; the leader's log holds the commands x<-3, y<-1, y<-9, x<-2, x<-0, y<-7, x<-5, x<-4 created in terms 1, 1, 1, 2, 3, 3, 3, 3; the followers hold prefixes of this log; committed entries are marked] o The leader is the only one that can make changes to the replica states o A log is created inside the leader and then replicated across the followers (log replication) o When the leader applies an operation to its local state, it appends a new log entry into its own log (the operation is logged) Logs are composed of entries, which are numbered sequentially. Each entry contains the term in which it was created (the number in each box) and a command for the state machine. An entry is considered committed if it is safe for that entry to be applied to state machines. 
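The commit rule above can be sketched leader-side: an entry is a (term, command) pair at a 1-based index, and it becomes committed once a majority of servers store it. This is a simplification (real Raft additionally restricts commitment to entries from the leader's current term), and the names are illustrative.

```python
class RaftLog:
    """Leader-side view of log replication: entries are (term, command),
    numbered from 1; an entry is committed once a majority of the
    cluster (leader included) has stored it."""

    def __init__(self, cluster_size):
        self.cluster_size = cluster_size
        self.entries = []        # index i (1-based) -> (term, command)
        self.acks = []           # how many servers store each entry
        self.commit_index = 0    # highest committed index so far

    def append(self, term, command):
        """Leader appends a new entry to its own log."""
        self.entries.append((term, command))
        self.acks.append(1)      # stored on the leader itself
        return len(self.entries)

    def on_follower_ack(self, index):
        """A follower reports it has replicated the entry at `index`;
        the commit index advances over every majority-stored prefix."""
        self.acks[index - 1] += 1
        majority = self.cluster_size // 2 + 1
        while (self.commit_index < len(self.entries)
               and self.acks[self.commit_index] >= majority):
            self.commit_index += 1   # now safe to apply to state machines
        return self.commit_index
```

Note the commit index only advances over a contiguous prefix, mirroring the figure: a later entry cannot be committed while an earlier one is still unreplicated.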
HPC Lab - CSE - HCMUT 8" }, { "page_index": 347, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_019.png", "page_index": 347, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:31+07:00" }, "raw_text": "Raft Follower candidate Leader We need a protocol structure to handle data consistency across multiple nodes HPC Lab - CSE - HCMUT 9" }, { "page_index": 348, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_020.png", "page_index": 348, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:34+07:00" }, "raw_text": "Raft Follower Follower Follower HPC Lab - CSE - HCMUT 20" }, { "page_index": 349, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_021.png", "page_index": 349, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:37+07:00" }, "raw_text": "Raft Follower candidate Follower HPC Lab - CSE - HCMUT 21" }, { "page_index": 350, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_022.png", "page_index": 350, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", 
"timestamp": "2025-11-01T09:55:40+07:00" }, "raw_text": "Raft Follower candidate Follower HPC Lab - CSE - HCMUT 22" }, { "page_index": 351, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_023.png", "page_index": 351, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:42+07:00" }, "raw_text": "Raft Follower Client Leader 3 3 Follower Set(3) AppendEntries HPC Lab - CSE - HCMUT 23" }, { "page_index": 352, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_024.png", "page_index": 352, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:46+07:00" }, "raw_text": "Raft Follower Set(3) client Leader 3 Follower Set(3) Set(3) HPC Lab - CSE - HCMUT 24" }, { "page_index": 353, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_025.png", "page_index": 353, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:49+07:00" }, "raw_text": "Raft Follower ACK Set(3) Client Leader ACK 3 Follower Set(3) Set(3) HPC Lab - CSE - HCMUT 25" }, { "page_index": 354, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": 
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_026.png", "page_index": 354, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:53+07:00" }, "raw_text": "Raft Follower 3 Set(3) Client Leader 3 3 Follower Set(3) 3 Set(3) Leader waits only for a majority (quorum) of followers to commit HPC Lab - CSE - HCMUT 26" }, { "page_index": 355, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_027.png", "page_index": 355, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:57+07:00" }, "raw_text": "Raft Follower 3 Set(3) Client Leader 3 3 Follower Set(3) 3 Set(3) HPC Lab - CSE - HCMUT 27" }, { "page_index": 356, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_028.png", "page_index": 356, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:55:59+07:00" }, "raw_text": "Byzantine HPC Lab - CSE - HCMUT 28" }, { "page_index": 357, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_029.png", "page_index": 357, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:03+07:00" }, "raw_text": "Byzantine generals The problem [Lamport 1982] three or more generals are to agree to attack or retreat o one (commander) issues the order the
others (lieutenants) decide one or more generals are treacherous (= faulty!) Propose attacking to one general, and retreating to another either commander or lieutenants can be treacherous! Requirements Termination, Agreement as before Integrity: If the commander is correct then all correct processes decide on the value proposed by commander. HPC Lab - CSE - HCMUT 29" }, { "page_index": 358, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_030.png", "page_index": 358, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:07+07:00" }, "raw_text": "Consensus in synchronous (1) system Uses basic multicast guaranteed delivery by correct processes assuming the sender does not crash Admits process crash failures assume up to f of the processes may crash How it works.. o f +1 rounds o relies on synchrony (timeout!) 
HPC Lab - CSE - HCMUT 30" }, { "page_index": 359, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_031.png", "page_index": 359, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:11+07:00" }, "raw_text": "Consensus in synchronous system (2) Initially o each process proposes a value from a set D Each process o maintains the set of values Vr known to it at round r In each round r, where 1 ≤ r ≤ f+1, each process o multicasts the values to each other (only values not sent before, Vr - Vr-1) In round f+1 o each process chooses the minimum of its set HPC Lab - CSE - HCMUT 31" }, { "page_index": 360, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_032.png", "page_index": 360, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:16+07:00" }, "raw_text": "Consensus in synchronous system (3) Crash failure: A server halts, but works correctly until it halts Arbitrary failure: A server may produce arbitrary responses at arbitrary times . Why it works? o sets timeout to maximum time for a correct process to multicast a message o can conclude a process crashed if no reply o if a process crashes, some value may not be forwarded... At round f+1 o all correct processes arrive at the same set of values o hence reach the same decision value (minimum) o at least f+1 rounds needed to tolerate f crash failures . What about arbitrary failures?
HPC Lab - CSE - HCMUT 32" }, { "page_index": 361, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_033.png", "page_index": 361, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:20+07:00" }, "raw_text": "Byzantine generals... Processes exhibit arbitrary failures o up to f of the N processes faulty In synchronous system o can use timeout to detect absence of a message o cannot conclude process crashed if no reply o impossibility with N ≤ 3f In asynchronous system o cannot use timeout to reliably detect absence of a message o impossibility with even one failure!!! o hence impossibility of reliable totally ordered multicast... HPC Lab - CSE - HCMUT 33" }, { "page_index": 362, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_034.png", "page_index": 362, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:24+07:00" }, "raw_text": "Impossibility with three generals Assume synchronous system o 3 processes, one faulty o if no message received, assume ⊥ o proceed in rounds o messages '3:1:u' meaning '3 says 1 says u' . Problem! '1 says v' and '3 says 1 says u' o cannot tell which process is telling the truth! o goes away if digital signatures used...
Show o no solution to agreement for N=3 and f=1 HPC Lab - CSE - HCMUT 34" }, { "page_index": 363, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_035.png", "page_index": 363, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:28+07:00" }, "raw_text": "Three Byzantine generals Faulty processes are shown in red commander commander 1:v 1:v 1:w 1:x 2:1:v 2:1:w 3:1:u 3:1:x Commander faulty p2 cannot tell which value was sent by the commander HPC Lab - CSE - HCMUT 35" }, { "page_index": 364, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_036.png", "page_index": 364, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:32+07:00" }, "raw_text": "Impossibility with three generals So, if a solution exists o p2 must decide on v when the commander is correct o and also when commander faulty (w), since cannot distinguish between the two scenarios o similarly, obtain p3 must decide on x when commander faulty Thus o no solution exists HPC Lab - CSE - HCMUT 36" }, { "page_index": 365, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_037.png", "page_index": 365, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:36+07:00" }, "raw_text": "But..
Solution exists for 4 processes with one faulty o commander sends value to each of the lieutenants o each lieutenant sends value it received to its peers o if commander faulty, then correct lieutenants have gathered all values sent by the commander o if one lieutenant faulty, then each correct lieutenant receives 2 copies of the value from the commander Thus o correct lieutenants can decide on majority of the values received Can generalize to N ≥ 3f+1 HPC Lab - CSE - HCMUT 37" }, { "page_index": 366, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_038.png", "page_index": 366, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:41+07:00" }, "raw_text": "Four Byzantine generals Faulty processes are shown in red commander commander 1:v 1:u 1:w 1:v 1:v 1:v 2:1:v 2:1:v 3:1:v 3:1:w 4:1:v 4:1:v 4:1:v 4:1:v 2:1:v 3:1:w 2:1:u 3:1:w p2 decides majority(v,u,v) = v p4 decides majority(v,u,v) = v p2, p3 and p4 decide ⊥ (no majority exists) HPC Lab - CSE - HCMUT 38" }, { "page_index": 367, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_039.png", "page_index": 367, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:44+07:00" }, "raw_text": "In asynchronous systems.. No guaranteed solution exists even for one failure!!! [Fischer, Lynch, Paterson 85] o does not mean never reach consensus in presence of failures o but that can reach it with positive probability But... o Internet asynchronous, exhibits arbitrary failures and uses consensus?
Solutions exist using o partially synchronous systems o randomization [Aspnes & Herlihy, Lynch, etc.] HPC Lab - CSE - HCMUT 39" }, { "page_index": 368, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_9/slide_040.png", "page_index": 368, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:47+07:00" }, "raw_text": "Chain replication Blockchain consensus mechanisms HPC Lab - CSE - HCMUT 40" }, { "page_index": 369, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_001.png", "page_index": 369, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:50+07:00" }, "raw_text": "Distributed Systems Replication Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab - CSE - HCMUT" }, { "page_index": 370, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_002.png", "page_index": 370, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:54+07:00" }, "raw_text": "Online payment Leader Replica 1 .. Replica N Server 1 Server 2 Server N Online Store Service Payment Service Payment Service Payment charge Success Number of replicas?
HPC Lab - CSE - HCMUT 2" }, { "page_index": 371, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_003.png", "page_index": 371, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:56:57+07:00" }, "raw_text": "Load balancing: replication Server 1 Server 2 Server 1 Server 2 HPC Lab - CSE - HCMUT 3" }, { "page_index": 372, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_004.png", "page_index": 372, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:00+07:00" }, "raw_text": "Replication Server 2 Server 1 HPC Lab - CSE - HCMUT" }, { "page_index": 373, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_005.png", "page_index": 373, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:04+07:00" }, "raw_text": "Reasons for replication Copy of the same data in multiple nodes . Databases (Apache Cassandra), file systems (NFS - Network File System)... A node that has a copy of the data is called a replica . If some replicas are faulty, others are still accessible .
Spread load across replicas If data does not change, data distribution becomes relatively easy HPC Lab - CSE - HCMUT 5" }, { "page_index": 374, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_006.png", "page_index": 374, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:08+07:00" }, "raw_text": "Reasons for replication Enhancing reliability . Protection against hardware crashes and corruption of data Improving performance Scaling in terms of numbers and geographical area Caching is a special form of replication Trade-off between performance and scalability. Challenges (Keep replicas consistent) When one copy is updated, other copies need to be updated, as well A number of consistency models Protocols for distribution of updates HPC Lab - CSE - HCMUT 6" }, { "page_index": 375, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_007.png", "page_index": 375, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:12+07:00" }, "raw_text": "System performance and replication Replication improves server performance, particularly reliability and availability However, if data is not replicated properly, this can lead to a system that cannot be utilized in practice. 
HPC Lab - CSE - HCMUT /" }, { "page_index": 376, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_008.png", "page_index": 376, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:16+07:00" }, "raw_text": "Replication caching (1) 1 2 3 4 The idea of keeping around Latency Server pieces of information that you utilize frequently Cache 2 3 1 2 4 Client 1 Client 2 Client 3 HPC Lab - CSE - HCMUT 8" }, { "page_index": 377, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_009.png", "page_index": 377, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:20+07:00" }, "raw_text": "Replication caching (2) New update 4 Publish and subscribe architecture Latency Server e.g., trigger on events Update/Refresh 3 4 5 6 5 Client 1 Client 2 Client 3 HPC Lab - CSE - HCMUT" }, { "page_index": 378, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_010.png", "page_index": 378, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:27+07:00" }, "raw_text": "New update Replication caching (3) 3 4 5 6 Cache eviction policies Latency Server First-in, First out (FIFO) - evict or overwrite whatever has been sitting in the cache the longest Random eviction - adding new data to the cache and overwriting old data 
at random Least recently used (LRU) - evicting the data item that has gone the longest untouched Cache order? In the context of distributed systems, Content Distribution Networks (CDNs): computers around the world that contain copies of popular websites (in cache) Self-organizing lists: principle that suggests that chaos is one of the most well-designed and efficient structures available Cache replacement (or cache eviction) Analogy: the pile of papers on your desk HPC Lab - CSE - HCMUT 10" }, { "page_index": 379, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_011.png", "page_index": 379, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:31+07:00" }, "raw_text": "a = 1 Data changes Consistency Client Server Tracking and managing changes on the data is a key problem in replication. a = a + 1 The system must identify which a = 2 requests have already been seen. a = a + 1 a = 3 Follow Success HPC Lab - CSE - HCMUT" }, { "page_index": 380, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_012.png", "page_index": 380, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:36+07:00" }, "raw_text": "Idempotence Idempotence: an operation that has no additional effect on the final result even when applied multiple times.
o Requests can be retried without deduplication o A function f is idempotent if f(x) = f(f(x)) o Not idempotent: f(likeCount) = likeCount + 1 o Idempotent: f(likeSet) = likeSet ∪ {userID} Choice of retry behavior: o At-most-once semantics: send request, don't retry, update may not happen o At-least-once semantics: retry request until acknowledged, may repeat update o Exactly-once semantics: retry + idempotence or deduplication. HPC Lab - CSE - HCMUT 12" }, { "page_index": 381, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_013.png", "page_index": 381, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:40+07:00" }, "raw_text": "Adding and then removing again (1) Server Client 1 Client 2 Set of likes [...] f: add like (A) Set of likes [A] f(A) = Set of likes ∪ {A}; g: unlike(A) g(A) = Set of likes - {A}; f: add like (A) Success (ACK) idempotent?
Success (ACK) HPC Lab - CSE - HCMUT 3" }, { "page_index": 382, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_014.png", "page_index": 382, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:43+07:00" }, "raw_text": "Adding and then removing again (2) Client 1 Server 1 Server 2 add(A) add(A) X x HPC Lab - CSE - HCMUT 14" }, { "page_index": 383, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_015.png", "page_index": 383, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:46+07:00" }, "raw_text": "Adding and then removing again (3) Client 1 Server 1 Server 2 add(A) add(A) remove(A) remove(A) x x HPC Lab - CSE - HCMUT l 5" }, { "page_index": 384, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_016.png", "page_index": 384, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:50+07:00" }, "raw_text": "Adding and then removing again (4) Client 1 Server 1 Server 2 (t1, add(A) \"remove(A)\" does not A -> (ty true actually remove x: it (t1, add(A) labels A with \"false\" to A -> (ty true (t,, remove(A)) indicate it is invisible (a tombstone) A -> (t2, false (t2, remove(A)) X x HPC Lab - CSE - HCMUT 6" }, { "page_index": 385, "chapter_num": 10, "source_file": 
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_017.png", "page_index": 385, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:57:56+07:00" }, "raw_text": "Replication issues Main issue . To keep replicas consistent, we generally need to ensure that all conflicting operations are done in the same order everywhere Conflicting operations . Read-write conflict: a read operation and a write operation act concurrently . Write-write conflict: two concurrent write operations Issue . Guaranteeing global ordering on conflicting operations may be a costly operation, downgrading scalability o Keep the order for all replicas o Solution: weaken consistency requirements so that hopefully global synchronization can be avoided. HPC Lab - CSE - HCMUT 17" }, { "page_index": 386, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_018.png", "page_index": 386, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:00+07:00" }, "raw_text": "Reconciling replicas Replicas periodically communicate among themselves to check for any inconsistencies.
(Anti-entropy protocol) Server 1 Server 2 t1 < t2 A -> (t1, true) A -> (t2, false) A -> (t2, false) A -> (t2, false) Update the record (data item) with the latest timestamp HPC Lab - CSE - HCMUT 18" }, { "page_index": 387, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_019.png", "page_index": 387, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:04+07:00" }, "raw_text": "Concurrent writes by different clients Client 1 Server 1 Server 2 Client 2 (t1, set(A=1)) (t2, set(A=2)) (t1, set(A=1)) (t2, set(A=2)) Two common approaches Last writer wins (LWW) o Use timestamps with total order (e.g., Lamport clock) Multi-value register o Use timestamps with partial order (e.g., Vector clock) HPC Lab - CSE - HCMUT 19" }, { "page_index": 388, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_020.png", "page_index": 388, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:09+07:00" }, "raw_text": "Replica failure Observation . A replica may be unavailable due to network partition or node fault, etc.
Assume o Each replica has a probability p of being unavailable (faulty) o Probability of all n replicas being faulty: p^n o Probability of >= 1 out of n replicas being faulty: 1 - (1 - p)^n Example with p=0.01 Replicas n P (>=1 faulty) P (>= (n+1)/2 faulty) P (all n faulty) 1 0.01 0.01 0.01 3 0.03 3 x 10^-4 10^-6 5 0.049 1 x 10^-5 10^-10 100 0.63 6 x 10^-74 10^-200 HPC Lab - CSE - HCMUT 20" }, { "page_index": 389, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_021.png", "page_index": 389, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:13+07:00" }, "raw_text": "Read-after-write consistency Server 1 Server 2 Client 1 a = 1 a = 1 (t1, set(a=2)) Write (a, 2) t1 (t1, set(a=2)) a = 2 x get(a) Read (a) t2 x get(a) (1, t0) a = 1 HPC Lab - CSE - HCMUT 21" }, { "page_index": 390, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_022.png", "page_index": 390, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:17+07:00" }, "raw_text": "Replicated-write protocols (1) Write operations can be carried out at multiple replicas Active replication Each replica has an associated process carrying out update operations Drawback: operations need to be carried out in same order everywhere Lamport timestamps (does not scale well in large systems) Central coordinator (sequencer) Problem with central coordinator: replicated invocations Coordinator in each replica for managing invocations and replies HPC Lab - CSE - HCMUT 22" }, { "page_index": 391, "chapter_num": 10,
"source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_023.png", "page_index": 391, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:22+07:00" }, "raw_text": "Quorum-based protocols Client 1 Server 1 Server 2 Server 3 Majority (among the replicas) based mechanisms Ex: 2 out of 3 quorum a = 1 a = 1 a = 1 (t1, set(a=2)) Write (a, 2) t1 a = 2 a = 2 ok ok Client looks at the timestamp to figure out the latest value get(a) Read (a) (1, t0) (2, t1) a = 2 HPC Lab - CSE - HCMUT 23" }, { "page_index": 392, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_024.png", "page_index": 392, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:26+07:00" }, "raw_text": "Quorum-based protocols (1) Clients request and acquire permission of multiple servers before either reading or writing a replicated data item Gifford's scheme for N replicas .
Read quorum: client needs permission from arbitrary Nr servers Prevents read-write conflicts Write quorum: client needs permission from arbitrary Nw servers Prevents write-write conflicts Constraints: o Nr + Nw > N o Nw > N/2 HPC Lab - CSE - HCMUT 24" }, { "page_index": 393, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_025.png", "page_index": 393, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:30+07:00" }, "raw_text": "Quorum-based protocols (2) Read quorum and write quorum share >= 1 replica Server 1 Server 2 Server 3 Server 4 Server 5 Read quorum Write quorum HPC Lab - CSE - HCMUT 25" }, { "page_index": 394, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_026.png", "page_index": 394, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:37+07:00" }, "raw_text": "Quorum-based protocols (3) Read quorum A B C D A B C D A B C D E F G H E F G H E F G H I J K L I J K L I J K L NR=3, Nw=10 NR=7, Nw=6 NR=1, Nw=12 Write quorum (a) (b) (c) a) Correct choice of read and write set b) May lead to write-write conflicts c) Correct choice known as ROWA (read one, write all) HPC Lab - CSE - HCMUT 26" }, { "page_index": 395, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_10/slide_027.png", "page_index": 395, "language": "en", "ocr_engine": "PaddleOCR 3.2",
"extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:42+07:00" }, "raw_text": "Read repair Client 1 Server 1 Server 2 Server 3 a = 1 a = 1 a = 1 get(a) Read (a) (1, t0) (2, t1) t0 < t1: a = 2 (t1, set(a=2)) Write (a, 2) Client-support o Update (t1, 2) is more recent than (t0, 1) since t0 < t1 o Client helps propagate (t1, 2) to other replicas. HPC Lab - CSE - HCMUT 27" }, { "page_index": 396, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_001.png", "page_index": 396, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:45+07:00" }, "raw_text": "Synchronous & Asynchronous Computations Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab-CSE-HCMUT" }, { "page_index": 397, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_002.png", "page_index": 397, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:50+07:00" }, "raw_text": "A computation that can obviously be divided into a number of completely independent parts, each of which can be executed by a separate process(or) Input data Bulk Synchronous Parallel (BSP) Processes Results No communication or very little communication between processes; Each process can do its tasks without any interaction with other processes.
Data parallel computation HPC Lab-CSE-HCMUT 2" }, { "page_index": 398, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_003.png", "page_index": 398, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:58:56+07:00" }, "raw_text": "BSP Superstep 1: Computation > Communication > Synchronization Barrier (a superstep consists of these three small tasks; the barrier synchronization makes sure that all processes have done their computation + communication) ... Superstep n: Computation > Communication > Synchronization Barrier HPC Lab-CSE-HCMUT" }, { "page_index": 399, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_004.png", "page_index": 399, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:59:00+07:00" }, "raw_text": "Synchronous computations In a (fully) synchronous application, all the processes are synchronized at regular points. Barrier A basic mechanism for synchronizing processes - inserted at the point in each process where it must wait; All processes can continue from this point when all the processes have reached it (or, in some implementations, when a stated number of processes have reached this point).
HPC Lab-CSE-HCMUT" }, { "page_index": 400, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_005.png", "page_index": 400, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:59:03+07:00" }, "raw_text": "Processes reaching barrier at different times [figure: processes P0, P1, ..., Pn-1 arrive at the barrier at different times; active and waiting time shown] Barrier HPC Lab-CSE-HCMUT 5" }, { "page_index": 401, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_006.png", "page_index": 401, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:59:08+07:00" }, "raw_text": "Message-passing In message-passing systems, barriers are provided with library routines [figure: P0, P1, ... each call Barrier() and wait until all processes reach their barrier call] MPI: MPI_Barrier() with a named communicator being the only parameter Called by each process in the group, blocking wait until all members of the group have reached the barrier call and only returning then.
HPC Lab-CSE-HCMUT" }, { "page_index": 402, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_007.png", "page_index": 402, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:59:11+07:00" }, "raw_text": "Synchronized Computations Can be classified as: Fully synchronous In fully synchronous, all processes involved in the computation must be synchronized Locally synchronous In locally synchronous, processes only need to synchronize with a set of logically nearby processes, not all processes involved in the computation HPC Lab-CSE-HCMUT 7" }, { "page_index": 403, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_008.png", "page_index": 403, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:59:15+07:00" }, "raw_text": "Fully synchronized computation examples Data Parallel Computations Same operation performed on different data elements simultaneously; i.e., in parallel. Particularly convenient because: Ease of programming (essentially only one program) Can scale easily to larger problem sizes Many numeric and some non-numeric problems can be cast in a data parallel form.
HPC Lab-CSE-HCMUT 8" }, { "page_index": 404, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_009.png", "page_index": 404, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:59:21+07:00" }, "raw_text": "Example To add the same constant to each element of an array: a[] = a[] + k; for (i=0; i [figure residue: instruction applied to all elements in parallel; use old value; stages of pairwise additions over elements 0..15; O(logn)] HPC Lab-CSE-HCMUT 13" }, { "page_index": 409, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_014.png", "page_index": 409, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:00:09+07:00" }, "raw_text": "Data parallel example - prefix sum problem 1. Parallel_Prefix_sums (n, x[]) { 2. for (int i = 0; i < ceil(logn); i++) 3. forall (int j = 0; j < n; j++) 4. if (j >= 2^i) 5. x[j] = x[j] + x[j - 2^i]; 6. return x[]; 7. } O(logn) [figure: x0..x15; stages 1..4 of pairwise additions]
HPC Lab-CSE-HCMUT 14" }, { "page_index": 410, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_015.png", "page_index": 410, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:00:14+07:00" }, "raw_text": "Synchronous Iteration (Synchronous Parallelism) Each iteration composed of several processes that start together at beginning of iteration. Next iteration cannot begin until all processes have finished previous iteration. Using forall: for (j=0; j 9. Recv(&l, Pi-1,j); 10. Recv(&r, Pi+1,j); 11. Recv(&d, Pi,j-1); 12. Recv(&u, Pi,j+1); (hi-1,j <-> l, hi+1,j <-> r, hi,j-1 <-> d, hi,j+1 <-> u) 13. } while (!converged(i, j) && (k < Max_loop)); 14. Send(&h, &i, &j, &k, Pmaster); HPC Lab-CSE-HCMUT 20" }, { "page_index": 416, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_021.png", "page_index": 416, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:00:54+07:00" }, "raw_text": "Asynchronous Computations HPC Lab-CSE-HCMUT 21" }, { "page_index": 417, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_022.png", "page_index": 417, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:00:58+07:00" }, "raw_text": "Asynchronous computations Computations in
which individual processes operate without needing to synchronize with other processes. Asynchronous computations are important because synchronizing processes is an expensive operation which very significantly slows the computation - A major cause of reduced performance in parallel programs is the use of synchronization Global synchronization is done with barrier routines. Barriers cause processors to wait, sometimes needlessly HPC Lab-CSE-HCMUT 22" }, { "page_index": 418, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_023.png", "page_index": 418, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:01:02+07:00" }, "raw_text": "Heat distribution problem (Locally synchronous computation) An area has known temperatures along each of its edges Find the temperature distribution within Divide area into fine mesh of points Temperature at an inside point taken to be average of temperatures of four neighboring points. Convenient to describe edges by points. Temperature of each point by iterating the equation: h(i,j) = (h(i-1,j) + h(i+1,j) + h(i,j-1) + h(i,j+1)) / 4 (0 < i < n, 0 < j < n) for a fixed number of iterations or until the difference between iterations is less than some very small amount. HPC Lab-CSE-HCMUT 23" }, { "page_index": 419, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_11/slide_024.png", "page_index": 419, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:01:12+07:00" }, "raw_text": "Sequential algorithms 1. Seq_heat_distribution_ver1 ( { 1.
Seq_heat_distribution_ver2 ( { 2. do { 2. do { 3. for (k=0; k= gradient computation cycle; -> = model update / optimization backpropagation can be performed in parallel, to gather more information about the loss function faster Most transformations applied to a specific training sample in deep neural networks do not involve data from other samples Sum of per-parameter gradients computed using subsets (x1, ..., xn) of a mini-batch (x) matches the per-parameter gradients for the entire input batch: dL(x;w)/dw = dL(x1;w)/dw + ... + dL(xn;w)/dw HPC Lab - CSE-HCMUT 14" }, { "page_index": 447, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_015.png", "page_index": 447, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:03:28+07:00" }, "raw_text": "Centralized & Decentralized optimization Centralized optimization The optimization cycle is executed in a central machine while the gradient computation code is replicated onto the remaining cluster nodes Decentralized optimization Both cycles are replicated in each cluster node and some form of synchronization is realized that allows the distinct optimizers to act cooperatively HPC Lab - CSE-HCMUT 15" }, { "page_index": 448, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_016.png", "page_index": 448, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:03:35+07:00" }, "raw_text": "Centralized optimization [figure: Parameter Server with Optimizer and Model; Worker 1..Worker n, each with Model and Data Source] A single optimizer instance (often called parameter server) is responsible for updating a
specific model parameter Parameter servers depend on the gradients computed by workers that perform backpropagation (workers) Depending on whether computations across workers are scheduled synchronously or asynchronously, this can have different effects on the optimization. The blue process -> computes per-parameter gradients based on the current model parameters by applying backpropagation on mini-batches drawn from the training data The optimization cycle -> consumes these gradients to determine model parameter updates HPC Lab - CSE-HCMUT 16" }, { "page_index": 449, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_017.png", "page_index": 449, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:03:42+07:00" }, "raw_text": "Decentralized optimization (1) Each worker independently probes the loss function to find gradient descent trajectories to minima that have good generalization properties To arrive at a better joint model, some form of arbitration is necessary to bring the different views into alignment. [figure: Master and Worker 1..Worker n, each with Optimizer, Model and Data Source] > In this figure, we assume the existence of a dedicated master node, which processes the individual parameter adjustments suggested by the workers and comes up with a new global model state that is then shared with them The blue process -> computes per-parameter gradients based on the current model parameters by applying backpropagation on mini-batches drawn from the training data The optimization cycle -> consumes these gradients to determine model parameter updates HPC Lab - CSE-HCMUT 17" }, { "page_index": 450, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_018.png", "page_index": 450, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:03:50+07:00" }, "raw_text": "Decentralized optimization (2) Multiple independent entities concurrently try to solve a similar but not exactly the same problem The loss function in deep learning is usually non-trivial Workers find different descent trails more appealing and converge towards different local minima Over time, the workers diverge and eventually arrive at incompatible models - models that cannot be merged without destroying the accumulated information DDLS that rely on decentralized optimization have to take measures to limit divergence [figure: master and workers moving across a loss surface with a local maximum and local minimum; annotation: multiple independent entities concurrently try to merge => not convergent] The master and all workers start from the same model state w0 on the yellow plateau (=high loss) Exploration phase The workers iteratively evaluate the loss function using different mini-batches and independently update their local models Exploitation phase Each worker shares its model updates with the master node, which in turn merges the updates (to become a new model) to distill latent parameter adjustments that have worked better on average across the investigated portion of the training dataset > A revised new global state w1 is then shared with the workers, which use it as the starting point of the next exploration phase" }, {
"page_index": 451, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_019.png", "page_index": 451, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:03:56+07:00" }, "raw_text": "Synchronous & asynchronous systems Synchronous systems o In bulk synchronous (or simply synchronous) systems, computations across all workers occur simultaneously Global synchronization barriers ensure that individual worker nodes do not progress until the remaining workers have reached the same state Asynchronous systems (annotation: free to do something; problem: not convergent) o Asynchronous systems take a more relaxed approach to organizing collaborative training and avoid delaying the execution of a worker to accommodate other workers (i.e. the workers are allowed to operate at their own pace) Bounded asynchronous systems A hybrid approach between the two archetypes above They operate akin to centralized asynchronous systems, but enforce rules to accommodate workers progressing at different paces The workers operate asynchronously with respect to each other, but only within certain bounds HPC Lab - CSE-HCMUT 19" }, { "page_index": 452, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_020.png", "page_index": 452, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:04:02+07:00" }, "raw_text": "Centralized Synchronous (1) systems parameter server, many workers.
workers share the model from the parameter server Centralized systems Model training is split between the workers (=gradient computation) and the parameter servers (=model update) Synchronization: Training cannot progress without a full parameter exchange between the parameter server and its workers The parameter server is dependent on the gradient input to update the model The workers are dependent on the updated model in order to further investigate the loss function The cluster as a whole cyclically transitions between phases, during which all workers perform the same operation HPC Lab - CSE-HCMUT 20" }, { "page_index": 453, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_021.png", "page_index": 453, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:04:12+07:00" }, "raw_text": "Centralized Synchronous (2) systems E.g.: dL(xD;w)/dw = dL(xD1;w)/dw + ... + dL(xDn;w)/dw (2) [figure: Parameter Server and Worker 1..Worker 3; each worker i computes dL(xDi;w)/dw; server updates w <- w - eta * sum of gradients] Each training cycle begins with the workers downloading new model parameters (w) from the parameter server Workers locally sample a training mini-batch (x ~ Di) and compute per-parameter gradients Workers share their gradients with the parameter server (synchronous) The parameter server aggregates the gradients from all workers and injects the aggregate into an optimization algorithm to update the model. PARAMETER SERVER PROGRAM Require: initial model w0, learning rate eta, number of workers n 1: for t <- 0,1,2,... do 2: Broadcast wt 3: Await gradients gt(i) from all workers 4: w(t+1) <- wt - eta * sum(i=1..n) gt(i) 5: end for WORKER PROGRAM Require: training data source Di 1: for t <- 0,1,2,... do
2: Await wt 3: Sample mini-batch x ~ Di 4: gt(i) <- dL(x;wt)/dwt 5: Send gt(i) to parameter server 6: end for HPC Lab - CSE-HCMUT 21" }, { "page_index": 454, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_022.png", "page_index": 454, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:04:19+07:00" }, "raw_text": "Centralized Synchronous (3) systems The next training step can only be conducted once all workers have completed their assigned task and submitted gradients A majority of cluster machines always has to wait for stragglers [34] Assuming appropriate and sufficient random sampling, larger mini-batches may represent the training distribution better [31] Optimizing a model using mini-batches with a large coverage of the training distribution tends to get trapped in sharp minima basins of the loss function Relaxed solutions The training set is (1) large enough, (2) reasonably well-balanced, and (3) sufficiently randomly distributed among the workers Minor portions of the training data are absent such that this requirement can be relaxed Ending training epochs once 75% of all training samples have been processed [23] Over-provision by allocating more workers and ending each gradient aggregation phase once a quorum has been reached [17] HPC Lab - CSE-HCMUT 22" }, { "page_index": 455, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_023.png", "page_index": 455, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:04:27+07:00" }, "raw_text": "Broadcast Map
Reduce Broadcast Map [figure: Master w0 and Worker 1..Worker m; broadcast / map / reduce phases; workers compute argmin L(x~Di;w); master averages (1/n) * sum(i=1..n) wi] Decentralized Synchronous (1) systems Rely on decentralized optimization independently conduct model training in each worker Workers do not exchange parameters to further model training, but rather to share the independent findings from each worker with the rest of the cluster to determine descent trajectories with good generalization properties Workers operate in phases separated by global synchronization barriers (annotations: more relaxed on exploration state; good for hardware when training) MASTER PROGRAM Require: initial model state w0, number of workers n 1: t <- 0 2: loop 3: Broadcast wt 4: Await w(t+tau,i) from all workers 5: w(t+tau) <- (1/n) * sum(i=1..n) w(t+tau,i) 6: t <- t + tau 7: end loop WORKER PROGRAM Require: training data source Di, learning rate eta 1: t <- 0 2: loop 3: Await wt 4: w(t,i) <- wt 5: for u <- t+1, t+2, ..., t+tau do 6: Sample mini-batch x ~ Di 7: w(u,i) <- w(u-1,i) - eta * dL(x;w(u-1,i))/dw 8: end for 9: Send w(t+tau,i) to master 10: t <- t + tau 11: end loop The decentralized synchronous system SparkNet [10]" }, { "page_index": 456, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_024.png", "page_index": 456, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:04:33+07:00" }, "raw_text": "Decentralized Synchronous (2) systems The initial model parameters (w0) are distributed among the workers to initialize the local models (wi) The master node acts as a synchronization conduit Exploration phase Each worker Randomly samples mini-batches from their locally available partition of the training dataset Determines per-parameter gradients and adjusts their model to minimize the loss function (L) This process is repeated tau times, during which each worker independently trains its local model in isolation Due to the different properties of the mini-batches, each worker eventually arrives at a slightly better (w.r.t. L), but different model Exploitation phase The worker models (wi) are merged to form a new joint model (w(t+tau)) HPC Lab - CSE - HCMUT 24" }, { "page_index": 457, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_025.png", "page_index": 457, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:04:40+07:00" }, "raw_text": "Decentralized Synchronous (3) systems tau determines how much time should be spent on improving the local models versus synchronizing the states across machines Keeping the local optimizers running for too long will result in reduced convergence performance or even setbacks if the worker models diverge too far To make the best use of the cluster GPUs => large tau Limit the amount of independent exploration steps (tau) [10, 25] Often lead to sub-optimal convergence rates [11] Convergence => small tau The best rate of convergence for a given model can typically be achieved if tau is rather small (tau < 10) [8, 10] Any choice of tau represents the dilemma of finding a balance between harnessing the benefits from having more computational resources and the need to limit divergence among workers Practically motivated suggestions such as to aim for a 1:5 communication-to-computation ratio (83.3% GPU utilization; [10]) may serve as a starting point for hyper-parameter search and to determine whether efficient decentralized optimization is possible at all using a
certain configuration. HPC Lab - CSE-HCMUT 25" }, { "page_index": 458, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_026.png", "page_index": 458, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:04:46+07:00" }, "raw_text": "Centralized Asynchronous (1) systems Asynchronous systems Each worker acts alone Centralized systems Each worker shares its gradients with the parameter server once a mini-batch has been processed Centralized Asynchronous DDLS [14] [17] [19] [21] [35] Instead of waiting for other workers to reach the same state, the parameter server eagerly injects received gradients into the optimization algorithm to train the model Each update of the global model is only based on the gradient input from a single worker Similar to the eager aggregation mechanisms Instead of discarding the results from all remaining workers and losing the invested computational resources, each worker is allowed to simply continue using its locally cached stale version of the model parameters HPC Lab - CSE-HCMUT 26" }, { "page_index": 459, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_027.png", "page_index": 459, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:04:56+07:00" }, "raw_text": "Centralized Asynchronous (2) systems [figure: Parameter Server and Worker 1..Worker 3, each computing dL(x;w)/dw; annotation: fast => run many steps] Problem of this.
(annotation: a very slow worker gets stuck at the first step, using a very old value) PARAMETER SERVER PROGRAM Require: initial model state w0, learning rate eta 1: w <- w0 2: Distribute w 3: for t <- 0,1,2,... do 4: if received gradients g(t-tau,i) from worker i, with a delay of tau steps then 5: w(t+1) <- wt - eta * g(t-tau,i) 6: Send w(t+1) to worker i 7: end if 8: end for WORKER PROGRAM Require: training data source Di 1: for t <- 0,1,2,... do 2: Await current parameter server model w 3: wi <- w 4: Sample mini-batch x ~ Di 5: gi <- dL(x;wi)/dwi 6: Send gi to parameter server 7: end for Each worker approaches the parameter server at its own pace to offer gradients, after which the global model is updated immediately, and requests updated model parameters Each worker maintains a separate parameter exchange cycle with the parameter server There is no interdependence between workers; situations where straggler nodes delay the execution of other workers cannot happen. HPC Lab - CSE-HCMUT 27" }, { "page_index": 460, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_028.png", "page_index": 460, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:05:05+07:00" }, "raw_text": "Centralized Asynchronous (3) systems [figure: model states w0..w5 along a descent trajectory between a local maximum and a local minimum; Worker 1 and Worker 2 gradients] For this system to work, choosing the results from one worker over another must not introduce a bias that significantly changes the shape of the loss function Thus, on average, the mini-batches sampled by each worker have to mimic the properties of the training distribution reasonably well At any point in time, only a single worker is in possession of the most recent version of the model Other workers only possess stale variants that represent the state of the parameter server during their last interaction with it Any gradients that they produce are relevant to the shape of the loss function around that stale model representation Staleness has serious implications on model training [37] In a cluster with multiple asynchronous workers, the next worker to exchange parameters is usually stale by some amount of update steps In the figure, both workers start from the same model state (w0), but draw different mini-batches from the same distribution Worker 1: send gradients(w0), PS(w0->w1), receive w1 Worker 2: send gradients(w0), PS(w1->w2), receive w2 Worker 1: send gradients(w1), PS(w2->w3), receive w3 Worker 2: send gradients(w2), PS(w3->w4), receive w4 HPC Lab - CSE-HCMUT 28" }, { "page_index": 461, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_029.png", "page_index": 461, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:05:11+07:00" }, "raw_text": "Centralized Asynchronous (4) systems Fair scheduling is undesirable in practice The slowest machine would hold back faster machines, which is exactly the situation that asynchronous systems try to avoid Gradients from severely eclipsed workers can confuse the parameter server's optimizer Can set back training or even destroy the model To avoid compounding delays The parameter server typically places workers that indicated their readiness to upload gradients in a priority queue based on their staleness [17], [19], [42] To protect against adverse influences from severe stragglers, some systems allow defining conditions that must be fulfilled before queued requests can be processed by the parameter server These conditions typically take the form of either a value or delay bound.
HPC Lab - CSE-HCMUT 29" }, { "page_index": 462, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_030.png", "page_index": 462, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:05:18+07:00" }, "raw_text": "Bounded Asynchronous systems Value bounds Delay bounds The parameter server maintains a copy of all (e.g. the Stale Synchronous Parallel [35] versions of the model currently in use across the Each worker (i) maintains a separate clock (ti) 0 cluster Whenever a worker submits gradients to the parameter server, t' is increased w.: he most recent model If the clock of a worker differs from that of the slowest worker by more than s steps, it is delayed until the slow worker has caught up currently not known by the slowest worker If a worker triggers an update that leads to a If a worker downloads the current global model it violation of some value bound is ensured that this model includes all local (i.e. Ilwt -wt-s Il.. max ), it is delayed until the updates and may also contain updates from other value bound condition holds again workers within a range of /ti-s, t'+s-1/ update Choosing a reliable metric and limit for a value steps. 
bound can be difficult [38] The magnitude of future model updates is largely unknown Adjustment during training HPC Lab - CSE-HCMUT 30" }, { "page_index": 463, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_031.png", "page_index": 463, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:05:24+07:00" }, "raw_text": "Decentralized Asynchronous (1) systems The workers act independently Combining master (w) and worker models (wi)in such a and continue to explore the loss setting is to apply linear interpolation as function based on a model that is wi wi - a(wi - w detached from the master's and a > β (1) w w+ B(w- w) current state The workers cannot replace their ([11], [25], [29], [30], [42], [43] model parameters upon The worker model is displaced towards the master model's state at a rate of a times their relative distance completing a parameter The master model is displaced in the opposite direction at a exchange with the master node rate of B Instead, workers have to merge This operation is equivalent to temporarily extending the loss the respective asynchronously function with the squared I2-norm of the difference between gathered information both models. 
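The linear interpolation rule above can be checked numerically; the scalar models and the rates (alpha > beta) are illustrative choices of ours:

```python
# One parameter-exchange step of the interpolation rule: the worker model wi
# moves towards the master model wm at rate alpha, the master moves towards
# the worker at the smaller rate beta; both use the same pre-update distance.

def elastic_exchange(wi, wm, alpha=0.5, beta=0.1):
    diff = wi - wm  # relative distance, computed once before either update
    return wi - alpha * diff, wm + beta * diff

wi, wm = 4.0, 0.0
wi, wm = elastic_exchange(wi, wm)
print(wi, wm)  # -> 2.0 0.4 (worker pulled towards master, master nudged back)
```

Iterating this exchange shrinks the gap between the two models, mirroring the interpretation as a temporary squared-l2-norm penalty on their difference.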
" }, { "page_index": 464, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_032.png", "page_index": 464, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:05:34+07:00" }, "raw_text": "Decentralized Asynchronous systems (2): the decentralized asynchronous system Elastic Averaging SGD (EASGD) [30]. Once t iterations have been completed by a worker, the master node's current model w~ is downloaded and the penalization term di = alpha(wi - w~) is computed and applied to the local model. Then di is transferred to the master node, which applies the inverse operation to w~. A symmetric elastic force between each worker and the master node thus equally attracts both models.
MASTER PROGRAM. Require: initial model state w0.
1: w~ <- w0
2: loop
3: if received download request from worker i then
4: upload w~ to worker i
5: end if
6: if received di from worker i then
7: w~ <- w~ + di
8: end if
9: end loop
WORKER PROGRAM (worker i). Require: training data Di, learning rate eta, penalization coefficient alpha, parameter sharing interval T, initial model state w0.
1: wi <- w0
2: for ti <- 1, 2, ... do
3: if ti mod T = 0 then
4: download w~ from master
5: di <- alpha(wi - w~)
6: wi <- wi - di
7: upload di to master
8: end if
9: sample mini-batch x from Di
10: wi <- wi - eta * dC(x; wi)/dwi
11: end for " }, { "page_index": 465, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_033.png", "page_index": 465, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:05:42+07:00" }, "raw_text": "Decentralized Asynchronous systems (3). The individual models (workers and master) evolve side-by-side in parallel; note that there is no direct interaction between workers. Stability is maintained by the penalization coefficients (alpha and beta) in combination with the length of the isolated learning phases (t). The optimizer hyper-parameters alpha, beta and t are inter-dependent and must be weighted carefully for each training task and cluster setup to constrain how far individual workers can diverge from the master and one another. The communication demand with the master node scales roughly linearly with the number of workers; to avoid congestion-induced delays due to network I/O bandwidth limitations at the master, t must be scaled accordingly [11]. Long phases of isolated training can severely hamper convergence due to the increasing incompatibilities in the models. " }, { "page_index": 466, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_034.png", "page_index": 466, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:05:45+07:00" }, "raw_text": "Communication Patterns (CP)" }, { "page_index": 467, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_035.png", "page_index": 467, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:05:51+07:00" }, "raw_text": "CP in centralized systems. The parameter server is a bottleneck: with 1 parameter server and n workers, it sends and receives n * ||w|| parameters. In bulk-synchronous systems, parameter up- and downloads occur sequentially, implying a communication delay of at least 2nTw + (n - 1)Rw + Uw in theory, where Tw, Rw and Uw respectively denote the time required to transmit, reduce and update ||w|| parameters (i.e. 
the model). Efficient collective communication: a binomial tree [15] has lower bound 2*ceil(log2(n+1))*Tw + ceil(log2(n))*Rw + Uw; a scatter-reduce/broadcast algorithm [44] has lower bound 2((n-1)/n)*Tw + ((n-1)/n)*Rw + Uw. Asynchronous communication: the minimum communication delay per worker is 2Tw + Uw; a full parameter exchange with all workers takes nTw + Uw; parameter exchange requests from individual workers can be overlapped if n > 1. " }, { "page_index": 468, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_036.png", "page_index": 468, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:05:56+07:00" }, "raw_text": "Parameter server (1). If the parameter server is a bottleneck, it is highly desirable to distribute this role [21]. Most gradient-descent-based optimization algorithms can be executed independently for each model parameter, which permits almost arbitrary slicing. In practice this freedom is limited by the overheads incurred from peering with each additional endpoint to complete a parameter exchange [23]. In asynchronous systems, congestion-free k : w communication (i.e. 
between k parameter servers and w workers) is more difficult to realize, because the workers operate largely at their own pace [25]. Additional limitations apply if training depends on hyper-parameter schedules that must be coordinated across parameter servers, or if reproducibility is desired [21]. " }, { "page_index": 469, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_037.png", "page_index": 469, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:06:03+07:00" }, "raw_text": "Parameter server (2). A popular variant of the multi-parameter-server approach is to migrate the parameter server role into the worker nodes [17], [19], [27], such that all nodes are workers but also act as parameter servers (i.e. k = w). Each worker is responsible for maintaining and updating 1/w of the global model parameters. The external communication demand of each node is reduced to 2((w-1)/w)*||w||, since the locally maintained model partition does not have to be exchanged via the network. This can be beneficial in homogeneous cluster setups, but any node failure requires a complete reorganization of the cluster [12]. " }, { "page_index": 470, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_038.png", "page_index": 470, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:06:10+07:00" }, "raw_text": "Parameter server (3). The entire parameter server function is implemented in each worker. The workers synchronously compute gradients, which are shared between machines using a collective all-reduce operation; with a binomial tree the lower bound delay is ceil(log2(n))*(Tw + Rw) + Uw, with a ring algorithm it is 2((n-1)/n)*Tw + ((n-1)/n)*Rw + Uw. Each machine uses the thereby locally accumulated, identical gradients to step an equally parameterized optimizer copy, which in turn applies exactly the same update. This is not only robust to node failures, but also makes adding and removing nodes trivial. 
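The ring all-reduce pattern sketched above can be simulated in-process; ring_allreduce is an illustrative name of ours, and this is a single-process sketch, not a networked implementation:

```python
# Ring all-reduce over n workers, each holding a gradient vector of n chunks:
# a scatter-reduce phase (n-1 steps) leaves each worker with the full sum of
# one chunk, then an all-gather phase (n-1 steps) circulates those sums so
# that every worker ends up with the identical summed gradient.

def ring_allreduce(grads):
    n = len(grads)
    data = [list(g) for g in grads]  # data[r][c]: worker r's copy of chunk c
    for s in range(n - 1):  # scatter-reduce: forward and accumulate chunks
        sends = [(r, (r - s) % n, data[r][(r - s) % n]) for r in range(n)]
        for r, c, val in sends:
            data[(r + 1) % n][c] += val
    # now worker r holds the complete sum of chunk (r + 1) % n
    for s in range(n - 1):  # all-gather: circulate the reduced chunks
        sends = [(r, (r + 1 - s) % n, data[r][(r + 1 - s) % n]) for r in range(n)]
        for r, c, val in sends:
            data[(r + 1) % n][c] = val
    return data

result = ring_allreduce([[1, 2, 3], [10, 20, 30], [100, 200, 300]])
print(result)  # every worker ends up with [111, 222, 333]
```

In total each worker transmits 2(n-1)/n of the model volume (n-1 scatter-reduce steps plus n-1 all-gather steps of 1/n each), which is where the 2((n-1)/n)*Tw term of the ring lower bound comes from.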
" }, { "page_index": 471, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_039.png", "page_index": 471, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:06:21+07:00" }, "raw_text": "[Figure: lower-bound communication delay versus number of workers (n = 2 to 14) for ResNet-110 on CIFAR-10 (||w|| = 6.7 MiB), VGG-A on ImageNet (||w|| = 223 MiB) and ResNet-152 on ImageNet (||w|| = 491 MiB); curves compare inference + backpropagation time on an NVIDIA TitanX GPU with naive synchronous (k=1, k=4), binomial tree reduction/broadcast (k=1), scatter reduction/broadcast (k=1), asynchronous (k=1, k=4) and ring algorithm (k=n, synchronous) schemes.] Lower bound communication delays when training various models in different cluster setups in Ethernet and InfiniBand environments, assuming Rw = 10 GiB/s and Uw = 2 GiB/s in an ideal scenario with no latency or competing I/O requests that need arbitration. " }, { "page_index": 472, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_040.png", "page_index": 472, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:06:26+07:00" }, "raw_text": "CP in decentralized systems. Assuming isolated training phases of t cycles, the communication demand of each decentralized worker per local compute step is only 2||w||/t. Decentralized systems typically maintain a higher computation hardware utilization, even with limited network bandwidth, which can make training large models possible in spite of bandwidth constraints. Scaling out to larger cluster sizes may still result in the master node becoming a bottleneck; like in centralized systems, it is possible to split the master's role in decentralized systems to reduce communication costs. Since each machine is a self-contained, independent trainer, decentralized DDLS have many options for organizing parameter exchanges. " }, { "page_index": 473, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_041.png", "page_index": 473, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:06:33+07:00" }, "raw_text": "CP in decentralized systems: D-PSGD [26]. A ring-like 
structure: after each training cycle, each worker sends its model parameters to its neighbors and also integrates the models it receives from them. The more hops two workers are away from each other, the further they may diverge. Because the ring is closed, all workers project the same distance-attenuated force on each other, which is crucial for stability.
D-PSGD worker i:
1: wi <- w0
2: for t <- 0, 1, 2, ... do
3: sample mini-batch x from Di
4: g <- dC(x; wi)/dwi
5: await wt,i-1 and wt,i+1 from the neighbor nodes
6: wi <- (wt,i-1 + wt,i + wt,i+1)/3 - eta * g
7: end for " }, { "page_index": 474, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_042.png", "page_index": 474, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:06:42+07:00" }, "raw_text": "CP in decentralized systems: TreeEASGD [25]. A hierarchical tree-based communication pattern avoids bandwidth-related limitations. Each worker implements two parameter exchange intervals: after Tup steps it exchanges model parameters with its respective upstream node, and after Tdown steps it exchanges model parameters with all adjacent downstream nodes. The degree of exploration is controlled by separately adjusting the up- and downstream intervals (Tup, Tdown) for each worker based on its depth in the hierarchy.
TreeEASGD worker i. Require: training data Di, initial model state w0, learning rate eta, penalization coefficient alpha, exchange intervals Tup and Tdown.
1: wi <- w0
2: for t <- 1, 2, ... do
3: if t mod Tup = 0 then
4: send wi to worker Nup
5: end if
6: if t mod Tdown = 0 then
7: for N in Ndown do
8: send wi to worker N
9: end for
10: end if
11: if received some model w from any up- or downstream worker then
12: wi <- wi + alpha(w - wi)
13: end if
14: sample mini-batch x from Di
15: wi <- wi - eta * dC(x; wi)/dwi
16: end for " }, { "page_index": 475, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_043.png", "page_index": 475, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:06:52+07:00" }, "raw_text": "CP in decentralized systems: GoSGD [45]. Workers can peer with each other to exchange parameters by implementing a sum-weighted gossip protocol. Each worker i defines a variable ai that is initialized equally such that the ai sum to 1 across the cluster. After each local update, a Bernoulli distribution is sampled to decide whether a parameter exchange should be done; the probability p determines the average communication interval. If an exchange is triggered, ai is halved and sent along with the current model parameters wi to a randomly chosen destination worker, which in turn replaces its local model with the weighted average based on its own and the received a value. Workers that shared their state recently are weighted down in relevance, because the information they collected about the loss function has become more common knowledge (= gossip) among the other workers; the variance of staleness within the cluster is thereby minimized. By setting the learning rate eta to zero, all workers asymptotically approximate a consensus model.
GoSGD worker i. Require: training data source Di, initial model state w0, learning rate eta, number of cluster nodes n, parameter exchange probability p.
1: ai <- 1/n
2: wi <- w0
3: loop
4: if received parameters (wj, aj) from worker j then
5: wi <- (ai * wi + aj * wj)/(ai + aj)
6: ai <- ai + aj
7: end if
8: sample mini-batch x from Di
9: wi <- wi - eta * dC(x; wi)/dwi
10: if s ~ Bernoulli(p) = 1 then
11: ai <- ai/2
12: send (wi, ai) to a randomly chosen worker
13: end if
14: end loop " }, { "page_index": 476, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_044.png", "page_index": 476, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:06:59+07:00" }, "raw_text": "DDLS tools (1).
DistBelief (Google; [21]): a centralized asynchronous DDLS with support for multiple parameter servers.
Project Adam (Microsoft; [23]): took a similar approach, moving gradient computation steps into the parameter servers for some neural network layers and organizing the parameter servers in a Paxos cluster to establish high availability.
Petuum [16]: imposing delay bounds to control the staleness of asynchronous workers can improve the rate of convergence.
Parameter Server [14]: formalizes the processing of deep learning workloads to establish hybrid parallelism and integrate with a general machine learning architecture.
TensorFlow (Google; [17]) and MXNet (Apache Foundation; [19]): modern descendants of DistBelief that improve upon previous approaches by introducing new concepts, such as defining backup workers, optimizing model partitioning using self-tuning heuristics, and improving scalability by allowing hierarchical parameter servers to be configured.
CaffeOnSpark (Yahoo; [24]) and BigDL (Intel; [27]): take the opposite approach and focus on easy integration with existing data analytics systems and commodity hardware environments by implementing centralized synchronous data-parallel model training on top of Apache Spark; to accommodate the frequent communication needs of such systems, they use sophisticated communication patterns to implement a distributed parameter server. " }, { "page_index": 477, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_045.png", "page_index": 477, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:07:07+07:00" }, "raw_text": "DDLS tools (2).
SparkNet [10]: a decentralized synchronous DDLS that replicates Caffe solvers using Apache Spark's map-reduce API to realize training in commodity cluster environments; a production-grade re-implementation is part of the popular Java DDLS deeplearning4j. The restriction to synchronous execution is often considered a major downside of this approach.
MPCA-SGD [11]: improves upon SparkNet and extends the basic Spark-based approach by overlapping computation and communication to realize quasi-asynchronous training; it extrapolates the recently observed descent trajectory to cope with staleness.
The data-parallel optimizer of PyTorch (Facebook; [47]): implements a custom interface to realize synchronous model training using collective communication primitives; either one or all workers act as parameter server (all-reduce approach).
EASGD [30]: retains the idea of limited isolated training phases but imposes fully asynchronous scheduling; having a single master node can become a bottleneck as the cluster grows larger.
D-PSGD [26], TreeEASGD [25] and GoSGD [45]: approaches to further scale out decentralized optimization by distributing the master function.
COTS HPC [9] and FireCaffe [15]: optimized for HPC and GPU supercomputer environments, where they have been shown to achieve unparalleled performance for certain applications. " }, { "page_index": 478, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_046.png", "page_index": 478, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:07:28+07:00" }, "raw_text": "Comparison of DDLS (a-z): Name; Parallelism (Model/Data); Optimization; Scheduling; Parameter Exchange; Topology; Remarks.
BigDL [27]; DP only; central; sync.; scatter-red.; distributed PS (always k = n); Each worker acts as a parameter server for 1/n of the model (cf. Section 3.4.1). Distributed parameter exchanges are realized via the Spark block manager.
CaffeOnSpark [24]; DP only; central; sync.; scatter-red.; distributed PS (always k = n); Parameter exchange realized via RDMA using repeated invocations of MPI functions. Equivalent implementations are available for Caffe2 and Chainer.
COTS HPC [9]; MP only; central; sync.; -; distrib. array abstraction; Model layers partitioned along tensor dimensions and distributed across the cluster. Fine-grained access is managed via a low-level array abstraction.
D-PSGD [26]; DP only; decentral; sync.; 2:1 reduce; closed ring; Each node exchanges parameters with only its neighbors on the ring (cf. Section 3.4.2).
DistBelief [21]; MP + DP; central; async.; ad hoc; distrib. PS; Model partitions spread across dedicated parameter server nodes (cf. Section 3.4.1).
EASGD [30]; DP only; decentral; async.; ad hoc; single master; Decentralized asynchronous system as discussed in Section 3.3.5. Reactive adjustment of hyper-parameters can speed up training [42].
FireCaffe [15]; DP only; central; sync.; binom. tree; single PS; Simplistic centralized synchronous system as discussed in Section 3.3.1.
GoSGD [45]; DP only; decentral; soft-bounded async.; ad hoc sum-weighted; p2p mesh; No dedicated master node. Parameter exchanges between any two workers realized via a randomized gossip protocol as discussed in Section 3.4.2.
MPCA-SGD [11]; DP only; decentral; soft-bounded async.; binom. tree; dedicated master node; Model updating and sharing updates are decoupled. Penalization occurs as part of the model's cost function. Staleness effects are dampened using an extrapolation mechanism.
MXNet [19]; MP + DP; central; bounded async.; scatter-reduce, async.: ad hoc; distributed PS (default k = m); Supports various advanced parameter server configurations, including but not limited to hierarchical multi-stage proxy servers (cf. Section 3.4.1).
Parameter Server [14]; MP + DP; central; bounded async.; reduce, async.: ad hoc; distrib. PS; Model partitions spread redundantly across the parameter server group. Workers organized in model-parallelism-enabled groups. One worker per group can act as a proxy server.
Petuum [16]; MP + DP; central; bounded async.; ad hoc with eager scatter; distrib. PS; Pioneered the use of delay bounds to control staleness (cf. Section 3.3.4). Average model staleness is further reduced through the eager distribution of model parameters.
Project Adam [23]; MP + DP; central; async.; ad hoc; distrib. PS; Dedicated parameter server group that is managed as a Paxos cluster. Hybrid parallelism realized through transferring gradient computation for fully connected layers into the PS.
PyTorch [47]; MP + DP; central; sync.; all-reduce; single PS or replicated PS; Model parallelism capabilities were added recently with version 1.4.0. Can only use either synchronous data-parallelism or model parallelism.
SparkNet [10]; DP only; decentral; sync.; reduce; dedicated master node; Decentralized synchronous implementation as discussed in Section 3.3.2. Realized using Spark map-reduce. Production-grade re-implementation present in deeplearning4j.
TensorFlow [17]; MP + DP; central; bounded async.; scatter/all-red., async.: ad hoc; distributed PS (default k = ); Supports single and multi parameter server setups, as well as all-reduce-based approaches. By default, each worker acts as a parameter server for a portion of the model.
TreeEASGD [25]; DP only; decentral; bounded async.; ad hoc; tree; All nodes are workers and form a tree. Each worker only exchanges parameters with its immediate up- and downstream neighbors (cf. Section 3.4.2). " }, { "page_index": 479, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_047.png", "page_index": 479, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:07:33+07:00" }, "raw_text": "Parallelism. DP is more frequently supported than MP: decentral optimization is based on the concept of sparse communication between independent trainers, and realizing cross-machine MP in such systems is counter-intuitive. New modeling and training techniques [2], [4] allow utilizing the available parameter space more efficiently, while technological improvements in hardware allow processing increasingly larger models. Not every model can be partitioned evenly across a given number of machines, which leads to the under-utilization of workers [20]. If a model fits well into the GPU memory, the resource requirements of the backpropagation algorithm can often be regulated reasonably well by adjusting the mini-batch size in DP systems. Some DDLS discourage using cross-machine MP in favor of DP, which is less susceptible to 
processing time variations. " }, { "page_index": 480, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_048.png", "page_index": 480, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:07:38+07:00" }, "raw_text": "Optimization. There is a trend towards decentralized systems in research, while centralized DDLS see minor improvements, such as tailored optimization techniques [2], [36], [39] and the development of domain-specific compression methods [13]. Centralized DDLS dominate industry usage and application research, although centralized and decentralized DDLS offer similar convergence guarantees [26]: centralized approaches are generally better understood and easier to use, and the most popular, industry-backed deep learning frameworks (PyTorch, TensorFlow, MXNet, etc.) contain centralized DDLS implementations that are mature, highly optimized and work tremendously well as long as parameter exchanges do not dominate the overall execution [11], [48]. " }, { "page_index": 481, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_049.png", "page_index": 481, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:07:42+07:00" }, "raw_text": "Scheduling. Centralized asynchronous methods cope better with performance deviations and have the potential to yield a higher hardware utilization, but introduce new challenges such as concurrent updates and staleness => some DDLS support both synchronous and asynchronous modes of operation. Centralized bounded asynchronous DDLS can always simulate synchronous and asynchronous scheduling: if a delay bound is used, s = 0 is identical to synchronous operation, while s = infinity results in fully asynchronous behavior. Among decentralized DDLS, some define a simple threshold (t) to limit the amount of exploration per training phase, while others take a more dynamic approach to cope better with bandwidth limitations, which is indicated using the term soft-bounded. " }, { "page_index": 482, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_050.png", "page_index": 482, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:07:48+07:00" }, "raw_text": "Parameter exchange mechanism. Binomial tree methods scale worse than scattering operations, but are preferable in high-latency 
environments, because fewer individual connections between nodes are required [44]. Collective operations are common, but not necessarily the only parameter exchange method available; some synchronous DDLS implement several collective operations and switch between them to maximize efficiency. " }, { "page_index": 483, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_051.png", "page_index": 483, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:07:52+07:00" }, "raw_text": "Topology. For centralized DDLS, the current state-of-the-art for small clusters is the synchronous all-reduce-based approach, while large and heterogeneous setups can be utilized efficiently using hierarchically structured asynchronous communication patterns [19]. For decentralized DDLS, heavily structured communication protocols [25], [26], boosting techniques [11], as well as relatively unstructured methods [45] have been reported to offer better convergence rates than naive implementations. " }, { "page_index": 484, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_052.png", "page_index": 484, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:07:57+07:00" }, "raw_text": "Right technique. The non-linear, non-convex nature of deep learning models in combination with the abundance of distributed methods opens up a large solution space [2]. Although frequently done, comparing DDLS based on processing metrics such as GPU utilization or training sample throughput is not useful in practice: such performance indicators can easily be maximized by increasing the batch size, allowing more staleness or extending exploration phases, none of which necessarily equates to faster training or a better model. Benchmarks alone are not enough: well-established deep learning benchmarks like DAWNBench [48] propose comparing end-to-end training performance by measuring quality metrics (e.g. time to accuracy x%); however, optimal configurations w.r.t. quality metrics are usually highly task-dependent and may vary as the training progresses [42]. " }, { "page_index": 485, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_053.png", "page_index": 485, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:08:02+07:00" }, "raw_text": "Benchmark tools. The collection and quantitative study of the performance of DDLS using standardized AI benchmarks is becoming increasingly important and can provide guidance regarding which configurations work well in practice. DAWNBench [48]: strong emphasis on distributed implementations, but focuses only on a few workloads. MLPerf [49]: expands the scope and defines stricter test protocols to establish better comparability. Deep500 [50]: a new benchmark tool focusing on gathering more information by defining metrics and measurement points along the training pipeline. AIBench [51]: aims at covering many machine learning applications like recommendation systems, speech recognition, image generation, image compression, text-to-text translation, etc. 
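The time-to-quality idea behind such benchmarks can be sketched as follows; train_epoch, evaluate and the toy accuracy curve are hypothetical stand-ins of ours, not part of DAWNBench or MLPerf:

```python
# Measure wall-clock time until a quality target is reached, instead of
# reporting raw throughput. The callables below are illustrative stand-ins.
import time

def time_to_accuracy(train_epoch, evaluate, target, max_epochs=100):
    start = time.monotonic()
    for epoch in range(1, max_epochs + 1):
        train_epoch()
        if evaluate() >= target:
            return epoch, time.monotonic() - start
    return None, time.monotonic() - start

acc = [0.2]  # toy learning curve: +0.1 accuracy per epoch
epochs, elapsed = time_to_accuracy(
    train_epoch=lambda: acc.append(acc[-1] + 0.1),
    evaluate=lambda: acc[-1],
    target=0.5,
)
print(epochs)  # -> 3
```

Unlike a throughput number, this metric cannot be inflated by larger batches or looser staleness bounds unless those choices actually reach the quality target sooner.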
HPC Lab - CSE-HCMUT 53" }, { "page_index": 486, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_054.png", "page_index": 486, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:08:10+07:00" }, "raw_text": "Criteria of DDLS [Flattened comparison table; column headers not recoverable from OCR. Rows: MP (higher, trivial); MP + mini-batch pipelining (lower, medium); DP + central + synchronous (higher, easy); DP + central + asynchronous (lower, hard); DP + decentral + synchronous (likely, easy); DP + decentral + asynchronous (lower, hard)] Footnotes: 1 Requires Large Balanced Dataset; 2 Requires Balanced Model; 3 Optimal Learning Rate as cluster grows; 4 Complexity due to Overlapping Computation and Communication (e.g. stability, entanglement, etc.); 6 Resilience to Sporadic Outside Influences; 7 Difficulty to Establish Reproducible Behavior HPC Lab - CSE-HCMUT 54" }, { "page_index": 487, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_055.png", "page_index": 487, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:08:15+07:00" }, "raw_text": "Future research directions Using decentralized optimization techniques in conjunction with P2P model sharing [45] An interesting area of research for certain IoT or automotive applications A comprehensive analysis of different distributed approaches in real-life scenarios would be helpful to many practitioners In actual cluster setups the situation is usually more complex due to competing workloads Most works in distributed deep learning restrict themselves
to ideal test scenarios Efficiently realizing distributed training in heterogeneous setups is a largely untackled engineering problem Being an investment commodity, clusters are often not replaced, but rather extended A structured quantitative analysis of the results from DDLs benchmarks could be interesting for many practitioners HPC Lab - CSE-HCMUT 55" }, { "page_index": 488, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_056.png", "page_index": 488, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:08:23+07:00" }, "raw_text": "Reference (1) 1. Y. LeCun, Y. Bengio, and G. E. Hinton, \"Deep Learning,\" Nature, vol. 521, no. 7553, pp. 436-444, 2015. 2. J. Schmidhuber, \"Deep Learning in Neural Networks: An Overview,\" Neural Networks, vol. 61, pp. 85-117, 2015. 3. A. Krizhevsky, I. Sutskever, and G. E. Hinton, \"ImageNet Classification with Deep Convolutional Neural Networks,\" Adv. in Neural Information Processing Systems, vol. 25, pp. 1097-1105, 2012. 4. K. He, X. Zhang, S. Ren, and J. Sun, \"Deep Residual Learning for Image Recognition,\" Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 770-778, 2016. 5. S. Shi, Q. Wang, P. Xu et al., \"Benchmarking State-of-the-Art Deep Learning Software Tools,\" arXiv CoRR, vol. abs/1608.07249, 2016. 6. N. Shazeer, A. Mirhoseini, K. Maziarz et al., \"Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer,\" Proc. 5th Intl. Conf. on Learning Representations, 2017. 7. K. Simonyan and A. Zisserman, \"Very Deep Convolutional Networks for Large-Scale Image Recognition,\" Proc. 3rd Intl. Conf. on Learning Representations, 2015. 8. M. Langer, \"Distributed Deep Learning in Bandwidth-Constrained Environments,\" Ph.D.
dissertation, La Trobe University, 2018. 9. A. Coates, B. Huval, T. Wang, D. J. Wu, B. C. Catanzaro, and A. Y. Ng, \"Deep Learning with COTS HPC Systems,\" Proc. 30th Intl. Conf. on Machine Learning, pp. 1337-1345, 2013. 10. P. Moritz, R. Nishihara et al., \"SparkNet: Training Deep Networks in Spark,\" Proc. 4th Intl. Conf. on Learning Representations, 2016. 11. M. Langer, A. Hall et al., \"MPCA SGD - A Method for Distributed Training of Deep Learning Models on Spark,\" IEEE Trans. on Parallel and Distributed Systems, vol. 29, no. 11, pp. 2540-2556, 2018. 12. K. Zhang, S. Alqahtani, and M. Demirbas, \"A Comparison of Distributed Machine Learning Platforms,\" Proc. 26th Intl. Conf. on Computer Communications and Networks, 2017. 13. T. Ben-Nun and T. Hoefler, \"Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis,\" ACM Comput. Surv., vol. 52, no. 4, 2019. HPC Lab - CSE-HCMUT 56" }, { "page_index": 489, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_057.png", "page_index": 489, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:08:31+07:00" }, "raw_text": "Reference (2) 14. M. Li, D. G. Andersen, A. J. Smola et al., \"Communication Efficient Distributed Machine Learning with the Parameter Server,\" Adv. in Neural Information Processing Systems, vol. 27, pp. 19-27, 2014. 15. F. N. Iandola, K. Ashraf et al., \"FireCaffe: Near-Linear Acceleration of Deep Neural Network Training on Compute Clusters,\" Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2015. 16. E. P. Xing, Q. Ho, W. Dai, J. K. Kim, J. Wei, S. Lee et al., \"Petuum - A New Platform for Distributed Machine Learning on Big Data,\" IEEE Trans. on Big Data, vol. 1, no. 2, pp. 49-67, 2015. 17. M. Abadi, P. Barham, J.
Chen et al., \"TensorFlow: A System for Large-Scale Machine Learning, Proc. 12th USENIX Symp. on Operating Systems Design and Implementation, pp. 265-283, 2016. 18. Q. V. Le, \"Building High-Level Features Using Large Scale Unsu- pervised Learning,\" Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing, pp. 8595-8598, 2013. 19. T. Chen, M. Li, Y. Li et al., \"MXNet: A Flexible and Efficient Ma- chine Learning Library for Heterogeneous Distributed Systems,\" Proc. 29th Conf. on Neural Information Processing Systems, 2015. 2O. Y. Huang, Y. Cheng, A. Bapna, O. Firat, M. X. Chen et al., \"GPipe: Efficient Training of Giant Neural Networks using Pipeline Paral- lelism,\" arXiv CoRR, vol. abs/1811.06965, 2018 21. J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin et al., \"Large Scale Distributed Deep Networks,\" Adv. in Neural Information Processing Systems, pp. 1223-1231, 2012. 22. S. loffe and C. Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, Proc. 32nd lntl. Conf. on Machine Learninq,vol. 37,pp.448-456, 2015 23. T. Chilimbi, Y. Suzue et al., \"Project Adam: Building an Efficient and Scalable Deep Learning Training System, Proc. 11th USEN/X Symp. on 0s Design and 1mplementation, pp. 571-582,2014. HPC Lab - CSE-HCMUT 57" }, { "page_index": 490, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_058.png", "page_index": 490, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:08:39+07:00" }, "raw_text": "Reference (3) 24. A. Feng, J. Shi, and M. Jain, \"CaffeOnSpark Open Sourced for Distributed Deep Learning on Big Data Clusters,\" 2016. 
[Online] Available: http://yahoohadoop.tumblr.com/post/139916563586/caffeonspark-open-sourced-for-distributed-deep. 25. S. Zhang, \"Distributed Stochastic Optimization for Deep Learning,\" Ph.D. dissertation, New York University, 2016. 26. X. Lian, C. Zhang, H. Zhang, C.-J. Hsieh et al., \"Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent,\" Adv. in Neural Information Processing Systems, vol. 30, pp. 5330-5340, 2017. 27. J. Dai, Y. Wang, X. Qiu, D. Ding, Y. Zhang et al., \"BigDL: A Distributed Deep Learning Framework for Big Data,\" Proc. ACM Symposium on Cloud Computing, pp. 50-60, 2019. 28. I. J. Goodfellow, O. Vinyals, and A. M. Saxe, \"Qualitatively Characterizing Neural Network Optimization Problems,\" Proc. 3rd Intl. Conf. on Learning Representations, 2015. 29. H. R. Feyzmahdavian, A. Aytekin et al., \"An Asynchronous Mini-Batch Algorithm for Regularized Stochastic Optimization,\" Proc. 54th IEEE Conf. on Decision and Control, pp. 1384-1389, 2015. 30. S. Zhang, A. Choromanska, and Y. LeCun, \"Deep learning with Elastic Averaging SGD,\" Adv. in Neural Information Processing Systems, vol. 28, pp. 685-693, 2015. 31. N. S. Keskar, D. Mudigere, J. Nocedal et al., \"On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima,\" Proc. 5th Intl. Conf. on Learning Representations, 2017. 32. I. Sutskever, J. Martens, G. Dahl et al., \"On the Importance of Initialization and Momentum in Deep Learning,\" Proc. 30th Intl. Conf. on Machine Learning, vol. 28, no. 3, pp. 1139-1147, 2013. 33. D. Kingma and J. Ba, \"Adam: A Method for Stochastic Optimization,\" Proc. 3rd Intl. Conf. on Learning Representations, 2015. 34. J. Chen, X. Pan et al., \"Revisiting Distributed Synchronous SGD,\" Proc. 5th Intl. Conf. on Learning Representations, 2017. 35. Q. Ho, J. Cipar, H. Cui, J. K. Kim et al., \"More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server,\" Adv.
in Neural Information Processing Systems, vol. 26, pp. 1223-1231, 2013. HPC Lab - CSE-HCMUT 58" }, { "page_index": 491, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_059.png", "page_index": 491, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:08:47+07:00" }, "raw_text": "Reference (4) 36. R. Tandon, Q. Lei, A. Dimakis, and N. Karampatziakis, \"Gradient Coding: Avoiding Stragglers in Distributed Learning,\" Proc. 34th Intl. Conf. on Machine Learning, vol. 70, pp. 3368-3376, 2017. 37. A. Agarwal and J. C. Duchi, \"Distributed Delayed Stochastic Optimization,\" Adv. in Neural Information Processing Systems, vol. 24, pp. 873-881, 2011. 38. W. Dai, A. Kumar, J. Wei, Q. Ho et al., \"High-Performance Distributed ML at Scale Through Parameter Server Consistency Models,\" Proc. 29th Conf. on Artificial Intelligence, pp. 79-87, 2015. 39. I. Mitliagkas, C. Zhang et al., \"Asynchrony Begets Momentum, With an Application to Deep Learning,\" Proc. 54th Allerton Conf. on Communication, Control, and Computing, pp. 997-1004, 2017. 40. S. Zheng, Q. Meng, T. Wang, W. Chen, N. Yu et al., \"Asynchronous Stochastic Gradient Descent with Delay Compensation,\" Proc. 34th Intl. Conf. on Machine Learning, pp. 4120-4129, 2017. 41. F. Niu, B. Recht, C. Ré, and S. J. Wright, \"Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent,\" Adv. in Neural Information Processing Systems, vol. 24, pp. 693-701, 2011. 42. H. Kim, J. Park, J. Jang, and S. Yoon, \"DeepSpark: Spark-Based Deep Learning Supporting Asynchronous Updates and Caffe Compatibility,\" arXiv CoRR, vol. abs/1602.08191, 2016. 43. X. Lian, Y. Huang, Y. Li, and J. Liu, \"Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization,\" Adv.
in Neural Information Processing Systems, vol. 28, pp. 2737-2745, 2015. 44. R. Thakur, R. Rabenseifner, and W. Gropp, \"Optimization of Collective Communication Operations in MPICH,\" Intl. Jrnl. of High Performance Computing Applications, vol. 19, pp. 49-66, 2005. 45. M. Blot, D. Picard, M. Cord, and N. Thome, \"Gossip Training for Deep Learning,\" arXiv CoRR, vol. abs/1611.09726, 2016. 46. S. Boyd, A. Ghosh et al., \"Randomized Gossip Algorithms,\" IEEE Trans. on Information Theory, vol. 52, no. 6, pp. 2508-2530, 2006. 47. N. Ketkar, \"Deep Learning with Python: A Hands-on Introduction,\" ISBN: 978-1-4842-2766-4, pp. 195-208, 2017. HPC Lab - CSE-HCMUT 59" }, { "page_index": 492, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_12/slide_060.png", "page_index": 492, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:08:51+07:00" }, "raw_text": "Reference (5) 48. C. Coleman, D. Narayanan, D. Kang, T. Zhao et al., \"DAWNBench: An End-to-End Deep Learning Benchmark and Competition,\" NIPS ML Systems Workshop, 2017. 49. P. Mattson, C. Cheng, C. Coleman et al., \"MLPerf Training Benchmark,\" arXiv CoRR, vol. abs/1910.01500, 2019. 50. T. Ben-Nun, M. Besta, S. Huber, A. N. Ziogas, D. Peter, and T. Hoefler, \"A Modular Benchmarking Infrastructure for High-Performance and Reproducible Deep Learning,\" Proc. 33rd Intl. Parallel and Distributed Processing Symposium, pp. 66-77, 2019. 51. W. Gao, F. Tang, L. Wang, J. Zhan, C. Lan, C. Luo, Y. Huang, C. Zheng et al., \"AIBench: An Industry Standard Internet Service AI Benchmark Suite,\" arXiv CoRR, vol.
abs/1908.08998, 2019. HPC Lab - CSE-HCMUT 60" }, { "page_index": 493, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_001.png", "page_index": 493, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:08:54+07:00" }, "raw_text": "Federated & Swarm Learning Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab - CSE-HCMUT 1" }, { "page_index": 494, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_002.png", "page_index": 494, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:08:56+07:00" }, "raw_text": "Distributed ML/Deep Learning HPC Lab - CSE-HCMUT 2" }, { "page_index": 495, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_003.png", "page_index": 495, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:01+07:00" }, "raw_text": "ML: high level view Notation o D: data o A: model parameters o L: function to optimize (e.g., minimize loss) Goal: Update A based on D to optimize L Typical approach: iterative convergence A(t) = F(A(t-1), Δ(A(t-1), D)) - at iteration t, compute updates Δ(A(t-1), D) that minimize L and merge them into the parameters HPC Lab - CSE-HCMUT 3" }, { "page_index": 496, "chapter_num": 13, "source_file":
"/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_004.png", "page_index": 496, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:04+07:00" }, "raw_text": "Abstracting ML algorithms Can we find commonalities among ML algorithms? This would allow finding o Common abstractions o Systems solutions to efficiently implement these abstractions Some common aspects o We have a prediction model A o A should optimize some complex objective function L o ML algorithm does this by iteratively refining A HPC Lab - CSE-HCMUT 4" }, { "page_index": 497, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_005.png", "page_index": 497, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:08+07:00" }, "raw_text": "High level view Notation o D: data o A: model parameters o L: function to optimize (e.g., minimize loss) Goal: Update A based on D to optimize L Typical approach: iterative convergence A(t) = F(A(t-1), Δ(A(t-1), D)) - at iteration t, compute updates Δ(A(t-1), D) that minimize L and merge them into the parameters HPC Lab - CSE-HCMUT 5" }, { "page_index": 498, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_006.png", "page_index": 498, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:16+07:00" }, "raw_text": "Centralized Synchronous
systems (Parameter Server + Workers 1..n) Each training cycle begins with the workers downloading new model parameters (w) from the parameter server Workers locally sample a training mini-batch (x ~ Di) and compute per-parameter gradients gi = ∂L(x; w)/∂w Workers share their gradients with the parameter server The parameter server aggregates the gradients from all workers and injects the aggregate into an optimization algorithm to update the model PARAMETER SERVER PROGRAM Require: initial model w0, learning rate η, number of workers n 1: for t = 0, 1, 2, ... do 2: Broadcast wt 3: Await gradients gi from all workers 4: wt+1 = wt - η Σi=1..n gi 5: end for WORKER PROGRAM (worker i) Require: training data source Di 1: for t = 0, 1, 2, ... do 2: Await wt 3: Sample mini-batch x ~ Di 4: gi = ∂L(x; wt)/∂wt 5: Send gi to parameter server 6: end for HPC Lab - CSE-HCMUT 6" }, { "page_index": 499, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_007.png", "page_index": 499, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:24+07:00" }, "raw_text": "Centralized Asynchronous systems (Parameter Server + Workers 1..n) Each worker approaches the parameter server at its own pace to offer gradients, after which the global model is updated immediately, and updated model parameters are requested Each worker maintains a separate parameter exchange cycle with the parameter server PARAMETER SERVER PROGRAM Require: initial model state w0, learning rate η 1: w = w0 2: Distribute w 3: for t = 0, 1, 2, ... do 4: if received gradients g from worker i, with a delay of τ steps, then 5: w = w - ηg 6: Send w to worker i 7: end if 8: end for WORKER PROGRAM (worker i) Require: training data source Di 1: for t = 0, 1, 2, ...
do 2: Await current parameter server model w 3: wi = w 4: Sample mini-batch x ~ Di 5: gi = ∂L(x; wi)/∂wi 6: Send gi to parameter server 7: end for There is no interdependence between workers; situations where straggler nodes delay the execution of other workers cannot happen HPC Lab - CSE-HCMUT 7" }, { "page_index": 500, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_008.png", "page_index": 500, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:27+07:00" }, "raw_text": "Federated Learning HPC Lab - CSE-HCMUT 8" }, { "page_index": 501, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_009.png", "page_index": 501, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:31+07:00" }, "raw_text": "A shift of paradigm: from centralized to decentralized data The standard setting in Machine Learning (ML) considers a centralized dataset processed in a tightly integrated system But in the real world data is often decentralized across many parties.
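The two parameter-server schemes above can be illustrated with a minimal single-process sketch. Everything concrete here is invented for illustration: each worker i holds one data point d_i with loss L_i(w) = (w - d_i)^2, and the asynchronous arrival order is simulated round-robin rather than with real concurrency.

```python
# Minimal single-process sketches of the two parameter-server schemes.
# Toy setup: worker i holds data point d_i, loss L_i(w) = (w - d_i)^2,
# so the local gradient is dL_i/dw = 2*(w - d_i).

def grad(w, d):
    return 2.0 * (w - d)

def train_synchronous(data, w0=0.0, lr=0.05, rounds=200):
    """Server broadcasts w, awaits gradients from ALL workers, then updates."""
    w = w0
    for _ in range(rounds):
        gs = [grad(w, d) for d in data]   # every worker answers each round
        w = w - lr * sum(gs)              # w_{t+1} = w_t - lr * sum_i g_i
    return w

def train_asynchronous(data, w0=0.0, lr=0.05, updates=600):
    """Server applies each worker's gradient immediately on arrival; arrival
    order is simulated round-robin, so no worker ever waits for stragglers."""
    w = w0
    for t in range(updates):
        i = t % len(data)                 # next worker to check in
        w = w - lr * grad(w, data[i])     # update without awaiting the others
    return w
```

With the toy data [1.0, 2.0, 3.0] the synchronous run converges to the exact minimizer (the mean, 2.0), while the asynchronous run settles into a small limit cycle around it, which mirrors the stability-versus-waiting trade-off described on the slides.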
HPC Lab - CSE-HCMUT" }, { "page_index": 502, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_010.png", "page_index": 502, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:35+07:00" }, "raw_text": "Decentralized data Sending the data may be too costly o Self-driving cars are expected to generate several TBs of data a day o Some wireless devices have limited bandwidth/power Data may be considered too sensitive o We see a growing public awareness and regulations on data privacy o Keeping control of data can give a competitive advantage in business and research HPC Lab - CSE-HCMUT 10" }, { "page_index": 503, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_011.png", "page_index": 503, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:38+07:00" }, "raw_text": "How about each party learning on its own? The local dataset may be too small o Sub-par predictive performance (e.g., due to overfitting) o Non-statistically significant results (e.g., medical studies) The local dataset may be biased o Not representative of the target distribution.
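A tiny sketch of why collaboration can recover the centralized answer without pooling raw data (the numbers below are invented): for a simple statistic like the mean, the n_k-weighted average of local estimates equals the estimate computed on the pooled dataset.

```python
# Toy check: the n_k-weighted average of local means equals the mean of
# the pooled dataset, even though no party ever shares its raw data.
# Party datasets are invented for illustration.

def local_mean(xs):
    return sum(xs) / len(xs)

parties = [[1.0, 2.0], [4.0], [3.0, 5.0, 6.0]]   # three parties, unequal sizes
n = sum(len(p) for p in parties)
federated = sum(len(p) / n * local_mean(p) for p in parties)
pooled = local_mean([x for p in parties for x in p])
```

Each party alone would estimate 1.5, 4.0, or 4.67 (biased, small samples), but the weighted combination recovers the pooled value exactly. This is the intuition behind the weighted aggregation used later in FedSGD/FedAvg.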
HPC Lab - CSE-HCMUT" }, { "page_index": 504, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_012.png", "page_index": 504, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:41+07:00" }, "raw_text": "A broad definition of federated learning Federated Learning (FL) aims to collaboratively train a ML model while keeping the data decentralized. HPC Lab - CSE-HCMUT 12" }, { "page_index": 505, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_013.png", "page_index": 505, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:48+07:00" }, "raw_text": "1. Initialize model 2. Each party makes an update using its local dataset 3. Parties share local updates 4. Server aggregates updates and sends them back to parties Iterative aggregation HPC Lab - CSE-HCMUT 13" }, { "page_index": 506, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_014.png", "page_index": 506, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:51+07:00" }, "raw_text": "A broad definition of federated learning Federated Learning (FL) aims to collaboratively train a ML model while keeping the data decentralized We would like the final model to be as good as the centralized solution (ideally) or at least better than what each party
can learn on its own. HPC Lab - CSE-HCMUT 14" }, { "page_index": 507, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_015.png", "page_index": 507, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:55+07:00" }, "raw_text": "Data distribution o In distributed learning, data is centrally stored (e.g., in a data center) The main goal is just to train faster We control how data is distributed across workers: usually, it is distributed uniformly at random across workers o In FL, data is naturally distributed and generated locally Data is not independent and identically distributed (non-i.i.d.), and it is imbalanced Additional challenges that arise in FL o Enforcing privacy constraints o Dealing with the possibly limited reliability/availability of participants o Achieving robustness against malicious parties HPC Lab - CSE-HCMUT 15" }, { "page_index": 508, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_016.png", "page_index": 508, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:09:59+07:00" }, "raw_text": "Cross-device vs. cross-silo FL Cross-device FL: > Massive number of parties (up to 10^10) > Small dataset per party (could be size 1) > Limited availability and reliability > Some parties may be malicious. Cross-silo FL: > 2-100 parties > Medium to large dataset per party > Reliable parties, almost always available > Parties are typically honest.
HPC Lab - CSE-HCMUT 16" }, { "page_index": 509, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_017.png", "page_index": 509, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:03+07:00" }, "raw_text": "Server-orchestrated vs. fully decentralized FL Server-orchestrated FL: > Server-client communication > Global coordination, global aggregation > Server is a single point of failure and may become a bottleneck. Fully decentralized FL: > Device-to-device communication > No global coordination, local aggregation > Naturally scales to a large number of devices. HPC Lab - CSE-HCMUT 17" }, { "page_index": 510, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_018.png", "page_index": 510, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:08+07:00" }, "raw_text": "A synchronous update scheme that proceeds in rounds of communication For efficiency, at the beginning of each round, a random fraction C of clients is selected, and the server sends the current model parameters to each of these clients McMahan, H. Brendan, Eider Moore, Daniel Ramage, and Seth Hampson. \"Communication-efficient learning of deep networks from decentralized data.\" AISTATS, 2017.
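The per-round client selection described above (a random fraction C of clients, as in McMahan et al.) can be sketched as follows; the function name and the use of Python's random module are illustrative choices, not from the slides.

```python
# Sketch of per-round client selection: each round, a random fraction C of
# clients is selected to receive the current model parameters. The lower
# bound max(..., 1) mirrors m = max(C*K, 1) in the FedAvg pseudocode.

import random

def select_clients(num_clients, c_fraction, rng):
    m = max(int(c_fraction * num_clients), 1)   # at least one client per round
    return rng.sample(range(num_clients), m)    # without replacement

rng = random.Random(0)                           # seeded for reproducibility
chosen = select_clients(num_clients=100, c_fraction=0.1, rng=rng)
```

Sampling without replacement matters here: a client should not contribute twice to the same aggregation round.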
HPC Lab - CSE-HCMUT 18" }, { "page_index": 511, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_019.png", "page_index": 511, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:12+07:00" }, "raw_text": "Federated Learning Recall in traditional deep learning model training o For a training dataset containing n samples (xi, yi), 1 ≤ i ≤ n, the training objective is: min over w ∈ R^d of f(w), where f(w) = (1/n) Σi=1..n fi(w) and fi(w) = l(xi, yi; w) is the loss of the prediction on example (xi, yi) Deep learning optimization relies on SGD and its variants, through mini-batches: wt+1 = wt - η ∇f(wt; xk, yk) HPC Lab - CSE-HCMUT 19" }, { "page_index": 512, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_020.png", "page_index": 512, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:16+07:00" }, "raw_text": "Federated Learning In federated learning o Suppose n training samples are distributed to K clients, where Pk is the set of indices of data points on client k, and nk = |Pk|.
o For training objective min over w ∈ R^d of f(w), where f(w) = Σk=1..K (nk/n) Fk(w) and Fk(w) = (1/nk) Σi∈Pk fi(w) HPC Lab - CSE-HCMUT 20" }, { "page_index": 513, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_021.png", "page_index": 513, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:20+07:00" }, "raw_text": "A baseline - FederatedSGD (FedSGD) A randomly selected client that has nk training data samples in federated learning plays the role of a randomly selected sample in traditional deep learning Federated SGD (FedSGD): a single step of gradient descent is done per round Recall in federated learning, a C-fraction of clients are selected at each round o C = 1: full-batch (non-stochastic) gradient descent o C < 1: stochastic gradient descent (SGD) HPC Lab - CSE-HCMUT 21" }, { "page_index": 514, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_022.png", "page_index": 514, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:25+07:00" }, "raw_text": "Federated learning - FederatedAveraging (FedAvg) > Learning rate: η; total #samples: n; total #clients: K; #samples on a client k: nk; clients fraction C = 1 In a round t: o The central server broadcasts the current model wt to each client; each client k computes the gradient gk = ∇Fk(wt) on its local data > Approach 1: Each client k submits gk; the central server aggregates the gradients to generate a new model: wt+1 = wt - η Σk=1..K (nk/n) gk > Approach 2: Each client k computes wk,t+1 = wt - η gk; the central server performs aggregation: wt+1 = Σk=1..K (nk/n) wk,t+1 Performing the local update multiple times before aggregation => FederatedAveraging (FedAvg) HPC Lab - CSE-HCMUT 22" }, { "page_index": 515, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_023.png", "page_index": 515, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:28+07:00" }, "raw_text": "Federated learning - deal with limited communication Increase computation o Select more clients for training between each communication round o Increase computation on each client. HPC Lab - CSE-HCMUT 23" }, { "page_index": 516, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_024.png", "page_index": 516, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:32+07:00" }, "raw_text": "A baseline - FederatedSGD (FedSGD): computation cost > Learning rate: η; total #samples: n; total #clients: K; #samples on a client k: nk; clients fraction C In a round t: o The central server broadcasts the current model wt to each client; each client k computes the gradient gk = ∇Fk(wt) on its local data.
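The two aggregation routes for FedSGD (Approach 1: aggregate gradients; Approach 2: aggregate one-step local models) coincide when each client performs exactly one local gradient step, since Σk (nk/n)(wt - η gk) = wt - η Σk (nk/n) gk. A small check with invented scalar values:

```python
# Check that the two FedSGD aggregation routes agree for one local step:
#   Approach 1: w' = w - lr * sum_k (n_k/n) * g_k
#   Approach 2: w' = sum_k (n_k/n) * (w - lr * g_k)
# Gradients and client sizes below are made up; the model is a scalar.

def fedsgd_round(w, client_grads, client_sizes, lr=0.1):
    n = sum(client_sizes)
    # Approach 1: aggregate gradients, then update once on the server.
    agg_grad = sum(nk / n * g for g, nk in zip(client_grads, client_sizes))
    w_approach1 = w - lr * agg_grad
    # Approach 2: each client updates locally, server averages the models.
    local_models = [w - lr * g for g in client_grads]
    w_approach2 = sum(nk / n * wk for wk, nk in zip(local_models, client_sizes))
    return w_approach1, w_approach2

a, b = fedsgd_round(w=1.0, client_grads=[0.5, -0.2, 0.9],
                    client_sizes=[10, 30, 60])
```

The equivalence breaks once clients take several local steps between aggregations, and that multi-step variant of Approach 2 is precisely what turns FedSGD into FedAvg.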
>Approach 2: The central server performs aggregation: w_{t+1} ← Σ_{k=1}^{K} (n_k/n) w_{t+1}^k" }, { "page_index": 517, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_025.png", "page_index": 517, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:38+07:00" }, "raw_text": "A baseline - FederatedSGD (FedSGD)
1. At first, a model is randomly initialized on the central server.
2. For each round t: (a) A random set of clients are chosen; (b) Each client performs local gradient descent steps; (c) The server aggregates model parameters submitted by the clients.
Algorithm 1 FederatedAveraging. The K clients are indexed by k; B is the local minibatch size, E is the number of local epochs, and η is the learning rate.
Server executes:
  initialize w_0
  for each round t = 1, 2, ... do
    m ← max(C·K, 1)
    S_t ← (random set of m clients)
    for each client k ∈ S_t in parallel do
      w_{t+1}^k ← ClientUpdate(k, w_t)
    w_{t+1} ← Σ_{k=1}^{K} (n_k/n) w_{t+1}^k
ClientUpdate(k, w): // Run on client k
  B ← (split P_k into batches of size B)
  for each local epoch i from 1 to E do
    for batch b ∈ B do
      w ← w - η ∇ℓ(w; b)
  return w to server" }, { "page_index": 518, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_026.png", "page_index": 518, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:42+07:00" }, "raw_text": "Swarm Learning 1. Warnat-Herresthal, S., Schultze, H., Shastry, K.L. et al. Swarm Learning for decentralized and confidential clinical machine learning. Nature 594, 265-270 (2021). https://doi.org/10.1038/s41586-021-03583-3. 2.
Jialiang Han, Yun Ma, Yudong Han: Demystifying Swarm Learning: A New Paradigm of Blockchain-based Decentralized Federated Learning. CoRR abs/2201.05286 (2022). 3. https://github.com/spiffe/spire. 4. https://spiffe.io/docs/latest/spiffe-about/overview/" }, { "page_index": 519, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_027.png", "page_index": 519, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:51+07:00" }, "raw_text": "Swarm learning [Figure: four panels (a)-(d) comparing Local learning, Central learning, Federated learning, and Swarm Learning] (a) Illustration of the concept of local learning, with data and computation at different disconnected locations. (b) In cloud computing, data are moved centrally so that machine learning can be carried out by centralized computing. (c) Federated learning, with data being kept with the data contributor and computing performed at the site of local data storage and availability, but parameter settings orchestrated by a central parameter server. (d) Swarm learning, which dispenses with a dedicated server, shares the parameters via the Swarm network and builds the models independently on private data at the individual sites; however, a remainder of a central structure is kept.
" }, { "page_index": 520, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_028.png", "page_index": 520, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:10:58+07:00" }, "raw_text": "Schematic of the Swarm network [Figure: Swarm edge nodes, each holding a model and private data, exchanging parameters over a private permissioned blockchain network] SL shares the parameters via the Swarm network and builds the models independently on private data at Swarm edge nodes, without the need of a central custodian. With the benefit of blockchain, SL secures data sovereignty, security, and confidentiality. Each participant is well defined and only pre-authorized participants can be onboarded and execute transactions. Private data are used at each node together with the model provided by the Swarm network." }, { "page_index": 521, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_029.png", "page_index": 521, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:02+07:00" }, "raw_text": "The workflow of SL A new edge node enrolls via a blockchain smart contract, obtains the model, and performs localized training until a user-defined Synchronization Interval (SI). Local model parameters are exchanged and merged to update the global model before the next training round. Apart from Swarm edge nodes, there are Swarm coordinator nodes responsible for maintaining metadata like the model state, training progress, and licenses, without model parameters.
" }, { "page_index": 522, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_030.png", "page_index": 522, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:10+07:00" }, "raw_text": "SL framework (1) [Figure: framework diagram with SPIRE Servers, Swarm Network nodes, Swarm Learning nodes, and License Servers] Swarm Learning (SL) node runs a user-defined DL algorithm which iteratively updates the local model. Swarm Network (SN) nodes interact with each other to maintain global state information about the model and track training progress via the Ethereum blockchain platform. Besides, each SL node registers itself with an SN node at initialization, and each SN node coordinates the training pipeline of its SL nodes. Note that the blockchain records only metadata like the model state and training progress, without model parameters. Swarm Learning Command Interface (SWCI) node securely connects to SN nodes to view the status, control, and manage the SL framework. SPIFFE SPIRE Server node guarantees the security of the SL framework. Each SN node or SL node includes a SPIRE Agent Workload Attestor plugin that communicates with the SPIRE Server nodes to attest their identities and to acquire and manage a SPIFFE Verifiable Identity Document (SVID). License Server (LS) node installs and manages the license to run the SL framework.
" }, { "page_index": 523, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_031.png", "page_index": 523, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:19+07:00" }, "raw_text": "SL framework (2) [Figure: framework diagram with numbered communication lines 1-7] Swarm Network Peer-to-Peer Port is used by SN nodes to share state information about the blockchain with each other, i.e. Line 1. Swarm Network File Server Port is used by SN nodes to run file servers and share state information about the SL framework with each other, i.e. Line 2. Swarm Network API Port is used by SN nodes to run REST-based API servers, i.e. Line 3. The API server is used by SL nodes to send and receive state information from the SN node they are registered with. It is also used by SWCI nodes to view and manage the status of the SL framework. Swarm Learning File Server Port is used by SL nodes to run file servers and share learned parameters of the model with each other, i.e. Line 4. SPIRE Server API Port is used by SPIRE Server nodes to run gRPC-based API servers for SN nodes and SL nodes to connect to the SPIRE Server and acquire SVIDs, i.e. Line 5. SPIRE Server Federation Port is used by SPIRE Server nodes to connect to each other in the SPIRE federation and send and receive trust bundles, i.e. Line 6. License Server API Port is used by the LS node to run a REST-based API server and a management interface, i.e. Line 7. The API server is used by SN nodes and SL nodes to connect to the LS node and acquire licenses.
The management interface is used by the SL framework administrators to connect to the LS node from browsers and administer licenses." }, { "page_index": 524, "chapter_num": 13, "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3071", "source_file": "/workspace/data/converted/CO3071_Distributed_Systems/Chapter_13/slide_032.png", "page_index": 524, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:11:24+07:00" }, "raw_text": "The workflow of SL node An SL node initializes in the following pipeline: 1) It acquires a license from the LS node. 2) It acquires an SVID from the SPIRE Server node. 3) It registers itself with an SN node. 4) It starts a file server and announces to the SN node that it is ready to run the training. 5) It starts running the user-specified model training program. After initialization, each SL node regularly shares its learned parameters with other SL nodes and merges their parameters. This merging periodicity is defined by a Synchronization Interval (SI), which specifies the number of training batches after which SL nodes merge their learned parameters. At the end of each SI, one of the SL nodes is elected as the leader by the blockchain program. The leader collects local models from each SL node and merges them into a global model by averaging their learned parameters. After that, each SL node receives this merged model and starts the next SI." } ] }