{ "course": "Parallel_Computing", "course_id": "CO3067", "schema_version": "material.v1", "slides": [ { "page_index": 0, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_001.png", "page_index": 0, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:20:54+07:00" }, "raw_text": "Parallel Processing Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology -1.1-" }, { "page_index": 1, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_002.png", "page_index": 1, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:20:57+07:00" }, "raw_text": "BK Chapter 1: Introduction TP.HCM HPC and applications New strends Introduction - What is parallel processing? - Why do we use parallel processing? Parallelism HPC Lab - CSE - HCMUT -1.2-" }, { "page_index": 2, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_003.png", "page_index": 2, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:21:03+07:00" }, "raw_text": "Applications (1) BK TP.HCM STS ASCENT CONFIGURATION COMPARISON OF PRESSURE COEFFICIENT IA105A Wind Tunnel Test with F3D/Chimero Novier-Stokes Solve gch1.55 Computotion Wind Tunne 2/22/87 Fluid dynamics Day 360 Brain simulation Weather forecast (PCM) iprccos A.Tinmermarn,U.ofHawaii Simulation of oil spill In in BP oil ship problem Astronomy Simulation of Uranium-235 created ING ING from Phutonium-239 decay 6140 SCIENCEDHOtOLIBRAR 0140 Simulation Medicine i.e. 
the Lithium atom Renault F1: simulation of car accidents HPC Lab - CSE - HCMUT -1.3-" },
{ "page_index": 3, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_004.png", "page_index": 3, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:21:06+07:00" }, "raw_text": "Applications (2) BK TP.HCM Critical HPC issues: Global warming Alternative energy Financial disaster modeling Healthcare New trends: Big Data Internet of Things (IoT) 3D movies and large-scale games are fun Homeland security Smart cities HPC Lab - CSE - HCMUT -1.4-" },
{ "page_index": 4, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_005.png", "page_index": 4, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:21:14+07:00" }, "raw_text": "BK TP.HCM [Figure: the scale of Big Data - 12+ TBs of tweet data every day; 30 billion RFID tags today (1.3B in 2005); 4.6 billion camera phones worldwide; 100s of millions of GPS-enabled devices sold annually; 25+ TBs of log data every day; 76 million smart meters in 2009, 200M by 2014; 2+ billion people on the Web by end 2011] HPC Lab - CSE - HCMUT -1.5-" },
{ "page_index": 5, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_006.png", "page_index": 5, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:21:20+07:00" }, "raw_text": "BK IoT and Services TP.HCM [Figure: The Internet of Things and Services - networking people, objects and systems. Internet of People (10^6-10^8): Social Web, Business Web; Internet of Things (10^7-10^9); Internet of Services (10^4-10^6); CPS platforms: Smart Grid, Smart Factory, Smart Building, Smart Home. Source: Bosch Software Innovations 2012] HPC Lab - CSE - HCMUT -1.6-" },
{ "page_index": 6, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_007.png", "page_index": 6, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:21:28+07:00" }, "raw_text": "BK Smart cities TP.HCM [Figure: Libelium Smart World infographic - smart roads, smart lighting, urban maps, smart parking, golf courses, ...] http://www.libelium.com/libelium-smart-world-infographic-smart-cities-internet-of-things/ HPC Lab - CSE - HCMUT -1.7-" },
{ "page_index": 7, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_008.png", "page_index": 7, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp":
"2025-11-01T08:21:41+07:00" }, "raw_text": "Different thinking BK TP.HCM Smart cities: 2008 <> 2018 Industry: 4.0 <> 3.0 IEI LS TIPES BIG ICSE CILLECTIO T0OLSRY9S A SHARING A PROCESSING= ECISION SPEED STORAGE POET OUFATE DALLEVE S DATA VOLUME AX ECPLINE - MAACE SDFCES SEARCH SISTEMSSZE tllss ENLAMO lp process using arrives patterns improving dtta lnle raud psss understand Éerroneous practical definition business qualitative computatior - analysis C fieldsProbability study closely applications mining performance Ana use O odecision S make alytics realistic simple predictive decisions simplest order rulesinvolved siXS within Others tends - C HPC Lab - CSE - HCMUT -1.8-" }, { "page_index": 8, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_009.png", "page_index": 8, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:21:51+07:00" }, "raw_text": "BK Data collection TP.HCM BIG COLLECTIO T0LSAR MAY9S O SHARING C PROCESSING= STORAGE OUFAE DRAFY S DAT VOLUME ECLIE SDUFCES UAWC SEAFCH SISTESSZE LTE Advanced Cellular 4G/LTE 3G-GPS/GPRS ZG/GSM/EDGE.CDMA.EVDO WEIGHTLESS WIMAX LICENSE-FREE SPECTRUM DASH7 wi Fi WiFi BLUETOOTH UWB Z-WAVE ZIGBEE 6LOWVPAN NFC ANT RFID WAN Connect WideArea Network-80220 POWERLINE MAN ETHERNET PRINTED MetropolitanAreaNetwork-802.16 LAN LocalArea Network-8o2.11 P4IPv6UDP DTLSRFLTeinEtMQTT DDS CoAP XMPP HTTP SOCKETSRESTAP PAN PersonalAreaNetwork-802.15 Ambient Light Touch Screen Proximity Fingerprint Attitude Things Gyroscope Moisture Magnetometer Gravity Accelerometer Barometer HPC Lab - CSE - HCMUT -1.9-" }, { "page_index": 9, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_010.png", "page_index": 9, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:22:03+07:00" }, "raw_text": "process C Data analytics using arriveso BK patterns improving past understand Eerroneous TP.HCM practical marketing \"definition qualitative business computatio S analysis % fields Probability E fieldarea study closely applications mining performance Ana use odecision make lytics realistic simple predictive decisions simplest order 1sIxa mn rules involved Others within o tenas Supercomputing Artificial.lntelligence Data Mining Learn Big Data IP videos Scientific Simulations share music Business lntelligence CLOUD STORAGE Deep Learning pictures documents contacts files Collect HPC Lab - CSE - HCMUT -1.10-" }, { "page_index": 10, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_011.png", "page_index": 10, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:22:14+07:00" }, "raw_text": "High Performance Computing BK TP.HCM http://www.top500.org/ HPCE The European HPC Strategy FUGAKU Altair AMD ASETEK Pepe micro Chelsio Cool1T D 415.53 Petaflops systems The Commission recognised the need for an Eu-level policy in HPc to 
optimise national and European investments, addressing the entire HPC ecosystem. The Commission adopted its HPC Strategy on 15 February 2012 in the Communication 'High Performance Computing (HPC): Europe's place in a global race', to ensure European leadership in the supply and use of HPC systems and services by 2020. The Competitiveness Council on 29/30 May 2013 adopted conclusions on this Communication, highlighting the role of HPC in the EU's innovation capacity and stressing its strategic importance to the EU's industrial and scientific capabilities as well as to its citizens. High-Performance Computing (HPC) is a strategic resource for Europe's future; mastering advanced computing technologies from hardware to software has become essential for innovation, growth and jobs. HPCwire (since 1986, covering the fastest computers in the world and the people who run them), July 30, 2015, by John Russell and Tiffany Trader: 'White House Launches National HPC Strategy' - Yesterday's executive order by President Barack Obama creating a National Strategic Computing Initiative (NSCI) is not only powerful acknowledgment of the vital role HPC plays in modern society but is also indicative of government's mounting worry that failure to coordinate and nourish HPC development on a broader scale would put the nation at risk. Not surprisingly, early reaction from the HPC community has been largely positive. HPC Lab - CSE - HCMUT -1.11-" },
{ "page_index": 11, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_012.png", "page_index": 11, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:22:20+07:00" }, "raw_text": "Exascale Race/Technologies BK TP.HCM IDC-projected exascale dates and suppliers: U.S. - Sustained ES: 2023; Peak ES: 2021; Vendors: U.S.; Processors: U.S.; Initiatives: NSCI/ECP; Cost: $300-500M per system, plus heavy R&D investments. EU - Sustained ES: 2023-24; Peak ES: 2021; Vendors: U.S., Europe; Processors: U.S., ARM; Initiatives: PRACE, ETP4HPC; Cost: $300-350M per system, plus heavy R&D investments. China - Sustained ES: 2023; Peak ES: 2020; Vendors: Chinese; Processors: Chinese (plus U.S.?);
Initiative: 13th 5-Year Plan; Cost: $350-500M per system, plus heavy R&D. Japan - Sustained ES: 2023-24; Peak ES: not planned; Vendors: Japanese; Processors: Japanese; Cost: $600-850M, including both the system and the R&D costs; will also do many smaller-size systems. HPC Lab - CSE - HCMUT -1.12-" },
{ "page_index": 12, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_013.png", "page_index": 12, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:22:28+07:00" }, "raw_text": "BK Supercomputing Conference (SC) TP.HCM Benchmark lists: TOP500 (1993) -> Green500 (SC06) -> Graph500 (SC10) -> IO-500 (SC17) -> Deep500 (SC18). SC17: 12-17 Nov 2017, Denver, Colorado, US - fog/edge computing for smart cities. SC18: 11-16 Nov 2018, Dallas, Texas, US. [Figure: TOP500 pie charts - system share and performance share by country (China, United States, Japan, Germany, France, United Kingdom, Italy, Netherlands, Canada, Poland, others) and by segment (industry, research, academic, government, vendor, classified)] HPC Lab - CSE - HCMUT -1.13-" },
{ "page_index": 13, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_014.png", "page_index": 13, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:22:34+07:00" }, "raw_text": "BK Data analytics: AI TP.HCM AI (Artificial Intelligence) > Machine Learning (Supervised Learning, Unsupervised Learning) > Deep Learning. [Figure: deep neural network with input, hidden layers and output classifying brain scans as Normal, Tumor, Stroke or Hemorrhage; the Nth neuron of the 2nd hidden layer computes Wx + b followed by a ReLU non-linearity] G. Zaharchuk et al.,
AJNR Am J Neuroradiol, doi:10.3174/ajnr.A5543 HPC Lab - CSE - HCMUT -1.14-" },
{ "page_index": 14, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_015.png", "page_index": 14, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:22:41+07:00" }, "raw_text": "AI BK TP.HCM Accuracy = big data + computational power. [Figure: deep learning keeps improving as data grows - accuracy vs. quantity of data for deep learning (big neural networks) vs. traditional machine learning; training/test/validation split: training data -> trained neural network -> model assessment; ImageNet top-5 error: 28%, 26%, AlexNet (8 layers) 16%, ZF (8 layers) 12%, VGG (19 layers) 7.3%, GoogLeNet (22 layers) 6.7%, ResNet (152 layers) 3.6%, CUImage 3.0%, against human error] HPC Lab - CSE - HCMUT -1.15-" },
{ "page_index": 15, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_016.png", "page_index": 15, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:22:47+07:00" }, "raw_text": "Japan Plans Super-Efficient AI Supercomputer BK TP.HCM By Tiffany Trader, November 28, 2016. Editor's note: a source familiar with the project is reporting that the target of 130 petaflops is actually half-precision (16-bit); the target for double precision (64-bit) is 33 petaflops. Additionally, the budget of 19.5 billion yen is for the machine and the building and facility. The target for installation is now 2018 Q1. The story has been updated to reflect these changes and we will report further as it develops. ABCI open innovation platform: investment for AI R&D; collaborative R&D between industry and academia (AIST AI cloud design, construction and operation; AIRC); launching business - AI service developers/providers, startups, ML/DL framework developers; computing power, big data and technology transfer for various AI companies; common AI platform, common modules, common data/models; big data holders, IDC vendors. Source: AIST document. HPC Lab - CSE - HCMUT -1.16-" },
{ "page_index": 16, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_017.png", "page_index": 16, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:22:55+07:00" }, "raw_text": "BK Summit TP.HCM Intelligent Machines: the world's most powerful supercomputer is tailor-made for the AI era. The technology used to build America's new Summit machine will also help us make the leap to exascale computing. MIT Technology Review, June 8, 2018, by Martin Giles. AI challenges for the Summit supercomputer: COMBATING CANCER - through the development of scalable deep neural networks, scientists at the US Department of Energy and the National Cancer Institute are making strides in improving diagnosis and treatment of this disease; the arrival of Summit gives researchers a powerful boost in the fight against cancer. DECIPHERING HIGH-ENERGY
PHYSICS DATA - physicists possess truckloads of data from large, high-energy experiments such as the Large Hadron Collider in Switzerland; with AI supercomputing, physicists can lean on machines to identify important pieces of information - data that is too massive for any single human to handle and that could change our understanding of the universe. PREDICTING FUSION ENERGY - obtaining the long-sought benefits of fusion energy, the same energy that powers the Sun, depends on reliable fusion reactors; predictive AI software is already contributing to this goal by helping scientists anticipate disruptions to the volatile plasmas inside experimental reactors, and Summit's arrival allows researchers to take this work to the next level and further integrate AI with fusion technology. IDENTIFYING NEXT-GENERATION MATERIALS - deep learning on Summit could help scientists identify materials for next-generation technologies: better materials and more efficient semiconductors; by training AI algorithms to predict materials' properties based on detailed experimental images, researchers could definitively answer longstanding questions about materials' behaviors at atomic scales. HPC Lab - CSE - HCMUT -1.17-" },
{ "page_index": 17, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_018.png", "page_index": 17, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:22:58+07:00" }, "raw_text": "Aurora BK TP.HCM The next supercomputer of the United States, in 2021 => AI. Argonne National Laboratory, U.S. Department of Energy, Intel, Cray. HPC Lab - CSE - HCMUT -1.18-" },
{ "page_index": 18, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_a/slide_019.png", "page_index": 18, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:23:03+07:00" }, "raw_text": "NARLabs AI Platform in 2018: TAIWANIA 2. The NCHC is accelerating AI innovation in Taiwan. TAIWANIA 2: 9 petaflops Rmax. 19" },
{ "page_index": 19, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_001.png", "page_index": 19, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:23:05+07:00" }, "raw_text": "Parallel Processing Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology -1.1-" },
{ "page_index": 20, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_002.png", "page_index": 20, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:23:07+07:00" }, "raw_text": "BK HCMUT TP.HCM What are we doing?
HPC Lab - CSE - HCMUT -1.2-" },
{ "page_index": 21, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_003.png", "page_index": 21, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:23:14+07:00" }, "raw_text": "Internet of Things (IoT) BK TP.HCM Timothy Chou's framework: Things - Connect - Collect - Learn - Do. [Figure: connectivity stack - cellular: LTE Advanced, 4G/LTE, 3G-GPS/GPRS, 2G/GSM/EDGE, CDMA, EVDO, WEIGHTLESS, WiMAX; license-free spectrum: DASH7, WiFi, Bluetooth, UWB, Z-Wave, ZigBee, 6LoWPAN, NFC, ANT, RFID; networks: WAN (Wide Area Network, 802.20), MAN (Metropolitan Area Network, 802.16), LAN (Local Area Network, 802.11), PAN (Personal Area Network, 802.15); wired: Powerline, Ethernet, Printed; protocols: IPv4/IPv6, UDP, DTLS, RPL, Telnet, MQTT, DDS, CoAP, XMPP, HTTP, sockets, REST API; Things (sensors): ambient light, touch screen, proximity, fingerprint, attitude, accelerometer, gyroscope, moisture, magnetometer, gravity, barometer] HPC Lab - CSE - HCMUT 3" },
{ "page_index": 22, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_004.png", "page_index": 22, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:23:18+07:00" }, "raw_text": "BK CoLab TP.HCM (1) Smart Cities, (2) Industry 4.0. Real problems: traffic jams, urban flooding, pollution (air & water), smart lighting, robotics. Applications/solutions (joint/new labs): Smart Cities, Smart Grid, Bio-medical, Material products. Core Lab: computing & data infrastructure and core technologies - AI, IoT, Big Data, Blockchain, Cyber Security - HPC Lab. HPC Lab - CSE - HCMUT -1.4-" },
{ "page_index": 23, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_005.png", "page_index": 23, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:23:23+07:00" }, "raw_text": "BK HPC Lab TP.HCM Partners: VNU-HCM (HCMUT), Intel, Ho Chi Minh City; HPE (2015) & Nvidia (2017). Plan 2012-2022: HPC Lab set up in 2013; strengthen HPC in Vietnam; solving big problems; leading in technology. Connect - Contract - Close - Collaborate: collaboration creates new possibilities & opportunities which wouldn't otherwise exist. Applications 2013-2022: traffic analysis, urban flooding, big data analytics. HPC Lab - CSE - HCMUT -1.5-" },
{ "page_index": 24, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_006.png", "page_index": 24, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:23:33+07:00" }, "raw_text": "BK SuperNode I & II TP.HCM SuperNode I in 1998-2000. SuperNode II in 2003-2005. [Figure: screenshot of the Supernode computing system web portal
(www.spn.hcmut.edu.vn) in Mozilla Firefox - browse home directory, submit new job, job status, services; a logged-in user's home directory listing with file sizes, permissions, owner and group IDs] HPC Lab - CSE - HCMUT -1.6-" },
{ "page_index": 25, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_007.png", "page_index": 25, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:23:38+07:00" }, "raw_text": "BK SuperNode V TP.HCM [Figure: SuperNode-V architecture - Blackmun gateway (OpenLDAP, DHCP, DNS, SSH); VM groups with shared file system, authentication control, contextualization control and group user info; cloud environment: service manager, user manager, arbitrator, repository manager, job manager, group status manager, OS image repository, MySQL DB, resource manager and allocation, elasticity policy] SuperNode-V project: 2010-2012 HPC Lab - CSE - HCMUT -1.7-" },
{ "page_index": 26, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_008.png", "page_index": 26, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:23:43+07:00" }, "raw_text": "BK EDA-Grid & VN-Grid TP.HCM SuperNode II. Applications: chip design, data mining, airfoil optimization. Services: security, monitoring, user management, scheduling, resource management, information service, data service. Campus/VN-Grid (GT), POP-C++. HPC Lab - CSE - HCMUT -1.8-" },
{ "page_index": 27, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_009.png", "page_index": 27, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:23:52+07:00" }, "raw_text": "SuperNode-XP BK TP.HCM 50 TFlops machine. [Figure: system layout - Internet; expandable storage system (future work); 1U InfiniBand switch and 1U Ethernet switch; InfiniBand lines carry the internal data traffic; the Ethernet side acts as the controller,
managing the connection between the compute nodes and the Internet; Headnode1 and Headnode2 (backup) with temporary storage (8 TB) - data is stored centrally at the headnode until the expandable storage system is available; S-Island: 12 compute nodes, 24 cores/node, 128 GB RAM, Xeon Phi; M-Island: 8 compute nodes, 24 cores/node, 256 GB RAM + SSD, Xeon Phi; L-Island: 4 compute nodes, 24 cores/node, 512 GB RAM, SSD + Xeon Phi; iLO connection - administration line (system monitoring via the headnode)] HPC Lab - CSE - HCMUT -1.9-" },
{ "page_index": 28, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_010.png", "page_index": 28, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:01+07:00" }, "raw_text": "BK Applications on SuperNode-XP TP.HCM [Table: user / organization-unit / start time / apps - A/Prof. Nguyen Thong, HCMUT Faculty of Civil Engineering, 30/09/14, OpenTelemac; Ms. Pham Ngoc Thanh and Dr.-Ing. Jorg Franke, Vietnam Germany University, 06/06/16, OpenFoam; Dr. Le Thanh Van, HCMUT Faculty of Computer Science, 20/08/16, Hadoop; A/Prof. Tran Van Hoai, HCMUT Faculty of Computer Science, 25/10/16, BLAS; Mr. Quan, HCMUT Faculty of Electrical and Electronic Engineering, 01/11/16, DFFT; further rows partially legible: VASP and Quantum Espresso simulations (INOMAR center), environmental science/disaster mitigation, Vietnam Germany University, finance & business, OpenFOAM simulations, science research] HPC Lab - CSE - HCMUT -1.10-" },
{ "page_index": 29, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_011.png", "page_index": 29, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:10+07:00" }, "raw_text": "High Performance Computing Lab BK CSE - Intel - Hewlett Packard Enterprise. Salinization in the Mekong Delta: simulation and prediction for environmental problems. Automatic motorbike detection in Vietnam: automatic traffic density estimation in Ho Chi Minh City. ITS HCMUT: Internet traffic information for Ho Chi Minh City & cloud services. Intel Xeon Phi (MIC) co-processor: 2 MIC cards/node, 61 cores/card. Libraries: ANSYS (128 cores), Hadoop & Spark, Cadence, OpenTELEMAC, OpenFOAM, Intel Parallel Studio, BLAS. SuperNode-XP & HPC applications: InfiniBand switches (56 Gbps); PCIe width x16, speed 5 GT/s; S-Island: 12 compute nodes, 24 cores/node, 128 GB RAM; M-Island: 8 compute nodes, 24 cores/node, 256 GB RAM + SSD; L-Island: 4 compute nodes, 24 cores/node, 512 GB RAM + SSD. HPC Lab - CSE - HCMUT -1.11-" },
{ "page_index": 30, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_012.png", "page_index": 30, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:16+07:00" }, "raw_text": "SuperNode-XP Architecture BK TP.HCM Batch jobs, interactive jobs, long & large apps. Portal; apps/users; admin; performance monitoring; analytics. Big data tools (Hadoop, Spark...);
resource scheduler; resource management (PBSpro). 24 computing nodes: CPUs (2x12 cores), Xeon Phi (2x61 cores), GPUs (3584 cores), 128-512 GB RAM, hard disk (SSD). Network infrastructure: InfiniBand (56 Gbps), Gigabit Ethernet control plane; storage and big data nodes on InfiniBand (100 Gbps) with their own control plane. HPC Lab - CSE - HCMUT -1.12-" },
{ "page_index": 31, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_013.png", "page_index": 31, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:22+07:00" }, "raw_text": "HPDA = Data-Intensive Computing Using HPC BK TP.HCM Drivers: competition, complexity, time. Advanced analytics - existing HPC users: intelligence community, FSI, data-driven science/engineering (e.g., biology), knowledge discovery, ML/DL, cognitive, AI; new commercial users: fraud/anomaly detection, affinity marketing, personalized medicine, business intelligence. Modeling & simulation - existing HPC users: larger problem sizes, higher resolution, iterative methods, EP jobs to the cloud (Novartis); new commercial users, e.g. SMEs. https://www.hpcwire.com/2017/04/20/hyperion-idc-paints-bullish-picture-hpc-future HPC Lab - CSE - HCMUT -1.13-" },
{ "page_index": 32, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_014.png", "page_index": 32, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:27+07:00" }, "raw_text": "BK HPC Lab TP.HCM Machines - HPDA (HPC + Big Data + AI) platform: SuperNode-XP/AI; 256/512 GB RAM; InfiniBand 50, 100, 200 Gbps; storage > 200 TB. Data collection: data streaming. Data analytics: data mining & AI algorithms. Resource management: PBSpro, Yarn + AI scheduling algorithms. Networking: MPI one-sided communication. Tools: MPI, OpenMP, Hadoop, Spark, Kafka, TensorFlow, Torch, Caffe2, PBSpro, Torque. Virtualization: Docker & VM. Portal & monitoring: Open OnDemand (https://openondemand.org), Zabbix (https://github.com/zabbix/zabbix), Prometheus (https://prometheus.io), Grafana (https://grafana.com). Products: SuperNode machine series; SuperNode-DataCore, a hybrid HPC data platform for HPDA; Smart Village/City Data Core. HPC Lab - CSE - HCMUT -1.14-" },
{ "page_index": 33, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_b/slide_015.png", "page_index": 33, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:33+07:00" }, "raw_text": "Platform BK TP.HCM [Figure: SuperNode-XP layout - Internet; expandable storage system (future work); 1U InfiniBand switch and 1U Ethernet switch; InfiniBand lines carry the internal data traffic; the Ethernet side acts as the controller,
managing the connection between the compute nodes and the Internet; headnode with backup; temporary storage (8 TB) held centrally at the headnode until the expandable storage system is available; M-Island: 8 compute nodes, 24 cores/node, 256 GB RAM + SSD, Xeon Phi; L-Island: 4 compute nodes, 24 cores/node, 512 GB RAM, SSD + Xeon Phi] Performance: multi-core & many-core (Xeon Phi, GPU); ultra-high-speed network: InfiniBand; storage (NFS, Lustre, ...); HPC/Big Data/AI: MPI, Hadoop, Spark, DL; virtualization techniques; resource management & allocation. Usage: automatic/semi-automatic solutions; access to data/results. Administration: the system. Admins, users, HPC applications" },
{ "page_index": 34, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_001.png", "page_index": 34, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:35+07:00" }, "raw_text": "Parallel Processing Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology -1.1-" },
{ "page_index": 35, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_002.png", "page_index": 35, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:37+07:00" }, "raw_text": "BK How to do it TP.HCM Parallel processing HPC Lab - CSE - HCMUT -1.2-" },
{ "page_index": 36, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_003.png", "page_index": 36, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:40+07:00" }, "raw_text": "Sequential Processing BK TP.HCM 1 CPU. Simple. Big problems???
HPC Lab - CSE - HCMUT -1.3-" },
{ "page_index": 37, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_004.png", "page_index": 37, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:42+07:00" }, "raw_text": "New Approach BK TP.HCM Modeling Analysis Simulation HPC Lab - CSE - HCMUT -1.4-" },
{ "page_index": 38, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_005.png", "page_index": 38, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:45+07:00" }, "raw_text": "Grand Challenge Problems BK TP.HCM A grand challenge problem is one that cannot be solved in a reasonable amount of time with today's computers. Ex: - Modeling large DNA structures - Global weather forecasting - Modeling motion of astronomical bodies HPC Lab - CSE - HCMUT -1.5-" },
{ "page_index": 39, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_006.png", "page_index": 39, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:49+07:00" }, "raw_text": "N-body BK TP.HCM The N^2 algorithm: - N bodies: N-1 forces to calculate for each body, so ~N^2 calculations in total [Figure: bodies with masses m_1, m_2, ... and pairwise forces F_ij] - After the new positions of the bodies are determined, the calculations must be repeated HPC Lab - CSE - HCMUT -1.6-" },
{ "page_index": 40, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_007.png", "page_index": 40, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:53+07:00" }, "raw_text": "Galaxy BK TP.HCM - 10^7 stars, so 10^14 calculations have to be repeated - Each calculation could be done in 1 us (10^-6 s) - It would take 3 years for one iteration (26,800 hours) - But it only takes 10 hours for one iteration with 2,680 processors HPC Lab - CSE - HCMUT -1.7-" },
{ "page_index": 41, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_008.png", "page_index": 41, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:55+07:00" }, "raw_text": "Solutions BK TP.HCM Power processor: - 50 MHz -> 100 MHz -> 1 GHz -> 4 GHz -> ... -> Upper bound? Smart worker: - Better algorithms Parallel processing
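As a concrete illustration of the last point, here is a minimal Python sketch of one N-body iteration split across processors (illustrative only; the function names and the use of multiprocessing are assumptions, not part of the slides):

import multiprocessing as mp

def forces_for_chunk(args):
    # Each worker computes the net force on its chunk of bodies:
    # N-1 interactions per body, so ~N^2/P of the work with P workers.
    chunk, positions, masses = args
    G = 6.674e-11
    out = []
    for i in chunk:
        fx = fy = 0.0
        xi, yi = positions[i]
        for j in range(len(positions)):  # the N-1 force terms for body i
            if j == i:
                continue
            dx, dy = positions[j][0] - xi, positions[j][1] - yi
            r2 = dx * dx + dy * dy + 1e-12  # softening avoids division by zero
            f = G * masses[i] * masses[j] / r2
            r = r2 ** 0.5
            fx += f * dx / r
            fy += f * dy / r
        out.append((i, fx, fy))
    return out

def iteration(positions, masses, nprocs=4):
    # Data parallelism: the N bodies are split into nprocs chunks, one per processor.
    idx = list(range(len(positions)))
    chunks = [idx[k::nprocs] for k in range(nprocs)]
    with mp.Pool(nprocs) as pool:
        parts = pool.map(forces_for_chunk, [(c, positions, masses) for c in chunks])
    return [f for part in parts for f in part]

With 2,680 workers instead of 4, this same split is what turns the 26,800-hour iteration on the Galaxy slide into roughly 10 hours.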
HPC Lab - CSE - HCMUT -1.8-" },
{ "page_index": 42, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_009.png", "page_index": 42, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:24:59+07:00" }, "raw_text": "Parallel Processing Terminology BK TP.HCM Parallel processing. Parallel computer: - Multi-processor computer capable of parallel processing. Throughput: - The throughput of a device is the number of results it produces per unit time. Speedup: S = Time(the most efficient sequential algorithm) / Time(parallel algorithm). Parallelism: - Pipeline - Data parallelism - Control parallelism HPC Lab - CSE - HCMUT -1.9-" },
{ "page_index": 43, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_010.png", "page_index": 43, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:02+07:00" }, "raw_text": "BK Pipeline TP.HCM A number of steps called segments or stages. The output of one segment is the input of the next segment. Stage 1 -> Stage 2 -> Stage 3 HPC Lab - CSE - HCMUT -1.10-" },
{ "page_index": 44, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_011.png", "page_index": 44, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:05+07:00" }, "raw_text": "BK Data Parallelism TP.HCM Distributing the data across different parallel computing nodes. Applying the same operation simultaneously to elements of a data set. HPC Lab - CSE - HCMUT -1.11-" },
{ "page_index": 45, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_012.png", "page_index": 45, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:09+07:00" }, "raw_text": "Pipeline & Data Parallelism BK TP.HCM [Figure: widgets W1, W2, ... flowing through stages A, B, C - 1. Sequential execution: each widget passes through A, B, C before the next one starts; 2. Pipeline: consecutive widgets occupy stages A, B and C at the same time; 3. Data parallelism: three copies of the whole A-B-C line each process a different widget]
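A minimal Python sketch of the execution times behind these three schemes (illustrative; it assumes each of the 3 stages takes one time unit and data parallelism uses 3 processors, matching the table on the next slide):

def t_seq(n, stages=3):
    # Sequential: every widget passes through all stages, one widget at a time.
    return stages * n

def t_pipe(n, stages=3):
    # Pipeline: the first widget fills the pipe, then one widget completes per step.
    return stages + (n - 1)

def t_dp(n, procs=3, stages=3):
    # Data parallelism: each batch of procs widgets finishes after stages steps.
    return stages * -(-n // procs)  # -(-n // procs) is ceil(n / procs)

for n in range(1, 11):
    print(n, t_seq(n), t_pipe(n), t_dp(n))  # reproduces T(s), T(p), T(dp) below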
HPC Lab - CSE - HCMUT -1.12-" },
{ "page_index": 46, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_013.png", "page_index": 46, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:18+07:00" }, "raw_text": "Pipeline & Data Parallelism BK TP.HCM Pipeline is a special case of control parallelism. T(s): sequential execution time. T(p): pipeline execution time (with 3 stages). T(dp): data-parallelism execution time (with 3 processors). S(p): speedup of pipeline. S(dp): speedup of data parallelism. Widget: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. T(s): 3, 6, 9, 12, 15, 18, 21, 24, 27, 30. T(p): 3, 4, 5, 6, 7, 8, 9, 10, 11, 12. T(dp): 3, 3, 3, 6, 6, 6, 9, 9, 9, 12. S(p): 1, 1+1/2, 1+4/5, 2, 2+1/7, 2+1/4, 2+1/3, 2+2/5, 2+5/11, 2+1/2. S(dp): 1, 2, 3, 2, 2+1/2, 3, 2+1/3, 2+2/3, 3, 2+1/2. HPC Lab - CSE - HCMUT -1.13-" },
{ "page_index": 47, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_014.png", "page_index": 47, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:21+07:00" }, "raw_text": "Pipeline & Data Parallelism BK TP.HCM [Figure: plot of S(p) and S(dp) against the number of widgets, 1-10; y-axis from 0 to 3.5] HPC Lab - CSE - HCMUT -1.14-" },
{ "page_index": 48, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_015.png", "page_index": 48, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:25+07:00" }, "raw_text": "BK Control Parallelism TP.HCM Task/function parallelism: - Distributing execution processes (threads) across different parallel computing nodes - Applying different operations to different data elements simultaneously [Figure: a problem data set feeding task 0, task 1, task 2, task 3] HPC Lab - CSE - HCMUT -1.15-" },
{ "page_index": 49, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_016.png", "page_index": 49, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:28+07:00" }, "raw_text": "BK Example TP.HCM {Milk, Sugar, Bread, Tea} {Bread, Milk, Coffee, Meat} {Milk, Sugar} {Milk, Sugar, Bread} {Milk, Bread, Sugar, Salt} {Apple, Orange, Banana, Sugar, Milk} {Milk, Bread, Sugar, Beer} Pipeline? Control parallelism? Data parallelism?
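One possible data-parallel reading of this example (an illustration, not the slide's official answer; the chunking and helper names are assumptions):

from concurrent.futures import ThreadPoolExecutor

baskets = [
    {'Milk', 'Sugar', 'Bread', 'Tea'},
    {'Bread', 'Milk', 'Coffee', 'Meat'},
    {'Milk', 'Sugar'},
    {'Milk', 'Sugar', 'Bread'},
    {'Milk', 'Bread', 'Sugar', 'Salt'},
    {'Apple', 'Orange', 'Banana', 'Sugar', 'Milk'},
    {'Milk', 'Bread', 'Sugar', 'Beer'},
]

def count_in_chunk(chunk, item='Sugar'):
    # The same operation applied to every element of a chunk: data parallelism.
    return sum(1 for basket in chunk if item in basket)

chunks = [baskets[0:3], baskets[3:5], baskets[5:7]]  # one chunk per node
with ThreadPoolExecutor(max_workers=3) as ex:
    total = sum(ex.map(count_in_chunk, chunks))
print(total)  # 6 of the 7 baskets contain Sugar

Control parallelism would instead run different operations (say, counting Sugar, counting Bread, and measuring basket sizes) on the same data at the same time, and a pipeline would push the baskets through a fixed sequence of such stages.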
HPC Lab - CSE - HCMUT -1.16-" },
{ "page_index": 50, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_017.png", "page_index": 50, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:32+07:00" }, "raw_text": "Throughput: the woodhouse problem BK TP.HCM 5 persons complete 1 woodhouse in 3 days. 10 persons complete 1 woodhouse in 2 days. How to build 2 woodhouses with 10 persons? (1) 10 persons build the 1st woodhouse and then the 2nd one later (sequentially). (2) 10 persons build 2 woodhouses concurrently; that is, each group of 5 persons completes one woodhouse. HPC Lab - CSE - HCMUT -1.17-" },
{ "page_index": 51, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_018.png", "page_index": 51, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:36+07:00" }, "raw_text": "Throughput BK TP.HCM The throughput of a device is the number of results it produces per unit time. High Performance Computing (HPC): - Needing large amounts of computing power for short periods of time, in order to complete the task as soon as possible. High Throughput Computing (HTC): - How many jobs can be completed over a long period of time, instead of how fast an individual job can complete. HPC Lab - CSE - HCMUT -1.18-" },
{ "page_index": 52, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_1_c/slide_019.png", "page_index": 52, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:39+07:00" }, "raw_text": "Scalability BK TP.HCM An algorithm is scalable if the level of parallelism increases at least linearly with the problem size. An architecture is scalable if it continues to yield the same performance per processor, albeit applied to a larger problem size, as the number of processors increases. Data-parallelism algorithms are more scalable than control-parallelism algorithms.
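A small sketch of why that last claim holds (illustrative; it reuses the 3-stage pipeline and the one-element-per-processor data parallelism from the earlier slides):

def pipeline_parallelism(problem_size, stages=3):
    # At most 'stages' widgets are ever in flight, however big the problem gets.
    return min(problem_size, stages)

def data_parallelism(problem_size):
    # One independent element per processor: grows linearly with problem size.
    return problem_size

for n in (10, 100, 1000):
    print(n, pipeline_parallelism(n), data_parallelism(n))
# The pipeline saturates at 3, while data parallelism keeps scaling with n.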
HPC Lab - CSE - HCMUT -1.19-" },
{ "page_index": 53, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_001.png", "page_index": 53, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:42+07:00" }, "raw_text": "Chapter 2 PRAM: Matrix Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology" },
{ "page_index": 54, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_002.png", "page_index": 54, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:49+07:00" }, "raw_text": "Matrix addition BK TP.HCM A[n x n] + B[n x n] = C[n x n]: c_ij = a_ij + b_ij for all 1 <= i, j <= n. [Figure: the three n x n matrices written out element by element] HPC Lab - CSE-HCMUT" },
{ "page_index": 55, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_003.png", "page_index": 55, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:25:55+07:00" }, "raw_text": "Matrix addition: BK TP.HCM PRAM with n x n processors. Processor P_ij: reads a_ij & b_ij, computes c_ij = a_ij + b_ij, writes c_ij. No overlapping data. All n x n processors run '+' in parallel: O(1). CRCW: O(1) => EREW: O(1). HPC Lab - CSE-HCMUT" },
{ "page_index": 56, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_004.png", "page_index": 56, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:00+07:00" }, "raw_text": "Matrix addition: BK TP.HCM PRAM with n processors. Processor P_i: reads a_ij & b_ij, writes c_ij. All n processors run '+' in parallel in n steps: O(n). In step j (step 1: c_11, c_21, ..., c_n1; step 2: c_12, c_22, ..., c_n2; ...; step n: c_1n, ..., c_nn), P_i computes c_ij = a_ij + b_ij. No overlapping data. CRCW: O(n); EREW: O(n). Your algorithm with k processors (k << n)?
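A minimal Python simulation of this n-processor schedule (illustrative; a sequential stand-in for the PRAM, with test data of my choosing):

n = 4
A = [[i + j for j in range(n)] for i in range(n)]   # any test matrices
B = [[i * j for j in range(n)] for i in range(n)]
C = [[0] * n for _ in range(n)]

# n processors, n synchronous steps; in step j, processor P_i handles c[i][j].
# Each processor touches only its own row, so no reads or writes ever overlap:
# EREW suffices, and the running time is O(n).
for j in range(n):        # the n time steps
    for i in range(n):    # what P_1 ... P_n do concurrently within step j
        C[i][j] = A[i][j] + B[i][j]

assert all(C[i][j] == A[i][j] + B[i][j] for i in range(n) for j in range(n))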
HPC Lab - CSE-HCMUT" },
{ "page_index": 57, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_005.png", "page_index": 57, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:07+07:00" }, "raw_text": "Matrix multiplication BK TP.HCM C = A x B with A[n x n], B[n x n], C[n x n]: c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj (row i of A times column j of B). HPC Lab - CSE-HCMUT" },
{ "page_index": 58, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_006.png", "page_index": 58, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:12+07:00" }, "raw_text": "c_ij BK TP.HCM c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj. Vector: A[ith row] x B[jth column]. Number of operations = n x '*' + (n-1) x '+'. With n processors: O(1) for the multiplications + O(log(n)) for the additions => O(log(n)). HPC Lab - CSE-HCMUT" },
{ "page_index": 59, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_007.png", "page_index": 59, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:15+07:00" }, "raw_text": "PRAM BK TP.HCM Matrix multiplication. One c_ij using n processors: O(log(n)). C[n x n], i.e. n*n entries c_ij, with n processors: O(n^2 * log(n)). With n^3 processors: O(log(n)). With n^2 processors??? HPC Lab - CSE-HCMUT" },
{ "page_index": 60, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_008.png", "page_index": 60, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:19+07:00" }, "raw_text": "PRAM BK TP.HCM One c_ij using n processors: O(log(n)). C[n x n] with n^3 processors: O(log(n)). With n^2 processors??? Difference between CRCW & EREW??? HPC Lab - CSE-HCMUT" },
{ "page_index": 61, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_009.png", "page_index": 61, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:25+07:00" }, "raw_text": "Row BK TP.HCM Row i of C: c_i1 = a_i1*b_11 + a_i2*b_21 + ... + a_in*b_n1; c_i2 = a_i1*b_12 + a_i2*b_22 + ... + a_in*b_n2; ...; c_in = a_i1*b_1n + a_i2*b_2n + ... + a_in*b_nn. Every entry of row i reuses the same row A[i].
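A minimal Python sketch of the O(log n) schedule for one c_ij (a sequential simulation of the parallel rounds; the helper name is mine):

def c_ij_parallel(row_a, col_b):
    # Round 0: n processors each perform one multiplication in parallel: O(1).
    prod = [a * b for a, b in zip(row_a, col_b)]
    # Then a tree reduction: each round halves the number of partial sums,
    # so ceil(log2(n)) synchronous rounds remain: O(log n) overall.
    while len(prod) > 1:
        half = (len(prod) + 1) // 2
        prod = [prod[k] + prod[k + half] if k + half < len(prod) else prod[k]
                for k in range(half)]
    return prod[0]

print(c_ij_parallel([1, 2, 3, 4], [5, 6, 7, 8]))  # 5 + 12 + 21 + 32 = 70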
HPC Lab - CSE-HCMUT" },
{ "page_index": 62, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_010.png", "page_index": 62, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:31+07:00" }, "raw_text": "Concurrent read in a row BK TP.HCM Computing row i with processors P_1 ... P_n in parallel: P_j computes c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj. All processors read the same row A[i] at the same time. CR: working; ER: problem; CW: working; EW: working (each processor writes its own c_ij). HPC Lab - CSE-HCMUT" },
{ "page_index": 63, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_011.png", "page_index": 63, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:40+07:00" }, "raw_text": "Column BK TP.HCM Column j of C: c_1j = a_11*b_1j + a_12*b_2j + ... + a_1n*b_nj; c_2j = a_21*b_1j + a_22*b_2j + ... + a_2n*b_nj; ...; c_nj = a_n1*b_1j + a_n2*b_2j + ... + a_nn*b_nj. Every entry of column j reuses the same column B[j]. HPC Lab - CSE-HCMUT" },
{ "page_index": 64, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_012.png", "page_index": 64, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:46+07:00" }, "raw_text": "Column BK TP.HCM With P_1 ... P_n in parallel, P_i computes c_ij; all processors read the same column B[j] at the same time. CR: working; ER: problem; CW: working; EW: working. HPC Lab - CSE-HCMUT" },
{ "page_index": 65, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_013.png", "page_index": 65, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:53+07:00" }, "raw_text": "PRAM with n^2 processors BK TP.HCM EREW. [Figure: processors P_11 ... P_nn, one per entry of C; the rows of A and the columns of B are staggered across the grid so that at every step each processor reads a different element] Why not use n^2 processors with EREW? HPC Lab - CSE-HCMUT" },
{ "page_index": 66, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_014.png", "page_index": 66, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:56+07:00" }, "raw_text": "PRAM BK TP.HCM EREW. [Figure: the skewed data layout - each row of A is cyclically rotated by a different offset (row 1: a_11 a_12 ... a_1n; row 2: a_2n a_21 ... a_2(n-1); ...; row n: a_n2 a_n3 ... a_n1), and B is arranged by diagonals (first column: b_11, b_22, ..., b_nn; next: b_12, b_23, ..., b_(n-1)n, b_n1; ...; last: b_21, b_32, ..., b_1n)]
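One standard way to realize this staggering in code (an illustration consistent with the figure; the cyclic index k = (i + j + t) mod n is my choice of skew, and the test data is made up):

n = 4
A = [[i + 2 * j for j in range(n)] for i in range(n)]
B = [[3 * i - j for j in range(n)] for i in range(n)]
C = [[0] * n for _ in range(n)]

# n^2 processors P_ij, n synchronous steps. In step t, P_ij reads a[i][k] and
# b[k][j] with k = (i + j + t) mod n. For a fixed t, processors sharing row i
# use distinct k, and so do processors sharing column j; hence no two
# processors read the same cell in the same step: EREW, O(n) steps.
for t in range(n):
    for i in range(n):
        for j in range(n):
            k = (i + j + t) % n
            C[i][j] += A[i][k] * B[k][j]

assert C == [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]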
{ "page_index": 67, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_015.png", "page_index": 67, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:26:59+07:00" }, "raw_text": "PRAM EREW with n^2 processors. [Figure: building C[n x n]; processor P_ij works on a_i(j-1), a_ij, a_i(j+1) while the corresponding b_(j-1)k and b_(j+1)k stream past, the skew keeping every read exclusive.]" },
{ "page_index": 68, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_016.png", "page_index": 68, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:07+07:00" }, "raw_text": "PRAM EREW with n^2 processors. [Figure: grid of processors P_11 ... P_nn with the skewed B.] Each P_ij runs n multiply-add steps sequentially; running all n^2 processors in lockstep, the whole product takes n steps: O(n)." },
{ "page_index": 69, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_017.png", "page_index": 69, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:15+07:00" }, "raw_text": "PRAM with n processors. P_i: row A[i]. P_i: column B[j]. Each P_i computes row i of C: c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj. Each P_i reads the whole of B[n x n]." },
{ "page_index": 70, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_018.png", "page_index": 70, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:18+07:00" }, "raw_text": "PRAM CRCW with n processors. Each P_i: c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj => O(n). Row C[i]: O(n^2)." },
{ "page_index": 71, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_019.png", "page_index": 71, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:26+07:00" }, "raw_text": "PRAM EREW with n processors. P_i: row A[i]. P_i: column B[j]. Each P_i computes row i of C: c_ij = a_i1*b_1j + ... + a_in*b_nj. But each P_i reads the whole of B[n x n]: ER???" },
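A minimal C sketch of the skewed schedule that the next slide formalizes: processor i computes row i but starts at column i and wraps around, so in any given step the n simulated processors read n different columns of B and reads stay exclusive. multiply_skewed and the fixed N are hypothetical names for illustration.

```c
#include <stdio.h>

#define N 4  /* hypothetical size */

/* EREW-style schedule: each "processor" (outer loop) owns row i of A, and
   at step t visits column (i + t) mod N of B, so no two processors read
   the same column of B in the same step. Per processor: N steps of O(N)
   work each, i.e. O(N^2), matching the slides' Row C[i]: O(n^2). */
void multiply_skewed(const double A[N][N], const double B[N][N], double C[N][N]) {
    for (int i = 0; i < N; i++)            /* processor P_i */
        for (int t = 0; t < N; t++) {      /* lockstep time step t */
            int j = (i + t) % N;           /* column visited by P_i at step t */
            C[i][j] = 0.0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
}

int main(void) {
    double A[N][N], B[N][N], C[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = (i == j); B[i][j] = i + j; }
    multiply_skewed(A, B, C);              /* A is the identity, so C == B */
    printf("%g\n", C[1][2]);               /* prints 3 */
    return 0;
}
```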
{ "page_index": 72, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_020.png", "page_index": 72, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:33+07:00" }, "raw_text": "PRAM EREW with n processors. Each P_i: c_ij = a_i1*b_1j + a_i2*b_2j + ... + a_in*b_nj => O(n). Row C[i]: O(n^2). [Table: at the first step P_1 reads B[1], P_2 reads B[2], ..., P_n reads B[n]; each processor then advances cyclically through B[1] ... B[n], so no column is read by two processors at once.]" },
{ "page_index": 73, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_a/slide_021.png", "page_index": 73, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:38+07:00" }, "raw_text": "PRAM EREW with k processors (k << n). Each step (the k processors compute a k x k block of C): O(k*n). One phase (n/k steps): O(k*n * n/k) = O(n^2). n/k phases: O(n^2 * n/k) = O(n^3/k). [Figure: C split into k x k blocks; phase 1 covers rows A[1..k], ..., phase n/k covers rows A[n-k+1..n].]" },
{ "page_index": 74, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_001.png", "page_index": 74, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:42+07:00" }, "raw_text": "Chapter 2: Abstract Machine Models. Thoai Nam. High Performance Computing Lab (HPC Lab), Faculty of Computer Science and Engineering, HCMC University of Technology" },
{ "page_index": 75, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_002.png", "page_index": 75, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:45+07:00" }, "raw_text": "Abstract Machine Models. An abstract machine model is mainly used in the design and analysis of parallel algorithms without worrying about the details of physical machines.
Three abstract machine models: PRAM, BSP, Phase Parallel." },
{ "page_index": 76, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_003.png", "page_index": 76, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:49+07:00" }, "raw_text": "RAM (1). RAM (Random Access Machine). [Figure: read-only input tape x1 x2 ... xn; a program with a program counter; memory locations r0, r1, r2, r3, ...; write-only output tape.]" },
{ "page_index": 77, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_004.png", "page_index": 77, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:53+07:00" }, "raw_text": "RAM (2). RAM model of serial computers. Memory is a sequence of words, each capable of containing an integer. Each memory access takes one unit of time. Basic operations (add, multiply, compare) take one unit of time. Instructions are not modifiable. Read-only input tape, write-only output tape." },
{ "page_index": 78, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_005.png", "page_index": 78, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:27:56+07:00" }, "raw_text": "PRAM (1). Parallel Random Access Machine (introduced by Fortune and Wyllie, 1978). [Figure: a control unit; processors P_1 ... P_n, each with private memory, connected through an interconnection network to a global memory.]" },
{ "page_index": 79, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_006.png", "page_index": 79, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:00+07:00" }, "raw_text": "PRAM (2). A control unit. An unbounded set of processors, each with its own private memory and a unique index. Input stored in global memory or a single active processing element. Step: (1) read a value from a single private/global memory location; (2) perform a RAM operation; (3) write into a single private/global memory location. During a computation step a processor may activate another processor. All active, enabled processors must execute the same instruction (albeit on different memory locations)???
Computation terminates when the last processor halts." },
{ "page_index": 80, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_007.png", "page_index": 80, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:04+07:00" }, "raw_text": "PRAM (3). PRAM composed of: P processors, each with its own unmodifiable program; a single shared memory composed of a sequence of words, each capable of containing an arbitrary integer; a read-only input tape; a write-only output tape. The PRAM model is a synchronous (SIMD) / asynchronous (MIMD), shared address space parallel computer. Processors share a common clock but may execute different instructions in each cycle." },
{ "page_index": 81, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_008.png", "page_index": 81, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:07+07:00" }, "raw_text": "PRAM (4). Definition: the cost of a PRAM computation is the product of the parallel time complexity and the number of processors used. Ex: a PRAM algorithm that has time complexity O(log p) using p processors has cost O(p log p)." },
{ "page_index": 82, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_009.png", "page_index": 82, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:11+07:00" }, "raw_text": "Time Complexity Problem. The time complexity of a PRAM algorithm is often expressed in big-O notation. Machine size n is usually small in existing parallel computers. Ex: three PRAM algorithms A, B and C have time complexities of 7n, (n log n)/4, and n log log n. Big-O notation: A(O(n)) < C(O(n log log n)) < B(O(n log n)). Machines with no more than 1024 processors: log n <= log 1024 = 10 and log log n <= log log 1024 < 4, and thus B < C (worked out below). Exclusive Read (ER): no two processors can simultaneously read the same memory location. Exclusive Write (EW): no two processors can simultaneously write to the same memory location. Concurrent Read (CR): processors can simultaneously read the same memory location. Concurrent Write (CW): processors can simultaneously write to the same memory location, using some conflict resolution scheme." },
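Working out the slide's comparison at the stated machine bound (n = 1024, logarithms base 2):

```latex
\[
\begin{aligned}
A &= 7n, \\
B &= \tfrac{n \log n}{4} \le \tfrac{10\,n}{4} = 2.5\,n
   \quad\text{since } \log 1024 = 10, \\
C &= n \log\log n \approx 3.32\,n
   \quad\text{since } \log\log 1024 = \log 10 \approx 3.32 < 4,
\end{aligned}
\qquad\Rightarrow\qquad B < C < A \ \text{ for } n \le 1024 .
\]
```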
HPC Lab - CSE - HCMUT" }, { "page_index": 84, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_011.png", "page_index": 84, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:20+07:00" }, "raw_text": "Conflicts Resolution Schemes (2 BK TP.HCM Common/ldentical CRCW - All processors writing to the same memory location must be writing the same value. - The software must ensure that different values are not attempted to be written. Arbitrary CRCW Different values may be written to the same memory location, and an arbitrary one succeeds. Priority CRCW - An index is associated with the processors and when more than one processor write occurs, the lowest-numbered processor succeeds. - The hardware must resolve any conflicts HPC Lab - CSE - HCMUT" }, { "page_index": 85, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_012.png", "page_index": 85, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:23+07:00" }, "raw_text": "PRAM Algorithm BK TP.HCM Begin with a single active processor active Two phases: - A sufficient number of processors are activated These activated processors perform the computation in parallel [log p] activation steps: p processors to become active The number of active executing a single instruction HPC Lab - CSE - HCMUT" }, { "page_index": 86, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_013.png", "page_index": 86, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:27+07:00" }, "raw_text": "Parallel Reduction (1) BK TP.HCM 4 3 8 2 9 0 5 6 3 7 10 10 5 17 15 32 9 41 HPC Lab - CSE - HCMUT" }, { "page_index": 87, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_014.png", "page_index": 87, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:30+07:00" }, "raw_text": "Parallel Reduction (2) BK TP.HCM (EREW PRAM Algorithm in Figure2-7, page 32,book [1]) Ex: SUM(EREW) Initial condition: List of n 1 elements stored in A[0..(n-1) Final condition: Sum of elements stored in A[0] Global variables: n, A[0..(n-1)], j begin spawn (Po, P1,..., PLn/2 J -1) for all P; where 0 = i=n/2-1 do for j<- 0 to[log n1-1 do if i modulo 2j = 0 and 2i+2l< n the A[2i]< A[2i]+ A[2i+2] endif endfor endfor end HPC Lab - CSE - HCMUT" }, { "page_index": 88, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_015.png", "page_index": 88, "language": "en", 
"ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:34+07:00" }, "raw_text": "Broadcasting j on a PRAM BK TP.HCM \"Broadcast\" can be done on CREW PRAM in O(1) steps: - Broadcaster sends value to shared memory - Processors read from shared memory Requires logP steps on EREW PRAM M S P P P P HPC Lab - CSE - HCMUT" }, { "page_index": 89, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_016.png", "page_index": 89, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:37+07:00" }, "raw_text": "BSP - Bulk Synchronous Parallel BK TP.HCM BSP Model - Proposed by Leslie Valiant of Harvard University - Developed by W.F.McColl of Oxford University Node (w) Node Node P M P M P M Barrier (l) Communication Network (g) HPC Lab - CSE - HCMUT" }, { "page_index": 90, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_017.png", "page_index": 90, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:40+07:00" }, "raw_text": "BSP Model BK TP.HCM A set of n nodes (processor/memory pairs) Communication Network - Point-to-point, message passing (or shared variable) Barrier synchronizing facility - All or subset Distributed memory architecture HPC Lab - CSE - HCMUT" }, { "page_index": 91, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_018.png", "page_index": 91, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:43+07:00" }, "raw_text": "BSP F Programs BK TP.HCM A BSP program: - n processes, each residing on a node - Executing a strict sequence of supersteps - In each superstep, a process executes: > Computation operations: w cycles > Communication: gh cycles > Barrier synchronization: /cycles HPC Lab - CSE - HCMUT" }, { "page_index": 92, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_019.png", "page_index": 92, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:46+07:00" }, "raw_text": "A Figure of BSP F Programs BK TP.HCM P1 P2 P3 P4 Superstep 1 Computation Communication Barrier Superstep 2 Computation Communication Barrier HPC Lab - CSE - HCMUT" }, { "page_index": 93, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_020.png", "page_index": 93, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:51+07:00" }, "raw_text": "Three e Parameters BK TP.HCM The basic time 
{ "page_index": 93, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_020.png", "page_index": 93, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:51+07:00" }, "raw_text": "Three Parameters. The basic time unit is a cycle (or time step). w parameter: maximum computation time within each superstep; a computation operation takes at most w cycles. g parameter: number of cycles for communication of a unit message when all processors are involved in communication; relates to network bandwidth: g = (total number of local operations performed by all processors in one second) / (total number of words delivered by the communication network in one second). h relation: the number of incoming or outgoing messages for a superstep; a communication operation takes gh cycles. l parameter: barrier synchronization takes l cycles." },
{ "page_index": 94, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_021.png", "page_index": 94, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:55+07:00" }, "raw_text": "Time Complexity of BSP Algorithms. Execution time of a superstep: sequencing the computation, the communication, and the synchronization operations: w + gh + l; overlapping the computation, the communication, and the synchronization operations: max{w, gh, l} (a worked example follows below)." },
{ "page_index": 95, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_022.png", "page_index": 95, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:28:59+07:00" }, "raw_text": "Phase Parallel. Proposed by Kai Hwang & Zhiwei Xu. Similar to BSP: a parallel program is a sequence of phases; the next phase cannot begin until all operations in the current phase have finished. Three types of phases: parallelism phase (the overhead work involved in process management, such as process creation and grouping for parallel processing); computation phase (local computation, once data are available); interaction phase (communication, synchronization or aggregation, e.g. reduction and scan). Different computation phases may execute different workloads at different speeds." },
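A worked instance of the BSP superstep cost from the preceding slide. The parameter values are assumed purely for illustration; only the two formulas come from the slides.

```latex
\[
T_{\text{superstep}} = w + g\,h + l
\]
With assumed values $w = 1000$, $g = 4$, $h = 25$, $l = 50$ cycles:
\[
T_{\text{sequenced}} = 1000 + 4 \cdot 25 + 50 = 1150 \text{ cycles},
\qquad
T_{\text{overlapped}} = \max\{1000,\ 100,\ 50\} = 1000 \text{ cycles}.
\]
```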
HPC Lab - CSE - HCMUT" }, { "page_index": 96, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_2_b/slide_023.png", "page_index": 96, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:01+07:00" }, "raw_text": "Ref BK TP.HCM BSP: http://www.computingreviews.com/hottopic/ hottopic_essay.cfm?htname=BSP HPC Lab - CSE - HCMU'l" }, { "page_index": 97, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_001.png", "page_index": 97, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:04+07:00" }, "raw_text": "OpenMP Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology" }, { "page_index": 98, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_002.png", "page_index": 98, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:06+07:00" }, "raw_text": "Process: single & multithreaded BK TP.HCM Single-threaded Process Multiplethreaded Process Threads of Execution Multiple instruction stream Single instruction stream Common Address Space HPC Lab - CSE - HCMUT" }, { "page_index": 99, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_003.png", "page_index": 99, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:09+07:00" }, "raw_text": "Process Model BK TP.HCM STACK STACK Shared memory segments pipes, open files or mmap'd DATA files DATA TEXT TEXT Shared Memory maintained by kernel processes processes HPC Lab - CSE - HCMUT" }, { "page_index": 100, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_004.png", "page_index": 100, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:13+07:00" }, "raw_text": "Threaded Process Model BK TP.HCM Shared memory Thread Thread STACK STACK Process Thread Thread DATA DATA Thread Thread TEXT TEXT HPC Lab - CSE - HCMUT" }, { "page_index": 101, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_005.png", "page_index": 101, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:16+07:00" }, "raw_text": "What are Threads BK TP.HCM Thread is a piece of code that can execute in 
{ "page_index": 101, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_005.png", "page_index": 101, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:16+07:00" }, "raw_text": "What are Threads? A thread is a piece of code that can execute in concurrence with other threads. It is a scheduling entity on a processor. [Figure: hardware context (registers, status word, program counter) plus software context: local state and global/shared state.]" },
{ "page_index": 102, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_006.png", "page_index": 102, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:23+07:00" }, "raw_text": "Levels of Parallelism. [Figure: code granularity from large grain (task level: programs, tasks i-1, i, i+1), through medium grain (control level: functions func1, func2, func3 as threads), fine grain (data level: loops over a[0], a[1], a[2], b[0], b[1], b[2]), down to very fine grain (multiple instruction issue, with hardware).]" },
{ "page_index": 103, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_007.png", "page_index": 103, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:26+07:00" }, "raw_text": "Thread Example. void *func() { /* define local data */ ... /* function code */ thr_exit(exit_value); } main() { thread_t tid; int exit_value; ... thread_create(0, 0, func, NULL, &tid); ... thread_join(tid, 0, &exit_value); } (a runnable Pthreads version follows below)" },
{ "page_index": 104, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_008.png", "page_index": 104, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:29+07:00" }, "raw_text": "Pthread problem. Pthread is too tedious: explicit thread management is often unnecessary. Consider the matrix multiply example: we have a sequential code and we know which loop can be executed in parallel; the program conversion is quite mechanical. We should just say that the loop is to be executed in parallel and let the compiler do the rest. OpenMP does exactly that!!!" },
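The thread example from the earlier slide, written with the real POSIX threads API (compile with gcc -pthread). func and its exit value are illustrative stand-ins for the slide's generic names.

```c
#include <pthread.h>
#include <stdio.h>

/* Worker thread: does its local work, then returns an exit value
   that the creating thread collects via pthread_join. */
static void *func(void *arg) {
    printf("worker running\n");
    return (void *)42;
}

int main(void) {
    pthread_t tid;
    void *exit_value;
    if (pthread_create(&tid, NULL, func, NULL) != 0)  /* spawn the worker */
        return 1;
    pthread_join(tid, &exit_value);                   /* wait for it */
    printf("worker returned %ld\n", (long)exit_value);
    return 0;
}
```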
HPC Lab - CSE - HCMUT" }, { "page_index": 105, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_009.png", "page_index": 105, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:33+07:00" }, "raw_text": "OpenMP BK TP.HCM de fact standard model for programming shared memory machines C/C++/Fortran + parallel directives + APls ( by #pragma in C/C++ by comments in Fortran many free/vendor compilers, including GCC HPC Lab - CSE - HCMUT" }, { "page_index": 106, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_010.png", "page_index": 106, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:37+07:00" }, "raw_text": "BK What is OpenMP? TP.HCM What does OpenMP stands for? Open specifications for Multi Processing via collaborative work between interested parties from the hardware and software industry, government and academia OpenMP is an Application Program Interface (API) that may 7 be used to explicitly direct multi-threaded, shared memory parallelism API components: Compiler Directives, Runtime Library Routines Environment Variables OpenMP is a directive-based method to invoke parallel computations on share-memory multiprocessors HPC Lab - CSE - HCMUT" }, { "page_index": 107, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_011.png", "page_index": 107, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:40+07:00" }, "raw_text": "What is OpenMP? BK TP.HCM OpenMP API is specified for C/C++ and Fortran OpenMP is not intrusive to the original serial code: instructions appear in comment statements for Fortran and pragmas for C/C++ OpenMP website: http://www.openmp.org Materials in this lecture are taken from various OpenMP tutorials in the website and other places HPC Lab - CSE - HCMUT" }, { "page_index": 108, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_012.png", "page_index": 108, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:43+07:00" }, "raw_text": "Why OpenMP? BK TP.HCM OpenMP is portable: supported by HP, IBM, Intel and others It is the de facto standard for writing shared memory programs - To become an ANSI standard? 
OpenMP can be implemented incrementally, one function or even one loop at a time: a nice way to get a parallel program from a sequential program." },
{ "page_index": 109, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_013.png", "page_index": 109, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:46+07:00" }, "raw_text": "OpenMP reference. Official home page: http://openmp.org/. Specification: https://www.openmp.org/specifications/. Version 5.1 (Nov 2020); version 5.0 (Nov 2018). Compiler: GCC, http://gcc.gnu.org/wiki/openmp; GCC 9 & 10 -> OpenMP spec 5.0." },
{ "page_index": 110, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_014.png", "page_index": 110, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:50+07:00" }, "raw_text": "OpenMP programs with GCC. Compile with -fopenmp: $ gcc -Wall -fopenmp program.c. Run the executable specifying the number of threads with the OMP_NUM_THREADS environment variable: $ OMP_NUM_THREADS=1 ./a.out  # use 1 thread; $ OMP_NUM_THREADS=4 ./a.out  # use 4 threads. See 2.6.1 in OpenMP 5.1, 'Determining the Number of Threads for a parallel Region', for other ways to control the number of threads." },
{ "page_index": 111, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_015.png", "page_index": 111, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:29:56+07:00" }, "raw_text": "OpenMP execution model. [Figure: a master thread forks a team of threads at each parallel region and joins them afterwards.] OpenMP uses the fork-join model of parallel execution. All OpenMP programs begin with a single master thread. The master thread executes sequentially until a parallel region is encountered, when it creates a team of parallel threads (FORK). When the team threads complete the parallel region, they synchronize and terminate, leaving only the master thread, which executes sequentially (JOIN)." },
{ "page_index": 112, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_016.png", "page_index": 112, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:30:01+07:00" }, "raw_text": "OpenMP general code structure. #include <omp.h> main() { int var1, var2, var3; /* serial code */ ... /* beginning of parallel section: fork a team of threads, specify variable scoping */ #pragma omp parallel private(var1, var2) shared(var3) { /* parallel section executed by all threads */ ... /* all threads join master thread and disband */ } /* resume serial code */ ... }" },
{ "page_index": 113, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_017.png", "page_index": 113, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:30:05+07:00" }, "raw_text": "OpenMP directives. Format: #pragma omp directive-name [clause, ...] newline (a directive continues onto multiple lines with a backslash). Example: #pragma omp parallel default(shared) private(beta, pi). The scope of a directive is one block of statements { ... }." },
{ "page_index": 114, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_018.png", "page_index": 114, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:30:08+07:00" }, "raw_text": "#pragma omp parallel to launch a team of threads, then #pragma omp for to distribute iterations to the threads. Note: all OpenMP pragmas have the common format #pragma omp ... Example: #include <stdio.h> int main() { printf(\"hello\n\"); #pragma omp parallel printf(\"world\n\"); return 0; } $ OMP_NUM_THREADS=1 ./a.out prints hello world; $ OMP_NUM_THREADS=4 ./a.out prints hello followed by world four times." },
{ "page_index": 117, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_021.png", "page_index": 117, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:30:19+07:00" }, "raw_text": "What does parallel do? You may assume an OpenMP thread = an OS-supported thread (e.g. Pthread). That is, if you write this program: int main() { #pragma omp parallel worker(); } and run it as follows: $ OMP_NUM_THREADS=20 ./a.out, you will get 20 OS-level threads, each doing worker()." },
{ "page_index": 118, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_022.png", "page_index": 118, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:30:23+07:00" }, "raw_text": "How to distribute work among threads? #pragma omp parallel creates threads, all executing the same statement. It's not a means to parallelize work, but just a means to create a number of similar threads (SPMD). So how to distribute (or partition) work among them? (1) Do it yourself, as sketched below. (2) Use work-sharing constructs." },
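A minimal sketch of option (1), "do it yourself": every thread runs the same block (SPMD) and derives its own slice of the iteration space from its thread id instead of using a work-sharing construct. The array a and the size N are hypothetical.

```c
#include <omp.h>
#include <stdio.h>

#define N 100  /* hypothetical problem size */

int main(void) {
    static double a[N];
    #pragma omp parallel        /* all threads execute this block */
    {
        int t  = omp_get_thread_num();
        int nt = omp_get_num_threads();
        int lo = (int)((long)N * t / nt);        /* this thread's slice */
        int hi = (int)((long)N * (t + 1) / nt);
        for (int i = lo; i < hi; i++)
            a[i] = 2.0 * i;                      /* disjoint slices: no race */
    }
    printf("%g %g\n", a[0], a[N - 1]);
    return 0;
}
```

The slices are disjoint and cover 0..N-1, which is exactly what schedule(static) on a work-sharing for loop would compute for you.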
{ "page_index": 119, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_023.png", "page_index": 119, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:30:27+07:00" }, "raw_text": "Parallel region construct (1). A block of code that will be executed by multiple threads: #pragma omp parallel [clause ...] (implied barrier). Clauses: if (expression), private (list), shared (list), default (shared | none), reduction (operator: list), firstprivate (list), lastprivate (list). if (expression): only run in parallel if the expression evaluates to true. private(list): everything private and local (no relation with variables outside the block). shared(list): data accessed by all threads. default (none | shared)." },
{ "page_index": 120, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_024.png", "page_index": 120, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:30:31+07:00" }, "raw_text": "Parallel region construct (2). default (none | shared). The default(none) clause forces a programmer to explicitly specify the data-sharing attributes of all variables, thus making it obvious which variables are referenced and what their data-sharing attribute is, increasing readability and possibly making errors easier to spot. int n = 10; std::vector<int> vector(n); int a = 10; #pragma omp parallel for default(none) shared(n, vector, a) for (int i = 0; i < n; i++) { ... }" },
{ "page_index": 121, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_025.png", "page_index": 121, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:30:37+07:00" }, "raw_text": "Parallel region construct (3). default (none | shared). The default(shared) clause sets the data-sharing attributes of all variables in the construct to shared. int a, b, c, n; #pragma omp parallel for default(shared) for (int i = 0; i < n; i++) { /* a, b, c, n are shared variables */ } int a, b, c, n; #pragma omp parallel for default(shared) private(a, b) for (int i = 0; i < n; i++) { /* a and b are private variables; c and n are shared variables */ }" },
{ "page_index": 122, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_026.png", "page_index": 122, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:30:41+07:00" }, "raw_text": "Parallel region construct (4). The reduction clause: sum = 0.0; #pragma omp parallel for default(none) shared(n, x) private(i) reduction(+:sum) for (i = 0; i < n; i++) ... (a complete example follows below)" },
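One plausible completion of the truncated reduction example on the slide above: each thread accumulates a private copy of sum, and OpenMP combines the copies with + when the loop ends. The contents of x and the size N are assumed for illustration.

```c
#include <stdio.h>

#define N 1000  /* hypothetical size */

int main(void) {
    double x[N], sum = 0.0;
    int i, n = N;
    for (i = 0; i < n; i++)
        x[i] = 1.0;
    /* reduction(+:sum): per-thread partial sums, combined at the barrier */
    #pragma omp parallel for default(none) shared(n, x) private(i) reduction(+:sum)
    for (i = 0; i < n; i++)
        sum = sum + x[i];
    printf("sum = %g\n", sum);   /* prints 1000 */
    return 0;
}
```

Without the reduction clause, sum would be a shared variable updated by all threads at once, which is exactly the race condition the pi example later in this chapter demonstrates.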
HCMUT" }, { "page_index": 138, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_042.png", "page_index": 138, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:31:48+07:00" }, "raw_text": "Scheduling BK TP.HCM Schedule clause in work-sharing for loop determines how iterations are divided among threads There are three alternatives (static, dynamic, and guidea) HPC Lab - CSE - HCMUT" }, { "page_index": 139, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_043.png", "page_index": 139, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:31:53+07:00" }, "raw_text": "static, dynamic, and guided BK TP.HCM #pragma omp for schedule(static) schedule(static [,chunk]) 0 1 2 3 predictable round-robin #pragma omp for schedule(static,3) schedule(dynamic [,chunk]) : each thread repeats fetching chunk iterations #pragma omp for schedule(dynamic) schedule(guided [,chunk): threads grab many iterations in #pragma omp for schedule(dynamic,2) early stages; gradually reduce iterations to fetch at a time chunk specifies the minimum #pragma omp for schedule(guided) granularity (iteration counts) #pragma omp for schedule(guided,2) HPC Lab - CSE - HCMUT" }, { "page_index": 140, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_044.png", "page_index": 140, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:31:57+07:00" }, "raw_text": "Performance with scheduling BK TP.HCM We can use schedule clause to specify how iterations of a loop should be allocated to threads Static schedule: all iterations allocated to threads before any iterations executed Dynamic schedule: only some iterations allocated to threads at beginning of loop's execution. 
Remaining iterations are allocated to threads that complete their assigned iterations." },
{ "page_index": 141, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_045.png", "page_index": 141, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:31:59+07:00" }, "raw_text": "Static & Dynamic. Static scheduling: low overhead; may exhibit high workload imbalance. Dynamic scheduling: higher overhead; can reduce workload imbalance." },
{ "page_index": 142, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_046.png", "page_index": 142, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:32:02+07:00" }, "raw_text": "Chunks. A chunk is a contiguous range of iterations. Increasing chunk size reduces overhead and may increase the cache hit rate. Decreasing chunk size allows finer balancing of workloads." },
{ "page_index": 143, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_047.png", "page_index": 143, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:32:05+07:00" }, "raw_text": "Other scheduling options and notes. schedule(runtime) determines the schedule at run time, e.g. $ OMP_SCHEDULE=dynamic,2 ./a.out. schedule(auto), or no schedule clause, chooses an implementation-dependent default." },
{ "page_index": 144, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_048.png", "page_index": 144, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:32:09+07:00" }, "raw_text": "Parallelizing loop nests by collapse. #pragma omp for collapse(2) for (i = 0; i < n; i++) for (j = 0; j < n; j++) S will partition the n^2 iterations of the doubly-nested loop. The schedule clause applies to the nested loops as if the nest were an equivalent flat loop. Restriction: the loop must be 'perfectly nested' (the iteration space must be rectangular and there is no intervening statement between different levels of the nest). A sketch follows below." },
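A minimal sketch combining the schedule and collapse clauses from the slides above: collapse(2) flattens the n*n iteration space, and dynamic scheduling hands out chunks of 4 flattened iterations to whichever thread is free. The array a and the size N are assumptions for illustration.

```c
#include <stdio.h>

#define N 64  /* hypothetical size */

int main(void) {
    static double a[N][N];
    int i, j;
    /* 64*64 = 4096 flattened iterations, dealt out 4 at a time */
    #pragma omp parallel for collapse(2) schedule(dynamic, 4)
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)     /* perfectly nested: nothing between the loops */
            a[i][j] = i * j;
    printf("%g\n", a[N - 1][N - 1]);  /* prints 3969 */
    return 0;
}
```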
{ "page_index": 145, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_049.png", "page_index": 145, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:32:13+07:00" }, "raw_text": "Task parallelism in OpenMP. OpenMP's initial focus was simple parallel loops; since 3.0, it supports task parallelism. But why is it necessary? Aren't parallel and for all we need?" },
{ "page_index": 146, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_050.png", "page_index": 146, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:32:17+07:00" }, "raw_text": "Limitation of parallel for. What if you have a parallel loop inside another? What about parallel recursions? qs() { if (...) { ... } else { qs(); qs(); } } Perhaps inside a separate function: main() { for (...) g(); } g() { for (...) ... }" },
{ "page_index": 147, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_051.png", "page_index": 147, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:32:21+07:00" }, "raw_text": "parallel for can't handle nested parallelism. OpenMP generally ignores a nested parallel pragma when enough threads have been created by the outer parallel pragma, for good reasons. The fundamental limitation is its simplistic work-sharing mechanism (#pragma omp parallel plus #pragma omp for over a loop). Tasks address these issues (a sketch follows below). A race condition occurs when two threads can access (read or write) a data variable simultaneously, and at least one of the two accesses is a write." },
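A minimal sketch of the task construct handling the recursive pattern that the qs() example points at. fib is a stand-in recursion, not from the slides; the structure (task, taskwait, a single seeding thread) is the standard OpenMP tasking idiom.

```c
#include <stdio.h>

/* Each call spawns one child task for fib(n-1) while the parent computes
   fib(n-2) itself; taskwait joins the child before combining results. */
static long fib(int n) {
    long x, y;
    if (n < 2) return n;
    #pragma omp task shared(x)
    x = fib(n - 1);                 /* child task, may run on another thread */
    y = fib(n - 2);                 /* parent keeps working meanwhile */
    #pragma omp taskwait            /* wait for the child task */
    return x + y;
}

int main(void) {
    long r;
    #pragma omp parallel
    #pragma omp single              /* one thread seeds the task tree */
    r = fib(20);
    printf("fib(20) = %ld\n", r);   /* prints 6765 */
    return 0;
}
```

Unlike nested parallel regions, the recursive tasks all feed one thread pool, so the runtime can balance them regardless of how deep the recursion goes.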
"timestamp": "2025-11-01T08:33:35+07:00" }, "raw_text": "Critical section BK TP.HCM In concurrent #pragma omp critical [name] programming S concurrent accesses to shared Critical section (s) : a portion of code that only resources can lead thread at a time may execute to unexpected or A thread waits at the beginning of a critical section erroneous until no other thread in the team is executing a behavior, so parts of the program critical section having the same name where the shared All unnamed critical sections are considered to resource is have the same unspecified name accessed need to be protected in ways that avoid the concurrent access HPC Lab - CSE - HCMUT" }, { "page_index": 164, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_068.png", "page_index": 164, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:33:39+07:00" }, "raw_text": "Computing Pi by method of BK TP.HCM Numerical Integration Mathematically,we know: 1 4.0 dx= (1+x2) 0 12x+1)/0=x) And this can be approximated 2.0 as a sum of the area of rectangles: N E F(x;)x i=1 Where each rectangle has a width of x anda height of Fxatthe 0.0 x middle of interval i. HPC Lab - CSE - HCMUT" }, { "page_index": 165, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_069.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_069.png", "page_index": 165, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:33:43+07:00" }, "raw_text": "Serial code BK TP.HCM static long num_steps = 100000; double step; void main ( 1 int i; double x, pi, sum = 0.0; step = 1.0 / (double) num_steps; for (i = 0; i <= num_steps; i++ 1 x = (i + 0.5) * step; sum = sum + 4.0 7 (1.0 + x*x) } pi = step * sum; HPC Lab - CSE - HCMUT" }, { "page_index": 166, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_070.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_070.png", "page_index": 166, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:33:48+07:00" }, "raw_text": "Parallel code (1) BK TP.HCM sum is a shared variable #include #define NUM THREADS 16 Race condition: one static long num_steps = 100000; process may \"race ahead' double step; of another and not see its void main ) change to shared variable 1 int i; sum double x, pi, sum = 0.0; sum=10 step = 1.0/ (double) num_steps; Omp set num threads(NUM THREADS); #pragma omp parallel for private(x) for (i = 0; i <= num_steps; i++) x = (i + 0.5) * step; +3 sum=13 sum = sum + 4.0 7(1.0 + x*x) } pi = step * sum; W } +2 sum=12 HPC Lab - CSE - HCMUT" }, { "page_index": 167, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_071.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_071.png", "page_index": 167, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:33:54+07:00" }, 
"raw_text": "Parallel code (2) BK TP.HCM Update to sum inside a #include #define NUM THREADS l6 critical section static long num_steps = 100000; Only one thread at a time double step; may execute the statement: void main ( i.e., it is sequential code int i; double x, pi, sum = 0.0 sum=10 step = 1.07 (double) num_steps, omp_set_num_threads(NUM_THREADS); Waiting #pragma omp parallel for private(x) for (i = 0; i <= num_steps; i++) 1 x = (i + 0.5) * step; +3 sum=13 #pragma omp critical sum = sum + 4.0 7 (1.0 + x*x): } W pi = step * sum; +2 sum=15 What is a better solution? HPC Lab - CSE - HCMUT" }, { "page_index": 168, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_072.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_072.png", "page_index": 168, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:33:59+07:00" }, "raw_text": "Parallel code (3): Reduction BK TP.HCM #include #define NUM THREADS 6 16 static long num_steps = 100000; static long num_steps = 100000; double step; double step; void main void main 1 int i; int i; double x, pi, sum = 0.0; double x, pi, sum = 0.0; step = 1.0 / (double) num_steps, step = 1.0 / (double) num_steps; omp_set_num_threads(NUM_THREADS); #pragma omp parallel for reduction(+:sum) private(x) for (i = 0; i<= num_steps; i++) for (i = 0; i<= num_steps; i++) 1 x = (i + 0.5) * step; x = (i + 0.5) * step; sum = sum + 4.0 7 (1.0 + x*x) : sum = sum + 4.0 / (1.0 + x*x) } } pi = step * sum; pi = step * sum; } Serial code Parallel code HPC Lab - CSE - HCMUT" }, { "page_index": 169, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_073.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_073.png", "page_index": 169, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:04+07:00" }, "raw_text": "OpenMP discussion BK TP.HCM Exposing architecture features (performance) : Not much, similar to the pthread approach Assumption: dividing job into threads = improved performance How valid is this assumption in reality? > Overheads, contentions, synchronizations, etc This is one weak point for OpenMP: the performance of an OpenMP program is somewhat hard to understand. HPC Lab - CSE - HCMUT" }, { "page_index": 170, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_074.png", "page_index": 170, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:07+07:00" }, "raw_text": "OpenMP final thoughts BK TP.HCM Main issues with OpenMP: performance o Is there any obvious way to solve this? > Exposing more architecture features? Is the performance issue more related to the fundamental way that we write parallel program? 
{ "page_index": 170, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_3/slide_074.png", "page_index": 170, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:07+07:00" }, "raw_text": "OpenMP final thoughts BK TP.HCM Main issue with OpenMP: performance. Is there any obvious way to solve this? > Exposing more architecture features? Is the performance issue more related to the fundamental way that we write parallel programs? > OpenMP programs begin with sequential programs. > May need to find a new way to write efficient parallel programs in order to really solve the problem. HPC Lab - CSE - HCMUT" }, { "page_index": 171, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_001.png", "page_index": 171, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:10+07:00" }, "raw_text": "Slide 125 Chapter 4 Partitioning and Divide-and-Conquer Strategies Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 172, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_002.png", "page_index": 172, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:13+07:00" }, "raw_text": "Slide 126 Partitioning Partitioning simply divides the problem into parts. Divide and Conquer Characterized by dividing a problem into subproblems of the same form as the larger problem. Further divisions into still smaller sub-problems are usually done by recursion. Recursive divide and conquer is amenable to parallelization because separate processes can be used for the divided parts. Also, data is usually naturally localized. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 173, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_003.png", "page_index": 173, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:17+07:00" }, "raw_text": "Slide 127 Partitioning/Divide and Conquer Examples Many possibilities. Operations on sequences of numbers, such as simply adding them together. Several sorting algorithms can often be partitioned or constructed in a recursive fashion. Numerical integration. N-body problem. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved."
}, { "page_index": 174, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_004.png", "page_index": 174, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:20+07:00" }, "raw_text": "Slide 128 Bucket sort One \"bucket\" assiqned to hold numbers that fall within each region Numbers in each bucket sorted using a sequential sorting algorithm Unsorted numbers Buckets Sort contents of buckets Merge lists Sorted numbers Sequental sorting time complexity: O(nlog(n/m) Works well if the original numbers uniformly distributed across a known interval, say 0 to a -1. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. _2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 175, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_005.png", "page_index": 175, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:23+07:00" }, "raw_text": "Slide 129 Parallel yersion of bucket sort Simple approach Assign one processor for each bucket. Unsorted numbers p processors Buckets Sort contents of buckets Merge lists Sorted numbers Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 176, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_006.png", "page_index": 176, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:26+07:00" }, "raw_text": "Slide 130 Further Parallelization Partition seguence into m regions, one region for each processor. Each processor maintains p \"small\" buckets and separates the numbers in its region into its own small buckets. Small buckets then emptied into p final buckets for sortinq, which requires each processor to send one small bucket to each of the other processors (bucket i to processor i) Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 177, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_007.png", "page_index": 177, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:29+07:00" }, "raw_text": "Slide 131 Another parallel version of bucket sort n/m numbers Unsorted numbers p processors Small buckets Empty small buckets Large buckets Sort contents of buckets Merge lists Sorted numbers Introduces new message-passing operation - all-to-all broadcast. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 178, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_008.png", "page_index": 178, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:33+07:00" }, "raw_text": "Slide 132 \"all-to-all\" broadcast routine Sends data from each process to every other process See also next slide Corresponds to one Process 0 Process n -1 big bucket Send Receive buffer buffer Corresponds to set of small buckets Send buffer 0 n-1 0 n-1 0 n-1 0 n-1 Process 1 Process n -1 Process 0 Process n -2 Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc.All rights reserved." }, { "page_index": 179, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_009.png", "page_index": 179, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:37+07:00" }, "raw_text": "Slide 133 all-to-all\" routine actually transfers rows of an array to columns. Tranposes a matrix. \"All-to-all' P Ao 2 Ao.3 A0.0A1.0 42.0 A3.0 P D A0.2 A3,2 A0.3 A3 3 Effect of \"all-to-all\" on an array Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 180, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_010.png", "page_index": 180, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:40+07:00" }, "raw_text": "Slide 134 Numerical integration using rectangles Each region calculated using an approximation given by rectangles Aligning the rectangles: f(p f(q a b x p q Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 181, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_011.png", "page_index": 181, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:43+07:00" }, "raw_text": "Slide 135 Numerical integration using trapezoidal method f(p) f(q) a b & x p q May not be better! Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 182, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_012.png", "page_index": 182, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:46+07:00" }, "raw_text": "Slide 136 Adaptive Quadrature Solution adapts to shape of curve. Use three areas, A, B, and C. Computation terminated when largest of A and B sufficiently close to sum of remain two areas . f(x) C A B Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 183, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_013.png", "page_index": 183, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:48+07:00" }, "raw_text": "Slide 137 Adaptive quadrature with false termination. Some care might be needed in choosing when to terminate. f(x) A B x Might cause us to terminate early, as two large regions are the same (i.e., C = 0) Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 184, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_014.png", "page_index": 184, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:51+07:00" }, "raw_text": "Slide 138 Simple program to compute t Using C++ MPl routines Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 185, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_015.png", "page_index": 185, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:34:57+07:00" }, "raw_text": "Slide 139 **** pi_calc.cpp calculates value of pi and compares with actual value (to 25 digits) of pi to give error. Integrates function f(x)=4/(1+x2) : July 6, 2001 K. Spry CSCI3145 #include //include files #include #include \"mpi.h\" void printitO: //function prototypes int main(int argc, char *argv[]) double actual pi = 3.141592653589793238462643; //for comparison later int n, rank, num_proc, i; double temp_pi, calc_pi, int_size, part_sum, x; char response = 'y', resp1 = 'y'; MPI::Init(argc, argv); //initiate MPI num_proc = MPI::COMM_WORLD.Get_sizeQ; rank = MPI::COMM_WORLD.Get_rankO); if (rank == o) printitO: /* I am root node, print out welcome */ while (response == 'y') { if (resp1 == 'y') { if (rank == o) /*I am root node*/ cout <<\" <> n; } else n = 0; Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 186, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_016.png", "page_index": 186, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:00+07:00" }, "raw_text": "Slide 140 MPI: :COMM_WORLD.BcaSt(&n, 1, MPI: :INT, 0); //broadcast n if (n==0) break; //check for quit condition else { int_size = 1.o / (double) n;//calcs interval size part_sum = 0.0; for (i = rank + 1; i <= n; i += num_proc) { //calcs partial sums x = int_size * ((double)i - 0.5); part_sum += (4.0 / (1.0 + x*x)); temp_pi = int_size * part_sum; //collects all partial sums computes pi MPI::COMM_WORLD.Reduce(&temp_pi,&calc_pi, 1, MPI::DOUBLE, MPI::SUM, 0); Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 187, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_017.png", "page_index": 187, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:04+07:00" }, "raw_text": "Slide 141 if (rank == o) { /*I am server*/ cout << \"pi is approximately u << calc_pi < Error is \" << fabs(calc_pi - actual_pi << endl <<\" 1 endl; } } //end else if (rank == o) { /*I am root node*/ cout << \"nCompute with new intervals? (y/n)\" << endl; cin >> resp1; } }//end while MPI: :Finalize(): //terminate MPI return 0; } //end main Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 188, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_018.png", "page_index": 188, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:07+07:00" }, "raw_text": "Slide 142 //functions void printitQ cout << end1 < \"welcome to the pi calculator!\" << end1 \"Programmer : K. Spry\" << endl \"You set the number of divisions nfor estimating the integral: ntf(x)=4/(1+x2)\" << end1 *X**11 << endl; }//end printit Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 189, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_019.png", "page_index": 189, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:10+07:00" }, "raw_text": "Slide 143 Gravitational N-Body Problem Finding positions and movements of bodies in space subject to gravitational forces from other bodies, using Newtonian laws of physics. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 190, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_020.png", "page_index": 190, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:13+07:00" }, "raw_text": "Slide 144 Gravitational N-Body Problem Eguations Gravitational force between two bodies of masses ma and mo is: F 2 r G is the gravitational constant and r the distance between the bodies. 
Subject to forces, a body accelerates according to Newton's 2nd law: F = ma, where m is the mass of the body, F is the force it experiences, and a the resultant acceleration. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 191, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_021.png", "page_index": 191, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:17+07:00" }, "raw_text": "Slide 145 Details Let the time interval be \Delta t. For a body of mass m, the force is: F = \frac{m(v^{t+1} - v^t)}{\Delta t}. New velocity is: v^{t+1} = v^t + \frac{F \Delta t}{m}, where v^{t+1} is the velocity at time t + 1 and v^t is the velocity at time t. Over time interval \Delta t, the position changes by x^{t+1} = x^t + v^{t+1} \Delta t, where x^t is its position at time t. Once bodies move to new positions, the forces change, so the computation has to be repeated. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 192, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_022.png", "page_index": 192, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:22+07:00" }, "raw_text": "Slide 146 Sequential Code Overall gravitational N-body computation can be described by: for (t = 0; t < tmax; t++) { /* for each time period */ for (i = 0; i < N; i++) { /* for each body */ F = Force_routine(i); /* compute force on ith body */ v[i]new = v[i] + F * dt / m; /* compute new velocity */ x[i]new = x[i] + v[i]new * dt; /* and new position */ } for (i = 0; i < nmax; i++) { /* for each body */ x[i] = x[i]new; /* update velocity & position */ v[i] = v[i]new; } } Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." },
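The slide's loop made concrete for a 2-D body array (explicit Euler, matching the update equations above; G, dt, N, and the softening term eps are illustrative choices, not from the slides):

    #include <math.h>

    #define N 1024
    static double m[N], x[N], y[N], vx[N], vy[N];
    static const double G = 6.674e-11, dt = 0.01, eps = 1e-9;

    /* One time step: O(N^2) force evaluation, then velocity and position updates. */
    void nbody_step(void) {
        static double ax[N], ay[N];
        for (int i = 0; i < N; i++) {
            ax[i] = ay[i] = 0.0;
            for (int j = 0; j < N; j++) {
                if (j == i) continue;
                double dx = x[j] - x[i], dy = y[j] - y[i];
                double r2 = dx * dx + dy * dy + eps;   /* eps avoids division by zero */
                double r  = sqrt(r2);
                double a  = G * m[j] / r2;             /* F/m_i = G*m_j / r^2 */
                ax[i] += a * dx / r;                   /* resolve along the separation vector */
                ay[i] += a * dy / r;
            }
        }
        for (int i = 0; i < N; i++) {                  /* update only after all forces are known */
            vx[i] += ax[i] * dt;  vy[i] += ay[i] * dt; /* v^{t+1} = v^t + F dt / m */
            x[i]  += vx[i] * dt;  y[i]  += vy[i] * dt; /* x^{t+1} = x^t + v^{t+1} dt */
        }
    }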
}, { "page_index": 193, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_023.png", "page_index": 193, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:24+07:00" }, "raw_text": "Slide 147 Parallel Code The sequential algorithm is an O(N2) algorithm (for one iteration) as each of the N bodies is influenced by each of the other N -1 bodies Not feasible to use this direct algorithm for most interesting N-body problems where N is very large Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 194, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_024.png", "page_index": 194, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:27+07:00" }, "raw_text": "Slide 148 Time complexity can be reduced using observation that a cluster of distant bodies can be approximated as a single distant body of the total mass of the cluster sited at the center of mass of the cluster: Center of mass Distant cluster of bodies Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 195, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_025.png", "page_index": 195, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:30+07:00" }, "raw_text": "Slide 149 Barnes-Hut Algorithm Start with whole space in which one cube contains the bodies (or particles First, this cube is divided into eight subcubes If a subcube contains no particles, the subcube is deleted from further consideration If a subcube contains one body, this subcube retained If a subcube contains more than one body, it is recursively divided until every subcube contains one body. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 196, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_026.png", "page_index": 196, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:33+07:00" }, "raw_text": "Slide 150 Creates an octtree - a tree with up to eight edges from each node. The leaves represent cells each containing one body After the tree has been constructed. the total mass and center of mass of the subcube is stored at each node. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 197, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_027.png", "page_index": 197, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:36+07:00" }, "raw_text": "Slide 151 Force on each body obtained by traversing tree starting at root. stopping at a node when the clustering approximation can be used. e.g. when: r>d 0 where e is a constant typically 1.0 or less Constructing tree requires a time of O(nlogn), and so does computing all the forces, so that the overall time complexity of the method is O(nlogn) Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 198, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_028.png", "page_index": 198, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:39+07:00" }, "raw_text": "Slide 152 Recursive division of two-dimensional space Subdivision direction Particles Partial quadtree Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 199, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_029.png", "page_index": 199, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:42+07:00" }, "raw_text": "Slide 153 Orthogonal Recursive Bisection (For 2-dimensional area) First, a vertical line found that divides area into two areas each with egual number of bodies. 
For each area, a horizontal line is found that divides it into two areas each with an equal number of bodies. Repeated as required. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 200, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_a/slide_030.png", "page_index": 200, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:44+07:00" }, "raw_text": "Slide 154 Intentionally blank Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 201, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_001.png", "page_index": 201, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:46+07:00" }, "raw_text": "MPI Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology" }, { "page_index": 202, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_002.png", "page_index": 202, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:48+07:00" }, "raw_text": "Outline BK TP.HCM Communication modes MPI - Message Passing Interface Standard HPC Lab - CSE - HCMUT" }, { "page_index": 203, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_003.png", "page_index": 203, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:52+07:00" }, "raw_text": "TERMs (1) BK TP.HCM Blocking - if return from the procedure indicates the user is allowed to reuse resources specified in the call. Non-blocking - if the procedure may return before the operation completes, and before the user is allowed to reuse resources specified in the call. Collective - if all processes in a process group need to invoke the procedure. Message envelope - information used to distinguish messages and selectively receive them. HPC Lab - CSE - HCMUT" }, { "page_index": 204, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_004.png", "page_index": 204, "language": "en", "ocr_engine": "PaddleOCR 3.2",
"extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:57+07:00" }, "raw_text": "TERMS (2) BK TP.HCM Communicator The communication context for a communication operation - Messages are always received within the context they were sent - Messages sent in different contexts do not interfere - MPI COMM WORLD Process group - The communicator specifies the set of processes that share this communication context. - This process group is ordered and processes are identified by their rank within this group HPC Lab - CSE - HCMUT" }, { "page_index": 205, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_005.png", "page_index": 205, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:35:59+07:00" }, "raw_text": "MPI BK TP.HCM Environment 1 Point-to-point communication Collective communication Derived data type Group management HPC Lab - CSE - HCMUT" }, { "page_index": 206, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_006.png", "page_index": 206, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:03+07:00" }, "raw_text": "MPI BK TP.HCM Po P1 P2 P3 P4 Daemon Daemon Daemon Po P1 P2 P3 P4 HPC Lab - CSE - HCMUT" }, { "page_index": 207, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_007.png", "page_index": 207, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:06+07:00" }, "raw_text": "Environment BK TP.HCM MPI INIT MPI COMM 1 SlZE MPI COMM RANK o MP1 FINALIZE MPI ABORT - HPC Lab - CSE - HCMUT" }, { "page_index": 208, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_008.png", "page_index": 208, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:09+07:00" }, "raw_text": "MPl lnit BK TP.HCM Usage - int MPI_Init( int* argc_ptr * in */ char** a argv_ptr[]); * in */ Description - Initialize MPl - All MPI programs must call this routines once and only once before any other MPI routines HPC Lab - CSE - HCMUT" }, { "page_index": 209, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_009.png", "page_index": 209, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:13+07:00" }, "raw_text": "MPl Finalize BK TP.HCM Usage int MPl Finalize (void) Description - Terminates all MPI processing - Make sure this routine is the last MPl call - All pending communications involving a process have completed 
before the process calls MPI_FINALIZE HPC Lab - CSE - HCMUT" }, { "page_index": 210, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_010.png", "page_index": 210, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:16+07:00" }, "raw_text": "MPI_Comm_size BK TP.HCM Usage - int MPI_Comm_size( MPI_Comm comm, /* in */ int* size ); /* out */ Description - Returns the number of processes in the group associated with a communicator HPC Lab - CSE - HCMUT" }, { "page_index": 211, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_011.png", "page_index": 211, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:19+07:00" }, "raw_text": "MPI_Comm_rank BK TP.HCM Usage - int MPI_Comm_rank( MPI_Comm comm, /* in */ int* rank ); /* out */ Description - Returns the rank of the local process in the group associated with a communicator - The rank of the calling process is in the range 0 ... size - 1 HPC Lab - CSE - HCMUT" }, { "page_index": 212, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_012.png", "page_index": 212, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:22+07:00" }, "raw_text": "MPI_Abort BK TP.HCM Usage - int MPI_Abort( MPI_Comm comm, /* in */ int errorcode ); /* in */ Description - Forces all processes of an MPI job to terminate HPC Lab - CSE - HCMUT" }, { "page_index": 213, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_013.png", "page_index": 213, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:25+07:00" }, "raw_text": "Simple Program BK TP.HCM #include \"mpi.h\" int main( int argc, char* argv[] ) { int rank, nproc; MPI_Init( &argc, &argv ); MPI_Comm_size( MPI_COMM_WORLD, &nproc ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); /* write your code here */ MPI_Finalize(); } HPC Lab - CSE - HCMUT" }, { "page_index": 214, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_014.png", "page_index": 214, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:28+07:00" }, "raw_text": "Point-to-Point Communication BK TP.HCM MPI_SEND MPI_RECV MPI_ISEND MPI_IRECV MPI_WAIT MPI_GET_COUNT HPC Lab - CSE - HCMUT" }, { "page_index": 215, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_015.png", "metadata": { "doc_type": "slide", "course_id":
"CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_015.png", "page_index": 215, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:32+07:00" }, "raw_text": "Modes in MPI (1) Communication 1 BK TP.HCM Standard mode - It is up to MPI to decide whether outgoing messages will be buffered - Non-local operation - Buffered or synchronous? Buffered(asynchronous) mode - A send operation can be started whether or not a matching receive has been posted - It may complete before a matching receive is posted - Local operation HPC Lab - CSE - HCMUT" }, { "page_index": 216, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_016.png", "page_index": 216, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:36+07:00" }, "raw_text": "Modes in MPI (2) Communication I BK TP.HCM Synchronous mode - A send operation can be started whether or not a matching receive was posted - The send will complete successfully only if a matching receive was posted and the receive operation has started to receive the message - The completion of a synchronous send not only indicates that the send buffer can be reused but also indicates that the receiver has reached a certain point in its execution - Non-local operation HPC Lab - CSE - HCMUT" }, { "page_index": 217, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_017.png", "page_index": 217, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:40+07:00" }, "raw_text": "Modes in MPI (3) Communication I BK TP.HCM Ready mode 7 - A send operation may be started only if the matching receive is already posted - The completion of the send operation does not depend on the status of a matching receive and merely indicates the send buffer can be reused - EAGER LIMlT of SP system HPC Lab - CSE - HCMUT" }, { "page_index": 218, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_018.png", "page_index": 218, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:44+07:00" }, "raw_text": "MPI Send BK TP.HCM Usage int MPI Send( void* buf /* in*/ int count /* in */ MPI_Datatype datatype, /* in */ int dest. /* in */ int tag. 
{ "page_index": 218, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_018.png", "page_index": 218, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:44+07:00" }, "raw_text": "MPI_Send BK TP.HCM Usage - int MPI_Send( void* buf, /* in */ int count, /* in */ MPI_Datatype datatype, /* in */ int dest, /* in */ int tag, /* in */ MPI_Comm comm ); /* in */ Description - Performs a blocking standard mode send operation - The message can be received by either MPI_RECV or MPI_IRECV HPC Lab - CSE - HCMUT" }, { "page_index": 219, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_019.png", "page_index": 219, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:49+07:00" }, "raw_text": "MPI_Recv BK TP.HCM Usage - int MPI_Recv( void* buf, /* out */ int count, /* in */ MPI_Datatype datatype, /* in */ int source, /* in */ int tag, /* in */ MPI_Comm comm, /* in */ MPI_Status* status ); /* out */ Description - Performs a blocking receive operation - The message received must be less than or equal to the length of the receive buffer - MPI_RECV can receive a message sent by either MPI_SEND or MPI_ISEND HPC Lab - CSE - HCMUT" }, { "page_index": 220, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_020.png", "page_index": 220, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:54+07:00" }, "raw_text": "BK TP.HCM [Figure: blocking transfer through system buffers. Process 0, user mode: calls mpi_send(sendbuf, dest=1) and blocks; kernel mode: data is copied from sendbuf to sysbuf, after which sendbuf can be reused, and the data is sent from sysbuf to the destination. Process 1, user mode: calls mpi_recv(recvbuf, src=0) and blocks; kernel mode: data is received from the source into sysbuf, then copied from sysbuf to recvbuf; now recvbuf contains valid data.] HPC Lab - CSE - HCMUT" }, { "page_index": 221, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_021.png", "page_index": 221, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:36:57+07:00" }, "raw_text": "Sample Program for Blocking Operations (1) BK TP.HCM #include \"mpi.h\" int main( int argc, char* argv[] ) { int rank, nproc; int isbuf, irbuf; MPI_Init( &argc, &argv ); MPI_Comm_size( MPI_COMM_WORLD, &nproc ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); HPC Lab - CSE - HCMUT" }, { "page_index": 222, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_022.png", "page_index": 222, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:02+07:00" }, "raw_text": "Sample Program for Blocking Operations (2) BK TP.HCM if (rank == 0) { isbuf = 9; MPI_Send( &isbuf, 1, MPI_INTEGER, 1, TAG, MPI_COMM_WORLD ); } else if (rank == 1) { MPI_Recv( &irbuf, 1, MPI_INTEGER, 0, TAG, MPI_COMM_WORLD, &status ); printf( \"%d\\n\", irbuf ); } MPI_Finalize(); } HPC Lab - CSE - HCMUT" },
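Assembled into one compilable program, the two fragments look like this (TAG's value is arbitrary; note that from C the portable datatype for an int is MPI_INT, while the slides use the Fortran name MPI_INTEGER):

    #include <stdio.h>
    #include <mpi.h>
    #define TAG 1

    int main(int argc, char *argv[]) {
        int rank, nproc, isbuf, irbuf;
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {                 /* sender */
            isbuf = 9;
            MPI_Send(&isbuf, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD);
        } else if (rank == 1) {          /* receiver */
            MPI_Recv(&irbuf, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &status);
            printf("%d\n", irbuf);       /* prints 9 */
        }
        MPI_Finalize();
        return 0;
    }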
"/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_023.png", "page_index": 223, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:07+07:00" }, "raw_text": "MPl lsend BK TP.HCM Usage int MPl lsend( void* buf 7* in *7 int count /* in *7 MPI Datatype datatype /* in *7 int dest. /* in */ int tag, * in */ MPI Comm comm /* in */ MPl Request* request ); /* out */ Description - Performs a nonblocking standard mode send operation - The send buffer may not be modified until the request has been completed by MPl WAlT or MPl TEST - The message can be received by either MPI_RECV or MPI_IRECV. HPC Lab - CSE - HCMUT" }, { "page_index": 224, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_024.png", "page_index": 224, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:10+07:00" }, "raw_text": "MPI lrecv (1) BK TP.HCM Usage int MPI Irecv( void* buf * out */ int count /* in */ MPI_Datatype datatype, /* in */ int source /* in */ int tag. /* in */ MPI Comm comm /* in *7 MPI_Request* request ); /* out */ HPC Lab - CSE - HCMUT" }, { "page_index": 225, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_025.png", "page_index": 225, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:13+07:00" }, "raw_text": "MPl lrecv (2) BK TP.HCM Description - Performs a nonblocking receive operation - Do not access any part of the receive buffer until the receive is complete to the length of the receive buffer - MPI_IRECV can receive a message sent by either MPl SEND or MPl ISEND HPC Lab - CSE - HCMUT" }, { "page_index": 226, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_026.png", "page_index": 226, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:16+07:00" }, "raw_text": "MPl Wait BK TP.HCM Usage - int MPI Wait( MPI Request* request. 
/* inout */ MPI_Status* status ); /* out */ Description - Waits for a nonblocking operation to complete - Information on the completed operation is found in status - If wildcards were used by the receive for either the source or tag, the actual source and tag can be retrieved by status->MPI_SOURCE and status->MPI_TAG HPC Lab - CSE - HCMUT" }, { "page_index": 227, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_027.png", "page_index": 227, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:22+07:00" }, "raw_text": "BK TP.HCM [Figure: nonblocking transfer through system buffers. Process 0, user mode: calls mpi_isend(sendbuf, dest, req) and continues (not blocked); kernel mode: data is copied from sendbuf to sysbuf and sent to the destination; a later call to mpi_wait(req) blocks until sendbuf can be reused. Process 1, user mode: calls mpi_irecv(recvbuf, src, req) and continues (not blocked); kernel mode: data is received from the source into sysbuf and copied to recvbuf; a later call to mpi_wait(req) blocks until recvbuf contains valid data.] HPC Lab - CSE - HCMUT" }, { "page_index": 228, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_028.png", "page_index": 228, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:25+07:00" }, "raw_text": "MPI_Get_count BK TP.HCM Usage - int MPI_Get_count( MPI_Status* status, /* in */ MPI_Datatype datatype, /* in */ int* count ); /* out */ Description - Returns the number of elements in a message - The datatype argument and the argument provided by the call that set the status variable should match HPC Lab - CSE - HCMUT" }, { "page_index": 229, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_029.png", "page_index": 229, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:29+07:00" }, "raw_text": "Sample Program for Non-Blocking Operations (1) BK TP.HCM #include \"mpi.h\" int main( int argc, char* argv[] ) { int rank, nproc; int isbuf, irbuf, count;
MPI_Request request; MPI_Status status; MPI_Init( &argc, &argv ); MPI_Comm_size( MPI_COMM_WORLD, &nproc ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); if (rank == 0) { isbuf = 9; MPI_Isend( &isbuf, 1, MPI_INTEGER, 1, TAG, MPI_COMM_WORLD, &request ); HPC Lab - CSE - HCMUT" }, { "page_index": 230, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_030.png", "page_index": 230, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:33+07:00" }, "raw_text": "Sample Program for Non-Blocking Operations (2) BK TP.HCM } else if (rank == 1) { MPI_Irecv( &irbuf, 1, MPI_INTEGER, 0, TAG, MPI_COMM_WORLD, &request ); MPI_Wait( &request, &status ); MPI_Get_count( &status, MPI_INTEGER, &count ); printf( \"irbuf = %d source = %d tag = %d count = %d\\n\", irbuf, status.MPI_SOURCE, status.MPI_TAG, count ); } MPI_Finalize(); } HPC Lab - CSE - HCMUT" }, { "page_index": 231, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_031.png", "page_index": 231, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:36+07:00" }, "raw_text": "Collective Operations BK TP.HCM MPI_BCAST MPI_SCATTER MPI_SCATTERV MPI_GATHER MPI_GATHERV MPI_ALLGATHER MPI_ALLGATHERV MPI_ALLTOALL HPC Lab - CSE - HCMUT" },
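The same exchange, compilable, with the reason for nonblocking calls made explicit: work can be overlapped between posting and MPI_Wait (TAG and the placeholder comments are illustrative):

    #include <stdio.h>
    #include <mpi.h>
    #define TAG 1

    int main(int argc, char *argv[]) {
        int rank, nproc, isbuf = 9, irbuf = 0, count;
        MPI_Request request;
        MPI_Status status;
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            MPI_Isend(&isbuf, 1, MPI_INT, 1, TAG, MPI_COMM_WORLD, &request);
            /* ... useful work here, but isbuf must not be modified yet ... */
            MPI_Wait(&request, &status);          /* after this, isbuf may be reused */
        } else if (rank == 1) {
            MPI_Irecv(&irbuf, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &request);
            /* ... useful work here, but irbuf must not be read yet ... */
            MPI_Wait(&request, &status);
            MPI_Get_count(&status, MPI_INT, &count);
            printf("irbuf = %d source = %d tag = %d count = %d\n",
                   irbuf, status.MPI_SOURCE, status.MPI_TAG, count);
        }
        MPI_Finalize();
        return 0;
    }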
{ "page_index": 232, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_032.png", "page_index": 232, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:40+07:00" }, "raw_text": "MPI_Bcast (1) BK TP.HCM Usage - int MPI_Bcast( void* buffer, /* inout */ int count, /* in */ MPI_Datatype datatype, /* in */ int root, /* in */ MPI_Comm comm ); /* in */ Description - Broadcasts a message from root to all processes in communicator - The type signature of count, datatype on any process must be equal to the type signature of count, datatype at the root HPC Lab - CSE - HCMUT" }, { "page_index": 233, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_033.png", "page_index": 233, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:44+07:00" }, "raw_text": "MPI_Bcast (2) BK TP.HCM [Figure: in comm, with count = 5, the buffer contents 1 2 3 4 5 at rank 0 = root appear in the buffers of rank 1 and rank 2 after the broadcast.] HPC Lab - CSE - HCMUT" }, { "page_index": 234, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_034.png", "page_index": 234, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:48+07:00" }, "raw_text": "MPI_Scatter BK TP.HCM Usage - int MPI_Scatter( void* sendbuf, /* in */ int sendcount, /* in */ MPI_Datatype sendtype, /* in */ void* recvbuf, /* out */ int recvcount, /* in */ MPI_Datatype recvtype, /* in */ int root, /* in */ MPI_Comm comm ); /* in */ Description - Distributes individual messages from root to each process in communicator - Inverse operation to MPI_GATHER HPC Lab - CSE - HCMUT" }, { "page_index": 235, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_035.png", "page_index": 235, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:52+07:00" }, "raw_text": "Example of MPI_Scatter (1) BK TP.HCM #include \"mpi.h\" int main( int argc, char* argv[] ) { int i; int rank, nproc; int isend[3], irecv; MPI_Init( &argc, &argv ); MPI_Comm_size( MPI_COMM_WORLD, &nproc ); MPI_Comm_rank( MPI_COMM_WORLD, &rank ); HPC Lab - CSE - HCMUT" }, { "page_index": 236, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_4_b/slide_036.png", "page_index": 236, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:37:55+07:00" }, "raw_text": "Example of MPI_Scatter (2) BK TP.HCM if (rank == 0) { for (i = 0; i [...] if (process > 0) { recv(&accumulation, Pi-1); accumulation = accumulation + number; } if (process < n-1) send(&accumulation, Pi+1); The final result is in the last process. Instead of addition, other arithmetic operations could be done. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." },
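The raw text of the scatter example is cut off at the [...] above (the extraction jumps from page 236 into a Chapter 5 slide on pipelined addition), so the following is a hedged reconstruction of the pattern that part (1) sets up, not the slide's exact code:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, nproc, irecv, isend[64];        /* assumes nproc <= 64 */
        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nproc);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)                            /* root prepares one element per process */
            for (int i = 0; i < nproc; i++)
                isend[i] = i + 1;
        /* element i of isend goes to process i; every rank receives exactly one int */
        MPI_Scatter(isend, 1, MPI_INT, &irecv, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d received %d\n", rank, irecv);
        MPI_Finalize();
        return 0;
    }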
}, { "page_index": 286, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_017.png", "page_index": 286, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:13+07:00" }, "raw_text": "Slide 171 Pipelined addition numbers with a master process and ring configuration Master process Slaves Po d2d1d P. P n-1 Sum Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 287, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_018.png", "page_index": 287, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:18+07:00" }, "raw_text": "Slide 172 Sorting Numbers A parallel version of insertion sort. P. P. P2 numbers 1 4,3,1,2,5 2 4,3,1,2 2 3 4,3,1 4 4,3 3 5 4 Time (cycles) 2 1 C 6 3 - 7 2 8 3 9 10 Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. _2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 288, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_019.png", "page_index": 288, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:20+07:00" }, "raw_text": "Slide 173 Pipeline for sorting using insertion sort Po Smaller P1 P numbers Series of numbers Compare Xn-l ... XqXo Xmax Next largest Largest number number Type 2 pipeline computation Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 289, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_020.png", "page_index": 289, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:24+07:00" }, "raw_text": "Slide 174 The basic algorithm for process P; is recv(&number, P-1): if (number > x) { send(&x, Pi+1); x = number: } else send(&number, P+1); With n numbers, how many the ith process is to accept is known; it is given by n -i. How many to pass onward is also known; it is given by n - i - 1 since one of the numbers received is not passed onward. Hence, a simple loop could be used. 
{ "page_index": 290, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_021.png", "page_index": 290, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:27+07:00" }, "raw_text": "Slide 175 Insertion sort with results returned to the master process using a bidirectional line configuration. Figure: the master process is connected to a line of processes P0 ... Pn-1; the sorted sequence returns to the master over the same line. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 291, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_022.png", "page_index": 291, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:29+07:00" }, "raw_text": "Slide 176 Insertion sort with results returned. Figure (shown for n = 5, processes P0 ... P4): the sorting phase takes 2n - 1 time steps and returning the sorted numbers takes a further n steps. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 292, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_023.png", "page_index": 292, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:32+07:00" }, "raw_text": "Slide 177 Prime Number Generation Sieve of Eratosthenes A series of all integers is generated from 2. The first number, 2, is prime and kept. All multiples of this number are deleted, as they cannot be prime. The process is repeated with each remaining number. The algorithm removes nonprimes, leaving only primes (a sequential version is sketched below; the pipelined version follows on the next slides). Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." },
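Before the pipelined version on the following slides, a plain sequential sieve matching the Slide 177 description (an added sketch; the limit 50 is arbitrary):

    /* sieve.c - sequential Sieve of Eratosthenes */
    #include <stdio.h>

    int main(void)
    {
        enum { N = 50 };
        int composite[N + 1] = {0};

        for (int p = 2; p * p <= N; p++)          /* repeat with each remaining number */
            if (!composite[p])                    /* p survived all earlier passes: prime */
                for (int m = p * p; m <= N; m += p)
                    composite[m] = 1;             /* delete the multiples of p */

        for (int p = 2; p <= N; p++)
            if (!composite[p]) printf("%d ", p);  /* only primes remain */
        printf("\n");
        return 0;
    }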
}, { "page_index": 293, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_024.png", "page_index": 293, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:36+07:00" }, "raw_text": "Slide 178 Pipeline for Prime Number Generation Not multiples of 1st prime number Po P1 Series of numbers Xn-1 ...5 4 3 2 Compare 1st prime 2nd prime 3rd prime multiples number number number Type 2 pipeline computation Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 294, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_025.png", "page_index": 294, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:39+07:00" }, "raw_text": "Slide 179 The code for a process, P, could be based upon recv(&x, Pi-1); /* repeat following for each number */ recv(&number, P-1): if ((number % x) != 0) send(&number,iP1); Each process will not receive the same amount of numbers and the amount is not known beforehand. Use a \"terminator\" message, which is sent at the end of the sequence: recv(&x, P-1); for (i = o; i < n; i++) { recv(&number, P-1: if (number == terminator) break; if (number % x) != 0) send(&number,i1); Slides for Parallel Proqramming Techniques and Applications Usinq Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen. Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. _2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 295, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_026.png", "page_index": 295, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:42+07:00" }, "raw_text": "Slide 180 Solving a System of Linear Equations Upper-triangular form an-1,0Xo + an-1,1X1+ an-1,2X2 + an-1,n-1Xn-1 bn-1 = b2 a2.0Xo + a2.1X1 + a2.2X2 = b1 a1,oXo + a1,1X1 ao,oXo bo where the a's and b's are constants and the x's are unknowns to be found. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 296, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_027.png", "page_index": 296, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:45+07:00" }, "raw_text": "Slide 181 Back Substitution First, the unknown xo is found from the last equation; i.e. ao,0 Value obtained for xo substituted into next equation to obtain x1; i.e.. b1- a1.0xo a1,1 Values obtained for x, and xo substituted into next equation to obtain X2: 2-d2.0X0 -d2.1x1 a2,2 and so on until all the unknowns are found Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 297, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_028.png", "page_index": 297, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:48+07:00" }, "raw_text": "Slide 182 Pipeline Solution First pipeline stage computes xo and passes xo onto the second stage, which computes x1 from xo and passes both Xo and x1 onto the next stage, which computes x2 from xo and x1, and so on. P1 P X0 X0 X0 X1 X1 Compute xo Compute x1 X1 Compute x2 Compute X3 X2 X2 rX3 Type 3 pipeline computation Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 298, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_029.png", "page_index": 298, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:51+07:00" }, "raw_text": "Slide 183 The ith process (0 < i < n) receives the values Xo,X1,X2, ...,Xj-1 and computes x; from the equation: i - 1 b;- E ai,jj j = 0 i,i Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 299, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_030.png", "page_index": 299, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:54+07:00" }, "raw_text": "Slide 184 Sequential Code Given the constants aji and bk stored in arrays a[][] and b[], respectively, and the values for unknowns to be stored in an array. 
x[], the sequential code could be x[0] = b[0]/a[0][0]; /* computed separately */ for (i = 1; i < n; i++) { /* for remaining unknowns */ sum = 0; for (j = 0; j < i; j++) sum = sum + a[i][j]*x[j]; x[i] = (b[i] - sum)/a[i][i]; } (a runnable demo on a concrete system is sketched below) Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 300, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_031.png", "page_index": 300, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:41:57+07:00" }, "raw_text": "Slide 185 Parallel Code Pseudocode of process Pi (1 <= i < n) could be for (j = 0; j < i; j++) { recv(&x[j], Pi-1); send(&x[j], Pi+1); } sum = 0; for (j = 0; j < i; j++) sum = sum + a[i][j]*x[j]; x[i] = (b[i] - sum)/a[i][i]; send(&x[i], Pi+1); Now there are additional computations after receiving and resending the values. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 301, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_5/slide_032.png", "page_index": 301, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:00+07:00" }, "raw_text": "Slide 186 Pipeline processing using back substitution. Figure: processes P0 ... P5 over time; the first computed value is passed onward along the pipeline and the final computed value emerges from the last process. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved."
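The Slide 184 loop, made runnable on a concrete 3-unknown triangular system (an added demo; the coefficients are illustrative, not from the slides):

    /* backsub.c - back substitution on a small triangular system */
    #include <stdio.h>

    int main(void)
    {
        enum { n = 3 };
        /* a[i][j] is nonzero only for j <= i, the triangular form of Slide 180 */
        double a[n][n] = {{2, 0, 0}, {1, 4, 0}, {3, 2, 5}};
        double b[n]    = {4, 10, 27};
        double x[n], sum;
        int i, j;

        x[0] = b[0] / a[0][0];               /* computed separately */
        for (i = 1; i < n; i++) {            /* for remaining unknowns */
            sum = 0;
            for (j = 0; j < i; j++)
                sum = sum + a[i][j] * x[j];
            x[i] = (b[i] - sum) / a[i][i];
        }
        for (i = 0; i < n; i++)
            printf("x[%d] = %g\n", i, x[i]); /* expected: 2, 2, 3.4 */
        return 0;
    }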
}, { "page_index": 302, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_001.png", "page_index": 302, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:03+07:00" }, "raw_text": "Chapter 6 Parallel Computer Architectures Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology" }, { "page_index": 303, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_002.png", "page_index": 303, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:06+07:00" }, "raw_text": "BK Outline TP.HCM Flynn' s Taxonomy Classification of Parallel Computers Based on Architectures HPC Lab - CSE - HCMUT" }, { "page_index": 304, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_003.png", "page_index": 304, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:09+07:00" }, "raw_text": "Flynn's Taxonomy BK TP.HCM Based on notions of instruction and data streams - SisD (a Single Instruction stream, a Single Data stream ) - SiMD (Single Instruction stream, Multiple Data streams ) - MisD (Multiple Instruction streams, a Single Data stream) - MiMD (Multiple Instruction streams, Multiple Data stream Popularity - MIMD > SlMD > MISD HPC Lab - CSE - HCMUT" }, { "page_index": 305, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_004.png", "page_index": 305, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:12+07:00" }, "raw_text": "BK SISD TP.HCM SISD - Conventional sequential machines IS : Instruction Stream DS : Data Stream CU : Control Unit PU : Processing Unit MU : Memory Unit IS IS DS CU PU MU 1/0 HPC Lab - CSE - HCMUT" }, { "page_index": 306, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_005.png", "page_index": 306, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:16+07:00" }, "raw_text": "BK SIMD TP.HCM SIMD - Vector computers, processor arrays - Special purpose computations PE : Processing Element LM : Local Memory DS DS PE1 LM1 IS CU IS Data sets DS loaded from host DS PEn LMn Program loaded from host SIMD architecture with distributed memory HPC Lab - CSE - HCMUT" }, { "page_index": 307, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_006.png", "metadata": { "doc_type": "slide", 
"course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_006.png", "page_index": 307, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:19+07:00" }, "raw_text": "BK MISD TP.HCM MISD - Systolic arrays - Special purpose computations IS IS CU1 CU2 CUn Memory IS IS l IS Program DS DS DS Data) PU1 PU2 PUn 1/0 DS MISD architecture (the systolic array) HPC Lab - CSE - HCMUT" }, { "page_index": 308, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_007.png", "page_index": 308, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:24+07:00" }, "raw_text": "BK SIMD array TP.HCM Mesh Type SIMD array Control Unit Control Bus Processing Processing Processing Units Units Units T T Data Bus Interconnection Network(Local) HPC Lab - CSE - HCMUT" }, { "page_index": 309, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_008.png", "page_index": 309, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:29+07:00" }, "raw_text": "BK Systolic array TP.HCM Systolic Array. Control Control Control Unit Unit Unit Processing Processing Processing Units Units Units Interconnection Network(Local) An SIMD array is a synchronous array of PEs under the supervision of one control unit and all PEs receive the same instruction broadcast from the control unit but operate on different data sets from distinct data streams. SIMD array usually loads data into its local memories before starting the computation. Systolic arrays usually pipe data from an outside host and also pipe the results back to the host. HPC Lab - CSE - HCMUT" }, { "page_index": 310, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_009.png", "page_index": 310, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:33+07:00" }, "raw_text": "Systolic array for convolution BK TP.HCM Systolic array. uj......u Wo W. W, W3k 0 yi......yo Each cell operation. aout ain a a out W; bout in =b. +a out W. HPC Lab - CSE - HCMUT" }, { "page_index": 311, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_010.png", "page_index": 311, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:37+07:00" }, "raw_text": "Systolic array for matrix multiplication BK TP.HCM Multiplication B Here the matrix B is Transposed! nl Each PE function is to first multiply and then add. B13 B22 B31 PE ij=Cj B12 B Bi A1n......A12 A PE PE PE PE .. 
A22 A21 PE PE PE PE PE PEPE PE HPC Lab - CSE - HCMUT" }, { "page_index": 312, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_011.png", "page_index": 312, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:40+07:00" }, "raw_text": "BK MIMD TP.HCM MIMD - General purpose parallel computers IS IS DS CU1 PU1 I/0 Shared Memory IS DS I/0 CUn PUn IS MIMD architecture with shared memory HPC Lab - CSE - HCMUT" }, { "page_index": 313, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_012.png", "page_index": 313, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:43+07:00" }, "raw_text": "BK Classification based on Architecture TP.HCM Multiprocessors Multicomputers Pipelined Computers Dataflow Architectures Data Parallel Systems HPC Lab - CSE - HCMUT" }, { "page_index": 314, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_013.png", "page_index": 314, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:47+07:00" }, "raw_text": "BK Multiprocessor TP.HCM Consists of many fully programmable processors each capable of executing its own program Shared address space architecture Classified into 2 types - Uniform Memory Access (UMA) Multiprocessors - Non-Uniform Memory Access (NUMA) Multiprocessors HPC Lab - CSE - HCMUT" }, { "page_index": 315, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_014.png", "page_index": 315, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:51+07:00" }, "raw_text": "UMA Multiprocessor (1) BK TP.HCM Uses a central switching mechanism to reach a centralized shared memory All processors have equal access time to global memory 1 Tightly coupled system Problem: cache consistency P: Processor i C1 C2 Cn P2 P1 P Ci Cache i n Switching mechanism I/0 Memory banks HPC Lab - CSE - HCMUT" }, { "page_index": 316, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_015.png", "page_index": 316, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:54+07:00" }, "raw_text": "UMA Multiprocessor (2) BK TP.HCM Crossbar switching mechanism Mem Mem Mem Mem Cache Cache 1/0 1/0 P P HPC Lab - CSE - HCMUT" }, { "page_index": 317, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_016.png", "metadata": { "doc_type": "slide", "course_id": 
"CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_016.png", "page_index": 317, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:42:57+07:00" }, "raw_text": "UMA Multiprocessor (3) BK TP.HCM Shared-bus switching mechanism Mem Mem Mem Mem 1 Cache Cache 1/0 1/0 P P HPC Lab - CSE - HCMUT" }, { "page_index": 318, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_017.png", "page_index": 318, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:00+07:00" }, "raw_text": "UMA Multiprocessor (4) BK TP.HCM Packet-switched network Mem Mem Mem 1 1 17 Network Cache Cache Cache P P P HPC Lab - CSE - HCMUT" }, { "page_index": 319, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_018.png", "page_index": 319, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:05+07:00" }, "raw_text": "NUMA Multiprocessor BK TP.HCM Distributed shared P P memory combined by local memory of Mem Cache Mem Cache all processors Memory access time Network depends on whether it is local to the processor Mem Cache Mem Cache Caching shared (particularly P P nonlocal) data? Distributed Memory HPC Lab - CSE - HCMUT" }, { "page_index": 320, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_019.png", "page_index": 320, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:09+07:00" }, "raw_text": "BK Current Types of Multiprocessors TP.HCM PVP (Parallel Vector Processor) - A small number of proprietary vector processors connected by a high-bandwidth crossbar switch SMP (Symmetric Multiprocessor) - A small number of COsT microprocessors connected by a high-speed bus or crossbar switch DSM (Distributed Shared Memory) 0 Similar to SMP - The memory is physically distributed among nodes. 
HPC Lab - CSE - HCMUT" }, { "page_index": 321, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_020.png", "page_index": 321, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:12+07:00" }, "raw_text": "PVP (Parallel Vector Processor) BK TP.HCM VP : Vector Processor SM : Shared Memory VP VP VP Crossbar Switch SM SM SM HPC Lab - CSE - HCMUT" }, { "page_index": 322, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_021.png", "page_index": 322, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:15+07:00" }, "raw_text": "SMP (Symmetric Multi-Processor) BK TP.HCM P/C : Microprocessor and Cache SM: Shared Memory P/C P/C P/C Bus or Crossbar Switch SM SM SM HPC Lab - CSE - HCMUT" }, { "page_index": 323, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_022.png", "page_index": 323, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:19+07:00" }, "raw_text": "(Distributed Shared Memory) BK DSM TP.HCM MB: Memory Bus MB MB P/C: Microprocessor & Cache P/C P/C LM: Local Memory DIR: Cache Directory LM LM NIC: Network Interface Circuitry DIR DIR NIC NIC Custom-Designed Network HPC Lab - CSE - HCMUT" }, { "page_index": 324, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_023.png", "page_index": 324, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:22+07:00" }, "raw_text": "Multicomputers BK TP.HCM No shared memory 1 Processors interact via message passing -> loosely coupled system Message-passing Interconnection Network P/C: Microprocessor & Cache M: Memory P/C P/C P/C M M M HPC Lab - CSE - HCMUT" }, { "page_index": 325, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_024.png", "page_index": 325, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:25+07:00" }, "raw_text": "BK Current Types of Multicomputers TP.HCM MPP (Massively Parallel Processing) - Total number of processors > 1000 Cluster - Each node in system has less than 16 processors Constellation - Each node in system has more than 16 processors HPC Lab - CSE - HCMUT" }, { "page_index": 326, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": 
"/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_025.png", "page_index": 326, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:28+07:00" }, "raw_text": "MPP (Massively Parallel Processing) BK TP.HCM P/C: Microprocessor & Cache MB: Memory Bus NiC: Network Interface Circuitry LM: Local Memory MB MB P/C P/C LM LM NIC NIC Custom-Designed Network HPC Lab - CSE - HCMUT" }, { "page_index": 327, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_026.png", "page_index": 327, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:33+07:00" }, "raw_text": "BK Clusters TP.HCM MB: Memory Bus MB MB P/C: Microprocessor & P/C P/C Cache M: Memory M M LD: Local Disk IOB:l/O Bus Bridge Bridge NlC: Network Interface LD IOB LD IOB Circuitry NIC NIC Commodity Network (Ethernet, ATM, Myrinet, InfiniBand (VIA)) HPC Lab - CSE - HCMUT" }, { "page_index": 328, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_027.png", "page_index": 328, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:37+07:00" }, "raw_text": "BK Constellations TP.HCM P/C: Microprocessor & Cache MB: Memory Bus NiC: Network Interface Circuitry SM: Shared Memory I0C: l/0 Controller LD: Local Disk >= 16 >= 16 P/C P/C P/C P/C Hub IOC Hub I0C LD NIC LD NIO SM SM SM SM Custom or Commodity Network HPC Lab - CSE - HCMUT" }, { "page_index": 329, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_028.png", "page_index": 329, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:40+07:00" }, "raw_text": "Pipelined Computers (1) BK TP.HCM Instructions are divided into a number of steps (segments, stages) At the same time, several instructions can be loaded in the machine and be executed in different steps HPC Lab - CSE - HCMUT" }, { "page_index": 330, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_029.png", "page_index": 330, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:46+07:00" }, "raw_text": "Pipeline Computers (2) BK TP.HCM - IF: instruction fetch - ID: instruction decode and register fetch - Ex: execution and effective address calculation - MEM: memory access - WB: write back Cycles Instruction # 1 2 3 4 5 6 7 8 9 Instruction i IF ID EX MEM WB Instruction i+1 IF ID EX MEM WB Instruction i+2 IF ID EX MEM WB IF Instruction i+3 ID EX MEM WB Instruction i+4 IF ID EX MEM WB HPC Lab - CSE - HCMUT" }, { "page_index": 331, "chapter_num": 6, "source_file": 
"/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_030.png", "page_index": 331, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:50+07:00" }, "raw_text": "BK Dataflow Architecture TP.HCM Data-driven model - A program is represented as a directed acyclic graph in which a node represents an instruction and an edge represents the data dependency relationship between the connected nodes - Firing rule > A node can be scheduled for execution if and only if its input data become valid for consumption Dataflow languages - Id, SISAL, Silage, LISP,.. - Single assignment, applicative(functional) language - Explicit parallelism HPC Lab - CSE - HCMUT" }, { "page_index": 332, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_031.png", "page_index": 332, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:54+07:00" }, "raw_text": "BK Dataflow Graph TP.HCM a z = (a+ b) * c b Z c The dataflow representation of an arithmetic expression HPC Lab - CSE - HCMUT" }, { "page_index": 333, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_032.png", "page_index": 333, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:43:57+07:00" }, "raw_text": "BK Dataflow Computer TP.HCM Execution of instructions is driven by data availability - What is the difference between this and normal (control flow) computers? 
Advantages - Very high potential for parallelism - High throughput Free from side-effect Disadvantages - Time lost waiting for unneeded arguments - High control overhead Difficult in manipulating data structures HPC Lab - CSE - HCMUT" }, { "page_index": 334, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_033.png", "page_index": 334, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:02+07:00" }, "raw_text": "BK Dataflow Representation TP.HCM input d,e,f co = O d1 d2 d3 d4 e1 e2 e3 e4 for i from 1 to 4 do begin ai := di/ ei a1 a2 a3 a4 bi := ai * fi f1 -(* f2 -(* f3 -(* f4-(* Ci := bi + Ci-1 b1 b2 b3 b4 end C0 C4 C1 C2 C3 output a, b, c HPC Lab - CSE - HCMUT" }, { "page_index": 335, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_034.png", "page_index": 335, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:06+07:00" }, "raw_text": "Execution on BK TP.HCM a Control Flow Machine Assume all the external inputs are available before entering do loop + : 1 cycle, * : 2 cycles,/ : 3 cycles al b1 c1 a2 b2 c2 a4 b4 c4 Sequential execution on a uniprocessor in 24 cycles How long will it take to execute this program on a dataflow comp uter with 4 processors? HPC Lab - CSE - HCMUT" }, { "page_index": 336, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_035.png", "page_index": 336, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:10+07:00" }, "raw_text": "BK Execution on a Dataflow Machine TP.HCM al b1 c1 c2 c3 c4 a2 b2 a3 b3 a4 b4 Data-driven execution on a 4-processor dataflow computer i n 9 cycles Can we further reduce the execution time of this program ? 
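A worked check of these cycle counts (added here, using the slide's costs of 1 cycle for +, 2 for *, 3 for /): sequentially each of the 4 iterations costs 3 + 2 + 1 = 6 cycles, giving 4 x 6 = 24 cycles. On the 4-processor dataflow machine all four divisions ai = di / ei fire together (3 cycles) and all four multiplications bi = ai * fi fire together (2 cycles), but the additions ci = bi + ci-1 form a dependence chain and cost 4 x 1 = 4 further cycles, so 3 + 2 + 4 = 9 cycles in all. That chain through c1..c4 is the critical path of the graph, so no schedule on any number of processors can finish in fewer than 9 cycles unless the program itself is transformed (for example, by rewriting the c recurrence as a parallel prefix).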
HPC Lab - CSE - HCMUT" }, { "page_index": 337, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_036.png", "page_index": 337, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:13+07:00" }, "raw_text": "Data Parallel Systems (1) BK TP.HCM Programming model - Operations performed in parallel on each element of data structure - Logically single thread of control, performs sequential or parallel steps - Conceptually, a processor associated with each data element HPC Lab - CSE - HCMUT" }, { "page_index": 338, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_037.png", "page_index": 338, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:16+07:00" }, "raw_text": "Data Parallel Systems (2) BK TP.HCM SIMD Architectural model - Array of many simple, cheap processors with little memory each > Processors don't seguence through instructions - Attached to a control processor that issues instructions - Specialized and general communication, cheap global synchronization HPC Lab - CSE - HCMUT" }, { "page_index": 339, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_038.png", "page_index": 339, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:19+07:00" }, "raw_text": "BK Vector Processors TP.HCM Instruction set includes operations on vectors as well as scalars 2 types of vector computers - Processor arrays - Pipelined vector processors HPC Lab - CSE - HCMUT" }, { "page_index": 340, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_039.png", "page_index": 340, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:24+07:00" }, "raw_text": "Processor Array BK TP.HCM A sequential computer connected with a set of identical processing elements simultaneouls doing the same operation on different data. 
Eg CM-200 Processor array Processing Front-end computer element Data Data path memory Program and Data Memory Processing element Data CPU memory Instructior path I/0 processor Processing element Data I/0 memory I/0 HPC Lab - CSE - HCMUT" }, { "page_index": 341, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_a/slide_040.png", "page_index": 341, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:27+07:00" }, "raw_text": "BK Pipeline Vector Processor TP.HCM 1976 Stream vector from memory to the CPU Use pipelined arithmetic units to manipulate data Eg: Cray-1, Cyber-205 HPC Lab - CSE - HCMUT" }, { "page_index": 342, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_001.png", "page_index": 342, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:29+07:00" }, "raw_text": "Chapter 6 Processor Organization Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology" }, { "page_index": 343, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_002.png", "page_index": 343, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:32+07:00" }, "raw_text": "Outline BK TP.HCM Criteria: C - Diameter, bisection width, etc Processor Organizations: - Mesh, binary tree, hypertree, pyramid, butterfly, hypercube shuffle-exchange HPC Lab - CSE - HCMUT" }, { "page_index": 344, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_003.png", "page_index": 344, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:36+07:00" }, "raw_text": "Criteria BK TP.HCM Diameter - The largest distance between two nodes Lower diameter is better Bisection width The minimum number of edges that must be removed in order to divide the network into two halves (within one) Number of edges per node Maximum edge length HPC Lab - CSE - HCMUT" }, { "page_index": 345, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_004.png", "page_index": 345, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:38+07:00" }, "raw_text": "Mesh (1) BK TP.HCM Q-dimensional lattice Communication is allowed only between neighboring nodes. Interior nodes communicate with 2g other nodes. 
HPC Lab - CSE - HCMUT" }, { "page_index": 346, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_005.png", "page_index": 346, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:41+07:00" }, "raw_text": "Mesh (2) BK TP.HCM q-dimensional mesh with k^q nodes - Diameter: q(k-1) - Bisection width: k^{q-1} - The maximum number of edges per node: 2q - The maximum edge length is a constant (a worked example with these formulas follows below) HPC Lab - CSE - HCMUT" }, { "page_index": 347, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_006.png", "page_index": 347, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:44+07:00" }, "raw_text": "Binary Tree BK TP.HCM Depth k-1: 2^k - 1 nodes Diameter: 2(k-1) Bisection width: 1 Length of the longest edge: increasing HPC Lab - CSE - HCMUT" }, { "page_index": 348, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_007.png", "page_index": 348, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:46+07:00" }, "raw_text": "Fat Tree BK TP.HCM Addresses the bandwidth problem of the binary tree: edges nearer the root carry more bandwidth HPC Lab - CSE - HCMUT" }, { "page_index": 349, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_008.png", "page_index": 349, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:49+07:00" }, "raw_text": "Hypertree (1) BK TP.HCM Hypertree of degree k and depth d: a complete k-ary tree of height d. HPC Lab - CSE - HCMUT" },
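A worked check of the mesh and tree formulas (an added example, not from the slides): a 2-dimensional mesh with k = 4 has 4^2 = 16 nodes, diameter q(k-1) = 2 x 3 = 6 (corner to corner), bisection width k^{q-1} = 4 (cut the four links between the two middle columns), and at most 2q = 4 edges per node. A binary tree with k = 4 has depth 3 and 2^4 - 1 = 15 nodes, diameter 2(k-1) = 6 (leaf to leaf through the root), and bisection width 1 (removing a single edge at the root splits the network within one node of half).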
{ "page_index": 350, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_009.png", "page_index": 350, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:52+07:00" }, "raw_text": "Hypertree (2) BK TP.HCM A 4-ary hypertree with depth d has 4^d leaves and 2^d(2^{d+1} - 1) nodes in all - Diameter: 2d - Bisection width: 2^{d+1} - The number of edges per node: 6 - Length of the longest edge: increasing HPC Lab - CSE - HCMUT" }, { "page_index": 351, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_010.png", "page_index": 351, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:44:56+07:00" }, "raw_text": "Pyramid BK TP.HCM Size k^2: the base is a 2-D mesh network containing k^2 processors; the total number of processors is (4/3)k^2 - 1/3 A pyramid of size k^2: - Diameter: 2 log k - Bisection width: 2k - Maximum number of links per node: 9 - Length of the longest edge: increasing HPC Lab - CSE - HCMUT" }, { "page_index": 352, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_011.png", "page_index": 352, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:00+07:00" }, "raw_text": "Butterfly (1) BK TP.HCM (k+1)2^k nodes divided into k+1 rows (ranks), each containing n = 2^k nodes.
Ranks are labeled 0 through k. Node(i,j) is the j-th node on the i-th rank. Node(i,j) is connected to two nodes on rank i-1: node(i-1,j) and node(i-1,m), where m is the integer found by inverting the i-th most significant bit in the binary representation of j. If node(i,j) is connected to node(i-1,m), then node(i,m) is connected to node(i-1,j). Diameter = 2k. Bisection width = 2^k. Length of the longest edge: increasing. HPC Lab - CSE - HCMUT" }, { "page_index": 353, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_012.png", "page_index": 353, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:04+07:00" }, "raw_text": "Butterfly (2) BK TP.HCM Figure: butterfly with ranks 0-3 and nodes 0-7 per rank. Example: Node(1,5): i=1, j=5 = 101 (binary); inverting the 1st most significant bit gives 001 = 1, so node(1,5) is connected to node(0,1). HPC Lab - CSE - HCMUT" }, { "page_index": 354, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_013.png", "page_index": 354, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:07+07:00" }, "raw_text": "Hypercube (1) BK TP.HCM 2^k nodes form a k-dimensional hypercube Nodes are labeled 0, 1, 2, ..., 2^k - 1 Two nodes are adjacent if their labels differ in exactly one bit position (a one-line C test for this rule is sketched below) Diameter = k Bisection width = 2^{k-1} Number of edges per node is k Length of the longest edge: increasing HPC Lab - CSE - HCMUT" }, { "page_index": 355, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_014.png", "page_index": 355, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:10+07:00" }, "raw_text": "Hypercube (2) BK TP.HCM Figure: 2- and 3-dimensional hypercubes with node labels 0-3 and 0-7. HPC Lab - CSE - HCMUT" }, { "page_index": 356, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_015.png", "page_index": 356, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:15+07:00" }, "raw_text": "Hypercube (3) BK TP.HCM Figure: 4-dimensional hypercube with nodes 0-15; e.g. 5 = 0101 is adjacent to 1 = 0001, 4 = 0100 and 13 = 1101 (labels differing in exactly one bit). HPC Lab - CSE - HCMUT" }, { "page_index": 357, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_b/slide_016.png", "page_index": 357, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:18+07:00" }, "raw_text": "Others BK TP.HCM Torus http://clusterdesign.org/torus/ http://www.fujitsu.com/global/about/tech/k/whatis/network/ Cube-Connected Cycles Shuffle-Exchange De Bruijn HPC Lab - CSE - HCMUT" },
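The hypercube adjacency rule (labels differing in exactly one bit position) reduces to a one-line XOR test; an added helper, not slide code:

    /* hcube.c - hypercube neighbour test: the XOR of two labels must be a power of two */
    #include <stdio.h>

    static int adjacent(unsigned a, unsigned b)
    {
        unsigned d = a ^ b;                    /* bit positions where the labels differ */
        return d != 0 && (d & (d - 1)) == 0;   /* exactly one bit set */
    }

    int main(void)
    {
        printf("%d\n", adjacent(5u, 13u));     /* 0101 vs 1101 differ in one bit -> 1 */
        printf("%d\n", adjacent(5u, 6u));      /* 0101 vs 0110 differ in two bits -> 0 */
        return 0;
    }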
"source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_001.png", "page_index": 358, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:20+07:00" }, "raw_text": "Slide 187 Chapter 6 Synchronous Computations Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 359, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_002.png", "page_index": 359, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:23+07:00" }, "raw_text": "Slide 188 Synchronous Computations In a (fully) synchronous application, all the processes synchronized at regular points Barrier A basic mechanism for synchronizing processes - inserted at the point in each process where it must wait All processes can continue from this point when all the processes have reached it (or, in some implementations, when a stated number of processes have reached this point) Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 360, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_003.png", "page_index": 360, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:26+07:00" }, "raw_text": "Slide 189 Processes reaching barrier at different times Processes P P1 P2 n- Active Time Waiting Barrier Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 361, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_004.png", "page_index": 361, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:29+07:00" }, "raw_text": "Slide 190 In message-passing systems, barriers provided with library routines: Processes Po P1 D n- BarrierO BarrierO) Processes wait until all BarrierO reach their barrier call Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 
{ "page_index": 362, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_005.png", "page_index": 362, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:32+07:00" }, "raw_text": "Slide 191 MPI MPI_Barrier() Barrier with a named communicator being the only parameter, called by each process in the group, blocking until all members of the group have reached the barrier call and only returning then (a usage sketch follows below). Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 363, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_006.png", "page_index": 363, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:35+07:00" }, "raw_text": "Slide 192 PVM pvm_barrier() A similar barrier routine, used with a named group of processes. PVM has the unusual feature of specifying the number of processes that must reach the barrier to release the processes. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 364, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_007.png", "page_index": 364, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:37+07:00" }, "raw_text": "Slide 193 Barrier Implementation Centralized counter implementation (linear barrier). Figure: processes P0, P1, ..., Pn-1 each call Barrier(); a central counter C is incremented as each process arrives and is checked for n. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 365, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_008.png", "page_index": 365, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:41+07:00" }, "raw_text": "Slide 194 Counter-based barriers often have two phases: A process enters the arrival phase and does not leave this phase until all processes have arrived in this phase. Then processes move to the departure phase and are released. Good implementations must take into account that a barrier might be used more than once in a process. It might be possible for a process to enter the barrier for a second time before previous processes have left the barrier for the first time. The two-phase arrangement handles this scenario. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." },
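A minimal MPI_Barrier call sequence (an added sketch; the sleep only makes processes arrive at different times, as in Slide 189, and assumes a POSIX system):

    /* barrier.c - every process in the communicator must make the call */
    #include <stdio.h>
    #include <unistd.h>
    #include "mpi.h"

    int main(int argc, char* argv[])
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        sleep(rank);                            /* stagger the arrival times */
        printf("rank %d reached the barrier\n", rank);

        MPI_Barrier(MPI_COMM_WORLD);            /* blocks until all members arrive */

        printf("rank %d released\n", rank);     /* appears only after every arrival */
        MPI_Finalize();
        return 0;
    }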
{ "page_index": 366, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_009.png", "page_index": 366, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:44+07:00" }, "raw_text": "Slide 195 Example code. Master: for (i = 0; i < n; i++) /* count slaves as they reach barrier */ recv(P_any); for (i = 0; i < n; i++) /* release slaves */ send(P_i); Slave processes: send(P_master); recv(P_master);" },
{ "page_index": 367, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_010.png", "page_index": 367, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:45:48+07:00" }, "raw_text": "Slide 196 Barrier implementation in a message-passing system. Master: arrival phase - for (i = 0; i < n; i++) recv(P_any); departure phase - for (i = 0; i < n; i++) send(P_i); each slave sends on arrival and receives to depart. [Following slides: tree and butterfly barriers. Butterfly barrier with eight processes - first stage: P0<>P1, P2<>P3, P4<>P5, P6<>P7; second stage: P0<>P2, P1<>P3, P4<>P6, P5<>P7; third stage: P0<>P4, P1<>P5, P2<>P6, P3<>P7. Figure: processes P0 ... P7 proceed through the 1st, 2nd and 3rd stages over time.]" },
{ "page_index": 371, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_014.png", "page_index": 371, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:46:02+07:00" }, "raw_text": "Slide 200 Local Synchronization. Suppose a process Pi needs to be synchronized and to exchange data with process Pi-1 and process Pi+1 before continuing: Process Pi-1: recv(Pi); send(Pi); Process Pi: send(Pi-1); send(Pi+1); recv(Pi-1); recv(Pi+1); Process Pi+1: recv(Pi); send(Pi); Not a perfect three-process barrier because process Pi-1 will only synchronize with Pi and continue as soon as Pi allows. Similarly, process Pi+1 only synchronizes with Pi." },
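One way to realize the butterfly pairing pattern listed above is with pairwise exchanges where the partner at stage s is the rank XORed with 2^s. A sketch under the assumption that the number of processes is a power of two (the function name is ours, not a library routine):

```c
/* butterfly_barrier.c - sketch of a butterfly barrier built from
   pairwise message exchanges; assumes size is a power of two. */
#include <mpi.h>
#include <stdio.h>

static void butterfly_barrier(MPI_Comm comm)
{
    int rank, size, out = 0, in;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    /* stage s pairs process rank with rank ^ (1 << s):
       stage 0: P0<>P1, P2<>P3, ...; stage 1: P0<>P2, P1<>P3, ...; etc. */
    for (int s = 1; s < size; s <<= 1) {
        int partner = rank ^ s;
        MPI_Sendrecv(&out, 1, MPI_INT, partner, 0,
                     &in,  1, MPI_INT, partner, 0,
                     comm, MPI_STATUS_IGNORE);
    }
}

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    butterfly_barrier(MPI_COMM_WORLD);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("process %d passed the butterfly barrier\n", rank);
    MPI_Finalize();
    return 0;
}
```

After log2(p) stages every process has (transitively) heard from every other, which is why no process can leave the last stage before all have entered the first.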
{ "page_index": 372, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_015.png", "page_index": 372, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:46:05+07:00" }, "raw_text": "Slide 201 Deadlock. When a pair of processes each send and receive from each other, deadlock may occur. Deadlock will occur if both processes perform the send first, using synchronous routines (or blocking routines without sufficient buffering). This is because neither will return; they will wait for matching receives that are never reached." },
{ "page_index": 373, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_016.png", "page_index": 373, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:46:08+07:00" }, "raw_text": "Slide 202 A Solution. Arrange for one process to receive first and then send, and the other process to send first and then receive. Example: in a linear pipeline, deadlock can be avoided by arranging for the even-numbered processes to perform their sends first and the odd-numbered processes to perform their receives first." },
{ "page_index": 374, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_017.png", "page_index": 374, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:46:10+07:00" }, "raw_text": "Slide 203 Combined deadlock-free blocking sendrecv() routines. MPI provides MPI_Sendrecv() and MPI_Sendrecv_replace(). [Example: processes Pi-1, Pi, Pi+1 each call sendrecv() toward their neighbors.] MPI_Sendrecv() has 12 parameters!" },
{ "page_index": 375, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_018.png", "page_index": 375, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:46:14+07:00" }, "raw_text": "Slide 204 Synchronized Computations. Can be classified as: fully synchronous or locally synchronous. In fully synchronous, all processes involved in the computation must be synchronized. In locally synchronous, processes only need to synchronize with a set of logically nearby processes, not all processes involved in the computation." },
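A minimal sketch of the deadlock-free neighbor exchange with MPI_Sendrecv() mentioned on Slide 203 (the ring layout and variable names are ours; boundary processes use MPI_PROC_NULL so no special-casing is needed):

```c
/* sendrecv_exchange.c - deadlock-free neighbor exchange: each process
   swaps its rank with its left and right neighbors on a line. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;
    int mine = rank, from_left = -1, from_right = -1;

    /* each call sends and receives in one step, so no process sits in
       a send waiting for a matching receive that never comes */
    MPI_Sendrecv(&mine, 1, MPI_INT, left, 0,
                 &from_right, 1, MPI_INT, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&mine, 1, MPI_INT, right, 1,
                 &from_left, 1, MPI_INT, left, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("process %d: left=%d right=%d\n", rank, from_left, from_right);
    MPI_Finalize();
    return 0;
}
```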
{ "page_index": 376, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_019.png", "page_index": 376, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:46:17+07:00" }, "raw_text": "Slide 205 Fully Synchronized Computation Examples. Data Parallel Computations: the same operation performed on different data elements simultaneously; i.e., in parallel. Particularly convenient because of: ease of programming (essentially only one program); easy scaling to larger problem sizes. Many numeric and some non-numeric problems can be cast in a data parallel form." },
{ "page_index": 377, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_020.png", "page_index": 377, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:46:20+07:00" }, "raw_text": "Slide 206 Example. To add the same constant to each element of an array: for (i = 0; i < n; i++) a[i] = a[i] + k; The statement a[i] = a[i] + k could be executed simultaneously by multiple processors, each using a different index i (0 <= i < n). [Following slides: the data parallel prefix sum uses the step forall (i = 0; i < n; i++) if (i >= 2^j) x[i] = x[i] + x[i - 2^j];]" },
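A sequential C emulation of the data parallel prefix sum step quoted above may help make the doubling pattern concrete (the temporary copy stands in for the "simultaneous" reads a data parallel machine would perform; N = 16 matches the figure that follows):

```c
/* prefix_sum_steps.c - sequential emulation of the data parallel
   prefix sum: at step j, every x[i] with i >= 2^j adds x[i - 2^j].
   In a real data parallel setting each i would be a processor. */
#include <stdio.h>
#include <string.h>
#define N 16

int main(void)
{
    int x[N], tmp[N];
    for (int i = 0; i < N; i++) x[i] = 1;   /* prefix sums 1,2,...,16 */

    for (int step = 1; step < N; step <<= 1) {   /* step = 2^j */
        memcpy(tmp, x, sizeof x);                /* "simultaneous" reads */
        for (int i = step; i < N; i++)           /* the forall body */
            x[i] = tmp[i] + tmp[i - step];
    }
    for (int i = 0; i < N; i++) printf("%d ", x[i]);  /* 1 2 3 ... 16 */
    printf("\n");
    return 0;
}
```

With 16 elements the loop runs log2(16) = 4 steps, matching the four rows of the figure on the next slide.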
}, { "page_index": 383, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_026.png", "page_index": 383, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:46:49+07:00" }, "raw_text": "Slide 212 Data parallel prefix sum operation NumbersXoXiX2X3 X4X5X6X7 X8X,X10X11X12X13X14X15 AAAAAAAAAAAAAA Add 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Step 1 > > > 2 i=0 i=0 i=1 i=2 i=3 i=4 i=5 i=6 i=7 i=8 i=9 =10 =11 1=12 i=13 i=14 (j =0) Add 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Step 2 > > > > 2 i=0 i=0 i=0 i=0 i=2 i=3 i=4 i=5 i=6 i=7 i=8 i=9 i=12 i=1 =10 =11 (j = 1) Add 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Step 3 2 2 2 i=0 i=0 i=1 i=2 i=3 i=4 i=5 i=6 i=7 i=8 (j = 2) Add 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 Final step i=0 i=0 i=0 i=0 i=0 i=0 i=0 (j = 3) i=0 i=0 i=0 i=0 i=0 i=0 i=0 i=0 Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved" }, { "page_index": 384, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_027.png", "page_index": 384, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:46:54+07:00" }, "raw_text": "Slide 213 Synchronous Iteration (Synchronous Parallelism) Each iteration composed of several processes that start together at beginning of iteration. Next iteration cannot begin until all processes have finished previous iteration. Using fora11 : for 1 (j = 0; j < n; j++)/*for each synch. iteration */ forall (i = 0; i < N; i++) A*N procs each using*/ body(i); /* specific value of i */ } or: for (j = 0; j < n; j++) X*for each synchr.iteration */ i = myrank; /*find value of i to be used */ body(i); barrier(mygroup) Slides for Parallel Proqramming Techniques and Applications Usinq Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen. Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 385, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_028.png", "page_index": 385, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:46:57+07:00" }, "raw_text": "Slide 214 Another fully synchronous computation example Solving a General System of Linear Equations by Iteration Suppose the equations are of a general form with n equations and n unknowns an-1,oXo + an-1,1X1 + an-1,2X2 ... + an-1,n-1Xn-1 = bn-1 = b2 a2.oXo + a2,1X1 + a2.2X2 .. + a2,n-1Xn-1 = b1 a1,oXo + a1,1X1 + a1,2X2 .. + a1,n-1Xn-1 = bo ao,oXo + ao,1X1 + ao,2X2 .. + ao,n-1Xn-1 where the unknowns are Xo, X1, X2, ... 
{ "page_index": 386, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_029.png", "page_index": 386, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:47:00+07:00" }, "raw_text": "Slide 215 By rearranging the ith equation a_{i,0}x_0 + a_{i,1}x_1 + a_{i,2}x_2 + ... + a_{i,n-1}x_{n-1} = b_i to x_i = (1/a_{i,i})[b_i - (a_{i,0}x_0 + a_{i,1}x_1 + a_{i,2}x_2 + ... + a_{i,i-1}x_{i-1} + a_{i,i+1}x_{i+1} + ... + a_{i,n-1}x_{n-1})], or x_i = (1/a_{i,i})[b_i - sum_{j != i} a_{i,j}x_j]. This equation gives x_i in terms of the other unknowns and can be used as an iteration formula for each of the unknowns to obtain better approximations." },
{ "page_index": 387, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_c/slide_030.png", "page_index": 387, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:47:04+07:00" }, "raw_text": "Slide 216 Jacobi Iteration. All values of x are updated together. It can be proven that the Jacobi method will converge if the diagonal values of a have an absolute value greater than the sum of the absolute values of the other a's on the row (the array of a's is diagonally dominant), i.e. if |a_{i,i}| > sum_{j != i} |a_{i,j}| for all i. [The sequential code iterates until the largest change falls below a threshold: ... } while (max_diff > tolerance); /* test convergence */]" },
{ "page_index": 424, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_005.png", "page_index": 424, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:08+07:00" }, "raw_text": "Slide 253 The first section of code, computing the next iteration values based on the immediate previous iteration values, is the traditional Jacobi iteration method. Suppose, however, processes are to continue with the next iteration before other processes have completed. Then the processes moving forward would use values computed not only from the previous iteration but maybe from earlier iterations. The method then becomes an asynchronous iterative method." },
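A sequential sketch of the Jacobi iteration just described, using the rearranged formula from Slide 215 (the 3x3 system is hypothetical example data chosen to be diagonally dominant):

```c
/* jacobi_seq.c - sequential sketch of Jacobi iteration for Ax = b,
   assuming A is diagonally dominant so the iteration converges. */
#include <stdio.h>
#include <math.h>
#define N 3

int main(void)
{
    double a[N][N] = {{4,1,1},{1,5,2},{1,2,6}};  /* diagonally dominant */
    double b[N] = {6, 8, 9};
    double x[N] = {0, 0, 0}, newx[N];
    double tolerance = 1e-9, max_diff;

    do {
        max_diff = 0.0;
        for (int i = 0; i < N; i++) {            /* all x updated together */
            double sum = 0.0;
            for (int j = 0; j < N; j++)
                if (j != i) sum += a[i][j] * x[j];
            newx[i] = (b[i] - sum) / a[i][i];    /* x_i = (1/a_ii)[b_i - sum] */
            if (fabs(newx[i] - x[i]) > max_diff)
                max_diff = fabs(newx[i] - x[i]);
        }
        for (int i = 0; i < N; i++) x[i] = newx[i];
    } while (max_diff > tolerance);              /* test convergence */

    for (int i = 0; i < N; i++) printf("x[%d] = %f\n", i, x[i]);
    return 0;
}
```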
{ "page_index": 425, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_006.png", "page_index": 425, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:12+07:00" }, "raw_text": "Slide 254 Asynchronous Iterative Method - Convergence. Mathematical conditions for convergence may be more strict. Each process may not be allowed to use any previous iteration values if the method is to converge. Chaotic Relaxation: a form of asynchronous iterative method introduced by Chazan and Miranker (1969) in which the conditions are stated as 'there must be a fixed positive integer s such that, in carrying out the evaluation of the ith iterate, a process cannot make use of any value of the components of the jth iterate if j < i - s' (Baudet, 1978)." },
{ "page_index": 426, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_007.png", "page_index": 426, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:14+07:00" }, "raw_text": "Slide 255 The final part of the code, checking for convergence on every iteration, can also be reduced. It may be better to allow iterations to continue for several iterations before checking for convergence." },
{ "page_index": 427, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_008.png", "page_index": 427, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:18+07:00" }, "raw_text": "Slide 256 Overall Parallel Code. Each process is allowed to perform s iterations before being synchronized, and also to update the array as it goes. After s iterations, the maximum divergence is recorded and convergence is checked then. The actual iteration corresponding to the elements of the array being used at any time may be from an earlier iteration, but only up to s iterations previously. There may be a mixture of values of different iterations as the array is updated without synchronizing with other processes - truly a chaotic situation." },
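One way to express the "s iterations between convergence checks" idea from Slides 255-256 is to reduce the local maximum change only every s iterations. The reduction mechanism (MPI_Allreduce) and the decaying iterate_once() stub are our illustrative assumptions, not code from the slides:

```c
/* batched_convergence.c - sketch: run s local iterations between
   global convergence checks, cutting synchronization cost. */
#include <mpi.h>

static double iterate_once(void)
{
    static double d = 1.0;   /* stand-in for one real relaxation sweep;
                                returns the local maximum change */
    d *= 0.5;
    return d;
}

int main(int argc, char **argv)
{
    double tolerance = 1e-6, local_max, global_max;
    int s = 4;               /* iterations between global checks */
    MPI_Init(&argc, &argv);

    do {
        local_max = 0.0;
        for (int k = 0; k < s; k++) {        /* s iterations, no global sync */
            double d = iterate_once();
            if (d > local_max) local_max = d;
        }
        /* one global reduction every s iterations instead of every one */
        MPI_Allreduce(&local_max, &global_max, 1, MPI_DOUBLE,
                      MPI_MAX, MPI_COMM_WORLD);
    } while (global_max > tolerance);

    MPI_Finalize();
    return 0;
}
```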
}, { "page_index": 428, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_009.png", "page_index": 428, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:20+07:00" }, "raw_text": "Slide 257 Chapter 7 Load Balancing and Termination Detection Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 429, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_010.png", "page_index": 429, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:23+07:00" }, "raw_text": "Slide 258 Load balancing - used to distribute computations fairly across processors in order to obtain the highest possible execution speed. Termination detection - detecting when a computation has been completed. More difficult when the computation is distributed. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 430, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_011.png", "page_index": 430, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:26+07:00" }, "raw_text": "Slide 259 Load balancing P5 P4 P3 Processors P2 P1 Po Time (a) Imperfect load balancing leading to increased execution time P5 P4 P3 Processors P2 P1 Po (b) Perfect load balancing Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 431, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_012.png", "page_index": 431, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:30+07:00" }, "raw_text": "Slide 260 Static Load Balancing Before the execution of any process. 
{ "page_index": 432, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_013.png", "page_index": 432, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:33+07:00" }, "raw_text": "Slide 261 Static Load Balancing. Balance the load prior to the execution. Various static load-balancing algorithms exist, but there are several fundamental flaws with static load balancing, even if a mathematical solution exists: it is very difficult to estimate accurately the execution times of various parts of a program without actually executing the parts; communication delays vary under different circumstances; some problems have an indeterminate number of steps to reach their solution." },
{ "page_index": 433, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_014.png", "page_index": 433, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:36+07:00" }, "raw_text": "Slide 262 Dynamic Load Balancing. Vary the load during the execution of the processes. All the previous factors are taken into account by making the division of load dependent upon the execution of the parts as they are being executed. It does incur an additional overhead during execution, but it is much more effective than static load balancing." },
}, { "page_index": 434, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_015.png", "page_index": 434, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:39+07:00" }, "raw_text": "Slide 263 Processes and Processors Computation will be divided into work or tasks to be performed, and processes perform these tasks. Processes are mapped onto processors. Since our objective is to keep the processors busy, we are interested in the activity of the processors. However, we often map a single process onto each processor, so we will use 1 the termsprocess and l processor somewhat interchangeably Slides for Parallel Proqramming Techniques and Applications Usinq Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen. Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. _2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 435, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_016.png", "page_index": 435, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:42+07:00" }, "raw_text": "Slide 264 Dynamic Load Balancing Can be classified as: . Centralized Decentralized Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 436, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_017.png", "page_index": 436, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:46+07:00" }, "raw_text": "Slide 265 Centralized dynamic load balancing Tasks handed out from a centralized location. Master-slave structure. Decentralized dynamic load balancing Tasks are passed between arbitrary processes. A collection of worker processes operate upon the problem and interact among themselves, finally reporting to a single process. A worker process may receive tasks from other worker processes and may send tasks to other worker processes (to complete or pass on at their discretion) Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." 
}, { "page_index": 437, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_018.png", "page_index": 437, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:48+07:00" }, "raw_text": "Slide 266 Centralized Dynamic Load Balancing Master process(or) holds the collection of tasks to be performed Tasks are sent to the slave processes. When a slave process completes one task, it requests another task from the master process. Terms used : work pool, replicated worker, processor farm Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 438, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_019.png", "page_index": 438, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:51+07:00" }, "raw_text": "Slide 267 Centralized work pool Work pool Queue Tasks Master process Send task Request task (and possibly submit new tasks Slave \"worker\" processes Slides for Parallel Proqramming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 439, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_020.png", "page_index": 439, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:54+07:00" }, "raw_text": "Slide 268 Termination Computation terminates when: The task queue is empty and Every process has made a request for another task without any new tasks being generated Not sufficient to terminate when task queue empty if one or more processes are still running if a running process may provide new tasks for task queue Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 440, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_021.png", "page_index": 440, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:56+07:00" }, "raw_text": "Slide 269 Decentralized Dynamic Load Balancing Distributed Work Pool Initial tasks Process M Process Mn-1 Slaves Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc.All rights reserved." }, { "page_index": 441, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_022.png", "page_index": 441, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:49:59+07:00" }, "raw_text": "Slide 270 Fully Distributed Work Pool Processes to execute tasks from each other Process Process Requests/tasks Process Process Slides for Parallel Proqramming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 442, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_023.png", "page_index": 442, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:02+07:00" }, "raw_text": "Slide 271 Task Transfer Mechanisms Receiver-Initiated Method A process reguests tasks from other processes it selects. Typically, a process would request tasks from other processes when it has few or no tasks to perform Method has been shown to work well at high system load. Unfortunately, it can be expensive to determine process loads. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 443, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_024.png", "page_index": 443, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:05+07:00" }, "raw_text": "Slide 272 Sender-Initiated Method A process sends tasks to other processes it selects. Typically, a process with a heavy load passes out some of its tasks to others that are willing to accept them Method has been shown to work well for light overall system loads Another option is to have a mixture of both methods. 
{ "page_index": 444, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_025.png", "page_index": 444, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:08+07:00" }, "raw_text": "Slide 273 Decentralized selection algorithm requesting tasks between slaves. [Figure: slave Pi and slave Pj exchange requests, each running a local selection algorithm.]" },
{ "page_index": 445, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_026.png", "page_index": 445, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:11+07:00" }, "raw_text": "Slide 274 Process Selection. Algorithms for selecting a process: Round robin algorithm - process Pi requests tasks from process Px, where x is given by a counter that is incremented after each request, using modulo n arithmetic (n processes), excluding x = i. Random polling algorithm - process Pi requests tasks from process Px, where x is a number that is selected randomly between 0 and n - 1 (excluding i)." },
{ "page_index": 446, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_027.png", "page_index": 446, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:14+07:00" }, "raw_text": "Slide 275 Load Balancing Using a Line Structure. [Figure: master process P0 feeds a line of processes P1 ... Pn-1.]" },
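The two selection rules on Slide 274 fit in a few lines each; a sketch with helper names of our own choosing:

```c
/* select_partner.c - the two selection rules as small functions:
   round robin with a per-process counter (skipping self), and
   random polling (also skipping self). */
#include <stdio.h>
#include <stdlib.h>

static int round_robin(int i, int n, int *counter)
{
    int x = *counter % n;             /* counter with modulo n arithmetic */
    if (x == i) x = (x + 1) % n;      /* exclude x == i */
    *counter = (x + 1) % n;           /* advance for the next request */
    return x;
}

static int random_poll(int i, int n)
{
    int x = rand() % (n - 1);         /* pick one of the n-1 other processes */
    return (x >= i) ? x + 1 : x;      /* shift past i to skip self */
}

int main(void)
{
    int counter = 0;
    for (int k = 0; k < 5; k++)
        printf("round robin: P2 asks P%d\n", round_robin(2, 8, &counter));
    for (int k = 0; k < 5; k++)
        printf("random poll: P2 asks P%d\n", random_poll(2, 8));
    return 0;
}
```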
}, { "page_index": 447, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_028.png", "page_index": 447, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:17+07:00" }, "raw_text": "Slide 276 The master process (Po in Figure 7.6) feeds the queue with tasks at one end, and the tasks are shifted down the queue. When a \"worker\" process, P; (1 i < n), detects a task at its input from the queue and the process is idle, it takes the task from the queue. Then the tasks to the left shuffle down the queue so that the space held by the task is filled. A new task is inserted into the left side end of the queue. Eventually, all processes will have a task and the queue is filled with new tasks. High- priority or larger tasks could be placed in the gueue first. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 448, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_029.png", "page_index": 448, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:21+07:00" }, "raw_text": "Slide 277 Shifting Actions could be orchestrated by using messages between adjacent processes: For left and right communication For the current task P comm If buffer empty. Request for task make request Receive task If buffer full. from request send task If free, Receive request task from task request Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc.All rights reserved." }, { "page_index": 449, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_030.png", "page_index": 449, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:25+07:00" }, "raw_text": "Slide 278 Code Using Time Sharing Between Communication and Computation Master process (Po) for (i = 0; i < no_tasks; i++) { recv(P1, request_tag): /* request for task */ send(&task, Pi, task_tag); /* send tasks into queue */ 2 recv(P1, request_tag); /* request for task */ send(&empty, Pi, task_tag); /* end of tasks */ Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 450, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_031.png", "page_index": 450, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:29+07:00" }, "raw_text": "Slide 279 Process P; (1 < i< n) if (buffer == empty) { send(Pi-1, request_tag); /* request new task */ recv(&buffer, Pi-1, task_tag);/* task from left proc */ if ((buffer == full) && (!busy)) * get next task */ task = buffer: /* get task*/ buffer = empty; /* set buffer empty */ busy = TRUE; /* set process busy */ } if (request && (buffer == full)) { send(&buffer, Pi+1); /* shift task forward */ buffer = empty: if (busy) { /* continue on current task */ Do some work on task. If task finished, set busy to false. } Nonblocking nrecv( is necessary to check for a request being received from the right. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. _2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 451, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_032.png", "page_index": 451, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:32+07:00" }, "raw_text": "Slide 280 Nonblocking Receive Routines PVM Nonblocking receive, pvm_nrecvQ returned a value that is zero if no message has been received. A probe routine, pvm_probe( could be used to check whether a message has been received without actual reading the message Subsequently, a normal recvQ routine is needed to accept and unpack the message. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 452, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_033.png", "page_index": 452, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:35+07:00" }, "raw_text": "Slide 281 Nonblocking Receive Routines MPI Nonblocking receive, MpI_IrecvQ returns a request \"handle,' which is used in subsequent completion routines to wait for the message or to establish whether the message has actually been received at that point (MPI_WaitQand MPI_TestQ respectively) In effect, the nonblocking receive, MpI_Irecv(Q posts a request for message and returns immediately Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 453, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_034.png", "page_index": 453, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:38+07:00" }, "raw_text": "Slide 282 Load balancing using a tree Tasks passed from node into one of the two nodes below it when node buffer empty. Task when requested Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 454, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_035.png", "page_index": 454, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:42+07:00" }, "raw_text": "Slide 283 Distributed Termination Detection Algorithms Termination Conditions At time t reguires the following conditions to be satisfied: Application-specific c local 1 termination conditions exist throughout the collection of processes, at time t. There are no messages in transit between processes at time t. Subtle difference between these termination conditions and those given for a centralized load-balancing system is having to take into account messages in transit. Second condition necessary because a message in transit might restart a terminated process. More difficult to recognize. The time that it takes for messages to travel between processes will not be known in advance. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 455, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_036.png", "page_index": 455, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:45+07:00" }, "raw_text": "Slide 284 One very general distributed termination algortithm Each process in one of two states: 1. Inactive - without any task to perform 2.Active Process that sent task to make it enter the active state becomes its \"parent.\" Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 456, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_037.png", "page_index": 456, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:48+07:00" }, "raw_text": "Slide 285 When process receives s a task, it immediately sends an acknowledgment message, except if the process it receives the task from is its parent process. Only sends an acknowledgment message to its parent when it is ready to become inactive, i.e. when . Its local termination condition exists (all tasks are completed, and . It has transmitted all its acknowledgments for tasks it has received and It has received all its acknowledgments for tasks it has sent out. A process must become inactive before its parent process. When first process becomes idle, the computation can terminate. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. _2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 457, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_038.png", "page_index": 457, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:51+07:00" }, "raw_text": "Slide 286 Termination using message acknowledgments Parent Process Final acknowledgment Inactive First task Acknowledgment Task Other processes Active Other termination algorthms in textbook Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 458, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_039.png", "page_index": 458, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:55+07:00" }, "raw_text": "Slide 287 Ring Termination Algorithms Single-pass ring termination algorithm 1. When Po has terminated, it generates a token that is passed to P1. 2. When P; (1 i< n) receives the token and has already terminated. it passes the token onward to Pi+1. Otherwise, it waits for its local passes the token to Po. 3. When Po receives a token, it knows that all processes in the ring have terminated. A message can then be sent to all processes informing them of global termination, if necessary. The algorithm assumes that a process cannot be reactivated after reaching its local termination condition. 
{ "page_index": 459, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_040.png", "page_index": 459, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:50:58+07:00" }, "raw_text": "Slide 288 Ring termination detection algorithm. [Figure: the token is passed to the next processor (P0 -> P1 -> P2 -> ... -> Pn-1 -> P0) once the local termination condition is reached.]" },
{ "page_index": 460, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_041.png", "page_index": 460, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:00+07:00" }, "raw_text": "Slide 289 Process algorithm for local termination. [Figure: a process forwards the token when (token received) AND (terminated).]" },
{ "page_index": 461, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_042.png", "page_index": 461, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:03+07:00" }, "raw_text": "Slide 290 Dual-Pass Ring Termination Algorithm. Can handle processes being reactivated, but requires two passes around the ring. The reason for reactivation is process Pi passing a task to Pj where j < i and after the token has passed Pj. If this occurs, the token must recirculate through the ring a second time. To differentiate these circumstances, tokens are colored white or black. Processes are also colored white or black. Receiving a black token means that global termination may not have occurred and the token must be recirculated around the ring again." },
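The single-pass algorithm of Slide 287 maps onto a few MPI calls; a sketch with local termination simulated as already reached (assumes at least two processes):

```c
/* ring_token.c - sketch of single-pass ring termination detection:
   P0 starts the token after local termination; each Pi forwards it
   once locally terminated; when P0 gets it back, all have terminated. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* wait_for_local_termination();  -- assumed already reached here */
    if (rank == 0) {
        MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("token returned to P0: all processes terminated\n");
    } else {
        MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
```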
}, { "page_index": 462, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_043.png", "page_index": 462, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:08+07:00" }, "raw_text": "Slide 291 The algorithm is as follows, again starting at Po: 1.Po becomes white when it has terminated and generates a white token to P1. 2.The token is passed through the ring from one process to the next when each process has terminated, but the color of the token may be changed. If P; passes a task to P; where j < i (that is, before this process in the ring), it becomes a b/ack process; otherwise it is a white process. A black process will color a token black and pass it on. A white process will pass on the token in its original color (either black or white). After P; has passed on a token, it becomes a white process. Pn-1 passes the token to Po. 3.When Po receives a black token, it passes on a white token; if it receives a white token, all processes have terminated. Notice that in both ring algorithms, Po becomes the central point for global termination. Also, assumed that an acknowledge signal is generated to each request. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 463, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_044.png", "page_index": 463, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:10+07:00" }, "raw_text": "Slide 292 Passing task to previous processes Task D D P n-1 Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 464, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_045.png", "page_index": 464, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:14+07:00" }, "raw_text": "Slide 293 Tree Algorithm Local actions described can be applied to various structures, notably a tree structure, to indicate that processes up to that point have terminated. AND AND Terminated AND Terminated Terminated Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc.All rights reserved." 
}, { "page_index": 465, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_046.png", "page_index": 465, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:18+07:00" }, "raw_text": "Slide 294 Fixed Energy Distributed Termination Algorithm A fixed quantity within system, colorfully termed \"energy. System starts with all the energy being held by one process, the root process. Root process passes out portions of energy with tasks to processes making requests for tasks. If these processes receive requests for tasks, the energy is divided further and passed to these processes. When a process becomes idle, it passes the energy it holds back before requesting a new task. A process will not hand back its energy until all the energy it handed out is returned and combined to the total energy held. When all the energy returned to root and the root becomes idle, all the processes must be idle and the computation can terminate. Significant disadvantage - dividing energy will be of finite precision and adding partial energies may not equate to original energy. In addition, can only divide energy so far before it becomes essentially zero Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 466, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_047.png", "page_index": 466, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:21+07:00" }, "raw_text": "Slide 295 Load balancing/termination detection Example Shortest Path Problem Finding the shortest distance between two points on a graph It can be stated as follows: Given a set of interconnected nodes where the links between the nodes are marked with \"weights,\" find the path from one specific node to another specific node that has the smallest accumulated weights. The interconnected nodes can be described by a graph. The nodes are called vertices, and the links are called edges. If the edges have implied directions (that is, an edge can only be traversed in one direction, the graph is a directed graph. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 467, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_048.png", "page_index": 467, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:26+07:00" }, "raw_text": "Slide 296 Graph could be used to find solution to many different problems; eg: 1. 
The shortest distance between two towns or other points on a map, where the weights represent distance 2. The quickest route to travel, where the weights represent time (the quickest route may not be the shortest route if different modes of travel are available; for example, flying to certain towns) 3. The least expensive way to travel by air, where the weights represent the cost of the flights between cities (the vertices) 4. The best way to climb a mountain given a terrain map with contours 5. The best route through a computer network for minimum message delay (the vertices represent computers, and the weights represent the delay between two computers) 6. The most efficient manufacturing system, where the weights represent hours of work \"The best way to climb a mountain\" will be used as an example." }, { "page_index": 468, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_049.png", "page_index": 468, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:28+07:00" }, "raw_text": "Slide 297 Example: The Best Way to Climb a Mountain [Figure: base camp A, possible intermediate camps B-E, summit F]" }, { "page_index": 469, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_050.png", "page_index": 469, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:32+07:00" }, "raw_text": "Slide 298 Graph of mountain climb [Figure: directed graph of the camp sites, with the edge weights listed in the adjacency matrix on Slide 300] Weights in graph indicate amount of effort that would be expended in traversing the route between two connected camp sites. The effort in one direction may be different from the effort in the opposite direction (downhill instead of uphill!), so this is a directed graph." }, { "page_index": 470, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_051.png", "page_index": 470, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:36+07:00" }, "raw_text": "Slide 299 Graph Representation Two basic ways that a graph can be represented in a program: 1.
Adjacency matrix - a two-dimensional array, a, in which a[i][j] holds the weight associated with the edge between vertex i and vertex j, if one exists 2. Adjacency list - for each vertex, a list of vertices directly connected to the vertex by an edge and the corresponding weights associated with the edges The adjacency matrix is used for dense graphs; the adjacency list is used for sparse graphs. The difference is based upon space (storage) requirements. Accessing the adjacency list is slower than accessing the adjacency matrix." }, { "page_index": 471, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_052.png", "page_index": 471, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:39+07:00" }, "raw_text": "Slide 300 Representing the graph (a) Adjacency matrix (rows = source, columns = destination, blank = no edge): A: B=10; B: C=8, D=13, E=24, F=51; C: D=14; D: E=9; E: F=17; F: no outgoing edges" }, { "page_index": 472, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_053.png", "page_index": 472, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:43+07:00" }, "raw_text": "Slide 301 (b) Adjacency list (each source vertex heads a chain of (vertex, weight) nodes ending in NULL): A -> (B,10); B -> (C,8) -> (D,13) -> (E,24) -> (F,51); C -> (D,14); D -> (E,9); E -> (F,17); F -> NULL" }, { "page_index": 473, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_054.png", "page_index": 473, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:46+07:00" }, "raw_text": "Slide 302 Searching a Graph Two well-known single-source shortest-path algorithms: Moore's single-source shortest-path algorithm (Moore, 1957) and Dijkstra's single-source shortest-path algorithm (Dijkstra, 1959), which are similar. Moore's algorithm is chosen because it is more amenable to parallel implementation, although it may do more work. The weights must all be positive values for the algorithm to work."
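To make the two representations concrete, here is a small Python sketch (not from the slides) that builds both structures for the directed mountain-climb graph above, using the weights from Slides 300-301; INF stands in for the "no edge" entries.

import math

INF = math.inf                      # stands in for "no edge" / infinity
V = ['A', 'B', 'C', 'D', 'E', 'F']
edges = [('A','B',10), ('B','C',8), ('B','D',13), ('B','E',24),
         ('B','F',51), ('C','D',14), ('D','E',9), ('E','F',17)]
idx = {v: i for i, v in enumerate(V)}

# (a) Adjacency matrix: O(n^2) space, constant-time edge lookup.
w = [[INF] * len(V) for _ in V]
for u, v, wt in edges:
    w[idx[u]][idx[v]] = wt

# (b) Adjacency list: space proportional to the number of edges,
# but finding one particular edge means scanning a list.
adj = {v: [] for v in V}
for u, v, wt in edges:
    adj[u].append((v, wt))

print(w[idx['B']][idx['F']])   # 51
print(adj['B'])                # [('C', 8), ('D', 13), ('E', 24), ('F', 51)]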
}, { "page_index": 474, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_055.png", "page_index": 474, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:49+07:00" }, "raw_text": "Slide 303 Moore's Algorithm Starting with the source vertex, the basic algorithm, implemented when vertex i is being considered, is as follows. Find the distance to vertex j through vertex i and compare it with the current minimum distance to vertex j. Change the minimum distance if the distance through vertex i is shorter. If di is the current minimum distance from the source vertex to vertex i, and wi,j is the weight of the edge from vertex i to vertex j: dj = min(dj, di + wi,j)" }, { "page_index": 475, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_056.png", "page_index": 475, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:51+07:00" }, "raw_text": "Slide 304 Moore's Shortest-path Algorithm [Figure: vertex i, with current distance di, connected to vertex j, with current distance dj, by an edge of weight wi,j]" }, { "page_index": 476, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_057.png", "page_index": 476, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:54+07:00" }, "raw_text": "Slide 305 Data Structures A first-in-first-out vertex queue is created to hold a list of vertices to examine. Initially, only the source vertex is in the queue. The current shortest distance from the source vertex to vertex i is stored in array element dist[i]. At first, none of these distances is known and the array elements are initialized to infinity."
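A worked instance of the relaxation step dj = min(dj, di + wi,j), using numbers that appear in the traced example that follows (updating F from E when dist[E] = 34, w[E][F] = 17, and dist[F] = 61 so far); the helper name is illustrative only.

def relax(d_j, d_i, w_ij):
    # One application of Moore's relaxation rule.
    return min(d_j, d_i + w_ij)

print(relax(61, 34, 17))   # 51: the distance to F improves via E
print(relax(51, 32, 17))   # 49: and improves again once dist[E] drops to 32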
}, { "page_index": 477, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_058.png", "page_index": 477, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:51:57+07:00" }, "raw_text": "Slide 306 Code Suppose w[i][j] holds the weight of the edge from vertex i and vertex (infinity if no edge). The code could be of the form newdist_j = dist[i]+ w[i]] if (newdist j < dist[j1) dist[j] = newdist_j; When a shorter distance is found to vertex i, vertex i is added to the queue (if not already in the queue), which will cause vertex j to be examined again - Important aspect of this algorithm, which is not present in Dijkstra's algorithm Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 478, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_059.png", "page_index": 478, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:00+07:00" }, "raw_text": "Slide 307 Stages in Searching a Graph Example The initial values of the two key data structures are Vertices to consider Current minimum distances A 0 vertex A B C D E F vertex queue dist[] Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 479, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_060.png", "page_index": 479, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:04+07:00" }, "raw_text": "Slide 308 After examining A to Vertices to consider Current minimum distances B 0 10 vertex A B C D E F vertex_queue dist[ Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 480, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_061.png", "page_index": 480, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:07+07:00" }, "raw_text": "Slide 309 After examining B to F, E,D, and C:: Vertices to consider Current minimum distances E D C 0 10 18 23 34 61 vertex A B C D E F vertex_queue dist[] Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 481, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_062.png", "page_index": 481, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:10+07:00" }, "raw_text": "Slide 310 After examining E to F Vertices to consider Current minimum distances D C 0 10 18 23 34 50 vertex A B C D E F vertex_queue dist[] Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 482, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_063.png", "page_index": 482, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:14+07:00" }, "raw_text": "Slide 311 After examining D to E: Vertices to consider Current minimum distances C E 0 10 18 23 32 50 vertex A B C D E F vertex_queue dist[] Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 483, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_064.png", "page_index": 483, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:18+07:00" }, "raw_text": "Slide 312 After examininq C to D: No changes After examining E (again) to F : Vertices to consider Current minimum distances 0 10 18 23 32 49 vertex A B C D E F vertex queue dist[] No more vertices to consider. We haye the minimum distance from vertex A to each of the other vertices, including the destination vertex, F. Usually, the actual path is also required in addition to the distance. Then the path needs to be stored as distances are recorded. 
The path in our case is A -> B -> D -> E -> F." }, { "page_index": 484, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_065.png", "page_index": 484, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:23+07:00" }, "raw_text": "Slide 313 Sequential Code Let next_vertex() return the next vertex from the vertex queue, or no_vertex if none. Assume that an adjacency matrix is used, named w[][]. while ((i = next_vertex()) != no_vertex) /* while there is a vertex */ for (j = 1; j < n; j++) /* get next edge */ if (w[i][j] != infinity) { /* if an edge */ newdist_j = dist[i] + w[i][j]; if (newdist_j < dist[j]) { dist[j] = newdist_j; append_queue(j); /* add to queue if not there */ } } /* no more to consider */" }, { "page_index": 485, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_066.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_066.png", "page_index": 485, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:26+07:00" }, "raw_text": "Slide 314 Parallel Implementations Centralized Work Pool The centralized work pool holds the vertex queue, vertex_queue[], as tasks. Each slave takes vertices from the vertex queue and returns new vertices. Since the structure holding the graph weights is fixed, this structure could be copied into each slave, say as a copied adjacency matrix."
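A runnable Python version of the sequential code above may be useful; it follows the slide's FIFO queue and relaxation logic, and adds a predecessor array (an assumption, since the slide only notes that the path "needs to be stored") so the A -> B -> D -> E -> F path can be recovered. The graph is the mountain-climb example with the weights from Slide 300.

from collections import deque
import math

INF = math.inf

def moore(w, source):
    # Moore's single-source shortest-path algorithm: a FIFO vertex
    # queue, dist[] initialized to infinity, and re-queueing whenever
    # a shorter distance is found.
    n = len(w)
    dist = [INF] * n
    pred = [None] * n                 # predecessors, to recover the path
    dist[source] = 0
    queue = deque([source])
    in_queue = [False] * n
    in_queue[source] = True
    while queue:
        i = queue.popleft()
        in_queue[i] = False
        for j in range(n):            # get next edge
            if w[i][j] != INF:        # if an edge
                newdist_j = dist[i] + w[i][j]
                if newdist_j < dist[j]:
                    dist[j] = newdist_j
                    pred[j] = i
                    if not in_queue[j]:      # add to queue if not there
                        queue.append(j)
                        in_queue[j] = True
    return dist, pred

A, B, C, D, E, F = range(6)
w = [[INF] * 6 for _ in range(6)]
for u, v, wt in [(A, B, 10), (B, C, 8), (B, D, 13), (B, E, 24),
                 (B, F, 51), (C, D, 14), (D, E, 9), (E, F, 17)]:
    w[u][v] = wt

dist, pred = moore(w, A)
print(dist)                          # [0, 10, 18, 23, 32, 49] - matches the trace
path, v = [], F
while v is not None:
    path.append('ABCDEF'[v])
    v = pred[v]
print('->'.join(reversed(path)))     # A->B->D->E->F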
}, { "page_index": 486, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_067.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_067.png", "page_index": 486, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:30+07:00" }, "raw_text": "Slide 315 Master while (vertex_queueO != empty) { recv(PANY, source = Pi); /* request task from slave */ v = get_vertex_queue(); send(&v, Pi); / * send next vertex and */ send(&dist, &n, P): /* current dist array */ recv(&j, &dist[j], PAnY, r source = Pi);/* new distance */ append_queue(j, dist[j]); /* append vertex to queue */ /* and update distance array */ }; recv(PANY, source = Pi); / * request task from slave */ send(Pi, termination_tag); /* termination message*/ Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 487, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_068.png", "page_index": 487, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:34+07:00" }, "raw_text": "Slide 316 Slave (process ) send(Pmaster) ; /* send request for task */ / * get vertex number */ if (tag != termination_tag) { recv(&dist, &n, Pmaster); /* and dist array */ for (j = 1; j < n; j++) /* get next edge */ if (w[v][j] != infinity) {* if an edge */ newdist_j = dist[v] + w[v][j]; if (newdist_j < dist[j] { dist[j] = newdist_j; } /* send updated distance */ } } Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 488, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_069.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_069.png", "page_index": 488, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:37+07:00" }, "raw_text": "Slide 317 Decentralized Work Pool Convenient approach is to assign slave process i to search around vertex i only and for it to have the vertex queue entry for vertex i if this exists in the queue. The array dist[1 will also be distributed among the processes so that process i maintains the current minimum distance to vertex i. Process also stores an adjacency matrix/list for vertex i, for the purpose of identifying the edges from vertex i. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 489, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_070.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_070.png", "page_index": 489, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:40+07:00" }, "raw_text": "Slide 318 Search Algorithm VertexA is the first vertex to search. The process assiqned to vertex A is activated. This process will search around its vertex to find distances to connected vertices Distance to process / will be sent to process : for it to compare with its currently stored value and replace if the currently stored value is larger. In this fashion, all minimum distances will be updated during the search. If the contents of d[i] changes, process i will be reactivated to search again. Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 490, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_071.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_071.png", "page_index": 490, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:44+07:00" }, "raw_text": "Slide 319 Distributed graph search Master process Start at source vertex Vertex Vertex w[] w[] New distance dist Vertex w dist Process C New Process A distance Other processes dist Process B Slides for Parallel Proqramming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc. All rights reserved." }, { "page_index": 491, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_072.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_072.png", "page_index": 491, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:47+07:00" }, "raw_text": "Slide 320 Slave (process i) recv(newdist, Pany); if (newdist < dist) { dist = newdist: vertex_queue = TRuE; /* add to queue */ } e1se vertex_queue == FALsE; if (vertex_queue == TRuE)/*start searching around vertex*/ for (j = 1; j < n; j++) /* get next edge */ if (w[j] != infinity) { d = dist + w[j]; send(&d, Pj); /* send distance to proc j */ } Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 492, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_073.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_073.png", "page_index": 492, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:50+07:00" }, "raw_text": "Slide 321 Simplified slave (process i) recv(newdist, PAny); if (newdist < dist) dist = newdist; /* start searching around vertex */ for (j = 1; j < n; j++) /* get next edge */ if (w[j] != infinity) { d = dist + w[j]; send(&d, Pj); /* send distance to proc j */ 2 Slides for Parallel Programming Techniques and Applications Using Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen, Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." }, { "page_index": 493, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_6_d/slide_074.png", "page_index": 493, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:54+07:00" }, "raw_text": "Slide 322 Mechanism necessary to repeat actions and terminate when all processes idle - must cope with messages in transit. Simplest solution Use synchronous message passing, in which a process cannot proceed until the destination has received the message. Process only active after its vertex is placed on queue. Possible for many processes to be inactive, leading to an inefficient solution. Method also impractical for a large graph if one vertex is allocated to each processor. Group of vertices could be allocated to each processor. Slides for Parallel Proqramming Techniques and Applications Usinq Networked Workstations and Parallel Computers by Barry Wilkinson and Michael Allen. Prentice Hall, Upper Saddle River, New Jersey, USA, ISBN 0-13-671710-1. 2002 by Prentice Hall Inc._All rights reserved." 
}, { "page_index": 494, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_001.png", "page_index": 494, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:56+07:00" }, "raw_text": "BIG DATA TRAINING B K TP.HCM Intro to Spark Thoai Nam 01/2021 hpcc.hcmut.edu.vn www.cce.hcmut.edu.vn" }, { "page_index": 495, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_002.png", "page_index": 495, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:52:59+07:00" }, "raw_text": "Content Shared/Distributed memory MapReduce drawbacks Spark OF BK TP.HCM OF 2 HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 496, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_003.png", "page_index": 496, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:03+07:00" }, "raw_text": "Multiprocessor Consists of many fully programmable processors each capable of executing its own program Shared address space architecture Classified into 2 types Uniform Memory Access (UMA) Multiprocessors Non-Uniform Memory Access (NUMA) Multiprocessors OF BK TP.HCM OF COM ENGIN 3 HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 497, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_004.png", "page_index": 497, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:08+07:00" }, "raw_text": "Memory hierarchy Most programs have a high degree of locality in their accesses spatial locality: accessing things nearby previous accesses temporal locality: reusing an item that was previously accessed Memory hierarchy tries to exploit locality to improve average processor control Second Secondary Main Tertiary level storage memory storage cache (Disk) datapath (SRAM) (DRAM) (Disk/Tape) on-chip registers cache (\"Cloud\") Speed 1ns 10ns 100ns 10ms 10sec TECI BK Size KB MB GB TB PB TP.HCM TER OF 4 HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 498, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_005.png", "page_index": 498, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:12+07:00" }, "raw_text": "Traditional Network Programming Message-passing between nodes (MPI, RPC, etc.) Really hard to do at scale: How to split problem across nodes? - Important to consider network and datalocality How to deal withfailures? 
- If a typical server fails every 3 years, a 10,000-node cluster sees 10 faults/day! HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 499, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_006.png", "page_index": 499, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:16+07:00" }, "raw_text": "Data-Parallel Models Restrict the programming interface so that the system can do more automatically: \"Here's an operation, run it on all of the data. I don't care where it runs (you schedule that). In fact, feel free to run it twice on different nodes.\" HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 500, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_007.png", "page_index": 500, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:19+07:00" }, "raw_text": "MapReduce Programming Model MapReduce turned out to be an incredibly useful and widely-deployed framework for processing large amounts of data. However, its design forces programs to comply with its computation model, which is: Map: create <key, value> pairs Shuffle: combine common keys together and partition them to reduce workers Reduce: process each unique key and all of its associated values HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 501, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_008.png", "page_index": 501, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:24+07:00" }, "raw_text": "MapReduce drawbacks Many applications had to run MapReduce over multiple passes to process their data All intermediate data had to be stored back in the file system (GFS at Google, HDFS elsewhere), which tended to be slow since stored data was not just written to disks but also replicated The next MapReduce phase could not start until the previous MapReduce job completed fully MapReduce was also designed to read its data from a distributed file system (GFS/HDFS). In many cases, however, data resides within an SQL database or is streaming in (e.g., activity logs, remote monitoring). HPC Lab & CCE - HCMUT Big Data 2021" }
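A tiny in-memory illustration of the three MapReduce phases named above (map -> shuffle -> reduce), using word count; this is purely a sketch, since a real framework would distribute each phase across workers.

from collections import defaultdict

docs = ["spark beats hadoop", "hadoop and spark"]

# Map: emit (key, value) pairs
pairs = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: bring all values for a common key together, partitioned by key
groups = defaultdict(list)
for k, v in pairs:
    groups[k].append(v)

# Reduce: process each unique key and all of its associated values
counts = {k: sum(vs) for k, vs in groups.items()}
print(counts)   # {'spark': 2, 'beats': 1, 'hadoop': 2, 'and': 1}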
, { "page_index": 502, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_009.png", "page_index": 502, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:28+07:00" }, "raw_text": "MapReduce programmability Most real applications require multiple MR steps: Google indexing pipeline: 21 steps; analytics queries (e.g. count clicks & top K): 2-5 steps; iterative algorithms (e.g. PageRank): 10's of steps Multi-step jobs create spaghetti code: 21 MR steps -> 21 mapper and reducer classes HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 503, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_010.png", "page_index": 503, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:31+07:00" }, "raw_text": "Problems with MapReduce MapReduce use cases showed two major limitations: (1) difficulty of programming directly in MR; (2) performance bottlenecks In short, MapReduce doesn't compose well for large-scale applications Therefore, people built high-level frameworks and specialized systems HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 504, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_011.png", "page_index": 504, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:36+07:00" }, "raw_text": "Specialized Systems Pregel Giraph GraphLab Dremel Drill MapReduce Tez Impala Storm MillWheel S4 General batch processing vs. specialized systems: iterative, interactive, streaming, graph, etc. HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 505, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_012.png", "page_index": 505, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:38+07:00" }, "raw_text": "Spark HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 506, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_013.png", "page_index": 506, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:43+07:00" }, "raw_text": "Spark: A Brief History [Timeline: 2002 MapReduce @ Google; 2004 MapReduce paper; 2006 Hadoop @ Yahoo!; 2008 Hadoop Summit; 2010 Spark paper; 2014 Apache Spark becomes a top-level project] HPC Lab & CCE - HCMUT Big Data 2021
" }, { "page_index": 507, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_014.png", "page_index": 507, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:48+07:00" }, "raw_text": "Spark Summary Highly flexible and general-purpose way of dealing with big data processing needs Does not impose a rigid computation model, and supports a variety of input types Deals with text files, graph data, database queries, and streaming sources, and is not confined to a two-stage processing model Programmers can develop arbitrarily-complex, multi-step data pipelines arranged in an arbitrary directed acyclic graph (DAG) pattern. Programming in Spark involves defining a sequence of transformations and actions Spark has support for a map action and a reduce operation, so it can implement traditional MapReduce operations, but it also supports SQL queries, graph processing, and machine learning Stores its intermediate results in memory, providing for dramatically higher performance. HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 508, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_015.png", "page_index": 508, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:52+07:00" }, "raw_text": "Spark ecosystem [Figure: Spark SQL (SQL queries), Spark Streaming, MLlib (machine learning), and GraphX (graph processing) sit on top of the Spark Core API (Scala, Python, Java, R); the compute engine provides memory management, task scheduling, fault recovery, and interaction with the cluster resource manager, over distributed storage] HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 509, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_016.png", "page_index": 509, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:53:56+07:00" }, "raw_text": "Spark [Figure: selected Big Data activity on Google Trends, Jul 2012 - Jan 2015, comparing Spark, Hive, MapReduce, Cassandra, and HBase] HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 510, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_017.png", "page_index": 510, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:08+07:00" }, "raw_text": "Programmability
[Left: WordCount in 50+ lines of Java MapReduce - the code is illegible in the scan. Right: WordCount in 3 lines of Spark: val f = sc.textFile(inputPath); val w = f.flatMap(l => l.split(\" \")).map(word => (word, 1)).cache(); w.reduceByKey(_ + _).saveAsTextFile(outputPath)] HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 511, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_018.png", "page_index": 511, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:11+07:00" }, "raw_text": "Performance Time to sort 100 TB: 2013 record - Hadoop, 2100 machines, 72 minutes; 2014 record - Spark, 207 machines, 23 minutes HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 512, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_019.png", "page_index": 512, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:16+07:00" }, "raw_text": "RDD: Core Abstraction Write programs in terms of distributed datasets and operations on them Resilient Distributed Datasets: collections of objects spread across a cluster, stored in RAM or on disk; built through parallel transformations; automatically rebuilt on failure Operations: transformations (e.g. map, filter, groupBy) and actions (e.g. count, collect, save) HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 513, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_020.png", "page_index": 513, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:19+07:00" }, "raw_text": "RDD Resilient Distributed Datasets are the primary abstraction in Spark - a fault-tolerant collection of elements that can be operated on in parallel Two types: parallelized collections - take an existing single-node collection and parallelize it;
Hadoop datasets: files on HDFS or other compatible storage HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 514, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_021.png", "page_index": 514, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:24+07:00" }, "raw_text": "RDD: Core Abstraction An application that uses Spark identifies data sources and the operations on that data. The main application, called the driver program, is linked with the Spark API, which creates a SparkContext (the heart of the Spark system, coordinating all processing activity). This SparkContext in the driver program connects to a Spark cluster manager. The cluster manager is responsible for allocating worker nodes, launching executors on them, and keeping track of their status Each worker node runs one or more executors. An executor is a process that runs an instance of a Java Virtual Machine (JVM) When each executor is launched by the manager, it establishes a connection back to the driver program The executor runs tasks on behalf of a specific SparkContext (application) and keeps related data in memory or disk storage A task is a transformation or action; the executor remains running for the duration of the driver program. HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 515, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_022.png", "page_index": 515, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:30+07:00" }, "raw_text": "RDD: Core Abstraction Each worker node runs one or more executors. An executor is a process that runs an instance of a Java Virtual Machine (JVM) When each executor is launched by the manager, it establishes a connection back to the driver program The executor runs tasks on behalf of a specific SparkContext (application) and keeps related data in memory or disk storage A task is a transformation or action. The executor remains running for the duration of the application. This provides a performance advantage over the MapReduce approach since new tasks can be started very quickly The executor also maintains a cache, which stores frequently-used data in memory instead of having to store it to a disk-based file as the MapReduce framework does The driver goes through the user's program, which consists of actions and transformations on data, and converts it into a series of tasks. The driver then sends tasks to the executors that registered with it A task is application code that runs in the executor on a Java Virtual Machine (JVM) and can be written in languages such as Scala, Java, Python, Clojure, and R. It is transmitted as a jar file to an executor, which then runs it.
22 HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 516, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_023.png", "page_index": 516, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:36+07:00" }, "raw_text": "RDD Data in Spark is a collection of Resilient Distributed Datasets (RDDs). This is often a huge collection of stuff. Think of an individual RDD as a table in a database or a structured file. Input data is organized into RDDs, which will often be partitioned across many computers. RDDs can be created in three ways: (1) They can be present as any file stored in HDFS or any other storage system supported in Hadoop. This includes Amazon S3 (a key-value server, similar in design to Dynamo), HBase (Hadoop's version of Bigtable), and Cassandra (a no-SQL eventually-consistent database). This data is created by other services, such as event streams, text logs, or a database. For instance, the results of a specific query can be treated as an RDD. A list of files in a specific directory can also be an RDD. OF BK TP.HCM CO 23 HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 517, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_024.png", "page_index": 517, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:39+07:00" }, "raw_text": "RDD (2) RDDs can be streaming sources using the Spark Streaming extension. This could be a stream of events from remote sensors, for example. For fault tolerance, a sliding window is used, where the contents of the stream are buffered in memory for a predefined time interval. BK TP.HCM OF 24 HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 518, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_025.png", "page_index": 518, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:43+07:00" }, "raw_text": "RDD (3) An RDD can be the output of a transformation function. This allows one task to create data that can be consumed by another task and is the way tasks pass data around. For example, one task can filter out unwanted data and generate a set of key value pairs, writing them to an RDD. This RDD will be cached in memory (overflowing to disk if needed) and will be read by a task that reads the output of the task that created the key/value data. 
HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 519, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_026.png", "page_index": 519, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:48+07:00" }, "raw_text": "RDD properties They are immutable. That means their contents cannot be changed. A task can read from an RDD and create a new RDD, but it cannot modify an RDD. The framework magically garbage collects unneeded intermediate RDDs. They are typed. An RDD will have some kind of structure within it, such as a key-value pair or a set of fields. Tasks need to be able to parse RDD streams. They are ordered. An RDD contains a set of elements that can be sorted. In the case of key-value lists, the elements will be sorted by a key. The sorting function can be defined by the programmer, but sorting enables one to implement things like Reduce operations. They are partitioned. Parts of an RDD may be sent to different servers. The default partitioning function is to send a row of data to the server corresponding to hash(key) mod server_count. HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 520, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_027.png", "page_index": 520, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:53+07:00" }, "raw_text": "RDD operations Spark allows two types of operations on RDDs: transformations and actions. Transformations read an RDD and return a new RDD. Example transformations are map, filter, groupByKey, and reduceByKey. Transformations are evaluated lazily, which means they are computed only when some task wants their data (the RDD that they generate). At that point, the driver schedules them for execution. Actions are operations that evaluate and return a new value. When an action is requested on an RDD object, the necessary transformations are computed and the result is returned. Actions tend to be the things that generate the final output needed by a program. Example actions are reduce, grab samples, and write to file. HPC Lab & CCE - HCMUT Big Data 2021" }
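A short, hypothetical PySpark session illustrating the lazy evaluation described above: the filter is only recorded when defined, and nothing runs until an action is requested. It assumes an existing SparkContext named sc, e.g. from an interactive pyspark shell.

rdd = sc.parallelize(range(1000))          # a parallelized collection
evens = rdd.filter(lambda x: x % 2 == 0)   # transformation: new RDD, no work yet
print(evens.count())                       # action: triggers the computation -> 500
print(evens.take(3))                       # another action -> [0, 2, 4]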
, { "page_index": 521, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_028.png", "page_index": 521, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:54:59+07:00" }, "raw_text": "Spark Essentials: Transformations groupByKey([numTasks]): when called on a dataset of (K, V) pairs, returns a dataset of (K, Seq[V]) pairs. reduceByKey(func, [numTasks]): when called on a dataset of (K, V) pairs, returns a dataset of (K, V) pairs where the values for each key are aggregated using the given reduce function. sortByKey([ascending], [numTasks]): when called on a dataset of (K, V) pairs where K implements Ordered, returns a dataset of (K, V) pairs sorted by keys in ascending or descending order, as specified in the boolean ascending argument. join(otherDataset, [numTasks]): when called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs with all pairs of elements for each key. cogroup(otherDataset, [numTasks]): when called on datasets of type (K, V) and (K, W), returns a dataset of (K, Seq[V], Seq[W]) tuples - also called groupWith. cartesian(otherDataset): when called on datasets of types T and U, returns a dataset of (T, U) pairs (all pairs of elements). HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 522, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_029.png", "page_index": 522, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:06+07:00" }, "raw_text": "Spark Essentials: Actions reduce(func): aggregate the elements of the dataset using a function func (which takes two arguments and returns one); func should also be commutative and associative so that it can be computed correctly in parallel. collect(): return all the elements of the dataset as an array at the driver program - usually useful after a filter or other operation that returns a sufficiently small subset of the data. count(): return the number of elements in the dataset. first(): return the first element of the dataset - similar to take(1). take(n): return an array with the first n elements of the dataset - currently not executed in parallel; instead the driver program computes all the elements. takeSample(withReplacement, fraction, seed): return an array with a random sample of num elements of the dataset, with or without replacement, using the given random number generator seed. HPC Lab & CCE - HCMUT Big Data 2021" }
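A small, hypothetical PySpark session exercising two of the pair-RDD transformations in the table above; it again assumes an existing SparkContext sc, and element order in collect() output may vary.

clicks = sc.parallelize([("home", 1), ("cart", 1), ("home", 1)])
names = sc.parallelize([("home", "Home page"), ("cart", "Basket")])

totals = clicks.reduceByKey(lambda a, b: a + b)   # values aggregated per key
print(totals.collect())                # [('home', 2), ('cart', 1)]
print(totals.join(names).collect())    # [('home', (2, 'Home page')), ('cart', (1, 'Basket'))]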
The appropriate RDD connector determines how to read data. For example, RDDs can be the result of a query in a Cassandra database, and new RDDs can be written to Cassandra tables. Alternatively, RDDs can be read from HDFS files or written to an HBase table. 30 HPC Lab & CSE - HCMUT Big Data 2021" }, { "page_index": 524, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_031.png", "page_index": 524, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:13+07:00" }, "raw_text": "Fault tolerance For each RDD, the driver tracks the sequence of transformations used to create it. That means every RDD knows which tasks were needed to create it. If any RDD is lost (e.g., a task that created it died), the driver can ask the task that generated it to recreate it. The driver maintains the entire dependency graph, so this recreation may end up being a chain of transformation tasks going back to the original data. 31 HPC Lab & CSE - HCMUT Big Data 2021" }, { "page_index": 525, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_032.png", "page_index": 525, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:16+07:00" }, "raw_text": "Working with RDDs textFile = sc.textFile(\"someFile.txt\") RDD 32 HPC Lab & CSE - HCMUT Big Data 2021" }, { "page_index": 526, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_033.png", "page_index": 526, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:19+07:00" }, "raw_text": "Working with RDDs textFile = sc.textFile(\"someFile.txt\") RDD Transformations linesWithSpark = textFile.filter(lambda line: \"Spark\" in line) 33 HPC Lab & CSE - HCMUT Big Data 2021" }, { "page_index": 527, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_034.png", "page_index": 527, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:23+07:00" }, "raw_text": "Working with RDDs textFile = sc.textFile(\"someFile.txt\") RDD Action Value Transformations linesWithSpark.count() 74 linesWithSpark.first() # Apache Spark linesWithSpark = textFile.filter(lambda line: \"Spark\" in line) 34 HPC Lab & CSE - HCMUT Big Data 2021" }, { "page_index": 528, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_035.png", "page_index": 528, "language": "en",
"ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:25+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns HPC Lab & CCE - HCMUT Big Data 2021 35" }, { "page_index": 529, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_036.png", "page_index": 529, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:28+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns Worker Driver Worker Worker HPC Lab & CCE -HCMUT Big Data 2021 36" }, { "page_index": 530, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_037.png", "page_index": 530, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:31+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns 1ines = spark.textFile(\"hdfs://...\") Worker Driver Worker Worker HPC Lab & CCE - HCMUT Big Data 2021 37" }, { "page_index": 531, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_038.png", "page_index": 531, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:34+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns 1ines = spark.textFile(\"hdfs://...\") Worker Driver Worker Worker HPC Lab & CCE - HCMUT Big Data 2021 38" }, { "page_index": 532, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_039.png", "page_index": 532, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:37+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns lines = spark.textFile(\"hdfs://.:.\" Worker errors = lines.filter(lambda s: s.startswith(\"ERRoR\")) Driver Worker Worker HPC Lab & CCE - HCMUT Big Data 2021 39" }, { "page_index": 533, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_040.png", "page_index": 533, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:41+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns Tines park.textFile(\"hdfs://...\" Worker = 
errors = lines.filter(lambda s: s.startswith(\"ERROR\")) Driver Worker Worker HPC Lab & CSE - HCMUT Big Data 2021 40" }, { "page_index": 534, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_041.png", "page_index": 534, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:45+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for various patterns lines = spark.textFile(\"hdfs://...\") Worker errors = lines.filter(lambda s: s.startswith(\"ERROR\")) messages = errors.map(lambda s: s.split(\"\\t\")[2]) Driver messages.cache() messages.filter(lambda s: \"mysql\" in s).count() Worker Worker HPC Lab & CSE - HCMUT Big Data 2021 41" }, { "page_index": 535, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_042.png", "page_index": 535, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:49+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for various patterns lines = spark.textFile(\"hdfs://...\") Worker errors = lines.filter(lambda s: s.startswith(\"ERROR\")) messages = errors.map(lambda s: s.split(\"\\t\")[2]) Driver messages.cache() Action messages.filter(lambda s: \"mysql\" in s).count() Worker Worker HPC Lab & CSE - HCMUT Big Data 2021 42" }, { "page_index": 536, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_043.png", "page_index": 536, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:54+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for various patterns lines = spark.textFile(\"hdfs://...\") Worker errors = lines.filter(lambda s: s.startswith(\"ERROR\")) messages = errors.map(lambda s: s.split(\"\\t\")[2]) Block 1 Driver messages.cache() messages.filter(lambda s: \"mysql\" in s).count() Worker Worker Block 2 Block 3 HPC Lab & CSE - HCMUT Big Data 2021 43" }, { "page_index": 537, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_044.png", "page_index": 537, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:55:58+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for various patterns lines = spark.textFile(\"hdfs://...\") Worker errors = lines.filter(lambda s: s.startswith(\"ERROR\")) messages = errors.map(lambda s: s.split(\"\\t\")[2]) tasks Block 1 Driver messages.cache() tasks messages.filter(lambda s: \"mysql\" in s).count() tasks Worker Worker Block 2 Block 3 HPC Lab & CSE - HCMUT Big Data 2021 44" }, { "page_index": 538,
"chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_045.png", "page_index": 538, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:03+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns lines = spark.textFile(\"hdfs://.:.\" Worker errors = lines.filter(lambda s: s.startswith(\"ERRoR\")) messages = errors.map(lambda s: s.split(\"t\")[2]) Bock 1 Driver messages.cacheQ Read HDFS Bock messages.filter(lambda s: \"mysql\" in s).countQ Worker Worker Block 2 Read Read HDFS HDFS Bock Bock 3 Bock 45 HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 539, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_046.png", "page_index": 539, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:08+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns Cache 1 lines = spark.textFile(\"hdfs://. : .\") Worker errors = lines.filter(lambda s: s.startswith(\"ERRoR\")) messages = errors.map(lambda s: s.split(\"t\")[2]) Bock 1 Driver messages.cacheO Process & Cache Data messages.filter(lambda s: \"mysql\" in s).countQ Cache 2 Worker Cache 3 Block 2 Worker Process Process & Cache & Cache Bock 3 Data Data HPC Lab & CCE - HCMUT Big Data 2021 46" }, { "page_index": 540, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_047.png", "page_index": 540, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:12+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns Cache 1 lines = spark.textFile(\"hdfs://.:.\" Worker errors = lines.filter(lambda s: s.startswith(\"ERRoR\")) results messages = errors.map(lambda s: s.split(\"t\")[2]) Block 1 Driver messages.cacheQ esults messages.filter(lambda s: \"mysql\" in s).count( Cache 2 results Worker Cache 3 Worker Block 2 Bock 3 HPC Lab & CCE - HCMUT Big Data 2021 47" }, { "page_index": 541, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_048.png", "page_index": 541, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:17+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns Cache 1 lines = spark.textFile(\"hdfs://.:.\") Worker errors = lines.filter(lambda s: s.startswith(\"ERRoR\")) messages = errors.map(lambda s: s.split(\"t\")[2] Bock 1 Driver messages.cacheQ messages.filter(lambda s: \"mysql\" in s).countQ Cache 2 
messages.filter(lambda s: \"php\" in s).count() Worker Cache 3 Worker Block 2 Block 3 HPC Lab & CSE - HCMUT Big Data 2021 48" }, { "page_index": 542, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_049.png", "page_index": 542, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:21+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for various patterns Cache 1 lines = spark.textFile(\"hdfs://...\") Worker errors = lines.filter(lambda s: s.startswith(\"ERROR\")) messages = errors.map(lambda s: s.split(\"\\t\")[2]) tasks Block 1 Driver messages.cache() tasks messages.filter(lambda s: \"mysql\" in s).count() Cache 2 messages.filter(lambda s: \"php\" in s).count() tasks Worker Cache 3 Worker Block 2 Block 3 HPC Lab & CSE - HCMUT Big Data 2021 49" }, { "page_index": 543, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_050.png", "page_index": 543, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:26+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for various patterns Cache 1 lines = spark.textFile(\"hdfs://...\") Worker errors = lines.filter(lambda s: s.startswith(\"ERROR\")) messages = errors.map(lambda s: s.split(\"\\t\")[2]) Block 1 Driver messages.cache() Process from Cache messages.filter(lambda s: \"mysql\" in s).count() Cache 2 messages.filter(lambda s: \"php\" in s).count() Worker Cache 3 Worker Block 2 Process from Cache Process from Cache Block 3 HPC Lab & CSE - HCMUT Big Data 2021 50" }, { "page_index": 544, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_051.png", "page_index": 544, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:31+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for various patterns Cache 1 lines = spark.textFile(\"hdfs://
: .\") Worker errors = lines.filter(lambda s: s.startswith(\"ERRoR\")) results messages = errors.map(lambda s: s.split(\"t\")[2]) Bock 1 Driver messages.cacheQ esults messages.filter(lambda s: \"mysql\" in s).countQ Cache 2 results messages.filter(lambda s: \"php\" in s).count( Worker Cache 3 Worker Block 2 Bock 3 HPC Lab & CCE - HCMUT Big Data 2021 51" }, { "page_index": 545, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_052.png", "page_index": 545, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:36+07:00" }, "raw_text": "Example: Log Mining Load error messages from a log into memory, then interactively search for variouspatterns Cache 1 lines = spark.textFile(\"hdfs://.:.\") Worker errors = lines.filter(lambda s: s.startswith(\"ERRoR\")) messages = errors.map(lambda s: s.split(\"t\")[2] Bock 1 Driver messages.cacheQ messages.filter(lambda s: \"mysql\" in s).count( Cache 2 messages.filter(lambda s: \"php\" in s).count( Worker Cache 3 Cache your data FasterResults Worker Block 2 Full-text search of Wikipedia 60GB on 20 EC2 machines Bock 3 0.5 sec from mem vs. 20s foron-disk HPC Lab & CCE -HCMUT Big Data 2021 52" }, { "page_index": 546, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_053.png", "page_index": 546, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:40+07:00" }, "raw_text": "Language Support Python Standalone Programs Python, Scala, &Java lines = sc.textFile(.:: ines.filter(lambda s: ERROR\" in s).countQ Interactive Shells Python &Scala Scala val lines = sc.textFile(::: Performance lines.filter(x => x.contains(\"ERROR\")).countQ Java &Scala are faster dueto static typing ..but Python isoften fine Java JavaRDD lines = sc.textFile(-:-) ; lines-filter(new Function( Boolean caTl(string s t return s.contains(\"error\") ; HNOL } }.countQ; BK TP.HCM OF ENGIN 53 HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 547, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_054.png", "page_index": 547, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:44+07:00" }, "raw_text": "Expressive APl map reduce OF BK TP.HCM 54 HPC Lab & CCE - HCMUT Big Data 2021" }, { "page_index": 548, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_055.png", "page_index": 548, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:48+07:00" }, "raw_text": "Expressive APl map reduce sample filter count take fold groupBy first sort reduceByKey partitionBy union groupByKey join mapwith cogroup leftouterJoin cross pipe rightouterJoin zip save BK TP.HCM OF 55 
HPC Lab & CSE - HCMUT Big Data 2021" }, { "page_index": 549, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_056.png", "page_index": 549, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:53+07:00" }, "raw_text": "Fault Recovery RDDs track lineage information that can be used to efficiently reconstruct lost partitions. Ex: messages = textFile(...).filter(_.startsWith(\"ERROR\")).map(_.split('\\t')(2)) [Lineage diagram: HDFS File -> filter (func = _.contains(...)) -> Filtered RDD -> map (func = _.split(...)) -> Mapped RDD] 56 HPC Lab & CSE - HCMUT Big Data 2021" }, { "page_index": 550, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_a/slide_057.png", "page_index": 550, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:56:58+07:00" }, "raw_text": "Fault Recovery Results [Chart: per-iteration running time over 10 iterations; normal iterations take about 57-59 s, while the iteration where a failure happens takes 119 s (and the next, 81 s) as lost partitions are recomputed from lineage] 57 HPC Lab & CSE - HCMUT Big Data 2021" }, { "page_index": 551, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_001.png", "page_index": 551, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:00+07:00" }, "raw_text": "MapReduce Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology HPC Lab-CSE-HCMUT 1" }, { "page_index": 552, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_002.png", "page_index": 552, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:02+07:00" }, "raw_text": "Ref - MapReduce algorithm design, Jimmy Lin HPC Lab-CSE-HCMUT 2" }, { "page_index": 553, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_003.png", "page_index": 553, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:08+07:00" }, "raw_text": "How much data? JPMorganChase processes 20 PB a day (2008). Google: 150 PB on 50k+ servers, crawls 20B web pages a day (2012), running 15k apps (6/2011). eBay: tens of PB of data, 75B DB calls per day (6/2012). Wayback Machine: 240B web pages archived, 5 PB (1/2013). Facebook: >100 PB of user data, +500 TB/day (8/2012). CERN LHC: 15 PB a year. Amazon Web Services S3: 449B objects, peak 290k requests/second (7/2011), 1T objects (6/2012). LSST: 6-10 PB a year (2015). SKA: 0.3-1.5 EB per year (2020). 640K ought to be enough for anybody.
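A toy Python simulation of the lineage-based recovery idea on the Fault Recovery slide above: if a partition of a derived dataset is lost, it is recomputed by replaying the recorded transformations over the corresponding source partition (all names and data here are illustrative, not Spark API):

# Lineage recorded as a chain of (name, function) steps over partitioned data.
source = [["ERROR\tdb\tmysql down", "INFO\tweb\tok"],          # partition 0
          ["ERROR\tweb\tphp fatal", "WARN\tdb\tslow query"]]   # partition 1

lineage = [
    ("filter", lambda part: [s for s in part if s.startswith("ERROR")]),
    ("map",    lambda part: [s.split("\t")[2] for s in part]),
]

def compute(partition):
    # Replay every transformation in the lineage over one source partition.
    for _name, fn in lineage:
        partition = fn(partition)
    return partition

messages = [compute(p) for p in source]   # [['mysql down'], ['php fatal']]
messages[1] = None                        # simulate losing partition 1 (worker died)
messages[1] = compute(source[1])          # driver recomputes it from lineage
print(messages)                           # [['mysql down'], ['php fatal']]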
HPC Lab-CSE-HCMUT 3" }, { "page_index": 554, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_004.png", "page_index": 554, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:15+07:00" }, "raw_text": "No data like data! more s/knowledge/data/g; 1.00 0.95 0.44 +0.51BP2 +0.15BP/x2 0.42 3.K 0.90 +0.39BP/x2 +0.56BP/x2 +0.70BP/x2 0.85 rp sel 0.38 +0.62BP/x2 5 target KN E 0.80 tldcnews KN 0.36 twebnews KN target SB +0.66BP/x2 tldcnews SB 0.75 0.34 +webnews SB +web SB 10 100 1000 10000 100000 1e+06 0.70 LM training data size in million tokens 1 10 Millions of Words HPC Lab-CSE-HCMUT 4 Banko and Brill.ACL2001 Brants et al..EMNLP2007" }, { "page_index": 555, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_005.png", "page_index": 555, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:20+07:00" }, "raw_text": "MapReduce HPC Lab-CSE-HCMl 5 Source: Google" }, { "page_index": 556, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_006.png", "page_index": 556, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:24+07:00" }, "raw_text": "Typical Big Data Problem o Iterate over a large number of records 01 Shuffle and sort intermediate results o Aggregate intermediate results Reduce o Generate final output Key idea: provide a functional abstraction for these two operations HPC Lab-CSE-HCMUT 6 (Dean and Ghemawat,OSDI 2004" }, { "page_index": 557, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_007.png", "page_index": 557, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:27+07:00" }, "raw_text": "MapReduce: A Real World Analogy Coins Deposit ? 
Deposit HPC Lab-CSE-HCMUT 7" }, { "page_index": 558, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_008.png", "page_index": 558, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:30+07:00" }, "raw_text": "MapReduce: A Real World Analogy Coins Deposit Deposit Coins Counting Machine HPC Lab-CSE-HCMUT 8" }, { "page_index": 559, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_009.png", "page_index": 559, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:33+07:00" }, "raw_text": "MapReduce: A Real World Analogy Coins Deposit Deposit Mapper: Categorize coins by their face values Reducer: Count the coins in each face value in parallel HPC Lab-CSE-HCMUT" }, { "page_index": 560, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_010.png", "page_index": 560, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:36+07:00" }, "raw_text": "MapReduce Programmers specify two functions: map (k1, v1) -> list(k2, v2) reduce (k2, list(v2)) -> list(v3) (All values with the same key are sent to the same reducer) The execution framework handles everything else... HPC Lab-CSE-HCMUT 10" }, { "page_index": 561, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_011.png", "page_index": 561, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:41+07:00" }, "raw_text": "[Diagram: four map tasks turn inputs into (key, value) pairs such as (a, 1), (b, 2), (c, 3), ...; Shuffle and Sort: aggregate values by keys; three reduce tasks consume the grouped values] HPC Lab-CSE-HCMUT 11" }, { "page_index": 562, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_012.png", "page_index": 562, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:44+07:00" }, "raw_text": "MapReduce Programmers specify two functions: Map (k1, v1) -> list(k2, v2) Reduce (k2, list(v2)) -> list(v3) (All values with the same key are sent to the same reducer) The execution framework handles everything else...
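The signatures above can be made concrete with a toy, single-process MapReduce runner; this is a sketch of the programming model only (the function names are illustrative), with the shuffle implemented as a dict keyed by k2 and keys handed to the reducer in sorted order:

from collections import defaultdict

def run_mapreduce(map_fn, reduce_fn, inputs):
    """inputs: iterable of (k1, v1) records. Returns a list of (k3, v3)."""
    # Map phase: each record may emit any number of (k2, v2) pairs.
    groups = defaultdict(list)
    for k1, v1 in inputs:
        for k2, v2 in map_fn(k1, v1):
            groups[k2].append(v2)          # shuffle: group values by key
    # Reduce phase: keys arrive in sorted order, as the framework guarantees.
    output = []
    for k2 in sorted(groups):
        output.extend(reduce_fn(k2, groups[k2]))
    return output

# Example: maximum temperature per city.
records = [("r1", ("hcmc", 33)), ("r2", ("hanoi", 28)), ("r3", ("hcmc", 35))]
mapper = lambda rid, rec: [rec]                      # emit (city, temp)
reducer = lambda city, temps: [(city, max(temps))]
print(run_mapreduce(mapper, reducer, records))       # [('hanoi', 28), ('hcmc', 35)]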
HPC Lab-CSE-HCMUT 12" }, { "page_index": 563, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_013.png", "page_index": 563, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:48+07:00" }, "raw_text": "MapReduce runtime\" Handles scheduling Assigns workers to map and reduce tasks Handles \"data distribution - Moves processes to data Handles synchronization Gathers, sorts, and shuffles intermediate data Handles errors and faults Detects worker failures and restarts Everything happens on top of a distributed file system HPC Lab-CSE-HCMUT 13" }, { "page_index": 564, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_014.png", "page_index": 564, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:50+07:00" }, "raw_text": "Synchronization & ordering Barrier between map and reduce phases But intermediate data can be copied over as soon as mappers finish Keys arrive at each reducer in sorted order No enforced ordering across reducers HPC Lab-CSE-HCMUT 14" }, { "page_index": 565, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_015.png", "page_index": 565, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:57:55+07:00" }, "raw_text": "MapReduce Programmers specify two functions: Map (k1,v1) ->* Reduce (k2,list (v2)) -> list (v3) (All values with the same key are sent to the same reducer) The execution framework handles everything else.. 
Not quite... usually, programmers also specify: partition (k2, number of partitions) -> partition for k2 - Often a simple hash of the key, e.g., hash(k2) mod n - Divides up the key space for parallel reduce operations. combine (k2, list(v2)) -> list(k2, v2) - Mini-reducers that run in memory after the map phase - Used as an optimization to reduce network traffic HPC Lab-CSE-HCMUT 15" }, { "page_index": 566, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_016.png", "page_index": 566, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:00+07:00" }, "raw_text": "[Diagram: four map tasks emit (key, value) pairs; a combine step aggregates locally on each mapper; a partition step assigns each key to a reducer; Shuffle and Sort: aggregate values by keys; three reduce tasks] HPC Lab-CSE-HCMUT 16" }, { "page_index": 567, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_017.png", "page_index": 567, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:04+07:00" }, "raw_text": "What's the big deal? Developers need the right level of abstraction - Moving beyond the von Neumann architecture - We need better programming models. Abstractions hide low-level details from the developers - No more race conditions, lock contention, etc. MapReduce separates the what from the how - The developer specifies the computation that needs to be performed - The execution framework (\"runtime\") handles actual execution HPC Lab-CSE-HCMUT 17" }, { "page_index": 568, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_018.png", "page_index": 568, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:10+07:00" }, "raw_text": "The data center is the computer? HPC Lab-CSE-HCMUT Source: Google" }, { "page_index": 569, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_019.png", "page_index": 569, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:13+07:00" }, "raw_text": "MapReduce can refer to...
The programming model The execution framework (aka the \"runtime\") The specific implementation HPC Lab-CSE-HCMUT 19" }, { "page_index": 570, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_020.png", "page_index": 570, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:16+07:00" }, "raw_text": "MapReduce Implementations Google has a proprietary implementation in C++ - Bindings in Java, Python. Hadoop is an open-source implementation in Java - Development led by Yahoo, now an Apache project - Used in production at Yahoo, Facebook, Twitter, LinkedIn, Netflix, etc. - The de facto big data processing platform - Rapidly expanding software ecosystem. Lots of custom research implementations - For GPUs, cell processors, etc. hadoop HPC Lab-CSE-HCMUT 20" }, { "page_index": 571, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_021.png", "page_index": 571, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:20+07:00" }, "raw_text": "MapReduce algorithm design The execution framework handles \"everything else\"... - Scheduling: assigns workers to map and reduce tasks - \"Data distribution\": moves processes to data - Synchronization: gathers, sorts, and shuffles intermediate data - Errors and faults: detects worker failures and restarts. Limited control over data and execution flow - All algorithms must be expressed in m, r, c, p (map, reduce, combine, partition). You don't know: - Where mappers and reducers run - When a mapper or reducer begins or finishes - Which input a particular mapper is processing - Which intermediate key a particular reducer is processing HPC Lab-CSE-HCMUT 21" }, { "page_index": 572, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_022.png", "page_index": 572, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:22+07:00" }, "raw_text": "Apache Hadoop HPC Lab-CSE-HCMUT 22" }, { "page_index": 573, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_023.png", "page_index": 573, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:25+07:00" }, "raw_text": "Data volumes: Google Example Analyze 10 billion web pages. Average size of a webpage: 20KB. Size of the collection: 10 billion x 20KB = 200TB. HDD hard disk read bandwidth: 150MB/sec. Time needed to read all web pages (without analyzing them): 200TB / 150MB/sec, i.e. about 1.3 million seconds = more than 15 days. A single node architecture is not adequate 4" }, { "page_index": 574, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file":
"/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_024.png", "page_index": 574, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:28+07:00" }, "raw_text": "Data volumes: Google Example with SSD Analyze 1o billion web pages Average size of a webpage: 2oKB Size of the collection: 1o billion x 2oKBs = 2ooTB SSD hard disk read bandwidth: 55oMB/sec Time needed to read all web pages (without analyzing them): 2 million seconds = more than 4 days A single node architecture is not adequate" }, { "page_index": 575, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_025.png", "page_index": 575, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:31+07:00" }, "raw_text": "Apache Hadoop Scalable fault-tolerant distributed system for Big Data Distributed Data Storage Distributed Data Processing Borrowed concepts/ideas from the systems designed at Google (Google File System for Google's MapReduce) Open source project under the Apache license But there are also many commercial implementations (e.g., Cloudera Hortonworks, MapR) 26" }, { "page_index": 576, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_026.png", "page_index": 576, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:35+07:00" }, "raw_text": "Hadoop History Dec 2oo4 - Google published a paper about GFS July 2oo5 - Nutch uses MapReduce Feb 2oo6 - Hadoop becomes a Lucene subproject Apr 2ooz -Yahoo! runs it on a 1ooo-node cluster Jan 2oo8 - Hadoop becomes an Apache Top Level Project Jul 2oo8 - Hadoop is tested on a 4ooo node cluster Feb 2oo9 - The Yahoo! Search Webmap is a Hadoop application that runs on more than 1o,ooo core Linux cluster June 2oog - Yahoo! 
made available the source code of its production version of Hadoop. In 2010 Facebook claimed that they had the largest Hadoop cluster in the world, with 21 PB of storage. On July 27, 2011 they announced the data had grown to 30 PB 27" }, { "page_index": 577, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_027.png", "page_index": 577, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:38+07:00" }, "raw_text": "Hadoop vs. HPC Hadoop - Designed for data-intensive workloads - Usually, no CPU-demanding/intensive tasks. HPC (high-performance computing) - A supercomputer with a high-level computational capacity - Performance of a supercomputer is measured in floating-point operations per second (FLOPS) - Designed for CPU-intensive tasks - Usually it is used to process \"small\" data sets 30" }, { "page_index": 578, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_028.png", "page_index": 578, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:41+07:00" }, "raw_text": "Hadoop: main components Core components of Hadoop: Distributed Big Data Processing Infrastructure based on the MapReduce programming paradigm - Provides a high-level abstraction view: programmers do not need to care about task scheduling and synchronization - Fault-tolerant: node and task failures are automatically managed by the Hadoop system. HDFS (Hadoop Distributed File System) - High-availability distributed storage - Fault-tolerant 31" }, { "page_index": 579, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_029.png", "page_index": 579, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:43+07:00" }, "raw_text": "HDFS (Hadoop Distributed File System) HPC Lab-CSE-HCMUT 29" }, { "page_index": 580, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_030.png", "page_index": 580, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:46+07:00" }, "raw_text": "HDFS HDFS is a distributed file system that is fault tolerant, scalable and extremely easy to expand. HDFS is the primary distributed storage for Hadoop applications. HDFS provides interfaces for applications to move themselves closer to data. HDFS is designed to 'just work'; however, a working knowledge helps in diagnostics and improvements HPC Lab-CSE-HCMUT 30" }, { "page_index": 581, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_031.png", "page_index": 581,
"language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:51+07:00" }, "raw_text": "HDFS: a distributed file system Switch Switch Switch Switch CPU CPU CPU CPU Mem Mem Mem Mem Disk Disk Disk Disk C6 C. C. C C C HDFS C6 C, C CS C, CS Server1 Server2 ServerN-1 Server N Rack 1 Rack ... Rack M Example with number of replicas per chunk = 2 HPC Lab-CSE-HCMUT 31" }, { "page_index": 582, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_032.png", "page_index": 582, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:55+07:00" }, "raw_text": "HDFS - Data Organization Each file written into HDFS is split into data blocks Each block is stored on one or more nodes Each copy of the block is called replica Block placement policy First replica is placed on the local node Second replica is placed in a different rack Third replica is placed in the same rack as the second replica HPC Lab-CSE-HCMUT 32" }, { "page_index": 583, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_033.png", "page_index": 583, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:58:58+07:00" }, "raw_text": "architecture (1) HDFS HDFS namenode Application /foo/bar File namespace HDFS Client bock 3df2 HDFS datanode HDFS datanode Linux file system Linux file system HPC Lab-CSE-HCMUT 33" }, { "page_index": 584, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_034.png", "page_index": 584, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:04+07:00" }, "raw_text": "architecture (2) HDES There are two (and a half) fslmage types of machines in a Metadata(name,replicas,block id HDFS cluster /users/pkothuri/data/part0, r:3,{1,3,5} /users/pkothuri/data/part1,r:2,{2,4} Secondary HDFS Client Name Node Name Node NameNode is the heart of an HDFS namespace backup filesystem, it maintains and Replication, balanging, Heartbeatsetc manages the file system metadata. 
E.g., what blocks make up a file, and on which DataNodes those blocks are stored. The DataNodes are where HDFS stores the actual data; there are usually quite a few of these, each with local disks. [Diagram: HDFS client, NameNode, Secondary NameNode, and a row of DataNodes; replication, balancing, heartbeats, etc. flow between NameNode and DataNodes] HPC Lab-CSE-HCMUT 34" }, { "page_index": 585, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_035.png", "page_index": 585, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:09+07:00" }, "raw_text": "Read operation in HDFS [Diagram: 1. the client opens the file via DistributedFileSystem; 2. it gets block locations from the NameNode over RPC (addresses of the block's locations on nodes 1..n); 3.-6. it then read()s packets of blocks directly from DataNode 1..DataNode n through an FSDataInputStream, and finally close()s the stream] HPC Lab-CSE-HCMUT 35" }, { "page_index": 586, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_036.png", "page_index": 586, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:13+07:00" }, "raw_text": "Write operation in HDFS [Diagram: 1.-2. the client create()s the file via DistributedFileSystem, which calls the NameNode; 3. it write()s through an FSDataOutputStream; 4.-5. data packets flow down a pipeline of DataNodes (datanode 1 -> 2 -> 3) while ack packets flow back; 6.-7. close() on the stream, then complete() is reported to the NameNode] HPC Lab-CSE-HCMUT 36" }, { "page_index": 587, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_037.png", "page_index": 587, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:18+07:00" }, "raw_text": "Unique features of HDFS HDFS also has a bunch of unique features that make it ideal for distributed systems: Failure tolerant - data is duplicated across multiple DataNodes to protect against machine failures. The default is a replication factor of 3 (every block is stored on three machines). Scalability - data transfers happen directly with the DataNodes, so your read/write capacity scales fairly well with the number of DataNodes. Space - need more disk space?
Just add more DataNodes and re-balance. Industry standard - other distributed applications are built on top of HDFS (HBase, MapReduce). HDFS is designed to process large data sets with write-once-read-many access; it is not for low-latency access HPC Lab-CSE-HCMUT 37" }, { "page_index": 588, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_038.png", "page_index": 588, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:21+07:00" }, "raw_text": "MapReduce & HDFS [Diagram: the namenode runs the namenode daemon; the job submission node runs the jobtracker; each slave node runs a tasktracker and a datanode daemon on top of its Linux file system] HPC Lab-CSE-HCMUT 38" }, { "page_index": 589, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_039.png", "page_index": 589, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:24+07:00" }, "raw_text": "Algorithm & programming HPC Lab-CSE-HCMUT 39" }, { "page_index": 590, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_040.png", "page_index": 590, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:28+07:00" }, "raw_text": "MapReduce Example: Word Count Input / Split / Map / Shuffle & Sort / Reduce / Output [Diagram: the input \"Deer Beer River / Car Car River / Deer Car Beer\" is split by line; each mapper emits (word, 1) pairs; shuffle/sort groups the pairs by word; the reducers emit (Beer, 2), (Car, 3), (Deer, 2), (River, 2)] HPC Lab-CSE-HCMUT 40" }, { "page_index": 591, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_041.png", "page_index": 591, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:35+07:00" }, "raw_text": "MapReduce Example: Word Count [Same word-count diagram as above] Q: What are the key and value pairs of Map and Reduce?
Map: Key = word, Value = 1. Reduce: Key = word, Value = aggregated count HPC Lab-CSE-HCMUT 41" }, { "page_index": 592, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_042.png", "page_index": 592, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:38+07:00" }, "raw_text": "Word Count: baseline class MAPPER: method MAP(docid a, doc d): for all term t in doc d do EMIT(term t, count 1). class REDUCER: method REDUCE(term t, counts [c1, c2, ...]): sum <- 0; for all count c in counts [c1, c2, ...] do sum <- sum + c; EMIT(term t, count sum) HPC Lab-CSE-HCMUT 42" }, { "page_index": 593, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_043.png", "page_index": 593, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:44+07:00" }, "raw_text": "MapReduce Example: Word Count [Same word-count diagram as above] Q: Do you see any place we can improve the efficiency? Local aggregation at the mapper will be able to improve MapReduce efficiency. HPC Lab-CSE-HCMUT 43" }, { "page_index": 594, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_044.png", "page_index": 594, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:48+07:00" }, "raw_text": "MapReduce: Combiner Combiner: do a local aggregation/combine task at the mapper [Example: (Car,1) and (Car,2) combine locally to (Car,3); (Car,1) and (River,1) pass through]. Q: What are the benefits of using a combiner? - Reduce the memory/disk requirements of map tasks - Reduce network traffic. Q: Can we remove the reduce function? - No, the reducer still needs to process records with the same key but from different mappers. Q: How would you implement a combiner? It is the same as the Reducer!
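The baseline pseudocode translates almost directly into Python; in this sketch a tiny in-memory dict stands in for the framework's shuffle and sort (data and helper names are illustrative):

from collections import defaultdict

def wc_map(docid, doc):
    return [(term, 1) for term in doc.split()]   # EMIT(term, 1) per term

def wc_reduce(term, counts):
    return (term, sum(counts))                   # EMIT(term, sum of counts)

docs = [("d1", "deer beer river"), ("d2", "car car river"), ("d3", "deer car beer")]

groups = defaultdict(list)                       # shuffle & sort stand-in
for docid, doc in docs:
    for term, one in wc_map(docid, doc):
        groups[term].append(one)

print([wc_reduce(t, cs) for t, cs in sorted(groups.items())])
# [('beer', 2), ('car', 3), ('deer', 2), ('river', 2)]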
HPC Lab-CSE-HCMUT 44" }, { "page_index": 595, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_045.png", "page_index": 595, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:51+07:00" }, "raw_text": "Shuffle and sort Mapper intermediate files (on disk) merged spills (on disk) Combiner Reducer circular buffer (in memory) Combiner other reducers spills (on disk) other mappers HPC Lab-CSE-HCMUT 45" }, { "page_index": 596, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_046.png", "page_index": 596, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:55+07:00" }, "raw_text": "Preserving : state Mapper object Reducer object one object per task state state setup APl initialization hook setup one call per input key-value pair map reduce one call per intermediate key cleanup API cleanup hook close HPC Lab-CSE-HCMUT 46" }, { "page_index": 597, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_047.png", "page_index": 597, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T08:59:57+07:00" }, "raw_text": "Implementation don't Don't unnecessarily create objects - Object creation is costly - Garbage collection is costly Don't buffer objects - Processes have limited heap size (remember, commodity machines) - May work for small datasets, but won't scale! HPC Lab-CSE-HCMUT 47" }, { "page_index": 598, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_048.png", "page_index": 598, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:01+07:00" }, "raw_text": "Word Count: version 1 1: class MAPPER 2: method Map(docid a,doc d) 3: H - neW ASSOCIATIVEARRAY 4: for all term t E doc d do 5: H{t}- H{t}+1 > Tally counts for entire document 6: for all term t E H do 7: EMIT(term t,count H{t}) HPC Lab-CSE-HCMUT 48" }, { "page_index": 599, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_049.png", "page_index": 599, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:06+07:00" }, "raw_text": "Word Count: version 2 1: class MAPPER 2: method INITIALIZE 3: H neW ASSOCIATIVEARRAY method MAP(docid a,doc d) 4: 5: for all term t E doc d do H{t}-H{t}+1 6: input key-value pairs! > Tally counts across documents 7: method CLOsE 8: for all term t E H do 9: EMIT(term t,count H{t}) Are combiners still need? 
HPC Lab-CSE-HCMUT 49" }, { "page_index": 600, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_050.png", "page_index": 600, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:09+07:00" }, "raw_text": "Design pattern for local aggregation \"In-mapper combining' * - Fold the functionality of the combiner into the mapper by preserving state across multiple map calls Advantages - Speed - Why is this faster than actual combiners? Disadvantages - Explicit memory management required - Potential for order-dependent bugs HPC Lab-CSE-HCMUT 50" }, { "page_index": 601, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_051.png", "page_index": 601, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:12+07:00" }, "raw_text": "Combiner design Combiners and reducers share same method signature - Sometimes, reducers can serve as combiners Often, not.. Remember: combiner are optional optimizations - Should not affect algorithm correctness May be run 0, 1, or multiple times Example: find average of integers associated with the same key HPC Lab-CSE-HCMUT 51" }, { "page_index": 602, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_052.png", "page_index": 602, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:16+07:00" }, "raw_text": "Computing the Mean: version 1 1: class MAPPER 2: method MAP(string t,integer r) 3: EMIT(string t, integer r) 1: class REDUCER method REDUCE(string t, integers [r1,T2,.. .] 2: 3: sum 0 4: cnt 0 5: for all integer r E integers [r1, T2, . . .] do 6: sum sum+T 7: cnt - cnt + 1 8: Tavg - sum/cnt EMIT(string t, integer Tavg) 9: Why can't we use Reducer as Combiner? HPC Lab-CSE-HCMUT 52" }, { "page_index": 603, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_053.png", "page_index": 603, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:21+07:00" }, "raw_text": "Computing the Mean: version 2 1: class MAPPER 2: method MaP(string t, integer r) 3: EMIT(string t, integer r) 1: class COMBINER 2: method CoMBINE(string t, integers [1,T2,.. .] 3: sum 0 4: cnt 0 5: for all integer r e integers [r1, T2,.. .] do 6: sumsum+r 7: cnt - cnt +1 8: EMIT(string t, pair (sum, cnt)) > Separate sum and count 1: class REDUCER 2: method REDUCE(string t, pairs [(s1, c1), (s2, c2) ...]) 3: sum 0 4: cnt 0 5: for all pair (s, c) E pairs [(s1, c1), (s2, c2) . . .] do 6: sum sum +s 7: cnt cnt + c 8: Tavg sum/cnt 9: EMIT(string t, integer Tavg) Why doesn't this work? 
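The answer to "Why can't we use Reducer as Combiner?" is that the mean does not compose over partial results, and a combiner may run 0, 1, or many times; version 2 additionally changes the value type between mapper output and reducer input. A tiny Python check of the first point, with made-up numbers:

```python
# Why the mean reducer cannot double as a combiner: averaging partial
# averages is not the same as averaging all values (toy numbers).
r1 = [100, 90]     # values seen by one map task
r2 = [80]          # values seen by another

avg_of_avgs = (sum(r1)/len(r1) + sum(r2)/len(r2)) / 2   # (95 + 80)/2 = 87.5
true_avg    = sum(r1 + r2) / len(r1 + r2)               # 270/3 = 90.0
print(avg_of_avgs, true_avg)   # 87.5 90.0 -- not equal
```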
HPC Lab-CSE-HCMUT 53" }, { "page_index": 604, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_054.png", "page_index": 604, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:27+07:00" }, "raw_text": "Computing the Mean: version 3 1: class MAPPER 2: method Map(string t, integer r) 3: EMIT(string t, pair (r,1)) 1: class COMBINER 2: method CoMBINE(string t,pairs [(s1, c1), (s2, c) .. .]) 3: sum 0 4: cnt 0 5: for all pair (s,c) E pairs [(s1, c1), (s2, c2) . . .] do 6: sumsum+s 7: cnt - cnt+c 8: EMIT(string t, pair (sum, cnt)) 1: class REDUCER 2: method REDUCE(string t,pairs [(s1, c1), (s2, c2) .. .]) 3: sum 0 4: cnt 0 5: for all pair (s, c) E pairs [(s1, c1), (s2, c2) . ..] do 6: sum+sum+s 7: cnt cnt+c 8: Tavg - sum/cnt 9: EMIT(string t,pair (ravg,cnt)) Fixed? HPC Lab-CSE-HCMUT 54" }, { "page_index": 605, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_055.png", "page_index": 605, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:30+07:00" }, "raw_text": "Computing the Mean: version 4 l: class MAPPER 2: method INITIALIZE 3: S- neW ASSOCIATIVEARRAY 4: C neW ASSOCIATIVEARRAY method MAP(string t, integer r) 5: S{t}-S{t}+r 6: C{t} - C{t}+1 7: 8: method CLosE 9: for all term t E S do 10: EMIT(term t,pair (S{t},C{t} Are combiners still need? HPC Lab-CSE-HCMUT 55" }, { "page_index": 606, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_056.png", "page_index": 606, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:33+07:00" }, "raw_text": "Word Count & sorting New Goal: output all words sorted by their frequencies (total counts) in a document. Question: How would you adopt the basic word count program to solve it? Solution: - Sort words by their counts in the reducer - Problem: what happens if we have more than one reducer? HPC Lab-CSE-HCMUT 56" }, { "page_index": 607, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_057.png", "page_index": 607, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:37+07:00" }, "raw_text": "Word Count & sorting New Goal: output all words sorted by their frequencies (total counts) in a document. Question: How would you adopt the basic word count program to solve it? Solution: Do two rounds of MapReduce - In the 2nd round, take the output of WordCount as input but switch key and value pair! Leverage the sorting capability of shuffle/sort to do the global sorting! 
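A compact sketch of the two-round idea, simulating the framework's shuffle/sort with sorted(); the function names are illustrative:

```python
# Two-round sketch: round 1 = word count; round 2 swaps (word, count)
# to (count, word) so the framework's shuffle/sort orders by frequency.
from collections import Counter

def round1_word_count(doc):
    return Counter(doc.split())                   # word -> total count

def round2_sort_by_count(word_counts):
    # Map: emit (count, word); shuffle/sort orders by key (the count);
    # the reducer is the identity.
    pairs = [(count, word) for word, count in word_counts.items()]
    return sorted(pairs, reverse=True)

wc = round1_word_count("deer beer river car car river beer beer")
print(round2_sort_by_count(wc))
# [(3, 'beer'), (2, 'river'), (2, 'car'), (1, 'deer')]
```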
HPC Lab-CSE-HCMUT 57" }, { "page_index": 608, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_058.png", "page_index": 608, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:40+07:00" }, "raw_text": "Word Count & top K words New Goal: output the top K words sorted by their frequencies (total counts) in a document. Question: How would you adopt the basic word count program to solve it? Solution: Use the solution of previous problem and only grab the top K in the final output - Problem: is there a more efficient way to do it? HPC Lab-CSE-HCMUT 58" }, { "page_index": 609, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_059.png", "page_index": 609, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:43+07:00" }, "raw_text": "Word Count & top K words New Goal: output the top K words sorted by their frequencies (total counts) in a document. Question: How would you adopt the basic word count program to solve it? Solution: - Add a sort function to the reducer in the first round and only output the top K words - Intuition: the global top K must be a local top K in any reducer! HPC Lab-CSE-HCMUT 59" }, { "page_index": 610, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_060.png", "page_index": 610, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:47+07:00" }, "raw_text": "MapReduce In-class Exercise Problem: Find the maximum monthly temperature for each year from weather reports Input: A set of records with format as: - (200707,100), (200706,90 - (200508, 90), (200607,100 - (200708, 80), (200606,80) Question: write down the Map and Reduce function to solve this problem - Assume we split the input by line HPC Lab-CSE-HCMUT 60" }, { "page_index": 611, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_061.png", "page_index": 611, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:50+07:00" }, "raw_text": "Mapper and Reducer of Max Temperature Map(key, value){ // key: line number / value: tuples in a line for each tuple t in value: Combiner is the same Emit(t->year, t->temperature); as Reducer Reduce(key, list of values)} / key: year //list of values: a list of monthly temperature int max temp = -100; for each v in values: max_temp= max(v, max_temp); Emit(key, max_temp);} HPC Lab-CSE-HCMUT 61" }, { "page_index": 612, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": 
"/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_062.png", "page_index": 612, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:54+07:00" }, "raw_text": "MapReduce Example: Max Temperature 200707,100), (200706,90) Input 200508, 90), (200607,100) 200708, 80), (200606,80) Map 2005, 90),(2006,100 2007, 80),(2006, 80) 2007,100), (2007,90 Combine (2007,100) 2005,90), (2006,100 2007, 80),(2006, 80) Shuttle/Sort 2005,[90] 2006,[100, 80] 2007,[100, 80] Reduce (2005,90) 2006,100) (2007,100) HPC Lab-CSE-HCMUT 62" }, { "page_index": 613, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_063.png", "page_index": 613, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:00:57+07:00" }, "raw_text": "MapReduce In-class Exercise Key-Value Pair of Map and Reduce: - Map: (year, temperature) - Reduce: (year, maximum temperature of the year) Question: How to use the above Map Reduce program (that contains the combiner) with slight changes to find the average monthly temperature of the year? HPC Lab-CSE-HCMUT 63" }, { "page_index": 614, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_064.png", "page_index": 614, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:00+07:00" }, "raw_text": "Mapper and Reducer of Average Temperature Map(key, value){ // key: line number / value: tuples in a line for each tuple t in value: Emit(t->year, t->temperature);} Combiner is the same Reduce(key, list of values){ as Reducer / key: year // list of values: a list of monthly temperatures int total temp = 0 for each v in values: total temp= total_temp+v; Emit(key, total_temp/size_of(values));} HPC Lab-CSE-HCMUT 64" }, { "page_index": 615, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_065.png", "page_index": 615, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:05+07:00" }, "raw_text": "MapReduce Example: Average Temperature 200707,100), (200706,90) Input Real average of 200508, 90, (200607,100) 2007:90 (200708, 80),(200606,80) Map 2005, 90), (2006,100 2007, 80,(2006,80 2007,100), (2007,90 Combine (2007,95) 2005, 90, (2006,100 2007, 80), (2006,80 Shuttle/Sort 2005,[90] 2006,[100, 80] 2007,[95,80] Reduce (2005,90) (2006,90) (2007,87.5) HPC Lab'CSE-HCMUT 65" }, { "page_index": 616, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_066.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_066.png", "page_index": 616, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:08+07:00" }, "raw_text": "MapReduce In-class Exercise The problem is with the combiner! 
Here is a simple counterexample: - (2007,100), (2007,90) -> (2007,95); (2007,80) -> (2007,80) - Average of the above is: (2007,87.5) - However, the real average is: (2007,90) However, we can do a small trick to get around this - Mapper: (2007,100), (2007,90) -> (2007,<190,2>); (2007,80) -> (2007,<80,1>) - Reducer: (2007,<270,3>) -> (2007,90) HPC Lab-CSE-HCMUT 66" }, { "page_index": 617, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_067.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_067.png", "page_index": 617, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:13+07:00" }, "raw_text": "MapReduce Example: Average Temperature Input: (200707,100), (200706,90) | (200508,90), (200607,100) | (200708,80), (200606,80) Map: (2007,100), (2007,90) | (2005,90), (2006,100) | (2007,80), (2006,80) Combine: (2007,<190,2>) | (2005,<90,1>), (2006,<100,1>) | (2007,<80,1>), (2006,<80,1>) Shuffle/Sort: (2005,[<90,1>]) (2006,[<100,1>,<80,1>]) (2007,[<190,2>,<80,1>]) Reduce: (2005,90) (2006,90) (2007,90) HPC Lab-CSE-HCMUT 67" }, { "page_index": 618, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_068.png", "page_index": 618, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:18+07:00" }, "raw_text": "Mapper and Reducer of Average Temperature Map(key, value){ // key: line number // value: tuples in a line for each tuple t in value: Emit(t->year, t->temperature);} Combine(key, list of values){ // key: year // list of values: a list of monthly temperatures int total_temp = 0; for each v in values: total_temp = total_temp + v; Emit(key, <total_temp, size_of(values)>);} Reduce(key, list of values){ // key: year // list of values: a list of <sum, count> tuples int total_temp = 0; int total_count = 0; for each v in values: total_temp = total_temp + v->sum; total_count = total_count + v->count; Emit(key, total_temp/total_count);} HPC Lab-CSE-HCMUT 68" }, { "page_index": 619, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_069.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_069.png", "page_index": 619, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:21+07:00" }, "raw_text": "MapReduce In-class Exercise Functions that can use a combiner are called distributive: - Distributive: Min/Max(), Sum(), Count(), TopK() - Non-distributive: Mean(), Median(), Rank() Gray, Jim*, et al. "Data cube: A relational aggregation operator generalizing group-by, cross-tab, and sub-totals." Data Mining and Knowledge Discovery 1.1 (1997): 29-53.
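The <sum, count> trick above in runnable form; a single-process Python sketch where one combiner pass runs per input split, with illustrative names:

```python
# Mapper emits (year, (temp, 1)); combiner and reducer both add pairs
# componentwise; only the final reducer divides. Names are mine.
from collections import defaultdict

def map_avg(records):
    return [(ym[:4], (temp, 1)) for ym, temp in records]

def add_pairs(pairs):
    return (sum(s for s, _ in pairs), sum(c for _, c in pairs))

def combine_avg(mapped):
    # local aggregation per map task: still emits (year, (sum, count))
    local = defaultdict(list)
    for year, pair in mapped:
        local[year].append(pair)
    return [(year, add_pairs(ps)) for year, ps in local.items()]

def reduce_avg(year, pairs):
    s, c = add_pairs(pairs)
    return (year, s / c)

splits = [[("200707", 100), ("200706", 90)],
          [("200508", 90), ("200607", 100)],
          [("200708", 80), ("200606", 80)]]
groups = defaultdict(list)
for split in splits:
    for year, pair in combine_avg(map_avg(split)):
        groups[year].append(pair)
print(sorted(reduce_avg(y, ps) for y, ps in groups.items()))
# [('2005', 90.0), ('2006', 90.0), ('2007', 90.0)] -- correct averages
```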
*Jim Gray received the Turing Award in 1998 HPC Lab-CSE-HCMUT 69" }, { "page_index": 620, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_070.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_070.png", "page_index": 620, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:25+07:00" }, "raw_text": "Map Reduce Problems Discussion Problem 1: Find Word Length Distribution Statement: Given a set of documents, use Map-Reduce to find the length distribution of all words contained in the documents Question: - What are the Mapper and Reducer Functions? Example: for the text "This is a test data for the word length distribution problem", the distribution is 12: 1, 7: 1, 6: 1, 4: 4, 3: 2, 2: 1, 1: 1 HPC Lab-CSE-HCMUT 70" }, { "page_index": 621, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_071.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_071.png", "page_index": 621, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:28+07:00" }, "raw_text": "Mapper and Reducer of Word Length Distribution Map(key, value){ // key: document name // value: words in a document for each word w in value: Emit(length(w), w);} Reduce(key, list of values){ // key: length of a word // list of values: a list of words with the same length Emit(key, size_of(values));} HPC Lab-CSE-HCMUT 71" }, { "page_index": 622, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_072.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_072.png", "page_index": 622, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:31+07:00" }, "raw_text": "Map Reduce Problems Discussion Problem 1: Find Word Length Distribution Mapper and Reducer: - Mapper(document) { Emit (Length(word), word) } - Reducer(output of map) { Emit (Length(word), Size of (List of words at a particular length)) } HPC Lab-CSE-HCMUT 72" }, { "page_index": 623, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_073.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_073.png", "page_index": 623, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:35+07:00" }, "raw_text": "Map Reduce Problems Discussion Problem 2: Indexing & Page Rank Statement: Given a set of web pages, each page has a page rank associated with it, use Map-Reduce to find, for each word, a list of pages (sorted by rank) that contains that word Question: - What are the Mapper and Reducer Functions? Example output: Word 1: [page x1, page x2, ...], Word 2: [page y1, page y2, ...] HPC Lab-CSE-HCMUT 73" },
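A sketch of Problem 1 (word length distribution) before moving on to Problem 2; the sample sentence and the resulting distribution match the slide, while the function names are mine:

```python
# The mapper keys each word by its length; the reducer counts the words
# that share a length.
from collections import defaultdict

def map_word_length(docname, text):
    return [(len(word), word) for word in text.split()]

def reduce_word_length(length, words):
    return (length, len(words))

text = "This is a test data for the word length distribution problem"
groups = defaultdict(list)
for length, word in map_word_length("doc1", text):
    groups[length].append(word)
for length in sorted(groups, reverse=True):
    print(reduce_word_length(length, groups[length]))
# (12, 1) (7, 1) (6, 1) (4, 4) (3, 2) (2, 1) (1, 1) -- as on the slide
```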
{ "page_index": 624, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_074.png", "page_index": 624, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:38+07:00" }, "raw_text": "Page Rank Given page x with inlinks t1,...,tn, where C(ti) is the out-degree of ti, a is the probability of random jump, and N is the total number of nodes in the graph: PR(x) = a*(1/N) + (1-a) * sum over i=1..n of PR(ti)/C(ti) [Diagram: page x with inlinks t1, t2] HPC Lab-CSE-HCMUT 74" }, { "page_index": 625, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_075.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_075.png", "page_index": 625, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:41+07:00" }, "raw_text": "Mapper and Reducer of Indexing and PageRank Map(key, value){ // key: a page // value: words in a page for each word w in value: Emit(w, <page_id, page_rank>);} Reduce(key, list of values){ // key: a word // list of values: a list of pages containing that word sorted_pages = sort(values, page_rank) Emit(key, sorted_pages);} HPC Lab-CSE-HCMUT 75" }, { "page_index": 626, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_076.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_076.png", "page_index": 626, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:44+07:00" }, "raw_text": "Map Reduce Problems Discussion Problem 2: Indexing and Page Rank Mapper and Reducer: - Mapper(page_id, <page content>) { Emit (word, <page_id, page_rank>) } - Reducer(output of map) { Emit (word, List of pages containing the word sorted by their page_ranks)} HPC Lab-CSE-HCMUT 76" }, { "page_index": 627, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_077.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_077.png", "page_index": 627, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:48+07:00" }, "raw_text": "Map Reduce Problems Discussion Problem 3: Find Common Friends Statement: Given a group of people on online social media (e.g., Facebook), each with a list of friends, use Map-Reduce to find the common friends of any two persons who are friends Question: - What are the Mapper and Reducer Functions?
[Screenshot: Facebook mutual friends list] HPC Lab-CSE-HCMUT 77" }, { "page_index": 628, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_078.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_078.png", "page_index": 628, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:52+07:00" }, "raw_text": "Map Reduce Problems Discussion Problem 3: Find Common Friends Input: Simple example: A -> B,C,D; B -> A,C,D; C -> A,B; D -> A,B Output: (A,B) -> C,D; (A,C) -> B; (A,D) -> B; ... [Diagram: friendship graph over A, B, C, D] HPC Lab-CSE-HCMUT 78" }, { "page_index": 629, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_079.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_079.png", "page_index": 629, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:55+07:00" }, "raw_text": "Mapper and Reducer of Common Friends Map(key, value){ // key: person_id // value: the list of friends of the person for each friend f_id in value: Emit(<person_id, f_id> (sorted pair), value);} Reduce(key, list of values){ // key: <person1, person2> friend pair // list of values: a set of friend lists related with the friend pair for v1, v2 in values: common_friends = v1 intersects v2; Emit(key, common_friends);} HPC Lab-CSE-HCMUT 79" }, { "page_index": 630, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_080.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_080.png", "page_index": 630, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:01:58+07:00" }, "raw_text": "Map Reduce Problems Discussion Problem 3: Find Common Friends Mapper and Reducer: - Mapper(friend list of a person) { for each person in the friend list: Emit (<sorted friend pair>, <the friend list>) } - Reducer(output of map) { Emit (<friend pair>, Intersection of the two friend lists (i.e., the ones from the two persons in the pair))} HPC Lab-CSE-HCMUT 80" }, { "page_index": 631, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_081.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_081.png", "page_index": 631, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:02+07:00" }, "raw_text": "Map Reduce Problems Discussion Problem 3: Find Common Friends Mapper and Reducer: Input: A -> B,C,D; B -> A,C,D; C -> A,B; D -> A,B Map: (A,B) -> B,C,D; (A,C) -> B,C,D; (A,D) -> B,C,D; (A,B) -> A,C,D; (B,C) -> A,C,D; (B,D) -> A,C,D; (A,C) -> A,B; (B,C) -> A,B; (A,D) -> A,B; (B,D) -> A,B Reduce: (A,B) -> C,D; (A,C) -> B; (A,D) -> B; (B,C) -> A; (B,D) -> A (Suggest Friends) HPC Lab-CSE-HCMUT 81" }, { "page_index": 632, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_082.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_7_b/slide_082.png", "page_index": 632, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:04+07:00" }, "raw_text": "Enjoy MR and Hadoop HPC Lab-CSE-HCMUT 82" },
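Before moving on to Chapter 8, a single-process Python sketch of the Problem 3 mapper/reducer; the sorted-pair key and the two-list intersection follow the slides, while the rest of the scaffolding is illustrative:

```python
# The mapper emits the sorted pair (person, friend) as key with the
# person's full friend list as value; the reducer intersects the two
# lists that arrive for each pair (friendship is symmetric).
from collections import defaultdict

friends = {"A": {"B", "C", "D"}, "B": {"A", "C", "D"},
           "C": {"A", "B"},      "D": {"A", "B"}}

def map_common(person, friend_list):
    for f in friend_list:
        yield (tuple(sorted((person, f))), friend_list)

groups = defaultdict(list)
for person, flist in friends.items():
    for pair, value in map_common(person, flist):
        groups[pair].append(value)

def reduce_common(pair, lists):
    v1, v2 = lists                 # exactly two lists per friend pair
    return (pair, v1 & v2)

for pair in sorted(groups):
    print(reduce_common(pair, groups[pair]))
# (('A','B'), {'C','D'}), (('A','C'), {'B'}), (('A','D'), {'B'}), ...
```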
{ "page_index": 633, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_001.png", "page_index": 633, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:06+07:00" }, "raw_text": "Speedup Thoai Nam High Performance Computing Lab (HPC Lab) Faculty of Computer Science and Engineering HCMC University of Technology" }, { "page_index": 634, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_002.png", "page_index": 634, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:08+07:00" }, "raw_text": "BK Outline TP.HCM Speedup & Efficiency Amdahl's Law Gustafson's Law Sun & Ni's Law HPC Lab-CSE-HCMUT" }, { "page_index": 635, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_003.png", "page_index": 635, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:11+07:00" }, "raw_text": "Speedup & Efficiency BK TP.HCM Speedup: S = Time(the most efficient sequential algorithm) / Time(parallel algorithm) Efficiency: E = S / N, where N is the number of processors HPC Lab-CSE-HCMUT" }, { "page_index": 636, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_004.png", "page_index": 636, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:17+07:00" }, "raw_text": "Speedup BK TP.HCM The fundamental concept in parallelism: T(1) = time to execute task on a single resource, T(n) = time to execute task on n resources, Speedup = T(1)/T(n) [Plot: speedup vs. number of processors (1 to 64); the linear-speedup diagonal is labeled Utopia, with progressively flatter sublinear curves labeled Excellent, Neat, Good, So-so, Not Good] http://web.eecs.utk.edu/huangj/hpc/hpc_intro.php From Silberschatz, Korth, and Sudarshan, Database Systems Concepts, 4th Ed. HPC Lab-CSE-HCMUT" },
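The two definitions above in code form; a trivial Python helper with made-up timings:

```python
# Speedup S = T(1)/T(N) and efficiency E = S/N, per the definitions above.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

# e.g. a 100 s serial run that takes 16 s on 8 processors:
print(speedup(100, 16))          # 6.25
print(efficiency(100, 16, 8))    # 0.78125 -- sublinear speedup
```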
{ "page_index": 637, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_005.png", "page_index": 637, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:20+07:00" }, "raw_text": "BK TP.HCM The main objective is to produce the results as soon as possible - (ex) video compression, computer graphics, VLSI routing, etc. Implications - Upper-bound is 1/alpha (the inverse of the sequential fraction) - Make the sequential bottleneck as small as possible - Optimize the common case Modified Amdahl's law for fixed problem size including the overhead HPC Lab-CSE-HCMUT" }, { "page_index": 638, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_006.png", "page_index": 638, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:23+07:00" }, "raw_text": "BK TP.HCM [Diagram: one processor runs the sequential part Ts then the parallel part Tp, giving T(1); with N processors P0..P9 the parallel part is divided, giving T(N)] Ts = alpha*T(1) => Tp = (1-alpha)*T(1) T(N) = alpha*T(1) + (1-alpha)*T(1)/N HPC Lab-CSE-HCMUT" }, { "page_index": 639, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_007.png", "page_index": 639, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:26+07:00" }, "raw_text": "BK TP.HCM Speedup = Time(1)/Time(N) Speedup = T(1) / (alpha*T(1) + (1-alpha)*T(1)/N) = 1 / (alpha + (1-alpha)/N) -> 1/alpha as N -> infinity HPC Lab-CSE-HCMUT" }, { "page_index": 640, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_008.png", "page_index": 640, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:29+07:00" }, "raw_text": "Enhanced Amdahl's Law BK TP.HCM The overhead includes parallelism and interaction overheads Speedup = T(1) / (alpha*T(1) + (1-alpha)*T(1)/N + T_overhead) = 1 / (alpha + (1-alpha)/N + T_overhead/T(1)) -> 1 / (alpha + T_overhead/T(1)) as N -> infinity HPC Lab-CSE-HCMUT" },
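The derivation above gives S = 1/(alpha + (1-alpha)/N). A quick Python check that the speedup saturates at 1/alpha (the numbers are illustrative):

```python
# Amdahl's law for a fixed problem size:
# T(N) = alpha*T(1) + (1-alpha)*T(1)/N, so S = 1/(alpha + (1-alpha)/N).
def amdahl_speedup(alpha, n):
    """alpha = serial fraction, n = number of processors."""
    return 1.0 / (alpha + (1.0 - alpha) / n)

for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.05, n), 2))
# Speedup approaches 1/alpha = 20 as n grows (95% parallel portion).
```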
{ "page_index": 641, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_009.png", "page_index": 641, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:35+07:00" }, "raw_text": "Amdahl's Law BK TP.HCM [Plot: Amdahl's Law, speedup vs. number of processors (log scale, up to 65536) for parallel portions of 50%, 75%, 90%, and 95%; each curve saturates at 1/alpha, e.g. at 20 for a 95% parallel portion] Source: Wikipedia HPC Lab-CSE-HCMUT" }, { "page_index": 642, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_010.png", "page_index": 642, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:38+07:00" }, "raw_text": "BK TP.HCM User wants more accurate results within a time limit - Execution time is fixed as the system scales - (ex) FEM (Finite element method) for structural analysis, FDM (Finite difference method) for fluid dynamics Properties of a work metric - Easy to measure - Architecture independent - Easy to model with an analytical expression - No additional experiment to measure the work - The measure of work should scale linearly with the sequential time complexity of the algorithm Time constrained seems to be the most generally viable model! HPC Lab-CSE-HCMUT" }, { "page_index": 643, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_011.png", "page_index": 643, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:42+07:00" }, "raw_text": "BK TP.HCM alpha = Ws / W(N) W(N) = alpha*W(N) + (1-alpha)*W(N) => W(1) = alpha*W(N) + (1-alpha)*W(N)*N [Diagram: scaled workload W(N) with sequential part Ws and parallel part Wp spread over processors P0..P9, and the equivalent sequential workload W(1)] HPC Lab-CSE-HCMUT" }, { "page_index": 644, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_012.png", "page_index": 644, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:45+07:00" }, "raw_text": "Gustafson's Law - Fixed Time BK TP.HCM without overhead Time = Work * k W(N) = W Speedup = T(1)/T(N) = (W(1)*k) / (W(N)*k) = (alpha*W + (1-alpha)*N*W) / W = alpha + (1-alpha)*N HPC Lab-CSE-HCMUT" }, { "page_index": 645, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_013.png", "page_index": 645, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:48+07:00" }, "raw_text": "Gustafson's Law - Fixed Time BK TP.HCM with overhead W(N) = W + Wo Speedup = T(1)/T(N) = (W(1)*k) / (W(N)*k) = (alpha*W + (1-alpha)*N*W) / (W + Wo) = (alpha + (1-alpha)*N) / (1 + Wo/W) HPC Lab-CSE-HCMUT" },
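Gustafson's scaled speedup, including the optional overhead term from the slide, as a small Python helper (the parameter names are mine):

```python
# Gustafson's law (fixed time): S = alpha + (1-alpha)*N, damped by the
# overhead ratio Wo/W when overhead is modeled.
def gustafson_speedup(alpha, n, overhead_ratio=0.0):
    """alpha = serial fraction, n = processors, overhead_ratio = Wo/W."""
    return (alpha + (1.0 - alpha) * n) / (1.0 + overhead_ratio)

for n in (8, 64, 1024):
    print(n, gustafson_speedup(0.05, n))
# Scaled speedup grows linearly in n, unlike Amdahl's fixed-size bound.
```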
{ "page_index": 646, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_014.png", "page_index": 646, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:52+07:00" }, "raw_text": "Gustafson's Law - Fixed Time BK TP.HCM Suppose only a fraction f of a computation was parallelized [Plot: scaled speedup vs. number of processors (0 to 120) for parallelized fractions f = 0.1 through 0.9; each curve is a straight line whose slope grows with f] Source: Wikipedia HPC Lab-CSE-HCMUT" }, { "page_index": 647, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_015.png", "page_index": 647, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:55+07:00" }, "raw_text": "Sun and Ni's Law - Fixed Memory (1) BK TP.HCM Scale the largest possible solution limited by the memory space. Or, fix memory usage per processor - Speedup = Time(1)/Time(N) for the scaled-up problem is not appropriate - For a simple profile, G(N) is the increase of parallel workload as the memory capacity increases N times HPC Lab-CSE-HCMUT" }, { "page_index": 648, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_016.png", "page_index": 648, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:02:58+07:00" }, "raw_text": "BK TP.HCM W = alpha*W + (1-alpha)*W Let M be the memory capacity of a single node N nodes: - the increased memory: N*M - The scaled work: W* = alpha*W + (1-alpha)*W*G(N) Speedup_MC = (alpha + (1-alpha)*G(N)) / (alpha + (1-alpha)*G(N)/N) HPC Lab-CSE-HCMUT" }, { "page_index": 649, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_017.png", "page_index": 649, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:03:03+07:00" }, "raw_text": "BK TP.HCM Definition: A function g is a homomorphism if, for any real number c and variable x, g(cx) = g(c) * g(x) Theorem: with data being shared by all available processors, the simplified memory-bounded speedup is S* = (W1 + g(N)*WN) / (W1 + g(N)*WN/N) = (alpha + (1-alpha)*G(N)) / (alpha + (1-alpha)*G(N)/N), where G(N) = g(N) HPC Lab-CSE-HCMUT" }, { "page_index": 650, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_018.png", "page_index": 650, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:03:08+07:00" }, "raw_text": "Sun and Ni's Law - Fixed Memory (4) BK TP.HCM Proof: Let the memory requirement of WN be M, WN = g(M). M is the memory requirement when 1 node is available.
With N nodes available, the memory capacity will increase to N*M. Using all of the available memory for the scaled parallel portion: WN* = g(N*M) = g(N)*g(M) = g(N)*WN S* = (W1 + WN*) / (W1 + WN*/N) = (W1 + g(N)*WN) / (W1 + g(N)*WN/N) HPC Lab-CSE-HCMUT" }, { "page_index": 651, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_019.png", "page_index": 651, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:03:12+07:00" }, "raw_text": "Speedup BK TP.HCM S* = (W1 + G(N)*WN) / (W1 + G(N)*WN/N) - When the problem size is independent of the system, the problem size is fixed, G(N)=1 => Amdahl's Law. - When memory is increased N times, the workload also increases N times, G(N)=N => Gustafson's Law. - For most scientific and engineering applications, the computation requirement increases faster than the memory requirement, G(N)>N. HPC Lab-CSE-HCMUT" }, { "page_index": 652, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_020.png", "page_index": 652, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:03:16+07:00" }, "raw_text": "Scalability BK TP.HCM Parallelizing a code does not always result in a speedup; sometimes it actually slows the code down! This can be due to a poor choice of algorithm or to poor coding. The best possible speedup is linear, i.e. it is proportional to the number of processors: T(N) = T(1)/N where N = number of processors, T(1) = time for serial run. A code that continues to speed up reasonably close to linearly as the number of processors increases is said to be scalable. Many codes scale up to some number of processors, but adding more processors then brings no improvement. Very few, if any, codes are indefinitely scalable. HPC Lab-CSE-HCMUT" }, { "page_index": 653, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_021.png", "page_index": 653, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:03:20+07:00" }, "raw_text": "Factors That Limit Speedup BK TP.HCM Software overhead Even with a completely equivalent algorithm, software overhead arises in the concurrent implementation (e.g. there may be additional index calculations necessitated by the manner in which data are "split up" among processors), i.e. there are generally more lines of code to be executed in the parallel program than in the sequential program. Load balancing Communication overhead HPC Lab-CSE-HCMUT" },
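The G(N) cases above can be checked numerically; a Python sketch of the memory-bounded speedup that recovers Amdahl's law at G(N)=1 and Gustafson's law at G(N)=N (alpha and N are illustrative values):

```python
# Sun and Ni's memory-bounded speedup from the formula above:
# S = (alpha + (1-alpha)*G(N)) / (alpha + (1-alpha)*G(N)/N).
def sun_ni_speedup(alpha, n, G):
    g = G(n)
    return (alpha + (1 - alpha) * g) / (alpha + (1 - alpha) * g / n)

alpha, n = 0.05, 64
print(sun_ni_speedup(alpha, n, lambda n: 1))      # G(N)=1  -> Amdahl
print(sun_ni_speedup(alpha, n, lambda n: n))      # G(N)=N  -> Gustafson
print(sun_ni_speedup(alpha, n, lambda n: n * n))  # G(N)>N  -> higher still
```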
{ "page_index": 654, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_022.png", "page_index": 654, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:03:23+07:00" }, "raw_text": "Superlinear Speedup BK TP.HCM (In theory, speedup cannot exceed the number of processors.) But in practice superlinear speedup is sometimes observed, that is Sp > p (why?) Reasons for superlinear speedup - Cache effects - Exploratory decomposition HPC Lab-CSE-HCMUT" }, { "page_index": 655, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_023.png", "page_index": 655, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:03:28+07:00" }, "raw_text": "Superlinear Speedup (Cache Effects) Let cache access latency = 2 ns, DRAM access latency = 100 ns. [Diagram: two cores, each with a private cache, sharing DRAM] Suppose we want to solve a problem instance that executes k FLOPs. With 1 core: Suppose the cache hit rate is 80%. If the computation performs 1 FLOP/memory access, then each FLOP will take 2 x 0.8 + 100 x 0.2 = 21.6 ns to execute. With 2 cores: Cache hit rate will improve. (why?) Suppose the cache hit rate is now 90%. Then each FLOP will take 2 x 0.9 + 100 x 0.1 = 11.8 ns to execute. Since now each core will execute only k/2 FLOPs, Speedup S2 = (k x 21.6) / ((k/2) x 11.8) = 3.66 > 2" }, { "page_index": 656, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_024.png", "page_index": 656, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:03:33+07:00" }, "raw_text": "Superlinear Speedup (Due to Exploratory Decomposition) Consider searching an array of 2n unordered elements for a specific element x. Suppose x is located at array location k > n and k is odd. Serial runtime: T1 = k. Parallel running time with n sequential-search processing elements: Tn = 1. [Diagram: the serial search scans A[1]..A[k]; in the parallel search, processors P1..Pn each probe a disjoint pair of cells and one finds x in a single step] Speedup = T1/Tn = k > n: superlinear!" }, { "page_index": 657, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_025.png", "page_index": 657, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:03:37+07:00" }, "raw_text": "Scalable Parallel Algorithms Efficiency: Ep = Sp/p = T1/(p*Tp) - For a fixed problem size (W), Ep decreases as p increases - For a fixed number of processors (p), Ep increases with the problem size W A parallel algorithm is called scalable if its efficiency can be maintained at a fixed value by simultaneously increasing the number of processing elements and the problem size. Scalability reflects a parallel algorithm's ability to utilize increasing processing elements effectively." },
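A quick Python reproduction of the cache-effects arithmetic above; the FLOP count cancels out, as the example claims:

```python
# Average memory access time per FLOP at each hit rate, and the
# resulting 2-core speedup from the slide's numbers.
def ns_per_flop(hit_rate, cache_ns=2.0, dram_ns=100.0):
    return cache_ns * hit_rate + dram_ns * (1.0 - hit_rate)

t1 = ns_per_flop(0.80)            # 21.6 ns per FLOP with one core
t2 = ns_per_flop(0.90)            # 11.8 ns with two cores (better hit rate)
k = 1_000_000                     # any FLOP count cancels out
s2 = (k * t1) / ((k / 2) * t2)
print(t1, t2, round(s2, 2))       # 21.6 11.8 3.66 -> superlinear (> 2)
```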
}, { "page_index": 658, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3067", "source_file": "/workspace/data/converted/CO3067_Parallel_Computing/Chapter_8/slide_026.png", "page_index": 658, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T09:03:43+07:00" }, "raw_text": "Scalable Parallel Algorithms In order to keep Ep fixed at a constant k, we need T1 = k=>T1= kpTp pTp For the algorithm that adds n numbers using p processing elements: n p So in order to keep Ep fixed at k, we must have: 2k n n = kp =+ 2log p p log p n 1 - k p n p=1 p=4 p=8 p=16 p=32 64 1.0 0.80 0.57 0.33 0.17 192 1.0 0.92 0.80 0.60 0.38 320 1.0 0.95 0.87 0.71 0.50 512 1.0 0.97 0.91 0.80 0.62 Fig: Efficiency for adding n numbers using p processing elements Source: Grama et al.,\"Introduction to Parallel Computing\",2d Edition" } ] }