{ "course": "Data_Mining", "course_id": "CO3029", "schema_version": "material.v1", "slides": [ { "page_index": 0, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_001.png", "page_index": 0, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:06:13+07:00" }, "raw_text": "Data Mining: (3rd ed.) - Chapter 1 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University @2011 Han, Kamber & Pei. All rights reserved. 1" }, { "page_index": 1, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_002.png", "page_index": 1, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:06:17+07:00" }, "raw_text": "Chapter 1. ] Introduction Why Data Mining? What Is Data Mining? A Multi-Dimensional View of Data Mining What Kind of Data Can Be Mined? What Kinds of Patterns Can Be Mined? What Technology Are Used? What Kind of Applications Are Targeted? Major Issues in Data Mining A Brief History of Data Mining and Data Mining Society Summary 2" }, { "page_index": 2, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_003.png", "page_index": 2, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:06:22+07:00" }, "raw_text": "Why Data Mining? - The Explosive Growth of Data: from terabytes to petabytes Data collection and data availability Automated data collection tools, database systems, Web computerized society Major sources of abundant data Business: Web, e-commerce, transactions, stocks, ... Science: Remote sensing, bioinformatics, scientific simulation, ... Society and everyone: news, digital cameras, YouTube We are drowning in data, but starving for knowledge! \"Necessity is the mother of invention\"-Data mining-Automated analysis of massive data sets 3" }, { "page_index": 3, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_004.png", "page_index": 3, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:06:28+07:00" }, "raw_text": "Evolution of Sciences Before 1600, empirical science 1600-1950s, theoretical science Each discipline has grown a theoretica/component. Theoretical models often motivate experiments and generalize our understanding. 1950s-1990s, computational science Over the last 50 years, most disciplines have grown a third, computationa/branch (e.g. empirical, theoretical, and computational ecology, or physics, or linguistics.) Computational Science traditionally meant simulation. It grew out of our inability to find closed-form solutions for complex mathematical models. 
1990-now, data science The flood of data from new scientific instruments and simulations The ability to economically store and manage petabytes of data online The Internet and computing Grid that makes all these archives universally accessible Scientific info. management, acquisition, organization, query, and visualization tasks scale almost linearly with data volumes. Data mining is a major new challenge! Jim Gray and Alex Szalay, The World Wide Telescope: An Archetype for Online Science, Comm. ACM, 45(11): 50-54, Nov. 2002 4" },
{ "page_index": 4, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_005.png", "page_index": 4, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:06:34+07:00" }, "raw_text": "Evolution of Database Technology 1960s: Data collection, database creation, IMS and network DBMS 1970s: Relational data model, relational DBMS implementation 1980s: RDBMS, advanced data models (extended-relational, OO, deductive, etc.) Application-oriented DBMS (spatial, scientific, engineering, etc.) 1990s: Data mining, data warehousing, multimedia databases, and Web databases 2000s: Stream data management and mining Data mining and its applications Web technology (XML, data integration) and global information systems 5" },
{ "page_index": 5, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_006.png", "page_index": 5, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:06:37+07:00" }, "raw_text": "Chapter 1. Introduction Why Data Mining? What Is Data Mining? A Multi-Dimensional View of Data Mining What Kind of Data Can Be Mined? What Kinds of Patterns Can Be Mined? What Technologies Are Used? What Kinds of Applications Are Targeted? Major Issues in Data Mining A Brief History of Data Mining and Data Mining Society Summary 6" },
{ "page_index": 6, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_007.png", "page_index": 6, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:06:42+07:00" }, "raw_text": "What Is Data Mining? Data mining (knowledge discovery from data) Extraction of interesting (non-trivial, implicit, previously unknown and potentially useful) patterns or knowledge from huge amounts of data Data mining: a misnomer? Alternative names Knowledge discovery (mining) in databases (KDD), knowledge extraction, data/pattern analysis, data archeology, data dredging, information harvesting, business intelligence, etc. Watch out: Is everything \"data mining\"?
Simple search and query processing (Deductive) expert systems 7" },
{ "page_index": 7, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_008.png", "page_index": 7, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:06:47+07:00" }, "raw_text": "Knowledge Discovery (KDD) Process This is a view from typical database systems and data warehousing communities Data mining plays an essential role in the knowledge discovery process (figure, bottom to top: Databases -> Data Integration -> Data Warehouse -> Selection of Task-relevant Data -> Data Mining -> Pattern Evaluation -> Knowledge) 8" },
{ "page_index": 8, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_009.png", "page_index": 8, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:06:52+07:00" }, "raw_text": "Example: A Web Mining Framework Web mining usually involves Data cleaning Data integration from multiple sources Warehousing the data Data cube construction Data selection for data mining Data mining Presentation of the mining results Patterns and knowledge to be used or stored into a knowledge base 9" },
{ "page_index": 9, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_010.png", "page_index": 9, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:06:57+07:00" }, "raw_text": "Data Mining in Business Intelligence (pyramid, bottom to top, with increasing potential to support business decisions): Data Sources (paper, files, Web documents, scientific experiments, database systems) - DBA; Data Preprocessing/Integration, Data Warehouses - DBA; Data Exploration (statistical summary, querying, and reporting) - Data Analyst; Data Mining (information discovery) - Data Analyst; Data Presentation (visualization techniques) - Business Analyst; Decision Making - End User 10" },
{ "page_index": 10, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_011.png", "page_index": 10, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:00+07:00" }, "raw_text": "Example: Mining vs. Data Exploration Business intelligence view Warehouse, data cube, reporting but not much mining Business objects vs. data mining tools Supply chain example: tools Data presentation Exploration 11" },
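The KDD steps on the slides above (cleaning, integration, selection, mining, evaluation) compose naturally as a pipeline. A minimal sketch, assuming hypothetical dict-shaped records and made-up helper names (none of this comes from the slides):

```python
# Illustrative KDD pipeline: cleaning -> integration -> selection -> mining ->
# evaluation. All helper names and data are hypothetical.
from collections import Counter

def clean(records):
    # Data cleaning: drop records with missing fields (one simple policy).
    return [r for r in records if None not in r.values()]

def integrate(*sources):
    # Data integration: merge records from multiple sources.
    return [r for src in sources for r in src]

def select(records, fields):
    # Data selection: keep only the task-relevant attributes.
    return [{f: r[f] for f in fields} for r in records]

def mine(records):
    # Stand-in for "data mining": count attribute-value combinations.
    return Counter(tuple(sorted(r.items())) for r in records)

def evaluate(patterns, min_support):
    # Pattern evaluation: keep only patterns with enough support.
    return {p: n for p, n in patterns.items() if n >= min_support}

source_a = [{"item": "beer", "store": "s1"}, {"item": "beer", "store": "s1"}]
source_b = [{"item": "milk", "store": None}]
patterns = evaluate(mine(select(clean(integrate(source_a, source_b)),
                                ["item", "store"])), min_support=2)
print(patterns)  # the (beer, s1) combination survives evaluation
```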
{ "page_index": 11, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_012.png", "page_index": 11, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:07+07:00" }, "raw_text": "KDD Process: A Typical View from ML and Statistics Pipeline: Input Data -> Data Pre-Processing -> Data Mining -> Post-Processing -> Pattern/Information Data Pre-Processing: data integration, normalization, feature selection, dimension reduction Data Mining: pattern discovery, association & correlation, classification, clustering, outlier analysis Post-Processing: pattern evaluation, pattern selection, pattern interpretation, pattern visualization This is a view from typical machine learning and statistics communities 12" },
{ "page_index": 12, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_013.png", "page_index": 12, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:10+07:00" }, "raw_text": "Example: Medical Data Mining Health care & medical data mining often adopt such a view from statistics and machine learning Preprocessing of the data (including feature extraction and dimension reduction) Classification and/or clustering processes Post-processing for presentation 13" },
{ "page_index": 13, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_014.png", "page_index": 13, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:14+07:00" }, "raw_text": "Chapter 1. Introduction Why Data Mining? What Is Data Mining? A Multi-Dimensional View of Data Mining What Kind of Data Can Be Mined? What Kinds of Patterns Can Be Mined? What Technologies Are Used? What Kinds of Applications Are Targeted? Major Issues in Data Mining A Brief History of Data Mining and Data Mining Society Summary 14" },
{ "page_index": 14, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_015.png", "page_index": 14, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:19+07:00" }, "raw_text": "Multi-Dimensional View of Data Mining Data to be mined Database data (extended-relational, object-oriented, heterogeneous, legacy), data warehouse, transactional data, stream, spatiotemporal, time-series, sequence, text and web, multi-media, graphs & social and information networks Knowledge to be mined (or: Data mining functions) Characterization, discrimination, association, classification, clustering, trend/deviation, outlier analysis, etc. Descriptive vs. predictive data mining Multiple/integrated functions and mining at multiple levels Techniques utilized Data-intensive, data warehouse (OLAP), machine learning, statistics, pattern recognition, visualization, high-performance, etc.
Applications adapted Retail, telecommunication, banking, fraud analysis, bio-data mining, stock market analysis, text mining, Web mining, etc. 15" },
{ "page_index": 15, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_016.png", "page_index": 15, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:24+07:00" }, "raw_text": "Chapter 1. Introduction Why Data Mining? What Is Data Mining? A Multi-Dimensional View of Data Mining What Kind of Data Can Be Mined? What Kinds of Patterns Can Be Mined? What Technologies Are Used? What Kinds of Applications Are Targeted? Major Issues in Data Mining A Brief History of Data Mining and Data Mining Society Summary 16" },
{ "page_index": 16, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_017.png", "page_index": 16, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:28+07:00" }, "raw_text": "Data Mining: On What Kinds of Data? Database-oriented data sets and applications Relational database, data warehouse, transactional database Advanced data sets and advanced applications Data streams and sensor data Time-series data, temporal data, sequence data (incl. bio-sequences) Structured data, graphs, social networks and multi-linked data Object-relational databases Heterogeneous databases and legacy databases Spatial data and spatiotemporal data Multimedia databases Text databases The World-Wide Web 17" },
{ "page_index": 17, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_018.png", "page_index": 17, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:32+07:00" }, "raw_text": "Chapter 1. Introduction Why Data Mining? What Is Data Mining? A Multi-Dimensional View of Data Mining What Kind of Data Can Be Mined? What Kinds of Patterns Can Be Mined? What Technologies Are Used? What Kinds of Applications Are Targeted? Major Issues in Data Mining A Brief History of Data Mining and Data Mining Society Summary 18" },
{ "page_index": 18, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_019.png", "page_index": 18, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:37+07:00" }, "raw_text": "Data Mining Function: (1) Generalization Information integration and data warehouse construction Data cleaning, transformation, integration, and multidimensional data model Data cube technology Scalable methods for computing (i.e., materializing) multidimensional aggregates OLAP (online analytical processing) Multidimensional concept description: Characterization and discrimination Generalize, summarize, and contrast data characteristics, e.g., dry vs. wet region 19" },
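"Materializing multidimensional aggregates," as the generalization slide above puts it, amounts to grouped aggregation over chosen dimensions. A minimal sketch with pandas; the column names and values are hypothetical:

```python
# One cuboid of a data cube: total amount by (region, quarter), then roll-ups.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["north", "north", "south", "south"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "amount":  [120.0, 90.0, 200.0, 150.0],
})

by_region_quarter = sales.groupby(["region", "quarter"])["amount"].sum()
by_region = sales.groupby("region")["amount"].sum()   # coarser cuboid
grand_total = sales["amount"].sum()                   # apex of the cube

print(by_region_quarter, by_region, grand_total, sep="\n")
```

An OLAP engine precomputes and stores such cuboids; the groupby calls here just make the aggregation lattice concrete.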
{ "page_index": 19, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_020.png", "page_index": 19, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:42+07:00" }, "raw_text": "Data Mining Function: (2) Association and Correlation Analysis Frequent patterns (or frequent itemsets) What items are frequently purchased together in your Walmart? Association, correlation vs. causality A typical association rule Diaper -> Beer [0.5%, 75%] (support, confidence) Are strongly associated items also strongly correlated? How to mine such patterns and rules efficiently in large datasets? How to use such patterns for classification, clustering, and other applications? 20" },
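To make the [support, confidence] notation on the slide above concrete, here is a minimal sketch that recomputes both measures for a rule like Diaper -> Beer; the transactions are made up:

```python
# Support and confidence of an association rule over a toy transaction set.
transactions = [
    {"diaper", "beer", "milk"},
    {"diaper", "beer"},
    {"diaper", "bread"},
    {"beer", "coke"},
]

def support(itemset, transactions):
    # Fraction of transactions containing every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs, transactions):
    # P(rhs | lhs) = support(lhs union rhs) / support(lhs).
    return support(lhs | rhs, transactions) / support(lhs, transactions)

print(support({"diaper", "beer"}, transactions))       # 0.5
print(confidence({"diaper"}, {"beer"}, transactions))  # 2/3, about 0.67
```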
{ "page_index": 20, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_021.png", "page_index": 20, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:47+07:00" }, "raw_text": "Data Mining Function: (3) Classification Classification and label prediction Construct models (functions) based on some training examples Describe and distinguish classes or concepts for future prediction E.g., classify countries based on (climate), or classify cars based on (gas mileage) Predict some unknown class labels Typical methods Decision trees, naive Bayesian classification, support vector machines, neural networks, rule-based classification, pattern-based classification, logistic regression, ... Typical applications: Credit card fraud detection, direct marketing, classifying stars, diseases, web-pages, ... 21" },
{ "page_index": 21, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_022.png", "page_index": 21, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:50+07:00" }, "raw_text": "Data Mining Function: (4) Cluster Analysis Unsupervised learning (i.e., class label is unknown) Group data to form new categories (i.e., clusters), e.g., cluster houses to find distribution patterns Principle: Maximizing intra-class similarity & minimizing interclass similarity Many methods and applications 22" },
{ "page_index": 22, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_023.png", "page_index": 22, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:54+07:00" }, "raw_text": "Data Mining Function: (5) Outlier Analysis Outlier analysis Outlier: A data object that does not comply with the general behavior of the data Noise or exception? - One person's garbage could be another person's treasure Methods: by-product of clustering or regression analysis, ... Useful in fraud detection, rare events analysis 23" },
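One simple instance of the outlier analysis described above is to flag values that lie far from the mean in standard-deviation units. A sketch with made-up measurements and a made-up threshold, not a method taken from the slides:

```python
# Flag points whose z-score exceeds 3, a common rule of thumb.
from statistics import mean, stdev

values = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0] * 4 + [42.0]
m, s = mean(values), stdev(values)

outliers = [x for x in values if abs(x - m) / s > 3]
print(outliers)  # [42.0] — the one value not complying with the rest
```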
{ "page_index": 23, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_024.png", "page_index": 23, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:07:58+07:00" }, "raw_text": "Time and Ordering: Sequential Pattern, Trend and Evolution Analysis Sequence, trend and evolution analysis Trend, time-series, and deviation analysis: e.g., regression and value prediction Sequential pattern mining e.g., first buy digital camera, then buy large SD memory cards Periodicity analysis Motifs and biological sequence analysis Approximate and consecutive motifs Similarity-based analysis Mining data streams Ordered, time-varying, potentially infinite data streams 24" },
{ "page_index": 24, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_025.png", "page_index": 24, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:04+07:00" }, "raw_text": "Structure and Network Analysis Graph mining Finding frequent subgraphs (e.g., chemical compounds), trees (XML), substructures (web fragments) Information network analysis Social networks: actors (objects, nodes) and relationships (edges) e.g., author networks in CS, terrorist networks Multiple heterogeneous networks - A person could be in multiple information networks: friends, family, classmates, ... Links carry a lot of semantic information: Link mining Web mining Web is a big information network: from PageRank to Google Analysis of Web information networks Web community discovery, opinion mining, usage mining, ... 25" },
{ "page_index": 25, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_026.png", "page_index": 25, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:08+07:00" }, "raw_text": "Evaluation of Knowledge Is all mined knowledge interesting? One can mine a tremendous amount of \"patterns\" and knowledge Some may fit only certain dimension spaces (time, location, ...) Some may not be representative, may be transient, ... Evaluation of mined knowledge -> directly mine only interesting knowledge? Descriptive vs. predictive Coverage Typicality vs. novelty Accuracy Timeliness 26" },
{ "page_index": 26, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_027.png", "page_index": 26, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:14+07:00" }, "raw_text": "Chapter 1. Introduction Why Data Mining? What Is Data Mining? A Multi-Dimensional View of Data Mining What Kind of Data Can Be Mined? What Kinds of Patterns Can Be Mined? What Technologies Are Used? What Kinds of Applications Are Targeted? Major Issues in Data Mining A Brief History of Data Mining and Data Mining Society Summary 27" },
{ "page_index": 27, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_028.png", "page_index": 27, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:17+07:00" }, "raw_text": "Data Mining: Confluence of Multiple Disciplines (figure: Data Mining at the center, surrounded by Machine Learning, Pattern Recognition, Statistics, Visualization, Applications, Database Technology, Algorithms, and High-Performance Computing) 28" },
{ "page_index": 28, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_029.png", "page_index": 28, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:22+07:00" }, "raw_text": "Why Confluence of Multiple Disciplines? Tremendous amount of data Algorithms must be highly scalable to handle terabytes of data High-dimensionality of data Microarray data may have tens of thousands of dimensions High complexity of data Data streams and sensor data Time-series data, temporal data, sequence data Structured data, graphs, social networks and multi-linked data Heterogeneous databases and legacy databases Spatial, spatiotemporal, multimedia, text and Web data Software programs, scientific simulations New and sophisticated applications 29" },
{ "page_index": 29, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_030.png", "page_index": 29, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:26+07:00" }, "raw_text": "Chapter 1. Introduction Why Data Mining? What Is Data Mining? A Multi-Dimensional View of Data Mining What Kind of Data Can Be Mined? What Kinds of Patterns Can Be Mined? What Technologies Are Used? What Kinds of Applications Are Targeted?
Major Issues in Data Mining A Brief History of Data Mining and Data Mining Society Summary 30" },
{ "page_index": 30, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_031.png", "page_index": 30, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:32+07:00" }, "raw_text": "Applications of Data Mining Web page analysis: from web page classification, clustering to PageRank & HITS algorithms Collaborative analysis & recommender systems Basket data analysis to targeted marketing Biological and medical data analysis: classification, cluster analysis (microarray data analysis), biological sequence analysis, biological network analysis Data mining and software engineering (e.g., IEEE Computer, Aug 2009 issue) From major dedicated data mining systems/tools (e.g., SAS, MS SQL Server Analysis Manager, Oracle Data Mining Tools) to invisible data mining 31" },
{ "page_index": 31, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_032.png", "page_index": 31, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:36+07:00" }, "raw_text": "Chapter 1. Introduction Why Data Mining? What Is Data Mining? A Multi-Dimensional View of Data Mining What Kind of Data Can Be Mined? What Kinds of Patterns Can Be Mined? What Technologies Are Used? What Kinds of Applications Are Targeted? Major Issues in Data Mining A Brief History of Data Mining and Data Mining Society Summary 32" },
{ "page_index": 32, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_033.png", "page_index": 32, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:40+07:00" }, "raw_text": "Major Issues in Data Mining (1) Mining Methodology Mining various and new kinds of knowledge Mining knowledge in multi-dimensional space Data mining: An interdisciplinary effort Boosting the power of discovery in a networked environment Handling noise, uncertainty, and incompleteness of data Pattern evaluation and pattern- or constraint-guided mining User Interaction Interactive mining Incorporation of background knowledge Presentation and visualization of data mining results 33" },
{ "page_index": 33, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_034.png", "page_index": 33, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:45+07:00" }, "raw_text": "Major Issues in Data Mining (2) Efficiency and Scalability Efficiency and scalability of data mining algorithms Parallel, distributed, stream, and incremental mining methods Diversity of data types Handling complex types of data Mining dynamic, networked, and global data repositories Data mining and society Social impacts of data mining
Privacy-preserving data mining Invisible data mining 34" },
{ "page_index": 34, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_035.png", "page_index": 34, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:48+07:00" }, "raw_text": "Chapter 1. Introduction Why Data Mining? What Is Data Mining? A Multi-Dimensional View of Data Mining What Kind of Data Can Be Mined? What Kinds of Patterns Can Be Mined? What Technologies Are Used? What Kinds of Applications Are Targeted? Major Issues in Data Mining A Brief History of Data Mining and Data Mining Society Summary 35" },
{ "page_index": 35, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_036.png", "page_index": 35, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:08:54+07:00" }, "raw_text": "A Brief History of Data Mining Society 1989 IJCAI Workshop on Knowledge Discovery in Databases Knowledge Discovery in Databases (G. Piatetsky-Shapiro and W. Frawley, 1991) 1991-1994 Workshops on Knowledge Discovery in Databases Advances in Knowledge Discovery and Data Mining (U. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy, 1996) 1995-1998 International Conferences on Knowledge Discovery in Databases and Data Mining (KDD'95-98) Journal of Data Mining and Knowledge Discovery (1997) ACM SIGKDD conferences since 1998 and SIGKDD Explorations More conferences on data mining PAKDD (1997), PKDD (1997), SIAM-Data Mining (2001), (IEEE) ICDM (2001), etc. ACM Transactions on KDD starting in 2007 36" },
{ "page_index": 36, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_037.png", "page_index": 36, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:00+07:00" }, "raw_text": "Conferences and Journals on Data Mining KDD conferences: ACM SIGKDD Int. Conf. on Knowledge Discovery in Databases and Data Mining (KDD); SIAM Data Mining Conf. (SDM); (IEEE) Int. Conf. on Data Mining (ICDM); European Conf. on Machine Learning and Principles and Practices of Knowledge Discovery and Data Mining (ECML-PKDD); Pacific-Asia Conf. on Knowledge Discovery and Data Mining (PAKDD); Int. Conf. on Web Search and Data Mining (WSDM) Other related conferences: DB conferences: ACM SIGMOD, VLDB, ICDE, EDBT, ICDT, ...; Web and IR conferences: WWW, SIGIR, WSDM; ML conferences: ICML, NIPS; PR conferences: CVPR Journals: Data Mining and Knowledge Discovery (DAMI or DMKD); IEEE Trans. on Knowledge and Data Eng. (TKDE); KDD Explorations; ACM Trans. on KDD 37" },
{ "page_index": 37, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_038.png", "page_index": 37, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:06+07:00" }, "raw_text": "Where to Find References? DBLP, CiteSeer, Google Data mining and KDD (SIGKDD: CDROM) Conferences: ACM-SIGKDD, IEEE-ICDM, SIAM-DM, PKDD, PAKDD, etc. Journals: Data Mining and Knowledge Discovery, KDD Explorations, ACM TKDD Database systems (SIGMOD: ACM SIGMOD Anthology - CD ROM) Conferences: ACM-SIGMOD, ACM-PODS, VLDB, IEEE-ICDE, EDBT, ICDT, DASFAA Journals: IEEE-TKDE, ACM-TODS/TOIS, JIIS, J. ACM, VLDB J., Info. Sys., etc. AI & Machine Learning Conferences: Machine Learning (ML), AAAI, IJCAI, COLT (Learning Theory), CVPR, NIPS, etc. Journals: Machine Learning, Artificial Intelligence, Knowledge and Information Systems, IEEE-PAMI, etc. Web and IR Conferences: SIGIR, WWW, CIKM, etc. Journals: WWW: Internet and Web Information Systems Statistics Conferences: Joint Stat. Meeting, etc. Journals: Annals of Statistics, etc. Visualization Conference proceedings: CHI, ACM-SIGGraph, etc. Journals: IEEE Trans. Visualization and Computer Graphics, etc. 38" },
{ "page_index": 38, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_039.png", "page_index": 38, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:10+07:00" }, "raw_text": "Chapter 1. Introduction Why Data Mining? What Is Data Mining? A Multi-Dimensional View of Data Mining What Kind of Data Can Be Mined? What Kinds of Patterns Can Be Mined? What Technologies Are Used? What Kinds of Applications Are Targeted? Major Issues in Data Mining A Brief History of Data Mining and Data Mining Society Summary 39" },
{ "page_index": 39, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_040.png", "page_index": 39, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:15+07:00" }, "raw_text": "Summary Data mining: Discovering interesting patterns and knowledge from massive amounts of data A natural evolution of database technology, in great demand, with wide applications A KDD process includes data cleaning, data integration, data selection, transformation, data mining, pattern evaluation, and knowledge presentation Mining can be performed on a variety of data Data mining functionalities: characterization, discrimination, association, classification, clustering, outlier and trend analysis, etc.
Data mining technologies and applications Major issues in data mining 40" },
{ "page_index": 40, "chapter_num": 0, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_0/slide_041.png", "page_index": 40, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:22+07:00" }, "raw_text": "Recommended Reference Books S. Chakrabarti. Mining the Web: Statistical Analysis of Hypertext and Semi-Structured Data. Morgan Kaufmann, 2002 R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, 2ed., Wiley-Interscience, 2000 T. Dasu and T. Johnson. Exploratory Data Mining and Data Cleaning. John Wiley & Sons, 2003 U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy. Advances in Knowledge Discovery and Data Mining. AAAI/MIT Press, 1996 U. Fayyad, G. Grinstein, and A. Wierse, Information Visualization in Data Mining and Knowledge Discovery, Morgan Kaufmann, 2001 J. Han and M. Kamber. Data Mining: Concepts and Techniques. Morgan Kaufmann, 3rd ed., 2011 D. J. Hand, H. Mannila, and P. Smyth, Principles of Data Mining, MIT Press, 2001 T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed., Springer-Verlag, 2009 B. Liu, Web Data Mining, Springer 2006 T. M. Mitchell, Machine Learning, McGraw Hill, 1997 G. Piatetsky-Shapiro and W. J. Frawley. Knowledge Discovery in Databases. AAAI/MIT Press, 1991 P.-N. Tan, M. Steinbach and V. Kumar, Introduction to Data Mining, Wiley, 2005 S. M. Weiss and N. Indurkhya, Predictive Data Mining, Morgan Kaufmann, 1998 I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations, Morgan Kaufmann, 2nd ed. 2005 41" },
{ "page_index": 41, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_001.png", "page_index": 41, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:25+07:00" }, "raw_text": "Data Mining: Concepts and Techniques - Chapter 2 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University ©2011 Han, Kamber, and Pei. All rights reserved.
1" }, { "page_index": 42, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_002.png", "page_index": 42, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:29+07:00" }, "raw_text": "Chapter 2: Getting to Know Your Data Data Objects and Attribute Types Basic Statistical Descriptions of Data Data Visualization Measuring Data Similarity and Dissimilarity Summary 2" }, { "page_index": 43, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_003.png", "page_index": 43, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:40+07:00" }, "raw_text": "Types of Data Sets Record Relational records Data matrix, e.g., numerical matrix, game timeout season team coach score ball lost crosstabs Document data: text documents: term- frequency vector Document 1 3 0 5 0 2 6 0 2 0 2 Transaction data Graph and network Document 2 0 7 0 2 1 0 0 3 0 O World Wide Web Document 3 0 1 0 0 1 2 2 0 3 0 Social or information networks Molecular Structures Ordered TID Items Video data: sequence of images 1 Bread, Coke, Milk Temporal data: time-series 2 Beer, Bread Sequential Data: transaction sequences 3 Beer, Coke, Diaper, Milk Genetic sequence data + Beer, Bread, Diaper, Milk Spatial, image and multimedia: 5 Spatial data: maps Coke, Diaper, Milk Image data: Video data: 3" }, { "page_index": 44, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_004.png", "page_index": 44, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:43+07:00" }, "raw_text": "Important Characteristics of Structured Data Dimensionality Curse of dimensionality Sparsity Only presence counts Resolution Patterns depend on the scale Distribution Centrality and dispersion 4" }, { "page_index": 45, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_005.png", "page_index": 45, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:48+07:00" }, "raw_text": "Data Objects Data sets are made up of data objects. A data object represents an entity. Examples sales database: customers, store items, sales medical database: patients, treatments university database: students, professors, courses Also called samples , examples, instances, data points objects, tuples. Data objects are described by attributes. Database rows -> data objects; columns ->attributes. 
5" }, { "page_index": 46, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_006.png", "page_index": 46, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:53+07:00" }, "raw_text": "Attributes Attribute (or dimensions, features, variables) a data field, representing a characteristic or feature of a data object. E.g., customer _ID, name, address Types: Nominal Binary Numeric: quantitative Interval-scaled Ratio-scaled 6" }, { "page_index": 47, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_007.png", "page_index": 47, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:09:58+07:00" }, "raw_text": "Attribute Types Nominal: categories, states, or \"names of things\" Hair_color ={auburn, black, blond, brown, grey, red, white} marital status, occupation, ID numbers, zip codes Binary Nominal attribute with only 2 states (0 and 1) Symmetric binary: both outcomes equally important e.g., gender Asymmetric binary: outcomes not equally important. e.g., medical test (positive vs. negative) Convention: assign 1 to most important outcome (e.g., HIV positive) Ordinal Values have a meaningful order (ranking) but magnitude between successive values is not known. Size = {small, medium, large}, grades, army rankings 7" }, { "page_index": 48, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_008.png", "page_index": 48, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:02+07:00" }, "raw_text": "Numeric Attribute Types Quantity (integer or real-valued) Interval Measured on a scale of egual-sized units Values have order E.g., temperature in C or F- calendar dates No true zero-point Ratio Inherent zero-point We can speak of values as being an order of magnitude larger than the unit of measurement (10 K° is twice as high as 5 K\"). e.g., temperature in Kelvin, length, counts monetary quantities 8" }, { "page_index": 49, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_009.png", "page_index": 49, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:07+07:00" }, "raw_text": "Discrete vs. 
{ "page_index": 49, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_009.png", "page_index": 49, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:07+07:00" }, "raw_text": "Discrete vs. Continuous Attributes Discrete Attribute Has only a finite or countably infinite set of values E.g., zip codes, profession, or the set of words in a collection of documents Sometimes represented as integer variables Note: Binary attributes are a special case of discrete attributes Continuous Attribute Has real numbers as attribute values E.g., temperature, height, or weight Practically, real values can only be measured and represented using a finite number of digits Continuous attributes are typically represented as floating-point variables 9" },
{ "page_index": 50, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_010.png", "page_index": 50, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:10+07:00" }, "raw_text": "Chapter 2: Getting to Know Your Data Data Objects and Attribute Types Basic Statistical Descriptions of Data Data Visualization Measuring Data Similarity and Dissimilarity Summary 10" },
{ "page_index": 51, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_011.png", "page_index": 51, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:15+07:00" }, "raw_text": "Basic Statistical Descriptions of Data Motivation: variation and spread of the data Data dispersion characteristics: median, max, min, quantiles, outliers, variance, etc. Numerical dimensions correspond to sorted intervals Data dispersion: analyzed with multiple granularities of precision Boxplot or quantile analysis on sorted intervals Dispersion analysis on computed measures Folding measures into numerical dimensions Boxplot or quantile analysis on the transformed cube 11" },
{ "page_index": 52, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_012.png", "page_index": 52, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:22+07:00" }, "raw_text": "Mean (algebraic measure) (sample vs. population): \\bar{x} = \\frac{1}{n} \\sum_{i=1}^{n} x_i, \\mu = \\frac{\\sum x}{N} Note: n is sample size and N is population size Weighted arithmetic mean: \\bar{x} = \\frac{\\sum_{i=1}^{n} w_i x_i}{\\sum_{i=1}^{n} w_i} Trimmed mean: chopping extreme values Median: Middle value if odd number of values, or average of the middle two values otherwise Estimated by interpolation (for grouped data): median = L_1 + \\left( \\frac{n/2 - (\\sum freq)_l}{freq_{median}} \\right) \\times width Example grouped frequencies: age 1-5: 200; 6-15: 450; 16-20: 300; 21-50: 1500; 51-80: 700; 81-110: 44 Mode Value that occurs most frequently in the data Unimodal, bimodal, trimodal Empirical formula: mean - mode = 3 \\times (mean - median) 12" },
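The central-tendency measures defined above are all one-liners in the standard library; the sample ages below are made up:

```python
# Mean, median, mode, and a trimmed mean on a small sample.
from statistics import mean, median, mode

ages = [23, 23, 27, 27, 39, 41, 47, 49, 50, 52, 54, 54, 56, 57, 58, 58, 60, 61]

print(mean(ages), median(ages), mode(ages))

# Trimmed mean: chop the extreme values first (here ~10% from each end).
k = len(ages) // 10
print(mean(sorted(ages)[k:len(ages) - k]))
```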
{ "page_index": 53, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_013.png", "page_index": 53, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:28+07:00" }, "raw_text": "Symmetric vs. Skewed Data Median, mean and mode of symmetric, positively and negatively skewed data (figure: symmetric curve with mean = median = mode; positively skewed curve with mode < median < mean; negatively skewed curve with mean < median < mode)" },
{ "page_index": 54, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_014.png", "page_index": 54, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:34+07:00" }, "raw_text": "Measuring the Dispersion of Data Quartiles, outliers and boxplots Quartiles: Q1 (25th percentile), Q3 (75th percentile) Inter-quartile range: IQR = Q3 - Q1 Five number summary: min, Q1, median, Q3, max Boxplot: ends of the box are the quartiles; median is marked; add whiskers, and plot outliers individually Outlier: usually, a value higher/lower than 1.5 x IQR Variance and standard deviation (sample: s, population: σ) Variance (algebraic, scalable computation): s^2 = \\frac{1}{n-1} \\sum_{i=1}^{n} (x_i - \\bar{x})^2 = \\frac{1}{n-1} \\left[ \\sum_{i=1}^{n} x_i^2 - \\frac{1}{n} \\left( \\sum_{i=1}^{n} x_i \\right)^2 \\right], \\sigma^2 = \\frac{1}{N} \\sum_{i=1}^{N} (x_i - \\mu)^2 = \\frac{1}{N} \\sum_{i=1}^{N} x_i^2 - \\mu^2 Standard deviation s (or σ) is the square root of variance s^2 (or σ^2) 14" },
{ "page_index": 55, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_015.png", "page_index": 55, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:44+07:00" }, "raw_text": "Boxplot Analysis (figure: boxplot with lower extreme, lower quartile, median, upper quartile, and upper extreme on a 10-100 scale) Five-number summary of a distribution: Minimum, Q1, Median, Q3, Maximum Boxplot Data is represented with a box The ends of the box are at the first and third quartiles, i.e., the height of the box is IQR The median is marked by a line within the box Whiskers: two lines outside the box extended to Minimum and Maximum Outliers: points beyond a specified outlier threshold, plotted individually 15" },
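The five-number summary and the 1.5 x IQR outlier fence from the two slides above can be computed directly; the values below are made up:

```python
# Five-number summary and IQR-based outlier fences, as a boxplot would use.
import numpy as np

x = np.array([6, 47, 49, 15, 42, 41, 7, 39, 43, 40, 36, 95.0])

q1, med, q3 = np.percentile(x, [25, 50, 75])
iqr = q3 - q1
lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr   # whisker limits

print(x.min(), q1, med, q3, x.max())      # five-number summary
print(x[(x < lo) | (x > hi)])             # points plotted individually
```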
{ "page_index": 56, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_016.png", "page_index": 56, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:51+07:00" }, "raw_text": "Visualization of Data Dispersion: 3-D Boxplots (figure: 3-D boxplots over cost and revenue, value axis 0.00-5000.00)" },
{ "page_index": 57, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_017.png", "page_index": 57, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:10:56+07:00" }, "raw_text": "Properties of Normal Distribution Curve The normal (distribution) curve From μ-σ to μ+σ: contains about 68% of the measurements (μ: mean, σ: standard deviation) From μ-2σ to μ+2σ: contains about 95% of it From μ-3σ to μ+3σ: contains about 99.7% of it (figure: three normal curves shading the 68%, 95%, and 99.7% regions over -3σ..+3σ) 17" },
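The 68-95-99.7 figures stated above are easy to sanity-check by simulation; this sketch just draws standard normals and counts:

```python
# Empirical check of the 68-95-99.7 rule with simulated normal data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)

for k in (1, 2, 3):
    frac = np.mean(np.abs(x) <= k)        # fraction within k sigma of the mean
    print(f"within {k} sigma: {frac:.3f}")  # ~0.683, ~0.954, ~0.997
```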
{ "page_index": 58, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_018.png", "page_index": 58, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:01+07:00" }, "raw_text": "Graphic Displays of Basic Statistical Descriptions Boxplot: graphic display of the five-number summary Histogram: x-axis represents values, y-axis represents frequencies Quantile plot: each value x_i is paired with f_i indicating that approximately 100 f_i % of the data are <= x_i Quantile-quantile (q-q) plot: graphs the quantiles of one univariate distribution against the corresponding quantiles of another Scatter plot: each pair of values is a pair of coordinates and plotted as points in the plane 18" },
{ "page_index": 59, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_019.png", "page_index": 59, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:06+07:00" }, "raw_text": "Histogram Analysis Histogram: graph display of tabulated frequencies, shown as bars It shows what proportion of cases fall into each of several categories Differs from a bar chart in that it is the area of the bar that denotes the value, not the height as in bar charts, a crucial distinction when the categories are not of uniform width The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent (figure: histogram over intervals from 10000 to 90000) 19" },
{ "page_index": 60, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_020.png", "page_index": 60, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:10+07:00" }, "raw_text": "Histograms Often Tell More than Boxplots The two histograms shown on the left may have the same boxplot representation The same values for: min, Q1, median, Q3, max But they have rather different data distributions 20" },
{ "page_index": 61, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_021.png", "page_index": 61, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:15+07:00" }, "raw_text": "Quantile Plot Displays all of the data (allowing the user to assess both the overall behavior and unusual occurrences) Plots quantile information For a value x_i of data sorted in increasing order, f_i indicates that approximately 100 f_i % of the data are below or equal to the value x_i (figure: unit price vs. f-value, f from 0.000 to 1.000) 21" },
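The (f_i, x_i) pairs behind the quantile plot described above can be computed directly. A sketch using one common plotting-position convention, f_i = (i - 0.5) / n, on made-up prices:

```python
# Quantile-plot coordinates: sorted values paired with their f-values.
import numpy as np

prices = np.array([40, 43, 47, 52, 60, 68, 74, 81, 93, 120.0])
x = np.sort(prices)
n = len(x)

f = (np.arange(1, n + 1) - 0.5) / n   # f_i = (i - 0.5) / n

for fi, xi in zip(f, x):
    print(f"f = {fi:.2f}  ->  about {fi:.0%} of the data <= {xi}")
```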
{ "page_index": 62, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_022.png", "page_index": 62, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:20+07:00" }, "raw_text": "Quantile-Quantile (Q-Q) Plot Graphs the quantiles of one univariate distribution against the corresponding quantiles of another View: Is there a shift in going from one distribution to another? Example shows unit price of items sold at Branch 1 vs. Branch 2 for each quantile. Unit prices of items sold at Branch 1 tend to be lower than those at Branch 2. (figure: Q-Q plot of Branch 2 vs. Branch 1 unit prices, $40-$120) 22" },
{ "page_index": 63, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_023.png", "page_index": 63, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:24+07:00" }, "raw_text": "Scatter plot Provides a first look at bivariate data to see clusters of points, outliers, etc. Each pair of values is treated as a pair of coordinates and plotted as points in the plane (figure: items sold vs. unit price $) 23" },
{ "page_index": 64, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_024.png", "page_index": 64, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:27+07:00" }, "raw_text": "Positively and Negatively Correlated Data The left half fragment is positively correlated The right half is negatively correlated 24" },
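Positive and negative correlation, as on the slide above, can be quantified with Pearson's r; the points below are synthetic:

```python
# Measuring correlation direction and strength with numpy's corrcoef.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
noise = rng.normal(0, 1, size=200)

pos = 2.0 * x + noise    # rises with x
neg = -2.0 * x + noise   # falls with x

print(np.corrcoef(x, pos)[0, 1])   # close to +1
print(np.corrcoef(x, neg)[0, 1])   # close to -1
```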
Gain insight into an information space by mapping data onto graphical primitives Provide qualitative overview of large data sets Search for patterns, trends, structure, irregularities, relationships among data Help find interesting regions and suitable parameters for further quantitative analysis Provide a visual proof of computer representations derived Categorization of visualization methods: Pixel-oriented visualization techniques Geometric projection visualization techniques Icon-based visualization techniques Hierarchical visualization techniques Visualizing complex data and relations 27" }, { "page_index": 68, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_028.png", "page_index": 68, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:41+07:00" }, "raw_text": "Pixel-Oriented Visualization Techniques For a data set of m dimensions, create m windows on the screen, one for each dimension The m dimension values of a record are mapped to m pixels at the corresponding positions in the windows The colors of the pixels reflect the corresponding values (a) Income (b) Credit Limit (c) Transaction Volume (d) Age 28" }, { "page_index": 69, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_029.png", "page_index": 69, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:46+07:00" }, "raw_text": "Laying Out Pixels in Circle Segments To save space and show the connections among multiple dimensions, space filling is often done in a circle segment [Figure: (a) Representing a data record in circle segment; (b) Laying out pixels in circle segment, with one data record spanning Dim 1 through Dim 6] 29" }, { "page_index": 70, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_030.png", "page_index": 70, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:51+07:00" }, "raw_text": "Geometric Projection Visualization Techniques Visualization of geometric transformations and projections of the data Methods Direct visualization Scatterplot and scatterplot matrices Landscapes Projection pursuit technique: Help users find meaningful projections of multidimensional data Prosection views Hyperslice Parallel coordinates 30" }, { "page_index": 71, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_031.png", "page_index": 71, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:11:55+07:00" }, "raw_text": "Direct Data Visualization Ribbons with Twists Based on Vorticity (data courtesy of NCSA, University of Illinois at Urbana-Champaign) 31" }, { "page_index": 72, "chapter_num": 1, "source_file":
"/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_032.png", "page_index": 72, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:12:01+07:00" }, "raw_text": "Scatterplot Matrices Iong lat. depth vall DCH 11+ h I pasn Matrix of scatterplots (x-y-diagrams) of the k-dim. data [total of (k2/2-k) scatterplots] 32" }, { "page_index": 73, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_033.png", "page_index": 73, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:12:08+07:00" }, "raw_text": "Landscapes SPIAE SGI Version) Cot wels cour oosni pelcent soldiers news articles children X visualized as children baby a landscape fields mcveigh fbi rescue Prabsne Pacific Northwest Laboratory Visualization of the data as perspective landscape The data needs to be transformed into a (possibly artificial) 2D spatial representation which preserves the characteristics of the data 33" }, { "page_index": 74, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_034.png", "page_index": 74, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:12:12+07:00" }, "raw_text": "Parallel Coordinates n eguidistant axes which are parallel to one of the screen axes and correspond to the attributes The axes are scaled to the [minimum, maximum]: range of the corresponding attribute Every data item corresponds to a polygonal line which intersects each of the axes at the point which corresponds to the value for the attribute Attr. 1 Attr. 2 Attr. 3 Attr. 
k 34" }, { "page_index": 75, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_035.png", "page_index": 75, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:12:16+07:00" }, "raw_text": "Parallel Coordinates of a Data Set 35" }, { "page_index": 76, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_036.png", "page_index": 76, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:12:20+07:00" }, "raw_text": "Icon-Based Visualization Techniques Visualization of the data values as features of icons Typical visualization methods Chernoff Faces Stick Figures General techniques Shape coding: Use shape to represent certain information encoding Color icons: Use color icons to encode more information Tile bars: Use small icons to represent the relevant feature vectors in document retrieval 36" }, { "page_index": 77, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_037.png", "page_index": 77, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:12:26+07:00" }, "raw_text": "Chernoff Faces A way to display variables on a two-dimensional surface, e.g., let x be eyebrow slant, y be eye size, z be nose length, etc. The figure shows faces produced using 10 characteristics--head eccentricity, eye size, eye spacing, eye eccentricity, pupil size, eyebrow slant, nose size, mouth shape, mouth size, and mouth opening): Each assigned one of 10 possible values, generated using Mathematica (S. Dickson) a REFERENCE: Gonick, L. and Smith, W. 7he Cartoon Guide to Statistics. New York: Harper Perennial, p. 212, 1993 OO 4 Weisstein, Eric W. \"Chernoff Face.\" From MathWor/d-A Wolfram Web Resource. al 4 mathworld.wolfram.com/ChernoffFace.html 37" }, { "page_index": 78, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_038.png", "page_index": 78, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:12:39+07:00" }, "raw_text": "Stick Figure a. A census data figure showing age, income. gender, 1 education,etc Ah r F .-1 A 5-piece stick 33 Ml figure (1 body and 4 limbs w. pasn different angle/length) INCOME Two attributes mapped to axes, remaining attributes mapped to angle or length of limbs\". 
Look at the texture pattern 38" }, { "page_index": 79, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_039.png", "page_index": 79, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:12:42+07:00" }, "raw_text": "Hierarchical Visualization Techniques Visualization of the data using a hierarchical partitioning into subspaces Methods Dimensional Stacking Worlds-within-Worlds Tree-Map Cone Trees InfoCube 39" }, { "page_index": 80, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_040.png", "page_index": 80, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:12:48+07:00" }, "raw_text": "Dimensional Stacking [Figure: 2-D grid with attribute 3/attribute 4 stacked inside attribute 1/attribute 2] Partitioning of the n-dimensional attribute space in 2-D subspaces, which are `stacked' into each other Partitioning of the attribute value ranges into classes The important attributes should be used on the outer levels Adequate for data with ordinal attributes of low cardinality But difficult to display more than nine dimensions Important to map dimensions appropriately 40" }, { "page_index": 81, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_041.png", "page_index": 81, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:12:53+07:00" }, "raw_text": "Dimensional Stacking Used by permission of M.
Ward, Worcester Polytechnic Institute Visualization of oil mining data with longitude and latitude mapped to the outer x-, y-axes and ore grade and depth mapped to the inner x-, y-axes 41" }, { "page_index": 82, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_042.png", "page_index": 82, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:13:02+07:00" }, "raw_text": "Worlds-within-Worlds Assign the function and two most important parameters to the innermost world Fix all other parameters at constant values and draw other (1, 2, or 3 dimensional) worlds choosing these as the axes Software that uses this paradigm N-vision: Dynamic interaction through data glove and stereo displays, including rotation, scaling (inner) and translation (inner/outer) Auto Visual: Static interaction by means of queries [Figure: nested coordinate worlds over axes X1, X2, X3, X5] 42" }, { "page_index": 83, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_043.png", "page_index": 83, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:13:08+07:00" }, "raw_text": "Tree-Map Screen-filling method which uses a hierarchical partitioning of the screen into regions depending on the attribute values The x- and y-dimension of the screen are partitioned alternately according to the attribute values (classes) [Figure: MSR Netscan tree-map of newsgroups] Ack.: http://www.cs.umd.edu/hcil/treemap-history/all102001.jpg 43" }, { "page_index": 84, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_044.png", "page_index": 84, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:13:15+07:00" }, "raw_text": "Tree-Map of a File System (Shneiderman) [Figure: tree-map of a file system with 1008 files] 44" }, { "page_index": 85, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_045.png", "page_index": 85, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:13:20+07:00" }, "raw_text": "InfoCube A 3-D visualization technique where hierarchical information is displayed as nested semi-transparent cubes The outermost cubes correspond to the top level data, while the subnodes or the lower level data are represented as smaller cubes inside the outermost cubes, and so on [Figure: InfoCube example (Rekimoto)]" }, { "page_index": 86, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_046.png", "page_index": 86, "language": "en", "ocr_engine": "PaddleOCR 3.2",
"extractor_version": "1.0.0", "timestamp": "2025-10-31T15:13:26+07:00" }, "raw_text": "Three-D Cone Trees 3D cone tree visualization technique works well for up to a thousand nodes or so First build a 2D circ/e tree that arranges its nodes in concentric circles centered on the root node Cannot avoid overlaps when projected to 2D G. Robertson, J. Mackinlay, S. Card. \"Cone Trees: Animated 3D Visualizations of Hierarchical Information\", ACM SIGCHI'91 Graph from Nadeau Software Consulting website: Visualize a social network data set that models the way an infection spreads from one person to the next Ack. : http://nadeausoftware.com/articles/visualization 46" }, { "page_index": 87, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_047.png", "page_index": 87, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:13:51+07:00" }, "raw_text": "Visualizing Complex Data and Relations Visualizing non-numerical data: text and social networks Tag cloud: visualizing user-generated tags The importance of aboy permulini +SELECTALLCOTE RUSTRFLI IUSTE CAMD FRANCE EUTECHH EIUCERL SPHAH Pentag Oil Charges Harman Ruling on Khodorkov Fever Missed 'Mother Of Called Kind wine sky awaits cut not tag is represented All to lragis at shipments guilty all bad on Smokescree Sentencing pitch coulduncork verdict for Tiger ns' Denies by font size/color newmarkets Wanted Cuban LA Mayor Daily Derby Winner Stormy Morgan Stanley Unitec Air Desecr Tries to Giacomo Has Exile Surfaces Grim over Legal inks labour Playoff Hold on to hurrican Plenty to Costs ceal with in US office e season mechanics Essentials Prove Besides text data, ation Harvard USorders predicte Toyota Amber TRSICEn flight from Armerge Alerts on The Termpl Federer Pran eenc HA 0 to Make Wreless ners dso Italydiverted Hybrids Phones Golden ioin there are also Tonrams in US Bear MAC Saudi Amber Alert DeathHcw USW Senate nears inmate aske leader: cihirn roars ssueo1or todonte Georgia ysno Kingdom EFe battle on Stocks man pleads ad. volahor canmeet rise after methods to visualize Bush not guilty E Treasury Cule insast maiia oil demand Gunners Have Byrc Takes to warns etdg of N courthouse China shoo dea Debut nominees murders Lucas Confirms Social Worker Nintend Microsoft's relationships, such as Troops. Ethiopia Italianaid Young Boy Denied Desktop ccoon worker IndianaJones o Plans COT Opposition: lsraelis Sexual Abuse Glue Militants kicnapped in Government Oppose Kabul is well IV by Jackson a Victory Claim -foreign Gaza KanDebate Battle In ?Revo minister Pullout visualizing social isPremature CIFRCE Kylie CBS How wil the lution? Mosul UK Heipline KCawaiti India Raymond' network fil Ccts 320 can run for rushcs to Minogue Chaopelle's in?06 cabinet posts finale Calls,70 shoes? speaker ontor ellte Battling draws 32 Fmails POLIDETA club networks million Abcut Lost Pitt proudof Cancer ExpertsDebate Neutral Piano Man vewers mariage Study on Fat mediation is nisAre Firefight Breast Cancer aMreCad 67DeadAnd Movies needed on 35Missing In er Tom Banglades Beslan Siege still in is Sole New study North Korea FerryMishap suggests kudzu Trial Begins SUSLECTS slump Survivor helps curb hinge drinking TuesdayMay17.200517.38 +SELECT ALLCTEORIES LAYOUT. 
Newsmap: Google News Stories in 2005" }, { "page_index": 88, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_048.png", "page_index": 88, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:13:55+07:00" }, "raw_text": "Chapter 2: Getting to Know Your Data Data Objects and Attribute Types Basic Statistical Descriptions of Data Data Visualization Measuring Data Similarity and Dissimilarity Summary 48" }, { "page_index": 89, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_049.png", "page_index": 89, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:13:59+07:00" }, "raw_text": "Similarity and Dissimilarity Similarity Numerical measure of how alike two data objects are Value is higher when objects are more alike Often falls in the range [0,1] Dissimilarity (e.g., distance) Numerical measure of how different two data objects are Lower when objects are more alike Minimum dissimilarity is often 0 Upper limit varies Proximity refers to a similarity or dissimilarity 49" }, { "page_index": 90, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_050.png", "page_index": 90, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:14:04+07:00" }, "raw_text": "Data Matrix and Dissimilarity Matrix Data matrix: n data points with p dimensions; row i holds the object x_i = (x_i1, ..., x_if, ..., x_ip) Two modes Dissimilarity matrix: n data points, but registers only the distance; a triangular matrix with rows 0; d(2,1), 0; d(3,1), d(3,2), 0; ...; d(n,1), d(n,2), ..., 0 Single mode 50" }, { "page_index": 91, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_051.png", "page_index": 91, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:14:08+07:00" }, "raw_text": "Proximity Measure for Nominal Attributes Can take 2 or more states, e.g., red, yellow, blue, green (generalization of a binary attribute) Method 1: Simple matching d(i,j) = (p - m) / p, where m: # of matches, p: total # of variables Method 2: Use a large number of binary attributes, creating a new binary attribute for each of the M nominal states 51" }, { "page_index": 92, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_052.png", "page_index": 92, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:14:16+07:00" }, "raw_text": "Proximity Measure for Binary Attributes A contingency table for binary data (rows: object i, columns: object j): i=1,j=1: q; i=1,j=0: r (row sum q+r); i=0,j=1: s; i=0,j=0: t (row sum s+t); column sums q+s and r+t; grand total p
Distance measure for symmetric binary variables: d(i,j) = (r + s) / (q + r + s + t) Distance measure for asymmetric binary variables: d(i,j) = (r + s) / (q + r + s) Jaccard coefficient (similarity measure for asymmetric binary variables): simJaccard(i,j) = q / (q + r + s) Note: the Jaccard coefficient is the same as \"coherence\": coherence(i,j) = sup(i,j) / (sup(i) + sup(j) - sup(i,j)) = q / ((q+r) + (q+s) - q) 52" }, { "page_index": 93, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_053.png", "page_index": 93, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:14:22+07:00" }, "raw_text": "Dissimilarity between Binary Variables Example: Name Gender Fever Cough Test-1 Test-2 Test-3 Test-4; Jack M Y N P N N N; Mary F Y N P N P N; Jim M Y P N N N N Gender is a symmetric attribute The remaining attributes are asymmetric binary Let the values Y and P be 1, and the value N be 0 d(jack, mary) = (0 + 1) / (2 + 0 + 1) = 0.33 d(jack, jim) = (1 + 1) / (1 + 1 + 1) = 0.67 d(jim, mary) = (1 + 2) / (1 + 1 + 2) = 0.75 53" }, { "page_index": 94, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_054.png", "page_index": 94, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:14:28+07:00" }, "raw_text": "Standardizing Numeric Data Z-score: z = (x - μ) / σ, where x: raw score to be standardized, μ: mean of the population, σ: standard deviation The distance between the raw score and the population mean in units of the standard deviation Negative when the raw score is below the mean, positive when above An alternative way: Calculate the mean absolute deviation s_f = (1/n)(|x_1f - m_f| + |x_2f - m_f| + ... + |x_nf - m_f|), where m_f = (1/n)(x_1f + x_2f + ... + x_nf) Standardized measure (z-score): z_if = (x_if - m_f) / s_f Using mean absolute deviation is more robust than using standard deviation 54" }, { "page_index": 95, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_055.png", "page_index": 95, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:14:34+07:00" }, "raw_text": "Example: Data Matrix and Dissimilarity Matrix Data Matrix: point (attribute1, attribute2): x1 = (1, 2); x2 = (3, 5); x3 = (2, 0); x4 = (4, 5) Dissimilarity Matrix (with Euclidean Distance): d(x2,x1) = 3.61; d(x3,x1) = 2.24; d(x3,x2) = 5.10; d(x4,x1) = 4.24; d(x4,x2) = 1; d(x4,x3) = 5.39 55" }, { "page_index": 96, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_056.png", "page_index": 96, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:14:39+07:00" }, "raw_text": "Distance on Numeric Data: Minkowski Distance Minkowski distance: A popular distance measure d(i,j) = (|x_i1 - x_j1|^h + |x_i2 - x_j2|^h + ... + |x_ip - x_jp|^h)^(1/h), where i = (x_i1, x_i2, ..., x_ip) and j = (x_j1, x_j2, ..., x_jp) are two p-dimensional data objects, and h is the order (the distance so defined is also called L-h norm) Properties d(i,j) > 0 if i ≠ j, and d(i,i) = 0 (Positive definiteness)
d(i,j) = d(j,i) (Symmetry) d(i,j) <= d(i,k) + d(k,j) (Triangle Inequality) A distance that satisfies these properties is a metric 56" }, { "page_index": 97, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_057.png", "page_index": 97, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:14:45+07:00" }, "raw_text": "Special Cases of Minkowski Distance h = 1: Manhattan (city block, L1 norm) distance E.g., the Hamming distance: the number of bits that are different between two binary vectors d(i,j) = |x_i1 - x_j1| + |x_i2 - x_j2| + ... + |x_ip - x_jp| h = 2: Euclidean (L2 norm) distance d(i,j) = (|x_i1 - x_j1|^2 + |x_i2 - x_j2|^2 + ... + |x_ip - x_jp|^2)^(1/2) h -> infinity: \"supremum\" (L_max norm, L_inf norm) distance This is the maximum difference between any component (attribute) of the vectors d(i,j) = lim_{h->inf} (sum_{f=1}^{p} |x_if - x_jf|^h)^(1/h) = max_f |x_if - x_jf| 57" }, { "page_index": 98, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_058.png", "page_index": 98, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:14:55+07:00" }, "raw_text": "Example: Minkowski Distance Dissimilarity Matrices point (attribute 1, attribute 2): x1 = (1, 2); x2 = (3, 5); x3 = (2, 0); x4 = (4, 5) Manhattan (L1): d(x2,x1) = 5; d(x3,x1) = 3; d(x3,x2) = 6; d(x4,x1) = 6; d(x4,x2) = 1; d(x4,x3) = 7 Euclidean (L2): d(x2,x1) = 3.61; d(x3,x1) = 2.24; d(x3,x2) = 5.10; d(x4,x1) = 4.24; d(x4,x2) = 1; d(x4,x3) = 5.39 Supremum (L_inf): d(x2,x1) = 3; d(x3,x1) = 2; d(x3,x2) = 5; d(x4,x1) = 3; d(x4,x2) = 1; d(x4,x3) = 5 58" }, { "page_index": 99, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_059.png", "page_index": 99, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:14:59+07:00" }, "raw_text": "Ordinal Variables An ordinal variable can be discrete or continuous Order is important, e.g., rank Can be treated like interval-scaled Replace x_if by its rank r_if in {1, ..., M_f} Map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by z_if = (r_if - 1) / (M_f - 1) Compute the dissimilarity using methods for interval-scaled variables 59" }, { "page_index": 100, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_060.png", "page_index": 100, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:03+07:00" }, "raw_text": "Attributes of Mixed Type A database may contain all attribute types Nominal, symmetric binary, asymmetric binary, numeric, ordinal One may use a weighted formula to combine their effects: d(i,j) = (sum_{f=1}^{p} delta_ij^(f) d_ij^(f)) / (sum_{f=1}^{p} delta_ij^(f)) If f is binary or nominal: d_ij^(f) = 0 if x_if = x_jf, and 1 otherwise If f is numeric: use the normalized distance If f is ordinal: compute ranks r_if and z_if = (r_if - 1) / (M_f - 1), and treat z_if as interval-scaled 60" }, { "page_index": 101, "chapter_num": 1, "source_file":
"/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_061.png", "page_index": 101, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:12+07:00" }, "raw_text": "Cosine Similarity A document can be represented by thousands of attributes, each recording the frequency of a particular word (such as keywords) or phrase in the document. Document hockey baseball penalty loss teamcoach soccer score win season Document1 3 0 2 0 2 0 Document2 3 2 1 1 1 0 1 Document3 7 0 2 1 0 0 3 0 0 Document4 0 1 0 0 1 2 2 0 3 Applications: information retrieval, biologic taxonomy, gene feature mapping, ... Cosine measure: If d, and d, are two vectors (e.g., term-frequency vectors), then cos(dy d2)= (d* d,)/11dl111d,l11 where : indicates vector dot product, Ial: the length of vector d 61" }, { "page_index": 102, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_062.png", "page_index": 102, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:16+07:00" }, "raw_text": "Example: Cosine Similarity cos(dy d,)= (d d2)711dl1l1d,I1r where : indicates vector dot product, la: the length of vector d Ex: Find the similarity between documents 1 and 2. d= (5,0,3,0,2,0,0,2,0,0 d,= (3,0,2,0, 1, 1, 0, 1, 0, 1 d;-d,= 5*3+0*0+3*2+0*0+2*1+0*1+0*1+2*1+0*0+0*1 = 25 ldll= (5*5+0*0+3*3+0*0+2*2+0*0+0*0+2*2+0*0+0*0)o.5=(42)o.5 = 6.481 = 4.12 cos(dy d,) = 0.94 62" }, { "page_index": 103, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_063.png", "page_index": 103, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:20+07:00" }, "raw_text": "Chapter 2: Getting to Know Your Data Data Objects and Attribute Types Basic Statistical Descriptions of Data Data Visualization Measuring Data Similarity and Dissimilarity Summary 63" }, { "page_index": 104, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_064.png", "page_index": 104, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:24+07:00" }, "raw_text": "Summary Data attribute types: nominal, binary, ordinal, interval-scaled, ratio- scaled Many types of data sets, e.g., numerical, text, graph, Web, image. Gain insight into the data by: Basic statistical data description: central tendency, dispersion graphical displays Data visualization: map data onto graphical primitives Measure data similarity Above steps are the beginning of data preprocessing. Many methods have been developed but still an active area of research. 
64" }, { "page_index": 105, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_1/slide_065.png", "page_index": 105, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:29+07:00" }, "raw_text": "References W. Cleveland, Visualizing Data, Hobart Press, 1993 T. Dasu and T. Johnson. Exploratory Data Mining and Data Cleaning. John Wiley, 2003 U. Fayyad, G. Grinstein, and A. Wierse. Information Visualization in Data Mining and Knowledge Discovery, Morgan Kaufmann, 2001 L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis. John Wiley & Sons, 1990. H. V. Jagadish, et al., Special Issue on Data Reduction Techniques. Bulletin of the Tech. Committee on Data Eng., 20(4), Dec. 1997 D. A. Keim. Information visualization and visual data mining, IEEE trans. on Visualization and Computer Graphics, 8(1), 2002 D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999 S. Santini and R. Jain,\" Similarity measures\", IEEE Trans. on Pattern Analysis and Machine Intelligence, 21(9), 1999 E. R. Tufte. The Visual Display of Quantitative Information, 2nd ed., Graphics Press 2001 C. Yu , et al., Visual data mining of multimedia data for social and behavioral studies Information Visualization, 8(1), 2009 65" }, { "page_index": 106, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_001.png", "page_index": 106, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:33+07:00" }, "raw_text": "Mining: Data (3rd ed.) - Chapter 3 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University @2011 Han, Kamber & Pei. All rights reserved 1" }, { "page_index": 107, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_002.png", "page_index": 107, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:36+07:00" }, "raw_text": "Chapter 3: Data Preprocessing Data Preprocessing: An Overview Data Quality Major Tasks in Data Preprocessing Data Cleaning Data Integration Data Reduction Data Transformation and Data Discretization Summary 2" }, { "page_index": 108, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_003.png", "page_index": 108, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:39+07:00" }, "raw_text": "Data Quality: Why Preprocess the Data? Measures for data quality: A multidimensional view Accuracy: correct or wrong, accurate or not Timeliness: timely update? Believability: how trustable the data are correct? Interpretability: how easily the data can be understood? 
3" }, { "page_index": 109, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_004.png", "page_index": 109, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:44+07:00" }, "raw_text": "Major Tasks in Data Preprocessing Data cleaning Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies Data integration Integration of multiple databases, data cubes, or files Data reduction Dimensionality reduction Numerosity reduction Data compression Data transformation and data discretization Normalization Concept hierarchy generation 4" }, { "page_index": 110, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_005.png", "page_index": 110, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:47+07:00" }, "raw_text": "Chapter 3: Data Preprocessing Data Preprocessing: An Overview Data Quality Major Tasks in Data Preprocessing Data Cleaning Data Integration Data Reduction Data Transformation and Data Discretization Summary 5" }, { "page_index": 111, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_006.png", "page_index": 111, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:52+07:00" }, "raw_text": "Data Cleaning Data in the Real World Is Dirty: Lots of potentially incorrect data, e.g., instrument faulty, human or computer error, transmission error incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data e.g., Occupation=\" \" (missing data) noisy: containing noise, errors, or outliers e.g., Salary=\"-10\" (an error) inconsistent: containing discrepancies in codes or names, e.g., Age=\"42\" Birthday=\"03/07/2010\" Was rating \"1, 2, 3\", now rating \"A, B, C\" discrepancy between duplicate records Intentional_(e.g., disguised missing data) Jan. 1 as everyone's birthday? 
6" }, { "page_index": 112, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_007.png", "page_index": 112, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:15:56+07:00" }, "raw_text": "Incomplete (Missing) Data Data is not always available E.g., many tuples have no recorded value for several attributes, such as customer income in sales data Missing data may be due to equipment malfunction inconsistent with other recorded data and thus deleted data not entered due to misunderstanding certain data may not be considered important at the time of entry not register history or changes of the data Missing data may need to be inferred 7" }, { "page_index": 113, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_008.png", "page_index": 113, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:01+07:00" }, "raw_text": "How to Handle Missing Data? Ignore the tuple: usually done when class label is missing (when doing classification)-not effective when the % of missing values per attribute varies considerably Fill in the missing value manually: tedious + infeasible? Fill in it automatically with a global constant : e.g., \"unknown\", a new class?! the attribute mean the attribute mean for all samples belonging to the same class: smarter the most probable value: inference-based such as Bayesian formula or decision tree 8" }, { "page_index": 114, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_009.png", "page_index": 114, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:05+07:00" }, "raw_text": "Noisy Data Noise: random error or variance in a measured variable Incorrect attribute values may be due to faulty data collection instruments data entry problems data transmission problems technology limitation inconsistency in naming convention Other data problems which require data cleaning duplicate records Incomplete data inconsistent data 9" }, { "page_index": 115, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_010.png", "page_index": 115, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:11+07:00" }, "raw_text": "How to Handle Noisy Data? Binning first sort data and partition into (equal-frequency) bins then one can smooth by bin means, smooth by bin median, smooth by bin boundaries, etc. Regression smooth by fitting the data into regression functions Clustering detect and remove outliers Combined computer and human inspection detect suspicious values and check by human (e.g. 
deal with possible outliers) 10" }, { "page_index": 116, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_011.png", "page_index": 116, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:16+07:00" }, "raw_text": "Data Cleaning as a Process Data discrepancy detection Use metadata (e.g., domain, range, dependency, distribution) Check field overloading Check uniqueness rule, consecutive rule and null rule Use commercial tools Data scrubbing: use simple domain knowledge (e.g., postal code, spell-check) to detect errors and make corrections Data auditing: by analyzing data to discover rules and relationships to detect violators (e.g., correlation and clustering to find outliers) Data migration and integration Data migration tools: allow transformations to be specified ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through a graphical user interface Integration of the two processes Iterative and interactive (e.g., Potter's Wheel) 11" }, { "page_index": 117, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_012.png", "page_index": 117, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:20+07:00" }, "raw_text": "Chapter 3: Data Preprocessing Data Preprocessing: An Overview Data Quality Major Tasks in Data Preprocessing Data Cleaning Data Integration Data Reduction Data Transformation and Data Discretization Summary 12" }, { "page_index": 118, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_013.png", "page_index": 118, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:24+07:00" }, "raw_text": "Data Integration Data integration: Combines data from multiple sources into a coherent store Schema integration: e.g., A.cust-id = B.cust-# Integrate metadata from different sources Entity identification problem: Identify real world entities from multiple data sources, e.g., Bill Clinton = William Clinton Detecting and resolving data value conflicts For the same real world entity, attribute values from different sources are different Possible reasons: different representations, different scales, e.g., metric vs.
British units 13" }, { "page_index": 119, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_014.png", "page_index": 119, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:30+07:00" }, "raw_text": "Handling Redundancy in Data Integration Redundant data occur often when integration of multiple databases Object identification: The same attribute or object may have different names in different databases Derivable data: One attribute may be a \"derived\" attribute in another table, e.g., annual revenue Redundant attributes may be able to be detected by correlation analysis and covariance analysis Careful integration of the data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality 14" }, { "page_index": 120, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_015.png", "page_index": 120, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:35+07:00" }, "raw_text": "Correlation Analysis (Nominal Data) X2 (chi-square) test (Observed-Expected)2 2 X Expected The larger the X2 value, the more likely the variables are related The cells that contribute the most to the X2 value are those whose actual count is very different from the expected count Correlation does not imply causality # of hospitals and # of car-theft in a city are correlated Both are causally linked to the third variable: population 15" }, { "page_index": 121, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_016.png", "page_index": 121, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:41+07:00" }, "raw_text": "Chi-Sauare Calculation: An Example Play chess Not play chess Sum (row) Like science fiction 250(90) 200(360) 450 Not like science fiction 50(210) 1000(840) 1050 Sum(col.) 300 1200 1500 X2 (chi-square) calculation (numbers in parenthesis are expected counts calculated based on the data distribution in the two categories) (250-90) (50-210)2 (200-360) (1000-840)2 2 = 507.93 X 90 210 360 840 It shows that like science_fiction and play_chess are correlated in the group 16" }, { "page_index": 122, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_017.png", "page_index": 122, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:45+07:00" }, "raw_text": "Correlation Analysis (Numeric Data) Correlation coefficient (also called Pearson's product) moment coefficient) n n (a; - A)(b; -B) 2 (a;b;)-nAB (n-1)04OB (n-1)04OB and B are the respective means of A and B, and are the respective standard deviation of A and B, and (abi) is the sum of the AB cross-product. increase as B's). 
The higher the value, the stronger the correlation; r(A,B) = 0 indicates no linear relationship, and r(A,B) < 0 indicates negative correlation. 17" }, { "page_index": 123, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_018.png", "page_index": 123, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:51+07:00" }, "raw_text": "Visually Evaluating Correlation [Figure: scatter plots showing correlation coefficients from -1.00 to 1.00 in steps of 0.10] Scatter plots showing the similarity from -1 to 1. 18" }, { "page_index": 124, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_019.png", "page_index": 124, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:54+07:00" }, "raw_text": "Correlation (viewed as linear relationship) Correlation measures the linear relationship between objects To compute correlation, we standardize data objects A and B, and then take their dot product: a'_k = (a_k - mean(A)) / std(A); b'_k = (b_k - mean(B)) / std(B); correlation(A, B) = A' . B' 19" }, { "page_index": 125, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_020.png", "page_index": 125, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:16:59+07:00" }, "raw_text": "Covariance (Numeric Data) Covariance is similar to correlation: Cov(A,B) = E((A - mean(A))(B - mean(B))) = (sum_{i=1}^{n} (a_i - mean(A))(b_i - mean(B))) / n Correlation coefficient: r(A,B) = Cov(A,B) / (sigma_A sigma_B), where mean(A) and mean(B) are the respective mean or expected values of A and B, and sigma_A, sigma_B are the respective standard deviations of A and B Positive covariance: if Cov(A,B) > 0, then A and B both tend to be larger than their expected values Negative covariance: if Cov(A,B) < 0, then if A is larger than its expected value, B is likely to be smaller than its expected value Some pairs of random variables may have a covariance of 0 but are not independent; only under some additional assumptions (e.g., the data follow multivariate normal distributions) does a covariance of 0 imply independence" }, { "page_index": 126, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_021.png", "page_index": 126, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:03+07:00" }, "raw_text": "Co-Variance: An Example Cov(A,B) = E((A - mean(A))(B - mean(B))) = (sum_{i=1}^{n} (a_i - mean(A))(b_i - mean(B))) / n It can be simplified in computation as Cov(A,B) = E(A.B) - mean(A) mean(B) Suppose two stocks A and B have the following values in one week: (2, 5), (3, 8), (5, 10), (4, 11), (6, 14) Question: If the stocks are affected by the same industry trends, will their prices rise or fall together? E(A) = (2 + 3 + 5 + 4 + 6) / 5 = 20/5 = 4 E(B) = (5 + 8 + 10 + 11 + 14) / 5 = 48/5 = 9.6 Cov(A,B) = (2x5 + 3x8 + 5x10 + 4x11 + 6x14)/5 - 4 x 9.6 = 4 Thus, A and B rise together since Cov(A, B) > 0."
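Both worked examples above can be checked in a few lines. A minimal sketch reproducing the chi-square statistic for the chess/science-fiction table and the stock covariance (values should match the slides up to rounding):

```python
# Chi-square, covariance, and Pearson correlation on the slides' examples.
import numpy as np

# Chi-square: observed counts; expected counts come from row/column totals.
observed = np.array([[250.0, 200.0], [50.0, 1000.0]])
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row * col / observed.sum()      # [[90, 360], [210, 840]]
chi2 = ((observed - expected) ** 2 / expected).sum()
print(round(chi2, 2))                      # 507.94 (the slide rounds to 507.93)

# Covariance via the simplification Cov(A, B) = E(A*B) - E(A)*E(B).
A = np.array([2.0, 3.0, 5.0, 4.0, 6.0])
B = np.array([5.0, 8.0, 10.0, 11.0, 14.0])
cov = (A * B).mean() - A.mean() * B.mean()
print(cov)                                 # 4.0

# Pearson correlation as the dot product of standardized objects.
r = ((A - A.mean()) / A.std()) @ ((B - B.mean()) / B.std()) / len(A)
print(round(r, 3))                         # ~0.94: strong positive correlation
```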
}, { "page_index": 127, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_022.png", "page_index": 127, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:06+07:00" }, "raw_text": "Chapter 3: Data Preprocessing Data Preprocessing: An Overview Data Quality Major Tasks in Data Preprocessing Data Cleaning Data Integration Data Reduction Data Transformation and Data Discretization Summary 22 22" }, { "page_index": 128, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_023.png", "page_index": 128, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:12+07:00" }, "raw_text": "Data Reduction Strategies Data reduction: Obtain a reduced representation of the data set that is much smaller in volume but yet produces the same (or almost the same) analytical results Why data reduction? - A database/data warehouse may store terabytes of data. Complex data analysis may take a very long time to run on the complete data set. Data reduction strategies Dimensionality reduction, e.g., remove unimportant attributes Wavelet transforms Principal Components Analysis (PCA) Feature subset selection, feature creation Numerosity reduction (some simply call it: Data Reduction) Regression and Log-Linear Models Histograms, clustering, sampling Data cube aggregation Data compression 23" }, { "page_index": 129, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_024.png", "page_index": 129, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:17+07:00" }, "raw_text": "Data Reduction 1: Dimensionality Reduction Curse of dimensionality When dimensionality increases, data becomes increasingly sparse Density and distance between points, which is critical to clustering, outlier analysis, becomes less meaningful The possible combinations of subspaces will grow exponentially Dimensionality reduction Avoid the curse of dimensionality Help eliminate irrelevant features and reduce noise Reduce time and space required in data mining Allow easier visualization Dimensionality reduction technigues Wavelet transforms Principal Component Analysis Supervised and nonlinear techniques (e.g., feature selection) 24" }, { "page_index": 130, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_025.png", "page_index": 130, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:23+07:00" }, "raw_text": "Mapping Data to a New Space Fourier transform Wavelet transform 15 10 0 0.5 5 0 0 0 0 -5 0 -0.5 10 0 0 0.2 0.4 0.6 0.8 0.2 0.4 0.6 0.8 0 10 20 30 40 50 60 70 80 90 Time (seconds) Time (seconds) Iwo Sine Waves Two Sine Waves + Noise Frequency 25" }, { "page_index": 131, "chapter_num": 
2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_026.png", "page_index": 131, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:28+07:00" }, "raw_text": "What Is Wayelet Transform? Decomposes a signal into different frequency subbands Applicable to n- dimensional signals Data are transformed to preserve relative distance between objects at different levels of resolution Allow natural clusters to become more distinguishable Used for image compression 26" }, { "page_index": 132, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_027.png", "page_index": 132, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:33+07:00" }, "raw_text": "Wavelet Transformation Haar2 Daubechie4 Discrete wavelet transform (DWT) for linear signal processing, multi-resolution analysis Compressed approximation: store only a small fraction of the strongest of the wavelet coefficients Similar to discrete Fourier transform (DFT), but better lossy compression, localized in space Method: Length, L, must be an integer power of 2 (padding with 0's, when necessary) Each transform has 2 functions: smoothing, difference Applies to pairs of data, resulting in two set of data of length L/2 Applies two functions recursively, until reaches the desired length 27" }, { "page_index": 133, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_028.png", "page_index": 133, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:38+07:00" }, "raw_text": "Wavelet Decomposition Wavelets: A math tool for space-efficient hierarchical decomposition of functions S = [2, 2, 0, 2, 3, 5, 4, 4] can be transformed to S. = [23/4, -11/4, 1/2, 0, 0, -1, -1, 0] Compression: many small detail coefficients can be replaced by 0's, and only the significant coefficients are retained Resolution Averages Detail Coefficients 8 2,2,0,2,3,5,4,4 4 [2, 1, 4, 4] [0, -1, -1, 0] [14] [z,0] 2 2] [2 [-1] 1 28" }, { "page_index": 134, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_029.png", "page_index": 134, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:45+07:00" }, "raw_text": "Haar Wavelet Coefficients Coefficient \"Supports Hierarchical + 2.75 2.75 decomposition + structure (a.k.a. 
{ "page_index": 134, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_029.png", "page_index": 134, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:45+07:00" }, "raw_text": "Haar Wavelet Coefficients [Figure: hierarchical decomposition structure (a.k.a. "error tree") with coefficients 2.75, -1.25, 0.5, 0, 0, -1, -1, 0 over the original frequency distribution 2, 2, 0, 2, 3, 5, 4, 4; each coefficient's support is shown] 29" }, { "page_index": 135, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_030.png", "page_index": 135, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:49+07:00" }, "raw_text": "Why Wavelet Transform? Use hat-shape filters Emphasize regions where points cluster Suppress weaker information in their boundaries Effective removal of outliers Insensitive to noise, insensitive to input order Multi-resolution Detect arbitrary shaped clusters at different scales Efficient Complexity O(N) Only applicable to low dimensional data 30" }, { "page_index": 136, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_031.png", "page_index": 136, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:53+07:00" }, "raw_text": "Principal Component Analysis (PCA) Find a projection that captures the largest amount of variation in data The original data are projected onto a much smaller space, resulting in dimensionality reduction. We find the eigenvectors of the covariance matrix, and these eigenvectors define the new space [Figure: data points in the (X1, X2) plane with the principal axes overlaid] 31" }, { "page_index": 137, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_032.png", "page_index": 137, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:17:57+07:00" }, "raw_text": "Principal Component Analysis (Steps) Given N data vectors from n dimensions, find k <= n orthogonal vectors (principal components) that can be best used to represent data Normalize input data: Each attribute falls within the same range Compute k orthonormal (unit) vectors, i.e., principal components Each input data (vector) is a linear combination of the k principal component vectors The principal components are sorted in order of decreasing "significance" or strength Since the components are sorted, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance (i.e., using the strongest principal components, it is possible to reconstruct a good approximation of the original data) Works for numeric data only 32" }, 
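The PCA steps above translate almost line for line into code. A hedged sketch using NumPy; the data matrix X and k are illustrative, and a production version would use a vetted library routine:

```python
import numpy as np

def pca_project(X, k):
    X = X - X.mean(axis=0)                  # normalize: center each attribute
    cov = np.cov(X, rowvar=False)           # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvectors define the new space
    order = np.argsort(eigvals)[::-1]       # sort by decreasing "significance"
    return X @ eigvecs[:, order[:k]]        # keep only the k strongest

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))               # 100 vectors, n = 5 attributes
print(pca_project(X, k=2).shape)            # (100, 2)
```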
{ "page_index": 138, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_033.png", "page_index": 138, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:02+07:00" }, "raw_text": "Attribute Subset Selection Another way to reduce dimensionality of data Redundant attributes Duplicate much or all of the information contained in one or more other attributes E.g., purchase price of a product and the amount of sales tax paid Irrelevant attributes Contain no information that is useful for the data mining task at hand E.g., students' ID is often irrelevant to the task of predicting students' GPA 33" }, { "page_index": 139, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_034.png", "page_index": 139, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:06+07:00" }, "raw_text": "Heuristic Search in Attribute Selection There are 2^d possible attribute combinations of d attributes Typical heuristic attribute selection methods: Best single attribute under the attribute independence assumption: choose by significance tests Best step-wise feature selection: The best single attribute is picked first Step-wise attribute elimination: Repeatedly eliminate the worst attribute Best combined attribute selection and elimination Optimal branch and bound: Use attribute elimination and backtracking 34" }, 
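As a sketch of the "best step-wise feature selection" heuristic above: greedily grow the subset while some user-supplied score keeps improving. The `score` callable (e.g., cross-validated accuracy) and all names here are hypothetical, not from the slides:

```python
def forward_select(features, score, max_size):
    """Greedy step-wise selection: add the single best attribute per round."""
    selected = []
    while len(selected) < max_size:
        candidates = [f for f in features if f not in selected]
        if not candidates:
            break
        best = max(candidates, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break                        # no remaining attribute helps
        selected.append(best)
    return selected

# Toy score: reward a hypothetical "useful" set, lightly penalize size.
useful = {"income", "age"}
score = lambda s: len(useful & set(s)) - 0.01 * len(s)
print(forward_select(["id", "income", "age", "zip"], score, max_size=3))
# ['income', 'age']
```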
36" }, { "page_index": 142, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_037.png", "page_index": 142, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:19+07:00" }, "raw_text": "Parametric Data Reduction: Regression and Log-Linear Models Linear regression Data modeled to fit a straight line Often uses the least-square method to fit the line Multiple regression Allows a response variable Y to be modeled as a linear function of multidimensional feature vector Log-linear model Approximates discrete multidimensional probability distributions 37" }, { "page_index": 143, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_038.png", "page_index": 143, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:26+07:00" }, "raw_text": " y Regression Analysis Y1 Regression analysis: A collective name for techniques for the modeling and analysis Y1j y =x + 1 of numerical data consisting of values of a dependent variable (also called O response variable or measurement) and x X1 of one or more independent variables (aka explanatory variables or predictors Used for prediction The parameters are estimated so as to give (including forecasting of time-series data), inference a \"best fit\" of the data hypothesis testing, and Most commonly the best fit is evaluated by modeling of causal using the least squares method, but relationships other criteria have also been used 38" }, { "page_index": 144, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_039.png", "page_index": 144, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:31+07:00" }, "raw_text": "Regress Analysis and Log-Linear Mocels Linear regression: Y = w X + b Two regression coefficients, w and b, specify the line and are to be estimated by using the data at hand Xy Xy ... 
{ "page_index": 144, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_039.png", "page_index": 144, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:31+07:00" }, "raw_text": "Regression Analysis and Log-Linear Models Linear regression: Y = w X + b Two regression coefficients, w and b, specify the line and are to be estimated by using the data at hand Multiple regression: Y = b0 + b1 X1 + b2 X2 Many nonlinear functions can be transformed into the above Log-linear models: Approximate discrete multidimensional probability distributions Estimate the probability of each point (tuple) in a multi-dimensional space for a set of discretized attributes, based on a smaller subset of dimensional combinations Useful for dimensionality reduction and data smoothing 39" }, { "page_index": 145, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_040.png", "page_index": 145, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:36+07:00" }, "raw_text": "Histogram Analysis Divide data into buckets and store average (sum) for each bucket Partitioning rules: Equal-width: equal bucket range Equal-frequency (or equal-depth) [Figure: equal-width histogram over values 10,000-100,000] 40" }, { "page_index": 146, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_041.png", "page_index": 146, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:40+07:00" }, "raw_text": "Clustering Partition data set into clusters based on similarity, and store cluster representation (e.g., centroid and diameter) only Can be very effective if data is clustered but not if data is "smeared" Can have hierarchical clustering and be stored in multi-dimensional index tree structures There are many choices of clustering definitions and clustering algorithms Cluster analysis will be studied in depth in Chapter 10 41" }, { "page_index": 147, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_042.png", "page_index": 147, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:45+07:00" }, "raw_text": "Sampling Sampling: obtaining a small sample s to represent the whole data set N Allow a mining algorithm to run in complexity that is potentially sub-linear to the size of the data Key principle: Choose a representative subset of the data Simple random sampling may have very poor performance in the presence of skew Develop adaptive sampling methods, e.g., stratified sampling Note: Sampling may not reduce database I/Os (page at a time) 42" }, 
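A small sketch of the sampling variants above using only the standard library; the population and the stratum labeling are invented:

```python
import random
from collections import defaultdict

random.seed(7)
population = list(range(100))

srswor = random.sample(population, 10)                  # without replacement
srswr = [random.choice(population) for _ in range(10)]  # with replacement

# Stratified sampling: partition the data, then draw ~10% per stratum.
strata = defaultdict(list)
for x in population:
    strata[x % 3].append(x)          # stratum label is illustrative only
stratified = [x for group in strata.values()
              for x in random.sample(group, max(1, len(group) // 10))]
print(len(srswor), len(srswr), len(stratified))   # 10 10 9
```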
{ "page_index": 148, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_043.png", "page_index": 148, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:49+07:00" }, "raw_text": "Types of Sampling Simple random sampling There is an equal probability of selecting any particular item Sampling without replacement Once an object is selected, it is removed from the population Sampling with replacement A selected object is not removed from the population Stratified sampling: Partition the data set, and draw samples from each partition (proportionally, i.e., approximately the same percentage of the data) Used in conjunction with skewed data 43" }, { "page_index": 149, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_044.png", "page_index": 149, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:52+07:00" }, "raw_text": "Sampling: With or Without Replacement [Figure: raw data reduced to an SRSWOR (simple random sample without replacement) and an SRSWR (simple random sample with replacement)] 44" }, { "page_index": 150, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_045.png", "page_index": 150, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:55+07:00" }, "raw_text": "Cluster or Stratified Sampling [Figure: raw data partitioned into a cluster/stratified sample] 45" }, { "page_index": 151, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_046.png", "page_index": 151, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:18:59+07:00" }, "raw_text": "Data Cube Aggregation The lowest level of a data cube (base cuboid) E.g., a customer in a phone calling data warehouse Multiple levels of aggregation in data cubes Further reduce the size of data to deal with Reference appropriate levels Use the smallest representation which is enough to solve the task Queries regarding aggregated information should be answered using the data cube, when possible 46" }, 
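In the spirit of the data cube aggregation slide above: roll quarterly records up to the year level and keep only the smaller representation. The records are invented for illustration:

```python
from collections import defaultdict

sales = [("2009", "Q1", 200), ("2009", "Q2", 350),
         ("2010", "Q1", 310), ("2010", "Q2", 280)]

by_year = defaultdict(int)
for year, quarter, amount in sales:
    by_year[year] += amount          # aggregate the quarter level away
print(dict(by_year))                 # {'2009': 550, '2010': 590}
```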
"chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_049.png", "page_index": 154, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:19:11+07:00" }, "raw_text": "Chapter 3: Data Preprocessing Data Preprocessing: An Overview Data Quality Major Tasks in Data Preprocessing Data Cleaning Data Integration Data Reduction Data Transformation and Data Discretization Summary 49" }, { "page_index": 155, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_050.png", "page_index": 155, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:19:16+07:00" }, "raw_text": "Data Transformation A function that maps the entire set of values of a given attribute to a new set of replacement values s.t. each old value can be identified with one of the new values Methods Smoothing: Remove noise from data Attribute/feature construction New attributes constructed from the given ones Aggregation: Summarization, data cube construction Normalization: Scaled to fall within a smaller, specified range min-max normalization z-score normalization normalization by decimal scaling Discretization: Concept hierarchy climbing 50" }, { "page_index": 156, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_051.png", "page_index": 156, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:19:21+07:00" }, "raw_text": "Normalization Min-max normalization: to [new_mina, new_maxa] v - min new maxs-new mina)+new mins maxs - mlna Ex. Let income range $12,000 to $98,000 normalized to [0.0 73,600-12,000 1.01. Then $73,000 is mapped to 1.0-0)+0= 0.716 98,000-12,000 Z-score normalization (: mean, o: standard deviation): V - la O A 73,600-54,000 Ex. Let = 54,000, 0 = 16,000. Then =1.225 16,000 Normalization by decimal scaling V v'= Where / is the smallest integer such that Max(v'D < 1 10 51" }, { "page_index": 157, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_052.png", "page_index": 157, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:19:27+07:00" }, "raw_text": "Discretization Three types of attributes Nominal-values from an unordered set, e.g., color, profession Ordinal-values from an ordered set, e.g., military or academic rank Numericreal numbers, e.g., integer or real numbers Discretization: Divide the range of a continuous attribute into intervals Interval labels can then be used to replace actual data values Reduce data size by discretization Supervised vs. unsupervised Split (top-down) vs. 
{ "page_index": 157, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_052.png", "page_index": 157, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:19:27+07:00" }, "raw_text": "Discretization Three types of attributes Nominal - values from an unordered set, e.g., color, profession Ordinal - values from an ordered set, e.g., military or academic rank Numeric - real numbers, e.g., integer or real numbers Discretization: Divide the range of a continuous attribute into intervals Interval labels can then be used to replace actual data values Reduce data size by discretization Supervised vs. unsupervised Split (top-down) vs. merge (bottom-up) Discretization can be performed recursively on an attribute Prepare for further analysis, e.g., classification 52" }, { "page_index": 158, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_053.png", "page_index": 158, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:19:32+07:00" }, "raw_text": "Data Discretization Methods Typical methods: All the methods can be applied recursively Binning Top-down split, unsupervised Histogram analysis Top-down split, unsupervised Clustering analysis (unsupervised, top-down split or bottom-up merge) Decision-tree analysis (supervised, top-down split) Correlation (e.g., chi-square) analysis (unsupervised, bottom-up merge) 53" }, { "page_index": 159, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_054.png", "page_index": 159, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:19:37+07:00" }, "raw_text": "Simple Discretization: Binning Equal-width (distance) partitioning Divides the range into N intervals of equal size: uniform grid If A and B are the lowest and highest values of the attribute, the width of intervals will be: W = (B - A)/N The most straightforward, but outliers may dominate presentation Skewed data is not handled well Equal-depth (frequency) partitioning Divides the range into N intervals, each containing approximately the same number of samples Good data scaling Managing categorical attributes can be tricky 54" }, { "page_index": 160, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_055.png", "page_index": 160, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:19:42+07:00" }, "raw_text": "Binning Methods for Data Smoothing Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34 Partition into equal-frequency (equi-depth) bins: Bin 1: 4, 8, 9, 15 Bin 2: 21, 21, 24, 25 Bin 3: 26, 28, 29, 34 Smoothing by bin means: Bin 1: 9, 9, 9, 9 Bin 2: 23, 23, 23, 23 Bin 3: 29, 29, 29, 29 Smoothing by bin boundaries: Bin 1: 4, 4, 4, 15 Bin 2: 21, 21, 25, 25 Bin 3: 26, 26, 26, 34 55" }, 
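The equi-depth binning and both smoothing rules above, reproduced on the slide's price list (the bin size of 4 is implied by the example):

```python
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins = [prices[i:i + 4] for i in range(0, len(prices), 4)]

means = [[round(sum(b) / len(b))] * len(b) for b in bins]
# Bin boundaries: each value snaps to the closer of min(bin), max(bin).
bounds = [[min(b) if v - min(b) <= max(b) - v else max(b) for v in b]
          for b in bins]
print(means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```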
{ "page_index": 161, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_056.png", "page_index": 161, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:19:53+07:00" }, "raw_text": "Discretization Without Using Class Labels (Binning vs. Clustering) [Figure: the same data set discretized by equal-width binning, equal-frequency binning, and K-means clustering; K-means clustering leads to better results] 56" }, { "page_index": 162, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_057.png", "page_index": 162, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:19:58+07:00" }, "raw_text": "Discretization by Classification & Correlation Analysis Classification (e.g., decision tree analysis) Supervised: Given class labels, e.g., cancerous vs. benign Using entropy to determine split point (discretization point) Top-down, recursive split Details to be covered in Chapter 7 Correlation analysis (e.g., Chi-merge: chi-square-based discretization) Supervised: use class information Bottom-up merge: find the best neighboring intervals (those having similar distributions of classes, i.e., low chi-square values) to merge Merge performed recursively, until a predefined stopping condition 57" }, 
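A sketch of the chi-square statistic that ChiMerge-style discretization computes for two adjacent intervals; the per-class counts are invented. A low value means similar class distributions, so the pair is a good merge candidate:

```python
def chi_square(counts_a, counts_b):
    """Chi-square over two adjacent intervals' per-class counts."""
    total = sum(counts_a) + sum(counts_b)
    chi2 = 0.0
    for row in (counts_a, counts_b):
        row_sum = sum(row)
        for c, observed in enumerate(row):
            expected = row_sum * (counts_a[c] + counts_b[c]) / total
            if expected:
                chi2 += (observed - expected) ** 2 / expected
    return chi2

print(round(chi_square([10, 2], [9, 3]), 3))   # 0.253: similar, mergeable
```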
{ "page_index": 163, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_058.png", "page_index": 163, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:04+07:00" }, "raw_text": "Concept Hierarchy Generation Concept hierarchy organizes concepts (i.e., attribute values) hierarchically and is usually associated with each dimension in a data warehouse Concept hierarchies facilitate drilling and rolling in data warehouses to view data in multiple granularity Concept hierarchy formation: Recursively reduce the data by collecting and replacing low level concepts (such as numeric values for age) by higher level concepts (such as youth, adult, or senior) Concept hierarchies can be explicitly specified by domain experts and/or data warehouse designers Concept hierarchy can be automatically formed for both numeric and nominal data. For numeric data, use the discretization methods shown 58" }, { "page_index": 164, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_059.png", "page_index": 164, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:08+07:00" }, "raw_text": "Concept Hierarchy Generation for Nominal Data Specification of a partial/total ordering of attributes explicitly at the schema level by users or experts street < city < state < country Specification of a hierarchy for a set of values by explicit data grouping {Urbana, Champaign, Chicago} < Illinois Specification of only a partial set of attributes E.g., only street < city, not others Automatic generation of hierarchies (or attribute levels) by the analysis of the number of distinct values E.g., for a set of attributes: {street, city, state, country} 59" }, { "page_index": 165, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_060.png", "page_index": 165, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:13+07:00" }, "raw_text": "Automatic Concept Hierarchy Generation Some hierarchies can be automatically generated based on the analysis of the number of distinct values per attribute in the data set The attribute with the most distinct values is placed at the lowest level of the hierarchy Exceptions, e.g., weekday, month, quarter, year country (15 distinct values) < province or state (365 distinct values) < city (3,567 distinct values) < street (674,339 distinct values) 60" }, 
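The distinct-value heuristic above in a few lines; the tiny relation is invented, and a real use would need the exception handling the slide mentions (e.g., weekday vs. year):

```python
rows = [("5th Ave", "Urbana", "Illinois", "USA"),
        ("Main St", "Chicago", "Illinois", "USA"),
        ("King Rd", "Toronto", "Ontario", "Canada"),
        ("Elm St", "Vancouver", "British Columbia", "Canada"),
        ("Oak St", "Urbana", "Illinois", "USA")]
names = ["street", "city", "state", "country"]

counts = {n: len({r[i] for r in rows}) for i, n in enumerate(names)}
# Most distinct values -> lowest level; fewest -> highest level.
ordered = sorted(names, key=lambda n: counts[n], reverse=True)
print(" < ".join(ordered))   # street < city < state < country
```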
{ "page_index": 166, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_061.png", "page_index": 166, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:17+07:00" }, "raw_text": "Chapter 3: Data Preprocessing Data Preprocessing: An Overview Data Quality Major Tasks in Data Preprocessing Data Cleaning Data Integration Data Reduction Data Transformation and Data Discretization Summary 61" }, { "page_index": 167, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_062.png", "page_index": 167, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:22+07:00" }, "raw_text": "Summary Data quality: accuracy, completeness, consistency, timeliness, believability, interpretability Data cleaning: e.g., missing/noisy values, outliers Data integration from multiple sources: Entity identification problem Remove redundancies Detect inconsistencies Data reduction Dimensionality reduction Numerosity reduction Data compression Data transformation and data discretization Normalization Concept hierarchy generation 62" }, { "page_index": 168, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_2/slide_063.png", "page_index": 168, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:28+07:00" }, "raw_text": "References D. P. Ballou and G. K. Tayi. Enhancing data quality in data warehouse environments. Comm. of ACM, 42:73-78, 1999 A. Bruce, D. Donoho, and H.-Y. Gao. Wavelet analysis. IEEE Spectrum, Oct 1996 T. Dasu and T. Johnson. Exploratory Data Mining and Data Cleaning. John Wiley, 2003 J. Devore and R. Peck. Statistics: The Exploration and Analysis of Data. Duxbury Press, 1997 H. Galhardas, D. Florescu, D. Shasha, E. Simon, and C.-A. Saita. Declarative data cleaning: Language, model, and algorithms. VLDB'01 M. Hua and J. Pei. Cleaning disguised missing data: A heuristic approach. KDD'07 H. V. Jagadish, et al. Special Issue on Data Reduction Techniques. Bulletin of the Technical Committee on Data Engineering, 20(4), Dec. 1997 H. Liu and H. Motoda (eds.). Feature Extraction, Construction, and Selection: A Data Mining Perspective. Kluwer Academic, 1998 J. E. Olson. Data Quality: The Accuracy Dimension. Morgan Kaufmann, 2003 D. Pyle. Data Preparation for Data Mining. Morgan Kaufmann, 1999 V. Raman and J. Hellerstein. Potter's Wheel: An Interactive Framework for Data Cleaning and Transformation. VLDB'01 T. Redman. Data Quality: The Field Guide. Digital Press (Elsevier), 2001 R. Wang, V. Storey, and C. Firth. A framework for analysis of data quality research. IEEE Trans. Knowledge and Data Engineering, 7:623-640, 1995 63" }, { "page_index": 169, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_001.png", "page_index": 169, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:32+07:00" }, "raw_text": "Data Mining (3rd ed.) - Chapter 4 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University @2011 Han, Kamber & Pei. All rights reserved. 
1" }, { "page_index": 170, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_002.png", "page_index": 170, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:36+07:00" }, "raw_text": "Chapter 4: Data Warehousing and On-line Analytical Processing Data Warehouse: Basic Concepts Data Warehouse Modeling: Data Cube and OLAP Data Warehouse Design and Usage Data Warehouse Implementation Data Generalization by Attribute-Oriented Induction Summary 2" }, { "page_index": 171, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_003.png", "page_index": 171, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:41+07:00" }, "raw_text": "What is g Data Warehouse? Defined in many different ways, but not rigorously. A decision support database that is maintained separately from the organization's operational database Support information processing by providing a solid platform of consolidated, historical data for analysis. \"A data warehouse is a subject-oriented, integrated, time-variant and nonvolatile collection of data in support of management's decision-making process.\"W. H. Inmon Data warehousing: The process of constructing and using data warehouses 3" }, { "page_index": 172, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_004.png", "page_index": 172, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:45+07:00" }, "raw_text": "Data Warehouse-Subject-Oriented Organized around major subjects, such as customer product, sales Focusing on the modeling and analysis of data for decision makers, not on daily operations or transaction processing Provide a simple and concise view around particular subject issues by excluding data that are not useful in the decision support process 4" }, { "page_index": 173, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_005.png", "page_index": 173, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:51+07:00" }, "raw_text": "Data Warehouse-Integrated Constructed by integrating multiple, heterogeneous data sources relational databases, flat files, on-line transaction records Data cleaning and data integration technigues are applied. Ensure consistency in naming conventions, encoding structures, attribute measures, etc. among different data sources E.g., Hotel price: currency, tax, breakfast covered, etc. When data is moved to the warehouse, it is converted. 
5" }, { "page_index": 174, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_006.png", "page_index": 174, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:55+07:00" }, "raw_text": "Data Warehouse-Time Variant The time horizon for the data warehouse is significantly longer than that of operational systems Operational database: current value data Data warehouse data: provide information from a historical perspective (e.g., past 5-10 years) Every key structure in the data warehouse Contains an element of time, explicitly or implicitly But the key of operational data may or may not contain \"time element\" 6" }, { "page_index": 175, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_007.png", "page_index": 175, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:20:59+07:00" }, "raw_text": "Data Warehouse-Nonvolatile A physically separate store of data transformed from the operational environment Operational update of data does not occur in the data warehouse environment Does not require transaction processing, recovery, and concurrency control mechanisms Requires only two operations in data accessing: initial loading of data and access of data 7" }, { "page_index": 176, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_008.png", "page_index": 176, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:06+07:00" }, "raw_text": "OLTP vs. OLAP OLTP OLAP clerk, IT professional knowledge worker users function day to day operations decision support DB design application-oriented subject-oriented data current, up-to-date historical detailed, flat relational summarized, multidimensional isolated integrated, consolidated usage repetitive ad-hoc read/write access lots of scans index/hash on prim. key unit of work short, simple transaction complex query # records accessed tens millions #users thousands hundreds DB size 100MB-GB 100GB-TB metric transaction throughput query throughput, response 8" }, { "page_index": 177, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_009.png", "page_index": 177, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:11+07:00" }, "raw_text": "Why a Separate Data Warehouse? 
{ "page_index": 177, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_009.png", "page_index": 177, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:11+07:00" }, "raw_text": "Why a Separate Data Warehouse? High performance for both systems DBMS - tuned for OLTP: access methods, indexing, concurrency control, recovery Warehouse - tuned for OLAP: complex OLAP queries, multidimensional view, consolidation Different functions and different data: missing data: Decision support requires historical data which operational DBs do not typically maintain data consolidation: DS requires consolidation (aggregation, summarization) of data from heterogeneous sources data quality: different sources typically use inconsistent data representations, codes and formats which have to be reconciled Note: There are more and more systems which perform OLAP analysis directly on relational databases 9" }, { "page_index": 178, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_010.png", "page_index": 178, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:16+07:00" }, "raw_text": "Data Warehouse: A Multi-Tiered Architecture [Figure: data sources (operational DBs, other sources) flow through extract/transform/load/refresh into the data warehouse and data marts, coordinated by a monitor & integrator with a metadata repository; OLAP servers then serve front-end query/report, analysis, and data mining tools] 10" }, { "page_index": 179, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_011.png", "page_index": 179, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:21+07:00" }, "raw_text": "Three Data Warehouse Models Enterprise warehouse collects all of the information about subjects spanning the entire organization Data Mart a subset of corporate-wide data that is of value to a specific group of users. Its scope is confined to specific, selected groups, such as marketing data mart Independent vs. 
dependent (directly from warehouse) data mart Virtual warehouse A set of views over operational databases Only some of the possible summary views may be materialized 11" }, { "page_index": 180, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_012.png", "page_index": 180, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:25+07:00" }, "raw_text": "Extraction, Transformation, and Loading (ETL) Data extraction get data from multiple, heterogeneous, and external sources Data cleaning detect errors in the data and rectify them when possible Data transformation convert data from legacy or host format to warehouse format Load sort, summarize, consolidate, compute views, check integrity, and build indices and partitions Refresh propagate the updates from the data sources to the warehouse 12" }, { "page_index": 181, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_013.png", "page_index": 181, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:31+07:00" }, "raw_text": "Metadata Repository Metadata is the data defining warehouse objects. It stores: Description of the structure of the data warehouse schema, view, dimensions, hierarchies, derived data defn, data mart locations and contents Operational meta-data data lineage (history of migrated data and transformation path), currency of data (active, archived, or purged), monitoring information (warehouse usage statistics, error reports, audit trails) The algorithms used for summarization The mapping from operational environment to the data warehouse Data related to system performance warehouse schema, view and derived data definitions Business data business terms and definitions, ownership of data, charging policies 13" }, { "page_index": 182, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_014.png", "page_index": 182, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:35+07:00" }, "raw_text": "Chapter 4: Data Warehousing and On-line Analytical Processing Data Warehouse: Basic Concepts Data Warehouse Modeling: Data Cube and OLAP Data Warehouse Design and Usage Data Warehouse Implementation Data Generalization by Attribute-Oriented Induction Summary 14" }, 
{ "page_index": 183, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_015.png", "page_index": 183, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:41+07:00" }, "raw_text": "From Tables and Spreadsheets to Data Cubes A data warehouse is based on a multidimensional data model which views data in the form of a data cube A data cube, such as sales, allows data to be modeled and viewed in multiple dimensions Dimension tables, such as item (item_name, brand, type), or time (day, week, month, quarter, year) Fact table contains measures (such as dollars_sold) and keys to each of the related dimension tables In data warehousing literature, an n-D base cube is called a base cuboid. The top most 0-D cuboid, which holds the highest-level of summarization, is called the apex cuboid. The lattice of cuboids forms a data cube 15" }, { "page_index": 184, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_016.png", "page_index": 184, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:46+07:00" }, "raw_text": "Cube: A Lattice of Cuboids all: 0-D (apex) cuboid time, item, location, supplier: 1-D cuboids time,item, time,location, time,supplier, item,location, item,supplier, location,supplier: 2-D cuboids time,item,location, time,item,supplier, item,location,supplier, time,location,supplier: 3-D cuboids time, item, location, supplier: 4-D (base) cuboid 16" }, { "page_index": 185, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_017.png", "page_index": 185, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:51+07:00" }, "raw_text": "Conceptual Modeling of Data Warehouses Modeling data warehouses: dimensions & measures Star schema: A fact table in the middle connected to a set of dimension tables Snowflake schema: A refinement of star schema where some dimensional hierarchy is normalized into a set of smaller dimension tables, forming a shape similar to snowflake Fact constellations: Multiple fact tables share dimension tables, viewed as a collection of stars, therefore called galaxy schema or fact constellation 17" }, { "page_index": 186, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_018.png", "page_index": 186, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:21:56+07:00" }, "raw_text": "Example of Star Schema [Figure: star schema with a central Sales Fact Table (time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales) joined to time (day, day_of_the_week, month, quarter, year), item (item_name, brand, type, supplier_type), branch (branch_name, branch_type), and location (street, city, state_or_province, country) dimension tables] 18" }, 
{ "page_index": 187, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_019.png", "page_index": 187, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:22:02+07:00" }, "raw_text": "Example of Snowflake Schema [Figure: like the star schema above, but with the item dimension normalized into item and supplier tables, and the location dimension normalized into location and city tables] 19" }, { "page_index": 188, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_020.png", "page_index": 188, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:22:07+07:00" }, "raw_text": "Example of Fact Constellation [Figure: a Sales Fact Table and a Shipping Fact Table (time_key, item_key, shipper_key, from_location, to_location; measures: dollars_cost, units_shipped) sharing the time, item, and location dimension tables; a shipper dimension (shipper_name, location_key, shipper_type) serves the shipping facts]" }, { "page_index": 189, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_021.png", "page_index": 189, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:22:12+07:00" }, "raw_text": "A Concept Hierarchy: Dimension (location) [Figure: hierarchy all > region (Europe, North America) > country (Germany, Spain, Canada, Mexico) > city (Frankfurt, Vancouver, Toronto) > office (L. Chan, M. Wind)] 21" }, { "page_index": 190, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_022.png", "page_index": 190, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:22:16+07:00" }, "raw_text": "Data Cube Measures: Three Categories Distributive: if the result derived by applying the function to n aggregate values is the same as that derived by applying the function on all the data without partitioning E.g., count(), sum(), min(), max() Algebraic: if it can be computed by an algebraic function with M arguments (where M is a bounded integer), each of which is obtained by applying a distributive aggregate function E.g., avg(), min_N(), standard_deviation() Holistic: if there is no constant bound on the storage size needed to describe a subaggregate. E.g., median(), mode(), rank() 22" }, 
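To make the three categories above concrete: with partitioned data, sum() combines per-partition results directly (distributive), avg() needs only the bounded pair (sum, count) per partition (algebraic), while median() has no such bounded summary (holistic). The data is invented:

```python
import statistics

partitions = [[3, 5, 8], [2, 2], [10, 1, 4]]

total = sum(sum(p) for p in partitions)            # distributive
count = sum(len(p) for p in partitions)
average = total / count                            # algebraic: 2 arguments
median = statistics.median([v for p in partitions for v in p])  # holistic
print(total, average, median)                      # 35 4.375 3.5
```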
{ "page_index": 191, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_023.png", "page_index": 191, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:22:31+07:00" }, "raw_text": "View of Warehouses and Hierarchies [Screenshot: a warehouse browser listing schemas, dimensions, and measures; it shows specification of hierarchies, e.g., a schema hierarchy day < {month < quarter; week} < year and a set-grouping hierarchy {1..10} < inexpensive] 23" }, { "page_index": 192, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_024.png", "page_index": 192, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:22:36+07:00" }, "raw_text": "Multidimensional Data Sales volume as a function of product, month, and region Dimensions: Product, Location, Time Hierarchical summarization paths: Industry > Category > Product; Region > Country > City > Office; Year > Quarter > Month > Day, Week 24" }, { "page_index": 193, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_025.png", "page_index": 193, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:22:42+07:00" }, "raw_text": "A Sample Data Cube [Figure: a cube of sales with dimensions Date (1Qtr-4Qtr), Product (TV, PC, VCR), and Country (U.S.A, Canada, Mexico), with sum cells along each face up to the total (All, All, All), e.g., total annual sales of TVs in U.S.A] 25" }, { "page_index": 194, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_026.png", "page_index": 194, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:22:46+07:00" }, "raw_text": "Cuboids Corresponding to the Cube all: 0-D (apex) cuboid product, date, country: 1-D cuboids product,date, product,country, date,country: 2-D cuboids product, date, country: 3-D (base) cuboid 26" }, { "page_index": 195, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_027.png", "page_index": 195, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:22:51+07:00" }, "raw_text": "Typical OLAP Operations Roll up (drill-up): summarize data by climbing up hierarchy or by dimension reduction Drill down (roll down): reverse of roll-up from higher level summary to lower level summary or detailed data, or introducing new dimensions Slice and dice: project and select Pivot (rotate): reorient the cube, visualization, 3D to series of 2D planes Other operations drill across: involving (across) more than one fact table drill through: through the bottom level of the cube to its back-end relational tables (using SQL) 27" }, 
{ "page_index": 196, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_028.png", "page_index": 196, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:04+07:00" }, "raw_text": "[Fig. 3.10 Typical OLAP Operations on a sales cube with dimensions location (cities), time (quarters), and item (types): dice for (location = Toronto or Vancouver) and (time = Q1 or Q2) and (item = home entertainment or computer); roll-up on location (from cities to countries); slice for time = Q1; drill-down on time (from quarters to months); pivot] 28" }, { "page_index": 197, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_029.png", "page_index": 197, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:11+07:00" }, "raw_text": "A Star-Net Query Model [Figure: radial lines for Customer Orders, Shipping Method, Time, Product, Location, Promotion, and Organization; each circle on a line is called a footprint, e.g., Time: DAILY, QTRLY, ANNUALY; Product: PRODUCT ITEM, PRODUCT GROUP, PRODUCT LINE] 29" }, { "page_index": 198, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_030.png", "page_index": 198, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:16+07:00" }, "raw_text": "Browsing a Data Cube [Screenshot: revenue of Outdoor Products by product line and region] Visualization OLAP capabilities Interactive manipulation 30" }, 
{ "page_index": 199, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_031.png", "page_index": 199, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:21+07:00" }, "raw_text": "Chapter 4: Data Warehousing and On-line Analytical Processing Data Warehouse: Basic Concepts Data Warehouse Modeling: Data Cube and OLAP Data Warehouse Design and Usage Data Warehouse Implementation Data Generalization by Attribute-Oriented Induction Summary 31" }, { "page_index": 200, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_032.png", "page_index": 200, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:25+07:00" }, "raw_text": "Design of Data Warehouse: A Business Analysis Framework Four views regarding the design of a data warehouse Top-down view allows selection of the relevant information necessary for the data warehouse Data source view exposes the information being captured, stored, and managed by operational systems Data warehouse view consists of fact tables and dimension tables Business query view sees the perspectives of data in the warehouse from the view of end-user 32" }, { "page_index": 201, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_033.png", "page_index": 201, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:31+07:00" }, "raw_text": "Data Warehouse Design Process Top-down, bottom-up approaches or a combination of both Top-down: Starts with overall design and planning (mature) Bottom-up: Starts with experiments and prototypes (rapid) From software engineering point of view Waterfall: structured and systematic analysis at each step before proceeding to the next Spiral: rapid generation of increasingly functional systems, short turnaround time Typical data warehouse design process Choose a business process to model, e.g., orders, invoices, etc. Choose the grain (atomic level of data) of the business process Choose the dimensions that will apply to each fact table record Choose the measure that will populate each fact table record 33" }, { "page_index": 202, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_034.png", "page_index": 202, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:35+07:00" }, "raw_text": "Data Warehouse Development [Figure: define a high-level corporate data model, then build data marts and an enterprise data warehouse through successive model refinement, leading to distributed data marts and a multi-tier data warehouse] 34" }, 
define a high-level corporate data model, then refine the model in successive steps as data marts and the enterprise warehouse are built.) 34" }, { "page_index": 203, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_035.png", "page_index": 203, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:41+07:00" }, "raw_text": "Data Warehouse Usage. Three kinds of data warehouse applications. Information processing: supports querying, basic statistical analysis, and reporting using crosstabs, tables, charts and graphs. Analytical processing: multidimensional analysis of data warehouse data; supports basic OLAP operations, slice-dice, drilling, pivoting. Data mining: knowledge discovery from hidden patterns; supports associations, constructing analytical models, performing classification and prediction, and presenting the mining results using visualization tools. 35" }, { "page_index": 204, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_036.png", "page_index": 204, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:45+07:00" }, "raw_text": "From On-Line Analytical Processing (OLAP) to On-Line Analytical Mining (OLAM). Why online analytical mining? High quality of data in data warehouses: DW contains integrated, consistent, cleaned data. Available information processing structure surrounding data warehouses: ODBC, OLEDB, Web accessing, service facilities, reporting and OLAP tools. OLAP-based exploratory data analysis: mining with drilling, dicing, pivoting, etc. On-line selection of data mining functions: integration and swapping of multiple mining functions, algorithms, and tasks. 36" }, { "page_index": 205, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_037.png", "page_index": 205, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:50+07:00" }, "raw_text": "Chapter 4: Data Warehousing and On-line Analytical Processing Data Warehouse: Basic Concepts Data Warehouse Modeling: Data Cube and OLAP Data Warehouse Design and Usage Data Warehouse Implementation Data Generalization by Attribute-Oriented Induction Summary 37" }, { "page_index": 206, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_038.png", "page_index": 206, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:55+07:00" }, "raw_text": "Efficient Data Cube Computation. Data cube can be viewed as a lattice of cuboids. The bottom-most cuboid is the base cuboid; the top-most cuboid (apex) contains only one cell. How many cuboids are there in an n-dimensional cube where dimension i has L_i levels? T = \prod_{i=1}^{n} (L_i + 1).
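A hedged sketch of the cuboid-count formula above: for an n-dimensional cube where dimension i has L_i hierarchy levels (excluding the virtual "all" level), the total number of cuboids is the product of (L_i + 1). The example level counts are illustrative assumptions.

```python
# Total cuboids in a data cube: prod over dimensions of (levels + 1).
from math import prod

def total_cuboids(levels):
    """levels[i] = number of hierarchy levels L_i of dimension i (without 'all')."""
    return prod(l + 1 for l in levels)

# e.g., time with 4 levels, item with 2, location with 3 (assumed numbers):
print(total_cuboids([4, 2, 3]))  # (4+1)*(2+1)*(3+1) = 60
```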
Materialization of data cube: materialize every cuboid (full materialization), none (no materialization), or some (partial materialization). Selection of which cuboids to materialize: based on size, sharing, access frequency, etc. 38" }, { "page_index": 207, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_039.png", "page_index": 207, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:23:59+07:00" }, "raw_text": "The \"Compute Cube\" Operator. Cube definition and computation in DMQL: define cube sales [item, city, year]: sum(sales_in_dollars); compute cube sales. Transform it into a SQL-like language (with a new operator cube by, introduced by Gray et al.'96): SELECT item, city, year, SUM (amount) FROM SALES CUBE BY item, city, year. Need to compute the following group-bys: (city, item, year), (city, item), (city, year), (item, year), (city), (item), (year), (). (Figure: the corresponding lattice of group-bys for dimensions (date, product, customer): (date, product), (date, customer), (product, customer), (date), (product), (customer), ().) 39" }, { "page_index": 208, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_040.png", "page_index": 208, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:24:09+07:00" }, "raw_text": "Indexing OLAP Data: Bitmap Index. Index on a particular column; each value in the column has a bit vector: bit-ops are fast. The length of the bit vector: # of records in the base table. The i-th bit is set if the i-th row of the base table has the value for the indexed column. Not suitable for high-cardinality domains. A recent bit compression technique, Word-Aligned Hybrid (WAH), makes it work for high-cardinality domains as well [Wu, et al. TODS'06]. Base table: Cust | Region | Type: C1 Asia Retail; C2 Europe Dealer; C3 Asia Dealer; C4 America Retail; C5 Europe Dealer. Index on Region: RecID | Asia | Europe | America: 1: 1 0 0; 2: 0 1 0; 3: 1 0 0; 4: 0 0 1; 5: 0 1 0. Index on Type: RecID | Retail | Dealer: 1: 1 0; 2: 0 1; 3: 0 1; 4: 1 0; 5: 0 1. 40" },
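A minimal sketch of the bitmap index described on the slide above: one bit vector per distinct value of the indexed column, with bitwise AND answering conjunctive predicates in a single operation. Pure-Python integers stand in for bit vectors; the table rows are the slide's example.

```python
def build_bitmap_index(rows, col):
    """Map each distinct value of column `col` to a bit vector (int)."""
    index = {}
    for i, row in enumerate(rows):
        v = row[col]
        index[v] = index.get(v, 0) | (1 << i)  # set the i-th bit
    return index

rows = [  # base table from the slide: Cust, Region, Type
    {"Cust": "C1", "Region": "Asia", "Type": "Retail"},
    {"Cust": "C2", "Region": "Europe", "Type": "Dealer"},
    {"Cust": "C3", "Region": "Asia", "Type": "Dealer"},
    {"Cust": "C4", "Region": "America", "Type": "Retail"},
    {"Cust": "C5", "Region": "Europe", "Type": "Dealer"},
]
region = build_bitmap_index(rows, "Region")
rtype = build_bitmap_index(rows, "Type")

# "Region = Asia AND Type = Dealer" is a single bit-op:
hits = region["Asia"] & rtype["Dealer"]
print([rows[i]["Cust"] for i in range(len(rows)) if hits >> i & 1])  # ['C3']
```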
{ "page_index": 209, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_041.png", "page_index": 209, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:24:15+07:00" }, "raw_text": "Indexing OLAP Data: Join Indices. Join index: JI(R-id, S-id) where R (R-id, ...) joins S (S-id, ...). Traditional indices map the values to a list of record ids. A join index materializes a relational join in a JI file and speeds up relational joins. In data warehouses, a join index relates the values of the dimensions of a star schema to rows in the fact table. E.g., fact table: Sales, and two dimensions: city and product. A join index on city maintains, for each distinct city, a list of R-IDs of the tuples recording the sales in the city. Join indices can span multiple dimensions. (Figure: join index linking location values such as Main Street, and item values such as Sony-TV, to sales tuples T57, T238, T459, T884.) 41" }, { "page_index": 210, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_042.png", "page_index": 210, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:24:21+07:00" }, "raw_text": "Efficient Processing of OLAP Queries. Determine which operations should be performed on the available cuboids: transform drill, roll, etc. into corresponding SQL and/or OLAP operations, e.g., dice = selection + projection. Determine which materialized cuboid(s) should be selected for the OLAP op. Let the query to be processed be on {brand, province_or_state} with the condition \"year = 2004\", and there are 4 materialized cuboids available: 1) {year, item_name, city} 2) {year, brand, country} 3) {year, brand, province_or_state} 4) {item_name, province_or_state} where year = 2004. Which should be selected to process the query? Explore indexing structures and compressed vs. dense array structures in MOLAP. 42" }, { "page_index": 211, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_043.png", "page_index": 211, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:24:25+07:00" }, "raw_text": "OLAP Server Architectures. Relational OLAP (ROLAP): uses relational or extended-relational DBMS to store and manage warehouse data, plus OLAP middleware; includes optimization of the DBMS backend, implementation of aggregation navigation logic, and additional tools and services; greater scalability. Multidimensional OLAP (MOLAP): sparse array-based multidimensional storage engine; fast indexing to pre-computed summarized data. Hybrid OLAP (HOLAP) (e.g., Microsoft SQL Server): flexibility, e.g., low level: relational, high level: array. Specialized SQL servers (e.g., Redbricks): specialized support for SQL queries over star/snowflake schemas. 43" }, { "page_index": 212, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_044.png", "page_index": 212, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:24:28+07:00" }, "raw_text": "Chapter 4: Data Warehousing and On-line Analytical Processing Data Warehouse: Basic Concepts Data Warehouse Modeling: Data Cube and OLAP Data Warehouse Design and Usage Data Warehouse Implementation Data Generalization by Attribute-Oriented Induction Summary 44" }, { "page_index": 213, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_045.png", "page_index": 213, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:24:33+07:00" }, "raw_text":
"Attribute-Oriented lnduction Proposed in 1989 (KDD `89 workshop) Not confined to categorical data nor particular measures How it is done? Collect the task-relevant data (initial relation) using a relational database query Perform generalization by attribute removal or attribute generalization Apply aggregation by merging identical, generalized tuples and accumulating their respective counts Interaction with i users for knowledge presentation 45" }, { "page_index": 214, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_046.png", "page_index": 214, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:24:38+07:00" }, "raw_text": "Attribute-Oriented Induction: An Example Example: Describe general characteristics of graduate students in the University database Step 1. Fetch relevant set of data using an SQL statement, e.g., Select * (i.e., name, gender, major, birth_place, birth date, residence, phone#, gpa) from student where student_status in {\"Msc\",\"MBA\",\"PhD\" } Step 2. Perform attribute-oriented induction Step 3. Present results in generalized relation, cross-tab or rule forms 46" }, { "page_index": 215, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_047.png", "page_index": 215, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:24:48+07:00" }, "raw_text": "Class Characterization: An Example Name Gender Major Birth-Place Birth date Residence Phone # GPA Jim M CS Vancouver,BC 8-12-76 3511 Main St.. 687-4598 3.67 Initial Woodman Canada Richmond Relation Scott M CS Montreal, Que 28-7-75 345 1st Ave. 253-9106 3.70 Lachance Canada Richmond Laura Lee F Physics Seattle, WA, USA 25-8-70 125 Austin Ave. 420-5232 3.83 Burnaby Removed Retained Sci,Eng, Country Age range City Removed Excl, Bus VG,. 
Prime generalized relation: Gender | Major | Birth_region | Age_range | Residence | GPA | Count: M, Science, Canada, 20-25, Richmond, Very-good, 16; F, Science, Foreign, 25-30, Burnaby, Excellent, 22; ... Crosstab (Gender by Birth_Region): Canada | Foreign | Total: M: 16, 14, 30; F: 10, 22, 32; Total: 26, 36, 62. 47" }, { "page_index": 216, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_048.png", "page_index": 216, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:24:54+07:00" }, "raw_text": "Basic Principles of Attribute-Oriented Induction. Data focusing: task-relevant data, including dimensions; the result is the initial relation. Attribute removal: remove attribute A if there is a large set of distinct values for A but (1) there is no generalization operator on A, or (2) A's higher-level concepts are expressed in terms of other attributes. Attribute generalization: if there is a large set of distinct values for A, and there exists a set of generalization operators on A, then select an operator and generalize A. Attribute-threshold control: typically 2-8, specified/default. Generalized relation threshold control: controls the final relation/rule size. 48" }, { "page_index": 217, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_049.png", "page_index": 217, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:24:59+07:00" }, "raw_text": "Attribute-Oriented Induction: Basic Algorithm. InitialRel: query processing of task-relevant data, deriving the initial relation. PreGen: based on the analysis of the number of distinct values in each attribute, determine a generalization plan for each attribute: removal? or how high to generalize? PrimeGen: based on the PreGen plan, perform generalization to the right level to derive a \"prime generalized relation\", accumulating the counts. Presentation: user interaction: (1) adjust levels by drilling, (2) pivoting, (3) mapping into rules, cross tabs, visualization presentations. 49" },
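A hedged sketch of the AOI steps just described: generalize attributes via concept-hierarchy mappings, remove ungeneralizable ones, then merge identical generalized tuples while accumulating counts. The hierarchies and the tiny student list are illustrative assumptions modeled on the example slides.

```python
# Attribute-oriented induction, minimal version: removal + generalization + merge.
from collections import Counter

MAJOR_TO_FIELD = {"CS": "Science", "Physics": "Science"}           # assumed hierarchy
CITY_TO_REGION = {"Vancouver": "Canada", "Montreal": "Canada",
                  "Seattle": "Foreign"}                             # assumed hierarchy

def generalize(tuples):
    out = Counter()
    for name, gender, major, birth_city in tuples:
        # attribute removal: drop `name` (no useful generalization operator);
        # attribute generalization: climb major -> field, city -> birth region
        key = (gender, MAJOR_TO_FIELD[major], CITY_TO_REGION[birth_city])
        out[key] += 1                      # merge identical generalized tuples
    return out

students = [("Jim", "M", "CS", "Vancouver"),
            ("Scott", "M", "CS", "Montreal"),
            ("Laura", "F", "Physics", "Seattle")]
for row, count in generalize(students).items():
    print(row, "count =", count)
```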
{ "page_index": 218, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_050.png", "page_index": 218, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:04+07:00" }, "raw_text": "Presentation of Generalized Results. Generalized relation: relations where some or all attributes are generalized, with counts or other aggregation values accumulated. Cross tabulation: mapping results into cross-tabulation form (similar to contingency tables). Visualization techniques: pie charts, bar charts, curves, cubes, and other visual forms. Quantitative characteristic rules: mapping generalized results into characteristic rules with quantitative information associated with them, e.g., grad(x) ∧ male(x) ⇒ birth_region(x) = \"Canada\" [t: 53%] ∨ birth_region(x) = \"foreign\" [t: 47%]. 50" }, { "page_index": 219, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_051.png", "page_index": 219, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:08+07:00" }, "raw_text": "Mining Class Comparisons. Comparison: comparing two or more classes. Method: partition the set of relevant data into the target class and the contrasting class(es); generalize both classes to the same high-level concepts; compare tuples with the same high-level descriptions; present for every tuple its description and two measures: support (distribution within a single class) and comparison (distribution between classes); highlight the tuples with strong discriminant features. Relevance analysis: find attributes (features) which best distinguish different classes. 51" }, { "page_index": 220, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_052.png", "page_index": 220, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:14+07:00" }, "raw_text": "Concept Description vs. Cube-Based OLAP. Similarity: data generalization; presentation of data summarization at multiple levels of abstraction; interactive drilling, pivoting, slicing and dicing. Differences: OLAP has systematic preprocessing, is query independent, and can drill down to a rather low level; AOI has automated desired-level allocation and may perform dimension relevance analysis/ranking when there are many relevant dimensions; AOI works on data that are not in relational form. 52" }, { "page_index": 221, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_053.png", "page_index": 221, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:17+07:00" }, "raw_text": "Chapter 4: Data Warehousing and On-line Analytical Processing Data Warehouse: Basic Concepts Data Warehouse Modeling: Data Cube and OLAP Data Warehouse Design and Usage Data Warehouse Implementation Data Generalization by Attribute-Oriented Induction Summary 53" }, { "page_index": 222, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_054.png", "page_index": 222, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:22+07:00" }, "raw_text": "Summary. Data warehousing: a multi-dimensional model of a data warehouse; a data cube consists of dimensions & measures; star schema, snowflake schema, fact constellations; OLAP operations: drilling, rolling, slicing, dicing and pivoting. Data Warehouse Architecture, Design, and Usage:
multi-tiered architecture; business analysis design framework; information processing, analytical processing, data mining, OLAM (Online Analytical Mining). Implementation: efficient computation of data cubes; partial vs. full vs. no materialization; indexing OLAP data: bitmap index and join index; OLAP query processing; OLAP servers: ROLAP, MOLAP, HOLAP. Data generalization: attribute-oriented induction. 54" }, { "page_index": 223, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_055.png", "page_index": 223, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:28+07:00" }, "raw_text": "References (I) S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R. Ramakrishnan, and S. Sarawagi. On the computation of multidimensional aggregates. VLDB'96. D. Agrawal, A. E. Abbadi, A. Singh, and T. Yurek. Efficient view maintenance in data warehouses. SIGMOD'97. R. Agrawal, A. Gupta, and S. Sarawagi. Modeling multidimensional databases. ICDE'97. S. Chaudhuri and U. Dayal. An overview of data warehousing and OLAP technology. ACM SIGMOD Record, 26:65-74, 1997. E. F. Codd, S. B. Codd, and C. T. Salley. Beyond decision support. Computer World, 27, July 1993. J. Gray, et al. Data cube: A relational aggregation operator generalizing group-by, cross-tab and sub-totals. Data Mining and Knowledge Discovery, 1:29-54, 1997. A. Gupta and I. S. Mumick. Materialized Views: Techniques, Implementations, and Applications. MIT Press, 1999. J. Han. Towards on-line analytical mining in large databases. ACM SIGMOD Record, 27:97-107, 1998. V. Harinarayan, A. Rajaraman, and J. D. Ullman. Implementing data cubes efficiently. SIGMOD'96. J. Hellerstein, P. Haas, and H. Wang. Online aggregation. SIGMOD'97. 55" }, { "page_index": 224, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_056.png", "page_index": 224, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:35+07:00" }, "raw_text": "References (II) C. Imhoff, N. Galemmo, and J. G. Geiger. Mastering Data Warehouse Design: Relational and Dimensional Techniques. John Wiley, 2003. W. H. Inmon. Building the Data Warehouse. John Wiley, 1996. R. Kimball and M. Ross. The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling. 2ed. John Wiley, 2002. P. O'Neil and G. Graefe. Multi-table joins through bitmapped join indices. SIGMOD Record, 24:8-11, Sept. 1995. P. O'Neil and D. Quass. Improved query performance with variant indexes. SIGMOD'97. Microsoft. OLEDB for OLAP programmer's reference version 1.0. In http://www.microsoft.com/data/oledb/olap, 1998. S. Sarawagi and M. Stonebraker. Efficient organization of large multidimensional arrays. ICDE'94. A. Shoshani. OLAP and statistical databases: Similarities and differences. PODS'00. D. Srivastava, S. Dar, H. V. Jagadish, and A. V. Levy. Answering queries with aggregation using views. VLDB'96. P. Valduriez. Join indices. ACM Trans. Database Systems, 12:218-246, 1987. J. Widom. Research problems in data warehousing. CIKM'95. K. Wu, E. Otoo, and A. Shoshani, Optimal Bitmap Indices with Efficient Compression, ACM Trans.
on Database Systems (TODS), 31(1): 1-38, 2006. 56" }, { "page_index": 225, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_057.png", "page_index": 225, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:37+07:00" }, "raw_text": "Surplus Slides 57" }, { "page_index": 226, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_3/slide_058.png", "page_index": 226, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:42+07:00" }, "raw_text": "Compression of Bitmap Indices. Bitmap indexes must be compressed to reduce I/O costs and minimize CPU usage, since the majority of the bits are 0's. Two compression schemes: Byte-aligned Bitmap Code (BBC) and Word-Aligned Hybrid (WAH) code. Time and space required to operate on a compressed bitmap is proportional to the total size of the bitmap. Optimal on attributes of low cardinality as well as those of high cardinality. WAH outperforms BBC by about a factor of two. 58" }, { "page_index": 227, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_001.png", "page_index": 227, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:45+07:00" }, "raw_text": "Data Mining: Concepts and Techniques (3rd ed.) - Chapter 5 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University @2010 Han, Kamber & Pei. All rights reserved.
1" }, { "page_index": 228, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_002.png", "page_index": 228, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:48+07:00" }, "raw_text": "Chapter 5: Data Cube Technology. Data Cube Computation: Preliminary Concepts. Data Cube Computation Methods. Processing Advanced Queries by Exploring Data Cube Technology. Multidimensional Data Analysis in Cube Space. Summary 2" }, { "page_index": 229, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_003.png", "page_index": 229, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:25:53+07:00" }, "raw_text": "Data Cube: A Lattice of Cuboids. all: 0-D (apex) cuboid. time, item, location, supplier: 1-D cuboids. (time, item), (time, location), (time, supplier), (item, location), (item, supplier), (location, supplier): 2-D cuboids. (time, item, location), (time, item, supplier), (time, location, supplier), (item, location, supplier): 3-D cuboids. (time, item, location, supplier): 4-D (base) cuboid. 3" }, { "page_index": 230, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_004.png", "page_index": 230, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:02+07:00" }, "raw_text": "Data Cube: A Lattice of Cuboids. (Figure: the same lattice from the 0-D apex cuboid down to the 4-D base cuboid (time, item, location, supplier).) Base vs. aggregate cells; ancestor vs. descendant cells; parent vs. child cells. E.g.: 1. (9/15, milk, Urbana, Dairy_land); 2. (9/15, milk, Urbana, *); 3. (*, milk, Urbana, *); 4. (*, milk, Chicago, *); 5. (*, milk, *, *). 4" }, { "page_index": 231, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_005.png", "page_index": 231, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:07+07:00" }, "raw_text": "Cube Materialization: Full Cube vs. Iceberg Cube. Full cube vs. iceberg cube: compute cube sales_iceberg as select month, city, customer_group, count(*) from salesInfo cube by month, city, customer_group having count(*) >= min_support. Computing only the cuboid cells whose measure satisfies the iceberg condition. Only a small portion of cells may be \"above the water\" in a sparse cube. Avoid explosive growth: a cube with 100 dimensions and 2 base cells: (a1, a2, ..., a100), (b1, b2, ..., b100). What about \"having count >= 2\"? 5" },
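A brute-force sketch of iceberg cube computation as defined on the slide above: enumerate every aggregate cell and keep only those whose count meets min_sup. Fine for tiny examples only; real systems use BUC or Star-Cubing rather than full enumeration. The 3-dimensional base cells are a scaled-down stand-in for the slide's 100-dimensional example.

```python
# Iceberg cube by exhaustive cell enumeration (exponential; demo only).
from itertools import product
from collections import Counter

def iceberg_cube(tuples, min_sup):
    cells = Counter()
    n = len(tuples[0])
    for t in tuples:
        # every way of replacing a subset of positions with '*' is a cell
        for mask in product([False, True], repeat=n):
            cells[tuple('*' if m else v for v, m in zip(t, mask))] += 1
    return {c: k for c, k in cells.items() if k >= min_sup}

# The slide's point: two 100-d base cells would generate on the order of
# 2 * 2^100 aggregate cells in a full cube; with "having count >= 2" only
# cells shared by both base cells survive.
base = [("a1", "a2", "a3"), ("a1", "b2", "b3")]
print(iceberg_cube(base, 2))  # only cells agreeing on a1 (or all '*') remain
```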
{ "page_index": 232, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_006.png", "page_index": 232, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:13+07:00" }, "raw_text": "Iceberg Cube, Closed Cube & Cube Shell. Is iceberg cube good enough? 2 base cells: {(a1, a2, a3, ..., a100): 10, (a1, a2, b3, ..., b100): 10}. How many cells will the iceberg cube have if having count(*) >= 10? Hint: a huge but tricky number! Closed cube: a closed cell c is one for which there exists no cell d s.t. d is a descendant of c and d has the same measure value as c; a closed cube consists of only closed cells. What is the closed cube of the above base cuboid? Hint: only 3 cells. Cube shell: precompute only the cuboids involving a small # of dimensions, e.g., 3. For (A1, A2, ..., A10), how many combinations to compute? More dimension combinations will need to be computed on the fly. 6" }, { "page_index": 233, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_007.png", "page_index": 233, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:17+07:00" }, "raw_text": "Roadmap for Efficient Computation. General cube computation heuristics (Agarwal et al.'96). Computing full/iceberg cubes: 3 methodologies. Bottom-Up: Multi-Way array aggregation (Zhao, Deshpande & Naughton, SIGMOD'97). Top-down: BUC (Beyer & Ramakrishnan, SIGMOD'99); H-cubing technique (Han, Pei, Dong & Wang: SIGMOD'01). Integrating Top-Down and Bottom-Up: Star-cubing algorithm (Xin, Han, Li & Wah: VLDB'03). High-dimensional OLAP: A Minimal Cubing Approach (Li, et al. VLDB'04). Computing alternative kinds of cubes: partial cube, closed cube, approximate cube, etc. 7" }, { "page_index": 234, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_008.png", "page_index": 234, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:23+07:00" }, "raw_text": "General Heuristics (Agarwal et al. VLDB'96).
Sorting, hashing, and grouping operations are applied to the dimension attributes in order to reorder and cluster related tuples. Aggregates may be computed from previously computed aggregates rather than from the base fact table. Smallest-child: computing a cuboid from the smallest, previously computed cuboid. Cache-results: caching results of a cuboid from which other cuboids are computed, to reduce disk I/Os. Amortize-scans: computing as many cuboids as possible at the same time to amortize disk reads. Share-sorts: sharing sorting costs across multiple cuboids when a sort-based method is used. Share-partitions: sharing the partitioning cost across multiple cuboids when hash-based algorithms are used. 8" }, { "page_index": 235, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_009.png", "page_index": 235, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:26+07:00" }, "raw_text": "Chapter 5: Data Cube Technology. Data Cube Computation: Preliminary Concepts. Data Cube Computation Methods. Processing Advanced Queries by Exploring Data Cube Technology. Multidimensional Data Analysis in Cube Space. Summary 9" }, { "page_index": 236, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_010.png", "page_index": 236, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:29+07:00" }, "raw_text": "Data Cube Computation Methods Multi-Way Array Aggregation BUC Star-Cubing High-Dimensional OLAP 10" }, { "page_index": 237, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_011.png", "page_index": 237, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:34+07:00" }, "raw_text": "Multi-Way Array Aggregation. Array-based \"bottom-up\" algorithm, using multi-dimensional chunks. No direct tuple comparisons. Intermediate aggregate values are re-used for computing ancestor cuboids. Cannot do Apriori pruning: no iceberg optimization. (Figure: cuboid lattice all; A, B, C; AB, AC, BC; ABC.) 11" }, { "page_index": 238, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_012.png", "page_index": 238, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:42+07:00" }, "raw_text": "Multi-way Array Aggregation for Cube Computation (MOLAP). Partition arrays into chunks (a small subcube which fits in memory). Compressed sparse array addressing: (chunk_id, offset). Compute aggregates in \"multiway\" by visiting cube cells in the order which minimizes the # of times to visit each cell, and reduces memory access and storage cost. (Figure: a 4x4x4 array over dimensions A (a0-a3), B (b0-b3), C (c0-c3), partitioned into 64 chunks numbered 1-64.) What is the best traversing order to do multi-way aggregation? 12" },
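A minimal sketch of the multiway idea on the slides above: one scan over a base 3-D array simultaneously accumulates all three 2-D planes, re-using each cell visit. Chunking and the memory-minimizing traversal order are omitted for brevity; the toy array values are assumptions.

```python
# Multiway aggregation: 3-D base array -> AB, AC, BC planes in a single pass.
def multiway_2d(cube):
    """cube[a][b][c] -> (AB, AC, BC) plane aggregates."""
    A, B, C = len(cube), len(cube[0]), len(cube[0][0])
    ab = [[0] * B for _ in range(A)]
    ac = [[0] * C for _ in range(A)]
    bc = [[0] * C for _ in range(B)]
    for a in range(A):
        for b in range(B):
            for c in range(C):
                v = cube[a][b][c]   # each base cell is visited once...
                ab[a][b] += v       # ...and contributes to all three
                ac[a][c] += v       # ancestor planes in the same pass
                bc[b][c] += v
    return ab, ac, bc

cube = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]   # 2x2x2 toy array (assumed values)
ab, ac, bc = multiway_2d(cube)
print(ab, ac, bc)
```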
{ "page_index": 239, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_013.png", "page_index": 239, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:52+07:00" }, "raw_text": "Multi-way Array Aggregation for Cube Computation (3-D to 2-D). (Figure: the chunked 3-D array and its 2-D plane aggregates AB, AC, BC being filled during the scan.) The best order is the one that minimizes the memory requirement and reduces I/Os. 13" }, { "page_index": 240, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_014.png", "page_index": 240, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:26:59+07:00" }, "raw_text": "Multi-way Array Aggregation for Cube Computation (2-D to 1-D). (Figure: aggregating the 2-D planes AB, AC, BC into the 1-D cuboids A, B, C.) 14" }, { "page_index": 241, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_015.png", "page_index": 241, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:03+07:00" }, "raw_text": "Multi-Way Array Aggregation for Cube Computation (Method Summary). Method: the planes should be sorted and computed according to their size in ascending order. Idea: keep the smallest plane in main memory, fetch and compute only one chunk at a time for the largest plane. Limitation of the method: computes well only for a small number of dimensions. If there are a large number of dimensions, \"top-down\" computation and iceberg cube computation methods can be explored. 15" }, { "page_index": 242, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_016.png", "page_index": 242, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:06+07:00" }, "raw_text": "Data Cube Computation Methods Multi-Way Array Aggregation BUC Star-Cubing High-Dimensional OLAP 16" }, { "page_index": 243, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_017.png", "page_index": 243, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:13+07:00" }, "raw_text": "Bottom-Up Computation (BUC). BUC (Beyer & Ramakrishnan, SIGMOD'99). Bottom-up cube computation (Note: top-down in our view!).
Divides dimensions into partitions and facilitates iceberg pruning. If a partition does not satisfy min_sup, its descendants can be pruned. If minsup = 1, this computes the full CUBE! No simultaneous aggregation. (Figure: BUC's processing tree over the cuboid lattice from all down to ABCD, with cuboids numbered in processing order: 1 all; 2 A, 10 B, 14 C, 16 D; 3 AB, 7 AC, 9 AD, 11 BC, 13 BD, 15 CD; 4 ABC, 6 ABD, 8 ACD, 12 BCD; 5 ABCD.) 17" }, { "page_index": 244, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_018.png", "page_index": 244, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:18+07:00" }, "raw_text": "BUC: Partitioning. Usually, the entire data set can't fit in main memory. Sort distinct values, partition into blocks that fit, continue processing. Optimizations: partitioning (external sorting, hashing, counting sort); ordering dimensions to encourage pruning (cardinality, skew, correlation); collapsing duplicates (can't do holistic aggregates anymore!). 18" },
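A hedged sketch of BUC's recursive partition-and-prune idea from the two slides above: expand dimensions in order, partition tuples by the current dimension's values, and recurse only into partitions meeting min_sup (Apriori pruning). Simplified to a count measure, with no sorting or duplicate-collapsing optimizations; the sample tuples are assumptions.

```python
# BUC-style iceberg cube: recursive partitioning with min_sup pruning.
from collections import defaultdict

def buc(tuples, dims, min_sup, prefix=(), out=None):
    if out is None:
        out = {}
    out[prefix] = len(tuples)                    # aggregate cell for this prefix
    for d in range(len(dims)):
        parts = defaultdict(list)
        for t in tuples:
            parts[t[dims[d]]].append(t)          # partition on dimension dims[d]
        for val, part in parts.items():
            if len(part) >= min_sup:             # a failing partition is pruned
                buc(part, dims[d + 1:], min_sup, # together with all descendants
                    prefix + ((dims[d], val),), out)
    return out

data = [("a1", "b1", "c1"), ("a1", "b1", "c2"), ("a2", "b2", "c1")]
for cell, cnt in buc(data, (0, 1, 2), 2).items():
    print(cell, cnt)   # only cells with count >= 2 are expanded
```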
{ "page_index": 245, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_019.png", "page_index": 245, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:22+07:00" }, "raw_text": "Data Cube Computation Methods Multi-Way Array Aggregation BUC Star-Cubing High-Dimensional OLAP 19" }, { "page_index": 246, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_020.png", "page_index": 246, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:27+07:00" }, "raw_text": "Star-Cubing: An Integrating Method. D. Xin, J. Han, X. Li, B. W. Wah, Star-Cubing: Computing Iceberg Cubes by Top-Down and Bottom-Up Integration, VLDB'03. Explore shared dimensions: e.g., dimension A is the shared dimension of ACD and AD; ABD/AB means cuboid ABD has shared dimensions AB. Allows for shared computations: e.g., cuboid AB is computed simultaneously as ABD. Aggregate in a top-down manner but with the bottom-up sub-layer underneath, which allows Apriori pruning. Shared dimensions grow in bottom-up fashion. (Figure: lattice annotated with shared dimensions: C/C, D/D, AC/AC, AD/A, BC/BC, BD/B, CD, ACD/A, ABC/ABC, ABD/AB, BCD, ABCD/all.) 20" }, { "page_index": 247, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_021.png", "page_index": 247, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:32+07:00" }, "raw_text": "Iceberg Pruning in Shared Dimensions. Anti-monotonic property of shared dimensions: if the measure is anti-monotonic, and if the aggregate value on a shared dimension does not satisfy the iceberg condition, then all the cells extended from this shared dimension cannot satisfy the condition either. Intuition: if we can compute the shared dimensions before the actual cuboid, we can use them to do Apriori pruning. Problem: how to prune while still aggregating simultaneously on multiple dimensions? 21" }, { "page_index": 248, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_022.png", "page_index": 248, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:35+07:00" }, "raw_text": "Cell Trees. Use a tree structure similar to an H-tree to represent cuboids, to save memory. Keep a count at each node. Traverse the tree to retrieve a particular tuple. (Figure: tree with root: 100; children a1: 30, a2: 20, a3: 20, a4: 20; under a1: b1: 10, b2: 10, b3: 10; under b1: c2: 5 with d1: 2, d2: 3.) 22" }, { "page_index": 249, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_023.png", "page_index": 249, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:43+07:00" }, "raw_text": "Star Attributes and Star Nodes. Intuition: if a single-dimensional aggregate on an attribute value p does not satisfy the iceberg condition, it is useless to distinguish such values during the iceberg computation. E.g., b2, b3, b4, c1, c2, c4, d1, d2, d3. Solution: replace such attribute values by star (*) attributes; the corresponding nodes in the cell tree are star nodes. Table: A | B | C | D | Count: a1 b1 c1 d1 1; a1 b1 c4 d3 1; a1 b2 c2 d2 1; a2 b3 c3 d4 1; a2 b4 c3 d4 1. 23" },
{ "page_index": 250, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_024.png", "page_index": 250, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:51+07:00" }, "raw_text": "Example: Star Reduction. Suppose minsup = 2. Perform one-dimensional aggregation. Replace attribute values whose count < 2 with *, and collapse all *'s together. Input tuples: A | B | C | D | Count: a1 b1 * * 1; a1 b1 * * 1; a1 * * * 1; a2 * c3 d4 1; a2 * c3 d4 1. Resulting table, with all such attribute values replaced by the star attribute: A | B | C | D | Count: a1 b1 * * 2; a1 * * * 1; a2 * c3 d4 2. With regard to the iceberg computation, this new table is a lossless compression of the original table. 24" }, { "page_index": 251, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_025.png", "page_index": 251, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:27:57+07:00" }, "raw_text": "Star Tree. Given the new compressed table (A | B | C | D | Count: a1 b1 * * 2; a1 * * * 1; a2 * c3 d4 2), it is possible to construct the corresponding cell tree, called a star tree (root: 5; a1: 3 with b*: 1 and b1: 2; a2: 2 with b*: 2, c3: 2, d4: 2). Keep a star table at the side for easy lookup of star attributes (b2 -> *, b3 -> *, b4 -> *, c1 -> *, c2 -> *, d1 -> *). The star tree is a lossless compression of the original cell tree. 25" },
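A sketch of the star reduction walked through above: one-dimensional aggregation per attribute, replacement of values with count below minsup by '*', then collapsing identical generalized tuples while accumulating counts, which is lossless with respect to the iceberg computation. The rows reproduce the slide's example.

```python
# Star reduction: per-column frequency counting, '*' substitution, collapsing.
from collections import Counter

def star_reduce(tuples, min_sup):
    n = len(tuples[0])
    freq = [Counter(t[i] for t in tuples) for i in range(n)]  # 1-D aggregates
    reduced = Counter()
    for t in tuples:
        reduced[tuple(v if freq[i][v] >= min_sup else '*'
                      for i, v in enumerate(t))] += 1         # collapse
    return reduced

rows = [("a1", "b1", "c1", "d1"), ("a1", "b1", "c4", "d3"),
        ("a1", "b2", "c2", "d2"), ("a2", "b3", "c3", "d4"),
        ("a2", "b4", "c3", "d4")]
for t, cnt in star_reduce(rows, 2).items():
    print(t, cnt)
# -> (a1,b1,*,*): 2, (a1,*,*,*): 1, (a2,*,c3,d4): 2, matching the slide
```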
{ "page_index": 252, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_026.png", "page_index": 252, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:02+07:00" }, "raw_text": "Star-Cubing Algorithm: DFS on Lattice Tree. (Figure: the cuboid lattice BCD, ACD/A, ABD/AB, ABC/ABC, ABCD with the base star tree (root: 5; a1: 3; a2: 2) and the per-cuboid trees carrying the reduced counts.) 26" }, { "page_index": 253, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_027.png", "page_index": 253, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:07+07:00" }, "raw_text": "Multi-Way Aggregation. (Figure: simultaneous construction of the BCD-, ACD/A-, ABD/AB-, and ABC/ABC-trees while traversing the base tree: root: 5; a1: 3; a2: 2; a1CD/a1: 3; a1b*D/a1b*: 1; a1b*c*/a1b*c*: 1.) 27" }, { "page_index": 254, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_028.png", "page_index": 254, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:18+07:00" }, "raw_text": "Star-Cubing Algorithm: DFS on Star-Tree. (Figure: the multi-way aggregation continued for a2: a2CD/a2: 2, with b1: 2, c3: 2, d4: 2 carried into the BCD-, ACD/A-, ABD/AB-, and ABC/ABC-trees.) 28" }, { "page_index": 255, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_029.png", "page_index": 255, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:24+07:00" }, "raw_text": "Multi-Way Star-Tree Aggregation. Start a depth-first search at the root of the base star tree. At each new node in the DFS, create the corresponding star trees that are descendants of the current tree according to the integrated traversal ordering. E.g., in the base tree, when the DFS reaches a1, the ACD/A tree is created; when it reaches b*, the ABD/AB tree is created. The counts in the base tree are carried over to the new trees. When the DFS reaches a leaf node (e.g., d*), start backtracking. On every backtracking branch, the counts in the corresponding trees are output, the tree is destroyed, and the node in the base tree is destroyed. Example: when traversing from d* back to c*, the a1b*c*/a1b*c* tree is output and destroyed; when traversing from c* back to b*, the a1b*D/a1b* tree is output and destroyed; when at b*, jump to b1 and repeat a similar process. 29" }, { "page_index": 256, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_030.png", "page_index": 256, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:27+07:00" }, "raw_text": "Data Cube Computation Methods Multi-Way Array Aggregation BUC Star-Cubing High-Dimensional OLAP 30" }, { "page_index": 257, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_031.png", "page_index": 257, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:33+07:00" }, "raw_text": "The Curse of Dimensionality. None of the previous cubing methods can handle high dimensionality! A database of 600k tuples; each dimension has cardinality 100 and a Zipf skew of 2. (Figure: cube size (MB) vs. dimensionality 7-12 for Full Data Cube, Iceberg Cube (minsup = 5), and Quotient Cube.) 31" }, { "page_index": 258, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_032.png", "page_index": 258, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:37+07:00" }, "raw_text": "Motivation of High-D OLAP. X. Li, J. Han, and H.
Gonzalez, High-Dimensional OLAP: A Minimal Cubing Approach, VLDB'04. Challenge to current cubing methods: the \"curse of dimensionality\" problem. Iceberg cubes and compressed cubes only delay the inevitable explosion. Full materialization: still significant overhead in accessing results on disk. High-D OLAP is needed in applications: science and engineering analysis; bio-data analysis (thousands of genes); statistical surveys (hundreds of variables). 32" }, { "page_index": 259, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_033.png", "page_index": 259, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:41+07:00" }, "raw_text": "Fast High-D OLAP with Minimal Cubing. Observation: OLAP occurs only on a small subset of dimensions at a time. Semi-Online Computational Model: 1. Partition the set of dimensions into shell fragments. 2. Compute data cubes for each shell fragment while retaining inverted indices or value-list indices. 3. Given the pre-computed fragment cubes, dynamically compute cube cells of the high-dimensional data cube online. 33" }, { "page_index": 260, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_034.png", "page_index": 260, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:45+07:00" }, "raw_text": "Properties of Proposed Method. Partitions the data vertically. Reduces a high-dimensional cube into a set of lower-dimensional cubes. Online re-construction of the original high-dimensional space. Lossless reduction. Offers tradeoffs between the amount of pre-processing and the speed of online computation. 34" }, { "page_index": 261, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_035.png", "page_index": 261, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:51+07:00" }, "raw_text": "Example Computation. Let the cube aggregation function be count. tid | A | B | C | D | E: 1: a1 b1 c1 d1 e1; 2: a1 b2 c1 d2 e1; 3: a1 b2 c1 d1 e2; 4: a2 b1 c1 d1 e2; 5: a2 b1 c1 d1 e3. Divide the 5 dimensions into 2 shell fragments: (A, B, C) and (D, E). 35" }, { "page_index": 262, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_036.png", "page_index": 262, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:28:56+07:00" }, "raw_text": "1-D Inverted Indices. Build a traditional inverted index or RID list. Attribute Value | TID List | List Size: a1: 1 2 3 (3); a2: 4 5 (2); b1: 1 4 5 (3); b2: 2 3 (2); c1: 1 2 3 4 5 (5); d1: 1 3 4 5 (4); d2: 2 (1); e1: 1 2 (2); e2: 3 4 (2); e3: 5 (1). 36" }, { "page_index": 263, "chapter_num": 4, "source_file":
"/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_037.png", "page_index": 263, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:02+07:00" }, "raw_text": "Shell Fragment Cubes: Ideas Generalize the 1-D inverted indices to multi-dimensional ones in the data cube sense Compute all cuboids for data cubes ABC and DE while retaining the inverted indices For example, shell Cell Intersection TID List List Size fragment cube ABC al b1 1 2 3 n1 4 5 1 1 contains 7 cuboids: al b2 1 2 3 02 3 2 3 2 A,B,C a2 b1 4 51 4 5 4 5 2 AB,AC,BC a2 b2 4 5n2 3 X 0 ABC This completes the offline computation stage 37" }, { "page_index": 264, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_038.png", "page_index": 264, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:07+07:00" }, "raw_text": "Shell Fragment Cubes: Size and Design Given a database of T tuples, D dimensions, and F shell fragment size, the fragment cubes' space requirement is: D 1 2 -1 0 For F < 5, the growth is sub-linear A Shell fragments do not have to be disjoint Fragment groupings can be arbitrary to allow for maximum online performance Known common combinations (e.g.,) should be grouped together. Shell fragment sizes can be adjusted for optimal balance between offline and online computation 38" }, { "page_index": 265, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_039.png", "page_index": 265, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:13+07:00" }, "raw_text": "ID l Measure Table If measures other than count are present, store in ID_measure table separate from the shell fragments tid count sum 1 5 70 2 3 10 3 8 20 4 5 40 5 2 30 39" }, { "page_index": 266, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_040.png", "page_index": 266, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:17+07:00" }, "raw_text": "The Frag-Shells Algorithm Partition set of dimension (A1,...,A,) into a set of k fragments 1. (P1,...,Pk) Scan base table once and do the following 2. insert into ID_measure table. 3. for each attribute value a: of each dimension A 4. build inverted index entry 5. For each fragment partition P. 6. build local fragment cube S: by intersecting tid-lists in bottom- 7. up fashion. 
{ "page_index": 267, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_041.png", "page_index": 267, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:22+07:00" }, "raw_text": "Frag-Shells (2). (Figure: dimensions A B C D E F partitioned into the ABC and DEF fragment cubes; the D, EF, and DE cuboids hold tuple-ID lists.) DE cuboid: Cell | Tuple-ID List: d1 e1: {1, 3, 8, 9}; d1 e2: {2, 4, 6, 7}; d2 e1: {5, 10}. 41" }, { "page_index": 268, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_042.png", "page_index": 268, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:26+07:00" }, "raw_text": "Online Query Computation: Query. A query has the general form <a1, a2, ..., an>: M. Each ai has 3 possible values: 1. an instantiated value; 2. the aggregate function *; 3. the inquire function ?. For example, <3 ? ? * 1>: count returns a 2-D data cube. 42" }, { "page_index": 269, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_043.png", "page_index": 269, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:31+07:00" }, "raw_text": "Online Query Computation: Method. Given the fragment cubes, process a query as follows: 1. Divide the query into fragments, same as the shell. 2. Fetch the corresponding TID list for each fragment from the fragment cube. 3. Intersect the TID lists from each fragment to construct the instantiated base table. 4. Compute the data cube using the base table with any cubing algorithm. 43" }, { "page_index": 270, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_044.png", "page_index": 270, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:35+07:00" }, "raw_text": "Online Query Computation: Sketch. (Figure: dimensions A through N split into shell fragments; the instantiated base table feeds the online cube.) 44" }, { "page_index": 271, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_045.png", "page_index": 271, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:39+07:00" }, "raw_text": "Experiment: Size vs. Dimensionality (50 and 100 cardinality). (Figure: fragment-cube size (MB) vs. dimensionality 20-80 for the 50-C and 100-C data sets.) (50-C): 10^6 tuples, 0 skew, 50 cardinality, fragment size 3.
100-C): 106 tuples, 2 skew, 100 cardinality, fragment size 2 45" }, { "page_index": 272, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_046.png", "page_index": 272, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:44+07:00" }, "raw_text": "Experiments on Real World Data UCI Forest CoverType data set 54 dimensions, 581K tuples Shell fragments of size 2 took 33 seconds and 325MB to compute 3-D subquery with 1 instantiate D: 85ms1.4 sec. Longitudinal Study of Vocational Rehab. Data 24 dimensions, 8818 tuples Shell fragments of size 3 took 0.9 seconds and 60MB to compute 5-D query with 0 instantiated D: 227ms2.6 sec. 46" }, { "page_index": 273, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_047.png", "page_index": 273, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:47+07:00" }, "raw_text": "a Cube Technology Chapter 5: Data Data Cube Computation: Preliminary Concepts Data Cube Computation Methods Processing Advanced Queries by Exploring Data Cube Technology Sampling Cube Ranking Cube Multidimensional Data Analysis in Cube Space Summary 47" }, { "page_index": 274, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_048.png", "page_index": 274, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:52+07:00" }, "raw_text": "l Queries by Processing Advanced Data Cube Technology Exploring Sampling Cube X. Li, J. Han, Z. Yin, J.-G. Lee, Y. Sun, \"Sampling Cube: A Framework for Statistical OLAP over Sampling Data\", SIGMOD'08 Ranking Cube D. Xin, J. Han, H. Cheng, and X. Li. Answering top-k queries with multi-dimensional selections: The ranking cube approach. VLDB'06 Other advanced cubes for processing data and queries Stream cube, spatial cube, multimedia cube, text cube, RFID cube, etc. to be studied in volume 2 48" }, { "page_index": 275, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_049.png", "page_index": 275, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:29:57+07:00" }, "raw_text": "Statistical survey: A popular tool to collect information about a population based on a sample Ex.: TV ratings, US Census, election polls A common tool in politics, health, market research, science, and many more An efficient way of collecting information (Data collection is expensive) Many statistical tools available, to determine validity Confidence intervals Hypothesis tests OLAP (multidimensional analysis) on survey data highly desirable but can it be done well? 
49" }, { "page_index": 276, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_050.png", "page_index": 276, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:03+07:00" }, "raw_text": "Surveys: s Sample vs. Whole Population Data is only a sample of population AgeEducation High-school College Graduate 18 19 20 50" }, { "page_index": 277, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_051.png", "page_index": 277, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:07+07:00" }, "raw_text": "Problems for Drilling in Multidim. Space Data is only a sample of population but samples could be small when drilling to certain multidimensional space AgeEducation High-school College Graduate 18 19 20 51" }, { "page_index": 278, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_052.png", "page_index": 278, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:11+07:00" }, "raw_text": "(i.e., : Sampling) Data OLAP Survey on Semantics of query is unchanged Input data has changed Age/Education High-school College Graduate 18 19 20 52" }, { "page_index": 279, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_053.png", "page_index": 279, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:15+07:00" }, "raw_text": "Challenges for OLAP Sampling on Data Computing confidence intervals in OLAP context No data? Not exactly. No data in subspaces in cube Sparse data Causes include sampling bias and query selection bias Curse of dimensionality Survey data can be high dimensional Over 600 dimensions in real world example Impossible to fully materialize 53" }, { "page_index": 280, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_054.png", "page_index": 280, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:19+07:00" }, "raw_text": "Example 1: Confidence Interval What is the average income of I 9-year-old high-school students? 
Return not only the query result but also a confidence interval Age/Education High-school College Graduate 18 19 20 54" }, { "page_index": 281, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_055.png", "page_index": 281, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:24+07:00" }, "raw_text": "Confidence Interval Confidence interval for the mean: x̄ ± t_c · σ̂_x̄, where x is a sample of the data set, x̄ is the mean of the sample, t_c is the critical t-value (from a look-up table), and σ̂_x̄ = s / √l is the estimated standard error of the mean (s: sample standard deviation, l: sample size) Example: $50,000 ± $3,000 with 95% confidence Treat points in a cube cell as samples Compute the confidence interval as for a traditional sample set Return the answer in the form of a confidence interval Indicates quality of query answer User selects desired confidence interval 55" }, { "page_index": 282, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_056.png", "page_index": 282, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:29+07:00" }, "raw_text": "Efficiently Computing Confidence Interval Measures Efficient computation in all cells in the data cube Both mean and confidence interval are algebraic Why is the confidence interval measure algebraic? x̄ ± t_c · σ̂_x̄ is algebraic because σ̂_x̄ = s / √l, where both s and l (count) are algebraic Thus one can calculate cells efficiently at more general cuboids without having to start at the base cuboid each time 56" }, { "page_index": 283, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_057.png", "page_index": 283, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:33+07:00" }, "raw_text": "Example 2: Query Expansion What is the average income of 19-year-old college students? Age/Education High-school College Graduate 18 19 20 57" }, { "page_index": 284, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_058.png", "page_index": 284, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:39+07:00" }, "raw_text": "Boosting Confidence by Query Expansion From the example: The queried cell \"19-year-old college students\" contains only 2 samples Confidence interval is large (i.e., low confidence). Why? Small sample size High standard deviation within the samples Small sample sizes can occur at relatively low-dimensional selections Collect more data? Expensive! Use data in other cells?
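Since the slide argues the confidence interval is algebraic, a small sketch can show the consequence: base cells only need to keep (count, sum, sum of squares), and a parent cell is obtained by adding those triples, never by revisiting raw tuples. The helper names are invented; `scipy.stats.t.ppf` supplies the critical t-value look-up.

```python
import math
from scipy import stats

def cell_stats(samples):
    """Distributive pieces kept per base cell: count, sum, sum of squares."""
    n = len(samples)
    return (n, sum(samples), sum(x * x for x in samples))

def merge(*cells):
    """Roll up child cells into a parent: componentwise addition (algebraic)."""
    return tuple(map(sum, zip(*cells)))

def confidence_interval(triple, level=0.95):
    n, s, ss = triple
    mean = s / n
    var = (ss - n * mean * mean) / (n - 1)   # sample variance from the triple
    se = math.sqrt(var / n)                  # estimated standard error of the mean
    t_c = stats.t.ppf(1 - (1 - level) / 2, df=n - 1)
    return mean, t_c * se                    # report as mean +/- half-width

# Two child cells roll up to their parent without touching raw tuples.
a = cell_stats([48_000, 52_000, 50_000])
b = cell_stats([47_000, 53_000])
print(confidence_interval(merge(a, b)))
```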
Maybe, but have to be careful 58" }, { "page_index": 285, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_059.png", "page_index": 285, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:44+07:00" }, "raw_text": "Intra-Cuboid Expansion: Choice 1 Expand query to include 18 and 20 year olds? Age/Education High-school College Graduate 18 19 r 20 59" }, { "page_index": 286, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_060.png", "page_index": 286, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:48+07:00" }, "raw_text": "Intra-Cuboid Expansion: Choice 2 Expand query to include high-school and graduate students? Age/Education High-school College Graduate 18 19 20 60" }, { "page_index": 287, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_061.png", "page_index": 287, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:53+07:00" }, "raw_text": "Query Expansion (Age, Occupation) cuboid (a) Intra-Cuboid Expan- sion (Age, Occupation) cuboid Age cuboid Occupation cuboid Expansion 61" }, { "page_index": 288, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_062.png", "page_index": 288, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:30:58+07:00" }, "raw_text": "Intra-Cuboid Expansion Combine other cells' data into own to \"boost\" confidence If share semantic and cube similarity Use only if necessary Bigger sample size will decrease confidence interval Cell segment similarity Some dimensions are clear: Age Some are fuzzy: Occupation May need domain knowledge Cell value similarity How to determine if two cells' samples come from the same population? 
Two-sample t-test (confidence-based) 62" }, { "page_index": 289, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_063.png", "page_index": 289, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:31:02+07:00" }, "raw_text": "Inter-Cuboid Expansion If a query dimension is Not correlated with cube value But is causing small sample size by drilling down too much more general cuboid Can use two-sample t-test to determine similarity between two cells across cuboids Can also use a different method to be shown later 63" }, { "page_index": 290, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_064.png", "page_index": 290, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:31:12+07:00" }, "raw_text": "Query Expansion Experiments Real world sample data: 600 dimensions and 750,000 tuples 0.05% to simulate \"sample\" (allows error checking) (a) Intra-Cuboid Expansion with Age dimension and Average Number of Children cube measure Query Average Query Answer Error Sampling Sizes Gender Marital No Expand Expand % Improve Population Sample Expanded FEMALE MARRIED 0.48 0.32 33% 2473.0 2.2 28.3 FEMALE SINGLE 0.31 0.21 30% 612.6 0.6 6.4 FEMALE DIVORCED 0.49 0.43 11% 321.1 0.3 3.4 MALE MARRIED 0.42 0.21 49% 4296.8 4.4 37.6 MALE SINGLE 0.26 0.21 16% 571.8 0.5 3.6 MALE DIVORCED 0.33 0.27 19% 224.7 0.2 1.2 Average 0.38 0.27 26% 1416.7 1.4 13.4 64" }, { "page_index": 291, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_065.png", "page_index": 291, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:31:16+07:00" }, "raw_text": "a Cube Technology Chapter 5: Data Data Cube Computation: Preliminary Concepts Data Cube Computation Methods Processing Advanced Queries by Exploring Data Cube Technology Sampling Cube Ranking Cube Multidimensional Data Analysis in Cube Space Summary 65" }, { "page_index": 292, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_066.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_066.png", "page_index": 292, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:31:21+07:00" }, "raw_text": "Ranking Cubes - Efficient Computation of Ranking queries Data cube helps not only OLAP but also ranked search (top-k) ranking query: only returns the best k results according to a user-specified preference, consisting of (1) a selection condition and (2) a ranking function Ex.: Search for apartments with expected price 1000 and expected square feet 800 Select top 1 from Apartment where City =\"LA\" and Num_Bedroom = 2 order by [price - 100012 + [sq feet - 80012 asc Efficiency question: Can we only search what we need? 
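A minimal sketch of the confidence-based merge test mentioned above, assuming Welch's two-sample t-test as the similarity check (the slide does not fix the exact variant); the sample values are invented.

```python
from scipy.stats import ttest_ind

def can_expand(target, candidate, alpha=0.05):
    """Welch two-sample t-test: merge the candidate cell's samples only when
    we cannot reject that both cells draw from the same population."""
    t_stat, p_value = ttest_ind(target, candidate, equal_var=False)
    return p_value > alpha

query_cell = [30_000, 34_000]                 # "19-year-old college": 2 samples
neighbor = [29_000, 31_000, 33_000, 35_000]   # say, the 18-year-old college cell
if can_expand(query_cell, neighbor):
    expanded = query_cell + neighbor          # larger l tightens t_c * s / sqrt(l)
    print(len(expanded), "samples after expansion")
```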
Build a ranking cube on both selection dimensions and ranking dimensions 66" }, { "page_index": 293, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_067.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_067.png", "page_index": 293, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:31:29+07:00" }, "raw_text": "Ranking j Cube: Partition Data on Both Selection and Ranking Dimensions t3 One single data 1120 t8 partition as the template t2 Sq feet t4. t1 Partition for Slice the data partition t7 all data t6 by selection conditions t5 200 500 1350 Price 1120 1120 t2 Sq feet Sq feet t7 t7 t6 t6 t5 200 200 500 1350 500 1350 Price Price Sliced Partition Sliced Partition for city=\"LA' for BR=2 67" }, { "page_index": 294, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_068.png", "page_index": 294, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:31:40+07:00" }, "raw_text": "Materialize Ranking-Cube Step 1: Partition Data on Ranking Dimensions tid City BR Price Sq feet t3 1120 t1 SEA 1 500 600 1 2 3 4 t8 t2 CLE 2 700 800 t2 t3 SEA 1 800 900 Sq feet 5 6 t4. 7 8 t1 t4 CLE 3 1000 1000 t5 LA 1 1100 200 9 10 t6 LA 2 1200 500 t5 15 13 14 16 t7 LA 2 1200 560 200 500 1350 t8 CLE 3 1350 1120 Price Step 3: Compute Measures for each group Step 2: Group data by Selection Dimensions 1120 For the cell (LA) City Block-level:{11$q15} SEA Data-level: {11: t6, t7;15: t5} LA t7 t6 CLE City & BR t5 200 BR 500 1350 Price 1 2 3 4 68" }, { "page_index": 295, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_069.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_069.png", "page_index": 295, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:31:48+07:00" }, "raw_text": "Search with Ranking-Cube: Push Selection and Ranking Simultaneously Select top 1 from Apartment where city = \"LA\" order by [price - 1000]2 + [sq feet - 800]2 asc Bin boundary for price 500,600, 800, 1100,1350] Given the bin boundaries Bin boundary for sg feet [200, 400,600, 800, 1120] locate the block with top score t3 1120 1120 t8 800 t2 Sq feet Sq feet t4. 
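A sketch of the materialization steps on the slide's toy apartment table: bin the two ranking dimensions, assign each tuple a block, and group the blocks under each selection-dimension cell. Bin boundaries and tuples come from the slide; the row-major block numbering here is a labeling choice, so the IDs differ from the figure's, though the grouping (t6 and t7 together, t5 alone for city = "LA") matches.

```python
from bisect import bisect_right
from collections import defaultdict

PRICE_BINS = [500, 600, 800, 1100, 1350]      # bin boundaries from the slide
SQFT_BINS  = [200, 400, 600, 800, 1120]

def bin_of(x, bounds):
    """Index of the bin [bounds[i], bounds[i+1]) holding x, clamped to range."""
    return min(max(bisect_right(bounds, x) - 1, 0), len(bounds) - 2)

def block_of(price, sqft):
    # 4x4 grid over the two ranking dimensions, numbered row-major here.
    return 4 * bin_of(sqft, SQFT_BINS) + bin_of(price, PRICE_BINS) + 1

APARTMENTS = [  # (tid, city, bedrooms, price, sq_feet): the slide's toy table
    ("t1", "SEA", 1, 500, 600), ("t2", "CLE", 2, 700, 800),
    ("t3", "SEA", 1, 800, 900), ("t4", "CLE", 3, 1000, 1000),
    ("t5", "LA", 1, 1100, 200), ("t6", "LA", 2, 1200, 500),
    ("t7", "LA", 2, 1200, 560), ("t8", "CLE", 3, 1350, 1120),
]

# Group by a selection dimension, keeping block- and data-level measures.
measure = defaultdict(lambda: defaultdict(list))   # city -> block -> tid list
for tid, city, br, price, sqft in APARTMENTS:
    measure[city][block_of(price, sqft)].append(tid)
print(dict(measure["LA"]))   # t6, t7 share one block; t5 sits alone
```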
t1 t7 t7 11 t6 t6 t5 t5 15 200 200 1000 500 1350 500 1350 Price Price Measure for LA: Without ranking-cube: start With ranking-cube: {11,15} search from here start search from here {11: t6,t7; 15:t5} 69" }, { "page_index": 296, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_070.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_070.png", "page_index": 296, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:31:55+07:00" }, "raw_text": "Processing Ranking j Query: Execution Trace Select top 1 from Apartment where city = \"LA\" order by [price - 1000]2 + [sq feet - 800]2 asc Bin boundary for price [500, 600, 800, 1100,1350] f=[price-1000]2 + [sq feet - 800]2 Bin boundary for sq feet [200, 400,600, 800, 1120] 1120 Execution Trace: 800 1. Retrieve High-level measure for LA {11, 15} Sq feet 2. Estimate /ower bound score for block 11, 15 11 t7 t6 f(block 11) = 40,000,f(block 15) = 160,000 t5 15 200 3. Retrieve block 11 1000 500 1350 Price 4. Retrieve low-level measure for block 11 5. f(t6) = 130,000, f(t7) = 97,600 With ranking- Measure for LA: Output t7, done! cube: start search {11,15} from here {11: t6,t7; 15:t5] 70" }, { "page_index": 297, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_071.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_071.png", "page_index": 297, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:31:59+07:00" }, "raw_text": "Ranking Cube: Methodology Extension and Ranking cube methodology Push selection and ranking simultaneously It works for many sophisticated ranking functions How to support high-dimensional data? 
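The execution trace above is best-first search over blocks with geometric lower bounds. The sketch below reproduces it for city = "LA"; the block rectangles are read off the garbled figure, so the bound values come out slightly different from the slide's 40,000/160,000, but the pruning logic and the winner (t7, f = 97,600 vs. f(t6) = 130,000) match the trace.

```python
import heapq

QUERY = (1000, 800)                                  # target (price, sq_feet)
def f(p, s):                                         # the ORDER BY function
    return (p - QUERY[0]) ** 2 + (s - QUERY[1]) ** 2

# Measure fetched from the ranking cube for the sliced cell city = "LA":
# block id -> (price range, sq-feet range, tuples inside the block).
LA_BLOCKS = {
    11: ((1100, 1350), (400, 600), [("t6", 1200, 500), ("t7", 1200, 560)]),
    15: ((1100, 1350), (200, 400), [("t5", 1100, 200)]),
}

def lower_bound(pr, sr):
    """Smallest f any tuple in the block could score: clamp the query point
    into the block's rectangle, coordinate by coordinate."""
    p = min(max(QUERY[0], pr[0]), pr[1])
    s = min(max(QUERY[1], sr[0]), sr[1])
    return f(p, s)

heap = [(lower_bound(pr, sr), b) for b, (pr, sr, _) in LA_BLOCKS.items()]
heapq.heapify(heap)
best = (float("inf"), None)
while heap and heap[0][0] < best[0]:                 # stop once no block can win
    _, b = heapq.heappop(heap)                       # most promising block first
    for tid, p, s in LA_BLOCKS[b][2]:                # exact scores inside it
        best = min(best, (f(p, s), tid))
print(best)   # (97600, 't7'), as in the trace: output t7, done
```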
Materialize only those atomic cuboids that contain single selection dimensions Uses the idea similar to high-dimensional OLAP Achieves low space overhead and high performance in answering ranking queries with a high number of selection dimensions 71" }, { "page_index": 298, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_072.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_072.png", "page_index": 298, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:03+07:00" }, "raw_text": "a Cube Technology Chapter 5: Data Data Cube Computation: Preliminary Concepts Data Cube Computation Methods Processing Advanced Queries by Exploring Data Cube Technology Multidimensional Data Analysis in Cube Space Summary 72" }, { "page_index": 299, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_073.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_073.png", "page_index": 299, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:06+07:00" }, "raw_text": "Multidimensional Data Analysis in Cube Space Prediction Cubes: Data Mining in Multi- Dimensional Cube Space Multi-Feature Cubes: Complex Aggregation at Multiple Granularities Discovery-Driven Exploration of Data Cubes 73" }, { "page_index": 300, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_074.png", "page_index": 300, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:12+07:00" }, "raw_text": "Data Mining g in Cube Space Data cube greatly increases the analysis bandwidth Four ways to interact OLAP-styled analysis and data mining Using cube space to define data space for mining Using OLAP queries to generate features and targets for mining, e.g., multi-feature cube Using data-mining models as building blocks in a multi- step mining process, e.g., prediction cube Using data-cube computation techniques to speed up repeated model construction Cube-space data mining may require building a model for each candidate data space Sharing computation across model-construction for different candidates may lead to efficient mining 74" }, { "page_index": 301, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_075.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_075.png", "page_index": 301, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:16+07:00" }, "raw_text": "Prediction Cubes Prediction cube: A cube structure that stores prediction models in multidimensional data space and supports prediction in OLAp manner Prediction models are used as building blocks to define the interestingness of subsets of data, i.e., to answer which subsets of data indicate better prediction 75" }, { "page_index": 302, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_076.png", "metadata": { "doc_type": "slide", "course_id": 
"CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_076.png", "page_index": 302, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:21+07:00" }, "raw_text": "How to Determine the Prediction Power of an Attribute? Ex. A customer table D: Two dimensions z: Time (Month, Year) and Location (State, Country) s X: Gender and Salary Two features One class-label attribute Y: Valued Customer Q: \"Are there times and locations in which the value of a customer depended greatly on the customers gender (i.e., Gender: predictiveness attribute V)?\" Idea: Compute the difference between the model built on that using x to predict Y and that built on using X - V to predict Y If the difference is large, V must play an important role at predicting Y 76" }, { "page_index": 303, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_077.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_077.png", "page_index": 303, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:25+07:00" }, "raw_text": "Efficient Computation of Prediction Cubes Naive method: Fully materialize the prediction cube, i.e., exhaustively build models and evaluate them for each cell and for each granularity Better approach: Explore score function decomposition that reduces prediction cube computation to data cube computation 77" }, { "page_index": 304, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_078.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_078.png", "page_index": 304, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:29+07:00" }, "raw_text": "Multidimensional Data Analysis in Cube Space Prediction Cubes: Data Mining in Multi- Dimensional Cube Space Multi-Feature Cubes: Complex Aggregation at Multiple Granularities Discovery-Driven Exploration of Data Cubes 78" }, { "page_index": 305, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_079.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_079.png", "page_index": 305, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:36+07:00" }, "raw_text": "Complex Aggregation at Multiple Granularities: Multi-Feature Cubes Multi-feature cubes (Ross, et al. 1998): Compute complex queries involving multiple dependent aggregates at multiple granularities Ex. 
Grouping by all subsets of {item, region, month}, find the maximum price in 2010 for each group, and the total sales among all maximum price tuples select item, region, month, max(price), sum(R.sales) from purchases where year = 2010 cube by item, region, month: R such that R.price = max(price) Continuing the last example, among the max price tuples, find the min and max shelf live, and find the fraction of the total sales due to tuple that have min shelf life within the set of all max price tuples 79" }, { "page_index": 306, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_080.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_080.png", "page_index": 306, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:40+07:00" }, "raw_text": "Multidimensional Data Analysis in Cube Space Prediction Cubes: Data Mining in Multi- Dimensional Cube Space Multi-Feature Cubes: Complex Aggregation at Multiple Granularities Discovery-Driven Exploration of Data Cubes 80" }, { "page_index": 307, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_081.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_081.png", "page_index": 307, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:45+07:00" }, "raw_text": "Discovery-Driven Exploration of Data Cubes Hypothesis-driven exploration by user, huge search space Discovery-driven (Sarawagi, et al.'98) Effective navigation of large OLAP data cubes pre-compute measures indicating exceptions, guide user in the data analysis, at all levels of aggregation Exception: significantly different from the value anticipated, based on a statistical model Visual cues such as background color are used to reflect the degree of exception of each cell 81" }, { "page_index": 308, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_082.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_082.png", "page_index": 308, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:32:49+07:00" }, "raw_text": "Kinds of Exceptions and their Computation Parameters SelfExp: surprise of cell relative to other cells at same level of aggregation InExp: surprise beneath the cell PathExp: surprise beneath cell for each drill-down path Computation of exception indicator (modeling fitting and computing SelfExp, InExp, and PathExp values) can be overlapped with cube construction Exception themselves can be stored, indexed and retrieved like precomputed aggregates 82" }, { "page_index": 309, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_083.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_083.png", "page_index": 309, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:33:09+07:00" }, "raw_text": "Examples: Discovery-Driven Data Cubes item all tegion all Sum of sales month Jan Feb Mar Apt May Jun Jul Aug Sep Oct Nov Dec Total 1% -1% 0% 1% 3% -1 -9% -1% 2% -4% 3% Avg 
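The `cube by ... such that` query above can be emulated directly. Here is a pandas sketch with an invented toy `purchases` table: for every group-by subset, take max(price) and then sum sales over exactly the max-price tuples.

```python
from itertools import combinations
import pandas as pd

purchases = pd.DataFrame({
    "item":   ["TV", "TV", "TV", "PC"],
    "region": ["N", "N", "S", "N"],
    "month":  [1, 1, 2, 1],
    "price":  [800, 900, 900, 1200],
    "sales":  [3, 5, 2, 7],
})  # toy rows; the year = 2010 filter is assumed already applied

dims = ["item", "region", "month"]
for k in range(len(dims) + 1):                    # 'cube by': all 8 groupings
    for g in map(list, combinations(dims, k)):
        groups = purchases.groupby(g) if g else [("(all)", purchases)]
        for key, rows in groups:
            mx = rows["price"].max()              # max(price) for the group
            winners = rows[rows["price"] == mx]   # R such that R.price = max(price)
            print(g or "(apex)", key, "max price:", mx,
                  "sales at max price:", winners["sales"].sum())
```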
sales month item Jan Feb Mar Apt May Jun Jul Aug Sep Oct Nov Dec Sony b/w printer 9% -8% 2% -5% 14% 4% 0% 41% -13% -15% -11% Sony color printe 0% 0% 3% 2% 4% -10% -13% 0% 4% -6% 4% HP b/w printer -2% 1% 2% 3% 8% 0% -12% -9% 3% -3% 6% HP c olot printer 0% 0% -2% 1% 0% -1% -7% -2% 1% -5% 1% IBM home computer 1% -2% -1% -1% 3% 3% -10% 4% 1% 4% -1% IBM laptop computer 0% 0% -1% 3% 4% 2% -10% -2% 0% -9% 3% Toshiba home computer -2% -5% 1% 1% -1% 1% 5% -3% -5% -1% -1% Toshiba laptop computer 1% 0% 3% 0% -2% -2% -5% 3% 2% -1% 0% Logitech mouse 3% -2% -1% 0% 4% 6% -11% 2% 1% 4% 0% Ergo-way mouse 0% 0% 2% 3% 1% -2% -2% -5% 0% -5% 8% item IBM home computer Avg sales month region Jan Feb Mar Apt May Jun Jul Aug Sep Oct Nov Dec North -1% -3% -1% 0% 3% 4% -7% 1% 0% -3% -3% South -1% 1% -9% 6% -1% -39% 9% -34% 4% 1% 7% East -1% -2% 2% -3% 1% 18% -2% 11% -3% -2% -1% West 4% 0% -1% -3% 5% 1% -18% 8% 5% -8% 1% 83" }, { "page_index": 310, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_084.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_084.png", "page_index": 310, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:33:13+07:00" }, "raw_text": "a Cube Technology Chapter 5: Data Data Cube Computation: Preliminary Concepts Data Cube Computation Methods Processing Advanced Queries by Exploring Data Cube Technology Multidimensional Data Analysis in Cube Space Summary 84" }, { "page_index": 311, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_085.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_085.png", "page_index": 311, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:33:17+07:00" }, "raw_text": "Data Cube Technology: Summary Data Cube Computation: Preliminary Concepts Data Cube Computation Methods MultiWay Array Aggregation BUC Star-Cubing High-Dimensional OLAP with Shell-Fragments Processing Advanced Queries by Exploring Data Cube Technology Sampling Cubes Ranking Cubes Multidimensional Data Analysis in Cube Space Discovery-Driven Exploration of Data Cubes Multi-feature Cubes Prediction Cubes 85" }, { "page_index": 312, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_086.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_086.png", "page_index": 312, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:33:23+07:00" }, "raw_text": "Ref.(I) Data Cube Computation Methods S. Agarwal, R. Agrawal, P. M. Deshpande, A. Gupta, J. F. Naughton, R. Ramakrishnan, and S. Sarawagi. On the computation of multidimensional aggregates. VLDB'96 D. Agrawal, A. E. Abbadi, A. Singh, and T. Yurek. Efficient view maintenance in data warehouses. SIGMOD'97 K. Beyer and R. Ramakrishnan. Bottom-Up Computation of Sparse and Iceberg CUBEs.. SIGMOD'99 M. Fang, N. Shivakumar, H. Garcia-Molina, R. Motwani, and J. D. Ullman. Computing iceberg queries efficiently VLDB'98 J. Gray, S. Chaudhuri, A. Bosworth, A. Layman, D. Reichart, M. Venkatrao, F. Pellow, and H. Pirahesh. Data cube: A relational aggregation operator generalizing group-by, cross-tab and sub-totals. 
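Sarawagi et al. derive anticipated cell values from a statistical model fit at all aggregation levels; as a simple stand-in, the sketch below uses an additive row/column-effects model and a standardized residual as a SelfExp-style surprise score. The numbers echo the slide's first three item rows, including the 41% Sony b/w printer spike.

```python
import numpy as np

# Monthly % differences for a few items (rows) over months (columns).
sales = np.array([
    [ 9, -8,  2, -5, 14,   4,   0, 41, -13, -15, -11],
    [ 0,  0,  3,  2,  4, -10, -13,  0,   4,  -6,   4],
    [-2,  1,  2,  3,  8,   0, -12, -9,   3,  -3,   6],
], dtype=float)

# Anticipated value of a cell under an additive row/column-effects model.
anticipated = (sales.mean(axis=1, keepdims=True)
               + sales.mean(axis=0, keepdims=True) - sales.mean())
residual = sales - anticipated
self_exp = np.abs(residual) / residual.std()       # standardized surprise
print(np.unravel_index(self_exp.argmax(), self_exp.shape))   # (0, 7): the 41% cell
```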
Data Mining and Knowledge Discovery,1:29-54, 1997. J. Han, J. Pei, G. Dong, K. Wang. Efficient Computation of Iceberg Cubes With Complex Measures. SIGMOD'01 L. V. S. Lakshmanan, J. Pei, and J. Han, Quotient Cube: How to Summarize the Semantics of a Data Cube, VLDB'02 X. Li, J. Han, and H. Gonzalez, High-Dimensional OLAP: A Minimal Cubing Approach, VLDB'04 Y. Zhao, P. M. Deshpande, and J. F. Naughton. An array-based algorithm for simultaneous multidimensional aggregates. SIGMOD'97 K. Ross and D. Srivastava. Fast computation of sparse datacubes. VLDB'97 D. Xin, J. Han, X. Li, B. W. Wah, Star-Cubing: Computing Iceberg Cubes by Top-Down and Bottom-Up Integration, VLDB'03 D. Xin, J. Han, Z. Shao, H. Liu, C-Cubing: Efficient Computation of Closed Cubes by Aggregation-Based Checking ICDE'06 86" }, { "page_index": 313, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_087.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_087.png", "page_index": 313, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:33:29+07:00" }, "raw_text": "Ref. (ll) Advanced Applications with Data Cubes D. Burdick, P. Deshpande, T. S. Jayram, R. Ramakrishnan, and S. Vaithyanathan. OLAP over uncertain and imprecise data. VLDB'05 X. Li, J. Han, Z. Yin, J.-G. Lee, Y. Sun, \"Sampling Cube: A Framework for Statistical OLAP over Sampling Data\", SIGMOD'08 C. X. Lin, B. Ding, J. Han, F. Zhu, and B. Zhao. Text Cube: Computing IR measures for multidimensional text database analysis. ICDM'08 D. Papadias, P. Kalnis, J. Zhang, and Y. Tao. Efficient OLAP operations in spatial data warehouses. SSTD'01 N. Stefanovic, J. Han, and K. Koperski. Object-based selective materialization for efficient implementation of spatial data cubes. IEEE Trans. Knowledge and Data Engineering, 12:938- 958,2000. T. Wu, D. Xin, Q. Mei, and J. Han. Promotion analysis in multidimensional space. VLDB'o9 T. Wu, D. Xin, and J. Han. ARCube: Supporting ranking aggregate queries in partially materialized data cubes.SlGMOD'08 D. Xin, J. Han, H. Cheng, and X. Li. Answering top-k queries with multi-dimensional selections: The ranking cube approach. VLDB'06 J. S. Vitter, M. Wang, and B. R. lyer. Data cube approximation and histograms via wavelets. ClKM'98 D. Zhang, C. Zhai, and J. Han. Topic cube: Topic modeling for OLAP on multi-dimensional text databases. SDM'09 87" }, { "page_index": 314, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_088.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_088.png", "page_index": 314, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:33:36+07:00" }, "raw_text": "Ref.(lll) Knowledge Discovery with Data Cubes R. Agrawal, A. Gupta, and S. Sarawagi. Modeling multidimensional databases. ICDE'97 B.-C. Chen, L. Chen, Y. Lin, and R. Ramakrishnan. Prediction cubes. VLDB'O5 B.-C. Chen, R. Ramakrishnan, J.W. Shavlik, and P. Tamma. Bellwether analysis: Predicting global aggregates from local regions. VLDB'06 Y. Chen, G. Dong, J. Han, B. W. Wah, and J. Wang, Multi-Dimensional Regression Analysis of Time-Series Data Streams, VLDB'02 G. Dong, J. Han, J. Lam, J. Pei, K. Wang. Mining Multi-dimensional Constrained Gradients in Data Cubes,VLDB'01 R. Fagin, R. V. Guha, R. 
Kumar, J. Novak, D. Sivakumar, and A. Tomkins. Multi-structural databases, PODS'05 J. Han. Towards on-line analytical mining in large databases. SIGM0D Record, 27:97-107, 1998 T. Imielinski, L. Khachiyan, and A. Abdulghani. Cubegrades: Generalizing association rules. Data Mining & Knowledge Discovery, 6:219-258,2002. R. Ramakrishnan and B.-C. Chen. Exploratory mining in cube space. Data Mining and Knowledge Discovery, 15:29-54, 2007. K. A. Ross, D. Srivastava, and D. Chatziantoniou. Complex aggregation at multiple granularities. EDBT'98 S. Sarawagi, R. Agrawal, and N. Megiddo. Discovery-driven exploration of OLAP data cubes. EDBT'98 G. Sathe and S. Sarawagi. Intelligent Rollups in Multidimensional OLAP Data. VLDB'01 88" }, { "page_index": 315, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_089.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_089.png", "page_index": 315, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:33:39+07:00" }, "raw_text": "Surplus Slides 89" }, { "page_index": 316, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_090.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_090.png", "page_index": 316, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:33:45+07:00" }, "raw_text": "a Cube Technology Chapter 5: Data Efficient Methods for Data Cube Computation Preliminary Concepts and General Strategies for Cube Computation Multiway Array Aggregation for Full Cube Computation BuC: Computing Iceberg Cubes from the Apex Cuboid Downward H-Cubing: Exploring an H-Tree Structure Star-cubing: Computing Iceberg Cubes Using a Dynamic Star-tree Structure Precomputing Shell Fragments for Fast High-Dimensional OLAP Data Cubes for Advanced Applications Sampling Cubes: OLAP on Sampling Data Ranking Cubes: Efficient Computation of Ranking Queries Knowledge Discovery with Data Cubes Discovery-Driven Exploration of Data Cubes Complex Aggregation at Multiple Granularity: Multi-feature Cubes Prediction Cubes: Data Mining in Multi-Dimensional Cube Space Summary 90" }, { "page_index": 317, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_091.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_091.png", "page_index": 317, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:33:50+07:00" }, "raw_text": "H-Cubing: Using H-Tree Structure all Bottom-up computation B 1 Exploring an H-tree AB AC AD BC BD CD structure If the current ABC ABD ACD BCD computation of an H-tree ABCD r00t: 100 cannot pass min_sup, do a1: 30 a2: 20 a3: 20 a4: 20 not proceed further (pruning) b1: 10 b2: 10 b3: 10 No simultaneous c1: 5 c2: 5 aggregation d1: 2 d2: 3 91" }, { "page_index": 318, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_092.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_092.png", "page_index": 318, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": 
"2025-10-31T15:34:00+07:00" }, "raw_text": "H-tree: A Prefix Hyper-tree Attr. Val. Quant-Info Side-link Edu Sum:2285 ... root Hhd Bus Header bus hhd ... edu Jan table Feb Jan Mar Jan Feb Tor Van III Mon Tor Van Mon 101 Month City Cust_grp Prod Cost Price Jan Tor Edu Printer 500 485 Q.I. Q.I. Q.I. Quant-Info Jan Tor Hhd TV 800 1200 Sum: 1765 Jan Tor Edu Camera 1160 1280 Cnt: 2 Feb Mon Bus Laptop 1500 2500 bins Mar Van Edu HD 540 520 92" }, { "page_index": 319, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_093.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_093.png", "page_index": 319, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:07+07:00" }, "raw_text": "Computing Cells Involving \"City' Attr. Val. Q.I. Side-link From (*, *, Tor) to (*, Jan, Tor) Edu Header Hhd root Bus Table Jan Bus. I I I Hhd. Feb Edu. Side-link Jan: Mar. Jan. Feb. Attr. Val. Quant-Info Edu Sum:2285 .. Hhd Bus Tor Van. Tor: Mon. Jan Feb Q.I. Q.I. Q.I. Quant-Info Tor 1II Van Sum1765 Mon Cnt: 2 bins 93" }, { "page_index": 320, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_094.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_094.png", "page_index": 320, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:14+07:00" }, "raw_text": "Computing Cells Involving Month But No City 1. Roll up quant-info root 2. Compute cells involving Hhd. Bus. month but no city Edu. Attr. Val. Quant-Info Side-link Jan. Mar > Feb. Edu. Sum:2285 Hhd. Bus. 6.1. QI. Q.1 Q.I. Jan. Feb. I II Mar. Tor. Van. Tor. Mont. Tor. Top-k OK mark: if Q.I. in a child passes Van. Mont. top-k avg threshold, so does its parents. No binning is needed! 94" }, { "page_index": 321, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_095.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_4/slide_095.png", "page_index": 321, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:19+07:00" }, "raw_text": "Computing Cells Involving g Ony Cust_grp root Check header table directly bus hhd edu Attr. Yal. Quant-Info Side-link Jan Mar > Feb Edu Sum:2285 .. Hhd III Bus Q.I. QI.I. Q/I. Q.1 I I Jan Feb Mar Tor Tor Van Tor Mon Van Mon 95" }, { "page_index": 322, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_001.png", "page_index": 322, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:24+07:00" }, "raw_text": "Mining: Data Concepts and Techniques (3rd ed.) - Chapter 6 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University @2011 Han, Kamber & Pei. All rights reserved. 
1" }, { "page_index": 323, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_002.png", "page_index": 323, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:28+07:00" }, "raw_text": "Chapter 5: Mining Freguent Patterns, Association and Correlations: Basic Concepts and Methods Basic Concepts Frequent Itemset Mining Methods Which Patterns Are Interesting?-Pattern Evaluation Methods Summary 2" }, { "page_index": 324, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_003.png", "page_index": 324, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:33+07:00" }, "raw_text": "What Is Frequent Pattern Analysis? Frequent pattern: a pattern (a set of items, subsequences, substructures etc.) that occurs frequently in a data set First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of frequent itemsets and association rule mining Motivation: Finding inherent regularities in data What products were often purchased together? Beer and diapers?! What are the subsequent purchases after buying a PC? What kinds of DNA are sensitive to this new drug? Can we automatically classify web documents? Applications Basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis 3" }, { "page_index": 325, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_004.png", "page_index": 325, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:37+07:00" }, "raw_text": "Why Is Frea. Pattern Mining Important? Freq. 
pattern: An intrinsic and important property of datasets Foundation for many essential data mining tasks Association, correlation, and causality analysis Sequential, structural (e.g., sub-graph) patterns Pattern analysis in spatiotemporal, multimedia, time- series, and stream data Classification: discriminative, frequent pattern analysis Cluster analysis: frequent pattern-based clustering Data warehousing: iceberg cube and cube-gradient Semantic data compression: fascicles Broad applications 4" }, { "page_index": 326, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_005.png", "page_index": 326, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:44+07:00" }, "raw_text": "Basic Concepts: Freauent Patterns Tid Items bought itemset: A set of one or more 10 items Beer, Nuts, Diaper 20 k-itemset X = {x1, .., x} Beer, Coffee, Diaper 30 Beer, Diaper, Eggs (absolute) support, or, support 40 Nuts, Eggs, Milk count of X: Frequency or 50 Nuts, Coffee, Diaper, Eggs, Milk occurrence of an itemset x (relative) support, s, is the Customer Customer buys both fraction of transactions that buys diaper contains X (i.e., the probability that a transaction contains X) An itemset X is frequent if X's support is no less than a minsup Customer threshold buys beer 5" }, { "page_index": 327, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_006.png", "page_index": 327, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:50+07:00" }, "raw_text": "Basic Concepts: Association Rules Tid Items bought Find all the rules X-> Ywith 10 Beer, Nuts, Diaper minimum support and confidence 20 Beer, Coffee, Diaper support, s, probability that a 30 Beer, Diaper, Eggs 40 transaction contains X U Y Nuts, Eggs, Milk 50 Nuts, Coffee, Diaper, Eggs, Milk confidence, c, conditional Customer Customer probability that a transaction buys botl buys having X also contains Y diaker Let minsup = 50%, minconf = 50% Freq. Pat.: Beer:3, Nuts:3, Diaper:4, Eggs:3 {Beer, Diaper}:3 Customer buys beer Association rules: (many more!) Beer -> Diaper (60%, 100%) Diaper -> Beer (60%, 75%) 6" }, { "page_index": 328, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_007.png", "page_index": 328, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:55+07:00" }, "raw_text": "Closed Patterns and Max-Patterns A long pattern contains a combinatorial number of sub- Solution: Mine closed patterns and max-patterns instead An itemset X is closed if X is frequent and there exists no super-pattern Y X, with the same support as X (proposed by Pasquier, et al. @ ICDT'99) An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y X (proposed by Bayardo @ SIGMOD'98) Closed pattern is a lossless compression of freq. 
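The support and confidence definitions above are only a few lines of code on the slide's own five transactions; `support` and `confidence` are just illustrative helpers.

```python
transactions = [
    {"Beer", "Nuts", "Diaper"},
    {"Beer", "Coffee", "Diaper"},
    {"Beer", "Diaper", "Eggs"},
    {"Nuts", "Eggs", "Milk"},
    {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
]

def support(itemset):
    """Relative support: fraction of transactions containing the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """Conditional probability that a transaction with lhs also has rhs."""
    return support(lhs | rhs) / support(lhs)

print(support({"Beer", "Diaper"}))        # 0.6
print(confidence({"Beer"}, {"Diaper"}))   # 1.0  -> Beer -> Diaper (60%, 100%)
print(confidence({"Diaper"}, {"Beer"}))   # 0.75 -> Diaper -> Beer (60%, 75%)
```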
patterns Reducing the # of patterns and rules 7" }, { "page_index": 329, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_008.png", "page_index": 329, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:34:58+07:00" }, "raw_text": "Closed Patterns and Max-Patterns Exercise: DB = {<a1, ..., a100>, <a1, ..., a50>}, min_sup = 1. What is the set of closed itemsets? What is the set of max-patterns? What is the set of all patterns? 8" }, { "page_index": 330, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_009.png", "page_index": 330, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:03+07:00" }, "raw_text": "Computational Complexity of Frequent Itemset Mining How many itemsets may potentially be generated in the worst case? The number of frequent itemsets to be generated is sensitive to the minsup threshold When minsup is low, there exist potentially an exponential number of frequent itemsets The worst case: M^N where M: # distinct items, and N: max length of transactions The worst-case complexity vs. the expected probability Ex. Suppose Walmart has 10^4 kinds of products The chance to pick up one product: 10^-4 The chance to pick up a particular set of 10 products: 10^-40 What is the chance for this particular set of 10 products to be frequent 10^3 times in 10^9 transactions? 9" }, { "page_index": 331, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_010.png", "page_index": 331, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:06+07:00" }, "raw_text": "Chapter 5: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods Basic Concepts Frequent Itemset Mining Methods Which Patterns Are Interesting? - Pattern Evaluation Methods Summary 10" }, { "page_index": 332, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_011.png", "page_index": 332, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:10+07:00" }, "raw_text": "Scalable Frequent Itemset Mining Methods Apriori: A Candidate Generation-and-Test Approach Improving the Efficiency of Apriori FPGrowth: A Frequent Pattern-Growth Approach ECLAT: Frequent Pattern Mining with Vertical Data Format 11" },
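Worked answer to the exercise above, assuming the garbled line is the book's standard two-transaction DB <a1, ..., a100> and <a1, ..., a50> with min_sup = 1: the closed itemsets are {a1, ..., a100} (support 1) and {a1, ..., a50} (support 2); the only max-pattern is {a1, ..., a100}; and the set of all frequent patterns is every non-empty subset of {a1, ..., a100}, i.e. 2^100 - 1 itemsets, which is exactly why mining closed or max patterns is the practical alternative.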
{ "page_index": 333, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_012.png", "page_index": 333, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:15+07:00" }, "raw_text": "The Downward Closure Property and Scalable Mining Methods The downward closure property of frequent patterns: Any subset of a frequent itemset must be frequent If {beer, diaper, nuts} is frequent, so is {beer, diaper}, i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper} Scalable mining methods: Three major approaches Apriori (Agrawal & Srikant @VLDB'94) Freq. pattern growth (FPgrowth - Han, Pei & Yin @SIGMOD'00) Vertical data format approach (Charm - Zaki & Hsiao @SDM'02) 12" }, { "page_index": 334, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_013.png", "page_index": 334, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:20+07:00" }, "raw_text": "Apriori: A Candidate Generation & Test Approach Apriori pruning principle: If there is any itemset which is infrequent, its superset should not be generated/tested! (Agrawal & Srikant @VLDB'94, Mannila, et al. @ KDD'94) Method: Initially, scan DB once to get the frequent 1-itemsets Generate length-(k+1) candidate itemsets from length-k frequent itemsets Test the candidates against the DB Terminate when no frequent or candidate set can be generated 13" }, { "page_index": 335, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_014.png", "page_index": 335, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:29+07:00" }, "raw_text": "The Apriori Algorithm - An Example min_sup = 2 Database TDB (Tid / Items): 10 / A, C, D; 20 / B, C, E; 30 / A, B, C, E; 40 / B, E 1st scan, C1 (Itemset / sup): {A} 2, {B} 3, {C} 3, {D} 1, {E} 3 -> L1: {A} 2, {B} 3, {C} 3, {E} 3 C2 (Itemset): {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E} 2nd scan, C2 (Itemset / sup): {A,B} 1, {A,C} 2, {A,E} 1, {B,C} 2, {B,E} 3, {C,E} 2 -> L2: {A,C} 2, {B,C} 2, {B,E} 3, {C,E} 2 C3 (Itemset): {B,C,E} 3rd scan, L3 (Itemset / sup): {B,C,E} 2 14" }, { "page_index": 336, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_015.png", "page_index": 336, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:34+07:00" }, "raw_text": "The Apriori Algorithm (Pseudo-Code) Ck: candidate itemsets of size k Lk: frequent itemsets of size k L1 = {frequent items}; for (k = 1; Lk != empty; k++) do begin Ck+1 = candidates generated from Lk; for each transaction t in database do increment the count of all candidates in Ck+1 that are contained in t; Lk+1 = candidates in Ck+1 with min_support; end; return the union of all Lk; 15" },
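The pseudo-code above is short enough to run. Here is a self-contained sketch on the slide's four-transaction TDB with min_sup = 2; the join step simply unions pairs of frequent k-itemsets and keeps the size-(k+1) results, a simplification of the ordered self-join shown on the next slide.

```python
from itertools import combinations

def apriori(transactions, min_sup):
    """Level-wise search: candidates of size k+1 come from frequent size-k
    itemsets, one DB scan counts them, and the loop stops at an empty level."""
    items = sorted({i for t in transactions for i in t})
    L = [frozenset([i]) for i in items
         if sum(i in t for t in transactions) >= min_sup]
    frequent, k = list(L), 1
    while L:
        # join: union frequent k-itemsets pairwise, keep size-(k+1) results
        C = {a | b for a in L for b in L if len(a | b) == k + 1}
        # prune: every k-subset of a surviving candidate must be frequent
        Lset = set(L)
        C = {c for c in C
             if all(frozenset(s) in Lset for s in combinations(c, k))}
        L = [c for c in C
             if sum(c <= t for t in transactions) >= min_sup]   # one scan
        frequent += L
        k += 1
    return frequent

tdb = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
print(apriori(tdb, min_sup=2))   # L1..L3 from the slide, ending with {B, C, E}
```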
{ "page_index": 337, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_016.png", "page_index": 337, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:37+07:00" }, "raw_text": "Implementation of Apriori How to generate candidates? Step 1: self-joining Lk Step 2: pruning Example of candidate generation: L3 = {abc, abd, acd, ace, bcd} Self-joining L3*L3: abcd from abc and abd; acde from acd and ace Pruning: acde is removed because ade is not in L3 C4 = {abcd} 16" }, { "page_index": 338, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_017.png", "page_index": 338, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:41+07:00" }, "raw_text": "How to Count Supports of Candidates? Why is counting supports of candidates a problem? The total number of candidates can be very huge One transaction may contain many candidates Method Candidate itemsets are stored in a hash-tree A leaf node of the hash-tree contains a list of itemsets and counts An interior node contains a hash table Subset function: finds all the candidates contained in a transaction 17" }, { "page_index": 339, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_018.png", "page_index": 339, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:46+07:00" }, "raw_text": "Counting Supports of Candidates Using Hash Tree Figure: a hash tree over the candidate 3-itemsets; the hash function routes items 1,4,7 / 2,5,8 / 3,6,9 to the three branches, and the subset function walks transaction 1 2 3 5 6 down the tree (1 + 2 3 5 6, 1 2 + 3 5 6, 1 3 + 5 6, ...) so that only leaves whose candidates the transaction can contain are visited; leaf candidates include 2 3 4, 5 6 7, 3 6 7, 1 4 5, 3 4 5, 3 5 6, 1 3 6, 3 6 8, 3 5 7, 6 8 9, 1 2 4, 1 2 5, 1 5 9, 4 5 7, 4 5 8 18" }, { "page_index": 340, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_019.png", "page_index": 340, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:51+07:00" }, "raw_text": "Candidate Generation: An SQL Implementation SQL implementation of candidate generation Suppose the items in Lk-1 are listed in an order Step 1: self-joining Lk-1 insert into Ck select p.item1, p.item2, ..., p.itemk-1, q.itemk-1 from Lk-1 p, Lk-1 q where p.item1 = q.item1, ..., p.itemk-2 = q.itemk-2, p.itemk-1 < q.itemk-1 Step 2: pruning forall itemsets c in Ck do forall (k-1)-subsets s of c do if (s is not in Lk-1) then delete c from Ck Use object-relational extensions like UDFs, BLOBs, and Table functions for efficient implementation [See: S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications.
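The L3 -> C4 example above, executed: self-join on the first k-1 items (items kept in sorted order), then prune any candidate with an infrequent (k-1)-subset. `gen_candidates` is an illustrative name.

```python
from itertools import combinations

L3 = [("a", "b", "c"), ("a", "b", "d"), ("a", "c", "d"),
      ("a", "c", "e"), ("b", "c", "d")]

def gen_candidates(Lk):
    """Self-join frequent k-itemsets that agree on their first k-1 items,
    then prune candidates containing an infrequent (k-1)-subset."""
    k = len(Lk[0])
    joined = [p + (q[k - 1],)
              for p in Lk for q in Lk
              if p[:k - 1] == q[:k - 1] and p[k - 1] < q[k - 1]]
    freq = set(Lk)
    return [c for c in joined if all(s in freq for s in combinations(c, k))]

print(gen_candidates(L3))
# [('a', 'b', 'c', 'd')]: abcd survives; acde is pruned since ade is not in L3
```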
{ "page_index": 341, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_020.png", "page_index": 341, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:55+07:00" }, "raw_text": "Scalable Frequent Itemset Mining Methods Apriori: A Candidate Generation-and-Test Approach Improving the Efficiency of Apriori FPGrowth: A Frequent Pattern-Growth Approach ECLAT: Frequent Pattern Mining with Vertical Data Format Mining Closed Frequent Patterns and Max-Patterns 20" }, { "page_index": 342, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_021.png", "page_index": 342, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:35:58+07:00" }, "raw_text": "Further Improvement of the Apriori Method Major computational challenges Multiple scans of transaction database Huge number of candidates Tedious workload of support counting for candidates Improving Apriori: general ideas Reduce passes of transaction database scans Shrink number of candidates Facilitate support counting of candidates 21" }, { "page_index": 343, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_022.png", "page_index": 343, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:36:04+07:00" }, "raw_text": "Partition: Scan Database Only Twice Any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB (equivalently: if sup_j(i) < sigma * |DB_j| in every partition DB_j of DB = DB_1 + DB_2 + ... + DB_k, then sup(i) < sigma * |DB|) Scan 1: partition database and find local frequent patterns Scan 2: consolidate global frequent patterns A. Savasere, E. Omiecinski and S. Navathe, VLDB'95" }, { "page_index": 344, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_023.png", "page_index": 344, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:36:09+07:00" }, "raw_text": "DHP: Reduce the Number of Candidates A k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent Hash table (bucket count -> hash entries): 35 -> {ab, ad, ae}; 88 -> {bd, be, de}; ...; 102 -> {yz, qs, wt} Candidates: a, b, c, d, e Frequent 1-itemsets: a, b, d, e ab is not a candidate 2-itemset if the count of the bucket holding {ab, ad, ae} is below the support threshold J. Park, M. Chen, and P. Yu. An effective hash-based algorithm for mining association rules. SIGMOD'95 23" },
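The two-scan partitioning idea of Savasere et al. above can be sketched as follows. The local miner here is a deliberately naive brute-force enumeration (the names and the max_len cap are my own simplifications); any in-memory frequent-itemset routine would do, since the second scan verifies candidates globally:

```python
from itertools import combinations

def local_frequent(partition, rel_minsup, max_len=3):
    """Brute-force frequent itemsets within one in-memory partition."""
    n = len(partition)
    counts = {}
    for t in partition:
        for k in range(1, min(max_len, len(t)) + 1):
            for s in combinations(sorted(t), k):
                counts[s] = counts.get(s, 0) + 1
    return {s for s, c in counts.items() if c >= rel_minsup * n}

def partition_mine(transactions, rel_minsup, k=2):
    """Scan 1: mine each partition locally; scan 2: verify the union of local
    winners against the whole DB. A globally frequent itemset must be locally
    frequent in at least one partition, so nothing is missed."""
    size = (len(transactions) + k - 1) // k
    parts = [transactions[i:i + size] for i in range(0, len(transactions), size)]
    candidates = set().union(*(local_frequent(p, rel_minsup) for p in parts))
    n = len(transactions)
    out = {}
    for c in candidates:
        sup = sum(1 for t in transactions if set(c) <= set(t))
        if sup >= rel_minsup * n:
            out[c] = sup
    return out

tdb = [{'A', 'C', 'D'}, {'B', 'C', 'E'}, {'A', 'B', 'C', 'E'}, {'B', 'E'}]
print(partition_mine(tdb, rel_minsup=0.5))   # includes ('B', 'C', 'E'): 2
```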
{ "page_index": 345, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_024.png", "page_index": 345, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:36:14+07:00" }, "raw_text": "Sampling for Frequent Patterns Select a sample of the original database, mine frequent patterns within the sample using Apriori Scan the database once to verify frequent itemsets found in the sample; only the borders of the closure of frequent patterns are checked Example: check abcd instead of ab, ac, ..., etc. Scan the database again to find missed frequent patterns H. Toivonen. Sampling large databases for association rules. In VLDB'96 24" }, { "page_index": 346, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_025.png", "page_index": 346, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:36:20+07:00" }, "raw_text": "DIC: Reduce Number of Scans Once both A and D are determined frequent, the counting of AD begins Once all length-2 subsets of BCD are determined frequent, the counting of BCD begins Unlike Apriori's strict level-by-level passes, DIC starts counting an itemset as soon as all its subsets are known to be frequent, interleaved with the scan [Figure: itemset lattice from {} up to ABCD, and a transaction stream showing where 1-itemset, 2-itemset, and 3-itemset counting starts for Apriori vs. DIC] S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. SIGMOD'97 25" }, { "page_index": 347, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_026.png", "page_index": 347, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:36:24+07:00" }, "raw_text": "Scalable Frequent Itemset Mining Methods Apriori: A Candidate Generation-and-Test Approach Improving the Efficiency of Apriori FPGrowth: A Frequent Pattern-Growth Approach ECLAT: Frequent Pattern Mining with Vertical Data Format Mining Closed Frequent Patterns and Max-Patterns 26" },
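Toivonen's sampling scheme above can be approximated as below. This sketch reuses the apriori() function from the earlier insert and omits the negative-border check that the real algorithm uses to detect missed itemsets, so it is an illustration only; the slack factor that lowers the in-sample threshold is a hypothetical parameter:

```python
import random
# assumes apriori(transactions, min_sup) from the earlier sketch is in scope

def sample_mine(transactions, rel_minsup, sample_frac=0.1, slack=0.8):
    """Toivonen-style sketch: mine a random sample at a lowered threshold so
    few truly frequent itemsets are missed, then verify on the full DB."""
    sample = random.sample(transactions, max(1, int(sample_frac * len(transactions))))
    local = apriori(sample, max(1, int(slack * rel_minsup * len(sample))))
    n = len(transactions)
    verified = {}
    for s in local:
        sup = sum(1 for t in transactions if s <= set(t))
        if sup >= rel_minsup * n:
            verified[s] = sup
    return verified

tdb = [{'A', 'C', 'D'}, {'B', 'C', 'E'}, {'A', 'B', 'C', 'E'}, {'B', 'E'}] * 50
print(sample_mine(tdb, rel_minsup=0.4))
```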
{ "page_index": 348, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_027.png", "page_index": 348, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:36:28+07:00" }, "raw_text": "Pattern-Growth Approach: Mining Frequent Patterns Without Candidate Generation Bottlenecks of the Apriori approach Breadth-first (i.e., level-wise) search Candidate generation and test Often generates a huge number of candidates The FPGrowth Approach (J. Han, J. Pei, and Y. Yin, SIGMOD'00) Depth-first search Avoid explicit candidate generation Major philosophy: Grow long patterns from short ones using local frequent items only "abc" is a frequent pattern Get all transactions having "abc", i.e., project DB on abc: DB|abc "d" is a local frequent item in DB|abc -> abcd is a frequent pattern 27" }, { "page_index": 349, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_028.png", "page_index": 349, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:36:36+07:00" }, "raw_text": "Construct FP-tree from a Transaction Database (min_support = 3) TID / Items bought / (ordered) frequent items: 100: {f, a, c, d, g, i, m, p} -> {f, c, a, m, p}; 200: {a, b, c, f, l, m, o} -> {f, c, a, b, m}; 300: {b, f, h, j, o, w} -> {f, b}; 400: {b, c, k, s, p} -> {c, b, p}; 500: {a, f, c, e, l, p, m, n} -> {f, c, a, m, p} 1. Scan DB once, find frequent 1-itemsets (single item patterns) 2. Sort frequent items in frequency descending order: F-list = f-c-a-b-m-p (f:4, c:4, a:3, b:3, m:3, p:3) 3. Scan DB again, construct the FP-tree [Figure: FP-tree rooted at {} with a header table linking all occurrences of each item] 28" }, { "page_index": 350, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_029.png", "page_index": 350, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:36:40+07:00" }, "raw_text": "Partition Patterns and Databases Frequent patterns can be partitioned into subsets according to the f-list F-list = f-c-a-b-m-p Patterns containing p Patterns having m but no p Patterns having b but no m nor p Patterns having a but no b, m, p Patterns having c but no a, b, m, p Pattern f Completeness and non-redundancy 29" },
{ "page_index": 351, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_030.png", "page_index": 351, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:36:49+07:00" }, "raw_text": "Find Patterns Having p From p's Conditional Database Starting at the frequent-item header table in the FP-tree Traverse the FP-tree by following the link of each frequent item p Accumulate all of the transformed prefix paths of item p to form p's conditional pattern base Conditional pattern bases (item -> cond. pattern base): c -> f:3; a -> fc:3; b -> fca:1, f:1, c:1; m -> fca:2, fcab:1; p -> fcam:2, cb:1 30" }, { "page_index": 352, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_031.png", "page_index": 352, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:36:56+07:00" }, "raw_text": "From Conditional Pattern-bases to Conditional FP-trees For each pattern-base Accumulate the count for each item in the base Construct the FP-tree for the frequent items of the pattern base m-conditional pattern base: fca:2, fcab:1 m-conditional FP-tree: {} -> f:3 -> c:3 -> a:3 All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam, fcam 31" }, { "page_index": 353, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_032.png", "page_index": 353, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:01+07:00" }, "raw_text": "Recursion: Mining Each Conditional FP-tree m-conditional FP-tree: {} -> f:3 -> c:3 -> a:3 Cond. pattern base of "am": (fc:3) -> am-conditional FP-tree: {} -> f:3 -> c:3 Cond. pattern base of "cm": (f:3) -> cm-conditional FP-tree: {} -> f:3 Cond. pattern base of "cam": (f:3) -> cam-conditional FP-tree: {} -> f:3 32" }, { "page_index": 354, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_033.png", "page_index": 354, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:07+07:00" }, "raw_text": "A Special Case: Single Prefix Path in FP-tree Suppose a (conditional) FP-tree T has a shared single prefix-path P Mining can be decomposed into two parts Reduction of the single prefix path into one node Concatenation of the mining results of the two parts [Figure: a tree whose single prefix path a1:n1 -> a2:n2 -> a3:n3 branches into subtrees b1:m1 and c1:k1, c2:k2, c3:k3 is split into the prefix-path part and the branching part] 33" }, { "page_index": 355, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_034.png", "page_index": 355, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:12+07:00" }, "raw_text": "Benefits of the FP-tree Structure Completeness Preserve complete information for frequent pattern mining Never break a long pattern of any transaction Compactness Reduce irrelevant info - infrequent items are gone Items in frequency descending order: the more frequently occurring, the more likely to be shared Never be larger than the original database (not counting node-links and the count fields) 34" },
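Putting the FP-tree slides together: a compact tree with header-table links can be built in one pass over the already-ordered transactions, and conditional pattern bases then fall out by walking parent pointers. This is a sketch with my own naming, run on the slides' example so the output matches the conditional pattern bases above (m -> fca:2, fcab:1; p -> fcam:2, cb:1):

```python
from collections import defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent, self.count, self.children = item, parent, 0, {}

def build_fptree(transactions, flist):
    """Insert each transaction's frequent items in f-list order (scan 2).
    flist comes from scan 1: items with sup >= min_sup, frequency-descending."""
    rank = {i: r for r, i in enumerate(flist)}
    root, header = Node(None, None), defaultdict(list)  # header: item -> node links
    for t in transactions:
        node = root
        for i in sorted((i for i in t if i in rank), key=rank.get):
            if i not in node.children:
                node.children[i] = Node(i, node)
                header[i].append(node.children[i])
            node = node.children[i]
            node.count += 1
    return root, header

def conditional_pattern_base(header, item):
    """Transformed prefix paths of every node carrying `item`, with counts."""
    base = []
    for node in header[item]:
        path, p = [], node.parent
        while p is not None and p.item is not None:
            path.append(p.item)
            p = p.parent
        if path:
            base.append((''.join(reversed(path)), node.count))
    return base

# The slides' database, already reduced to frequent items (min_support = 3),
# with the slides' f-list f-c-a-b-m-p:
tdb = ["fcamp", "fcabm", "fb", "cbp", "fcamp"]
root, header = build_fptree(tdb, list("fcabmp"))
print(conditional_pattern_base(header, 'm'))   # [('fca', 2), ('fcab', 1)]
print(conditional_pattern_base(header, 'p'))   # [('fcam', 2), ('cb', 1)]
```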
"/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_035.png", "page_index": 356, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:16+07:00" }, "raw_text": "The Frequent Pattern Growth Mining Method Idea: Frequent pattern growth Recursively grow frequent patterns by pattern and database partition Method For each frequent item, construct its conditional pattern-base, and then its conditional FP-tree Repeat the process on each newly created conditional FP-tree Until the resulting FP-tree is empty, or it contains only one path-single path will generate all the combinations of its sub-paths, each of which is a frequent pattern 35" }, { "page_index": 357, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_036.png", "page_index": 357, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:21+07:00" }, "raw_text": "Scaling FP-growth by Database Projection What about if FP-tree cannot fit in memory? DB projection First partition a database into a set of projected DBs Then construct and mine FP-tree for each projected DB Parallel projection vs. partition projection techniques Parallel projection Project the DB in parallel for each frequent item Parallel projection is space costly All the partitions can be processed in parallel Partition projection Partition the DB based on the ordered freguent items Passing the unprocessed parts to the subsequent partitions 36" }, { "page_index": 358, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_037.png", "page_index": 358, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:27+07:00" }, "raw_text": "Partition-Based Projection Parallel projection needs a lot Tran. DB of disk space fcamp fcabm Partition projection saves it fb cbp fcamp p-proj DB m-proi DB b-proi DB a-proi DB C-proj DB f-proj DB fcam fcab f fc cb fca cb fcam fca am-proj DB cm-pro1 DB fc fcj fc + 37" }, { "page_index": 359, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_038.png", "page_index": 359, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:35+07:00" }, "raw_text": "Performance of FPGrowth in Large Datasets 100 140 D2 FP-growth 90 - D1 FP-grow th runtime 120 1 -x- - D1 Apriori runtime - -*--D2 TreeProjection 80 1 1 70 1 100 (oes) * Foas)aal and 60 1 80 Data set T25I20D10K 50 Data set T25I20D100K 60 40 30 40 20 20 10 0 0.5 1.5 2.5 0 0.5 1.5 2 2 0 1 3 Support threshold(%) Support threshold (%) FP-Growth vs. Apriori FP-Growth vs. 
{ "page_index": 360, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_039.png", "page_index": 360, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:39+07:00" }, "raw_text": "Advantages of the Pattern Growth Approach Divide-and-conquer: Decompose both the mining task and DB according to the frequent patterns obtained so far Lead to focused search of smaller databases Other factors No candidate generation, no candidate test Compressed database: FP-tree structure No repeated scan of entire database Basic ops: counting local freq items and building sub FP-tree, no pattern search and matching A good open-source implementation and refinement of FPGrowth: FPGrowth+ (Grahne and J. Zhu, FIMI'03) 39" }, { "page_index": 361, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_040.png", "page_index": 361, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:45+07:00" }, "raw_text": "Further Improvements of Mining Methods AFOPT (Liu, et al. @KDD'03) A "push-right" method for mining condensed frequent pattern (CFP) trees Carpenter (Pan, et al. @KDD'03) Mine data sets with small rows but numerous columns Construct a row-enumeration tree for efficient mining FPgrowth+ (Grahne and Zhu, FIMI'03) Efficiently Using Prefix-Trees in Mining Frequent Itemsets, Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003 TD-Close (Liu, et al., SDM'06) 40" },
{ "page_index": 362, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_041.png", "page_index": 362, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:51+07:00" }, "raw_text": "Extension of Pattern Growth Mining Methodology Mining closed frequent itemsets and max-patterns CLOSET (DMKD'00), FPclose, and FPMax (Grahne & Zhu, FIMI'03) Mining sequential patterns PrefixSpan (ICDE'01), CloSpan (SDM'03), BIDE (ICDE'04) Mining graph patterns gSpan (ICDM'02), CloseGraph (KDD'03) Constraint-based mining of frequent patterns Convertible constraints (ICDE'01), gPrune (PAKDD'03) Computing iceberg data cubes with complex measures H-tree, H-cubing, and Star-cubing (SIGMOD'01, VLDB'03) Pattern-growth-based clustering MaPle (Pei, et al., ICDM'03) Pattern-growth-based classification Mining frequent and discriminative patterns (Cheng, et al., ICDE'07) 41" }, { "page_index": 363, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_042.png", "page_index": 363, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:37:55+07:00" }, "raw_text": "Scalable Frequent Itemset Mining Methods Apriori: A Candidate Generation-and-Test Approach Improving the Efficiency of Apriori FPGrowth: A Frequent Pattern-Growth Approach ECLAT: Frequent Pattern Mining with Vertical Data Format Mining Closed Frequent Patterns and Max-Patterns 42" }, { "page_index": 364, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_043.png", "page_index": 364, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:00+07:00" }, "raw_text": "ECLAT: Mining by Exploring Vertical Data Format Vertical format: t(AB) = {T11, T25, ...} tid-list: list of trans.-ids containing an itemset Deriving frequent patterns based on vertical intersections t(X) = t(Y): X and Y always happen together t(X) is a subset of t(Y): a transaction having X always has Y Using diffsets to accelerate mining Only keep track of differences of tids t(X) = {T1, T2, T3}, t(XY) = {T1, T3} -> Diffset(XY, X) = {T2} Eclat (Zaki et al. @KDD'97) Mining closed patterns using the vertical format: CHARM (Zaki & Hsiao @SDM'02) 43" },
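The vertical-format search that ECLAT performs is essentially recursive tid-list intersection. A minimal sketch (my own naming), run on the chapter's earlier example database, with the diffset idea noted in a comment:

```python
def eclat(tidlists, min_sup, prefix=(), out=None):
    """Depth-first search over the vertical format: extend `prefix` by
    intersecting tid-lists. tidlists: {item: set(tids)} at the top call."""
    if out is None:
        out = {}
    items = sorted(tidlists)
    for i, a in enumerate(items):
        ta = tidlists[a]
        if len(ta) >= min_sup:
            out[prefix + (a,)] = len(ta)
            # conditional vertical DB: intersect with the remaining items
            suffix = {b: ta & tidlists[b] for b in items[i + 1:]}
            eclat(suffix, min_sup, prefix + (a,), out)
    return out

# Vertical view of the earlier example database (min_sup = 2):
vertical = {'A': {10, 30}, 'B': {20, 30, 40}, 'C': {10, 20, 30}, 'E': {20, 30, 40}}
freq = eclat(vertical, 2)
assert freq[('B', 'C', 'E')] == 2
# Diffset idea: store t(X) - t(XY) instead of t(XY); e.g. for t(X) = {1, 2, 3}
# and t(XY) = {1, 3}, Diffset(XY, X) = {2}, usually far smaller than a tid-list.
```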
{ "page_index": 365, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_044.png", "page_index": 365, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:04+07:00" }, "raw_text": "Scalable Frequent Itemset Mining Methods Apriori: A Candidate Generation-and-Test Approach Improving the Efficiency of Apriori FPGrowth: A Frequent Pattern-Growth Approach ECLAT: Frequent Pattern Mining with Vertical Data Format Mining Closed Frequent Patterns and Max-Patterns 44" }, { "page_index": 366, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_045.png", "page_index": 366, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:09+07:00" }, "raw_text": "Mining Frequent Closed Patterns: CLOSET F-list: list of all frequent items in support ascending order F-list = d-a-f-e-c, min_sup = 2 TID / Items: 10: a, c, d, e, f; 20: a, b, e; 30: c, e, f; 40: a, c, d, f; 50: c, e, f Divide search space Patterns having d Patterns having a but no d, etc. Find frequent closed patterns recursively Every transaction having d also has cfa -> cfad is a frequent closed pattern J. Pei, J. Han & R. Mao. "CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets", DMKD'00." }, { "page_index": 367, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_046.png", "page_index": 367, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:14+07:00" }, "raw_text": "CLOSET+: Mining Closed Itemsets by Pattern-Growth Itemset merging: if Y appears in every occurrence of X, then Y is merged with X Sub-itemset pruning: if Y is a proper superset of X and sup(X) = sup(Y), X and all of X's descendants in the set enumeration tree can be pruned Hybrid tree projection Bottom-up physical tree-projection Top-down pseudo tree-projection Item skipping: if a local frequent item has the same support in several header tables at different levels, one can prune it from the header table at higher levels Efficient subset checking" }, { "page_index": 368, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_047.png", "page_index": 368, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:19+07:00" }, "raw_text": "MaxMiner: Mining Max-Patterns 1st scan: find frequent items A, B, C, D, E Tid / Items: 10: A, B, C, D, E; 20: B, C, D, E; 30: A, C, D, F 2nd scan: find support for AB, AC, AD, AE, ABCDE; BC, BD, BE, BCDE; CD, CE, CDE; DE (potential max-patterns) Since BCDE is a max-pattern, there is no need to check BCD, BDE, CDE in a later scan R. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98" },
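Closed and max patterns, as mined by CLOSET/CLOSET+ and MaxMiner above, can at least be defined operationally by a post-filter over all frequent itemsets. Real algorithms prune during the search rather than materializing everything, so this sketch (my own naming) only illustrates the definitions, on the chapter's running example:

```python
def closed_and_max(freq):
    """freq: {frozenset: support}. A frequent itemset is closed if no proper
    superset has the same support, and maximal if no proper superset is
    frequent at all."""
    closed, maximal = {}, {}
    for s, sup in freq.items():
        supersets = [t for t in freq if s < t]
        if all(freq[t] < sup for t in supersets):
            closed[s] = sup
        if not supersets:
            maximal[s] = sup
    return closed, maximal

# Frequent itemsets of the earlier example (min_sup = 2):
freq = {frozenset('A'): 2, frozenset('B'): 3, frozenset('C'): 3, frozenset('E'): 3,
        frozenset('AC'): 2, frozenset('BC'): 2, frozenset('BE'): 3, frozenset('CE'): 2,
        frozenset('BCE'): 2}
closed, maximal = closed_and_max(freq)
print(sorted(''.join(sorted(s)) for s in closed))    # ['AC', 'BCE', 'BE', 'C']
print(sorted(''.join(sorted(s)) for s in maximal))   # ['AC', 'BCE']
```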
{ "page_index": 369, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_048.png", "page_index": 369, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:25+07:00" }, "raw_text": "CHARM: Mining by Exploring Vertical Data Format Vertical format: t(AB) = {T11, T25, ...} tid-list: list of trans.-ids containing an itemset Deriving closed patterns based on vertical intersections t(X) = t(Y): X and Y always happen together t(X) is a subset of t(Y): a transaction having X always has Y Using diffsets to accelerate mining Only keep track of differences of tids t(X) = {T1, T2, T3}, t(XY) = {T1, T2} -> Diffset(XY, X) = {T3} Eclat/MaxEclat (Zaki et al. @KDD'97), VIPER (P. Shenoy et al. @SIGMOD'00), CHARM (Zaki & Hsiao @SDM'02)" }, { "page_index": 370, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_049.png", "page_index": 370, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:30+07:00" }, "raw_text": "Visualization of Association Rules: Plane Graph [Screenshot: DBMiner Enterprise Associator plane-graph view; color encodes confidence (1%-100%), height encodes support; e.g., rule Gender=[M] with support 37.06%, confidence 50.55%] 49" }, { "page_index": 371, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_050.png", "page_index": 371, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:36+07:00" }, "raw_text": "Visualization of Association Rules: Rule Graph [Screenshot: DBMiner Enterprise Associator rule-graph view; node size encodes support; rules connect attribute values such as Education Level=[High School Degree], Marital Status=[M], Gender=[F], Education Level=[Bachelors Degree], Gender=[M], Education Level=[Partial College], Marital Status=[S]] 50" }, { "page_index": 372, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_051.png", "page_index": 372, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:41+07:00" }, "raw_text": "Visualization of Association Rules (SGI/MineSet 3.0) [Screenshot: MineSet 3-D association-rule visualization] 51" },
51" }, { "page_index": 373, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_052.png", "page_index": 373, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:45+07:00" }, "raw_text": "Chapter 5: Mining Freguent Patterns, Association and Correlations: Basic Concepts and Methods Basic Concepts Frequent Itemset Mining Methods Which Patterns Are Interesting?-Pattern Evaluation Methods Summary 52" }, { "page_index": 374, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_053.png", "page_index": 374, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:38:51+07:00" }, "raw_text": "Interestingness Measure: Correlations (Lift) play basketball => eat cereal [40%, 66.7%] is misleading The overall % of students eating cereal is 75% > 66.7%. play basketball => not eat cereal [20%, 33.3%] is more accurate although with lower support and confidence Measure of dependent/correlated events: lift P(AUB) Basketball Not basketball Sum (row) lift = Cereal 2000 1750 3750 P(A)P(B) Not cereal 1000 250 1250 2000/5000 lift(B,C) = = 0.89 Sum(col.) 3000 2000 5000 3000/5000*3750/5000 1000/5000 lift(B,-C)= =1.33 3000/5000*1250/5000 53" }, { "page_index": 375, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_054.png", "page_index": 375, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:39:05+07:00" }, "raw_text": "Are /iftand l x2 Good Measures of Correlation? symbol measure range formula \"Buy walnuts => buy o-coefficient -1...1 P(A.B)-P(AP(B /P(AP(B)(1-P(A))(1-P(B) Yule's Q -1...1 P(A,B)P(A,B)-P(A,B)P(A,B) Q milk[1%, 80%]\" is P(A,BP(A,B)+P(A,B)P(A,B Y P(A,B)P(A,B)- VP(A,B)P(A,B Yule's Y -1...1 VP(A,B)P(A,B)+VP(A,B)P(A,B misleading if 85% of Cohen's P(A,B)+P(A,B)-P(AP(B)-P(A)P(B k -1...1 1-P(A)P(B)-P(AP(B PS Piatetsky-Shapiro's -0.25...0.25 P(A.B)-P(AP(B customers buy milk F Certainty factor -1...1 1-P(B) 1-P(A) AV added value -0.5...1 max(P(BA) -P(B),P(AB) -P(A) K Klosgcn's Q 0.33...0.38 P(A,B) max(P(B)-P(B),P(AB)-P(A Support and confidence j maxkP(Ai,B)+E maxjP(Ai,B)-max;P(Aj)-maxk P(Bk) g Goodman-kruskal's 0. 2-maxj P(Aj)-maxkP(Bk) P(Ai,Bj) are not good to indicate M Mutual Information 0...1 min(-E;P(Aj)log P(A;)log P(A,=E;P(Bq)log P(Bi)log P(Bj) J J-Measure 0...1 max(P(A,B)log( P(B) correlations P(B) A P(A) G Gini index max(P(A)[P(BA)2+P(BA)2]+P(A[P(BA)2+P(BA)2] -P(B)2-P(B)2 Over 20 interestingnes$ P(B)[P(AB)2+ P(AB)2]+ P(B[P(AB)2 + P(AB)2]-P(A)2 =P(A)2 support P(A,B) s c confidence 0...1 measures have been max(P(BA),P(AB L Laplace 0...1 NP(A,B)+1 NP(A,B)+1 max( NPA)+2 NP(B)+2 Is P(A,B) Cosine proposed (see Tan, /P(A)P(B) coherence(Jaccard PA,B Y P(A)+P(B)-P(A,B) Kumar, Sritastava all-confidence 0...1 P(A,B 0x max(P(A),P(B)) odds ratio 0...0 P(A,BP(A,B 0 P(A,BP(A,B @KDD'02) V 0.5... P(A)P(B P(B)P(A Conviction ma P(AB) P(BA) P(A,B) 1 lift 0...0 P(A)P(B) Which are good ones? 
{ "page_index": 375, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_054.png", "page_index": 375, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:39:05+07:00" }, "raw_text": "Are lift and chi^2 Good Measures of Correlation? "Buy walnuts => buy milk [1%, 80%]" is misleading if 85% of customers buy milk Support and confidence are not good to indicate correlations Over 20 interestingness measures have been proposed (see Tan, Kumar, Srivastava @KDD'02) Which are good ones? [Table: symbol, range, and formula for about 20 measures, incl. phi-coefficient, Yule's Q, Yule's Y, Cohen's kappa, Piatetsky-Shapiro's, certainty factor, added value, Klosgen's, Goodman-Kruskal's, mutual information, J-measure, Gini index, support, confidence, Laplace, cosine (IS), coherence (Jaccard), all-confidence, odds ratio, conviction, lift, collective strength] 54" }, { "page_index": 376, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_055.png", "page_index": 376, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:39:30+07:00" }, "raw_text": "Null-Invariant Measures Table 6: Properties of interestingness measures; note that none of the measures satisfies all the properties [Table: for each of the ~21 measures, whether it satisfies P1-P3 and O1-O4] where: P1: O(M) = 0 if det(M) = 0, i.e., whenever A and B are statistically independent P2: O(M2) > O(M1) if M2 = M1 + [k, -k; -k, k] P3: O(M2) < O(M1) if M2 = M1 + [0, k; 0, -k] or M2 = M1 + [0, 0; k, -k] O1: symmetry under variable permutation O2: row and column scaling invariance O3: antisymmetry under row or column permutation O3': inversion invariance O4: null invariance (Yes*: yes if the measure is normalized; No**: no unless the measure is symmetrized by taking max(M(A,B), M(B,A))) 55" }, { "page_index": 377, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_056.png", "page_index": 377, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:39:46+07:00" }, "raw_text": "Comparison of Interestingness Measures Null-(transaction) invariance is crucial for correlation analysis Lift and chi^2 are not null-invariant; five measures are Measure / definition / range / null-invariant: chi^2(a,b) = sum over i,j of (o(ai,bj) - e(ai,bj))^2 / e(ai,bj), [0, inf), No; Lift(a,b) = P(ab) / (P(a) P(b)), [0, inf), No; AllConf(a,b) = sup(ab) / max{sup(a), sup(b)}, [0,1], Yes; Coherence(a,b) = sup(ab) / (sup(a) + sup(b) - sup(ab)), [0,1], Yes; Cosine(a,b) = sup(ab) / sqrt(sup(a) sup(b)), [0,1], Yes; Kulc(a,b) = (sup(ab)/2) (1/sup(a) + 1/sup(b)), [0,1], Yes; MaxConf(a,b) = max{sup(ab)/sup(a), sup(ab)/sup(b)}, [0,1], Yes Null-transactions w.r.t. m and c: transactions containing neither milk nor coffee (contingency cells: mc, m not-c, not-m c, not-m not-c) Kulczynski measure (1927) Table 2. Example data sets (columns: mc, m not-c, not-m c, not-m not-c, chi^2, Lift, AllConf, Coherence, Cosine, Kulc, MaxConf): D1: 10,000, 1,000, 1,000, 100,000, 90557, 9.26, 0.91, 0.83, 0.91, 0.91, 0.91 D2: 10,000, 1,000, 1,000, 100, 0, 1, 0.91, 0.83, 0.91, 0.91, 0.91 D3: 100, 1,000, 1,000, 100,000, 670, 8.44, 0.09, 0.05, 0.09, 0.09, 0.09 D4: 1,000, 1,000, 1,000, 100,000, 24740, 25.75, 0.5, 0.33, 0.5, 0.5, 0.5 D5: 1,000, 100, 10,000, 100,000, 8173, 9.18, 0.09, 0.09, 0.29, 0.5, 0.91 D6: 1,000, 10, 100,000, 100,000, 965, 1.97, 0.01, 0.01, 0.10, 0.5, 0.99 Subtle: They disagree" },
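The five null-invariant measures and lift from the comparison above can all be computed from a single 2x2 contingency table. This sketch (my own naming) reproduces Table 2's row for D5, where the measures visibly disagree:

```python
from math import sqrt

def measures(mc, m_notc, notm_c, notm_notc):
    """Five null-invariant measures plus lift for one 2x2 contingency table."""
    n = mc + m_notc + notm_c + notm_notc
    sup_m, sup_c = mc + m_notc, mc + notm_c
    return {
        'lift':      (mc / n) / ((sup_m / n) * (sup_c / n)),
        'all_conf':  mc / max(sup_m, sup_c),
        'coherence': mc / (sup_m + sup_c - mc),
        'cosine':    mc / sqrt(sup_m * sup_c),
        'kulc':      0.5 * (mc / sup_m + mc / sup_c),
        'max_conf':  max(mc / sup_m, mc / sup_c),
    }

# D5 from Table 2: mc = 1,000, m(not c) = 100, (not m)c = 10,000, neither = 100,000
d5 = measures(1000, 100, 10000, 100000)
print({k: round(v, 2) for k, v in d5.items()})
# kulc = 0.5, all_conf = 0.09, cosine = 0.29, max_conf = 0.91 -- they disagree,
# while lift (9.18) is dominated by the 100,000 null transactions.
```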
{ "page_index": 378, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_057.png", "page_index": 378, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:00+07:00" }, "raw_text": "Analysis of DBLP Coauthor Relationships Recent DB conferences, removing balanced associations, low sup, etc. ID / Author a / Author b / sup(ab) / sup(a) / sup(b) / Coherence / Cosine / Kulc (rank in parentheses): 1 Hans-Peter Kriegel / Martin Ester: 28, 146, 54, 0.163 (2), 0.315 (7), 0.355 (9) 2 Michael Carey / Miron Livny: 26, 104, 58, 0.191 (1), 0.335 (4), 0.349 (10) 3 Hans-Peter Kriegel / Joerg Sander: 24, 146, 36, 0.152 (3), 0.331 (5), 0.416 (8) 4 Christos Faloutsos / Spiros Papadimitriou: 20, 162, 26, 0.119 (7), 0.308 (10), 0.446 (7) 5 Hans-Peter Kriegel / Martin Pfeifle: 18, 146, 18, 0.123 (6), 0.351 (2), 0.562 (2) 6 Hector Garcia-Molina / Wilburt Labio: 16, 144, 18, 0.110 (9), 0.314 (8), 0.500 (4) 7 Divyakant Agrawal / Wang Hsiung: 16, 120, 16, 0.133 (5), 0.365 (1), 0.567 (1) 8 Elke Rundensteiner / Murali Mani: 16, 104, 20, 0.148 (4), 0.351 (3), 0.477 (6) 9 Divyakant Agrawal / Oliver Po: 12, 120, 12, 0.100 (10), 0.316 (6), 0.550 (3) 10 Gerhard Weikum / Martin Theobald: 12, 106, 14, 0.111 (8), 0.312 (9), 0.485 (5) Table 5. Experiment on the DBLP data set. Advisor-advisee relation: Kulc: high, coherence: low, cosine: middle Tianyi Wu, Yuguo Chen and Jiawei Han, "Association Mining in Large Databases: A Re-Examination of Its Measures", Proc. 2007 Int. Conf. Principles and Practice of Knowledge Discovery in Databases (PKDD'07), Sept. 2007 57" }, { "page_index": 379, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_058.png", "page_index": 379, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:10+07:00" }, "raw_text": "Which Null-Invariant Measure Is Better? IR (Imbalance Ratio): measures the imbalance of two itemsets A and B in rule implications: IR(A,B) = |sup(A) - sup(B)| / (sup(A) + sup(B) - sup(A u B)) Kulczynski and Imbalance Ratio (IR) together present a clear picture for all three data sets D4 through D6: D4 is balanced & neutral, D5 is imbalanced & neutral, D6 is very imbalanced & neutral Data / mc / m(not c) / (not m)c / (not m)(not c) / all_conf / max_conf / Kulc / cosine / IR: D1: 10,000, 1,000, 1,000, 100,000, 0.91, 0.91, 0.91, 0.91, 0.0 D2: 10,000, 1,000, 1,000, 100, 0.91, 0.91, 0.91, 0.91, 0.0 D3: 100, 1,000, 1,000, 100,000, 0.09, 0.09, 0.09, 0.09, 0.0 D4: 1,000, 1,000, 1,000, 100,000, 0.5, 0.5, 0.5, 0.5, 0.0 D5: 1,000, 100, 10,000, 100,000, 0.09, 0.91, 0.5, 0.29, 0.89 D6: 1,000, 10, 100,000, 100,000, 0.01, 0.99, 0.5, 0.10, 0.99" },
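The imbalance ratio just defined is a one-liner; checking it against the D5 row of the table above:

```python
def imbalance_ratio(sup_a, sup_b, sup_ab):
    """IR(A,B) = |sup(A) - sup(B)| / (sup(A) + sup(B) - sup(A u B))."""
    return abs(sup_a - sup_b) / (sup_a + sup_b - sup_ab)

# D5: sup(m) = 1,100, sup(c) = 11,000, sup(mc) = 1,000  ->  IR = 0.89
print(round(imbalance_ratio(1100, 11000, 1000), 2))   # 0.89
# Together with Kulc = 0.5 this flags D5 as imbalanced but neutral.
```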
{ "page_index": 380, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_059.png", "page_index": 380, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:13+07:00" }, "raw_text": "Chapter 5: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods Basic Concepts Frequent Itemset Mining Methods Which Patterns Are Interesting? - Pattern Evaluation Methods Summary 59" }, { "page_index": 381, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_060.png", "page_index": 381, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:16+07:00" }, "raw_text": "Summary Basic concepts: association rules, support-confidence framework, closed and max-patterns Scalable frequent pattern mining methods Apriori (candidate generation & test) Projection-based (FPgrowth, CLOSET+, ...) Vertical format approach (ECLAT, CHARM, ...) Which patterns are interesting? Pattern evaluation methods 60" }, { "page_index": 382, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_061.png", "page_index": 382, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:21+07:00" }, "raw_text": "Ref: Basic Concepts of Frequent Pattern Mining (Association Rules) R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD'93 (Max-pattern) R. J. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98 (Closed-pattern) N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering frequent closed itemsets for association rules. ICDT'99 (Sequential pattern) R. Agrawal and R. Srikant. Mining sequential patterns. ICDE'95 61" }, { "page_index": 383, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_062.png", "page_index": 383, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:26+07:00" }, "raw_text": "Ref: Apriori and Its Improvements R. Agrawal and R. Srikant. Fast algorithms for mining association rules. VLDB'94 H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for discovering association rules. KDD'94 A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining association rules in large databases. VLDB'95 J. S. Park, M. S. Chen, and P. S. Yu. An effective hash-based algorithm for mining association rules. SIGMOD'95 H. Toivonen.
Sampling large databases for association rules. VLDB'96 S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket analysis. SIGMOD'97 S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD'98 62" }, { "page_index": 384, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_063.png", "page_index": 384, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:32+07:00" }, "raw_text": "Ref: Depth-First, Projection-Based FP Mining R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for generation of frequent itemsets. J. Parallel and Distributed Computing, 2002 G. Grahne and J. Zhu. Efficiently Using Prefix-Trees in Mining Frequent Itemsets. Proc. FIMI'03 B. Goethals and M. Zaki. An introduction to workshop on frequent itemset mining implementations. Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003 J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD'00 J. Liu, Y. Pan, K. Wang, and J. Han. Mining Frequent Item Sets by Opportunistic Projection. KDD'02 J. Han, J. Wang, Y. Lu, and P. Tzvetkov. Mining Top-K Frequent Closed Patterns without Minimum Support. ICDM'02 J. Wang, J. Han, and J. Pei. CLOSET+: Searching for the Best Strategies for Mining Frequent Closed Itemsets. KDD'03 63" }, { "page_index": 385, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_064.png", "page_index": 385, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:37+07:00" }, "raw_text": "Ref: Vertical Format and Row Enumeration Methods M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithms for discovery of association rules. DAMI'97 M. J. Zaki and C. J. Hsiao. CHARM: An Efficient Algorithm for Closed Itemset Mining. SDM'02 C. Bucila, J. Gehrke, D. Kifer, and W. White. DualMiner: A Dual-Pruning Algorithm for Itemsets with Constraints. KDD'02 F. Pan, G. Cong, A. K. H. Tung, J. Yang, and M. Zaki. CARPENTER: Finding Closed Patterns in Long Biological Datasets. KDD'03 H. Liu, J. Han, D. Xin, and Z. Shao. Mining Interesting Patterns from Very High Dimensional Data: A Top-Down Row Enumeration Approach. SDM'06 64" }, { "page_index": 386, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_5/slide_065.png", "page_index": 386, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:42+07:00" }, "raw_text": "Ref: Mining Correlations and Interesting Rules S. Brin, R. Motwani, and C. Silverstein. Beyond market basket: Generalizing association rules to correlations. SIGMOD'97 M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A. I. Verkamo. Finding interesting rules from large sets of discovered association rules. CIKM'94 R. J.
Hilderman and H. J. Hamilton. Knowledge Discovery and Measures of Interest. Kluwer Academic, 2001 C. Silverstein, S. Brin, R. Motwani, and J. Ullman. Scalable techniques for mining causal structures. VLDB'98 P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the Right Interestingness Measure for Association Patterns. KDD'02 E. Omiecinski. Alternative Interest Measures for Mining Associations. TKDE'03 T. Wu, Y. Chen, and J. Han. "Re-Examination of Interestingness Measures in Pattern Mining: A Unified Framework". Data Mining and Knowledge Discovery, 21(3):371-397, 2010 65" }, { "page_index": 387, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_001.png", "page_index": 387, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:47+07:00" }, "raw_text": "Data Mining: Concepts and Techniques (3rd ed.) - Chapter 7 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University @2010 Han, Kamber & Pei. All rights reserved. 1" }, { "page_index": 388, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_002.png", "page_index": 388, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:52+07:00" }, "raw_text": "[OCR residue; slide text illegible]" }, { "page_index": 389, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_003.png", "page_index": 389, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:40:57+07:00" }, "raw_text": "Chapter 7: Advanced Frequent Pattern Mining Pattern Mining: A Road Map Pattern Mining in Multi-Level, Multi-Dimensional Space Constraint-Based Frequent Pattern Mining Mining High-Dimensional Data and Colossal Patterns Mining Compressed or Approximate Patterns Pattern Exploration and Application Summary 3" }, { "page_index": 390, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_004.png", "page_index": 390, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:41:08+07:00" }, "raw_text": "Pattern Mining: A Road Map Kinds of patterns and rules Basic patterns: frequent pattern, association rule, closed/max pattern, generator Multilevel & multidimensional patterns and rules: multilevel pattern (uniform, varied, or itemset-based support), multidimensional pattern (incl. high-dimensional pattern), continuous data (discretization-based or statistical) Extended patterns: approximate pattern, uncertain pattern, compressed pattern, rare pattern/negative pattern, high-dimensional and colossal patterns Basic mining methods: candidate generation (Apriori, partitioning, sampling, ...), pattern growth (FPgrowth, HMine, FPMax, Closet+, ...), vertical format (Eclat, CHARM, ...)
Mining interesting patterns: interestingness (subjective vs. objective), constraint-based mining, correlation rules, exception rules Distributed, parallel & incremental mining: distributed/parallel mining, incremental mining, stream patterns Extended data types: sequential and time-series patterns, structural (e.g., tree, lattice, graph) patterns, spatial (e.g., co-location) patterns, temporal (evolutionary, periodic) patterns, image, video and multimedia patterns, network patterns Extensions & applications: pattern-based classification, pattern-based clustering, pattern-based semantic annotation, collaborative filtering, privacy-preserving mining 4" }, { "page_index": 391, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_005.png", "page_index": 391, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:41:12+07:00" }, "raw_text": "Chapter 7: Advanced Frequent Pattern Mining Pattern Mining: A Road Map Pattern Mining in Multi-Level, Multi-Dimensional Space Mining Multi-Level Association Mining Multi-Dimensional Association Mining Quantitative Association Rules Mining Rare Patterns and Negative Patterns Constraint-Based Frequent Pattern Mining Mining High-Dimensional Data and Colossal Patterns Mining Compressed or Approximate Patterns Pattern Exploration and Application Summary 5" }, { "page_index": 392, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_006.png", "page_index": 392, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:41:18+07:00" }, "raw_text": "Mining Multiple-Level Association Rules Items often form hierarchies Flexible support settings Items at the lower level are expected to have lower support Exploration of shared multi-level mining (Agrawal & Srikant @VLDB'95, Han & Fu @VLDB'95) Uniform support vs. reduced support example: Level 1 (Milk, support = 10%): uniform min_sup = 5%, reduced min_sup = 5% Level 2 (2% Milk, support = 6%; Skim Milk, support = 4%): uniform min_sup = 5%, reduced min_sup = 3% 6" }, { "page_index": 393, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_007.png", "page_index": 393, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:41:24+07:00" }, "raw_text": "Multi-level Association: Flexible Support and Redundancy Filtering Flexible min-support thresholds: Some items are more valuable but less frequent Use non-uniform, group-based min-support E.g., {diamond, watch, camera}: 0.05%; {bread, milk}: 5%; ...
Redundancy filtering: Some rules may be redundant due to "ancestor" relationships between items milk => wheat bread [support = 8%, confidence = 70%] 2% milk => wheat bread [support = 2%, confidence = 72%] The first rule is an ancestor of the second rule A rule is redundant if its support is close to the "expected" value based on the rule's ancestor 7" },
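The redundancy test in the slide above (compare a rule's support to the value expected from its ancestor) can be made concrete as below. The tolerance and the assumption that 2% milk accounts for a quarter of milk sales are mine, purely for illustration:

```python
def is_redundant(rule_sup, ancestor_sup, item_share, tol=0.25):
    """A multi-level rule is redundant if its support is close to the value
    'expected' from its ancestor: the ancestor's support scaled by the child
    item's share of the parent item's occurrences. tol is a hypothetical
    tolerance on the relative deviation."""
    expected = ancestor_sup * item_share
    return abs(rule_sup - expected) <= tol * expected

# Slide example: milk => wheat bread [sup 8%]; 2% milk => wheat bread [sup 2%].
# If 2% milk is a quarter of milk sales, expected support is 8% * 1/4 = 2%.
print(is_redundant(rule_sup=0.02, ancestor_sup=0.08, item_share=0.25))  # True
```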
{ "page_index": 394, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_008.png", "page_index": 394, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:41:29+07:00" }, "raw_text": "Chapter 7: Advanced Frequent Pattern Mining Pattern Mining: A Road Map Pattern Mining in Multi-Level, Multi-Dimensional Space Mining Multi-Level Association Mining Multi-Dimensional Association Mining Quantitative Association Rules Mining Rare Patterns and Negative Patterns Constraint-Based Frequent Pattern Mining Mining High-Dimensional Data and Colossal Patterns Mining Compressed or Approximate Patterns Pattern Exploration and Application Summary 8" }, { "page_index": 395, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_009.png", "page_index": 395, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:41:35+07:00" }, "raw_text": "Mining Multi-Dimensional Association Single-dimensional rules: buys(X, "milk") => buys(X, "bread") Multi-dimensional rules: 2 or more dimensions or predicates Inter-dimension assoc. rules (no repeated predicates): age(X, "19-25") AND occupation(X, "student") => buys(X, "coke") Hybrid-dimension assoc. rules (repeated predicates): age(X, "19-25") AND buys(X, "popcorn") => buys(X, "coke") Categorical attributes: finite number of possible values, no ordering among values - data cube approach Quantitative attributes: numeric, implicit ordering among values - discretization, clustering, and gradient approaches 9" }, { "page_index": 396, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_010.png", "page_index": 396, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:41:40+07:00" }, "raw_text": "Chapter 7: Advanced Frequent Pattern Mining Pattern Mining: A Road Map Pattern Mining in Multi-Level, Multi-Dimensional Space Mining Multi-Level Association Mining Multi-Dimensional Association Mining Quantitative Association Rules Mining Rare Patterns and Negative Patterns Constraint-Based Frequent Pattern Mining Mining High-Dimensional Data and Colossal Patterns Mining Compressed or Approximate Patterns Pattern Exploration and Application Summary 10" }, { "page_index": 397, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_011.png", "page_index": 397, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:41:46+07:00" }, "raw_text": "Mining Quantitative Associations Techniques can be categorized by how numerical attributes, such as age or salary, are treated 1. Static discretization based on predefined concept hierarchies (data cube methods) 2. Dynamic discretization based on data distribution (quantitative rules, e.g., Agrawal & Srikant @SIGMOD'96) 3. Clustering: Distance-based association (e.g., Yang & Miller @SIGMOD'97) One-dimensional clustering, then association 4. Deviation (such as Aumann and Lindell @KDD'99): Sex = female => Wage: mean = $7/hr (overall mean = $9) 11" }, { "page_index": 398, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_012.png", "page_index": 398, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:41:52+07:00" }, "raw_text": "Static Discretization of Quantitative Attributes Discretized prior to mining using concept hierarchy.
Numeric values are replaced by ranges In a relational database, finding all frequent k-predicate sets will require k or k+1 table scans Data cube is well suited for mining The cells of an n-dimensional cuboid correspond to the predicate sets: (age), (income), (buys); (age, income), (age, buys), (income, buys); (age, income, buys) Mining from data cubes can be much faster 12" }, { "page_index": 399, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_013.png", "page_index": 399, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:41:59+07:00" }, "raw_text": "Quantitative Association Rules Based on Statistical Inference Theory [Aumann and Lindell @DMKD'03] Finding extraordinary and therefore interesting phenomena, e.g., (Sex = female) => Wage: mean = $7/hr (overall mean = $9) LHS: a subset of the population RHS: an extraordinary behavior of this subset The rule is accepted only if a statistical test (e.g., z-test) confirms the inference with high confidence Subrule: highlights the extraordinary behavior of a subset of the population of the super rule E.g., (Sex = female) AND (South = yes) => mean wage = $6.3/hr Two forms of rules Categorical => quantitative rules, or quantitative => quantitative rules E.g., Education in [14-18] (yrs) => mean wage = $11.64/hr Open problem: Efficient methods for LHS containing two or more quantitative attributes 13" },
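The z-test acceptance step in the Aumann-Lindell approach above can be sketched as follows. The sample size, population standard deviation, and critical value are hypothetical numbers chosen around the slide's $7 vs. $9 example:

```python
from math import sqrt

def z_test_rule(subset_mean, subset_n, pop_mean, pop_sd, z_crit=2.58):
    """Accept a rule like 'Sex = female => mean wage = $7/hr' only if the
    subset mean differs significantly from the overall mean (two-sided z-test;
    z_crit ~ 2.58 for 99% confidence). Assumes pop_sd is known or estimated."""
    z = (subset_mean - pop_mean) / (pop_sd / sqrt(subset_n))
    return abs(z) >= z_crit, z

# Hypothetical numbers around the slide's example (overall mean wage $9/hr):
accept, z = z_test_rule(subset_mean=7.0, subset_n=400, pop_mean=9.0, pop_sd=4.0)
print(accept, round(z, 1))   # True, -10.0
```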
"doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_016.png", "page_index": 402, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:16+07:00" }, "raw_text": "Defining Negative Correlated Patterns (I) Definition 1 (support-based) If itemsets X and Y are both frequent but rarely occur together, i.e., sup(X U Y) < sup (X) * sup(Y) Then X and Y are negatively correlated Problem: A store sold two needle 100 packages A and B, only one transaction containing both A and B. When there are in total 200 transactions, we have s(A U B) = 0.005, s(A) * s(B) = 0.25,s(A U B) < s(A) * s(B When there are 1o5 transactions, we have s(A U B) = 1/105,s(A) * s(B) = 1/103*1/103,s(A U B) > s(A) ) * s(B) Where is the problem? -Null transactions, i.e., the support-based definition is not null-invariant! 16" }, { "page_index": 403, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_017.png", "page_index": 403, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:22+07:00" }, "raw_text": "Defining Negative Correlated Patterns (II) Definition 2 (negative itemset-based) X is a negative itemset if (1) X = A U B, where B is a set of positive items, and A is a set of negative items, IAI 1, and (2) s(X) Itemsets X is negatively correlated, if k 1 s(x) < s(xi),where xi E X, and s(xi) is the support of xi i=1 This definition suffers a similar null-invariant problem Definition 3 (Kulzynski measure-based) If itemsets X and Y are frequent, but (P(XY) + P(YX))/2 < e, where e is a negative pattern threshold, then X and Y are negatively correlated. Ex. For the same needle package problem, when no matter there are 200 or 105 transactions, if e = 0.01, we have (P(AIB) + P(BA))/2 = (0.01 + 0.01)/2 < e 17" }, { "page_index": 404, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_018.png", "page_index": 404, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:26+07:00" }, "raw_text": "Chapter 7 : Advanced Frequent Pattern Mining Pattern Mining: A Road Map Pattern Mining in Multi-Level, Multi-Dimensional Space Constraint-Based Frequent Pattern Mining Mining High-Dimensional Data and Colossal Patterns Mining Compressed or Approximate Patterns Pattern Exploration and Application Summary 18" }, { "page_index": 405, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_019.png", "page_index": 405, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:30+07:00" }, "raw_text": "Constraint-based (Query-Directed) 1 Mining Finding all the patterns in a database autonomously? - unrealistic! The patterns could be too many but not focused! 
Data mining should be an interactive process User directs what to be mined using a data mining query language (or a graphical user interface) Constraint-based mining User flexibility: provides constraints on what to be mined Optimization: explores such constraints for efficient mining - constraint-based mining: constraint-pushing, similar to push selection first in DB query processing Note: still find all the answers satisfying constraints, not finding some answers in \"heuristic search\" 19" }, { "page_index": 406, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_020.png", "page_index": 406, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:35+07:00" }, "raw_text": "Constraints in Data Mining Knowledge type constraint: classification, association, etc. Data constraint using SQL-like queries find product pairs sold together in stores in Chicago this year Dimension/level constraint in relevance to region, price, brand, customer category Rule (or pattern) constraint small sales (price < $10) triggers big sales (sum > $200) Interestingness constraint strong rules: min_support 3%, min_confidence 60% 20" }, { "page_index": 407, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_021.png", "page_index": 407, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:40+07:00" }, "raw_text": "Meta-Rule Guided Mining Meta-rule can be in the rule form with partially instantiated predicates and constants P1(X,Y) Pz(X,W) => buys(X,\"iPad\") The resulting rule derived can be age(X, \"15-25') profession(X, \"student\") => buys(X, \"iPad') In general, it can be in the form of P=> Q1 Qz P1 P A A Q Method to find meta-rules Find frequent (I+r) predicates (based on min-support threshold) Push constants deeply when possible into the mining process (see the remaining discussions on constraint-push techniques) Use confidence, correlation, and other measures when possible 21" }, { "page_index": 408, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_022.png", "page_index": 408, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:46+07:00" }, "raw_text": "Constraint-Based Frequent Pattern Mining Pattern space pruning constraints Anti-monotonic: If constraint c is violated, its further mining can be terminated Monotonic: If c is satisfied, no need to check c again Succinct: c must be satisfied, so one can start with the data sets satisfying c Convertible: c is not monotonic nor anti-monotonic, but it can be converted into it if items in the transaction can be properly ordered Data space pruning constraint Data succinct: Data space can be pruned at the initial pattern mining process Data anti-monotonic: If a transaction t does not satisfy c, t can be pruned from its further mining 22" }, { "page_index": 409, "chapter_num": 6, "source_file": 
"/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_023.png", "page_index": 409, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:53+07:00" }, "raw_text": "Pattern Space Pruning with Anti-Monotonicity Constraint. TDB (min_sup=2) A constraint C is anti-monotone if the super TID Transaction pattern satisfies C, all of its sub-patterns do so 10 a, b, c, d, f too 20 b, c, d, f, g, h In other words, anti-monotonicity: If an itemset 30 a, c, d, e, f S violates the constraint, so does any of its 40 c, e, f, g superset Item Profit Ex. 1. sum(S.price) v is anti-monotone 40 a Ex. 2. range(S.profit) 15 is anti-monotone 6 0 Itemset ab violates C -20 c So does every superset of ab 0 10 Ex. 3. sum(S.Price) > v is not anti-monotone -30 e Ex. 4. support count is anti-monotone: core f 30 property used in Apriori 20 g h -10 23" }, { "page_index": 410, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_024.png", "page_index": 410, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:59+07:00" }, "raw_text": "Pattern Space Pruning with Monotonicity Constraints TDB (min sup=2) A constraint C is monotone if the pattern TID Transaction satisfies C, we do not need to check C in 10 a, b, c, d, f subsequent mining 20 b, c, d, f, g, h 30 a, c, d, e, f Alternatively, monotonicity: If an itemset S 40 c, e, f, g satisfies the constraint, so does any of its superset Item Profit Ex. 1. sum(S.Price) > v is monotone 40 a 6 0 Ex. 2. min(S.Price) v is monotone -20 C Ex. 3. C: range(S.profit) 15 0 10 Itemset ab satisfies C -30 e So does every superset of ab f 30 20 g h -10 24" }, { "page_index": 411, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_025.png", "page_index": 411, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:43:06+07:00" }, "raw_text": "Data Space Pruning with Data Anti-monotonicity TDB (min_sup=2) A constraint c is data anti-monotone if for a pattern TID Transaction p cannot satisfy a transaction t under c, p's 10 a, b, c, d, f, h superset cannot satisfy t under c either 20 b, c,d,f,g, h The key for data anti-monotone is recursive data 30 b, c, d, f, g reduction 40 c, e, f, g Ex. 1. sum(S.Price) > v is data anti-monotone Item Profit Ex. 2. min(S.Price) v is data anti-monotone 40 a 0 Ex. 3. 
C: range(S.profit) 25 is data anti- -20 C monotone 0 -15 Itemset {b, c}'s projected DB: -30 e T10':{d, f, h}, T20': {d, f, g, h}, T30':{d, f, g} f -10 since C cannot satisfy T10', T10' can be pruned 20 g h -5 25" }, { "page_index": 412, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_026.png", "page_index": 412, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:43:11+07:00" }, "raw_text": "Pattern Space Pt runing with Succinctness Succinctness : Given A, the set of items satisfying a succinctness t C, then any set Ssatisfying Cis based on constraint A , i.e., S contains a subset belonging to A Idea: Without looking at the transaction database, whether an itemset S satisfies constraint C can be determined based on the selection of items min(S.Price) v is succinct sum(S.Price) > v is not succinct Optimization: If Cis succinct, Cis pre-counting pushable 26" }, { "page_index": 413, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_027.png", "page_index": 413, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:43:21+07:00" }, "raw_text": "Naive Algorithm: Apriori + Constraint Database D itemset sup. itemset sup. L1 C1 TID Items {1} 2 {1} 2 100 1 3 4 {2} 3 {2} 3 Scan D 200 2 3 5 {3} 3 {3} 3 300 123 5 {4} 1 {5} 3 400 2 5 {5} 3 C2 itemset itemset sup C2 {1 2} L2 itemset sup {1 2} 1 Scan D f1 3} {1 3} 2 {1 3} 2 t1 5} {1 5} 2 1 72 {2 3} {2 3} 2 {2 5} 3 {2 5} [2 5} 3 {35} 2 {3 5} {3 5} 2 C3 L3 itemset itemset sup Scan D Constraint: {2 3 5} {2 3 5} Sum{S.price} < 5 27" }, { "page_index": 414, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_028.png", "page_index": 414, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:43:30+07:00" }, "raw_text": "Constrained Apriori : Push a Succinct Constraint Deep Database D itemset sup. itemsetsup. 
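The null-invariance issue above becomes concrete with a few lines of arithmetic. This sketch (illustrative, using the needle-package numbers from the slide) contrasts the support-based test, which flips its verdict as null transactions are added, with the Kulczynski measure, which does not.

```python
def support_based_negcorr(n_total, n_a=100, n_b=100, n_ab=1):
    """Support-based test: is s(A u B) < s(A) * s(B)?"""
    return n_ab / n_total < (n_a / n_total) * (n_b / n_total)

def kulczynski(n_a, n_b, n_ab):
    """Kulczynski measure (P(A|B) + P(B|A)) / 2; null transactions cancel out."""
    return (n_ab / n_b + n_ab / n_a) / 2

for n_total in (200, 10**5):
    print(n_total, "support-based says negatively correlated:",
          support_based_negcorr(n_total))
# Flips from True to False as null transactions grow, although A and B
# themselves are unchanged - the definition is not null-invariant.

print("Kulczynski:", kulczynski(100, 100, 1))  # 0.01 regardless of n_total
```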
{ "page_index": 408, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_022.png", "page_index": 408, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:46+07:00" }, "raw_text": "Constraint-Based Frequent Pattern Mining Pattern space pruning constraints - Anti-monotonic: if constraint c is violated, further mining can be terminated. Monotonic: if c is satisfied, no need to check c again. Succinct: c must be satisfied, so one can start with the data sets satisfying c. Convertible: c is neither monotonic nor anti-monotonic, but it can be converted into one if items in the transaction can be properly ordered. Data space pruning constraints - Data succinct: the data space can be pruned at the initial pattern mining process. Data anti-monotonic: if a transaction t does not satisfy c, t can be pruned from further mining. 22" },
{ "page_index": 409, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_023.png", "page_index": 409, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:53+07:00" }, "raw_text": "Pattern Space Pruning with Anti-Monotonicity Constraints A constraint C is anti-monotone if, whenever a super-pattern satisfies C, all of its sub-patterns do so too. In other words: if an itemset S violates the constraint, so does any of its supersets. Ex. 1. sum(S.price) <= v is anti-monotone. Ex. 2. range(S.profit) <= 15 is anti-monotone: itemset ab violates C, so does every superset of ab. Ex. 3. sum(S.price) > v is not anti-monotone. Ex. 4. support count is anti-monotone: the core property used in Apriori. TDB (min_sup=2): TID 10: a, b, c, d, f; TID 20: b, c, d, f, g, h; TID 30: a, c, d, e, f; TID 40: c, e, f, g. Item profits: a=40, b=0, c=-20, d=10, e=-30, f=30, g=20, h=-10. 23" },
{ "page_index": 410, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_024.png", "page_index": 410, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:42:59+07:00" }, "raw_text": "Pattern Space Pruning with Monotonicity Constraints A constraint C is monotone if, whenever a pattern satisfies C, we do not need to check C in subsequent mining. Alternatively: if an itemset S satisfies the constraint, so does any of its supersets. Ex. 1. sum(S.price) > v is monotone. Ex. 2. min(S.price) <= v is monotone. Ex. 3. C: range(S.profit) >= 15: itemset ab satisfies C, so does every superset of ab. TDB (min_sup=2): TID 10: a, b, c, d, f; TID 20: b, c, d, f, g, h; TID 30: a, c, d, e, f; TID 40: c, e, f, g. Item profits: a=40, b=0, c=-20, d=10, e=-30, f=30, g=20, h=-10. 24" },
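These properties translate directly into pruning tests. A minimal sketch (using the slide's profit table; the function names are mine) checks the anti-monotone constraint range(S.profit) <= 15, so a miner can stop extending any itemset that already violates it.

```python
# Item profits from the slide's running example.
PROFIT = {"a": 40, "b": 0, "c": -20, "d": 10, "e": -30, "f": 30, "g": 20, "h": -10}

def profit_range(itemset):
    vals = [PROFIT[i] for i in itemset]
    return max(vals) - min(vals)

def violates_anti_monotone(itemset, bound=15):
    """range(S.profit) <= bound is anti-monotone: once an itemset violates it,
    every superset does too, so the search below it can be cut off."""
    return profit_range(itemset) > bound

print(violates_anti_monotone({"a", "b"}))  # True: range = 40, prune all supersets
print(violates_anti_monotone({"f", "g"}))  # False: range = 10, keep extending
```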
{ "page_index": 411, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_025.png", "page_index": 411, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:43:06+07:00" }, "raw_text": "Data Space Pruning with Data Anti-monotonicity A constraint c is data anti-monotone if, whenever a pattern p cannot satisfy a transaction t under c, p's supersets cannot satisfy t under c either. The key to data anti-monotonicity is recursive data reduction. Ex. 1. sum(S.price) > v is data anti-monotone. Ex. 2. min(S.price) <= v is data anti-monotone. Ex. 3. C: range(S.profit) >= 25 is data anti-monotone: itemset {b, c}'s projected DB is T10': {d, f, h}, T20': {d, f, g, h}, T30': {d, f, g}; since C cannot be satisfied by T10', T10' can be pruned. TDB (min_sup=2): TID 10: a, b, c, d, f, h; TID 20: b, c, d, f, g, h; TID 30: b, c, d, f, g; TID 40: c, e, f, g. Item profits: a=40, b=0, c=-20, d=-15, e=-30, f=-10, g=20, h=-5. 25" },
{ "page_index": 412, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_026.png", "page_index": 412, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:43:11+07:00" }, "raw_text": "Pattern Space Pruning with Succinctness Succinctness: given A1, the set of items satisfying a succinct constraint C, any set S satisfying C is based on A1, i.e., S contains a subset belonging to A1. Idea: without looking at the transaction database, whether an itemset S satisfies constraint C can be determined based on the selection of items. min(S.price) <= v is succinct; sum(S.price) >= v is not succinct. Optimization: if C is succinct, C is pre-counting pushable. 26" },
{ "page_index": 413, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_027.png", "page_index": 413, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:43:21+07:00" }, "raw_text": "Naive Algorithm: Apriori + Constraint Database D: TID 100: 1 3 4; TID 200: 2 3 5; TID 300: 1 2 3 5; TID 400: 2 5. Scan D: C1 = {1}:2, {2}:3, {3}:3, {4}:1, {5}:3; L1 = {1}, {2}, {3}, {5}. C2 = {1 2}:1, {1 3}:2, {1 5}:1, {2 3}:2, {2 5}:3, {3 5}:2; L2 = {1 3}, {2 3}, {2 5}, {3 5}. C3 = {2 3 5}; L3 = {2 3 5}. Constraint: sum(S.price) < 5. 27" },
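The naive approach - run Apriori, then filter by the constraint - is easy to reproduce. Below is a compact, hedged sketch of Apriori over the slide's four-transaction database, with sum(S.price) < 5 applied only as a post-filter (item prices are assumed to equal the item IDs, the usual reading of this example).

```python
# Slide's toy database; assume price(i) = i for the sum constraint.
DB = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
MIN_SUP = 2

def support(itemset):
    return sum(1 for t in DB if itemset <= t)

def apriori(db, min_sup):
    items = sorted(set().union(*db))
    level = [frozenset([i]) for i in items if support(frozenset([i])) >= min_sup]
    frequent = list(level)
    while level:
        # Generate next-size candidates by unioning pairs from this level.
        candidates = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
        level = [c for c in candidates if support(c) >= min_sup]
        frequent += level
    return frequent

# Naive: mine everything, then filter by the constraint sum(S.price) < 5.
answers = [set(s) for s in apriori(DB, MIN_SUP) if sum(s) < 5]
print(answers)  # {1}, {2}, {3}, {1, 3} pass; {5}, {2, 3}, {2, 5}, {3, 5}, {2, 3, 5} fail
```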
{ "page_index": 414, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_028.png", "page_index": 414, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:43:30+07:00" }, "raw_text": "Constrained Apriori: Push a Succinct Constraint Deep Same database D and Apriori trace as before, but with the succinct constraint min(S.price) <= 1 pushed into candidate generation: only candidates containing an item of price <= 1 are kept ({1 2}:1, {1 3}:2, {1 5}:1); the remaining pairs ({2 3}, {2 5}, {3 5}) are not immediately used. Constraint: min(S.price) <= 1. 28" },
{ "page_index": 415, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_029.png", "page_index": 415, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:43:36+07:00" }, "raw_text": "Constrained FP-Growth: Push a Succinct Constraint Deep Database D (TID 100: 1 3 4; TID 200: 2 3 5; TID 300: 1 2 3 5; TID 400: 2 5). Remove infrequent length-1 items (TID 100: 1 3; TID 200: 2 3 5; TID 300: 1 2 3 5; TID 400: 2 5), then build the FP-tree. 1-projected DB: TID 100: 3 4; TID 300: 2 3 5. No need to project on 2, 3, or 5. Constraint: min(S.price) <= 1. [Fragments of slides 030-031 lost in extraction: \"min 25 bcdfg: 2 min_sup >= 2\"] 31" },
{ "page_index": 418, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_032.png", "page_index": 418, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:43:54+07:00" }, "raw_text": "Convertible Constraints: Ordering Data in Transactions Convert tough constraints into anti-monotone or monotone constraints by properly ordering items. Examine C: avg(S.profit) >= 25. Order items in value-descending order. If an itemset afb violates C, so do afbh, afb*: it becomes anti-monotone! TDB (min_sup=2): TID 10: a, b, c, d, f; TID 20: b, c, d, f, g, h; TID 30: a, c, d, e, f; TID 40: c, e, f, g. Item profits: a=40, b=0, c=-20, d=10, e=-30, f=30, g=20, h=-10. 32" },
{ "page_index": 419, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_033.png", "page_index": 419, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:00+07:00" }, "raw_text": "Strongly Convertible Constraints avg(X) >= 25 is convertible anti-monotone w.r.t. item-value descending order R: if an itemset af violates a constraint C, so does every itemset with af as prefix, such as afd. avg(X) >= 25 is convertible monotone w.r.t. item-value ascending order R^-1: if an itemset d satisfies a constraint C, so do itemsets df and dfa, which have d as a prefix. Thus, avg(X) >= 25 is strongly convertible. Item profits: a=40, b=0, c=-20, d=10, e=-30, f=30, g=20, h=-10. 33" },
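A short sketch makes the convertibility argument executable. Under the value-descending order implied by the slide's profit table, the running average of a prefix can only drop as items are appended, so avg(S.profit) >= 25 can be checked like an anti-monotone constraint on prefixes (the helper names are mine).

```python
PROFIT = {"a": 40, "f": 30, "g": 20, "d": 10, "b": 0, "h": -10, "c": -20, "e": -30}
ORDER = sorted(PROFIT, key=PROFIT.get, reverse=True)  # a, f, g, d, b, h, c, e

def avg_profit(prefix):
    return sum(PROFIT[i] for i in prefix) / len(prefix)

def prefix_averages(itemset):
    """Averages of successive prefixes in value-descending order. Appending
    only smaller values means the average never increases, so the first
    prefix below 25 lets us prune the whole subtree."""
    ordered = sorted(itemset, key=ORDER.index)
    return [(ordered[:k], avg_profit(ordered[:k])) for k in range(1, len(ordered) + 1)]

for prefix, avg in prefix_averages({"a", "f", "b", "h"}):
    print(prefix, round(avg, 2))  # 40.0, 35.0, 23.33 (prune here), 15.0
```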
{ "page_index": 420, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_034.png", "page_index": 420, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:06+07:00" }, "raw_text": "Can Apriori Handle Convertible Constraints? A convertible constraint that is neither monotone, anti-monotone, nor succinct cannot be pushed deep into an Apriori mining algorithm: within the level-wise framework, no direct pruning based on the constraint can be made. Itemset df violates constraint C: avg(X) >= 25, but since adf satisfies C, Apriori needs df to assemble adf, so df cannot be pruned. But the constraint can be pushed into the frequent-pattern growth framework! Item values: a=40, b=0, c=-20, d=10, e=-30, f=30, g=20, h=-10. 34" },
{ "page_index": 421, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_035.png", "page_index": 421, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:13+07:00" }, "raw_text": "Pattern Space Pruning with Convertible Constraints C: avg(X) >= 25, min_sup = 2. List items in every transaction in value-descending order R; C is convertible anti-monotone w.r.t. R. Scan the TDB once to remove infrequent items: item h is dropped; itemsets a and f are good. Projection-based mining: impose an appropriate order on item projection; many tough constraints can be converted into (anti-)monotone ones. TDB (min_sup=2): TID 10: a, f, d, b, c; TID 20: f, g, d, b, c; TID 30: a, f, d, c, e; TID 40: f, g, h, c, e. Item values: a=40, f=30, g=20, d=10, b=0, h=-10, c=-20, e=-30. 35" },
{ "page_index": 422, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_036.png", "page_index": 422, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:17+07:00" }, "raw_text": "Handling Multiple Constraints Different constraints may require different or even conflicting item-orderings. If there exists an order R such that both C1 and C2 are convertible w.r.t. R, then there is no conflict between the two convertible constraints. If there is a conflict on the order of items: try to satisfy one constraint first, then use its order to mine frequent itemsets in the corresponding projected database for the other constraint. 36" },
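The df/adf example is worth computing once. This tiny check (values from the slide) shows why level-wise pruning is unsafe for a convertible constraint: a violating subset can still be needed to assemble a satisfying superset.

```python
VALUE = {"a": 40, "d": 10, "f": 30}

def avg(s):
    return sum(VALUE[i] for i in s) / len(s)

df, adf = {"d", "f"}, {"a", "d", "f"}
print(avg(df))   # 20.0   -> violates avg(X) >= 25
print(avg(adf))  # 26.67  -> satisfies it; Apriori builds adf from df, so df
                 # cannot be pruned level-wise. Pattern growth with a
                 # value-descending item order avoids this problem.
```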
{ "page_index": 423, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_037.png", "page_index": 423, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:23+07:00" }, "raw_text": "What Constraints Are Convertible? (Constraint: convertible anti-monotone, convertible monotone, strongly convertible) avg(S) <=, >= v: Yes, Yes, Yes. median(S) <=, >= v: Yes, Yes, Yes. sum(S) <= v (items could be of any value, v >= 0): Yes, No, No. sum(S) <= v (items could be of any value, v <= 0): No, Yes, No. sum(S) >= v (items could be of any value, v >= 0): No, Yes, No. sum(S) >= v (items could be of any value, v <= 0): Yes, No, No. 37" },
{ "page_index": 424, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_038.png", "page_index": 424, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:30+07:00" }, "raw_text": "Constraint-Based Mining: A General Picture (Constraint: anti-monotone, monotone, succinct) v ∈ S: no, yes, yes. S ⊇ V: no, yes, yes. S ⊆ V: yes, no, yes. min(S) <= v: no, yes, yes. min(S) >= v: yes, no, yes. max(S) <= v: yes, no, yes. max(S) >= v: no, yes, yes. count(S) <= v: yes, no, weakly. count(S) >= v: no, yes, weakly. sum(S) <= v (a ∈ S, a >= 0): yes, no, no. sum(S) >= v (a ∈ S, a >= 0): no, yes, no. range(S) <= v: yes, no, no. range(S) >= v: no, yes, no. avg(S) θ v, θ ∈ {=, <=, >=}: convertible, convertible, no. support(S) >= ξ: yes, no, no. support(S) <= ξ: no, yes, no. 38" },
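Entries in these tables can be sanity-checked by brute force on a small item universe. The sketch below (my own test harness, not from the slides) enumerates subset/superset pairs and verifies, for example, that sum(S) <= v is anti-monotone and min(S) <= v is monotone when item values are non-negative.

```python
from itertools import combinations

ITEMS = [1, 2, 3, 5, 8]  # small non-negative universe; value(i) = i

def subsets(universe):
    return [frozenset(c) for r in range(1, len(universe) + 1)
            for c in combinations(universe, r)]

def is_anti_monotone(pred):
    # Anti-monotone: if S satisfies pred, every non-empty subset of S must too.
    return all(pred(t) for s in subsets(ITEMS) if pred(s) for t in subsets(s))

def is_monotone(pred):
    # Monotone: if S satisfies pred, every superset within the universe must too.
    return all(pred(t) for s in subsets(ITEMS) if pred(s)
               for t in subsets(ITEMS) if s <= t)

print(is_anti_monotone(lambda s: sum(s) <= 9))  # True for non-negative values
print(is_monotone(lambda s: min(s) <= 2))       # True
print(is_anti_monotone(lambda s: sum(s) >= 9))  # False, as the table says
```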
{ "page_index": 425, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_039.png", "page_index": 425, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:34+07:00" }, "raw_text": "Chapter 7 : Advanced Frequent Pattern Mining Pattern Mining: A Road Map Pattern Mining in Multi-Level, Multi-Dimensional Space Constraint-Based Frequent Pattern Mining Mining High-Dimensional Data and Colossal Patterns Mining Compressed or Approximate Patterns Pattern Exploration and Application Summary 39" },
{ "page_index": 426, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_040.png", "page_index": 426, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:38+07:00" }, "raw_text": "Mining Colossal Frequent Patterns F. Zhu, X. Yan, J. Han, P. S. Yu, and H. Cheng, \"Mining Colossal Frequent Patterns by Core Pattern Fusion\", ICDE'07. We have many algorithms, but can we mine large (i.e., colossal) patterns, such as those of size around 50 to 100? Unfortunately, not! Why not? The curse of the \"downward closure\" of frequent patterns: any sub-pattern of a frequent pattern is frequent. Example: if (a1, a2, ..., a100) is frequent, then a1, a2, ..., a100, (a1, a2), (a1, a3), ..., (a1, a100), (a1, a2, a3), ... are all frequent! There are about 2^100 such frequent itemsets! No matter whether we use breadth-first search (e.g., Apriori) or depth-first search (FP-growth), we have to examine this many patterns. Thus the downward closure property leads to explosion! 40" },
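The size of that blow-up is easy to compute directly. A two-line check (standard library only) of the counts quoted above:

```python
import math

# One frequent 100-itemset forces all of its sub-itemsets to be frequent:
n = 100
print(2**n - 1)          # 1267650600228229401496703205375 non-empty sub-itemsets
print(math.comb(n, 50))  # ~1.01e29 itemsets on the widest level alone
```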
{ "page_index": 427, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_041.png", "page_index": 427, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:45+07:00" }, "raw_text": "Colossal Patterns: A Motivating Example Let's make a set of 40 transactions: T1 = 1 2 3 4 ... 39 40; T2 = 1 2 3 4 ... 39 40; ...; T40 = 1 2 3 4 ... 39 40. Let the minimum support threshold σ = 20. Then delete the items on the diagonal: T1 = 2 3 4 ... 39 40; T2 = 1 3 4 ... 39 40; ...; T40 = 1 2 3 4 ... 39. There are C(40, 20) frequent patterns of size 20, and each is closed and maximal. In general, # patterns = C(n, n/2) ≈ 2^n / sqrt(π n / 2): the size of the answer set is exponential in n. Closed/maximal patterns may partially alleviate the problem but not really solve it: we often need to mine scattered large patterns! 41" },
{ "page_index": 428, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_042.png", "page_index": 428, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:49+07:00" }, "raw_text": "Colossal Pattern Set: Small but Interesting It is often the case that only a small number of patterns are colossal, i.e., of large size. Colossal patterns are usually attached with greater importance than those of small pattern sizes. [Figure: many middle-sized patterns vs. a few large (colossal) patterns.] 42" },
{ "page_index": 429, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_043.png", "page_index": 429, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:54+07:00" }, "raw_text": "Mining Colossal Patterns: Motivation and Philosophy Motivation: many real-world tasks need mining colossal patterns - micro-array analysis in bioinformatics (when support is low), biological sequence patterns, biological/sociological/information graph pattern mining. No hope for completeness: if the mining of mid-sized patterns is explosive in size, there is no hope to find colossal patterns efficiently by insisting on a \"complete set\" mining philosophy. Jumping out of the swamp of mid-sized results: what we may develop is a philosophy that jumps out of the swamp of mid-sized results that are explosive in size and reaches the colossal patterns. Striving for mining almost complete colossal patterns: the key is a mechanism that may quickly reach colossal patterns and discover most of them. 43" },
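The Diag construction above is small enough to build and probe directly. This sketch (my code, counts only) creates the 40-transaction diagonal dataset, confirms that every 20-item subset has support exactly 20 = σ, and shows how large the complete answer set would be.

```python
import math
from random import sample

# Diag: 40 transactions; transaction i contains every item 1..40 except i.
DIAG = [set(range(1, 41)) - {i} for i in range(1, 41)]

def support(itemset):
    return sum(1 for t in DIAG if itemset <= t)

# A 20-item subset S misses exactly the 20 transactions whose index is in S,
# so its support is 40 - 20 = 20 = sigma: every size-20 itemset is frequent.
for _ in range(3):
    print(support(set(sample(range(1, 41), 20))))  # always 20

# ... and there are C(40, 20) of them, which is why complete mining explodes:
print(math.comb(40, 20))  # 137846528820
```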
{ "page_index": 430, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_044.png", "page_index": 430, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:44:59+07:00" }, "raw_text": "Alas, A Show of Colossal Pattern Mining! Let the min-support threshold σ = 20. T1 = 2 3 4 ... 39 40; T2 = 1 3 4 ... 39 40; ...; T40 = 1 2 3 4 ... 39; T41 = 41 42 43 ... 79; T42 = 41 42 43 ... 79; ...; T60 = 41 42 43 ... 79. Then there are C(40, 20) closed/maximal frequent patterns of size 20, but only one with size greater than 20 (i.e., colossal): α = {41, 42, ..., 79} of size 39. The existing fastest mining algorithms (e.g., FPClose, LCM) fail to complete running; our algorithm outputs this colossal pattern in seconds. 44" },
{ "page_index": 431, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_045.png", "page_index": 431, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:04+07:00" }, "raw_text": "Pattern-Fusion traverses the tree in a bounded-breadth way: it always pushes down a frontier of a bounded-size candidate pool; only a fixed number of patterns in the current candidate pool will be used as starting nodes to go down the pattern tree, thus avoiding the exponential search space. Pattern-Fusion identifies \"shortcuts\" whenever possible: pattern growth is not performed by single-item addition but by leaps and bounds, i.e., agglomeration of multiple patterns in the pool. These shortcuts direct the search down the tree much more rapidly towards the colossal patterns. 45" },
{ "page_index": 432, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_046.png", "page_index": 432, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:09+07:00" }, "raw_text": "Observation: Colossal Patterns and Core Patterns In a transaction database D, subpatterns α1 to αk of a colossal pattern α cluster tightly around α by sharing a similar support. We call such subpatterns core patterns of α. 46" },
{ "page_index": 433, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_047.png", "page_index": 433, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:13+07:00" }, "raw_text": "Robustness of Colossal Patterns Core Patterns: intuitively, for a frequent pattern α, a subpattern β is a τ-core pattern of α if β shares a similar support set with α, i.e., |D_α| / |D_β| >= τ, 0 < τ <= 1, where τ is called the core ratio. Robustness of colossal patterns: a colossal pattern is robust in the sense that it tends to have far more core patterns than small patterns do. 47" },
{ "page_index": 434, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_048.png", "page_index": 434, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:19+07:00" }, "raw_text": "Example: Core Patterns A colossal pattern has far more core patterns than a small-sized pattern, and far more core descendants of a smaller size c, so a random draw from the complete set of patterns of size c is more likely to pick a core descendant of a colossal pattern. A colossal pattern can be generated by merging a set of core patterns. Transaction (# of transactions) and core patterns (τ = 0.5): (abe) (100): (abe), (ab), (be), (ae), (e). (bcf) (100): (bcf), (bc), (bf). (acf) (100): (acf), (ac), (af). (abcef) (100): (ab), (ac), (af), (ae), (bc), (bf), (be), (ce), (fe), (e), (abc), (abf), (abe), (ace), (acf), (afe), (bcf), (bce), (bfe), (cfe), (abcf), (abce), (bcfe), (acfe), (abfe), (abcef). 48" },
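The core-pattern test is a one-liner once support sets are computed. The sketch below (my own toy data, mirroring the slide's example) checks whether a subpattern β is a τ-core pattern of α by comparing support-set sizes.

```python
# Toy database in the spirit of the slide's example: each listed row x100.
DB = ([frozenset("abe")] * 100 + [frozenset("bcf")] * 100
      + [frozenset("acf")] * 100 + [frozenset("abcef")] * 100)

def support_set(pattern):
    return {i for i, t in enumerate(DB) if pattern <= t}

def is_core_pattern(beta, alpha, tau=0.5):
    """beta is a tau-core pattern of alpha if |D_alpha| / |D_beta| >= tau."""
    return len(support_set(alpha)) / len(support_set(beta)) >= tau

alpha = frozenset("abcef")
print(is_core_pattern(frozenset("ab"), alpha))  # True: 100/200 = 0.5 >= tau
print(is_core_pattern(frozenset("b"), alpha))   # False: 100/300 < tau
```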
{ "page_index": 435, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_049.png", "page_index": 435, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:25+07:00" }, "raw_text": "Robustness of Colossal Patterns (d, τ)-robustness: a pattern α is (d, τ)-robust if d is the maximum number of items that can be removed from α for the resulting pattern to remain a τ-core pattern of α. A (d, τ)-robust pattern α has Ω(2^d) core patterns, so colossal patterns tend to have a large number of core patterns. Pattern distance: for patterns α and β, the pattern distance is defined as Dist(α, β) = 1 - |D_α ∩ D_β| / |D_α ∪ D_β|. If two patterns α and β are both core patterns of the same pattern, they are bounded by a \"ball\" of a radius specified by their core ratio τ: Dist(α, β) <= 1 - 1/(2/τ - 1) = r(τ). Once we identify one core pattern, we can find all the other core patterns within a bounding ball of radius r(τ). 49" },
{ "page_index": 436, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_050.png", "page_index": 436, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:29+07:00" }, "raw_text": "Colossal Patterns Correspond to Dense Balls Due to their robustness, colossal patterns correspond to dense balls (of size about 2^d) in the pattern space, so a random draw in the pattern space will hit somewhere in a ball with high probability. [Figure: a dense ball around a colossal pattern vs. a sparse neighborhood around a small pattern.] 50" },
{ "page_index": 437, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_051.png", "page_index": 437, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:34+07:00" }, "raw_text": "Idea of Pattern-Fusion Algorithm Generate a complete set of frequent patterns up to a small size. Randomly pick a pattern β; β has a high probability of being a core-descendant of some colossal pattern α. Identify all of α's descendants in this complete set and merge them: this generates a much larger core-descendant of α. In the same fashion, select K patterns; the set of larger core-descendants becomes the candidate pool for the next iteration. 51" },
{ "page_index": 438, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_052.png", "page_index": 438, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:38+07:00" }, "raw_text": "Pattern-Fusion: The Algorithm Initialization (initial pool): use an existing algorithm to mine all frequent patterns up to a small size, e.g., 3. Iteration (iterative pattern fusion): at each iteration, K seed patterns are randomly picked from the current pattern pool; for each seed pattern thus picked, find all the patterns within a bounding ball centered at the seed; all these patterns are fused together to generate a set of super-patterns, and the super-patterns thus generated form the next pool. Termination: when the current pool contains no more than K patterns at the beginning of an iteration. 52" },
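A compact, hedged sketch of one Pattern-Fusion iteration (my simplification of the slide's description, not the authors' code): seeds are drawn from the pool, each seed's ball-mates under the Dist metric above are fused by set union, and the fused super-patterns become the next pool.

```python
import random

DB = [frozenset("abcdefg")] * 50 + [frozenset("abcxy")] * 10  # toy data

def dsup(p):
    return frozenset(i for i, t in enumerate(DB) if p <= t)

def dist(a, b):
    """Dist(a, b) = 1 - |D_a ∩ D_b| / |D_a ∪ D_b| (Jaccard on support sets)."""
    da, db = dsup(a), dsup(b)
    return 1 - len(da & db) / len(da | db)

def fusion_step(pool, k=3, radius=0.5):
    """One iteration: pick k seeds, fuse each seed's bounding ball."""
    new_pool = []
    for seed in random.sample(pool, min(k, len(pool))):
        ball = [p for p in pool if dist(seed, p) <= radius]
        fused = frozenset().union(*ball)
        if dsup(fused):  # keep only fused patterns that still occur in DB
            new_pool.append(fused)
    return new_pool

pool = [frozenset(x) for x in ("ab", "bc", "cd", "de", "fg", "abc")]
print(fusion_step(pool))  # fused super-patterns, leaping toward "abcdefg"
```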
{ "page_index": 439, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_053.png", "page_index": 439, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:43+07:00" }, "raw_text": "Why Is Pattern-Fusion Efficient? A bounded-breadth pattern tree traversal: it avoids explosion in mining mid-sized patterns, and randomness comes to help to stay on the right path. Ability to identify \"shortcuts\" and take \"leaps\": fuse small patterns together in one step to generate new patterns of significant sizes - efficiency. [Figure: pattern candidates and the current pool, with a shortcut through the mid-sized patterns toward the large patterns.] 53" },
{ "page_index": 440, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_054.png", "page_index": 440, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:46+07:00" }, "raw_text": "Pattern-Fusion Leads to Good Approximation Gearing toward colossal patterns: the larger the pattern, the greater the chance it will be generated. Catching outliers: the more distinct the pattern, the greater the chance it will be generated. 54" },
{ "page_index": 441, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_055.png", "page_index": 441, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:52+07:00" }, "raw_text": "Experimental Setting Synthetic data set: Diag, an n x (n-1) table where the i-th row has the integers from 1 to n except i; each row is taken as an itemset; min_support is n/2. Real data sets: Replace, a program trace data set collected from the \"replace\" program, widely used in software engineering research; ALL, a popular gene expression data set with clinical data on ALL-AML leukemia (www.broad.mit.edu/tools/data.html), where each item is a column representing the activity level of a gene/protein in the same sample. Frequent patterns would reveal important correlations between gene expression patterns and disease outcomes. 55" },
{ "page_index": 442, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_056.png", "page_index": 442, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:45:59+07:00" }, "raw_text": "[Figure: run time vs. matrix size for LCM-maximal and Pattern-Fusion; approximation error vs. number of mined patterns for Pattern-Fusion and uniform sampling.] LCM run time increases exponentially with pattern size n, while Pattern-Fusion finishes efficiently. The approximation error of Pattern-Fusion (with min-sup 20, in comparison with the complete set) is rather close to uniform sampling, which randomly picks K patterns from the complete answer set. 56" },
{ "page_index": 443, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_057.png", "page_index": 443, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:06+07:00" }, "raw_text": "Experimental Results on ALL Transactions each have 866 columns, and there are 1736 items in total; the figure uses a high frequency threshold. [Figure: pattern size vs. minimum support threshold (31 down to 21), comparing the complete set (LCM-maximal, sizes about 110 down to 83) with Top-k Pattern-Fusion (sizes about 82 down to 71).]" },
{ "page_index": 444, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_058.png", "page_index": 444, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:10+07:00" }, "raw_text": "Experimental Results on REPLACE REPLACE is a program trace data set recording 4395 calls and transitions; it contains 4395 transactions with 57 items in total. With a support threshold of 0.03, the largest patterns are of size 44. They are all discovered by Pattern-Fusion with different settings of K and τ, when started with an initial pool of 20948 patterns of size <= 3. 58" },
{ "page_index": 445, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_059.png", "page_index": 445, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:16+07:00" }, "raw_text": "Experimental Results on REPLACE Approximation error when compared with the complete mining result. Example: out of the total 98 patterns of size >= 42, when K = 100, Pattern-Fusion returns 80 of them. This is a good approximation to the colossal patterns in the sense that any pattern in the complete set is on average at most 0.17 items away from one of these 80 patterns. [Figure: approximation error vs. pattern size (>= 39 to 45) for K = 50, 100, 200.] 59" },
{ "page_index": 446, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_060.png", "page_index": 446, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:20+07:00" }, "raw_text": "Chapter 7 : Advanced Frequent Pattern Mining Pattern Mining: A Road Map Pattern Mining in Multi-Level, Multi-Dimensional Space Constraint-Based Frequent Pattern Mining Mining High-Dimensional Data and Colossal Patterns Mining Compressed or Approximate Patterns Pattern Exploration and Application Summary 60" },
{ "page_index": 447, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_061.png", "page_index": 447, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:27+07:00" }, "raw_text": "Mining Compressed Patterns: δ-clustering Why compressed patterns? Too many, but less meaningful. Pattern distance measure: D(P1, P2) = 1 - |T(P1) ∩ T(P2)| / |T(P1) ∪ T(P2)|. δ-clustering: for each pattern P, find all patterns which can be expressed by P and whose distance to P is within δ (δ-cover); all patterns in the cluster can be represented by P. Example (ID, item-set, support): P1 {38,16,18,12} 205227; P2 {38,16,18,12,17} 205211; P3 {39,38,16,18,12,17} 101758; P4 {39,16,18,12,17} 161563; P5 {39,16,18,12} 161576. Closed frequent patterns: report P1, P2, P3, P4, P5 - emphasizes support too much, no compression. Max-pattern: report P3 only - information loss. A desirable output: P2, P3, P4. Xin et al., \"Mining Compressed Frequent-Pattern Sets\", VLDB'05 61" },
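The δ-cover test only needs the pattern distance above. A brief sketch (supports taken from the slide's table) uses the fact that for a sub/super-pattern pair the support sets nest, so the distance reduces to a ratio of supports.

```python
SUP = {  # supports from the slide's table
    "P1": 205227, "P2": 205211, "P3": 101758, "P4": 161563, "P5": 161576,
}

def distance_sub_super(sup_sub, sup_super):
    """D(Psub, Psuper) = 1 - |T∩| / |T∪|; when Psub ⊆ Psuper the support sets
    nest (T(Psuper) ⊆ T(Psub)), so this is 1 - sup(Psuper) / sup(Psub)."""
    return 1 - sup_super / sup_sub

DELTA = 0.01
print(distance_sub_super(SUP["P1"], SUP["P2"]))  # ~8e-5: P2 delta-covers P1
print(distance_sub_super(SUP["P4"], SUP["P3"]))  # ~0.37: P3 cannot cover P4
```

This is exactly why the desirable output keeps P2, P3, and P4: P2 covers P1 almost for free, while P3 alone would lose the high-support patterns.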
{ "page_index": 448, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_062.png", "page_index": 448, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:33+07:00" }, "raw_text": "Redundancy-Aware Top-k Patterns Why redundancy-aware top-k patterns? Desired patterns have high significance and low redundancy. Propose MMS (Maximal Marginal Significance) for measuring the combined significance of a pattern set. [Figure: significance/relevance trade-off - (a) a set of patterns, (b) redundancy-aware top-k, (c) traditional top-k, (d) summarization.] Xin et al., Extracting Redundancy-Aware Top-K Patterns, KDD'06 62" },
{ "page_index": 449, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_063.png", "page_index": 449, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:36+07:00" }, "raw_text": "Chapter 7 : Advanced Frequent Pattern Mining Pattern Mining: A Road Map Pattern Mining in Multi-Level, Multi-Dimensional Space Constraint-Based Frequent Pattern Mining Mining High-Dimensional Data and Colossal Patterns Mining Compressed or Approximate Patterns Pattern Exploration and Application Summary 63" },
{ "page_index": 450, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_064.png", "page_index": 450, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:40+07:00" }, "raw_text": "How to Understand and Interpret Patterns? Do they all make sense? What do they mean? How are they useful? (E.g., the diaper-beer pattern; the female sterile (2) tekele gene pattern.) Morphological info. and simple statistics vs. semantic information: not all frequent patterns are useful, only meaningful ones. Annotate patterns with semantic information." },
{ "page_index": 451, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_065.png", "page_index": 451, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:47+07:00" }, "raw_text": "A Dictionary Analogy Word: \"pattern\" - from Merriam-Webster. [Screenshot of the dictionary entry: pronunciation, function (noun), etymology (Middle English patron, from Middle French, from Latin patronus), and date (14th century) are non-semantic info.; the numbered definitions indicate semantics; synonyms (e.g., MODEL, archetype, exemplar, FIGURE, design, motif) and related words act as semantically similar terms.]" },
{ "page_index": 452, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_066.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_066.png", "page_index": 452, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:50+07:00" }, "raw_text": "Slide 65 comment: put this earlier, following the example... our work is motivated by the analogy... remove the sentence, make the figure larger, show only one definition box. Qiaozhu Mei, 18-Aug-06" },
{ "page_index": 453, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_067.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_067.png", "page_index": 453, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:46:54+07:00" }, "raw_text": "Semantic Analysis with Context Models Task 1: Model the context of a frequent pattern. Based on the context model: Task 2: Extract strongest context indicators. Task 3: Extract representative transactions. Task 4: Extract semantically similar patterns." },
{ "page_index": 454, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_068.png", "page_index": 454, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:47:00+07:00" }, "raw_text": "Annotating DBLP Co-authorship & Title Pattern Database: authors and titles, e.g., X. Yan, P. Yu, J. Han: \"Substructure Similarity Search in Graph Databases\". Frequent patterns: P1: { x_yan, j_han }; P2: \"substructure search\". Context units: < { p_yu, j_han }, { d_xin }, ..., \"graph pattern\", \"substructure similarity\", ... >. Semantic annotations for pattern = {xifeng_yan, jiawei_han}: Context indicators (CI): graph; {philip_yu}; mine close; graph pattern; sequential pattern; ... Representative transactions (Trans): gSpan: graph-base substructure pattern mining; mining close relational graph connect constraint; ... Semantically similar patterns (SSP): {jiawei_han, philip_yu}; {jian_pei, jiawei_han}; {jiong_yang, philip_yu, wei_wang}; ..." },
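Context modeling of this kind can be prototyped as co-occurrence weighting. The sketch below is a loose illustration (toy data; plain co-occurrence counts stand in for the paper's context model): it ranks context units by how often they co-occur with a pattern's supporting transactions and reports the top ones as context indicators.

```python
from collections import Counter

# Toy DBLP-like transactions: each has an author set and title terms.
TRANSACTIONS = [
    ({"x_yan", "j_han"}, ["graph", "substructure", "similarity"]),
    ({"x_yan", "j_han", "p_yu"}, ["graph", "pattern", "mining"]),
    ({"j_pei", "j_han"}, ["sequential", "pattern", "mining"]),
]

def context_indicators(pattern, transactions, top_k=3):
    """Rank context units (co-authors and title terms) by co-occurrence
    with the pattern's supporting transactions."""
    counts = Counter()
    for authors, terms in transactions:
        if pattern <= authors:
            counts.update(authors - pattern)  # co-authors as context units
            counts.update(terms)              # title terms as context units
    return counts.most_common(top_k)

print(context_indicators({"x_yan", "j_han"}, TRANSACTIONS))
# 'graph' (count 2) surfaces as the strongest context indicator here.
```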
An apriori-based algorithm for mining frequent substructures from graph data. PKDD'00 M. Kuramochi and G. Karypis. Frequent Subgraph Discovery. ICDM'O1 X. Yan and J. Han. gSpan: Graph-based substructure pattern mining. ICDM'02 X. Yan and J. Han. CloseGraph: Mining Closed Frequent Graph Patterns. KDD'03 X. Yan, P. S. Yu, and J. Han. Graph indexing based on discriminative frequent structure analysis.ACM T0DS, 30:960-993, 2005 X. Yan, F. Zhu, P. S. Yu, and J. Han. Feature-based substructure similarity search.ACM Trans. Database Systems, 31:1418-1453, 2006 74" }, { "page_index": 462, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_076.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_076.png", "page_index": 462, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:47:38+07:00" }, "raw_text": "Ref: Mining Spatial, Spatiotemporal, Multimedia Data H. Cao, N. Mamoulis, and D. W. Cheung. Mining frequent spatiotempora sequential patterns. lCDM'o5 D. Gunopulos and I. Tsoukatos. Efficient Mining of Spatiotemporal Patterns. SSTD'01 K. Koperski and J. Han, Discovery of Spatial Association Rules in Geographic Information Databases, SSD'95 H. Xiong, S. Shekhar, Y. Huang, V. Kumar, X. Ma, and J. S. Yoo. A framework for discovering co-location patterns in data sets with extended spatial objects.SDM'04 J. Yuan, Y. Wu, and M. Yang. Discovery of collocation patterns: From visual words to visual phrases. CVPR'07 O. R. Zaiane, J. Han, and H. Zhu, Mining Recurrent Items in Multimedia with Progressive Resolution Refinement. ICDE'oo 75" }, { "page_index": 463, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_077.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_077.png", "page_index": 463, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:47:43+07:00" }, "raw_text": "Ref: Mining Freguent Patterns in Time-Series Data B. Ozden, S. Ramaswamy, and A. Silberschatz. Cyclic association rules. ICDE'98. J. Han, G. Dong and Y. Yin, Efficient Mining of Partial Periodic Patterns in Time Series Database,lCDE'99. J. Shieh and E. Keogh. iSAX: Indexing and mining terabyte sized time series. KDD'08 B.-K. Yi, N. Sidiropoulos, T. Johnson, H. V. Jagadish, C. Faloutsos, and A. Biliris. Online Data Mining for Co-Evolving Time Sequences. ICDE'00. W. Wang, J. Yang, R. Muntz. TAR: Temporal Association Rules on Evolving Numerical Attributes.ICDE'01 J. Yang, W. Wang, P. S. Yu. Mining Asynchronous Periodic Patterns in Time Series Data. TKDE'03 L. Ye and E. Keogh. Time series shapelets: A new primitive for data mining. KDD'09 76" }, { "page_index": 464, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_078.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_078.png", "page_index": 464, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:47:48+07:00" }, "raw_text": "Ref: FP for Classification and Clustering G. Dong and J. Li. Efficient mining of emerging patterns: Discovering trends and differences. KDD'99 B. Liu, W. Hsu, Y. Ma. 
Integrating Classification and Association Rule Mining. KDD'98. W. Li, J. Han, and J. Pei. CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules. ICDM'01. H. Wang, W. Wang, J. Yang, and P. S. Yu. Clustering by pattern similarity in large data sets. SIGMOD'02. J. Yang and W. Wang. CLUSEQ: efficient and effective sequence clustering. ICDE'03 X. Yin and J. Han. CPAR: Classification based on Predictive Association Rules. SDM'03. H. Cheng, X. Yan, J. Han, and C.-W. Hsu, \"Discriminative Frequent Pattern Analysis for Effective Classification\", ICDE'07 77" }, { "page_index": 465, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_079.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_079.png", "page_index": 465, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:47:52+07:00" }, "raw_text": "Ref: Privacy-Preserving FP Mining A. Evfimievski, R. Srikant, R. Agrawal, J. Gehrke. Privacy Preserving Mining of Association Rules. KDD'02 A. Evfimievski, J. Gehrke, and R. Srikant. Limiting Privacy Breaches in Privacy Preserving Data Mining. PODS'03 J. Vaidya and C. Clifton. Privacy Preserving Association Rule Mining in Vertically Partitioned Data. KDD'02 78" }, { "page_index": 466, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_080.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_080.png", "page_index": 466, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:47:56+07:00" }, "raw_text": "Ref: Mining Compressed Patterns D. Xin, H. Cheng, X. Yan, and J. Han. Extracting redundancy-aware top-k patterns. KDD'06 D. Xin, J. Han, X. Yan, and H. Cheng. Mining compressed frequent-pattern sets. VLDB'05 X. Yan, H. Cheng, J. Han, and D. Xin. Summarizing itemset patterns: A profile-based approach. KDD'05 79" }, { "page_index": 467, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_081.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_081.png", "page_index": 467, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:47:59+07:00" }, "raw_text": "Ref: Mining Colossal Patterns F. Zhu, X. Yan, J. Han, P. S. Yu, and H. Cheng. Mining colossal frequent patterns by core pattern fusion. ICDE'07 F. Zhu, Q. Qu, D. Lo, X. Yan, J. Han, P. S. Yu, Mining Top-K Large Structural Patterns in a Massive Network. VLDB'11 80" }, { "page_index": 468, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_082.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_082.png", "page_index": 468, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:03+07:00" }, "raw_text": "Ref: FP Mining from Data Streams Y. Chen, G. Dong, J. Han, B. W. Wah, and J. Wang. Multi-Dimensional Regression Analysis of Time-Series Data Streams. VLDB'02. R. M. Karp, C. H. Papadimitriou, and S. Shenker. A simple algorithm for finding frequent elements in streams and bags. TODS 2003. G. Manku and R. Motwani. Approximate Frequency Counts over Data Streams. VLDB'02 A. Metwally, D. Agrawal, and A. El Abbadi. Efficient computation of frequent and top-k elements in data streams. ICDT'05 81" }, { "page_index": 469, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_083.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_6/slide_083.png", "page_index": 469, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:08+07:00" }, "raw_text": "Ref: Freq. Pattern Mining Applications T. Dasu, T. Johnson, S. Muthukrishnan, and V. Shkapenyuk. Mining Database Structure; or How to Build a Data Quality Browser. SIGMOD'02 M. Khan, H. Le, H. Ahmadi, T. Abdelzaher, and J. Han. DustMiner: Troubleshooting interactive complexity bugs in sensor networks. SenSys'08 Z. Li, S. Lu, S. Myagmar, and Y. Zhou. CP-Miner: A tool for finding copy-paste and related bugs in operating system code. In Proc. 2004 Symp. Operating Systems Design and Implementation (OSDI'04) Z. Li and Y. Zhou. PR-Miner: Automatically extracting implicit programming rules and detecting violations in large software code. FSE'05 D. Lo, H. Cheng, J. Han, S. Khoo, and C. Sun. Classification of software behaviors for failure detection: A discriminative pattern mining approach. KDD'09 Q. Mei, D. Xin, H. Cheng, J. Han, and C. Zhai. Semantic annotation of frequent patterns. ACM TKDD, 2007. K. Wang, S. Zhou, J. Han. Profit Mining: From Patterns to Actions. EDBT'02. 82" }, { "page_index": 470, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_001.png", "page_index": 470, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:12+07:00" }, "raw_text": "Data Mining: (3rd ed.) - Chapter 8 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University @2011 Han, Kamber & Pei. All rights reserved. 1" }, { "page_index": 471, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_002.png", "page_index": 471, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:17+07:00" }, "raw_text": "" }, { "page_index": 472, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_003.png", "page_index": 472, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:22+07:00" }, "raw_text": "Chapter 8.
Classification: Basic Concepts Classification: Basic Concepts Decision Tree Induction Bayes Classification Methods Rule-Based Classification Model Evaluation and Selection Techniques to Improve Classification Accuracy: Ensemble Methods Summary 3" }, { "page_index": 473, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_004.png", "page_index": 473, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:26+07:00" }, "raw_text": "Supervised vs. Unsupervised Learning Supervised learning (classification) Supervision: The training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations New data is classified based on the training set Unsupervised learning (clustering) The class labels of the training data are unknown Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data 4" }, { "page_index": 474, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_005.png", "page_index": 474, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:30+07:00" }, "raw_text": "Prediction Problems: Classification vs. Numeric Prediction Classification predicts categorical class labels (discrete or nominal) constructs a model based on the training set and the values (class labels) of a classifying attribute, and uses it to classify new data Numeric prediction models continuous-valued functions, i.e., predicts unknown or missing values Typical applications Credit/loan approval Medical diagnosis: is a tumor cancerous or benign? Fraud detection: is a transaction fraudulent? Web page categorization: which category is it? 5" }, { "page_index": 475, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_006.png", "page_index": 475, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:35+07:00" }, "raw_text": "Classification: A Two-Step Process Model construction: describing a set of predetermined classes Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute The set of tuples used for model construction is the training set The model is represented as classification rules, decision trees, or mathematical formulae Model usage: for classifying future or unknown objects Estimate accuracy of the model The known label of each test sample is compared with the classified result from the model Accuracy rate is the percentage of test set samples that are correctly classified by the model The test set is independent of the training set (otherwise overfitting) If the accuracy is acceptable, use the model to classify new data Note: If the test set is used to select models, it is called a validation (test) set 6" }
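The two-step process above maps directly onto a fit/predict workflow. Below is a minimal sketch, assuming scikit-learn is available; the bundled iris data stands in for the slides' tenure table, so all names and numbers are illustrative only.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Step 1: model construction on a training set of class-labeled tuples.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=42)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Step 2: model usage -- estimate accuracy on a test set independent of training.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```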
"doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_007.png", "page_index": 476, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:42+07:00" }, "raw_text": "Process (1): Model Construction Classification Algorithms Training Data Classifier NAME RANK YEARS TENURED (Model) Mike Assistant Prof 3 no Mary Assistant Prof 7 yes Bill Professor 2 yes Jim Associate Prof 7 yes IF rank = professor Dave Assistant Prof no OR years > 6 Anne Associate Prof 3 no THEN tenured = yes 7" }, { "page_index": 477, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_008.png", "page_index": 477, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:46+07:00" }, "raw_text": "Process (2): Using the Model in Prediction Classifier Testing Unseen Data Data (Jeff, Professor, 4) NAME RANK YEARS TENURED Tenured? Tom Assistant Prof 2 no Merlisa Associate Prof 7 no Yes George Professor 5 yes Joseph Assistant Prof 7 yes 8" }, { "page_index": 478, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_009.png", "page_index": 478, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:48:50+07:00" }, "raw_text": "Chapter 8. Classification: Basic Concepts Classification: Basic Concepts Decision Tree Induction Bayes Classification Methods Rule-Based Classification Model Evaluation and Selection Techniques to Improve Classification Accuracy: Ensemble Methods Summary 9" }, { "page_index": 479, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_010.png", "page_index": 479, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:00+07:00" }, "raw_text": "Decision Tree Induction: : An Example age income [studentcredit ratingl buys computer <=30 high no fair no Training data set: Buys_computer <=30 high no excellent no The data set follows an example of 31...40 high no fair yes >40 medium no fair yes Quinlan's ID3 (Playing Tennis) >40 low yes fair yes Resulting tree: >40 low yes excellent no 31...40 low yes excellent yes age? <=30 medium no fair no <=30 low yes fair yes >40 medium yes fair yes <=30 medium yes excellent yes <=30 31..40 31...40 medium no excellent >40 yes 31...40 high yes fair yes >40 medium no excellent no credit rating? student? 
yes fair excellent no yes no yes no yes 10" }, { "page_index": 480, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_011.png", "page_index": 480, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:04+07:00" }, "raw_text": "Algorithm for Decision Tree Induction Basic algorithm (a greedy algorithm) Tree is constructed in a top-down recursive divide-and- conquer manner At start, all the training examples are at the root Attributes are categorical (if continuous-valued, they are discretized in advance) Examples are partitioned recursively based on selected attributes Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain) Conditions for stopping partitioning All samples for a given node belong to the same class There are no remaining attributes for further partitioning - majority voting is employed for classifying the leaf There are no samples left 11" }, { "page_index": 481, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_012.png", "page_index": 481, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:10+07:00" }, "raw_text": "Brief Review of Entropy Entropy (Information Theory) A measure of uncertainty associated with a random variable Calculation: For a discrete random variable Y taking m distinct values {y1, ..., Ym}, H(Y) =-Em=1p;log(pi),where pi = P(Y = yi) Interpretation: 1.0 Higher entropy => higher uncertainty (x)H 0.5 Lower entropy => lower uncertainty Conditional Entropy 0 0 0.5 1.0 Pr(X = 1) H(YX) = Zx p(x)H(YX = x) m = 2 12" }, { "page_index": 482, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_013.png", "page_index": 482, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:15+07:00" }, "raw_text": "Attribute Selection Measure: Information Gain 1 (ID3/C4.5) Select the attribute with the highest information gain Let p, be the probability that an arbitrary tuple in D belongs to class C, estimated by ICi,DI/IDI Expected information (entropy) needed to classify a tuple in D: m Info(D) =-E p;log2(p;) Information needed (after using A to split D into v partitions) to classify D: V Info (D) = x Info (Dj) D i=1 Information gained by branching on attribute A Gain(A) = Info(D) - Info (D) 13" }, { "page_index": 483, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_014.png", "page_index": 483, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:28+07:00" }, "raw_text": "Attribute Selection: lnformation Gain 5 4 Class P: buys_computer = \"yes\" Info ave(D) = I(2,3)+- 1(4,0) Class N: buys_computer = \"no\" 14 14 5 9 9 5 5 lnfo(D) =1(9,5) = log log 2 =0.940 1(3,2) = 
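The entropy and information-gain formulas above can be checked numerically. A small plain-Python sketch using the class counts of the buys_computer data (9 yes / 5 no) and the age partition counts that appear on the next slide:

```python
from math import log2

def info(counts):
    """Entropy: -sum p_i * log2(p_i) over a list of class counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

D = [9, 5]                            # buys_computer: 9 yes / 5 no
age_parts = [[2, 3], [4, 0], [3, 2]]  # class counts within age <=30, 31..40, >40

info_D = info(D)                                              # I(9,5) = 0.940
info_age = sum(sum(p) / sum(D) * info(p) for p in age_parts)  # 0.694
print(round(info_D - info_age, 3))                            # Gain(age) = 0.246
```

Note that the pure partition I(4,0) contributes zero entropy, which is exactly why the 31..40 branch costs nothing to classify.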
, { "page_index": 483, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_014.png", "page_index": 483, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:28+07:00" }, "raw_text": "Attribute Selection: Information Gain Class P: buys_computer = \"yes\"; Class N: buys_computer = \"no\" Info(D) = I(9,5) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940 Info_age(D) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694, where (5/14) I(2,3) means \"age <=30\" has 5 out of 14 samples, with 2 yes'es and 3 no's Age partition (age, p_i, n_i, I(p_i, n_i)): <=30, 2, 3, 0.971; 31...40, 4, 0, 0; >40, 3, 2, 0.971 Hence Gain(age) = Info(D) - Info_age(D) = 0.246 Similarly, Gain(income) = 0.029, Gain(student) = 0.151, Gain(credit_rating) = 0.048 (training table as on slide 10) 14" }, { "page_index": 484, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_015.png", "page_index": 484, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:32+07:00" }, "raw_text": "Computing Information-Gain for Continuous-Valued Attributes Let attribute A be a continuous-valued attribute Must determine the best split point for A Sort the values of A in increasing order Typically, the midpoint between each pair of adjacent values is considered as a possible split point The point with the minimum expected information requirement for A is selected as the split-point for A Split: D1 is the set of tuples in D satisfying A <= split-point, and D2 is the set of tuples in D satisfying A > split-point 15" }, { "page_index": 485, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_016.png", "page_index": 485, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:38+07:00" }, "raw_text": "Gain Ratio for Attribute Selection (C4.5) The information gain measure is biased towards attributes with a large number of values C4.5 (a successor of ID3) uses gain ratio to overcome the problem (normalization to information gain): SplitInfo_A(D) = -Σ_{j=1}^{v} (|D_j|/|D|) × log2(|D_j|/|D|) GainRatio(A) = Gain(A)/SplitInfo_A(D) Ex. SplitInfo_income(D) = -(4/14) log2(4/14) - (6/14) log2(6/14) - (4/14) log2(4/14) = 1.557 gain_ratio(income) = 0.029/1.557 = 0.019 The attribute with the maximum gain ratio is selected as the splitting attribute 16" }, { "page_index": 486, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_017.png", "page_index": 486, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:44+07:00" }, "raw_text": "Gini Index (CART, IBM IntelligentMiner) If a data set D contains examples from n classes, the gini index gini(D) is defined as gini(D) = 1 - Σ_{j=1}^{n} p_j², where p_j is the relative frequency of class j in D If a data set D is split on A into two subsets D1 and D2, the gini index gini_A(D) is defined as gini_A(D) = (|D1|/|D|) gini(D1) + (|D2|/|D|) gini(D2) Reduction in impurity: Δgini(A) = gini(D) - gini_A(D) The attribute providing the largest reduction in impurity (i.e., the smallest gini_A(D)) is chosen to split the node (need to enumerate all the possible splitting points for each attribute) 17" }, { "page_index": 487, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_018.png", "page_index": 487, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:50+07:00" }, "raw_text": "Computation of Gini Index Ex. D has 9 tuples in buys_computer = \"yes\" and 5 in \"no\": gini(D) = 1 - (9/14)² - (5/14)² = 0.459 Suppose the attribute income partitions D into 10 tuples in D1: {low, medium} and 4 in D2: {high}: gini_{income ∈ {low,medium}}(D) = (10/14) gini(D1) + (4/14) gini(D2) = 0.443 The Gini index for the split {low,high} (and {medium}) is 0.458 and for {medium,high} (and {low}) is 0.450 Thus, split on {low,medium} (and {high}) since it has the lowest Gini index All attributes are assumed continuous-valued May need other tools, e.g., clustering, to get the possible split values Can be modified for categorical attributes 18" }, { "page_index": 488, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_019.png", "page_index": 488, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:49:55+07:00" }, "raw_text": "Comparing Attribute Selection Measures The three measures, in general, return good results, but: Information gain: biased towards multivalued attributes Gain ratio: tends to prefer unbalanced splits in which one partition is much smaller than the others Gini index: biased to multivalued attributes; has difficulty when the # of classes is large; tends to favor tests that result in equal-sized partitions and purity in both partitions 19" }
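To make the gain-ratio and Gini computations on the preceding slides concrete, here is a plain-Python sketch; the counts are taken from the buys_computer example (income splits as 4 low / 6 medium / 4 high; {low,medium} holds 7 yes / 3 no and {high} holds 2 yes / 2 no):

```python
from math import log2

def split_info(sizes):
    """SplitInfo_A(D) over the partition sizes |Dj|."""
    total = sum(sizes)
    return -sum(s / total * log2(s / total) for s in sizes if s > 0)

def gini(counts):
    """gini = 1 - sum of squared class frequencies."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

print(round(split_info([4, 6, 4]), 3))          # 1.557
print(round(0.029 / split_info([4, 6, 4]), 3))  # gain_ratio(income) = 0.019

# Gini of D (9 yes / 5 no) and of the binary split {low,medium} vs {high}:
gini_split = 10/14 * gini([7, 3]) + 4/14 * gini([2, 2])
print(round(gini([9, 5]), 3), round(gini_split, 3))  # 0.459 0.443
```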
, { "page_index": 489, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_020.png", "page_index": 489, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:50:00+07:00" }, "raw_text": "Other Attribute Selection Measures CHAID: a popular decision tree algorithm; measure based on the χ² test for independence C-SEP: performs better than info. gain and gini index in certain cases G-statistic: has a close approximation to the χ² distribution MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred): the best tree is the one that requires the fewest # of bits to both (1) encode the tree, and (2) encode the exceptions to the tree Multivariate splits (partition based on multiple variable combinations): CART finds multivariate splits based on a linear comb. of attrs. Which attribute selection measure is the best? Most give good results; none is significantly superior to the others 20" }, { "page_index": 490, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_021.png", "page_index": 490, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:50:05+07:00" }, "raw_text": "Overfitting and Tree Pruning Overfitting: An induced tree may overfit the training data Too many branches, some may reflect anomalies due to noise or outliers Poor accuracy for unseen samples Two approaches to avoid overfitting Prepruning: Halt tree construction early: do not split a node if this would result in the goodness measure falling below a threshold Difficult to choose an appropriate threshold Postpruning: Remove branches from a \"fully grown\" tree to get a sequence of progressively pruned trees Use a set of data different from the training data to decide which is the \"best pruned tree\" 21" }, { "page_index": 491, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_022.png", "page_index": 491, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:50:10+07:00" }, "raw_text": "Enhancements to Basic Decision Tree Induction Allow for continuous-valued attributes Dynamically define new discrete-valued attributes that partition the continuous attribute values into a discrete set of intervals Handle missing attribute values Assign the most common value of the attribute Assign a probability to each of the possible values Attribute construction Create new attributes based on existing ones that are sparsely represented This reduces fragmentation, repetition, and replication 22" }, { "page_index": 492, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_023.png", "page_index": 492, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:50:14+07:00" }, "raw_text": "Classification in Large Databases Classification: a classical problem extensively studied by statisticians and machine learning researchers Scalability: classifying data sets with millions of examples and hundreds of attributes with reasonable speed Why is decision tree induction popular?
relatively faster learning speed (than other classification methods) convertible to simple and easy-to-understand classification rules can use SQL queries for accessing databases comparable classification accuracy with other methods RainForest (VLDB'98 - Gehrke, Ramakrishnan & Ganti) builds an AVC-list (attribute, value, class label) 23" }, { "page_index": 493, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_024.png", "page_index": 493, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:50:18+07:00" }, "raw_text": "Scalability Framework for RainForest Separates the scalability aspects from the criteria that determine the quality of the tree Builds an AVC-list: AVC (Attribute, Value, Class_label) AVC-set (of an attribute X): projection of the training dataset onto the attribute X and the class label, where the counts of the individual class labels are aggregated AVC-group (of a node n): set of AVC-sets of all predictor attributes at the node n 24" }, { "page_index": 494, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_025.png", "page_index": 494, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:50:32+07:00" }, "raw_text": "RainForest: Training Set and Its AVC Sets Training examples as on slide 10. AVC-set on age (value: Buy_Computer yes/no counts): <=30: 2/3; 31...40: 4/0; >40: 3/2. AVC-set on income: high: 2/2; medium: 4/2; low: 3/1. AVC-set on student: yes: 6/1; no: 3/4. AVC-set on credit_rating: fair: 6/2; excellent: 3/3. 25" }, { "page_index": 495, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_026.png", "page_index": 495, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:50:37+07:00" }, "raw_text": "BOAT (Bootstrapped Optimistic Algorithm for Tree Construction) Use a statistical technique called bootstrapping to create several smaller samples (subsets), each of which fits in memory Each subset is used to create a tree, resulting in several trees These trees are examined and used to construct a new tree T' It turns out that T' is very close to the tree that would be generated using the whole data set together Adv: requires only two scans of the DB; an incremental alg. 26" }
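A RainForest AVC-set is just an aggregate over (attribute value, class label) pairs, so it can be sketched in a few lines of plain Python; the rows below are the (age, buys_computer) projection of the 14 training examples from slide 10:

```python
from collections import Counter

rows = [  # (age, buys_computer) projection of the training examples
    ("<=30", "no"), ("<=30", "no"), ("31..40", "yes"), (">40", "yes"),
    (">40", "yes"), (">40", "no"), ("31..40", "yes"), ("<=30", "no"),
    ("<=30", "yes"), (">40", "yes"), ("<=30", "yes"), ("31..40", "yes"),
    ("31..40", "yes"), (">40", "no"),
]

avc_age = Counter(rows)        # AVC-set on age: (value, class) -> count
for (value, label), n in sorted(avc_age.items()):
    print(value, label, n)     # e.g. 31..40 yes 4, <=30 no 3, ...
```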
26" }, { "page_index": 496, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_027.png", "page_index": 496, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:50:44+07:00" }, "raw_text": "Presentation of Classification Results dbminer Eile Edit Query Yiew window Options Help EE 2 yiew Dim: cost Level0 Class% evel. 85 revenue(0.002000.00) Q cost(0.001000.00) X on region(Europe) X on region(Far East) Classification attnbute:product Envirornental Line region(North America) GO Sport Line cost(1000.002000.00) Dutdoor Products revenue(2000.004000.00) revenue(4000.006000.00 revenue(6000.00+) revenue(Not Specified) 27" }, { "page_index": 497, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_028.png", "page_index": 497, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:50:51+07:00" }, "raw_text": "Visualization of a Decision Tree in SGl/MineSet 3.O DmhEr E cVer e.CnE 1rc:.. sgi s333dE DE3 pF targe: 36E cF l3*t reaI 1.0 H m I I] n lly He1oh1 o1a1 sa1es piskheioh1Ta1ge1saes Color gs of torgct 1002 20109 500:6 R F:Fy F11" }, { "page_index": 498, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_029.png", "page_index": 498, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:50:58+07:00" }, "raw_text": "e Visual Mining by nteractive Perception- Based Classification (PBC) Perception-Dased Classification-segment.1.train.td cx tile LO0ls Operations Ontions View Help tawoluc rcan Sp it. E.8.38.. nuc msar [Eplt(-. 2.1. 1.)] FOLIACE WACCW wo'kin progress Wotkir progress workir progrcoe woikir progress attribJte : =cords93 Left moucc bu.ton inscrt linc.Shit+lcf moloc bJtton mcvco linc Riahtmousc button plitc attributc 29" }, { "page_index": 499, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_030.png", "page_index": 499, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:51:01+07:00" }, "raw_text": "Chapter 8. Classification: Basic Concepts Classification: Basic Concepts Decision Tree Induction Bayes Classification Methods Rule-Based Classification Model Evaluation and Selection Techniques to Improve Classification Accuracy: Ensemble Methods Summary 30" }, { "page_index": 500, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_031.png", "page_index": 500, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:51:06+07:00" }, "raw_text": "Bayesian Classification: Why? 
A statistical classifier: performs probabilistic prediction, i.e., predicts class membership probabilities Foundation: based on Bayes' theorem Performance: a simple Bayesian classifier, the naive Bayesian classifier, has comparable performance with decision tree and selected neural network classifiers Incremental: each training example can incrementally increase/decrease the probability that a hypothesis is correct; prior knowledge can be combined with observed data Standard: even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured 31" }, { "page_index": 501, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_032.png", "page_index": 501, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:51:11+07:00" }, "raw_text": "Bayes' Theorem: Basics Total probability theorem: P(B) = Σ_{i=1}^{M} P(B|A_i) P(A_i) Bayes' theorem: P(H|X) = P(X|H) P(H) / P(X) Let X be a data sample (\"evidence\"): class label is unknown Let H be a hypothesis that X belongs to class C Classification is to determine P(H|X) (i.e., posteriori probability): the probability that the hypothesis holds given the observed data sample X P(H) (prior probability): the initial probability, e.g., X will buy computer, regardless of age, income, ... P(X): probability that sample data is observed P(X|H) (likelihood): the probability of observing the sample X, given that the hypothesis holds, e.g., given that X will buy computer, the prob. that X is 31..40 with medium income 32" }, { "page_index": 502, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_033.png", "page_index": 502, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:51:16+07:00" }, "raw_text": "Prediction Based on Bayes' Theorem Given training data X, the posteriori probability of a hypothesis H, P(H|X), follows Bayes' theorem: P(H|X) = P(X|H) P(H) / P(X) Informally, this can be viewed as posteriori = likelihood × prior / evidence Predicts X belongs to C_i iff the probability P(C_i|X) is the highest among all the P(C_k|X) for all the k classes Practical difficulty: it requires initial knowledge of many probabilities, involving significant computational cost 33" }, { "page_index": 503, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_034.png", "page_index": 503, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:51:20+07:00" }, "raw_text": "Classification Is to Derive the Maximum Posterior Let D be a training set of tuples and their associated class labels; each tuple is represented by an n-D attribute vector X = (x_1, x_2, ..., x_n) Classification is to derive the maximum posteriori, i.e., the maximal P(C_i|X) This can be derived from Bayes' theorem: P(C_i|X) = P(X|C_i) P(C_i) / P(X) Since P(X) is constant for all classes, only P(X|C_i) P(C_i) needs to be maximized 34" }, { "page_index": 504, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_035.png", "page_index": 504, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:51:26+07:00" }, "raw_text": "Naive Bayes Classifier A simplified assumption: attributes are conditionally independent (i.e., no dependence relation between attributes): P(X|C_i) = Π_{k=1}^{n} P(x_k|C_i) = P(x_1|C_i) × P(x_2|C_i) × ... × P(x_n|C_i) This greatly reduces the computation cost: only counts the class distribution If A_k is categorical, P(x_k|C_i) is the # of tuples in C_i having value x_k for A_k divided by |C_{i,D}| (# of tuples of C_i in D) If A_k is continuous-valued, P(x_k|C_i) is usually computed based on a Gaussian distribution with mean μ and standard deviation σ: g(x, μ, σ) = (1 / (sqrt(2π) σ)) e^(-(x - μ)² / (2σ²)), and P(x_k|C_i) = g(x_k, μ_{C_i}, σ_{C_i}) 35" }, { "page_index": 505, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_036.png", "page_index": 505, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:51:36+07:00" }, "raw_text": "Naive Bayes Classifier: Training Dataset Class: C1: buys_computer = \"yes\"; C2: buys_computer = \"no\". Training table (age, income, student, credit_rating, buys_computer) as on slide 10. Data to be classified: X = (age <= 30, income = medium, student = yes, credit_rating = fair) 36" }, { "page_index": 506, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_037.png", "page_index": 506, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:51:47+07:00" }, "raw_text": "Naive Bayes Classifier: An Example (training table as on slide 10) P(C_i): P(buys_computer = \"yes\") = 9/14 = 0.643 P(buys_computer = \"no\") = 5/14 = 0.357 Compute P(X|C_i) for each class: P(age = \"<=30\" | buys_computer = \"yes\") = 2/9 = 0.222 P(age = \"<=30\" | buys_computer = \"no\") = 3/5 = 0.6 P(income = \"medium\" | buys_computer = \"yes\") = 4/9 = 0.444
P(income = \"medium\" l buys_computer = \"no\") = 2/5 = 0.4 P(student = \"yes\" 1 buys_computer = \"yes) = 6/9 = 0.667 P(student = \"yes\" I buys_computer = \"no\") = 1/5 = 0.2 P(credit_rating = \"fair\" buys_computer = \"yes\") = 6/9 = 0.667 P(credit_rating = \"fair\" l buys_computer = \"no\") = 2/5 = 0.4 X = (age <= 30 , income = medium, student = yes, credit_rating = fair) P(xci) : P(xbuys_computer = \"yes\") = 0.222 x 0.444 x 0.667 x 0.667 = 0.044 P(X buys_computer = \"no\") = 0.6 x 0.4 x 0.2 x 0.4 = 0.019 p(xci)*p(c.) : P(xbuys_computer = \"yes\") * P(buys_computer = \"yes\") = 0.028 P(Xbuys_computer =\"no\") * P(buys_computer = \"no\") = 0.007 Therefore, x belongs to class (\"buys_computer = yes\") 37" }, { "page_index": 507, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_038.png", "page_index": 507, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:51:53+07:00" }, "raw_text": "Avoiding the Zero-Probability Problem Naive Bayesian prediction requires each conditional prob. be non-zero. Otherwise, the predicted prob. will be zero n P(XC i II P(xkC i k = 1 Ex. Suppose a dataset with 1000 tuples, income=low (0 income= medium (990), and income = high (10) Use Laplacian correction (or Laplacian estimator) Adding 1 to each case Prob(income = low) = 1/1003 Prob(income = medium) = 991/1003 Prob(income = high) = 11/1003 The \"corrected\" prob. estimates are close to their \"uncorrected\" counterparts 38" }, { "page_index": 508, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_039.png", "page_index": 508, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:51:57+07:00" }, "raw_text": "Naive Bayes Classifier: . Comments Advantages Easy to implement Good results obtained in most of the cases Disadvantages Assumption: class conditional independence, therefore loss of accuracy Practically, dependencies exist among variables E.g., hospitals: patients: Profile: age, family history, etc. Symptoms: fever, cough etc., Disease: lung cancer, diabetes, etc. Dependencies among these cannot be modeled by Naive Bayes Classifier How to deal with these dependencies? Bayesian Belief Networks (Chapter 9) 39" }, { "page_index": 509, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_040.png", "page_index": 509, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:01+07:00" }, "raw_text": "Chapter 8. 
, { "page_index": 509, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_040.png", "page_index": 509, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:01+07:00" }, "raw_text": "Chapter 8. Classification: Basic Concepts Classification: Basic Concepts Decision Tree Induction Bayes Classification Methods Rule-Based Classification Model Evaluation and Selection Techniques to Improve Classification Accuracy: Ensemble Methods Summary 40" }, { "page_index": 510, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_041.png", "page_index": 510, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:06+07:00" }, "raw_text": "Using IF-THEN Rules for Classification Represent the knowledge in the form of IF-THEN rules R: IF age = youth AND student = yes THEN buys_computer = yes Rule antecedent/precondition vs. rule consequent Assessment of a rule: coverage and accuracy coverage(R) = n_covers / |D| (n_covers = # of tuples covered by R) accuracy(R) = n_correct / n_covers If more than one rule is triggered, need conflict resolution Size ordering: assign the highest priority to the triggering rule that has the \"toughest\" requirement (i.e., with the most attribute tests) Class-based ordering: decreasing order of prevalence or misclassification cost per class Rule-based ordering (decision list): rules are organized into one long priority list, according to some measure of rule quality or by experts 41" }
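Rule coverage and accuracy as defined above, as a small sketch; the three tuples and the rule are illustrative:

```python
def rule_metrics(D, antecedent, consequent):
    """coverage(R) = n_covers / |D|; accuracy(R) = n_correct / n_covers."""
    covered = [(x, y) for x, y in D if antecedent(x)]
    n_correct = sum(1 for _, y in covered if y == consequent)
    coverage = len(covered) / len(D)
    accuracy = n_correct / len(covered) if covered else 0.0
    return coverage, accuracy

D = [({"age": "youth", "student": "yes"}, "yes"),
     ({"age": "youth", "student": "no"}, "no"),
     ({"age": "senior", "student": "yes"}, "yes")]
R = lambda x: x["age"] == "youth" and x["student"] == "yes"  # IF-part of the rule
print(rule_metrics(D, R, "yes"))  # coverage = 1/3, accuracy = 1.0
```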
, { "page_index": 511, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_042.png", "page_index": 511, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:13+07:00" }, "raw_text": "Rule Extraction from a Decision Tree Rules are easier to understand than large trees One rule is created for each path from the root to a leaf Each attribute-value pair along a path forms a conjunction: the leaf holds the class prediction Rules are mutually exclusive and exhaustive Example: rule extraction from our buys_computer decision tree (slide 10): IF age = young AND student = no THEN buys_computer = no IF age = young AND student = yes THEN buys_computer = yes IF age = mid-age THEN buys_computer = yes IF age = old AND credit_rating = excellent THEN buys_computer = no IF age = old AND credit_rating = fair THEN buys_computer = yes 42" }, { "page_index": 512, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_043.png", "page_index": 512, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:17+07:00" }, "raw_text": "Rule Induction: Sequential Covering Method Sequential covering algorithm: extracts rules directly from training data Typical sequential covering algorithms: FOIL, AQ, CN2, RIPPER Rules are learned sequentially; each rule for a given class C_i will cover many tuples of C_i but none (or few) of the tuples of other classes Steps: Rules are learned one at a time Each time a rule is learned, the tuples covered by the rule are removed Repeat the process on the remaining tuples until a termination condition, e.g., when no more training examples remain or when the quality of a rule returned is below a user-specified threshold Compare with decision-tree induction, which learns a set of rules simultaneously 43" }, { "page_index": 513, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_044.png", "page_index": 513, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:20+07:00" }, "raw_text": "Sequential Covering Algorithm while (enough target tuples left): generate a rule; remove positive target tuples satisfying this rule. [figure: positive examples progressively covered by Rules 1, 2, and 3] 44" }, { "page_index": 514, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_045.png", "page_index": 514, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:24+07:00" }, "raw_text": "Rule Generation To generate a rule: while (true): find the best predicate p; if foil-gain(p) > threshold then add p to the current rule; else break. [figure: region A3 = 1 && A1 = 2 separating positive from negative examples] 45" }
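The sequential-covering loop on the preceding slides reduces to a few lines; in this sketch, learn_one_rule is a placeholder for a greedy FOIL-style search such as the one sketched after the next slide:

```python
def sequential_covering(D, target, learn_one_rule, min_left=1):
    """Learn rules for class `target` one at a time, removing covered positives."""
    rules, remaining = [], list(D)
    while sum(1 for _, y in remaining if y == target) >= min_left:
        rule = learn_one_rule(remaining, target)  # returns a predicate over x, or None
        if rule is None:
            break
        rules.append(rule)
        # remove positive target tuples satisfying this rule
        remaining = [(x, y) for x, y in remaining if not (rule(x) and y == target)]
    return rules
```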
, { "page_index": 515, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_046.png", "page_index": 515, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:30+07:00" }, "raw_text": "How to Learn-One-Rule? Start with the most general rule possible: condition = empty Add new attributes by adopting a greedy depth-first strategy: pick the one that most improves the rule quality Rule-quality measures: consider both coverage and accuracy FOIL-gain (in FOIL & RIPPER): assesses info_gain by extending the condition: FOIL_Gain = pos' × (log2(pos' / (pos' + neg')) - log2(pos / (pos + neg))), where pos/neg (pos'/neg') are the # of positive/negative tuples covered by the rule before (after) the extension It favors rules that have high accuracy and cover many positive tuples Rule pruning based on an independent set of test tuples: FOIL_Prune(R) = (pos - neg) / (pos + neg), where pos/neg are the # of positive/negative tuples covered by R If FOIL_Prune is higher for the pruned version of R, prune R 46" }, { "page_index": 516, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_047.png", "page_index": 516, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:34+07:00" }, "raw_text": "Chapter 8. Classification: Basic Concepts Classification: Basic Concepts Decision Tree Induction Bayes Classification Methods Rule-Based Classification Model Evaluation and Selection Techniques to Improve Classification Accuracy: Ensemble Methods Summary 47" }
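FOIL_Gain and FOIL_Prune as reconstructed above, as runnable helpers; the counts in the usage line are invented for illustration:

```python
from math import log2

def foil_gain(pos, neg, pos_ext, neg_ext):
    """FOIL_Gain for extending a rule: pos/neg covered before, pos_ext/neg_ext after."""
    return pos_ext * (log2(pos_ext / (pos_ext + neg_ext)) - log2(pos / (pos + neg)))

def foil_prune(pos, neg):
    """FOIL_Prune(R) = (pos - neg) / (pos + neg)."""
    return (pos - neg) / (pos + neg)

print(round(foil_gain(100, 400, 30, 10), 2))  # 57.21: the extension makes the rule purer
```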
, { "page_index": 517, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_048.png", "page_index": 517, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:38+07:00" }, "raw_text": "Model Evaluation and Selection Evaluation metrics: How can we measure accuracy? Other metrics to consider? Use a validation test set of class-labeled tuples instead of the training set when assessing accuracy Methods for estimating a classifier's accuracy: Holdout method, random subsampling Cross-validation Bootstrap Comparing classifiers: Confidence intervals Cost-benefit analysis and ROC curves 48" }, { "page_index": 518, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_049.png", "page_index": 518, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:45+07:00" }, "raw_text": "Classifier Evaluation Metrics: Confusion Matrix Confusion matrix (actual class vs. predicted class): C1 predicted as C1: True Positives (TP); C1 predicted as ¬C1: False Negatives (FN); ¬C1 predicted as C1: False Positives (FP); ¬C1 predicted as ¬C1: True Negatives (TN). Example: actual buy_computer = yes: predicted yes 6954, predicted no 46, total 7000; actual buy_computer = no: predicted yes 412, predicted no 2588, total 3000; totals: 7366, 2634, 10000. Entry (i, j): # of tuples in class i that were labeled by the classifier as class j May have extra rows/columns to provide totals 49" }, { "page_index": 519, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_050.png", "page_index": 519, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:50+07:00" }, "raw_text": "Classifier Evaluation Metrics: Accuracy, Error Rate, Sensitivity and Specificity Class imbalance problem: one class may be rare, e.g., fraud or HIV-positive; significant majority of the negative class and minority of the positive class Classifier accuracy (recognition rate): percentage of test set tuples that are correctly classified: Accuracy = (TP + TN)/All Error rate: 1 - accuracy = (FP + FN)/All Sensitivity (true positive recognition rate) = TP/P Specificity (true negative recognition rate) = TN/N 50" }, { "page_index": 520, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_051.png", "page_index": 520, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:52:57+07:00" }, "raw_text": "Classifier Evaluation Metrics: Precision and Recall, and F-measures Precision (exactness): what % of tuples that the classifier labeled as positive are actually positive? precision = TP / (TP + FP) Recall (completeness): what % of positive tuples did the classifier label as positive?
recall = TP / (TP + FN); perfect score is 1.0 Inverse relationship between precision & recall F measure (F1 or F-score): harmonic mean of precision and recall: F = (2 × precision × recall) / (precision + recall) F_β assigns β² times as much weight to recall as to precision: F_β = ((1 + β²) × precision × recall) / (β² × precision + recall) 51" }, { "page_index": 521, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_052.png", "page_index": 521, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:53:02+07:00" }, "raw_text": "Classifier Evaluation Metrics: Example Actual cancer = yes: predicted yes 90, predicted no 210, total 300, recognition 30.00% (sensitivity); actual cancer = no: predicted yes 140, predicted no 9560, total 9700, recognition 98.56% (specificity); totals: 230, 9770, 10000, 96.50% (accuracy). Precision = 90/230 = 39.13%; Recall = 90/300 = 30.00% 52" }
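The confusion-matrix metrics above, checked against the cancer example (TP = 90, FN = 210, FP = 140, TN = 9560):

```python
def metrics(TP, FN, FP, TN):
    """Accuracy, error rate, sensitivity, specificity, precision, recall, F1."""
    P, N, All = TP + FN, FP + TN, TP + FN + FP + TN
    precision = TP / (TP + FP)
    recall = TP / (TP + FN)  # equals sensitivity
    return {
        "accuracy": (TP + TN) / All, "error": (FP + FN) / All,
        "sensitivity": TP / P, "specificity": TN / N,
        "precision": precision, "recall": recall,
        "F1": 2 * precision * recall / (precision + recall),
    }

m = metrics(TP=90, FN=210, FP=140, TN=9560)
print(round(m["precision"], 4), round(m["recall"], 2), round(m["accuracy"], 3))
# 0.3913 0.3 0.965, matching the slide's 39.13%, 30.00%, 96.50%
```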
, { "page_index": 522, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_053.png", "page_index": 522, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:53:07+07:00" }, "raw_text": "Evaluating Classifier Accuracy: Holdout & Cross-Validation Methods Holdout method Given data is randomly partitioned into two independent sets Training set (e.g., 2/3) for model construction Test set (e.g., 1/3) for accuracy estimation Random subsampling: a variation of holdout Repeat holdout k times; accuracy = avg. of the accuracies obtained Cross-validation (k-fold, where k = 10 is most popular) Randomly partition the data into k mutually exclusive subsets, each of approximately equal size At the i-th iteration, use D_i as the test set and the others as the training set Leave-one-out: k folds where k = # of tuples, for small-sized data Stratified cross-validation: folds are stratified so that the class dist. in each fold is approx. the same as that in the initial data 53" }, { "page_index": 523, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_054.png", "page_index": 523, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:53:12+07:00" }, "raw_text": "Evaluating Classifier Accuracy: Bootstrap Bootstrap Works well with small data sets Samples the given training tuples uniformly with replacement i.e., each time a tuple is selected, it is equally likely to be selected again and re-added to the training set Several bootstrap methods; a common one is the .632 bootstrap A data set with d tuples is sampled d times, with replacement, resulting in a training set of d samples. The data tuples that did not make it into the training set end up forming the test set. About 63.2% of the original data end up in the bootstrap sample, and the remaining 36.8% form the test set (since (1 - 1/d)^d ≈ e^(-1) = 0.368) Repeat the sampling procedure k times; overall accuracy of the model: Acc(M) = (1/k) Σ_{i=1}^{k} (0.632 × Acc(M_i)_test_set + 0.368 × Acc(M_i)_train_set) 54" }, { "page_index": 524, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_055.png", "page_index": 524, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:53:16+07:00" }, "raw_text": "Estimating Confidence Intervals: Classifier Models M1 vs. M2 Suppose we have 2 classifiers, M1 and M2; which one is better? Use 10-fold cross-validation to obtain err(M1) and err(M2) These mean error rates are just estimates of error on the true population of future data cases What if the difference between the 2 error rates is just attributed to chance? Use a test of statistical significance Obtain confidence limits for our error estimates 55" }, { "page_index": 525, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_056.png", "page_index": 525, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:53:20+07:00" }, "raw_text": "Estimating Confidence Intervals: Null Hypothesis Perform 10-fold cross-validation Assume samples follow a t distribution with k-1 degrees of freedom (here, k = 10) Use the t-test (or Student's t-test) Null hypothesis: M1 & M2 are the same If we can reject the null hypothesis, then we conclude that the difference between M1 & M2 is statistically significant Choose the model with the lower error rate 56" }, { "page_index": 526, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_057.png", "page_index": 526, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:53:26+07:00" }, "raw_text": "Estimating Confidence Intervals: t-test If only 1 test set is available: pairwise comparison For the i-th round of 10-fold cross-validation, the same cross partitioning is used to obtain err(M1)_i and err(M2)_i Average over 10 rounds to get err(M1) and err(M2) The t-test computes the t-statistic with k-1 degrees of freedom: t = (err(M1) - err(M2)) / sqrt(var(M1 - M2)/k), where var(M1 - M2) = (1/k) Σ_{i=1}^{k} [err(M1)_i - err(M2)_i - (err(M1) - err(M2))]² If two test sets are available: use the non-paired t-test with var(M1 - M2) = var(M1)/k1 + var(M2)/k2, where k1 & k2 are the # of cross-validation samples used for M1 & M2, resp. 57" }
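A sketch of the paired t-test on per-fold cross-validation error rates, following the slide's formulas; the err1/err2 values are made up for illustration, and 2.262 is the df = 9, p = 0.025 entry from the t-table on the next slide:

```python
from math import sqrt

err1 = [0.12, 0.10, 0.14, 0.11, 0.13, 0.12, 0.10, 0.15, 0.11, 0.12]  # illustrative
err2 = [0.13, 0.12, 0.15, 0.12, 0.14, 0.14, 0.11, 0.16, 0.13, 0.14]  # illustrative

k = len(err1)
d = [a - b for a, b in zip(err1, err2)]          # per-fold differences
mean_d = sum(d) / k
var_d = sum((x - mean_d) ** 2 for x in d) / k    # var(M1 - M2) as on the slide
t = mean_d / sqrt(var_d / k)

# Reject the null hypothesis (M1 and M2 are the same) if |t| exceeds the
# critical value for k-1 = 9 degrees of freedom at sig = 0.05 (z = sig/2 column).
print(round(t, 2), abs(t) > 2.262)
```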
{ "page_index": 527, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_058.png", "page_index": 527, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:54:13+07:00" }, "raw_text": "Estimating Confidence Intervals: Table for t-distribution. [Table: t-distribution critical values; rows give degrees of freedom (1-30, 40, 50, 60, 80, 100, 1000, ∞), columns give the upper-tail probability p from .25 down to .0005, equivalently the confidence level C from 50% to 99.9%. E.g., for df = 9 and p = .025, the critical value is t = 2.262.] The t-distribution is symmetric. Significance level: e.g., sig = 0.05 or 5% means M1 & M2 are significantly different for 95% of the population. Confidence limit: z = sig/2 58" },
{ "page_index": 528, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_059.png", "page_index": 528, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:54:18+07:00" }, "raw_text": "Estimating Confidence Intervals: Statistical Significance. Are M1 & M2 significantly different? Compute t. Select a significance level (e.g., sig = 5%). Consult the table for the t-distribution: find the t-value corresponding to k−1 degrees of freedom (here, 9). The t-distribution is symmetric: typically only the upper-tail points of the distribution are shown -> look up the value for confidence limit z = sig/2 (here, 0.025). If t > z or t < −z, then the t-value lies in the rejection region: reject the null hypothesis that the mean error rates of M1 & M2 are the same; conclude: statistically significant difference between M1 & M2. Otherwise, conclude that any difference is chance 59" },
{ "page_index": 529, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_060.png", "page_index": 529, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:54:26+07:00" }, "raw_text": "Model Selection: ROC Curves. ROC (Receiver Operating Characteristics) curves: for visual comparison of classification models. Originated from signal detection theory. Shows the trade-off between the true positive rate and the false positive rate. The area under the ROC curve is a measure of the accuracy of the model. Rank the test tuples in decreasing order: the one that is most likely to belong to the positive class appears at the top of the list (a minimal sketch of this procedure follows below). The closer the curve is to the diagonal line (i.e., the closer the area is to 0.5), the less accurate is the model; a model with perfect accuracy will have an area of 1.0. [Figure: ROC plot with the true positive rate on the vertical axis and the false positive rate on the horizontal axis, both from 0.0 to 1.0; a diagonal line marks random guessing.] 60" },
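A minimal sketch of the ranking procedure above: sort test tuples by decreasing classifier score, sweep the threshold down the list accumulating (FPR, TPR) points, and integrate the area by trapezoids. The scores and labels are illustrative; ties in scores would need extra care.

```python
def roc_points(scores, labels):
    """scores: classifier scores; labels: 1 = positive, 0 = negative."""
    ranked = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, y in ranked:
        if y == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))   # (FPR, TPR) at this cutoff
    return points

def auc(points):
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2     # trapezoidal rule
    return area

pts = roc_points([0.9, 0.8, 0.7, 0.6, 0.55, 0.54], [1, 1, 0, 1, 0, 0])
print(f"AUC = {auc(pts):.3f}")                # 0.889 for this toy ranking
```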
{ "page_index": 530, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_061.png", "page_index": 530, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:54:31+07:00" }, "raw_text": "Issues Affecting Model Selection. Accuracy: classifier accuracy: predicting class label. Speed: time to construct the model (training time); time to use the model (classification/prediction time). Robustness: handling noise and missing values. Scalability: efficiency in disk-resident databases. Interpretability: understanding and insight provided by the model; size or compactness of classification rules 61" },
{ "page_index": 531, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_062.png", "page_index": 531, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:54:34+07:00" }, "raw_text": "Chapter 8. Classification: Basic Concepts. Classification: Basic Concepts. Decision Tree Induction. Bayes Classification Methods. Rule-Based Classification. Model Evaluation and Selection. Techniques to Improve Classification Accuracy: Ensemble Methods. Summary 62" },
{ "page_index": 532, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_063.png", "page_index": 532, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:54:38+07:00" }, "raw_text": "Ensemble methods: use a combination of models to increase accuracy, with the aim of creating an improved model M*. [Figure: a new data sample is fed to several models trained on the data; their votes are combined into a single class prediction.] Popular ensemble methods: Bagging: averaging the prediction over a collection of classifiers. Boosting: weighted vote with a collection of classifiers. Ensemble: combining a set of heterogeneous classifiers 63" },
{ "page_index": 533, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_064.png", "page_index": 533, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:54:44+07:00" }, "raw_text": "Bagging: Bootstrap Aggregation. Analogy: diagnosis based on multiple doctors' majority vote. Training: given a set D of d tuples, at each iteration i, a training set D_i of d tuples is sampled with replacement from D (i.e., a bootstrap sample); a classifier model M_i is learned for each training set D_i. Classification: to classify an unknown sample X, each classifier M_i returns its class prediction; the bagged classifier M* counts the votes and assigns the class with the most votes to X (sketched below).
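A minimal sketch of the bagging procedure just described, assuming a caller-supplied `fit` that learns a classifier from a list of (x, y) tuples and returns a callable model; all names are illustrative.

```python
import random
from collections import Counter

def bagging_fit(data, fit, k=10, seed=0):
    """Learn k models, each on a bootstrap sample of the data."""
    rng = random.Random(seed)
    d = len(data)
    return [fit([data[rng.randrange(d)] for _ in range(d)]) for _ in range(k)]

def bagging_predict(models, x):
    """Majority vote of the individual models' class predictions."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]
```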
Prediction: bagging can also be applied to the prediction of continuous values by taking the average of the predictions for a given test tuple. Accuracy: often significantly better than a single classifier derived from D; for noisy data: not considerably worse, more robust; proven improved accuracy in prediction 64" },
{ "page_index": 534, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_065.png", "page_index": 534, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:54:49+07:00" }, "raw_text": "Boosting. Analogy: consult several doctors, based on a combination of weighted diagnoses, with weights assigned based on previous diagnosis accuracy. How boosting works: weights are assigned to each training tuple; a series of k classifiers is iteratively learned; after a classifier M_i is learned, the weights are updated so that more attention is paid to the training tuples that were misclassified by M_i; the final M* combines the votes of each individual classifier, where the weight of each classifier's vote is a function of its accuracy. The boosting algorithm can be extended for numeric prediction. Compared with bagging: boosting tends to have greater accuracy, but it also risks overfitting the model to misclassified data 65" },
{ "page_index": 535, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_066.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_066.png", "page_index": 535, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:54:54+07:00" }, "raw_text": "Adaboost (Freund and Schapire, 1997). Given a set of d class-labeled tuples, (X1, y1), ..., (Xd, yd). Initially, all tuple weights are set the same (1/d). Generate k classifiers in k rounds. At round i: tuples from D are sampled (with replacement) to form a training set D_i of the same size; each tuple's chance of being selected is based on its weight; a classification model M_i is derived from D_i; its error rate is calculated using D_i as a test set; if a tuple is misclassified, its weight is increased, otherwise it is decreased. Error rate: err(X_j) is the misclassification error of tuple X_j. Classifier M_i's error rate is the sum of the weights of the misclassified tuples: error(M_i) = Σ_j w_j × err(X_j). The weight of classifier M_i's vote is log((1 − error(M_i)) / error(M_i)) 66" },
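A minimal Adaboost sketch following the round structure above: weighted resampling, weight decrease for correctly classified tuples followed by renormalization, and log((1 − err)/err) vote weights. `fit_weak` is a hypothetical weak-learner factory; everything else is illustrative.

```python
import math, random

def adaboost(data, labels, fit_weak, k=10, seed=0):
    rng = random.Random(seed)
    d = len(data)
    w = [1.0 / d] * d                                  # uniform initial tuple weights
    ensemble = []                                      # (vote_weight, model) pairs
    for _ in range(k):
        idx = rng.choices(range(d), weights=w, k=d)    # sample by weight, with replacement
        model = fit_weak([data[i] for i in idx], [labels[i] for i in idx])
        miss = [model(data[i]) != labels[i] for i in range(d)]
        err = sum(wi for wi, m in zip(w, miss) if m)   # weighted error on D
        if err == 0 or err >= 0.5:                     # degenerate round: stop (a common convention)
            break
        beta = err / (1 - err)
        w = [wi * (beta if not m else 1.0) for wi, m in zip(w, miss)]
        s = sum(w)
        w = [wi / s for wi in w]                       # renormalize the weights
        ensemble.append((math.log(1 / beta), model))   # vote weight log((1-err)/err)
    return ensemble

def predict(ensemble, x):
    votes = {}
    for alpha, model in ensemble:
        c = model(x)
        votes[c] = votes.get(c, 0.0) + alpha
    return max(votes, key=votes.get)
```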
{ "page_index": 536, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_067.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_067.png", "page_index": 536, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:00+07:00" }, "raw_text": "Random Forest (Breiman 2001). Random Forest: each classifier in the ensemble is a decision tree classifier and is generated using a random selection of attributes at each node to determine the split. During classification, each tree votes and the most popular class is returned. Two methods to construct a Random Forest: Forest-RI (random input selection): randomly select, at each node, F attributes as candidates for the split at the node; the CART methodology is used to grow the trees to maximum size. Forest-RC (random linear combinations): creates new attributes (or features) that are linear combinations of the existing attributes (reduces the correlation between individual classifiers). Comparable in accuracy to Adaboost, but more robust to errors and outliers. Insensitive to the number of attributes selected for consideration at each split, and faster than bagging or boosting 67" },
{ "page_index": 537, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_068.png", "page_index": 537, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:06+07:00" }, "raw_text": "Classification of Class-Imbalanced Data Sets. Class-imbalance problem: rare positive examples but numerous negative ones, e.g., medical diagnosis, fraud, oil-spill, fault, etc. Traditional methods assume a balanced distribution of classes and equal error costs: not suitable for class-imbalanced data. Typical methods for imbalanced data in 2-class classification: Oversampling: re-sampling of data from the positive class. Under-sampling: randomly eliminate tuples from the negative class. Threshold-moving: move the decision threshold, t, so that the rare class tuples are easier to classify, and hence there is less chance of costly false negative errors. Ensemble techniques: ensembles of multiple classifiers as introduced above. The class imbalance problem remains difficult on multiclass tasks 68" },
{ "page_index": 538, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_069.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_069.png", "page_index": 538, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:11+07:00" }, "raw_text": "Chapter 8. Classification: Basic Concepts. Classification: Basic Concepts. Decision Tree Induction. Bayes Classification Methods. Rule-Based Classification. Model Evaluation and Selection. Techniques to Improve Classification Accuracy: Ensemble Methods. Summary 69" },
{ "page_index": 539, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_070.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_070.png", "page_index": 539, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:16+07:00" }, "raw_text": "Classification is a form of data analysis that extracts models describing important data classes. Effective and scalable methods have been developed for decision tree induction, Naive Bayesian classification, rule-based classification, and many other classification methods. Evaluation metrics include: accuracy, sensitivity, specificity, precision, recall, F measure, and Fβ measure. Stratified k-fold cross-validation is recommended for accuracy estimation. Bagging and boosting can be used to increase overall accuracy by learning and combining a series of individual models.
70" },
{ "page_index": 540, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_071.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_071.png", "page_index": 540, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:20+07:00" }, "raw_text": "Summary (II). Significance tests and ROC curves are useful for model selection. There have been numerous comparisons of the different classification methods; the matter remains a research topic. No single method has been found to be superior over all others for all data sets. Issues such as accuracy, training time, robustness, scalability, and interpretability must be considered and can involve trade-offs, further complicating the quest for an overall superior method 71" },
{ "page_index": 541, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_072.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_072.png", "page_index": 541, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:27+07:00" }, "raw_text": "References (1). C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future Generation Computer Systems, 13, 1997. C. M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995. L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984. C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Data Mining and Knowledge Discovery, 2(2): 121-168, 1998. P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data for scaling machine learning. KDD'95. H. Cheng, X. Yan, J. Han, and C.-W. Hsu. Discriminative Frequent Pattern Analysis for Effective Classification. ICDE'07. H. Cheng, X. Yan, J. Han, and P. S. Yu. Direct Discriminative Pattern Mining for Effective Classification. ICDE'08. W. Cohen. Fast effective rule induction. ICML'95. G. Cong, K.-L. Tan, A. K. H. Tung, and X. Xu. Mining top-k covering rule groups for gene expression data. SIGMOD'05. 72" },
{ "page_index": 542, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_073.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_073.png", "page_index": 542, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:34+07:00" }, "raw_text": "References (2). A. J. Dobson. An Introduction to Generalized Linear Models. Chapman & Hall, 1990. G. Dong and J. Li. Efficient mining of emerging patterns: Discovering trends and differences. KDD'99. R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, 2nd ed. John Wiley, 2001. U. M. Fayyad. Branching on attribute values in decision tree generation. AAAI'94. Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Computer and System Sciences, 1997. J. Gehrke, R. Ramakrishnan, and V. Ganti. Rainforest: A framework for fast decision tree construction of large datasets. VLDB'98. J. Gehrke, V. Ganti, R. Ramakrishnan, and W.-Y. Loh. BOAT -- Optimistic Decision Tree Construction. SIGMOD'99.
T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, 2001. D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 1995. W. Li, J. Han, and J. Pei. CMAR: Accurate and Efficient Classification Based on Multiple Class-Association Rules. ICDM'01. 73" },
{ "page_index": 543, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_074.png", "page_index": 543, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:40+07:00" }, "raw_text": "References (3). T.-S. Lim, W.-Y. Loh, and Y.-S. Shih. A comparison of prediction accuracy, complexity and training time of thirty-three old and new classification algorithms. Machine Learning, 2000. J. Magidson. The CHAID approach to segmentation modeling: Chi-squared automatic interaction detection. In R. P. Bagozzi, editor, Advanced Methods of Marketing Research, Blackwell Business, 1994. M. Mehta, R. Agrawal, and J. Rissanen. SLIQ: A fast scalable classifier for data mining. EDBT'96. T. M. Mitchell. Machine Learning. McGraw Hill, 1997. S. K. Murthy. Automatic Construction of Decision Trees from Data: A Multi-Disciplinary Survey. Data Mining and Knowledge Discovery 2(4): 345-389, 1998. J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986. J. R. Quinlan and R. M. Cameron-Jones. FOIL: A midterm report. ECML'93. J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993. J. R. Quinlan. Bagging, boosting, and C4.5. AAAI'96. 74" },
{ "page_index": 544, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_075.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_075.png", "page_index": 544, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:46+07:00" }, "raw_text": "References (4). R. Rastogi and K. Shim. PUBLIC: A decision tree classifier that integrates building and pruning. VLDB'98. J. Shafer, R. Agrawal, and M. Mehta. SPRINT: A scalable parallel classifier for data mining. VLDB'96. J. W. Shavlik and T. G. Dietterich. Readings in Machine Learning. Morgan Kaufmann, 1990. P. Tan, M. Steinbach, and V. Kumar. Introduction to Data Mining. Addison Wesley, 2005. S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann, 1991. S. M. Weiss and N. Indurkhya. Predictive Data Mining. Morgan Kaufmann, 1997. I. H. Witten and E. Frank. Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed. Morgan Kaufmann, 2005. X. Yin and J. Han. CPAR: Classification based on predictive association rules. SDM'03. H. Yu, J. Yang, and J. Han. Classifying large data sets using SVM with hierarchical clusters. KDD'03.
75" },
{ "page_index": 545, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_076.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_076.png", "page_index": 545, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:50+07:00" }, "raw_text": "" },
{ "page_index": 546, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_077.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_077.png", "page_index": 546, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:55:55+07:00" }, "raw_text": "CS412 Midterm Exam Statistics. Opinion question answering: like the style: 70.83%, dislike: 29.16%. Exam is hard: 55.75%, easy: 0.6%, just right: 43.63%. Time: plenty: 3.03%, enough: 36.96%, not enough: 60%. Score distribution: # of students (total: 180): >=90: 24; 80-89: 54; 70-79: 46; 60-69: 37; 50-59: 15; 40-49: 2; <40: 2. Final grading is based on overall score accumulation and relative class distributions 77" },
{ "page_index": 547, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_078.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_078.png", "page_index": 547, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:00+07:00" }, "raw_text": "Issues: Evaluating Classification Methods. Accuracy: classifier accuracy: predicting class label; predictor accuracy: guessing value of predicted attributes. Speed: time to construct the model (training time); time to use the model (classification/prediction time). Robustness: handling noise and missing values. Scalability: efficiency in disk-resident databases. Interpretability: understanding and insight provided by the model; size or compactness of classification rules 78" },
{ "page_index": 548, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_079.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_079.png", "page_index": 548, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:06+07:00" }, "raw_text": "Predictor Error Measures. Measure predictor accuracy: measure how far off the predicted value is from the actual known value. Loss function: measures the error between
the actual value y_i and the predicted value y_i'. Absolute error: |y_i − y_i'|. Squared error: (y_i − y_i')². Test error (generalization error): the average loss over the test set. Mean absolute error: (1/d) Σ_{i=1..d} |y_i − y_i'|. Mean squared error: (1/d) Σ_{i=1..d} (y_i − y_i')². Relative absolute error: Σ_{i=1..d} |y_i − y_i'| / Σ_{i=1..d} |y_i − ȳ|. Relative squared error: Σ_{i=1..d} (y_i − y_i')² / Σ_{i=1..d} (y_i − ȳ)². The mean squared error exaggerates the presence of outliers; popularly used are the (square) root mean squared error and, similarly, the root relative squared error 79" },
{ "page_index": 549, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_080.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_080.png", "page_index": 549, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:10+07:00" }, "raw_text": "Scalable Decision Tree Induction Methods. SLIQ (EDBT'96 - Mehta et al.): builds an index for each attribute; only the class list and the current attribute list reside in memory. SPRINT (VLDB'96 - J. Shafer et al.): constructs an attribute list data structure. PUBLIC (VLDB'98 - Rastogi & Shim): integrates tree splitting and tree pruning: stops growing the tree earlier. RainForest (VLDB'98 - Gehrke, Ramakrishnan & Ganti): builds an AVC-list (attribute, value, class label). BOAT (PODS'99 - Gehrke, Ganti, Ramakrishnan & Loh): uses bootstrapping to create several small samples 80" },
{ "page_index": 550, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_081.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_7/slide_081.png", "page_index": 550, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:15+07:00" }, "raw_text": "Data Cube-Based Decision-Tree Induction. Integration of generalization with decision-tree induction (Kamber et al.'97). Classification at primitive concept levels: e.g., precise temperature, humidity, outlook, etc.; low-level concepts, scattered classes, bushy classification trees; semantic interpretation problems. Cube-based multi-level classification: relevance analysis at multiple levels; information-gain analysis with dimension + level 81" },
{ "page_index": 551, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_001.png", "page_index": 551, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:18+07:00" }, "raw_text": "Data Mining: Concepts and Techniques (3rd ed.) - Chapter 9 - Classification: Advanced Methods - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University @2011 Han, Kamber & Pei. All rights reserved. 1" },
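A minimal sketch of the predictor error measures above; the tiny data vectors are illustrative.

```python
import math

def error_measures(y, yp):
    """y: actual values; yp: predicted values."""
    d = len(y)
    mean_y = sum(y) / d
    mae = sum(abs(a - p) for a, p in zip(y, yp)) / d            # mean absolute error
    mse = sum((a - p) ** 2 for a, p in zip(y, yp)) / d          # mean squared error
    rae = sum(abs(a - p) for a, p in zip(y, yp)) / sum(abs(a - mean_y) for a in y)
    rse = sum((a - p) ** 2 for a, p in zip(y, yp)) / sum((a - mean_y) ** 2 for a in y)
    return {"MAE": mae, "RMSE": math.sqrt(mse),                 # root mean squared error
            "RAE": rae, "RRSE": math.sqrt(rse)}                 # relative / root relative

print(error_measures([3.0, 5.0, 7.0], [2.5, 5.5, 8.0]))
```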
{ "page_index": 552, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_002.png", "page_index": 552, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:22+07:00" }, "raw_text": "Chapter 9. Classification: Advanced Methods. Bayesian Belief Networks. Classification by Backpropagation. Support Vector Machines. Classification by Using Frequent Patterns. Lazy Learners (or Learning from Your Neighbors). Other Classification Methods. Additional Topics Regarding Classification. Summary 2" },
{ "page_index": 553, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_003.png", "page_index": 553, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:27+07:00" }, "raw_text": "Bayesian Belief Networks. Bayesian belief networks (also known as Bayesian networks, probabilistic networks): allow class conditional independencies between subsets of variables. A (directed acyclic) graphical model of causal relationships: represents dependency among the variables; gives a specification of the joint probability distribution. Nodes: random variables. Links: dependency. [Figure: nodes X, Y, Z, P with links X -> Z, Y -> Z, and Y -> P.] X and Y are the parents of Z, and Y is the parent of P. There is no dependency between Z and P. The graph has no loops/cycles 3" },
{ "page_index": 554, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_004.png", "page_index": 554, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:34+07:00" }, "raw_text": "Bayesian Belief Network: An Example. CPT: Conditional Probability Table for variable LungCancer (LC), with parents FamilyHistory (FH) and Smoker (S): P(LC | FH, S) = 0.8, P(LC | FH, ¬S) = 0.5, P(LC | ¬FH, S) = 0.7, P(LC | ¬FH, ¬S) = 0.1; correspondingly P(¬LC | ...) = 0.2, 0.5, 0.3, 0.9. The CPT shows the conditional probability for each possible combination of values of the parents. [Figure: Bayesian belief network over FamilyHistory, Smoker, LungCancer, Emphysema, PositiveXRay, Dyspnea.] Derivation of the probability of a particular combination of values of X from the CPTs: P(x_1, ..., x_n) = Π_{i=1..n} P(x_i | Parents(x_i)) 4" },
{ "page_index": 555, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_005.png", "page_index": 555, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:40+07:00" }, "raw_text": "Training Bayesian Networks: Several Scenarios. Scenario 1: given both the network structure and all variables observable: compute only the CPT entries. Scenario 2: network structure known, some variables hidden: gradient descent (greedy hill-climbing) method, i.e., search for a solution along the steepest descent of a criterion function; weights are initialized to random probability values; at each iteration, the method moves towards what appears to be the best solution at the moment, without backtracking; weights are updated at each iteration & converge to a local optimum. Scenario 3: network structure unknown, all variables observable: search through the model space to reconstruct the network topology. Scenario 4: unknown structure, all hidden variables: no good algorithms are known for this purpose. D. Heckerman. A Tutorial on Learning with Bayesian Networks. In Learning in Graphical Models, M. Jordan, ed., MIT Press, 1999. 5" },
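A minimal sketch of the chain rule above, P(x1, ..., xn) = Π_i P(xi | Parents(xi)). Only the P(LC | FH, S) row comes from the slide's CPT; the priors for FH and S are assumed values for illustration.

```python
# CPTs keyed by parent assignments; priors for FH and S are assumed.
parents = {"FH": [], "S": [], "LC": ["FH", "S"]}
cpt = {
    "FH": {(): 0.10},                                   # P(FH = true), assumed prior
    "S":  {(): 0.30},                                   # P(S = true), assumed prior
    "LC": {(True, True): 0.8, (True, False): 0.5,
           (False, True): 0.7, (False, False): 0.1},    # from the slide's CPT
}

def p_true(var, assignment):
    key = tuple(assignment[p] for p in parents[var])
    return cpt[var][key]

def joint(assignment):
    """Joint probability of a full assignment via the chain rule."""
    prob = 1.0
    for var, value in assignment.items():
        p = p_true(var, assignment)
        prob *= p if value else (1.0 - p)
    return prob

print(joint({"FH": True, "S": True, "LC": True}))       # 0.10 * 0.30 * 0.8 = 0.024
```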
{ "page_index": 556, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_006.png", "page_index": 556, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:44+07:00" }, "raw_text": "Chapter 9. Classification: Advanced Methods. Bayesian Belief Networks. Classification by Backpropagation. Support Vector Machines. Classification by Using Frequent Patterns. Lazy Learners (or Learning from Your Neighbors). Other Classification Methods. Additional Topics Regarding Classification. Summary 6" },
{ "page_index": 557, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_007.png", "page_index": 557, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:49+07:00" }, "raw_text": "Classification by Backpropagation. Backpropagation: a neural network learning algorithm. Started by psychologists and neurobiologists to develop and test computational analogues of neurons. A neural network: a set of connected input/output units where each connection has a weight associated with it. During the learning phase, the network learns by adjusting the weights so as to be able to predict the correct class label of the input tuples. Also referred to as connectionist learning due to the connections between units 7" },
{ "page_index": 558, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_008.png", "page_index": 558, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:54+07:00" }, "raw_text": "Neural Network as a Classifier. Weakness: long training time; requires a number of parameters typically best determined empirically, e.g., the network topology or "structure"; poor interpretability: difficult to interpret the symbolic meaning behind the learned weights and the "hidden units" in the network. Strength: high tolerance to noisy data; ability to classify untrained patterns; well-suited for continuous-valued inputs and outputs; successful on an array of real-world data, e.g., hand-written letters; algorithms are inherently parallel; techniques have recently been developed for the extraction of rules from trained neural networks 8" },
{ "page_index": 559, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_009.png", "page_index": 559, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:56:58+07:00" }, "raw_text": "A Multi-Layer Feed-Forward Neural Network. [Figure: input vector X enters the input layer; weighted connections w_ij feed a hidden layer; the hidden layer's weighted outputs feed the output layer, which emits the output vector; the weight-update annotation involves the error term (y_j − ŷ_j).] 9" },
{ "page_index": 560, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_010.png", "page_index": 560, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:57:04+07:00" }, "raw_text": "How A Multi-Layer Neural Network Works. The inputs to the network correspond to the attributes measured for each training tuple. Inputs are fed simultaneously into the units making up the input layer. They are then weighted and fed simultaneously to a hidden layer. The number of hidden layers is arbitrary, although usually only one. The weighted outputs of the last hidden layer are input to the units making up the output layer, which emits the network's prediction. The network is feed-forward: none of the weights cycles back to an input unit or to an output unit of a previous layer. From a statistical point of view, networks perform nonlinear regression: given enough hidden units and enough training samples, they can closely approximate any function 10" },
{ "page_index": 561, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_011.png", "page_index": 561, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:57:09+07:00" }, "raw_text": "Defining a Network Topology. Decide the network topology: specify the # of units in the input layer, # of hidden layers (if > 1), # of units in each hidden layer, and # of units in the output layer. Normalize the input values for each attribute measured in the training tuples to [0.0, 1.0]. One input unit per domain value, each initialized to 0. Output: if for classification and more than two classes, one output unit per class is used. If a network has been trained and its accuracy is unacceptable, repeat the training process with a different network topology or a different set of initial weights 11" },
{ "page_index": 562, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_012.png", "page_index": 562, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:57:15+07:00" }, "raw_text": "Backpropagation. Iteratively process a set of training tuples & compare the network's prediction with the actual known target value. For each training tuple, the weights are modified to minimize the mean squared error between the network's prediction and the actual target value. Modifications are made in the "backwards" direction: from the output layer, through each hidden layer, down to the first hidden layer, hence "backpropagation". Steps: initialize weights to small random numbers, associated with biases; propagate the inputs forward (by applying an activation function); backpropagate the error (by updating weights and biases); terminating condition (when error is very small, etc.) 12" },
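A minimal sketch of the steps above for a single output unit: small random initial weights, a forward pass through a sigmoid activation, and one backpropagation-style update of the weights and bias. The data, shapes, and learning rate are illustrative.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

rng = random.Random(0)
n_in = 3
w = [rng.uniform(-0.5, 0.5) for _ in range(n_in)]        # small random weights
b = rng.uniform(-0.5, 0.5)                               # bias

x, target = [0.2, 0.7, 0.1], 1.0
lr = 0.1                                                 # learning rate

out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)  # propagate inputs forward
err = out * (1 - out) * (target - out)                   # output-unit error term
w = [wi + lr * err * xi for wi, xi in zip(w, x)]         # backpropagate: update weights
b += lr * err                                            # ... and the bias
print(f"output={out:.3f} error_term={err:.4f}")
```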
{ "page_index": 563, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_013.png", "page_index": 563, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:57:22+07:00" }, "raw_text": "Neuron: A Hidden/Output Layer Unit. [Figure: input vector x = (x_0, ..., x_n) with weight vector w = (w_0, ..., w_n) feeds a weighted sum; the bias μ_k is applied, and an activation function produces the output y.] For example: y = sign(Σ_{i=0..n} w_i x_i − μ_k). An n-dimensional input vector x is mapped into variable y by means of the scalar product and a nonlinear function mapping. The inputs to the unit are the outputs from the previous layer. They are multiplied by their corresponding weights to form a weighted sum, which is added to the bias associated with the unit. Then a nonlinear activation function is applied to it. 13" },
{ "page_index": 564, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_014.png", "page_index": 564, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:57:28+07:00" }, "raw_text": "Efficiency and Interpretability. Efficiency of backpropagation: each epoch (one iteration through the training set) takes O(|D| × w), with |D| tuples and w weights, but the # of epochs can be exponential in n, the number of inputs, in the worst case. For easier comprehension: rule extraction by network pruning: simplify the network structure by removing weighted links that have the least effect on the trained network; then perform link, unit, or activation value clustering; the sets of input and activation values are studied to derive rules describing the relationship between the input and hidden unit layers. Sensitivity analysis: assess the impact that a given input variable has on a network output; the knowledge gained from this analysis can be represented in rules 14" },
{ "page_index": 565, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_015.png", "page_index": 565, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:57:32+07:00" }, "raw_text": "Chapter 9. Classification: Advanced Methods. Bayesian Belief Networks. Classification by Backpropagation. Support Vector Machines. Classification by Using Frequent Patterns. Lazy Learners (or Learning from Your Neighbors). Other Classification Methods. Additional Topics Regarding Classification. Summary 15" },
{ "page_index": 566, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_016.png", "page_index": 566, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:57:38+07:00" }, "raw_text": "Classification: A Mathematical Mapping. Classification: predicts categorical class labels. E.g., personal homepage classification: x_i = (x1, x2, x3, ...), y_i = +1 or −1; x1: # of occurrences of the word "homepage"; x2: # of occurrences of the word "welcome". Mathematically, x ∈ X = R^n, y ∈ Y = {+1, −1}; we want to derive a function f: X -> Y. Linear classification: binary classification problem. [Figure: 2-D scatter of 'x' and 'o' points separated by a red line.] Data above the red line belongs to class 'x'; data below the red line belongs to class 'o'. Examples: SVM, Perceptron, Probabilistic Classifiers 16" },
{ "page_index": 567, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_017.png", "page_index": 567, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:57:43+07:00" }, "raw_text": "Discriminative Classifiers. Advantages: prediction accuracy is generally high, as compared to Bayesian methods in general; robust: works when training examples contain errors; fast evaluation of the learned target function (Bayesian networks are normally slow). Criticism: long training time; difficult to understand the learned function (weights) (Bayesian networks can be used easily for pattern discovery); not easy to incorporate domain knowledge (easy in the form of priors on the data or distributions) 17" },
{ "page_index": 568, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_018.png", "page_index": 568, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:57:47+07:00" }, "raw_text": "A relatively new classification method for both linear and nonlinear data. It uses a nonlinear mapping to transform the original training data into a higher dimension. In the new dimension, it searches for the linear optimal separating hyperplane (i.e., "decision boundary"). With an appropriate nonlinear mapping to a sufficiently high dimension, data from two classes can always be separated by a hyperplane. SVM finds this hyperplane using support vectors ("essential" training tuples) and margins (defined by the support vectors) 18" },
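A minimal perceptron sketch for the linear binary-classification setting above (x ∈ R^n, y ∈ {+1, −1}, f(x) = sign(w·x + b)); the perceptron is one of the linear classifiers the slide names. The toy data and learning rate are illustrative.

```python
def perceptron(samples, epochs=20, lr=1.0):
    """samples: list of (x, y) with y in {+1, -1}. Returns (w, b)."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            # Misclassified (or on the boundary): nudge w, b toward the point.
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

data = [([2.0, 1.0], +1), ([1.5, 2.0], +1), ([-1.0, -0.5], -1), ([-2.0, 1.0], -1)]
w, b = perceptron(data)
print(w, b)
```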
"2025-10-31T15:57:52+07:00" }, "raw_text": "SVM-History and Applications Vapnik and colleagues (1992)-groundwork from Vapnik & Chervonenkis' statistical learning theory in 1960s Features: training can be slow but accuracy is high owing to their ability to model complex nonlinear decision boundaries (margin maximization) Used for: classification and numeric prediction Applications: handwritten digit recognition, object recognition, speaker identification, benchmarking time-series prediction tests 19" }, { "page_index": 570, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_020.png", "page_index": 570, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:57:55+07:00" }, "raw_text": "SVM-General Philosophy 0 0: Small Margin Large Margin Support Vectors 20" }, { "page_index": 571, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_021.png", "page_index": 571, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:04+07:00" }, "raw_text": "SVM-Margins and Support Vectors A2 class 1, y = +1 ( buys_computer = \"yes\" C class 2, y = -1 ( buys_computer = \"no\" O O O O O smal_ihargjn O O A2 O Ai 0 O O A2 O O class 1, O 0 class 2. O O O O O O O O O O O O O O A1 O O Ai 21" }, { "page_index": 572, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_022.png", "page_index": 572, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:09+07:00" }, "raw_text": "SVM-When Data Is Linearly Separable m There are infinite lines (hyperplanes) separating the two classes but we want to find the best one (the one that minimizes classification error on unseen data) SvM searches for the hyperplane with the largest margin, i.e., maximum marginal hyperplane (MMH) 22" }, { "page_index": 573, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_023.png", "page_index": 573, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:13+07:00" }, "raw_text": "SVM-Linearly Separable A separating hyperplane can be written as W o X + b = 0 For 2-D it can be written as Wo+ W1 X1 + Wz Xz= 0 The hyperplane defining the sides of the margin: H1: Wo+ W1X1 + Wz Xz Z 1 for Yj = +1, and Hz: Wo+ w1 X1+ wz Xz <-1 for Yi=-1 Any training tuples that fall on hyperplanes H1 or H, (i.e., the sides defining the margin) are support vectors This becomes a constrained (convex) guadratic optimization problem: Quadratic objective function and linear constraints -> Quadratic Programming (QP) -> Lagrangian multipliers 23" }, { "page_index": 574, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_024.png", "metadata": { "doc_type": "slide", "course_id": 
"CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_024.png", "page_index": 574, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:18+07:00" }, "raw_text": "Why Is SVM Effective on High Dimensional Data? The complexity of trained classifier is characterized by the # of support vectors rather than the dimensionality of the data The support vectors are the essential or critical training examples - they lie closest to the decision boundary (MMH) If all other training examples are removed and the training is repeated, the same separating hyperplane would be found The number of support vectors found can be used to compute an (upper) bound on the expected error rate of the SVM classifier, which is independent of the data dimensionality Thus, an SVM with a small number of support vectors can have good generalization, even when the dimensionality of the data is high 24" }, { "page_index": 575, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_025.png", "page_index": 575, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:25+07:00" }, "raw_text": "SVM-Linearly Inseparable Transform the original input data into a higher dihensional space Example 6.8 Nonlinear transformation of original input data into a higher dimensional space. Con- sider the following example. A 3D input vector X = (x1, x2, x3) is mapped into a 6D space Z using the mappings 61(X) = x1,$2(X) = x2,$3(X) = x3,$4(X) = (x1)2,65(X) = x1x2, and 66(X) = x1x3. A decision hyperplane in the new space is d(Z)= WZ + b, where W and Z are vectors. This is linear. We solve for W and b and then substitute back so that we see that the linear decision hyperplane in the new (Z) space corresponds to a nonlinear second order polynomial in the original 3-D input space. d(Z) = W1 X1+ w2X2 + w3x3 + w4(x1)2+ w5 X1X2+ w6x1X3 +b = W1Z1 + w222 + W3Z3 + w4Z4 + w5Z5 + w6Z6 +b Search for a linear separating hyperplane in the new space 25" }, { "page_index": 576, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_026.png", "page_index": 576, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:29+07:00" }, "raw_text": "SVM: Different Kernel functions Instead of computing the dot product on the transformed data, it is math. 
equivalent to apply a kernel function K(X_i, X_j) to the original data, i.e., K(X_i, X_j) = φ(X_i) · φ(X_j). Typical kernel functions: polynomial kernel of degree h: K(X_i, X_j) = (X_i · X_j + 1)^h; Gaussian radial basis function kernel: K(X_i, X_j) = exp(−‖X_i − X_j‖² / (2σ²)); sigmoid kernel: K(X_i, X_j) = tanh(κ X_i · X_j − δ). SVM can also be used for classifying multiple (> 2) classes and for regression analysis (with additional parameters) 26" },
{ "page_index": 577, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_027.png", "page_index": 577, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:33+07:00" }, "raw_text": "Scaling SVM by Hierarchical Micro-Clustering. SVM is not scalable to the number of data objects in terms of training time and memory usage (H. Yu, J. Yang, and J. Han, "Classifying Large Data Sets Using SVM with Hierarchical Clusters", KDD'03). CB-SVM (Clustering-Based SVM): given a limited amount of system resources (e.g., memory), maximize the SVM performance in terms of accuracy and training speed. Use micro-clustering to effectively reduce the number of points to be considered. When deriving support vectors, de-cluster micro-clusters near the "candidate vectors" to ensure high classification accuracy 27" },
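A minimal sketch of the three kernel functions above, evaluated directly on the original vectors; the parameter values (h, σ, κ, δ) are illustrative choices, not prescribed by the slides.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def poly_kernel(a, b, h=2):
    """Polynomial kernel of degree h: (a.b + 1)^h."""
    return (dot(a, b) + 1) ** h

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian radial basis function kernel."""
    sq = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-sq / (2 * sigma ** 2))

def sigmoid_kernel(a, b, kappa=0.5, delta=0.0):
    """Sigmoid kernel: tanh(kappa * a.b - delta)."""
    return math.tanh(kappa * dot(a, b) - delta)

xi, xj = [1.0, 2.0], [0.5, -1.0]
print(poly_kernel(xi, xj), rbf_kernel(xi, xj), sigmoid_kernel(xi, xj))
```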
{ "page_index": 578, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_028.png", "page_index": 578, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:38+07:00" }, "raw_text": "CF-Tree: Hierarchical Micro-cluster. [Figure: a CF-tree built over the negative and positive clusters separately, with CF entries at the root, nonleaf nodes, and leaf nodes.] Read the data set once and construct a statistical summary of the data (i.e., hierarchical clusters) given a limited amount of memory. Micro-clustering: the hierarchical indexing structure provides finer samples closer to the boundary and coarser samples farther from the boundary 28" },
{ "page_index": 579, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_029.png", "page_index": 579, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:43+07:00" }, "raw_text": "Selective Declustering: Ensure High Accuracy. The CF tree is a suitable base structure for selective declustering. De-cluster only a cluster E_i such that D_i − R_i < D_s, where D_i is the distance from the boundary to the center point of E_i and R_i is the radius of E_i. De-cluster only the clusters whose subclusters could possibly be the support cluster of the boundary. "Support cluster": a cluster whose centroid is a support vector 29" },
{ "page_index": 580, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_030.png", "page_index": 580, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:48+07:00" }, "raw_text": "CB-SVM Algorithm: Outline. Construct two CF-trees from the positive and negative data sets independently (needs one scan of the data set). Train an SVM from the centroids of the root entries. De-cluster the entries near the boundary into the next level: the children entries de-clustered from the parent entries are accumulated into the training set, together with the non-declustered parent entries. Train an SVM again from the centroids of the entries in the training set. Repeat until nothing is accumulated 30" },
{ "page_index": 581, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_031.png", "page_index": 581, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:54+07:00" }, "raw_text": "Accuracy and Scalability on Synthetic Dataset. [Figure 6: synthetic data set in a two-dimensional space (positive vs. negative data): (a) original data set (N = 113,601), (b) 0.5% randomly sampled data (N = 603), (c) data distribution at the last iteration of CB-SVM (N = 597).] Experiments on large synthetic data sets show better accuracy than random sampling approaches and far more scalability than the original SVM algorithm 31" },
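A minimal sketch of the declustering test above, reading D_s as the distance of the current support vectors from the boundary (an assumption based on the CB-SVM description); the hyperplane, clusters, and threshold are illustrative.

```python
def boundary_distance(w, b, c):
    """Distance from point c to the hyperplane w.x + b = 0."""
    norm = sum(wi * wi for wi in w) ** 0.5
    return abs(sum(wi * ci for wi, ci in zip(w, c)) + b) / norm

def to_decluster(clusters, w, b, d_s):
    """Keep clusters (center, radius) with D_i - R_i < D_s for the next level."""
    return [(c, r) for c, r in clusters
            if boundary_distance(w, b, c) - r < d_s]

# Hyperplane x1 + x2 - 3 = 0; only the cluster near it gets declustered.
print(to_decluster([((2.0, 2.0), 0.5), ((5.0, 5.0), 0.5)], [1.0, 1.0], -3.0, 1.0))
```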
{ "page_index": 582, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_032.png", "page_index": 582, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:58:58+07:00" }, "raw_text": "SVM vs. Neural Network. SVM: deterministic algorithm; nice generalization properties; hard to learn: learned in batch mode using quadratic programming techniques; using kernels, can learn very complex functions. Neural Network: nondeterministic algorithm; generalizes well but doesn't have a strong mathematical foundation; can easily be learned in incremental fashion; to learn complex functions, use a multilayer perceptron (nontrivial) 32" },
{ "page_index": 583, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_033.png", "page_index": 583, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:59:02+07:00" }, "raw_text": "SVM Related Links. SVM website: http://www.kernel-machines.org/. Representative implementations: LIBSVM: an efficient implementation of SVM, multi-class classification, nu-SVM, one-class SVM, including also various interfaces with Java, Python, etc. SVM-light: simpler, but performance is not better than LIBSVM; supports only binary classification; written only in C. SVM-torch: another recent implementation, also written in C 33" },
{ "page_index": 584, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_034.png", "page_index": 584, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:59:06+07:00" }, "raw_text": "Chapter 9. Classification: Advanced Methods. Bayesian Belief Networks. Classification by Backpropagation. Support Vector Machines. Classification by Using Frequent Patterns. Lazy Learners (or Learning from Your Neighbors). Other Classification Methods. Additional Topics Regarding Classification. Summary 34" },
{ "page_index": 585, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_035.png", "page_index": 585, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:59:11+07:00" }, "raw_text": "Associative Classification. Associative classification: major steps: mine the data to find strong associations between frequent patterns (conjunctions of attribute-value pairs) and class labels; association rules are generated in the form of p1 ∧ p2 ... ∧ pl -> "A_class = C" (conf, sup); organize the rules to form a rule-based classifier. Why effective?
{ "page_index": 584, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_034.png", "page_index": 584, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:59:06+07:00" }, "raw_text": "Chapter 9. Classification: Advanced Methods Bayesian Belief Networks Classification by Backpropagation Support Vector Machines Classification by Using Frequent Patterns Lazy Learners (or Learning from Your Neighbors) Other Classification Methods Additional Topics Regarding Classification Summary 34" },
{ "page_index": 585, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_035.png", "page_index": 585, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:59:11+07:00" }, "raw_text": "Associative Classification Associative classification: Major steps Mine data to find strong associations between frequent patterns (conjunctions of attribute-value pairs) and class labels Association rules are generated in the form of p1 ^ p2 ^ ... ^ pl -> \"Aclass = C\" (conf, sup) Organize the rules to form a rule-based classifier Why effective? It explores highly confident associations among multiple attributes and may overcome some constraints introduced by decision-tree induction, which considers only one attribute at a time Associative classification has been found to be often more accurate than some traditional classification methods, such as C4.5 35" },
{ "page_index": 586, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_036.png", "page_index": 586, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:59:16+07:00" }, "raw_text": "Typical Associative Classification Methods CBA (Classification Based on Associations: Liu, Hsu & Ma, KDD'98) Mine possible association rules in the form of cond-set (a set of attribute-value pairs) -> class label Build classifier: Organize rules according to decreasing precedence based on confidence and then support CMAR (Classification based on Multiple Association Rules: Li, Han & Pei, ICDM'01) Classification: Statistical analysis on multiple rules CPAR (Classification based on Predictive Association Rules: Yin & Han, SDM'03) Generation of predictive rules (FOIL-like analysis) but allows covered rules to be retained with reduced weight Prediction using the best k rules High efficiency, accuracy similar to CMAR 36" },
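A minimal sketch of the CBA-style classifier just described: rules of the form cond-set -> class (conf, sup) are ordered by decreasing confidence, then support, and a tuple is labeled by the first rule whose antecedent it satisfies. The rules, tuples, and default class are made up for illustration; real CBA first mines the rules with an Apriori-style algorithm.

```python
# rules: (antecedent item set, class label, confidence, support)
rules = [
    ({"age=youth", "student=yes"}, "buys=yes", 0.92, 0.12),
    ({"income=high"},              "buys=yes", 0.81, 0.20),
    ({"age=senior"},               "buys=no",  0.74, 0.15),
]
# organize: decreasing precedence on (confidence, then support)
rules.sort(key=lambda r: (r[2], r[3]), reverse=True)
DEFAULT_CLASS = "buys=no"  # e.g., majority class of uncovered tuples

def classify(tuple_items):
    for antecedent, label, conf, sup in rules:
        if antecedent <= tuple_items:   # all conditions satisfied
            return label
    return DEFAULT_CLASS

print(classify({"age=youth", "student=yes", "income=low"}))  # buys=yes
print(classify({"age=middle", "income=low"}))                # default class
```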
{ "page_index": 587, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_037.png", "page_index": 587, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:59:21+07:00" }, "raw_text": "Frequent Pattern-Based Classification H. Cheng, X. Yan, J. Han, and C.-W. Hsu, \"Discriminative Frequent Pattern Analysis for Effective Classification\", ICDE'07 Accuracy issue Increase the discriminative power Increase the expressive power of the feature space Scalability issue It is computationally infeasible to generate all feature combinations and filter them with an information gain threshold Efficient method (DDPMine: FP-tree pruning): H. Cheng, X. Yan, J. Han, and P. S. Yu, \"Direct Discriminative Pattern Mining for Effective Classification\", ICDE'08 37" },
{ "page_index": 588, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_038.png", "page_index": 588, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:59:29+07:00" }, "raw_text": "Frequent Pattern vs. Single Feature The discriminative power of some frequent patterns is higher than that of single features. [Figure 1: Information Gain vs. Pattern Length on (a) Austral, (b) Cleve, (c) Sonar] 38" },
{ "page_index": 589, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_039.png", "page_index": 589, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:59:38+07:00" }, "raw_text": "Empirical Results [Figure 2: Information Gain vs. Pattern Frequency (support) on (a) Austral, (b) Breast, (c) Sonar, showing InfoGain against its upper bound IG_UpperBnd] 39" },
{ "page_index": 590, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_040.png", "page_index": 590, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T15:59:42+07:00" }, "raw_text": "Feature Selection Given a set of frequent patterns, both non-discriminative and redundant patterns exist, which can cause overfitting We want to single out the discriminative patterns and remove the redundant ones The notion of Maximal Marginal Relevance (MMR) is borrowed A document has high marginal relevance if it is both relevant to the query and contains minimal marginal similarity to previously selected documents 40" },
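A sketch of MMR-style pattern selection as borrowed above: greedily pick the pattern with the best trade-off between relevance (e.g., information gain) and redundancy (similarity to patterns already selected). The relevance scores, the Jaccard similarity, and the lambda weight are illustrative assumptions, not the paper's exact choices.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b)

def mmr_select(patterns, relevance, k, lam=0.7):
    selected, candidates = [], list(patterns)
    while candidates and len(selected) < k:
        def score(p):
            # redundancy: similarity to the closest already-selected pattern
            redundancy = max((jaccard(p, s) for s in selected), default=0.0)
            return lam * relevance[p] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

patterns = [frozenset(p) for p in [{"a"}, {"a", "b"}, {"c"}, {"b", "c"}]]
relevance = dict(zip(patterns, [0.30, 0.32, 0.28, 0.25]))  # e.g., info gain
print(mmr_select(patterns, relevance, k=2))
```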
{ "page_index": 591, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_041.png", "page_index": 591, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:03+07:00" }, "raw_text": "Experimental Results Table 1. Accuracy by SVM on frequent combined features vs. single features (Item_All, Item_FS, Item_RBF vs. Pat_All, Pat_FS) and Table 2. Accuracy by C4.5 (Item_All, Item_FS vs. Pat_All, Pat_FS): anneal: SVM 99.78, 99.78, 99.11 vs. 99.33, 99.67; C4.5 98.33, 98.33 vs. 97.22, 98.44 | austral: SVM 85.01, 85.50, 85.01 vs. 81.79, 91.14; C4.5 84.53, 84.53 vs. 84.21, 88.24 | auto: SVM 83.25, 84.21, 78.80 vs. 74.97, 90.79; C4.5 71.70, 77.63 vs. 71.14, 78.77 | breast: SVM 97.46, 97.46, 96.98 vs. 96.83, 97.78; C4.5 95.56, 95.56 vs. 95.40, 96.35 | cleve: SVM 84.81, 84.81, 85.80 vs. 78.55, 95.04; C4.5 80.87, 80.87 vs. 80.84, 91.42 | diabetes: SVM 74.41, 74.41, 74.55 vs. 77.73, 78.31; C4.5 77.02, 77.02 vs. 76.00, 76.58 | glass: SVM 75.19, 75.19, 74.78 vs. 79.91, 81.32; C4.5 75.24, 75.24 vs. 76.62, 79.89 | heart: SVM 84.81, 84.81, 84.07 vs. 82.22, 88.15; C4.5 81.85, 81.85 vs. 80.00, 86.30 | hepatic: SVM 84.50, 89.04, 85.83 vs. 81.29, 96.83; C4.5 78.79, 85.21 vs. 80.71, 93.04 | horse: SVM 83.70, 84.79, 82.36 vs. 82.35, 92.39; C4.5 83.71, 83.71 vs. 84.50, 87.77 | iono: SVM 93.15, 94.30, 92.61 vs. 89.17, 95.44; C4.5 92.30, 92.30 vs. 92.89, 94.87 | iris: SVM 94.00, 96.00, 94.00 vs. 95.33, 96.00; C4.5 94.00, 94.00 vs. 93.33, 93.33 | labor: SVM 89.99, 91.67, 91.67 vs. 94.99, 95.00; C4.5 86.67, 86.67 vs. 95.00, 91.67 | lymph: SVM 81.00, 81.62, 84.29 vs. 83.67, 96.67; C4.5 76.95, 77.62 vs. 74.90, 83.67 | pima: SVM 74.56, 74.56, 76.15 vs. 76.43, 77.16; C4.5 75.86, 75.86 vs. 76.28, 76.72 | sonar: SVM 82.71, 86.55, 82.71 vs. 84.60, 90.86; C4.5 80.83, 81.19 vs. 83.67, 83.67 | vehicle: SVM 70.43, 72.93, 72.14 vs. 73.33, 76.34; C4.5 70.70, 71.49 vs. 74.24, 73.06 | wine: SVM 98.33, 99.44, 98.33 vs. 98.30, 100; C4.5 95.52, 93.82 vs. 96.63, 99.44 | zoo: SVM 97.09, 97.09, 95.09 vs. 94.18, 99.00; C4.5 91.18, 91.18 vs. 95.09, 97.09 41" },
{ "page_index": 592, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_042.png", "page_index": 592, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:13+07:00" }, "raw_text": "Scalability Tests Table 3. Accuracy & Time on Chess Data (min_sup: #Patterns, Time (s), SVM (%), C4.5 (%)): 1: N/A, N/A, N/A, N/A; 2000: 68,967, 44.703, 92.52, 97.59; 2200: 28,358, 19.938, 91.68, 97.84; 2500: 6,837, 2.906, 91.68, 97.62; 2800: 1,031, 0.469, 91.84, 97.37; 3000: 136, 0.063, 91.90, 97.06 Table 4. Accuracy & Time on Waveform Data (min_sup: #Patterns, Time (s), SVM (%), C4.5 (%)): 1: 9,468,109, N/A, N/A, N/A; 80: 26,576, 176.485, 92.40, 88.35; 100: 15,316, 90.406, 92.19, 87.29; 150: 5,408, 23.610, 91.53, 88.80; 200: 2,481, 8.234, 91.22, 87.32 42" },
{ "page_index": 593, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_043.png", "page_index": 593, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:18+07:00" }, "raw_text": "DDPMine: Branch-and-Bound Search Association between information gain and frequency: sup(child) <= sup(parent), so for a parent pattern a and a descendant pattern b: maximize IG(C|b) subject to min_sup <= sup(b) <= sup(a), 0 <= sup+(b) <= sup+(a), 0 <= sup-(b) <= sup-(a), where a is a constant (a parent node) and b is a variable (a descendant) 43" },
{ "page_index": 594, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_044.png", "page_index": 594, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:22+07:00" }, "raw_text": "Efficiency: Runtime [Figure: runtime vs. minimum support for DDPMine, PatClass, and Harmony; DDPMine is fastest] PatClass: ICDE'07 Pattern Classification Alg. 44" },
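For reference, this is the quantity DDPMine bounds: the information gain IG(C|b) of a binary pattern feature b, computable from sup+(b) and sup-(b), the counts of positive and negative tuples containing b. The counts below are invented; the formula is the standard entropy-based information gain.

```python
from math import log2

def H(p):
    # binary entropy, with the 0*log(0) = 0 convention
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def info_gain(n_pos, n_neg, sup_pos, sup_neg):
    n, m = n_pos + n_neg, sup_pos + sup_neg        # dataset size, sup(b)
    base = H(n_pos / n)                            # entropy of the class C
    h_in = H(sup_pos / m) if m else 0.0            # tuples containing b
    h_out = H((n_pos - sup_pos) / (n - m)) if n - m else 0.0
    return base - (m / n) * h_in - ((n - m) / n) * h_out

# e.g., 500 positive / 500 negative tuples; pattern b occurs in 300 of
# the positives and 60 of the negatives:
print(round(info_gain(500, 500, 300, 60), 4))
```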
{ "page_index": 595, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_045.png", "page_index": 595, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:26+07:00" }, "raw_text": "Chapter 9. Classification: Advanced Methods Bayesian Belief Networks Classification by Backpropagation Support Vector Machines Classification by Using Frequent Patterns Lazy Learners (or Learning from Your Neighbors) Other Classification Methods Additional Topics Regarding Classification Summary 45" },
{ "page_index": 596, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_046.png", "page_index": 596, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:32+07:00" }, "raw_text": "Lazy vs. Eager Learning Lazy vs. eager learning Lazy learning (e.g., instance-based learning): Simply stores training data (or does only minor processing) and waits until it is given a test tuple Eager learning (the above discussed methods): Given a set of training tuples, constructs a classification model before receiving new (e.g., test) data to classify Lazy: less time in training but more time in predicting Accuracy A lazy method effectively uses a richer hypothesis space, since it uses many local linear functions to form an implicit global approximation to the target function Eager: must commit to a single hypothesis that covers the entire instance space 46" },
{ "page_index": 597, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_047.png", "page_index": 597, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:36+07:00" }, "raw_text": "Lazy Learner: Instance-Based Methods Instance-based learning: Store training examples and delay the processing (\"lazy evaluation\") until a new instance must be classified Typical approaches k-nearest neighbor approach Instances represented as points in a Euclidean space Locally weighted regression Constructs local approximation Case-based reasoning Uses symbolic representations and knowledge-based inference 47" },
{ "page_index": 598, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_048.png", "page_index": 598, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:40+07:00" }, "raw_text": "The k-Nearest Neighbor Algorithm All instances correspond to points in the n-D space The nearest neighbors are defined in terms of Euclidean distance, dist(X1, X2) The target function could be discrete- or real-valued For discrete-valued, k-NN returns the most common value among the k training examples nearest to xq Voronoi diagram: the decision surface induced by 1-NN for a typical set of training examples 48" },
{ "page_index": 599, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_049.png", "page_index": 599, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:46+07:00" }, "raw_text": "Discussion on the k-NN Algorithm k-NN for real-valued prediction for a given unknown tuple Returns the mean values of the k nearest neighbors Distance-weighted nearest neighbor algorithm Weight the contribution of each of the k neighbors: w = 1 / d(xq, xi)^2 Give greater weight to closer neighbors Robust to noisy data by averaging k nearest neighbors Curse of dimensionality: the distance between neighbors could be dominated by irrelevant attributes To overcome it, stretch axes or eliminate the least relevant attributes 49" },
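A small sketch of the distance-weighted k-NN rule above: each of the k nearest neighbors votes with weight w = 1 / d(xq, xi)^2, so closer neighbors count more. Pure Python with Euclidean distance; the toy data is invented.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def knn_predict(train, xq, k=3):
    # train: list of (point, label); take the k nearest by Euclidean distance
    neighbors = sorted(train, key=lambda t: dist(t[0], xq))[:k]
    votes = {}
    for point, label in neighbors:
        d = dist(point, xq)
        w = 1.0 / (d * d) if d > 0 else float("inf")  # exact match dominates
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

train = [((1, 1), "A"), ((1, 2), "A"), ((6, 6), "B"), ((7, 5), "B")]
print(knn_predict(train, (2, 2), k=3))  # -> "A"
```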
"page_index": 600, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:51+07:00" }, "raw_text": "Case-Based j Reasoning j (CBR) CBR: Uses a database of problem solutions to solve new problems Store symbolic description (tuples or cases)-not points in a Euclidean space Applications: Customer-service (product-related diagnosis), legal ruling Methodology Instances represented by rich symbolic descriptions (e.g., function graphs) Search for similar cases, multiple retrieved cases may be combined Tight coupling between case retrieval, knowledge-based reasoning, and problem solving Challenges Find a good similarity metric Indexing based on syntactic similarity measure, and when failure backtracking, and adapting to additional cases 50" }, { "page_index": 601, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_051.png", "page_index": 601, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:00:55+07:00" }, "raw_text": "Chapter 9. Classification: Advanced Methods Bayesian Belief Networks Classification by Backpropagation Support Vector Machines Classification by Using Frequent Patterns Lazy s (or Learning from Your Neighbors) Learners Other Classification Methods Additional Topics Regarding Classification Summary 51" }, { "page_index": 602, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_052.png", "page_index": 602, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:01+07:00" }, "raw_text": "Genetic Algorithms (GA) Genetic Algorithm: based on an analogy to biological evolution An initial population is created consisting of randomly generated rules Each rule is represented by a string of bits E.g., if A1 and -A, then C, can be encoded as 100 If an attribute has k > 2 values, k bits can be used Based on the notion of survival of the fittest, a new population is formed to consist of the fittest rules and their offspring The fitness ofa ru/e is represented by its classification accuracy on a set of training examples Offspring are generated by crossover and mutation The process continues until a population P evolves when each ru/e in P satisfies a prespecified threshold Slow but easily parallelizable 52" }, { "page_index": 603, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_053.png", "page_index": 603, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:06+07:00" }, "raw_text": "Rough Set Approach Rough sets are used to approximately or \"roughly\" define equivalent classes A rough set for a given class C is approximated by two sets: a lower approximation (certain to be in C) and an upper approximation (cannot be described as not belonging to C) Finding the minimal subsets (reducts) of attributes for feature reduction is NP-hard but a discernibility matrix (which stores the differences between attribute values for 
each pair of data tuples) is used to reduce the computation intensity [Figure: a class C with its lower approximation (certain region) and its upper approximation] 53" },
{ "page_index": 604, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_054.png", "page_index": 604, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:13+07:00" }, "raw_text": "Fuzzy Set Approaches [Figure: fuzzy membership functions for low, medium (somewhat/borderline), and high income over the range $10K-$70K] Fuzzy logic uses truth values between 0.0 and 1.0 to represent the degree of membership (such as in a fuzzy membership graph) Attribute values are converted to fuzzy values. Ex.: Income, x, is assigned a fuzzy membership value to each of the discrete categories {low, medium, high}; e.g., $49K belongs to \"medium income\" with fuzzy value 0.15 but belongs to \"high income\" with fuzzy value 0.96 Fuzzy membership values do not have to sum to 1 Each applicable rule contributes a vote for membership in the categories Typically, the truth values for each predicted category are summed, and these sums are combined 54" },
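A sketch of the fuzzy-set idea above: trapezoidal membership functions map an income to degrees of membership in {low, medium, high}, and the degrees need not sum to 1. The breakpoints are invented so that $49K comes out slightly "medium" and strongly "high", roughly matching the slide's example values.

```python
def trapezoid(x, a, b, c, d):
    # 0 below a, rises on a..b, flat 1 on b..c, falls on c..d, 0 above d
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def income_memberships(x):
    return {
        "low":    trapezoid(x, 0, 0, 20_000, 35_000),
        "medium": trapezoid(x, 20_000, 30_000, 45_000, 50_000),
        "high":   trapezoid(x, 40_000, 49_500, 200_000, 200_001),
    }

print(income_memberships(49_000))  # slightly medium (~0.2), mostly high (~0.95)
```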
{ "page_index": 605, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_055.png", "page_index": 605, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:17+07:00" }, "raw_text": "Chapter 9. Classification: Advanced Methods Bayesian Belief Networks Classification by Backpropagation Support Vector Machines Classification by Using Frequent Patterns Lazy Learners (or Learning from Your Neighbors) Other Classification Methods Additional Topics Regarding Classification Summary 55" },
{ "page_index": 606, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_056.png", "page_index": 606, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:23+07:00" }, "raw_text": "Multiclass Classification Classification involving more than two classes (i.e., > 2 classes) Method 1. One-vs.-all (OVA): Learn a classifier one at a time Given m classes, train m classifiers: one for each class Classifier j: treat tuples in class j as positive and all others as negative To classify a tuple X, the set of classifiers vote as an ensemble Method 2. All-vs.-all (AVA): Learn a classifier for each pair of classes Given m classes, construct m(m-1)/2 binary classifiers Each classifier is trained using tuples of the two classes To classify a tuple X, each classifier votes; X is assigned to the class with the maximal vote Comparison All-vs.-all tends to be superior to one-vs.-all Problem: a binary classifier is sensitive to errors, and errors affect the vote count 56" },
{ "page_index": 607, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_057.png", "page_index": 607, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:31+07:00" }, "raw_text": "Error-Correcting Codes for Multiclass Classification Originally designed to correct errors during data transmission for communication tasks by exploring data redundancy Example A 7-bit codeword associated with classes 1-4: C1 = 1111111, C2 = 0000111, C3 = 0011001, C4 = 0101010 Given an unknown tuple X, the 7 trained classifiers output: 0001010 Hamming distance: # of different bits between two codewords H(X, C1) = 5, by checking # of differing bits between 1111111 and 0001010 H(X, C2) = 3, H(X, C3) = 3, H(X, C4) = 1, thus C4 is chosen as the label for X Error-correcting codes can correct up to floor((h-1)/2) 1-bit errors, where h is the minimum Hamming distance between any two codewords If we use 1 bit per class, it is equivalent to the one-vs.-all approach, and the codes are insufficient to self-correct When selecting error-correcting codes, there should be good row-wise and column-wise separation between the codewords 57" },
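The decoding step above in a few lines: each class has a 7-bit codeword, the 7 trained binary classifiers emit one bit each, and the tuple gets the class whose codeword is closest in Hamming distance. The codewords and the output 0001010 are the slide's example, so the expected winner is C4.

```python
codewords = {"C1": "1111111", "C2": "0000111",
             "C3": "0011001", "C4": "0101010"}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

output = "0001010"                       # bits emitted by the 7 classifiers
dists = {c: hamming(output, w) for c, w in codewords.items()}
print(dists)                             # C1: 5, C2: 3, C3: 3, C4: 1
print("predicted:", min(dists, key=dists.get))  # C4
```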
{ "page_index": 608, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_058.png", "page_index": 608, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:37+07:00" }, "raw_text": "Semi-Supervised Classification Semi-supervised: Uses labeled and unlabeled data to build a classifier Self-training: Build a classifier using the labeled data Use it to label the unlabeled data, and those with the most confident label predictions are added to the set of labeled data Repeat the above process Adv: easy to understand; disadv: may reinforce errors Co-training: Use two or more classifiers to teach each other Each learner uses a mutually independent set of features of each tuple to train a good classifier, say f1 Then f1 and f2 are used to predict the class labels for the unlabeled data X Teach each other: The tuple having the most confident prediction from f1 is added to the set of labeled data for f2, and vice versa Other methods, e.g., joint probability distribution of features and labels 58" },
{ "page_index": 609, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_059.png", "page_index": 609, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:43+07:00" }, "raw_text": "Active Learning Class labels are expensive to obtain Active learner: query a human (oracle) for labels [Diagram: learn a model from the labeled training set L; select queries from the unlabeled pool U; an oracle (e.g., a human annotator) labels them] Pool-based approach: Uses a pool of unlabeled data L: a small subset of D is labeled; U: a pool of unlabeled data in D Use a query function to carefully select one or more tuples from U and request labels from an oracle (a human annotator) The newly labeled samples are added to L and used to learn a model Goal: Achieve high accuracy using as few labeled data as possible Evaluated using learning curves: accuracy as a function of the number of instances queried (# of tuples to be queried should be small) Research issue: How to choose the data tuples to be queried? Uncertainty sampling: choose the least certain ones Reduce version space: the subset of hypotheses consistent w. the training data Reduce expected entropy over U: Find the greatest reduction in the total number of incorrect predictions 59" },
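A sketch of pool-based uncertainty sampling as described on the active-learning slide: train on the small labeled set L, score the unlabeled pool U, and query the oracle for the tuple the model is least certain about. Logistic regression as the learner, the synthetic data, and the query budget are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
true_y = (X[:, 0] + X[:, 1] > 0).astype(int)     # stands in for the oracle

# L: a small labeled subset (seeded with one example per class); U: the pool
pos = int(np.where(true_y == 1)[0][0])
neg = int(np.where(true_y == 0)[0][0])
labeled = [pos, neg]
pool = [i for i in range(200) if i not in labeled]

for _ in range(10):                               # query budget
    model = LogisticRegression().fit(X[labeled], true_y[labeled])
    proba = model.predict_proba(X[pool])[:, 1]
    i = int(np.argmin(np.abs(proba - 0.5)))       # least certain tuple
    labeled.append(pool.pop(i))                   # oracle labels it; add to L
print("accuracy after querying:", model.score(X, true_y))
```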
{ "page_index": 610, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_060.png", "page_index": 610, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:49+07:00" }, "raw_text": "Transfer Learning: Conceptual Framework Transfer learning: Extract knowledge from one or more source tasks and apply the knowledge to a target task Traditional learning: Build a new classifier for each new task Transfer learning: Build a new classifier by applying existing knowledge learned from source tasks [Diagram: in the traditional learning framework, each task gets its own learning system; in the transfer learning framework, knowledge from source tasks feeds the target task's learning system] 60" },
{ "page_index": 611, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_061.png", "page_index": 611, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:54+07:00" }, "raw_text": "Transfer Learning: Methods and Applications Applications: Especially useful when data is outdated or the distribution changes, e.g., Web document classification, e-mail spam filtering Instance-based transfer learning: Reweight some of the data from source tasks and use it to learn the target task TrAdaBoost (Transfer AdaBoost) Assumes source and target data are each described by the same set of attributes (features) and class labels, but with rather different distributions Requires only labeling a small amount of target data Uses source data in training: When a source tuple is misclassified, reduce the weight of such tuples so that they will have less effect on the subsequent classifier Research issues Negative transfer: When it performs worse than no transfer at all Heterogeneous transfer learning: Transfer knowledge from different feature spaces or multiple source domains Large-scale transfer learning 61" },
{ "page_index": 612, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_062.png", "page_index": 612, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:01:58+07:00" }, "raw_text": "Chapter 9. Classification: Advanced Methods Bayesian Belief Networks Classification by Backpropagation Support Vector Machines Classification by Using Frequent Patterns Lazy Learners (or Learning from Your Neighbors) Other Classification Methods Additional Topics Regarding Classification Summary 62" },
{ "page_index": 613, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_063.png", "page_index": 613, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:02:03+07:00" }, "raw_text": "Summary Effective and advanced classification methods Bayesian belief network (probabilistic networks) Backpropagation (neural networks) Support Vector Machine (SVM) Pattern-based classification Other classification methods: lazy learners (k-NN, case-based reasoning), genetic algorithms, rough set and fuzzy set approaches Additional topics on classification Multiclass classification Semi-supervised classification Active learning Transfer learning 63" },
{ "page_index": 614, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_064.png", "page_index": 614, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:02:05+07:00" }, "raw_text": "References Please see the references of Chapter 8 64" },
{ "page_index": 615, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_065.png", "page_index": 615, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:02:07+07:00" }, "raw_text": "Surplus Slides" },
{ "page_index": 616, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_066.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_066.png", "page_index": 616, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:02:12+07:00" }, "raw_text": "What Is Prediction? (Numerical) prediction is similar to classification Construct a model Use the model to predict a continuous or ordered value for a given input Prediction is different from classification Classification refers to predicting a categorical class label Prediction models continuous-valued functions Major method for prediction: regression Model the relationship between one or more independent or predictor variables and a dependent or response variable Regression analysis Linear and multiple regression Non-linear regression Other regression methods: generalized linear model, Poisson regression, log-linear models, regression trees 66" },
{ "page_index": 617, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_067.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_067.png", "page_index": 617, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:02:18+07:00" }, "raw_text": "Linear Regression Linear regression: involves a response variable y and a single predictor variable x: y = w0 + w1 x, where w0 (y-intercept) and w1 (slope) are regression coefficients Method of least squares: estimates the best-fitting straight line: w1 = sum_{i=1}^{|D|} (xi - xbar)(yi - ybar) / sum_{i=1}^{|D|} (xi - xbar)^2, w0 = ybar - w1 xbar Multiple linear regression: involves more than one predictor variable Training data is of the form (X1, y1), (X2, y2), ..., (X|D|, y|D|) Ex. For 2-D data, we may have: y = w0 + w1 x1 + w2 x2 Solvable by extension of the least squares method or using SAS, S-Plus Many nonlinear functions can be transformed into the above 67" },
{ "page_index": 618, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_068.png", "page_index": 618, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:02:23+07:00" }, "raw_text": "Nonlinear Regression Some nonlinear models can be modeled by a polynomial function A polynomial regression model can be transformed into a linear regression model. For example, y = w0 + w1 x + w2 x^2 + w3 x^3 is convertible to linear form with new variables x1 = x, x2 = x^2, x3 = x^3: y = w0 + w1 x1 + w2 x2 + w3 x3 Other functions, such as the power function, can also be transformed into a linear model Some models are intractably nonlinear (e.g., sums of exponential terms); it is possible to obtain least-squares estimates through extensive calculation on more complex formulae 68" },
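A worked version of the least-squares formulas above: w1 is the slope and w0 = ybar - w1*xbar the intercept. The points are invented and lie near y = 3 + 2x, so the estimates should come out close to w0 = 3, w1 = 2.

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [5.1, 6.9, 9.2, 10.8, 13.1]

xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
# w1 = sum (xi - xbar)(yi - ybar) / sum (xi - xbar)^2
w1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
     / sum((x - xbar) ** 2 for x in xs)
w0 = ybar - w1 * xbar
print(f"y = {w0:.3f} + {w1:.3f} x")   # roughly y = 3.05 + 1.99 x
```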
{ "page_index": 619, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_069.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_069.png", "page_index": 619, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:02:28+07:00" }, "raw_text": "Other Regression-Based Models Generalized linear model: Foundation on which linear regression can be applied to modeling categorical response variables Variance of y is a function of the mean value of y, not a constant Logistic regression: models the probability of some event occurring as a linear function of a set of predictor variables Poisson regression: models data that exhibit a Poisson distribution Log-linear models: (for categorical data) Approximate discrete multidimensional probability distributions Also useful for data compression and smoothing Regression trees and model trees Trees to predict continuous values rather than class labels 69" },
{ "page_index": 620, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_070.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_070.png", "page_index": 620, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:02:33+07:00" }, "raw_text": "Regression Trees and Model Trees Regression tree: proposed in the CART system (Breiman et al. 1984) CART: Classification And Regression Trees Each leaf stores a continuous-valued prediction It is the average value of the predicted attribute for the training tuples that reach the leaf Model tree: proposed by Quinlan (1992) Each leaf holds a regression model - a multivariate linear equation for the predicted attribute A more general case than regression tree Regression and model trees tend to be more accurate than linear regression when the data are not represented well by a simple linear model 70" },
{ "page_index": 621, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_071.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_071.png", "page_index": 621, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:02:38+07:00" }, "raw_text": "Predictive Modeling in Multidimensional Databases Predictive modeling: Predict data values or construct generalized linear models One can only predict value ranges or category distributions Method outline: Minimal generalization Attribute relevance analysis Generalized linear model construction Prediction Determine the major factors which influence the prediction Data relevance analysis: uncertainty measurement, entropy analysis, expert judgement, etc. Multi-level prediction: drill-down and roll-up analysis 71" },
{ "page_index": 622, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_072.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_072.png", "page_index": 622, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:02:49+07:00" }, "raw_text": "Prediction: Numerical Data [Screenshot: DBMiner relevance analysis predicting Sale_Price from Channel, Cost_of_Goods_Sold, Advertising_Cost, and Average_Sales_Area] 72" },
"1.0.0", "timestamp": "2025-10-31T16:02:59+07:00" }, "raw_text": "Prediction: Categorical Data bminei File Edit Query View Window Help Ep IE Dim: Channel Level: Channel Sale_Price Channel Cost_of_Goods Sold Advertising_Cost Average_Sales_Area Relevance Analysis Profit: 0.45 365.00480.00 805.001000.00 1260.006005.00 0.40 0.45 480.00805.00 1130.001260.00 0.35 0.40 0.30 0.35 Camping Chain GO Outlet Independent 0.25 0.30 0.20 0.25 8.15 0.20 0.10 0.15 0.05 0.10 0.05 0.00 # Predictive Name Value Mass Marketer Sports Chain SalePrice -4950.00091950.000) 2 Channel Cost of Goods Sold 3 -3900.00085900.000 Advertising_Cost 0.0001715.000 Average_Sales_Area 51130.0004230.000) For Help,press F1 73" }, { "page_index": 624, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_074.png", "page_index": 624, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:04+07:00" }, "raw_text": "SVM-Introductory Literature \"Statistical Learning Theory\" by Vapnik: extremely hard to understand, containing many errors too. C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Knowledge Discovery and Data Mining, 2(2), 1998. Better than the Vapnik's book, but still written too hard for introduction, and the examples are so not-intuitive The book \"An Introduction to Support Vector Machines\" by N. Cristianini and J. Shawe-Taylor Also written hard for introduction, but the explanation about the mercer's theorem is better than above literatures The neural network book by Haykins Contains one nice chapter of SVM introduction 74" }, { "page_index": 625, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_075.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_075.png", "page_index": 625, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:09+07:00" }, "raw_text": "Notes about SVM- Introductory Literature \"Statistical Learning Theory\" by Vapnik: difficult to understand containing many errors. C. J. C. Burges. A Tutorial on Support Vector Machines for Pattern Recognition. Knowledge Discovery and Data Mining, 2(2), 1998. Easier than Vapnik's book, but still not introductory level; the examples are not so intuitive The book An Introduction to Support Vector Machines by Cristianini and Shawe-Taylor Not introductory level, but the explanation about Mercer's Theorem is better than above literatures Neural Networks and Learning Machines by Haykin Contains a nice chapter on SVM introduction 75" }, { "page_index": 626, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_076.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_076.png", "page_index": 626, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:21+07:00" }, "raw_text": "Associative Classification Can Achieve High Efficiency (Cong et al. 
{ "page_index": 626, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_076.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_076.png", "page_index": 626, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:21+07:00" }, "raw_text": "Associative Classification Can Achieve High Efficiency (Cong et al., SIGMOD'05) Table 2. Classification Results (Dataset: RCBT, CBA, IRG Classifier, C4.5 family single tree / bagging / boosting, SVM): AML/ALL (ALL): 91.18%, 91.18%, 64.71%, 91.18% / 91.18% / 91.18%, 97.06% | Lung Cancer (LC): 97.99%, 81.88%, 89.93%, 81.88% / 96.64% / 81.88%, 96.64% | Ovarian Cancer (OC): 97.67%, 93.02%, 97.67%, 97.67% / 97.67%, 97.67% | Prostate Cancer (PC): 97.06%, 82.35%, 88.24%, 26.47% / 26.47% / 26.47%, 79.41% | Average accuracy: 95.98%, 87.11%, 80.96%, 74.3% / 77.99% / 74.3%, 92.70% [Figure: runtime vs. minimum support for FARMER and RCBT variants on (a) ALL-AML leukemia, (b) Lung Cancer, (c) Ovarian Cancer] 76" },
{ "page_index": 627, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_077.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_077.png", "page_index": 627, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:27+07:00" }, "raw_text": "A Closer Look at CMAR CMAR (Classification based on Multiple Association Rules: Li, Han & Pei, ICDM'01) Efficiency: Uses an enhanced FP-tree that maintains the distribution of class labels among tuples satisfying each frequent itemset Rule pruning whenever a rule is inserted into the tree Given two rules R1 and R2, if the antecedent of R1 is more general than that of R2 and conf(R1) >= conf(R2), then prune R2 Prunes rules for which the rule antecedent and class are not positively correlated, based on a chi-squared test of statistical significance Classification based on generated/pruned rules If only one rule satisfies tuple X, assign the class label of the rule If a rule set S satisfies X, CMAR divides S into groups according to class labels, uses a weighted chi-squared measure to find the strongest group of rules based on the statistical correlation of rules within a group, and assigns X the class label of the strongest group 77" },
{ "page_index": 628, "chapter_num": 8, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_078.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_8/slide_078.png", "page_index": 628, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:32+07:00" }, "raw_text": "Perceptron & Winnow Vectors: x, w; scalars: x, y, w Input: {(x1, y1), ...} Output: a classification function f(x) with f(xi) > 0 for yi = +1 and f(xi) < 0 for yi = -1 Decision boundary: f(x) = w . x + b = 0, or w1 x1 + w2 x2 + b = 0 Perceptron: update w additively Winnow: update w multiplicatively 78" },
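A sketch contrasting the two update rules just mentioned on the same linearly separable toy data: the perceptron updates w additively (w += eta*y*x), while the winnow-style learner updates multiplicatively (w *= exp(eta*y*x), one common exponentiated form; classic Winnow is defined for boolean features with a threshold). Data, learning rate, and epoch count are invented.

```python
import numpy as np

X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
eta = 0.5

w_p, b_p = np.zeros(2), 0.0                # perceptron weights
w_w = np.ones(2)                           # winnow keeps positive weights
for _ in range(20):                        # epochs
    for xi, yi in zip(X, y):
        if yi * (w_p @ xi + b_p) <= 0:     # misclassified: additive update
            w_p += eta * yi * xi
            b_p += eta * yi
        if yi * (w_w @ xi) <= 0:           # misclassified: multiplicative
            w_w *= np.exp(eta * yi * xi)
print("perceptron:", w_p, b_p)
print("winnow-style:", w_w)
```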
1" }, { "page_index": 630, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_002.png", "page_index": 630, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:40+07:00" }, "raw_text": "Chapter 1o. Cluster Analysis: Basic Concepts and Methods Cluster Analysis: Basic Concepts Partitioning Methods Hierarchical Methods Density-Based Methods Grid-Based Methods Evaluation of Clustering Summary 2" }, { "page_index": 631, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_003.png", "page_index": 631, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:45+07:00" }, "raw_text": "What is Cluster Analysis? Cluster: A collection of data objects similar (or related) to one another within the same group dissimilar (or unrelated) to the objects in other groups Cluster analysis (or clustering, data segmentation, ...) Finding similarities between data according to the characteristics found in the data and grouping similar data objects into clusters Unsupervised learning: no predefined classes s (i.e., learning by observations vs. learning by examples: supervised) Typical applications As a stand-alone tool to get insight into data distribution As a preprocessing step for other algorithms 3" }, { "page_index": 632, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_004.png", "page_index": 632, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:51+07:00" }, "raw_text": "Clustering for Data Understanding and Applications Biology: taxonomy of living things: kingdom, phylum, class, order. family, genus and species Information retrieval: document clustering Land use: ldentification of areas of similar land use in an earth observation database Marketing: Help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs City-planning: Identifying groups of houses according to their house type, value, and geographical location Earth-quake studies: Observed earth quake epicenters should be clustered along continent faults Climate: understanding earth climate, find patterns of atmospheric and ocean Economic Science: market resarch 4" }, { "page_index": 633, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_005.png", "page_index": 633, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:55+07:00" }, "raw_text": "Clustering as a Preprocessing Tool (Utility) Summarization. 
{ "page_index": 633, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_005.png", "page_index": 633, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:55+07:00" }, "raw_text": "Clustering as a Preprocessing Tool (Utility) Summarization: Preprocessing for regression, PCA, classification, and association analysis Compression: Image processing: vector quantization Finding K-nearest neighbors: Localizing search to one or a small number of clusters Outlier detection: Outliers are often viewed as those \"far away\" from any cluster 5" },
{ "page_index": 634, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_006.png", "page_index": 634, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:03:59+07:00" }, "raw_text": "Quality: What Is Good Clustering? A good clustering method will produce high quality clusters high intra-class similarity: cohesive within clusters low inter-class similarity: distinctive between clusters The quality of a clustering method depends on the similarity measure used by the method its implementation, and its ability to discover some or all of the hidden patterns 6" },
{ "page_index": 635, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_007.png", "page_index": 635, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:04:04+07:00" }, "raw_text": "Measure the Quality of Clustering Dissimilarity/Similarity metric Similarity is expressed in terms of a distance function, typically a metric: d(i, j) The definitions of distance functions are usually rather different for interval-scaled, boolean, categorical, ordinal, ratio, and vector variables Weights should be associated with different variables based on applications and data semantics Quality of clustering: There is usually a separate \"quality\" function that measures the \"goodness\" of a cluster It is hard to define \"similar enough\" or \"good enough\" The answer is typically highly subjective 7" },
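A small numeric illustration of the quality criteria above: cohesion as the average within-cluster pairwise distance and separation as the average between-cluster distance; a good clustering has low cohesion values and high separation. The clusters and the Euclidean-metric choice are illustrative.

```python
from itertools import combinations, product
from math import dist

def avg(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

c1 = [(1, 1), (1, 2), (2, 1)]
c2 = [(8, 8), (9, 8), (8, 9)]

cohesion = [avg(dist(a, b) for a, b in combinations(c, 2)) for c in (c1, c2)]
separation = avg(dist(a, b) for a, b in product(c1, c2))
print("intra-cluster (cohesion):", [round(v, 2) for v in cohesion])
print("inter-cluster (separation):", round(separation, 2))
```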
{ "page_index": 636, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_008.png", "page_index": 636, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:04:09+07:00" }, "raw_text": "Considerations for Cluster Analysis Partitioning criteria Single level vs. hierarchical partitioning (often, multi-level hierarchical partitioning is desirable) Separation of clusters Exclusive (e.g., one customer belongs to only one region) vs. non-exclusive (e.g., one document may belong to more than one class) Similarity measure Distance-based (e.g., Euclidean, road network, vector) vs. connectivity-based (e.g., density or contiguity) Clustering space Full space (often when low dimensional) vs. subspaces (often in high-dimensional clustering) 8" },
{ "page_index": 637, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_009.png", "page_index": 637, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:04:13+07:00" }, "raw_text": "Requirements and Challenges Scalability Clustering all the data instead of only samples Ability to deal with different types of attributes Numerical, binary, categorical, ordinal, linked, and mixtures of these Constraint-based clustering The user may give inputs as constraints Use domain knowledge to determine input parameters Interpretability and usability Others Discovery of clusters with arbitrary shape Ability to deal with noisy data Incremental clustering and insensitivity to input order High dimensionality 9" },
{ "page_index": 638, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_010.png", "page_index": 638, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:04:19+07:00" }, "raw_text": "Major Clustering Approaches (I) Partitioning approach: Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared errors Typical methods: k-means, k-medoids, CLARANS Hierarchical approach: Create a hierarchical decomposition of the set of data (or objects) using some criterion Typical methods: DIANA, AGNES, BIRCH, CHAMELEON Density-based approach: Based on connectivity and density functions Typical methods: DBSCAN, OPTICS, DenClue Grid-based approach: Based on a multiple-level granularity structure Typical methods: STING, WaveCluster, CLIQUE 10" },
{ "page_index": 639, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_011.png", "page_index": 639, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:04:24+07:00" }, "raw_text": "Major Clustering Approaches (II) Model-based: A model is hypothesized for each of the clusters, and the method tries to find the best fit of the data to the given model Typical methods: EM, SOM, COBWEB Frequent pattern-based: Based on the analysis of frequent patterns Typical methods: p-Cluster User-guided or constraint-based: Clustering by considering user-specified or application-specific constraints Typical methods: COD (obstacles), constrained clustering Link-based clustering: Objects are often linked together in various ways Massive links can be used to cluster objects: SimRank, LinkClus 11" },
{ "page_index": 640, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_012.png", "page_index": 640, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:04:28+07:00" }, "raw_text": "Chapter 10. Cluster Analysis: Basic Concepts and Methods Cluster Analysis: Basic Concepts Partitioning Methods Hierarchical Methods Density-Based Methods Grid-Based Methods Evaluation of Clustering Summary 12" },
{ "page_index": 641, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_013.png", "page_index": 641, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:04:34+07:00" }, "raw_text": "Partitioning Algorithms: Basic Concept Partitioning method: Partitioning a database D of n objects into a set of k clusters, such that the sum of squared distances is minimized (where ci is the centroid or medoid of cluster Ci): E = sum_{i=1}^{k} sum_{p in Ci} (p - ci)^2 Given k, find a partition of k clusters that optimizes the chosen partitioning criterion Global optimal: exhaustively enumerate all partitions Heuristic methods: k-means and k-medoids algorithms k-means (MacQueen'67, Lloyd'57/'82): Each cluster is represented by the center of the cluster k-medoids or PAM (Partition Around Medoids) (Kaufman & Rousseeuw'87): Each cluster is represented by one of the objects in the cluster 13" },
{ "page_index": 642, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_014.png", "page_index": 642, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:04:38+07:00" }, "raw_text": "The K-Means Clustering Method Given k, the k-means algorithm is implemented in four steps: Partition objects into k nonempty subsets Compute seed points as the centroids of the clusters of the current partitioning (the centroid is the center, i.e., mean point, of the cluster) Assign each object to the cluster with the nearest seed point Go back to Step 2; stop when the assignment does not change 14" },
{ "page_index": 643, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_015.png", "page_index": 643, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:04:45+07:00" }, "raw_text": "An Example of K-Means Clustering K=2 Arbitrarily partition the objects into k groups, update the cluster centroids, and reassign objects, looping if needed: Partition objects into k nonempty subsets Repeat Compute the centroid (i.e., mean point) for each partition Update the cluster centroids Assign each object to the cluster of its nearest centroid Until no change 15" },
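A minimal k-means following the four steps above: assign each object to the nearest centroid, recompute the mean point of each cluster, and stop when the assignment no longer changes. The data and k are invented; empty-cluster handling is omitted for brevity.

```python
import numpy as np

def kmeans(X, k, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    prev = None
    while True:
        # distance from every object to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)               # nearest seed point
        if prev is not None and np.array_equal(assign, prev):
            return centroids, assign            # assignment unchanged: stop
        prev = assign
        centroids = np.array([X[assign == j].mean(axis=0) for j in range(k)])

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])
centroids, assign = kmeans(X, k=2)
print(np.round(centroids, 2))   # roughly the two blob centers
```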
Comparing: PAM: O(k(n-k)^2), CLARA: O(ks^2 + k(n-k)) Comment: Often terminates at a local optimum. Weakness Applicable only to objects in a continuous n-dimensional space Use the k-modes method for categorical data In comparison, k-medoids can be applied to a wide range of data Need to specify k, the number of clusters, in advance (there are ways to automatically determine the best k; see Hastie et al., 2009) Sensitive to noisy data and outliers Not suitable to discover clusters with non-convex shapes 16" }, { "page_index": 645, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_017.png", "page_index": 645, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:04:56+07:00" }, "raw_text": "Variations of the K-Means Method Most variants of k-means differ in: Selection of the initial k means Dissimilarity calculations Strategies to calculate cluster means Handling categorical data: k-modes Replacing the means of clusters with modes Using new dissimilarity measures to deal with categorical objects Using a frequency-based method to update the modes of clusters A mixture of categorical and numerical data: the k-prototype method 17" }, { "page_index": 646, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_018.png", "page_index": 646, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:05:03+07:00" }, "raw_text": "What Is the Problem of the K-Means Method? The k-means algorithm is sensitive to outliers! Since an object with an extremely large value may substantially distort the distribution of the data K-Medoids: Instead of taking the mean value of the objects in a cluster as a reference point, medoids can be used, which is the most centrally located object in a cluster [Figure: two 10x10 scatter plots contrasting the cluster centers found with a mean vs. a medoid when an outlier is present.] 18" }, { "page_index": 647, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_019.png", "page_index": 647, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:05:16+07:00" }, "raw_text": "PAM: A Typical K-Medoids Algorithm K = 2, total cost = 20 Arbitrarily choose k objects as initial medoids Assign each remaining object to the nearest medoid Randomly select a non-medoid object O_random (total cost = 26) Do loop: compute the total cost of swapping a medoid with O_random; swap if the quality is improved; until no change [Figure: 10x10 scatter plots of the k = 2 example showing the initial medoids, the assignment of objects, and the effect of the swap.]
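The swap loop just sketched can be written out directly. A naive PAM sketch in Python, assuming a precomputed pairwise Euclidean distance matrix and trying every (medoid, non-medoid) swap, which is what gives the O(k(n-k)^2) cost per iteration quoted above (all names are illustrative):

```python
import numpy as np

def pam(X, k, seed=0):
    """Naive PAM: swap a medoid with a non-medoid whenever the total cost drops."""
    rng = np.random.default_rng(seed)
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances

    def total_cost(meds):
        # each object contributes its distance to the nearest medoid
        return dist[:, meds].min(axis=1).sum()

    medoids = list(rng.choice(n, size=k, replace=False))
    improved = True
    while improved:
        improved = False
        for i in list(medoids):             # selected object i
            for h in range(n):              # candidate non-selected object h
                if h in medoids:
                    continue
                candidate = [h if m == i else m for m in medoids]
                if total_cost(candidate) < total_cost(medoids):  # swap improves quality
                    medoids, improved = candidate, True
    labels = dist[:, medoids].argmin(axis=1)
    return medoids, labels

X = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 6])
medoids, labels = pam(X, k=2)
```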
19" }, { "page_index": 648, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_020.png", "page_index": 648, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:05:21+07:00" }, "raw_text": "The K-Medoid Clustering Method K-Medoids Clustering: Find representative objects (medoids) in clusters PAM (Partitioning Around Medoids, Kaufmann & Rousseeuw 1987) Starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering PAM works effectively for small data sets, but does not scale well for large data sets (due to the computational complexity) Efficiency improvements on PAM CLARA (Kaufmann & Rousseeuw, 1990): PAM on samples CLARANS (Ng & Han, 1994): Randomized re-sampling 20" }, { "page_index": 649, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_021.png", "page_index": 649, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:05:25+07:00" }, "raw_text": "Chapter 10. Cluster Analysis: Basic Concepts and Methods Cluster Analysis: Basic Concepts Partitioning Methods Hierarchical Methods Density-Based Methods Grid-Based Methods Evaluation of Clustering Summary 21" }, { "page_index": 650, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_022.png", "page_index": 650, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:05:30+07:00" }, "raw_text": "Hierarchical Clustering Uses the distance matrix as the clustering criterion.
This method does not require the number of clusters k as an input, but needs a termination condition [Figure: AGNES (agglomerative) merges a, b, c, d, e into larger clusters from Step 0 to Step 4; DIANA (divisive) splits them in the inverse order, from Step 4 down to Step 0.] 22" }, { "page_index": 651, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_023.png", "page_index": 651, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:05:38+07:00" }, "raw_text": "AGNES (Agglomerative Nesting) Introduced in Kaufmann and Rousseeuw (1990) Implemented in statistical packages, e.g., S-Plus Uses the single-link method and the dissimilarity matrix Merges the nodes that have the least dissimilarity Goes on in a non-descending fashion Eventually all nodes belong to the same cluster [Figure: three 10x10 scatter plots showing the data being merged step by step.] 23" }, { "page_index": 652, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_024.png", "page_index": 652, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:05:43+07:00" }, "raw_text": "Dendrogram: Shows How Clusters are Merged Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram A clustering of the data objects is obtained by cutting the dendrogram at the desired level; then each connected component forms a cluster 24" }, { "page_index": 653, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_025.png", "page_index": 653, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:05:50+07:00" }, "raw_text": "DIANA (Divisive Analysis) Introduced in Kaufmann and Rousseeuw (1990) Implemented in statistical analysis packages, e.g., S-Plus Inverse order of AGNES Eventually each node forms a cluster on its own [Figure: three 10x10 scatter plots showing the data being split step by step.] 25" }, { "page_index": 654, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_026.png", "page_index": 654, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:05:55+07:00" }, "raw_text": "Distance between Clusters Single link: smallest distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = min(t_ip, t_jq) Complete link: largest distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = max(t_ip, t_jq) Average: average distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = avg(t_ip, t_jq) Centroid: distance between the centroids of two clusters, i.e., dist(K_i, K_j) = dist(C_i, C_j) Medoid: distance between the medoids of two clusters, i.e., dist(K_i, K_j) = dist(M_i, M_j) Medoid: a chosen, centrally located object in the cluster
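The five inter-cluster distances just listed differ only in how they aggregate the pairwise distance matrix. A small sketch, assuming Euclidean distance (cluster_distance is an illustrative name, not from the slides):

```python
import numpy as np

def cluster_distance(A, B, linkage="single"):
    """Distance between clusters A and B under the linkages defined above."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all inter-cluster pairs
    if linkage == "single":    # smallest pairwise distance
        return d.min()
    if linkage == "complete":  # largest pairwise distance
        return d.max()
    if linkage == "average":   # mean over all pairs
        return d.mean()
    if linkage == "centroid":  # distance between the two centroids
        return np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))
    raise ValueError(linkage)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[4.0, 0.0], [6.0, 0.0]])
print(cluster_distance(A, B, "single"), cluster_distance(A, B, "complete"))  # 3.0 6.0
```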
26" }, { "page_index": 655, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_027.png", "page_index": 655, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:01+07:00" }, "raw_text": "Centroid, Radius and Diameter of a Cluster Centroid: the \"middle\" of a cluster: C_m = \frac{1}{N} \sum_{i=1}^{N} t_i Radius: square root of the average distance from any point of the cluster to its centroid: R_m = \sqrt{ \frac{ \sum_{i=1}^{N} (t_i - C_m)^2 }{ N } } Diameter: square root of the average mean squared distance between all pairs of points in the cluster: D_m = \sqrt{ \frac{ \sum_{i=1}^{N} \sum_{j=1}^{N} (t_i - t_j)^2 }{ N(N-1) } } 27" }, { "page_index": 656, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_028.png", "page_index": 656, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:05+07:00" }, "raw_text": "Extensions to Hierarchical Clustering Major weaknesses of agglomerative clustering methods Can never undo what was done previously Do not scale well: time complexity of at least O(n^2), where n is the number of total objects Integration of hierarchical & distance-based clustering BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters CHAMELEON (1999): hierarchical clustering using dynamic modeling 28" }, { "page_index": 657, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_029.png", "page_index": 657, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:10+07:00" }, "raw_text": "BIRCH (Balanced Iterative Reducing and Clustering Using Hierarchies) Zhang, Ramakrishnan & Livny, SIGMOD'96 Incrementally construct a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering Phase 1: scan DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve the inherent clustering structure of the data) Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans Weakness: handles only numeric data, and sensitive to the order of the data records 29" }, { "page_index": 658, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_030.png", "page_index": 658, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:17+07:00" }, "raw_text": "Clustering Feature Vector in BIRCH Clustering Feature (CF): CF = (N, LS, SS) N: number of data points LS: linear sum of the N points: \sum_{i=1}^{N} x_i SS: square sum of the N points: \sum_{i=1}^{N} x_i^2 Example: for the five points (3,4), (2,6), (4,5), (4,7), (3,8), CF = (5, (16,30), (54,190)) 30" },
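A clustering feature is just three additively maintained summaries, which is what lets BIRCH merge subclusters in O(1) without rescanning points. A sketch reproducing the slide's five-point example, with SS kept per dimension as on the slide (the function names are mine):

```python
import numpy as np

def cf(points):
    """Clustering Feature of a set of points: (N, linear sum, square sum)."""
    P = np.asarray(points, dtype=float)
    return len(P), P.sum(axis=0), (P ** 2).sum(axis=0)

def merge(cf1, cf2):
    """CFs are additive, so two subclusters merge in O(1)."""
    return cf1[0] + cf2[0], cf1[1] + cf2[1], cf1[2] + cf2[2]

def centroid(c):
    n, ls, ss = c
    return ls / n           # the 1st moment divided by the count

def radius(c):
    n, ls, ss = c
    mu = ls / n
    return np.sqrt(ss.sum() / n - mu @ mu)  # from the 0th, 1st and 2nd moments only

pts = [(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]
print(cf(pts))  # (5, array([16., 30.]), array([ 54., 190.])) -- the slide's example
```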
"/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_031.png", "page_index": 659, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:22+07:00" }, "raw_text": "CF-Tree in BIRCH Clustering feature: Summary of the statistics for a given subcluster: the 0-th, 1st and 2nd moments of the subcluster from the statistical point of view Registers crucial measurements for computing cluster and utilizes storage efficiently A CF tree is a height-balanced tree that stores the clustering features for a hierarchical clustering A nonleaf node in a tree has descendants or \"children\" The nonleaf nodes store sums of the CFs of their children A CF tree has two parameters Branching factor: max # of children Threshold: max diameter of sub-clusters stored at the leaf nodes 31" }, { "page_index": 660, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_032.png", "page_index": 660, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:28+07:00" }, "raw_text": "The CF Tree Structure Root CF CF CF CF B = 7 2 3 6 child2 child1 child3 child6 L = 6 Non-leaf node CF CF CF CF 2 3 5 child1 child2 child3 child5 Leaf node Leaf node CF CFCF prev next prev next 2 6 I 32" }, { "page_index": 661, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_033.png", "page_index": 661, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:33+07:00" }, "raw_text": "The Birch Algorithm Cluster Diameter 1 2 1 n(n -1) For each point in the input Find closest leaf entry Add point to leaf entry and update CF If entry diameter > max diameter, then split leaf, and possibly parents Algorithm is O(n) Concerns Sensitive to insertion order of data points Since we fix the size of leaf nodes, so clusters may not be so natural Clusters tend to be spherical given the radius and diameter measures 33" }, { "page_index": 662, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_034.png", "page_index": 662, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:37+07:00" }, "raw_text": "CHAMELEON: Hierarchical Clustering Using Dynamic Modeling (1999) CHAMELEON: G. Karypis, E. H. Han, and V. Kumar, 1999 Measures the similarity based on a dynamic model Two clusters are merged only if the interconnectivity and closeness (proximity) between two clusters are high relative to the internal interconnectivity of the clusters and closeness of items within the clusters Graph-based, and a two-phase algorithm 1. 
Use a graph-partitioning algorithm: cluster objects into a large number of relatively small sub-clusters 2. Use an agglomerative hierarchical clustering algorithm: find the genuine clusters by repeatedly combining these sub-clusters 34" }, { "page_index": 663, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_035.png", "page_index": 663, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:41+07:00" }, "raw_text": "Overall Framework of CHAMELEON Construct a sparse k-NN graph from the data set (p and q are connected if q is among the top k closest neighbors of p) Partition the graph into a large number of small sub-clusters Merge partitions into the final clusters Relative interconnectivity: connectivity of c_1 and c_2 over their internal connectivity Relative closeness: closeness of c_1 and c_2 over their internal closeness 35" }, { "page_index": 664, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_036.png", "page_index": 664, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:47+07:00" }, "raw_text": "CHAMELEON (Clustering Complex Objects) 36" }, { "page_index": 665, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_037.png", "page_index": 665, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:53+07:00" }, "raw_text": "Probabilistic Hierarchical Clustering Algorithmic hierarchical clustering Nontrivial to choose a good distance measure Hard to handle missing attribute values Optimization goal not clear: heuristic, local search Probabilistic hierarchical clustering Use probabilistic models to measure distances between clusters Generative model: Regard the set of data objects to be clustered as a sample of the underlying data generation mechanism to be analyzed Easy to understand, same efficiency as algorithmic agglomerative clustering methods, can handle partially observed data In practice, assume the generative models adopt common distribution functions, e.g., the Gaussian or Bernoulli distribution, governed by parameters 37" }, { "page_index": 666, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_038.png", "page_index": 666, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:06:59+07:00" }, "raw_text": "Generative Model Given a set of 1-D points X = {x_1, ..., x_n} for clustering analysis, assume they are generated by a Gaussian distribution: N(\mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}} The probability that a point x_i \in X is generated by the model: P(x_i \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x_i-\mu)^2}{2\sigma^2}} The likelihood that X is generated by the model: L(N(\mu, \sigma^2) : X) = P(X \mid \mu, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x_i-\mu)^2}{2\sigma^2}} The task of learning the generative model: find the maximum likelihood parameters \mu and \sigma^2 such that N(\mu_0, \sigma_0^2) = \arg\max \{ L(N(\mu, \sigma^2) : X) \} 38
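For a single Gaussian this maximization has the familiar closed form: the sample mean and the (biased) sample variance. A quick numeric check in Python, on a made-up 1-D sample:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=1.5, size=1000)  # hypothetical 1-D sample

# Maximizing the likelihood above gives the closed-form estimates:
mu_hat = X.mean()                         # argmax over mu
sigma2_hat = ((X - mu_hat) ** 2).mean()   # argmax over sigma^2 (MLE, biased)

# Log-likelihood of the sample under the fitted Gaussian
ll = -0.5 * len(X) * np.log(2 * np.pi * sigma2_hat) \
     - ((X - mu_hat) ** 2).sum() / (2 * sigma2_hat)
print(mu_hat, sigma2_hat, ll)
```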
" }, { "page_index": 667, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_039.png", "page_index": 667, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:05+07:00" }, "raw_text": "A Probabilistic Hierarchical Clustering Algorithm For a set of objects partitioned into m clusters C_1, ..., C_m, the quality can be measured by Q({C_1, ..., C_m}) = \prod_{i=1}^{m} P(C_i), where P(\cdot) is the maximum likelihood Distance between clusters C_i and C_j: dist(C_i, C_j) = -\log \frac{P(C_i \cup C_j)}{P(C_i) P(C_j)} Algorithm: Progressively merge points and clusters Input: D = {o_1, ..., o_n}: a data set containing n objects Output: A hierarchy of clusters Method: Create a cluster for each object, C_i = {o_i}, 1 <= i <= n; For i = 1 to n { Find the pair of clusters C_i and C_j such that C_i, C_j = \arg\max_{i \ne j} \{ \log \frac{P(C_i \cup C_j)}{P(C_i) P(C_j)} \}; If \log \frac{P(C_i \cup C_j)}{P(C_i) P(C_j)} > 0 then merge C_i and C_j } 39" }, { "page_index": 668, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_040.png", "page_index": 668, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:09+07:00" }, "raw_text": "Chapter 10. Cluster Analysis: Basic Concepts and Methods Cluster Analysis: Basic Concepts Partitioning Methods Hierarchical Methods Density-Based Methods Grid-Based Methods Evaluation of Clustering Summary 40" }, { "page_index": 669, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_041.png", "page_index": 669, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:13+07:00" }, "raw_text": "Density-Based Clustering Methods Clustering based on density (a local cluster criterion), such as density-connected points Major features: Discover clusters of arbitrary shape Handle noise One scan Need density parameters as a termination condition Several interesting studies: DBSCAN: Ester, et al. (KDD'96) OPTICS: Ankerst, et al. (SIGMOD'99) DENCLUE: Hinneburg & D. Keim (KDD'98) CLIQUE: Agrawal, et al. (SIGMOD'98) (more grid-based) 41" }, { "page_index": 670, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_042.png", "page_index": 670, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:18+07:00" }, "raw_text": "Density-Based Clustering: Basic Concepts Two parameters: Eps: Maximum radius of the neighbourhood MinPts: Minimum number of points in an Eps-neighbourhood of that point Directly density-reachable: A point p is directly density-reachable from a point q w.r.t.
Eps, MinPts if p belongs to N_Eps(q) and q satisfies the core point condition: |N_Eps(q)| >= MinPts (example: MinPts = 5, Eps = 1 cm) 42" }, { "page_index": 671, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_043.png", "page_index": 671, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:23+07:00" }, "raw_text": "Density-Reachable and Density-Connected Density-reachable: A point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p_1, ..., p_n with p_1 = q and p_n = p such that p_{i+1} is directly density-reachable from p_i Density-connected: A point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts 43" }, { "page_index": 672, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_044.png", "page_index": 672, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:27+07:00" }, "raw_text": "DBSCAN: Density-Based Spatial Clustering of Applications with Noise Relies on a density-based notion of cluster: A cluster is defined as a maximal set of density-connected points Discovers clusters of arbitrary shape in spatial databases with noise [Figure: core, border, and outlier points for Eps = 1 cm, MinPts = 5.] 44" }, { "page_index": 673, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_045.png", "page_index": 673, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:30+07:00" }, "raw_text": "DBSCAN: The Algorithm Arbitrarily select a point p Retrieve all points density-reachable from p w.r.t. Eps and MinPts If p is a core point, a cluster is formed If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database Continue the process until all of the points have been processed 45" }, { "page_index": 674, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_046.png", "page_index": 674, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:36+07:00" }, "raw_text": "DBSCAN: Sensitive to Parameters Figure 8: DBSCAN results for DS1 with MinPts = 4 and Eps at (a) 0.5 and (b) 0.4. Figure 9: DBSCAN results for DS2 with MinPts = 4 and Eps at (a) 5.0, (b) 3.5, and (c) 3.0.
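The algorithm just described, and its sensitivity to Eps and MinPts shown in the figures, is easy to reproduce with a short implementation. A minimal DBSCAN sketch in Python, assuming Euclidean distance and in-memory data (the function and variable names are illustrative); label -1 marks noise:

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Grow a cluster from each unvisited core point via density-reachability."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]  # Eps-neighborhoods
    labels = np.full(n, -1)   # -1 = unassigned / noise
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue          # already assigned, or not a core point
        labels[i] = cluster   # seed a new cluster and expand it
        queue = list(neighbors[i])
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:   # core point: keep expanding
                    queue.extend(neighbors[j])
        cluster += 1
    return labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2)), [[10.0, 10.0]]])
print(dbscan(X, eps=1.0, min_pts=5))  # two clusters; the lone point stays -1 (noise)
```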
46" }, { "page_index": 675, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_047.png", "page_index": 675, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:40+07:00" }, "raw_text": "OPTICS: A Cluster-Ordering Method (1999) OPTICS: Ordering Points To Identify the Clustering Structure Ankerst, Breunig, Kriegel, and Sander (SIGMOD'99) Produces a special order of the database w.r.t. its density-based clustering structure This cluster-ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings Good for both automatic and interactive cluster analysis, including finding the intrinsic clustering structure Can be represented graphically or using visualization techniques 47" }, { "page_index": 676, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_048.png", "page_index": 676, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:46+07:00" }, "raw_text": "OPTICS: Some Extensions from DBSCAN Index-based: k = number of dimensions, N = 20, p = 75%, M = N(1-p) = 5 Complexity: O(N log N) Core distance of an object o: the minimum Eps such that o is a core point Reachability distance of p from o: max(core-distance(o), d(o, p)) Example: MinPts = 5, Eps = 3 cm; r(p1, o) = 2.8 cm, r(p2, o) = 4 cm 48" }, { "page_index": 677, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_049.png", "page_index": 677, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:49+07:00" }, "raw_text": "[Figure: reachability plot; the reachability distance (undefined for the first object of each cluster) plotted against the cluster order of the objects.] 49" }, { "page_index": 678, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_050.png", "page_index": 678, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:07:54+07:00" }, "raw_text": "Density-Based Clustering: OPTICS & Its Applications 50" }, { "page_index": 679, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_051.png", "page_index": 679, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:01+07:00" }, "raw_text": "DENCLUE: Using Statistical Density Functions DENsity-based CLUstEring by Hinneburg & Keim (KDD'98) Uses statistical density functions Gaussian influence of y on x: f_{Gaussian}(x, y) = e^{-\frac{d(x, y)^2}{2\sigma^2}} Total influence on x: f^D_{Gaussian}(x) = \sum_{i=1}^{N} e^{-\frac{d(x, x_i)^2}{2\sigma^2}} Gradient of x in the direction of x_i: \nabla f^D_{Gaussian}(x, x_i) = \sum_{i=1}^{N} (x_i - x) \cdot e^{-\frac{d(x, x_i)^2}{2\sigma^2}} Major features Solid mathematical foundation Good for data sets with large amounts of noise Allows a compact mathematical description of arbitrarily shaped clusters in high-dimensional data sets Significantly faster than existing algorithms (e.g., DBSCAN) But needs a large number of parameters 51
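The Gaussian influence and density functions above are a few lines of NumPy, and the density attractors can be found by climbing along the stated gradient. A sketch under made-up parameters (gaussian_density, hill_climb, and the fixed normalized step are my illustration, not DENCLUE's exact update rule):

```python
import numpy as np

def gaussian_density(x, data, sigma):
    """f(x) = sum_i exp(-||x - x_i||^2 / (2 sigma^2)): total influence on x."""
    d2 = ((data - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2)).sum()

def hill_climb(x, data, sigma, step=0.1, iters=100):
    """Follow the density gradient toward a density attractor (local maximum)."""
    x = x.astype(float).copy()
    for _ in range(iters):
        d2 = ((data - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        grad = ((data - x) * w[:, None]).sum(axis=0)  # gradient form from the slide
        if np.linalg.norm(grad) < 1e-9:
            break
        x += step * grad / np.linalg.norm(grad)
    return x

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(2, 0.2, (50, 2))])
print(hill_climb(np.array([0.4, 0.4]), data, sigma=0.3))  # drifts to the (0, 0) attractor
```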
" }, { "page_index": 680, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_052.png", "page_index": 680, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:06+07:00" }, "raw_text": "DENCLUE: Technical Essence Uses grid cells, but only keeps information about the grid cells that actually contain data points, and manages these cells in a tree-based access structure Influence function: describes the impact of a data point within its neighborhood The overall density of the data space can be calculated as the sum of the influence functions of all data points Clusters can be determined mathematically by identifying density attractors Density attractors are local maxima of the overall density function Center-defined clusters: assign to each density attractor the points density-attracted to it Arbitrary-shaped clusters: merge density attractors that are connected through paths of high density (> threshold) 52" }, { "page_index": 681, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_053.png", "page_index": 681, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:11+07:00" }, "raw_text": "Density Attractor [Figure: (a) a data set, and (c) its Gaussian density function over the data space, with the density attractors marked.] 53" }, { "page_index": 682, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_054.png", "page_index": 682, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:17+07:00" }, "raw_text": "Center-Defined and Arbitrary [Figure 3: examples of center-defined clusters for different sigma (sigma = 0.2, 0.6, 1.5). Figure 4: examples of arbitrary-shape clusters for different xi (xi = 2 and xi = 1).] 54" }, { "page_index": 683, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_055.png", "page_index": 683, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:21+07:00" }, "raw_text": "Chapter 10.
Cluster Analysis: Basic Concepts and Methods Cluster Analysis: Basic Concepts Partitioning Methods Hierarchical Methods Density-Based Methods Grid-Based Methods Evaluation of Clustering Summary 55" }, { "page_index": 684, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_056.png", "page_index": 684, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:25+07:00" }, "raw_text": "Grid-Based Clustering Method Using a multi-resolution grid data structure Several interesting methods STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz (1997) WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB'98) A multi-resolution clustering approach using the wavelet method CLIQUE: Agrawal, et al. (SIGMOD'98) Both grid-based and subspace clustering 56" }, { "page_index": 685, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_057.png", "page_index": 685, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:29+07:00" }, "raw_text": "STING: A Statistical Information Grid Approach Wang, Yang and Muntz (VLDB'97) The spatial area is divided into rectangular cells There are several levels of cells corresponding to different levels of resolution [Figure: grid hierarchy from the 1st layer down through the (i-1)-st layer to the i-th layer.] 57" }, { "page_index": 686, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_058.png", "page_index": 686, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:34+07:00" }, "raw_text": "The STING Clustering Method Each cell at a high level is partitioned into a number of smaller cells in the next lower level Statistical info of each cell is calculated and stored beforehand and is used to answer queries Parameters of higher-level cells can be easily calculated from the parameters of lower-level cells: count, mean, s (standard deviation), min, max; type of distribution: normal, uniform, etc.
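The claim that higher-level cell parameters are "easily calculated" from lower-level cells can be made concrete: counts add, and a parent cell's mean and variance follow from the children's sums and sums of squares. A small sketch, assuming each cell stores (n, mean, var, min, max) for a single attribute (the dictionary layout and merge_cells are my own illustration, not STING's actual data structure):

```python
def merge_cells(cells):
    """Parent-cell statistics from child cells, without revisiting raw data."""
    n = sum(c["n"] for c in cells)
    s = sum(c["n"] * c["mean"] for c in cells)                      # total sum
    ss = sum(c["n"] * (c["var"] + c["mean"] ** 2) for c in cells)   # total sum of squares
    mean = s / n
    return {"n": n, "mean": mean,
            "var": ss / n - mean ** 2,                              # population variance
            "min": min(c["min"] for c in cells),
            "max": max(c["max"] for c in cells)}

children = [{"n": 10, "mean": 2.0, "var": 1.0, "min": 0.0, "max": 4.0},
            {"n": 30, "mean": 5.0, "var": 2.0, "min": 1.0, "max": 9.0}]
print(merge_cells(children))  # {'n': 40, 'mean': 4.25, 'var': 3.4375, ...}
```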
Use a top-down approach to answer spatial data queries Start from a pre-selected layer, typically with a small number of cells For each cell in the current level, compute the confidence interval 58" }, { "page_index": 687, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_059.png", "page_index": 687, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:38+07:00" }, "raw_text": "STING Algorithm and Its Analysis Remove the irrelevant cells from further consideration When finished examining the current layer, proceed to the next lower level Repeat this process until the bottom layer is reached Advantages: Query-independent, easy to parallelize, incremental update O(K), where K is the number of grid cells at the lowest level Disadvantages: All the cluster boundaries are either horizontal or vertical, and no diagonal boundary is detected 59" }, { "page_index": 688, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_060.png", "page_index": 688, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:44+07:00" }, "raw_text": "CLIQUE (Clustering In QUEst) Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98) Automatically identifying subspaces of a high-dimensional data space that allow better clustering than the original space CLIQUE can be considered as both density-based and grid-based It partitions each dimension into the same number of equal-length intervals It partitions an m-dimensional data space into non-overlapping rectangular units A unit is dense if the fraction of total data points contained in the unit exceeds the input model parameter A cluster is a maximal set of connected dense units within a subspace 60" }, { "page_index": 689, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_061.png", "page_index": 689, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:48+07:00" }, "raw_text": "CLIQUE: The Major Steps Partition the data space and find the number of points that lie inside each cell of the partition Identify the subspaces that contain clusters using the Apriori principle Identify clusters Determine dense units in all subspaces of interest Determine connected dense units in all subspaces of interest
Generate a minimal description for the clusters Determine maximal regions that cover a cluster of connected dense units for each cluster Determine the minimal cover for each cluster 61" }, { "page_index": 690, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_062.png", "page_index": 690, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:54+07:00" }, "raw_text": "[Figure: CLIQUE example; salary (in $10,000 units) plotted against age (20 to 60); with density threshold t = 3, dense units intersect in the (age, salary) subspace around ages 30 to 50.] 62" }, { "page_index": 691, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_063.png", "page_index": 691, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:08:58+07:00" }, "raw_text": "Strengths and Weaknesses of CLIQUE Strengths Automatically finds subspaces of the highest dimensionality such that high-density clusters exist in those subspaces Insensitive to the order of records in input and does not presume some canonical data distribution Scales linearly with the size of input and has good scalability as the number of dimensions in the data increases Weakness The accuracy of the clustering result may be degraded at the expense of the simplicity of the method 63" }, { "page_index": 692, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_064.png", "page_index": 692, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:09:01+07:00" }, "raw_text": "Chapter 10. Cluster Analysis: Basic Concepts and Methods Cluster Analysis: Basic Concepts Partitioning Methods Hierarchical Methods Density-Based Methods Grid-Based Methods Evaluation of Clustering Summary 64" }, { "page_index": 693, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_065.png", "page_index": 693, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:09:07+07:00" }, "raw_text": "Assessing Clustering Tendency Assess if non-random structure exists in the data by measuring the probability that the data is generated by a uniform data distribution Test spatial randomness by a statistical test: the Hopkins statistic Given a dataset D regarded as a sample of a random variable o, determine how far away o is from being uniformly distributed in the data space Sample n points, p_1, ..., p_n, uniformly from the data space; for each p_i, find its nearest neighbor in D: x_i = min{dist(p_i, v)}, where v in D Sample n points, q_1, ..., q_n, uniformly from D; for each q_i, find its nearest neighbor in D - {q_i}: y_i = min{dist(q_i, v)}, where v in D and v != q_i Calculate the Hopkins statistic: H = \frac{\sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i} If D is uniformly distributed, \sum x_i and \sum y_i will be close to each other and H will be close to 0.5. If D is highly skewed, H is close to 0 65
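The Hopkins statistic above is straightforward to compute. A sketch following the slide's convention H = sum(y) / (sum(x) + sum(y)), with the uniform probes drawn from the bounding box of D (the function and variable names are mine):

```python
import numpy as np

def hopkins(D, n_samples=50, seed=0):
    """~0.5 for uniform data; near 0 for clustered data under this convention."""
    rng = np.random.default_rng(seed)
    lo, hi = D.min(axis=0), D.max(axis=0)
    # x_i: nearest-neighbor distance in D of a uniform random point p_i
    P = rng.uniform(lo, hi, size=(n_samples, D.shape[1]))
    x = np.array([np.linalg.norm(D - p, axis=1).min() for p in P])
    # y_i: nearest-neighbor distance of a sampled data point q_i in D - {q_i}
    idx = rng.choice(len(D), size=n_samples, replace=False)
    y = np.array([np.sort(np.linalg.norm(D - D[i], axis=1))[1] for i in idx])
    return y.sum() / (x.sum() + y.sum())

rng = np.random.default_rng(1)
clustered = np.vstack([rng.normal(0, 0.05, (100, 2)), rng.normal(1, 0.05, (100, 2))])
uniform = rng.uniform(0, 1, (200, 2))
print(hopkins(clustered), hopkins(uniform))  # well below 0.5 vs. roughly 0.5
```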
" }, { "page_index": 694, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_066.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_066.png", "page_index": 694, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:09:13+07:00" }, "raw_text": "Determine the Number of Clusters Empirical method: # of clusters \approx \sqrt{n/2} for a dataset of n points Elbow method: Use the turning point in the curve of the sum of within-cluster variance w.r.t. the # of clusters Cross-validation method: Divide a given data set into m parts Use m - 1 parts to obtain a clustering model Use the remaining part to test the quality of the clustering E.g., for each point in the test set, find the closest centroid, and use the sum of squared distances between all points in the test set and their closest centroids to measure how well the model fits the test set For any k > 0, repeat it m times, compare the overall quality measures w.r.t. different k's, and find the # of clusters that fits the data the best 66" }, { "page_index": 695, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_067.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_067.png", "page_index": 695, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:09:18+07:00" }, "raw_text": "Measuring Clustering Quality Two methods: extrinsic vs. intrinsic Extrinsic: supervised, i.e., the ground truth is available Compare a clustering against the ground truth using a clustering quality measure Ex.: BCubed precision and recall metrics Intrinsic: unsupervised, i.e., the ground truth is unavailable Evaluate the goodness of a clustering by considering how well the clusters are separated and how compact the clusters are Ex.: the silhouette coefficient 67
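As an example of an intrinsic measure, the silhouette coefficient just mentioned compares each object's mean intra-cluster distance a(o) with its smallest mean distance b(o) to another cluster, s(o) = (b - a) / max(a, b). A sketch assuming Euclidean distance and at least two non-singleton clusters (the names are mine):

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette over all objects in non-singleton clusters."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    scores = []
    for i in range(len(X)):
        same = (labels == labels[i])
        if same.sum() <= 1:
            continue  # singleton clusters contribute no score in this sketch
        a = dist[i][same].sum() / (same.sum() - 1)      # exclude the point itself
        b = min(dist[i][labels == c].mean()             # nearest other cluster
                for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
print(silhouette(X, labels))  # close to 1: compact, well-separated clusters
```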
" }, { "page_index": 696, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_068.png", "page_index": 696, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:09:24+07:00" }, "raw_text": "Measuring Clustering Quality: Extrinsic Methods Clustering quality measure: Q(C, C_g), for a clustering C given the ground truth C_g Q is good if it satisfies the following 4 essential criteria Cluster homogeneity: the purer, the better Cluster completeness: should assign objects belonging to the same category in the ground truth to the same cluster Rag bag: putting a heterogeneous object into a pure cluster should be penalized more than putting it into a rag bag (i.e., a \"miscellaneous\" or \"other\" category) Small cluster preservation: splitting a small category into pieces is more harmful than splitting a large category into pieces 68" }, { "page_index": 697, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_069.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_069.png", "page_index": 697, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:09:28+07:00" }, "raw_text": "Chapter 10. Cluster Analysis: Basic Concepts and Methods Cluster Analysis: Basic Concepts Partitioning Methods Hierarchical Methods Density-Based Methods Grid-Based Methods Evaluation of Clustering Summary 69" }, { "page_index": 698, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_070.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_070.png", "page_index": 698, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:09:33+07:00" }, "raw_text": "Cluster analysis groups objects based on their similarity and has wide applications Measures of similarity can be computed for various types of data Clustering algorithms can be categorized into partitioning methods, hierarchical methods, density-based methods, grid-based methods, and model-based methods K-means and k-medoids algorithms are popular partitioning-based clustering algorithms BIRCH and CHAMELEON are interesting hierarchical clustering algorithms, and there are also probabilistic hierarchical clustering algorithms DBSCAN, OPTICS, and DENCLUE are interesting density-based algorithms STING and CLIQUE are grid-based methods, where CLIQUE is also a subspace clustering algorithm The quality of clustering results can be evaluated in various ways 70" }, { "page_index": 699, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_071.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_071.png", "page_index": 699, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:09:39+07:00" }, "raw_text": "CS512-Spring 2011: An Introduction Coverage Cluster Analysis: Chapter 11 Outlier Detection: Chapter 12 Mining Sequence Data: BK2: Chapter 8 Mining Graph Data: BK2: Chapter 9 Social and Information
Network Analysis: BK2: Chapter 9 Partial coverage: Mark Newman: \"Networks: An Introduction\", Oxford U., 2010 Scattered coverage: Easley and Kleinberg, \"Networks, Crowds, and Markets: Reasoning About a Highly Connected World\", Cambridge U., 2010 Recent research papers Mining Data Streams: BK2: Chapter 8 Requirements One research project One class presentation (15 minutes) Two homeworks (no programming assignments) Two midterm exams (no final exam) 71" }, { "page_index": 700, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_072.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_072.png", "page_index": 700, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:09:47+07:00" }, "raw_text": "References (1) R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98. M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973. M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. OPTICS: Ordering points to identify the clustering structure. SIGMOD'99. F. Beil, M. Ester, and X. Xu. \"Frequent Term-Based Text Clustering\". KDD'02. M. M. Breunig, H.-P. Kriegel, R. Ng, and J. Sander. LOF: Identifying Density-Based Local Outliers. SIGMOD 2000. M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for discovering clusters in large spatial databases. KDD'96. M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial databases: Focusing techniques for efficient class identification. SSD'95. D. Fisher. Knowledge acquisition via incremental conceptual clustering. Machine Learning, 2:139-172, 1987. D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamic systems. VLDB'98. V. Ganti, J. Gehrke, and R. Ramakrishnan. CACTUS: Clustering Categorical Data Using Summaries. KDD'99. 72" }, { "page_index": 701, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_073.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_073.png", "page_index": 701, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:09:53+07:00" }, "raw_text": "References (2) D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An approach based on dynamic systems. In Proc. VLDB'98. S. Guha, R. Rastogi, and K. Shim. CURE: An efficient clustering algorithm for large databases. SIGMOD'98. S. Guha, R. Rastogi, and K. Shim. ROCK: A robust clustering algorithm for categorical attributes. In ICDE'99, pp. 512-521, Sydney, Australia, March 1999. A. Hinneburg and D. A. Keim. An Efficient Approach to Clustering in Large Multimedia Databases with Noise. KDD'98. A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988. G. Karypis, E.-H. Han, and V. Kumar. CHAMELEON: A Hierarchical Clustering Algorithm Using Dynamic Modeling. COMPUTER, 32(8): 68-75, 1999. L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley & Sons, 1990. E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98.
73" }, { "page_index": 702, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_074.png", "page_index": 702, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:10:01+07:00" }, "raw_text": "References (3) G, J. McLachlan and K.E. Bkasford. Mixture Models: Inference and Applications to Clustering. John Wiley and Sons, 1988. R. Ng and J. Han. Efficient and effective clustering method for spatial data mining. VLDB'94. L. Parsons, E. Hague and H. Liu, Subspace Clustering for High Dimensional Data: A Review, SlGKDD Explorations, 6(1), June 2004 E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large data sets. Proc. 1996 Int. Conf. on Pattern Recognition G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution clustering approach for very large spatial databases. VLDB'98. A. K. H. Tung, J. Han, L. V. S. Lakshmanan, and R. T. Ng. Constraint-Based Clustering in Large Databases, ICDT'01. A. K. H. Tung, J. Hou, and J. Han. Spatial Clustering in the Presence of Obstacles. lCDE'01 H. Wang, W. Wang, J. Yang, and P.S. Yu. Clustering by pattern similarity in large data sets, SIGMOD'02 W. Wang, Yang, R. Muntz, STING: A Statistical Information grid Approach to Spatial Data Mining,VLDB'97 T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH : An efficient data clustering method for very large databases. SIGMOD'96 X. Yin, J. Han, and P. S. Yu, \"LinkClus: Efficient Clustering via Heterogeneous Semantic Links\"VLDB'06 74" }, { "page_index": 703, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_075.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_075.png", "page_index": 703, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:10:04+07:00" }, "raw_text": "Slides unused in class 75" }, { "page_index": 704, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_076.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_076.png", "page_index": 704, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:10:17+07:00" }, "raw_text": "A Typical K-Medoids Algorithm (PAM) Total Cost = 20 10 10 10 9 9 9 8 8 8 7 Arbitrary 7 Assign 7 6 6 6 choose k each 5 5 5 object as remainin 4 4 4 initial 3 g object 3 3 2 medoids 2 to 2 1 1 nearest 1 0 0 0 3 4 5 6 7 8 9 10 medoids 0 2 3 4 5 6 7 8 9 10 0 0 2 3 4 5 6 7 8 9 10 K=2 Randomly select a nonmedoid object, Total Cost = 26 ramdom 10 10 Do loop 9 9 Compute Swapping O 8 8 total cost of Until no 7 7 6 swapping 6 change 5 5 If quality is 4 4 improved. 
76" }, { "page_index": 705, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_077.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_077.png", "page_index": 705, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:10:23+07:00" }, "raw_text": "PAM (Partitioning Around Medoids) (1987) PAM (Kaufman and Rousseeuw, 1987), built in S-Plus Use real objects to represent the clusters 1. Select k representative objects arbitrarily 2. For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TC_ih 3. For each pair of i and h, if TC_ih < 0, i is replaced by h; then assign each non-selected object to the most similar representative object 4. Repeat steps 2-3 until there is no change 77" }, { "page_index": 706, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_078.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_078.png", "page_index": 706, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:10:29+07:00" }, "raw_text": "PAM Clustering: Finding the Best Cluster Center Case 1: p currently belongs to medoid o_j. If o_j is replaced by o_random as a representative object and p is closest to one of the other representative objects o_i, then p is reassigned to o_i [Figure: the four reassignment cases when swapping o_j and o_random: 1. reassigned to o_i; 2. reassigned to o_random; 3. no change; 4. reassigned to o_random; legend: data object, cluster center, before swapping, after swapping.] 78" }, { "page_index": 707, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_079.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_079.png", "page_index": 707, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:10:33+07:00" }, "raw_text": "What Is the Problem with PAM? PAM is more robust than k-means in the presence of noise and outliers, because a medoid is less influenced by outliers or other extreme values than a mean PAM works efficiently for small data sets but does not scale well for large data sets.
O(k(n-k)^2) for each iteration, where n is # of data points and k is # of clusters -> Sampling-based method: CLARA (Clustering LARge Applications) 79" }, { "page_index": 708, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_080.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_080.png", "page_index": 708, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:10:37+07:00" }, "raw_text": "CLARA (Clustering Large Applications) (1990) CLARA (Kaufmann and Rousseeuw in 1990) Built in statistical analysis packages, such as S-Plus It draws multiple samples of the data set, applies PAM on each sample, and gives the best clustering as the output Strength: deals with larger data sets than PAM Weakness: Efficiency depends on the sample size A good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased 80" }, { "page_index": 709, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_081.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_081.png", "page_index": 709, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:10:43+07:00" }, "raw_text": "CLARANS (\"Randomized\" CLARA) (1994) CLARANS (A Clustering Algorithm based on Randomized Search) (Ng and Han'94) Draws a sample of neighbors dynamically The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids If the local optimum is found, it starts with a new randomly selected node in search of a new local optimum Advantages: More efficient and scalable than both PAM and CLARA Further improvement: Focusing techniques and spatial access structures (Ester et al.'95) 81" },
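CLARA's sample-then-PAM idea is a thin wrapper around the PAM sketch given earlier. A sketch reusing that pam() function (the sample sizes and names are illustrative, not the constants from the original paper):

```python
import numpy as np

def clara(X, k, n_samples=5, sample_size=40, seed=0):
    """Run PAM on several random samples; keep the medoid set with the
    lowest total cost measured on the FULL data set."""
    rng = np.random.default_rng(seed)
    best, best_cost = None, np.inf
    for _ in range(n_samples):
        idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
        sample_medoids, _ = pam(X[idx], k, seed=int(rng.integers(1 << 30)))
        medoids = X[idx][sample_medoids]   # medoid coordinates from this sample
        # a biased sample can still be rejected here, on the whole data set
        cost = np.linalg.norm(X[:, None, :] - medoids[None, :, :], axis=2).min(axis=1).sum()
        if cost < best_cost:
            best, best_cost = medoids, cost
    return best, best_cost
```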
{ "page_index": 710, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_082.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_082.png", "page_index": 710, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:10:46+07:00" }, "raw_text": "ROCK: Clustering Categorical Data. ROCK: RObust Clustering using linKs (S. Guha, R. Rastogi & K. Shim, ICDE'99). Major ideas: use links to measure similarity/proximity; not distance-based. Algorithm: sampling-based clustering: draw a random sample, cluster with links, label data on disk. Experiments: congressional voting and mushroom data. 82" }, { "page_index": 711, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_083.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_083.png", "page_index": 711, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:10:53+07:00" }, "raw_text": "Similarity Measure in ROCK. Traditional measures for categorical data may not work well, e.g., the Jaccard coefficient. Example: two groups (clusters) of transactions: C1: {a,b,c}, {a,b,d}, {a,b,e}, {a,c,d}, {a,c,e}, {a,d,e}, {b,c,d}, {b,c,e}, {b,d,e}, {c,d,e}; C2: {a,b,f}, {a,b,g}, {a,f,g}, {b,f,g}. Neighbors: two transactions are neighbors if sim(T1, T2) > threshold. Let T1 = {a,b,c}, T2 = {c,d,e}, T3 = {a,b,f}. T1 is connected to: {a,b,d}, {a,b,e}, {a,c,d}, {a,c,e}, {b,c,d}, {b,c,e}, {a,b,f}, {a,b,g}. T2 is connected to: {a,c,d}, {a,c,e}, {a,d,e}, {b,c,d}, {b,c,e}, {b,d,e}. T3 is connected to: {a,b,c}, {a,b,d}, {a,b,e}, {a,b,g}, {a,f,g}, {b,f,g}. Link similarity: the link similarity between two transactions is the # of common neighbors: link(T1, T2) = 4, since they have 4 common neighbors: {a,c,d}, {a,c,e}, {b,c,d}, {b,c,e}; link(T1, T3) = 3, since they have 3 common neighbors: {a,b,d}, {a,b,e}, {a,b,g}. 84" },
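A small sketch of the neighbor and link computation just described, on the toy transactions above; the slides leave the exact threshold unspecified, so Jaccard >= 0.5 is an assumption (it reproduces the link counts in the example).

```python
def jaccard(t1, t2):
    return len(t1 & t2) / len(t1 | t2)

# Toy transactions from the example above (clusters C1 and C2 together)
transactions = [frozenset(t) for t in
                [{'a','b','c'}, {'a','b','d'}, {'a','b','e'}, {'a','c','d'},
                 {'a','c','e'}, {'a','d','e'}, {'b','c','d'}, {'b','c','e'},
                 {'b','d','e'}, {'c','d','e'}, {'a','b','f'}, {'a','b','g'},
                 {'a','f','g'}, {'b','f','g'}]]

THETA = 0.5   # assumed neighbor threshold

# neighbors[t] = all transactions u with jaccard(t, u) >= THETA
neighbors = {t: {u for u in transactions if u != t and jaccard(t, u) >= THETA}
             for t in transactions}

def link(t1, t2):
    # link similarity = number of common neighbors
    return len(neighbors[t1] & neighbors[t2])

T1, T2 = frozenset({'a','b','c'}), frozenset({'c','d','e'})
print(link(T1, T2))   # 4, matching the example
```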
86" }, { "page_index": 715, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_087.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_087.png", "page_index": 715, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:11:15+07:00" }, "raw_text": "Computing Similarity with Aggregation Average similarity a:(0.9.3) b(0.95,2) and total weight 0.2 4 sim(na n) can be computed 10 11 12 13 14 from aggregated similarities b a sim(na np) = avg_sim(nan4) x s(n4, ns) x avg_sim(nns) = 0.9 x 0.2 x 0.95 = 0.171 To compute sim(na,nb): Find all pairs of sibling nodes n; and n, so that n, linked with n, and n, with nj. Calculate similarity (and weight) between na and n, w.r.t. n; and nj. Calculate weighted average similarity between n, and n, w.r.t. all such pairs. 87" }, { "page_index": 716, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_088.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_088.png", "page_index": 716, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:11:19+07:00" }, "raw_text": "Chapter 1o. Cluster Analysis: Basic Concepts and Methods Cluster Analysis: Basic Concepts Overview of Clustering Methods Partitioning Methods Hierarchical Methods Density-Based Methods Grid-Based Methods Summary 88" }, { "page_index": 717, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_089.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_089.png", "page_index": 717, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:11:26+07:00" }, "raw_text": "Link-Based Clustering: Calculate Similarities Based On Links Authors Proceedings Conferences The similarity between two Tom sigm0d03 objects x and y is defined as sigmod04 sigmod the average similarity between Mike sigmod05 objects linked with x and those with y: vldb03 Cathy vldb04 vldb I(a)I(b) C ZZ'sim(I(a),1,(b) vldb05 John i=1 j=1 aaai04 aaal Mary aaai05 Issue: Expensive to compute: For a dataset of N objects Jeh & Widom, KDD'2002: SimRank and M links, it takes O(N2) Two objects are similar if they are space and O(M2) time to linked with the same or similar compute all similarities objects 89" }, { "page_index": 718, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_090.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_090.png", "page_index": 718, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:11:31+07:00" }, "raw_text": "Observation 1: Hierarchical Structures Hierarchical structures often exist naturally among objects (e.g., taxonomy of animals) Relationships between articles and A hierarchical structure of words (Chakrabarti, Papadimitriou. 
{ "page_index": 718, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_090.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_090.png", "page_index": 718, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:11:31+07:00" }, "raw_text": "Observation 1: Hierarchical Structures. Hierarchical structures often exist naturally among objects (e.g., a taxonomy of animals); relationships between articles and words (Chakrabarti, Papadimitriou, Modha, Faloutsos, 2004). A hierarchical structure of products in Walmart: All -> grocery, electronics, apparel; electronics -> TV, DVD, camera. [Figure: product taxonomy and an articles-words matrix] 90" }, { "page_index": 719, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_091.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_091.png", "page_index": 719, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:11:37+07:00" }, "raw_text": "[Figure: distribution of SimRank similarities among DBLP authors, histogram over similarity values] A power-law distribution exists in the similarities: 56% of similarity entries are in [0.005, 0.015]; 1.4% of similarity entries are larger than 0.1. Can we design a data structure that stores the significant similarities and compresses insignificant ones? 91" }, { "page_index": 720, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_092.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_092.png", "page_index": 720, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:11:42+07:00" }, "raw_text": "A Novel Data Structure: SimTree. Each non-leaf node represents a group of similar lower-level nodes; each leaf node represents an object. Similarities between siblings are stored. [Figure: SimTree over products - consumer electronics -> digital cameras (Canon A40 digital camera, Sony V3 digital camera), TVs; apparels] 92" }, { "page_index": 721, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_093.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_093.png", "page_index": 721, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:11:47+07:00" }, "raw_text": "Similarity Defined by SimTree. Path-based node similarity between two leaf nodes: simp(n_7, n_8) = s(n_7, n_4) x s(n_4, n_5) x s(n_5, n_8). The similarity between two nodes is the average similarity between the objects linked with them in other SimTrees. Adjustment ratio for a node x = (average similarity between x and all other nodes) / (average similarity between x's parent and all other nodes). [Figure: SimTree with sibling similarities 0.2, 0.8, 0.9, 1.0 and an adjustment ratio of 0.3 for node n_7] 93" }, { "page_index": 722, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_094.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_094.png", "page_index": 722, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:11:52+07:00" }, "raw_text": "LinkClus: Efficient Clustering via Heterogeneous Semantic Links. Method: initialize a SimTree for objects of each type; repeat until stable: for each SimTree, update the similarities between its nodes using the similarities in the other SimTrees (the similarity between two nodes x and y is the average similarity between objects linked with them); adjust the structure of each SimTree (assign each node to the parent node that it is most similar to). For details: X. Yin, J. Han, and P. S.
Yu, \"LinkClus: Efficient Clustering via Heterogeneous Semantic Links\", VLDB'06 94" }, { "page_index": 723, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_095.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_095.png", "page_index": 723, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:11:57+07:00" }, "raw_text": "Initialization of SimTrees Initializing a SimTree Repeatedly find groups of tightly related nodes, which are merged into a higher-level node Tightness of a group of nodes For a group of nodes {n,, ..., nk}, its tightness is defined as the number of leaf nodes in other SimTrees that are connected to all of {n1, ..., nk} Leaf nodes in Nodes another SimTree 1 n1 2 The tightness of {n1,n2} is 3 3 4 n2 5 95" }, { "page_index": 724, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_096.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_096.png", "page_index": 724, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:03+07:00" }, "raw_text": "Finding Tight Groups by Freg. Pattern Mining Finding tight groups Frequent pattern mining Reduced to Transactions {n1} n1 The tightness of a {n1,n2} g1 2 {n2} group of nodes is the 3 n2 {n1,n2} 4 support of a frequent {n1,n2} 5 pattern {n2,n3,n4 n3 6 {n4} 92 7 n3,n4 n4 8 n3,n4 9 Procedure of initializing a tree Start from leaf nodes (level-0) At each level /, find non-overlapping groups of similar nodes with frequent pattern mining 96" }, { "page_index": 725, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_097.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_097.png", "page_index": 725, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:07+07:00" }, "raw_text": "Adjusting SimTree Structures 0.9 n 0.8 After similarity changes, the tree structure also needs to be changed If a node is more similar to its parent's sibling, then move it to be a child of that sibling Try to move each node to its parent's sibling that it is most similar to, under the constraint that each parent node can have at most c children 97" }, { "page_index": 726, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_098.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_098.png", "page_index": 726, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:11+07:00" }, "raw_text": "Complexity For two types of objects, W in each, and M linkages between them. 
Time and space: updating similarities: O(M(logN)^2) time, O(M+N) space; adjusting tree structures: O(N) time, O(N) space; LinkClus overall: O(M(logN)^2) time, O(M+N) space; SimRank: O(M^2) time, O(N^2) space. 98" }, { "page_index": 727, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_099.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_099.png", "page_index": 727, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:18+07:00" }, "raw_text": "Experiment: Email Dataset (F. Nielsen, www.imm.dtu.dk/rem/data/Email-1431.zip): 370 emails on conferences, 272 on jobs, and 789 spam emails. Accuracy: measured against manually labeled data, as the % of pairs of objects in the same cluster that share a common label. Results (approach: accuracy, time in seconds): LinkClus: 0.8026, 1579.6; SimRank: 0.7965, 39160; ReCom: 0.5711, 74.6; F-SimRank: 0.3688, 479.7; CLARANS: 0.4768, 8.55. Approaches compared: SimRank (Jeh & Widom, KDD 2002): computes pair-wise similarities. SimRank with FingerPrints (F-SimRank; Fogaras & Racz, WWW 2005): pre-computes a large sample of random paths from each object and uses the samples of two objects to estimate their SimRank similarity. ReCom (Wang et al., SIGIR 2003): iteratively clusters objects using the cluster labels of linked objects. 99" }, { "page_index": 728, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_100.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_100.png", "page_index": 728, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:25+07:00" }, "raw_text": "WaveCluster: Clustering by Wavelet Analysis (1998). Sheikholeslami, Chatterjee, and Zhang (VLDB'98). A multi-resolution clustering approach which applies a wavelet transform to the feature space; both grid-based and density-based. Wavelet transform: a signal processing technique that decomposes a signal into different frequency sub-bands. Data are transformed so as to preserve the relative distances between objects at different levels of resolution, allowing natural clusters to become more distinguishable. [Figure: a time-domain signal and its frequency sub-bands] 100" }, { "page_index": 729, "chapter_num": 9, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_101.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_101.png", "page_index": 729, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:29+07:00" }, "raw_text": "The WaveCluster Algorithm. How to apply the wavelet transform to find clusters: summarize the data by imposing a multidimensional grid structure onto the data space; the multidimensional spatial data objects are represented in an n-dimensional feature space; apply the wavelet transform on the feature space to find the dense regions; apply the wavelet transform multiple times, which results in clusters at different scales from fine to coarse. Major features: complexity O(N); detects arbitrarily shaped clusters at different scales; not sensitive to noise or input order; only applicable to low-dimensional data. 101" }, { "page_index": 730, "chapter_num": 9, "source_file":
"/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_102.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_9/slide_102.png", "page_index": 730, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:34+07:00" }, "raw_text": "Quantization & Transformation Quantize data into m-D grid struc then wavelet transform a) scale 1: high resolution b) scale 2: medium resolution c) scale 3: low resolution Figure 1: A sample 2-dimensional feature space. :: b}" }, { "page_index": 731, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_001.png", "page_index": 731, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:37+07:00" }, "raw_text": "Mining: Data Concepts and Techniques (3rd ed.) - Chapter 11 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University @2011 Han, Kamber & Pei. All rights reserved. 1" }, { "page_index": 732, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_002.png", "page_index": 732, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:43+07:00" }, "raw_text": ": Basic Cluster Analysis Methods (Chap. 1O) Reyiew: Cluster Analysis: Basic Concepts Group data so that object similarity is high within clusters but low across clusters Partitioning Methods K-means and k-medoids algorithms and their refinements Hierarchical Methods Agglomerative and divisive method, Birch, Cameleon Density-Based Methods DBScan, Optics and DenCLu Grid-Based Methods STING and CLIQUE (subspace clustering) Evaluation of Clustering Assess clustering tendency, determine # of clusters, and measure clustering quality 2" }, { "page_index": 733, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_003.png", "page_index": 733, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:47+07:00" }, "raw_text": "K-Means Clustering K=2 Arbitrarily Update the partition cluster objects into centroids k groups The initial data set Loop if Reassign Lobjects needed Partition objects into k nonempty subsets Repeat Compute centroid (i.e., mean Update the cluster point) for each partition centroids Assign each object to the cluster of its nearest centroid Until no change 3" }, { "page_index": 734, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_004.png", "page_index": 734, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:52+07:00" }, "raw_text": "Hierarchical Clustering Use distance matrix as clustering criteria. 
{ "page_index": 734, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_004.png", "page_index": 734, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:52+07:00" }, "raw_text": "Hierarchical Clustering. Uses a distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition. [Figure: agglomerative (AGNES) merges objects a, b, c, d, e bottom-up over steps 0-4; divisive (DIANA) splits top-down over the same steps in reverse] 4" }, { "page_index": 735, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_005.png", "page_index": 735, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:12:58+07:00" }, "raw_text": "Distance between Clusters. Single link: smallest distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = min(t_ip, t_jq). Complete link: largest distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = max(t_ip, t_jq). Average: average distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = avg(t_ip, t_jq). Centroid: distance between the centroids of two clusters, i.e., dist(K_i, K_j) = dist(C_i, C_j). Medoid: distance between the medoids of two clusters, i.e., dist(K_i, K_j) = dist(M_i, M_j), where a medoid is a chosen, centrally located object in the cluster. 5" }, { "page_index": 736, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_006.png", "page_index": 736, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:05+07:00" }, "raw_text": "BIRCH and the Clustering Feature (CF) Tree Structure. Example: CF = (5, (16,30), (54,190)) for the five points (3,4), (2,6), (4,5), (4,7), (3,8). [Figure: CF tree with a root and non-leaf nodes holding entries CF_1, ..., CF_6 with child pointers, and leaf nodes chained by prev/next pointers; branching factor B = 7, leaf capacity L = 6] 6" }, { "page_index": 737, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_007.png", "page_index": 737, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:09+07:00" }, "raw_text": "Overall Framework of CHAMELEON. Construct a sparse k-NN graph (p and q are connected if q is among the top-k closest neighbors of p), partition the graph, then merge partitions into final clusters. Relative interconnectivity: connectivity of c_1 and c_2 over their internal connectivity. Relative closeness: closeness of c_1 and c_2 over their internal closeness. [Figure: data set -> k-NN graph -> partitions -> final clusters] 7" }, { "page_index": 738, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_008.png", "page_index": 738, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:13+07:00" }, "raw_text": "Density-Based Clustering: DBSCAN. Two parameters: Eps: maximum radius of the neighbourhood; MinPts: minimum number of points in an Eps-neighbourhood of that point. Directly density-reachable: a point p is directly density-reachable from a point q w.r.t. Eps, MinPts if p belongs to N_Eps(q) and q satisfies the core point condition |N_Eps(q)| >= MinPts. E.g., MinPts = 5, Eps = 1 cm. 8" },
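A tiny sketch of the Eps-neighborhood, core-point condition, and direct density-reachability defined above; the data and parameter values are illustrative.

```python
def eps_neighborhood(points, q, eps):
    # N_Eps(q): all points within distance eps of q (q itself included)
    return [p for p in points
            if ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 <= eps]

def is_core(points, q, eps, min_pts):
    # core point condition: |N_Eps(q)| >= MinPts
    return len(eps_neighborhood(points, q, eps)) >= min_pts

def directly_density_reachable(points, p, q, eps, min_pts):
    # p is directly density-reachable from q iff p is in N_Eps(q)
    # and q is a core point
    return p in eps_neighborhood(points, q, eps) and is_core(points, q, eps, min_pts)

pts = [(0, 0), (0.5, 0), (0, 0.5), (0.4, 0.4), (0.2, 0.2), (5, 5)]
print(directly_density_reachable(pts, (0.5, 0), (0.2, 0.2), eps=1.0, min_pts=5))  # True
print(is_core(pts, (5, 5), eps=1.0, min_pts=5))                                   # False
```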
{ "page_index": 739, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_009.png", "page_index": 739, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:18+07:00" }, "raw_text": "Density-Based Clustering: OPTICS & Its Applications. [Figure: reachability plot] 9" }, { "page_index": 740, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_010.png", "page_index": 740, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:24+07:00" }, "raw_text": "DENCLUE: Center-Defined and Arbitrary-Shape Clusters. [Figure 3: examples of center-defined clusters for different sigma (0.2, 0.6, 1.5); Figure 4: examples of arbitrary-shape clusters for different xi] 10" }, { "page_index": 741, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_011.png", "page_index": 741, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:28+07:00" }, "raw_text": "STING: A Statistical Information Grid Approach. Wang, Yang, and Muntz (VLDB'97). The spatial area is divided into rectangular cells. There are several levels of cells corresponding to different levels of resolution: 1st layer, ..., (i-1)st layer, i-th layer. 11" }, { "page_index": 742, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_012.png", "page_index": 742, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:33+07:00" }, "raw_text": "Evaluation of Clustering Quality. Assessing clustering tendency: assess whether non-random structure exists in the data, by measuring the probability that the data is generated by a uniform data distribution. Determining the number of clusters: empirical method: # of clusters ~ sqrt(n/2); elbow method: use the turning point in the curve of the sum of within-cluster variance w.r.t. the # of clusters; cross-validation method. Measuring clustering quality: extrinsic (supervised): compare a clustering against the ground truth using a clustering quality measure; intrinsic (unsupervised): evaluate the goodness of a clustering by considering how well the clusters are separated and how compact they are. 12" }, { "page_index": 743, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_013.png", "page_index": 743, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:39+07:00" }, "raw_text": "Outline of Advanced Clustering Analysis. Probability Model-Based Clustering: each object
may take a probability to belong to a cluster. Clustering High-Dimensional Data: curse of dimensionality: difficulty of distance measures in high-D space. Clustering Graphs and Network Data: similarity measurement and clustering methods for graphs and networks. Clustering with Constraints: cluster analysis under different kinds of constraints, e.g., those raised from background knowledge or the spatial distribution of the objects. 13" }, { "page_index": 744, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_014.png", "page_index": 744, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:42+07:00" }, "raw_text": "Chapter 11. Cluster Analysis: Advanced Methods. Probability Model-Based Clustering. Clustering High-Dimensional Data. Clustering Graphs and Network Data. Clustering with Constraints. Summary. 14" }, { "page_index": 745, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_015.png", "page_index": 745, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:48+07:00" }, "raw_text": "Fuzzy Set and Fuzzy Clustering. Clustering methods discussed so far: every data object is assigned to exactly one cluster. Some applications may need fuzzy or soft cluster assignment; e.g., an e-game could belong to both entertainment and software. Methods: fuzzy clusters and probabilistic model-based clusters. Fuzzy cluster: a fuzzy set S: F_S: X -> [0, 1] (value between 0 and 1). Example: the popularity of cameras is defined as a fuzzy mapping: Pop(o) = 1 if 1,000 or more units of o are sold, and i/1000 if i (i < 1000) units of o are sold. Camera sales (units): A: 50, B: 1320, C: 860, D: 270. Then A(0.05), B(1), C(0.86), D(0.27). 15" }, { "page_index": 746, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_016.png", "page_index": 746, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:13:56+07:00" }, "raw_text": "Fuzzy (Soft) Clustering. Example: let the cluster features be C1: \"digital camera\" and \"lens\", and C2: \"computer\". Reviews and keywords: R1: digital camera, lens; R2: digital camera; R3: lens; R4: digital camera, lens, computer; R5: computer, CPU; R6: computer, computer game. A possible partition matrix M (rows R1-R6, columns C1, C2): (1, 0), (1, 0), (1, 0), (2/3, 1/3), (0, 1), (0, 1). Fuzzy clustering: k fuzzy clusters C1, ..., Ck, represented as a partition matrix M = [w_ij]. P1: for each object o_i and cluster C_j, 0 <= w_ij <= 1 (fuzzy set). P2: for each object o_i, sum_{j=1}^{k} w_ij = 1 (equal participation in the clustering). P3: for each cluster C_j, 0 < sum_{i=1}^{n} w_ij < n (ensures there is no empty cluster). For an object o_i, the sum of squared error (SSE), where p is a parameter: SSE(o_i) = sum_{j=1}^{k} w_ij^p dist(o_i, c_j)^2. For a cluster C_j: SSE(C_j) = sum_{i=1}^{n} w_ij^p dist(o_i, c_j)^2. Measure how well a clustering fits the data: SSE(C) = sum_{i=1}^{n} sum_{j=1}^{k} w_ij^p dist(o_i, c_j)^2. 16" },
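A small sketch that evaluates the clustering-fit SSE(C) defined above for a given partition matrix; the 1-D toy objects, centers, and p = 2 are assumptions for illustration.

```python
def fuzzy_sse(points, centers, M, p=2):
    # SSE(C) = sum_i sum_j w_ij^p * dist(o_i, c_j)^2
    return sum(M[i][j] ** p * (points[i] - centers[j]) ** 2
               for i in range(len(points)) for j in range(len(centers)))

# Toy 1-D objects and two cluster centers
points = [1.0, 2.0, 9.0, 10.0, 5.5]
centers = [1.5, 9.5]
# Partition matrix: each row sums to 1 (P2); no column is all zeros (P3)
M = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.5, 0.5]]

print(fuzzy_sse(points, centers, M))   # lower SSE(C) means a better fit
```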
{ "page_index": 747, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_017.png", "page_index": 747, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:14:02+07:00" }, "raw_text": "Probabilistic Model-Based Clustering. Cluster analysis is to find hidden categories. A hidden category (i.e., probabilistic cluster) is a distribution over the data space, which can be mathematically represented using a probability density function (or distribution function). Ex. two categories for digital cameras sold: consumer line vs. professional line; density functions f1, f2 for C1, C2 are obtained by probabilistic clustering. [Figure: two probability density curves over price, around 1000] A mixture model assumes that a set of observed objects is a mixture of instances from multiple probabilistic clusters, and conceptually each observed object is generated independently. Our task: infer a set of k probabilistic clusters that is most likely to generate D using this data generation process. 17" }, { "page_index": 748, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_018.png", "page_index": 748, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:14:08+07:00" }, "raw_text": "Model-Based Clustering. A set C of k probabilistic clusters C1, ..., Ck with probability density functions f1, ..., fk, respectively, and their probabilities w1, ..., wk. The probability of an object o generated by cluster Cj is P(o|Cj) = wj fj(o). The probability of o generated by the set of clusters C is P(o|C) = sum_{j=1}^{k} wj fj(o). Since objects are assumed to be generated independently, for a data set D = {o_1, ..., o_n} we have P(D|C) = prod_{i=1}^{n} P(o_i|C) = prod_{i=1}^{n} sum_{j=1}^{k} wj fj(o_i). Task: find a set C of k probabilistic clusters s.t. P(D|C) is maximized. However, maximizing P(D|C) is often intractable, since the probability density function of a cluster can take an arbitrarily complicated form. To make it computationally feasible (as a compromise), assume the probability density functions are parameterized distributions. 18" }, { "page_index": 749, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_019.png", "page_index": 749, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:14:15+07:00" }, "raw_text": "Univariate Gaussian Mixture Model. O = {o_1, ..., o_n} (n observed objects), Theta = {theta_1, ..., theta_k} (parameters of the k distributions), and P(o_i|theta_j) is the probability that o_i is generated from the j-th distribution using parameter theta_j. We have P(o_i|Theta) = sum_{j=1}^{k} wj Pj(o_i|theta_j) and P(O|Theta) = prod_{i=1}^{n} sum_{j=1}^{k} wj Pj(o_i|theta_j). Univariate Gaussian mixture model: assume the probability density function of each cluster follows a 1-d Gaussian distribution, and suppose that there are k clusters, each with mean mu_j and standard deviation sigma_j, i.e., theta_j = (mu_j, sigma_j). Then P(o_i|theta_j) = (1 / (sqrt(2 pi) sigma_j)) exp(-(o_i - mu_j)^2 / (2 sigma_j^2)), and P(O|Theta) = prod_{i=1}^{n} sum_{j=1}^{k} (wj / (sqrt(2 pi) sigma_j)) exp(-(o_i - mu_j)^2 / (2 sigma_j^2)). 19" },
{ "page_index": 750, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_020.png", "page_index": 750, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:14:21+07:00" }, "raw_text": "The EM (Expectation Maximization) Algorithm. The k-means algorithm has two steps at each iteration: Expectation step (E-step): given the current cluster centers, each object is assigned to the cluster whose center is closest to it: an object is expected to belong to the closest cluster. Maximization step (M-step): given the cluster assignment, the algorithm adjusts each center so that the sum of the distances from the objects assigned to this cluster to the new center is minimized. The EM algorithm: a framework to approach maximum likelihood or maximum a posteriori estimates of parameters in statistical models: the E-step assigns objects to clusters according to the current fuzzy clustering or parameters of probabilistic clusters; the M-step finds the new clustering or parameters that minimize the sum of squared error (SSE) or maximize the expected likelihood. 20" },
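A compact sketch of EM for the univariate Gaussian mixture defined above (E-step responsibilities, M-step mean/deviation/weight re-estimation); the initialization, iteration count, and variance floor are illustrative choices.

```python
import math, random

def em_gmm_1d(data, k=2, iters=50, seed=0):
    random.seed(seed)
    mu = random.sample(data, k)                  # random initial means
    sigma = [1.0] * k
    w = [1.0 / k] * k
    for _ in range(iters):
        # E-step: r[i][j] = P(theta_j | o_i, Theta), the probability that
        # object o_i belongs to the j-th distribution
        r = []
        for o in data:
            dens = [w[j] / (math.sqrt(2 * math.pi) * sigma[j])
                    * math.exp(-(o - mu[j]) ** 2 / (2 * sigma[j] ** 2))
                    for j in range(k)]
            s = sum(dens)
            r.append([d / s for d in dens])
        # M-step: re-estimate (mu_j, sigma_j, w_j) to raise the expected likelihood
        for j in range(k):
            nj = sum(r[i][j] for i in range(len(data)))
            mu[j] = sum(r[i][j] * data[i] for i in range(len(data))) / nj
            var = sum(r[i][j] * (data[i] - mu[j]) ** 2 for i in range(len(data))) / nj
            sigma[j] = max(math.sqrt(var), 1e-6)   # avoid collapse to zero variance
            w[j] = nj / len(data)
    return mu, sigma, w

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]
print(em_gmm_1d(data))
```

The fuzzy-clustering example that follows runs the same E/M alternation with distance-based weights instead of Gaussian densities.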
{ "page_index": 751, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_021.png", "page_index": 751, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:14:33+07:00" }, "raw_text": "Fuzzy Clustering Using the EM Algorithm. Example: six points a(3,3), b(4,10), c(9,6), d(14,8), e(18,11), f(21,7); initially, let c1 = a and c2 = b. 1st E-step: assign each object o to the clusters with weights w_{o,c1} = (1/dist(o,c1)^2) / (1/dist(o,c1)^2 + 1/dist(o,c2)^2), and symmetrically for c2; e.g., w_{c,c1} = 41 / (45 + 41) = 0.48. 1st M-step: recalculate the centroids according to the partition matrix, minimizing the SSE; e.g., c1 = (sum over each point o of w_{o,c1}^2 o) / (sum over each point of w_{o,c1}^2), so c1 = ((1^2*3 + 0^2*4 + 0.48^2*9 + 0.42^2*14 + 0.41^2*18 + 0.47^2*21) / (1^2 + 0^2 + 0.48^2 + 0.42^2 + 0.41^2 + 0.47^2), ...) = (8.47, 5.12). Iteratively calculate this until the cluster centers converge or the change is small enough. [Table: iterations of M^T (rows c1, c2; columns a-f) and recomputed centers: iter 1: (1, 0, 0.48, 0.42, 0.41, 0.47) / (0, 1, 0.52, 0.58, 0.59, 0.53), c1 = (8.47, 5.12), c2 = (10.42, 8.99); iter 2: (0.73, 0.49, 0.91, 0.26, 0.33, 0.42) / (0.27, 0.51, 0.09, 0.74, 0.67, 0.58), c1 = (8.51, 6.11), c2 = (14.42, 8.69); iter 3: (0.80, 0.76, 0.99, 0.02, 0.14, 0.23) / (0.20, 0.24, 0.01, 0.98, 0.86, 0.77), c1 = (6.40, 6.24), c2 = (16.55, 8.64)] 21" }, { "page_index": 752, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_022.png", "page_index": 752, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:14:40+07:00" }, "raw_text": "Univariate Gaussian Mixture Model (repeat of slide 19 for reference). P(o_i|theta_j) = (1 / (sqrt(2 pi) sigma_j)) exp(-(o_i - mu_j)^2 / (2 sigma_j^2)); P(O|Theta) = prod_{i=1}^{n} sum_{j=1}^{k} wj Pj(o_i|theta_j). 22" }, { "page_index": 753, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_023.png", "page_index": 753, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:14:46+07:00" }, "raw_text": "Computing Mixture Models with EM. Given n objects O = {o_1, ..., o_n}, we want to mine a set of parameters Theta = {theta_1, ..., theta_k} s.t. P(O|Theta) is maximized, where theta_j = (mu_j, sigma_j) are the mean and standard deviation of the j-th univariate Gaussian distribution. We initially assign random values to the parameters Theta, then iteratively conduct the E- and M-steps until convergence or a sufficiently small change. At the E-step, for each object o_i, calculate the probability that o_i belongs to each distribution: P(theta_j|o_i, Theta) = P(o_i|theta_j) / sum_{l=1}^{k} P(o_i|theta_l). At the M-step, adjust the parameters theta_j = (mu_j, sigma_j) so that the expected likelihood P(O|Theta) is maximized: mu_j = (sum_{i=1}^{n} o_i P(theta_j|o_i, Theta)) / (sum_{i=1}^{n} P(theta_j|o_i, Theta)), and sigma_j = sqrt((sum_{i=1}^{n} P(theta_j|o_i, Theta) (o_i - mu_j)^2) / (sum_{i=1}^{n} P(theta_j|o_i, Theta))). 23" }, { "page_index": 754, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_024.png", "page_index": 754, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:14:51+07:00" }, "raw_text": "Advantages and Disadvantages of Mixture Models. Strengths: mixture models are more general than partitioning and fuzzy clustering; clusters can be characterized by a small number of parameters; the results may satisfy the statistical assumptions of the generative models. Weaknesses: converges to a local optimum (overcome: run multiple times with
random initialization) Computationally expensive if the number of distributions is large or the data set contains very few observed data points Need large data sets Hard to estimate the number of clusters 24" }, { "page_index": 755, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_025.png", "page_index": 755, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:14:55+07:00" }, "raw_text": "Chapter 11. Cluster Analysis: Advanced Methods Probability Model-Based Clustering Clustering High-Dimensional Data Clustering Graphs and Network Data Clustering with Constraints Summary 25" }, { "page_index": 756, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_026.png", "page_index": 756, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:15:00+07:00" }, "raw_text": "Clustering High-Dimensional Data Clustering high-dimensional data (How high is high-D in clustering?) Many applications: text documents, DNA micro-array data Major challenges. Many irrelevant dimensions may mask clusters Distance measure becomes meaningless-due to equi-distance Clusters may exist only in some subspaces Methods Subspace-clustering: Search for clusters existing in subspaces of the given high dimensional data space CLIQUE, ProClus, and bi-clustering approaches Dimensionality reduction approaches: Construct a much lower dimensional space and search for clusters there (may construct new dimensions by combining some dimensions in the original data) Dimensionality reduction methods and spectral clustering 26" }, { "page_index": 757, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_027.png", "page_index": 757, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:15:09+07:00" }, "raw_text": "Traditional Distance Measures May Not Be Effective on High-D Data Traditional distance measure could be dominated by noises in many dimensions Ex. Which pairs of customers are more similar? 
[Table: customer purchase vectors over products P1-P10: Ada (1,0,0,0,0,0,0,0,0,0); Bob (0,0,0,0,0,0,0,0,0,1); Cathy (1,0,0,0,1,0,0,0,0,1)] By Euclidean distance we get dist(Ada, Bob) = dist(Bob, Cathy) = dist(Ada, Cathy) = sqrt(2), although Ada and Cathy look more similar. Clustering should not only consider dimensions but also attributes (features): feature transformation is effective only if most dimensions are relevant (PCA & SVD are useful when features are highly correlated/redundant); feature selection is useful to find a subspace where the data have nice clusters. 27" }, { "page_index": 758, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_028.png", "page_index": 758, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:15:16+07:00" }, "raw_text": "The Curse of Dimensionality (graphs adapted from Parsons et al., KDD Explorations 2004). Data in only one dimension is relatively packed. Adding a dimension \"stretches\" the points across that dimension, making them further apart. Adding more dimensions makes the points even further apart: high-dimensional data is extremely sparse. Distance measures become meaningless due to equi-distance. [Figure: points packed in one dimension, then spread as dimensions are added: (b) 6 objects in one unit bin; (c) 4 objects in one unit bin] 28" }, { "page_index": 759, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_029.png", "page_index": 759, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:15:24+07:00" }, "raw_text": "Why Subspace Clustering? (adapted from Parsons et al., SIGKDD Explorations 2004). Clusters may exist only in some subspaces. Subspace clustering: find clusters in all the subspaces. [Figure: the same data projected onto dimensions a, b, c individually and onto the pairs (a,b), (b,c), (a,c); different clusters emerge in different projections]" }, { "page_index": 760, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_030.png", "page_index": 760, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:15:28+07:00" }, "raw_text": "Subspace Clustering Methods. Subspace search methods: search various subspaces to find clusters: bottom-up approaches; top-down approaches. Correlation-based clustering methods: e.g., PCA-based approaches. Bi-clustering methods: optimization-based methods; enumeration methods." },
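A two-line check of the Euclidean distances claimed in the customer example above (Ada, Bob, Cathy).

```python
from math import dist  # Euclidean distance, Python 3.8+

ada   = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
bob   = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
cathy = [1, 0, 0, 0, 1, 0, 0, 0, 0, 1]

# Each pair differs in exactly two coordinates, so every distance is sqrt(2)
print(dist(ada, bob), dist(bob, cathy), dist(ada, cathy))
```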
{ "page_index": 761, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_031.png", "page_index": 761, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:15:33+07:00" }, "raw_text": "Subspace Clustering Method (I): Subspace Search Methods. Search various subspaces to find clusters. Bottom-up approaches: start from low-D subspaces and search higher-D subspaces only when there may be clusters in such subspaces; various pruning techniques reduce the number of higher-D subspaces to be searched; ex. CLIQUE (Agrawal et al., 1998). Top-down approaches: start from the full space and search smaller subspaces recursively; effective only if the locality assumption holds: it restricts that the subspace of a cluster can be determined by the local neighborhood; ex. PROCLUS (Aggarwal et al., 1999): a k-medoid-like method. 31" }, { "page_index": 762, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_032.png", "page_index": 762, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:15:41+07:00" }, "raw_text": "CLIQUE: Subspace Clustering with Apriori Pruning. [Figure: grid-density plots of salary vs. age and vacation (weeks) vs. age with density threshold t = 3; dense units in both 2-D subspaces intersect around ages 30-50] 32" }, { "page_index": 763, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_033.png", "page_index": 763, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:15:47+07:00" }, "raw_text": "Subspace Clustering Method (II): Correlation-Based Methods. Subspace search methods: similarity based on distance or density. Correlation-based methods: based on advanced correlation models; ex. PCA-based approach: apply PCA (Principal Component Analysis) to derive a set of new, uncorrelated dimensions, then mine clusters in the new space or its subspaces. Other space transformations: Hough transform; fractal dimensions. 33" }, { "page_index": 764, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_034.png", "page_index": 764, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:15:55+07:00" }, "raw_text": "Subspace Clustering Method (III): Bi-Clustering Methods. Bi-clustering: cluster both objects and attributes simultaneously (treat objects and attributes in a symmetric way). Four requirements: only a small set of objects participate in a cluster; a cluster only involves a small number of attributes; an object may participate in multiple clusters, or not participate in any cluster at all; an attribute may be involved in multiple clusters, or not be involved in any cluster at all. [Figure: gene x sample/condition matrix and customer x product matrix with entries w_11 ... w_nm] Ex. 1: gene expression or microarray data: a gene x sample/condition matrix; each element in the matrix, a real number, records the expression level of a gene under a specific condition. Ex. 2:
Clustering customers and products: another bi-clustering problem. 34" }, { "page_index": 765, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_035.png", "page_index": 765, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:16:05+07:00" }, "raw_text": "Types of Bi-Clusters. Let A = {a_1, ..., a_n} be a set of genes and B = {b_1, ..., b_m} a set of conditions. A bi-cluster: a submatrix where genes and conditions follow some consistent patterns. Four types of bi-clusters (ideal cases): (1) Bi-clusters with constant values: for any i in I and j in J, e_ij = c. (2) Bi-clusters with constant values on rows: e_ij = c + alpha_i (e.g., rows 10 10 10 10 10 / 20 20 20 20 20 / 50 50 50 50 50 / 0 0 0 0 0); it can also be constant values on columns. (3) Bi-clusters with coherent values (aka pattern-based clusters): e_ij = c + alpha_i + beta_j (e.g., rows 10 50 30 70 20 / 20 60 40 80 30 / 50 90 70 110 60 / 0 40 20 60 10). (4) Bi-clusters with coherent evolutions on rows: (e_i1j1 - e_i1j2)(e_i2j1 - e_i2j2) >= 0, i.e., we are only interested in the up- or down-regulated changes across genes or conditions, without constraining the exact values (e.g., rows 10 50 30 70 20 / 20 100 50 1000 30 / 50 100 90 120 80 / 0 80 20 100 10). 35" }, { "page_index": 766, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_036.png", "page_index": 766, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:16:11+07:00" }, "raw_text": "Bi-Clustering Methods. Real-world data is noisy, so we try to find approximate bi-clusters. Methods: optimization-based methods vs. enumeration methods. Optimization-based methods: try to find one submatrix at a time that achieves the best significance as a bi-cluster; due to the computational cost, greedy search is employed to find locally optimal bi-clusters; ex. delta-Cluster algorithm (Cheng and Church, ISMB'2000). Enumeration methods: use a tolerance threshold to specify the degree of noise allowed in the bi-clusters to be mined, then try to enumerate all submatrices that satisfy the requirements as bi-clusters; ex. delta-pCluster algorithm (H. Wang et al.,
SIGMOD'2002; MaPle: Pei et al., ICDM'2003). 36" }, { "page_index": 767, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_037.png", "page_index": 767, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:16:21+07:00" }, "raw_text": "Bi-Clustering for Micro-Array Data Analysis. Left figure: micro-array \"raw\" data shows 3 genes and their values in a multi-D space: it is difficult to find their patterns. Right two figures: some subsets of dimensions form nice shift and scaling patterns. No globally defined similarity/distance measure. Clusters may not be exclusive: an object can appear in multiple clusters. [Figure: expression values of Object1, Object2, Object3 across conditions a-h, in raw order and reordered to reveal shift and scaling patterns]" }, { "page_index": 768, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_038.png", "page_index": 768, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:16:30+07:00" }, "raw_text": "Bi-Clustering (I): delta-Bi-Cluster. For a submatrix I x J: the mean of the i-th row: e_iJ = (1/|J|) sum_{j in J} e_ij; the mean of the j-th column: e_Ij = (1/|I|) sum_{i in I} e_ij; the mean of all elements in the submatrix: e_IJ = (1/(|I||J|)) sum_{i in I, j in J} e_ij = (1/|I|) sum_{i in I} e_iJ = (1/|J|) sum_{j in J} e_Ij. The quality of the submatrix as a bi-cluster can be measured by the mean squared residue value H(I x J) = (1/(|I||J|)) sum_{i in I, j in J} (e_ij - e_iJ - e_Ij + e_IJ)^2, where residue(e_ij) = e_ij - e_iJ - e_Ij + e_IJ. A submatrix I x J is a delta-bi-cluster if H(I x J) <= delta, where delta >= 0 is a threshold. When delta = 0, I x J is a perfect bi-cluster with coherent values. By setting delta > 0, a user can specify the tolerance of average noise per element against a perfect bi-cluster. 38" },
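A direct transcription of the mean squared residue H(I x J) defined above; the 3 x 3 submatrices are illustrative.

```python
def mean_squared_residue(M):
    """H(I x J) for a submatrix given as a list of equal-length rows."""
    nI, nJ = len(M), len(M[0])
    row_mean = [sum(r) / nJ for r in M]                      # e_iJ
    col_mean = [sum(M[i][j] for i in range(nI)) / nI         # e_Ij
                for j in range(nJ)]
    all_mean = sum(row_mean) / nI                            # e_IJ
    return sum((M[i][j] - row_mean[i] - col_mean[j] + all_mean) ** 2
               for i in range(nI) for j in range(nJ)) / (nI * nJ)

# A perfect coherent bi-cluster (e_ij = c + alpha_i + beta_j) has H = 0
perfect = [[10, 50, 30], [20, 60, 40], [50, 90, 70]]
print(mean_squared_residue(perfect))                         # 0.0
noisy = [[10, 50, 30], [20, 61, 40], [50, 90, 70]]
print(mean_squared_residue(noisy) > 0)                       # True
```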
{ "page_index": 769, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_039.png", "page_index": 769, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:16:37+07:00" }, "raw_text": "Bi-Clustering (I): The delta-Cluster Algorithm. A maximal delta-bi-cluster is a delta-bi-cluster I x J such that there does not exist another delta-bi-cluster I' x J' which contains I x J. Computing it is costly: use heuristic greedy search to obtain locally optimal clusters. Two-phase computation: a deletion phase and an addition phase. Deletion phase: start from the whole matrix and iteratively remove rows and columns while the mean squared residue of the matrix is over delta: at each iteration, for each row, compute the mean squared residue d(i) = (1/|J|) sum_{j in J} (e_ij - e_iJ - e_Ij + e_IJ)^2 (and symmetrically for each column), then remove the row or column of the largest mean squared residue. Addition phase: iteratively expand the delta-bi-cluster I x J obtained in the deletion phase as long as the delta-bi-cluster requirement is maintained: consider all the rows/columns not involved in the current bi-cluster I x J by calculating their mean squared residues; a row/column of the smallest mean squared residue is added into the current delta-bi-cluster. It finds only one delta-bi-cluster, so it needs to run multiple times, replacing the elements in the output bi-cluster by random numbers. 39" }, { "page_index": 770, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_040.png", "page_index": 770, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:16:43+07:00" }, "raw_text": "Bi-Clustering (II): delta-pCluster. Enumerating all bi-clusters (delta-pClusters) [H. Wang et al., Clustering by pattern similarity in large data sets, SIGMOD'02]. For a 2 x 2 submatrix with entries e_i1j1, e_i1j2, e_i2j1, e_i2j2, the p-score is |(e_i1j1 - e_i2j1) - (e_i1j2 - e_i2j2)|. A submatrix I x J is a delta-pCluster (pattern-based cluster) if the p-score of every 2 x 2 submatrix of I x J is at most delta, where delta >= 0 is a threshold specifying a user's tolerance of noise against a perfect bi-cluster. The p-score controls the noise on every element in a bi-cluster, while the mean squared residue captures the average noise. Monotonicity: if I x J is a delta-pCluster, every x by y (x, y >= 2) submatrix of I x J is also a delta-pCluster. A delta-pCluster is maximal if no more rows or columns can be added into the cluster while retaining the delta-pCluster property: we only need to compute all maximal delta-pClusters. 40" },
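A sketch of the 2 x 2 p-score and a brute-force delta-pCluster check following the definition above; it enumerates every 2 x 2 submatrix, so it is only for toy matrices.

```python
from itertools import combinations

def p_score(a, b, c, d):
    # p-score of the 2 x 2 submatrix [[a, b], [c, d]]
    return abs((a - c) - (b - d))

def is_delta_pcluster(M, delta):
    rows, cols = range(len(M)), range(len(M[0]))
    return all(p_score(M[i1][j1], M[i1][j2], M[i2][j1], M[i2][j2]) <= delta
               for i1, i2 in combinations(rows, 2)
               for j1, j2 in combinations(cols, 2))

shift = [[10, 50, 30], [20, 60, 40], [50, 90, 70]]   # perfect shift pattern
print(is_delta_pcluster(shift, delta=0))              # True
print(is_delta_pcluster([[10, 50], [20, 65]], 0))     # False (p-score 5)
```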
{ "page_index": 771, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_041.png", "page_index": 771, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:16:54+07:00" }, "raw_text": "MaPle: Efficient Enumeration of delta-pClusters. Pei et al., MaPle: Efficiently enumerating all maximal delta-pClusters, ICDM'03. Framework: same as pattern growth in frequent pattern mining (based on the downward closure property). For each condition combination J, find the maximal subsets of genes I such that I x J is a delta-pCluster; if I x J is not a submatrix of another delta-pCluster, then I x J is a maximal delta-pCluster. The algorithm is very similar to mining frequent closed itemsets. Additional advantages of delta-pClusters: due to the averaging in a delta-cluster, it may contain outliers but still stay within the delta threshold; for computing bi-clusters with scaling patterns, taking the logarithm of the values leads back to the p-score form. [Figure: expression curves of Object1-Object3 across conditions, raw and reordered] 41" }, { "page_index": 772, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_042.png", "page_index": 772, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:02+07:00" }, "raw_text": "Dimensionality-Reduction Methods. Dimensionality reduction: in some situations, it is more effective to construct a new space instead of using subspaces of the original data. Ex. to cluster the points in the figure, no subspace of the original dimensions X and Y can help, since all three clusters would project into overlapping areas on the X and Y axes; constructing a new dimension such as 0.707x + 0.707y (the dashed line) makes the three clusters apparent when the points are projected onto it. Dimensionality reduction methods: feature selection and extraction (but these may not focus on finding the clustering structure); spectral clustering: combining feature extraction and clustering (i.e., use the spectrum of the similarity matrix of the data to perform dimensionality reduction for clustering in fewer dimensions): Normalized Cuts (Shi and Malik, CVPR'97 / PAMI'2000); the Ng-Jordan-Weiss algorithm (NIPS'01). 42" }, { "page_index": 773, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_043.png", "page_index": 773, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:09+07:00" }, "raw_text": "Spectral Clustering: The Ng-Jordan-Weiss (NJW) Algorithm. Given a set of objects o_1, ..., o_n, a distance measure dist(o_i, o_j), and the desired number k of clusters: Calculate an affinity matrix W, where W_ij = exp(-dist(o_i, o_j)^2 / (2 sigma^2)) and sigma is a scaling parameter that controls how fast the affinity W_ij decreases as dist(o_i, o_j) increases; in NJW, set W_ii = 0. Derive a matrix A = f(W):
{ "page_index": 773, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_043.png", "page_index": 773, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:09+07:00" }, "raw_text": "Spectral Clustering: The Ng-Jordan-Weiss (NJW) Algorithm Given a set of objects o_1, ..., o_n, their pairwise distances dist(o_i, o_j), and the desired number k of clusters Calculate an affinity matrix W, where W_ij = exp(-dist(o_i, o_j)² / σ²) and σ is a scaling parameter that controls how fast the affinity W_ij decreases as dist(o_i, o_j) increases. In NJW, set W_ii = 0 NJW defines D to be the diagonal matrix with D_ii = Σ_{j=1}^{n} W_ij Derive a matrix A = f(W): A is set to A = D^{-1/2} W D^{-1/2} A spectral clustering method finds the k leading eigenvectors of A A vector v is an eigenvector of matrix A if Av = λv, where λ is the corresponding eigenvalue Using the k leading eigenvectors, project the original data into the new space defined by them, and run a clustering algorithm, such as k-means, to find k clusters Assign the original data points to clusters according to how the transformed points are assigned in the clusters obtained 43" },
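To make the NJW pipeline concrete, here is a minimal sketch of the steps just listed (affinity matrix, A = D^{-1/2} W D^{-1/2}, k leading eigenvectors, then k-means). It assumes NumPy and scikit-learn are available; the function name and the row-normalization step follow the NJW paper rather than the slide.

```python
# Minimal sketch of the NJW spectral clustering steps described above.
import numpy as np
from sklearn.cluster import KMeans

def njw_spectral_clustering(X, k, sigma=1.0):
    # Affinity matrix: W_ij = exp(-dist(o_i, o_j)^2 / sigma^2), W_ii = 0
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.exp(-sq_dists / sigma**2)
    np.fill_diagonal(W, 0.0)

    # D is diagonal with D_ii = sum_j W_ij; A = D^{-1/2} W D^{-1/2}
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    A = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # k leading eigenvectors of A; eigh returns eigenvalues in
    # ascending order for the symmetric matrix A, so take the last k
    _, vecs = np.linalg.eigh(A)
    V = vecs[:, -k:]

    # Normalize rows to unit length (as in the NJW paper), then k-means
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10).fit_predict(V)
```

The scaling parameter σ plays the role described on the slide: the smaller it is, the faster the affinity decays with distance.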
{ "page_index": 774, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_044.png", "page_index": 774, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:17+07:00" }, "raw_text": "Spectral Clustering: Illustration and Comments Pipeline: data → affinity matrix W = [W_ij] → compute the k leading eigenvectors of A = f(W) (solving Av = λv) → clustering in the new space → projecting back to cluster the original data Spectral clustering: Effective in tasks like image processing Scalability challenge: Computing eigenvectors on a large matrix is costly Can be combined with other clustering methods, such as bi-clustering 44" },
{ "page_index": 775, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_045.png", "page_index": 775, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:21+07:00" }, "raw_text": "Chapter 11. Cluster Analysis: Advanced Methods Probability Model-Based Clustering Clustering High-Dimensional Data Clustering Graphs and Network Data Clustering with Constraints Summary 45" },
{ "page_index": 776, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_046.png", "page_index": 776, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:26+07:00" }, "raw_text": "Clustering Graphs and Network Data Applications Bi-partite graphs, e.g., customers and products, authors and conferences Web search engines, e.g., click-through graphs and Web graphs Social networks, friendship/coauthor graphs Similarity measures Geodesic distances Distance based on random walk (SimRank) Graph clustering methods Minimum cuts: FastModularity (Clauset, Newman & Moore, 2004) Density-based clustering: SCAN (Xu et al., KDD'2007) 46" },
{ "page_index": 777, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_047.png", "page_index": 777, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:32+07:00" }, "raw_text": "Similarity Measure [Figure: example graph G with vertices a, b, c, d, e] Geodesic distance (A, B): length (i.e., # of edges) of the shortest path between A and B (if not connected, defined as infinite) Eccentricity of v, eccen(v): The largest geodesic distance between v and any other vertex u ∈ V - {v}. E.g., eccen(a) = eccen(b) = 2; eccen(c) = eccen(d) = eccen(e) = 3 Radius of graph G: The minimum eccentricity of all vertices, i.e., the distance between the \"most central point\" and the \"farthest border\": r = min_{v∈V} eccen(v). E.g., radius(G) = 2 Diameter of graph G: The maximum eccentricity of all vertices, i.e., the largest distance between any pair of vertices in G: d = max_{v∈V} eccen(v). E.g., diameter(G) = 3 A peripheral vertex is a vertex that achieves the diameter. E.g., vertices c, d, and e are peripheral vertices 47" },
{ "page_index": 778, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_048.png", "page_index": 778, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:39+07:00" }, "raw_text": "SimRank: Similarity Based on Random Walk and Structural Context SimRank: structural-context similarity, i.e., similarity based on the similarity of the neighbors In a directed graph G = (V, E): individual in-neighborhood of v: I(v) = {u | (u, v) ∈ E} individual out-neighborhood of v: O(v) = {w | (v, w) ∈ E} Similarity in SimRank: s(u, v) = (C / (|I(u)| |I(v)|)) Σ_{x∈I(u)} Σ_{y∈I(v)} s(x, y), with s(u, u) = 1, and s(u, v) = 0 if I(u) or I(v) is empty (C is a decay constant, 0 < C < 1) Similarity based on random walk: in a strongly connected component, a tour t: u ⇝ v of length l(t) has probability P[t] = Π_{w on t, excluding the last vertex} 1/|O(w)| Expected distance: d(u, v) = Σ_{t: u⇝v} P[t] l(t) Expected meeting distance: m(u, v) = Σ_{t: (u,v)⇝(x,x)} P[t] l(t) Expected meeting probability: p(u, v) = Σ_{t: (u,v)⇝(x,x)} P[t] C^{l(t)} 48" },
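The recursive SimRank equation above is usually solved by fixed-point iteration. A small illustrative sketch follows, assuming a graph given as in-neighbor lists; the decay constant C = 0.8, the iteration count, and the example node names are conventional choices, not values from the slides.

```python
# Sketch of the iterative SimRank computation defined above.
import itertools

def simrank(in_nbrs, C=0.8, iters=10):
    """in_nbrs: dict mapping each node to the list of its in-neighbors I(v)."""
    nodes = list(in_nbrs)
    s = {(u, v): 1.0 if u == v else 0.0 for u in nodes for v in nodes}
    for _ in range(iters):
        new_s = {}
        for u, v in itertools.product(nodes, nodes):
            if u == v:
                new_s[(u, v)] = 1.0            # s(u, u) = 1
            elif in_nbrs[u] and in_nbrs[v]:
                total = sum(s[(x, y)] for x in in_nbrs[u] for y in in_nbrs[v])
                new_s[(u, v)] = C * total / (len(in_nbrs[u]) * len(in_nbrs[v]))
            else:
                new_s[(u, v)] = 0.0            # s = 0 when I(u) or I(v) is empty
        s = new_s
    return s

# Tiny example graph (hypothetical): Univ -> ProfA, Univ -> StudentB, ProfA -> StudentB
edges = [("Univ", "ProfA"), ("Univ", "StudentB"), ("ProfA", "StudentB")]
in_nbrs = {n: [] for e in edges for n in e}
for a, b in edges:
    in_nbrs[b].append(a)
print(simrank(in_nbrs)[("ProfA", "StudentB")])
```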
{ "page_index": 779, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_049.png", "page_index": 779, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:48+07:00" }, "raw_text": "Graph Clustering: Sparsest Cut G = (V, E). A cut partitions V into two sets S and T; the cut set is the set of edges {(u, v) ∈ E | u ∈ S, v ∈ T} Size of the cut: # of edges in the cut set Min-cut (e.g., C1 in the figure) is not necessarily a good partition A better measure, sparsity: Φ = (size of the cut) / min{|S|, |T|} A cut is sparsest if its sparsity is not greater than that of any other cut Ex. Cut C2 = ({a, b, c, d, e, f, l}, {g, h, i, j, k}) is the sparsest cut For k clusters, the modularity of a clustering assesses the quality of the clustering: Q = Σ_{i=1}^{k} (l_i/|E| - (d_i/(2|E|))²), where l_i is the # of edges between vertices in the i-th cluster and d_i is the sum of the degrees of the vertices in the i-th cluster The modularity of a clustering of a graph is the difference between the fraction of all edges that fall into individual clusters and the fraction that would do so if the graph vertices were randomly connected The optimal clustering of graphs maximizes the modularity 49" },
{ "page_index": 780, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_050.png", "page_index": 780, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:53+07:00" }, "raw_text": "Graph Clustering: Challenges of Finding Good Cuts High computational cost Many graph cut problems are computationally expensive The sparsest cut problem is NP-hard Need to trade off between efficiency/scalability and quality Sophisticated graphs May involve weights and/or cycles High dimensionality A graph can have many vertices. In a similarity matrix, a vertex is represented as a vector (a row in the matrix) whose dimensionality is the number of vertices in the graph Sparsity A large graph is often sparse, meaning each vertex on average connects to only a small number of other vertices A similarity matrix from a large sparse graph can also be sparse 50" },
{ "page_index": 781, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_051.png", "page_index": 781, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:17:58+07:00" }, "raw_text": "Two Approaches for Graph Clustering Two approaches for clustering graph data Use generic clustering methods for high-dimensional data Designed specifically for clustering graphs Using clustering methods for high-dimensional data Extract a similarity matrix from a graph using a similarity measure A generic clustering method can then be applied on the similarity matrix to discover clusters Ex. Spectral clustering: approximate optimal graph cut solutions Methods specific to graphs Search the graph to find well-connected components as clusters Ex. SCAN (Structural Clustering Algorithm for Networks) X. Xu, N. Yuruk, Z. Feng, and T. A. J. Schweiger, \"SCAN: A Structural Clustering Algorithm for Networks\", KDD'07 51" },
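The modularity formula above can be computed directly from an edge list. A minimal sketch, with an assumed toy graph (two triangles joined by a bridge) used purely for illustration.

```python
# Minimal sketch of the modularity measure Q defined above, for an
# undirected graph given as an edge list and a cluster assignment.
from collections import Counter

def modularity(edges, cluster_of):
    m = len(edges)                      # |E|
    l = Counter()                       # l_i: intra-cluster edge counts
    d = Counter()                       # d_i: degree sums per cluster
    for u, v in edges:
        d[cluster_of[u]] += 1
        d[cluster_of[v]] += 1
        if cluster_of[u] == cluster_of[v]:
            l[cluster_of[u]] += 1
    # Q = sum_i ( l_i/|E| - (d_i / (2|E|))^2 )
    return sum(l[c] / m - (d[c] / (2 * m)) ** 2 for c in d)

# Two triangles joined by one bridge edge: the natural 2-clustering scores high
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(modularity(edges, {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}))
```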
{ "page_index": 782, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_052.png", "page_index": 782, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:05+07:00" }, "raw_text": "SCAN: Density-Based Clustering of Networks How many clusters? What size should they be? What is the best partitioning? Should some points be segregated? [Figure: an example network of 14 numbered vertices] Application: Given simply the information of who associates with whom, could one identify clusters of individuals with common interests or special relationships (families, cliques, terrorist cells)? 52" },
{ "page_index": 783, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_053.png", "page_index": 783, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:09+07:00" }, "raw_text": "A Social Network Model Cliques, hubs and outliers Individuals in a tight social group, or clique, know many of the same people, regardless of the size of the group Individuals who are hubs know many people in different groups but belong to no single group. Politicians, for example, bridge multiple groups Individuals who are outliers reside at the margins of society. Hermits, for example, know few people and belong to no group The Neighborhood of a Vertex Define Γ(v) as the immediate neighborhood of a vertex (i.e., the set of people that an individual knows) 53" },
{ "page_index": 784, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_054.png", "page_index": 784, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:14+07:00" }, "raw_text": "Structure Similarity The desired features tend to be captured by a measure we call Structural Similarity: σ(v, w) = |Γ(v) ∩ Γ(w)| / sqrt(|Γ(v)| |Γ(w)|) Structural similarity is large for members of a clique and small for hubs and outliers 54" },
{ "page_index": 785, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_055.png", "page_index": 785, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:19+07:00" }, "raw_text": "ε-Neighborhood: N_ε(v) = {w ∈ Γ(v) | σ(v, w) ≥ ε} Core: CORE_{ε,μ}(v) ⇔ |N_ε(v)| ≥ μ Direct structure reachable: DirREACH_{ε,μ}(v, w) ⇔ CORE_{ε,μ}(v) ∧ w ∈ N_ε(v) Structure reachable: transitive closure of direct structure reachability Structure connected: CONNECT_{ε,μ}(v, w) ⇔ ∃u ∈ V: REACH_{ε,μ}(u, v) ∧ REACH_{ε,μ}(u, w) [1] M. Ester, H. P. Kriegel, J. Sander, & X. Xu (KDD'96) \"A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases\" 55" },
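The definitions above (σ(v, w), ε-neighborhood, core) translate directly into code. A minimal sketch, assuming an undirected adjacency dict and following the convention that the neighborhood Γ(v) contains v itself, as in the SCAN paper.

```python
# Sketch of the SCAN building blocks defined above: structural similarity
# sigma(v, w), the epsilon-neighborhood N_eps(v), and the core test.
from math import sqrt

def gamma(adj, v):
    return set(adj[v]) | {v}          # Gamma(v) includes v itself

def sigma(adj, v, w):
    gv, gw = gamma(adj, v), gamma(adj, w)
    return len(gv & gw) / sqrt(len(gv) * len(gw))

def eps_neighborhood(adj, v, eps):
    return {w for w in gamma(adj, v) if sigma(adj, v, w) >= eps}

def is_core(adj, v, eps, mu):
    return len(eps_neighborhood(adj, v, eps)) >= mu
```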
{ "page_index": 786, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_056.png", "page_index": 786, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:24+07:00" }, "raw_text": "Structure-Connected Clusters Structure-connected cluster C: Connectivity: ∀v, w ∈ C: CONNECT_{ε,μ}(v, w) Maximality: ∀v, w ∈ V: v ∈ C ∧ REACH_{ε,μ}(v, w) ⇒ w ∈ C Hubs: Do not belong to any cluster Bridge to many clusters Outliers: Do not belong to any cluster Connect to fewer clusters [Figure: the example network with a hub and an outlier marked] 56" },
{ "page_index": 787, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_057.png", "page_index": 787, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:28+07:00" }, "raw_text": "Algorithm [SCAN running example on the 14-vertex network, μ = 2, ε = 0.7] 57" },
{ "page_index": 788, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_058.png", "page_index": 788, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:33+07:00" }, "raw_text": "Algorithm [SCAN running example, μ = 2, ε = 0.7; this step computes a structural similarity of 0.63] 58" },
{ "page_index": 789, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_059.png", "page_index": 789, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:38+07:00" }, "raw_text": "Algorithm [SCAN running example, μ = 2, ε = 0.7; this step computes structural similarities 0.67, 0.82, and 0.75] 59" },
{ "page_index": 790, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_060.png", "page_index": 790, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:42+07:00" }, "raw_text": "Algorithm [SCAN running example, μ = 2, ε = 0.7] 60" },
{ "page_index": 791, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_061.png", "page_index": 791, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:46+07:00" }, "raw_text": "Algorithm [SCAN running example, μ = 2, ε = 0.7; this step computes a structural similarity of 0.67] 61" },
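Putting the definitions together, the clustering loop suggested by the running example (μ = 2, ε = 0.7) can be sketched as follows; this is a compact, self-contained illustration, and the post-processing that distinguishes hubs from outliers among the unassigned vertices is omitted.

```python
# Self-contained sketch of the SCAN clustering loop: grow clusters from
# core vertices via direct structure reachability (BFS); vertices left
# unassigned at the end are the hub/outlier candidates.
from collections import deque
from math import sqrt

def scan(adj, eps=0.7, mu=2):
    gamma = {v: set(nbrs) | {v} for v, nbrs in adj.items()}
    def sigma(v, w):
        return len(gamma[v] & gamma[w]) / sqrt(len(gamma[v]) * len(gamma[w]))
    def eps_nbrs(v):
        return {w for w in gamma[v] if sigma(v, w) >= eps}
    cluster_of, cid = {}, 0
    for v in adj:
        if v in cluster_of or len(eps_nbrs(v)) < mu:   # skip non-cores/assigned
            continue
        cid += 1
        cluster_of[v] = cid
        queue = deque([v])
        while queue:                                   # BFS over structure reachability
            u = queue.popleft()
            for w in eps_nbrs(u):
                if w not in cluster_of:
                    cluster_of[w] = cid
                    if len(eps_nbrs(w)) >= mu:         # only cores extend the cluster
                        queue.append(w)
    hubs_or_outliers = [v for v in adj if v not in cluster_of]
    return cluster_of, hubs_or_outliers
```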
"/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_062.png", "page_index": 792, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:51+07:00" }, "raw_text": "Algorithm 2 3 u = 2 5 = 0.7 4 7 6 0 0.73 11 12 0.72 10 9 13 c byyFlles 62" }, { "page_index": 793, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_063.png", "page_index": 793, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:55+07:00" }, "raw_text": "Algorithm 2 3 u = 2 5 = 0.7 4 7 6 0 11 12 10 9 13 c byyFlles 63" }, { "page_index": 794, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_064.png", "page_index": 794, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:18:59+07:00" }, "raw_text": "Algorithm 2 3 u = 2 5 = 0.7 4 0.51 7 6 0 11 12 10 9 13 c byyFlles 64" }, { "page_index": 795, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_065.png", "page_index": 795, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:04+07:00" }, "raw_text": "Algorithm 2 3 u = 2 5 = 0.7 4 7 6 0.68 0 11 12 10 9 13 c byyFlles 65" }, { "page_index": 796, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_066.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_066.png", "page_index": 796, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:08+07:00" }, "raw_text": "Algorithm 2 3 u = 2 5 = 0.7 4 7 6 0 11 0.51 12 10 9 13 c byyFlles 66" }, { "page_index": 797, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_067.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_067.png", "page_index": 797, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:11+07:00" }, "raw_text": "Algorithm 2 3 u = 2 5 = 0.7 4 7 6 0 11 12 10 9 13 c byyFlles 67" }, { "page_index": 798, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_068.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_068.png", "page_index": 798, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:16+07:00" }, "raw_text": "Algorithm 2 3 u = 2 5 = 0.7 0.51 4 7 n 6 0.51 0 11 12 10 9 13 c byyFlles 68" }, { "page_index": 799, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_069.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": 
"/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_069.png", "page_index": 799, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:19+07:00" }, "raw_text": "Algorithm 2 3 u = 2 5 = 0.7 4 7 6 0 11 12 10 9 13 c byyFlles 69" }, { "page_index": 800, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_070.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_070.png", "page_index": 800, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:24+07:00" }, "raw_text": "Running Time Running time = O(lEI) For sparse networks = O(IV) 3500 3000 -This algorithm 2500 -Fast Modularity [2 2000 1500 1000 500 0 1,000 2,000 5,000 10,000 20,000 50,000 100,000 200,000 500,000 1,000,000 Num. of Vertices [2]A. Clauset, M. E. J. Newman, & C. Moore, Phys. Rev. E 70, 066111 (2004) 70" }, { "page_index": 801, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_071.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_071.png", "page_index": 801, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:28+07:00" }, "raw_text": "Chapter 11. Cluster Analysis: Advanced Methods Probability Model-Based Clustering Clustering High-Dimensional Data Clustering Graphs and Network Data Clustering with Constraints Summary 71" }, { "page_index": 802, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_072.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_072.png", "page_index": 802, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:31+07:00" }, "raw_text": "Why Constraint-Based Cluster Analysis? Need user feedback: Users know their applications the best ATM allocation problem: obstacle & desired clusters 72" }, { "page_index": 803, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_073.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_073.png", "page_index": 803, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:37+07:00" }, "raw_text": "Categorization of Constraints Constraints on instances: specifies how a pair or a set of instances should be grouped in the cluster analysis Must-link vs. cannot link constraints must-link(x, y): x and y should be grouped into one cluster Constraints can be defined using variables, e.g., cannot-link(x, y) if dist(x, y) > d Constraints on clusters: specifies a requirement on the clusters E.g., specify the min # of objects in a cluster, the max diameter of a cluster, the shape of a cluster (e.g., a convex), # of clusters (e.g., k) Constraints on similarity measurements: specifies a requirement that the similarity calculation must respect E.g., driving on roads, obstacles (e.g., rivers, lakes) Issues: Hard vs. 
{ "page_index": 804, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_074.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_074.png", "page_index": 804, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:43+07:00" }, "raw_text": "Constraint-Based Clustering Methods (I): Handling Hard Constraints Handling hard constraints: Strictly respect the constraints in cluster assignments Example: The COP-k-means algorithm Generate super-instances for must-link constraints Compute the transitive closure of the must-link constraints To represent such a subset, replace all those objects in the subset by their mean. The super-instance also carries a weight, which is the number of objects it represents Conduct modified k-means clustering to respect cannot-link constraints Modify the center-assignment process in k-means to a nearest feasible center assignment An object is assigned to the nearest center such that the assignment respects all cannot-link constraints 74" },
{ "page_index": 805, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_075.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_075.png", "page_index": 805, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:49+07:00" }, "raw_text": "Constraint-Based Clustering Methods (II): Handling Soft Constraints Treated as an optimization problem: When a clustering violates a soft constraint, a penalty is imposed on the clustering Overall objective: Optimize the clustering quality and minimize the constraint-violation penalty Ex. CVQE (Constrained Vector Quantization Error) algorithm: Conduct k-means clustering while enforcing constraint-violation penalties Objective function: Sum of distances used in k-means, adjusted by the constraint-violation penalties Penalty of a must-link violation: If objects x and y must be linked but they are assigned to two different centers c_1 and c_2, dist(c_1, c_2) is added to the objective function as the penalty Penalty of a cannot-link violation: If objects x and y cannot be linked but they are assigned to a common center c, dist(c, c') is added to the objective function as the penalty, where c' is the closest cluster center to c that can accommodate x or y 75" },
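For the hard-constraint case, the modified assignment step of COP-k-means described above can be sketched as follows. The helper name and the failure handling are illustrative assumptions; the super-instance construction for must-link constraints is assumed to have been applied already.

```python
# Sketch of the nearest *feasible* center assignment used in COP-k-means:
# an object goes to the closest center that violates no cannot-link
# constraint with objects already assigned in this pass.
import numpy as np

def nearest_feasible_center(x_id, x, centers, cannot_link, assigned):
    # Objects that must not share x_id's cluster
    conflicts = set()
    for a, b in cannot_link:
        if a == x_id:
            conflicts.add(b)
        elif b == x_id:
            conflicts.add(a)
    forbidden = {assigned[o] for o in conflicts if o in assigned}
    # Try centers in order of increasing distance to x
    order = np.argsort([np.linalg.norm(x - c) for c in centers])
    for k in order:
        if int(k) not in forbidden:
            return int(k)
    return None  # no feasible center: COP-k-means reports failure/restarts
```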
{ "page_index": 806, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_076.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_076.png", "page_index": 806, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:19:56+07:00" }, "raw_text": "Speeding Up Constrained Clustering It is costly to compute some constrained clusterings Ex. Clustering with obstacle objects: Tung, Hou, and Han. Spatial clustering in the presence of obstacles, ICDE'01 K-medoids is more preferable, since k-means may locate an ATM center in the middle of a lake Visibility graph and shortest path Triangulation and micro-clustering Two kinds of join indices (shortest paths) worth pre-computation: VV index: indices for any pair of obstacle vertices MV index: indices for any pair of micro-cluster and obstacle vertices 76" },
{ "page_index": 807, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_077.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_077.png", "page_index": 807, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:20:02+07:00" }, "raw_text": "An Example: Clustering With Obstacle Objects [Figure: clustering results, not taking obstacles into account vs. taking obstacles into account] 77" },
{ "page_index": 808, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_078.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_078.png", "page_index": 808, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:20:09+07:00" }, "raw_text": "User-Guided Clustering: A Special Kind of Constraints [Figure: a relational schema with tables Open-course, Course, Work-In, Professor, Group, Publish, Publication, Advise, Register, and Student; the user hint marks Student as the target of clustering] X. Yin, J. Han, P. S. Yu, \"Cross-Relational Clustering with User's Guidance\", KDD'05 The user usually has a goal of clustering, e.g., clustering students by research area The user specifies his clustering goal to CrossClus 78" },
{ "page_index": 809, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_079.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_079.png", "page_index": 809, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:20:14+07:00" }, "raw_text": "Comparing with Classification The user-specified feature (in the form of an attribute) is used as a hint, not as class labels The attribute may contain too many or too few distinct values, e.g., a user may want to cluster students into 20 clusters instead of 3 Additional features need to be included in cluster analysis 79" },
{ "page_index": 810, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_080.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_080.png", "page_index": 810, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:20:18+07:00" }, "raw_text": "Comparing with Semi-Supervised Clustering Semi-supervised clustering: User provides a training set consisting of \"similar\" (\"must-link\") and \"dissimilar\" (\"cannot-link\") pairs of objects User-guided clustering: User specifies an attribute as a hint, and more relevant features are found for clustering [Figure: user-guided clustering vs. semi-supervised clustering over all tuples for clustering] 80" },
{ "page_index": 811, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_081.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_081.png", "page_index": 811, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:20:22+07:00" }, "raw_text": "Why Not Semi-Supervised Clustering? Much information (in multiple relations) is needed to judge whether two tuples are similar A user may not be able to provide a good training set It is much easier for a user to specify an attribute as a hint, such as a student's research area Ex. Tuples to be compared: Tom Smith (SC1211, TA) vs. Jane Chang (BI205, RA); the user hint is the research-area attribute 81" },
{ "page_index": 812, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_082.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_082.png", "page_index": 812, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:20:26+07:00" }, "raw_text": "CrossClus: An Overview Measure similarity between features by how they group objects into clusters Use a heuristic method to search for pertinent features Start from the user-specified feature and gradually expand the search range Use tuple ID propagation to create feature values Features can be easily created during the expansion of the search range, by propagating IDs Explore three clustering algorithms: k-means, k-medoids, and hierarchical clustering 82" },
{ "page_index": 813, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_083.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_083.png", "page_index": 813, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:20:34+07:00" }, "raw_text": "Multi-Relational Features A multi-relational feature is defined by: A join path, e.g., Student → Register → OpenCourse → Course An attribute, e.g., Course.area (For a numerical feature) an aggregation operator, e.g., sum or average Categorical feature f = [Student → Register → OpenCourse → Course, Course.area, null]: the areas of the courses of each student Values of feature f (counts of course areas DB, AI, TH → normalized feature values): t1: (5, 5, 0) → f(t1) = (0.5, 0.5, 0); t2: (0, 3, 7) → f(t2) = (0, 0.3, 0.7); t3: (1, 5, 4) → f(t3) = (0.1, 0.5, 0.4); t4: (5, 0, 5) → f(t4) = (0.5, 0, 0.5); t5: (3, 3, 4) → f(t5) = (0.3, 0.3, 0.4) 83" },
{ "page_index": 814, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_084.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_084.png", "page_index": 814, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:20:40+07:00" }, "raw_text": "Representing Features Similarity between tuples t_1 and t_2 w.r.t. a categorical feature f: cosine similarity between the vectors f(t_1) and f(t_2): sim_f(t_1, t_2) = (Σ_k f(t_1).p_k · f(t_2).p_k) / (sqrt(Σ_k f(t_1).p_k²) · sqrt(Σ_k f(t_2).p_k²)) The most important information of a feature f is how f groups tuples into clusters f is represented by the similarities between every pair of tuples indicated by f: the similarity vector V^f, which can be considered as a vector of N x N dimensions [Figure: similarity surface over tuple pairs S1-S5] 84" },
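Both similarity computations here (tuple-tuple cosine similarity under a feature, and feature-feature similarity via the N x N similarity vectors) are easy to sketch in NumPy. F below holds the example feature values from this slide; G is the research-group feature g shown on the following slide, used here only to demonstrate sim(f, g).

```python
# Sketch of sim_f(t1, t2) (cosine similarity of tuples under feature f)
# and sim(f, g) = V_f . V_g / (|V_f| |V_g|) between two features.
import numpy as np

F = np.array([[0.5, 0.5, 0.0],   # f: course areas DB/AI/TH for t1..t5
              [0.0, 0.3, 0.7],
              [0.1, 0.5, 0.4],
              [0.5, 0.0, 0.5],
              [0.3, 0.3, 0.4]])
G = np.array([[1.0, 0.0, 0.0],   # g: research groups Info sys/Cog sci/Theory
              [0.0, 0.0, 1.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

def tuple_similarity_matrix(F):
    # V_f as a matrix: entry (i, j) = cosine similarity of f(t_i), f(t_j)
    norms = np.linalg.norm(F, axis=1)
    return (F @ F.T) / np.outer(norms, norms)

def feature_similarity(F, G):
    vf = tuple_similarity_matrix(F).ravel()   # flatten to an N*N vector
    vg = tuple_similarity_matrix(G).ravel()
    return vf @ vg / (np.linalg.norm(vf) * np.linalg.norm(vg))

print(feature_similarity(F, G))
```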
{ "page_index": 815, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_085.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_085.png", "page_index": 815, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:20:49+07:00" }, "raw_text": "Similarity Between Features Values of feature f (course area: DB, AI, TH) and feature g (research group: Info sys, Cog sci, Theory): t1: f = (0.5, 0.5, 0), g = (1, 0, 0); t2: f = (0, 0.3, 0.7), g = (0, 0, 1); t3: f = (0.1, 0.5, 0.4), g = (0, 0.5, 0.5); t4: f = (0.5, 0, 0.5), g = (0.5, 0, 0.5); t5: f = (0.3, 0.3, 0.4), g = (0.5, 0.5, 0) Similarity between two features = cosine similarity of their similarity vectors: sim(f, g) = (V^f · V^g) / (|V^f| |V^g|) [Figure: similarity surfaces V^f and V^g over tuple pairs S1-S5] 85" },
{ "page_index": 816, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_086.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_086.png", "page_index": 816, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:00+07:00" }, "raw_text": "Computing Feature Similarity Similarity between feature values w.r.t. the tuples: sim(f_k, g_q) = Σ_{i=1}^{N} f(t_i).p_k · g(t_i).p_q Then V^f · V^g = Σ_{i=1}^{N} Σ_{j=1}^{N} sim_f(t_i, t_j) · sim_g(t_i, t_j) = Σ_{k=1}^{l} Σ_{q=1}^{m} sim(f_k, g_q)², so the tuple-pair similarities, which are hard to compute directly, reduce to feature-value similarities, which are easy to compute (e.g., DB vs. Info sys, AI vs. Cog sci, TH vs. Theory) Compute the similarity between each pair of feature values by one scan on the data 86" },
{ "page_index": 817, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_087.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_087.png", "page_index": 817, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:00+07:00" }, "raw_text": "Searching for Pertinent Features Different features convey different aspects of information Academic performance: GPA, GRE score, number of papers Research area: research group area, conferences of papers, advisor Demographic info: permanent address, nationality Features conveying the same aspect of information usually cluster tuples in more similar ways: research group areas vs. conferences of publications Given a user-specified feature, find pertinent features by computing feature similarity 87" },
{ "page_index": 818, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_088.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_088.png", "page_index": 818, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:08+07:00" }, "raw_text": "Heuristic Search for Pertinent Features Overall procedure: 1. Start from the user-specified feature 2. Search in the neighborhood of existing pertinent features 3. Expand the search range gradually [Figure: the relational schema (Open-course, Course, Work-In, Professor, Group, Publish, Publication, Advise, Register, Student), with the user hint on Student, the target of clustering] Tuple ID propagation is used to create multi-relational features IDs of target tuples can be propagated along any join path, from which we can find tuples joinable with each target tuple 88" },
{ "page_index": 819, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_089.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_089.png", "page_index": 819, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:12+07:00" }, "raw_text": "Clustering with Multi-Relational Features Given a set of L pertinent features f_1, ..., f_L, the similarity between two tuples is sim(t_1, t_2) = Σ_{i=1}^{L} sim_{f_i}(t_1, t_2) · f_i.weight The weight of a feature is determined in feature search by its similarity with other pertinent features Clustering methods: CLARANS [Ng & Han 94], a scalable clustering algorithm for non-Euclidean space K-means Agglomerative hierarchical clustering 89" },
{ "page_index": 820, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_090.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_090.png", "page_index": 820, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:18+07:00" }, "raw_text": "Experiments: Compare CrossClus with Baseline: Only use the user-specified feature PROCLUS [Aggarwal, et al. 99]: a state-of-the-art subspace clustering algorithm Use a subset of features for each cluster We convert the relational database to a table by propositionalization User-specified feature is forced to be used in every cluster RDBC [Kirsten and Wrobel'00] A representative ILP clustering algorithm Use neighbor information of objects for clustering User-specified feature is forced to be used 90" },
{ "page_index": 821, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_091.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_091.png", "page_index": 821, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:22+07:00" }, "raw_text": "Measure of Clustering Accuracy Accuracy Measured by manually labeled data We manually assign tuples into clusters according to their properties (e.g., professors in different research areas) Accuracy of clustering: Percentage of pairs of tuples in the same cluster that share a common label This measure favors many small clusters We let each approach generate the same number of clusters 91" },
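The pairwise accuracy measure just defined is a few lines of code; a minimal sketch with a toy labeling for illustration.

```python
# Sketch of the clustering accuracy used above: among all pairs of tuples
# placed in the same cluster, the fraction that share a manually assigned label.
from itertools import combinations

def same_cluster_pair_accuracy(cluster_of, label_of):
    same, agree = 0, 0
    for a, b in combinations(cluster_of, 2):
        if cluster_of[a] == cluster_of[b]:
            same += 1
            agree += label_of[a] == label_of[b]
    return agree / same if same else 0.0

# Example: three tuples clustered together, two of which share the label "DB"
print(same_cluster_pair_accuracy({"t1": 0, "t2": 0, "t3": 0},
                                 {"t1": "DB", "t2": "DB", "t3": "AI"}))
```

Note how the measure favors many small clusters: with singleton clusters there are no same-cluster pairs at all, which is why each approach is made to generate the same number of clusters.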
}, { "page_index": 823, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_093.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_093.png", "page_index": 823, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:31+07:00" }, "raw_text": "Chapter 11. Cluster Analysis: Advanced Methods Probability Model-Based Clustering Clustering High-Dimensional Data Clustering Graphs and Network Data Clustering with Constraints Summary 93" }, { "page_index": 824, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_094.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_094.png", "page_index": 824, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:35+07:00" }, "raw_text": "Summary Probability Model-Based Clustering Fuzzy clustering Probability-model-based clustering The EM algorithm Clustering High-Dimensional Data Subspace clustering: bi-clustering methods Dimensionality reduction: Spectral clustering Clustering Graphs and Network Data Graph clustering: min-cut vs. sparsest cut High-dimensional clustering methods Graph-specific clustering methods, e.g., SCAN Clustering with Constraints Constraints on instance objects, e.g., Must link vs. Cannot Link Constraint-based clustering algorithms 94" }, { "page_index": 825, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_095.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_095.png", "page_index": 825, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:44+07:00" }, "raw_text": "References (I) R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. S/GMOD'98 C. C. Aggarwal, C. Procopiuc, J. Wolf, P. S. Yu, and J.-S. Park. Fast algorithms for projected clustering. S/GMOD'99 S. Arora, S. Rao, and U. Vazirani. Expander flows, geometric embeddings and graph partitioning J.ACM.56:5:1-5:37.2009. J. C. Bezdek. Pattern Recognition with Fuzzy Obiective Function A/gorithms. Plenum Press, 1981. K. S. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft. When is \"nearest neighbor\" meaningful? lCDT'99 Y. Cheng and G. Church. Biclustering of expression data. /SMB'00 I. Davidson and S. S. Ravi. Clustering with constraints: Feasibility issues and the k-means algorithm. SDM'05 I. Davidson, K. L. Wagstaff, and S. Basu. Measuring constraint-set utility for partitional clustering algorithms. PKDD'06 C. Fraley and A. E. Raftery. Model-based clustering, discriminant analysis, and density estimation. J. American Stat. Assoc.. 97:611-631. 2002. F. H\"oppner, F. Klawonn, R. Kruse, and T. Runkler. Fuzzy Cluster Analysis: Methods for Classification, Data Analysis and Image Recognition. Wiley, 1999. G. Jeh and J. Widom. SimRank: a measure of structural-context similarity. KDD'02 H.-P. Kriegel, P. Kroeger, and A. Zimek. Clustering high dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering. ACM Trans. Knowledge Discovery from Data (TKDD), 3,2009. U. Luxburg. 
{ "page_index": 825, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_095.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_095.png", "page_index": 825, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:44+07:00" }, "raw_text": "References (I) R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace clustering of high dimensional data for data mining applications. SIGMOD'98 C. C. Aggarwal, C. Procopiuc, J. Wolf, P. S. Yu, and J.-S. Park. Fast algorithms for projected clustering. SIGMOD'99 S. Arora, S. Rao, and U. Vazirani. Expander flows, geometric embeddings and graph partitioning. J. ACM, 56:5:1-5:37, 2009. J. C. Bezdek. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, 1981. K. S. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft. When is \"nearest neighbor\" meaningful? ICDT'99 Y. Cheng and G. Church. Biclustering of expression data. ISMB'00 I. Davidson and S. S. Ravi. Clustering with constraints: Feasibility issues and the k-means algorithm. SDM'05 I. Davidson, K. L. Wagstaff, and S. Basu. Measuring constraint-set utility for partitional clustering algorithms. PKDD'06 C. Fraley and A. E. Raftery. Model-based clustering, discriminant analysis, and density estimation. J. American Stat. Assoc., 97:611-631, 2002. F. Höppner, F. Klawonn, R. Kruse, and T. Runkler. Fuzzy Cluster Analysis: Methods for Classification, Data Analysis and Image Recognition. Wiley, 1999. G. Jeh and J. Widom. SimRank: a measure of structural-context similarity. KDD'02 H.-P. Kriegel, P. Kroeger, and A. Zimek. Clustering high dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering. ACM Trans. Knowledge Discovery from Data (TKDD), 3, 2009. U. Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395-416, 2007 95" },
{ "page_index": 826, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_096.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_096.png", "page_index": 826, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:52+07:00" }, "raw_text": "References (II) G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to Clustering. John Wiley & Sons, 1988. B. Mirkin. Mathematical classification and clustering. J. of Global Optimization, 12:105-108, 1998 S. C. Madeira and A. L. Oliveira. Biclustering algorithms for biological data analysis: A survey. IEEE/ACM Trans. Comput. Biol. Bioinformatics, 1, 2004. A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. NIPS'01 J. Pei, X. Zhang, M. Cho, H. Wang, and P. S. Yu. MaPle: A fast algorithm for maximal pattern-based clustering. ICDM'03 M. Radovanović, A. Nanopoulos, and M. Ivanović. Nearest neighbors in high-dimensional data: the emergence and influence of hubs. ICML'09 S. E. Schaeffer. Graph clustering. Computer Science Review, 1:27-64, 2007. A. K. H. Tung, J. Hou, and J. Han. Spatial clustering in the presence of obstacles. ICDE'01 A. K. H. Tung, J. Han, L. V. S. Lakshmanan, and R. T. Ng. Constraint-based clustering in large databases. ICDT'01 A. Tanay, R. Sharan, and R. Shamir. Biclustering algorithms: A survey. In Handbook of Computational Molecular Biology, Chapman & Hall, 2004. K. Wagstaff, C. Cardie, S. Rogers, and S. Schrödl. Constrained k-means clustering with background knowledge. ICML'01 H. Wang, W. Wang, J. Yang, and P. S. Yu. Clustering by pattern similarity in large data sets. SIGMOD'02 X. Xu, N. Yuruk, Z. Feng, and T. A. J. Schweiger. SCAN: A structural clustering algorithm for networks. KDD'07 X. Yin, J. Han, and P. S. Yu, \"Cross-Relational Clustering with User's Guidance\", KDD'05" },
Yu, \"Cross-Relational Clustering with User's Guidance\", KDD'05" }, { "page_index": 827, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_097.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_097.png", "page_index": 827, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:55+07:00" }, "raw_text": "Slides Not to Be Used in Class 97" }, { "page_index": 828, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_098.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_098.png", "page_index": 828, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:21:59+07:00" }, "raw_text": "Conceptual Clustering Conceptual clustering A form of clustering in machine learning Produces a classification scheme for a set of unlabeled objects Finds characteristic description for each concept (class) COBWEB (Fisher'87) A popular a simple method of incremental conceptual learning Creates a hierarchical clustering in the form of a classification tree Each node refers to a concept and contains a probabilistic description of that concept 98" }, { "page_index": 829, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_099.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_099.png", "page_index": 829, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:03+07:00" }, "raw_text": "COBWEB Clustering Methoc A classification tree anima P(CO)=1.O P(scalcslCO)= Q.25 fish amphibian mammalrbicd FlC1j = Q.25 P(C2) =0.25 P(C3)=Q.5 PlscalcslC1)= 1.Q P(moistlC2)=1.0 PlhaiclC31= Q3 bicd mamma P(C+)=Q.5 P(C5) =Q.5 PthaiclC4)= IQ FlfcathccslC5J= 1Q 99" }, { "page_index": 830, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_100.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_100.png", "page_index": 830, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:09+07:00" }, "raw_text": "More on 1 Conceptual Clustering imitations of COBWEB The assumption that the attributes are independent of each other is often too strong because correlation may exist Not suitable for clustering large database data - skewed tree and expensive probability distributions CLASSIT an extension of COBWEB for incremental clustering of continuous data suffers similar problems as COBWEB AutoClass (Cheeseman and Stutz, 1996) Uses Bayesian statistical analysis to estimate the number of clusters Popular in industry 100" }, { "page_index": 831, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_101.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_101.png", "page_index": 831, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:15+07:00" }, "raw_text": "Neural Network Approaches Neural network approaches Represent each 
{ "page_index": 831, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_101.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_101.png", "page_index": 831, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:15+07:00" }, "raw_text": "Neural Network Approaches Neural network approaches Represent each cluster as an exemplar, acting as a \"prototype\" of the cluster New objects are distributed to the cluster whose exemplar is the most similar according to some distance measure Typical methods SOM (Self-Organizing feature Map) Competitive learning Involves a hierarchical architecture of several units (neurons) Neurons compete in a \"winner-takes-all\" fashion for the object currently being presented 101" },
{ "page_index": 832, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_102.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_102.png", "page_index": 832, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:20+07:00" }, "raw_text": "Self-Organizing Feature Map (SOM) SOMs, also called topologically ordered maps, or Kohonen Self-Organizing Feature Maps (KSOMs) It maps all the points in a high-dimensional source space into a 2- to 3-d target space, s.t. the distance and proximity relationships (i.e., topology) are preserved as much as possible Similar to k-means: cluster centers tend to lie in a low-dimensional manifold in the feature space Clustering is performed by having several units competing for the current object The unit whose weight vector is closest to the current object wins The winner and its neighbors learn by having their weights adjusted SOMs are believed to resemble processing that can occur in the brain Useful for visualizing high-dimensional data in 2- or 3-D space 102" },
{ "page_index": 833, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_103.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_103.png", "page_index": 833, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:32+07:00" }, "raw_text": "Web Document Clustering Using SOM The result of SOM clustering of 12088 Web articles [Figure: a SOM map of word categories such as \"mining\", \"intelligence\", and \"annealing\", with a drill-down on the keyword \"mining\"] Based on the websom.hut.fi Web page 103" },
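A minimal sketch of the competitive ("winner-takes-all") learning step described above; the grid size, Gaussian neighborhood, learning rate, and radius are illustrative assumptions, not values from the slides.

```python
# Sketch of one SOM training step: find the winning unit (closest weight
# vector), then move the winner and its map neighbors toward the input.
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, dim = 5, 5, 3
weights = rng.random((grid_h, grid_w, dim))      # one weight vector per unit

def train_step(x, lr=0.1, radius=1.0):
    # Winner: unit whose weight vector is closest to input x
    dists = np.linalg.norm(weights - x, axis=2)
    wi, wj = np.unravel_index(np.argmin(dists), dists.shape)
    for i in range(grid_h):
        for j in range(grid_w):
            grid_d2 = (i - wi) ** 2 + (j - wj) ** 2      # distance on the 2-D map
            h = np.exp(-grid_d2 / (2 * radius ** 2))     # neighborhood strength
            weights[i, j] += lr * h * (x - weights[i, j])

for _ in range(1000):
    train_step(rng.random(3))
```

Because nearby units on the map are pulled toward similar inputs, the trained grid preserves the topology of the source space, which is what makes SOMs useful for 2-D visualization.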
{ "page_index": 834, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_104.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_104.png", "page_index": 834, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:37+07:00" }, "raw_text": "Why Semi-Supervised Learning? Sparsity in data: training examples cannot cover the data space well Unlabeled data can help to address sparsity [Figure: decision boundaries of an SVM trained on labeled data only vs. a transductive SVM that also uses the unlabeled data] 104" },
{ "page_index": 835, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_105.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_105.png", "page_index": 835, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:42+07:00" }, "raw_text": "Semi-Supervised Learning Methods Many methods exist: EM with generative mixture models, self-training, co-training, data-based methods, ... Inductive methods and transductive methods Transductive methods: only label the available unlabeled data - do not generate a classifier Inductive methods: not only produce labels for unlabeled data, but also generate a classifier Algorithmic methods Classifier-based methods: start from an initial classifier, and iteratively enhance it Data-based methods: find an inherent geometry in the data, and use the geometry to find a good classifier 105" },
{ "page_index": 836, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_106.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_106.png", "page_index": 836, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:47+07:00" }, "raw_text": "Co-Training C1: A classifier trained on view 1 C2: A classifier trained on view 2 From iteration t to iteration t+1: Allow C1 to label some instances, allow C2 to label some instances, and add the self-labeled instances to the pool of training data 106" },
{ "page_index": 837, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_107.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_107.png", "page_index": 837, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:50+07:00" }, "raw_text": "Graph Mincuts Positive samples as sources and negative samples as sinks Unlabeled samples are connected to other samples with weights based on similarity Objective: find a minimum set of edges to remove so that all flows from sources to sinks are blocked 107" },
{ "page_index": 838, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_108.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_108.png", "page_index": 838, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:55+07:00" }, "raw_text": "Bi-clustering and Co-clustering Biclustering, co-clustering, or two-mode clustering allows simultaneous clustering of the rows and columns of a matrix Given a set of m rows in n columns (i.e., an m x n matrix), a biclustering algorithm generates biclusters - a subset of rows which exhibit similar behavior across a subset of columns, or vice versa 108" },
"/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_109.png", "page_index": 839, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:22:58+07:00" }, "raw_text": "Chapter 11. Cluster Analysis: Advanced Methods Statistics-Based Clustering Clustering High-Dimensional Data Semi-Supervised Learning and Active Learning Constraint-Based Clustering Bi-Clustering and co-Clustering Collaborative filtering Spectral Clustering Evaluation of Clustering Quality Summary 109" }, { "page_index": 840, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_110.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_110.png", "page_index": 840, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:04+07:00" }, "raw_text": "Collaborative Filtering Collaborative filtering (CF) is the process of filtering for information or patterns using techniques involving collaboration among multiple agents, viewpoints, data sources, etc Applications involving very large data sets: sensing and monitoring data, financial data, electronic commerce and web 2.0 applications Example: a method of making automatic predictions (filtering) about the interests of a user by collecting taste information from many users (collaborating) Assumption: those who agreed in the past tend to agree again in the future 110" }, { "page_index": 841, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_111.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_111.png", "page_index": 841, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:08+07:00" }, "raw_text": "Framework A general 2-step process Look for users who share the same rating patterns with the active user (the one for which the prediction is made) Use the ratings of those found in step 1 to calculate a prediction Item-based filtering (used in Amazon.com) Build an item-to-item matrix determining the relationships between each pair of items Infer the user's taste (i.e., the prediction) using the matrix 111" }, { "page_index": 842, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_112.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_112.png", "page_index": 842, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:13+07:00" }, "raw_text": "Spectral Clustering Given a set of data points A, a similarity matrix S may be between points i and j (i, j e A) Spectral clustering makes use of the spectrum of the similarity matrix of the data to perform dimensionality reduction for clustering in fewer dimensions In functional analysis, the spectrum of a bounded operator is a generalization of eigenvalues for matrices A complex number is said to be in the spectrum of a bounded linear operator T if Al - T is not invertible, where I is the identity operator 112" }, { "page_index": 843, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_113.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": 
"/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_113.png", "page_index": 843, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:18+07:00" }, "raw_text": "Shi-Malik Algorithm Given a set S of points, the algorithm partitions Let v be the eigenvector v corresponding to the second-smallest eigenvalue of the Laplacian matrix L of S SD-1/2, where D is the diagonal L = 1 - D-1/2 matrix D. - S Let m be the median of the components in v Place all points whose component in v is greater 113" }, { "page_index": 844, "chapter_num": 10, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_114.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_10/slide_114.png", "page_index": 844, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:22+07:00" }, "raw_text": "Example Data & Graph, 5-NN 2nd eigenvalue (sorted) Clustering 02 0.15 0.1 0.06 ... 0.06 0.1 0.15 -02 Extraction from http://www.kimbly.com/blog/000489.html 114" }, { "page_index": 845, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_001.png", "page_index": 845, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:26+07:00" }, "raw_text": "Mining: Data Concepts and Techniques (3rd ed.) - Chapter 12 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University O2011 Han, Kamber & Pei. All rights reserved. 1" }, { "page_index": 846, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_002.png", "page_index": 846, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:30+07:00" }, "raw_text": "Chapter 12. Outlier Analysis Outlier and Outlier Analysis Outlier Detection Methods Statistical Approaches Proximity-Base Approaches Clustering-Base Approaches Classification Approaches Mining Contextual and Collective Outliers Outlier Detection in High Dimensional Data Summary 2" }, { "page_index": 847, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_003.png", "page_index": 847, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:38+07:00" }, "raw_text": "Whgt Are Outliers? Outlier: A data object that deviates significantly from the normal objects as if it were generated by a different mechanism Ex.: Unusual credit card purchase, sports: Michael Jordon, Wayne Gretzky, ... Outliers are different from the noise data Noise is random error or variance in a measured variable Noise should be removed before outlier detection Outliers are interesting: It violates the mechanism that generates the normal data Outlier detection vs. 
Outlier detection vs. novelty detection: at an early stage a novel pattern may be reported as an outlier, but later it is merged into the model Applications: Credit card fraud detection Telecom fraud detection Customer segmentation Medical analysis 3" }, { "page_index": 848, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_004.png", "page_index": 848, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:44+07:00" }, "raw_text": "Types of Outliers (I) Three kinds: global, contextual and collective outliers Global outlier (or point anomaly) An object is a global outlier if it significantly deviates from the rest of the data set Ex. Intrusion detection in computer networks Issue: Find an appropriate measurement of deviation Contextual outlier (or conditional outlier) An object is a contextual outlier if it deviates significantly based on a selected context Ex. 80° F in Urbana: outlier? (depending on summer or winter?) Attributes of data objects should be divided into two groups Contextual attributes: define the context, e.g., time & location Behavioral attributes: characteristics of the object, used in outlier evaluation, e.g., temperature Can be viewed as a generalization of local outliers - whose density significantly deviates from its local area Issue: How to define or formulate a meaningful context? 4" }, { "page_index": 849, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_005.png", "page_index": 849, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:52+07:00" }, "raw_text": "Types of Outliers (II) Collective Outliers A subset of data objects collectively deviate significantly from the whole data set, even if the individual data objects may not be outliers [figure: a compact group of points forming a collective outlier among scattered points] Applications: E.g., intrusion detection: When a number of computers keep sending denial-of-service packets to each other Detection of collective outliers Consider not only the behavior of individual objects, but also that of groups of objects Need to have the background knowledge on the relationship among data objects, such as a distance or similarity measure on objects. 
A data set may have multiple types of outliers. One object may belong to more than one type of outlier 5" }, { "page_index": 850, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_006.png", "page_index": 850, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:23:57+07:00" }, "raw_text": "Challenges of Outlier Detection Modeling normal objects and outliers properly Hard to enumerate all possible normal behaviors in an application The border between normal and outlier objects is often a gray area Application-specific outlier detection Choice of distance measure among objects and the model of relationship among objects are often application-dependent E.g., in clinical data a small deviation could be an outlier, while in marketing analysis only larger fluctuations would be Handling noise in outlier detection Noise may distort the normal objects and blur the distinction between normal objects and outliers. It may help hide outliers and reduce the effectiveness of outlier detection Understandability Understand why these are outliers: justification of the detection Specify the degree of an outlier: the unlikelihood of the object being generated by a normal mechanism 6" }, { "page_index": 851, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_007.png", "page_index": 851, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:01+07:00" }, "raw_text": "Chapter 12. Outlier Analysis Outlier and Outlier Analysis Outlier Detection Methods Statistical Approaches Proximity-Based Approaches Clustering-Based Approaches Classification Approaches Mining Contextual and Collective Outliers Outlier Detection in High Dimensional Data Summary 7" }, { "page_index": 852, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_008.png", "page_index": 852, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:08+07:00" }, "raw_text": "Outlier Detection I: Supervised Methods Two ways to categorize outlier detection methods: Based on whether user-labeled examples of outliers can be obtained: Supervised, semi-supervised vs. unsupervised methods Based on assumptions about normal data and outliers: Statistical, proximity-based, and clustering-based methods Outlier Detection I: Supervised Methods Modeling outlier detection as a classification problem - Samples examined by domain experts used for training & testing Methods for learning a classifier for outlier detection effectively: Model normal objects & report those not matching the model as outliers, or Model outliers and treat those not matching the model as normal Challenges Imbalanced classes, i.e., outliers are rare: Boost the outlier class and make up some artificial outliers Catch as many outliers as possible, i.e., recall is more important than accuracy (i.e., not mislabeling normal objects as outliers) 8" }, { "page_index": 853, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_009.png", "page_index": 853, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:15+07:00" }, "raw_text": "Outlier Detection II: Unsupervised Methods Assume the normal objects are somewhat \"clustered\" into multiple groups, each having some distinct features An outlier is expected to be far away from any groups of normal objects Weakness: Cannot detect collective outliers effectively Normal objects may not share any strong patterns, but the collective outliers may share high similarity in a small area Ex. In some intrusion or virus detection, normal activities are diverse Unsupervised methods may have a high false positive rate but still miss many real outliers. Supervised methods can be more effective, e.g., identify attacks on some key resources Many clustering methods can be adapted for unsupervised methods Find clusters, then outliers: those not belonging to any cluster (see the sketch below) Problem 1: Hard to distinguish noise from outliers Problem 2: Costly since clustering comes first, yet there are far fewer outliers than normal objects Newer methods: tackle outliers directly 9" }, { "page_index": 854, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_010.png", "page_index": 854, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:20+07:00" }, "raw_text": "Outlier Detection III: Semi-Supervised Methods Situation: In many applications, the number of labeled data is often small: Labels could be on outliers only, normal objects only, or both Semi-supervised outlier detection: Regarded as applications of semi-supervised learning If some labeled normal objects are available Use the labeled examples and the proximate unlabeled objects to train a model for normal objects Those not fitting the model of normal objects are detected as outliers If only some labeled outliers are available, a small number of labeled outliers may not cover the possible outliers well To improve the quality of outlier detection, one can get help from models for normal objects learned from unsupervised methods 10" }, 
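The "find clusters, then outliers" recipe of slide 9 above can be sketched with DBSCAN, whose label -1 marks points that belong to no cluster; a minimal sketch assuming scikit-learn, with eps and min_samples chosen arbitrarily for the toy data:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(200, 2))      # one "clustered" group
scattered = rng.uniform(-6, 6, size=(5, 2))   # a few scattered points
X = np.vstack([normal, scattered])

# DBSCAN labels points in no cluster as -1; treat those as candidates
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print(np.where(labels == -1)[0])              # candidate outlier indices
```

As the slide warns, this alone cannot distinguish noise from genuine outliers, and the clustering step dominates the cost.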
"/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_011.png", "page_index": 855, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:26+07:00" }, "raw_text": "Outlier Detection (1): Statistical Methods Statistical methods (also known as model-based methods) assume that the normal data follow some statistical model (a stochastic model) The data not following the model are outliers. R Example (right figure): First use Gaussian distribution to model the normal data For each object y in region R, estimate gp(y), the probability of y fits the Gaussian distribution If gp(y) is very low, y is unlikely generated by the Gaussian model, thus an outlier Effectiveness of statistical methods: highly depends on whether the assumption of statistical model holds in the real data There are rich alternatives to use various statistical models E.g., parametric vs. non-parametric 11" }, { "page_index": 856, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_012.png", "page_index": 856, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:31+07:00" }, "raw_text": "Outlier Detection (2): Proximity-Based Methods An object is an outlier if the nearest neighbors of the object are far away, i.e., the proximity of the object is significantly deviates from the proximity of most of the other objects in the same data set Example (right figure): Model the proximity of an R object using its 3 nearest neighbors Objects in region R are substantially different from other objects in the data set Thus the objects in R are outliers The effectiveness of proximity-based methods highly relies on the proximity measure. In some applications, proximity or distance measures cannot be obtained easily Often have a difficulty in finding a group of outliers which stay close to each other Two major types of proximity-based outlier detection Distance-based vs. 
density-based 12" }, { "page_index": 857, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_013.png", "page_index": 857, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:37+07:00" }, "raw_text": "Outlier Detection (3): Clustering-Based Methods Normal data belong to large and dense clusters, whereas outliers belong to small or sparse clusters, or do not belong to any clusters R Example (right figure): two clusters All points not in R form a large cluster The two points in R form a tiny cluster, thus are outliers Since there are many clustering methods, there are many clustering-based outlier detection methods as well Clustering is expensive: straightforward adaption of a clustering method for outlier detection can be costly and does not scale up well for large data sets 13" }, { "page_index": 858, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_014.png", "page_index": 858, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:41+07:00" }, "raw_text": "Chapter 12. Outlier Analysis Outlier and Outlier Analysis Outlier Detection Methods Statistical Approaches Proximity-Base Approaches Clustering-Base Approaches Classification Approaches Mining Contextual and Collective Outliers Outlier Detection in High Dimensional Data Summary 14" }, { "page_index": 859, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_015.png", "page_index": 859, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:47+07:00" }, "raw_text": "Statistical Approaches Statistical approaches assume that the objects in a data set are generated by a stochastic process (a generative model) Idea: learn a generative model fitting the given data set, and then identify the objects in low probability regions of the model as outliers Methods are divided into two categories: parametric vs. 
non-parametric. Parametric method: Assumes that the normal data is generated by a parametric distribution with parameter Θ The probability density function of the parametric distribution f(x, Θ) gives the probability that object x is generated by the distribution The smaller this value, the more likely x is an outlier Non-parametric method: Does not assume an a priori statistical model, but determines the model from the input data Not completely parameter-free: the number and nature of the parameters are flexible and not fixed in advance Examples: histogram and kernel density estimation 15" }, { "page_index": 860, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_016.png", "page_index": 860, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:54+07:00" }, "raw_text": "Parametric Methods I: Detection of Univariate Outliers Based on Normal Distribution Univariate data: A data set involving only one attribute or variable Often assume that data are generated from a normal distribution, learn the parameters from the input data, and identify the points with low probability as outliers Ex: Avg. temp.: {24.0, 28.9, 28.9, 29.0, 29.1, 29.1, 29.2, 29.2, 29.3, 29.4} Use the maximum likelihood method to estimate μ and σ: $\ln L(\mu, \sigma^2) = \sum_{i=1}^{n} \ln f(x_i \mid \mu, \sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \sum_{i=1}^{n}\frac{(x_i-\mu)^2}{2\sigma^2}$ Taking derivatives with respect to μ and σ², we derive the following maximum likelihood estimates: $\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ and $\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2$ For the above data with n = 10, we have $\hat{\mu} = 28.61$ and $\hat{\sigma} = \sqrt{2.29} = 1.51$ Then (24 - 28.61) / 1.51 = -3.04 < -3, so 24 is an outlier, since the ±3σ region contains 99.7% of the data 16" }, { "page_index": 861, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_017.png", "page_index": 861, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:24:59+07:00" }, "raw_text": "Parametric Methods I: The Grubb's Test Univariate outlier detection: The Grubb's test (maximum normed residual test) - another statistical method under normal distribution For each object x in a data set, compute its z-score; x is an outlier if $z \ge \frac{N-1}{\sqrt{N}} \sqrt{\frac{t^2_{\alpha/(2N),\,N-2}}{N-2+t^2_{\alpha/(2N),\,N-2}}}$ where $t^2_{\alpha/(2N),\,N-2}$ is the value taken by a t-distribution at a significance level of α/(2N), and N is the # of objects in the data set (a numeric check follows below) 17" }, 
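The maximum likelihood estimates and the Grubb's cutoff of slides 16-17 can be checked numerically; a sketch assuming SciPy is available for the t-quantile, run on the temperature data from slide 16:

```python
import numpy as np
from scipy import stats

x = np.array([24.0, 28.9, 28.9, 29.0, 29.1, 29.1, 29.2, 29.2, 29.3, 29.4])
N = len(x)
mu = x.mean()                      # MLE mean, ~28.61
sigma = x.std()                    # MLE std (divides by N, as on the slide)
z = np.abs(x - mu) / sigma         # z-score of each observation

alpha = 0.05
t2 = stats.t.ppf(1 - alpha / (2 * N), N - 2) ** 2
g_crit = (N - 1) / np.sqrt(N) * np.sqrt(t2 / (N - 2 + t2))
print(x[z > g_crit])               # flags 24.0 as the outlier
```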
{ "page_index": 862, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_018.png", "page_index": 862, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:25:04+07:00" }, "raw_text": "Parametric Methods II: Detection of Multivariate Outliers Multivariate data: A data set involving two or more attributes or variables Transform the multivariate outlier detection task into a univariate outlier detection problem Method 1. Compute the Mahalanobis distance (see the sketch below) Let $\bar{o}$ be the mean vector for a multivariate data set. The Mahalanobis distance from an object o to $\bar{o}$ is $MDist(o, \bar{o}) = (o - \bar{o})^T S^{-1} (o - \bar{o})$ where S is the covariance matrix Use the Grubb's test on this measure to detect outliers Method 2. Use the $\chi^2$-statistic $\chi^2 = \sum_{i=1}^{n} \frac{(o_i - E_i)^2}{E_i}$ where $E_i$ is the mean of the i-th dimension among all objects, and n is the dimensionality If the $\chi^2$-statistic is large, then object o is an outlier 18" }, { "page_index": 863, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_019.png", "page_index": 863, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:25:09+07:00" }, "raw_text": "Parametric Methods III: Using Mixture of Parametric Distributions Assuming data generated by a single normal distribution could sometimes be overly simplified Example (right figure): The objects between the two clusters cannot be captured as outliers since they are close to the estimated mean To overcome this problem, assume the normal data is generated by two normal distributions. For any object o in the data set, the probability that o is generated by the mixture of the two distributions is given by $Pr(o \mid \Theta_1, \Theta_2) = f_{\Theta_1}(o) + f_{\Theta_2}(o)$ where $f_{\Theta_1}$ and $f_{\Theta_2}$ are the probability density functions of $\Theta_1$ and $\Theta_2$ Then use the EM algorithm to learn the parameters $\mu_1, \sigma_1, \mu_2, \sigma_2$ from the data An object o is an outlier if it does not belong to any cluster 19" }, { "page_index": 864, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_020.png", "page_index": 864, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:25:16+07:00" }, "raw_text": "Non-Parametric Methods: Detection Using Histogram The model of normal data is learned from the input data without any a priori structure. Often makes fewer assumptions about the data, and thus can be applicable in more scenarios Outlier detection using histogram: [figure: histogram of amount per transaction, bins $0-1K, 1-2K, 2-3K, 3-4K, 4-5K with frequencies 60%, 20%, 10%, 6.7%, 1%] The figure shows the histogram of purchase amounts in transactions A transaction in the amount of $7,500 is an outlier, since only 0.2% of transactions have an amount higher than $5,000 Problem: Hard to choose an appropriate bin size for the histogram Too small a bin size -> normal objects in empty/rare bins, false positives Too big a bin size -> outliers in some frequent bins, false negatives Solution: Adopt kernel density estimation to estimate the probability density distribution of the data. If the estimated density function is high, the object is likely normal. Otherwise, it is likely an outlier. 20" }, { "page_index": 865, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_021.png", "page_index": 865, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:25:19+07:00" }, 
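Method 1 of slide 18 above (Mahalanobis distance, then a univariate test on the resulting scores) is a few lines of numpy; a sketch using the sample covariance as S and a planted outlier:

```python
import numpy as np

def mahalanobis_scores(X):
    """Distance of each row of X from the mean vector, scaled by the
    inverse covariance matrix S^{-1} (slide 18, Method 1)."""
    mu = X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, S_inv, diff))

X = np.random.default_rng(1).normal(size=(100, 3))
X[0] += 8                       # plant one multivariate outlier
scores = mahalanobis_scores(X)
print(scores.argmax())          # 0: the planted outlier has the largest
                                # MDist; Grubb's test can then be applied
                                # to these univariate scores
```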
"raw_text": "Chapter 12. Outlier Analysis Outlier and Outlier Analysis Outlier Detection Methods Statistical Approaches Proximity-Based Approaches Clustering-Based Approaches Classification Approaches Mining Contextual and Collective Outliers Outlier Detection in High Dimensional Data Summary 21" }, { "page_index": 866, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_022.png", "page_index": 866, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:25:24+07:00" }, "raw_text": "Proximity-Based Approaches: Distance-Based vs. Density-Based Outlier Detection Intuition: Objects that are far away from the others are outliers Assumption of the proximity-based approach: The proximity of an outlier deviates significantly from that of most of the others in the data set Two types of proximity-based outlier detection methods Distance-based outlier detection: An object o is an outlier if its neighborhood does not have enough other points Density-based outlier detection: An object o is an outlier if its density is relatively much lower than that of its neighbors 22" }, { "page_index": 867, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_023.png", "page_index": 867, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:25:29+07:00" }, "raw_text": "Distance-Based Outlier Detection For each object o, examine the # of other objects in the r-neighborhood of o, where r is a user-specified distance threshold An object o is an outlier if most (taking π as a fraction threshold) of the objects in D are far away from o, i.e., not in the r-neighborhood of o: an object o is a DB(r, π) outlier if $\frac{\|\{o' \mid dist(o, o') \le r\}\|}{\|D\|} \le \pi$ Equivalently, one can check the distance between o and its k-th nearest neighbor $o_k$, where $k = \lceil \pi \|D\| \rceil$; o is an outlier if $dist(o, o_k) > r$ Efficient computation: Nested loop algorithm (see the sketch below) For any object o, calculate its distance from other objects, and count the # of other objects in the r-neighborhood If π·n other objects are within distance r, terminate the inner loop Otherwise, o is a DB(r, π) outlier Efficiency: Actually CPU time is not O(n²) but linear in the data set size, since for most non-outlier objects the inner loop terminates early 23" }, 
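The DB(r, π) test and the early-terminating nested loop of slide 23 above, as a minimal numpy sketch (for simplicity the point itself is counted among its own r-neighbors):

```python
import numpy as np

def db_outliers(X, r, pi):
    """Nested-loop DB(r, pi) detection: o is an outlier unless at least
    ceil(pi * n) objects lie within distance r of it; the inner loop
    stops as soon as enough neighbors are found."""
    n = len(X)
    need = int(np.ceil(pi * n))          # neighbors required to be "normal"
    outliers = []
    for i in range(n):
        count = 0
        for j in range(n):
            if np.linalg.norm(X[i] - X[j]) <= r:
                count += 1
                if count >= need:        # early termination of inner loop
                    break
        else:                            # inner loop never broke: outlier
            outliers.append(i)
    return outliers
```

For most non-outlier objects the break fires after scanning only a few rows, which is why the observed cost is far below the worst-case O(n²) noted on the slide.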
{ "page_index": 868, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_024.png", "page_index": 868, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:25:39+07:00" }, "raw_text": "Distance-Based Outlier Detection: A Grid-Based Method Why is efficiency still a concern? When the complete set of objects cannot be held in main memory, there is I/O swapping cost The major cost: (1) each object tests against the whole data set - why not only its close neighbors? (2) objects are checked one by one - why not group by group? Grid-based method (CELL): Data space is partitioned into a multi-D grid. Each cell is a hypercube with diagonal length r/2 Pruning using the level-1 & level-2 cell properties [grid figure]: For any possible point x in cell C and any possible point y in a level-1 cell, dist(x, y) ≤ r For any possible point x in cell C and any point y such that dist(x, y) ≤ r, y is in a level-2 cell Thus we only need to check the objects that cannot be pruned, and even for such an object o, we only need to compute the distance between o and the objects in the level-2 cells (since beyond level-2, the distance from o is more than r) 24" }, { "page_index": 869, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_025.png", "page_index": 869, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:25:46+07:00" }, "raw_text": "Density-Based Outlier Detection Local outliers: outliers relative to their local neighborhoods, instead of the global data distribution [figure: clusters C1, C2 and points o1-o4] In the figure, o1 and o2 are local outliers to C1, o3 is a global outlier, but o4 is not an outlier. However, proximity-based methods cannot find that o1 and o2 are outliers (e.g., comparing with o4). Intuition (density-based outlier detection): The density around an outlier object is significantly different from the density around its neighbors Method: Use the relative density of an object against its neighbors as the indicator of the degree of the object being an outlier k-distance of an object o, $dist_k(o)$: distance between o and its k-th NN k-distance neighborhood of o: $N_k(o) = \{o' \mid o' \in D,\, dist(o, o') \le dist_k(o)\}$ $N_k(o)$ could be bigger than k since multiple objects may have identical distance to o 25" }, { "page_index": 870, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_026.png", "page_index": 870, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:25:52+07:00" }, "raw_text": "Local Outlier Factor: LOF Reachability distance from o' to o: $reachdist_k(o \leftarrow o') = \max\{dist_k(o), dist(o, o')\}$ where k is a user-specified parameter [figure: MinPts = 3, k = 3] Local reachability density of o: $lrd_k(o) = \frac{\|N_k(o)\|}{\sum_{o' \in N_k(o)} reachdist_k(o' \leftarrow o)}$ LOF (local outlier factor) of an object o is the average of the ratios of the local reachability densities of o's k-nearest neighbors to that of o: $LOF_k(o) = \frac{1}{\|N_k(o)\|} \sum_{o' \in N_k(o)} \frac{lrd_k(o')}{lrd_k(o)}$ The lower the local reachability density of o, and the higher the local reachability densities of the kNN of o, the higher the LOF (see the sketch below) This captures a local outlier whose local density is relatively low compared to the local densities of its kNN 26" }, { "page_index": 871, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_027.png", "page_index": 871, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:25:56+07:00" }, 
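The reachdist / lrd / LOF definitions of slide 26 above translate almost line for line; a compact numpy sketch that is quadratic in n and, as a simplification, ignores distance ties (it always takes exactly k neighbors):

```python
import numpy as np

def lof(X, k=3):
    """Local outlier factor of every point, following the slide's
    definitions: reachdist, local reachability density, then LOF."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)                  # exclude each point itself
    knn = np.argsort(D, axis=1)[:, :k]           # indices of k nearest
    kdist = D[np.arange(len(X)), knn[:, -1]]     # dist_k(o)
    # reachdist_k(o' <- o) = max(dist_k(o'), dist(o, o'))
    reach = np.maximum(kdist[knn], D[np.arange(len(X))[:, None], knn])
    lrd = k / reach.sum(axis=1)                  # local reachability density
    return (lrd[knn].sum(axis=1) / k) / lrd      # avg ratio lrd(o')/lrd(o)
```

Scores near 1 indicate density comparable to the neighborhood; values well above 1 mark local outliers such as o1 and o2 in the figure of slide 25.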
"raw_text": "Chapter 12. Outlier Analysis Outlier and Outlier Analysis Outlier Detection Methods Statistical Approaches Proximity-Based Approaches Clustering-Based Approaches Classification Approaches Mining Contextual and Collective Outliers Outlier Detection in High Dimensional Data Summary 27" }, { "page_index": 872, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_028.png", "page_index": 872, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:26:03+07:00" }, "raw_text": "Clustering-Based Outlier Detection (1 & 2): Not belonging to any cluster, or far from the closest one An object is an outlier if (1) it does not belong to any cluster, (2) there is a large distance between the object and its closest cluster, or (3) it belongs to a small or sparse cluster Case 1: Does not belong to any cluster Identify animals not part of a flock: using a density-based clustering method such as DBSCAN Case 2: Far from its closest cluster Using k-means, partition data points into clusters For each object o, assign an outlier score based on its distance from its closest center - if dist(o, co)/avg_dist(co) is large, o is likely an outlier Ex. Intrusion detection: Consider the similarity between data points and the clusters in a training data set Use a training set to find patterns of \"normal\" data, e.g., frequent itemsets in each segment, and cluster similar connections into groups Compare new data points with the clusters mined - outliers are possible attacks 28" }, { "page_index": 873, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_029.png", "page_index": 873, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:26:10+07:00" }, "raw_text": "Clustering-Based Outlier Detection (3): Detecting Outliers in Small Clusters FindCBLOF: Detect outliers in small clusters Find clusters, and sort them in decreasing size To each data point, assign a cluster-based local outlier factor (CBLOF): If object p belongs to a large cluster, CBLOF = cluster size × similarity between p and its cluster If p belongs to a small one, CBLOF = cluster size × similarity between p and the closest large cluster [figure: large clusters C1, C2; small cluster C3; point o] Ex. In the figure, o is an outlier since its closest large cluster is C1, but the similarity between o and C1 is small. For any point in C3, its closest large cluster is C2, but its similarity to C2 is low; moreover |C3| = 3 is small 29" }, { "page_index": 874, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_030.png", "page_index": 874, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:26:14+07:00" }, "raw_text": "Clustering-Based Method: Strength and Weakness Strength Detect outliers without requiring any labeled data Work for many types of data Clusters can be regarded as summaries of the data Once the clusters are obtained, one need only compare any object against the clusters to determine whether it is an outlier (fast) Weakness Effectiveness depends highly on the clustering method used - it may not be optimized for outlier detection High computational cost: need to first find clusters A method to reduce the cost: fixed-width clustering A point is assigned to a cluster if the center of the cluster is within a pre-defined distance threshold from the point If a point cannot be assigned to any existing cluster, a new cluster is created; the distance threshold may be learned from the training data under certain conditions" }, { "page_index": 875, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_031.png", "page_index": 875, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:26:18+07:00" }, "raw_text": "Chapter 12. Outlier Analysis Outlier and Outlier Analysis Outlier Detection Methods Statistical Approaches Proximity-Based Approaches Clustering-Based Approaches Classification Approaches Mining Contextual and Collective Outliers Outlier Detection in High Dimensional Data Summary 31" }, { "page_index": 876, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_032.png", "page_index": 876, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:26:28+07:00" }, "raw_text": "Classification-Based Method I: One-Class Model Idea: Train a classification model that can distinguish \"normal\" data from outliers A brute-force approach: Consider a training set that contains samples labeled as \"normal\" and others labeled as \"outlier\" But the training set is typically heavily biased: the # of \"normal\" samples likely far exceeds the # of outlier samples Cannot detect unseen anomalies One-class model: A classifier is built to describe only the normal class: learn the decision boundary of the normal class using classification methods such as one-class SVM (see the sketch below). 
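A minimal sketch of the one-class model just described, using scikit-learn's OneClassSVM; the nu and gamma values and the toy data are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_train = rng.normal(0, 1, size=(300, 2))   # labeled "normal" only

clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_train)

new_points = np.array([[0.1, -0.3],              # near the normal mass
                       [6.0, 6.0]])              # far outside it
print(clf.predict(new_points))                   # +1 = normal, -1 = outlier
```

Because only the normal class is modeled, a point such as (6, 6) is flagged even though nothing similar appeared in training, which is the "unseen anomaly" advantage the slide notes.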
Any samples that do not belong to the normal class (not within the decision boundary) are declared as outliers Adv: can detect new outliers that may not appear close to any outlier objects in the training set Extension: Normal objects may belong to multiple classes 32" }, { "page_index": 877, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_033.png", "page_index": 877, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:26:35+07:00" }, "raw_text": "Classification-Based Method II: Semi-Supervised Learning Semi-supervised learning: Combining classification-based and clustering-based methods Method: Using a clustering-based approach, find a large cluster C and a small cluster C1 Since some objects in C carry the label \"normal\", treat all objects in C as normal Use the one-class model of this cluster to identify normal objects in outlier detection Since some objects in cluster C1 carry the label \"outlier\", declare all objects in C1 as outliers Any object that does not fall into the model for C (such as a) is considered an outlier as well [figure legend: objects with label \"normal\", objects with label \"outlier\", objects without label] Comments on classification-based outlier detection methods Strength: Outlier detection is fast Bottleneck: Quality heavily depends on the availability and quality of the training set; it is often difficult to obtain representative and high-quality training data 33" }, { "page_index": 878, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_034.png", "page_index": 878, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:26:39+07:00" }, "raw_text": "Chapter 12. Outlier Analysis Outlier and Outlier Analysis Outlier Detection Methods Statistical Approaches Proximity-Based Approaches Clustering-Based Approaches Classification Approaches Mining Contextual and Collective Outliers Outlier Detection in High Dimensional Data Summary 34" }, { "page_index": 879, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_035.png", "page_index": 879, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:26:46+07:00" }, "raw_text": "Mining Contextual Outliers I: Transform into Conventional Outlier Detection If the contexts can be clearly identified, transform the problem into conventional outlier detection 1. Identify the context of the object using the contextual attributes 2. Calculate the outlier score for the object in the context using a conventional outlier detection method Ex. Detect outlier customers in the context of customer groups Behavioral attributes: # of trans/yr, annual total trans. amount Steps: (1) locate c's context, (2) compare c with the other customers in the same group, and (3) use a conventional outlier detection method If the context contains very few customers, generalize contexts Ex. Learn a mixture model U on the contextual attributes, and another mixture model V of the data on the behavior attributes Learn a mapping p(V|U): the probability that a data object o belonging to cluster $U_j$ on the contextual attributes is generated by cluster $V_i$ on the behavior attributes Outlier score: $S(o) = \sum_{U_j} p(o \in U_j) \sum_{V_i} p(o \in V_i)\, p(V_i \mid U_j)$ (see the sketch below) 35" }, { "page_index": 880, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_036.png", "page_index": 880, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:26:52+07:00" }, "raw_text": "Mining Contextual Outliers II: Modeling Normal Behavior with Respect to Contexts In some applications, one cannot clearly partition the data into contexts Ex. If a customer suddenly purchased a product that is unrelated to those she recently browsed, it is unclear how many products browsed earlier should be considered as the context Model the \"normal\" behavior with respect to contexts Using a training data set, train a model that predicts the expected behavior attribute values with respect to the contextual attribute values An object is a contextual outlier if its behavior attribute values significantly deviate from the values predicted by the model Using a prediction model that links the contexts and behavior, these methods avoid the explicit identification of specific contexts Methods: A number of classification and prediction techniques can be used to build such models, such as regression, Markov models, and finite state automata 36" }, { "page_index": 881, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_037.png", "page_index": 881, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:01+07:00" }, "raw_text": "Mining Collective Outliers I: On the Set of \"Structured Objects\" Collective outlier: objects as a group deviate significantly from the entire data [figure] Need to examine the structure of the data set, i.e., the relationships between multiple data objects Each of these structures is inherent to its respective type of data For temporal data (such as time series and sequences), we explore the structures formed by time, which occur in segments of the time series or subsequences For spatial data, explore local areas For graph and network data, we explore subgraphs Difference from contextual outlier detection: the structures are often not explicitly defined, and have to be discovered as part of the outlier detection process. 
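The mixture-based contextual outlier score S(o) defined on slide 35 above is just two nested weighted sums; a toy numpy sketch with assumed membership probabilities and mapping matrix (all numbers are made up for illustration):

```python
import numpy as np

# p_U[j] = p(o in U_j): membership in contextual clusters
# p_V[i] = p(o in V_i): membership in behavioral clusters
# M[i, j] = p(V_i | U_j): learned mapping between the two mixture models
p_U = np.array([0.7, 0.3])
p_V = np.array([0.1, 0.9])
M = np.array([[0.8, 0.2],
              [0.2, 0.8]])

# S(o) = sum_j p(o in U_j) * sum_i p(o in V_i) * p(V_i | U_j)
S = sum(p_U[j] * sum(p_V[i] * M[i, j] for i in range(len(p_V)))
        for j in range(len(p_U)))
print(S)   # a low S(o) means o's behavior is unlikely given its context
```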
Collective outlier detection methods: two categories Reduce the problem to conventional outlier detection - Identify structure units, treat each structure unit (e.g.: subsequence, time series segment, local area, or subgraph) as a data object, and extract features Then outlier detection on the set of \"structured objects constructed as such using the extracted features 37" }, { "page_index": 882, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_038.png", "page_index": 882, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:07+07:00" }, "raw_text": "Mining Collective Outliers ll: Direct Modeling of the Expected Behavior of Structure Units Models the expected behavior of structure units directly Ex. 1. Detect collective outliers in online social network of customers Treat each possible subgraph of the network as a structure unit Collective outlier: An outlier subgraph in the social network Small subgraphs that are of very low frequency Large subgraphs that are surprisingly frequent Ex. 2. Detect collective outliers in temporal sequences Learn a Markov model from the sequences A subsequence can then be declared as a collective outlier if it significantly deviates from the model Collective outlier detection is subtle due to the challenge of exploring the structures in data The exploration typically uses heuristics, and thus may be application dependent The computational cost is often high due to the sophisticated mining process 38" }, { "page_index": 883, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_039.png", "page_index": 883, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:11+07:00" }, "raw_text": "Chapter 12. 
Outlier Analysis Outlier and Outlier Analysis Outlier Detection Methods Statistical Approaches Proximity-Based Approaches Clustering-Based Approaches Classification Approaches Mining Contextual and Collective Outliers Outlier Detection in High Dimensional Data Summary 39" }, { "page_index": 884, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_040.png", "page_index": 884, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:17+07:00" }, "raw_text": "Challenges for Outlier Detection in High-Dimensional Data Interpretation of outliers Detecting outliers without saying why they are outliers is not very useful in high-D, because many features (or dimensions) are involved in a high-dimensional data set E.g., which subspaces manifest the outliers, or an assessment regarding the \"outlier-ness\" of the objects Data sparsity Data in high-D spaces are often sparse The distance between objects becomes heavily dominated by noise as the dimensionality increases Data subspaces Adaptive to the subspaces signifying the outliers Capturing the local behavior of data Scalable with respect to dimensionality # of subspaces increases exponentially 40" }, { "page_index": 885, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_041.png", "page_index": 885, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:22+07:00" }, "raw_text": "Approach I: Extending Conventional Outlier Detection Method 1: Detect outliers in the full space, e.g., the HilOut algorithm Find distance-based outliers, but use the ranks of distances instead of the absolute distances in outlier detection For each object o, find its k-nearest neighbors: $nn_1(o), \ldots, nn_k(o)$ The weight of object o: $w(o) = \sum_{i=1}^{k} dist(o, nn_i(o))$ All objects are ranked in weight-descending order Top-l objects in weight are output as outliers (l: user-specified parameter; see the sketch below) Employ space-filling curves for approximation: scalable in both time and space w.r.t. data size and dimensionality Method 2: Dimensionality reduction Works only when, in lower dimensionality, normal instances can still be distinguished from outliers PCA: Heuristically, the principal components with low variance are preferred because, on such dimensions, normal objects are likely close to each other and outliers often deviate from the majority 41" }, 
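The HilOut weight w(o) of slide 41 above and the rank-by-weight step, as a small numpy sketch (exact k-NN here; the real algorithm approximates the neighbors with space-filling curves):

```python
import numpy as np

def hilout_weights(X, k=5, top=3):
    """Weight each object by the summed distance to its k nearest
    neighbors and return the indices of the `top` heaviest objects."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)                # exclude the point itself
    knn_dists = np.sort(D, axis=1)[:, :k]      # dist(o, nn_i(o)), i = 1..k
    w = knn_dists.sum(axis=1)                  # w(o)
    return np.argsort(-w)[:top]                # top-l objects by weight
```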
{ "page_index": 886, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_042.png", "page_index": 886, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:28+07:00" }, "raw_text": "Approach II: Finding Outliers in Subspaces Extending conventional outlier detection: hard for outlier interpretation Find outliers in much lower dimensional subspaces: easy to interpret why and to what extent the object is an outlier E.g., find outlier customers in a certain subspace: average transaction amount >> avg. and purchase frequency << avg. Ex. A grid-based subspace outlier detection method Project data onto various subspaces to find an area whose density is much lower than average Discretize the data into a grid: each dimension is divided into φ equi-depth regions (why equi-depth?) Search for regions that are significantly sparse Consider a k-d cube: k ranges on k dimensions, with n objects If objects are independently distributed, the expected number of objects falling into a k-dimensional region is $(1/\varphi)^k n = f^k n$, where $f = 1/\varphi$; the standard deviation is $\sqrt{f^k (1 - f^k)\, n}$ The sparsity coefficient of cube C: $S(C) = \frac{n(C) - f^k n}{\sqrt{f^k (1 - f^k)\, n}}$ If S(C) < 0, C contains fewer objects than expected The more negative, the sparser C is and the more likely the objects in C are outliers in the subspace 42" }, { "page_index": 887, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_043.png", "page_index": 887, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:34+07:00" }, "raw_text": "Approach III: Modeling High-Dimensional Outliers Develop new models for high-dimensional outliers directly Avoid proximity measures and adopt new heuristics that do not deteriorate in high-dimensional data [figure: a set of points forms a cluster, except c (outlier)] Ex. Angle-based outliers: Kriegel, Schubert, and Zimek [KSz08] For each point o, examine the angle ∠xoy for every pair of points x, y For a point in the center (e.g., a), the angles formed differ widely; for an outlier (e.g., c), the angle variance is substantially smaller Use the variance of angles for a point to determine outliers Combine angles and distance to model outliers Use the distance-weighted angle variance as the outlier score Angle-based outlier factor (ABOF): $ABOF(o) = VAR_{x,y \in D,\, x \ne o,\, y \ne o}\, \frac{\langle \vec{ox}, \vec{oy} \rangle}{dist(o,x)^2\, dist(o,y)^2}$ (see the sketch below) An efficient approximate computation method has been developed It can be generalized to handle arbitrary types of data 43" }, { "page_index": 888, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_044.png", "page_index": 888, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:38+07:00" }, 
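The angle-based outlier factor of slide 43 above, computed by brute force; a sketch that is cubic in n and meant only to illustrate the formula:

```python
import numpy as np

def abof(X, o_idx):
    """ABOF(o): variance over all pairs x, y (x, y != o) of
    <ox, oy> / (dist(o,x)^2 * dist(o,y)^2). A small value means the
    other points lie in a narrow angular range as seen from o, so o
    is likely an outlier."""
    others = np.delete(np.arange(len(X)), o_idx)
    vals = []
    for a in range(len(others)):
        for b in range(a + 1, len(others)):
            ox = X[others[a]] - X[o_idx]
            oy = X[others[b]] - X[o_idx]
            vals.append(ox @ oy / (ox @ ox * oy @ oy))
    return np.var(vals)
```

Ranking points by ascending ABOF puts the most outlier-like points first; the KDD'08 paper referenced on the slide gives the efficient approximation.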
"raw_text": "Chapter 12. Outlier Analysis Outlier and Outlier Analysis Outlier Detection Methods Statistical Approaches Proximity-Based Approaches Clustering-Based Approaches Classification Approaches Mining Contextual and Collective Outliers Outlier Detection in High Dimensional Data Summary 44" }, { "page_index": 889, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_045.png", "page_index": 889, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:42+07:00" }, "raw_text": "Summary Types of outliers: global, contextual & collective outliers Outlier detection: supervised, semi-supervised, or unsupervised Statistical (or model-based) approaches Proximity-based approaches Clustering-based approaches Classification approaches Mining contextual and collective outliers Outlier detection in high dimensional data 45" }, { "page_index": 890, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_046.png", "page_index": 890, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:49+07:00" }, "raw_text": "References (I) B. Abraham and G.E.P. Box. Bayesian analysis of some outlier problems in time series. Biometrika, 66:229-248, 1979. M. Agyemang, K. Barker, and R. Alhajj. A comprehensive survey of numeric and symbolic outlier mining techniques. Intell. Data Anal., 10:521-538, 2006. F. J. Anscombe and I. Guttman. Rejection of outliers. Technometrics, 2:123-147, 1960. D. Agarwal. Detecting anomalies in cross-classified streams: a bayesian approach. Knowl. Inf. Syst., 11:29-44, 2006. F. Angiulli and C. Pizzuti. Outlier mining in large high-dimensional data sets. TKDE, 2005. C. C. Aggarwal and P. S. Yu. Outlier detection for high dimensional data. SIGMOD'01. R. J. Beckman and R. D. Cook. Outlier...s. Technometrics, 25:119-149, 1983. I. Ben-Gal. Outlier detection. In Maimon O. and Rockach L. (eds.) Data Mining and Knowledge Discovery Handbook: A Complete Guide for Practitioners and Researchers, Kluwer Academic, 2005. M. M. Breunig, H.-P. Kriegel, R. Ng, and J. Sander. LOF: Identifying density-based local outliers. SIGMOD'00. D. Barbara, Y. Li, J. Couto, J.-L. Lin, and S. Jajodia. Bootstrapping a data mining intrusion detection system. SAC'03. Z. A. Bakar, R. Mohemad, A. Ahmad, and M. M. Deris. A comparative study for outlier detection techniques in data mining. IEEE Conf. on Cybernetics and Intelligent Systems, 2006. S. D. Bay and M. Schwabacher. Mining distance-based outliers in near linear time with randomization and a simple pruning rule. KDD'03. D. Barbara, N. Wu, and S. Jajodia. Detecting novel network intrusion using bayesian estimators. SDM'01. V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey. ACM Computing Surveys, 41:1-58, 2009. D. Dasgupta and N. S. Majumdar. Anomaly detection in multidimensional data using negative selection algorithm. In CEC'02" }, { "page_index": 891, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_047.png", "page_index": 891, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:27:56+07:00" }, "raw_text": "References (2) E. Eskin, A. Arnold, M. Prerau, L. Portnoy, and S. Stolfo. A geometric framework for unsupervised anomaly detection: Detecting intrusions in unlabeled data. In Proc. 2002 Int. Conf. of Data Mining for Security Applications, 2002. E. Eskin. Anomaly detection over noisy data using learned probability distributions. ICML'00. T. Fawcett and F. Provost. Adaptive fraud detection. Data Mining and Knowledge Discovery, 1:291-316, 1997. V. J. Hodge and J. Austin. A survey of outlier detection methodologies. Artif. Intell. Rev., 22:85-126, 2004. D. M. Hawkins. Identification of Outliers. Chapman and Hall, London, 1980. Z. He, X. Xu, and S. Deng. Discovering cluster-based local outliers. Pattern Recogn. Lett., 24, June 2003. W. Jin, A. K. H. Tung, and J. Han. Mining top-n local outliers in large databases. KDD'01. W. Jin, A. K. H. Tung, J. Han, and W. Wang. Ranking outliers using symmetric neighborhood relationship. PAKDD'06. E. Knorr and R. Ng. A unified notion of outliers: Properties and computation. KDD'97. E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98. E. M. Knorr, R. T. Ng, and V. Tucakov. Distance-based outliers: Algorithms and applications. VLDB J., 8:237-253, 2000. H.-P. Kriegel, M. Schubert, and A. Zimek. Angle-based outlier detection in high-dimensional data. KDD'08. M. Markou and S. Singh. Novelty detection: A review - part 1: Statistical approaches. Signal Process., 83:2481-2497, 2003. M. Markou and S. Singh. Novelty detection: A review - part 2: Neural network based approaches. Signal Process., 83:2499-2521, 2003. C. C. Noble and D. J. Cook. Graph-based anomaly detection. KDD'03" }, { "page_index": 892, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_048.png", "page_index": 892, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:01+07:00" }, "raw_text": "References (3) S. Papadimitriou, H. Kitagawa, P. B. Gibbons, and C. Faloutsos. LOCI: Fast outlier detection using the local correlation integral. ICDE'03. A. Patcha and J.-M. Park. An overview of anomaly detection techniques: Existing solutions and latest technological trends. Comput. Netw., 51, 2007. X. Song, M. Wu, C. Jermaine, and S. Ranka. Conditional anomaly detection. IEEE Trans. on Knowl. and Data Eng., 19, 2007. Y. Tao, X. Xiao, and S. Zhou. Mining distance-based outliers from large databases in any metric space. KDD'06. N. Ye and Q. Chen. An anomaly detection technique based on a chi-square statistic for detecting intrusions into information systems. Quality and Reliability Engineering International, 17:105-112, 2001. B.-K. Yi, N. Sidiropoulos, T. Johnson, H. V. Jagadish, C. Faloutsos, and A. Biliris. Online data mining for co-evolving time sequences. 
ICDE'00." }, { "page_index": 893, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_049.png", "page_index": 893, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:03+07:00" }, "raw_text": "Unused Slides 49" }, { "page_index": 894, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_050.png", "page_index": 894, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:08+07:00" }, "raw_text": "Outlier Discovery: Statistical Approaches [Figure: normal curve with 95% of the area between the 95% confidence limits and 2.5% in each tail, plotted over the data values] Assume a model of the underlying distribution that generates the data set (e.g., normal distribution) Use discordancy tests, depending on: data distribution, distribution parameters (e.g., mean, variance), number of expected outliers Drawbacks: most tests are for a single attribute; in many cases, the data distribution may not be known 50" }, { "page_index": 895, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_051.png", "page_index": 895, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:12+07:00" }, "raw_text": "Outlier Discovery: Distance-Based Approach Introduced to counter the main limitations imposed by statistical methods We need multi-dimensional analysis without knowing the data distribution Distance-based outlier: A DB(p, D)-outlier is an object O in a dataset T such that at least a fraction p of the objects in T lies at a distance greater than D from O Algorithms for mining distance-based outliers [Knorr & Ng, VLDB'98] Index-based algorithm Nested-loop algorithm Cell-based algorithm 51" },
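The statistical and distance-based slides above (slides 50-51) lend themselves to a short sketch. The Python fragment below is a minimal illustration, not part of the original slides: a Gaussian discordancy test that flags values far from the fitted mean, and a naive nested-loop DB(p, D)-outlier detector following the Knorr & Ng definition quoted above. The z threshold of 1.96 and the toy data are assumptions for illustration.

```python
import numpy as np

def gaussian_outliers(x, z=1.96):
    """Flag values outside mean +/- z*std under a fitted normal model
    (roughly the 2.5% tails on each side when z = 1.96)."""
    mu, sigma = x.mean(), x.std()
    return np.abs(x - mu) > z * sigma

def db_outliers(X, p=0.95, D=2.0):
    """Naive nested-loop DB(p, D)-outlier detection: O is an outlier if
    at least a fraction p of the objects lie farther than D from O."""
    n = len(X)
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        # count objects farther than D from X[i] (excluding itself)
        far = sum(np.linalg.norm(X[i] - X[j]) > D
                  for j in range(n) if j != i)
        flags[i] = far >= p * (n - 1)
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(0)      # assumed toy data
    X = rng.normal(size=(200, 2))       # inliers
    X = np.vstack([X, [[8.0, 8.0]]])    # one obvious outlier
    print("Gaussian flags on 1st dim:", gaussian_outliers(X[:, 0]).sum())
    print("DB(0.95, 2.0) outliers:", np.where(db_outliers(X))[0])
```

The nested-loop version is O(n^2); the index-based and cell-based algorithms listed on the slide exist precisely to prune these pairwise distance computations.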
{ "page_index": 896, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_052.png", "page_index": 896, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:18+07:00" }, "raw_text": "Density-Based Local Outlier Detection M. M. Breunig, H.-P. Kriegel, R. Ng, J. Sander. LOF: Identifying Density-Based Local Outliers. SIGMOD 2000. Distance-based outlier detection is based on the global distance distribution; it encounters difficulties identifying outliers if data is not uniformly distributed Need the concept of local outlier Local outlier factor (LOF): Assume being an outlier is not crisp; each point has a LOF Ex.: C1 contains 400 loosely distributed points, C2 has 100 tightly condensed points, plus 2 outlier points o1, o2 A distance-based method cannot identify o2 as an outlier 52" },
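The C1/C2 example on slide 52 can be reproduced with scikit-learn's LocalOutlierFactor, assuming scikit-learn is available; only the cluster sizes follow the slide, while the coordinates, scales, and n_neighbors below are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(1)  # assumed toy geometry, sizes per the slide
# C1: 400 loosely distributed points; C2: 100 tightly condensed points
C1 = rng.normal(loc=[0.0, 0.0], scale=2.0, size=(400, 2))
C2 = rng.normal(loc=[10.0, 10.0], scale=0.2, size=(100, 2))
# o1 is far from both clusters; o2 is close to C2 but outside its tight core
o1, o2 = np.array([[20.0, 0.0]]), np.array([[11.5, 10.0]])
X = np.vstack([C1, C2, o1, o2])

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)             # -1 marks predicted outliers
scores = -lof.negative_outlier_factor_  # larger LOF = more outlying

# A global distance threshold tuned to loose C1 would miss o2, but o2's
# density relative to its tightly packed neighbors in C2 gives it a high LOF.
print("LOF of o1:", scores[-2].round(2), "LOF of o2:", scores[-1].round(2))
print("points flagged as outliers:", np.where(labels == -1)[0])
```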
{ "page_index": 897, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_053.png", "page_index": 897, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:22+07:00" }, "raw_text": "Outlier Discovery: Deviation-Based Approach Identifies outliers by examining the main characteristics of objects in a group Objects that "deviate" from this description are considered outliers Sequential exception technique: simulates the way in which humans can distinguish unusual objects from among a series of supposedly like objects OLAP data cube technique: uses data cubes to identify regions of anomalies in large multidimensional data 53" }, { "page_index": 898, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_054.png", "page_index": 898, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:30+07:00" }, "raw_text": "References (1) B. Abraham and G.E.P. Box. Bayesian analysis of some outlier problems in time series. Biometrika, 1979. M. Agyemang, K. Barker, and R. Alhajj. A comprehensive survey of numeric and symbolic outlier mining techniques. Intell. Data Anal., 2006. D. Agarwal. Detecting anomalies in cross-classified streams: a bayesian approach. Knowl. Inf. Syst., 2006. C. C. Aggarwal and P. S. Yu. Outlier detection for high dimensional data. SIGMOD'01. M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander. OPTICS-OF: Identifying local outliers. PKDD'99. M. M. Breunig, H.-P. Kriegel, R. Ng, and J. Sander. LOF: Identifying density-based local outliers. SIGMOD'00. V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey. ACM Comput. Surv., 2009. D. Dasgupta and N.S. Majumdar. Anomaly detection in multidimensional data using negative selection algorithm. Computational Intelligence, 2002. E. Eskin, A. Arnold, M. Prerau, L. Portnoy, and S. Stolfo. A geometric framework for unsupervised anomaly detection: Detecting intrusions in unlabeled data. In Proc. 2002 Int. Conf. of Data Mining for Security Applications, 2002. E. Eskin. Anomaly detection over noisy data using learned probability distributions. ICML'00. T. Fawcett and F. Provost. Adaptive fraud detection. Data Mining and Knowledge Discovery, 1997. R. Fujimaki, T. Yairi, and K. Machida. An approach to spacecraft anomaly detection problem using kernel feature space. KDD'05. F. E. Grubbs. Procedures for detecting outlying observations in samples. Technometrics, 1969. 54" }, { "page_index": 899, "chapter_num": 11, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_11/slide_055.png", "page_index": 899, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:38+07:00" }, "raw_text": "References (2) V. Hodge and J. Austin. A survey of outlier detection methodologies. Artif. Intell. Rev., 2004. D. M. Hawkins. Identification of Outliers. Chapman and Hall, 1980. P. S. Horn, L. Feng, Y. Li, and A. J. Pesce. Effect of outliers and nonhealthy individuals on reference interval estimation. Clin Chem, 2001. W. Jin, A. K. H. Tung, J. Han, and W. Wang. Ranking outliers using symmetric neighborhood relationship. PAKDD'06. E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large datasets. VLDB'98. M. Markou and S. Singh. Novelty detection: a review-part 1: statistical approaches. Signal Process., 83(12), 2003. M. Markou and S. Singh. Novelty detection: a review-part 2: neural network based approaches. Signal Process., 83(12), 2003. S. Papadimitriou, H. Kitagawa, P. B. Gibbons, and C. Faloutsos. LOCI: Fast outlier detection using the local correlation integral. ICDE'03. A. Patcha and J.-M. Park. An overview of anomaly detection techniques: Existing solutions and latest technological trends. Comput. Netw., 51(12):3448-3470, 2007. W. Stefansky. Rejecting outliers in factorial designs. Technometrics, 14(2):469-479, 1972. X. Song, M. Wu, C. Jermaine, and S. Ranka. Conditional anomaly detection. IEEE Trans. on Knowl. and Data Eng., 19(5):631-645, 2007. Y. Tao, X. Xiao, and S. Zhou. Mining distance-based outliers from large databases in any metric space. KDD'06. N. Ye and Q. Chen. An anomaly detection technique based on a chi-square statistic for detecting intrusions into information systems. Quality and Reliability Engineering International, 2001. 55" }, { "page_index": 900, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_001.png", "page_index": 900, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:41+07:00" }, "raw_text": "Data Mining: (3rd ed.) - Chapter 13 - Jiawei Han, Micheline Kamber, and Jian Pei University of Illinois at Urbana-Champaign & Simon Fraser University ©2011 Han, Kamber & Pei.
All rights reserved." }, { "page_index": 901, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_002.png", "page_index": 901, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:46+07:00" }, "raw_text": "" }, { "page_index": 902, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_003.png", "page_index": 902, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:49+07:00" }, "raw_text": "Chapter 13: Data Mining Trends and Research Frontiers Mining Complex Types of Data Other Methodologies of Data Mining Data Mining Applications Data Mining and Society Data Mining Trends Summary 3" }, { "page_index": 903, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_004.png", "page_index": 903, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:53+07:00" }, "raw_text": "Mining Complex Types of Data Mining Sequence Data Mining Time Series Mining Symbolic Sequences Mining Biological Sequences Mining Graphs and Networks Mining Other Kinds of Data 4" }, { "page_index": 904, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_005.png", "page_index": 904, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:28:59+07:00" }, "raw_text": "Mining Sequence Data Similarity Search in Time Series Data Subsequence match, dimensionality reduction, query-based similarity search, motif-based similarity search Regression and Trend Analysis in Time-Series Data long term + cyclic + seasonal variation + random movements Sequential Pattern Mining in Symbolic Sequences GSP, PrefixSpan, constraint-based sequential pattern mining Sequence Classification Feature-based vs. sequence-distance-based vs. model-based Alignment of Biological Sequences Pair-wise vs. multi-sequence alignment, substitution matrices, BLAST Hidden Markov Model for Biological Sequence Analysis Markov chain vs. hidden Markov models, forward vs. Viterbi vs. Baum-Welch algorithms 5" }, { "page_index": 905, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_006.png", "page_index": 905, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:04+07:00" }, "raw_text": "Mining Graphs and Networks Graph Pattern Mining Frequent subgraph patterns, closed graph patterns, gSpan vs.
CloseGraph Statistical Modeling of Networks Small world phenomenon, power law (long-tail) distribution, densification Clustering and Classification of Graphs and Homogeneous Networks Clustering: Fast Modularity vs. SCAN Classification: model vs. pattern-based mining Clustering, Ranking and Classification of Heterogeneous Networks RankClus, RankClass, and meta path-based, user-guided methodology Role Discovery and Link Prediction in Information Networks PathPredict Similarity Search and OLAP in Information Networks: PathSim, GraphCube Evolution of Social and Information Networks: EvoNetClus 6" }, { "page_index": 906, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_007.png", "page_index": 906, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:09+07:00" }, "raw_text": "Mining Other Kinds of Data Mining Spatial Data Spatial frequent/co-located patterns, spatial clustering and classification Mining Spatiotemporal and Moving Object Data Spatiotemporal data mining, trajectory mining, periodic patterns, swarm, ... Mining Cyber-Physical System Data Applications: healthcare, air-traffic control, flood simulation Mining Multimedia Data Social media data, geo-tagged spatial clustering, periodicity analysis, ... Mining Text Data Topic modeling, i-topic model, integration with geo- and networked data Mining Web Data Web content, web structure, and web usage mining Mining Data Streams Dynamics, one-pass, patterns, clustering, classification, outlier detection 7" }, { "page_index": 907, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_008.png", "page_index": 907, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:13+07:00" }, "raw_text": "Chapter 13: Data Mining Trends and Research Frontiers Mining Complex Types of Data Other Methodologies of Data Mining Data Mining Applications Data Mining and Society Data Mining Trends Summary 8" }, { "page_index": 908, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_009.png", "page_index": 908, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:17+07:00" }, "raw_text": "Other Methodologies of Data Mining Statistical Data Mining Views on Data Mining Foundations Visual and Audio Data Mining 9" }, { "page_index": 909, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_010.png", "page_index": 909, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:20+07:00" }, "raw_text": "Major Statistical Data Mining Methods Regression Generalized Linear Models Analysis of Variance Mixed-Effect Models Factor Analysis Discriminant Analysis Survival Analysis 10" }, { "page_index": 910, "chapter_num":
12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_011.png", "page_index": 910, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:26+07:00" }, "raw_text": "Statistical Data Mining (1) There are many well-established statistical techniques for data analysis, particularly for numeric data applied extensively to data from scientific experiments and data from economics and the social sciences Regression Manufacturing Wages and GNP Per Capita predict the value of a response 20 (dependent) variable from one or more predictor (independent) variables where the variables are numeric forms of regression: linear, multiple, weighted, polynomial, nonparametric 10 15 20 25 30 35 40 and robust 1998 GNP Per Capita in US Dollars Thousands 11" }, { "page_index": 911, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_012.png", "page_index": 911, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:32+07:00" }, "raw_text": "Scientific and Statistical Data Mining (2) 0.9 Generalized linear models 0.8 0.7 allow a categorical response variable (or 0.6 some transformation of it) to be related 0.5 0.4 to a set of predictor variables 0.3 similar to the modeling of a numeric 0.2 response variable using linear regressior 0.1 0 1.6 1.7 1.8 1.9 2 include logistic regression and Poisson DOSE regression Mixed-effect models For analyzing grouped data, i.e. 
{ "page_index": 912, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_013.png", "page_index": 912, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:39+07:00" }, "raw_text": "Scientific and Statistical Data Mining (3) Regression trees Binary trees used for classification and prediction Similar to decision trees: tests are performed at the internal nodes In a regression tree, the mean of the objective attribute is computed and used as the predicted value [Figure: regression tree with mean squared error MSE(R) as the objective function, the distribution of the target attribute at each node, and a predicted value at each leaf] Analysis of variance Analyze experimental data for two or more populations described by a numeric response variable and one or more categorical variables (factors) [Figure: boxplots of a numeric response by candidate: bradley, buchanan, forbes, gore, mccain, bush] 13" }, { "page_index": 913, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_014.png", "page_index": 913, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:46+07:00" }, "raw_text": "Statistical Data Mining (4) Factor analysis determine which variables are combined to generate a given factor e.g., for many psychiatric data, one can indirectly measure other quantities (such as test scores) that reflect the factor of interest [Figure: factor loadings for banking products: Checking, Savings, Personal Loan, Auto Loan, Certif. of Deposit, Investments, Home Imp. Loan, Mortgage] Discriminant analysis predict a categorical response variable, commonly used in social science Attempts to determine several discriminant functions (linear combinations of the independent variables) that discriminate among the groups defined by the response variable [Figure: discriminant analysis output, probability of non-renewal from 10% to 90%] www.spss.com/datamine/factor.htm 14" }, { "page_index": 914, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_015.png", "page_index": 914, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:55+07:00" }, "raw_text": "Statistical Data Mining (5) Time series: many methods such as autoregression, ARIMA (autoregressive integrated moving-average modeling), long-memory time-series modeling Quality control: displays group summary charts Survival analysis Predicts the probability that a patient undergoing a medical treatment would survive at least to time t (life span prediction) [Figure: survival curves for Drug A vs. Placebo with censored observations, survival time 0-400] 15" },
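The regression-tree bullet above (each leaf predicts the mean of the objective attribute) reduces to a simple split search. The sketch below, with assumed toy data, finds the single split of a numeric attribute that minimizes the weighted mean squared error of the two resulting leaves; a full tree builder would just apply it recursively.

```python
import numpy as np

def best_split(x, y):
    """Find the split x <= t minimizing the weighted MSE of the two
    leaves, where each leaf predicts the mean of y (as on the slide)."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_t, best_mse = None, np.inf
    for i in range(1, len(x)):
        left, right = y[:i], y[i:]
        mse = (left.var() * len(left) + right.var() * len(right)) / len(y)
        if mse < best_mse:
            best_t, best_mse = (x[i - 1] + x[i]) / 2, mse
    return best_t, best_mse

rng = np.random.default_rng(3)                       # assumed toy data
x = rng.uniform(0, 10, 200)
y = np.where(x < 4, 2.0, 7.0) + rng.normal(scale=0.5, size=200)
t, mse = best_split(x, y)
print(f"best split at x <= {t:.2f}, weighted MSE {mse:.3f}")  # t near 4
```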
"doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_016.png", "page_index": 915, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:29:58+07:00" }, "raw_text": "Methodologies of Data Mining Other l Statistical Data Mining Views on Data Mining Foundations Visual and Audio Data Mining 16" }, { "page_index": 916, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_017.png", "page_index": 916, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:02+07:00" }, "raw_text": "Views on Data Mining Foundations (1) Data reduction Basis of data mining: Reduce data representation Trades accuracy for speed in response Data compression Basis of data mining: Compress the given data by encoding in terms of bits, association rules, decision trees, clusters, etc Probability and statistical theory Basis of data mining: Discover joint probability distributions of random variables 17" }, { "page_index": 917, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_018.png", "page_index": 917, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:08+07:00" }, "raw_text": "Views on Data Mining Foundations (11) Microeconomic view A view of utility: Finding patterns that are interesting only to the extent in that they can be used in the decision-making process of some enterprise Pattern Discovery and Inductive databases Basis of data mining: Discover patterns occurring in the database such as associations, classification models, sequential patterns, etc Data mining is the problem of performing inductive logic on databases The task is to query the data and the theory (i.e., patterns) of the database Popular among many researchers in database systems 18" }, { "page_index": 918, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_019.png", "page_index": 918, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:10+07:00" }, "raw_text": "Methodologies of Data Mining Other l Statistical Data Mining Views on Data Mining Foundations Visual and Audio Data Mining 19" }, { "page_index": 919, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_020.png", "page_index": 919, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:15+07:00" }, "raw_text": "Visual Data Mining Visualization: Use of computer graphics to create visual images which aid in the understanding of complex, often massive representations of data Visual Data Mining: discovering implicit but useful knowledge from large data sets using visualization technigues Multimedia 
[Diagram: Visual Data Mining at the intersection of Multimedia Systems, Computer Graphics, Human Computer Interfaces, Pattern Recognition, and High Performance Computing] 20" }, { "page_index": 920, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_021.png", "page_index": 920, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:19+07:00" }, "raw_text": "Visualization Purpose of Visualization Gain insight into an information space by mapping data onto graphical primitives Provide a qualitative overview of large data sets Search for patterns, trends, structure, irregularities, and relationships among data Help find interesting regions and suitable parameters for further quantitative analysis Provide a visual proof of computer representations derived 21" }, { "page_index": 921, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_022.png", "page_index": 921, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:23+07:00" }, "raw_text": "Visual Data Mining & Data Visualization Integration of visualization and data mining: data visualization, data mining result visualization, data mining process visualization, interactive visual data mining Data visualization Data in a database or data warehouse can be viewed at different levels of abstraction, as different combinations of attributes, or as dimensions Data can be presented in various visual forms 22" }, { "page_index": 922, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_023.png", "page_index": 922, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:28+07:00" }, "raw_text": "Data Mining Result Visualization Presentation of the results or knowledge obtained from data mining in visual forms Examples Scatter plots and boxplots (obtained from descriptive data mining) Decision trees Association rules Clusters Outliers Generalized rules 23" }, { "page_index": 923, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_024.png", "page_index": 923, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:33+07:00" }, "raw_text": "Boxplots from Statsoft: Multiple Variable Combinations [screenshot] 24" },
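As a small, assumed illustration of the result-visualization examples listed above (scatter plots and boxplots), the following matplotlib sketch draws both for synthetic data; the data and styling are not from the slides.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)                 # assumed synthetic data
groups = [rng.normal(loc=m, scale=1.0, size=80) for m in (3, 5, 4)]
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(scale=0.4, size=200)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.boxplot(groups)                            # one box per group
ax1.set_xticklabels(["A", "B", "C"])
ax1.set_title("Boxplots by group")
ax2.scatter(x, y, s=8)                         # two numeric attributes
ax2.set_title("Scatter plot")
plt.tight_layout()
plt.show()
```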
{ "page_index": 924, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_025.png", "page_index": 924, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:44+07:00" }, "raw_text": "Visualization of Data Mining Results in SAS Enterprise Miner: Scatter Plots [screenshot of linked scatter plots over business variables] 25" }, { "page_index": 925, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_026.png", "page_index": 925, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:49+07:00" }, "raw_text": "Visualization of Association Rules in SGI/MineSet 3.0 [screenshot] 26" }, { "page_index": 926, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_027.png", "page_index": 926, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:30:57+07:00" }, "raw_text": "Visualization of a Decision Tree in SGI/MineSet 3.0 [screenshot: tree visualization; height: total sales; disk height: target sales; color: % of target, 0% to 500%] 27" }, { "page_index": 927, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_028.png", "page_index": 927, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:01+07:00" }, "raw_text": "Visualization of Cluster Grouping in IBM Intelligent Miner [screenshot] 28" }, { "page_index": 928, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_029.png", "page_index": 928, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:06+07:00" }, "raw_text": "Data Mining Process Visualization Presentation of the various processes of data mining in visual forms so that users can see: the data extraction process, where the data is extracted, how the data is cleaned, integrated, preprocessed and mined, the method selected for data mining, where the results are stored, and how they may be viewed 29" }, { "page_index": 929, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_030.png", "page_index": 929, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:14+07:00" }, "raw_text": "Visualization of Data Mining Processes by Clementine See your solution discovery process clearly Understand variations with visualized data [screenshot: process flow with Access, Prepare Data, Model, Evaluate, Deploy] 30" }, { "page_index": 930, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_031.png", "metadata": { "doc_type":
"slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_031.png", "page_index": 930, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:19+07:00" }, "raw_text": "Mining Interactive Visual Data Using visualization tools in the data mining process to help users make smart data mining decisions Example Display the data distribution in a set of attributes using colored sectors or columns (depending on whether the whole space is represented by either a circle or a set of columns) Use the display to which sector should first be selected for classification and where a good split point for this sector may be 31" }, { "page_index": 931, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_032.png", "page_index": 931, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:26+07:00" }, "raw_text": "Interactive Visual Mining j by Perception-Based Classification (PBC) cx tlle Lools Operations Ontions View Help tawolucrcan.Spit..E.8.38..? uc mzar [Eplt(-. 2.. 1.9] FOLIACL WACCW wokin progress wotkir progress wotkir progrcoe woikir progress \"ttribJte : =colds93 Left moucc buton inscrts linc.Shit+lcf molcc bJtton mcvco linc Riahtmousc button plitc attributc 32" }, { "page_index": 932, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_033.png", "page_index": 932, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:30+07:00" }, "raw_text": "Mining Audio Data Uses audio signals to indicate the patterns of data or the features of data mining results An interesting alternative to visual mining An inverse task of mining audio (such as music) databases which is to find patterns from audio data Visual data mining may disclose interesting patterns using graphical displays, but requires users to concentrate on watching patterns Instead, transform patterns into sound and music and listen to pitches, rhythms, tune, and melody in order to identify anything interesting or unusual 33" }, { "page_index": 933, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_034.png", "page_index": 933, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:34+07:00" }, "raw_text": "Chapter 13: Data Mining Trends and Research Frontiers Mining Complex Types of Data Other Methodologies of Data Mining Data Mining Applications Data Mining and Society Data Mining j Trends Summary 34" }, { "page_index": 934, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_035.png", "page_index": 934, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:39+07:00" }, 
"raw_text": "Data Mining j Applications Data mining: A young discipline with broad and diverse applications There still exists a nontrivial gap between generic data mining methods and effective and scalable data mining tools for domain-specific applications Some application domains (briefly discussed here) Data Mining for Financial data analysis Data Mining for Retail and Telecommunication Industries Data Mining in Science and Engineering Data Mining for Intrusion Detection and Prevention Data Mining and Recommender Systems 35" }, { "page_index": 935, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_036.png", "page_index": 935, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:44+07:00" }, "raw_text": "Mining for Financial Data Analysis (I) Data I Financial data collected in banks and financial institutions are often relatively complete, reliable, and of high quality Design and construction of data warehouses for multidimensional data analysis and data mining View the debt and revenue changes by month, by region, by sector, and by other factors Access statistical information such as max, min, total average, trend, etc. Loan payment prediction/consumer credit policy analysis feature selection and attribute relevance ranking Loan payment performance Consumer credit rating 36" }, { "page_index": 936, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_037.png", "page_index": 936, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:49+07:00" }, "raw_text": "Data Mining for Financial Data Analysis (II) Classification and clustering of customers for targeted marketing multidimensional segmentation by nearest-neighbor, classification, decision trees, etc. to identify customer customer c group Detection of money laundering and other financial crimes integration of from multiple DBs (e.g., bank transactions, federal/state crime history DBs) Tools: data visualization, linkage analysis, classification, clustering tools, outlier analysis, and sequential pattern analysis tools (find unusual access sequences) 37" }, { "page_index": 937, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_038.png", "page_index": 937, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:54+07:00" }, "raw_text": "Mining for Retail & Telcomm. Industries (I) Data l Retail industry: huge amounts of data on sales, customer shopping history, e-commerce, etc. Applications of retail data mining Identify customer buying behaviors Discover customer shopping patterns and trends Improve the quality of customer service Achieve better customer retention and satisfaction Enhance goods consumption ratios Design more effective goods transportation and distribution policies Telcomm. 
and many other industries: Share many similar goals and expectations of retail data mining 38" }, { "page_index": 938, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_039.png", "page_index": 938, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:31:59+07:00" }, "raw_text": "Data Mining Practice for Retail Industry Design and construction of data warehouses Multidimensional analysis of sales, customers, products, time, and region Analysis of the effectiveness of sales campaigns Customer retention: Analysis of customer loyalty Use customer loyalty card information to register sequences of purchases of particular customers Use sequential pattern mining to investigate changes in customer consumption or loyalty Suggest adjustments on the pricing and variety of goods Product recommendation and cross-referencing of items Fraud analysis and the identification of unusual patterns Use of visualization tools in data analysis 39" }, { "page_index": 939, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_040.png", "page_index": 939, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:04+07:00" }, "raw_text": "Data Mining in Science and Engineering Data warehouses and data preprocessing Resolving inconsistencies or incompatible data collected in diverse environments and different periods (e.g.,
eco-system studies) Mining complex data types Spatiotemporal, biological, diverse semantics and relationships Graph-based and network-based mining Links, relationships, data flow, etc. Visualization tools and domain-specific knowledge Other issues Data mining in social sciences and social studies: text and social media Data mining in computer science: monitoring systems, software bugs, network intrusion 40" }, { "page_index": 940, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_041.png", "page_index": 940, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:09+07:00" }, "raw_text": "Data Mining for Intrusion Detection and Prevention The majority of intrusion detection and prevention systems use Signature-based detection: use signatures, attack patterns that are preconfigured and predetermined by domain experts Anomaly-based detection: build profiles (models of normal behavior) and detect those that substantially deviate from the profiles What data mining can help New data mining algorithms for intrusion detection Association, correlation, and discriminative pattern analysis help select and build discriminative classifiers Analysis of stream data: outlier detection, clustering, model shifting Distributed data mining Visualization and querying tools 41" }, { "page_index": 941, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_042.png", "page_index": 941, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:16+07:00" }, "raw_text": "Data Mining and Recommender Systems Recommender systems: Personalization, making product recommendations that are likely to be of interest to a user Approaches: Content-based, collaborative, or their hybrid Content-based: Recommends items that are similar to items the user preferred or queried in the past Collaborative filtering: Consider a user's social environment, opinions of other customers who have similar tastes or preferences Data mining and recommender systems Users C x items S: extrapolate from known ratings to unknown ratings for user-item combinations Memory-based method often uses a k-nearest neighbor approach Model-based method uses a collection of ratings to learn a model (e.g., probabilistic models, clustering, Bayesian networks, etc.) Hybrid approaches integrate both to improve performance (e.g., using ensemble) 42" },
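A minimal sketch of the memory-based (k-nearest-neighbor) collaborative filtering idea described above, under assumed data: a tiny user-item rating matrix, cosine similarity between users over co-rated items, and one unknown rating predicted as a similarity-weighted average.

```python
import numpy as np

# Rows = users, columns = items; 0 means "not rated" (assumed toy data).
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(u, v):
    """Cosine similarity computed over co-rated items only."""
    mask = (u > 0) & (v > 0)
    if not mask.any():
        return 0.0
    u, v = u[mask], v[mask]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def predict(R, user, item, k=2):
    """Similarity-weighted average of the k most similar users who
    rated the item (memory-based collaborative filtering)."""
    sims = [(cosine(R[user], R[v]), v)
            for v in range(len(R)) if v != user and R[v, item] > 0]
    top = sorted(sims, reverse=True)[:k]
    num = sum(s * R[v, item] for s, v in top)
    den = sum(abs(s) for s, _ in top)
    return num / den if den else 0.0

print("predicted rating of user 0 on item 2:", round(predict(R, 0, 2), 2))
```

A model-based method would replace the neighbor lookup with a learned model of the ratings, and a hybrid would combine both predictions, as the slide notes.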
{ "page_index": 942, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_043.png", "page_index": 942, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:20+07:00" }, "raw_text": "Chapter 13: Data Mining Trends and Research Frontiers Mining Complex Types of Data Other Methodologies of Data Mining Data Mining Applications Data Mining and Society Data Mining Trends Summary 43" }, { "page_index": 943, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_044.png", "page_index": 943, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:26+07:00" }, "raw_text": "Ubiquitous and Invisible Data Mining Ubiquitous data mining Data mining is used everywhere, e.g., online shopping Ex. Customer relationship management (CRM) Invisible data mining Invisible: Data mining functions are built into daily life operations Ex. Google search: Users may be unaware that they are examining results returned by data mining Invisible data mining is highly desirable Invisible mining needs to consider efficiency and scalability, user interaction, incorporation of background knowledge and visualization techniques, finding interesting patterns, real-time operation, ... Further work: Integration of data mining into existing business and scientific technologies to provide domain-specific data mining tools 44" }, { "page_index": 944, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_045.png", "page_index": 944, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:32+07:00" }, "raw_text": "Privacy, Security and Social Impacts of Data Mining Many data mining applications do not touch personal data E.g., meteorology, astronomy, geography, geology, biology, and other scientific and engineering data Many DM studies are on developing scalable algorithms to find general or statistically significant patterns, not touching individuals The real privacy concern: unconstrained access to individual records, especially privacy-sensitive information Method 1: Removing sensitive IDs associated with the data Method 2: Data security-enhancing methods Multi-level security model: permit access only to the authorized level Encryption: e.g., blind signatures, biometric encryption, and anonymous databases (personal information is encrypted and stored at different locations) Method 3: Privacy-preserving data mining methods 45" }, { "page_index": 945, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_046.png", "page_index": 945, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:39+07:00" }, "raw_text":
"Privacy-Preserving Mining Data l Privacy-preserving (privacy-enhanced or privacy-sensitive) mining: Obtaining valid mining results without disclosing the underlying sensitive data values Often needs trade-off between information loss and privacy Privacy-preserving data mining methods: Randomization (e.g., perturbation): Add noise to the data in order to mask some attribute values of records K-anonymity and I-diversity: Alter individual records so that they cannot be uniquely identified k-anonymity: Any given record maps onto at least k other records I-diversity: enforcing intra-group diversity of sensitive values Distributed privacy preservation: Data partitioned and distributed either horizontally, vertically, or a combination of both Downgrading the effectiveness of data mining: The output of data mining may violate privacy Modify data or mining results, e.g., hiding some association rules or slightly distorting some classification models 46" }, { "page_index": 946, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_047.png", "page_index": 946, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:42+07:00" }, "raw_text": "Chapter 13: Data Mining Trends and Research Frontiers Mining Complex Types of Data Other Methodologies of Data Mining Data Mining Applications Data Mining and Society Data Mining j Trends Summary 47" }, { "page_index": 947, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_048.png", "page_index": 947, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:47+07:00" }, "raw_text": "Mining Trends of Data I Application exploration: Dealing with application-specific problems Scalable and interactive data mining methods Integration of data mining with Web search engines, database systems, data warehouse systems and cloud computing systems Mining social and information networks Mining spatiotemporal, moving objects and cyber-physical systems Mining multimedia, text and web data Mining biological and biomedical data Data mining with software engineering and system engineering Visual and audio data mining Distributed data mining and real-time data stream mining Privacy protection and information security in data mining 48" }, { "page_index": 948, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_049.png", "page_index": 948, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:50+07:00" }, "raw_text": "Chapter 13: Data Mining Trends and Research Frontiers Mining Complex Types of Data Other Methodologies of Data Mining Data Mining Applications Data Mining and Society Data Mining Trends Summary 49" }, { "page_index": 949, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": 
"/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_050.png", "page_index": 949, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:32:55+07:00" }, "raw_text": "Summary We present a high-level overview of mining complex data types Statistical data mining methods, such as regression, generalized linear models, analysis of variance, etc., are popularly adopted Researchers also try to build theoretical foundations for data mining Visual/audio data mining has been popular and effective Application-based mining integrates domain-specific knowledge with data analysis technigues and provide mission-specific solutions Ubiquitous data mining and invisible data mining are penetrating our data lives Privacy and data security are importance issues in data mining, and privacy-preserving data mining has been developed recently Our discussion on trends in data mining shows that data mining is a promising, young field, with great, strategic importance 50" }, { "page_index": 950, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_051.png", "page_index": 950, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:33:02+07:00" }, "raw_text": "Reading References and Further The books lists a lot of references for further reading. Here we only list a few books E. Alpaydin. Introduction to Machine Learning, 2nd ed., MIT Press, 2011 S. Chakrabarti. Mining the Web: Statistical Analysis of Hypertex and Semi-Structured Data. Morgan Kaufmann, 2002 R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification, 2ed., Wiley-Interscience, 2000 D. Easley and J. Kleinberg. Networks, Crowds, and Markets: Reasoning about a Highly Connected Wor/d. Cambridge University Press, 2010. U. Fayyad, G. Grinstein, and A. Wierse (eds.), Information Visualization in Data Mining and Knowledge Discovery, Morgan Kaufmann, 2001 J. Han, M. Kamber, J. Pei. Data Mining: Concepts and Techniques. Morgan Kaufmann, 3rd ed. 2011 T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference and Prediction, 2nd ed., Springer-Verlag, 2009 D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Technigues. MIT Press, 2009 B. Liu. Web Data Mining, Springer 2006. T. M. Mitchell. Machine Learning, McGraw Hill, 1997 M. Newman. Networks: An Introduction. Oxford University Press, 2010 P.-N. Tan, M. Steinbach and V. Kumar, Introduction to Data Mining, Wiley, 2005 I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Technigues with Java Implementations, Morgan Kaufmann, 2nd ed. 2005 51" }, { "page_index": 951, "chapter_num": 12, "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3029", "source_file": "/workspace/data/converted/CO3029_Data_Mining/Chapter_12/slide_052.png", "page_index": 951, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-10-31T16:33:08+07:00" }, "raw_text": "" } ] }