{ "course": "Machine_Learning", "course_id": "CO3117", "schema_version": "material.v1", "slides": [ { "page_index": 0, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_001.png", "page_index": 0, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:01+07:00" }, "raw_text": "BK TP.HCM Machine Learning Chapter 1 - Introductior Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vr Faculty of Computer Science and Engineering Hochiminh city University of Technology" }, { "page_index": 1, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_002.png", "page_index": 1, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:03+07:00" }, "raw_text": "Machine Learning BK What is Machine learning? ecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 1/23" }, { "page_index": 2, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_003.png", "page_index": 2, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:05+07:00" }, "raw_text": "Machine Learning BK What is Machine learning? 
Arthur Samuel (1959): \"Field of study that gives computers the ability to learn without being explicitly programmed\" Tom Mitchell (1997): \"A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E\". Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 1 / 23" }, { "page_index": 3, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_004.png", "page_index": 3, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:07+07:00" }, "raw_text": "Machine Learning BK Machine Learning: How to construct programs that automatically improve with experience Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 2/23" }, { "page_index": 4, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_005.png", "page_index": 4, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:09+07:00" }, "raw_text": "Machine Learning BK Machine Learning: How to construct programs that automatically improve with experience * The scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 2 / 23" }, { "page_index": 5, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_006.png", "page_index": 5, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:11+07:00" }, "raw_text": "Machine Learning BK Machine Learning: How to construct programs that automatically improve with experience * The scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. * A subset of artificial intelligence Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 2 / 23" }, { "page_index": 6, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_007.png", "page_index": 6, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:18+07:00" }, "raw_text": "Example BK [Table: training (Experience) examples 1-6 and test (Prediction) examples 7-9 with attributes Gray?, Mammal?, Large?, Vegetarian?, Wild? and target class Elephant; attribute values are + or -, with '?' for the targets to predict] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 3 / 23" }, { "page_index": 7, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_008.png", "page_index": 7, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:20+07:00" }, "raw_text": "Machine Learning BK What is learning? Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 4/23" }, { "page_index": 8, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_009.png", "page_index": 8, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:22+07:00" }, "raw_text": "Machine Learning BK Learning is an (endless) generalization or induction process Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 5/23" }, { "page_index": 9, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_010.png", "page_index": 9, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:24+07:00" }, "raw_text": "Types of Machine Learning BK Supervised learning: the learner (learning algorithm) is trained on labeled examples, i.e., input where the desired output is known. Unsupervised learning: the learner operates on unlabeled examples, i.e., input where the desired output is unknown. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 6/23" }, { "page_index": 10, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_011.png", "page_index": 10, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:26+07:00" }, "raw_text": "Types of Machine Learning BK Reinforcement learning: between supervised and unsupervised learning. It is told when an answer is wrong, but not how to correct it. Evolutionary learning: biological evolution can be seen as a learning process, to improve survival rates and chances of having offspring. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 7/23" }, { "page_index": 11, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_012.png", "page_index": 11, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:29+07:00" }, "raw_text": "Types of Machine Learning BK The most common type: supervised learning. Classification: to find the class of an instance given its selected features. Regression: to find a function whose curve passes as close as possible to all of the given data points. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 8/23" }, { "page_index": 12, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_013.png", "page_index": 12, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:30+07:00" }, "raw_text": "Phases of Machine Learning BK How many phases do we have in machine learning? Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 9/23" }, { "page_index": 13, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_014.png", "page_index": 13, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:32+07:00" }, "raw_text": "Phases of Machine Learning BK [Diagram: three phases - Training on Training Data, Testing on Testing Data, Applying on Real Data] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 10 / 23" }, { "page_index": 14, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_015.png", "page_index": 14, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:34+07:00" }, "raw_text": "Phases of Machine Learning BK K-fold cross validation: - Randomly partition the data into k equal-sized sub-samples - Use k - 1 sub-samples for training and 1 for testing - Run k rounds (folds) of validation and take the average. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 11 / 23" }, { "page_index": 15, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_016.png", "page_index": 15, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:36+07:00" }, "raw_text": "Phases of Machine Learning BK Statistical significance test: to reject the null-hypothesis that the two compared systems are equivalently efficient although their performance measures are different. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 12/23" }, { "page_index": 16, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_017.png", "page_index": 16, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:38+07:00" }, "raw_text": "Phases of Machine Learning BK [Figure: model loss vs. epoch (0 to 1000) with train and test loss curves; loss axis from 0.4 to 0.8] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 13 / 23" }, { "page_index": 17, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_018.png", "page_index": 17, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:40+07:00" }, "raw_text": "Phases of Machine Learning BK Overfitting: There is noise in the data. The number of training examples is too small to produce a representative sample of the target concept. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 14 / 23" }, { "page_index": 18, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_019.png", "page_index": 18, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:42+07:00" }, "raw_text": "Performance Measures BK [Figure: relevant elements vs. selected elements, partitioned into true positives, false positives, true negatives, and false negatives] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 15 / 23" }, { "page_index": 19, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_020.png", "page_index": 19, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:44+07:00" }, "raw_text": "Performance Measures BK Precision: P = number of correct system answers / number of system answers. 
Recall: R = number of correct system answers / number of correct problem answers. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 16 / 23" }, { "page_index": 20, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_021.png", "page_index": 20, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:46+07:00" }, "raw_text": "Performance Measures BK Precision = TP / (TP + FP). Recall = TP / (TP + FN). Accuracy = (TP + TN) / (TP + TN + FP + FN). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 17 / 23" }, { "page_index": 21, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_022.png", "page_index": 21, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:48+07:00" }, "raw_text": "Performance Measures BK F1 score: want to seek a balance between Precision and Recall. It is good when there is an uneven class distribution. F1 = 2 * (P * R) / (P + R). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 18 / 23" }, { "page_index": 22, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_023.png", "page_index": 22, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:51+07:00" }, "raw_text": "Inductive Bias BK Example Quality Price Buy 1 Good Low Yes 2 Bad High No 3 Good High ? 4 Bad Low ? 
Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 19 / 23" }, { "page_index": 23, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_024.png", "page_index": 23, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:53+07:00" }, "raw_text": "Inductive Bias BK A learner that makes no prior assumptions regarding the identity of the target concept cannot classify any unseen instances. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 20/23" }, { "page_index": 24, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_025.png", "page_index": 24, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:56+07:00" }, "raw_text": "Inductive Bias BK A learner that makes no prior assumptions regarding the identity of the target concept cannot classify any unseen instances. - A learner that makes no a priori assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 20/23" }, { "page_index": 25, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_026.png", "page_index": 25, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:46:58+07:00" }, "raw_text": "Inductive Bias BK A learner that makes no prior assumptions regarding the identity of the target concept cannot classify any unseen instances. - A learner that makes no a priori assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances. The inductive bias (learning bias): the set of assumptions that the learner uses to predict outputs given inputs that it has not encountered. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 20/23" }, { "page_index": 26, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_027.png", "page_index": 26, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:00+07:00" }, "raw_text": "Inductive Bias Common inductive bias in ML: Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence (Naive Bayes classifier). Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 21/23" }, { "page_index": 27, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_028.png", "page_index": 27, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:02+07:00" }, "raw_text": "Inductive Bias BK Common inductive bias in ML: - Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence (Naive Bayes classifier). - Minimum cross-validation error: when trying to choose among hypotheses, select the hypothesis with the lowest cross-validation error. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 21 / 23" }, { "page_index": 28, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_029.png", "page_index": 28, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:05+07:00" }, "raw_text": "Inductive Bias BK Common inductive bias in ML: - Maximum conditional independence: if the hypothesis can be cast in a Bayesian framework, try to maximize conditional independence (Naive Bayes classifier). - Minimum cross-validation error: when trying to choose among hypotheses, select the hypothesis with the lowest cross-validation error. - Maximum margin: when drawing a boundary between two classes, attempt to maximize the width of the boundary (SVM). The assumption is that distinct classes tend to be separated by wide boundaries. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 21 / 23" }, { "page_index": 29, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_030.png", "page_index": 29, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:07+07:00" }, "raw_text": "Inductive Bias Common inductive bias in ML: Minimum description length: when forming a hypothesis, attempt to minimize the length of the description of the hypothesis. The assumption is that simpler hypotheses are more likely to be true. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 22/23" }, { "page_index": 30, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_031.png", "page_index": 30, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:09+07:00" }, "raw_text": "Inductive Bias BK Common inductive bias in ML: - Minimum description length: when forming a hypothesis, attempt to minimize the length of the description of the hypothesis. The assumption is that simpler hypotheses are more likely to be true. - Minimum features: unless there is good evidence that a feature is useful, it should be deleted. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 22 / 23" }, { "page_index": 31, "chapter_num": 1, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_1/slide_032.png", "page_index": 31, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:12+07:00" }, "raw_text": "Inductive Bias BK Common inductive bias in ML: - Minimum description length: when forming a hypothesis, attempt to minimize the length of the description of the hypothesis. The assumption is that simpler hypotheses are more likely to be true. - Minimum features: unless there is good evidence that a feature is useful, it should be deleted. - Nearest neighbors: assume that most of the cases in a small neighborhood in feature space belong to the same class. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 22 / 23" }, { "page_index": 32, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_001.png", "page_index": 32, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:14+07:00" }, "raw_text": "BK TP.HCM Machine Learning Decision Tree Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vn Faculty of Computer Science and Engineering Ho Chi Minh City University of Technology" }, { "page_index": 33, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_002.png", "page_index": 33, "language": "en", "ocr_engine":
"PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:15+07:00" }, "raw_text": "Contents BK 1. Decision-Tree Learning 2. Decision-Trees 1" }, { "page_index": 34, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_003.png", "page_index": 34, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:15+07:00" }, "raw_text": "Decision-Tree Learning" }, { "page_index": 35, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_004.png", "page_index": 35, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:18+07:00" }, "raw_text": "Decision-Tree Learning BK Introduction Decision Trees TDIDT: Top-Down Induction of Decision Trees ID3 Attribute selection - Entropy, Information, Information Gain Gain Ratio C4.5 Numeric Values Missing Values Pruning Regression and Model Trees 2" }, { "page_index": 36, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_005.png", "page_index": 36, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:19+07:00" }, "raw_text": "Decision-Trees" }, { "page_index": 37, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file":
"/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_006.png", "page_index": 37, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:21+07:00" }, "raw_text": "Decision-Trees BK A decision tree consists of Nodes: test for the value of a certain attribute * Edges: correspond to the outcome of a test and connect to the next node or leaf : Leaves: terminal nodes that predict the outcome 3" }, { "page_index": 38, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_007.png", "page_index": 38, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:24+07:00" }, "raw_text": "Decision-Trees BK In Decision Tree The training examples Learning, a new example are used for choosing is classified by appropriate tests in the submitting it to a series decision tree. Typically. of tests that determine the a tree is built from top to class label of the bottom, where tests that example.These tests are Training maximize the information organized in a gain about hierarchical structure the classification are called a decision tree. selected first. New Example Classification" }, { "page_index": 39, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_008.png", "page_index": 39, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:32+07:00" }, "raw_text": "Decision-Trees BK Day Temperature Outlook Humidity Windy Play Golf? 
07-05 hot sunny high false no 07-06 hot sunny high true no 07-07 hot overcast high false yes 07-09 cool rain normal false yes 07-10 cool overcast normal true yes 07-12 mild sunny high false no 07-14 cool sunny normal false yes 07-15 mild rain normal false yes 07-20 mild sunny normal true yes 07-21 mild overcast high true yes 07-22 hot overcast normal false yes 07-23 mild rain high true no 07-26 cool rain normal true no 07-30 mild rain high false yes today cool sunny normal false ? tomorrow mild sunny normal false ? 5" }, { "page_index": 40, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_009.png", "page_index": 40, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:34+07:00" }, "raw_text": "Decision-Trees BK [Figure: decision tree - Outlook: sunny -> Humidity (normal -> yes, high -> no), overcast -> yes, rain -> Windy (true -> no, false -> yes); the example tomorrow (mild, sunny, normal, false) is classified as yes] 6" }, { "page_index": 41, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_010.png", "page_index": 41, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:36+07:00" }, "raw_text": "Decision-Trees: Divide-And-Conquer Algorithms BK Family of decision tree learning algorithms: TDIDT: Top-Down Induction of Decision Trees Learn trees in a Top-Down fashion: Divide the problem into subproblems Solve each subproblem" }, { "page_index": 42, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file":
"/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_011.png", "page_index": 42, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:38+07:00" }, "raw_text": "Decision-Trees: ID3 Algorithm BK Function lD3 Input: Example set S : Output: Decision Tree DT If all examples in S belong to the same class c, return a new leaf and label it with c. Else . Select an attribute A according to some heuristic function Generate a new node DT with A as test For each Value v; of A, let S; = all examples in S with A = v;. Use ID3 to construct a decision tree DT; for example set S;. Generate an edge that connects DT and DT 8" }, { "page_index": 43, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_012.png", "page_index": 43, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:43+07:00" }, "raw_text": "Decision-Trees: A Different Decision Tree BK Temperature hot mild cool Outlook Outlook Outlook sunny ran overcast sunny ran overcast sunny rain overcast no yes Humidity Humidity yes yes Humidity yes high normal high normal high normal no yes Windy yes Windy true false true false no yes no yes 9" }, { "page_index": 44, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_013.png", "page_index": 44, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:45+07:00" }, "raw_text": "Decision-Trees: What is a good Attribute? 
BK A good attribute measure prefers attributes that split the data so that each successor node is as pure as possible. In other words, we want a measure that prefers attributes that have a high degree of \"order\" Maximum order: All examples are of the same class . Minimum order: All classes are equally likely > Entropy is a measure for (un-)orderedness 10" }, { "page_index": 45, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_014.png", "page_index": 45, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:46+07:00" }, "raw_text": "Decision-Trees: Entropy (for two classes) BK S is a set of examples : p+ is the proportion of examples in class + : p- = 1 - p+ is the proportion of examples in class - Entropy: E(S) = -p+ log2 p+ - p- log2 p- (1) 11" }, { "page_index": 46, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_015.png", "page_index": 46, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:48+07:00" }, "raw_text": "Decision-Trees: Entropy (for two classes) BK maximal value at equal class distribution (p+ = 0.5), minimal value if only one class left in S 12" }, { "page_index": 47, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_016.png", "page_index": 47, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp":
"2025-11-01T10:47:51+07:00" }, "raw_text": "Decision-Trees: Entropy (for more classes) BK Entropy can be easily generalized for m > 2 classes: E(S) = -p_1 log2 p_1 - p_2 log2 p_2 - ... - p_m log2 p_m = -sum_{i=1..m} p_i log2 p_i (2) p_i is the proportion of examples in S that belong to the i-th class 13" }, { "page_index": 48, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_017.png", "page_index": 48, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:53+07:00" }, "raw_text": "Decision-Trees: Average Entropy / Information BK Problem: Entropy only computes the quality of a single (sub-)set of examples Solution: Compute the average over all sets resulting from the split, weighted by their size: I(S, A) = sum_i |S_i|/|S| E(S_i) (3) 14" }, { "page_index": 49, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_018.png", "page_index": 49, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:55+07:00" }, "raw_text": "Decision-Trees: Information Gain BK When an attribute A splits the set S into subsets S_i, we then compute the average entropy and compare the sum to the entropy of the original set S.
Information Gain for attribute A: Gain(S, A) = E(S) - I(S, A) = E(S) - sum_i |S_i|/|S| E(S_i) (4) The attribute that maximizes the difference is selected 15" }, { "page_index": 50, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_019.png", "page_index": 50, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:57+07:00" }, "raw_text": "Decision-Trees: Properties of Entropy BK Entropy is the only function that satisfies all of the following three properties: When a node is pure, the measure should be zero : When impurity is maximal (i.e. all classes equally likely), the measure should be maximal. . The measure should obey the multistage property 16" }, { "page_index": 51, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_020.png", "page_index": 51, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:47:59+07:00" }, "raw_text": "Decision-Trees: Highly-branching attributes BK Problematic: attributes with a large number of values Subsets are more likely to be pure if there is a large number of different attribute values Information gain is biased towards choosing attributes with a large number of values This may cause several problems: : Overfitting: selection of an attribute that is non-optimal for prediction Fragmentation: data are fragmented into (too) many small sets 17" }, { "page_index": 52, "chapter_num": 2, "source_file":
"/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_021.png", "page_index": 52, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:01+07:00" }, "raw_text": "Decision-Trees: Intrinsic Information of an Attribute BK Intrinsic information of a split: IntI(S, A) = -sum_i |S_i|/|S| log2 (|S_i|/|S|) (5) 18" }, { "page_index": 53, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_022.png", "page_index": 53, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:03+07:00" }, "raw_text": "Decision-Trees: Gain Ratio BK Modification of the information gain that reduces its bias towards multi-valued attributes. Takes number and size of branches into account when choosing an attribute. Corrects the information gain by taking the intrinsic information of a split into account. Definition of Gain Ratio: GR(S, A) = Gain(S, A) / IntI(S, A) (6) 19" }, { "page_index": 54, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_023.png", "page_index": 54, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:05+07:00" }, "raw_text": "Decision-Trees: Gini Index BK There are many alternative measures to Information Gain. The most popular alternative is the Gini index. Impurity measure (instead of entropy): Gini(S) = 1 - sum_i p_i^2 (7) Average Gini index (instead of average entropy / information): Gini(S, A) = sum_i |S_i|/|S| Gini(S_i) (8) Gini Gain could be defined analogously to information gain, but typically the avg. Gini index is minimized instead of maximizing Gini gain.
20" }, { "page_index": 55, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_024.png", "page_index": 55, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:08+07:00" }, "raw_text": "Decision-Trees: Comparison among Splitting Criteria BK [Plot: entropy, Gini index, and misclassification error as functions of the class proportion p] 21" }, { "page_index": 56, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_025.png", "page_index": 56, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:11+07:00" }, "raw_text": "Decision-Trees: Industrial-strength algorithms For an algorithm to be useful in a wide range of real-world applications it must: Permit numeric attributes Allow missing values Be robust in the presence of noise : Be able to approximate arbitrary concept descriptions (at least in principle) 22" }, { "page_index": 57, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_026.png", "page_index": 57, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:12+07:00" }, "raw_text": "Decision-Trees: Numeric attributes BK Standard method: binary splits Unlike nominal attributes, a numeric attribute has many possible split points, which is computationally more demanding Solution is a straightforward extension:
Evaluate info gain (or other measure) for every possible split point of the attribute Choose \"best\" split point Info gain for the best split point is the info gain for the attribute 23" }, { "page_index": 58, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_027.png", "page_index": 58, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:21+07:00" }, "raw_text": "Decision-Trees: Efficient Computation BK Efficient computation needs only one scan through the values! Linearly scan the sorted values, each time updating the count matrix and computing the evaluation measure . Choose the split position that has the best value [Table: Cheat labels No No No Yes Yes Yes No No No No over sorted Taxable Income values 60 70 75 85 90 95 100 120 125 220; candidate split positions 55 65 72 80 87 92 97 110 122 172 230 with Gini values 0.420 0.400 0.375 0.343 0.417 0.400 0.300 0.343 0.375 0.400 0.420; best split at 97 with Gini 0.300] 24" }, { "page_index": 59, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_028.png", "page_index": 59, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:24+07:00" }, "raw_text": "Decision-Trees: Binary vs. Multiway Splits : Splitting (multi-way) on a nominal attribute exhausts all information in that attribute Not so for binary splits on numeric attributes! A numeric attribute may be tested several times along a path in the tree.
Disadvantage: tree is hard to read - Remedy: pre-discretize numeric attributes, or use multi-way splits instead of binary ones 25" }, { "page_index": 60, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_029.png", "page_index": 60, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:26+07:00" }, "raw_text": "Decision-Trees: Missing values BK If an attribute with a missing value needs to be tested: : split the instance into fractional instances (pieces), one piece for each outgoing branch of the node a piece going down a branch receives a weight proportional to the popularity of the branch weights sum to 1 Info gain or gain ratio work with fractional instances, use sums of weights instead of counts. During classification, split the instance in the same way. Merge the probability distributions using the weights of the fractional instances 26" }, { "page_index": 61, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_030.png", "page_index": 61, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:27+07:00" }, "raw_text": "Decision-Trees: Overfitting and Pruning BK The smaller the complexity of a concept, the less danger that it overfits the data.
Thus, learning algorithms try to keep the learned concepts simple 27" }, { "page_index": 62, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_031.png", "page_index": 62, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:31+07:00" }, "raw_text": "Decision-Trees: Prepruning BK Based on a statistical significance test. Stop growing the tree when there is no statistically significant association between any attribute and the class at a particular node. Most popular test: chi-squared test. Only statistically significant attributes are allowed to be selected by the information gain procedure. Pre-pruning may stop the growth process prematurely: early stopping 28" }, { "page_index": 63, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_032.png", "page_index": 63, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:33+07:00" }, "raw_text": "Decision-Trees: Post-Pruning BK Learn a complete and consistent decision tree that classifies all examples in the training set correctly . As long as the performance increases . 
Try simplification operators on the tree Evaluate the resulting trees Make the replacement that results in the best estimated performance, then return the resulting decision tree 29" }, { "page_index": 64, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_033.png", "page_index": 64, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:35+07:00" }, "raw_text": "Decision-Trees: Post-Pruning BK Two subtree simplification operators . Subtree replacement . Subtree raising 30" }, { "page_index": 65, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_034.png", "page_index": 65, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:39+07:00" }, "raw_text": "Decision-Trees: Subtree replacement BK Bottom-up: consider replacing a tree only after considering all its subtrees [Labour negotiations example tree over attributes wage increase 1st year, statutory holidays, working hours per week, health plan contribution, with good/bad leaves] 31" }, { "page_index": 66, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_035.png", "page_index": 66, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp":
"2025-11-01T10:48:41+07:00" }, "raw_text": "Decision-Trees: Subtree raising BK Delete node B Redistribute instances of leaves 4 and 5 into C [tree diagram] 32" }, { "page_index": 67, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_036.png", "page_index": 67, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:42+07:00" }, "raw_text": "Decision-Trees: Estimating Error Rates BK Prune only if it does not increase the estimated error Reduced Error Pruning . Use a hold-out set for pruning . Essentially the same as in rule learning 33" }, { "page_index": 68, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_037.png", "page_index": 68, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:44+07:00" }, "raw_text": "Decision-Trees: Reduced Error Pruning BK Split training data into a growing and a pruning set Learn a complete and consistent decision tree that classifies all examples in the growing set correctly As long as the error on the pruning set does not increase, try to replace each node by a leaf, evaluate the resulting (sub-)tree on the pruning set, then make the replacement that results in the maximum error reduction.
Return the resulting decision tree 34" }, { "page_index": 69, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_038.png", "page_index": 69, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:46+07:00" }, "raw_text": "Decision-Trees: Decision Lists and Decision Graphs BK Decision Lists An ordered list of rules . The first rule that fires makes the prediction : can be learned with a covering approach Decision Graphs Similar to decision trees, but nodes may have multiple predecessors 35" }, { "page_index": 70, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_039.png", "page_index": 70, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:49+07:00" }, "raw_text": "Decision-Trees: Rules vs. Trees BK Each decision tree can be converted into a rule set. A decision tree can be viewed as a set of non-overlapping rules and is typically learned via divide-and-conquer algorithms (recursive partitioning) Transformation of rule sets / decision lists into trees is less trivial . 
Many concepts have a shorter description as a rule set Low complexity decision lists are more expressive than low complexity decision trees Exceptions: if one or more attributes are relevant for the classification of all examples 36" }, { "page_index": 71, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_040.png", "page_index": 71, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:51+07:00" }, "raw_text": "Decision-Trees: Regression Problems BK Regression Task: the target variable is numerical instead of discrete Two principal approaches : Discretize the numerical target variable : Adapt the classification algorithm to regression data 37" }, { "page_index": 72, "chapter_num": 2, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_2/slide_041.png", "page_index": 72, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:53+07:00" }, "raw_text": "Decision-Trees: Regression Trees BK Differences to Decision Trees (Classification Trees): Leaf Nodes: Predict the average value of all instances in this leaf Splitting criterion: Minimize the variance of the values in each subset Termination criteria: Lower bound on standard deviation in a node and lower bound on number of examples in a node Pruning criterion: Numeric error measures, e.g.
Mean-Squared Error 38" }, { "page_index": 73, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_001.png", "page_index": 73, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:55+07:00" }, "raw_text": "BK TP.HCM Machine Learning Bayesian Learning Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vn Faculty of Computer Science and Engineering Ho Chi Minh City University of Technology" }, { "page_index": 74, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_002.png", "page_index": 74, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:56+07:00" }, "raw_text": "Contents BK 1. Linear Prediction 2. Bayesian Learning Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 1 / 30" }, { "page_index": 75, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_003.png", "page_index": 75, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:48:57+07:00" }, "raw_text": "Linear Prediction" }, { "page_index": 76, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_004.png", "page_index": 76, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:00+07:00" }, "raw_text": "Linear Prediction BK Linear supervised learning . Many real processes can be approximated with linear models Linear regression often appears as a module of larger systems . Linear problems can be solved analytically Linear prediction provides an introduction to many of the core concepts in machine learning Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 2 / 30" }, { "page_index": 77, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_005.png", "page_index": 77, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:02+07:00" }, "raw_text": "Linear Prediction BK Energy demand prediction [Table: Wind speed | People inside building | Energy requirement: 100, 2, 5; 50, 42, 25; 45, 31, 22; 60, 35, 18] Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 3 / 30" }, { "page_index": 78, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_006.png", "page_index": 78, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:05+07:00" }, "raw_text": "Linear Prediction BK Teen Birth Rate and Poverty Level Data [Fitted line plot: Brth15to17 = 4.267 + 1.373 PovPct; S = 5.55057, R-Sq = 53.3%, R-Sq(adj) = 52.4%; x-axis: PovPct] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 4 / 30" }, { "page_index": 79, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_007.png", "page_index": 79, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:09+07:00" }, "raw_text": "Linear Prediction BK Lung Function in 6 to 10 Year Old Children [scatter plot; x-axis: Age] Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 5 / 30" }, { "page_index": 80, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_008.png", "page_index": 80, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:14+07:00" }, "raw_text": "Linear Prediction BK Lung Function in 6 to 10 Year Old Children [scatter plot; x-axis: Age] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 6 / 30" }, { "page_index": 81, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_009.png", "page_index": 81, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:16+07:00" }, "raw_text": "Linear Prediction BK In general the linear model is expressed as follows: y_i = sum_{j=1..d} x_ij theta_j In matrix form: y = X theta Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 7 / 30" }, { "page_index": 82, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_010.png", "page_index": 82, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:19+07:00" }, "raw_text": "Linear Prediction BK We can use an optimization approach: J(theta) = (y - y_hat)^T (y - y_hat) Least squares estimates : Probabilistic approach Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 8 / 30" }, { "page_index": 83, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_011.png", "page_index": 83, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:20+07:00" }, "raw_text": "Bayesian Learning" }, { "page_index": 84, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_012.png", "page_index": 84, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:22+07:00" }, "raw_text": "Bayesian Learning BK It involves direct manipulation of probabilities in order to find correct hypotheses The quantities of interest are governed by probability distributions. : Optimal decisions can be made by reasoning about those probabilities -ecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 9 / 30" }, { "page_index": 85, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_013.png", "page_index": 85, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:24+07:00" }, "raw_text": "Bayesian Learning Bayesian learning algorithms are among the most practical approaches to certain types of learning problems Provide a useful perspective for understanding many learning algorithms that do not explicitly manipulate probabilities Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 10 / 30" }, { "page_index": 86, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_014.png", "page_index": 86, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:26+07:00" }, "raw_text": "Features of Bayesian Learning BK : Each training example can incrementally decrease or increase the estimated probability that a hypothesis is correct. Prior knowledge can be combined with observed data to determine the final probability of a hypothesis Hypotheses with probabilities can be accommodated . New instances can be classified by combining multiple hypotheses weighted by their probabilities Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 11 / 30" }, { "page_index": 87, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_015.png", "page_index": 87, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:28+07:00" }, "raw_text": "Bayes Theorem BK P(h|D) = P(D|h)P(h) / P(D) (1) P(h): prior probability of hypothesis h P(D): prior probability of training data D P(h|D): probability that h holds given D P(D|h): probability that D is observed given h Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 12 / 30" }, { "page_index": 88, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_016.png", "page_index": 88, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:31+07:00" }, "raw_text": "Bayes Theorem BK . Maximum A-posteriori hypothesis (MAP): h_MAP = argmax_{h in H} P(h|D) = argmax_{h in H} P(D|h)P(h) (2) when P(h) is not a uniform distribution over H: P(h|D) = P(D|h)P(h) / P(D) (3) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 13 / 30" }, { "page_index": 89, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_017.png", "page_index": 89, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:33+07:00" }, "raw_text": "Bayes Theorem BK .
Maximum Likelihood hypothesis (ML): hML = arg max_{h in H} P(h|D) = arg max_{h in H} P(D|h) (4) If P(h) is a uniform distribution over H: P(h|D) = P(D|h)P(h) / P(D) (5) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 14/30" }, { "page_index": 90, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_018.png", "page_index": 90, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:35+07:00" }, "raw_text": "Bayes Theorem BK . 0.008 of the population have cancer Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 15/30" }, { "page_index": 91, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_019.png", "page_index": 91, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:37+07:00" }, "raw_text": "Bayes Theorem BK . 0.008 of the population have cancer : Only 98% of patients are correctly classified as positive Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 15/30" }, { "page_index": 92, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_020.png", "page_index": 92, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:39+07:00" }, "raw_text": "Bayes Theorem BK : 0.008 of the population have cancer : Only 98% of patients are correctly classified as positive : Only 97% of non-patients are correctly classified as negative Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 15/30" }, { "page_index": 93, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_021.png", "page_index": 93, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:41+07:00" }, "raw_text": "Bayes Theorem BK 0.008 of the population have cancer Only 98% of patients are correctly classified as positive Only 97% of non-patients are correctly classified as negative . Would a person with a positive result have cancer or not? P(cancer|+) vs. P(-cancer|+) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 15/30" }, { "page_index": 94, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_022.png", "page_index": 94, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:43+07:00" }, "raw_text": "Bayes Theorem BK .
Maximum A-posteriori hypothesis (MAP): hMAP = arg max_{h in {cancer, -cancer}} P(h|+) = arg max_{h in {cancer, -cancer}} P(+|h)P(h) (6) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 16/30" }, { "page_index": 95, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_023.png", "page_index": 95, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:44+07:00" }, "raw_text": "Bayes Theorem BK : P(cancer) = .008 -> P(-cancer) = .992 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 17/30" }, { "page_index": 96, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_024.png", "page_index": 96, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:46+07:00" }, "raw_text": "Bayes Theorem BK P(cancer) = .008 -> P(-cancer) = .992 : P(+|cancer) = .98 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 17/30" }, { "page_index": 97, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_025.png", "page_index": 97, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:47+07:00" }, "raw_text": "Bayes Theorem BK P(cancer) = .008 -> P(-cancer) = .992 P(+|cancer) = .98 : P(-|-cancer) = .97 -> P(+|-cancer) = .03 Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 17/30" }, { "page_index": 98, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_026.png", "page_index": 98, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:50+07:00" }, "raw_text": "Bayes Theorem BK P(cancer) = .008 -> P(-cancer) = .992 P(+|cancer) = .98 P(-|-cancer) = .97 -> P(+|-cancer) = .03 P(+|cancer)P(cancer) = .0078 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 17/30" }, { "page_index": 99, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_027.png", "page_index": 99, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:52+07:00" }, "raw_text": "Bayes Theorem BK P(cancer) = .008 -> P(-cancer) = .992 P(+|cancer) = .98 P(-|-cancer) = .97 -> P(+|-cancer) = .03 P(+|cancer)P(cancer) = .0078 : P(+|-cancer)P(-cancer) = .0298 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 17/30" }, { "page_index": 100, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_028.png", "page_index": 100, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:54+07:00" }, "raw_text": "Bayes Theorem BK .
Maximum A-posteriori hypothesis (MAP): hMAP = arg max_{h in {cancer, -cancer}} P(h|+) = arg max_{h in {cancer, -cancer}} P(+|h)P(h) = -cancer (7) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 18/30" }, { "page_index": 101, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_029.png", "page_index": 101, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:56+07:00" }, "raw_text": "Bayes Optimal Classifier . What is the most probable hypothesis given the training data? - What is the most probable classification of a new instance given the training data? Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 19/30" }, { "page_index": 102, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_030.png", "page_index": 102, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:49:58+07:00" }, "raw_text": "Bayes Optimal Classifier BK Hypothesis space = {h1, h2, h3} . Posterior probabilities = {.4, .3, .3} (h1 is hMAP) . New instance x is classified positive by h1 and negative by h2 and h3 What is the most probable classification of x? Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 20/30" }, { "page_index": 103, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_031.png", "page_index": 103, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:00+07:00" }, "raw_text": "Bayes Optimal Classifier BK The most probable classification of a new instance is obtained by combining the predictions of all hypotheses weighted by their posterior probabilities: arg max_{c in C} P(c|D) = arg max_{c in C} sum_{h in H} P(c|h)P(h|D) (8) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 21/30" }, { "page_index": 104, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_032.png", "page_index": 104, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:06+07:00" }, "raw_text": "Naive Bayes Classifier BK Example Sky AirTemp Humidity Wind Water Forecast EnjoySport 1 Sunny Warm Normal Strong Warm Same Yes 2 Sunny Warm High Strong Warm Same Yes 3 Rainy Cold High Strong Warm Change No 4 Sunny Warm High Strong Cool Change Yes 5 Cloudy Warm High Weak Cool Same Yes 6 Sunny Cold High Weak Cool Same No 7 Sunny Warm Normal Strong Warm Same 8 Sunny Warm Low Strong Cool Same Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 22/30" }, { "page_index": 105, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_033.png", "page_index": 105, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:09+07:00" }, "raw_text": "Naive Bayes Classifier . Each instance x is described by a conjunction of attribute values < a1, a2, ..., an > * The target function f(x) can take on any value from a finite set C The task is to assign the most probable target value to a new instance Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 23/30" }, { "page_index": 106, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_034.png", "page_index": 106, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:11+07:00" }, "raw_text": "Naive Bayes Classifier BK cMAP = arg max_{c in C} P(c|a1, a2, ..., an) = arg max_{c in C} P(a1, a2, ..., an|c)P(c) (9) Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 24/30" }, { "page_index": 107, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_035.png", "page_index": 107, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:13+07:00" }, "raw_text": "Naive Bayes Classifier BK cMAP = arg max_{c in C} P(c|a1, a2, ..., an) = arg max_{c in C} P(a1, a2, ..., an|c)P(c) cNB = arg max_{c in C} P(c) prod_{i=1..n} P(ai|c) (10) assuming that a1, a2, ..., an are independent given c Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 25/30" }, { "page_index": 108, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_036.png", "page_index": 108, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:20+07:00" }, "raw_text": "Naive Bayes Classifier BK Example Sky AirTemp Humidity Wind Water Forecast EnjoySport 1 Sunny Warm Normal Strong Warm Same Yes 2 Sunny Warm High Strong Warm Same Yes 3 Rainy Cold High Strong Warm Change No 4 Sunny Warm High Strong Cool Change Yes 5 Cloudy Warm High Weak Cool Same Yes 6 Sunny Cold High Weak Cool Same No 7 Sunny Warm Normal Strong Warm Same 8 Sunny Warm Low Strong Cool Same Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 26/30" }, { "page_index": 109, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_037.png", "page_index": 109, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:22+07:00" }, "raw_text": "Naive Bayes Classifier BK Estimating probabilities: * Probability: the fraction of times the event is observed to occur over the total number of opportunities nc/n . What if the fraction is too small, or even zero? Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 27/30" }, { "page_index": 110, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_038.png", "page_index": 110, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:24+07:00" }, "raw_text": "Naive Bayes Classifier BK Estimating probabilities: (nc + mp) / (n + m) (11) n: total number of training examples of a particular class nc: number of training examples having a particular attribute value in that class m: equivalent sample size p: prior estimate of the probability (equals 1/k where k is the number of possible values of the attribute) Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 28/30" }, { "page_index": 111, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_039.png", "page_index": 111, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:25+07:00" }, "raw_text": "Naive Bayes Classifier BK Learning to classify text cNB = arg max_{c in C} prod_{i=1..n} P(ai = wk|c) P(c) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 29/30" }, { "page_index": 112, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_a/slide_040.png", "page_index": 112, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:28+07:00" }, "raw_text": "Naive Bayes Classifier Learning to classify text: cNB = arg max_{c in C} prod_{i=1..n} P(ai = wk|c) P(c) = arg max_{c in C} prod_{i=1..n} P(wk|c) P(c) (12) assuming that all words have an equal chance of occurring in every position Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 30/30" }, { "page_index": 113, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_001.png", "page_index": 113, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:30+07:00" }, "raw_text": "BK TP.HCM Deep Learning Deep Feedforward Networks Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vn Faculty of Computer Science and Engineering Ho Chi Minh City University of Technology" }, { "page_index": 114, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_002.png", "page_index": 114, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:32+07:00" }, "raw_text": "Contents BK 1. Deep Networks 2. Gradient Based Learning 3. Hidden Units 4. Architecture Design 5.
Back-Propagation Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 1/30" }, { "page_index": 115, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_003.png", "page_index": 115, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:33+07:00" }, "raw_text": "Deep Networks" }, { "page_index": 116, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_004.png", "page_index": 116, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:35+07:00" }, "raw_text": "Deep Networks: Deep Feedforward Networks BK Deep feedforward networks (multilayer perceptrons, MLPs) The quintessential deep learning models : Goal: approximate some function f* : Information flows through the function being evaluated No feedback connections Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 2/30" }, { "page_index": 117, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_005.png", "page_index": 117, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:37+07:00" }, "raw_text": "Deep Networks: Deep Feedforward Networks BK : Extreme importance to machine learning practitioners : Form the basis of many important commercial applications A conceptual stepping stone on the path to recurrent networks
Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 3/30" }, { "page_index": 118, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_006.png", "page_index": 118, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:40+07:00" }, "raw_text": "Deep Networks: Deep Feedforward Networks BK Linear models Logistic regression, linear regression - Can be fit efficiently and reliably Can obtain a closed-form solution or solve with convex optimization Limitation: capacity is limited to linear functions : Cannot understand the interaction between any two input variables Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 4/30" }, { "page_index": 119, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_007.png", "page_index": 119, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:41+07:00" }, "raw_text": "Gradient Based Learning" }, { "page_index": 120, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_008.png", "page_index": 120, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:43+07:00" }, "raw_text": "Gradient Based Learning BK : Training a neural network is not much different from training any other machine learning model with gradient descent - Cost function: choose how to
represent the output of the model Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 5/30" }, { "page_index": 121, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_009.png", "page_index": 121, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:45+07:00" }, "raw_text": "Gradient Based Learning: Cost Functions BK Cost function . More or less the same as those for other parametric models, such as linear models The total cost function used to train a neural network will often combine one of the primary cost functions with a regularization term Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 6/30" }, { "page_index": 122, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_010.png", "page_index": 122, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:48+07:00" }, "raw_text": "Gradient Based Learning: Learning Conditional Distributions BK Most modern neural networks are trained using maximum likelihood The cost function is simply the negative log-likelihood, equivalently described as the cross-entropy between the training data and the model distribution. J(theta) = -E_{x,y ~ p_data} log p_model(y|x). (1) : Cost function changes from model to model, depending on the specific form of log p_model If p_model(y|x) = N(y; f(x; theta), I) then J(theta) = (1/2) E_{x,y ~ p_data} ||y - f(x; theta)||^2 + const Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 7/30" }, { "page_index": 123, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_011.png", "page_index": 123, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:51+07:00" }, "raw_text": "Gradient Based Learning: Learning Conditional Statistics BK A sufficiently powerful neural network: is able to represent any function from a wide class of functions Learning: choosing a function rather than merely choosing a set of parameters Mean squared error and mean absolute error often lead to poor results when used with gradient-based optimization Some output units that saturate produce very small gradients when combined with these cost functions. Cross-entropy cost function is more popular Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 8/30" }, { "page_index": 124, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_012.png", "page_index": 124, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:53+07:00" }, "raw_text": "Gradient Based Learning: Output Units BK 1.
Linear Units for Gaussian Output Distributions : Given features h, a layer of linear output units produces a vector y_hat = W^T h + b The mean of a conditional Gaussian distribution: p(y|x) = N(y; y_hat, I) (2) Maximizing the log-likelihood is then equivalent to minimizing the mean squared error Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 9/30" }, { "page_index": 125, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_013.png", "page_index": 125, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:55+07:00" }, "raw_text": "Gradient Based Learning: Output Units BK 2. Sigmoid Units for Bernoulli Output Distributions : Task: predicting the value of a binary variable y The neural net needs to predict only P(y = 1|x) . What if: P(y = 1|x) = max{0, min{1, w^T h + b}} (3) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 10/30" }, { "page_index": 126, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_014.png", "page_index": 126, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:50:58+07:00" }, "raw_text": "Gradient Based Learning: Output Units BK : It is better to ensure that there is always a strong gradient whenever the model has the wrong answer Sigmoid output: y_hat = sigma(w^T h + b) (4) : We may see this output as a combination of a linear transformation z = w^T h + b and an activation function sigma(z) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 11/30" }, { "page_index": 127, "chapter_num": 3, "source_file":
"/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_015.png", "page_index": 127, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:00+07:00" }, "raw_text": "Gradient Based Learning: Output Units BK The sigmoid activation function saturates when z becomes very negative or very positive The gradient can shrink too small to be useful for learning . Maximum likelihood is almost always the preferred approach to training sigmoid output units Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 12/30" }, { "page_index": 128, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_016.png", "page_index": 128, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:03+07:00" }, "raw_text": "Gradient Based Learning: Output Units BK 3. Softmax Units for Multinoulli Output Distributions Softmax functions: are most often used as the output of a classifier, to represent the probability distribution over n different classes Linear layer predicts unnormalized log probabilities: z = W^T h + b (5) Softmax function: softmax(z)_i = exp(z_i) / sum_j exp(z_j) (6) Many objective functions other than the log-likelihood do not work as well with the softmax function (e.g. squared error) .
Vanishing gradient Deep Learning 13/30" }, { "page_index": 129, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_017.png", "page_index": 129, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:05+07:00" }, "raw_text": "Gradient Based Learning: Output Units BK Log-likelihood can undo the exp of the softmax log softmax(z)_i = z_i - log sum_j exp(z_j) (7) Softmax output is invariant to adding the same scalar c to all inputs softmax(z) = softmax(z + c) (8) softmax(z) = softmax(z - max_i z_i) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 14/30" }, { "page_index": 130, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_018.png", "page_index": 130, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:06+07:00" }, "raw_text": "Hidden Units" }, { "page_index": 131, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_019.png", "page_index": 131, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:08+07:00" }, "raw_text": "Hidden Units BK How to choose the type of hidden units Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Deep Learning 15/30" }, { "page_index": 132, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_020.png", "page_index": 132, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:10+07:00" }, "raw_text": "Hidden Units: ReLU BK . Rectified linear units (ReLU): use activation function g() = max{0,} The gradient is useful for learning (no second-order effect) * ReLU is typically used on top of an affine transformation h = g(WTx+b) (9) Initialization is important! ecturer:Duc Dung Nguyen,PhD.Contact: nddung@hcmut.edu.vn Deep Learning 16/30" }, { "page_index": 133, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_021.png", "page_index": 133, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:13+07:00" }, "raw_text": "Hidden Units: ReLU Bt Drawback: cannot learned via gradient-based methods on examples for which their activation is 0 Generalization hi = g(z,a)i = max(0,zi)+ ai min(0,zi (10) : Absolute value rectification: fix a; = -1 Leaky ReLU (Maas et al., 2013 o Parametric ReLU (PReLU) (He et al.,2015 ecturer:Duc Dung Nguyen,PhD.Contact:nddung@hcmut.edu.vn Deep Learning 17 / 30" }, { "page_index": 134, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_022.png", "page_index": 134, "language": "en", "ocr_engine": "PaddleOCR 
3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:15+07:00" }, "raw_text": "Hidden Units: Maxout units BK . Maxout units (Goodfellow et al., 2013): generalize ReLU o Divide z into groups of k values - Each maxout unit outputs the maximum element of one of these groups: g(z)_i = max_{j ∈ G(i)} z_j (11) : G(i) is the set of indices for group i A maxout unit can learn a piecewise linear, convex function with up to k pieces . Learning the activation function itself Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 18 / 30" }, { "page_index": 135, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_023.png", "page_index": 135, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:17+07:00" }, "raw_text": "Hidden Units: Logistic Sigmoid & Hyperbolic Tangent Hyperbolic tangent activation function: g(z) = tanh(z) = 2σ(2z) - 1 (12) The widespread saturation of sigmoid units can make gradient-based learning very difficult The tanh activation function typically performs better than the logistic sigmoid (it resembles the identity function more closely) Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 19/30" }, { "page_index": 136, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_024.png", "page_index": 136, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:19+07:00" }, "raw_text": "Architecture Design" }, { "page_index": 137, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_025.png", "page_index": 137, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:21+07:00" }, "raw_text": "Architecture Design BK The architecture refers to the overall structure of the network: How many units it should have . How these units should be connected to each other . Most NNs are organized into groups of units called layers . Chain structure Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 20/30" }, { "page_index": 138, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_026.png", "page_index": 138, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:23+07:00" }, "raw_text": "Architecture Design: Universal Approximation Properties and Depth BK Linear model: represents only linear functions : Easy to train: many loss functions result in convex optimization problems when applied to linear models . 
The universal approximation theorem: regardless of what function we are trying to learn, we know that a large MLP will be able to represent this function. We are not guaranteed that the training algorithm will be able to learn that function Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 21/30" }, { "page_index": 139, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_027.png", "page_index": 139, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:25+07:00" }, "raw_text": "Architecture Design: Universal Approximation Properties and Depth BK Learning can fail for two different reasons: The optimization algorithm used for training may not be able to find the value of the parameters that corresponds to the desired function The training algorithm might choose the wrong function due to overfitting Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 22/30" }, { "page_index": 140, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_028.png", "page_index": 140, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:28+07:00" }, "raw_text": "Architecture Design: Universal Approximation Properties and Depth BK Depth : A feedforward network with a single layer is sufficient to represent any function : The layer may be infeasibly large and may fail to learn and generalize correctly Deeper models can reduce the number of units required to represent the desired function and can reduce the amount of generalization error 
Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 23/30" }, { "page_index": 141, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_029.png", "page_index": 141, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:31+07:00" }, "raw_text": "Architecture Design: Other Architectural Considerations BK [Figure: test accuracy (%) vs. network depth, 3 to 11 layers] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 24/30" }, { "page_index": 142, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_030.png", "page_index": 142, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:33+07:00" }, "raw_text": "Architecture Design: Other Architectural Considerations BK [Figure: test accuracy (%) vs. number of parameters (x10^8) for 3-layer convolutional, 3-layer fully connected, and 11-layer convolutional networks] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 25/30" }, { "page_index": 143, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_031.png", "page_index": 143, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:35+07:00" }, "raw_text": "Back-Propagation" }, { "page_index": 144, "chapter_num": 3, "source_file": 
"/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_032.png", "page_index": 144, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:37+07:00" }, "raw_text": "Back-Propagation Feed-forward neural network: information flows forward through the network Forward propagation: the inputs x provide the initial information that then propagates up to the hidden units at each layer and finally produces y The back-propagation algorithm: allows the information from the cost to flow backward through the network, in order to compute the gradient Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 26/30" }, { "page_index": 145, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_033.png", "page_index": 145, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:40+07:00" }, "raw_text": "Back-Propagation BK [Figure: computational graph examples with dot, relu, sum, sqr, and matmul operations over x, W, H] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 27 / 30" }, { "page_index": 146, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_034.png", "page_index": 146, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:43+07:00" }, "raw_text": "Back-Propagation BK Chain Rule of Calculus Let x be a real number, and let f and g both be functions mapping from a 
real number to a real number. Suppose that y = g(x), z = f(g(x)) = f(y) The chain rule: dz/dx = (dz/dy)(dy/dx) (13) . Generalization: x ∈ R^m, y ∈ R^n, g : R^m -> R^n, and f : R^n -> R: ∂z/∂x_i = Σ_j (∂z/∂y_j)(∂y_j/∂x_i) (14) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 28 / 30" }, { "page_index": 147, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_035.png", "page_index": 147, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:45+07:00" }, "raw_text": "Back-Propagation BK . Vector notation: ∇_x z = (∂y/∂x)^T ∇_y z (15) where ∂y/∂x is the n x m Jacobian matrix of g The gradient of a variable x can be obtained by multiplying the Jacobian matrix ∂y/∂x by the gradient ∇_y z Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 29/30" }, { "page_index": 148, "chapter_num": 3, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_3_b/slide_036.png", "page_index": 148, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:48+07:00" }, "raw_text": "Back-Propagation BK [Figure: computational graph computing J_MLE via cross_entropy, with matmul, relu, sqr, and sum nodes over X, W(1), W(2), H] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 30 / 30" }, { "page_index": 149, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_001.png", "page_index": 149, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:51+07:00" }, "raw_text": "BK TP.HCM Machine Learning Genetic Algorithm Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vn Faculty of Computer Science and Engineering Ho Chi Minh City University of Technology" }, { "page_index": 150, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_002.png", "page_index": 150, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:52+07:00" }, "raw_text": "Contents BK 1. Introduction 2. Genetic Algorithm Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 1/19" }, { "page_index": 151, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_003.png", "page_index": 151, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:53+07:00" }, "raw_text": "Introduction" }, { "page_index": 152, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_004.png", "page_index": 152, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:56+07:00" }, "raw_text": "Motivation BK . Evolution is known to be a successful, robust method in nature How do we search the space of hypotheses containing complex interacting parts, where the impact of each part on the overall hypothesis is hard to model? : Computer programs are evolved according to certain fitness criteria Evolutionary computation = genetic algorithms + genetic programming Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 2/19" }, { "page_index": 153, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_005.png", "page_index": 153, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:51:58+07:00" }, "raw_text": "Genetic Algorithm BK Learning as searching . 
Analogy to biological evolution The best hypothesis is searched through several generations of hypotheses Next-generation hypotheses are produced by mutating and recombining parts of the best current-generation hypotheses It is not recommended to search from general-to-specific or from simple-to-complex hypotheses Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 3 / 19" }, { "page_index": 154, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_006.png", "page_index": 154, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:00+07:00" }, "raw_text": "Genetic Algorithm" }, { "page_index": 155, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_007.png", "page_index": 155, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:03+07:00" }, "raw_text": "GA Overview BK Genetic Algorithms [Figure: gene, chromosome (attributes A1-A6), and population represented as bit strings] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 4 / 19" }, { "page_index": 156, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_008.png", "page_index": 156, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:05+07:00" }, "raw_text": "A Prototypical GA BK Initialize population: P = randomly generated p hypotheses . Evaluate fitness: compute Fitness(h) for each h ∈ P While max_{h ∈ P} Fitness(h) < Fitness_threshold do : Create new generation . Evaluate fitness Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 5 / 19" }, { "page_index": 157, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_009.png", "page_index": 157, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:07+07:00" }, "raw_text": "New Generation Creation BK . Selection . Crossover . Mutation Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 6 / 19" }, { "page_index": 158, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_010.png", "page_index": 158, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:10+07:00" }, "raw_text": "New Generation Creation BK . 
Selection: Probabilistically select (1 - r)p hypotheses of P to add to the new generation The selection probability of a hypothesis: Pr(h_i) = Fitness(h_i) / Σ_{j=1}^{p} Fitness(h_j) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 7 / 19" }, { "page_index": 159, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_011.png", "page_index": 159, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:12+07:00" }, "raw_text": "New Generation Creation BK o Crossover: Probabilistically select (r/2)p pairs of hypotheses from P according to Pr(h) : For each pair (h1, h2), produce two offspring by applying a Crossover operator. : Add all offspring to the new generation Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 8/19" }, { "page_index": 160, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_012.png", "page_index": 160, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:14+07:00" }, "raw_text": "New Generation Creation BK Mutation: - Choose m percent of the added hypotheses with uniform distribution - For each, invert one randomly selected bit in its representation Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 9/19" }, { "page_index": 161, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_013.png", "page_index": 161, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:16+07:00" }, "raw_text": "Hypotheses Representation BK . A classification rule as a bit string If expr(A1) ∧ ... ∧ expr(Ai) ∧ ... ∧ expr(An) Then C = c Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 10/19" }, { "page_index": 162, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_014.png", "page_index": 162, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:19+07:00" }, "raw_text": "Hypothesis Representation BK Example: If Wind = Strong Then PlayTennis = Yes 1 1 0 1 0 Machine Learning 11/19" }, { "page_index": 163, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_015.png", "page_index": 163, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:21+07:00" }, "raw_text": "Hypothesis Representation BK . A set of rules as concatenated bit strings Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 12/19" }, { "page_index": 164, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_016.png", "page_index": 164, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:23+07:00" }, "raw_text": "Crossover Operator BK : Single-point . Two-point . Uniform Machine Learning 13 /19" }, { "page_index": 165, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_017.png", "page_index": 165, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:25+07:00" }, "raw_text": "Crossover Operator BK Single-point Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 14 /19" }, { "page_index": 166, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_018.png", "page_index": 166, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:27+07:00" }, "raw_text": "Crossover Operator BK Two-point Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 15 /19" }, { "page_index": 167, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_019.png", "page_index": 167, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:29+07:00" }, "raw_text": "Crossover Operator BK Uniform Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 16 /19" }, { "page_index": 168, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_020.png", "page_index": 168, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:33+07:00" }, "raw_text": "Crossover Operator BK Variable-length bit strings [Figure: crossover of variable-length bit strings encoding rules over attributes A1, A2 and class C] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 17/19" }, { "page_index": 169, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_021.png", "page_index": 169, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:34+07:00" }, "raw_text": "Fitness Function BK Example: Fitness(h) = (correct(h))^2 correct(h) = percent of all training examples correctly classified by hypothesis h Machine Learning 18 /19" }, { "page_index": 170, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_a/slide_022.png", "page_index": 170, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:36+07:00" }, "raw_text": "Inductive bias? BK What is inductive bias? Where is inductive bias in GA? Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 19 /19" }, { "page_index": 171, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_001.png", "page_index": 171, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:38+07:00" }, "raw_text": "BK TP.HCM Deep Learning Regularization Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vn Faculty of Computer Science and Engineering Ho Chi Minh City University of Technology" }, { "page_index": 172, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_002.png", "page_index": 172, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:41+07:00" }, "raw_text": "Contents BK 1. Parameter Norm Penalties 2. Constrained optimization 3. Dataset Augmentation 4. 
Other Regularization Approaches 5. Dropout Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 1 / 31" }, { "page_index": 173, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_003.png", "page_index": 173, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:41+07:00" }, "raw_text": "Parameter Norm Penalties" }, { "page_index": 174, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_004.png", "page_index": 174, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:43+07:00" }, "raw_text": "Regularization BK Problem in ML: generalization! Regularization: strategies explicitly designed to reduce the test error, possibly at the expense of increased training error. 
Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 2 / 31" }, { "page_index": 175, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_005.png", "page_index": 175, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:46+07:00" }, "raw_text": "Regularization BK Most regularization strategies are based on regularizing estimators Regularization of an estimator works by trading increased bias for reduced variance : An effective regularizer makes a profitable trade, reducing variance significantly while not overly increasing the bias Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 3/31" }, { "page_index": 176, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_006.png", "page_index": 176, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:49+07:00" }, "raw_text": "Parameter Norm Penalties BK Main regularization approach: limiting the capacity of the model by adding a parameter norm penalty Ω(θ) to the objective function J: J̃(θ; X, y) = J(θ; X, y) + αΩ(θ) (1) Different choices of the parameter norm Ω can result in different solutions being preferred In NNs, Ω is chosen to penalize only the weights of the affine transformation at each layer (leaving the biases unregularized) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 4 / 31" }, { "page_index": 177, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_007.png", "metadata": { "doc_type": "slide", 
"course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_007.png", "page_index": 177, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:51+07:00" }, "raw_text": "Parameter Norm Penalties BK : The most common parameter norm penalty: L2 (weight decay), also called ridge regression or Tikhonov regularization J̃(w; X, y) = J(w; X, y) + αΩ(w) = J(w; X, y) + (α/2) wᵀw (2) with the corresponding parameter gradient ∇_w J̃(w; X, y) = αw + ∇_w J(w; X, y) (3) . Gradient step: w ← w - ε(αw + ∇_w J(w; X, y)) = (1 - εα)w - ε∇_w J(w; X, y) (4) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 5/31" }, { "page_index": 178, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_008.png", "page_index": 178, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:54+07:00" }, "raw_text": "Parameter Norm Penalties BK . 
L1 regularization: Ω(θ) = ||w||_1 = Σ_i |w_i| (5) Regularized objective function: J̃(w; X, y) = J(w; X, y) + α||w||_1 (6) with the corresponding parameter gradient ∇_w J̃(w; X, y) = α sign(w) + ∇_w J(w; X, y) (7) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 6 / 31" }, { "page_index": 179, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_009.png", "page_index": 179, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:56+07:00" }, "raw_text": "Parameter Norm Penalties BK The regularization contribution to the gradient no longer scales linearly with each w_i L1 regularization results in a solution that is more sparse compared to L2: some parameters have an optimal value of zero. : The sparsity property of L1 regularization has been used extensively as a feature selection mechanism Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 7 / 31" }, { "page_index": 180, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_010.png", "page_index": 180, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:52:57+07:00" }, "raw_text": "Constrained optimization" }, { "page_index": 181, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_011.png", "page_index": 181, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": 
"2025-11-01T10:52:59+07:00" }, "raw_text": "Constrained Optimization BK Constrained optimization Find the maximal or minimal value of f(x) for values of x in some set S * Feasible points: points x that lie within the set S Find a solution that is small in some sense . Common approach: impose a norm constraint, such as lxll 1 ecturer:Duc Dung Nguyen,PhD.Contact: nddung@hcmut.edu.vn Deep Learning 8/31" }, { "page_index": 182, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_012.png", "page_index": 182, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:01+07:00" }, "raw_text": "Constrained Optimization BK Approach to constrained optimization Modify gradient descent taking the constraint into account . If we use a small constant step size e, we can make gradient descent steps, then project the result back into S If we use a line search, we can search only over step sizes e that yield new x points that are feasible, or we can project each point on the line back into the constraint region ecturer:Duc Dung Nguyen,PhD.Contact: nddung@hcmut.edu.vn Deep Learning 9/31" }, { "page_index": 183, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_013.png", "page_index": 183, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:03+07:00" }, "raw_text": "Constrained Optimization Karush-Kuhn-Tucker (KKT): a very general solution to constrained optimization : KKT multipliers: introduce new variables i and ai for each constraint . 
The generalized Lagrangian is then defined as L(x, λ, α) = f(x) + Σ_i λ_i g^(i)(x) + Σ_j α_j h^(j)(x) (8). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 10/ 31" }, { "page_index": 184, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_014.png", "page_index": 184, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:05+07:00" }, "raw_text": "Constrained Optimization BK Solve a constrained minimization problem using unconstrained optimization of the generalized Lagrangian. Minimizing min_{x∈S} f(x) is equivalent to min_x max_λ max_{α, α≥0} L(x, λ, α) (9). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 11/ 31" }, { "page_index": 185, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_015.png", "page_index": 185, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:08+07:00" }, "raw_text": "Constrained Optimization BK This follows because any time the constraints are satisfied, max_λ max_{α, α≥0} L(x, λ, α) = f(x) (10), while any time a constraint is violated, max_λ max_{α, α≥0} L(x, λ, α) = ∞ (11). No infeasible point will ever be optimal. The optimum within the feasible points is unchanged. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 12 / 31" }, { "page_index": 186, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_016.png", "page_index": 186, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:09+07:00" }, "raw_text": "Dataset Augmentation" }, { "page_index": 187, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_017.png", "page_index": 187, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:11+07:00" }, "raw_text": "Dataset Augmentation BK Making an ML model generalize better: train on more data! In practice, the amount of data is limited. Solution: create fake data for training. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 13/31" }, { "page_index": 188, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_018.png", "page_index": 188, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:13+07:00" }, "raw_text": "Dataset Augmentation BK Dataset augmentation: a particularly effective technique for object recognition. Image: high dimensional, enormous variety of factors of variation. E.g.: rotating, scaling, affine transformation, etc. Dataset augmentation is effective for speech recognition tasks as well. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 14/31" }, { "page_index": 189, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_019.png", "page_index": 189, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:15+07:00" }, "raw_text": "Dataset Augmentation BK Injecting noise: NNs prove not to be very robust to noise. Unsupervised learning: denoising autoencoder. Noise in hidden units: dataset augmentation at multiple levels of abstraction. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 15/31" }, { "page_index": 190, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_020.png", "page_index": 190, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:17+07:00" }, "raw_text": "Dataset Augmentation BK [Figure: augmentation examples - affine distortion, elastic deformation, noise, horizontal flip, random translation, hue shift] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 16 / 31" }, { "page_index": 191, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_021.png", "page_index": 191, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:18+07:00" }, "raw_text": "Other Regularization Approaches" }, { "page_index": 192, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_022.png", "page_index": 192, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:22+07:00" }, "raw_text": "Semi-Supervised Learning BK Semi-supervised learning: usually refers to learning a representation h = f(x). Learn a representation so that examples from the same class have similar representations. Provide useful cues for how to group examples in representation space. 
A linear classifier in the new space may achieve better generalization in many cases. Principal components analysis (PCA): a pre-processing step before applying a classifier (on the projected data). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 17/31" }, { "page_index": 193, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_023.png", "page_index": 193, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:24+07:00" }, "raw_text": "Semi-Supervised Learning Construct models in which a generative model of either P(x) or P(x, y) shares parameters with a discriminative model of P(y|x). Trade off the supervised criterion −log P(y|x) with the unsupervised or generative one (such as −log P(x) or −log P(x, y)). The generative criterion expresses a particular form of prior belief about the solution to the supervised learning problem. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 18/31" }, { "page_index": 194, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_024.png", "page_index": 194, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:26+07:00" }, "raw_text": "Multitask Learning BK [Figure: multitask network with task-specific outputs and layers h(1), h(2), h(3) built on a shared representation h(shared)] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 19/ 31" }, { "page_index": 195, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_025.png", "page_index": 195, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:29+07:00" }, "raw_text": "Early Stopping BK Early stopping: terminate while validation set performance is better. [Figure: training set loss and validation set loss vs. time (epochs)] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 20 /31" }, { "page_index": 196, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_026.png", "page_index": 196, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:31+07:00" }, "raw_text": "Parameter Tying and Parameter Sharing BK Parameter sharing: force sets of parameters to be equal. Interpret the various models or model components as sharing a unique set of parameters. Only a subset of the parameters 
(the unique set) needs to be stored. Example: CNNs. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 21/31" }, { "page_index": 197, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_027.png", "page_index": 197, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:32+07:00" }, "raw_text": "Dropout" }, { "page_index": 198, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_028.png", "page_index": 198, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:34+07:00" }, "raw_text": "Bagging BK Bagging (bootstrap aggregating): a technique to reduce generalization error by combining several models (Breiman, 1994). General strategy in ML: model averaging -> ensemble method. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 22/31" }, { "page_index": 199, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_029.png", "page_index": 199, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:36+07:00" }, "raw_text": "Bagging BK [Figure: original dataset; first resampled dataset and first ensemble member; second resampled dataset and second ensemble member] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 23 / 31" }, { "page_index": 200, "chapter_num": 4, "source_file": 
"/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_030.png", "page_index": 200, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:38+07:00" }, "raw_text": "Dropout BK : Dropout: provides a computationally inexpensive but powerful method of regularizing a broad family of models A method of making bagging practical for ensembles of very large neura/ networks .ecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 24/31" }, { "page_index": 201, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_031.png", "page_index": 201, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:41+07:00" }, "raw_text": "Dropout BK Good for five to ten neural networks Dropout trains the ensemble consisting of all sub-networks that can be formed by removing non-output units from an underlying base network -ecturer:Duc Dung Nguyen,PhD.Contactnddung@hcmut.edu.vn Deep Learning 25/31" }, { "page_index": 202, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_032.png", "page_index": 202, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:42+07:00" }, "raw_text": "Dropout BK hi Base network Enseinble of subnetworks ecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 26/31" }, { "page_index": 203, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_033.png", "page_index": 203, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:45+07:00" }, "raw_text": "Dropout Bagging: the models are independent; each model is trained to convergence on its respective training set. Dropout: models share parameters; most models are not explicitly trained at all, since it is infeasible to sample all possible sub-networks within the lifetime of the universe; parameter sharing causes the remaining sub-networks to arrive at good settings of the parameters. Dropout can represent an exponential number of models with a tractable amount of memory. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 27 / 31" }, { "page_index": 204, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_034.png", "page_index": 204, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:48+07:00" }, "raw_text": "Dropout BK Assume that the model's role is to output a probability distribution. Bagging: each model i produces a probability distribution p^(i)(y|x). The prediction of the ensemble is given by the arithmetic mean of all of these distributions: (1/k) Σ_{i=1..k} p^(i)(y|x) (12). Dropout: each sub-model defined by mask vector μ defines a probability distribution p(y|x, μ). The arithmetic mean over all masks is given by Σ_μ p(μ) p(y|x, μ) (13), where p(μ) is the probability distribution that was used to sample μ at training time. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 28/ 31" }, { "page_index": 205, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_035.png", "page_index": 205, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:51+07:00" }, "raw_text": "Dropout BK Very computationally cheap: using dropout during training requires only O(n) computation per example per update, to generate n random binary numbers and multiply them by the state. Dropout does not significantly limit the type of model or training procedure that can be used. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 29/31" }, { "page_index": 206, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_036.png", "page_index": 206, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:53+07:00" }, "raw_text": "Dropout BK The cost of using dropout in a complete system can be significant: increase the size of the model. Typically the optimal validation set error is much lower when using dropout. The cost: a much larger model and many more iterations of the training algorithm. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 30/31" }, { "page_index": 207, "chapter_num": 4, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_4_b/slide_037.png", "page_index": 207, "language": "en", "ocr_engine": "PaddleOCR 
3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:55+07:00" }, "raw_text": "Dropout BK For very large datasets . Regularization confers little reduction in generalization error : The computational cost may outweigh the benefit of regularization : Very few data samples Dropout is less effective - When additional unlabeled data is available, unsupervised feature learning can gain an advantage over dropout ecturer:Duc Dung Nguyen,PhD.Contact:nddung@hcmut.edu.vn Deep Learning 31 / 31" }, { "page_index": 208, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_001.png", "page_index": 208, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:56+07:00" }, "raw_text": "BK TP.HCM Machine Learning Graphical Models Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vn Faculty of Computer Science and Engineering Hochiminh city University of Technology" }, { "page_index": 209, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_002.png", "page_index": 209, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:53:59+07:00" }, "raw_text": "Contents BK 1. Bayesian Networks (revisited) 2. Naive Bayes Classifier (revisited 3. Hidden Markov Models -ecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 1 / 35" }, { "page_index": 210, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_003.png", "page_index": 210, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:00+07:00" }, "raw_text": "Bayesian Networks (revisited)" }, { "page_index": 211, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_004.png", "page_index": 211, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:02+07:00" }, "raw_text": "Bayesian Networks BK [Figure: Bayesian network] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 2/35" }, { "page_index": 212, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_005.png", "page_index": 212, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:03+07:00" }, "raw_text": "Bayesian Networks BK Advantages of graphical modeling. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 3/35" }, { "page_index": 213, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_006.png", "page_index": 213, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:05+07:00" }, "raw_text": "Bayesian Networks BK Advantages of graphical modeling - Conditional independence: p(D|C,E,A,B) = p(D|C). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 3 / 35" }, { "page_index": 214, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_007.png", "page_index": 214, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:07+07:00" }, "raw_text": "Bayesian Networks BK Advantages of graphical modeling - Conditional independence: p(D|C,E,A,B) = p(D|C) - Factorization: p(A,B,C,D,E) = p(D|C) p(E|C) p(C|A,B) p(A) p(B). Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 3 / 35" }, { "page_index": 215, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_008.png", "page_index": 215, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:08+07:00" }, "raw_text": "Naive Bayes Classifier (revisited)" }, { "page_index": 216, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_009.png", "page_index": 216, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:10+07:00" }, "raw_text": "Naive Bayes Classifier Each instance x is described by a conjunction of attribute values <a1, a2, ..., an>. The task is to assign the most probable class c to an instance: c_NB = argmax_{c∈C} p(a1, a2, ..., an | c) p(c) = argmax_{c∈C} ∏_{i=1..n} p(a_i | c) · p(c). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 4 / 35" }, { "page_index": 217, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_010.png", "page_index": 217, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:12+07:00" }, "raw_text": "Naive Bayes Classifier BK [Figure: class node with observation nodes] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 5 / 35" }, { "page_index": 218, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_011.png", "page_index": 218, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:14+07:00" }, "raw_text": "Naive Bayes Classifier BK [Figure: class node with observation nodes] Joint distribution: p(C, A1, A2, ..., An). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 6 / 35" }, { "page_index": 219, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_012.png", "page_index": 219, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:15+07:00" }, "raw_text": "Naive Bayes Classifier BK Naive Bayes is a generative model: Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 7/35" }, { "page_index": 220, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_013.png", "page_index": 220, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:17+07:00" }, "raw_text": "Naive Bayes Classifier BK Naive Bayes is a generative model: it models a joint distribution p(C, A). Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 7 / 35" }, { "page_index": 221, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_014.png", "page_index": 221, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:19+07:00" }, "raw_text": "Naive Bayes Classifier BK Naive Bayes is a generative model: it models a joint distribution p(C, A); it can generate any distribution on C and A. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 7 / 35" }, { "page_index": 222, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_015.png", "page_index": 222, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:21+07:00" }, "raw_text": "Naive Bayes Classifier BK Naive Bayes is a generative model: it models a joint distribution p(C, A); it can generate any distribution on C and A. In contrast to a discriminative model (e.g., CRF). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 7 / 35" }, { "page_index": 223, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_016.png", "page_index": 223, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:23+07:00" }, "raw_text": "Naive Bayes Classifier BK Naive Bayes is a generative model: 
It models a joint distribution p(C, A); it can generate any distribution on C and A. In contrast to a discriminative model (e.g., CRF): conditional distribution P(C|A). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 7 / 35" }, { "page_index": 224, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_017.png", "page_index": 224, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:25+07:00" }, "raw_text": "Naive Bayes Classifier BK Naive Bayes is a generative model: it models a joint distribution p(C, A); it can generate any distribution on C and A. In contrast to a discriminative model (e.g., CRF): conditional distribution P(C|A); it discriminates C given A. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 7 / 35" }, { "page_index": 225, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_018.png", "page_index": 225, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:26+07:00" }, "raw_text": "Hidden Markov Models" }, { "page_index": 226, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_019.png", "page_index": 226, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:28+07:00" }, "raw_text": "Hidden Markov Models BK Introduction - Example 
- Independence assumptions - Forward algorithm - Viterbi algorithm - Training - Application to NER. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 8 / 35" }, { "page_index": 227, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_020.png", "page_index": 227, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:30+07:00" }, "raw_text": "Hidden Markov Models BK One of the most popular graphical models. Dynamic extension of Bayesian networks. Sequential extension of the Naive Bayes classifier. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 9/35" }, { "page_index": 228, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_021.png", "page_index": 228, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:32+07:00" }, "raw_text": "Hidden Markov Models BK Example: Your possible appearance prior to the exam: (tired, hungover, scared, fine). Lecturer: Duc Dung Nguyen, PhD. 
" }, { "page_index": 229, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_022.png", "page_index": 229, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:34+07:00" }, "raw_text": "Hidden Markov Models. Example: Your possible look prior to the exam: (tired, hungover, scared, fine). Your possible activity last night: (TV, pub, party, study)." }, { "page_index": 230, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_023.png", "page_index": 230, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:36+07:00" }, "raw_text": "Hidden Markov Models. Example: Your possible look prior to the exam: (tired, hungover, scared, fine). Your possible activity last night: (TV, pub, party, study). Given the sequence of observations of your look, guess what you did in the previous nights." }, { "page_index": 231, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_024.png", "page_index": 231, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:39+07:00" }, "raw_text": "Hidden Markov Models. Example:
Your possible look prior to the exam: (tired, hungover, scared, fine). Your possible activity last night: (TV, pub, party, study). Given the sequence of observations of your look, guess what you did in the previous nights. A model:" }, { "page_index": 232, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_025.png", "page_index": 232, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:41+07:00" }, "raw_text": "Hidden Markov Models. Example: Your possible look prior to the exam: (tired, hungover, scared, fine). Your possible activity last night: (TV, pub, party, study). Given the sequence of observations of your look, guess what you did in the previous nights. A model: Your look depends on what you did in the night before." }, { "page_index": 233, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_026.png", "page_index": 233, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:44+07:00" }, "raw_text": "Hidden Markov Models. Example:
Your possible look prior to the exam: (tired, hungover, scared, fine). Your possible activity last night: (TV, pub, party, study). Given the sequence of observations of your look, guess what you did in the previous nights. A model: Your look depends on what you did in the night before. Your activity in a night depends on what you did in some previous nights." }, { "page_index": 234, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_027.png", "page_index": 234, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:46+07:00" }, "raw_text": "Hidden Markov Models. A finite set of possible observations. A finite set of possible hidden states. To predict the most probable sequence of underlying states {y_1, y_2, ..., y_T} for a given sequence of observations {x_1, x_2, ..., x_T}. [Diagram: trellis in which state y_{t-1} transits to state y_t, and each state y_t emits observation x_t]
" }, { "page_index": 235, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_028.png", "page_index": 235, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:51+07:00" }, "raw_text": "Hidden Markov Models. [Diagram: state-transition graph over the activities {TV, Pub, Party, Study} with transition probabilities, and emission (observation) probability tables p(look | activity) over the looks {Tired, Hungover, Scared, Fine}]" }, { "page_index": 236, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_029.png", "page_index": 236, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:54:56+07:00" }, "raw_text": "Hidden Markov Models. Normalization constraints: sum_{y'} p(y' | y) = 1 over next states, and sum_x p(x | y) = 1 over observations, for every state y. [Diagram: the same transition graph and emission probability tables]
" }, { "page_index": 237, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_030.png", "page_index": 237, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:01+07:00" }, "raw_text": "Hidden Markov Models. HMM conditional independence assumptions: The state at time t depends only on the state at time t-1: p(y_t | y_{t-1}, Z) = p(y_t | y_{t-1}), where Z denotes all the other variables. The observation at time t depends only on the state at time t: p(x_t | y_t, Z) = p(x_t | y_t). [Diagram: transition graph and emission probability tables]" }, { "page_index": 238, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_031.png", "page_index": 238, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:03+07:00" }, "raw_text": "Hidden Markov Models. HMM is a generative model: Joint distribution: p(Y, X) = p(y_1, y_2, ..., y_T, x_1, x_2, ..., x_T) = prod_{t=1..T} p(y_t | y_{t-1}) p(x_t | y_t), with p(y_1 | y_0) = p(y_1).
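The HMM joint factorization p(Y, X) = prod_t p(y_t | y_{t-1}) p(x_t | y_t) can be evaluated directly. A minimal Python sketch, where the state set, observation set, and all probability tables are illustrative stand-ins rather than the lecture's numbers:

```python
# Hedged sketch: evaluating the HMM joint probability
#   p(Y, X) = prod_{t=1..T} p(y_t | y_{t-1}) p(x_t | y_t),
# where p(y_1 | y_0) is taken to be the initial distribution p(y_1).
# All tables below are made up for illustration.

init = {"party": 0.4, "study": 0.6}                       # p(y_1)
trans = {"party": {"party": 0.7, "study": 0.3},           # p(y_t | y_{t-1})
         "study": {"party": 0.2, "study": 0.8}}
emit = {"party": {"tired": 0.6, "fine": 0.4},             # p(x_t | y_t)
        "study": {"tired": 0.3, "fine": 0.7}}

def joint_prob(states, obs):
    """p(y_1..y_T, x_1..x_T) under the HMM factorization."""
    p = init[states[0]] * emit[states[0]][obs[0]]
    for t in range(1, len(states)):
        p *= trans[states[t - 1]][states[t]] * emit[states[t]][obs[t]]
    return p

p = joint_prob(["party", "party"], ["tired", "tired"])
```

Summing joint_prob over all state sequences gives the observation likelihood p(X); the forward algorithm computes that sum without enumerating sequences.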
" }, { "page_index": 239, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_032.png", "page_index": 239, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:09+07:00" }, "raw_text": "Hidden Markov Models. HMM is a generative model: Joint distribution: p(Y, X) = p(y_1, y_2, ..., y_T, x_1, x_2, ..., x_T) = prod_{t=1..T} p(y_t | y_{t-1}) p(x_t | y_t), with p(y_1 | y_0) = p(y_1). It can generate any distribution on Y and X. [Diagram: transition graph and emission probability tables]" }, { "page_index": 240, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_033.png", "page_index": 240, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:11+07:00" }, "raw_text": "Hidden Markov Models. HMM is a generative model: Joint distribution: p(Y, X) = prod_{t=1..T} p(y_t | y_{t-1}) p(x_t | y_t), with p(y_1 | y_0) = p(y_1). It can generate any distribution on Y and X. In contrast to a discriminative model (e.g., CRF): Conditional distribution: p(Y | X). It discriminates Y given X." }, { "page_index": 241, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_034.png", "metadata": { "doc_type": "slide",
"course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_034.png", "page_index": 241, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:13+07:00" }, "raw_text": "Hidden Markov Mode BK Forward algorithm To compute the joint probability of the state at the time t being yt and the sequence of observations in the first t steps being {1,2,..., t}: at(yt) =p(yt,1,X2,...,t ecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 17 / 35" }, { "page_index": 242, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_035.png", "page_index": 242, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:15+07:00" }, "raw_text": "Hidden Markov Mode BK Forward algorithm * To compute the joint probability of the state at the time t being yt and the sequence of observations in the first t steps being {1, 2,...,t}: at(yt) = p(yt,1,x2,...,t . Bayes' theorem gives: p(ytx1,x2,...,t) =p(yt,1,x2,...,xt)/p(x1,x2,...,t = at(yt/p(x1, x2,..., xt) ecturer: Duc Dung Nguyen, PhD. 
" }, { "page_index": 243, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_036.png", "page_index": 243, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:18+07:00" }, "raw_text": "Hidden Markov Models. Forward algorithm. To compute the joint probability of the state at time t being y_t and the sequence of observations in the first t steps being {x_1, x_2, ..., x_t}: alpha_t(y_t) = p(y_t, x_1, x_2, ..., x_t). Bayes' theorem gives: p(y_t | x_1, x_2, ..., x_t) = p(y_t, x_1, x_2, ..., x_t) / p(x_1, x_2, ..., x_t) = alpha_t(y_t) / p(x_1, x_2, ..., x_t). The y_t with the highest alpha_t(y_t) is the most likely y_t given the same {x_1, x_2, ..., x_t}." }, { "page_index": 244, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_037.png", "page_index": 244, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:20+07:00" }, "raw_text": "Hidden Markov Models. Forward algorithm: alpha_t(y_t) = p(y_t, x_1, x_2, ..., x_t) = sum_{y_{t-1}} p(y_t, y_{t-1}, x_1, x_2, ..., x_t) = sum_{y_{t-1}} p(x_t | y_t) p(y_t | y_{t-1}, x_1, x_2, ..., x_{t-1}) p(y_{t-1}, x_1, x_2, ..., x_{t-1}) = p(x_t | y_t) sum_{y_{t-1}} p(y_t | y_{t-1}) alpha_{t-1}(y_{t-1}).
" }, { "page_index": 245, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_038.png", "page_index": 245, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:22+07:00" }, "raw_text": "Hidden Markov Models. Forward algorithm: alpha_t(y_t) = p(x_t | y_t) sum_{y_{t-1}} p(y_t | y_{t-1}) alpha_{t-1}(y_{t-1}), with base case alpha_1(y_1) = p(y_1, x_1) = p(x_1 | y_1) p(y_1)." }, { "page_index": 246, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_039.png", "page_index": 246, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:25+07:00" }, "raw_text": "Hidden Markov Models. Forward algorithm: alpha_t(y_t) = p(x_t | y_t) sum_{y_{t-1}} p(y_t | y_{t-1}) alpha_{t-1}(y_{t-1}). [Trellis diagram: forward probabilities for the states TV, Pub, Party, Study over the observations Tired, Tired, Scared; e.g. alpha_1(TV) = .05, alpha_2(TV) = .031, alpha_2(Pub) = .022, alpha_1(Party) = .075, alpha_2(Party) = .011, alpha_1(Study) = .075, alpha_2(Study) = .016]
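The forward recursion (base case plus one update per time step) can be sketched in Python. The two-state model and its probability tables below are illustrative placeholders, not the lecture's numbers:

```python
# Hedged sketch of the forward algorithm:
#   alpha_1(y) = p(x_1 | y) p(y)
#   alpha_t(y) = p(x_t | y) * sum_{y'} p(y | y') alpha_{t-1}(y')
# Tables are illustrative, not taken from the slides.

init = {"party": 0.4, "study": 0.6}                       # p(y_1)
trans = {"party": {"party": 0.7, "study": 0.3},           # p(y_t | y_{t-1})
         "study": {"party": 0.2, "study": 0.8}}
emit = {"party": {"tired": 0.6, "fine": 0.4},             # p(x_t | y_t)
        "study": {"tired": 0.3, "fine": 0.7}}

def forward(obs):
    """Return the list of alpha_t dictionaries for an observation sequence."""
    states = list(init)
    alpha = [{y: init[y] * emit[y][obs[0]] for y in states}]   # base case
    for x in obs[1:]:
        prev = alpha[-1]
        alpha.append({y: emit[y][x] * sum(trans[yp][y] * prev[yp] for yp in states)
                      for y in states})
    return alpha

alphas = forward(["tired", "fine"])
# sum_y alpha_T(y) = p(x_1, ..., x_T), the likelihood of the observations
likelihood = sum(alphas[-1].values())
```

Normalizing alpha_t by the likelihood recovers p(y_t | x_1, ..., x_t), matching the Bayes' theorem step on the slide.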
" }, { "page_index": 247, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_040.png", "page_index": 247, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:27+07:00" }, "raw_text": "Hidden Markov Models. Viterbi algorithm. To compute the most probable sequence of states {y_1, y_2, ..., y_T} given a sequence of observations {x_1, x_2, ..., x_T}: Y* = argmax_Y p(Y | X) = argmax_Y p(Y, X)." }, { "page_index": 248, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_041.png", "page_index": 248, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:31+07:00" }, "raw_text": "Hidden Markov Models. Viterbi algorithm. To compute the most probable sequence of states {y_1, y_2, ..., y_T} given a sequence of observations {x_1, x_2, ..., x_T}: Y* = argmax_Y p(Y | X) = argmax_Y p(Y, X). [Photo: Andrew J. Viterbi, born March 9, 1935 in Bergamo, Italy; Italian-American; educated at the Massachusetts Institute of Technology (BS, MS); USC Viterbi School of Engineering]
" }, { "page_index": 249, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_042.png", "page_index": 249, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:36+07:00" }, "raw_text": "Hidden Markov Models. Viterbi algorithm: max_{y_1:T} p(y_1, y_2, ..., y_T, x_1, x_2, ..., x_T) = max_{y_T} max_{y_1:T-1} p(y_1, y_2, ..., y_T, x_1, x_2, ..., x_T) = max_{y_T} max_{y_1:T-1} {p(x_T | y_T) p(y_T | y_{T-1}) p(y_1, ..., y_{T-1}, x_1, x_2, ..., x_{T-1})} = max_{y_T} max_{y_{T-1}} {p(x_T | y_T) p(y_T | y_{T-1}) max_{y_1:T-2} p(y_1, ..., y_{T-1}, x_1, x_2, ..., x_{T-1})}. [Diagram: transition graph and emission probability tables]" }, { "page_index": 250, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_043.png", "page_index": 250, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:38+07:00" }, "raw_text": "Viterbi algorithm: max_{y_1:T} p(y_1, y_2, ..., y_T, x_1, x_2, ..., x_T) = max_{y_T} max_{y_1:T-1} {p(x_T | y_T) p(y_T | y_{T-1}) p(y_1, ..., y_{T-1}, x_1, x_2, ..., x_{T-1})} = max_{y_T} max_{y_{T-1}} {p(x_T | y_T) p(y_T | y_{T-1}) max_{y_1:T-2} p(y_1, ..., y_{T-1}, x_1, x_2, ..., x_{T-1})}. Dynamic programming: Compute argmax_{y_1} p(y_1, x_1) = argmax_{y_1} p(x_1 | y_1) p(y_1).
For each t from 2 to T, and for each state y_t, compute: argmax_{y_1:t-1} p(y_1, y_2, ..., y_t, x_1, x_2, ..., x_t). Select argmax_{y_1:T} p(y_1, y_2, ..., y_T, x_1, x_2, ..., x_T)." }, { "page_index": 251, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_044.png", "page_index": 251, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:40+07:00" }, "raw_text": "Dynamic programming: Compute argmax_{y_1} p(y_1, x_1) = argmax_{y_1} p(x_1 | y_1) p(y_1)." }, { "page_index": 252, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_045.png", "page_index": 252, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:42+07:00" }, "raw_text": "Dynamic programming: Compute argmax_{y_1} p(y_1, x_1) = argmax_{y_1} p(x_1 | y_1) p(y_1). For each t from 2 to T, and for each state y_t, compute: argmax_{y_1:t-1} p(y_1, y_2, ..., y_t, x_1, x_2, ..., x_t)." }, { "page_index": 253, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_046.png", "page_index": 253, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:44+07:00" }, "raw_text": "
Dynamic programming: Compute argmax_{y_1} p(y_1, x_1) = argmax_{y_1} p(x_1 | y_1) p(y_1). For each t from 2 to T, and for each state y_t, compute: argmax_{y_1:t-1} p(y_1, y_2, ..., y_t, x_1, x_2, ..., x_t). Select argmax_{y_1:T} p(y_1, y_2, ..., y_T, x_1, x_2, ..., x_T)." }, { "page_index": 254, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_047.png", "page_index": 254, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:51+07:00" }, "raw_text": "Dynamic programming: Compute argmax_{y_1} p(y_1, x_1) = argmax_{y_1} p(x_1 | y_1) p(y_1). For each t from 2 to T, and for each state y_t, compute: argmax_{y_1:t-1} p(y_1, y_2, ..., y_t, x_1, x_2, ..., x_t). Select argmax_{y_1:T} p(y_1, y_2, ..., y_T, x_1, x_2, ..., x_T). [Diagram: transition graph, emission tables, and Viterbi trellis for the observations Tired, Tired, Scared]" }, { "page_index": 255, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_048.png", "page_index": 255, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:55+07:00" }, "raw_text": "Hidden Markov Models. Could the results from the forward algorithm be used for the Viterbi algorithm?
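The dynamic-programming steps above translate into a short Viterbi sketch. The two-state model and tables are made-up placeholders; note the recursion replaces the forward pass's sum over predecessor states with a max, and keeps back-pointers to recover the path:

```python
# Hedged sketch of the Viterbi algorithm: delta_t(y) is the max joint probability
# of any state path ending in state y at time t; back-pointers recover
# argmax_{y_1..y_T} p(Y, X). Tables are illustrative, not the lecture's.

init = {"party": 0.4, "study": 0.6}                       # p(y_1)
trans = {"party": {"party": 0.7, "study": 0.3},           # p(y_t | y_{t-1})
         "study": {"party": 0.2, "study": 0.8}}
emit = {"party": {"tired": 0.6, "fine": 0.4},             # p(x_t | y_t)
        "study": {"tired": 0.3, "fine": 0.7}}

def viterbi(obs):
    states = list(init)
    delta = {y: init[y] * emit[y][obs[0]] for y in states}     # base case
    back = []                                                  # back-pointers
    for x in obs[1:]:
        back.append({y: max(states, key=lambda yp: delta[yp] * trans[yp][y])
                     for y in states})
        delta = {y: delta[back[-1][y]] * trans[back[-1][y]][y] * emit[y][x]
                 for y in states}
    path = [max(states, key=delta.get)]                        # best final state
    for bp in reversed(back):                                  # trace back
        path.append(bp[path[-1]])
    return path[::-1]

best_path = viterbi(["tired", "tired", "fine"])
```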
[Trellis diagram: forward probabilities alpha_1, alpha_2 for the states TV, Pub, Party, Study over the observations Tired, Tired, Scared]" }, { "page_index": 256, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_049.png", "page_index": 256, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:55:59+07:00" }, "raw_text": "Hidden Markov Models. [Trellis diagram: per-transition path probabilities for the same observation sequence]" }, { "page_index": 257, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_050.png", "page_index": 257, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:03+07:00" }, "raw_text": "Hidden Markov Models. Could the results from the forward algorithm be used for the Viterbi algorithm? [Trellis diagram: best-path (Viterbi) probabilities for the same observation sequence]
" }, { "page_index": 258, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_051.png", "page_index": 258, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:05+07:00" }, "raw_text": "Hidden Markov Models. Training HMMs: The topology is designed beforehand. Parameters to be learned: emission and transition probabilities. Supervised or unsupervised training." }, { "page_index": 259, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_052.png", "page_index": 259, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:07+07:00" }, "raw_text": "Hidden Markov Models. Supervised learning. Training data: paired sequences of states and observations (y_1, y_2, ..., y_T, x_1, x_2, ..., x_T). p(y_i) = number of sequences starting with y_i / number of all sequences. p(y_j | y_i) = number of (y_i, y_j) pairs / number of all (y_i, y) pairs. p(x_j | y_i) = number of (y_i, x_j) pairs / number of all (y_i, x) pairs.
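Those relative-frequency estimates are plain counting over the paired sequences. A minimal sketch with hand-made toy data (the sequences below are purely illustrative):

```python
# Hedged sketch of supervised HMM estimation by counting: initial, transition,
# and emission probabilities are relative frequencies over paired
# state/observation sequences. The toy data is made up for illustration.
from collections import Counter

pairs = [(["F", "F", "B"], ["H", "T", "T"]),   # (state sequence, observations)
         (["B", "F", "F"], ["T", "H", "H"])]

starts, trans_c, emit_c = Counter(), Counter(), Counter()
for states, obs in pairs:
    starts[states[0]] += 1                         # first state of each sequence
    for a, b in zip(states, states[1:]):
        trans_c[(a, b)] += 1                       # (y_i, y_j) transition pairs
    for y, x in zip(states, obs):
        emit_c[(y, x)] += 1                        # (y_i, x_j) emission pairs

def p_init(y):        # p(y_1 = y)
    return starts[y] / sum(starts.values())

def p_trans(yj, yi):  # p(y_t = yj | y_{t-1} = yi)
    total = sum(c for (a, _), c in trans_c.items() if a == yi)
    return trans_c[(yi, yj)] / total

def p_emit(x, y):     # p(x_t = x | y_t = y)
    total = sum(c for (a, _), c in emit_c.items() if a == y)
    return emit_c[(y, x)] / total
```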
" }, { "page_index": 260, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_053.png", "page_index": 260, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:11+07:00" }, "raw_text": "Hidden Markov Models. Supervised learning example. [Figure: paired state sequences over {F, B} (e.g. FFFBFF, BFFBFF, ...) and observation sequences over {H, T} (e.g. HHTHTH, THTHTH, ...), with an F/B state-transition diagram whose probabilities are to be estimated]" }, { "page_index": 261, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_054.png", "page_index": 261, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:13+07:00" }, "raw_text": "Hidden Markov Models. Unsupervised learning. Only observation sequences are available. [Figure: observation sequences over {H, T}, e.g. HHTHTH, THTHTH, ...]
" }, { "page_index": 262, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_055.png", "page_index": 262, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:16+07:00" }, "raw_text": "Hidden Markov Models. Unsupervised learning. Only observation sequences are available. [Figure: observation sequences over {H, T}] Iterative improvement of model parameters. How?" }, { "page_index": 263, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_056.png", "page_index": 263, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:20+07:00" }, "raw_text": "Hidden Markov Models. Unsupervised learning. Initialize the estimated parameters. [Figure: F/B model with all transition and emission probabilities initialized to 0.5]" }, { "page_index": 264, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_057.png", "page_index": 264, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:23+07:00" }, "raw_text": "Hidden Markov Models. Unsupervised learning.
Initialize the estimated parameters. [Figure: F/B model with all probabilities initialized to 0.5] For each observation sequence, compute the most probable state sequence using the Viterbi algorithm." }, { "page_index": 265, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_058.png", "page_index": 265, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:25+07:00" }, "raw_text": "Hidden Markov Models. Unsupervised learning. Initialize the estimated parameters. For each observation sequence, compute the most probable state sequence using the Viterbi algorithm. Update the parameters using supervised learning on the obtained paired state-observation sequences." }, { "page_index": 266, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_059.png", "page_index": 266, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:28+07:00" }, "raw_text": "Hidden Markov Models. Unsupervised learning. Initialize the estimated parameters. For each observation sequence, compute the most probable state sequence using the Viterbi algorithm. Update the parameters using supervised learning on the obtained paired state-observation sequences. Repeat until convergence.
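The iterative scheme (decode with Viterbi, re-estimate by counting, repeat) can be sketched as below. The random initialization, add-one smoothing, and fixed iteration cap are implementation choices of this sketch, not prescribed by the slides; in particular, a uniform all-0.5 start leaves every path tied, so this sketch breaks symmetry with a seeded random start:

```python
# Hedged sketch of Viterbi training (hard EM) for a two-state HMM over {F, B}
# emitting {H, T}. Data and initialization are illustrative.
import random

states, symbols = ["F", "B"], ["H", "T"]
rng = random.Random(0)

def normalized(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Seeded random (non-uniform) initialization to break the all-0.5 symmetry.
init = normalized({y: rng.random() for y in states})
trans = {a: normalized({b: rng.random() for b in states}) for a in states}
emit = {y: normalized({x: rng.random() for x in symbols}) for y in states}

def viterbi(obs):
    """Most probable state sequence under the current parameters."""
    delta = {y: init[y] * emit[y][obs[0]] for y in states}
    back = []
    for x in obs[1:]:
        back.append({y: max(states, key=lambda yp: delta[yp] * trans[yp][y])
                     for y in states})
        delta = {y: delta[back[-1][y]] * trans[back[-1][y]][y] * emit[y][x]
                 for y in states}
    path = [max(states, key=delta.get)]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    return path[::-1]

data = [list("HHTHTH"), list("THTTTH"), list("HHTHHT")]
for _ in range(20):  # "repeat until convergence", capped here
    decoded = [viterbi(obs) for obs in data]
    # Supervised re-estimation on (decoded states, observations),
    # with add-one smoothing so no probability collapses to zero.
    init = normalized({y: 1 + sum(d[0] == y for d in decoded) for y in states})
    trans = {a: normalized({b: 1 + sum(list(zip(d, d[1:])).count((a, b))
                                       for d in decoded)
                            for b in states}) for a in states}
    emit = {y: normalized({x: 1 + sum(list(zip(d, o)).count((y, x))
                                      for d, o in zip(decoded, data))
                           for x in symbols}) for y in states}
```

Hard EM of this kind typically converges because the decodings eventually stop changing, though possibly to a local optimum; the homework's discussion of convergence can be carried out by watching when `decoded` stabilizes.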
" }, { "page_index": 267, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_060.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_060.png", "page_index": 267, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:30+07:00" }, "raw_text": "Hidden Markov Models. Application to NER: Example: \"Facebook CEO Zuckerberg visited Vietnam\". ORG = \"Facebook\", PER = \"Zuckerberg\", LOC = \"Vietnam\", NIL = \"CEO\", \"visited\"." }, { "page_index": 268, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_061.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_061.png", "page_index": 268, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:32+07:00" }, "raw_text": "Hidden Markov Models. Application to NER: Example: \"Facebook CEO Zuckerberg visited Vietnam\". ORG = \"Facebook\", PER = \"Zuckerberg\", LOC = \"Vietnam\", NIL = \"CEO\", \"visited\". States = Class labels.
Contact: nddung@hcmut.edu.vn Machine Learning 32 / 35" }, { "page_index": 269, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_062.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_062.png", "page_index": 269, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:34+07:00" }, "raw_text": "Hidden Markov Models Application to NER: * Example: \"Facebook CEO Zuckerberg visited Vietnam\" ORG = \"Facebook\" PER = \"Zuckerberg\" LOC = \"Vietnam\" NIL = \"CEO\", \"visited\" States = Class labels Observations = Words + Features Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 32 / 35" }, { "page_index": 270, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_063.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_063.png", "page_index": 270, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:37+07:00" }, "raw_text": "Hidden Markov Models BK Application to NER: PERSON START-OF-SENTENCE END-OF-SENTENCE ORGANIZATION (five other name-classes) NOT-A-NAME Bikel, D. M. (1997) A high-performance learning name-finder Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 33 / 35" }, { "page_index": 271, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_064.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_064.png", "page_index": 271, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:39+07:00" }, "raw_text": "Hidden Markov Models BK Application to NER: . What if a name is a multi-word phrase? . Example: \"...John von Neumann is ...\" B-PER = \"John\" I-PER = \"von\", \"Neumann\" O = \"is\" . BIO notation: {B-PER, I-PER, B-ORG, I-ORG, B-LOC, I-LOC, B-MISC, I-MISC, O} Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 34 / 35" }, { "page_index": 272, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_065.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_a/slide_065.png", "page_index": 272, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:42+07:00" }, "raw_text": "Homework BK . Readings: Marsland, S. (2009) Machine learning: An algorithmic perspective. Chapter 15 (graphical models). . Bikel, D. M. et al. (1997) Nymble: a high-performance learning name-finder. o HW: Apply the Viterbi algorithm to find the most probable 3-state sequence in the looking-activity example in the lecture. Write a program to carry out the unsupervised learning example for HMM in the lecture. Discuss the result, in particular the convergence of the process. Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Machine Learning 35 / 35" }, { "page_index": 273, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_001.png", "page_index": 273, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:43+07:00" }, "raw_text": "BK TP.HCM Deep Learning Optimization for Training Deep Models Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vn Faculty of Computer Science and Engineering Hochiminh city University of Technology" }, { "page_index": 274, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_002.png", "page_index": 274, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:46+07:00" }, "raw_text": "Contents BK 1. Learning vs. Pure Optimization 2. Challenges in Neural Network Optimization 3. Basic Algorithms 4. Parameter Initialization Strategies 5. Algorithms with Adaptive Learning Rates 6. Approximate Second-Order Methods Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 1/ 45" }, { "page_index": 275, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_003.png", "page_index": 275, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:47+07:00" }, "raw_text": "Learning vs. 
Pure Optimization" }, { "page_index": 276, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_004.png", "page_index": 276, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:50+07:00" }, "raw_text": "Learning vs. Pure Optimization BK Machine learning usually acts indirectly. Typically, the cost function can be written as an average over the training set, such as J(θ) = E_{(x,y)~p̂_data} L(f(x; θ), y), (1) where L is the per-example loss function, f(x; θ) is the predicted output when the input is x, and p̂_data is the empirical distribution. In the supervised learning case, y is the target output. -> Objective function with respect to the training set Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 2/ 45" }, { "page_index": 277, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_005.png", "page_index": 277, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:52+07:00" }, "raw_text": "Learning vs. Pure Optimization BK 1. 
Empirical Risk Minimization The goal of a ML algorithm is to reduce the expected generalization error (risk): If we knew the true distribution p_data(x, y), risk minimization would be an optimization task solvable by an optimization algorithm. We only have a training set of samples -> we have a machine learning problem. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 3/45" }, { "page_index": 278, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_006.png", "page_index": 278, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:54+07:00" }, "raw_text": "Learning vs. Pure Optimization BK Minimize the expected loss on the training set Minimize the empirical risk: E_{(x,y)~p̂_data} L(f(x; θ), y) = (1/m) Σ_{i=1}^{m} L(f(x(i); θ), y(i)) (2) where m is the number of training examples DL: rarely uses empirical risk minimization! Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 4 / 45" }, { "page_index": 279, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_007.png", "page_index": 279, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:56:56+07:00" }, "raw_text": "Learning vs. Pure Optimization 2. 
Surrogate Loss Functions and Early Stopping - Sometimes, the loss function is not one that can be optimized efficiently: Optimize a surrogate loss function: acts as a proxy but has advantages. Minimize a surrogate loss function but halt when a convergence criterion based on early stopping is satisfied. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 5/45" }, { "page_index": 280, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_008.png", "page_index": 280, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:00+07:00" }, "raw_text": "Learning vs. Pure Optimization BK 3. Batch and Minibatch Algorithms Objective function usually decomposes as a sum over the training examples Optimization: update the parameters based on an expected value of the cost function estimated using only a subset of the full cost function . Batch or deterministic gradient methods: process all of the training examples simultaneously in a large batch : Stochastic (or online) methods: use only a single example at a time Most algorithms used for DL fall somewhere in between, using more than one but fewer than all of the training examples Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 6/45" }, { "page_index": 281, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_009.png", "page_index": 281, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:02+07:00" }, "raw_text": "Learning vs. 
Pure Optimization BK Minibatch sizes: Larger batches: provide a more accurate estimate of the gradient, but with less than linear returns. Multicore architectures are usually underutilized by extremely small batches. Below some absolute minimum batch size, there is no reduction in the time to process a minibatch. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 7/45" }, { "page_index": 282, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_010.png", "page_index": 282, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:04+07:00" }, "raw_text": "Learning vs. Pure Optimization BK Batch size defines performance - Small batch size vs. regularizing -> tradeoff? . Minibatches should be selected randomly! Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 8/ 45" }, { "page_index": 283, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_011.png", "page_index": 283, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:05+07:00" }, "raw_text": "Challenges in Neural Network Optimization" }, { "page_index": 284, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_012.png", "page_index": 284, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:07+07:00" }, "raw_text": "Ill-Conditioning BK Causing SGD to get \"stuck\": even very small steps increase the cost function. Given the Taylor expansion of the cost function: f(x(0) − εg) ≈ f(x(0)) − ε gᵀg + (1/2) ε² gᵀHg, (3) a gradient descent step −εg will add to the cost: (1/2) ε² gᵀHg − ε gᵀg (4) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 9 / 45" }, { "page_index": 285, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_013.png", "page_index": 285, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:10+07:00" }, "raw_text": "Local Minima BK Convex optimization: local minima = global minima . What about flat area? - We have reached a good solution if we find a critical point of any kind . 
Non-convex functions (NN): many local minima Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 10/45" }, { "page_index": 286, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_014.png", "page_index": 286, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:12+07:00" }, "raw_text": "Local Minima BK : Model identifiability problem: a sufficiently large training set can rule out all but one setting of the model's parameters . Models with latent variables are often not identifiable (e.g. weight-space symmetry) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 11/45" }, { "page_index": 287, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_015.png", "page_index": 287, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:14+07:00" }, "raw_text": "Local minima BK : Local minima can be problematic if they have high cost in comparison to the global minimum : Serious problem for gradient-based optimization algorithms Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 12/45" }, { "page_index": 288, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_016.png", "page_index": 288, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:17+07:00" }, "raw_text": "Local minima BK [Figure: two plots of training curves against training time (epochs)] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 13/ 45" }, { "page_index": 289, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_017.png", "page_index": 289, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:20+07:00" }, "raw_text": "Plateaus, Saddle Points and Other Flat Regions BK Saddle point: point with zero gradient . 
A local minimum along one cross-section of the cost function and a local maximum along another cross-section In low-dimensional spaces: local minima are common In higher-dimensional spaces: local minima are rare and saddle points are more common Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 14/45" }, { "page_index": 290, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_018.png", "page_index": 290, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:24+07:00" }, "raw_text": "Plateaus, Saddle Points and Other Flat Regions BK Zero gradient, and Hessian with... [Figure: three surfaces: a minimum (all positive eigenvalues), a maximum (all negative eigenvalues), and a saddle point (some positive and some negative eigenvalues)] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 15/45" }, { "page_index": 291, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_019.png", "page_index": 291, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:27+07:00" }, "raw_text": "Plateaus, Saddle Points and Other Flat Regions BK Important properties of many random functions: eigenvalues of the Hessian become more likely to be positive as we reach regions of lower cost Critical points with high cost are far more likely to be saddle points Critical points with extremely high cost are more likely to be local maxima . Gradient descent empirically seems to be able to escape saddle points in many cases Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 16/45" }, { "page_index": 292, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_020.png", "page_index": 292, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:29+07:00" }, "raw_text": "Cliffs and Exploding Gradients BK NN with many layers: often have extremely steep regions resembling cliffs Reason: multiplication of several large weights together The gradient update step can move the parameters extremely far, usually jumping off of the cliff structure altogether Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 17/45" }, { "page_index": 293, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", 
"source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_021.png", "page_index": 293, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:31+07:00" }, "raw_text": "Cliffs and Exploding Gradients BK [Figure: cost surface with a cliff structure] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 18 / 45" }, { "page_index": 294, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_022.png", "page_index": 294, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:33+07:00" }, "raw_text": "Cliffs and Exploding Gradients BK Cliff structures are most common in the cost functions for RNNs : Involve a multiplication of many factors, with one factor for each time step Long temporal sequences -> extreme amount of multiplication Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 19/45" }, { "page_index": 295, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_023.png", "page_index": 295, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:34+07:00" }, "raw_text": "Basic Algorithms" }, { "page_index": 296, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_024.png", "page_index": 296, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": 
"2025-11-01T10:57:36+07:00" }, "raw_text": "Stochastic Gradient Descent (SGD) BK Require: Learning rate schedule ε_1, ε_2, ... Require: Initial parameter θ k ← 1 while stopping criterion not met do Sample a minibatch of m examples from the training set {x(1), ..., x(m)} with corresponding targets y(i) Compute gradient estimate: ĝ ← (1/m) ∇_θ Σ_i L(f(x(i); θ), y(i)) Apply update: θ ← θ − ε_k ĝ k ← k + 1 end while Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 20/ 45" }, { "page_index": 297, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_025.png", "page_index": 297, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:39+07:00" }, "raw_text": "Stochastic Gradient Descent (SGD) BK SGD gradient estimator introduces a source of noise: the random sampling of m training examples . Does not vanish! : Crucial parameter: learning rate ε_k 
Should be adapted over time A sufficient condition to guarantee convergence: Σ_{k=1}^∞ ε_k = ∞, and Σ_{k=1}^∞ ε_k² < ∞ (5) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 21/45" }, { "page_index": 298, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_026.png", "page_index": 298, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:41+07:00" }, "raw_text": "Stochastic Gradient Descent (SGD) BK Common practice: decay the learning rate linearly until iteration τ: ε_k = (1 − α) ε_0 + α ε_τ (6) with α = k/τ After iteration τ, leave ε constant Computation time per update does not grow with the number of training examples Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 22/45" }, { "page_index": 299, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_027.png", "page_index": 299, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:43+07:00" }, "raw_text": "Momentum BK Accelerate learning: high curvature, small but consistent gradients, or noisy gradients : Accumulate an exponentially decaying moving average of past gradients and continue to move in their direction Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 23/45" }, { "page_index": 300, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_028.png", "page_index": 300, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:45+07:00" }, "raw_text": "Momentum BK [Figure: contour plot of a cost function with the trajectory taken by gradient descent with momentum] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 24/45" }, { "page_index": 301, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_029.png", "page_index": 301, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:47+07:00" }, "raw_text": "Momentum BK Require: Learning rate ε, momentum parameter α Require: Initial parameter θ, initial velocity v while stopping criterion not met do Sample a minibatch of m examples from the training set {x(1), ..., x(m)} with corresponding targets y(i) Compute gradient estimate: g ← (1/m) ∇_θ Σ_i L(f(x(i); θ), y(i)). 
Compute velocity update: v ← αv − εg Apply update: θ ← θ + v end while Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 25/45" }, { "page_index": 302, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_030.png", "page_index": 302, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:51+07:00" }, "raw_text": "Nesterov Momentum BK Require: Learning rate ε, momentum parameter α Require: Initial parameter θ, initial velocity v while stopping criterion not met do Sample a minibatch of m examples from the training set {x(1), ..., x(m)} with corresponding targets y(i) Apply interim update: θ̃ ← θ + αv. Compute gradient (at interim point): g ← (1/m) ∇_θ̃ Σ_i L(f(x(i); θ̃), y(i)) Compute velocity update: v ← αv − εg. Apply update: θ ← θ + v. end while Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 26/45" }, { "page_index": 303, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_031.png", "page_index": 303, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:52+07:00" }, "raw_text": "Parameter Initialization Strategies" }, { "page_index": 304, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_032.png", "page_index": 304, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:54+07:00" }, "raw_text": "Parameter 
Initialization Strategies BK Modern initialization strategies: simple and heuristic The initial parameters need to \"break symmetry\" between different units Set the biases for each unit to heuristically chosen constants Extra parameters (e.g. parameters encoding the conditional variance of a prediction) are usually set to heuristically chosen constants Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 27/45" }, { "page_index": 305, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_033.png", "page_index": 305, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:57:57+07:00" }, "raw_text": "Parameter Initialization Strategies BK Initialize all the weights in the model to values drawn randomly from a Gaussian or uniform distribution The scale of the initial distribution has a large effect . The outcome of the optimization procedure : The ability of the network to generalize Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 28/45" }, { "page_index": 306, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_034.png", "page_index": 306, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:00+07:00" }, "raw_text": "Parameter Initialization Strategies BK Larger initial weights -> stronger symmetry-breaking effect, helping to avoid redundant units * Too large initial weights -> exploding values during forward propagation or back-propagation How to choose? 
Optimization: the weights should be large enough to propagate information successfully . Regularization: encourage making the weights smaller Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 29/45" }, { "page_index": 307, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_035.png", "page_index": 307, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:03+07:00" }, "raw_text": "Parameter Initialization Strategies BK Choosing bias: If a bias is for an output unit Initialize the bias to obtain the right marginal statistics of the output Assume that the initial weights are small enough that the output of the unit is determined only by the bias Set the bias to the inverse of the activation function applied to the marginal statistics of the output in the training set Sometimes we may want to choose the bias to avoid causing too much saturation at initialization : When a unit controls whether other units are able to participate in a function Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 30/ 45" }, { "page_index": 308, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_036.png", "page_index": 308, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:04+07:00" }, "raw_text": "Algorithms with Adaptive Learning Rates" }, { "page_index": 309, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_037.png", "page_index": 309, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:06+07:00" }, "raw_text": "AdaGrad BK Individually adapts the learning rates of all model parameters: scaling them inversely proportional to the square root of the sum of all of their historical squared values Decreasing rate depends on the partial derivative of the parameter : The net effect is greater progress in the more gently sloped directions of parameter space Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 31/45" }, { "page_index": 310, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_038.png", "page_index": 310, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:09+07:00" }, "raw_text": "AdaGrad BK Require: Global learning rate ε Require: Initial parameter θ Require: Small constant δ, perhaps 10^-7, for numerical stability Initialize gradient
accumulation variable r = 0 while stopping criterion not met do Sample a minibatch of m examples from the training set {x^(1), ..., x^(m)} with corresponding targets y^(i). Compute gradient: g ← (1/m) ∇θ Σi L(f(x^(i); θ), y^(i)). Accumulate squared gradient: r ← r + g ⊙ g (element-wise). Compute update: Δθ ← -(ε / (δ + √r)) ⊙ g (division and square root applied element-wise). Apply update: θ ← θ + Δθ end while Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 32/45" }, { "page_index": 311, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_039.png", "page_index": 311, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:11+07:00" }, "raw_text": "RMSProp BK Modify AdaGrad to perform better in the non-convex setting: Change the gradient accumulation into an exponentially weighted moving average. Compared to AdaGrad: introduces a new hyperparameter, ρ, that controls the length scale of the moving average. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 33/45" }, { "page_index": 312, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_040.png", "page_index": 312, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:13+07:00" }, "raw_text": "RMSProp BK Require: Global learning rate ε, decay rate ρ Require: Initial parameter θ Require: Small constant δ, usually 10^-6, used to stabilize division by small numbers Initialize accumulation variable r = 0 while stopping criterion not met do Sample a minibatch of m examples from the training set {x^(1), ..., x^(m)} with corresponding targets y^(i). Compute gradient: g ← (1/m) ∇θ Σi L(f(x^(i); θ), y^(i)). 
Accumulate squared gradient: r ← ρ r + (1 - ρ) g ⊙ g. Compute update: Δθ ← -(ε / √(δ + r)) ⊙ g. Apply update: θ ← θ + Δθ end while Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 34/45" }, { "page_index": 313, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_041.png", "page_index": 313, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:15+07:00" }, "raw_text": "Adam BK \"Adam\" derives from the phrase \"adaptive moments\": A variant on the combination of RMSProp and momentum with a few important distinctions. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 35/45" }, { "page_index": 314, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_042.png", "page_index": 314, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:19+07:00" }, "raw_text": "Adam BK Require: Step size ε (Suggested default: 0.001) Require: Exponential decay rates for moment estimates, ρ1 and ρ2 in [0,1) (Suggested defaults: 0.9 and 0.999 respectively) Require: Small constant δ used for numerical stabilization (Suggested default: 10^-8) Require: Initial parameters θ Initialize 1st and 2nd moment variables s = 0, r = 0 Initialize time step t = 0 while stopping criterion not met do Sample a minibatch of m examples from the training set {x^(1), ..., x^(m)} with corresponding targets y^(i). 
Compute gradient: g ← (1/m) ∇θ Σi L(f(x^(i); θ), y^(i)). t ← t + 1 The second-order moment estimate may have high bias early in training. Adam: fairly robust to the choice of hyperparameters. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 37/45" }, { "page_index": 316, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_044.png", "page_index": 316, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:23+07:00" }, "raw_text": "Choosing the Right Optimization Algorithm BK Which algorithm should one choose? * SGD, SGD with momentum, RMSProp, RMSProp with momentum, AdaDelta, Adam: Your choice depends largely on familiarity with the algorithm (for ease of hyperparameter tuning). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 38/45" }, { "page_index": 317, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_045.png", "page_index": 317, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:24+07:00" }, "raw_text": "Approximate Second-Order Methods" }, { "page_index": 318, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_046.png", "page_index": 318, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:26+07:00" }, "raw_text": "Newton's Method BK An optimization scheme based on 
using a second-order Taylor series expansion to approximate J(θ) near some point θ0: J(θ) ≈ J(θ0) + (θ - θ0)ᵀ ∇θJ(θ0) + (1/2)(θ - θ0)ᵀ H (θ - θ0) (7) where H is the Hessian of J with respect to θ evaluated at θ0. Newton parameter update rule: θ* = θ0 - H^-1 ∇θJ(θ0) (8) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 39/45" }, { "page_index": 319, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_047.png", "page_index": 319, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:29+07:00" }, "raw_text": "Newton's Method BK Require: Initial parameter θ0 Require: Training set of m examples while stopping criterion not met do Compute update: Δθ = -H^-1 g Apply update: θ = θ + Δθ end while Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 40/45" }, { "page_index": 320, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_048.png", "page_index": 320, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:32+07:00" }, "raw_text": "Conjugate Gradients BK Efficiently avoid the calculation of the inverse Hessian by iteratively descending conjugate directions. Seek to find a search direction that is conjugate to the previous line search direction, i.e. it will not undo progress made in that direction. At training iteration t, the next search direction dt takes the form: dt = ∇θJ(θ) + βt dt-1 (9) βt is a coefficient whose magnitude controls how much of the direction, dt-1, we should add back to the current search direction. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 41/45" }, { "page_index": 321, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_049.png", "page_index": 321, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:34+07:00" }, "raw_text": "Conjugate Gradients BK Require: Initial parameters θ0 Require: Training set of m examples Initialize ρ0 = 0 Initialize g0 = 0 Initialize t = 1 while stopping criterion not met do Initialize the gradient gt = 0 Compute gradient: gt ← (1/m) ∇θ Σi L(f(x^(i); θ), y^(i)) Compute βt = ((gt - gt-1)ᵀ gt) / (gt-1ᵀ gt-1) (Nonlinear conjugate gradient: optionally reset βt to zero, for example if t is a multiple of some constant k, such as k = 5) Compute search direction: ρt = -gt + βt ρt-1 Perform line search to find: ε* = argmin_ε (1/m) Σ(i=1..m) L(f(x^(i); θt + ε ρt), y^(i)) (On a truly quadratic cost function, analytically solve for ε* rather than explicitly searching for it) Apply update: θt+1 = θt + ε* ρt t ← t + 1 end while Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 42/45" }, { "page_index": 322, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_050.png", "page_index": 322, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:36+07:00" }, "raw_text": "Conjugate Gradients BK Two directions, dt and dt-1, are defined as conjugate if dtᵀ H(J) dt-1 = 0, i.e. dtᵀ H dt-1 = 0 (10) Can we calculate the conjugate directions without resorting to these calculations? 
Two directions, dt and dt-1, are defined as conjugate if dtᵀ H(J) dt-1 = 0, i.e. dtᵀ H dt-1 = 0 (11) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 43/45" }, { "page_index": 323, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_051.png", "page_index": 323, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:39+07:00" }, "raw_text": "Conjugate Gradients BK Two popular methods for computing the βt: Fletcher-Reeves: βt = (∇θJ(θt)ᵀ ∇θJ(θt)) / (∇θJ(θt-1)ᵀ ∇θJ(θt-1)) (12) Polak-Ribière: βt = ((∇θJ(θt) - ∇θJ(θt-1))ᵀ ∇θJ(θt)) / (∇θJ(θt-1)ᵀ ∇θJ(θt-1)) (13) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 44/45" }, { "page_index": 324, "chapter_num": 5, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_5_b/slide_052.png", "page_index": 324, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:42+07:00" }, "raw_text": "BFGS BK The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm: brings some of the advantages of Newton's method without the computational burden. BFGS is similar to CG. Newton's update is given by θ* = θ0 - H^-1 ∇θJ(θ0) (14) where H is the Hessian of J with respect to θ evaluated at θ0. Approximate the inverse with a matrix Mt that is iteratively refined by low-rank updates to become a better approximation of H^-1. Unlike CG, the success of BFGS is not heavily dependent on the line search finding a point very close to the true minimum along the line. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 45/45" }, { "page_index": 325, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_001.png", "page_index": 325, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:44+07:00" }, "raw_text": "BK TP.HCM Machine Learning Support Vector Machine Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vn Faculty of Computer Science and Engineering Ho Chi Minh City University of Technology" }, { "page_index": 326, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_002.png", "page_index": 326, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:46+07:00" }, "raw_text": "Contents BK 1. Analytical Geometry 2. Maximum Margin Classifiers 3. Lagrange Multipliers 4. Non-linearly Separable Data 5. Soft-margin Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 1 / 33" }, { "page_index": 327, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_003.png", "page_index": 327, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:47+07:00" }, "raw_text": "Analytical Geometry" }, { "page_index": 328, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_004.png", "page_index": 328, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:49+07:00" }, "raw_text": "Analytical Geometry BK [Figure: decision boundary y(x) = w·x + w0 = 0 separating region R1 (y > 0) from region R2 (y < 0); the signed distance from a point x to the boundary is y(x)/||w||] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 2 / 33" }, { "page_index": 329, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_005.png", "page_index": 329, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:52+07:00" }, "raw_text": "Analytical Geometry BK [Figure: scaling both parameters by a, y(x) = a·w·x + a·w0, leaves the decision boundary y = 0 unchanged] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 3 / 33" }, { "page_index": 330, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_006.png", "page_index": 330, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:53+07:00" }, "raw_text": "Maximum Margin Classifiers" }, { "page_index": 331, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_007.png", "page_index": 331, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:55+07:00" }, "raw_text": "Maximum margin classifiers BK Assume that the data are linearly separable. Decision boundary equation: y(x) = w·x + b Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 4 / 33" }, { "page_index": 332, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_008.png", "page_index": 332, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:58:57+07:00" }, "raw_text": "Maximum margin classifiers BK Margin: the smallest distance between the decision boundary and any of the samples [Figure: margin around the decision boundary y = 0] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 5 / 33" }, { "page_index": 333, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_009.png", "page_index": 333, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:00+07:00" }, "raw_text": "Maximum margin classifiers BK Margin: the smallest distance between the decision boundary and any of the samples [Figure: the decision boundary y = 0 chosen with maximum margin] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 5 / 33" }, { "page_index": 334, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_010.png", "page_index": 334, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:02+07:00" }, "raw_text": "Maximum margin classifiers BK Support vectors: samples at the two margins [Figure: support vectors lying on the max-margin boundaries] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 6 / 33" }, { "page_index": 335, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_011.png", "page_index": 335, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:03+07:00" }, "raw_text": "Maximum margin classifiers BK Scaling y (support vectors) to be 1 or -1: [Figure: max margin] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 7 / 33" }, { "page_index": 336, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_012.png", "page_index": 336, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:05+07:00" }, "raw_text": "Maximum margin classifiers BK Signed distance between the decision boundary and a sample xn: rn = y(xn) / ||w|| Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 8 / 33" }, { "page_index": 337, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_013.png", "page_index": 337, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:07+07:00" }, "raw_text": "Maximum margin classifiers BK Signed distance between the decision boundary and a sample xn: y(xn) / ||w|| Absolute distance between the decision boundary and a sample xn: tn·y(xn) / ||w|| where tn = +1 iff y(xn) > 0 and tn = -1 iff y(xn) < 0 Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 8 / 33" }, { "page_index": 338, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_014.png", "page_index": 338, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:10+07:00" }, "raw_text": "Maximum margin classifiers BK Maximum margin: arg max(w,b) { (1/||w||) min_n [tn·(w·xn + b)] } with the constraint: tn·(w·xn + b) ≥ 1 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 9 / 33" }, { "page_index": 339, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_015.png", "page_index": 339, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:12+07:00" }, "raw_text": "Maximum margin classifiers BK To be optimized: arg min(w,b) (1/2)||w||^2 with the constraint: tn·(w·xn + b) ≥ 1 Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 10 / 33" }, { "page_index": 340, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_016.png", "page_index": 340, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:13+07:00" }, "raw_text": "Lagrange Multipliers" }, { "page_index": 341, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_017.png", "page_index": 341, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:15+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Joseph-Louis Lagrange (born 25 January 1736, Turin - died 10 April 1813, Paris), also reported as Giuseppe Luigi Lagrange, was an Italian Enlightenment Era mathematician and astronomer. He made significant contributions to the fields of analysis, number theory, and both classical and celestial mechanics. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 11 / 33" }, { "page_index": 342, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_018.png", "page_index": 342, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:17+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Problem: arg max_x f(x) with the constraint: g(x) = 0 Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 12 / 33" }, { "page_index": 343, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_019.png", "page_index": 343, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:19+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Solution is the stationary point of the Lagrange function L(x,λ) = f(x) + λ·g(x) such that: ∂L(x,λ)/∂xn = ∂f(x)/∂xn + λ·∂g(x)/∂xn = 0 and ∂L(x,λ)/∂λ = g(x) = 0 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 13 / 33" }, { "page_index": 344, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_020.png", "page_index": 344, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:21+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Example: f(x) = 1 - u^2 - v^2 with the constraint g(x) = u + v - 1 = 0 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 14 / 33" }, { "page_index": 345, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_021.png", "page_index": 345, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:23+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK 
Lagrange function: L(x,λ) = f(x) + λ·g(x) = (1 - u^2 - v^2) + λ·(u + v - 1) ∂L(x,λ)/∂u = ∂f(x)/∂u + λ·∂g(x)/∂u = -2u + λ = 0 ∂L(x,λ)/∂v = ∂f(x)/∂v + λ·∂g(x)/∂v = -2v + λ = 0 ∂L(x,λ)/∂λ = g(x) = u + v - 1 = 0 Solution: u = 1/2 and v = 1/2 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 15 / 33" }, { "page_index": 346, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_022.png", "page_index": 346, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:25+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Example: f(x) = 1 - u^2 - v^2 with the constraint: g(x) = u + v - 1 = 0 [Figure: contours of f(u,v) and the constraint line g(u,v) = 0] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 16 / 33" }, { "page_index": 347, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_023.png", "page_index": 347, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:26+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Problem: arg max_x f(x) with the inequality constraint: g(x) ≥ 0 Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 17 / 33" }, { "page_index": 348, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_024.png", "page_index": 348, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:29+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Solution is the stationary point of the Lagrange function: L(x,λ) = f(x) + λ·g(x) such that: ∂L(x,λ)/∂xn = ∂f(x)/∂xn + λ·∂g(x)/∂xn = 0 and g(x) ≥ 0, λ ≥ 0, λ·g(x) = 0 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 18 / 33" }, { "page_index": 349, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_025.png", "page_index": 349, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:31+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK To be optimized: arg min(w,b) (1/2)||w||^2 with the constraint: tn·(w·xn + b) ≥ 1 Lagrange function for maximum margin classifier: L(w,b,a) = (1/2)||w||^2 - Σ(n=1..N) an·(tn·(w·xn + b) - 1) with: tn·(w·xn + b) - 1 ≥ 0, an ≥ 0, an·(tn·(w·xn + b) - 1) = 0 Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 19 / 33" }, { "page_index": 350, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_026.png", "page_index": 350, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:34+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Lagrange function for maximum margin classifier: L(w,b,a) = (1/2)||w||^2 - Σ(n=1..N) an·(tn·(w·xn + b) - 1) Solution for w: ∂L(w,b,a)/∂w = 0 ⇒ w = Σ(n=1..N) an·tn·xn ∂L(w,b,a)/∂b = 0 ⇒ Σ(n=1..N) an·tn = 0 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 20 / 33" }, { "page_index": 351, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_027.png", "page_index": 351, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:36+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Lagrange function for maximum margin classifier: L(w,b,a) = (1/2)||w||^2 - Σ(n=1..N) an·(tn·(w·xn + b) - 1) Solution for a: dual representation to be optimized L*(a) = Σ(n=1..N) an - (1/2) Σ(n=1..N) Σ(m=1..N) an·am·tn·tm·xn·xm with the constraints: an ≥ 0 and Σ(n=1..N) an·tn = 0 Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 21 / 33" }, { "page_index": 352, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_028.png", "page_index": 352, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:39+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Lagrange function for maximum margin classifier: L(w,b,a) = (1/2)||w||^2 - Σ(n=1..N) an·(tn·(w·xn + b) - 1) Solution for a: dual representation to be optimized L*(a) = Σ(n=1..N) an - (1/2) Σ(n=1..N) Σ(m=1..N) an·am·tn·tm·xn·xm Why optimization via dual representation? Sparsity: an = 0 if n is not a support vector. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 22 / 33" }, { "page_index": 353, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_029.png", "page_index": 353, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:42+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Lagrange function for maximum margin classifier: L(w,b,a) = (1/2)||w||^2 - Σ(n=1..N) an·(tn·(w·xn + b) - 1) with an·(tn·(w·xn + b) - 1) = 0 Solution for b: b = (1/|S|) Σ(n∈S) (tn - Σ(m∈S) am·tm·xm·xn) where S is the set of support vectors (an ≠ 0) Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 23 / 33" }, { "page_index": 354, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_030.png", "page_index": 354, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:44+07:00" }, "raw_text": "Optimization using Lagrange multipliers BK Classification: y(x) = w·x + b = Σ(n=1..N) an·tn·xn·x + b y(x) > 0 → +1, y(x) < 0 → -1 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 24 / 33" }, { "page_index": 355, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_031.png", "page_index": 355, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:45+07:00" }, "raw_text": "Non-linearly Separable Data" }, { "page_index": 356, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_032.png", "page_index": 356, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:47+07:00" }, "raw_text": "Kernel trick for non-linearly separable data BK Mapping the data points into a high dimensional feature space. Example 1: Original space: (x). New space: (x, x^2). [Figure: 1-D points become linearly separable in the (x, x^2) plane] Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 25 / 33" }, { "page_index": 357, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_033.png", "page_index": 357, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:49+07:00" }, "raw_text": "Kernel trick for non-linearly separable data BK Example 2: Original space: (u, v). New space: ((u^2 + v^2)^(1/2), arctan(v/u)). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 26 / 33" }, { "page_index": 358, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_034.png", "page_index": 358, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:51+07:00" }, "raw_text": "Kernel trick for non-linearly separable data BK Example 3: XOR function. Truth table (In1, In2 → t): (0, 0) → 0; (0, 1) → 1; (1, 0) → 1; (1, 1) → 0. (figure: the four points plotted in the In1-In2 plane) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 27 / 33" }, { "page_index": 359, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_035.png", "page_index": 359, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:54+07:00" }, "raw_text": "Kernel trick for non-linearly separable data BK Example 3: XOR function with an added feature In3 (table: In1, In2, In3, Output; figure: the points plotted in the mapped space). Lecturer: Duc Dung Nguyen, PhD. 
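The XOR example can be made concrete: the four points are not linearly separable in (In1, In2), but after mapping into a space with a product feature a single hyperplane works. The mapping and the weights below are a hand-picked illustration, not values from the slides.

```python
# Sketch: XOR is not linearly separable in (x1, x2) alone, but after the
# (hypothetical) mapping (x1, x2) -> (x1, x2, x1*x2) one hyperplane works.
# w and b are hand-picked for illustration.

def phi(x1, x2):
    return (x1, x2, x1 * x2)

def xor_classify(x1, x2, w=(1.0, 1.0, -2.0), b=-0.5):
    z = sum(wi * fi for wi, fi in zip(w, phi(x1, x2))) + b
    return 1 if z > 0 else 0
```

This reproduces the truth table above: (0,0) and (1,1) land on the negative side of the hyperplane, (0,1) and (1,0) on the positive side.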
Contact: nddung@hcmut.edu.vn Machine Learning 28 / 33" }, { "page_index": 360, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_036.png", "page_index": 360, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:56+07:00" }, "raw_text": "Kernel trick for non-linearly separable data BK Classification in the new space: y(x) = w·φ(x) + b = Σ_{n=1..N} a_n t_n φ(x_n)·φ(x) + b. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 29 / 33" }, { "page_index": 361, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_037.png", "page_index": 361, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T10:59:59+07:00" }, "raw_text": "Kernel trick for non-linearly separable data BK Classification in the new space: y(x) = w·φ(x) + b = Σ_{n=1..N} a_n t_n φ(x_n)·φ(x) + b. Computational complexity of φ(x_n)·φ(x) is high due to the high dimension of φ(·). Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 29 / 33" }, { "page_index": 362, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_038.png", "page_index": 362, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:01+07:00" }, "raw_text": "Kernel trick for non-linearly separable data BK Classification in the new space: y(x) = w·φ(x) + b = Σ_{n=1..N} a_n t_n φ(x_n)·φ(x) + b. Computational complexity of φ(x_n)·φ(x) is high due to the high dimension of φ(·). Kernel trick: φ(x_n)·φ(x_m) = K(x_n, x_m). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 29 / 33" }, { "page_index": 363, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_039.png", "page_index": 363, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:03+07:00" }, "raw_text": "Kernel trick for non-linearly separable data BK A typical kernel function: K(u, v) = (1 + u·v)^2. φ((u_1, u_2, ..., u_d)) = (1, √2 u_1, √2 u_2, ..., √2 u_d, √2 u_1 u_2, √2 u_1 u_3, ..., √2 u_{d-1} u_d, u_1^2, ..., u_d^2). φ(u)·φ(v) = 1 + 2 Σ_{i=1..d} u_i v_i + 2 Σ_{i=1..d-1} Σ_{j=i+1..d} u_i v_i u_j v_j + Σ_{i=1..d} u_i^2 v_i^2 = K(u, v). Is φ(x) guaranteed to be separable? Lecturer: Duc Dung Nguyen, PhD. 
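The kernel identity above can be checked numerically for d = 2; a small sketch, where phi is the explicit d = 2 version of the expansion on the slide:

```python
import math

def K(u, v):
    # degree-2 polynomial kernel: K(u, v) = (1 + u.v)^2
    return (1 + sum(a * b for a, b in zip(u, v))) ** 2

def phi(u):
    # explicit feature map for d = 2:
    # (1, sqrt(2) u1, sqrt(2) u2, sqrt(2) u1 u2, u1^2, u2^2)
    u1, u2 = u
    r2 = math.sqrt(2)
    return (1.0, r2 * u1, r2 * u2, r2 * u1 * u2, u1 ** 2, u2 ** 2)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))
```

For any u, v the identity dot(phi(u), phi(v)) == K(u, v) holds up to rounding, so the kernel evaluates the high-dimensional inner product without ever constructing phi.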
Contact: nddung@hcmut.edu.vn Machine Learning 30 / 33" }, { "page_index": 364, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_040.png", "page_index": 364, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:04+07:00" }, "raw_text": "Soft-margin" }, { "page_index": 365, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_041.png", "page_index": 365, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:06+07:00" }, "raw_text": "Soft margin SVM BK Soft-margin SVM: to allow some of the training samples to be misclassified. Slack variable: ξ_n. (figure: decision boundary with contours y = 1, y = 0, y = -1; points on or inside the correct margin have ξ = 0) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 31 / 33" }, { "page_index": 366, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_042.png", "page_index": 366, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:09+07:00" }, "raw_text": "Soft margin SVM BK New constraints: t_n (w·x_n + b) ≥ 1 - ξ_n, ξ_n ≥ 0. Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Machine Learning 32 / 33" }, { "page_index": 367, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_043.png", "page_index": 367, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:11+07:00" }, "raw_text": "Soft margin SVM BK New constraints: t_n (w·x_n + b) ≥ 1 - ξ_n, ξ_n ≥ 0. To be minimized: (1/2)||w||^2 + C Σ_{n=1..N} ξ_n. C > 0: controls the trade-off between the margin and the slack variable penalty. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Machine Learning 32 / 33" }, { "page_index": 368, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_a/slide_044.png", "page_index": 368, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:13+07:00" }, "raw_text": "Summary BK SVM is a sparse kernel method. Soft margin SVM is to deal with non-linearly separable data after kernel mapping. Lecturer: Duc Dung Nguyen, PhD. 
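The soft-margin objective can be evaluated directly once the slacks are written in their tightest feasible form, ξ_n = max(0, 1 - t_n(w·x_n + b)); a minimal sketch with illustrative values:

```python
# Sketch: soft-margin objective (1/2)||w||^2 + C * sum_n xi_n with the
# tightest feasible slacks xi_n = max(0, 1 - t_n (w.x_n + b)).

def soft_margin_objective(w, b, X, t, C):
    wnorm2 = sum(wi * wi for wi in w)
    slack = sum(
        max(0.0, 1.0 - tn * (sum(wi * xi for wi, xi in zip(w, xn)) + b))
        for xn, tn in zip(X, t)
    )
    return 0.5 * wnorm2 + C * slack
```

With w = (1.0,), b = 0 and points x = 2, -2, 0.5 labelled +1, -1, +1, only the third point violates the margin (slack 0.5), so at C = 1 the objective is 0.5 + 0.5 = 1.0; a larger C penalizes that violation more heavily.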
Contact: nddung@hcmut.edu.vn Machine Learning 33 / 33" }, { "page_index": 369, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_001.png", "page_index": 369, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:15+07:00" }, "raw_text": "BK TP.HCM Deep Learning Convolutional Networks Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vn Faculty of Computer Science and Engineering Ho Chi Minh City University of Technology" }, { "page_index": 370, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_002.png", "page_index": 370, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:16+07:00" }, "raw_text": "Contents BK 1. Convolutional Networks 2. Pooling 3. Variants of the Basic Convolution Function 4. Problems of Convolutional Networks 5. Random or Unsupervised Features 6. 
Case Study Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 1 / 52" }, { "page_index": 371, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_003.png", "page_index": 371, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:17+07:00" }, "raw_text": "Convolutional Networks" }, { "page_index": 372, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_004.png", "page_index": 372, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:20+07:00" }, "raw_text": "Convolutional Networks Convolutional networks (Convolutional Neural Networks, or CNNs): a specialized kind of neural network for processing data that has a known, grid-like topology. Convolution: a specialized kind of linear operation. CNNs are simply NNs that use convolution in place of general matrix multiplication in at least one of their layers. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 2 / 52" }, { "page_index": 373, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_005.png", "page_index": 373, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:22+07:00" }, "raw_text": "The Convolution Operation BK 
Signal processing: s(t) = ∫ x(τ) w(t - τ) dτ = ∫ x(t - τ) w(τ) dτ (1). The convolution operation is typically denoted with an asterisk: s(t) = (x * w)(t) (2). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 3 / 52" }, { "page_index": 374, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_006.png", "page_index": 374, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:24+07:00" }, "raw_text": "The Convolution Operation BK (figure: a function f(τ) plotted against τ) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 4 / 52" }, { "page_index": 375, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_007.png", "page_index": 375, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:25+07:00" }, "raw_text": "The Convolution Operation BK The continuous signal is unrealistic. Discrete convolution: s(t) = Σ_{τ=-∞..∞} x(τ) w(t - τ) (3). Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 5 / 52" }, { "page_index": 376, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_008.png", "page_index": 376, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:27+07:00" }, "raw_text": "The Convolution Operation BK In ML applications: The input is usually a multidimensional array of data. The kernel is usually a multidimensional 
array of parameters that are adapted by the learning algorithm. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 6 / 52" }, { "page_index": 377, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_009.png", "page_index": 377, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:29+07:00" }, "raw_text": "The Convolution Operation BK Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 7 / 52" }, { "page_index": 378, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_010.png", "page_index": 378, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:32+07:00" }, "raw_text": "The Convolution Operation BK We often use convolutions over more than one axis at a time. If we have a 2D image I, we also need a 2D kernel K: S(i, j) = (I * K)(i, j) = Σ_m Σ_n I(m, n) K(i - m, j - n). (4) Convolution is commutative: S(i, j) = (K * I)(i, j) = Σ_m Σ_n I(i - m, j - n) K(m, n). 
(5) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 8 / 52" }, { "page_index": 379, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_011.png", "page_index": 379, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:36+07:00" }, "raw_text": "The Convolution Operation BK (figure: 2D convolution of an input grid [a b c d; e f g h; i j k l] with a 2x2 kernel [w x; y z], producing outputs such as aw + bx + ey + fz) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 9 / 52" }, { "page_index": 380, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_012.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_012.png", "page_index": 380, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:38+07:00" }, "raw_text": "The Convolution Operation BK Many ML frameworks implement convolution as cross-correlation. 
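The 2D operation in equation (4), and its relation to cross-correlation, can be sketched in pure Python (a minimal 'valid'-region illustration, not the lecture's code): cross-correlation slides the kernel without flipping, while true convolution is the same operation with the kernel flipped on both axes.

```python
# Sketch: 'valid' 2D cross-correlation (what many frameworks call
# convolution) and true convolution, which flips the kernel first.

def cross_correlate2d(I, K):
    hi, wi = len(I), len(I[0])
    hk, wk = len(K), len(K[0])
    return [[sum(I[i + m][j + n] * K[m][n]
                 for m in range(hk) for n in range(wk))
             for j in range(wi - wk + 1)]
            for i in range(hi - hk + 1)]

def convolve2d(I, K):
    flipped = [row[::-1] for row in K[::-1]]  # flip both axes
    return cross_correlate2d(I, flipped)
```

With I = [[1,2,3],[4,5,6],[7,8,9]] and K = [[1,0],[0,0]], cross-correlation picks the top-left entry of each window while convolution picks the bottom-right: a learned kernel simply ends up flipped between the two conventions.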
This does not mislead the ML algorithm: it will learn the appropriate values of the kernel in the appropriate place. An algorithm based on convolution will learn a kernel that is flipped relative to the kernel learned by an algorithm without flipping. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 10 / 52" }, { "page_index": 381, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_013.png", "page_index": 381, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:40+07:00" }, "raw_text": "The Motivation BK Sparse interactions. Parameter sharing. Equivariant representations. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 11 / 52" }, { "page_index": 382, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_014.png", "page_index": 382, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:42+07:00" }, "raw_text": "The Motivation BK Sparse interactions: Traditional networks: every output unit interacts with every input unit. Large kernels → small kernels. Memory efficiency. Computation requires fewer operations. Units in deeper layers may indirectly interact with a larger portion of the input. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 12 / 52" }, { "page_index": 383, "chapter_num": 6, "source_file": 
"/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_015.png", "page_index": 383, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:45+07:00" }, "raw_text": "The Motivation BK (figure: sparse connections due to a small convolution kernel vs. dense connections in a fully connected layer, viewed from an input unit) Deep Learning 13 / 52" }, { "page_index": 384, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_016.png", "page_index": 384, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:47+07:00" }, "raw_text": "The Motivation BK (figure: sparse connections due to a small convolution kernel vs. dense connections, viewed from an output unit) Deep Learning 14 / 52" }, { "page_index": 385, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_017.png", "page_index": 385, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:49+07:00" }, "raw_text": "The Motivation BK (figure: the receptive field of units g in deeper layers grows through the intermediate layer h over the inputs x) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 15 / 52" }, { "page_index": 386, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_018.png", "page_index": 386, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:51+07:00" 
}, "raw_text": "The Motivation Parameter Sharing: using the same parameter for more than one function. Learn only one set of parameters rather than a separate set for every location. Does not reduce the runtime of forward propagation, but reduces the storage requirement. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 16 / 52" }, { "page_index": 387, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_019.png", "page_index": 387, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:53+07:00" }, "raw_text": "The Motivation BK (figure: how a single kernel weight is reused across input-output pairs) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 17 / 52" }, { "page_index": 388, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_020.png", "page_index": 388, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:55+07:00" }, "raw_text": "The Motivation BK Equivariance: a function is equivariant if, when the input changes, the output changes in the same way. Function f is equivariant to function g if f(g(x)) = g(f(x)). Convolution is not naturally equivariant to transformations other than translation. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 18 / 52" }, { "page_index": 389, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_021.png", "page_index": 389, "language": "en", 
"ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:00:57+07:00" }, "raw_text": "The Motivation BK (figure: a recording electrode measures the electrical signal from the visual area of the brain in response to a stimulus) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 19 / 52" }, { "page_index": 390, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_022.png", "page_index": 390, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:00+07:00" }, "raw_text": "The Motivation BK (figure: complex layer terminology vs. simple layer terminology. Complex: a convolutional layer contains a convolution stage (affine transform), a detector stage (nonlinearity, e.g. rectified linear), and a pooling stage. Simple: convolution layer, detector layer, and pooling layer as separate layers.) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 20 / 52" }, { "page_index": 391, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_023.png", "page_index": 391, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:03+07:00" }, "raw_text": "The Motivation BK (figure: LeNet-5. INPUT 32x32 → convolutions → C1: feature maps 6@28x28 → subsampling → S2: f. maps 6@14x14 → convolutions → C3: f. maps 16@10x10 → subsampling → S4: f. maps 16@5x5 → C5: layer 120 → F6: layer 84, full connection → OUTPUT 10, Gaussian connections) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 21 / 52" }, { "page_index": 392, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_024.png", "page_index": 392, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:06+07:00" }, "raw_text": "The Motivation BK (figure: low-level feature → mid-level feature → high-level feature → trainable classifier) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 22 / 52" }, { "page_index": 393, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_025.png", "page_index": 393, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:07+07:00" }, "raw_text": "Examples BK Input volume: 32x32x3. 10 5x5 filters with stride 1, pad 2. Output volume size: ? Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 23 / 52" }, { "page_index": 394, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_026.png", "page_index": 394, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:09+07:00" }, "raw_text": "Examples BK Input volume: 32x32x3. 10 5x5 filters with stride 1, pad 2. 
Number of parameters in this layer? Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 24 / 52" }, { "page_index": 395, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_027.png", "page_index": 395, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:13+07:00" }, "raw_text": "Convolution Layer BK Accepts a volume of size W1 x H1 x D1. Requires four hyperparameters: number of filters K, their spatial extent F, the stride S, the amount of zero padding P. Produces a volume of size W2 x H2 x D2: W2 = (W1 - F + 2P)/S + 1; H2 = (H1 - F + 2P)/S + 1 (i.e. width and height are computed equally by symmetry); D2 = K. With parameter sharing, it introduces F·F·D1 weights per filter, for a total of (F·F·D1)·K weights and K biases. In the output volume, the d-th depth slice (of size W2 x H2) is the result of performing a valid convolution of the d-th filter over the input volume with a stride of S, and then offsetting by the d-th bias. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 25 / 52" }, { "page_index": 396, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_028.png", "page_index": 396, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:16+07:00" }, "raw_text": "Examples BK (figure: a ConvNet as repeated [CONV → RELU → CONV → RELU → POOL] blocks followed by an FC layer producing class scores for car, truck, airplane, ship, horse) Lecturer: Duc Dung Nguyen, PhD. 
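The sizing rules from the Convolution Layer slide can be sketched directly; applied to the running example (32x32x3 input, 10 5x5 filters, stride 1, pad 2), they give a 32x32x10 output and 5·5·3·10 + 10 = 760 parameters.

```python
# Sketch of the conv-layer sizing rules from the slide:
#   W2 = (W1 - F + 2P)/S + 1, H2 = (H1 - F + 2P)/S + 1, D2 = K
#   parameters: (F*F*D1)*K weights + K biases

def conv_output_shape(W1, H1, D1, K, F, S, P):
    W2 = (W1 - F + 2 * P) // S + 1
    H2 = (H1 - F + 2 * P) // S + 1
    return W2, H2, K

def conv_param_count(D1, K, F):
    return F * F * D1 * K + K  # weights plus one bias per filter
```

Note that parameter count depends only on the filter shape and depth, not on the spatial size of the input; that is the effect of parameter sharing.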
Contact: nddung@hcmut.edu.vn Deep Learning 26 / 52" }, { "page_index": 397, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_029.png", "page_index": 397, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:17+07:00" }, "raw_text": "Pooling" }, { "page_index": 398, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_030.png", "page_index": 398, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:19+07:00" }, "raw_text": "Pooling BK Pooling function: replaces the output of the net at a certain location with a summary statistic of the nearby outputs. The representation becomes approximately invariant to small translations of the input. We care whether some feature is present rather than where it is. Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 27 / 52" }, { "page_index": 399, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_031.png", "page_index": 399, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:21+07:00" }, "raw_text": "Pooling BK Adding an infinitely strong prior: the function the layer learns must be invariant to small translations. What if we pool over the outputs of separately parameterized convolutions? Lecturer: Duc Dung Nguyen, PhD. 
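Pooling as a summary statistic can be sketched in one line; a minimal 1D max-pooling illustration (window and stride chosen for the example, not prescribed by the slides):

```python
# Sketch: 1D max pooling, reporting the maximum within each window.

def max_pool1d(xs, size=2, stride=2):
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, stride)]
```

Shifting the input by one position changes individual pooled entries, but the strong detector responses still dominate their windows, which is the approximate translation invariance described above.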
Contact: nddung@hcmut.edu.vn Deep Learning 28 / 52" }, { "page_index": 400, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_032.png", "page_index": 400, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:24+07:00" }, "raw_text": "Pooling BK (figure: max pooling produces a large response in the pooling unit whenever any of its detector units responds strongly) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 29 / 52" }, { "page_index": 401, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_033.png", "page_index": 401, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:26+07:00" }, "raw_text": "Pooling BK Pooling is essential for handling inputs of varying size. It can complicate some kinds of neural network architectures that use top-down information (Boltzmann machines and autoencoders). Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 30 / 52" }, { "page_index": 402, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_034.png", "page_index": 402, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:33+07:00" }, "raw_text": "Pooling BK (figure: three networks on a 256x256x3 input image, each built from convolution + ReLU stages and stride-4 pooling; pooling to a fixed 3x3 grid or a 1x1 average pooling lets the same network produce softmax outputs over 1,000 class probabilities regardless of input size) Lecturer: Duc Dung Nguyen, PhD. 
Contact: nddung@hcmut.edu.vn Deep Learning 31/52" }, { "page_index": 403, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_035.png", "page_index": 403, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:34+07:00" }, "raw_text": "Variants of the Basic Convolution Function" }, { "page_index": 404, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_036.png", "page_index": 404, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:36+07:00" }, "raw_text": "Variants of the Basic Convolution Function BK The functions used in practice differ slightly An operation that consists of many applications of convolution in parallel At each layer: extract many kinds of features, at many locations The input is usually not just a grid of real values Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 32/52" }, { "page_index": 405, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_037.png", "page_index": 405, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:38+07:00" }, "raw_text": "Variants of the Basic Convolution Function BK . Stride . Padding or no padding? Valid convolution Same convolution Full convolution . Unshared convolutions Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Deep Learning 33/52" }, { "page_index": 406, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_038.png", "page_index": 406, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:40+07:00" }, "raw_text": "Variants of the Basic Convolution Function BK Strided convolution Downsampling Convolution Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 34/52" }, { "page_index": 407, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_039.png", "page_index": 407, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:42+07:00" }, "raw_text": "Variants of the Basic Convolution Function BK Without zero padding With zero padding Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 35/52" }, { "page_index": 408, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_040.png", "page_index": 408, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:45+07:00" }, "raw_text": "Variants of the Basic Convolution Function BK Local connection: like convolution but no sharing [Figure: locally connected layer vs. convolution vs. fully connected layer] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 36/52" }, { "page_index": 409, "chapter_num": 6,
"source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_041.png", "page_index": 409, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:47+07:00" }, "raw_text": "Variants of the Basic Convolution Function BK The multiplication matrix is sparse . Duplication in the matrix Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 37/52" }, { "page_index": 410, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_042.png", "page_index": 410, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:49+07:00" }, "raw_text": "Types of convolutions BK Kernel size: defines the field of view of the convolution - Stride: defines the step size of the kernel when traversing the image Padding: defines how the border of a sample is handled Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 38/52" }, { "page_index": 411, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_043.png", "page_index": 411, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:51+07:00" }, "raw_text": "Problems of Convolutional Networks" }, { "page_index": 412, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_044.png", "metadata": { "doc_type": "slide", "course_id": "CO3117",
"source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_044.png", "page_index": 412, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:53+07:00" }, "raw_text": "Structured Outputs BK CNNs can be used to output a high-dimensional structured object [Figure: recurrent convolutional structure with shared weights U, V, W over hidden layers H(1), H(2), H(3) producing outputs y(1), y(2), y(3)] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 39/52" }, { "page_index": 413, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_045.png", "page_index": 413, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:55+07:00" }, "raw_text": "Data Types BK 1D: audio waveform, skeleton animation data : 2D: audio data (FFT), color image data : 3D: medical imaging data (CT scans), color video data Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 40/52" }, { "page_index": 414, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_046.png", "page_index": 414, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:56+07:00" }, "raw_text": "Random or Unsupervised Features" }, { "page_index": 415, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_047.png", "page_index": 415, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0",
"timestamp": "2025-11-01T11:01:57+07:00" }, "raw_text": "Random or Unsupervised Features BK Training CNN Learning the features : Unsupervised training Use features that are not trained in a supervised fashion Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 41/52" }, { "page_index": 416, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_048.png", "page_index": 416, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:01:59+07:00" }, "raw_text": "Random or Unsupervised Features BK Basic strategies for obtaining convolution kernels . Initialize them randomly Design by hand - Learn the kernels with an unsupervised criterion Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 42/52" }, { "page_index": 417, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_049.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_049.png", "page_index": 417, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:01+07:00" }, "raw_text": "Case Study" }, { "page_index": 418, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_050.png", "page_index": 418, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:04+07:00" }, "raw_text": "Full (simplified) AlexNet architecture BK [Figure: 224x224 input, 5 convolutional layers (stride 4 in the first, with max pooling), then 3 fully-connected layers of 2048, 2048, and 1000 units with a 1000-way softmax] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 43/52" }, { "page_index": 419, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_051.png", "page_index": 419, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:07+07:00" }, "raw_text": "Full (simplified) AlexNet architecture BK 227x227x3 INPUT 55x55x96 CONV1: 96 11x11 filters at stride 4, pad 0 27x27x96 MAX POOL1: 3x3 filters at stride 2 27x27x96 NORM1: Normalization layer 27x27x256 CONV2: 256 5x5 filters at stride 1, pad 2 13x13x256 MAX POOL2: 3x3 filters at stride 2 13x13x256 NORM2: Normalization layer 13x13x384 CONV3: 384 3x3 filters at stride 1, pad 1 13x13x384 CONV4: 384 3x3 filters at stride 1, pad 1 13x13x256 CONV5: 256 3x3 filters at stride 1, pad 1 6x6x256 MAX POOL3: 3x3 filters at stride 2 4096 FC6: 4096 neurons 4096 FC7: 4096 neurons 1000 FC8: 1000 neurons (class scores) Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Deep Learning 44/52" }, { "page_index": 420, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_052.png", "page_index": 420, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:10+07:00" }, "raw_text": "Full (simplified) AlexNet architecture BK First use of ReLU Used Norm layers (not common anymore) - heavy data augmentation Dropout 0.5 Batch size 128 SGD Momentum 0.9 Learning rate 1e-2, reduced by 10 manually when val accuracy plateaus L2 weight decay 5e-4 7 CNN ensemble: 18.2% -> 15.4% Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 45/52" }, { "page_index": 421, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_053.png", "page_index": 421, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:12+07:00" }, "raw_text": "ZFNet BK Modified version of AlexNet : CONV1: change from (11x11 stride 4) to (7x7 stride 2) .
CONV3,4,5: instead of 384, 384, 256 filters use 512, 1024, 512 * ImageNet top 5 error: 15.4% -> 14.8% Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 46/52" }, { "page_index": 422, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_054.png", "page_index": 422, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:14+07:00" }, "raw_text": "VGGNet BK [Simonyan and Zisserman, 2014] . Only 3x3 CONV stride 1, pad 1 and 2x2 MAX POOL stride 2 11.2% top 5 error in ILSVRC 2013 -> 7.3% top 5 error Number of parameters? Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 47/52" }, { "page_index": 423, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_055.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_055.png", "page_index": 423, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:21+07:00" }, "raw_text": "VGGNet BK . INPUT: [224x224x3] memory: 224*224*3=150K params: 0 . CONV3-64: [224x224x64] memory: 224*224*64=3.2M params: (3*3*3)*64=1,728 . CONV3-64: [224x224x64] memory: 224*224*64=3.2M params: (3*3*64)*64=36,864 . POOL2: [112x112x64] memory: 112*112*64=800K params: 0 . CONV3-128: [112x112x128] memory: 112*112*128=1.6M params: (3*3*64)*128=73,728 . CONV3-128: [112x112x128] memory: 112*112*128=1.6M params: (3*3*128)*128=147,456 . POOL2: [56x56x128] memory: 56*56*128=400K params: 0 . CONV3-256: [56x56x256] memory: 56*56*256=800K params: (3*3*128)*256=294,912 . CONV3-256: [56x56x256] memory: 56*56*256=800K params: (3*3*256)*256=589,824 .
CONV3-256: [56x56x256] memory: 56*56*256=800K params: (3*3*256)*256=589,824 . POOL2: [28x28x256] memory: 28*28*256=200K params: 0 . CONV3-512: [28x28x512] memory: 28*28*512=400K params: (3*3*256)*512=1,179,648 . CONV3-512: [28x28x512] memory: 28*28*512=400K params: (3*3*512)*512=2,359,296 . CONV3-512: [28x28x512] memory: 28*28*512=400K params: (3*3*512)*512=2,359,296 . POOL2: [14x14x512] memory: 14*14*512=100K params: 0 (not counting biases) . CONV3-512: [14x14x512] memory: 14*14*512=100K params: (3*3*512)*512=2,359,296 . CONV3-512: [14x14x512] memory: 14*14*512=100K params: (3*3*512)*512=2,359,296 . CONV3-512: [14x14x512] memory: 14*14*512=100K params: (3*3*512)*512=2,359,296 . POOL2: [7x7x512] memory: 7*7*512=25K params: 0 . FC: [1x1x4096] memory: 4096 params: 7*7*512*4096=102,760,448 . FC: [1x1x4096] memory: 4096 params: 4096*4096=16,777,216 . FC: [1x1x1000] memory: 1000 params: 4096*1000=4,096,000 Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 48/52" }, { "page_index": 424, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_056.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_056.png", "page_index": 424, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:23+07:00" }, "raw_text": "VGGNet BK : TOTAL memory: 24M * 4 bytes = 93MB / image (only forward!
*2 for bwd) : TOTAL params: 138M parameters Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 49/52" }, { "page_index": 425, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_057.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_057.png", "page_index": 425, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:25+07:00" }, "raw_text": "GoogLeNet [Figure: full GoogLeNet architecture - legend: Convolution, Pooling, Softmax, Concat/Normalize] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 50/52" }, { "page_index": 426, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_058.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_058.png", "page_index": 426, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:29+07:00" }, "raw_text": "GoogLeNet BK [Figure: (a) Inception module, naive version; (b) Inception module with dimension reductions - parallel 1x1, 3x3, and 5x5 convolutions and 3x3 max pooling from the previous layer feed a filter concatenation, with 1x1 convolutions for dimension reduction in (b)] ILSVRC 2014 winner (6.7% top 5 error) Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Deep Learning 51/52" }, { "page_index": 427, "chapter_num": 6, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_059.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_6_b/slide_059.png", "page_index": 427, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:31+07:00" }, "raw_text": "GoogLeNet BK Fun features : Only 5 million params! (Removes FC layers completely) Compared to AlexNet : 12x fewer params 2x more compute - 6.67% (vs. 16.4%) Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 52/52" }, { "page_index": 428, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_001.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_001.png", "page_index": 428, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:32+07:00" }, "raw_text": "BK TP.HCM Deep Learning Recurrent Neural Networks Lecturer: Duc Dung Nguyen, PhD Contact: nddung@hcmut.edu.vn Faculty of Computer Science and Engineering Ho Chi Minh City University of Technology" }, { "page_index": 429, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_002.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_002.png", "page_index": 429, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:34+07:00" }, "raw_text": "Contents BK 1. Unfolding Computational Graphs 2. Recurrent Neural Networks 3. Bidirectional RNNs 4. Encoder-Decoder Architectures 5. Other RNNs 6.
The Long Short-Term Memory Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 1/47" }, { "page_index": 430, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_003.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_003.png", "page_index": 430, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:35+07:00" }, "raw_text": "Unfolding Computational Graphs" }, { "page_index": 431, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_004.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_004.png", "page_index": 431, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:37+07:00" }, "raw_text": "Unfolding Computational Graphs BK Computational graph: a way to formalize the structure of a set of computations Unfolding a recursive or recurrent computation into a computational graph that has a repetitive structure Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 2/47" }, { "page_index": 432, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_005.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_005.png", "page_index": 432, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:39+07:00" }, "raw_text": "Unfolding Computational Graphs BK . The classical form of a dynamical system: s(t) = f(s(t-1); θ) (1) where s(t) is called the state of the system Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Deep Learning 3/47" }, { "page_index": 433, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_006.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_006.png", "page_index": 433, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:41+07:00" }, "raw_text": "Unfolding Computational Graphs BK [Figure: unfolded state graph ... -> s(t-1) -> s(t) -> s(t+1) -> ...] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 4/47" }, { "page_index": 434, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_007.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_007.png", "page_index": 434, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:43+07:00" }, "raw_text": "Unfolding Computational Graphs BK Any function involving recurrence can be considered as a feedforward network A single, shared model allows generalization to sequence lengths that did not appear in the training set Allows the model to be estimated with far fewer training examples : The unfolded graph provides an explicit description of which computations to perform Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 5/47" }, { "page_index": 435, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_008.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_008.png", "page_index": 435, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:44+07:00" }, "raw_text": "Recurrent Neural Networks" }, { "page_index": 436,
"chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_009.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_009.png", "page_index": 436, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:46+07:00" }, "raw_text": "Motivation BK [Figure: sequence processing patterns - one to one (Vanilla Neural Networks), one to many, many to one, many to many, many to many] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 6/47" }, { "page_index": 437, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_010.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_010.png", "page_index": 437, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:47+07:00" }, "raw_text": "Motivation BK [Figure] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 7/47" }, { "page_index": 438, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_011.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_011.png", "page_index": 438, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:49+07:00" }, "raw_text": "Recurrent Neural Networks BK [Figure: RNN with recurrent hidden state h, unfolded over inputs x(t-1), x(t), x(t+1) into states h(t-1), h(t), h(t+1)] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 8/47" }, { "page_index": 439, "chapter_num": 7, "source_file":
"/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_012.png", "page_index": 439, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:52+07:00" }, "raw_text": "Recurrent Neural Networks BK Some examples of important design patterns: RNNs that produce an output at each time step and have recurrent connections between hidden units . RNNs that produce an output at each time step and have recurrent connections only from the output at one time step to the hidden units at the next time step RNNs with recurrent connections between hidden units, that read an entire sequence and then produce a single output Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 9/47" }, { "page_index": 440, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_013.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_013.png", "page_index": 440, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:55+07:00" }, "raw_text": "Recurrent Neural Networks BK [Figure: RNN with hidden-to-hidden recurrence (weights U, V, W), unfolded; hidden states h(t-1), h(t), h(t+1) produce outputs o(t-1), o(t), o(t+1) with losses L(t-1), L(t), L(t+1)] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 10/47" }, { "page_index": 441, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_014.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_014.png", "page_index": 441, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:02:58+07:00" }, "raw_text": "Recurrent Neural Networks BK [Figure: RNN with recurrence only from the output o(t) to the hidden units h(t+1), unfolded; losses L(t-1), L(t), L(t+1) compare outputs to targets y] Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Deep Learning 11/47" }, { "page_index": 442, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_015.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_015.png", "page_index": 442, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:01+07:00" }, "raw_text": "Recurrent Neural Networks BK [Figure: RNN that reads an entire sequence x(t-1), x(t), ... and produces a single output with loss L(τ) at the final step] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 12/47" }, { "page_index": 443, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_016.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_016.png", "page_index": 443, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:03+07:00" }, "raw_text": "Teacher Forcing and Networks with Output Recurrence BK Less powerful: lacks hidden-to-hidden recurrent connections Cannot simulate a universal Turing machine The output units are explicitly trained to match the training set targets - unlikely to capture the necessary information about the past history - Describe full state of the system?
Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 13/47" }, { "page_index": 444, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_017.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_017.png", "page_index": 444, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:05+07:00" }, "raw_text": "Teacher Forcing and Networks with Output Recurrence BK Teacher forcing A procedure that emerges from the maximum likelihood criterion: during training, the model receives the ground truth output y(t) as input at time t + 1. The conditional maximum likelihood criterion is log p(y(1), y(2) | x(1), x(2)) = log p(y(2) | y(1), x(1), x(2)) + log p(y(1) | x(1), x(2)) (2) During training: instead of feeding the model's own output back into itself, these connections should be fed with the target values specifying what the correct output should be Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Deep Learning 14/47" }, { "page_index": 445, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_018.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_018.png", "page_index": 445, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:08+07:00" }, "raw_text": "Teacher Forcing and Networks with Output Recurrence BK [Figure: teacher forcing - at train time the target y(t-1) is fed into h(t); at test time the model's own output o(t-1) is fed back] Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 15/47" }, { "page_index": 446, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_019.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_019.png", "page_index": 446, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:09+07:00" }, "raw_text": "Teacher Forcing and Networks with Output Recurrence Disadvantages When the network is going to be used in an open-loop mode - problem!
The inputs seen during training can be quite different from those seen at test time Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 16/47" }, { "page_index": 447, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_020.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_020.png", "page_index": 447, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:11+07:00" }, "raw_text": "Computing the Gradient in a Recurrent Neural Network BK Training: applies the generalized back-propagation algorithm to the unrolled computational graph : Back-propagation through time (BPTT) algorithm Lecturer: Duc Dung Nguyen, PhD. Contact: nddung@hcmut.edu.vn Deep Learning 17/47" }, { "page_index": 448, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_021.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_021.png", "page_index": 448, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:13+07:00" }, "raw_text": "Computing the Gradient in a Recurrent Neural Network BK The nodes of our computational graph include the parameters U, V, W, b and c . The sequence of nodes is indexed by t for x(t), h(t), o(t) and L(t) .
For each node N we need to compute the gradient ∇_N L recursively. Start the recursion with the nodes immediately preceding the final loss: ∂L/∂L(t) = 1 (3)" }, { "page_index": 449, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_022.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_022.png", "page_index": 449, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:15+07:00" }, "raw_text": "Computing the Gradient in a Recurrent Neural Network Assume the outputs o(t) give probabilities over the output (softmax with negative log-likelihood loss). The gradient ∇_o(t) L on the outputs at time step t, for all i, t: (∇_o(t) L)_i = ∂L/∂o_i(t) = (∂L/∂L(t)) (∂L(t)/∂o_i(t)) = ŷ_i(t) − 1_{i, y(t)} (4)" }, { "page_index": 450, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_023.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_023.png", "page_index": 450, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:18+07:00" }, "raw_text": "Computing the Gradient in a Recurrent Neural Network Back-propagation: starting from the end of the sequence.
At the final time step τ, h(τ) only has o(τ) as a descendent, so its gradient is simple: ∇_h(τ) L = Vᵀ (∇_o(τ) L). Iterating backwards from t = τ−1 down to t = 1: ∇_h(t) L = (∂h(t+1)/∂h(t))ᵀ (∇_h(t+1) L) + (∂o(t)/∂h(t))ᵀ (∇_o(t) L) = Wᵀ diag(1 − (h(t+1))²) (∇_h(t+1) L) + Vᵀ (∇_o(t) L) (5)" }, { "page_index": 451, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_024.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_024.png", "page_index": 451, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:21+07:00" }, "raw_text": "Computing the Gradient in a Recurrent Neural Network How about ∇_W L? Introduce copies W(t) of the shared weights, each used only at time step t; use ∇_W(t) to denote the contribution of the weights at time step t to the gradient" }, { "page_index": 452, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_025.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_025.png", "page_index": 452, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:24+07:00" }, "raw_text": "Computing the Gradient in a Recurrent Neural Network The gradient on the remaining parameters: ∇_c L = Σ_t (∂o(t)/∂c)ᵀ (∇_o(t) L) = Σ_t ∇_o(t) L; ∇_b L = Σ_t (∂h(t)/∂b(t))ᵀ (∇_h(t) L) = Σ_t diag(1 − (h(t))²) (∇_h(t) L); ∇_V L = Σ_t (∇_o(t) L) h(t)ᵀ (6)" }, { "page_index": 453, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_026.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_026.png", "page_index": 453, "language": "en", "ocr_engine":
"PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:27+07:00" }, "raw_text": "Computing the Gradient in a Recurrent Neural Network ∇_W L = Σ_t Σ_i (∂L/∂h_i(t)) ∇_W(t) h_i(t) = Σ_t diag(1 − (h(t))²) (∇_h(t) L) h(t−1)ᵀ; ∇_U L = Σ_t Σ_i (∂L/∂h_i(t)) ∇_U(t) h_i(t) = Σ_t diag(1 − (h(t))²) (∇_h(t) L) x(t)ᵀ (7)" }, { "page_index": 454, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_027.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_027.png", "page_index": 454, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:29+07:00" }, "raw_text": "Recurrent Networks as Directed Graphical Models [Figure: directed graphical model over the sequence y(1), ..., y(5)]" }, { "page_index": 455, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_028.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_028.png", "page_index": 455, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:31+07:00" }, "raw_text": "Recurrent Networks as Directed Graphical Models Difficult to predict missing values in the middle of the sequence. The price for the reduced number of parameters: optimizing the parameters may be difficult" }, { "page_index": 456, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_029.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_029.png", "page_index": 456, "language": "en", "ocr_engine": "PaddleOCR 3.2",
"extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:33+07:00" }, "raw_text": "Modeling Sequences Conditioned on Context with RNNs Some common ways of providing an extra input to an RNN: as an extra input at each time step; as the initial state h(0); or both" }, { "page_index": 457, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_030.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_030.png", "page_index": 457, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:36+07:00" }, "raw_text": "Modeling Sequences Conditioned on Context with RNNs [Figure: an RNN with the context provided as an extra input (connections R) at each time step, with hidden states h(t-1), h(t), h(t+1) and losses L(t)]" }, { "page_index": 458, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_031.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_031.png", "page_index": 458, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:39+07:00" }, "raw_text": "Modeling Sequences Conditioned on Context with RNNs [Figure: a conditional RNN receiving the context through connections R, with inputs x(t-1), x(t), x(t+1) and outputs o(t)]" }, { "page_index": 459, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_032.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_032.png", "page_index": 459, "language": "en",
"ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:41+07:00" }, "raw_text": "Bidirectional RNNs" }, { "page_index": 460, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_033.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_033.png", "page_index": 460, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:42+07:00" }, "raw_text": "Bidirectional RNNs So far: \"causal\" structure. What if a prediction of y(t) depends on the whole input sequence? Bidirectional RNNs" }, { "page_index": 461, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_034.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_034.png", "page_index": 461, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:44+07:00" }, "raw_text": "Bidirectional RNNs Let's remember the past - and think about the future" }, { "page_index": 462, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_035.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_035.png", "page_index": 462, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:46+07:00" }, "raw_text": "Bidirectional RNNs [Figure: computational graph of a bidirectional RNN over x(t-1), x(t), x(t+1), with forward and backward chains of hidden states combining into the outputs o(t) and losses L(t)] Lecturer: Duc Dung Nguyen, PhD.
Contact: nddung@hcmut.edu.vn Deep Learning 31 / 47" }, { "page_index": 463, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_036.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_036.png", "page_index": 463, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:47+07:00" }, "raw_text": "Encoder-Decoder Architectures" }, { "page_index": 464, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_037.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_037.png", "page_index": 464, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:49+07:00" }, "raw_text": "Encoder-Decoder Sequence-to-Sequence Architectures Can we map one sequence to another? The input and the output do not need to have the same length. Examples?" }, { "page_index": 465, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_038.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_038.png", "page_index": 465, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:51+07:00" }, "raw_text": "Encoder-Decoder Sequence-to-Sequence Architectures Examples: Speech recognition. Machine translation. Question answering. Etc.
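The mapping behind these examples can be sketched in a few lines: an encoder RNN compresses the input sequence into a context vector C, and a decoder RNN unrolls from C for a different number of steps. This is a minimal illustrative NumPy sketch, not the lecture's code; the tanh update, the dimensions, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out = 3, 4, 2  # input, hidden, output sizes (arbitrary)

# Encoder parameters: h(t) = tanh(U x(t) + W h(t-1) + b)
U_enc = rng.normal(size=(d_h, d_in))
W_enc = rng.normal(size=(d_h, d_h))
b_enc = np.zeros(d_h)
# Decoder parameters: driven by its own previous hidden state, seeded with C
W_dec = rng.normal(size=(d_h, d_h))
V_dec = rng.normal(size=(d_out, d_h))

def encode(xs):
    """Read the whole input sequence; the final hidden state is the context C."""
    h = np.zeros(d_h)
    for x in xs:
        h = np.tanh(U_enc @ x + W_enc @ h + b_enc)
    return h  # one vector summarizing X = (x(1), ..., x(n_x))

def decode(C, n_steps):
    """Unroll the decoder from the context C for n_steps output vectors."""
    h, outputs = C, []
    for _ in range(n_steps):
        h = np.tanh(W_dec @ h)
        outputs.append(V_dec @ h)
    return outputs

xs = [rng.normal(size=d_in) for _ in range(5)]  # input length 5
ys = decode(encode(xs), n_steps=3)              # output length 3
```

Note that the input has length 5 and the output length 3: the two lengths are decoupled, which is exactly what speech recognition and machine translation need. The fixed-size C is also the bottleneck discussed on the following slide.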
" }, { "page_index": 466, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_039.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_039.png", "page_index": 466, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:53+07:00" }, "raw_text": "Encoder-Decoder Sequence-to-Sequence Architectures Input to the RNN: a context. Produce a representation of the context C. Context C: a vector or sequence of vectors that summarizes the input sequence X = (x(1), ..., x(n_x))" }, { "page_index": 467, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_040.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_040.png", "page_index": 467, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:56+07:00" }, "raw_text": "Encoder-Decoder Sequence-to-Sequence Architectures [Figure: an encoder RNN reading the input sequence x(1), x(2), ... into the context]
[Figure: a decoder RNN generating the output sequence y(1), y(2), ... from the context C]" }, { "page_index": 468, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_041.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_041.png", "page_index": 468, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:58+07:00" }, "raw_text": "Encoder-Decoder Sequence-to-Sequence Architectures The encoder and the decoder do not need to have the same size. Limitation: the output of the encoder may have a dimension that is too small to properly summarize a long sequence" }, { "page_index": 469, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_042.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_042.png", "page_index": 469, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:03:59+07:00" }, "raw_text": "Other RNNs" }, { "page_index": 470, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_043.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_043.png", "page_index": 470, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:01+07:00" }, "raw_text": "Deep Recurrent Networks [Figure: ways of making a recurrent network deep]" }, { "page_index": 471, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_044.png", "metadata": { "doc_type": "slide",
"course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_044.png", "page_index": 471, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:03+07:00" }, "raw_text": "Recursive Neural Networks A generalization of recurrent networks: a different kind of computational graph - a deep tree, rather than the chain-like structure of RNNs. E.g.: processing data structures as input to neural nets (both in NLP as well as in CV)" }, { "page_index": 472, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_045.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_045.png", "page_index": 472, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:05+07:00" }, "raw_text": "Recursive Neural Networks [Figure: computational graph of a recursive network: a tree combining the inputs x(1), ..., x(4) into a single representation]" }, { "page_index": 473, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_046.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_046.png", "page_index": 473, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:07+07:00" }, "raw_text": "Recursive Neural Networks Advantage: for a sequence of the same length T, the depth can be drastically reduced from T to O(log T). Might help deal with long-term dependencies. How to best structure the tree?
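The depth reduction above can be made concrete with a small sketch: apply one shared composition function pairwise over adjacent vectors (a balanced binary tree), so a sequence of length T is reduced to a single vector in about log2(T) levels instead of T recurrent steps. This is an illustrative assumption of how the tree could be structured, not the lecture's code; the function `combine` and the matrices W_left, W_right are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # representation size (arbitrary)
W_left = rng.normal(size=(d, d))
W_right = rng.normal(size=(d, d))

def combine(a, b):
    """One shared composition function applied at every tree node."""
    return np.tanh(W_left @ a + W_right @ b)

def reduce_tree(xs):
    """Combine adjacent vectors pairwise (balanced tree); count the levels."""
    levels = 0
    while len(xs) > 1:
        pairs = [combine(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
        if len(xs) % 2:          # odd element carries over to the next level
            pairs.append(xs[-1])
        xs, levels = pairs, levels + 1
    return xs[0], levels

seq = [rng.normal(size=d) for _ in range(8)]  # T = 8
root, depth = reduce_tree(seq)                # depth = 3 = log2(8), not 8
```

With T = 8 inputs, the gradient path from any leaf to the root passes through 3 applications of `combine` rather than 8 recurrent steps, which is the sense in which the tree might ease long-term dependencies.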
" }, { "page_index": 474, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_047.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_047.png", "page_index": 474, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:10+07:00" }, "raw_text": "The Challenge of Long-term Dependencies [Figure: plot of a repeatedly composed recurrent function against the input coordinate (range -60 to 60), illustrating the challenge of long-term dependencies]" }, { "page_index": 475, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_048.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_048.png", "page_index": 475, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:11+07:00" }, "raw_text": "The Long Short-Term Memory LSTM (Hochreiter and Schmidhuber, 1997) LSTM: introduce self-loops to produce paths where the gradient can flow for long durations.
Make the weight on this self-loop conditioned on the context, rather than fixed: gates. The time scale of integration can be changed dynamically" }, { "page_index": 477, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_050.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_050.png", "page_index": 477, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:15+07:00" }, "raw_text": "The Long Short-Term Memory The time scale of integration can change based on the input sequence. The time constants are output by the model itself." }, { "page_index": 478, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_051.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_051.png", "page_index": 478, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:17+07:00" }, "raw_text": "The Long Short-Term Memory [Figure: LSTM cell: the input and input gate feed the state, whose self-loop is controlled by the forget gate; the output gate controls the output]" }, { "page_index": 479, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_052.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_052.png", "page_index": 479, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:19+07:00" }, "raw_text": "The Long Short-Term Memory Forget gate: f_i(t) = σ(b^f_i + Σ_j U^f_{i,j} x_j(t) + Σ_j W^f_{i,j} h_j(t−1)) (8)
Internal state: s_i(t) = f_i(t) s_i(t−1) + g_i(t) σ(b_i + Σ_j U_{i,j} x_j(t) + Σ_j W_{i,j} h_j(t−1)) (9)" }, { "page_index": 480, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_053.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_053.png", "page_index": 480, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:22+07:00" }, "raw_text": "The Long Short-Term Memory External input gate: g_i(t) = σ(b^g_i + Σ_j U^g_{i,j} x_j(t) + Σ_j W^g_{i,j} h_j(t−1)) (10). Output gate: h_i(t) = tanh(s_i(t)) q_i(t) (11), with q_i(t) = σ(b^o_i + Σ_j U^o_{i,j} x_j(t) + Σ_j W^o_{i,j} h_j(t−1)) (12)" }, { "page_index": 481, "chapter_num": 7, "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_054.png", "metadata": { "doc_type": "slide", "course_id": "CO3117", "source_file": "/workspace/data/converted/CO3117_Machine_Learning/Chapter_7/slide_054.png", "page_index": 481, "language": "en", "ocr_engine": "PaddleOCR 3.2", "extractor_version": "1.0.0", "timestamp": "2025-11-01T11:04:24+07:00" }, "raw_text": "The Long Short-Term Memory LSTM networks learn long-term dependencies better. Optimization: clipping gradients. Regularizing: encourage information flow. Case studies: Memory networks (Weston et al., 2014), Neural Turing machines (Graves et al., 2014), Multiple object recognition with attention (Ba et al.), Image captioning" } ] }